LHO FMCS
ryan.crouch@LIGO.ORG - posted 12:24, Tuesday 08 October 2024 (80540)
HVAC Fan Vibrometers FAMIS check

Last checked in alog80348

Nothing really of note. Among the OUTs, MY_FAN2_270_1 is a little noisy, as is MR_FAN5_170_1 at the CS.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 12:12, Tuesday 08 October 2024 (80539)
Ops DAY Midshift Status

Still down for maintenance and it will be up to two more hours before we can start relocking - some investigations/experiments will be done to try and narrow down the cause of the FSS issues.

H1 CDS
david.barker@LIGO.ORG - posted 10:19, Tuesday 08 October 2024 - last comment - 14:12, Tuesday 08 October 2024(80537)
EX CNS-II GPS receiver failure 03:54 Tue 08Oct2024 PDT

The independent GPS receiver at EX has failed. Its 1PPS signal into the comparator started oscillating at 03:53 this morning, and its MEDM screen has been frozen since that time.

Erik is on his way out to EX to investigate.

Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 12:28, Tuesday 08 October 2024 (80541)

I've brought the EX CNS clock back to the corner station for repair.

filiberto.clara@LIGO.ORG - 13:48, Tuesday 08 October 2024 (80543)

External power supply failed.

erik.vonreis@LIGO.ORG - 14:12, Tuesday 08 October 2024 (80544)

CNS (GPS receiver) restored with new power supply at EX

H1 TCS
camilla.compton@LIGO.ORG - posted 09:27, Tuesday 08 October 2024 (80535)
CO2X and CO2Y left on for 1 hour after lockloss for ITM Absorption Measurement

From before this morning's lockloss until 15:20 UTC, we left CO2X and CO2Y on at their nominal annular powers (1.7 W into vac) so that we could measure the IFO beam's absorption on the ITMs using the HWS data.

H1 SUS (INS, SUS, SYS, VE)
jeffrey.kissel@LIGO.ORG - posted 09:20, Tuesday 08 October 2024 (80533)
Quick Sanity Check: SUS MC2/PR2 M1 (Top) Cable Comes Out on D3-1C1 Flange
B. Weaver, J. Kissel
WP 12109

Betsy, myself, and several others are looking to propose redlines to the WHAM3 flange cable layout (D1002874) in prep for O5 (see T2400150 for the DCN and discussion thereof). 

However, in doing so, we discovered that the flange layout may have double-counted the shared cable for the MC2/PR2 M1 (top) RTSD / T1T2 OSEMs. Other drawings (e.g. the Cable Routing D1101463 and the wiring diagrams D1000599) indicate "yes, there's an extra 'SUS-TRIPLE' entry somewhere between the D6 and D3 allocation," but we wanted to be sure.

As such, Betsy went out to HAM3 today and confirmed that YES the MC2/PR2 M1 (top) RTSD / T1T2 cable, labeled "SUS_HAM3_002" in the wiring diagram or "SUS-HAM3-2" in real life, does come out of D3-1C1 and *not* out of any D6 port, and thus we validate that D6's list of 4x DB25s to support the 'SUS-TRIPLE' (i.e. MC2) in D1002874 is incorrect.

Pictures attached show the D3 flange, and highlight the SUS-HAM3-2 cable at the flange going to D3-1C1, and then the other end of that cable, clearly going into the MC2-TOP/PR2-TOP satamp box in the SUS-R2 field rack (S1301887).

Images attached to this report
H1 SUS
camilla.compton@LIGO.ORG - posted 08:51, Tuesday 08 October 2024 (80531)
Edits to SUS_CHARGE and ESD_EXC_{QUAD} guardians

Oli, Camilla

Oli found there were three SDF diffs after the in-lock charge measurements this morning. One was from lscparams.ETMX_GND_MIN_DriveAlign_gain being changed without the SUS_CHARGE guardian being reloaded; the others appear to have come from an out-of-date filter and from the original tramp not being reverted correctly. The tramp wasn't explicitly changed, but the gain was ramped with a non-nominal ramp time via ezca.get_LIGOFilter( ... ramp_time=60).

Code changes attached and all sus charge guardians reloaded.
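
For reference, a minimal sketch of the pattern involved, assuming a guardian context where ezca and lscparams are in scope; the channel name, gains, and nominal ramp time below are illustrative, not the actual SUS_CHARGE code:

    # Illustrative only: ramp a DRIVEALIGN gain slowly for a measurement, then put
    # the ramp time (TRAMP) back so no SDF diff is left behind afterwards.
    NOMINAL_TRAMP = 2     # seconds, assumed nominal ramp time
    MEAS_GAIN = 30        # placeholder gain used during the measurement

    drivealign = ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L')
    drivealign.ramp_gain(MEAS_GAIN, ramp_time=60, wait=True)   # this sets TRAMP to 60 s

    # (charge measurement runs here)

    # ramp back to the nominal gain, then restore the nominal ramp time
    drivealign.ramp_gain(lscparams.ETMX_GND_MIN_DriveAlign_gain, ramp_time=60, wait=True)
    ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP'] = NOMINAL_TRAMP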

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 08:24, Tuesday 08 October 2024 (80528)
Tue CP1 Fill

Tue Oct 08 08:07:33 2024 INFO: Fill completed in 7min 29secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 08:01, Tuesday 08 October 2024 (80527)
Ops Day Shift Start

TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 0mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.26 μm/s
QUICK SUMMARY:

Detector has been locked for 3.5 hours now and is running injections.

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:26, Tuesday 08 October 2024 (80526)
workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 General (SEI)
anthony.sanchez@LIGO.ORG - posted 03:45, Tuesday 08 October 2024 - last comment - 08:49, Tuesday 08 October 2024(80525)
Reset ITMX ISI watchdog which was tripped...

H1 called because of an ITMX ISI watchdog trip.

Comments related to this report
ryan.crouch@LIGO.ORG - 08:49, Tuesday 08 October 2024 (80530)SEI

Potentially a CPS glitch?

Images attached to this comment
H1 General (PSL)
ryan.crouch@LIGO.ORG - posted 22:00, Monday 07 October 2024 (80517)
OPS Monday EVE shift summary

TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for just under 2 hours, the range has been just under 160Mpc.

H1 PSL (ISC)
oli.patane@LIGO.ORG - posted 18:29, Monday 07 October 2024 (80520)
Plots comparing FSS channels pre- issues vs now

I've taken one of Sheila's and TJ's scripts and adjusted it to plot the max values of the PSL-FSS_FAST_MON_OUT_DQ and PSL-FSS_PC_MON_OUTPUT channels before and after we started having issues with the FSS. I only looked at data when we were in NLN (state 600+) and ignored the last minute of each locked stretch before lockloss (since the IMC unlocks during locklosses and everything in the detector is generally all over the place).
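
As a rough illustration of what the script does (this is a sketch, not Sheila's/TJ's code), the per-chunk maxima could be computed along these lines, assuming gwpy/NDS access and that the NLN segment times are already known; the chunk length and GPS times are placeholders:

    import numpy as np
    from gwpy.timeseries import TimeSeries

    def chunk_maxima(channel, start, end, chunk=60):
        """Maximum of |channel| in each chunk-second block of [start, end)."""
        ts = TimeSeries.get(channel, start, end)
        n = int(chunk * ts.sample_rate.value)
        data = np.abs(ts.value[:(len(ts) // n) * n]).reshape(-1, n)
        return data.max(axis=1)

    # one NLN segment, dropping the final minute before the lockloss
    maxima = chunk_maxima('H1:PSL-FSS_FAST_MON_OUT_DQ', 1410000000, 1410003600 - 60)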

We started seeing FSS-related locklosses on September 17th, so the plot (attachment 1) shows the 'before' in two chunks: June 12 - July 13 (blue), which was pre-FSS issues and pre-OFI vent, and August 24th - September 17th (green), which was after the OFI vent but before the FSS issues started. The 'after'/during period is September 17th - October 4th, shown in red.

In both channels we can see that the red tends to reach higher than the blue or green, but the difference isn't drastic, and the glitching doesn't seem to be more frequent during the FSS issues either. By compressing the plot (attachment 2), I did notice that the level FASTMON reaches looks to have been gradually increasing over the last 4 months, which is interesting.

Images attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 17:57, Monday 07 October 2024 - last comment - 08:58, Tuesday 08 October 2024(80521)
Lockloss 00:55 UTC

Lockloss at 00:55 UTC, most likely from earthquake ground motion.

Comments related to this report
ryan.crouch@LIGO.ORG - 19:20, Monday 07 October 2024 (80522)

02:02 UTC: lost lock at LOWNOISE_ESD_ETMX. ASC_AS_A and IMC-TRANS lost lock close together, within ~100 ms of each other. It doesn't look like there was any glitching until after ASC_AS_A lost light.

Images attached to this comment
ryan.crouch@LIGO.ORG - 19:52, Monday 07 October 2024 (80523)ISC, Lockloss, PSL

One of the recent FIND_IR locklosses looks like it may have been an oscillation?

Images attached to this comment
ryan.crouch@LIGO.ORG - 20:10, Monday 07 October 2024 (80524)

03:10 UTC back to Observing

H1 AOS
elenna.capote@LIGO.ORG - posted 16:09, Monday 07 October 2024 - last comment - 15:17, Tuesday 08 October 2024(80515)
Range improvement over last few days seems to be mostly below 100 Hz

Our range has increased back to around 160 Mpc on the CALIB CLEAN channel over the past few days. I ran the DARM integral compare plots using an observing time on Sept 28 (before the OPO crystal and PRCL FF changes) and Oct 5 (after those changes). It appears the largest improvement has occurred at low frequency. Some of that can be attributed to the PRCL feedforward, but not all. Based on the previous noise budget measurements, and the change in the coherence of PRCL from Sept 28 to Oct 5, I think the improvement in DARM from 10-30 Hz is likely due to the PRCL improvement. Above 30 Hz, I am not sure what could have caused the improvement. There doesn't appear to be much improvement above 100 Hz, which is where I would expect to see changes from the squeezing if it improved from the OPO changes.
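
For context, the kind of comparison those plots show can be sketched with the standard inspiral-range integrand (a simplified stand-in for the darm_integral_compare code, assuming two ASDs on a common frequency vector are already in hand; the range itself is proportional to the square root of the total integral):

    import numpy as np

    def cumulative_range_integral(freq, asd):
        """Cumulative f**(-7/3)/PSD integral vs frequency (arbitrary units)."""
        integrand = freq ** (-7.0 / 3.0) / asd ** 2
        return np.cumsum(integrand * np.gradient(freq))

    # comparing two times: the band where the two curves separate is where the
    # sensitivity changed (here, mostly below ~100 Hz)
    # cum_sep28 = cumulative_range_integral(freq, asd_sep28)
    # cum_oct05 = cumulative_range_integral(freq, asd_oct05)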

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:07, Tuesday 08 October 2024 (80538)

Sheila pointed out two things to me: first, that if we are not using median averaging, these plots might be misleading if there is a glitch, and second, that some of the improvement at low frequency could be squeezing related.

I went through the noise budget code and found that these plots were made without median averaging. However, changing the code to use median averaging is a simple matter of uncommenting one line of code in /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/common/utils.py that governs how the PSD is calculated for the noise budget.
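
For illustration, the same median-averaged PSD behaviour is available directly in scipy (a sketch, not the noise budget's own PSD function; the sample rate, segment length, and data below are placeholders):

    import numpy as np
    from scipy.signal import welch

    fs = 16384                      # sample rate (Hz), typical of DARM-band channels
    x = np.random.randn(600 * fs)   # stand-in data; a real comparison would use DARM
    # average='median' makes the estimate robust against occasional glitches,
    # unlike the default mean (Welch) average
    freq, psd = welch(x, fs=fs, nperseg=16 * fs, average='median')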

I reran the darm_integral_compare code using median averaging. The result shows much less difference in the noise at low frequency between these two times. The range is still improved from 10-50 Hz, but there is a small drop in the range between 50-60 Hz. I still think the change from 10-30 Hz is likely due to PRCL.

Images attached to this comment
elenna.capote@LIGO.ORG - 15:17, Tuesday 08 October 2024 (80546)

As a further confirmation of the necessity of median averaging here, I made a spectrogram of the data span on Sept 28, and a few glitches, especially around low frequency, are evident. I didn't see these glitches in the sensitivity channel that I used to choose the data spans (I just trend the sensmon CLEAN range and look for regions without big dips). However, the Oct 5 data span appears fairly stationary.
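
A quick way to make this kind of check, assuming gwpy and that H1:GDS-CALIB_STRAIN is the channel of interest (times and resolution below are placeholders):

    from gwpy.timeseries import TimeSeries

    start, end = 1411600000, 1411601024          # placeholder GPS span
    data = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
    # ASD spectrogram: isolated glitches show up as vertical stripes
    specgram = data.spectrogram(4, fftlength=2, overlap=1) ** (1 / 2.)
    plot = specgram.plot(norm='log')
    plot.gca().set_yscale('log')
    plot.savefig('spectrogram_check.png')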

Images attached to this comment
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:09, Monday 07 October 2024 - last comment - 12:25, Thursday 10 October 2024(80506)
SQZ ASC turned on using AS42 on ZM4/6

Sheila, Camilla.

New SQZ ASC using AS42 signals with feedback to ZM4 and ZM6 has been tested and implemented. We still need to watch whether this can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6; we have not had a SQZ ASC for the majority of O4b.

Prep to improve SQZ:

Testing ASC from 80373:

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 17:16, Monday 07 October 2024 (80519)OpsInfo

In the first 20 minutes of the lock, the SQZ ASC appears to be working well (see attached plot).

Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then change /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py use_ifo_as42_asc to False, go through NO_SQUEEZING then FREQ_DEP_SQUEEZING in SQZ_MANAGER, and accept the SDFs for not using the SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEMs (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good, and if needed run scan SQZ alignment and scan SQZ angle with SQZ_MANAGER.
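
As an aside, one way to look up where the ZM4/6 OSEMs were when squeezing was last good (to know what to aim for when moving the optics back) is a trend fetch along these lines; the GPS time is a placeholder and this assumes minute-trend data is reachable through NDS with the usual '.mean,m-trend' naming:

    from gwpy.timeseries import TimeSeriesDict

    good_gps = 1412000000   # placeholder: a time with known-good squeezing
    channels = ['H1:SUS-ZM%d_M1_DAMP_%s_INMON.mean,m-trend' % (n, dof)
                for n in (4, 6) for dof in ('P', 'Y')]
    targets = TimeSeriesDict.get(channels, good_gps, good_gps + 120)
    for name, ts in targets.items():
        print(name, float(ts.value.mean()))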

Images attached to this comment
camilla.compton@LIGO.ORG - 09:37, Tuesday 08 October 2024 (80534)

Sheila moved the  "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.

This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned the integrators on on ZM4/6, edited OFFLOAD_SQZ_ASC to offload ZM4, 5, and 6, and tested by putting an offset on ZM4. We put ZM4/6 back to the positions they were in during lock via the OSEMs. SDFs for the filters were accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.

Images attached to this comment
camilla.compton@LIGO.ORG - 14:10, Wednesday 09 October 2024 (80570)

Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state requestable.

camilla.compton@LIGO.ORG - 12:25, Thursday 10 October 2024 (80594)

The ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. The guardian checks whether the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turns it off before scanning alignment or angle.

We changed the paths so that, to get from SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
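
A minimal sketch of that check (illustrative guardian-style logic, not the actual SQZ_MANAGER code; the disable helper here is hypothetical and stands in for the real turn-off sequence):

    def maybe_disable_sqz_asc(ezca, disable_asc):
        """If the SQZ ASC is engaged (per the WFS switch), disable it before scanning."""
        if ezca['SQZ-ASC_WFS_SWITCH'] != 0:
            disable_asc()   # e.g. clear histories / zero gains, then flip the switch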
