Last checked in alog80348
Nothing really of note. For the OUTs, MY_FAN2_270_1 is a little noisy, along with MR_FAN5_170_1 at the CS.
Still down for maintenance and it will be up to two more hours before we can start relocking - some investigations/experiments will be done to try and narrow down the cause of the FSS issues.
The independent GPS receiver at EX has failed. Its 1PPS signal into the comparator started oscillating at 03:53 this morning, and its MEDM screen has been frozen since that time.
Erik is on his way out to EX to investigate.
I've brought the EX CNS clock back to the corner station for repair.
External power supply failed.
CNS (GPS receiver) restored with new power supply at EX
From before this morning's lockloss until 15:20 UTC, we left CO2X and CO2Y on at their nominal annular powers (1.7 W into vac) so that we could measure the IFO beam absorption on the ITMs using the HWS data.
B. Weaver, J. Kissel - WP 12109
Betsy, myself, and several others are looking to propose redlines to the WHAM3 flange cable layout (D1002874) in prep for O5 (see T2400150 for the DCN and discussion thereof). However, in doing so, we discovered that the flange layout may have double-counted the shared cable for the MC2/PR2 M1 (top) RTSD / T1T2 OSEMs. Other drawings (e.g. the Cable Routing D1101463 and the wiring diagrams D1000599) indicate "yes, there's an extra 'SUS-TRIPLE' entry somewhere between the D6 and D3 allocation," but we wanted to be sure. As such, Betsy went out to HAM3 today and confirmed that YES, the MC2/PR2 M1 (top) RTSD / T1T2 cable, labeled "SUS_HAM3_002" in the wiring diagram or "SUS-HAM3-2" in real life, does come out of D3-1C1 and *not* out of any D6 port, and thus we validate that D6's list of 4x DB25s to support the 'SUS-TRIPLE' (i.e. MC2) in D1002874 is incorrect. The attached pictures show the D3 flange, highlighting the SUS-HAM3-2 cable going to D3-1C1 at the flange, and then the other end of that cable clearly going into the MC2-TOP/PR2-TOP satamp box in the SUS-R2 field rack (S1301887).
Oli, Camilla
Oli found there were 3 SDF diffs after the in-lock charge measurements this morning. One was from lscparams.ETMX_GND_MIN_DriveAlign_gain being changed without the SUS_CHARGE guardian being reloaded; the others appear to come from an out-of-date filter and from the original tramp not being reverted correctly. The tramp wasn't explicitly changed, but the gain was ramped using a non-nominal ramp time via ezca.get_LIGOFilter( ... ramp_time=60).
Code changes attached and all sus charge guardians reloaded.
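For context, a minimal sketch of the ramp pattern involved (the filter name and values below are illustrative, not the exact ones in the charge measurement code): ramping with an explicit ramp_time temporarily rewrites the TRAMP channel, so the nominal value has to be restored afterwards or SDF will flag it.

    # Sketch only: illustrative filter name/values, assuming the standard Guardian
    # ezca interface where `ezca` is the provided Ezca instance.
    filt = ezca.get_LIGOFilter('SUS-ETMX_L3_DRIVEALIGN_L2L')
    nominal_tramp = ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP']    # save the nominal ramp time
    filt.ramp_gain(0.0, ramp_time=60, wait=True)                # measurement ramp writes TRAMP=60
    # ... measurement runs ...
    filt.ramp_gain(1.0, ramp_time=60, wait=True)                # ramp back to the nominal gain
    ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_TRAMP'] = nominal_tramp    # restore TRAMP so SDF stays clean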
Tue Oct 08 08:07:33 2024 INFO: Fill completed in 7min 29secs
Gerardo confirmed a good fill curbside.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY:
Detector Locked for 3.5 hours now and running injections.
Workstations were updated and rebooted. This was an OS packages update. Conda packages were not updated.
H1 called due to an ITMX ISI watchdog trip.
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: We've been locked for just under 2 hours, and the range has been just under 160 Mpc.
I've taken one of Sheila's and TJ's scripts and adjusted it to plot the max values of the PSL-FSS_FAST_MON_OUT_DQ and PSL-FSS_PC_MON_OUTPUT channels before and after we started having issues with the FSS. It only looks at data when we were in NLN (600+) and ignores the last minute of each locked stretch before the lockloss (since the IMC unlocks during locklosses and everything in the detector is generally all over the place).
We started seeing FSS-related locklosses on September 17th, so the plot (attachment 1) shows the 'before' in two chunks: June 12 - July 13 (blue), which was pre-FSS issues and pre-OFI vent, and August 24 - September 17 (green), which was after the OFI vent but before the FSS issues started. The 'after'/during period is September 17 - October 4, shown in red.
In both channels the red tends to reach higher than the blue or green, but the difference isn't drastic, and the glitching doesn't seem to be more frequent during the FSS issues either. By compressing the plot (attachment 2), I did notice that the level FASTMON reaches looks to have been gradually increasing over the last 4 months, which is interesting.
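For anyone wanting to reproduce this kind of check, here is a rough sketch of the approach (not the actual script; the GPS times are placeholders and the loop over lock stretches is left out):

    from gwpy.timeseries import TimeSeries

    # Rough sketch for a single locked stretch: confirm NLN (ISC_LOCK state >= 600),
    # drop the final minute before the lockloss, and record each channel's maximum.
    start, end = 1410000000, 1410010000            # placeholder GPS times
    state = TimeSeries.get('H1:GRD-ISC_LOCK_STATE_N', start, end)
    assert (state.value >= 600).all(), 'not in NLN for the whole stretch'

    maxima = {}
    for chan in ['H1:PSL-FSS_FAST_MON_OUT_DQ', 'H1:PSL-FSS_PC_MON_OUTPUT']:
        data = TimeSeries.get(chan, start, end - 60)   # ignore the last minute before lockloss
        maxima[chan] = abs(data).max().value
    print(maxima)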
Lockloss at 00:55 UTC, most likely from earthquake ground motion.
02:02 UTC lockloss at LOWNOISE_ESD_ETMX. ASC_AS_A and IMC-TRANS lost lock within ~100 ms of each other, and it doesn't look like there was any glitching until after ASC_AS_A lost light.
One of the recent FIND_IR locklosses looks like it may have been an oscillation?
03:10 UTC back to Observing
Our range has increased back to around 160 Mpc on the CALIB CLEAN channel over the past few days. I ran the DARM integral compare plots using an observing time on Sept 28 (before the OPO crystal and PRCL FF changes) and Oct 5 (after those changes). It appears the largest improvement has occurred at low frequency. Some of that can be attributed to the PRCL feedforward, but not all. Based on the previous noise budget measurements and the change in the PRCL coherence from Sept 28 to Oct 5, I think the improvement in DARM from 10-30 Hz is likely due to the PRCL improvement. Above 30 Hz, I am not sure what could have caused the improvement. There doesn't appear to be much improvement above 100 Hz, which is where I would expect to see changes from the squeezing if it improved from the OPO changes.
Sheila pointed out two things to me: first, that if we are not using median averaging, these plots might be misleading if there is a glitch, and second, that some of the improvement at low frequency could be squeezing related.
I went through the noise budget code and found that these plots were made without median averaging. However, changing the code to use median averaging is a simple matter of uncommenting one line of code in /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/common/utils.py that governs how the PSD is calculated for the noise budget.
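As a standalone illustration of why this matters (not the aligoNB code itself), scipy's Welch estimator makes the difference easy to see: with average='median' a single loud glitch barely moves the PSD estimate, while the default mean average gets pulled up across the band.

    import numpy as np
    from scipy.signal import welch

    # Illustration only: mean- vs median-averaged Welch PSDs of white noise
    # containing one loud 1-second glitch.
    fs = 1024
    rng = np.random.default_rng(0)
    x = rng.standard_normal(600 * fs)
    x[300 * fs:301 * fs] += 50 * rng.standard_normal(fs)       # inject the glitch

    f, p_mean = welch(x, fs=fs, nperseg=16 * fs, average='mean')
    f, p_median = welch(x, fs=fs, nperseg=16 * fs, average='median')
    print(p_mean.mean() / p_median.mean())   # mean estimate is biased high by the glitch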
I reran the darm_integral_compare code using median averaging. The result shows much less difference in the noise at low frequency between these two times. The range is still improved from 10-50 Hz, but there is a small drop in the range between 50-60 Hz. I still think the change from 10-30 Hz is likely due to PRCL.
As a further confirmation of the necessity of median averaging here, I made a spectrogram of the data span on Sept 28, and a few glitches, especially around low frequency, are evident. I didn't see these glitches in the sensitivity channel that I used to choose the data spans (I just trend the sensmon CLEAN range and look for regions without big dips). However, the Oct 5 data span appears fairly stationary.
Sheila, Camilla.
New SQZ ASC using AS42 signals with feedback to ZM4 and ZM6 has been tested and implemented. We still need to watch that it can keep a good SQZ alignment during thermalization. In O4a we used a SQZ ASC with ZM5/6; we have not had a SQZ ASC for the majority of O4b.
Prep to improve SQZ:
Testing ASC from 80373:
In the first 20 minutes of the lock, the SQZ ASC appears to be working well (see attached plot).
Note to operator team: if the squeezing gets really bad, you should be able to use the SQZ Overview > IFO ASC (black linked button) > "!graceful clear history" script to turn off the SQZ ASC. Then set use_ifo_as42_asc to False in /opt/rtcds/userapps/release/sqz/h1/guardian/sqzparams.py, go through NO_SQUEEZING then FREQ_DEP_SQUEEZING in SQZ_MANAGER, and accept the SDFs for not using the SQZ ASC. If SQZ still looks bad, put the ZM4/6 OSEMs (H1:SUS-ZM4/6_M1_DAMP_P/Y_INMON) back to where they were when squeezing was last good and, if needed, run scan SQZ alignment and scan SQZ angle with SQZ_MANAGER.
Sheila moved the "0.01:0" integrators from the ASC_POS/ANG_P/Y filters into the ZM4/5/6_M1_LOCK_P/Y filter banks.
This will allow us to more easily adjust the ASC gains and to use the guardian ZM offload states. We turned them on for ZM4/6, edited OFFLOAD_SQZ_ASC to offload ZM4, 5, and 6, and tested this by putting an offset on ZM4. We put ZM4/6 back to the positions they were in during lock via the OSEMs. SDFs for the filters were accepted. I removed the "!offload AS42" button from the SQZ > IFO ASC screen (linked to sqz/h1/scripts/ASC/offload_IFO_AS42_ASC.py) as it caused a lockloss yesterday.
Oli tested the SQZ_MANAGER OFFLOAD_SQZ_ASC guardian state today and it worked well. We still need to make the state requestable.
The ASC now turns off before SCAN_SQZANG_FDS/FIS and SCAN_ALIGNMENT_FDS/FIS. The guardian will check whether the ASC is on via H1:SQZ-ASC_WFS_SWITCH and turn it off before scanning alignment or angle.
We changed the paths so that, to get from SCAN_SQZANG_FDS/FIS or SCAN_ALIGNMENT_FDS/FIS back to squeezing, the guardian will go through SQZ_ASC_FDS/FIS to turn the ASC back on afterwards.
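A simplified sketch of the new pre-scan check (not a verbatim copy of the SQZ_MANAGER code; the switch-value convention and the handling of the ASC outputs are assumptions):

    # Simplified sketch, assuming a Guardian context where `ezca` is provided and
    # that H1:SQZ-ASC_WFS_SWITCH reads 1 when the SQZ ASC is running.
    def turn_off_sqz_asc_if_on():
        if ezca['SQZ-ASC_WFS_SWITCH'] == 1:
            ezca['SQZ-ASC_WFS_SWITCH'] = 0
            # the real code also clears/ramps down the ASC filter-bank outputs here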