H1 had a 4.5hr lock after last night's Timing Error, but then there was a random lockloss this morning just after 10am local.
Sadly, locking has not been trivial, with many locklosses for the green arms (a fully complete & automatic Initial Alignment fixed those). Then oddly had hiccups with DRMI: about 3-4 locklosses around engaging ASC for DRMI. I texted Camilla at this point to give a heads up for help, but as we chatted on TeamSpeak, H1 made it past DRMI ASC! (so leaving her on top of the call list). Currently H1 is engaging ASC for the full IFO.
Fred will soon enter the control room with a high school class from New Zealand. Robert is also on-site.
H1 just had a lockloss; the green arms looked decent, but it kept hanging up with locklosses (6 so far) at LOCKING ALS.
Will attempt to tweak up SRM (maybe PRM/BS) at ACQUIRE DRMI 1F to help ease ASC for DRMI.
Attempt #2: automatically taken to check MICH fringes, then back to DRMI...
Attempt #3: Went to PRMI fine.
Attempt #4: DRMI ASC completed FINE this time! (I had phoned Camilla before this lock with fears I'd need help! Luckily, we both watched it get past DRMI.)
Because of possible heat issues we are running the CP1 fills earlier in the morning, 8am instead of 10am. For today's fill the outside temp at 8am is 30C (85F).
Sat Jul 06 08:17:25 2024 INFO: Fill completed in 17min 21secs
This run used the standard "summer" trip temps of -130C. Both TCs quickly got to -100C and TCA crept down to -100C before the fill completed. I have lowered the trip to -140C to give a margin of safety.
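As a rough sketch of the trip logic described above (hypothetical names, not the actual CDS fill code): the fill is declared complete once the thermocouples read at or below the trip temperature, indicating LN2 overflow, so lowering the trip from -130C to -140C demands a genuinely colder reading before the fill can complete.

```python
def fill_complete(tc_a_degc, tc_b_degc, trip_degc=-130.0):
    """Return True when both thermocouples are at or below the trip temp."""
    return tc_a_degc <= trip_degc and tc_b_degc <= trip_degc

# TCs stalled near -100C would not satisfy a -130C trip:
assert not fill_complete(-100.0, -100.0, trip_degc=-130.0)
# With the lowered -140C trip, only genuinely cold readings complete the fill:
assert fill_complete(-145.0, -142.0, trip_degc=-140.0)
```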
TITLE: 07/06 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 10mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
H1's been locked/observing for 2hrs after the historic recovery efforts from the timing/Dolphin crash last night. All looks decent at the moment (winds died down about 1hr ago, but the X-arm stations (EX + MX) both oddly still read above 10mph).
It is Saturday, so that means it's Calibration Day in about 4hrs *knock on wood*.
The front entrance gate won't close all the way, so I have been exiting and entering without having to swipe in (I've been told it's due to the heat).
Tony handed off the IFO to me at the end of his shift, after the CDS team had recovered following a timing error and Dolphin glitch; see alog78892 for details. Generally, alignment was very bad, which is not surprising since multiple seismic platforms tripped as a result of the glitch.
Executive summary: H1 is back to observing as of 12:35 UTC after a lengthy alignment recovery process. See below for my general outline of notes I kept through the process.
FAMIS 26314
There looks to be a slight increase in noise on MR_FAN6 about 5 days ago.
TITLE: 07/06 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Lockloss 1:06 UTC
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404263217
Lockloss Screenshots attached
Relocking:
After the Lockloss I had pretty small flashes on X arm.
I allowed Increase flashes to run and it didn't get me better than 0.3.
I then touched it up by hand and could not get it better than 3.1; trending back, I think the goal is above 1.
I then tried to get a better alignment by rolling back to an alignment from the beginning of the last lock.
I tried the Alignment after the Last initial alignment.
I'm going to try to move just PR2 now.
Revert all movements back to right after we lost lock.
Sheila did a small Pico motor move in HAM3.
Pico motor move of the ALS/POP steering labeled "HAM1" (actually in HAM3).
H1:ALS-C_TRX_A_LF_GAIN was increased temporarily to make the X arm WFS run.
And Sheila did another move of ALS/Pop steering once the WFS were running.
Note the H1:ALS-X_FIBR_A_DEMOD_RFMON beat note dropped down to -38 and the threshold was lowered to -43.
Once this was done we could do an Initial Alignment, BUT we did not have anything on AS Air
Moved IM4 & PRM to get light on AS Air and Refl PRM cam.
Sheila used the IM4, PRM & PR2 OSEMs, matching prior OSEM values (a manual version of the "WFS relieve" from the past lock), to move PR2, which gave us increased IR flashes.
Touched up PRM in Yaw to lock PRX
Finished Initial Alignment at 5:19 UTC
Locking was being difficult, with locklosses at FIND IR and LOCKING ALS.
I tried running another Initial Alignment after it failed a number of times, since we had touched things up by hand.
Even after that IA it was still losing lock at FIND IR and LOCKING ALS. Paused in LOCKING ALS to allow the WFS to calm down.
ALS WFS DOF 2 is pulling it away for some reason; but even when allowing the WFS to mellow out, it still catches a lockloss.
Finally got past DRMI !!! YAY!!!
LOCKLOSS!? From MAX POWER!?
7:30 UTC Random HEPI HAM1 Watchdog trip.
IOP SUS56, 34, & 23 all had an IOP DACKILL trip at the same time.
Seems like h1sush2a had a DAC error; calling in the CDS team.
The CDS team is resetting all the SUS front end models because everything from HAM1 to HAM6 tripped in this timing glitch.
LOG:
Sheila remotely helped get me a good alignment and got me through a rough IA.
Dave B & Erik helped restart all the Front Ends.
Jim was also called due to a HEPI trip and he was next on the call list.
Everyone has been cycled to the bottom of the list.
See Dave's alog about the CDS timing error: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78892
Tony, Jim, Erik, Dave:
We had a timing error which caused DACKILLs on h1susb123, h1sush34, h1sush56 and DAC_FIFO errors on h1sush2a.
There was no obvious cause of the timing error which caused the Dolphin glitch; we noted that h1calcs was the only model with a DAQ CRC error (see attached).
After diag-reset and crc-reset only the SUS dackill and fifo error persisted.
We restarted all the models on h1susb123, h1sush2a, h1sush34, h1sush56 after bypassing the SWWD SEI systems for BSC1,2,3 and HAM2,3,4,5,6.
SUS models came back OK, we removed the SEI SWWD bypasses and handed the system over to Tony.
H1SUSH2A DACs went into error 300 ms before the CRC SUM increased on h1calcs.
The DAQ (I believe) reports CRC errors 120 ms after a dropped packet, leaving 180 ms unaccounted for.
FRS31532 was created for this issue. It has been closed as resolved-by-restarting.
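For reference, the timeline arithmetic in the note above works out as a trivial subtraction, with the 120 ms DAQ reporting latency taken as the stated assumption:

```python
# Times in milliseconds, relative to the h1calcs CRC SUM increase.
dac_error_lead = 300      # H1SUSH2A DAC errors preceded the CRC increase by 300 ms
daq_report_latency = 120  # assumed: DAQ reports CRC errors 120 ms after a dropped packet
unaccounted = dac_error_lead - daq_report_latency
assert unaccounted == 180  # the 180 ms gap noted above
```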
Model restart logs from this morning:
Sat06Jul2024
LOC TIME HOSTNAME MODEL/REBOOT
01:12:22 h1susb123 h1iopsusb123
01:12:33 h1sush2a h1iopsush2a
01:12:39 h1sush34 h1iopsush34
01:12:43 h1susb123 h1susitmy
01:12:47 h1sush2a h1susmc1
01:12:56 h1sush56 h1iopsush56
01:12:57 h1susb123 h1susbs
01:12:59 h1sush34 h1susmc2
01:13:01 h1sush2a h1susmc3
01:13:10 h1sush56 h1sussrm
01:13:11 h1susb123 h1susitmx
01:13:13 h1sush34 h1suspr2
01:13:15 h1sush2a h1susprm
01:13:24 h1sush56 h1sussr3
01:13:25 h1susb123 h1susitmpi
01:13:27 h1sush34 h1sussr2
01:13:29 h1sush2a h1suspr3
01:13:38 h1sush56 h1susifoout
01:13:52 h1sush56 h1sussqzout
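The restart log above follows a simple `LOC TIME HOSTNAME MODEL` layout; a minimal parser sketch (illustrative only, not the actual site tooling) might look like:

```python
def parse_restart_log(lines):
    """Parse 'HH:MM:SS hostname model' lines into (time, host, model) tuples,
    skipping anything that doesn't match the three-column layout."""
    entries = []
    for line in lines:
        parts = line.split()
        if len(parts) == 3 and parts[0].count(":") == 2:
            entries.append((parts[0], parts[1], parts[2]))
    return entries

log = [
    "01:12:22 h1susb123 h1iopsusb123",
    "01:12:33 h1sush2a h1iopsush2a",
    "LOC TIME HOSTNAME MODEL/REBOOT",  # header row is skipped (four columns)
]
assert parse_restart_log(log) == [
    ("01:12:22", "h1susb123", "h1iopsusb123"),
    ("01:12:33", "h1sush2a", "h1iopsush2a"),
]
```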
TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 9mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
H1 has been locked and Observing for over 2 hours. Everything seems to be functioning well.
TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Most of today was supposed to be a straightforward 3hrs of commissioning, but an earthquake lockloss and another random lockloss turned a majority of the shift into locking/commissioning time. For commissioning there were some PR2 spot moves, and Sheila made a note about possible/probable PR2 & IM4 (pit) adjustments needed for the INPUT ALIGN part of initial alignment.
LOG:
Robert, Sheila, Camilla, Corey
Today we moved the spot on PR2, which reduced the power measured at the HAM3 viewport and the appearance of the 48 Hz peak in DARM when the black glass is removed from that viewport.
In our first lock of the day, Robert removed the black glass at 16:08 UTC; we took ~20 minutes of quiet time with the black glass removed before losing lock due to an earthquake.
We decided to move 42 urad based on the first screenshot, which shows the May move described in 77949; the arm powers improved over the first 42 urad of the PR3 yaw move. After relocking we moved PR2 by 42 urad while the IFO was thermalizing (screenshot). We ran the A2L script and got a few minutes of quiet time there, 20:20-20:31 UTC. We then decided to take another step to see if we could reduce the power in the HAM2 beam further: we moved an additional 10 urad on PR3 yaw and saw a small decrease in the beam power. We re-ran A2L here (added values to lscparams and loaded the guardian) and took some time with the illuminator on the viewport to check the height of the peak. Camilla noticed that the squeezing didn't look optimal, so she's running SQZ alignment and angle scans. There is some LSC coherence, mostly with PRCL; we plan to add PRCL FF next week and re-tune MICH and SRCL.
Robert measured the power in the beam exiting the HAM3 viewport at a few steps: original position: 47 mW; -20 urad yaw: 28 mW; -42 urad: 19 mW; -52 urad: 17 mW.
Old alogs about this move:
Operator note: We moved these beams using a script that moves sliders on several optics, so that ideally we should be able to relock without doing any significant alignment. For yaw, the script was recently adjusted so that it really should work well; for pitch, there may be some manual alignment of PR2 + IM4 needed to get initial alignment to lock the X arm in IR.
The first run at 10:00 did not complete; its thermocouples reached the trip temp of -130C before we got a flow of LN2.
Fri Jul 05 10:17:33 2024 INFO: Fill completed in 17min 29secs
We re-ran the fill at 12:00 with a lowered trip temp of -140C; this run was successful.
Fri Jul 05 12:04:36 2024 INFO: Fill completed in 4min 33secs
Trending the trip temps since this system was installed two years ago, we have never had to run with trip temps as low as -140C (summer settings have been -130C).
Tomorrow we will run the fill at the earlier time of 08:00 in case outside temps were a factor in today's issue. I have set the trip temp back to -130C.
Commissioning has had a rough time due to a big earthquake (and then a mystery lockloss). H1 is currently recovering from the morning's second lockloss. The hope is to return to commissioning if timing works out (with the main task being the PR2 spot moves).
As for locking, it's been fairly straightforward.
TITLE: 07/05 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: An earthquake lockloss then a PI lockloss. Currently at MAX_POWER
Lock3:
To recap for SQZ: I have unmonitored 3 SQZ channels on syscssqz (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON) that keep dropping us out of observing, for now, until their root issue can be fixed (fiber trans PD error; too much power on FIBR_TRANS?). I noticed that each time the gains change, it also drops our cleaned range.
It seems that, as you found, the issue was the max power threshold. Once Ryan raised the threshold in 78881, we didn't see this happen again (plot attached). I've re-monitored these 3 SQZ channels (SDFs attached): H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, and H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON, with TEMPERATURECONTROLS_ON accepted.
It's expected that the CLEAN range would drop as that range only reports when the GRD-IFO_READY flag is true (which isn't the case when there's sdf diffs).
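The gating described above can be sketched as a simple conditional (names illustrative; this is a simplified model, not the actual range pipeline): the CLEAN range is only published while the GRD-IFO_READY flag is true, and SDF diffs drop that flag.

```python
def reported_clean_range(range_mpc, sdf_diff_count):
    """Simplified model: CLEAN range is reported only when the
    GRD-IFO_READY flag is true, i.e. when there are no SDF diffs."""
    ifo_ready = (sdf_diff_count == 0)
    return range_mpc if ifo_ready else None

assert reported_clean_range(151.0, 0) == 151.0  # flag true: range published
assert reported_clean_range(151.0, 3) is None   # SDF diffs drop the flag and the range
```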