Lockloss at 2025-06-20 00:09 UTC after only 10 minutes locked
TITLE: 06/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in DHARD_WFS and LOCKING
Overall a quiet shift with three seemingly different locklosses. Two of these happened less than 2 hours into the lock.
Lockloss 1: During Simulines - Alog 85178. Calibration sweep could not produce the report but BB ran successfully - Alog 85179
Lockloss 2: ETM Glitch - Alog 85182
Lockloss 3: EY Kick right before - Alog 85184
At Camilla's request, I also reverted the edits that Camilla and Oli made to SQZ_FC Beamspot Control, turning it back on (SDFs attached).
Other than that, DRMI is taking a very long time to lock without an initial alignment, despite excellent flashes. Touching mirrors does not seem to help, and whenever DRMI does lock it stays locked, so I do not think this is an alignment issue or due to excess motion (ground or otherwise).
Microseism is on the rise and there are regular magnitude 5.0 earthquakes at the Mid-Atlantic Ridge.
LOG:
None
TITLE: 06/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY:
Currently relocking and in ACQUIRE_DRMI. Flashes are great but it won't catch. It caught PRMI pretty fast, but now we're back in DRMI.
Unknown-cause lockloss at around the same time into NLN as the previous one, which was an ETM glitch. The H1 Lockloss tool does not tag this one as such, though; rather, it shows that EY L2 was the first to have a small glitch-like shake, followed by the lockloss.
00:01 Observing
ETM Glitch Lockloss. There were no earthquakes at the time of the lockloss. Microseism is on the rise, though it is nowhere near a lock-affecting level; same for wind.
H1 Lock Report just ended and it was the ETM Glitch.
Thu Jun 19 10:08:15 2025 INFO: Fill completed in 8min 12secs
Headless Start: 1434382652
2025-06-19 08:42:24,391 bb measurement complete.
2025-06-19 08:42:24,391 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250619T153714Z.xml
2025-06-19 08:42:24,392 all measurements complete.
Headless End: 1434382962
Simulines Start: 1434383042
2025-06-19 15:55:14,510 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 8.99, and amplitude 0.53965, is finished. GPS start and end time stamps: 1434383704, 1434383727
2025-06-19 15:55:14,510 | INFO | Scanning frequency 10.11 in Scan : L3_SUSETMX_iEXC2DARMTF on PID: 2067315
2025-06-19 15:55:14,511 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 10.11, is now running for 28 seconds.
2025-06-19 15:55:17,845 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
PDT: 2025-06-19 08:55:22.252424 PDT
UTC: 2025-06-19 15:55:22.252424 UTC
Simulines End GPS: 1434383740.252424
Could not generate a report using the wiki instructions (probably due to incomplete sweep).
Lockloss whilst running simulines. Since this has happened within the last two weeks, caused by the sweep, and there are no other obvious causes, the sweep is probably what caused this lockloss. We had been in Observing for nearly 10 hrs.
I wasn't able to confirm for sure that the lockloss was caused by the calibration sweep. It looks like the QUAD channels all had an excursion at the same time, before the lockloss was seen in DARM (ndscope1). It does look like in the seconds before the lockloss, there were two excitations that were ramping up (H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ at ~6 Hz (ndscope2) and H1:SUS-ETMX_L3_CAL_EXC_OUT_DQ at ~10 Hz (ndscope3)). Their ramping up is not seen in the QUAD MASTER_OUT channels, however, so it's hard to know if those excitations were the cause.
TITLE: 06/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO has been in NLN and OBSERVING since 05:53 UTC (~9 hr lock!)
Plan is to continue observing.
TITLE: 06/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Currently relocking and at MOVE_SPOTS. We lost lock earlier after having been locked for 4.5 hours, with half of that locked time spent messing with the SQZ filter cavity and trying to keep it from unlocking (85169). We still haven't figured out why it was unlocking. For the night we are keeping the FC beam spot control off; SDFs have been accepted to keep the FC beam spot control filter inputs off (sdf). We don't really think that was the issue, but we got our longest span (15 mins before the LL) of a locked squeezer after turning it back off once the unlocking issue had started. We'll look more into the squeezing issues tomorrow, so for Tony: if we have SQZ issues overnight, just go to Observing without SQZing and it'll get dealt with tomorrow.
LOG:
23:30 Relocking and in MOVE_SPOTS
23:40 Earthquake mode activated
23:48 NOMINAL_LOW_NOISE
23:50 Observing
00:10 Back to CALM
01:40 Out of Observing due to SQZ unlocking
01:43 Back into Observing
01:47 Out of Observing due to SQZ unlocking
01:50 Back into Observing
01:56 Out of Observing due to SQZ unlocking
02:31 Back into Observing
02:32 Out of Observing due to SQZ unlocking
03:16 Back into Observing
03:17 Out of Observing due to SQZ unlocking
03:29 Back into Observing
03:29 Out of Observing due to SQZ unlocking
03:45 Back into Observing
03:51 Out of Observing due to SQZ unlocking
03:57 Back into Observing
03:59 Out of Observing due to SQZ unlocking
04:03 Back into Observing
04:21 Lockloss
- Manual initial alignment
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:41 | FAC | Tyler | MX | n | Checking out the bees | 00:41 |
Ibrahim turned FC Beamspot control back on while we were relocking this morning, as we do not think it was related to the FC locklosses yesterday (85172); those were more likely caused by the high wind.
Summary:
Robert's alog 84015 shows that IFO beam hitting the [+X, +Y] side barrel/edge of the BS could be coupling to DARM somehow.
Out of curiosity, we scanned the BS spot position in YAW with 2mm steps from -6mm to +6mm (measured along the BS surface, -6mm being the closest to the [+X,+Y] side barrel) while dithering the YAW angle of PR2 at 31Hz, and measured the coupling from the dither to DARM in nominal low noise.
Our expectation was that the coupling would monotonically get worse as the beam approaches the [+X, +Y] side, but we didn't see such a simple pattern. See the 1st attachment showing the H1:LSC-DARM_IN1_DQ/PR2_M3_DITHER_Y_OUT TF at the dither frequency. There could be a minimum at around the +2mm position, but there's no phase flip around that point. No clear conclusion here.
| | -6mm (closest to the +X+Y side) | -4mm | -2mm | 0mm | +2mm | +4mm | +6mm (farthest from the +X+Y side) |
|---|---|---|---|---|---|---|---|
| amplitude [ct/ct] | 2.3e-11 | 2.7e-11 | 2.84e-11 | 2.8e-11 | 1.9e-11 | 2.4e-11 | 2.8e-11 |
| phase [deg] | -21.4 | -18.5 | -17.5 | -16.8 | -17.3 | -15.9 | -15.7 |
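For reference, here is a minimal sketch (in Python, not the DTT template actually used for this measurement, which is listed below) of one way such a single-frequency coupling TF could be estimated: take the cross-spectral density between the dither excitation and DARM and divide by the excitation's power spectral density at the dither line. The array names, sample rate, averaging length, and the fake test data are all placeholders.

```python
# Illustrative sketch only: estimate the coupling TF from the PR2 YAW dither
# to DARM at the dither line as CSD(dither, darm) / PSD(dither) at that bin.
# `dither` and `darm` stand in for the two time series, sampled at `fs` Hz.
import numpy as np
from scipy.signal import csd, welch

fs = 16384        # assumed sample rate [Hz]
f_dither = 31.0   # dither line frequency quoted in this entry [Hz]

def coupling_at_line(dither, darm, fs, f_line, nperseg=16 * 16384):
    """Return the complex TF (darm/dither) plus its amplitude and phase at f_line."""
    f, p_xy = csd(dither, darm, fs=fs, nperseg=nperseg)   # cross spectrum
    _, p_xx = welch(dither, fs=fs, nperseg=nperseg)       # excitation PSD
    k = np.argmin(np.abs(f - f_line))                      # bin closest to the line
    tf = p_xy[k] / p_xx[k]
    return tf, np.abs(tf), np.degrees(np.angle(tf))

# Usage with fake data standing in for the real channels:
t = np.arange(0, 64, 1 / fs)
dither = np.sin(2 * np.pi * f_dither * t)                          # stand-in for PR2_M3_DITHER_Y_OUT
darm = 2.8e-11 * np.sin(2 * np.pi * f_dither * t) + 1e-12 * np.random.randn(t.size)
tf, amp, phase = coupling_at_line(dither, darm, fs, f_dither)
print(f"|TF| = {amp:.3g} ct/ct, phase = {phase:.1f} deg")
```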
BS spot position and its (rough) calibration
The camera servo uses the centroid position of BS spot obtained from GigE camera image in raw pixels. Nominal spot position on BS is (230, 236) for (P, Y), which is given by H1:ASC-CAM_PIT1_OFFSET=-230, H1:ASC-CAM_YAW1_OFFSET=-236.
The camera is looking at the BS from the -X+Y direction (WBSC2 G8 viewport). A positive increase in YAW position, which appears as a shift towards the right for the camera, means a shift in the -X-Y direction on the BS surface (i.e. away from the +X+Y side of the BS).
The entire frame width (640 pixels) of the GigE image is very roughly the same as the diameter of the BS (370mm nominal), so the calibration of the camera image is 370mm/640pix. Using this, a +2mm step is 2mm/(370mm/640pix) ~ 3.5 pixels.
BS spot steps were given by increasing/decreasing H1:ASC-CAM_YAW1_OFFSET by integer multiples of 3.5 pixels.
Note that the BS is tilted by 45 degrees relative to the X and Y axes, thus +2mm on the plot means [-sqrt(2), -sqrt(2)]mm in the [X, Y] directions, respectively.
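As a quick illustration of the rough calibration above, here is a minimal Python sketch converting a requested spot offset in mm along the BS surface into camera pixels and into X/Y components. The helper names and the sign handling of the OFFSET change are assumptions; only the 370mm/640pix scale, the nominal offset value, and the 45 degree geometry come from this entry.

```python
# Rough calibration sketch: mm along the BS surface -> GigE camera pixels,
# and the corresponding X/Y components given the 45 degree BS orientation.
import math

MM_PER_PIXEL = 370.0 / 640.0        # ~0.58 mm/pixel (very rough calibration)
NOMINAL_YAW_OFFSET = -236           # H1:ASC-CAM_YAW1_OFFSET at the 0mm position

def yaw_offset_for_spot(offset_mm):
    """Camera YAW offset (pixels) for a spot move of offset_mm along the BS.

    Positive offset_mm is taken as a move away from the +X+Y side of the BS;
    the sign applied to the OFFSET channel here is an assumption.
    """
    pixels = offset_mm / MM_PER_PIXEL          # +2 mm -> ~3.5 pixels
    return NOMINAL_YAW_OFFSET + pixels

def xy_components(offset_mm):
    """X and Y components of the move, given the 45 degree tilt of the BS."""
    c = offset_mm / math.sqrt(2.0)
    return (-c, -c)                            # +2 mm -> (-sqrt(2), -sqrt(2)) mm

for step in (-6, -4, -2, 0, 2, 4, 6):
    print(step, round(yaw_offset_for_spot(step), 1), xy_components(step))
```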
We started with the 0mm offset, then proceeded with -2mm, -4mm, -6mm, +2mm, +4mm and +6mm in this order. After each step, we waited until the camera servo settled, measured the transfer function at the injection frequency, and proceeded to the next step.
After we were done, the spot position was set back to the 0mm offset.
Injection
We made an injection of 8 counts pp at 30Hz into H1:SUS-PR2_M3_DITHER_Y_EXC, which made a huge peak in DARM. Injection was active from ~19:03 UTC to 20:14 UTC on 2025/06/18.
A2L
Before we started the injection and before we moved the BS spot position, Ibrahim ran the A2L script to set the A2L gains for the quads. All measurements, even after the BS spot was moved, were done with these A2L gains (4th attachment), and Ibrahim put these gains in the guardian.
An A2L measurement was also done with the +6mm offset after all of the measurements were finished, as we might need that data later (5th attachment).
DTT template
It's in /ligo/home/keita.kawabe/BSPOS\ VS\ jitter\ 20250618.
ref0-1: no exc, ref2-5: 0mm offset, ref6-9: -2mm, ref10-13: -4mm, ref14-17: -6mm, ref18-21: +2mm, ref22-25: +4mm, ref26-29: +6mm.
Lockloss at 06/19 04:21 UTC due to unknown causes
We are currently locked and have been for over 3 hours. We've been dropping out of Observing a few times due to the FC unlocking; a couple of times it was able to relock itself, one time I had to adjust FC2, and currently I've been working on relocking it but have been having trouble. I've adjusted FC2 to maximize flashes, and I've trended FC1 and FC2 back to a time yesterday when the green trans was good, based on the DAMP INMONs and then the M3 witness channels, but none of that has worked. FC2 seems twice as noisy as usual in pitch and I'm wondering if that's affecting things? The wind is getting into the high 30s at M/EY?
Daniel, Oli
After trying to work on the alignment for the filter cavity (I tried following 72084 and the wiki) and not being able to get the filter cavity to stay locked for more than a couple of minutes, I contacted Daniel. He tracked down the FC beam spot control that Camilla and Sheila had turned on earlier today (85149), and we tried turning off the inputs to the H1:SQZ-FC_ASC_INJ_ANG_(P,Y) filter banks. The first SQZ lock after this lasted less than a minute, but the next one kept the filter cavity locked for a few minutes. We were unsure whether this was the solution, but since it didn't lose lock right away we figured we might as well go back into Observing, so I accepted the SDFs of those inputs being off and we went back into Observing.
As I was writing this, the SQZer lost lock again, so this was not the solution. I let it relock on its own and that worked, and I let it keep the ASC_INJ filter inputs on and accepted those back into SDF since that isn't the issue.
Oli and I checked that neither the FC beamspot control nor the FC ASC was running away or had signals any different from normal locked times. The ASC YAW signals had some slightly noisier times, with FC_ASC_INJ_ANG_Y up to 0.003 rather than the maximum of 0.002 earlier in the day. Oli noted that the wind had picked up during the locklossy time; maybe that is to blame.
We thus don't think there's reason to keep the FC beamspot control off, but are leaving it off for tonight just in case, as it's nearly the end of Oli's shift. It could be turned back on tomorrow AM.
Regarding what Camilla said about the ASC being twice as loud during this time - it looks like that started earlier this morning (ndscope1), before it would make sense for my wind theory below to be the cause, so my theory is probably incorrect. It looks like something was changed prior to 06/18 16:50 UTC that is causing the difference in the FC ASC, which may or may not be related to the FC unlocking issue.
Here is my wind theory, currently backed up by only a few data points: when the wind is blowing against the FCES at a specific angle above a certain speed, it wiggles FC2 more than it should. I kept seeing messages from SQZ_FC saying that FC2 is too noisy, and FC2 M1 DAMP IN P was twice as noisy as it was a day ago when I trended the pointing back. However, Camilla did note that the Yaw ASC signals were a bit large, so that's the opposite DOF from what I saw. Anyway, here's a quick trend back over a few days showing a correlation between the wind pointing in a direction almost perpendicular to the FCES and the SQZ FC subsequently unlocking multiple times while the wind is high (ndscope2).
At 16:14:31 PDT h1daqdc0 crashed. Its EPICS IOC stopped running resulting in white boxes on MEDM. Last log was 15:56.
I connected a monitor to its VGA port, its console was showing the login prompt. The cursor was not flashing and an attached keyboard was unresponsive.
Erik and I rebooted the machine by pressing the front panel RESET button. It booted and started with no problems.
Currently we don't know why dc0 froze this way.
FW0 full frame gap due to crash and restart:
Jun 18 16:14 H-H1_R-1434323584-64.gwf
Jun 18 16:26 H-H1_R-1434324288-64.gwf
A summary of (our knowledge about) the 2nd loop array units is available on DCC: https://dcc.ligo.org/LIGO-D1101059
We opened the container of S1202966 in the optics lab for inspection. This is a unit removed from LHAM2 in 2016.
We found no damage (1st picture), all photodiodes and the QPD look OK, no chipping of the optics, but many components are missing.
I decided to partially disassemble the damaged/contaminated S1202967 to send some of the parts to C&B and keep them as last-resort spares. Jennie sent the following parts to C&B. There are deep scuffs, which appear to be the result of repeated metal-to-metal contact/scratching, but the parts should be OK for use once they go through C&B.
The ISS array cover might be salvageable, but the place where the poles attach is bent so badly that bending it back might break it. See the 2nd picture; the surface is supposed to be flat.
Brief update about the following two items. No spares were found at LHO as of Jun/18/2025, so in the worst case we will use the parts salvaged from the damaged S1202967 assembly (they were already sent to C&B).
02:11 Back to Observing