During PRC Align, the IMC unlocked and couldn't relock because the FSS was oscillating badly; PZT MON was showing it moving all over the place, and I couldn't even take the IMC to OFFLINE or DOWN because the PSL ready check was failing. To try and fix the oscillation, I turned off the autolock for the Loop Automation on the FSS screen, re-enabled the autolocking after a few seconds, and then we were able to go to DOWN fine and I was able to relock the IMC.
TJ said this has happened to him and to a couple other operators recently.
Took a look at this, see attached trends. What happened here is the FSS autolocker got stuck between States 2 and 3 due to the oscillation. The autolocker is programmed so that, if it detects an oscillation, it jumps immediately back to State 2 to lower the common gain and ramp it back up, to hopefully clear the oscillation. It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front end model. The problem here is that 0 dB is generally not low enough to clear the oscillation, so the autolocker gets stuck in this State 2/State 3 loop and has a very hard time getting out of it. This is seen in the lower-left plot of H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3; the autolocker does not lower the common gain slider, as seen in the center-left plot. If this happens, turning the autolocker off then on again is most definitely the correct course of action.
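To illustrate why the ramp bottoms out at 0 dB, here is a minimal sketch; the real logic lives in the front-end C code, and the exact scaling (the 0-to-1 multiplier acting on the slider setting in dB) is my assumption from the behavior described above:

```python
# Hypothetical illustration of the autolocker's gain-ramp floor.
# Assumption: the 0-to-1 multiplier scales the slider setting in dB.
def effective_gain_db(slider_db: float, multiplier: float) -> float:
    """Effective FSS common gain during the autolocker ramp."""
    return multiplier * slider_db

# With the slider at 15 dB, the ramp spans 0 dB (m=0) to 15 dB (m=1);
# it can never reach the -10 dB slider minimum that usually clears
# an oscillation.
assert effective_gain_db(15.0, 0.0) == 0.0
assert effective_gain_db(15.0, 1.0) == 15.0
```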
We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations. So why not use this during lock acquisition? Because when an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker fight each other. This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.
Talking with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop. On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this; a rough sketch of the idea is below, after the note for OPS.
For OPS: In the interim, if this happens again please do not wait for the oscillation to clear on its own. If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom. If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
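A minimal Guardian-style sketch of what that could look like. Several details here are assumptions: the autolocker enable channel name (PSL-FSS_AUTOLOCK_ON), the bounce-count threshold, and the 2-second wait are all hypothetical; only PSL-FSS_AUTOLOCK_STATE comes from the trends above.

```python
# Hypothetical GuardState for the FSS guardian node. The channel
# PSL-FSS_AUTOLOCK_ON and the threshold/wait values are assumptions;
# PSL-FSS_AUTOLOCK_STATE is the state channel shown in the trends.
from guardian import GuardState

class CLEAR_STUCK_AUTOLOCKER(GuardState):
    """Toggle the autolocker off/on if it bounces between States 2 and 3."""

    def main(self):
        self.bounces = 0
        self.toggling = False
        self.last = ezca['PSL-FSS_AUTOLOCK_STATE']

    def run(self):
        if self.toggling:
            if self.timer['off']:                # waited long enough
                ezca['PSL-FSS_AUTOLOCK_ON'] = 1  # autolocker back on
                self.toggling = False
            return False
        state = ezca['PSL-FSS_AUTOLOCK_STATE']
        if state == 4:
            return True                          # RefCav locked, done
        if state != self.last and {state, self.last} == {2, 3}:
            self.bounces += 1                    # count 2<->3 transitions
        self.last = state
        if self.bounces >= 10:                   # threshold is a guess
            ezca['PSL-FSS_AUTOLOCK_ON'] = 0      # autolocker off
            self.timer['off'] = 2                # let the oscillation die
            self.bounces = 0
            self.toggling = True
        return False
```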
VACSTAT detected a vacuum glitch at 03:00:24 this morning originating with PT427 (EY beam tube ion pump station, about 1000 feet from EY). The pressure there rapidly increased from 1.4e-09 to 1.4e-07 Torr, then pumped back down to nominal in 8 seconds. The glitch was detected soon afterwards by all of the gauges in EY; they only increased from around 1.0e-09 to 2.0e-09 Torr and took around 25 minutes to pump back down.
The glitch was seen at MY at a much reduced amplitude about 6 minutes after the event.
H1 was not locked at the time.
TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 1.22 μm/s
QUICK SUMMARY:
Currently in DOWN due to excessively high secondary microseism. We're supposed to have calibration measurements and commissioning today. I'll try for a bit to get us back up, but I doubt we'll get past DRMI since the microseism is worse now than it was last night for Ryan or TJ.
I restarted VACSTAT at 08:18 to clear its alarm. Tyler resolved the RO alarm at 06:17 and now the CDS ALARM is GREEN again.
The useism continues to grow. I'll keep the IFO in DOWN and see where things are at again in a few hours.
TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 was happily observing until mid-shift when the microseism just got too high and caused a lockloss. Haven't been able to make it past DRMI since then due to the ground motion. I'm leaving H1 in 'DOWN' since it's not having any success, but TJ says he'll check on things overnight.
LOG:
Lockloss @ 02:44 UTC - link to lockloss tool
Possibly caused by ground motion, as everything looked to be moving quite a lot at the time and the secondary microseism band has been rising this evening. Environment plots on the lockloss tool also show a quick increase in motion at the time of the lockloss.
TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Observing at 140 Mpc and have been Locked for over 16.5 hours. One short drop out of Observing due to the squeezer dropping out, but it relocked automatically and we've been Observing ever since. GRB-Short E617667 came in today at 21:29 UTC.
LOG:
15:30UTC Observing and Locked for over 7.5 hours
15:52 Out of Observing due to SQZ unlock
15:56 Back into Observing
21:29 GRB-Short E617667
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:00 | FAC | Randy | YTube | n | Caulking up those joints | 22:57 |
| 17:04 | FAC | Kim | MX | n | Tech clean | 18:34 |
| 18:21 | | Sheila, Kar Meng | Optics Lab | y(local) | OPO prep (Sheila out 18:57) | 19:10 |
| 18:51 | VAC | Gerardo | MX | n | Looking for case | 20:07 |
| 20:09 | | Corey | Optics Lab | n | Cleaning optics | 23:33 |
| 21:26 | | Kar Meng | Optics Lab | y(local) | OPO prep | 23:28 |
| 22:01 | | RyanS | Optics Lab | n | Cleaning optics | 23:33 |
| 22:10 | | TJ | Optics Lab | n | Spying on Corey and RyanS | 22:22 |
| 22:59 | | Matt | Optics Lab | n | Grabbing wipes | 23:16 |
| 23:34 | | Matt | Prep Lab | n | Putting wipes away | 23:35 |
Tyler has the reverse osmosis water conditioning system offline overnight. The CDS alarm system has an active cell-phone bypass for this channel which expires tomorrow afternoon. This should be the only channel in CDS ALARM.
Bypass will expire:
Thu Nov 13 05:05:22 PM PST 2025
For channel(s):
H0:FMC-CS_WS_RO_ALARM
TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.56 μm/s
QUICK SUMMARY: H1 has been locked for 16 hours and observing the whole day.
Wed Nov 12 10:08:17 2025 INFO: Fill completed in 8min 13secs
Gerardo confirmed a good fill curbside.
Closes FAMIS#27534, last checked 87787
I was supposed to do this last week but I was out, so I'm doing it now. It was last done on 10/28, so this compares to measurements from two weeks ago.
Nothing of note, everything looks very similar to how it looked a couple weeks ago.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
Observing at 148 Mpc and have been Locked for over 7.5 hours. Currently in a stand-down due to Superevent S251112cm, which came in at 11/12 15:19 UTC.
TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A completely uneventful shift with H1 observing throughout. Now been locked for almost 7 hours.
As per WP 12876, I updated the run number server with a new version to allow more testing of the new frame writer (h1daqfw2) without allowing it to modify external state in CDS.
The run number server tracks channel-list configuration changes in the frames. The basic idea is that the frame writers create a checksum/hash of the channel list, send it to the run number server, and get back a run number to include in the frame. This is then used by the nds1 server to optimize multi-frame queries: if it knows the configuration hasn't changed, some of the structures can be re-used between frames, which has been measured at about a 30% speed-up.
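As a rough sketch of the frame-writer side of this exchange (the hash algorithm, wire format, host, and port here are all assumptions; this entry doesn't describe them):

```python
# Hypothetical frame-writer side of the run number exchange.
# Hash algorithm, wire format, host, and port are all assumptions.
import hashlib
import socket

def channel_list_hash(channels):
    """Checksum of the channel-list configuration going into the frame."""
    return hashlib.sha256('\n'.join(sorted(channels)).encode()).hexdigest()

def get_run_number(host, port, digest):
    """Send the hash to the run number server, get a run number back."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(digest.encode() + b'\n')
        return int(sock.recv(64).strip())
```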
This update added a new port/interface that the server listens on. It behaves a little differently: it will only return the current run number (or 0 if the hash doesn't match) and will not increment the global state, so it is safe for a test system to use.
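In sketch form, the difference between the two ports as I read the description (not the actual server code; the production-port behavior on a hash mismatch is inferred):

```python
# Sketch of the described behavior: the production port advances the
# global state on a configuration change, the new test port never does.
def handle_request(digest, state, read_only):
    if digest == state['current_hash']:
        return state['run_number']    # configuration unchanged
    if read_only:
        return 0                      # test port: report mismatch, mutate nothing
    state['current_hash'] = digest    # production port: new configuration,
    state['run_number'] += 1          # so increment the global run number
    return state['run_number']
```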
Now the new frame writer can automatically query the run number server to get the correct value to put in the frame (previously we had been setting it via an EPICS variable). One step closer to the new frame writer being in production.
FAMIS 28431, last checked in alog87986
New data points for all quads this week, measurement trends attached.
TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently Observing at 150 Mpc and have been Locked for just over an hour. Maintenance day was pretty light, but on the way back up there was some testing done using the ETMY ESD (88065), so it took us a bit longer to get back to Observing. Besides that, getting back up wasn't bad at all.
LOG:
15:30UTC Locked and Out of Observing due to magnetic injections
15:40 Back to Observing
15:45 Out of Observing due to SUS charge measurements
16:01 Back into Observing after SUS charge measurements finished
16:01 Out of Observing to change the OM2 setpoint and unlock the IFO
16:03 I unlocked the IFO for maintenance
19:58 Started relocking
- initial alignment
- lockloss from LOWNOISE_COIL_DRIVERS due to commissioning
23:17 NOMINAL_LOW_NOISE
23:28 Observing
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:46 | FAC | Randy | YARM | n | Caulking beam tubes | 23:00 |
| 16:05 | PEM | Sam, Alicia | HAM1 | n | Installing accelerometers | 17:00 |
| 16:06 | FAC | Nellie, Kim | LVEA | n | Tech clean | 16:59 |
| 16:12 | ISC | Matt | CR | n | IFO at 60W !!!!!!!!!!!!!!!!!!! | 16:59 |
| 16:12 | VAC | Jordan | EY | n | Purge air line build | 16:15 |
| 16:16 | VAC | Gerardo, Jordan | LVEA | n | Checking CP1 and for HAM1 gate valve | 16:39 |
| 16:19 | PEM | Rene | LVEA | n | Helping with HAM1 accelerometer install | 17:00 |
| 16:30 | FAC | Tyler | MX, EX, EY | n | Roof inspections | 17:44 |
| 16:34 | VAC | Travis | EY | n | Purge air line build | 19:46 |
| 16:35 | FAC | Mitchell | LVEA | n | Putting away and grabbing parts | 16:59 |
| 16:39 | VAC | Jordan, Gerardo | EY | n | Purge air line work | 19:24 |
| 17:00 | FAC | Nellie, Kim | EY | n | Tech clean | 18:31 |
| 17:13 | EE | Fil | EX, EY | n | Prepping for 28-bit DAC | 19:06 |
| 17:14 | PEM | Sam, Rene, Alicia | LVEA | n | Moving accelerometers | 17:44 |
| 17:16 | | Richard | LVEA | n | Walking around | 17:36 |
| 17:28 | | Betsy | CER | n | Inventory/looking for stuff | 18:28 |
| 17:45 | 3IFO | Tyler | LVEA, MX, MY | n | 3IFO checks | 18:29 |
| 18:29 | FAC | Tyler | Yarm backside | n | Inspect well water piping | 19:07 |
| 18:31 | FAC | Nellie, Kim | EX | n | Tech clean | 19:59 |
| 19:06 | EE | Fil | CER | n | Checks | 20:02 |
| 19:41 | | Mitchell | LVEA | n | Looking at parts | 19:46 |
| 19:58 | | RyanC | LVEA | n | Sweep | 20:10 |
| 20:50 | | Kar Meng | Optics Lab | n | Looking at components | 21:00 |
| 21:07 | | Corey | Optics Lab | n | Cleaning optics | 00:13 |
| 21:39 | | Rahul | Optics Lab | n | Wrapping up optics | 21:59 |
| 22:18 | | RyanS | Optics Lab | n | Cleaning optics | 00:09 |
| 22:20 | SUS | Kar Meng | Optics Lab | y(local) | OPO prep work | 23:46 |
| 23:58 | | Betsy | Optics Lab | n | Looking for parts | 00:14 |
| 00:08 | | Kar Meng | Optics Lab | y(local) | OPO prep work | ongoing |
TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.26 μm/s
QUICK SUMMARY: H1 has been locked for about an hour following Tuesday maintenance day.
WP12274
FAMIS28946
We rebooted the h1guardian1 machine today for 3 things:
All 168 nodes came back up and Erik confirmed that nds0 was seeing the traffic after the machine reboot.
Server order is set in the guardian::lho_guardian profile in Puppet.