[Sheila, Betsy, Anamaria, Eric, Daniel, Karmeng]
First time opening HAM7.
Removed the contamination control horizontal wafer.
Tools and iris are setup next to the chamber.
Dust count in the chamber and in the clean room are good/low.
4 irises placed: one after ZM1 and one at CLF REFL. The ZM3 and ZM4 irises are roughly placed and will need to be realigned after CDS is powered back on. The pump reflection iris is hard to place, so we will not place one there.
Jeff, Oli
Earlier, while trying to relock, we were seeing locklosses preceded by a 0.6 Hz oscillation seen in the PRG. Back in October we had a time where the estimator filters were installed incorrectly and caused a 0.6 Hz lock-stopping oscillation (87689). Even though we haven't made any changes to the estimators in over a month now, I decided to try turning them all off (PR3 L/P/Y, SR3 L/P/Y). During the next lock attempt, there were no 0.6 Hz oscillations seen. I checked the filters and settings and everything looks normal, so I'm not sure why this was happening.
I took spectra of the H1:SUS-{PR3,SR3}_M1_ADD_{L,P,Y}_TOTAL_MON_DQ channels for each suspension and each DOF during two similar times before and after the power outage. I wanted the After time to be while we were in MICROSEISM, since it seems like maybe the ifo isn't liking the normal WINDY SEI_ENV right now, so I wanted both the Before and After times to be in a SEI_ENV of MICROSEISM and the same ISC_LOCK states. I chose the After time to be 2025/12/09 18:54:30 UTC, when we were in an initial alignment, and then found a Before time of 2025/11/22 23:07:21 UTC.
Here are the spectra for PR3 and SR3 for those times. PR3 looks fine for all DOF, and SR3 P looks to be a bit elevated between 0.6 and 0.75 Hz, but it doesn't look like it should be enough of a difference to cause oscillations.
Then, while talking to Jeff, we discovered that the overall noise in the total damping drive for L and P changes depending on the seismic state we are in, so I made a comparison between the MICROSEISM and CALM SEI_ENV states (PR3, SR3). The USEISM time was 2025/12/09 12:45:26 UTC and the CALM time was 2025/12/09 08:54:08 UTC, with a BW of 0.02 Hz. The only difference in the total drive is seen in L and P, where it's higher below 0.6 Hz when we are in CALM.
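For reference, a minimal sketch of the kind of before/after ASD comparison described above, assuming NDS access via gwpy. The channel name, the two times, and the 0.02 Hz bin width (50 s FFTs) are taken from this entry; the 600 s span, averaging, and plotting details are illustrative assumptions, not the exact procedure used.

```python
# Hedged sketch: compare a TOTAL damping-drive ASD between two SEI_ENV states.
# Channel and times are from the alog; span/averaging/plot choices are assumed.
import matplotlib.pyplot as plt
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

CHAN = "H1:SUS-PR3_M1_ADD_P_TOTAL_MON_DQ"   # repeat for {PR3,SR3} x {L,P,Y}
SPAN = 600      # seconds of data per state (assumption)
FFTLEN = 50     # 50 s FFTs -> 0.02 Hz bin width, as quoted above

times = {
    "USEISM": "2025-12-09 12:45:26",   # UTC
    "CALM":   "2025-12-09 08:54:08",   # UTC
}

asds = {}
for label, t0 in times.items():
    start = to_gps(t0)
    data = TimeSeries.get(CHAN, start, start + SPAN)            # NDS2 fetch
    asds[label] = data.asd(fftlength=FFTLEN, overlap=FFTLEN / 2)

for label, asd in asds.items():
    plt.loglog(asd.frequencies.value, asd.value, label=label)
plt.xlim(0.05, 5)
plt.xlabel("Frequency [Hz]")
plt.ylabel("PR3 M1 ADD P TOTAL [ct/rtHz]")
plt.legend()
plt.savefig("pr3_m1_add_p_total_useism_vs_calm.png")
```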
So during those 0.6 Hz locklosses earlier today, we were in USEISM. Is it possible that the combination of the estimators in the USEISM state create an unstable combination?
This is possibly true. The estimator filters are designed/measured using a particular SEI environment, so it is expected that they would underperform when we change the SEI loops/blends.
Additionally, we use the GS13 signal for the ISI-->SUS transfer function. It might be the case that how in-loop vs. out-of-loop the GS13 is could do something to the transfer functions. I don't have any mathematical conclusions from it yet, but Brian and I will think about it.
I'm pretty confident that the estimators aren't a problem, or at least are a red herring. Just clarifying the language here -- "oscillation" is an overloaded term. And remember, we're in "recovery" mode from last Thursday's power outage -- so literally *everything* is suspect, wild guesses are being thrown around like flour in a bakery, and we only get brief, unrepeatable evidence (separated by tens of minutes) that something's wrong.

The symptom was: "we're trying 6 different things at once to get the IFO going. Huh -- the ndscope time-series of the IFO build-ups as we're locking looked to exponentially grow to lockloss in one lock stretch, and in another it just got noisier halfway through the lock stretch. What happened? Looks like something at 0.6 Hz." We're getting to "that point" in the lock acquisition sequence maybe once every 10 minutes. Meanwhile:
- There's an entire rack's worth of analog electronics that went dark in the middle of this, as one leg of its DC power failed (LHO:88446).
- The microseism is higher than usual and we're between wind storms, so we're trying different ISI blend configurations (LHO:88444).
- We're changing around global alignment because we think suspensions moved again during the "big" HAM2 ISI trip at the power outage (LHO:88450).
- There's an IFO-wide CDS crash after a while that requires all front-ends to be rebooted, with the suspicion that our settings configuration file tracking system might have been bad (LHO:88448).
Everyone in the room thinks "the problem" *could* be the thing they're an expert in, when it's likely a convolution of many things. Hence, Oli trying to turn OFF the estimators. And near that time, we switched the configuration of the sensor correction / blend filters of all the ISIs (switching the blends from WINDY to MICROSEISM -- see LHO:88444).

So -- there was
- only one, *maybe* two instances where an "oscillation" is seen in the sense of "positive feedback" or "exponential growth of control signal,"
- only one "oscillation" in the sense of "excess noise in the frequency region around 0.6 Hz," and the check of whether it actually *is* 0.6 Hz again isn't rigorous. That happens to be the frequency of the lowest L and P modes of the HLTSs, PR3 and SR3.

BUT -- Oli shows in their plots that:
- Before vs. after the power outage, when looking at times when the ISI platforms are in the same blend state, the PR3 and SR3 control is the same.
- Comparing the control request when the ISI platforms are in microseism vs. windy blends shows the expected change in control authority from ISI input: the change in shape of the PR3 and SR3 ASDs between ~0.1 and ~0.5 Hz matches the change in shape of the blends.

Attached is an ndscope of all the relevant signals -- or at least the signals in question, for verbal discussion later.
J. Driggers, R. Short, D. Barker, J. Kissel, R. McCarthy, M. Pirello, F. Clara
WP 12925, FRS 36300
The power supply for the negative rail (-18V) of the SUS-C1 rack in the CER -- the right power supply in VDC-C1, U3-U1 -- failed on 2025-12-09 22:55:40 UTC. This rack houses the coil drivers, AAs, AIs, and BI/BO chassis -- all the analog electronics for SUS-MC2, SUS-PR2, and SUS-SR2.
We found the issue via DIAG_MAIN, which reported "OSEMs in Fault," calling out MC2, PR2, and SR2. We confirmed that the IMC wasn't locking and that MC2 couldn't push the IMC through fringes. Also, the OSEM PD inputs on all stages of these suspensions were digital zero (not even ADC noise). Marc/Fil/Richard were quick on the scene and with the replacement.
- We brought the HAM3 and HAM4 ISIs to ISI_DAMPED_HEPI_OFFLINE in prep for a front-end model / IO chassis restart if necessary.
- Un-managed the MC2, PR2, and SR2 guardians by bringing their MODE to AUTO, and used those guardians to bring those SUS to SAFE.
- Richard/Marc/Fil replaced the failed -18V power supply.
- While there, the +18V supply had already been flagged in Marc's notes for replacement, so we replaced that as well (see D2300167).
- Replaced +18V power supply S1201909 with new power supply S1201944.
- Replaced failed -18V power supply S1201909 with new power supply S1201957.
The rack was powered back up, and suspensions and seismic were restored by 2025-12-09 23:31 UTC. The suspensions appear fully functional. Awesome work team!
Because we are back in the time of year when the ground motion can change a lot, I'm posting a quick reminder of what the blend filter low passes look like for the main two blends we use for horizontal dofs on St1 of the ISIs. The attached figure shows bode plots of the BSC St1 low passes on the right and the HAM St1 low passes on the left. Blue lines are the ~100 mHz blends we use for the microseism states; red lines are the 250 mHz blends we use for the "windy" states. The blends are the main component that changes between the calm/windy and useism states; I think we use the same sensor correction for both. I won't go into detail about what these plots mean, this is an iykyk kind of post. Sorry.
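For anyone who wants a rough picture without opening the real filter files, here is a generic sketch of two low passes with the corner frequencies mentioned above. These are plain Butterworth shapes for intuition only, not the actual ISI St1 complementary blend designs.

```python
# Cartoon only: generic 4th-order low passes at ~100 mHz and 250 mHz,
# standing in for the useism and "windy" St1 blend low passes.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

freqs = np.logspace(-2, 1, 1000)    # 10 mHz to 10 Hz
for fc, label in [(0.1, "~100 mHz (useism blend)"), (0.25, "250 mHz (windy blend)")]:
    b, a = signal.butter(4, 2 * np.pi * fc, btype="low", analog=True)
    _, h = signal.freqs(b, a, worN=2 * np.pi * freqs)
    plt.loglog(freqs, np.abs(h), label=label)

plt.axvspan(0.1, 0.5, color="gray", alpha=0.15, label="secondary useism band")
plt.xlabel("Frequency [Hz]")
plt.ylabel("|low pass| (displacement-sensor weight)")
plt.legend()
plt.savefig("blend_lowpass_cartoon.png")
```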
TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 21mph Gusts, 17mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
DEC 8 - FAC - annual fire system checks around site : In Progress
MON Tues - RELOCKING IFO : In Progress
TUES AM - HAM7 Pull -Y Door : Completed
FARO work at BSC2 (Jason, Ryan C) : Postponed in favor of locking?
HAM7 - in-chamber work to swap OPO assembly : In Progress.
All other work Status is currently unknown.
---------------------------------------------------------------------------------------------------
Notes:
Fire Alarm went off at 16:58 UTC
HVAC system down; temperature will climb everywhere, says Tyler @ 17:15 UTC.
HVAC fans turned back on @ 17:18 UTC. Tyler says the air handlers are back online; should be just a little blip in temp.
Tumbleweeds at EY piled up too high to access EY. Chris has since cleared this.
Fire alarm in MSR going off @ 18:07 UTC
Initial Alignment process:
Held ISC_LOCK in Idle
Forced X arm to get locked, Moved ITMX to increase the Comm beat note, then Offloaded it.
When we tried for an initial alignment, we skipped the green arms step of initial alignment.
Pushed PRM to get it locked while we were in PRC align.
Pushed BS while MICH_BRIGHT was aligning.
Pushed SRM to help SRC alignment.
Locking:
Jumped straight to Green Arms Manual and aimed for Offload_DRMI_ASC.
Jenne D. offloaded a "PR2 osem equivalent" by hand during one of our locks that got DRMI locked but lost lock at Turn_On_BS_Stage2.
Another lockloss at Turn_On_BS_Stage2.
We made it past Turn_On_BS_Stage2 when Jenne D. told PRC1 & SRC1 to not run in ASC.
H1 has been losing lock at a number of places before power up.
Vac finished pulling the -Y door and the 2 access ports on the +Y side, so I went out and locked the ISI. The A, B, and C lockers were fine; the D locker couldn't be fully engaged, which I think is a known issue for this ISI. I just turned it until I started feeling uncomfortable resistance, so D is partially engaged.
Tue Dec 09 10:13:17 2025 INFO: Fill completed in 13min 13secs
CDS is almost recovered from last Thursday's power outage. Yesterday Patrick and I started the IOCs for:
picket_fence, ex_mains, cs_mains, ncalx, h1_observatory_mode, range LED, cds_aux, ext_alert, HWS ETMY dummy
I had to modify the picket fence code to hard-code IFO=H1, the python cdscfg module was not working on cdsioc0.
We are keeping h1hwsey offline, so I restarted the h1hwsetmy_dummy_ioc service.
The EDC disconnection list is now down to just the HWS ETMX machine (84 channels), we are waiting for access to EX to reboot.
Jonathan replaced the failed 2TB disk in cdsfs0.
Jonathan recovered the DAQ for the SUS Triple test stand in the staging building.
I swapped the power supplies for env monitors between MY and EY, EY has been stable since.
I took the opportunity to move several IOCs from hand-running to systemd control on cdsioc0 configured by puppet. As mentioned, some needed hard-coding IFO=H1 due to cdscfg issues.
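For the record, the hard-coding was presumably along the lines of the fallback below; the cdscfg attribute name is a placeholder, since the point was exactly that the module wasn't usable on cdsioc0.

```python
# Hedged sketch of hard-coding IFO=H1 when cdscfg is unavailable on a host.
# The cdscfg attribute name is an assumption; the real module's interface
# may differ.  Falling back to the IFO environment variable is a common
# pattern on CDS machines.
import os

try:
    import cdscfg                        # normally provides site/IFO configuration
    IFO = getattr(cdscfg, "IFO", None)   # attribute name assumed for illustration
except ImportError:
    IFO = None

if not IFO:
    IFO = os.environ.get("IFO", "H1")    # hard-coded default, as done on cdsioc0
```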
CDS Overview.
Note there is a bug in the H1 Range LED display: a negative range is showing as 9 Mpc.
GDS still needs to be fully recovered.
TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.54 μm/s
QUICK SUMMARY:
SUS in-lock charge measurements did not run due to being unlocked.
HAM7 Door Bolts have been loosened & door is ready to come off.
Potential Tuesday Maintenance Items:
CDS - Log in and check on vacuum system computer (Patrick)
18-bit DACs in iscey should be replaced with 20-bit DACs???
Beckhoff upgrades
installing SUS front-end model infrastructure for JM1 and JM3, and renaming the h1sushtts.mdl to h1sush1.mdl
RELOCK CHECKPOINT IMC
18-bit DACs in h1oaf should be replaced with 20-bit DACs
h1sush2a/h1sush2b >> sush12 consolidation
Upgrade susb123 to LIGO DACs
Add sush6 chassis
DEC 8 - FAC - annual fire system checks around site
MON - RELOCKING IFO Reached DRMI last night
TUES AM - HAM7 Pull -Y Door
FARO work at BSC2 (Jason, Ryan C)
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Anamaria Effler, Robert Schofield]

This is a continuation of the work performed to mitigate the set of near-30 Hz and near-100 Hz combs, as described in Detchar issue 340 and lho-mallorcan-fellowship/-/issues/3, as well as the work in alogs 88089, 87889, and 87414.

In this search, we have been moving around two magnetometers provided to us by Robert. Given our previous analyses, we thought the possible source of the combs would be around either the electronics room or the LVEA close to the input optics. We have moved these two magnetometers around to cover a total of more than 70 positions. In each position, we left the magnetometers alone and still for at least 2 minutes, enough to produce ASDs using 60 seconds of data, recording the Z direction (parallel to the cylinder).

For each one of the positions, we recorded the data shown in the following plot. That is, we compute the ASD using a 60 s FT and check the amplitude of the ASD at the frequency of the first harmonic of the largest of the near-30 Hz combs, the fundamental at 29.9695 Hz. Then, we compute the median of the surrounding +-5 Hz and save the ASD value at 29.9695 Hz (the "peak amplitude") and the ratio of the peak against the median, to have a sort of "SNR" or "peak to noise ratio" (a short sketch of this metric is included at the end of this entry). Note that we also checked the permanent magnetometer channels; however, in order to compare them to the rest, we multiplied the ASDs of the magnetometers that Robert gave us by a hundred so that all of them had units of Tesla.

After saving the data for all the positions, we have produced the following two plots. The first one shows the peak to noise ratio at all positions we have checked around the LVEA and the electronics room:
Here the X and Y axes are simply the image pixels. The color scale indicates the peak to noise ratio of the magnetometer in each position. The background LVEA layout has been taken from LIGO-D1002704. Note that some points slightly overlap with others; this is because in some cases we have checked different directions or positions in the same rack. It can be seen from this SNR plot that the source of the comb appears to be around the PSL/ISC racks. Things become more clear if we also look at the peak amplitude (not the ratio), as shown in the following figure:
Note that in this figure the color scale is logarithmic. Looking at the peak amplitudes, there is one particular position in the H1-PSL-R2 rack whose amplitude is around 2 orders of magnitude larger than at the other positions. Note that this position also had the largest peak to noise ratio. This position, which we have tagged as "Coil", corresponds to putting the magnetometer into a coil of white cables behind the H1-PSL-R2 rack, as shown in this image:
The reason we put the magnetometer there is that we had also found the peak amplitude to be around 1 order of magnitude larger than on any other magnetometer when placed on top of one set of white cables that go from inside the room towards the rack and then up towards we-are-not-sure-where:
This image shows the magnetometer on top of the cables on the ground behind the H1-PSL-R2 rack; the white ones at the top of the image appear to show the peak at its highest. It could be that the peak is louder in the coil because having so many cables in a coiled arrangement generates a stronger magnetic field.

This is the current status of the hunt. These white cables might indicate that the source of these combs is the interlock system, which differs between L1 and H1 and has a chassis in the H1-PSL-R2 rack. However, we still need to track down exactly where these white cables go and try turning things on and off based on what we find, in order to see if the combs disappear.
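A minimal sketch of the per-position metric referenced earlier in this entry, assuming the portable magnetometer is read out through the PEM channel named below; the 60 s FFT, the 29.9695 Hz fundamental, the +-5 Hz median, and the x100 Tesla scaling are from the text, the rest (data fetching, span handling) is illustrative.

```python
# Hedged sketch of the comb metric used above: ASD from 60 s of data,
# peak amplitude at the 29.9695 Hz comb fundamental, and a "peak to noise
# ratio" against the median ASD of the surrounding +/- 5 Hz.  The portable
# magnetometer ASDs are scaled by 100 to compare in Tesla with the
# permanent magnetometers.
import numpy as np
from gwpy.timeseries import TimeSeries

CHAN = "H1:PEM-CS_ADC_5_23_2K_OUT_DQ"   # portable magnetometer readout channel
F0 = 29.9695                            # comb fundamental [Hz]

def comb_metric(start_gps, span=60, scale=100.0):
    """Return (peak amplitude, peak-to-noise ratio) at F0 for one position."""
    data = TimeSeries.get(CHAN, start_gps, start_gps + span)
    asd = scale * data.asd(fftlength=span)          # single 60 s FFT
    freqs = asd.frequencies.value
    peak = asd.value[np.argmin(np.abs(freqs - F0))]
    band = np.abs(freqs - F0) <= 5.0
    noise = np.median(asd.value[band])
    return peak, peak / noise
```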
The white cables in question are mostly for the PSL enclosure environmental monitoring system, see D1201172 for a wiring diagram (page 1 is the LVEA, page 2 is the Diode Room). After talking with Alicia and Joan-Rene there are 11 total cables in question: 3 cables that route down from the roof of the PSL enclosure and 8 cables bundled together that route out of the northern-most wall penetration on the western side of the enclosure (these are the 8 pointed out in the last picture of the main alog). The 3 that route from the roof and 5 of those from the enclosure bundle are all routed to the PSL Environmental Sensor Concentrator chassis shown on page 1 of D1201172, which lives near the top of PSL-R2. This leaves 3 of the white cables that route out of the enclosure unaccounted for. I was able to trace one of them to a coiled up cable that sits beneath PSL-R2; this particular cable is not wired to anything and the end isn't even terminated, it's been cleanly cut and left exposed to air. I haven't had a chance to fully trace the other 2 unaccounted cables yet, so I'm not sure where they go. They do go up to the set of coiled cables that sits about half-way up the rack, in between PSL-R1 and PSL-R2 (shown in the next-to-last picture in the main alog), but their path from there hasn't been traced yet.
I've added a PSL tag to this alog, since evidence points to this involving the PSL.
[Joan-Rene, Alicia]

Yesterday we tried disconnecting the PSL Environmental Sensor Concentrator, where some of the suspicious white cables were going, but no change was seen in the comb amplitude. Continuing our search with the magnetometer in the same rack, we found out that the comb is quite strong when the magnetometer is put beside the power supply that is close to the top of the rack:

So it may be that these lines are transmitted elsewhere through this power supply. We connected a voltage divider to it and fed it to the same channel we were using for the magnetometer (H1:PEM-CS_ADC_5_23_2K_OUT_DQ):
Two dark green cables come out of this power supply; the first one goes to the H1-PSL-R1 rack:
However, the comb did not appear as strong when we put the magnetometer beside the chassis where that cable leads. On the other hand, the comb does appear strong if we follow the other dark green cable, which goes to this object:
Jason told us this may be related to the interlock system. Following the white cables that go from this object, it would appear that they go into the coil where we saw that the comb was very strong. We think it would be interesting to see what here can be turned off, to see if the comb disappears.
[Anamaria, RyanS, Jenne, Oli, RyanC, MattT, JeffK]
We ran through an initial alignment (more on that in a moment), and have gotten as far as getting DRMI locked for a few minutes. Good progress, especially for a day when the environmental conditions have been much less than favorable (wind, microseism, and earthquakes). We'll try to make more progress tomorrow after the wind has died down overnight.
During initial alignment, we followed Sheila's suggestion and locked the green arms. The comm beatnote was still very small (something like -12 dBm). PR3 is in the slider/osem position that it was in before the power outage. We set Xarm ALS to use only the ETM_TMS WFS, and not use the camera loop. We then walked ITMX to try to improve the COMM beatnote. When I did it, I had thought that I only got the comm beatnote up to -9 dBm or so (which is about where it was before the power outage), but later it seems that maybe I went too far and it's all the way up at -3 dBm. We may consider undoing some of that ITMX move. The ITMX, ETMX, and TMSX yaw osem values nicely matched where they had been before the power outage. All three suspensions' pitch osems are a few urad different, but going closer to the pre-outage place made the comm beatnote worse, so I gave up trying to match the pitch osems.
We did not reset any camera setpoints, so probably we'll want to do the next initial alignment (if we do one) using only ETM_TMS WFS for Xgreen.
The rest of initial alignment went smoothly, after we checked that all other optics' sliders were in their pre-outage locations. Some were tens of urad off on the sliders, which doesn't make sense. We had to help the alignment in several places by hand-aligning the optics a bit, but made no by-hand changes to control servos or dark offsets or anything like that.
When trying to lock, we struggled to hold Yarm locked and lock COMM and DIFF until the seismic configuration auto-switched to the microseism state. Suddenly things were much easier.
We caught DRMI lock twice on our first lock attempt, although we lost DRMI lock during ASC. We also were able to lock PRMI, but lost lock while I was trying to adjust PRM. Later, we locked PRMI again and were able to offload the PRMI ASC (to PRM and BS).
The wind has picked back up and it's a struggle to catch and hold DRMI lock, so we're going to try again tomorrow.
During this process, I also flipped the "manual_control" flag in lscparams so that ALS will not scan alignment on its own and ISC_LOCK won't automatically jump to PRMI from DRMI or MICH from PRMI.
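For context, a toy illustration of what that flag amounts to; lscparams is just a python module of parameters that the guardian nodes import, but the decision logic shown here is schematic, not the actual ALS / ISC_LOCK code, and the fallback default is an assumption.

```python
# Schematic only -- not the real guardian code.  Flipping manual_control in
# lscparams stops the automation described above (ALS alignment scans and
# the automatic DRMI -> PRMI -> MICH fallbacks).
try:
    from lscparams import manual_control
except ImportError:
    manual_control = True   # what was set during this recovery work

def next_step_after_drmi_lockloss():
    """Toy version of the decision the locking automation would make."""
    if manual_control:
        return "retry DRMI"          # keep trying, don't fall back on its own
    return "fall back to PRMI"       # illustrative automatic fallback path
```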
(Randy, Jordan, Travis, Filiberto, Gerardo)
We closed four small gate valves: two at the relay tube, RV-1 and RV-2, and two at the filter cavity tube between BSC3 and HAM7, FCV-1 and FCV-2. The purge air system had been on since last week, with a dew point reported by the dryer tower of -66 °C and -44.6 °C measured chamber side. Particulate was measured at the port by the chamber: zero for all sizes. The HAM7 ion pump was valved out.
Filiberto helped us out with making sure high voltage was off at HAM7, we double checked with procedure M1300464. Then, system was vented per procedure E2300169 with no issues.
Other activities at the vented chamber:
Currently the chamber has the purge air active at a very low setting.
(Randy, Travis, Jordan, Gerardo)
-Y door was removed with no major issues, except the usual O-ring sticking to the flat flange; it stuck around the bottom part of the door, from about 5 to 8 o'clock. Other than that, no other issues. Both blanks were removed and the ports were covered with an aluminum sheet.
Note, the soft cover will rub against ZM3 if the external jig to pull the cover is not used.
TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
HAM7 was prepped and then vented today; there are 4 bolts left on each door, and the HAM7 HV was turned off (alog 88421). I changed the nuc30 DARM FOM to the NO_GDS template in the launch.yaml file. Team CDS has been working their way through some of the EDC channel disconnects; the list length shrinks every time I look at it.
We wanted to lock for health checks today but the Earth disagreed: a 7.6 from Japan rumbled in around 14:00 UTC and then a 6.7 from the same region at 22:00 UTC, and the wind started to pick up around 21:00 UTC. Windy.com reports the wind will increase/remain elevated until it peaks around 10/11 PM PST (06/07 UTC), then it should start to decrease. Ground motion and wind are still elevated as of the end of the shift.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 15:45 | FAC | Nellie | LVEA | N->Y->N | Tech clean | 18:21 |
| 16:03 | FAC | Kim | LVEA | N->Y->N | Tech clean | 18:21 |
| 16:14 | FAC | Randy | LVEA | N | Door prep, HAM6/7 | 17:07 |
| 16:44 | SAF | Sheila | LVEA | N -> Y | LASER HAZARD transition to HAZARD | 16:53 |
| 16:53 | | LASER HAZARD | LVEA | Y | LVEA IS LASER HAZARD | 18:00 |
| 16:54 | ISC | Sheila | LVEA | Y | SQZT7 work | 17:53 |
| 17:08 | CAL | Tony | PCAL lab | Y | PCAL measurement | 18:28 |
| 17:13 | EE | Fil | Mid/EndY | N | Power cycle electronics, timing issue | 17:46 |
| 17:36 | EE | Marc, Daniel | LVEA | Y | Check on racks by SQZT7 | 18:46 |
| 17:47 | EE | Fil | LVEA | Y | Join Marc | 18:13 |
| 17:55 | ISC | Matt | Prep lab | N | Checks, JOT lab | 18:28 |
| 18:05 | VAC | Travis | LVEA | N | Prep for HAM7 vent | 19:20 |
| 18:10 | EE | Fil | LVEA | n | Shutting HV off for HAM7 | 19:10 |
| 18:17 | SAF | Richard | LVEA | N | Check on FIl and Marc | 18:28 |
| 18:21 | VAC | Gerardo | LVEA | N | HAM7 checks | 19:36 |
| 18:29 | CAL | Tony, Yuri | FCES | N | | 19:19 |
| 20:29 | CDS | Dave | MidY and EndY | N | Plug in switch | 21:16 |
| 20:46 | CAL | Tony | PCAL lab | LOCAL | Grab goggles | 20:48 |
| 21:48 | VAC | Randy | LVEA | N | Door bolts | 22:58 |
| 22:05 | VAC | Gerardo | LVEA | N | HAM7 doors | 23:20 |
| 22:12 | VAC | Jordan | LVEA | N | HAM7 door bolts | 22:38 |
| 22:16 | FAC | Tyler +1 | MY, EY | N | Fire inspections | 00:06 |
| 22:18 | ISC | Matt | Prep Lab | N | Parts on CHETA table | 22:53 |
| 22:43 | VAC | Travis | LVEA | N | Join door crew | 23:19 |
| 22:58 | | Anamaria, Rene, Alicia | LVEA | N | Checks by PSL | 23:30 |
| 23:55 | CAL | Tony | PCAL lab | LOCAL | Take a quick picture | 00:03 |
| 23:57 | ISC | Jennie | Prep lab, LVEA | N | Gather parts | Ongoing |
18:32 UTC SEI_CONF back to AUTO from MAINTENANCE where it was all weekend
18:58 UTC HAM7 ISI tripped
22:03 UTC Earthquake mode as a 6.6 from Japan hit us
J. Kissel, J. Warner

Trending around this morning to understand the reported ETMY software watchdog (SWWD) trips over the weekend (LHO:88399 and LHO:88403), Jim and I conclude that -- while unfortunate -- nothing in software, electronics, or hardware is doing anything wrong or broken; we just had a whopper Alaskan earthquake (see USGS report for EQ us6000rsy1 at 2025-12-06 20:41:49 UTC) and a few big aftershocks.

Remember, since the upgrade to the 32-channel, 28-bit DACs last week, both end stations' DAC outputs will "look CRAZY" to all those who are used to looking at the number of counts of a 20-bit DAC. Namely, the maximum number of counts is a factor of 2^8 = 256x larger than previously, saturating at +/- 2^27 = +/- 134217728 [DAC counts] (as opposed to +/- 2^19 = +/- 524288 [DAC counts]).

The real conclusion: Both the SWWD thresholds and the USER WD sensor calibration need updating; they were overlooked in the change of the OSEM sat amp whitening filter from 0.4:10 Hz to 0.1:5.3 Hz per ECR:E2400330 / IIET:LHO:31595. The watchdogs use a 0.1 to 10 Hz band-limited RMS as their trigger signal, and the digital ADC counts they use (calibrated into either raw ADC voltage or microns, [um], of top mass motion) will see a factor of anywhere from 2x to 4x increase in RMS value for the same OSEM sensor PD readout current. In other words, the triggers are "erroneously" a factor of 2x to 4x more sensitive to the same displacement (a quick numeric check of this factor is sketched at the end of this entry). As these two watchdog trigger systems are currently mis-calibrated, I put all references to their RMS amplitudes in quotes, i.e. ["um"]_RMS for the USER WDs and ["mV"]_RMS for the SWWDs, and quote a *change* in value where possible. Note -- any quote of OSEM sensors (i.e. the OSEM basis OSEMINF_{OSEM}_OUT_DQ and EULER basis DAMP_{DOF}_IN1_DQ channels) in [um] is correctly calibrated, and the ground motion sensors (and any band-limited derivatives thereof, the BLRMS and PeakMons) are similarly well-calibrated.

Also: The L2 to R0 tracking went into oscillation because the USER WDs didn't trip. AGAIN -- we really need to TURN OFF this loop programmatically until high in the lock acquisition sequence. It's too hidden -- from a user interface standpoint -- for folks to realize that it should never be used, and is always suspect, when the SUS system is barely functional (e.g. when we're vented, or after a power outage, or after a CDS hardware / software change, etc.).

Here's the timeline, leading up from the first SUS/SEI software watchdog trip, that helped us understand that there's nothing wrong with the software / electronics / hardware: it was the giant EQ that tripped things originally, and the subsequent trips were because of an overlooked watchdog trigger sensor vs. threshold mis-calibration coupled with the R0 tracking loops.

2025-12-04
20:25 Sitewide power outage.
22:02 Power back on.

2025-12-05
02:35 SUS-ETMY watchdog untripped, suspension recovery.
20:38 SEI-ETMY system back to FULLY ISOLATED (large gap in recovery between SUS and SEI because the SEI GRD was non-functional while the RTCDS file system had not yet recovered).
20:48 Locking / initial alignment starts for recovery.

2025-12-06
20:41:49 Huge 7.0 mag EQ in Alaska.
20:46:30 First S&P waves hit the observatory; corner station peakmon (in Z) is around 15 [um/s]_peak (30-100 mHz band). SUS-ETMY sees this larger motion: motion on the M0 OSEM sensors in the 0.1 to 10 Hz band increases from 0.01 ["um"]_RMS to 1 ["um"]_RMS. SUS-SWWD, using the same sensors in the same band but calibrated into ADC volts, goes from 0.6 ["mV"]_RMS to ~5 ["mV"]_RMS.
20:51:39 ISI-ETMY ST1 USER watchdog trips because the T240s have tilted off into saturation, killing the ST1 isolation loops. SUS-ETMY sees the large DC shift in alignment from the "loss" of ST1, and sees the very large motion, increasing to ~100 ["um"]_RMS (with the USER WD threshold set to 150 ["um"]_RMS) -- the USER WD never trips. But peak motion is oscillating in the 300 ["um"]_peak range (though not close to saturating the ADC). SUS-SWWD reports an RMS voltage increase to 500 ["mV"]_RMS (with the SWWD threshold set to 110 ["mV"]_RMS) -- this starts the alarm count-down of 600 [sec] = 10 [min].
20:51:40 ISI-ETMY ST2 USER watchdog trips ~0.5 sec later as the GS13s go into saturation and the actuators try hard to keep up with the "missing" ST1 isolation. SUS-ETMY really starts to shake here.
20:52:36 The peak Love/Rayleigh waves hit the site, with the corner station Z motion peakmon reporting 140 [um/s] and the 30-100 mHz BLRMS reporting 225 [um/s]. At this point it's clear from the OSEMs that the mechanical system (either the ISI or the QUAD) is clanking against earthquake stops, as the OSEMs show saw-tooth-like waveforms.
20:55:39 SWWD trips for the suspension, shutting off suspension DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon, since the SUS is still ringing / naturally recovering from the still-large EQ and uncontrolled ISI.
20:59:39 SWWD trips for seismic, shutting off all DAC output for HEPI and ISI ETMY. The SUS-ETMY OSEMs don't really notice -- it's still naturally ringing down with a LOT of displacement. There is a noticeable small alignment shift as HEPI sloshes to zero.
21:06 SUS-ETMY SIDE OSEM stops looking like a saw-tooth, the last one to naturally ring down. After this, all SUS look wobbly, but normal. The ISI-ETMY ST2 GS-13s stop saturating.
21:08 SUS-ETMY LEFT OSEM stops exceeding the SWWD threshold, the last one to do so.

2025-12-07
00:05 HPI-ETMY and ISI-ETMY USER WDs are untripped, though it was a "tripped again; reset" messy restart for HPI because we didn't realize that the SWWD needed to be untripped. The SEI manager was trying to get back to DAMPED, which includes turning on the ISO loops for HPI. Since no HPI or ISI USER WDs know about the SWWD DAC shut-off, they "can begin" to do so, "not realizing" there is no physical DAC output. The ISI's local damping is "stable" without DACs because there's just not a lot that these loops do and they're AC coupled; HPI's feedback loops, which are DC coupled, will run away.
00:11 SUS and SEI SWWDs are untripped.
00:11:44 HPI USER WD untripped.
00:12 RMS of OSEM motion begins to ramp up again; the L / P OSEMs start to show an oscillation at almost exactly 2 Hz. The R0 USER WD never tripped, which allowed the H1 SUS ETMY L2 (PUM) to R0 (TOP) DC-coupled longitudinal loop to flow out to the DAC. With the seismic system in DAMPED (HEPI running, but ST1 and ST2 of the ISI only lightly damped), and with the M0 USER WD still tripped and the main chain without any damping or control, turning HEPI on caused a shift in the alignment of the QUAD, changing the distance / spacing at the L2 stage; the L2 "witness" OSEMs started feeding the undamped main chain L2 motion back to the reaction chain R0 stage, and it slowly began oscillating in positive feedback. See the "R0 turn ON vs. SWWD" annotated screenshot.

Looking at the recently measured open loop gain of this longitudinal loop -- taken with the SUS in its nominally DAMPED condition and the ISI ISOLATED -- there's a damped mode at 2 Hz. It seems very reasonable that this mode is a main chain mode, and that when undamped it would destroy the gain margin at 2 Hz and go unstable. See the R0Tracking_OpenLoopGain annotated screenshot from LHO:87529. And as this loop pushes on the main chain, with an only-damped ISI, it's entirely plausible that the R0 oscillation coupled back into the main chain, causing a positive feedback loop.

00:22 The main chain OSEM RMS exceeds the SWWD threshold again as the positive feedback gets out of control, peaking around ~300 ["mV"]_RMS, with the USER WD seeing ~100 ["um"]_RMS, worst for the pitch / longitudinal sensors F1, F2, F3. But again, this does NOT trip the R0 USER WD, because the F1, F2, F3 R0 OSEM motion is "only" 80 ["um"]_RMS, still below the 150 ["um"]_RMS limit.
00:27 SWWD trips for the suspension AGAIN as a result, shutting off all SUS DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon. THIS kills the oscillation.
00:31 SWWD trips for seismic AGAIN, shutting off all DAC output for HEPI and ISI ETMY.
15:59 SWWDs are untripped, and because the SUS USER WD is still tripped, the same L2 to R0 instability happens again. This is where the impression that "the watchdogs keep tripping; something's broken" enters in.
16:16 SWWD for SUS trips again.
16:20 SWWD for SEI trips again.

2025-12-08
15:34 SUS-ETMY USER WD is untripped, main chain damping starts again, and recovery goes smoothly.
16:49 SUS-ETMY brought back to ALIGNED.
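To put a number on the 2x to 4x statement above (the sanity check referenced before the timeline), here is a quick sketch comparing the old and new satamp whitening responses over the 0.1-10 Hz watchdog BLRMS band. The zero:pole values (0.4:10 Hz old, 0.1:5.3 Hz new) are the ones quoted from ECR E2400330; the unity-DC-gain normalization is an assumption for illustration.

```python
# Ratio of new (0.1 Hz zero : 5.3 Hz pole) to old (0.4 Hz zero : 10 Hz pole)
# OSEM satamp whitening response, across the 0.1 - 10 Hz watchdog band.
import numpy as np

def whitening_mag(f, f_zero, f_pole):
    """|(1 + i f/f_zero) / (1 + i f/f_pole)|, normalized to unity DC gain."""
    return np.abs((1 + 1j * f / f_zero) / (1 + 1j * f / f_pole))

f = np.logspace(np.log10(0.1), np.log10(10), 500)
ratio = whitening_mag(f, 0.1, 5.3) / whitening_mag(f, 0.4, 10.0)

print(f"min ratio in band: {ratio.min():.2f}x")   # ~1.4x near 0.1 Hz
print(f"max ratio in band: {ratio.max():.2f}x")   # ~3.8x near 1-2 Hz
```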
Jeff alerted me that we had never updated the SUS watchdog compensation filters for the suspension stages with the upgraded satamps (ECR E2400330). In the SUS watchdog filter banks, in the BANDLIM bank, FM6 contains the compensation filter for the satamps. I used a script to go through and update all of these for the suspensions and stages with their precise compensation filter values (the measured values of each satamp channel's response live in /ligo/svncommon/SusSVN/sus/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/), then loaded the new filter files in. This filter module was updated for:
I ran the dry air system thru its quarterly test, FAMIS task. The system was started around 8:20 am local time and turned off by 11:15 am. The system achieved a dew point of -50 °F, see attached photo taken towards the end of the test. Noted that we may be running low on oil at the Kobelco compressor; checking with the vendor on this. The picture of the oil level was taken while the system was off.
(Jordan, Gerardo)
We added some oil to the Kobelco reservoir with the compressor off. We added about 1/2 gallon to get the level up to the half mark; see attached photo of the level, taken after the compressor had been running for 24+ hours. The level is now at nominal.
We took some photos to help us determine where to place the irises and how to route the new in-vacuum cabling. Linking them here since they might be useful when planning for future vents:
https://photos.app.goo.gl/xYCUhbqwxzZVwe7c6