Oli, Ryan S, Rahul
Except for ETMX and ETMY (timing error, work currently ongoing), we have recovered all other suspensions by un-tripping the WD and setting them to SAFE for tonight. The INMONs look fine - eyeballed them all (BOSEMs and AOSEMs), nothing out of the ordinary.
For ETMX and ETMY, Dave is currently performing a computer restart, following which they will be set to SAFE as well.
TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
After CDS & H1-Locking work of yesterday, today we transitioned to starting prep work for the upcoming vent of HAM7.
However, there was more CDS work today which was related to ISC Slow Controls....but in the middle of this...
There was a 90min power outage on site!!
LOG:
Recovery milestones:
22:02 Power back up
22:04 CDS / GC seeing what's on/functional, then bringing infrastructure back up
22:10 VAC team starts the Kobelco to support the air pressure that's keeping the corner station gate valves from closing
22:13 Phones are back
22:17 LHO GC Internet's back
22:25 GC wifi back up, alog and CDS back up
22:40 RyanS, Jason, Patrick got PSL Beckhoff back up
22:55 VAC in LVEA/FCES back up
22:57 CDS back up (only controls)
23:27 PSL computer back
00:14 Safety interlock is back
00:14 HV and ALSY on at EY
00:35 opslogin0 back up
00:37 CS HV back up
00:53 CS HEPI pump station controller back up
01:05 CO2X and CO2Y back up
01:10 HV and ALSY on at EY
01:22 PSL is alive
Note: EY 24MHz oscillator had to be power cycled to resync
Casualties:
- lost ADC on h1seih16
- 18V power supply failure at EY in SUS rack
Log:
| Start Time | System | Name | Location | Laser Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 22:32 | VAC | Gerardo | MX, EX, MY, EY | n | Bringing back VAC computers | 23:41 |
| 23:32 | VAC | Jordan | LVEA, FCES | n | Bringing back VAC computers | 22:55 |
| 22:54 | PSL | RyanS, Jason | LVEA | n | Turning PSL chiller back on | 23:27 |
| 23:03 | VAC | Jordan | LVEA | n | Checking out VAC computer | 23:41 |
| 23:56 | SAF | Richard | EY | n | Looking at safety system | 00:25 |
| 00:02 | JAC | Jennie | JOAT, OpticsLab | n | Putting components on table | 01:24 |
| 00:19 | SAF | TJ | EX | n | Turning on HV and ALSX | 00:49 |
| 00:38 | TCS | RyanC | LVEA | n | Turning CO2 lasers back on | 01:06 |
| 01:03 | EE | Marc | CER | n | Power cycling io chassis | 01:14 |
| 01:06 | PSL | RyanS, Jason | LVEA | n | Checking makeup air for PSL | 01:10 |
| 01:07 | JAC | Jennie | LVEA | n | Grabbing parts | 01:14 |
| 01:08 | Beckhoff | Patrick | CR | - | BRSX recovery | 02:20 |
| 01:08 | SEI | Jim | EX, EY | - | HEPI pump station recovery | 01:58 |
| 01:25 | SEI | Patrick | EX | n | BRSX troubleshooting | 01:54 |
| 01:27 | EE | Marc | EX | n | Looking at RF sources | 02:09 |
| 02:00 | EE | Fil | EY | n | Power cycling SUSEY | 02:19 |
| 02:09 | VAC | Gerardo | MX | N | Troubleshooting VAC computer | 02:27 |
| 02:12 | EE | Marc | CER | N | Checking power supplies | 02:16 |
Since Oli's alog, I tried to keep a rough outline of the goings-on:
Marc and Fil went down to EY to replace the failed power supply, which brought life back to the EY front ends.
Dave noticed several models across site had timing errors, so he restarted them.
Gerardo continued to troubleshoot VAC computers at the mid-stations.
Once CDS boots were finished, I brought all suspension Guardians to either ALIGNED or MISALIGNED so that they're damped overnight.
I started to recover some of the Guardian nodes that didn't come up initially. When TJ started the Guardian service earlier, it took a very long time, but most of the nodes came up and he put them into good states for the night. The ones that didn't come up (still white on the GRD overview screen) I've been able to revive with a 'guardctrl restart' command, but I can only do a couple at a time or else the process times out. Even this way, the nodes take several minutes to come online. I got through many of the dead nodes, but I did not finish as I am very tired.
Main things still to do for recovery (off the top of my head):
R. Short, P. Thomas, J. Oberling
We have recovered the PSL after today's power outage. Some notes for the future:
I've attached a picture of the Settings table for PSL sensor calibration and operating hours for future reference. Again, our persistent operating hours (that track total uptime of PSL laser components; OPHRS A in the attached picture) will continue to be wrong as we cannot update this value. The current operating hours, which tracks operating hours of currently operating components (i.e. we've been running this specific NPRO for X hours; OPHRS in the attached picture) are correct.
We have the PMC and FSS RefCav locked, but have left the ISS OFF overnight while the PMC settles. The PMC requires a beam alignment tweak (normal after an extended time off, like a 90 minute power outage) but we don't yet have Beckhoff so we don't have access to our picomotor mounts. I'll tweak the beam alignment tomorrow once Beckhoff has been recovered.
[Sheila, Eric, Karmeng]
We checked the NLG of the OPO without cavity lock; the threshold power is roughly 3.7mW, the same order as in the test performed at MIT (E2500270).
Checked the crystal alignment and found 9 good dual-resonant positions, from -2689um to 1223um, with 490um separation between each point. Everything seems to be in a good position.
Plus a clipped dual resonance position at -3190um. The crystal edges are roughly at 1500um and -3200um.
Red alignment does not shift between the various crystal positions.
It's been a few minutes so far. There is emergency lighting. Luckily since it was lunchtime there were no people out on the floor. Recovery Begins!
FAMIS 38886, last checked in alog88025
Both HAM5 and the BS chambers were in 'ISI_DAMPED_HEPI_OFFLINE' at the time this was run, but all other chambers were nominal.
There are 15 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -1.565 [V]
ETMX T240 2 DOF Y/V = -1.633 [V]
ETMX T240 2 DOF Z/W = -0.886 [V]
ITMX T240 1 DOF X/U = -2.256 [V]
ITMX T240 1 DOF Z/W = 0.467 [V]
ITMX T240 3 DOF X/U = -2.386 [V]
ITMY T240 3 DOF X/U = -1.073 [V]
ITMY T240 3 DOF Z/W = -2.895 [V]
BS T240 1 DOF Y/V = -0.323 [V]
BS T240 2 DOF Z/W = 0.327 [V]
BS T240 3 DOF X/U = -0.547 [V]
BS T240 3 DOF Z/W = -0.401 [V]
HAM8 1 DOF X/U = -0.638 [V]
HAM8 1 DOF Y/V = -0.729 [V]
HAM8 1 DOF Z/W = -1.11 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.007 [V]
ETMX T240 1 DOF Y/V = -0.049 [V]
ETMX T240 1 DOF Z/W = -0.017 [V]
ETMX T240 3 DOF X/U = -0.005 [V]
ETMX T240 3 DOF Y/V = -0.056 [V]
ETMX T240 3 DOF Z/W = -0.051 [V]
ETMY T240 1 DOF X/U = 0.045 [V]
ETMY T240 1 DOF Y/V = 0.198 [V]
ETMY T240 1 DOF Z/W = 0.249 [V]
ETMY T240 2 DOF X/U = -0.073 [V]
ETMY T240 2 DOF Y/V = 0.218 [V]
ETMY T240 2 DOF Z/W = 0.037 [V]
ETMY T240 3 DOF X/U = 0.268 [V]
ETMY T240 3 DOF Y/V = 0.076 [V]
ETMY T240 3 DOF Z/W = 0.147 [V]
ITMX T240 1 DOF Y/V = 0.254 [V]
ITMX T240 2 DOF X/U = 0.151 [V]
ITMX T240 2 DOF Y/V = 0.258 [V]
ITMX T240 2 DOF Z/W = 0.223 [V]
ITMX T240 3 DOF Y/V = 0.104 [V]
ITMX T240 3 DOF Z/W = 0.094 [V]
ITMY T240 1 DOF X/U = 0.074 [V]
ITMY T240 1 DOF Y/V = 0.123 [V]
ITMY T240 1 DOF Z/W = -0.013 [V]
ITMY T240 2 DOF X/U = 0.017 [V]
ITMY T240 2 DOF Y/V = 0.22 [V]
ITMY T240 2 DOF Z/W = 0.148 [V]
ITMY T240 3 DOF Y/V = 0.091 [V]
BS T240 1 DOF X/U = 0.184 [V]
BS T240 1 DOF Z/W = -0.223 [V]
BS T240 2 DOF X/U = 0.094 [V]
BS T240 2 DOF Y/V = -0.224 [V]
BS T240 3 DOF Y/V = 0.004 [V]
There are 2 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -4.625 [V]
STS EY DOF Z/W = 2.321 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.498 [V]
STS A DOF Y/V = -0.915 [V]
STS A DOF Z/W = -0.485 [V]
STS B DOF X/U = 0.154 [V]
STS B DOF Y/V = 0.939 [V]
STS B DOF Z/W = -0.337 [V]
STS C DOF X/U = -0.685 [V]
STS C DOF Y/V = 0.77 [V]
STS C DOF Z/W = 0.505 [V]
STS EX DOF X/U = -0.195 [V]
STS EX DOF Y/V = -0.111 [V]
STS EX DOF Z/W = 0.112 [V]
STS EY DOF Y/V = 1.271 [V]
STS FC DOF X/U = 0.071 [V]
STS FC DOF Y/V = -1.254 [V]
STS FC DOF Z/W = 0.567 [V]
Last night I took a calibration measurement (broadband and simulines) after the IFO was locked for about 3.5 hours. This generated calibration report 20251204T035347Z.
The results from broadband and simulines on the front page look reasonable. The plot label indicates that these are from PCAL X and Y, but I think the broadband and simulines are run on PCAL Y only, so I don't know if PCAL X agrees with Y. I believe that, as of last night, the DAC for PCAL X had been swapped.
I made no changes to the pydarm ini file to account for the DAC change before running the report.
Jeff and I looked over the report and have concluded that the changes we see are very minimal, and do not require any updates to the calibration at this time.
There is a 1% change in the L3 actuation strength, comparing the measurement from 11/18 and 12/04. This could be from the DAC change or it could be from charging. Either way, it is small enough that kappa TST is likely correcting this properly. The overall systematic error is still around 1%, as it was before the DAC change.
We don't think there needs to be any changes to the pydarm ini file at this time.
Thu Dec 04 10:15:45 2025 INFO: Fill completed in 15min 42secs
Plot has zoomed y-scale to reduce number of divisions in order to show detail.
FAMIS 27645
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
Summary of where we are in the upgrade:
h1susey
All Gen Std DACs replaced with 2 LIGO-DACs. All AI-Chassis upgraded to SCSI daisy-chain.
Timing card replaced with one having latest firmware
New models [h1iopsusey, h1susetmy, h1sustmsy, h1susetmypi]
All models running RCG-5.5.2
h1susex
All Gen Std DACs and first-gen LIGO-DAC replaced with 2 LIGO-DACs. All AI-Chassis upgraded to SCSI daisy-chain.
Timing card firmware upgraded in place
New models [h1iopsusex, h1susetmx, h1sustmsx, h1susetmxpi]
All models running RCG-5.5.2
h1iscex
18bit-DAC replaced with 20bit-DAC.
Timing card firmware upgraded in place
New models [h1iopiscex, h1calex, h1pemex]
All models running RCG-5.5.2
h1iscey
Timing card firmware upgraded in place
h1seiex, h1seiey
Timing card firmware upgraded in place
h1susauxex, h1susauxey
Timing card firmware upgraded in place.
All models running RCG-5.5.2 (first test of new RCG)
h1pemmx, h1pemmy
Timing card firmware upgraded in place.
At approx 8am the cleanroom above HAM 5/6/7 was powered on in prep for vent activities and initial cleanings. As is to be expected, zone 4g temp has spiked. No action on behalf of FAC is required at this time. T. Guidry K. Stewart N. Sanchez
On Tuesday 02dec2025 Marc upgraded the firmware version of the IO Chassis timing cards at both end stations and mid stations. This upgrade was done "in place" using a laptop connected to the JTAG port on the timing card.
The current timing card firmware versions for H1 are:
All are running 0xfe0 except the following which are running 0x635
h1omc0: upgraded some time ago during O4 to move its DUOTONE frequencies from [960/961Hz] to [1920/1921Hz]
h1susey: upgraded Mon 01dec2025 when its timing card was replaced as part of the LIGO-DAC upgrade
Upgraded by Marc Tue 02dec2025:
h1iscex
h1iscey
h1pemmx
h1pemmy
h1seiex
h1seiey
h1susauxex
h1susauxey
h1susey
Jonathan, Jeff, Tony, Dave:
Following the upgrade of the h1iscex 18bit-DAC to a 20bit-DAC on Tuesday 02dec2025, we discovered the 20bit-DAC was not outputting any voltage.
Wed 03dec2025 Tony and I went to h1iscex to replace the suspect 20bit-DAC with the second DAC which was removed from the h1susex upgrade Tue02dec2025.
All while H1 was locked, we powered down h1iscex and its IO Chassis, pulled the chassis from the rack, took its lid off, replaced the 20bit-DAC and powered the chassis up while it was open.
I powered up the computer and auto started the models. H1 lost lock soon after the IOP model started running (possible Dolphin IPC minor break in transmission?).
We verified that the replacement 20bit-DAC was working by looking at both the DAC_chan7->ADC_chan30 DUOTONE interface card loopback and the h1calex drive of DAC_chan6.
We closed the chassis and pushed it back into place.
| Card | Serial Number |
|---|---|
| Suspect broken 20bit-DAC (removed) | 190219-10 |
| Replacement 20bit-DAC (installed) | 150311-01 (S1700286) |
Note that both of these 20bit-DACs came out of h1susex during its upgrade to LIGO-DACs Tue02dec2025. I'm not sure which one was driving the PI signals, and initially I chose the 2019 card for h1iscex over the older 2015 card.
Also note that the 2015 card has a LIGO S-number tag, the first I've seen on a GenStd IO card.
TITLE: 12/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
H1 has been locked almost 15hrs....but H1 is going DOWN in prep for the HAM7 vent and will be set to IDLE.
There will be more CDS work (we have heard Beckhoff), but no "proof of life" check of H1 will be done after that work.
Now the focus is on prep to vent with Kim & Nellie commencing shortly with thorough technical cleaning of HAM chambers. Other possible items on the list for today:
FYI: Elenna ran a calibration last night (she noted it in Mattermost; files were generated at 8:18pm PDT).
It seems like it would be useful to have this recipe written in one place. After the gate valves are opened and closed, we often have a hard time relocking because of alignment.
Here's what we think we should do to recover after gate valves open.
Thank you Sheila for writing up this process in very helpful detail.
I wanted to add a few more notes based on many iterations of this process.
#2 and 3 add on: often, if the vent was invasive enough, the other DRMI loops besides SRC ASC may also give some trouble. There are options in the use_DRMI_ASC and use_PRMI_ASC to also disengage PRC1 (PRMI and DRMI), INP1 (DRMI), PRC2 (DRMI) and MICH (PRMI and DRMI). I recommend disengaging PRC1, INP1 and PRC2 if the DRMI ASC is not working well and PRC1 for the PRMI ASC. This will then require moving PRM (PRC1), PR2 (PRC2) and INP1 (IM4) by hand. We rarely disengage MICH. Generally MICH needs to stay on during CARM offset reduction so the beamsplitter can follow DHARD. Also note that SRC2 controls both SRM and SR2, so SR2 may need to be moved as well as SRM.
#4 add on: the DHARD error signal may be very bad if the arms are too far away from resonance, and the error signals can flip sign as the CARM offset is reduced. To make DHARD engagement easier, it might be helpful to step the CARM offset by hand while moving DHARD manually, even taking smaller steps than the guardian code does. Then, DHARD can be engaged. Note: if you find yourself flipping the sign of the DHARD loops to engage them, this is a problem, and you will likely have a lockloss during CARM offset reduction if the DHARD loops are closed with the wrong sign!
#6 and 7 add on: if the alignment is poor enough, it might be necessary to walk the IFO alignment further before engaging the loops in ENGAGE_ASC_FOR_FULL_IFO by running the guardian code by hand in a terminal. This is the process I follow:
I like the recommendation of noting the green camera references throughout the process. If you lose lock before you get a chance to reset the references, you will have to repeat the whole process (alignment does not offload after DRMI!). Jenne's metaphor is that the green references are like video game save points, so noting them throughout the process can be helpful: if a lockloss happens, you can return to where you were before. The final green reference setting should be done after all alignment loops are closed (including soft loops!).
Another offset that is useful to reset is the POP A offset. In DRMI, PRC1 runs on the POP A QPD and the alignment is offloaded. If the POP A offsets are set to the PRM's final location in ENGAGE_ASC_FOR_FULL_IFO, the ADS convergence for DOF3 will be much faster, as DRMI will put the PRM where it already needs to be. The convergence of this loop is so slow that it sets how long this state takes, which can be many minutes if the PRM is far from where it needs to be.
Created a wiki page for Recovering the IFO After Gate Valve Closures and added a link to that wiki in our ops Troubleshooting the IFO wiki. This way we can make changes next time we exercise this procedure and hopefully it leaves more bread crumbs to find it again.
TITLE: 12/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering & LOCKED
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
SQZ_OPO_LR was giving us this error: "CLF frequency out of range, check CLF CMB". I do not recall how this was resolved by Daniel while he was checking out the CMB. Once that was resolved we got the classic: "pump fiber rej power in ham7 high, nominal 35e-3, align fiber pol on sqzt0." issue. The Quarter & Half wave plates on SQZT0 were touched up to minimize H1:SQZ-SHG-FIBR_REJECTED_DC_POWER_MON. OPO was locked. Great!
Then the FC wasn't locking:
So FC2 was moved from its nominal location, but only after I accidentally moved ZM1 and reverted it.
Fumbled with the FC2 sliders for a while, then used the clear history button.
Fumbled around some more with the FC2 sliders.
Locked the FC with FC2 P @ 246.1, Y @ 51.1.
And now the SQZ has been SQuoZe.
Forecast for tonight's wind looks good for a stable lock.
Once CDS reboots were finished, I took all suspensions to either ALIGNED or MISALIGNED so that they're damped overnight.