Remotely turned on the TCS X and Y chillers.
I restarted the ETMX and ETMY HWS code, following TCS wiki. ITMs were already running. ITM lasers turned back on by themselves.
I had some difficulty connecting to the corner HEPI pump controller after the power outage. The workstations have gotten too far ahead of the old Athena controller, so when I attempted to ssh to the pump station, I got:
Unable to negotiate with XX.XXX.X.XX port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
I had to add some options to ssh:
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-dss controls@h1hpipumpctrll0
After deleting the old host key again, I was able to connect.
We had a site wide power outage around 12:11 local time. Recovery of CDS has started.
I've turned the alarm system off; it was producing too much noise.
We are recovering front end models.
Jonathan, Erik, Richard, Fil, Patrick, EJ, TJ, RyanS, Dave:
CDS is recovered. CDSSDF showing WAPs are on, FMCSSTAT showing LVEA temp change.
Alarms are back on (currently no active alarms). I had to restart locklossalert.service; it had gotten stuck.
The BPA dispatcher on duty said they had a breaker at the Benton substation open & reclose. At that time, they did not have a known cause for the breaker operation. Hanford Fire called to report a fire off Route 4 by Energy Northwest, near the 115 kV BPA power lines. After discussions with the BPA dispatcher, the bump on the line (the breaker operation) may have been caused by a fault on the BPA 115 kV line, which also caused the fire. BPA was dispatching a line crew to investigate.
J. Kissel

Ops corps -- please run the following DARM FOM template on the top, front-and-center, wall display from 08:00 to 12:00 PDT on Saturday (9/13):
/opt/rtcds/userapps/release/cds/h1/scripts/fom_startup/nuc30/H1_DARM_FOM_wO1.xml

Details:
For this upcoming Saturday's tours only, we'd like to celebrate how much the detector sensitivity has improved in the past 10 years by displaying the early O1 sensitivity (from G1501223), rather than showing the L1 trace (whose live sensitivity will still be captured by the display of BNS range). As such, I've augmented the standard template from /opt/rtcds/userapps/release/isc/h1/scripts/H1_DARM_FOM.xml with the following changes.

Functional:
** Changed the pwelch FFT chunk "% overlap" parameter to 50%, rather than 75%, which is appropriate for the Hanning window specified to be used.
** Changed the number of rolling exponential averages from 3 to 10.
- Updated the H1 reference from April 11 2024 (representing O4B sensitivity) to yesterday, September 09 2025, so as to better represent the O4C sensitivity.
- Imported the G1501223 H1 "start of O1" representative sensitivity, replacing the L1 reference.
** These two changes will have the effect of "slowing down" the live traces, and/or making "glitches contaminate the sensitivity for longer," but this comes with the benefit of a better (less noisy) estimate of the current stationary noise. That makes it a more apples-to-apples comparison with the O1 sensitivity.

Aesthetic:
- Added "Displacement" to the "aLIGO DARM" title.
- Synchronized the format of the legend entries for the references.
- Updated the O1 reference trace color so it can be seen a bit better.
- Re-ordered the last three traces from "PCALY, GWINC, PCALX" to "PCALX, PCALY, GWINC" (while preserving the colors and symbols).
- Moved the legend to be centered in the window, but still showing violin modes.

The attached screenshot is of this template running live yesterday afternoon. The hope is that this version of the central display will still fully function as a monitor of how the detector is doing for observational readiness, but also work as a good display piece for the tours. This can be reverted to the standard template as soon as the tours are done.
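As a side note, here is a minimal Python sketch of the two functional parameter changes above: 50% overlap with a Hann-family window, and a rolling exponential average of 10 successive PSD estimates. This is only an illustration of the concept, not the pwelch code behind the DTT template; the sample rate, the synthetic data, and the exp_average helper are placeholders.

```python
import numpy as np
from scipy.signal import welch

fs = 16384                         # placeholder sample rate [Hz]
x = np.random.randn(fs * 60)       # placeholder data standing in for a DARM time series

# 50% overlap recovers most of the data a Hann-family window tapers away; going to
# 75% mostly adds strongly correlated (so not very informative) extra averages.
nperseg = fs * 10                  # 10 s chunks, for illustration only
f, pxx = welch(x, fs=fs, window='hann', nperseg=nperseg, noverlap=nperseg // 2)

def exp_average(prev_psd, new_psd, n_avg=10):
    """Rolling exponential average of PSD estimates (n_avg=10 as in the template)."""
    if prev_psd is None:
        return new_psd
    alpha = 1.0 / n_avg
    return (1.0 - alpha) * prev_psd + alpha * new_psd

running = None
for _ in range(5):                 # pretend five new PSD estimates arrive over time
    _, pxx_new = welch(np.random.randn(fs * 60), fs=fs, window='hann',
                       nperseg=nperseg, noverlap=nperseg // 2)
    running = exp_average(running, pxx_new)
```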
Satellite amplifiers for MC1 and MC3 were swapped during maintenance (15:00-19:00 UTC) on July 22, 2025 (85922), and MC2 satamp had been swapped before that (85770). I wanted to see if we could determine any improvement in noise in the LSC and ASC input mode cleaner channels.
LSC Channel:
- H1:IMC-L_OUT_DQ
ASC Channels:
- H1:IMC-MC2_TRANS_PIT_OUT_DQ
- H1:IMC-MC2_TRANS_YAW_OUT_DQ
- H1:IMC-WFS_A_DC_PIT_OUT_DQ
- H1:IMC-WFS_A_DC_YAW_OUT_DQ
- H1:IMC-WFS_B_DC_PIT_OUT_DQ
- H1:IMC-WFS_B_DC_YAW_OUT_DQ
- H1:IMC-WFS_A_I_PIT_OUT_DQ
- H1:IMC-WFS_A_Q_PIT_OUT_DQ
- H1:IMC-WFS_A_I_YAW_OUT_DQ
- H1:IMC-WFS_A_Q_YAW_OUT_DQ
- H1:IMC-WFS_B_I_PIT_OUT_DQ
- H1:IMC-WFS_B_Q_PIT_OUT_DQ
- H1:IMC-WFS_B_I_YAW_OUT_DQ
- H1:IMC-WFS_B_Q_YAW_OUT_DQ
I looked at many times before and after these swaps, searching for the lowest-noise stretch in each channel as the best representative of the noise level we can achieve, and have settled on a set of before and after times that differ depending on the channel.
BEFORE:
2025-06-18 09:10 UTC (DARK RED)
2025-07-07 09:37 UTC (DARK BLUE)
AFTER:
2025-08-01 07:05 UTC (GREEN)
2025-08-01 10:15 UTC (PINK)
2025-08-02 08:38 UTC (SEA GREEN)
2025-09-05 05:23 UTC (ORANGE)
These measurements were taken with 47 averages and a 0.01 Hz BW.
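For reference, a minimal gwpy sketch of how one of these before/after ASD comparisons could be reproduced. This is an illustration only, not the template actually used for the attached plots; the exact stretch lengths and the data-access method are assumptions, and gwpy's default (median) averaging may differ from the averaging used in the original measurements. Note that a 40-minute stretch at 0.01 Hz BW (100 s FFTs) with 50% overlap gives exactly 47 averages.

```python
from gwpy.timeseries import TimeSeries

# 0.01 Hz BW -> 100 s FFT length; a 40 min (2400 s) stretch at 50% overlap
# yields (2400 - 100)/50 + 1 = 47 averages.
fftlen = 100  # seconds

before = TimeSeries.get('H1:IMC-L_OUT_DQ', '2025-07-07 09:37', '2025-07-07 10:17')
after = TimeSeries.get('H1:IMC-L_OUT_DQ', '2025-08-01 07:05', '2025-08-01 07:45')

asd_before = before.asd(fftlength=fftlen, overlap=fftlen / 2, window='hann')
asd_after = after.asd(fftlength=fftlen, overlap=fftlen / 2, window='hann')

plot = asd_before.plot(label='before swap')
ax = plot.gca()
ax.plot(asd_after, label='after swap')
ax.set_xlim(0.1, 100)
ax.legend()
plot.show()
```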
Results:
Channel | Comments |
---|---|
H1:IMC-L_OUT_DQ | Here the best 'after swap' time that I was able to find is noticeably worse than the best before time, so either the swap made the LSC noise worse, or we just haven't been able to reach that level of lowest noise again since the swap. |
H1:IMC-MC2_TRANS_PIT_OUT_DQ | Small noise drop between 0.8-3 Hz |
H1:IMC-MC2_TRANS_YAW_OUT_DQ | Slight noise drop between 0.6-3 Hz? |
H1:IMC-WFS_A_DC_PIT_OUT_DQ | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
H1:IMC-WFS_A_DC_YAW_OUT_DQ | Noise between 6-10 Hz has dropped slightly |
H1:IMC-WFS_B_DC_PIT_OUT_DQ | No difference |
H1:IMC-WFS_B_DC_YAW_OUT_DQ | No difference |
H1:IMC-WFS_A_I_PIT_OUT_DQ | Looks like the noise above 1 Hz has dropped slightly |
H1:IMC-WFS_A_Q_PIT_OUT | No difference |
H1:IMC-WFS_A_I_YAW_OUT | No difference |
H1:IMC-WFS_A_Q_YAW_OUT | No difference |
H1:IMC-WFS_B_I_PIT_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz. Showing the sea green AFTER trace to verify that the pink AFTER bump seen at 10 Hz is not an issue caused by the satamp swap. |
H1:IMC-WFS_B_Q_PIT_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz. Showing the sea green AFTER trace to verify that the pink AFTER bump seen at 10 Hz is not an issue caused by the satamp swap. |
H1:IMC-WFS_B_I_YAW_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
H1:IMC-WFS_B_Q_YAW_OUT | Looks about the same as before, maybe a bit of improvement between 7-9.5 Hz |
The SQZ filter cavity version of this: 86624
The plots from LHO:86253 show the before vs. after M1 OSEM noise performance for MC1, MC2, and MC3. Comparing the page 3 summaries of each of these .pdfs shows that:
- We only expect change between ~0.2 and ~8 Hz. So, any improvement seen in the 7 to 9.5 Hz region is very likely *not* related to the sat amp whitening change.
- For MC2, the LF and RT OSEM degrees of freedom -- both the before and after traces -- show that L and Y are well above the expected noise floor for the entire region below 10 Hz.
- The change in all DOFs is a broadband change in noise performance; not only is there nothing *sensed* by these OSEMs above ~5 Hz, there's also no change there, and thus any resonance-like features that change in the IMC signals will not be related to the sat amp whitening filter change.
We should confirm that the data used for the MC2 "before vs. after" were both taken with the IMC length control OFF (i.e. the IMC was OFFLINE).

Back to the IMC metrics:
(1) Only IMC-L and the MC TRANS signals appear to be dominated by residual seismic / suspension noise below 10 Hz. Hence
(a) I believe the improvement shown in the MC TRANS QPD, though I would have expected more.
(b) I believe that we're seeing something that is limited by seismic / suspension noise in IMC-L, but the fact that it got worse doesn't agree with the top-mass OSEMs' stated improvement from LHO:86253, so I suspect the story is more complicated.
(2) It's clear that all of the DC "QPD" signals from the IMC WFS are not reading out anything seismically related below 10 Hz. Maybe this ASD shape is a function of the beam position? Maybe the real signal is drowned out by ADC noise, i.e. the signal is only well-whitened above 10 Hz? Whatever it is, it's drowning out the improvements that we might have expected to see -- we don't see *any* of the typical residual seismic / suspension noise features.
(3) The WFS B RF signals appear to also be swamped by this same color of noise. The WFS A RF signals show a similar color, but lower in magnitude, so you see some residual seismic / suspension peaks popping up above it. But it's not low enough to show where we might expect the improvements -- the troughs between resonances. So given the quality of this metric, I'm not mad that we didn't see an improvement.

So -- again -- I really only think that metric (1), i.e. IMC-L and MC2 TRANS, is showing noise that is/was dominated by residual seismic / suspension noise below 10 Hz; that's why we see no change in the IMC WFS signals, not because we didn't make an improvement in what they're *supposed* to be measuring. We'll have to figure out what happened with IMC-L between June and Sep 2025, but the plethora of new high-Q (both "mechanical" and "digital") features between 3 and 10 Hz suggests that it's got nothing to do with the sat amp whitening improvement. The slope of increased broadband noise between 1 and 3 Hz doesn't match what we would expect, nor does it match the demonstrated improvement in the local M1 OSEM sensors. We should check the DC signal levels, or change the window type, to be sure this isn't spectral leakage.
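On the spectral-leakage point above: a quick way to test it is to re-estimate the spectrum with a different (higher dynamic range) window and see whether the low-frequency level changes. A minimal sketch, using purely synthetic placeholder data rather than the real channels:

```python
import numpy as np
from scipy.signal import welch

fs = 256
x = np.random.randn(fs * 600) + 50.0   # placeholder: broadband noise plus a large DC level

nperseg = fs * 100
for win in ('hann', 'blackmanharris'):
    f, pxx = welch(x, fs=fs, window=win, nperseg=nperseg, noverlap=nperseg // 2,
                   detrend=False)
    # If the 1-3 Hz level depends strongly on the window choice, leakage from the
    # DC/low-frequency content (rather than real broadband noise) is the likely culprit.
    band = (f > 1) & (f < 3)
    print(win, np.sqrt(pxx[band]).mean())
```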
Our range hasn't been the most stable during this lock. There's one drop in particular that goes down about 8 Mpc to 145 Mpc for 20 min or so (attachment 1). Running our range comparison script, and changing the span appropriately (thanks Oli!), it looks to be a very broadband drop from ~40-200 Hz. The SQZ BLRMS see something around that time (attachment 2).
Sheila had me run another template for that time. The template, copied from one of Camilla's, is now saved in the sqz userapps - attachment 3. The green trace is from this lock.
Wed Sep 10 10:07:21 2025 INFO: Fill completed in 7min 17secs
Gerardo confirmed a good fill curbside.
TITLE: 09/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY: Locked for 5.5 hours, automated relock. Calm environment. Some dust alarms at the LAB1 station during the night, which is odd.
Plan for today is to observe.
The lock loss overnight didn't leave many clues as to its cause (1441523038 [Sept 10 0703 UTC]). ETMX L2 out and DARM IN1 seem to be the first to react to something, with DARM possibly moving first.
TITLE: 09/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
SHIFT SUMMARY:
The only noteworthy thing was a possible superevent candidate: S250910b @ 00:07:52 UTC.
I didn't really hear any thunder either.
LOG:
No log
TITLE: 09/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1 has been locked for 2.75 hours.
All subsystems seem to be running smoothly.
The weather report suggests that there may be thunderstorms soon.
TITLE: 09/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: It was a pretty standard maintenance day, with a few aftershocks from the Oregon earthquake slowing down the reacquisition. Observing for 2.75 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
14:44 | SYS | Betsy | Opt Lab | n | In and out of lab for stuffs, quick into the LVEA for more parts | 16:35 |
14:47 | CEBEX | Bubba, contractors | MY | n | Drilling and surveying | 22:41 |
15:00 | SYS | Randy | OSB | n | Forklifting to mech room | 15:22 |
15:00 | FAC | Kim | EX | n | Tech clean | 16:02 |
15:00 | FAC | Nelly | EY | n | Tech clean | 15:49 |
15:08 | VAC | Jordan, Gerardo, Anna | LVEA | n | NEG pump regen, GV checks, valve in NEG | 17:58 |
15:10 | FAC | Chris | site | n | Forklifting from staging to woodshop area | 17:00 |
15:15 | CDS | Fil | LVEA | n | Cable trays near HAM5 with Ken & Drilling with Randy | 18:54 |
15:15 | FAC | Ken | LVEA | n | LVEA light replacement | 18:49 |
15:22 | FAC | Christina | VPW | n | Fork lifting | 15:42 |
15:22 | SYS | Randy | LVEA | n | W & E bay craning | 17:22 |
15:30 | SYS | Mitchell | LVEA | n | Grabbing parts | 15:50 |
15:43 | OPS | Richard | LVEA | n | Checking on things | 16:03 |
15:47 | VAC | Janos | MX, MY | n | Pump install work continuing | 18:55 |
15:49 | FAC | Nelly, Kim | FCES | N | Tech clean | 16:55 |
15:58 | GRD | TJ | CR | n | h1guardian1 reboot | 16:07 |
16:38 | PEM | Sam, Jonathan (student) | LVEA | n | Tour/ looking at PEM things | 17:34 |
16:50 | ISC | Daniel | LVEA | n | OMC whitening chassis removal | 17:16 |
16:51 | FAC | Mitchell | Mid X | N | moving pelican cases | 17:38 |
16:51 | TSC | TJ | Mech rm -> LVEA | N | Filling chiller lines | 17:32 |
16:55 | safety | McCarthy | LVEA | N | Checking on the LVEA safety | 17:42 |
16:56 | FAC | Kim & Nelly | High bay & LVEA | N | Technical cleaning garbing supplies | 18:34 |
17:07 | SUS | Ryan C | Crtl Rm | N | ETMX OPLEV SUS charge measurements | 18:41 |
17:27 | OPT | Camilla | LVEA, OpticsLab | n | Looking for parts | 18:04 |
17:39 | PCAL | Tony, Mitchell | PCAL Lab | - | Looking for parts | 17:53 |
17:52 | SYS | Randy | EY | n | Check in receiving | 18:17 |
17:54 | CDS | Marc | LVEA | n | Check in with Fil | 18:04 |
18:18 | SYS | Randy | LVEA | n | Take some measurements | 18:34 |
18:32 | - | Richard, student | OSB roof | n | Roof tour | 18:47 |
18:42 | VAC | Jordan, Anna, Gerardo | LVEA | n | Shut down NEG pump | 19:14 |
18:51 | - | Oli | LVEA | n | Sweep | 18:58 |
19:09 | FAC | Tyler | MY | n | Checking with crew down there | 21:11 |
19:51 | - | Fil, Betsy | LVEA | n | Measure something | 20:11 |
21:47 | SPI | Jeff | OpticsLab | n | Figuring out what is in the bag | 21:59 |
I processed one hour of no-squeezing data taken in November 2024 (81468) for the purpose of running a correlated noise budget. This data was taken shortly after noise budget injections were run (80596), so I was able to use the measurement of the jitter noise to perform jitter subtraction from 100-1000 Hz, similar to my work in 85899.
I followed the same procedure as I documented in 85899: I plotted the whitened time series and saw two glitches, which I excised from the data by removing the individual segments with the glitches. I then mean-averaged the remaining segments. As a part of the correlated noise data collection in Nov 2024, we took a full calibration measurement, which I was able to use to generate a model to calibrate the DCPD signals. I used the IMC WFS yaw signal as a witness to subtract jitter from 100-1000 Hz. I then calculated the full correlated noise estimate also as described in 85899 (in that log I called it the "full classical noise estimate", which I now realize is confusing).
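For orientation, here is a minimal sketch of the cross-spectrum style correlated-noise estimate and the witness-based jitter subtraction described above, assuming the two calibrated DCPD streams and the IMC WFS witness are already available as arrays. It illustrates the general technique only; the actual calibration, segment excision, and averaging from 85899 are not reproduced here, and all data below are placeholders.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 16384
# Placeholders standing in for the calibrated DCPD A/B streams and the jitter witness.
dcpd_a = np.random.randn(fs * 600)
dcpd_b = np.random.randn(fs * 600)
witness = np.random.randn(fs * 600)

nperseg = fs * 10

# Correlated-noise estimate: the cross spectral density of the two DCPDs keeps
# noise common to both and averages away the uncorrelated (quantum shot) noise.
f, p_ab = csd(dcpd_a, dcpd_b, fs=fs, window='hann', nperseg=nperseg, noverlap=nperseg // 2)

# Witness-based (jitter) subtraction over 100-1000 Hz: remove the part of the
# cross spectrum that is coherent with the witness channel.
_, p_ww = welch(witness, fs=fs, window='hann', nperseg=nperseg, noverlap=nperseg // 2)
_, p_aw = csd(dcpd_a, witness, fs=fs, window='hann', nperseg=nperseg, noverlap=nperseg // 2)
_, p_bw = csd(dcpd_b, witness, fs=fs, window='hann', nperseg=nperseg, noverlap=nperseg // 2)

band = (f >= 100) & (f <= 1000)
p_ab_sub = p_ab.copy()
p_ab_sub[band] -= (p_aw * np.conj(p_bw) / p_ww)[band]

correlated_asd = np.sqrt(np.abs(p_ab_sub))   # ASD of the correlated-noise estimate
```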
The comparison of the Nov 2024 time (O4b) and June 2025 time is shown here.
The noise below 100 Hz is lower in June 2025; however, the noise above 100 Hz is higher in June 2025. It also appears that the slope of the noise above 100 Hz has changed slightly. The high frequency noise is also different, possibly because the frequency noise coupling to DARM has changed.
I also plotted the ratio of the noise from 40-300 Hz, showing that the noise below 100 Hz is reduced by up to 10% in amplitude, and the noise above 100 Hz is increased by up to 10% in amplitude.
I added the GWINC thermal noise trace to the plot above, and took the ratio of the correlated noise to that total thermal noise trace to highlight the change in noise. The ratio shows that the overall slope and amplitude of the excess noise, compared to the full thermal noise trace, has changed.
And going back even further, the correlated noise budget was run in O4a in Dec 2023, and we used it in the O4 detector paper. We only had 900 seconds of data, so the results are noisier. Here the three traces are compared directly, along with their ratios to the GWINC thermal noise trace. I did not do a jitter subtraction on the Dec 2023 data. The nearest valid calibration report is from Oct 27, which is what I used to calibrate the data.
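As a small aside on the ratio plots: since the GWINC thermal-noise trace and the measured correlated-noise ASD generally live on different frequency vectors, one has to interpolate onto a common grid before dividing. A trivial numpy sketch with placeholder arrays (not the real traces):

```python
import numpy as np

# Placeholders for the measured correlated-noise ASD and the GWINC thermal-noise trace.
f_meas = np.linspace(10, 500, 2000)
asd_meas = 1e-20 / np.sqrt(f_meas)
f_gwinc = np.logspace(1, 3, 300)
asd_gwinc = 8e-21 / np.sqrt(f_gwinc)

# Interpolate GWINC onto the measurement frequencies, then take the ratio over 40-300 Hz.
gwinc_on_meas = np.interp(f_meas, f_gwinc, asd_gwinc)
band = (f_meas >= 40) & (f_meas <= 300)
ratio = asd_meas[band] / gwinc_on_meas[band]
```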
Maintenance recovery was fairly straightforward, but there were a few more aftershocks from Oregon that caused us to lose lock while acquiring. We eventually made it up at 2047 UTC, with violins very elevated. I'll work on damping those a bit faster than the automated settings would.
After the watchdogs were reset, a lightning strike caused a power glitch. The GC UPS reported going onto and off of battery power.
h1susauxh2 and h1susauxex went offline, turned out they were rebooting themselves and came back after a few minutes.
PSL is in a bad way; Ibrahim is calling for support.
The lights in my house flickered at the time I saw the sus-aux machines go down. We have been having a storm roll over us for the past hour, moving from Oregon northwards.
Ibrahim is on his way to the CER MEZ to reset the PSL REFCAV high-voltage supply.
Back on.
Here are details of the power glitches due to the electrical storm Monday night. We had a large glitch at 21:51:26 followed 5 seconds later by a smaller glitch.
Lockloss due to an EQ that hit before EQ mode could activate. Seems to be either very local or very large - not on USGS yet.
The end stations were easier with their Beckhoff controllers. EX wouldn't start pumping at first, but I think the VFD just needed to be powered off more completely. I turned it off for 5 secs when I first arrived; that apparently wasn't enough to reset the VFD. Fil turned it off for maybe 20 secs, and fans came on when he powered it back up, which I didn't hear when I power-cycled it earlier. EY came right back up.