After problems with PRC1 ASC, reported here, I checked the PRC1 P error signal and saw that the REFL signal seems to have a small offset. The POP X I signal, however, appears to have no offset, so I updated the error signal and tested the sign. The ISC DRMI code has been updated with this new matrix.
Sheila, Matt, Camilla
As the interferometer stayed locked during the early part of maintenance day, we did some injections into the filter cavity length to try to measure the noise caused by backscatter. (Previous measurements: 78579 and 78262; recent range drops that we think are filter-cavity related: 86596.)
The attached screenshot from Camilla shows the excitations and their appearance in DARM. (Compare to Naoki's measurement here, and to the time with no injection but elevated noise here.) I made a simple script to make the projection based on this injection, which is available here. It shows about a factor of 3 higher noise level in DARM than Naoki's projection (78579); this is large enough that we should add it to our noise budget, but far too small to explain our low-range times last week.
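The script linked above estimates the coupling from the injection; as a minimal sketch of the same idea in python, assuming the injection-time and quiet-time time series are already loaded as numpy arrays (the variable names, sample rate, and segment length here are placeholders, not the actual script):

import numpy as np
from scipy import signal

def backscatter_projection(exc, darm_inj, witness_quiet, fs, nperseg=4096):
    # Transfer function from the FC length witness to DARM, estimated
    # from the injection: H = Pxy / Pxx
    f, Pxy = signal.csd(exc, darm_inj, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(exc, fs=fs, nperseg=nperseg)
    tf_mag = np.abs(Pxy) / Pxx
    # Apply the measured coupling to the ambient witness motion to get
    # the projected DARM ASD during the quiet stretch
    _, Pquiet = signal.welch(witness_quiet, fs=fs, nperseg=nperseg)
    return f, tf_mag * np.sqrt(Pquiet)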
M. Todd
I did a sweep of the LVEA, everything looked fine.
I unplugged an extension cord from outside the PSL racks, and noted that a forklift by the 3IFO racks was left plugged in. I left it plugged in as it was.
Robert was still in there while I was doing my sweep, and he was notified that he was the last one.
TITLE: 09/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 5mph Gusts, 2mph 3min avg
Primary useism: 0.30 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: A bit of a delayed entry, but we are in maintenance mode. We stayed locked until 1627 UTC (0927 PT) before we were knocked out by some maintenance activities.
The FSS TPD has been slowly trending down again, so I did a quick remote adjustment of beam alignment into both the PMC and FSS RefCav today.
Starting with the PMC, I turned the ISS OFF, then adjusted the beam alignment from the Control Room using our picomotor mirrors. Unfortunately, I did not get any improvement in either PMC Trans or PMC Refl. PMC Refl has been very slowly increasing over the last couple of months; it may be time to look at adjusting the amplifier pump currents. We'll monitor this over the next week or two and move forward from there as necessary. To end the PMC adjustments, I turned the ISS back ON. It was diffracting ~4.2% (jumping between 4% and 4.5%), so I decreased the ISS RefSignal to -1.99 V (from -1.98 V) to lower this a small bit. The ISS diffracted power is now hanging around 3.8% (jumping between 3.6% and 4%); at this new diffracted power, PMC Trans is ~105.7 W.
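For rough bookkeeping, this adjustment implies a local slope of diffracted power versus RefSignal; a small sketch using the numbers above (assuming linearity near this operating point, which won't hold far from it):

# Diffracted power response to the ISS RefSignal change above
v_old, v_new = -1.98, -1.99   # RefSignal [V]
p_old, p_new = 4.2, 3.8       # diffracted power [%]
slope = (p_new - p_old) / (v_new - v_old)   # ~40 %/V near this point
# e.g. a hypothetical target of 2.5% diffraction, if the slope held:
target = 2.5
print(f"RefSignal for {target}%: ~{v_new + (target - p_new)/slope:.3f} V")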
Moving on to the FSS RefCav: with the IMC unlocked, the TPD began at ~0.78 V. I was able to get the TPD up to ~0.809 V, an OK improvement but not back to where it was at the end of the last on-table alignment. With the IMC locked, the TPD is ~0.813 V.
Tue Sep 02 10:08:32 2025 INFO: Fill completed in 8min 29secs
Gerardo confirmed a good fill curbside.
No new issues found.
We tried staying locked through some of the maintenance activities and made it pretty far. I had the SEI_ENV in Light Maintenance. There were people in the LVEA at the time, as well as forklifting into OSB receiving and maybe even some craning, so I'm unsure what actually knocked us out.
Test for Thursday morning at 7.45 am, assuming we are thermalised.
conda activate labutils
python auto_darm_offset_step.py
Wait until the program has finished (~20 mins).
Turn OMC ASC back on by putting master gain slider back to 0.020.
Test for Thursday morning at 7.45 am, assuming we are thermalised. I rewrote the instructions above and took out the last part.
conda activate labutils
python auto_darm_offset_step.py
Wait until the program has finished (~15 mins).
Turn OMC ASC back on by putting master gain slider back to 0.020.
Commissioners will turn off OMC ASC and close the beam diverter once heating has finished, then do the DARM offset step and other tests before turning ASC back on and opening the beam diverter before cooling down OM2 again.
[Fil, Erik]
The hardware watchdog at EX started counting down when disconnected from either satellite amplifier.
TITLE: 09/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: One lockloss from the fire alarm, easy relock. We've been locked for just over 2 hours.
LOG: No log.
There was a small LVEA temperature excursion from AHU2 not restarting after the fire alarm. It looks mostly leveled out now.
01:30 UTC lockloss, the fire alarm went off. It was reported as a "trouble alarm"; I called Richard and then reset the alarm on the panel in the CUR.
03:03 UTC Observing
TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Observing at 153 Mpc and have been locked for over 1.5 hours. Relocking after today's lockloss went fine besides needing to touch up ALSX and ALSY a bit. Weirdly, the first two times ALSX locked and turned on its WFS, after a bit the WFS pulled the alignment away and caused ALSX to unlock. I set everything up to be able to turn the WFS off for the next attempt, but then there was no problem.
LOG:
14:30 UTC Relocking and in DRMI_LOCKED_CHECK_ASC
15:09 NOMINAL_LOW_NOISE
15:11 Observing
18:59 Superevent S250901cb
19:38 Earthquake mode activated - I moved us into earthquake mode because of some ground motion that USGS hadn't seen yet
19:46 Back to CALM
20:07 Lockloss
- ALSX and Y needed my help for locking
- Twice ALSX WFS pulled ALSX away and unlocked it
21:54 NOMINAL_LOW_NOISE
21:55 Observing
Back on July 28th, we doubled the bias on ETMX in an effort to survive more DARM Glitch locklosses (previously referred to as ETMX Glitch locklosses) (86027). Now that we've been at this new bias for a month, I have made some visuals to compare the number of locklosses from low noise before and after the change. We wanted to stay at this bias for at least a few weeks because at times throughout O4 we've had periods of a couple weeks where we barely have any locklosses caused by DARM Glitch, and we wanted to make sure we weren't just entering one of those periods.
TLDR;
In August, we spent more time locked, with longer locks and fewer locklosses. The difference between August and the other months of O4 is drastic, and the plots suggest to me that this is due to the ETMX bias change.
Not TLDR:
A bird's-eye view:
O4 All Locklosses vs DARM Glitch Locklosses
I've posted similar versions of this figure before; it gives a visual representation of the number of locklosses we've had during O4 attributed to DARM Glitch versus every other lockloss. From this plot, you can see that since we doubled the ETMX bias, we have seen fewer ETMX glitch locklosses, and fewer locklosses in general!
A more in-depth examination:
I've decided to compare the month of August to the other full months of Observing we've had in O4c: February, March, June, and July. April had only a couple days of Observing, and in May we were fully venting. The important thing to note is that in June we started Observing 5 days into the month and there was a lot of commissioning going on the rest of the month, so the June data points aren't the best comparison.
O4 Lock Length Distribution by Month
The x-axis shows the number of hours we had been locked for each NLN lockloss during the month, and the y-axis shows how many locklosses occurred after that lock length. Each month's plot has the same x and y axis limits, and on the right side of each plot is the total number of locklosses from low noise for the month as well as the average lock length.
You can see the distributions for February, March, June, and July all look pretty similar, with the majority of locklosses happening after 10 hours or less of being locked, and the average lock length for those four months is around 5.5 hours. February, March, and July also all have a similar number of locklosses, while June has a quarter fewer, partially due to commissioning at the beginning of the month and partially for unknown reasons.
August, however, is completely different. The distribution of lock lengths is a lot wider, and not just by one or two longer locks: August had 9 locks longer than any lock in the other four months. The distribution for the shorter locks is also much flatter. This results in an average lock length of 13.3 hours, more than double the other months. There were also approximately half as many locklosses during August, which is a drastic drop; even compared to June we had far fewer locklosses.
Lockloss Stats per Month for O4c
Here's a table I made with some more lock and lockloss stats. It lists the number of days in each month, the number of locklosses during that period, the number of locklosses attributed to DARM Glitches, the average lock length, and the total time spent locked. The new info here is the total time locked: in August we spent about 25 days locked, which is 6 days more than the next-highest month, July, and a big jump compared to the total locked time in February, March, and July.
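For reference, a minimal sketch of how stats like these can be tallied from a list of NLN lock segments (the example segments are placeholders; this is an illustration, not the script used for the table):

from datetime import datetime
from collections import defaultdict

# (lock start, lock end) pairs for each NLN segment; example values only
segments = [
    (datetime(2025, 8, 1, 2, 0), datetime(2025, 8, 1, 20, 30)),
    (datetime(2025, 8, 2, 1, 15), datetime(2025, 8, 2, 6, 45)),
]

stats = defaultdict(lambda: {"locklosses": 0, "hours": 0.0})
for start, end in segments:
    month = start.strftime("%Y-%m")
    stats[month]["locklosses"] += 1   # each segment ends in a lockloss
    stats[month]["hours"] += (end - start).total_seconds() / 3600

for month, s in sorted(stats.items()):
    print(f"{month}: {s['locklosses']} locklosses, "
          f"avg lock {s['hours']/s['locklosses']:.1f} h, "
          f"total {s['hours']/24:.1f} days locked")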
TITLE: 09/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Link to report here.
Summary:
Lockloss at 2025-09-01 20:07 UTC after almost 5 hours locked
21:55 UTC Observing
During the commissioning window this morning, I worked on the St0 to St1 feedforward for the HAM1 ISI. This time I borrowed some code from Huyen to try the RIFF frequency-domain fitting package from Nikhil. This required using Matlab 2023b, which seems to have a lot of computationally heavy features like code suggestions added, so it was kind of clunky to use, and I'm not sure what all of the different fitting options do, so each dof took multiple rounds of fitting to get working. I also had to add an AC-coupling high-pass to the filters after the fact because they all went to 1 at 0 Hz. Still, the results I got for HAM1 seem to work pretty well. The attached spectra are the on/off data I collected for the X, Y, and Z dofs. Refs are the feedforward-off spectra; live traces are feedforward-on. The top of each image shows the ASDs with the feedforward on and off; the bottom is the magnitude of the St0 L4C to St1 GS13 transfer function. The improvement is broad: ~10x less motion from 5 Hz up to ~50 Hz. I'm still looking at the rotational dofs, but there is less coherence there, so not as much to win.
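As an illustration of that AC-coupling fix (a scipy sketch rather than the actual RIFF/Matlab workflow; the stand-in filter and the 0.5 Hz corner are assumptions), appending a first-order high-pass with a zero at DC forces the fitted filter's response to zero at 0 Hz while leaving the 5-50 Hz band essentially untouched:

import numpy as np
from scipy import signal

# Stand-in for a fitted feedforward filter (a RIFF fit would replace this)
ff_fit = signal.ZerosPolesGain([-2*np.pi*5], [-2*np.pi*50], 10.0)

# First-order AC-coupling high-pass, corner well below the band of interest
fc = 0.5  # Hz
hp = signal.ZerosPolesGain([0.0], [-2*np.pi*fc], 1.0)

# Combine: zeros and poles concatenate, gains multiply
ac_coupled = signal.ZerosPolesGain(
    np.concatenate([ff_fit.zeros, hp.zeros]),
    np.concatenate([ff_fit.poles, hp.poles]),
    ff_fit.gain * hp.gain,
)

# Confirm the response now rolls off to zero at DC
w, mag, _ = signal.bode(ac_coupled, w=2*np.pi*np.logspace(-3, 2, 200))
print(f"gain at {w[0]/(2*np.pi):.0e} Hz: {10**(mag[0]/20):.2e}")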
Elenna has said this seems to have improved CHARD ASC; maybe she has some plots to add.
There is about an order of magnitude improvement in the CHARD P error signal between 10 and 20 Hz as a result of these improvements, comparing the NLN spectra from three days ago versus today. Fewer noisy peaks are also present in INP1 P. I included the CHARD P coherence with the GS13s, focusing on the three DoFs with the most coherence: RX, RZ, and Z. The improvements Jim made greatly reduced that coherence. To reach the CHARD P shot noise floor at 10 Hz and above, there is still some coherence of CHARD P with GS13 Z to address that is likely contributing noise. However, for the IFO, this is sufficient noise reduction to ensure that CHARD P is not directly limiting DARM above 10 Hz. I also compare the CHARD P coherence with the OMC DCPD sum from a few days ago to today; see plot.
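A minimal sketch of this kind of coherence check, assuming the GS13 witness and the CHARD P error signal are in hand as arrays at a common sample rate (the rates, band edges, and synthetic data below are placeholders):

import numpy as np
from scipy import signal

def band_coherence(witness, chard_p, fs, fmin=10.0, fmax=20.0, nperseg=2048):
    # Magnitude-squared coherence between the witness and CHARD P in a band
    f, coh = signal.coherence(witness, chard_p, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)
    return f[band], coh[band]

# Demo with synthetic data sharing a common component
fs = 512
rng = np.random.default_rng(0)
common = rng.standard_normal(fs * 60)
witness = common + 0.5 * rng.standard_normal(fs * 60)
chard_p = common + 0.5 * rng.standard_normal(fs * 60)
f, coh = band_coherence(witness, chard_p, fs)
print(f"mean coherence {f[0]:.1f}-{f[-1]:.1f} Hz: {coh.mean():.2f}")
# A coherence-based upper bound on the witness contribution to CHARD P
# would be sqrt(coh) times the CHARD P ASD.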
In terms of how this compares with our passive-stack + L4C feedforward performance, I found some old templates where I compared upgrades to our HAM1 feedforward. I compare our ISI performance now with the passive stack with no L4C feedforward to ASC, and the passive stack with the best-performing feedforward we achieved: the results. It's actually a pretty impressive difference! (Not related to the ISI, there seems to be a change in the shot noise floor; it looks like the power on the REFL WFS may have changed from the vent.)
The coupling of CHARD P to DARM appears to be largely unchanged, so this generally means we are injecting about 10x less noise from CHARD into DARM from 10-30 Hz.
Using the calibration factors I report in this alog, here is the CHARD plot roughly calibrated into radians.