TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.30 μm/s
QUICK SUMMARY: Locked for 6 hours but it looks like the squeezer has an issue. Working on it now. Dust alarm in the diode room, but other than that, no alarms overnight.
Since I saw H1 was locked, I started the full magnetic injection suite (49 injections total over the 7 coils across the site) at 05:40 UTC using the PEM_MAG_INJ Guardian. This dropped H1 out of observing, but it should return to observing once the injections finish. The suite will finish in roughly 2 hours, or earlier if the IFO loses lock. I'll check on things later tonight in case this poses any issues.
Injection suite finished at 08:00 UTC and H1 returned to observing.
TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: PEM measurements were running smoothly until a lockloss @ 01:28 UTC. Since H1 is having a bit of trouble relocking (locklosses during LASER_NOISE_SUPPRESSION and DRMI) for reasons I'm unsure of, I've set things to run overnight the same as last night, with H1_MANAGER attempting to keep H1 up but not calling for help. I've also set Sheila's SQZ angle stepper script to start at 09:00 UTC (01:00 PST), running on cdsws39, with the hope that H1 will be locked by then.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | End Time |
|---|---|---|---|---|---|---|
| 01:13 | CAL | Tony | PCal Lab | N | Changing spheres | 01:21 |
| 01:40 | PEM | Robert, Sam, Genevieve | EX -> LVEA | N | Moving shaker | 02:50 |
| 02:12 | VAC | Gerardo | LVEA | N | Checking AIP controller | 03:02 |
The IFO locked and was in NLN, but not observing, because I'd left a pico controller enabled and it was causing an SDF diff. I disabled it and now we are in observing.
TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY: H1 has been locked for about 30 minutes and PEM measurements are ongoing.
TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently at low noise, with PEM characterization running all day and one SQZ measurement started. There was one lockloss during the shift, caused by us trying to turn the corner station sensor correction off while in lock to allow Robert and a UW visitor to look at the ITMY seismometer; that didn't work. Relocking took an initial alignment and then waiting at DRMI while we still had SC off, but it was all automatic.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | End Time |
|---|---|---|---|---|---|---|
| 16:25 | PEM | Robert, Sam, Genevieve | CR/LVEA | n | PEM measurements, in and out of the LVEA all day | 22:05 |
| 17:07 | FAC | Kim, Nellie | OSB | n | Opening receiving roll up door | 17:22 |
| 17:48 | PEM | Robert, Sam | EX | n | Setup measurement | 19:33 |
| 19:34 | SEI | Richard, UW | desert | n | Checking on the vault seismometers | 20:25 |
| 19:42 | SYS | Betsy | Opt Lab | n | Check on lab | 19:47 |
| 20:22 | IO | Corey | Prep lab | n | JAC table enclosure | 22:58 |
| 20:57 | SQZ | Sheila | CR | n | SQZ meas. | 22:07 |
| 21:40 | VAC | Travis | EY | n | Check in mech room | 22:04 |
| 21:56 | SQZ | Sheila, Kar Meng | Opt Lab | n | OPO unboxing | 23:00 |
| 22:04 | PEM | Sam, Genevieve, Jennie | LVEA | n | Move speaker | 22:40 |
| 22:06 | PEM | Robert, UW Paul | LVEA | n | Look at seismometer | 23:21 |
| 22:13 | PEM | Richard | LVEA | n | Join Robert and Paul | 22:39 |
| 23:22 | PEM | Robert, Sam, Genevieve | EX | n | Set up shaker | 00:11 |
Jonathan and Yuri
This is a continuation of WP 12886. This is the last physical step in the control room/MSR for the reconfiguration. We ran a fiber from the warehouse patch panel to h1daqdc0 and replaced the network card with a newer card. We will be renaming h1daqdc0, likely to h1daqkc0 (daqd kafka connector), and converting it to send the IFO data into LDAS as part of the NGDD project.
The next steps on this work are to set up the software and to install the corresponding fiber patch in the LDAS room.
Checking the safe SDF settings for the LSC and ASC models.
ASC model not-monitored channels are shown here. No outstanding SDF diffs while unlocked.
LSC model not-monitored channels are shown here. Also no outstanding SDF diffs while unlocked.
ASCIMC has no unmonitored channels. No safe diffs with the IFO unlocked.
I've run a few different huddle tests with the new TemTop dust monitors against 3 different MetOne GT521s. The first tests I ran in the Optics Lab and Diode Room showed some discrepancies between the spike times, but that could be due to a difference in sample times between the dust monitors. The PMS21 and GT521s both sample for 60 seconds then hold for 300 s, but the PMD331 does not have a built-in hold time, so it samples continuously or manually. The PMS21 usually reads an order of magnitude or two lower than the other two, which makes me wonder if it needs a scale factor, but it also occasionally sees big spikes that the others don't, which is confusing. The flow rate is listed as 0.1 CFM on the PMS21, but the GT521s and the PMD331 are also listed as having flow rates of 0.1 CFM. CFM = cubic feet per minute, which is 2.83 L/min, and that is what I read when running the flow test on all of the DMs. *Also, the times do not account for daylight savings, so each y-axis timestamp is actually an hour behind actual PST.*
Test 1 - Optics Lab:
I tested the TemTops one at a time in the Optics Lab; see the results of the PMD331 and the results of the PMS21.
Test 2 - Diode Room:
I tested the TemTops at the same time in the Diode Room; I started off with only the PMS21, then added the PMD331. For only the PMS21 we saw these counts, for only the PMD331 we saw these counts, and for both of them we saw these counts, all against a MetOne GT521s.
Test 3 - Control Room:
I tested three dust monitors at the same time in the control room (I grabbed a spare pumped GT521s from the storage racks by OSB receiving; it's our last properly working spare). I did a day of samples with holds enabled and a day of continuous sampling. When the dust monitors were sampling at slightly different intervals, we saw the peaks offset from each other, such as at 11-18 ~07:00 PST at the right of the plot. During the continuous testing we can see the peaks from everyone coming into the control room for the end-of-O4 celebration, though I'm not sure why there's some time between the peaks. Zooming in on this plot to cut out the large peaks from said celebration, we can see the PMD331 and the MetOne GT521s following each other pretty closely, but the PMS21 wasn't really reading much; there are small bumps around where the peaks from the other DMs are. Adding a scale factor of 10 to the PMS21 counts yields a better-looking plot, and playing with the scale factor until the PMS21 counts looked more in line with the other DMs, I got to a scale factor of 40 to get this plot.
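As a rough way to pin down that scale factor, something like the least-squares sketch below could be used. This is a minimal sketch with hypothetical inputs, not what was actually done; it assumes the counts have already been resampled onto a common time base with the celebration peaks cut out.

```python
import numpy as np

def best_scale_factor(pms21_counts, gt521s_counts, factors=np.arange(1, 101)):
    """Return the scale factor that brings the PMS21 counts closest
    (in a least-squares sense) to the reference GT521s counts."""
    pms21 = np.asarray(pms21_counts, dtype=float)
    ref = np.asarray(gt521s_counts, dtype=float)
    residuals = [np.sum((f * pms21 - ref) ** 2) for f in factors]
    return factors[int(np.argmin(residuals))]

# Hypothetical usage with counts loaded from exported trend data:
# scale = best_scale_factor(pms21_03um, gt521s_03um)
# print(f"Best-fit PMS21 scale factor: {scale}")
```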
During breaks in the PEM work this morning, I improved the script I used last night so that I don't have to do as many things by hand. The script is in sheila.dwyer/SQZ/automate_dataset/SQZ_ANG_stepper.py; I'll try to clean it up a bit more and add it to userapps soon.
It's running now and will take 5 minutes of no-sqz time, then do the steps needed to measure nonlinear gain (it doesn't give a result, but it records times that can be looked at later). Then it steps the angle, in 10 degree steps on the steep side of the ellipse and 30 degree steps on the slow side of the ellipse (where the SQZ angle changes slowly with demod angle).
This current version should take 1 hour to take 3 minutes of data at each point, or 1.5 hours for 4 minutes of data.
It's now running for FDS with nominal PSAMS settings; this can be aborted if PEM wants to do injections before it's over.
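For context, here is a minimal sketch of the stepping logic described above. This is not the actual SQZ_ANG_stepper.py; the dwell time, offset ranges, and the stubbed channel write are illustrative assumptions.

```python
# Minimal sketch of the angle-stepping loop (not the actual SQZ_ANG_stepper.py).
# The EPICS write is stubbed out; the real script writes the CLF demod phase.
import time

DWELL = 180          # seconds of data per point (3 min/point -> ~1 hr total)
STEEP_STEP = 10      # degrees on the steep side of the ellipse
SLOW_STEP = 30       # degrees on the slow side of the ellipse

def set_demod_angle(angle_deg):
    # Placeholder for the real channel write (e.g. via ezca/caput).
    print(f"setting CLF demod phase to {angle_deg} deg")

def step_angles(start_angle, steep_offsets, slow_offsets):
    """Step through the requested demod angles, dwelling DWELL seconds at
    each point, and return an (angle, unix time) log of the changes."""
    log = []
    for offset in list(steep_offsets) + list(slow_offsets):
        angle = (start_angle + offset) % 360
        set_demod_angle(angle)
        log.append((angle, time.time()))
        time.sleep(DWELL)                 # dwell while data is collected
    return log

# Illustrative offsets: fine steps on the steep side, coarse on the slow side.
# times = step_angles(149.7, range(0, 50, STEEP_STEP), range(60, 181, SLOW_STEP))
```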
Wed Nov 19 10:09:29 2025 INFO: Fill completed in 9min 25secs
FRS36084 Dave, TJ:
TJ discovered that some HWS SLED settings were being monitored by multiple slow-SDF systems (h1syscstcssdf and h1tcshwssdf). The former's monitor.req is computer generated, the latter's is hand built.
There were 4 duplicated channels: H1:TCS-ITM[X,Y]_HWS_SLEDSET[CURRENT,TEMPERATURE]
I removed these channels from tcs/h1/burtfiles/h1tcshwssdf_[monitor.req, safe.snap, OBSERVE.snap] and restarted the pseudo-frontend h1tcshwssdf on h1ecatmon0 at 09:00 Wed 19nov2025 PST. The number of monitored channels dropped from 64 to 60 as expected.
I did a full scan of all the monitor.req files and confirmed that there were only these 4 channels duplicated.
I'll add a check to cds_report.py to look for future duplications.
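A minimal sketch of such a duplicate-channel scan is below. This is not the actual cds_report.py code; the one-channel-per-line monitor.req format (with '#' comments) and the example path are assumptions.

```python
# Sketch of a duplicate-channel scan across monitor.req files.
from collections import defaultdict
from pathlib import Path

def find_duplicate_channels(req_files):
    """Map each channel to the monitor.req files that contain it and
    return only the channels that appear in more than one file."""
    owners = defaultdict(set)
    for req in req_files:
        for line in Path(req).read_text().splitlines():
            chan = line.strip()
            if chan and not chan.startswith("#"):
                owners[chan].add(str(req))
    return {chan: files for chan, files in owners.items() if len(files) > 1}

# Example (path is illustrative):
# dups = find_duplicate_channels(Path("/opt/rtcds").rglob("*monitor.req"))
# for chan, files in sorted(dups.items()):
#     print(chan, "->", ", ".join(sorted(files)))
```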
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY: Locked for 13 hours, but it looks like we only got into observing 45 min ago due to an SDF diff. Lights are on at MY, and the Y_BT AIP is still in error as mentioned in Ryan's log. The plan for today is to continue PEM characterization.
Because the CDS WiFis will always be on from now on, I've accepted these settings into the CDS SDF.
unamplified seed: 0.0055, amplified seed: 0.052, deamplified: 0.00073; NLG = 9.45, seems too small
no sqz: 1447564297 - 1447565492
reduced seed, adjusted temp: amplified max: 0.00547, minimum: 8.3e-5, unamplified: 2.54e-4; NLG = 21.5
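The NLG values quoted above are consistent with the ratio of amplified to unamplified seed level; a quick check of the arithmetic (assuming that simple ratio is what's being quoted):

```python
# Quick check of the NLG values quoted above, taken as amplified/unamplified.
first = 0.052 / 0.0055        # ~9.45, as quoted ("seems too small")
second = 0.00547 / 2.54e-4    # ~21.5 after reducing the seed / adjusting temp
print(f"NLG (first measurement): {first:.2f}")
print(f"NLG (reduced seed):      {second:.1f}")
```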
FIS, ran scan sqz angle kHz, CLF 6 demod phase left at 149.7 degrees. 1447566037
I started the script to scan squeezing angles with FIS; Ryan Short gave me a pointer on how to have my script change guardian states (it puts SQZ ANGLE SERVO to IDLE now). It is set up to try the current angle +/-10 degrees, run through a bunch of angles in 20 degree steps, flip the CLF sign, and run through the angles again. When finished it should request frequency dependent squeezing again, and Ryan has set things up so the IFO should go to observing when that happens.
The script ran and completed, but the IFO didn't go to observing when it finished because I forgot to turn off the pico controller after I lowered the seed. Here is the log of angle changes (degrees : GPS time); a sketch of how these times could be turned into data segments follows the list:
149.7 : 1447566755.0
159.7 : 1447566995.0
139.7 : 1447567235.0
200.0 : 1447567475.0
180.0 : 1447567715.0
160.0 : 1447567956.0
140.0 : 1447568195.0
120.0 : 1447568436.0
100.0 : 1447568676.0
80.0 : 1447568916.0
60.0 : 1447569156.0
40.0 : 1447569396.0
20.0 : 1447569636.0
0.0 : 1447569876.0
180.0 : 1447570116.0
160.0 : 1447570356.0
140.0 : 1447570596.0
120.0 : 1447570837.0
100.0 : 1447571076.0
80.0 : 1447571317.0
60.0 : 1447571557.0
40.0 : 1447571797.0
20.0 : 1447572037.0
0.0 : 1447572277.0
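As mentioned above, here is a minimal sketch (not part of the script) of how this log could be turned into per-angle data segments for later analysis, assuming each angle was held until the next entry (~240 s apart) and leaving the actual data fetching out.

```python
# Turn the (angle, GPS time) log above into (angle, start, end) segments.
angle_log = [
    (149.7, 1447566755.0),
    (159.7, 1447566995.0),
    (139.7, 1447567235.0),
    # ... remaining entries from the list above ...
    (0.0, 1447572277.0),
]

def log_to_segments(log, default_span=240.0):
    """Assume each angle was held until the next entry; the last entry
    gets the default span."""
    segments = []
    for i, (angle, start) in enumerate(log):
        end = log[i + 1][1] if i + 1 < len(log) else start + default_span
        segments.append((angle, start, end))
    return segments

for angle, start, end in log_to_segments(angle_log):
    print(f"{angle:6.1f} deg : {start:.0f} - {end:.0f}")
```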
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:
H1 had a lockloss from an unknown source early on in my shift, but after an alignment, relocking went easily. Since then, I've been tuning magnetic injection parameters to be used later this week. Also had an alarm for an annulus ion pump at the Y-arm BTM, so I phoned Travis as I was unsure how critical this was. He took a look into it and believes addressing this can wait until morning.
I've set the remote owl operator so that H1 will relock overnight if it loses lock for some reason, keeping it ready to go in the morning, but no calls for help will be made. H1 will also start observing, if it can, on the off-chance there's a potential candidate signal.
Sheila has started a script to change the SQZ angle while in frequency independent squeezing, which should take about 90 minutes and will put settings back to nominal when done, at which point H1 should go into observing.
During PRC Align, the IMC unlocked and couldn't relock due to the FSS oscillating a lot - PZT MON was showing it moving all over the place - and I couldn't even take the IMC to OFFLINE or DOWN due to the PSL ready check failing. To try and fix the oscillation issue, I turned off the autolock for the Loop Automation on the FSS screen and re-enabled it after a few seconds; we were then able to go to DOWN fine, and I was able to relock the IMC.
TJ said this has happened to him and to a couple other operators recently.
Took a look at this, see attached trends. What happened here is that the FSS autolocker got stuck between states 2 and 3 due to the oscillation. The autolocker is programmed so that, if it detects an oscillation, it jumps immediately back to State 2 to lower the common gain and ramp it back up to hopefully clear the oscillation. It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front end model. The problem here is that 0 dB is not generally low enough to clear the oscillation, so it gets stuck in this State 2/State 3 loop and has a very hard time getting out of it. This is seen in the lower left plot, H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3, and the autolocker does not lower the common gain slider, as seen in the center-left plot. If this happens, turning the autolocker off then on again is most definitely the correct course of action.
We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations. So why not use this during lock acquisition? When an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker will fight each other. This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.
When I talked with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop. On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this. For OPS: In the interim, if this happens again please do not wait for the oscillation to clear on its own. If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom. If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
Slight correction to the above. The autolocker did not get stuck between states 2 and 3, as there is no path from state 3 to state 2 in the code. What's happening is the autolocker goes into state 4, detects an oscillation, then immediately jumps back to state 2; so this is a loop from states 2 -> 3 -> 4 -> 2 due to the oscillation and the inability of the autolocker gain ramp to effectively clear it. This happens at the clock speed of the FSS FE computer, while the channel that monitors the autolocker state is only a 16 Hz channel, so the monitor channel is nowhere close to fast enough to see all of the state changes the code goes through during an oscillation.
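For illustration only, here is a toy Python model of that loop; the real autolocker is a block of C code running at the front-end clock rate, and the oscillation check here is just a stand-in.

```python
# Toy model (not the front-end C code) of the autolocker state loop.
# In the real code the ramp is a 0-to-1 scalar on the common gain, so the
# gain never goes below 0 dB, which is often not low enough to clear an
# oscillation.
def autolock_cycle(clears_at_0db, max_cycles=10):
    """Step the simplified state machine and return the visited states."""
    state, history = 2, []
    for _ in range(max_cycles):
        history.append(state)
        if state == 2:          # start ramping the gain scalar back up
            state = 3
        elif state == 3:        # ramp complete
            state = 4
        elif state == 4:        # locked; check for oscillation
            if clears_at_0db:
                break           # oscillation cleared, stays in state 4
            state = 2           # oscillation detected -> jump straight back

    return history

print(autolock_cycle(clears_at_0db=False))  # 2,3,4 repeating: the stuck loop
print(autolock_cycle(clears_at_0db=True))   # settles in state 4
```

A 16 Hz monitor channel sampling this loop will miss most of the transitions, which is why the trend appears to bounce only between states 2 and 3.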
Something happened with BRSY this morning during maintenance that caused it to ring up more than normal, and now the damping is not behaving quite as expected. For now, I have paused the ETMY sensor correction guardian with BRSY out of loop and turned off the output of the BRS so it won't be used for EQ mode, should that transition happen.
So far today, I did a bunch of "recapturing frames" in the BRS C# code, which has usually fixed this issue in the past. We also restarted the Beckhoff computer, then the PLC, C# code, and EPICS IOC. This did not recover the BRS either. Marc, Fil, and I went to EY and looked at the damping drive in the enclosure, and I think it was behaving okay. When the drive came on, the output would reach ~1.8 V, then go to 0 V when it turned off.
I've contacted UW and we will take a look at this again tomorrow.
Looked at this with Michael and Shoshana, and the BRS is damped down now. We're still not sure what is wrong, but we have a theory that one side of the capacitive damper is not actuating. Damping seems to work okay when the velocities are either low or very high, but if they are moderate the high gain damping doesn't work well enough to get the BRS below a certain threshold, and instead keeps the BRS moderately rung up. We adjusted the damping on/off thresholds so the high gain damping will turn off at higher velocities.
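For illustration, the on/off behavior being adjusted is essentially a hysteresis check like the minimal sketch below; the threshold values here are made up, not the ones we set.

```python
# Minimal sketch of high-gain damping on/off threshold (hysteresis) logic.
ON_THRESHOLD = 2000    # illustrative: turn high-gain damping on above this velocity
OFF_THRESHOLD = 800    # illustrative: raised so damping turns off at a higher velocity

def high_gain_damping_enabled(velocity, currently_on):
    """Turn on above ON_THRESHOLD; once on, stay on until velocity drops
    below OFF_THRESHOLD."""
    if currently_on:
        return velocity > OFF_THRESHOLD
    return velocity > ON_THRESHOLD
```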
I will try to do some tests with it next week to see if we can tell if one side of the damper is working better than the other. For now, we should be able to bring the BRS back in loop.
I've accepted these thresholds in SDF since it seems that this is the new normal.
Notifications from the SQZ suite of guardians:
SQZ_MANAGER - request Reset_SQZ_ASC_FDS
SQZ_OPO_LR - "pump fiber rej power in ham7 high, nominal 35e-3, align fiber pol on sqzt0."
SQZ_LO_LR - "LOW OMC_RF3 power < -22! Align SQZ-OMC_RF3, or more CLF power if aligned."
This seems like a similar situation to alog87361.