We're having locking trouble due to microseism and earthquakes (EQs).
After failing to get past DRMI and experiencing ~10 locklosses (LLs), we finally did it and got all the way to powering up. Then a 5.6 EQ hit and we lost lock at TRANSITION_ETMX, with clear seismic signs in the few seconds leading up to the lockloss (similar to the last LL). Lock reacquisition has been as rocky as before the EQ. Meanwhile, microseism has continued to increase, with traces now passing the 1 micron threshold, making lock reacquisition harder yet.
We've just gotten back to PRMI post-EQ, so hoping this will be a successful reacquisition.
Sheila, Matt, Daniel, Camilla.
We went to the SQZ racks to check the loops; all of them (LO, OPO, CLF) looked stable, though the OPO looked peaky. Once we opened the beam diverter and were squeezing, there was a large oscillation visible on ndscopes, which an SR785 measurement showed to be at 14 kHz.
Issues seen: we think it could be the FSS, the PMC, or the laser going multimode. We can see EOMRMSMON get worse when this noise appears. We will continue to investigate tomorrow.
Tagging OpsInfo: if you see this oscillation (i.e. SQZ is injected but DARM looks terrible, as if there were NO SQZ), try taking SQZ out and back in again; if this doesn't help, go to observing with NO SQZ tonight, using the instructions in Ryan's wiki.
Due to so much SQZ strangeness over the weekend, Sheila set use_sqz_ang_servo to False in sqzparams.py, and I changed the SQZ_ANG_ADJUST nominal state to DOWN and reloaded SQZ_MANAGER and SQZ_ANG_ADJUST.
We set H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG to a normal good value of 190. If the operator thinks the SQZ is bad and wants to change this to maximize the range AFTER we've been locked for 2+ hours, they can; a sketch of that adjustment follows below. Tagging OpsInfo.
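For reference, here is a minimal sketch of what that adjustment could look like from a Python session using pyepics (the channel name and the value 190 are from above; everything else, including EPICS access from the workstation, is an assumption):

    # Sketch: check/adjust the CLF RF6 demod phase after 2+ hours of lock.
    # Assumes pyepics and EPICS access to the H1 SQZ channels.
    from epics import caget, caput

    PHASE_CH = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'

    current = caget(PHASE_CH)
    print(f'CLF RF6 phase: {current} deg (normal good value: 190)')

    # Only adjust if SQZ looks bad AFTER 2+ hours locked; step in small
    # increments while watching the range.
    caput(PHASE_CH, 190, wait=True)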
Daniel, Sheila, Camilla
This morning we set the SQZ angle to 90deg and scanned to 290deg using 'ezcastep H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG -s '3' '+2,100''. Plot attached.
You can see that the place with the best SQZ isn't in a good linear range of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG, which is why the SQZ angle servo has been going unstable. We are leaving the SQZ angle servo off.
Daniel noted that we expect the ADF I and Q channels to rotate around zero, which they aren't doing, so we should check that the math calculating these is what we expect. We struggled to find the SQZ-ADF_VCXO model block; it's in the h1oaf model (so that it runs at a faster rate).
Today Mayank and I scanned the ADF phase via 'ezcastep H1:SQZ-ADF_VCXO_PLL_PHASE -s '2' '+2,180''. You can see in the attached plot that the I and Q phases show sine/cosine functions as expected. We think we may be able to adjust this phase to improve the linearity of H1:SQZ-ADF_OMC_TRANS_SQZ_ANG around the best SQZ, so that we can again use the SQZ ANG servo. We started testing this (plot attached), but found that the SQZ was very frequency dependent and needed the alignment changed (83009), so we ran out of time.
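As an illustration of what such a phase scan and the I/Q centering check look like in code, here is a hedged sketch (the ezcastep calls above are what we actually ran; the I/Q readback channel names below are hypothetical, and the angle formula assumes the ADF angle is derived as atan2(Q, I), which only behaves if I and Q rotate around zero, per Daniel's point):

    # Sketch: scan the ADF VCXO phase and check whether I/Q circle the origin.
    # pyepics assumed; the I/Q channel names are hypothetical placeholders.
    import time
    import numpy as np
    from epics import caget, caput

    PHASE_CH = 'H1:SQZ-ADF_VCXO_PLL_PHASE'
    I_CH = 'H1:SQZ-ADF_OMC_TRANS_I'   # hypothetical readback name
    Q_CH = 'H1:SQZ-ADF_OMC_TRANS_Q'   # hypothetical readback name

    phases = np.arange(0, 360, 2)     # 2 deg steps, like the ezcastep scan
    iq = []
    for ph in phases:
        caput(PHASE_CH, float(ph), wait=True)
        time.sleep(2)                 # let the readbacks settle between steps
        iq.append((caget(I_CH), caget(Q_CH)))
    I, Q = np.array(iq).T

    # If I/Q trace a circle about the origin, atan2 gives a clean, linear
    # angle; a nonzero center (what we see) distorts it near the best SQZ.
    print('I/Q centers:', I.mean(), Q.mean())
    angle = np.degrees(np.arctan2(Q - Q.mean(), I - I.mean()))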
TITLE: 02/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Maintenance day. Notable tasks from today's maintenance are below. We recently had another lockloss; we have started initial alignment and will start main locking shortly.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
19:34 | SAF | Laser Haz | LVEA | YES | LVEA is laser HAZARD!!! | 06:13 |
15:33 | FAC | Kim, Nelly | Opt Lab | n | Tech clean | 15:57 |
15:51 | VAC | Ken | EX | n | Compressor electrical | 17:19 |
15:52 | VAC | Janos, contractor | EX | n | Compressor work | 17:35 |
15:57 | FAC | Kim, Nelly | EY, EX, FCES | n | Tech clean | 18:40 |
15:59 | FAC | Eric | Mech room | n | Heater 3a replacement | 17:27 |
16:58 | - | Christina, truck | VPW | n | Large truck delivery to VPW | 16:59 |
17:27 | CDS | Jonathan, Fil, Austin | MSR, CER | n | Move temp switch and camera server | 20:10 |
17:28 | VAC | Ken | EY | n | Disconnect compressor electrical | 19:49 |
17:35 | VAC | Travis | EX | n | Compressor work | 18:33 |
17:46 | VAC | Gerardo, Jordan | MY | n | CP4 check | 19:16 |
17:49 | SQZ | Sheila, Camilla | LVEA | YES | SQZ table meas. | 19:39 |
17:50 | SUS | Jason | LVEA | yes | PR3 OpLev alignment | 18:04 |
18:19 | SUS | Jason, Ryan S | LVEA | YES | PR3 OpLev recentering at VP | 21:05 |
18:19 | SUS | Matt, TJ | CR | n | ETMX TFs for rubbing | 21:21 |
18:37 | FAC | Tyler | Opt Lab | n | Check on lab | 19:03 |
18:39 | CDS | Erik | CER | n | Check on switch | 19:16 |
18:41 | FAC | Kim | LVEA | yes | Tech clean | 19:51 |
18:56 | VAC | Travis | Opt Lab | n | Moving around flow bench | 19:03 |
18:58 | PEM | Robert | LVEA | yes | Check on potential view ports | 19:51 |
19:04 | FAC | Tyler | Mids | n | Check on 3IFO | 19:25 |
20:21 | VAC | Gerardo | LVEA | yes | Checking cable length on top of HAM6 | 20:50 |
20:27 | CDS | Fil | EY | n | Updating drawings | 22:14 |
20:46 | PEM | Robert | LVEA | yes | Viewport checks | 21:26 |
20:51 | VAC | Janos | EX | n | Mech room work | 21:25 |
21:18 | OPS | TJ | LVEA | - | Sweep | 21:26 |
21:19 | FAC | Chris | X-arm | n | Check filters | 22:54 |
22:44 | PEM | Robert | LVEA | yes | Setup tests | 00:06 |
22:46 | SQZ | Sheila, Camilla, Matt | LVEA | yes | SQZ meas at racks | 00:06 |
00:19 | CDS | Dave, Marc | EY | N | Timing card issue fix | 01:19 |
TITLE: 02/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.77 μm/s
QUICK SUMMARY:
IFO is in INITIAL_ALIGNMENT and MAINTENANCE
We're still recovering from maintenance today with a few key issues.
Given all that, we shall begin locking when initial alignment is done.
The new clean air supply (compressor, tank, dryer, and a series of extra filters), which was received at the end of 2024, was installed in the EX mechanical room as a replacement for the old system. The installation work was carried out over the last few weeks; the last step, the startup by Rogers Machinery, happened today.

The new system delivers 69 cfm of air using three 7.5 HP motors (22.5 HP in total). In comparison, the old system delivered 50 cfm using five 5 HP motors (25 HP in total). Moreover, the new system (unlike the old one) has an automatic dew point monitor and a complete pair of redundant dryer towers, so it is a major improvement. The reason for the 69 cfm limit (and not more) is that cooling of the compressor units in the MER is still feasible at that rate, and the filters and the airline can still accommodate the airflow without upgrades. Both the new and old systems are able to produce at least -40 deg F dew point air on paper; during the startup, however, the new system produced much better than this, ~-70 deg F (and dropping), as you can see in the attached photo.

Last but not least, a huge congratulations to the vacuum team for the installation: this was the first time the installation of a clean air system was carried out by LIGO staff, so it is indeed a huge achievement. Also, big thanks to Chris, who cleaned some parts for the compressor; to Tyler, who helped a lot with the heavy lifting; and to Richard & Ken, who did the electrical wiring. From next week on, we repeat this same installation at the EY station.
[ Matt, TJ ]
We looked at the transfer functions of the Test Mass, SR3, and PRM suspensions (top stage of each) to see if we could find signs of rubbing. This is a limited number of transfer functions; however, it does seem like there is some peak shifting in ITMY P.
Attached are PRM L/P/Y and BS L/P/Y. I had to turn off the BS OpLev damping, which is mentioned in Jeff K.'s TF instructions wiki.
I swept the LVEA after maintenance activities today. I unplugged one power strip in the CER, and heard a UPS in one of the vacuum racks going off. I silenced the alarm and talked to Gerardo and Richard, but we should keep an ear out.
As per WP 12333, we installed a PoE switch (sw-lvea-aux1) in the CER, moved four cameras onto the switch, and migrated them to h1digivideo4. This is to expand and test our infrastructure to host more digital cameras. Jonathan H. and Austin F. moved the switch from its test location in the MSR to the CER. As part of the work, we moved sw-lvea-aux up one space in the rack to make more room and installed a cable management bar beneath the switches to provide some strain relief. Fil C. set up the DC power for the switch. We had wanted to also convert sw-lvea-aux to full DC power, but Fil would like us to split the load between the two switches more before removing the single AC power supply from sw-lvea-aux. Patrick worked to migrate the cameras from h1digivideo1 to h1digivideo4.
J. Oberling, R. Short
After the recent PRC alignment work (moving PR3 to better center the beam on PR2), the OpLev beam had fallen off of its QPD, so today I attempted to re-center the beam on the QPD. Using only the translation stages I could not do so, since the horizontal translation stage hit its max travel limit well before the beam was back on the QPD (SUM counts went from ~700 to ~850, well short of the usual ~22k). I enlisted Ryan S. to help, and we set out to adjust the OpLev beam alignment from the transmitter. It turns out the beam gets clipped before we can get it centered on the QPD. We first tried to move the translation stage away from its limit, but could only get the beam onto 2 of the QPD segments (segments 1 and 3) before the SUM counts started dropping; at no point could we get the beam onto QPD segments 2 and 4 in this configuration. We then moved the horizontal stage to its limit and tried to align the beam onto the QPD; this worked a little better, but still not great. The best SUM counts we could get with a "centered" QPD was ~17k, roughly 75% of the total SUM counts for this OpLev, so still some significant clipping. We decided to unclip the beam to make the OpLev more trustworthy (we've had questions about this OpLev in the past). Max SUM counts were ~22.7k, and this gives a yaw reading of roughly -12.5 µrad; any closer to zero and the beam begins to clip again. We left the OpLev in this configuration and will investigate the clipping at a later date.
Tagging OpsInfo: Around -12.5 µrad is the current "yaw zero" for the PR3 OpLev until we can figure out and alleviate the clipping.
As for the clipping, my best guess at this point is that we're clipping somewhere on/in the ~4'-long tube that runs between the exit viewport and the OpLev receiver's QPD. We'll have to climb on top of HAM3 to check this, which we will do at a later date.
This closes WPs 12336 and 12341.
Edit: Fixed typo in yaw reading in bolded sentence (12.7 to 12.5).
Sheila, Camilla
Sheila and Vicky moved the AOM and realigned it in 80830.
Summary: Today we measured the powers on SQZT0; see attached photo. We aimed to understand where we were losing our green light, since we had to turn down the OPO setpoint last week (82787). Adjusting the SHG EOM, the AOM, and the fiber pointing dramatically increased the available green light. The OPO setpoint is back to 80 uW with plenty of extra green available.
After taking the attached power measurements, we could see the SHG conversion was fine: 130 mW of green out of the SHG for 360 mW of IR in.
We were losing a lot of power through the EOM: 96 mW in to 76 mW out (we expect ~100% transmission). We could see this clipping in yaw by taking a photo. Sheila adjusted the EOM alignment, increasing the power out to 95 mW.
As the ISS control monitor at 7.0 was a little high with 26.5 mW SHG power launched (OPO unlocked), I further turned the SHG launch power down to 20 mW and adjusted the HAM7 rejected light. The ISS control monitor was left at 5.4, which is closer to the middle of its 0-10 V range.
Tue Feb 18 10:08:06 2025 INFO: Fill completed in 8min 3secs
Gerardo confirmed a good fill curbside. TCmins [-51C, -48C] OAT (+2C, 36F) DeltaTempTime 10:08:06.
Missing data in the plot is due to a DAQ restart at that time.
I reviewed the weekend lockloss, where lock was lost during the calibration sweep on Saturday.
I've compared the calibration injections with what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
It looks pretty much the same, but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection, which sees the least loop suppression.
It looks like this time the lockloss was a coincidence. BUT: we desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point, which I suspect caused the last two locklosses), out of an abundance of caution and hoping the cause of the locklosses is something more subtle that I'm not yet catching.
Despite the lockloss, I was able to use the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/
(the log file used as input to simuLines.py) to regenerate the measurement files.
As you can imagine, the points where the data is incomplete are missing, but 95% of the sweep is present and the fitting all looks great.
So it is in some way reassuring that if we lose lock during a measurement, the data gets salvaged and processed just fine.
Report attached.
How to salvage data from any failed simulines injection attempt:
The logs live in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO = L1, H1.
Run './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
where time-name resembles 20250215T193653Z.
simuLines.py is the simulines executable and can be given with a full path, as the calibration wiki does: /ligo/groups/cal/src/simulines/simulines/simuLines.py
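For example, regenerating from the log of the failed 2025-02-15 attempt could be scripted like this (a sketch only; the paths are the ones quoted above, and the log file name is assumed to match that timestamp):

    # Sketch: re-run simuLines on a saved log to regenerate measurement files.
    import subprocess

    SIMULINES = '/ligo/groups/cal/src/simulines/simulines/simuLines.py'
    LOG = ('/opt/rtcds/userapps/release/cal/common/scripts/simuLines/'
           'logs/H1/20250215T193653Z.log')

    subprocess.run([SIMULINES, '-i', LOG], check=True)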
The operator team has been finding 11 Hz oscillation locklosses. Attached is the spectrum of MICH, PRCL, and SRCL, comparing one of our last long (2-day) locks to a more recent 6-hour lock. There is a debatable PRCL bump around 10-11 Hz.
I'm comparing plots that Oli made in this alog to a plot I added to this alog from early O4a where we were having frequent locklosses due to marginal stability in PRCL. The ring up looks very similar and I would guess that we should measure the PRCL OLG and adjust the gain. Just scrolling back the last few days on the summary pages, I don't visually see the excess noise around 11 Hz for a long time before the locklosses, like we saw in O4a, but that doesn't mean much since I could be fooled by the color scale.
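One quick way to look for that 10-11 Hz PRCL excess ahead of a lockloss is to compare spectra just before the lockloss against a quiet reference; here is a sketch with gwpy (the channel name follows the usual LSC convention but is an assumption here, and the GPS time is a placeholder):

    # Sketch: compare PRCL spectra just before a lockloss vs. a quiet stretch.
    from gwpy.timeseries import TimeSeries

    CHAN = 'H1:LSC-PRCL_OUT_DQ'       # assumed channel name
    t_lockloss = 1423900000           # placeholder GPS time of a lockloss

    before = TimeSeries.get(CHAN, t_lockloss - 600, t_lockloss - 10)
    quiet = TimeSeries.get(CHAN, t_lockloss - 7200, t_lockloss - 6600)

    # Look for a bump near 10-11 Hz growing relative to the quiet reference
    asd_before = before.asd(fftlength=8, overlap=4)
    asd_quiet = quiet.asd(fftlength=8, overlap=4)
    plot = asd_before.plot(label='10 min before lockloss')
    ax = plot.gca()
    ax.plot(asd_quiet, label='quiet reference')
    ax.set_xlim(5, 30)
    ax.legend()
    plot.show()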
BS Camera stopped updating just like in alogs:
This takes the Camera Servo guardian into a never-ending loop (and takes ISC_LOCK out of Nominal and H1 out of Observe). See attached screenshot.
So, I had to wake up Dave so he could restart the computer & process for the BS camera. (Dave mentioned there is a new computer for this camera to be installed soon, and it should help with this problem.)
As soon as Dave got the BS camera back, the CAMERA_SERVO node returned to nominal, but I had accepted the SDF diffs for ASC which appeared when this issue started, so I had to go back and accept the correct settings. Then we automatically went back to observing.
OK, back to trying to go back to sleep again! LOL
Full procedure is (a scripted sketch of the ping-wait steps follows the list):
Open BS (cam26) image viewer, verify it is a blue-screen (it was) and keep the viewer running
Verify we can ping h1cam26 (we could) and keep the ping running
ssh onto sw-lvea-aux from cdsmonitor using the command "network_automation shell sw-lvea-aux"
IOS commands: "enable", "configure terminal", "interface gigabitEthernet 0/35"
Power down h1cam26 with the "shutdown" IOS command, verify pings to h1cam26 stop (they did)
After about 10 seconds power the camera back up with the IOS command "no shutdown"
Wait for h1cam26 to start responding to pings (it did).
ssh onto h1digivideo2 as user root.
Delete the h1cam26 process (kill -9 <pid>), where the pid is given in the file /tmp/H1-VID-CAM26_server.pid
Wait for monit to restart CAM26's process, verify image starts streaming on the viewer (it did).
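The two wait steps above are easy to script; here is a minimal sketch of the ping-wait only (a hypothetical helper illustrating steps 5 and 7 of the procedure, not something we ran):

    # Sketch: wait for h1cam26 to stop/start answering pings during the
    # power cycle. Illustrates the manual ping checks in the procedure.
    import subprocess
    import time

    def pings(host: str) -> bool:
        """One ping; True if the host answered."""
        r = subprocess.run(['ping', '-c', '1', '-W', '2', host],
                           capture_output=True)
        return r.returncode == 0

    def wait_for(host: str, up: bool, timeout: float = 120.0) -> bool:
        """Poll until the host reaches the desired up/down state."""
        t0 = time.monotonic()
        while time.monotonic() - t0 < timeout:
            if pings(host) == up:
                return True
            time.sleep(2)
        return False

    # After the IOS "shutdown": confirm pings stop, pause, power back up.
    assert wait_for('h1cam26', up=False), 'camera port did not go down'
    time.sleep(10)
    # ...issue "no shutdown" on the switch, then:
    assert wait_for('h1cam26', up=True), 'camera did not come back'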
FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=33320
Forgot once again to note timing for this wake-up. It was at 2:33 am PST (10:33 UTC), and I was roughly done with this work about 45 min after phoning Dave for help.
ALS EY WFS F9s came up as two SDF diffs. Accepted (see attached).
The last lockloss was most likely due to an EQ (Ecuador?). I was already up, so I stayed up to proactively run an alignment and got almost done with SRC_OFFLOADED, but H1 Manager took ALIGN_IFO to DOWN (!!!!) and started completely over. I guess I should have taken H1 Manager down?
At any rate, I ran a manual alignment for SRC again, and H1 made it back up all the way except for the ALS EY diffs. OK, back to bed.
Here's the lockloss that preceded this 12:53 am PST (8:53 UTC) wake-up call (the one where I happened to still be up at 11:15 pm PST); it does not have an EQ tag, but I could have sworn I remembered hearing Verbal go into EQ mode a few minutes before the lockloss, which is why I stayed up to run an alignment! Since I was up, this is the one where I tried to help H1 by running an alignment before trying to go to bed, but I ended up fighting H1_MANAGER with the alignment attempts. Then I was awakened just before 1 am for the SDF diffs noted above.
This wake-up call was only me getting out of bed for a few minutes to ACCEPT the ALS SDF diffs (which might have been due to me and my errant alignments).
The corner RGA (located on the output arm between HAM4 and HAM5) lost connection to the control room computer and stopped collecting data around 6 pm yesterday (2/6/25). The software gave an error stating "Driver Error: Run could not be stopped".
I could not ping the unit from the terminal, but Erik confirmed the port is still open on the network switch, so it seems to be an issue with the RGA electronics. Other RGAs connected to this computer can still be accessed.
I restarted the software and attempted to reconnect to the RGA, but no luck. I will have to wait until next Tuesday maintenance to troubleshoot. This unit had been collecting data for the past ~6 months without issue. I will perform a hardware reset at the next opportunity to try to bring the unit back online; otherwise, we have a new PrismaPro we can replace this unit with during the next vent.
2/18/25 update:
Today, Erik was able to reconfigure the IPs of the RGAs. We are able to ping all three RGAs currently connected to the network, and corner monitoring scans have resumed. Filaments are turned on for all three RGAs: Corner, HAM6, and EX. Reminder: both HAM6 and EX have 10 l/s pumps on the RGA volume.