Sheila, Naoki, Daniel
Overnight, the OPO was scanning for about 5 hours, during which time the 6 MHz demod was seeing flashes from the CLF reflected off the OPO. This morning, we still see DC light on the diode, but no RF power on the demod channel. There aren't any errors on the demod MEDM screen.
We did a manual check of the nonlinear gain using the seed (we can't use the guardian because of the RF6 problem), and it seems that we do have NLG, so the OPO temperature is correct.
Daniel found that the CLF frequency was far off from normal (5 MHz) because the boosts were on in the CLF common mode board. Turning these off solved the issue. We've added a check to PREP_LOCK_CLF in the OPO guardian: if the frequency is more than 50 kHz off, the state will not return True and will post a notification to check the common mode board.
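As a rough sketch only (not the actual guardian code), the new check could look something like the following; the CLF frequency channel name and the nominal value are assumptions for illustration:

# Hedged sketch of the PREP_LOCK_CLF frequency check.
# ezca and notify are provided by the guardian runtime; channel name is a placeholder.
from guardian import GuardState

CLF_NOMINAL_HZ = 3.1e6   # assumed nominal CLF frequency, not taken from this alog
CLF_TOL_HZ = 50e3        # the 50 kHz tolerance described above

class PREP_LOCK_CLF(GuardState):
    def run(self):
        freq = ezca['SQZ-CLF_REFL_RF6_FREQUENCY']  # placeholder readback channel
        if abs(freq - CLF_NOMINAL_HZ) > CLF_TOL_HZ:
            notify('CLF frequency >50 kHz from nominal: check common mode board boosts')
            return False  # hold in this state instead of completing
        return True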
Starting 17:43 Fri 30aug2024, the DTS environment monitoring channels went flatline (no invalid error, just unchanging values).
We caught this early this morning when Jonathan rebooted x1dtslogin and the DTS channels did not go white-invalid. When x1dtslogin came back, we restarted the DTS cdsioc0 systemd services (dts-tunnel, dts-env) and the channels are active again.
Opened FRS31994
Tue Sep 03 08:11:49 2024 INFO: Fill completed in 11min 45secs
Jordan confirmed a good fill curbside. The low TC temperatures outside of the fill over the weekend were traced to an ice build-up at the end of the discharge line, which has now been cleared. 1-week trend of TC-A also attached.
Workstations and displays were updated and rebooted. This was an OS package update; Conda packages were not updated.
TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 12mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently in NOMINAL_LOW_NOISE but not Observing due to some SDF diffs from the PEM injections. We are planning on trying to stay locked during today's maintenance.
H1 called for assistance following some trouble relocking; the previous lock only lasted ~8 minutes.
08:50 UTC lockloss
09:46 UTC lockloss
10:40 UTC started an IA which took about 20 minutes
11:30 UTC lockloss at LOW_NOISE_LENGTH_CONTROL
I had a lot of trouble getting DRMI to lock; flashes were fairly decent (>100)
12:41 UTC back to NLN. The ISS refused to stay locked, so after many tries I finally put us into observing without squeezing at 13:21 UTC
TITLE: 09/03 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Only one lockloss this shift. Recovery has been taking a while as it's been windy this evening, but H1 is now finally mostly relocked, currently waiting in OMC_WHITENING to damp violins. Otherwise a pretty quiet shift.
LOG:
No log for this shift.
Lockloss @ 03:31 UTC - link to lockloss tool
No obvious cause. Looks like there was some shaking of the ETMs about half a second before the lockloss. Wind speeds have come up to 30mph in the past 45 minutes, so it's possible that could have something to do with the lockloss.
TITLE: 09/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Since the last update, the only thing that happened was a lockloss @ 21:58 UTC from an unknown cause.
Wind wasn't elevated, nor was the primary microseism.
No PI ring-up.
Relocked and back to observing at 22:51 UTC.
LOG:
No log
TITLE: 09/02 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 12mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY: H1 just began observing as I walked in; sounds like several locklosses have been due to PI ringups, and a couple from EQs.
Lockloss page
No warning or notice of an earthquake was given; it was a sudden small spike in ground motion.
It was observed in Picket Fence.
USGS didn't post this right away, but it was an M 4.2, 210 km W of Bandon, Oregon, right off the coast.
I took ISC_LOCK to initial alignment after a lockloss at CHECK_MICH_FRINGES.
Relocking now.
Interesting.
Naoki called the control room pretty early and suspected that the locklosses from last night were from some PI ring-ups, as seen in Ryan's alog 79860.
So I told him I'd check it out and document it.
Last night, during the spooky & completely automated OWL shift when no one was around to see it, there were 5 episodes of lock acquisition and lockloss that strangely all happened around the time the lock clock approached the 2-hour mark.
Turns out Naoki's gut instinct was right: they were PI ring-ups!
Not only was he correct that the locklosses were caused by PI ring-ups, but they were the Dreaded Double PI Ring-Up! Some say that the Double PI ring-up is just a myth or an old operator's legend. It's supposedly a rare event in which 2 different parametric instability modes ring up at the same time!
But here is a list of the Dreaded Double PI ring-up sightings from just last night!
2024-09-02_05:34:01Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: The Dreaded Double PI 28 and 29!!! The SUS-PI guardian did not change the phase of compute mode 28 at all. Lockloss page
2024-09-02_08:06:14Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: Another Dreaded Double PI 28 & 29! Compute mode 28's phase was not moved this time either. Lockloss page
2024-09-02_10:37:06Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS Cause: Dreaded Double PI ring-up, but this time the phase for 28 changed, though by then it was too late for everyone involved! (No one was involved, because this was completely automated.)
Lockloss page
2024-09-02_12:38:45Z ISC_LOCK NOMINAL_LOW_NOISE -> LOCKLOSS OK, I'll admit that this one is not a Dreaded Double PI ring-up... but it certainly is a PI 24 ring-up!
Lockloss page. After getting some second eyes on this, I am now convinced that this one was a wind gust lockloss.
Maybe it is just that the SUS-PI guardian is having trouble damping PI 28 and is instead trying to damp PI 29, but that attempt must be what is ringing up mode 29.
If it happens again, Naoki has asked me to simply take the SUS-PI Guardian to IDLE and take the damping gains for 28 & 29 to 0.
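For illustration only, that intervention from a Python session using the ezca EPICS wrapper might look like the sketch below; the guardian node name and the damping gain channel names are assumptions, not verified against the real SUS-PI model:

from ezca import Ezca

ezca = Ezca('H1')  # EPICS wrapper; prepends the H1: prefix to channel names

# Request the SUS_PI guardian node to IDLE (node name assumed)
ezca['GRD-SUS_PI_REQUEST'] = 'IDLE'

# Zero the damping gains for modes 28 and 29 (placeholder channel names)
for mode in (28, 29):
    ezca['SUS-PI_PROC_COMPUTE_MODE%d_DAMP_GAIN' % mode] = 0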
Wish me luck.
Mon Sep 02 08:11:20 2024 INFO: Fill completed in 11min 16secs
TITLE: 09/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 10mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
14:33 UTC Superevent candidate S240902bq
H1 is locked and has been observing for 1 hour.
NUC 33 stopped working at 23:40 local time last night; hard shutdown and reboot.
A number of locks and locklosses happened last night; I will start my morning off by investigating those.
TITLE: 09/02 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
LOG:
No log for this shift.
Lockloss @ 01:35 UTC - link to lockloss tool
PI mode 28 suddenly rang up about 5 minutes before the lockloss (almost exactly 2 hours since reaching NLN), and callouts for mode 29 also started soon after. It appears that Guardian was trying to damp mode 29 instead of 28; I recall these modes being fairly close together, but I wonder if this is the best way to damp this mode, as Guardian was unsuccessful. I, unfortunately, was not quick enough to intervene.
While the PIs were rung up shortly before the lockloss, I noticed the OMC TRANS camera start shaking like it has before (most recently in alog79748), and I don't recall seeing it do that so far this weekend.
H1 back to observing at 03:46 UTC. Fully automatic relock preceded by an initial alignment.
While relocking following this lockloss, I changed the nominal input max power back to 60W at Naoki's recommendation to hopefully avoid these 80kHz PIs.
Once H1 relocked to NLN this evening, I noticed it didn't look like FDS was being injected, and then I saw a notification on the SQZ_OPO_LR Guardian node saying, "too much pump light into fiber? Adjust the half wave plate." Tony pointed me to alog 78696 where he needed to make a similar adjustment. I attempted to follow that procedure to lower the green SHG launch power below 35 (the threshold in Guardian) from around 36.6 using the half-wave plate, but I was unsuccessful. Moving the 'SQZ SHG FIBR HWP/QWP' picomotor (motor 3) made the launch power signal shake as the wave plate moved, but the level did not change. I called Naoki for assistance, and he corrected me: I should have been moving the half-wave plate before the launch and rejected PDs instead of the one after them, making the correct motor 'SQZ Laser FIBR HWP/SHG GR power' (motor 2). He made the adjustments necessary to bring the launch power down and the rejected power up, then used Guardian to inject squeezing without issue.
H1 started observing at 00:18 UTC.
TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
An 11-hour lock was lost due to a magnitude 6.4 earthquake in the Solomon Islands.
Initial alignment started at 21:42 UTC, once ground motion as seen by PeakMon had been consistently below 800 for 5 minutes (a rough sketch of that check is at the end of this entry).
I.A. completed while the ground motion was still above 200.
Locking started at 22:09 UTC
22:40 UTC Lockloss from LOWNOISE_COIL_DRIVER when a rogue increase in ground motion struck.
NOMINAL_LOW_NOISE reached at 23:30 UTC
LOG:
No log
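As mentioned above, here is a rough sketch of that "below threshold for 5 minutes" PeakMon wait in Python with ezca; the channel name is a placeholder and the threshold is in whatever units PeakMon displays:

import time
from ezca import Ezca

ezca = Ezca('H1')

THRESHOLD = 800    # PeakMon level used in this shift
HOLD_SEC = 5 * 60  # require 5 continuous minutes below threshold

below_since = None
while True:
    # placeholder channel name for the PeakMon ground motion readback
    peak = ezca['ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON']
    if peak < THRESHOLD:
        if below_since is None:
            below_since = time.time()
        elif time.time() - below_since >= HOLD_SEC:
            print('Ground motion settled; OK to start initial alignment')
            break
    else:
        below_since = None
    time.sleep(10)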