At 01:02 UTC, H1 dropped from observing because the SHG dropped out and would not come back on its own. I brought SQZ_MANAGER to 'DOWN' and raised the SHG temperature (H1:SQZ-SHG_TEC_SETTEMP) from 34.6 to 35.2 to bring the OPO ISS control signal and SHG green power back up to around 2.8 and 120mW, respectively. After that, I requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' and everything came back without issue. Once I accepted the new SHG temperature in SDF, H1 returned to observing at 01:13 UTC.
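For reference, a rough sketch (not the exact procedure used) of this kind of adjustment via channel access with pyepics. H1:SQZ-SHG_TEC_SETTEMP and H1:SQZ-SHG_GR_DC_POWERMON are channels named in this log; the OPO control channel name below is an assumed placeholder, not a confirmed channel.

import time
from epics import caget, caput

SETTEMP  = 'H1:SQZ-SHG_TEC_SETTEMP'
GR_POWER = 'H1:SQZ-SHG_GR_DC_POWERMON'
OPO_CTRL = 'H1:SQZ-OPO_ISS_CONTROLMON'   # assumed channel name, for illustration only

print('before:', caget(SETTEMP), caget(GR_POWER), caget(OPO_CTRL))
caput(SETTEMP, 35.2)    # raised from 34.6 in this instance
time.sleep(30)          # let the TEC settle before re-checking
print('after: ', caget(SETTEMP), caget(GR_POWER), caget(OPO_CTRL))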
TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:31 UTC
Many, many issues today, all SQZ-related. In short, our range is terrible and we know it is SQZ-related.
Most of the story can be found in alog 82588 (and comments) but to summarize:
Otherwise, Dave did a vacstat restart due to a glitch - alog 82595
LOG:
TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 22mph Gusts, 18mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY: H1 has been locked for 3.5 hours. Ibrahim is bringing me up to speed on the SQZ issues of the morning.
VACSTAT detected a single BSC3 sensor glitch at 13:34 this afternoon. Last glitch was 20 days ago. I restarted vacstat_ioc.service on cdsioc0 at 13:46 and disabled HAM6's gauge.
Lockloss, cause unknown. Sheila and I were troubleshooting a host of SQZ mysteries when this happened (not actually touching anything, just discussing). It's tempting to call it SQZ-related since all of our issues this weekend have been, but the cause is still unknown.
Back to NLN as of 20:31 UTC
Sheila and I troubleshot more SQZ stuff, ultimately deciding to turn off SQZ ANG ADJUST since it was unnecessarily oscillating the phase, and thus our range. I edited sqzparams.py to turn it off and then had to edit the SQZ_ANG guardian to make DOWN its nominal state. Now we're observing, but we will need to manually change the angle to optimize range, which I just did and will do again once thermalized.
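A hypothetical sketch of what the two edits described above might look like (the sqzparams.py flag name is illustrative, not the actual parameter; Guardian node modules do declare their nominal state at module level):

# in sqzparams.py -- flag name is a placeholder, not the real parameter
use_sqz_ang_adjust = False   # was True; stops the angle-adjust servo from running

# in the SQZ_ANG Guardian module -- park the node in DOWN by default
nominal = 'DOWN'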
Sun Feb 02 10:09:12 2025 INFO: Fill completed in 9min 9secs
TCmins [-95C, -93C] OAT (2C, 35F) DeltaTempTime 10:09:14
SQZ Morning Troubles
After reading Tony's alog, I suspected that the SHG power was too low, causing the lock issues. I trended the SHG power from Sheila's alog while adjusting the SHG temperature to maximize it. This locked SQZ almost immediately - screenshot below.
However, despite SQZ locking, it looked like in trying to lock for hours the SQZ_ANG_ASC had gone awry and, as the wiki put it, "ran away". I requested a reset from the SQZ_ANG guardian, but this didn't work. Then I saw another SQZ angle reset guardian, called "Reset_SQZ_ANG_FDS", which immediately broke guardian: the node went into error and a bunch of settings in ZM4 and ZM5 got changed. I reloaded the SQZ guardian and successfully got back to FREQ_DEP_SQZ. I saw there were some SDF changes that had occurred, so I undid them, which made guardian complain less and less. Then there were 2 TCS SDF OFFSETs, the ZM4/5 M2 SAMS OFFSETs, which were off by 100V and sitting at their max of 200V. I recalled that when SQZ would drop out, it would notify something along the lines of "is this thing on?". I assumed that because this slider was at its max, all I'd need to do was revert this change, which had happened over 3hrs ago. I did that and then we lost lock immediately.
What I think happened in order:
Now I'm relocking, having reverted the M2 SAMS OFFSETs that guardian made and having maximized the SHG power via temperature (which, interestingly, had to go down, contrary to yesterday). I've also attached a screenshot of the SDF change related to the AWC-SM4_M2_SAMS_OFFSET because I'm about to revert the change on the assumption that it is erroneous.
What I find interesting is that lowering the SHG temp increases the SHG power first linearly, then after some arbitrary value, exponentially, but only for one big and final jump (kind of like a temperature resonance, if you can excuse my science fiction). You can see this in the SHG DC POWERMON trend, which shows a huge final jump.
Recovered from Lockloss and got to OBSERVING automatically - 17:58 UTC.
The only change I made was in SDF: re-adjusting the SHG temperature to maximize its power.
Investigating further, I realized that once I fixed the SHG temp issue, the flashing notification was "SQZ ASC AS42 not on??", for which I should have followed alog 71083. Instead, SDF led me to the specific issue with the ASC. I will continue to monitor and investigate. Since the range is fluctuating, I will probably go into commissioning to try and optimize squeezing once thermalized.
Ibrahim summarized this over the phone for me, and here's our summary of 4 things that need a fix or looking into (sometime):
Regarding Sheila's final bullet point above, the change to the PSAMS offsets happened as a result of Tony's misclick (alog 82585), where he brought SQZ_MANAGER to 'SET_ZM_SLIDERS_IFO [52]', as reported by the Guardian log at that time:
I also show this interaction in the attached ndscope.
TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
IFO is at NLN as of 12:44 UTC
SQZ is failing to lock the FC per Tony's OWL alog. I am investigating now.
TITLE: 02/02 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
I got a call to help H1 get relocked. When I logged in to start helping we were in POWER_10W.
Once we reached NOMINAL_LOW_NOISE ... we lost lock before Observing could be reached.
Once H1 got past DRMI_1F, I tried to get some Z's.
I allowed H1 some time to relock, but once it got back to NOMINAL_LOW_NOISE again, the SQZ manager was stuck on the beam diverter step, and hence H1 called me back.
After poking around, I tried to recreate my steps from the last time this happened: alog 82192.
I started reading the troubleshooting guide for SQZ: https://cdswiki.ligo-wa.caltech.edu/wiki/Troubleshooting%20SQZ
I tried to request FDS_READY_IFO[50] and misclicked [52], which changed the alignment of the FC2 mirrors and ZM4. [Not any other ZM mirrors!]
Since then, I was able to narrow down the problem to the SQZ_FC not locking anymore, likely because the ZM3 mirrors have now been moved.
OK, after returning FC2 and ZM4 to their prior positions, I am now getting the beam diverter error message again, which is where I started when the IFO called me the second time.
TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Largely uneventful shift with one lockloss and a somewhat lengthy relock. Wind hasn't been too bad and microseism is steady. H1 has now been locked for just over 2 hours.
Lockloss @ 02:03 UTC - link to lockloss tool
No obvious cause, but there's an ETMX glitch immediately prior. End lock stretch at 16.5 hours.
H1 back to observing at 03:53 UTC.
I went immediately into an initial alignment to see if the ALS dropout issues were seen, and they were not. Since I believe this is the third time we've seen IA make the dropouts go away, is there something different about IA that fixes them?
Even with an alignment, relocking still took a while as there were ETMX saturations starting at DARM_TO_RF that eventually caused a lockloss during CARM_OFFSET_REDUCTION. Also, DRMI took a while to lock even with decent flashes.
TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 09:29 UTC (15 hr lock!) with two drops from OBSERVING
OBS Drop 1: Planned Saturday Calibration Sweep - alog 82579 - CALIBRATION from 19:32 to 20:15 UTC (~45 mins)
OBS Drop 2: SQZ opo lockloss - PREVENTATIVE MAINTENANCE from 21:51 to 23:31 UTC (~1hr 40 mins). Details below.
Troubleshooting with Camilla and Sheila led to the same conclusion: the SHG power was too low. Sheila determined that it was as high as it could go, but tuning the SHG temperature gave it some extra power, which is ultimately what allowed the SQZ-OPO CONTROLMON signal to get back to a lockable value. This will likely need to be readjusted if the OPO/SQZ have issues locking, since it is slowly going down in counts (though it may have leveled out; check the screenshot). The instructions are: Sitemap -> SQZ -> SQZT0 -> colorful SHG box -> set the temperature such that H1:SQZ-SHG_GR_DC_POWERMON is maximized in counts (picture below of Sheila's adjustments). Accepted SHG temp change SDF attached.
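A minimal sketch of that adjustment as a small automated scan, assuming pyepics channel access and that stepping the setpoint in small increments is safe; the scan range, step size, and settle time below are illustrative, not operational values.

import time
import numpy as np
from epics import caget, caput

SETTEMP  = 'H1:SQZ-SHG_TEC_SETTEMP'
POWERMON = 'H1:SQZ-SHG_GR_DC_POWERMON'

start = caget(SETTEMP)
best_temp, best_power = start, caget(POWERMON)
for temp in np.arange(start - 0.3, start + 0.3, 0.05):   # small scan around current setpoint
    caput(SETTEMP, round(float(temp), 3))
    time.sleep(20)                                       # let the TEC and green power settle
    power = caget(POWERMON)
    if power > best_power:
        best_temp, best_power = float(temp), power

caput(SETTEMP, round(best_temp, 3))                      # leave the setpoint where power peaked
print('best setpoint %.3f -> %.1f counts' % (best_temp, best_power))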
Other SQZ Adjustments:
Other: the H2 building had a temperature excursion overnight that prompted Dave to troubleshoot with me, first remotely and then in person. He came on site to look at the HVAC failing to keep the temperature low (the set point was 67F but the temperature went to over 80F). Since power cycling failed, Jonathan and Dave agreed to turn off non-essential computers and electronics until they can more thoroughly investigate the issue during the week. I've attached the H2 Building Environment MEDM, which shows the temperature peaking at 85F before electronics were turned off and thermostats were power cycled. The blue trace shows the air flow failing to adjust correctly with the temperature, and the red trace shows the temperature.
LOG:
None
TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 14mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY: H1 has been locked for 14.5 hours.