FAMIS 31053
The NPRO power, amplifier powers, and a few of the laser diode powers inside the amps show some drift over the past several days that tracks together, sometimes inversely. Looking at past checks, this relationship isn't new, but the NPRO power doesn't normally drift around this much. Since it's still not a huge difference, I'm mostly just noting it here and don't think it's a major issue.
The rise in PMC reflected power has definitely slowed down, and has even dropped noticeably in the past day or so by almost 0.5W.
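For reference, a minimal trending sketch for this check, assuming gwpy/NDS2 access; the channel names below are placeholders for the NPRO and amplifier power monitors and have not been checked against the actual PSL channel list:

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

# Pull the past week of minute trends for the (placeholder) PSL power channels.
end = to_gps('now')
start = end - 7 * 86400
channels = [
    'H1:PSL-PWR_NPRO_OUTPUT.mean,m-trend',   # placeholder NPRO power trend
    'H1:PSL-PWR_AMP1_OUTPUT.mean,m-trend',   # placeholder Amp1 power trend
    'H1:PSL-PWR_AMP2_OUTPUT.mean,m-trend',   # placeholder Amp2 power trend
]
data = TimeSeriesDict.get(channels, start, end)

# Overplot everything to see whether the drifts track each other.
plot = data.plot()
plot.gca().set_ylabel('Power [W]')
plot.savefig('psl_weekly_trends.png')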
The operator team has been noticing that we are dropping out of observing for the PMC PZT to relock (80368, 80214, 80206...).
There's already a PZT_checker() in SQZ_MANAGER at FDS_READY_IFO and FIS_READY_IFO to check that the OPO and SHG PZTs are not at the end of their range (the OPO PZT relocks if not between 50-110V, and the SHG PZT if not between 15-85V). If they are, it requests them to unlock and relock. This is to force them into a better place before we go into observing.
I've added the PMC to the PZT_checker; it will relock if the PZT is outside of 15-85V (full range is 0-100V). SQZ_MANAGER reloaded. Plan to take the GRD through DOWN and back to FDS.
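For the record, a minimal sketch of the range-check logic described above, assuming the Guardian ezca interface and placeholder channel names (this is not the actual SQZ_MANAGER code); the voltage limits are the ones quoted in this entry:

# Allowed ranges per PZT: (placeholder channel, low V, high V)
PZT_LIMITS = {
    'OPO': ('SQZ-OPO_PZT_VOLTS', 50, 110),   # relock if outside 50-110V
    'SHG': ('SQZ-SHG_PZT_VOLTS', 15, 85),    # relock if outside 15-85V
    'PMC': ('SQZ-PMC_PZT_VOLTS', 15, 85),    # newly added; full range is 0-100V
}

def pzt_checker(ezca):
    """Return the names of any PZTs sitting outside their allowed range."""
    out_of_range = []
    for name, (channel, low, high) in PZT_LIMITS.items():
        volts = ezca[channel]
        if not (low <= volts <= high):
            out_of_range.append(name)   # caller requests an unlock/relock for these
    return out_of_range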
The annulus ion pump failed at about 4:04 AM. Since the annulus volume is shared by HAM1 and HAM2, the other pump is actively working on the system, as noted on the attached trend plot.
The system will be evaluated as soon as possible to determine whether the pump or controller needs replacing.
I tried Elenna's FM6 from 80287; this made the PRCL coupled noise worse, see the first attached plot.
Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt. All in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward.
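As a quick sanity check on the exports, something like the following could confirm the coherence really does stay high above 50Hz after raising the amplitude; this is only a sketch that assumes a two-column ASCII export (frequency, coherence) with a made-up file name:

import numpy as np

def coherence_ok(path, fmin=50.0, threshold=0.9):
    """Return whether the exported coherence stays above `threshold` for f > fmin."""
    freq, coh = np.loadtxt(path, unpack=True)   # assumed two-column ASCII export
    band = freq > fmin
    return coh[band].min() >= threshold, coh[band].min()

ok, worst = coherence_ok('prclff_coherence.txt')   # hypothetical file name
print('coherence above 50 Hz OK:', ok, '(minimum %.2f)' % worst)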
Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.
The "correct" filter today in FM7 was tested today and still didn't work. Possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.
Mon Sep 30 08:11:07 2024 INFO: Fill completed in 11min 3secs
Jordan confirmed a good fill curbside.
Because of cold outside temps (8C, 46F) the TCs only just cleared the -130C trip. I have increased the trip temps to -110C, the early-winter setting.
I don't consider this a full investigative report into why we've been having these FSS_OSCILLATION tagged locklosses, but here are some of my quick initial findings:
Please feel free to add comments of anything else anyone finds.
Two interesting additional notes:
TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.13 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 4 hours. One lockloss overnight, which doesn't have an obvious cause but looks to have a sizeable ETMX glitch. Commissioning time is scheduled today from 15:30 to 18:30 UTC.
TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Reacquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Trying to relock; currently at LOCKING_ALS. Relocking from the previous lockloss wasn't too bad, so hopefully this relocking goes smoothly as well. We had superevent candidate S240930aa during my shift!
LOG:
23:00UTC Observing and have been Locked for 2 hours
23:32 Lockloss
00:09 Started an initial alignment after cycling through CHECK_MICH_FRINGES
00:47 Initial alignment done, relocking
01:28 NOMINAL_LOW_NOISE
01:32 Observing
04:00 Superevent S240930aa
04:54 Lockloss
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:43 | PEM | Robert | Y-arm | n | Testing equipment | 23:33 |
We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.
It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before lockloss) was due to the SQZ PMC PZT exceeding its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, where Guardian is usually successful in bringing things back and H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.
However, when looking at Guardian logs around this time, I noticed that one of the first things that could indicate a cause for a lockloss was from the IMC_LOCK Guardian, where at 04:54:00 UTC it reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). Unsure as of yet why this would've happened to cause this lockloss.
Other than the ISS, I don't notice any other obvious cause for this lockloss.
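For anyone wanting to reproduce this timing comparison later, a rough gwpy sketch along these lines would pull the relevant signals around the lockloss; the channel names here are illustrative guesses and have not been checked against the real ISS channel list:

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

t0 = to_gps('2024-09-30 04:54:01')        # lockloss time from above
channels = [
    'H1:PSL-ISS_AOM_DRIVER_MON',          # assumed drive-to-AOM monitor
    'H1:PSL-ISS_SECONDLOOP_OUTPUT',       # assumed second loop output
    'H1:PSL-ISS_DIFFRACTION_AVG',         # assumed diffracted power average
]

# Grab 10 seconds before and 2 seconds after the lockloss and overplot.
data = TimeSeriesDict.get(channels, t0 - 10, t0 + 2)
plot = data.plot()
plot.savefig('iss_lockloss_zoom.png')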
After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not; see attached.
TITLE: 09/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 was locked this morning until winds picked up, which kept it down through midday. Was able to get relocked this afternoon, but we just had another lockloss as I'm drafting this log.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | Ongoing |
17:01 | PEM | Robert | Y-arm | n | Testing equipment | 18:34 |
21:43 | PEM | Robert | Y-arm | n | Testing equipment | Ongoing |
Lockloss @ 09/29 23:32UTC. Ryan S and I had been watching the FSS channels glitching, but Ryan closed them literally seconds before the lockloss happened, so we're not sure yet if it was the FSS, though we think it might've been. The wind is also above 30mph and increasing, so it could've been that too.
Even though the wind was (just a bit) over threshold, it definitely looks like the FSS (green, lower right) was the cause of the lockloss
09/30 01:30UTC Observing
TITLE: 09/29 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 27mph Gusts, 16mph 3min avg
Primary useism: 0.10 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
Observing at 150Mpc and have been Locked for 2 hours. Winds are still above 25mph but microseism is getting lower.
Since the lockloss earlier this morning, high winds of up to 40mph have made locking H1 problematic; we can occasionally lock DRMI, but rarely. Unfortunately, the forecast says these winds will stick around for most of today.
Lockloss @ 16:51 UTC - link to lockloss tool
Cause looks to me to be from high winds; gusts just hit up to 40mph and suspensions were moving right before the lockloss in a familiar way.
H1 back to observing at 21:09 UTC. After a few hours, thanks to a lull in the wind, H1 was able to get relocked. Fully automatic acquisition.
Sun Sep 29 08:11:15 2024 INFO: Fill completed in 11min 11secs