Tue Nov 19 10:12:46 2024 INFO: Fill completed in 12min 42secs
WP12201
Marc, Fil, Dave, Ryan
h1susex is powered down to restore cabling from the new 28bit LIGO-DAC to the original 20bit General Standards DACs.
Procedure:
Put h1iopseiex SWWD into long bypass
Safe the h1susetmx, h1sustmsx and h1susetmxpi models
Stop all models on h1susex
Fence h1susex from the Dolphin fabric
Power down h1susex.
D. Barker, F. Clara, M. Pirello
Swap to original 20 bit DAC complete. Here are the steps we took to revert from LD32 to GS20
Included images of front and rear of rack prior to changes.
Tue19Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
09:59:38 h1susex h1iopsusex
09:59:51 h1susex h1susetmx
10:00:04 h1susex h1sustmsx
10:00:17 h1susex h1susetmxpi
15:22 UTC lockloss - 1416065051
FSS_FAST_MON grew just before the LL; the IMC lost it at the same time as ASC, and FSS and ISS lost it at the same time.
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 7mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.54 μm/s
QUICK SUMMARY:
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Shift consisted of 3 locks and 2 IMC locklosses
Locking has been fairly straightforward once the IMC decides to get and stay locked.
Survived a 5.6M Tonga Quake, and a few PI ring ups.
Tagging SUS because ITMY Mode 5 is ringing up. I have turned OFF the Gain to it as the current Nominal state is making IYM5 worse and has been for the last few locks.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:24 | EPO | Corey +1 | Overpass, Roof | N | Tour | 00:24 |
01:06 | PCAL | Rick S | PCAL Lab | Y | Looking for parts | 02:16 |
4:20 UTC Lockloss https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1416025235
This Alog was brought to you by your favorite IMC Lockloss tag.
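Aside for anyone cross-checking the GPS event numbers in these lockloss links against the UTC times quoted here: a minimal sketch, assuming gwpy is available on the workstation (this is not part of the lockloss tool itself).

# Minimal sketch: convert a lockloss GPS event number to UTC. Assumes gwpy is installed.
from gwpy.time import tconvert

event_gps = 1416025235  # GPS number from the lockloss link above
print(tconvert(event_gps))  # should print a 2024-11-19 04:20 UTC timestamp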
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 16mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.67 μm/s
QUICK SUMMARY:
1:57:53 UTC IMC Lockloss
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/?event=1416016690
Fought with the IMC a little to get it to lock, by requesting Down, Init, and Offline a few times. But once it finally locked, relocking went fast.
2:55:01 UTC Nominal LowNoise
Incoming 5.6M EQ.
3:04 UTC Observing Reached
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.70 μm/s
QUICK SUMMARY:
Despite the useism being elevated, H1 has been Locked for 20 minutes.
The Plan is to continue Observing all night.
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 00:17 UTC
A rough day for locking.
First, we had planned commissioning from 8:30AM to 11:30AM, during which commissioners continued investigating the IMC.
Sheila went into the LVEA to realign onto the POP and ALS beatnotes following a PR3 move that got higher buildups, which was successful. For this, we stayed at CHECK_VIOLINS_BEFORE_POWERUP (losing lock twice from there due to the IMC losing lock).
We then got to locking and were able to acquire NLN and go into OBSERVING for 30 minutes (starting 21:44 UTC). The IMC glitch then caused a Lockloss (alog 81338).
After this LL, we were unable to lock DRMI despite just having done an initial alignment. Tried touching up PRM and SRM during PRMI and DRMI respectively, succeeding in the former and failing in the latter.
I decided to do an initial alignment, which happened fully automatically. We were able to lock DRMI but a lockloss at CARM_5_PICOMETERS brought us down due to the IMC glitch. We were then able to come back up to NLN fully automatically, finally getting to OBSERVING.
Overall, high microseism and IMC glitches made locking difficult. We still got there in the end though.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
16:36 | FAC | Karen | Optics Lab | N | Technical Cleaning | 16:36 |
16:53 | FAC | Kim | MX, EX | N | Technical Cleaning | 17:57 |
16:58 | FAC | Karen | MY | N | Technical cleaning | 18:57 |
17:38 | VAC | Janos | EY | N | Compressor measurement | 17:59 |
18:28 | ISC | Sheila, Oli | LVEA | Y | Realigning ALS Beatnote | 19:28 |
22:33 | CDS | Patrick | MSR | N | Timing removed from the Beckhoff system | 22:41 |
00:24 | EPO | Corey +1 | Overpass, Roof | N | Tour | 00:24 |
Dave, Fernando, Patrick
Fernando and I powered off the computer, unplugged the power and other cables from the back of it to slide it forward in the rack, took the top off the chassis, disconnected the blue timing fiber from the PCI board, capped the fiber ends, pulled the fiber out of the chassis, slid the computer back into the rack, and plugged the cables back in, except for the KVM switch cable, which is not used anyway. We left the computer on. This was to stop the timing overview from turning red when the computer is powered on. We did not put in a work permit, but cleared it with Daniel and informed the operator.
IMC caused Lockloss 30 mins into observing.
We had another instance of the -800nS timing glitch of the EY CNS-II GPS receiver this morning from 05:47:50 to 06:00:04 PDT (duration 12min 11 sec).
A reminder of this receiver's history:
At EX and EY we had no CNS-II issues for the whole of O4 until the past few months.
The EY 1PPS, nominally with a -250 +/- 50 nS difference to the timing system, went into a -800nS diff state on 30th Sept, then again 1st October and again 18th October.
We replaced its power supply Tue 22nd October 2024, after which there have been no glitches for 27 days until this morning.
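For reference, a minimal sketch of how an excursion like this could be flagged from a record of the 1PPS offset, assuming the offsets have already been pulled out as arrays; the data source and threshold choices here are assumptions, not the monitor we actually run.

# Sketch only: flag 1PPS offsets outside the nominal -250 +/- 50 nS band.
import numpy as np

def find_pps_glitches(t_gps, offset_ns, nominal=-250.0, tol=50.0):
    """Return a list of (start_gps, end_gps) spans where |offset - nominal| > tol."""
    t_gps = np.asarray(t_gps, dtype=float)
    bad = np.abs(np.asarray(offset_ns, dtype=float) - nominal) > tol
    spans, start = [], None
    for t, flag in zip(t_gps, bad):
        if flag and start is None:
            start = t                 # excursion begins
        elif not flag and start is not None:
            spans.append((start, t))  # excursion ends
            start = None
    if start is not None:
        spans.append((start, t_gps[-1]))
    return spans

# A -800 nS step like this morning's would show up as a single ~12 minute span.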
Closes FAMIS26017, last checked in alog81183.
HAMs:
HAM sensor noise at 7-9Hz seems to be reduced for most chambers (HAMs and both stages of the BSCs).
BSCs:
ITMY_ST1_CPSINF_V2 looks reduced above 30 Hz.
For all of the BSCs' ST2 sensors specifically, the sensor noise at 6-9Hz is reduced.
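For anyone repeating this check outside of the FAMIS template, a minimal sketch of the band-limited comparison behind statements like "reduced at 7-9Hz"; the data-fetching step is assumed to have already happened, and the averaging length is illustrative.

# Sketch: compare CPS sensor noise in a frequency band between two data stretches.
import numpy as np
from scipy.signal import welch

def band_asd(data, fs, f_lo, f_hi):
    """Median ASD of `data` (sample rate fs) between f_lo and f_hi [Hz]."""
    f, pxx = welch(data, fs=fs, nperseg=int(16 * fs))  # ~16 s averages
    band = (f >= f_lo) & (f <= f_hi)
    return np.median(np.sqrt(pxx[band]))

# ratio < 1 would back up "sensor noise at 7-9Hz seems to be reduced":
# ratio = band_asd(data_now, fs, 7, 9) / band_asd(data_prev, fs, 7, 9)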
Mon Nov 18 10:13:07 2024 INFO: Fill completed in 13min 4secs
Sheila, Ibrahim, Jenne, Vicky, Camilla
After Ibrahim told Sheila that DRMI has been struggling to lock, she checked the POP signals and found that something's been drifting and POP now appears to be clipping; see POP_A_LF trending down. We use this PD in full lock so we don't want any clipping.
We had similar issues last year that gave us stability problems. Those issues last year didn't have "IMC" lock-losses, so we think this isn't the main issue we're having now, but it may be affecting stability.
Trends showing the POP clipping last year, and now. Last year we moved PR3 to remove this clipping while looking at the coherence between ASC-POP_A_NSUM (one of the QPDs on the sled in HAM3) and LSC-POP_A (LSC sensor in HAM1): 74578, 74580, 74581.
Similar coherence plots to 74580, for now, show that the coherence is bad:
Sheila is moving the PRC cavity now, which is improving the POPAIR signals; the attached plot is the ASC-POP_A_NSUM to LSC-POP_A coherence with DRMI only locked, before and during the move. See improvements. Sheila is checking that she's in the best position now.
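The coherence measurements themselves were made with the usual templates (74580); purely as a sketch of the same comparison done offline, assuming the two time series have already been fetched at a common sample rate:

# Sketch: coherence between ASC-POP_A_NSUM and LSC-POP_A, as in alogs 74578-74581.
from scipy.signal import coherence
import matplotlib.pyplot as plt

def plot_pop_coherence(asc_pop_nsum, lsc_pop_a, fs):
    f, coh = coherence(asc_pop_nsum, lsc_pop_a, fs=fs, nperseg=int(64 * fs))
    plt.semilogx(f, coh)
    plt.xlabel('Frequency [Hz]')
    plt.ylabel('Coherence, ASC-POP_A_NSUM vs LSC-POP_A')
    plt.ylim(0, 1)
    plt.show()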
We have been holding 2W with ASC on before powering up, and using the guardian state PR2_SPOT move, which lets us move the sliders on PR3 and moves PR2, IM4 and PRM sliders to follow PR3.
Moving PR3 by -1.7 urad (slider counts) increased the power on LSC POP, and POPAIR 18, but slightly misaligned the ALS comm beatnote. We continued moving PR3 to see how wide the plateau is on LSC POP, we moved it another -3urad without seeing power drop on LSC POP, but the ALS paths started to have trouble staying locked so we stopped there. (POP18 was still improving, but I went to ISCT1 after the OFI vent and adjusted alignment onto that diode 79883 so it isn't a nice reference). I moved PR3 yaw back to 96 urad on the yaw slider (we started at 100urad), a location where we were near the top for POP and POPAIR 18, so in total we started with PR3 yaw at 100 on the slider and ended with 96 on the slider.
H1 was dropped out of OBSERVING due to the TCS ITMy CO2 laser unlocking at 0118utc. The TCS_ITMY_CO2 guardian relocked it within 2min.
It was hard to see the reason why at first (there were no SDF Diffs), but eventually we saw a User Message via GRD IFO (on Ops Overview) pointing to something wrong with TCS_ITMY_CO2. Oli was also here and they mentioned seeing this, along with Camilla, on Oct 9th (alog); this was the known issue of the TCSy laser nearing the end of its life. It was replaced a few weeks later on Oct 22nd (alog).
Here are some of the lines from the LOG:
2024-11-13_19:43:31.583249Z TCS_ITMY_CO2 executing state: LASER_UP (10)
2024-11-14_01:18:56.880404Z TCS_ITMY_CO2 [LASER_UP.run] laser unlocked. jumping to find new locking point
.
.
2024-11-14_01:20:12.130794Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.109375
2024-11-14_01:20:12.196990Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] ezca: H1:TCS-ITMY_CO2_PZT_SET_POINT_OFFSET => 35.0
2024-11-14_01:20:12.297890Z TCS_ITMY_CO2 [RESET_PZT_VOLTAGE.run] timer['wait'] done
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 EDGE: RESET_PZT_VOLTAGE->ENGAGE_CHILLER_SERVO
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 calculating path: ENGAGE_CHILLER_SERVO->LASER_UP
2024-11-14_01:20:12.379861Z TCS_ITMY_CO2 new target: LASER_UP
CO2Y has only unlocked/relocked once since we power cycled the chassis on Thursday 14th (t-cursor in attached plot).
0142: ~30min later had another OBSERVING-drop due to CO2y laser unlock.
While it is normal for the CO2 lasers to unlock from time to time, whether it's from running out of range of their PZT or just generically losing lock, this is happening more frequently than normal. The PZT doesn't seem to be running out of range, but it does seem to be running away for some reason. Looking back, it's unlocking itself ~2 times a day, but we haven't noticed since we haven't had a locked IFO for long enough lately.
We aren't really sure why this would be the case, chiller and laser signals all look as they usually do. Just to try the classic "turn it off and on again", Camilla went out to the LVEA and power cycled the control chassis. We'll keep an eye on it today and if it happens again, and we have the time to look further into it, we'll see what else we can do.
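The unlock rate above came from trending; a rough cross-check straight from the guardian log text (matching the message format quoted above; the log file location is an assumption) could look like:

# Sketch: count TCS_ITMY_CO2 "laser unlocked" events per day from guardian log lines.
import re
from collections import Counter

UNLOCK = re.compile(r'^(\d{4}-\d{2}-\d{2})_\S+ TCS_ITMY_CO2 \[LASER_UP\.run\] laser unlocked')

def unlocks_per_day(log_lines):
    days = Counter()
    for line in log_lines:
        m = UNLOCK.match(line)
        if m:
            days[m.group(1)] += 1
    return days

# e.g. with open('/path/to/TCS_ITMY_CO2.log') as f: print(unlocks_per_day(f))  # path is a placeholder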
While looking at locklosses today, Vicky and I noticed that after we reach NLN, the MC2/IM4 TRANS power increases 0.1 to 0.5%.
Daniel helped look at this; we expect the ISS to change to keep the power out of the IMC constant, but the power after the IMC on IM4 TRANS (not centered) changes ~1% too. Everything downstream of the ISS AOM sees this change (plot); something is seeing a slow ~1 hour thermalization.
The same signals at LLO show a similar amount of noise (slightly more in MC2_TRANS) but no thermalization drift; however, LLO has a lot less IFO thermalization.
Elenna and Craig noted this in 68370 too.
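A minimal sketch of the kind of comparison behind these numbers, assuming NDS access via gwpy; the channel name below is a placeholder, not necessarily the exact one used in the attached plots.

# Sketch: quantify the slow drift after reaching NLN.
from gwpy.timeseries import TimeSeries

def fractional_drift(channel, gps_nln, span=3600):
    """Percent change of the channel mean between the first and last 5 min of `span` seconds."""
    data = TimeSeries.fetch(channel, gps_nln, gps_nln + span)
    early = data.crop(gps_nln, gps_nln + 300).mean().value
    late = data.crop(gps_nln + span - 300, gps_nln + span).mean().value
    return 100.0 * (late - early) / early

# print(fractional_drift('H1:IMC-IM4_TRANS_SUM_OUTPUT', gps_of_nln))  # placeholder channel name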
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new. Something changed to make us not able to survive them as well in lock. The NPRO was swapped so isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream 81155.
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with command 'lockloss select' or 'lockloss show 1415370858'.
Updated list of things that have been checked above and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal locklosses and (mostly) the only ones we saw before September.
Camilla, Oli
Recently, because of the PSL/IMC issues, we've been having a lot of times where the IFO (according to verbals) seemingly goes into READY and then immediately goes to DOWN because the IMC is not locked. Camilla and I checked this out today and it turns out that these locklosses are actually from LOCKING_ARMS_GREEN - the checker in READY that is supposed to make sure the IMC is locked was actually written as nodes['IMC_LOCK'] == 'LOCKED' (line 947), which just checks that the requested state for IMC_LOCK is LOCKED, and it doesn't actually make sure the IMC is locked. So READY will return True, it will continue to LOCKING_ARMS_GREEN, and immediately lose lock because LOCKING_ARMS_GREEN actually makes sure the IMC is locked. This all happens so fast that verbals doesn't have time to announce LOCKING_ARMS_GREEN before we are taken to DOWN.
To (hopefully) solve this problem, we changed nodes['IMC_LOCK'] == 'LOCKED' to be nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done, and this should make sure that we stay in READY until the IMC is fully locked. ISC_LOCK has been reloaded with these changes.
The reason it has been doing this is because there is a return True in the main method of ISC_LOCK's READY state. When a state returns True in its main method, it will skip the run method.
I've loaded the removal of this in ISC_LOCK.
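Putting the two fixes together, the intended behaviour is roughly the following schematic Guardian-style sketch (not the actual ISC_LOCK code; `nodes` is the node manager defined in ISC_LOCK): main() no longer returns True, so run() executes and READY holds until IMC_LOCK has both arrived at and completed its requested state.

# Schematic sketch only, not the real ISC_LOCK READY state.
from guardian import GuardState

class READY(GuardState):
    request = True

    def main(self):
        # no "return True" here, so Guardian goes on to call run()
        pass

    def run(self):
        # only complete (and move on to LOCKING_ARMS_GREEN) once the IMC
        # has actually arrived at and finished its LOCKED state
        return nodes['IMC_LOCK'].arrived and nodes['IMC_LOCK'].done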