TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The main effort today has been trying to get past DHARD_WFS. DRMI is now less troublesome since Sheila and Elenna rephased REFLAIR and some dark offsets were updated, and we are able to make it up to DARM_TO_RF consistently without intervention other than an occasional SRM alignment touchup. See the comments on the rephasing alog for details on this afternoon's locking efforts.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
18:39 | CDS | Richard | CER | N | Checking RF | 18:46 |
E. Capote, R. Short
While locking DRMI several times this morning, I noticed that the SRC loops are what have been pulling the alignment away once DRMI ASC engages. I have put INP1, PRC1, and PRC2 back into Guardian to be turned on during DRMI ASC, and so far they have been working well. Elenna looked into this and noticed that the dark offsets for the sensors used for SRC needed to be updated, so she did so; the accepted SDFs are attached (all changes in the screenshot were accepted except for the last two in the list, related to RF36).
Elenna, Ryan S, Sheila
This morning Ryan and Elenna were having difficulty getting DRMI to lock, so we locked PRMI and checked the phase of RELFAIR with the same template/PRM excitation described in 84630.
The 45 MHz phase changed by 7 degrees and the 9 MHz phase changed by 5 degrees. The attached screenshot shows that the rephasing of REFL45 did have an impact on the OLG and improved the phase margin. In that measurement we also added 25% to the MICH gain.
We accepted the phases in SDF so they were in effect at our next (quick) DRMI lock, but not the gain change.
When we next locked DRMI, we measured the MICH OLG again, and here it seems that the gain is a bit high. We haven't changed the gain in the guardian, since this seemed to work well, but the third attached screenshot shows the loop gain.
After this we went to DARM_TO_RF and manually increased the TR_CARM offset to -7. The REFL power dipped to 97% of its unlocked value while the arm cavity transmission was 24 times the single-arm level, so following the plot from 62110 we'd estimate our power recycling gain was between 20 and 30.
We made two attempts to close the DHARD WFS. In the first, we stepped the TR_CARM offset to -7 and tried the guardian steps. Looking at the lockloss, it appeared that the sign of both DHARD loops was wrong and they were pulling us to a bad alignment.
In the second attempt, we stayed at the TR_CARM offset of -3 (from DARM_TO_RF) and tried to engage them manually. The yaw loop was fine and we were able to use the normal full gain. The pitch loop did seem to have the wrong sign, so we tried flipping it. The guardian would normally step this gain from -1 to -10 at the end of the DHARD_WFS state; we instead stepped it from 1 to 3, which seemed to be driving the error signal to zero, but we lost lock partway through this.
I have made three more attempts through DHARD_WFS, all unsuccessful. Each time, once in DARM_TO_RF, I manually engaged DHARD_Y's initial gain, watched the error signal converge, then increased it to the final gain without issue. I would then engage DHARD_P's initial gain with the opposite sign and watch the error signal slowly converge over many minutes. Each time, soon after I increased the DHARD_P gain, the control signal would start moving away from zero (away from the direction it had come from) and there would be a lockloss.
On the third attempt I did the same as before, but this time stepped the CARM offset from -3 to -5 before engaging DHARD_P; ultimately this attempt didn't work either. I noticed that once the DHARD_P error signal crosses zero, DHARD_Y starts running away (?). If I'm fast enough, I can turn the gains to zero before a lockloss happens, then bring the buildups back up by adjusting PRM. This juggling act of turning the DHARD loops on and off while adjusting PRM went on for a while, but inevitably ended in a lockloss.
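For reference, the by-hand engaging and stepping described above was done from a python shell roughly along these lines; this is only a sketch, and the ezca setup, ramp times, waits, and the yaw gain value are assumptions rather than the exact commands used:
# Sketch of manually engaging the DHARD loops with ramped gain steps,
# assuming an ezca connection with the usual H1: prefix.
import time
from ezca import Ezca
ezca = Ezca()                        # picks up the IFO prefix from the environment on site
ezca['ASC-DHARD_Y_TRAMP'] = 5        # ramp time in seconds
ezca['ASC-DHARD_Y_GAIN'] = -1        # placeholder value; the normal initial yaw gain was used
time.sleep(120)                      # wait for the yaw error signal to converge
ezca['ASC-DHARD_P_TRAMP'] = 10
for gain in (1, 2, 3):               # guardian's usual -1 to -10, with the sign flipped
    ezca['ASC-DHARD_P_GAIN'] = gain
    time.sleep(60)                   # watch the error signal and buildups between steps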
Elenna, Ibrahim, Ryan, Sheila
We had one more attempt at locking; we were able to close the DHARD Y WFS with the guardian, and we stepped DHARD P as we stepped up the CARM offset. Elenna fixed up DRMI along the way as well.
We engaged the DHARD P loop with the usual sign and gain when the TR_CARM offset was -35. Then we let the guardian continue, and lost lock in the CARM_TO_ANALOG state.
Ibrahim has a few plans of what to try next.
Sat Sep 13 10:08:37 2025 INFO: Fill completed in 8min 34secs
TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY: I will start H1 running an alignment then try locking to see where we're still having trouble.
The SEI and SUS trips from the EQ moved the alignment of the IMs quite a bit, so to lock XARM_IR I restored these alignments, based on the top-mass OSEMs, to their pre-trip values. This was the only intervention needed for initial alignment. Starting main locking now.
Lockloss at DHARD_WFS. After locking DRMI, I tried moving PRM, SRM, and IM4 to get the buildups better, but I could not. I then stepped through the following states slowly to see if an issue cropped up, and sure enough, while waiting for DHARD to converge after DHARD_WFS had completed, the alignment was pulled away, eventually causing a lockloss; screenshot attached. I believe this was happening yesterday also. POP18/90 had been slowly degrading since locking DRMI, which may be because only MICH is enabled during DRMI ASC.
TITLE: 09/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: NONE
SHIFT SUMMARY:
IFO is in IDLE for MAINTENANCE (and EQ) for the night. OWL is cancelled due to ongoing corrective maintenance from the most recent outage.
7.4 EQ from Kamchatka, Russia.
Sheila got to it first and untripped the tripped systems (alog 86896). I noticed that some of the suspensions (PRs and SRs) had stalled, so I set them to their respective requested states, and they got there successfully.
IFO is still in EQ mode but ringing down.
No HW WDs tripped and everything seems to be "normal" now.
LOG:
None
Ops overview screenshot attached. I untripped things but will not try locking tonight.
While untripping suspensions, I noticed that, oddly, the IM2 watchdog RMS monitors were just below the trip threshold, while those of the other IMs were 2-3 orders of magnitude smaller. The second screenshot shows this curious behavior. What about IM2 undamped is so different from the other IMs?
Summary: omicron glitch rate increases after the power outage, and when the ISS loop is turned off
Using omicron, I made some plots of the glitch rates both before/after the power outage, and then again when the ISS loops were turned on/off. The times chosen were based on the detchar request made by Elenna in alog 86878. For the glitches, I used omicron frequency/SNR cuts of 10 < freq < 1000 Hz and 5 < snr < 500. The first slide in the attached pdf is a comparison of before/after the power outage. It's fairly obvious from the summary pages that the glitch rate increased, but I wanted to quantify just how much it changed. Before the power outage, it was around 45 glitches per hour, and after, it jumped up to about 600 glitches per hour. For both periods, I used around 10 hours of low-noise time for the comparison.
I then did a comparison between when the ISS loop was on versus off. I found that the glitch rate increased when the ISS loop was turned back off, as can be seen in slide 2. On slide 3, I have summary page plots showing the WFS A YAW error signal, which looks noisier when the ISS loop is off, with the glitchgram below. On slide 4, I made spectra comparing H1:CALIB_STRAIN_CLEAN with the ISS loop off and on, and we can see an increase in noise from ~0.06 Hz to 2 Hz, and at slightly higher frequencies of ~10 Hz to 40 Hz.
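For reference, here is a minimal sketch of the kind of query behind these rate numbers, assuming gwtrigfind/gwpy and the H1:GDS-CALIB_STRAIN omicron triggers; the channel, GPS times, and exact tools are assumptions, not a record of what was actually run for the slides:
# Count omicron glitches per hour in a span, with the same cuts as above
# (10 < freq < 1000 Hz, 5 < snr < 500). Channel and GPS times are placeholders.
from gwtrigfind import find_trigger_files
from gwpy.table import EventTable
channel = 'H1:GDS-CALIB_STRAIN'
start, end = 1441800000, 1441836000      # placeholder ~10 hour GPS span
cache = find_trigger_files(channel, 'omicron', start, end)
events = EventTable.read(cache, format='ligolw', tablename='sngl_burst',
                         columns=['peak_time', 'peak_frequency', 'snr'])
keep = ((events['peak_frequency'] > 10) & (events['peak_frequency'] < 1000)
        & (events['snr'] > 5) & (events['snr'] < 500))
rate = keep.sum() / ((end - start) / 3600.)
print(f'{keep.sum()} glitches, {rate:.0f} per hour')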
IFO is still trying to lock. I was given instructions to lock until PREP_ASC_FOR_FULL_IFO but after an initial alignment, the highest we could get to was CARM_TO_ANALOG. DRMI is just very unstable, even with manual intervention to maximize POP signal width.
IFO will keep trying to lock until PREP_ASC_FOR_FULL_IFO. I will change the request per instructions.
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: It has been another busy day of troubleshooting following the power outage. There have been many alogs entered with things people have looked into and tried (please see those for more information, as it would be impossible to summarize it all here), but the gist is that we have accepted, for now, the apparent alignment shift through the IMC, even though this shows much higher reflected power, and have reduced the amount of light on the IMC REFL WFS to keep them safe. Since then, we have been trying to relock H1 in this new configuration, but this has so far been plagued with challenges; see Elenna's alog for more on that.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:41 | FAC | Randy | Hi-bay | N | Moving boom lift outside | 16:41 |
15:54 | ISC | Sheila | LVEA | N | Plugging SR785 in for IMC measurement | 16:06 |
16:11 | VAC | Richard | LVEA | N | Checking vacuum pump | 16:18 |
17:48 | PEM | TJ | LVEA | N | Check DM by HAM6 | 18:18 |
18:44 | VAC | Gerardo | LVEA | N | Replacing AIP controller on HAM6 | 19:05 |
19:42 | ISC | Camilla | Opt Lab | N | Getting some bolts and power meter | 20:15 |
19:58 | ISC | Sheila, Elenna | LVEA | LOCAL | Adjusting IOT2L waveplate | 20:59 |
21:01 | ISC | Elenna | LVEA | N | Check SR785 | 21:37 |
21:37 | CDS | Marc | MY | N | Finding parts | 22:37 |
We have been trying to lock the IFO since we reduced the IMC REFL power. DRMI locking has been causing lots of trouble. The first time, we locked DRMI quickly, but then as the ASC engaged it pulled the buildups away, and I couldn't turn off the ASC fast enough before a lockloss occurred. Then, locking DRMI and PRMI started taking a very long time, 30 minutes or more. Sheila and I tried turning off the DRMI ASC except for the beamsplitter. Then, Ryan S touched up the rest of the DRMI alignment by hand. We made it to ENGAGE_ASC_FOR_FULL_IFO, but then as the ASC engaged again the POP18 buildup started dropping, and then we saw large glitches in all the signals, LSC and ASC, followed by a lockloss. We don't know what caused the issues in full IFO ASC, but it seems similar to the DRMI issue: for whatever reason the ASC is not working properly.
We would like to go back to PREP_ASC_FOR_FULL_IFO and slowly engage the ASC by hand to figure out what's going wrong. But DRMI locking is still taking 30 minutes or more, and in our most recent attempt even the beamsplitter ASC didn't work properly. I think it's safe to say we won't be able to lock, let alone observe, for a while. Even if we could solve these ASC issues, we would still need to get to laser noise suppression and check various things to ensure the IMC and ISS loops are working properly, that we have the right amount of laser power, and that there are no significant glitches or degradations, etc.
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
IFO is LOCKING in DRMI
We still don't know exactly what happened or whether we fixed it, but:
Now dealing with DRMI instability. Hoping to get locked.
Removed and replaced the ion pump controller for the annulus system at HAM6.
No issues doing the replacement, but the AIP signal remains railed; we will wait and continue troubleshooting next week.
(Anna I., Jordan V., Gerardo M.)
Late entry.
Last Tuesday we attempted to "activate" the NEG pump located on one of the top ports of the output mode cleaner tube, but we opted to do a "conditioning" instead due to issues encountered with the gauge associated with this NEG pump. What prompted the work on the NEG pump was a noted bump in the pressure inside the NEG housing seen by its gauge; the pressure bump was also noted inside the main vacuum envelope. See the attachment for the pressure at the NEG housing (PT193) and the pressure at PT170.
Side note: two modes of heating for the NEG pump are available, ACTIVATION and CONDITIONING. The difference between the two options is the maximum applied temperature: for "conditioning" the max temperature is 250 °C, whereas for "activation" it is 550 °C. The maximum temperature is held for 60 minutes.
We started by making sure that the isolation valve was closed; we checked it and it was closed. We connected a pump cart plus can turbo pump to the system and pumped down the NEG housing volume; the aux cart dropped down to 5.4x10^-05 Torr, ready for "activation". However, we noted no response from the gauge attached to this volume; the PT193 signal remained flat. So we aborted the "activation" and instead ran a "conditioning" on the NEG. After the controller finished running the program, we allowed the system to cool down while pumping on it, then the NEG housing was isolated from the active pumping, the hoses and can turbo were removed, and the system was returned to "nominal status", i.e. no mechanical pumping on it.
Patrick logged in to h0vaclx, but the system showed the gauge as nominal, and it would not allow us to turn on the filaments because the pressure reading is too high.
Long story short, the gauge is broken. Next week we'll visit NEG1, and its gauge PT191.
ls /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO -lt | head -n 6
total 295
-rw-r--r-- 1 guardian controls 160 Sep 9 07:58 ETMX_12_Hz_1441465134.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ETMY_12_Hz_1441464662.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMY_15_Hz_1441464646.txt
-rw-r--r-- 1 guardian controls 160 Sep 9 07:50 ITMX_13_Hz_1441464644.txt
-rw-r--r-- 1 guardian controls 160 Sep 2 07:58 ETMX_12_Hz_1440860334.txt
In-Lock SUS Charge Measurements did indeed run this Tuesday.
python3 all_single_charge_meas_noplots.py
Cannot calculate beta/beta2 because some measurements failed or have insufficient coherence!
Cannot calculate alpha/gamma because some measurements failed or have insufficient coherence!
Something went wrong with analysis, skipping ITMX_13_Hz_1440859844
Sheila and I went out to IOT2L. We measured the power before and after the splitter called "IO_MCR_BS1" in this diagram of IOT2L. It is listed as a 50/50 beamsplitter. However, there is no possible way it can be, because we measured about 2.7 mW just before that splitter, 2.45 mW on the IMC REFL path, and 0.25 mW on the WFS path. So we think it must be a 90/10.
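Quick arithmetic check on that, using the measured powers above:
# Split ratio at IO_MCR_BS1 from the measured powers
p_in, p_refl, p_wfs = 2.7, 2.45, 0.25        # mW, as measured above
print(p_refl / p_in, p_wfs / p_in)           # ~0.91 and ~0.09, consistent with 90/10, not 50/50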
Sheila then adjusted the half-wave plate upstream of this splitter so that the light on IMC REFL dropped from 2.4 mW to 1.2 mW, as measured by the IMC REFL diode.
We compensated this change by raising the IMC gains by 6 dB in the guardian. I doubled all the IMC WFS input matrix gains, SDF attached.
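As a quick sanity check on the compensation (assuming the error-signal amplitude scales linearly with the detected power, so halving the light calls for a factor of 2 in gain):
# Gain increase needed to compensate the IMC REFL power being halved
import math
print(20 * math.log10(2.4 / 1.2))            # ~6.02 dB, i.e. a factor of 2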
SDFed the IMC PZT position that we think we like.
After the waveplate adjustment and updating of gains in Guardian, we took an IMC OLG measurement with the IMC locked at 2W input. The UGF was measured at 35.0 kHz.