Wed Nov 05 10:11:19 2025 INFO: Fill completed in 11min 15secs
TJ had another lockloss this morning around the time the beam diverters close; it happened during OMC whitening. Following up on what Jenne and Elenna saw last night (87962), I looked at the HAM1 GS13s during the lockloss.
In the attached screenshot, the first time cursor shows when the CLOSE_BEAM_DIVERTERS guardian state ends and the guardian moves on to switching the OMC whitening state; the HAM1 ISI is still shaking from the beam diverters when the OMC DCPD gains are changed. I've saved this template as sheila.dwyer/ndscope/LOCKLOSS/beam_diverter_lockloss_check.yml
I've added a 10-second timer to the close beam diverters state so that these things will be separated in time.
This didn't seem to work; we had another lockloss when closing the beam diverters.
I rewrote the Close_Beam_Diverters state to close the 3 beam diverters some number of seconds apart. I chose 2 seconds to start with because Sheila had mentioned that the GS13s looked to have settled in about that time. We lost lock 1 second after closing the first beam diverter, the A (REFL) diverter.
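A minimal sketch of what a staggered version of that state could look like. The channel names follow the H1:SYS-MOTION_C_BDIV_*_CLOSE pattern from the guardian log below; the `ezca` and `sleep` interfaces here are simplified stand-ins for the real guardian machinery, which uses non-blocking timers in the state's run() method rather than sleeping.

```python
import time

# Letter-to-diverter mapping taken from the CLOSE_BEAM_DIVERTERS guardian log
DIVERTER_CHANNELS = [
    'H1:SYS-MOTION_C_BDIV_A_CLOSE',  # REFL
    'H1:SYS-MOTION_C_BDIV_B_CLOSE',  # POP
    'H1:SYS-MOTION_C_BDIV_D_CLOSE',  # AS
]

def close_diverters_staggered(ezca, spacing=2.0, sleep=time.sleep):
    """Close the beam diverters `spacing` seconds apart so the HAM1
    GS13s can settle between kicks (~2 s per Sheila's estimate)."""
    for i, chan in enumerate(DIVERTER_CHANNELS):
        if i > 0:
            sleep(spacing)
        ezca[chan] = 1  # 1 = request diverter closed
```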
At this point I think we will leave them open, unless some other ideas pop up before we get back to that point.
Jenne, Elenna, TJ, Sheila
There is a small signal in the GS13 (500 counts on H1 INF) when the beam diverter starts moving, about 0.3 seconds before the open switch turns false. Then the large kick (6000 counts) happens 0.3 seconds after the close switch shows that the beam diverter is closed.
Looking at the HAM6 beam diverter, it only has a small (500 counts) kick after the beam diverter closed. Jim tells us that H1 is on the +y -x side of the ISI, so close to the REFL and POP beam diverters in HAM1.
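One way to quantify those lead/lag times from the data, sketched here with synthetic arrays; the sample rate, threshold, and signal shapes below are placeholders for illustration, not the actual analysis.

```python
import numpy as np

def kick_delay(gs13, switch, fs, threshold):
    """Seconds from the switch-state transition to the first GS13
    sample exceeding `threshold` counts; negative means the kick
    led the switch readback."""
    switch_idx = np.flatnonzero(np.diff(switch.astype(int)) != 0)[0] + 1
    kick_idx = np.flatnonzero(np.abs(gs13) > threshold)[0]
    return (kick_idx - switch_idx) / fs

# Synthetic example: open flag drops at sample 512, a small 500-count
# signal appears ~0.3 s earlier, as described above.
fs = 256
switch = np.ones(1024)
switch[512:] = 0
gs13 = np.zeros(1024)
gs13[435:] = 500.0
print(round(kick_delay(gs13, switch, fs, threshold=100), 2))  # prints -0.3
```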
It seems that the REFL beam diverter was the cause of one lockloss, after TJ separated them in time.
Below are screenshots of the sequence of kicks seen by the GS13s when both REFL and POP beam diverters are opened and closed; opening and closing look very similar. We see the same pattern in HAM6 when moving the AS beam diverter, although with a smaller amplitude. (I attached a screenshot of H1; Elenna looked at all the GS13s in HAM6 and says they are all smaller.)
This kind of kick has happened in each lock since the ISI was installed, although it seems to have been smaller on June 4/5. It didn't cause locklosses until Oct 21st.
Here is a look at all the GS13s in both HAM1 and HAM6.
TITLE: 11/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 11mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.60 μm/s
QUICK SUMMARY: Lost lock about an hour ago; trying to lock DRMI now. Had two locklosses and relocked during the last shift. Looks like the MSR WAP is stuck (stand down query failure); I'll restart that now. The useism is up again; somehow we locked last night, though. Today's plan is to Observe!
Just lost lock at Close_Beam_Diverters. We had two successful relocks last night that went through this state just fine, so I hoped this would go well again today. I'll step through close beam diverters next time.
We tend to have more automated relocks at night, and I wonder if the elevated 1-3 Hz ground motion that we see during the day might be shaking us just enough.
TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Bit of a rough start to the shift with H1 having issues getting to NLN. Managed to finally get to Observing after a few hours of work (see earlier alogs). Violins were high (but not off the DARM FOM), and have steadily been dropping over the 2+hrs of this lock.
Had a low SQZ_PMC voltage, but a squeezer lockloss and automatic relock brought the voltage back to a better state (70+ volts).
LOG:
At the beginning of the shift H1 had a lockloss during late stages of ISC_LOCK where we slowly stepped through states and found the lockloss occurred at CLOSING BEAM DIVERTERS (see Elenna's alog 87961).
Took a couple of locking attempts for H1 to return to this same point of ISC_LOCK locking (during this time is when Jenne noticed/addressed the power scaling issue; see alog 87965).
While locking, I chatted with Oli, and they mentioned that for some of their locklosses they would go state by state, but ALSO, if there was a 1 Hz ring-up during these states, they would go to "ASC HiGn" and then wait there a few minutes until the 1 Hz ring-up damped down (seen on the PR_GAIN channel on PRMI.sb of nuc30 + ASC-INP1_P_INMON on nuc29).
So for this 2nd attempt at getting through CLOSE BEAM DIVERTERS this shift, we went through the states:
Made it to NLN, but could not go to Observing for two (I believe) separate reasons.
1) SQZ_OPO_LR: "CMB EXC is ON" and SDF diffs (see attachment #1). For the SDF diffs, I reverted these with no problems.
At this point, I tried to go to OBSERVING, but could not.
2) Then noticed for SQZ_PMC we had the notification "PMC PZT volts low". Thought I would address this, but then I was able to take H1 to OBSERVING, so I kicked the can down the road and went to OBSERVING instead of addressing the low voltage. See attachment #2 for the PMC voltage over the last week. We still have the notification for the low PMC voltage.
I ran the dry air system through its quarterly test (FAMIS task). The system was started around 8:20 am local time and turned off by 11:15 am. The system achieved a dew point of -50 °F; see the attached photo taken toward the end of the test. Noted that we may be running low on oil at the Kobelco compressor; checking with the vendor on this. The picture of the oil level was taken while the system was off.
[Corey, Jenne]
We lost lock on the way up (not at all sure why; we hadn't yet gotten to the state we were worried about, but microseism is on an upward trend again), and when the IFO was relocking, the new power scaling (from alog 87806) did something funny.
The guardian is set up to use the IM4 output if the IMC is locked, and the IMC input power otherwise. It looks like this has worked successfully for the whole week since it was implemented, except for right now.
The MC2 trans sum was plenty high (but had only just gotten there), correctly indicating that the IMC was locked, less than a second before the guardian had to decide whether the IMC was locked, so the laser power guardian took the IM4 trans output for the power scaling. However, it looks like it read that IM4 value while IM4 trans was still settling, since it has some whitening/dewhitening.
Once I realized that the power scaling was wonky, I just set the offset to 2, so that we could get back to locking while I investigated, which worked. Since the laser power guardian does reset that value every time there's a change to the laser power, the guardian has already picked it up and put in the more correct value of 1.8.
To prevent this from happening again, I put a bit of logic into the set_power_scale() function of the LASER_PWR guardian, such that if IM4 trans reads less than 1 W, we just use the input power as we used to do. We basically never request 1 W or less from the PSL while also locking the full IFO, so this should be fine.
I tested it by bringing the LASER_PWR guardian to 3W (it used the IM4 trans as it should), and then down to 1W (which ends up as about 0.9W at IM4 trans), and the guardian correctly used the input power for the power scale. So, I think it should work fine.
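The fallback described above boils down to something like the following; the function and argument names here are placeholders rather than the real LASER_PWR guardian code.

```python
def power_scale(im4_trans_W, input_power_W, min_im4_W=1.0):
    """Use IM4 trans for the power scaling unless it reads below ~1 W
    (e.g. still settling through its whitening/dewhitening), in which
    case fall back to the requested input power, as the guardian did
    before the change in alog 87806."""
    if im4_trans_W < min_im4_W:
        return input_power_W
    return im4_trans_W
```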
Functionality test for the corner station turbo pumps, see notes below:
Output mode cleaner tube turbo station;
Scroll pump hours: 7284.7
Turbo pump hours: 7306
Crash bearing life is at 100%
X beam manifold turbo station;
Scroll pump hours: 3387.3
Turbo pump hours: 3391
Crash bearing life is at 100%
Y beam manifold turbo station;
Scroll pump hours: 4157.7
Turbo pump hours: 2826
Crash bearing life is at 100%
We have had multiple locklosses right near the end of the locking sequence around CLOSE BEAM DIVERTERS and OMC WHITENING. Today, we held at different states while locking. First, we paused at MAX POWER while we waited for the R waves of a few different earthquakes to pass. Then, we proceeded successfully through to LASER NOISE SUPPRESSION. There are large glitches in this state, probably because of the CARM loop, so we stayed in the state to wait for the glitches to die down. Then, Corey selected ADS TO CAMERAS. This state proceeded fine, but we saw the 1 Hz starting to ring up. Corey selected the ASC Hi Gain script button on the seismic page to damp the 1 Hz back down. It damped down. We next went to INJECT SQUEEZING, which was fine. We waited again about a minute after the state finished. Finally, Corey selected CLOSE BEAM DIVERTERS and we lost lock. Here is the guardian log so we can see the timestamps of each step.
We had the same lockloss earlier; it's logged as being from "omc whitening", but usually the beam diverters state is very fast, so we are in OMC whitening by the time the lockloss happens. Attached are the suspension signals from both locklosses. It looks like something rings up in DARM.
2025-11-05_00:36:48.787114Z ISC_LOCK [LASER_NOISE_SUPPRESSION.run] ezca: H1:IMC-REFL_SERVO_IN2GAIN => -29
2025-11-05_00:36:48.787719Z ISC_LOCK [LASER_NOISE_SUPPRESSION.run] ezca: H1:IMC-REFL_SERVO_FASTGAIN => -16
2025-11-05_00:37:24.445085Z ISC_LOCK REQUEST: ADS_TO_CAMERAS
2025-11-05_00:37:24.445085Z ISC_LOCK calculating path: LASER_NOISE_SUPPRESSION->ADS_TO_CAMERAS
2025-11-05_00:37:24.446434Z ISC_LOCK new target: ADS_TO_CAMERAS
2025-11-05_00:37:24.501243Z ISC_LOCK EDGE: LASER_NOISE_SUPPRESSION->ADS_TO_CAMERAS
2025-11-05_00:37:24.501532Z ISC_LOCK calculating path: ADS_TO_CAMERAS->ADS_TO_CAMERAS
2025-11-05_00:37:24.505287Z ISC_LOCK executing state: ADS_TO_CAMERAS (578)
2025-11-05_00:37:24.506308Z ISC_LOCK [ADS_TO_CAMERAS.enter]
2025-11-05_00:37:24.532550Z ISC_LOCK [ADS_TO_CAMERAS.main] ezca: H1:GRD-CAMERA_SERVO_REQUEST => CAMERA_SERVO_ON
2025-11-05_00:37:24.533290Z ISC_LOCK [ADS_TO_CAMERAS.main] camera servo guardian state
2025-11-05_00:37:24.533992Z ISC_LOCK [ADS_TO_CAMERAS.main] ezca: H1:ASC-AS_A_RF36_Q_YAW_SW1 => 8
2025-11-05_00:37:24.785056Z ISC_LOCK [ADS_TO_CAMERAS.main] ezca: H1:ASC-AS_A_RF36_Q_YAW => ON: OFFSET
2025-11-05_00:38:24.227696Z ISC_LOCK [ADS_TO_CAMERAS.run] ezca: H1:SUS-ETMX_L2_DRIVEALIGN_P2L_SPOT_GAIN => 3.13
2025-11-05_00:38:24.228345Z ISC_LOCK [ADS_TO_CAMERAS.run] ezca: H1:SUS-ETMY_L2_DRIVEALIGN_P2L_SPOT_GAIN => 4.89
2025-11-05_00:38:24.230162Z ISC_LOCK [ADS_TO_CAMERAS.run] ezca: H1:SUS-ETMX_L2_DRIVEALIGN_Y2L_SPOT_GAIN => 4.85
2025-11-05_00:38:24.230577Z ISC_LOCK [ADS_TO_CAMERAS.run] ezca: H1:SUS-ETMY_L2_DRIVEALIGN_Y2L_SPOT_GAIN => 0.78
2025-11-05_00:39:40.380293Z ISC_LOCK REQUEST: INJECT_SQUEEZING
2025-11-05_00:39:40.383415Z ISC_LOCK calculating path: ADS_TO_CAMERAS->INJECT_SQUEEZING
2025-11-05_00:39:40.386240Z ISC_LOCK new target: INJECT_SQUEEZING
2025-11-05_00:39:40.448133Z ISC_LOCK EDGE: ADS_TO_CAMERAS->INJECT_SQUEEZING
2025-11-05_00:39:40.448133Z ISC_LOCK calculating path: INJECT_SQUEEZING->INJECT_SQUEEZING
2025-11-05_00:39:40.453186Z ISC_LOCK executing state: INJECT_SQUEEZING (580)
2025-11-05_00:39:40.453467Z ISC_LOCK [INJECT_SQUEEZING.enter]
2025-11-05_00:39:40.471153Z ISC_LOCK [INJECT_SQUEEZING.main] ezca: H1:GRD-SQZ_MANAGER_REQUEST => FREQ_DEP_SQZ
2025-11-05_00:39:40.472228Z ISC_LOCK [INJECT_SQUEEZING.main] timer['CheckSqz'] = 90
2025-11-05_00:39:40.473316Z ISC_LOCK [INJECT_SQUEEZING.main] ezca: H1:GRD-SUS_PI_REQUEST => PI_DAMPING
2025-11-05_00:39:40.720211Z ISC_LOCK [INJECT_SQUEEZING.run] USERMSG 0: SQZ_MANAGER: has notification
2025-11-05_00:39:45.281174Z ISC_LOCK [INJECT_SQUEEZING.run] USERMSG 0: SQZ_MANAGER: has notification
2025-11-05_00:40:51.377382Z ISC_LOCK REQUEST: CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:51.378087Z ISC_LOCK calculating path: INJECT_SQUEEZING->CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:51.378087Z ISC_LOCK new target: CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:51.459548Z ISC_LOCK EDGE: INJECT_SQUEEZING->CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:51.459721Z ISC_LOCK calculating path: CLOSE_BEAM_DIVERTERS->CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:51.464167Z ISC_LOCK executing state: CLOSE_BEAM_DIVERTERS (590)
2025-11-05_00:40:51.465760Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.enter]
2025-11-05_00:40:51.475467Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] Closing REFL beam diverter
2025-11-05_00:40:51.476845Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] ezca: H1:SYS-MOTION_C_BDIV_A_CLOSE => 1
2025-11-05_00:40:51.476939Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] Closing AS beam diverter
2025-11-05_00:40:51.478119Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] ezca: H1:SYS-MOTION_C_BDIV_D_CLOSE => 1
2025-11-05_00:40:51.478494Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] ezca: H1:VID-CAM16_EXP_REQ => 36000
2025-11-05_00:40:51.579758Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] ezca: H1:VID-CAM16_EXP_SET => 1
2025-11-05_00:40:51.579758Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] Closing POP beam diverter
2025-11-05_00:40:51.580545Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.main] ezca: H1:SYS-MOTION_C_BDIV_B_CLOSE => 1
2025-11-05_00:40:53.156310Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.run] Unstalling IMC_LOCK
2025-11-05_00:40:53.347649Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.run] Unstalling VIOLIN_DAMPING
2025-11-05_00:40:53.521338Z ISC_LOCK JUMP target: LOCKLOSS
2025-11-05_00:40:53.521845Z ISC_LOCK [CLOSE_BEAM_DIVERTERS.exit]
2025-11-05_00:40:53.582367Z ISC_LOCK JUMP: CLOSE_BEAM_DIVERTERS->LOCKLOSS
2025-11-05_00:40:53.582367Z ISC_LOCK calculating path: LOCKLOSS->CLOSE_BEAM_DIVERTERS
2025-11-05_00:40:53.585523Z ISC_LOCK new target: DOWN
2025-11-05_00:40:53.590140Z ISC_LOCK executing state: LOCKLOSS (2)
I double checked that the squeezer wasn't doing anything (just in case waiting longer than 1 min might help disentangle things). The SQZ_MANAGER was hanging out in state 69, where it asks the SQZ_ANG_ADJUST guardian to go to 'ADJUST_SQZ_ANG_ADF'. On the way to that state, the last thing the sqz manager had done was tell the SQZ_ANG_ADJUST guardian to go to WAIT_SQZ_AND_ADF, which just sits and waits for ISC_LOCK to get to NomLowNoise. So, I think I've convinced myself the lockloss is nothing to do with the squeezer.
I plotted several channels (starting from the lockloss 'scope). I'm still not really sure why the test mass L2 channels get a bit fuzzy about one second before the beam diverters close, but I think it's a red herring anyway. The real thing that seems to be interesting is that the HAM1 GS13s see a kick (due to the beam diverter moving) right when DARM sees a kick (the kick that Elenna pointed out).
So, I agree with Elenna and Corey's plan for now, to do the last few locking states slowly, then doing CLOSE_BEAM_DIVERTERS by hand to figure out which beam diverter is causing the problem. I also think it could be interesting to trend the GS13s for HAMs 1 and 6 for other locks (the other one tonight that we lost lock, as well as a few that were successful), to see if the kicks have gotten bigger or if they are the same size as always.
I also think that if closing the beam diverters causes another lockloss, we should just manual past the CLOSE_BEAM_DIVERTERS state, run for the night with them open, and tag DetChar. I'm pretty sure we've shown that it no longer really matters whether they are open or closed, so I think we'd rather have some data with the beam diverters open than none at all.
Dr. Dripta B. and I went to EX today to do a PCAL End Station measurement with PS4 using the T1500062 ES Power Sensor RR Meas : Procedures & Log.
I took a picture of the beam spots before we started, and after we were done.
We ran Miriam's fantastic shell tool and left her some feedback written in the margins of section 2.2 of the T1500062 Proc & Log document linked above.
We tried to run the measurement right at the end station, but we didn't know that the NDS servers were having problems this week.
I eventually changed the server to h1daqnds1, port 8088, to get the data instead.
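The kind of fallback that would have saved the head-scratching, try the default server, then fall back to h1daqnds1:8088, can be sketched generically. The probe function and the first server name below are hypothetical; the real data access goes through the nds2 client library.

```python
def pick_nds_server(probe, candidates):
    """Return the first (host, port) pair for which `probe` succeeds.
    `probe(host, port)` should attempt an NDS connection and return
    True/False rather than raising."""
    for host, port in candidates:
        if probe(host, port):
            return host, port
    raise RuntimeError('no NDS server reachable: %r' % (candidates,))
```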
Command Ran:
python generate_measurement_data.py --WS PS4 --date 2025-11-03
Reading in config file from python file in scripts
../../../Common/O4PSparams.yaml
PS4 rho, kappa, u_rel on 2025-11-03 corrected to ES temperature 299.5 K :
-4.7017855975867215 -0.0002694340454223 2.686163396659873e-05
Copying the scripts into tD directory...
Connected to h1daqnds1
martel run
reading data at start_time: 1446313243
reading data at start_time: 1446313659
reading data at start_time: 1446313992
reading data at start_time: 1446314497
reading data at start_time: 1446314910
reading data at start_time: 1446315256
reading data at start_time: 1446315430
reading data at start_time: 1446316080
reading data at start_time: 1446316490
Ratios: -0.46104301054554775 -0.4663040860974578
writing nds2 data to files
finishing writing
Background Values:
bg1 = 9.224674; Background of TX when WS is at TX
bg2 = 4.512480; Background of WS when WS is at TX
bg3 = 9.191238; Background of TX when WS is at RX
bg4 = 4.365167; Background of WS when WS is at RX
bg5 = 9.279967; Background of TX
bg6 = 0.429854; Background of RX
The uncertainty reported below are Relative Standard Deviation in percent
Intermediate Ratios
RatioWS_TX_it = -0.461043;
RatioWS_TX_ot = -0.466304;
RatioWS_TX_ir = -0.455583;
RatioWS_TX_or = -0.461207;
RatioWS_TX_it_unc = 0.084627;
RatioWS_TX_ot_unc = 0.080522;
RatioWS_TX_ir_unc = 0.093808;
RatioWS_TX_or_unc = 0.083107;
Optical Efficiency
OE_Inner_beam = 0.988311;
OE_Outer_beam = 0.989318;
Weighted_Optical_Efficiency = 0.988814;
OE_Inner_beam_unc = 0.058828;
OE_Outer_beam_unc = 0.054324;
Weighted_Optical_Efficiency_unc = 0.080074;
Martel Voltage fit:
Gradient = 1637.051728;
Intercept = 0.515075;
Power Imbalance = 0.988718;
Endstation Power sensors to WS ratios::
Ratio_WS_TX = -1.078345;
Ratio_WS_RX = -1.392858;
Ratio_WS_TX_unc = 0.050215;
Ratio_WS_RX_unc = 0.044908;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values :
GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.701786; Working Standard Value
costheta = 0.988362; Angle of incidence
c = 299792458.000000; Speed of Light
End Station Values :
TXWS = -1.078345; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000541; Uncertainity of Tx to WS Rel responsivity (V/V)
RXWS = -1.392858; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000626; Uncertainity of Rx to WS Rel responsivity (V/V)
e = 0.988814; Optical Efficiency
sigma_e = 0.000792; Uncertainity in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1637.051728; Martel to output channel (C/V)
Martel_intercept = 0.515075; Intercept of fit of Martel to output (C/V)
Power Loss Apportion :
beta = 0.998895; Ratio between input and output (Beta)
E_T = 0.993842; TX Optical efficiency
sigma_E_T = 0.000398; Uncertainity in TX Optical efficiency
E_R = 0.994941; RX Optical Efficiency
sigma_E_R = 0.000398; Uncertainity in RX Optical efficiency
Force Coefficients :
FC_TxPD = 7.895132e-13; TxPD Force Coefficient
FC_RxPD = 6.181524e-13; RxPD Force Coefficient
sigma_FC_TxPD = 5.093350e-16; TxPD Force Coefficient
sigma_FC_RxPD = 3.738196e-16; RxPD Force Coefficient
data written to ../../measurements/LHO_EndX/tD20251104/
This produced the following plots:
Martel_Voltage_test.png
WS_at_TX.png
WS_at_RX.png
WS_at_RX_BOTH_BEAMS.png
Running the following command makes the trend plots for the ES measurement:
python pcalPublishReportsV5.py LHO_EndX tD20251104
LHO_EndX_PD_ReportV5.pdf
All of this can be found in the PCAL GitLab repo.
TITLE: 11/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Maintenance Tuesday. Relocking hasn't been very successful so far. CO2Y was having IR faulting issues (alog 87952), then we had a lockloss when we got to the Close_Beam_Diverters state, which we saw last week (alog 87877). Now we are having a few earthquakes rolling through as we sit in MAX POWER. We don't want to advance until the ground stops shaking a bit so we can keep the high-gain ASC on.
LOG:
| Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 16:06 | ISC | Matt | CR | n | IMC 60W meas. | 17:01 |
| 16:07 | SYS | Mitchell, Randy | LVEA | n | Mega cleanroom sock install | 17:15 |
| 16:10 | FAC | Kim, Nellie | LVEA | n | Tech clean | 18:03 |
| 16:11 | VAC | Jordan, Travis, Janos | EY | n | Purge air line install | 19:59 |
| 16:12 | VAC | Gerardo | LVEA, Mech room | n | Run Kobelco, pump CP1 LN2 line, test turbo pumps | 20:35 |
| 16:12 | CDS | Betsy | CER | n | Putting stickers on racks | 16:23 |
| 16:20 | FAC | Tyler, Rays | Site | n | Septic tank inspection and pumping | 17:59 |
| 16:22 | CDS | Fil | EX | n | Connecting fibers to prep for RFM test | 17:27 |
| 16:31 | PCAL | Dripta, Tony | PCAL lab | LOCAL | Check on lab | 16:47 |
| 16:38 | SYS | Betsy | LVEA | n | Check on sock and domes | 17:14 |
| 16:44 | VAC | Norco | CP1 | n | LN2 fill | 18:49 |
| 16:44 | FAC | Chris, pest | LVEA, outbuildings | n | Pest control | 17:49 |
| 16:49 | PCAL | Tony, Dripta | EX | YES | PCAL calibration meas. | 19:31 |
| 17:07 | CDS | Betsy | CER | n | Removing stickers from ISCC3&4 | 17:16 |
| 17:17 | VAC | Norco | CP5 MX | n | LN2 fill | 19:14 |
| 17:31 | CDS | Fil | LVEA | n | Cable pull around HAM2/3 | 20:05 |
| 17:34 | - | Richard | LVEA | n | Checking on a few things | 17:57 |
| 17:34 | SUS | Oli | CR | n | PRM, SRM estimator measurements | 20:08 |
| 17:40 | PEM | Robert, Sam | LVEA | n | Mounting accel. | 18:49 |
| 17:45 | SYS | Randy | LVEA | n | Taking measurements of HAM3 and BSC2 areas | 18:27 |
| 17:49 | FAC | Chris | LVEA | n | FAMIS checks | 18:32 |
| 17:49 | PSL | Jason | LVEA | n | Walking the PSL chiller lines, FAMIS | 18:01 |
| 17:57 | SQZ | Sheila, Matt | CR | n | OM2 hot OMC meas. | 20:08 |
| 17:59 | FAC | Tyler, Geotech | MY | n | CEBEX surveying | 23:24 |
| 18:03 | FAC | Kim | EX | yes | Tech clean | 19:17 |
| 18:04 | FAC | Nellie | EY | n | Tech clean | 19:20 |
| 18:05 | CDS | Erik | EY, EX | n | Grabbing equipment from EY, doing RFM test at EX | 19:18 |
| 18:13 | CDS | Marc | LVEA | n | Helping Fil with cables at HAM2/3 | 20:05 |
| 18:14 | IO | Rahul | Opt Lab | LOCAL | JAC testing | 19:07 |
| 18:55 | PEM | Robert, Sam | LVEA | n | More accelerometer moving | 20:28 |
| 19:20 | FAC | Rana, Mike, Tour | LVEA | n | Tour | 20:12 |
| 19:41 | FAC | Kim | FCES | n | Tech clean | 19:42 |
| 19:51 | FAC | Randy | XTube | n | Filling cracks | 23:02 |
| 20:47 | PCAL | Tony | PCAL lab | LOCAL | Wrapping up meas. | 21:32 |
| 20:52 | TCS | Ryan C | LVEA | n | Turn TCS CO2Y back on | 20:53 |
| 20:58 | PCAL | Rene, Alicia | PCAL lab | LOCAL | Check out lab with Tony | 21:44 |
| 21:09 | TCS | TJ | LVEA | N | Reset CO2Y laser | 21:24 |
| 22:19 | SPI | Corey, Ryan S | Opt Lab | n | Optics cleaning | 23:29 |
| 23:04 | PEM | Robert, Sam | Arms | n | BTE cell phone signal testing | 02:04 |
FAMIS 27570
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 10mph Gusts, 6mph 3min avg
Primary useism: 0.52 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
Maintenance Day Recovery continues through the hand-off, but H1 locking is also being impacted by strong seismic activity (Russia EQs + microseism also inching back up). Violins look elevated as well, so there has been attention given to them.
Hand-Offs from TJ:
Ibrahim, Rahul
In short, it appears that BRD damping is working.
M. Todd, S. Dwyer
We wanted to heat up OM2 and retake an OMC scan to get another measurement to compare against our models, since Sheila had the idea to use the SQZ beam, which we know well at ZM5, and measure its overlap to the OMC. We were able to take some data at the very end of the maintenance period, but we had been having issues for a long time, so we gave up on having OM2 hot for the measurement and let it cool down so that relocking could be on time today.
We did all the usual steps of turning on the DC centering loops for the OMs and putting the OMC ASC on to center on the QPDs. In the future, here's what we should do:
Go to IFO_OUT > OM2 and in the input box below "POWER_SET" in the bottom panel, set the power to 4.6. Wait 1hr45min.
Also make sure that there is no IFO power when you go to take your measurement.
Make sure the beam diverter is closed and that the seed launch power is around 75 (if not, you may have to pico the half wave plate by going to SQZT0 and clicking on the lambda/2 near "Seed Launch"). Then take the SQZ manager to LOCKED_SEED_DITHER and make sure that the OPO locks. You should also take the FC to misaligned, as this was causing flashes in our DCPDs that we did not want and which were confusing us.
By opening the beam diverter and the fast shutter, you should be able to get light on the AS_A and AS_B QPDs so that you can close the DC3 and DC4 centering loops. Then you can take the OMC guardian to OMC ASC and make sure the beam is centered on the OMC QPDs. The NSUM values we saw with the SQZ beam were around 0.001, so there isn't a ton of light on them.
--- You should be able to measure an OMC scan now ---
If the HG10 peak is really high compared to the HG00 peak, you may try closing the OMC LSC loop and waiting for things to settle before unlocking and remeasuring.
The OMC scans from today were not very good, as the HG10 modes were much too high and don't allow a good quadratic estimate of the mode mismatch.
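For context, the quadratic (second-order-mode) estimate referred to above is, in its simplest textbook form, the ratio of second-order-mode power to total mode power in the scan; large first-order (HG10/HG01) peaks from misalignment contaminate nearby peaks and spoil it. A hedged sketch of that textbook estimate, not the actual analysis code:

```python
def mode_mismatch(p_hg00, p_second_order):
    """Rough mismatch estimate from OMC scan peak heights: power in the
    second-order modes over total mode power. Only meaningful when the
    alignment (HG10/HG01) peaks are small, which was not the case in
    today's scans."""
    return p_second_order / (p_hg00 + p_second_order)
```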
The IR sensor for CO2Y was intermittently going into fault. I reseated the connector on the sat box and it seems to be okay now.
Today during maintenance the CO2Y laser tripped off 3 times. The first one we figured was due to cable pulling activity in the area, the second was a bit more of a mystery, and the third made us think there was definitely something wrong. Ryan C restarted it the first two times at the control box, needing to power cycle it at least once each time. The third time I went out there and noticed the IR fault light going on and off. I touched the cable that goes to the IR sensor itself and the lights on the IR satellite box went red. Simply placing my finger on the cable near the connector on the box was enough to bring it into fault. I checked the connector there and it had some play in it, so I tried to seat it a bit better, and then it seemed to not be as sensitive to me touching it. I turned the key off and on again and it was good to go.
Because this last trip happened while we were powering up to 25W, there was about 20 minutes or so when CO2X was on at its nominal annular power, but CO2Y was off (~2107-2131UTC). Perhaps this is an interesting bit of time for someone to look at?
We ended up losing lock when powering up to 60W. Perhaps I should have waited longer after CO2Y was back to let it "catch up" to ITMX's thermal state.
(Jordan V., Gerardo M.)
Connected valves to be able to pump down the CP1 LN2 fill line. We started at 26 microns of pressure and pumped on the system with a scroll pump for most of the maintenance period; we stopped the pump-down of the fill line at around 11:45 AM with the pressure down to 4 microns. Both valves were closed before turning off the scroll pump. Hoses, adapter, and valves remain connected; we intend to revisit this item next week.
Today all hardware used to pump down the fill line was removed. The initial reading was 3 microns, but after all the items were removed, the reported internal pressure went up to 7 microns. We'll take another reading next Tuesday.