20:53 UTC lockloss
There were HAM3 CPS glitches on verbal before the LL, and PRM was the first saturating suspension from the LL tool.
Another CPS glitch tripped the HAM3 ISI
All of the glitches are on the H2 & V2 sensors, which I believe have their own rack; corners 1 & 3 are on the other rack. I've asked Ryan to go power cycle the CPS electronics at the rack in the CER. We'll see if that fixes the issue.
I called Jim, who advised I go to the CER and power cycle the HAM3 ISI interface chassis CPS power since we were seeing the glitches in H2 and V2. I did this, which has fixed the issue (for now at least).
22:46 UTC Observing
Lockloss at 15:22 UTC
16:35 UTC Observing
Sun Sep 22 08:12:37 2024 INFO: Fill completed in 12min 32secs
TITLE: 09/22 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
TITLE: 09/22 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Quiet shift after relocking following a PI-caused lockloss.
H1 has now been locked and observing for 5 hours.
LOG:
No log for this shift.
TITLE: 09/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: We did not take the calibration measurement as LLO wasn't locked and we weren't fully thermalized either.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | 18:24 |
17:56 | PEM | Robert | Xarm | N | CEX PEM investigations/tests | 21:00 |
H1 back to observing at 23:59 UTC. Fully automated relock, no initial alignment needed.
The lockloss this morning at 12:38 UTC, which ended a 31+ hr lock, had the rarely seen FSS_OSCILLATION tag, so I started looking back at some related signals to get a better idea of what happened. I've attached a trend which seems to show some IMC signals (mainly splitmon and IMC_F) seeing the first activity before the lockloss, then the FSS fastmon has its first "glitch" about 140 ms later. I'll admit I don't have a complete understanding of all the interactions here or what this is indicative of, but I figured it might be useful to take a look since these FSS_OSCILLATION locklosses are quite rare.
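In case it's useful for reproducing this kind of comparison, here is a minimal gwpy sketch for pulling a couple of these signals around the lockloss and overlaying them; the GPS time is a placeholder and the channel names are my guesses for the IMC_F and FSS fastmon signals, so they should be verified before use.

```python
# Minimal sketch (not the tool used above); GPS time is a placeholder and
# the channel names are guesses for the IMC_F / FSS fastmon signals.
from gwpy.timeseries import TimeSeriesDict

t_lockloss = 1234567890  # placeholder: GPS time of the 12:38 UTC lockloss
channels = [
    'H1:IMC-F_OUT_DQ',             # IMC frequency signal
    'H1:PSL-FSS_FAST_MON_OUT_DQ',  # FSS fast monitor (name unverified)
]

# Grab ~2 s before and 0.5 s after the lockloss and overlay the traces to
# compare which signal glitches first.
data = TimeSeriesDict.get(channels, t_lockloss - 2, t_lockloss + 0.5)
plot = data.plot()
plot.show()
```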
TITLE: 09/21 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 7mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY: H1 just lost lock from PI mode 24 ringing up after being locked for 90 minutes. Starting lock acquisition.
Back to observing at 21:30 UTC
RickS, FranciscoL, TonyS, JoeB, Dripta.
On Saturday, Sept. 21, 2024, the Pcal force coefficient EPICS record values were updated.
At X-end:
Channel | Old | New
---|---|---
H1:CAL-PCALX_FORCE_COEFF_RHO_T | 8305.09 | 8300
H1:CAL-PCALX_FORCE_COEFF_RHO_R | 10716.6 | 10713.3
H1:CAL-PCALX_FORCE_COEFF_TX_PD_ADC_BG | 9.6571 | 8.81815
H1:CAL-PCALX_FORCE_COEFF_RX_PD_ADC_BG | 0.7136 | 0.56678
H1:CAL-PCALX_FORCE_COEFF_TX_OPT_EFF_CORR | 0.9938 | 0.99331
H1:CAL-PCALX_FORCE_COEFF_RX_OPT_EFF_CORR | 0.9948 | 0.9944
H1:CAL-PCALX_XY_COMPARE_CORR_FACT | 0.9991 | 0.99855
At Y-end:
Channel | Old | New
---|---|---
H1:CAL-PCALY_FORCE_COEFF_RHO_T | 7145.62 | 7155.15
H1:CAL-PCALY_FORCE_COEFF_RHO_R | 10649.6 | 10663.6
H1:CAL-PCALY_FORCE_COEFF_TX_PD_ADC_BG | 18.2388 | 18.3088
H1:CAL-PCALY_FORCE_COEFF_RX_PD_ADC_BG | -0.2591 | -0.7353
H1:CAL-PCALY_FORCE_COEFF_TX_OPT_EFF_CORR | 0.9923 | 0.99191
H1:CAL-PCALY_FORCE_COEFF_RX_OPT_EFF_CORR | 0.9934 | 0.9931
H1:CAL-PCALY_XY_COMPARE_CORR_FACT | 1.0005 | 1.00092
The caput commands used for updating the EPICS records can be found here: https://git.ligo.org/Calibration/pcal/-/blob/master/O4/EPICS/results/CAPUT/Pcal_H1_CAPUTfile_O4brun_2024-09-16.txt?ref_type=heads
SDF diff has been accepted.
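For reference, the same kind of update can also be scripted with pyepics instead of command-line caput; below is a minimal sketch using the X-end RHO_T record from this entry (the authoritative commands are in the CAPUT file linked above, and this assumes EPICS channel access to the H1 CAL records).

```python
# Minimal pyepics sketch (the authoritative commands are in the linked
# CAPUT file); assumes EPICS channel access to the H1 CAL records.
from epics import caget, caput

channel = 'H1:CAL-PCALX_FORCE_COEFF_RHO_T'

old = caget(channel)              # read current value (was 8305.09)
caput(channel, 8300, wait=True)   # write the new value
print(f'{channel}: {old} -> {caget(channel)}')
```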
It has been roughly 4 months since the last update in May 2024. We plan to update the EPICS records regularly, since over the course of a year we observed that the Rx sensor calibration exhibits a systematic variation of as much as +/- 0.25% at LHO. End-station measurements allow us to monitor this systematic change over time. We will monitor the X/Y comparison factor to see if these EPICS changes had any significant effect on it. If the update is done correctly, there should be no significant change in the X/Y calibration comparison factor. If we find a significant change, we will correct the EPICS records accordingly.
The attached .pdf file shows the Rx calibration (ct/W) trend at the LHO X and Y ends for the entire O4 run. It marks the measurements used to calculate the Pcal force coefficient EPICS record values for the previous update at the start of the O4b run as well as for the current update.
The PNGs are screenshots of the MEDM screens after the EPICS update.
Sat Sep 21 08:11:59 2024 INFO: Fill completed in 11min 55secs
TITLE: 09/21 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
H1 had been stuck in initial alignment for 1.5 hours; something was clearly yawed based on the camera. I looked at the OSEM scopes for the PRC and the ITM, ETM, and TMS, and set the test mass sliders to recover the OSEM values from when we were locking on the 20th. ITMY_Y ended up having to be moved the most; we're now moving through initial alignment at a regular pace.
Followed instructions in 74681. Last done in 78782, 79988. Saved in /ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings/ and pushed to git.
CHARD_P, CHARD_Y, MICH, PRCL, SRCL screenshots attached. The IFO had been in NLN for 4h15 when these were taken. The AS_A_YAW and PRCL offsets were turned off.
I modified the script used in 78969 to project PRCL noise to DARM through MICH and SRCL, and now added coupling through CHARD P + Y.
There is high coherence between PRCL and CHARD, but these active injections that Camilla did show that the main coupling of PRCL to DARM is not through CHARD.
This script is now in /ligo/gitcommon/NoiseBudget/simplepyNB/PRCL_excitation_projections.py (not actually a git repo)
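Not the actual script, but as an illustration of the projection method: the sketch below estimates a single coupling transfer function from an injection span and applies it to a quiet-time PRCL spectrum. Channel names, GPS spans, and FFT settings are placeholders, and it assumes both channels are stored at the same sample rate.

```python
# Sketch of a transfer-function noise projection (PRCL -> DARM); channel
# names and GPS spans are placeholders, not the ones used above.
import numpy as np
from gwpy.timeseries import TimeSeriesDict
from scipy.signal import csd, welch

chans = ['H1:LSC-PRCL_OUT_DQ', 'H1:CAL-DELTAL_EXTERNAL_DQ']
T_INJ = (1234567000, 1234567600)    # placeholder span with the PRCL excitation on
T_QUIET = (1234568000, 1234568600)  # placeholder quiet span

inj = TimeSeriesDict.get(chans, *T_INJ)
quiet = TimeSeriesDict.get(chans, *T_QUIET)

fs = int(inj[chans[0]].sample_rate.value)  # assumes both channels share this rate
nperseg = 16 * fs

# Coupling TF from the injection: H = CSD(prcl, darm) / PSD(prcl)
f, S_pd = csd(inj[chans[0]].value, inj[chans[1]].value, fs=fs, nperseg=nperseg)
_, S_pp = welch(inj[chans[0]].value, fs=fs, nperseg=nperseg)
tf = S_pd / S_pp

# Project the quiet-time PRCL ASD through |H| to estimate its DARM contribution
_, S_qq = welch(quiet[chans[0]].value, fs=fs, nperseg=nperseg)
projected_darm_asd = np.abs(tf) * np.sqrt(S_qq)
```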
Sheila, Vicky - we ran the noise budget using these updated couplings from Camilla.
Noise budget with squeezing - pdf and svg (and the quantum sub-budget with squeezing). Plots without unsqueezed DARM here: pdf and svg. Picked a time with high range, and good ~4.8dB squeezing.
Noise budget without squeezing - pdf and svg (and the quantum sub-budget without squeezing).
The remaining sub-budgets are for squeezed DARM: Laser, Jitter, LSC, ASC, Thermal, PUMDAC.
Noise below 25 Hz looks pretty well accounted for: 10-15 Hz ~ ASC (CHARD Y, then CSOFT P, CHARD P). 15-20 Hz ~ LSC (PRCL).
Some comments and caveats for these budget plots:
The code has been pushed to aligoNB repo, commit 95d3a88b. To make these noise budget plots:
Thanks to Erik von Reis for helping us update the aligoNB environment to newer python and scipy (etc) versions. Next to-do: have the budget code use median averaging to be more robust to glitches (which can be done now that the environment has an updated scipy!).
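As a small example of that last point, scipy.signal.welch has supported average='median' since scipy 1.2, and the median estimate is much less biased by a single loud segment; the sketch below uses synthetic white noise with one injected glitch, not IFO data.

```python
# Sketch: compare mean- vs median-averaged Welch PSDs on white noise with
# one injected glitch; the median estimate stays near the true level.
import numpy as np
from scipy.signal import welch

fs = 4096
rng = np.random.default_rng(0)
data = rng.normal(size=64 * fs)          # 64 s of unit-variance white noise
data[10 * fs:10 * fs + 64] += 50.0       # one loud ~16 ms glitch

f, psd_mean = welch(data, fs=fs, nperseg=4 * fs, average='mean')
_, psd_median = welch(data, fs=fs, nperseg=4 * fs, average='median')

# True one-sided PSD of unit-variance white noise is ~2/fs; the mean
# estimate is inflated by the glitchy segment, the median much less so.
print(2 / fs, psd_mean.mean(), psd_median.mean())
```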