Today I ran injections for CHARD and CSOFT Y using the NB injections template. To avoid messing up the templates, I made sure to record both quiet and injection times. They are saved in the usual couplings folder '/ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings'
We can use these injections as part of determining how useful the CHARD blends are; a rough sketch of the comparison is below.
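As a rough illustration of how the quiet and injection times get used (placeholder GPS times and a generic DARM channel; the real analysis lives in the aligoNB coupling templates):

# Sketch: compare quiet vs injection DARM spectra to see where an injection couples.
from gwpy.timeseries import TimeSeries

QUIET, INJ, DUR = 1409900000, 1409900600, 120   # placeholder GPS times / duration (s)
darm = 'H1:GDS-CALIB_STRAIN'

quiet = TimeSeries.get(darm, QUIET, QUIET + DUR)
inj = TimeSeries.get(darm, INJ, INJ + DUR)

# An ASD ratio above 1 marks frequencies where the CHARD/CSOFT drive shows up in DARM.
ratio = inj.asd(fftlength=8) / quiet.asd(fftlength=8)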
Mon Sep 09 08:10:16 2024 INFO: Fill completed in 10min 12secs
Richard caught the tail of the fill on camera; it looked good.
15:31 UTC went out of Observing to start Commissioning
18:31 Back into Observing
TITLE: 09/09 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Currently Observing at 158Mpc and have been Locked for almost 5 hours. Dust monitors were going off in the Optics Lab, but everything else looks normal.
TITLE: 09/09 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: The range was in the 150s, and the "PSL_ISS diff power is low" notification was flashing occasionally on DIAG_MAIN. We've been locked for 5:30.
21:45 UTC Lockloss
23:05 - 23:30 UTC The violins got a little rung up by one or both of today's locklosses, so we had to damp them in OMC_WHITENING for a bit
23:30 UTC Observing
23:43 UTC 12 disconnected channels on the CDS overview; NUC33 (cdsws04) froze and I had to physically restart it to get it back. This cleared the disconnected channels as expected.
TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Just got into Observing after waiting in OMC_WHITENING for a while to damp violins - they seem to have gotten rung up by a lockloss from ENGAGE_ASC_FOR_FULL_IFO. This last relocking process otherwise went smoothly, and the earlier lockloss today was also uncomplicated, aside from needing to wait a good while for ALSY WFS 3 Yaw to converge.
LOG:
14:30 UTC Observing and Locked for 16:50 hours
15:21 Lockloss after 17:39 hours locked
15:41 Going to run an initial alignment after locking green arms
- ALS_YARM sat in the INITIAL_ALIGNMENT state for a while without starting to offload because WFS_3_Y was taking a while to get under the threshold. I took ALS_YARM to UNLOCKED just in case, then back to INITIAL_ALIGNMENT_OFFLOAD, and it eventually converged.
16:36 Initial alignment done, relocking
17:18 NOMINAL_LOW_NOISE
17:21 Observing
21:45 Lockloss
22:16 Lockloss from ENGAGE_ASC_FOR_FULL_IFO
23:30 NOMINAL_LOW_NOISE
23:30 Observing
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Back to Observing at 23:30 UTC
CDS reports 12 disconnected channels, all related to NUC33. The NUC could use a restart: I can't VNC into it and pinging it gives nothing back; it's frozen.
Lockloss @ 09/08 21:45 UTC after 4.5 hours locked
Currently Observing at 158Mpc and have been Locked for 2.5 hours. Quiet day with nothing to report.
Lockloss @ 09/08 15:21 UTC after 17:39 locked
17:21 Observing
Sun Sep 08 08:09:39 2024 INFO: Fill completed in 9min 35secs
TITLE: 09/08 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Observing at 158Mpc and have been Locked for almost 17 hours. Everything looks normal, but the dust monitor alarm for the Optics Lab was going off, so I'll make sure we don't have more sand appearing in there.
TITLE: 09/08 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC (7hr 20 min lock) with some minor squeeze exceptions. Otherwise, very smooth day with 0 locklosses during my shift.
The squeezer has been acting up today:
23:04 UTC COMMISSIONING: The squeezer was far from optimal, and while still observing we were only getting a range of ~120 Mpc. Oli and I went into commissioning temporarily to run the temperature optimization script before trying to relock it. While this was happening, Naoki called and said he thought it was a pump ISS issue, then took control of the IFO to fix it. He was successful and we were back to observing at our recent ~155 Mpc. OBSERVING again as of 23:27 UTC
01:53 UTC COMMISSIONING: The squeezer unlocked, sending us into commissioning, but within a few minutes it relocked automatically while I watched. We were OBSERVING again as of 01:58 UTC.
Other:
We rode through a 6.0-magnitude EQ from Tonga; EQ mode triggered successfully.
Dust is high in the Optics Lab - Oli told me yesterday that there's a strange accumulation of sand by a dust monitor, and that some measures were taken to remove the sand, though perhaps more has built up. The 300NM PCF monitor is at a RED alert and the 500NM PCF monitor is at YELLOW.
LOG:
None
TITLE: 09/07 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We have been Locked for close to 2 hours. Not currently Observing because Naoki is adjusting some squeezer settings, since our range is quite poor right now. Quiet day with one lockloss and easy relocking.
LOG:
14:30 UTC Observing and Locked for 7:47 hours
15:28 Plane passes overhead
15:38 Superevent S240907cg
18:30 Left Observing to run calibration sweep
19:04 Calibration measurements finished, back into Observing
19:36 Lockloss
20:20 We started going through CHECK_MICH_FRINGES for the second time, so I took us to DOWN and started an initial alignment
20:41 Initial alignment done, relocking
21:41 NOMINAL_LOW_NOISE
- The OPO couldn't catch lock, so I lowered opo_grTrans_setpoint_uW to 69 in sqzparams, reloaded the OPO, locked the ISS, and then adjusted the OPO temperature a bit until SQZ-CLF_REFL_RF6_ABS was maximized (a rough sketch of this tweak follows the log). Accepted the new temperature setpoint and went into Observing
21:52 Observing
23:04 Left Observing to run sqz alignment
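A minimal sketch of that OPO temperature tweak: step the TEC setpoint and keep whichever value maximizes SQZ-CLF_REFL_RF6_ABS. The temperature channel name and step sizes here are assumptions for illustration; only the SQZ-CLF_REFL_RF6_ABS channel comes from the log above.

# Step the (assumed) OPO TEC setpoint and keep the value that maximizes the CLF signal.
import time
from epics import caget, caput  # pyepics

TEMP = 'H1:SQZ-OPO_TEC_SETTEMP'      # hypothetical TEC setpoint channel name
SIGNAL = 'H1:SQZ-CLF_REFL_RF6_ABS'   # CLF signal to maximize (from the log)

best_temp = caget(TEMP)
best_val = caget(SIGNAL)
for step in (0.002, -0.002, 0.004, -0.004):   # small excursions in deg C (assumed sizes)
    caput(TEMP, best_temp + step, wait=True)
    time.sleep(10)                             # let the CLF settle at the new temperature
    val = caget(SIGNAL)
    if val > best_val:                         # keep the setting if the signal improved
        best_temp, best_val = best_temp + step, val
caput(TEMP, best_temp, wait=True)              # accept the best setpoint found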
TITLE: 09/07 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 136Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 13mph Gusts, 6mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 21:52 UTC
Ibrahim, Oli, Jeff, Betsy, Joe, Others
alog 79079: Recent Post-TF Diagnostic Check-up - one of the early discoveries of the drift and pitch instability.
alog 79181: Recent M1 TF Comparisons. More recent TFs have been taken (found at /ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data on the X1 network). We are waiting on updated confirmation of the model parameters so we know what we should be comparing our measurements against. We confirmed d4 a few days ago following the bottom wire loop change, and we now seek to confirm d1 and what it means with respect to our referential calibration block.
alog 79042: First investigation into the BOSEM drift - at this point still erroneously operating under the temperature assumption.
alog 79032: First discovery of the drift issue, originally and erroneously attributed to diurnal temperature-driven suspension sag (where I thought that some blades sagging more than others contributed to the drift in pitch).
We think that this issue is related to the height of the blades for these reasons:
We need to know how the calibration block converts to model parameters in d1, and whether that's the effective or physical d1 in the model. Then we can stop using referential units.
Update to the triplemodelcomp_2024-08-30_2300_BBSS_M1toM1 file Ibrahim attached - the legend has been updated. In that version I described the July 12th measurement as 'New wire loop, d1=-1.5mm, no F1 drift', but there actually was F1 drift during that measurement - it had just started over a week before, so the OSEM values weren't declining as fast as they had been earlier that week. I also want to be more specific about what d1 means in that context, so in this updated version I changed July's d1 to d1_indiv, to better show that the -1.5mm value is the same for each blade, whereas for the August measurements (now posted) we have d1_net, because the blade heights differ by several 0.1 mm but still average out to the same -1.5mm.
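To make the d1_indiv vs d1_net distinction concrete, a hypothetical example (the per-blade numbers are invented; only the -1.5mm average comes from the measurement):

# Invented per-blade d1 values (mm) that differ by several 0.1 mm yet average to -1.5 mm.
d1_blades = [-1.2, -1.6, -1.7]
d1_net = sum(d1_blades) / len(d1_blades)
print(f"d1_net = {d1_net:.1f} mm")   # -1.5 mm, matching the quoted net value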