TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 145Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 7mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Just chilling out max on this sunny Sunday!
Neil and his parents stopped by the control room for a quick chat about what we do here and a quick stroll to the overpass.
It's been great over here for exactly 11 hours.
There is an incoming 6.6 magnitude earthquake from the Solomon Islands...
VACSTAT reported a glitch in BSC3, H0:VAC-LX_Y8_PT132_MOD2_PRESS_TORR at 02:21:57 Sun 01 Sep 2024 PDT.
This looks like a sensor glitch: it is only 6 seconds wide and has no characteristic pump-down curve. Nothing was seen in neighboring BSC2 at this time.
Attachment shows VACSTAT MEDM and ndscope of PT132 covering about 40 mins.
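For anyone wanting to reproduce the ndscope view offline, here is a minimal gwpy sketch (assuming NDS2 access from a control-room workstation; the exact 40-minute window is my assumption based on the attachment, centered on the glitch at 02:21:57 PDT = 09:21:57 UTC):

from gwpy.timeseries import TimeSeries

# Pull ~40 min of the PT132 pressure channel around the glitch time.
channel = "H0:VAC-LX_Y8_PT132_MOD2_PRESS_TORR"
data = TimeSeries.get(channel, "2024-09-01 09:02", "2024-09-01 09:42")

# A real pump-down event would show a characteristic decay curve here;
# a 6-second-wide spike with instant recovery points to a sensor glitch.
plot = data.plot(ylabel="Pressure [Torr]")
plot.savefig("pt132_glitch.png")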
Sun Sep 01 08:11:50 2024 INFO: Fill completed in 11min 46secs
TITLE: 09/01 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
When I walked in this morning the IFO had been locked and observing for 5 hours.
IFO_Notify has not contacted anyone over the OWL shift.
The IFO did lock itself overnight; it just took some time and had multiple locklosses before doing an initial alignment, which helped it get to NLN @ 9:28 UTC.
NUC 33 survived the night!
TITLE: 09/01 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Two locklosses this evening, but relocking was straightforward and so far fully automated each time. H1 is currently relocking, now locking the green arms.
LOG:
No log for this shift.
Lockloss @ 05:45 UTC - link to lockloss tool
No obvious cause, but it looks like LSC-MICH started seeing something wobble about 2 seconds before the lockloss.
TITLE: 08/31 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
I came in and it was locked, but we lost lock shortly after from the dreaded double PI ring-up.
Relocked without an initial alignment (IA).
Another unknown lockloss.
I requested an IA, which went easy peasy.
The locking process did have a DRMI lockloss but didn't fully lose lock all the way to DOWN.
Got back up and running in NLN around 18:32 UTC, you know, just in time to postpone the calibration until 22:00.
Ran Francisco's ETMX Drive Align Script to try to get KAPPA back to 1.
Did a Calibration Sweep
@ 23:00 UTC I gave NUC 33 a hard shutdown, and found out that the spare NUC that I'd like to replace it with is mounted to the wall in such a way that it is not easily removed without dismounting the monitor bracket.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
17:12 | Vac | Gerardo | VPW | N | Checking on parts | 17:53 |
Lockloss @ 23:14 UTC - link to lockloss tool
No obvious cause; maybe some small motion by ETMX immediately before lockloss like we've seen before, but it's much smaller than usual.
H1 back to observing at 00:14 UTC.
To go to observing, I reverted the SDF diff on the susetmx model for the new ETMX drivealign L2L gain provided by Francisco's script (screenshot attached; see alog79841). This had not been updated in Guardian-space, so it was set to the previous setpoint during TRANSITION_FROM_ETMX. I've updated the gain to 191.711514 in lscparams.py, saved it, and loaded ISC_LOCK.
The following gains were set to zero:
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 0
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 0
Then I took ISC_LOCK to NLN_CAL_MEAS
22:13 UTC ran the following calibration command:
pydarm measure --run-headless bb
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml saved
diag> quit
EXIT KERNEL
2024-08-31 15:18:13,008 bb measurement complete.
2024-08-31 15:18:13,008 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240831T221301Z.xml
2024-08-31 15:18:13,008 all measurements complete.
anthony.sanchez@cdsws29:
anthony.sanchez@cdsws29: gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
PDT: 2024-08-31 15:18:49.210990 PDT
UTC: 2024-08-31 22:18:49.210990 UTC
GPS: 1409177947.210990
2024-08-31 22:41:50,395 | INFO | Commencing data processing.
Traceback (most recent call last):
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
run(args.inputFile, args.outPath, args.record)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
digestedObj[scan] = digestData(results[scan], data)
File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
coh = np.float64( cohArray[index] )
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
new = super().__getitem__(item)
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
new = super().__getitem__(item)
File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0
ICE default IO error handler doing an exit(), pid = 2858202, errno = 32
PDT: 2024-08-31 15:41:53.067044 PDT
UTC: 2024-08-31 22:41:53.067044 UTC
GPS: 1409179331.067044
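The IndexError is the telling part: cohArray came back with size 0, presumably because the data query returned nothing for that scan, so indexing element 3074 fails. A hypothetical guard of the kind that would flag the scan instead of killing the whole run (a sketch, not the actual simuLines.py fix):

import numpy as np

def safe_coherence(coh_array, index):
    # If the query returned no data for this scan, the array is empty;
    # return NaN so the scan is flagged rather than crashing the run.
    if index >= len(coh_array):
        return np.float64("nan")
    return np.float64(coh_array[index])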
These changes were then reverted; the following channels were restored to their previous values:
H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN
H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN
H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN
I then took ISC_LOCK back to NOMINAL_LOW_NOISE.
TITLE: 08/31 Eve Shift: 2300-0500 UTC (1600-2200 PDT), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 12mph Gusts, 7mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY: H1 has been locked for 4.5 hours. Tony and I are wrapping up some calibration time, taking the regular sweeps while Louis helps troubleshoot (see their alogs for details). We will resume observing soon.
anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py
Fetching from 1409164474 to 1409177074
Opening new connection to h1daqnds1... connected
[h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00
Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.
Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:
I'm not sure if the value set by this script is correct. KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L drivealign gain in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN
was 184.65 at the time of our last calibration update. This is the time at which KAPPA_TST was set to 1. So to offset the drift in the TST actuation strength we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script chose to update the gain to 191.711514 instead; this is 187.379211 * 1.023121, with 187.379211 being the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing...more on that later this weekend in another post). So I think this script should be basing corrections on percentages applied with respect to the drivealign gain value at the time when the kappas were last set (i.e. just after the last front-end calibration update), *not* at the current time. Also, the output from that script claims that it also updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN,
but I trended it and it hadn't been changed. Those print statements should be cleaned up.
To close out this discussion: it turns out that the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by KAPPA_TST should be applied to the drivealign gain at that time, rather than to what the gain was when the kappa calculations were last updated.
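For reference, a worked check of the two candidate corrections discussed above (plain Python; the numbers are from this thread, the variable names are mine):

kappa_tst = 0.976879                  # KAPPA_TST when the script ran
pct = 1.0 - kappa_tst                 # 0.023121, i.e. the -2.3121% deviation

gain_at_last_cal = 184.649994         # drivealign gain when the kappas were last reset
gain_now = 187.379211                 # drivealign gain when the script ran

print(gain_at_last_cal * (1 + pct))   # 188.919... (correction referenced to last cal update)
print(gain_now * (1 + pct))           # 191.711... (correction referenced to the current gain,
                                      #  which is what the script does; confirmed correct above)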
If the IFO is locked and thermalized when the normal calibration measurement time rolls around today, please do the following before moving to ISC_LOCK state 700 (NLN_CAL_MEAS):
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 0
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 0
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 0
Then follow the normal calibration procedure at https://cdswiki.ligo-wa.caltech.edu/wiki/TakingCalibrationMeasurements and go back to NLN. After going back to NLN (state 600):
caput H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN 115
caput H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN 5000
caput H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN 5000
caput H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN 1430
caput H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN 3619
caput H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN 30000
caput H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN 30000
caput H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN 40
caput H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN 30
caput H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN 30
caput H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN 60
caput H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN 2000
This is to facilitate a test for the gstlal pipeline. I will get something similar into guardian before the next calibration measurements.
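Until something like this lands in Guardian, here is a minimal sketch of the zero-then-restore pattern above (assuming pyepics is available, as on the CDS workstations; the PV names and nominal values are the ones listed in this entry, and the function names are mine):

from epics import caput

# Nominal PCAL oscillator gains, per the restore list above.
PCAL_OSC_GAINS = {
    "H1:CAL-PCALY_PCALOSC1_OSC_SINGAIN": 115,
    "H1:CAL-PCALY_PCALOSC2_OSC_SINGAIN": 5000,
    "H1:CAL-PCALY_PCALOSC3_OSC_SINGAIN": 5000,
    "H1:CAL-PCALY_PCALOSC4_OSC_SINGAIN": 1430,
    "H1:CAL-PCALY_PCALOSC9_OSC_SINGAIN": 3619,
    "H1:CAL-PCALX_PCALOSC1_OSC_SINGAIN": 30000,
    "H1:CAL-PCALX_PCALOSC1_OSC_COSGAIN": 30000,
    "H1:CAL-PCALX_PCALOSC4_OSC_SINGAIN": 40,
    "H1:CAL-PCALX_PCALOSC5_OSC_SINGAIN": 30,
    "H1:CAL-PCALX_PCALOSC6_OSC_SINGAIN": 30,
    "H1:CAL-PCALX_PCALOSC7_OSC_SINGAIN": 60,
    "H1:CAL-PCALX_PCALOSC8_OSC_SINGAIN": 2000,
}

def zero_pcal_oscs():
    # Zero every PCAL oscillator gain before moving to NLN_CAL_MEAS (state 700).
    for pv in PCAL_OSC_GAINS:
        caput(pv, 0)

def restore_pcal_oscs():
    # Restore the nominal gains after returning to NLN (state 600).
    for pv, value in PCAL_OSC_GAINS.items():
        caput(pv, value)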
Unknown Lockloss
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1409159062
No PI ring-ups.
No wind gusts.
Sat Aug 31 08:11:26 2024 INFO: Fill completed in 11min 22secs
Had to help H1 Manager out because we were in NOMINAL_LOW_NOISE but the OPO was having trouble getting locked with the ISS. OPO trans was having trouble getting to 70uW and so definitely couldn't reach its setpoint of 80uW. I changed the setpoint to 69uW since it was maxing out around 69.5, and I changed the OPO temp and accepted it in SDF.
Lockloss From an Earthquake.
USGS 6.6 Mag near Solomon Islands
Holding ISC_LOCK in IDLE for the ground motion to settle.