TITLE: 08/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 0mph Gusts, 0mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY: Looks like we lost the SUSEY and ISCEY front ends. This created connection errors and put many Guardians, including IFO_NOTIFY, into an error state. Contacting CDS team now.
TITLE: 08/26 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 136Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing and have been Locked for 1.5 hours. We were down for a while because of the big earthquake from Tonga, and recovery took a while because we were having issues with the ALSX crystal frequency. Eventually it fixed itself, as it tends to do. We got some lower-state locklosses but eventually were able to get back up with just the one initial alignment I ran when I first started relocking.
LOG:
23:30 Observing and Locked for 4 hours
23:41 Lockloss due to earthquake
01:07 Started trying to relock, starting initial alignment
- ALSX crystal frequency errors again
- Eventually went away, I didn't do anything besides toggling Force and No Force
02:05 Initial alignment done, relocking
02:09 Lockloss from FIND_IR
02:17 Lockloss from OFFLOAD_DRMI_ASC
02:21 Lockloss from LOCKING_ALS
03:21 NOMINAL_LOW_NOISE
03:23 Observing
TITLE: 08/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Fairly easy shift, with one lockloss in the middle and a BIG Tonga EQ to end the shift!
LOG:
TITLE: 08/25 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: EARTHQUAKE
Wind: 14mph Gusts, 8mph 5min avg
Primary useism: 0.54 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY: Just lost lock from a large earthquake that's going to keep us down for a bit. Before that, we had been locked for 4 hours.
Lockloss 08/25 @ 23:41UTC due to a 6.9 earthquake in Tonga. We were knocked out really quickly by the S-wave, and the R-wave won't be arriving for another ~30 mins, so it might be a while before we're back up.
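As a rough cross-check on that arrival estimate, here is a back-of-the-envelope sketch; the epicentre/site coordinates and the ~3.5 km/s Rayleigh-wave group velocity are assumed values for illustration, not numbers from the log:

# Rough estimate of when the R-wave reaches the site after the origin time.
# Coordinates and wave speed below are assumptions, for illustration only.
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    # Haversine great-circle distance in kilometres
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2)**2 + cos(p1) * cos(p2) * sin(dl / 2)**2
    return 2 * r_earth_km * asin(sqrt(a))

dist_km = great_circle_km(-20.0, -175.0, 46.5, -119.4)   # roughly Tonga to Hanford
travel_min = dist_km / 3.5 / 60.0                        # ~3.5 km/s Rayleigh waves
print(f"~{dist_km:.0f} km, R-wave ~{travel_min:.0f} min after origin time")

This gives roughly 9,000+ km and ~45 min from the origin time, consistent with the R-wave still being ~30 min out after the S-wave already knocked us out.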
03:23 UTC Observing
For FAMIS 26286:
Laser Status:
NPRO output power is 1.829W (nominal ~2W)
AMP1 output power is 64.59W (nominal ~70W)
AMP2 output power is 137.5W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 19 hr 56 minutes
Reflected power = 21.01W
Transmitted power = 105.7W
PowerSum = 126.7W
FSS:
It has been locked for 0 days 7 hr and 40 min
TPD[V] = 0.6499V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 6 hours and 33 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
FSS TPD is low
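For reference, a minimal sketch of how this FAMIS check could be scripted against the nominal values quoted above, assuming pyepics is available on a workstation; the channel names are placeholders (not verified H1 PSL channels) and the bounds are eyeballed from the numbers in this entry:

# Hedged sketch of an automated PSL status check against nominal ranges.
# Channel names are placeholders; bounds mirror the numbers quoted above.
from epics import caget  # pyepics

CHECKS = {
    # name: (placeholder channel, low, high, unit)
    "NPRO output power":    ("H1:PSL-NPRO_POWER",      1.8,   2.2, "W"),
    "AMP1 output power":    ("H1:PSL-AMP1_POWER",     65.0,  75.0, "W"),
    "AMP2 output power":    ("H1:PSL-AMP2_POWER",    135.0, 140.0, "W"),
    "ISS diffracted power": ("H1:PSL-ISS_DIFFRACTED",  2.0,   4.0, "%"),
}

for name, (chan, low, high, unit) in CHECKS.items():
    value = caget(chan)
    if value is None:
        print(f"{name}: no response from {chan}")
    elif low <= value <= high:
        print(f"{name}: {value:.2f} {unit} (OK)")
    else:
        print(f"{name}: {value:.2f} {unit} (outside {low}-{high} {unit})")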
Sun Aug 25 08:10:32 2024 INFO: Fill completed in 10min 28secs
TITLE: 08/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
H1's been locked for 3.5hrs (looked like Tony got a wake-up call during H1 downtime, but needed minimal intervention). BS camera is operational. Seismically fairly quiet for the last few hours (notable exception: corner station 3-10Hz had elevated levels from 1210-1300utc, attached). No Alarms.
Within hours of H1 returning to Observing yesterday, H1 rejoined L1 & V1 for the gravitational wave candidate S240825ar (H1's last contribution to a gravitational wave candidate with L1 & V1 was back on July 16th for S240716b).
TITLE: 08/25 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing and have been Locked for over 10.5 hours. Shift has been quiet. Corey said relocking this morning was hands-off and smooth, so hopefully that'll continue through the night.
LOG:
23:30 Observing and have been Locked for over 5 hours
00:02 Verbal: "WAP on in LVEA, EX, EY" (Last time this happened that I could find was mid-May 77860)
Currently Observing and have been Locked for 9.5 hours. Since getting back to Observing we haven't had any issues!
TITLE: 08/24 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 135Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 15mph 5min avg
Primary useism: 0.12 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
Observing and have been Locked for 5.5 hours!
Completed tasks:
Calibration (unthermalized): 79676
Calibration (1.5 hours in): 79689
Calibration (fully thermalized): 79691
LSCFF measurements: 79693
TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Have had the BS camera die on us a couple times in the last 24hrs (this requires contacting Dave to restart the camera's computer, at least until this camera can be moved to another computer), so the beginning of the shift was spent restoring it and also figuring out why H1 had a lockloss BUT did not drop the Input Power down from 61W to 2W.
After the camera issue was fixed, ran an alignment with no issues, and then took H1 all the way back to NLN, also with no issues.
Then completed Calibration measurements after 1.5hrs and 3hrs; Oli also ran LSC Feedforward measurements.
Then there was ANOTHER BS Camera Computer Crash! Dave brought us back fast.
Now back to OBSERVING!
LOG:
Looking into why the input power didn't come back down after a lockloss at ADS_TO_CAMERAS, it seems that the proper decorators that check for a lockloss are missing from the run method (but are there in main). This means that while ISC_LOCK was waiting for the camera servos to turn on, it didn't notice that the IFO lost lock, and therefore didn't run through the LOCKLOSS or DOWN states which would reset the input power.
I've added the decorators to the run method of ADS_TO_CAMERAS so this shouldn't happen again.
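For illustration, a minimal sketch of the pattern described above, using made-up decorator and state names rather than the actual ISC_LOCK guardian code:

# Sketch only: names here are stand-ins for the real guardian decorators/states.
def lockloss_check(method):
    # Hypothetical decorator: jump to LOCKLOSS if the IFO has lost lock.
    def wrapper(self, *args, **kwargs):
        if not self.ifo_is_locked():          # placeholder lock check
            return 'LOCKLOSS'                 # guardian-style state jump
        return method(self, *args, **kwargs)
    return wrapper

class ADS_TO_CAMERAS_SKETCH:
    # Stand-in for a guardian GuardState subclass.
    def ifo_is_locked(self):
        return True                           # placeholder

    @lockloss_check
    def main(self):                           # already had the check
        return True

    @lockloss_check                           # the fix: run() needs it too, since
    def run(self):                            # run() loops while waiting for the
        return True                           # camera servos to turn on

Without the decorator on run(), the state can sit in its run() loop and never notice the lockloss, which is why ISC_LOCK never went through LOCKLOSS/DOWN to reset the input power.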
3.5 hours into our lock we took LSCFF measurements. Files can be found in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward/ and are named after the measurement template plus '_20240824', but I'll list their filenames below anyway (a quick existence check is sketched after the list).
MICH Preshaping (run as 30 fixed averages)
MICHFF_excitation_ETMYpum_20240824.xml
MICHFF (30 accumulative averages)
MICH_excitation_20240824.xml
SRCL Preshaping (32 accumulative averages)
SRCLFF_excitation_ETMYpum_20240824.xml
SRCLFF (30 accumulative averages)
SRCL_excitation_20240824.xml
PRCLFF (30 accumulative averages)
PRCL_excitation_20240824.xml
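A quick sketch for checking that those files landed where expected (directory and filenames taken verbatim from this entry):

# Check that the listed feedforward measurement files exist.
import os

FF_DIR = "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward"
FILES = [
    "MICHFF_excitation_ETMYpum_20240824.xml",
    "MICH_excitation_20240824.xml",
    "SRCLFF_excitation_ETMYpum_20240824.xml",
    "SRCL_excitation_20240824.xml",
    "PRCL_excitation_20240824.xml",
]

for name in FILES:
    path = os.path.join(FF_DIR, name)
    print(("found   " if os.path.isfile(path) else "MISSING ") + path)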
For H1's recovery road back to Observing, it was requested to take calibrations (1) immediately at NLN, (2) 1.5hrs after NLN, and then (3) the standard one at least 3hrs after NLN.
Measurement NOTES:
Now that this is complete, we can cross this off the white board, and Oli immediately started running LSC Feed Forward measurements.
(Corey, Oli, RyanS [remote], Sheila [remote])
After roughly 6 weeks (since Jul 12, 2024), H1 made it back to Observing (at 2042 UTC Aug 24) to join L1 & V1 (albeit briefly, since we needed to drop from Observing to take a thermalized calibration measurement + LSC FeedForward offset measurement). H1 range is only about 140Mpc.
We are running with an input power setting of 61W (to help avoid PI Mode24).
To get to Observing, SDF Diffs needed to be ACCEPTED (see attached image with all the diffs).
h1cam26 (BS) was restarted Thur 12:56 PDT after stopping late Wed 22:23. It lasted less than two days before it stopped responding Fri night at 22:30.
I restarted it again this morning at 11:07 following the standard procedure.
and again...
h1cam26 stopped running at 15:25 (only ran for 4hr 18min). I power cycled camera at 15:41.
Dave and Jonathan have fixed the CDS FE issues, and we are now starting recovery. I also found the HAM5 ISI tripped as well as SRM and OFI; it looks like this happened about 4 hours ago, a few hours after the FEs tripped. No idea why they tripped yet.