TITLE: 10/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 21mph Gusts, 13mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
Sudden lockloss @ 21:55 UTC, very likely caused by a PSL FSS issue.
The IMC had a hard time relocking after the lockloss.
NLN Reached @ 22:54 UTC
Observing reached @ 22:56 UTC
It definitely looks like the FSS had a large glitch and lost lock before DARM saw the lockloss. Unlike earlier locklosses, though, this one did not have FSS glitches happening beforehand.
The FMCS RO alarm, which has had several 1-minute alarms over the past few weeks after months of silent running, went into alarm at 05:09 Tue 01oct2024 for a duration of 2 hours and 47 mins, ending at 07:57.
Attached plot shows the time difference between the EY CNS-II independent GPS receiver and the timing system. For many months (since April) it has been steady at -250 +/- 50 ns.
Early Monday morning, 30sep2024, it flip-flopped between -250 ns and -800 ns from 05:55 to 06:42 PDT (48 mins).
Then on Tuesday it did a similar thing, for 3 mins at 07:15 and for 9 mins at 07:46. The second period coincides with Erik working on the h1iscey replacement.
The EX CNS-II time difference has been steady throughout.
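For reference, a minimal sketch of how one might trend this difference and flag excursions outside the usual -250 +/- 50 ns band (gwpy assumed; the channel name below is a placeholder, not the real timing record):

    # Sketch only: the channel name is a placeholder, not the real EPICS record,
    # and the channel is assumed to report the time difference in ns.
    from gwpy.timeseries import TimeSeries

    CHANNEL = "H1:CNS_EY_GPS_TIME_DIFF"  # hypothetical channel name
    data = TimeSeries.get(CHANNEL, "2024-09-30 05:00", "2024-09-30 08:00")

    # Flag samples outside the nominal -250 +/- 50 ns band
    nominal, tol = -250, 50  # ns
    bad = (data.value < nominal - tol) | (data.value > nominal + tol)
    print(f"{bad.sum()} of {len(data)} samples outside the nominal band")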
TITLE: 10/02 Day Shift: 2030-0500 UTC (1330-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Camilla
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 20mph Gusts, 11mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.41 μm/s
QUICK SUMMARY:
Taking over H1 early for the Eve Shift, for a full-length OPS shift.
Camilla did a great job as a substitute operator, glad to have her back in the Operator core again! She passed me a locked and Observing IFO!
2ndary useism is falling.
TITLE: 10/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Short locks this shift but there is still some wind and microseism. We reduced input power to 60W from 61W.
LOG:
All re-locks today and last night were fully automatic.
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:01 | FAC | Karen | Optics lab/Vac Prep | N | Technical Cleaning | 15:24 |
15:43 | FAC | Karen | MY | N | Technical Cleaning | 16:42 |
16:21 | SQZ | Sheila | SQZT0 | LOCAL | Realign SHG fiber | 16:52 |
16:39 | FAC | Chris | MX,MY | N | FAMIS tasks: fire and first aid kit checks | 17:16 |
17:04 | FAC | Karen | Woodshop/Fire pump room | N | Technical cleaning | 17:53 |
19:00 | PEM | Robert/Anamaria/Lance | X-arm | N | CE test setup, staying outside | Still out |
19:57 | SQZ | Sheila | LVEA | N | Grabbing parts | 20:01 |
Sheila and I reduced the IMC input power from 61W back to 60W. As we haven't been very stable today, it's possible this might help.
It was increased in 80295 in an attempt to avoid PIs; no IFO retuning was done with this setting. This didn't help the PIs, and the ETMY ring heater was increased in 80320, which successfully avoided the PI locklosses.
Just lost lock, but we were locked and observing with a range around 150 Mpc for the last 2 hours.
Eric is watching the LVEA temperature as it's changing a little.
Following up on the need to decrease the OPO pump circulating power last night (80413), I went to SQZT0 to see if I could improve the pump fiber alignment. I was only able to get a small improvement by walking the steering mirrors into the pump fiber, and also a small decrease in the rejected polarization in HAM7 by walking the half wave plate and quarter wave plate in the pump fiber path.
The attached trend shows that since the OFI vent, the green power reflected from the OPO, in both polarizations, has been steadily increasing while the transmitted power has been constant, which means that the launched power has had to increase. This indicates that we should try to move the OPO crystal soon.
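As a rough illustration only, a sketch of how one could trend the reflected vs transmitted green power (the channel names and dates below are placeholders, not the actual SQZ records):

    # Sketch only: placeholder channel names for the OPO green REFL/TRANS monitors,
    # and an approximate start date for the post-vent span.
    from gwpy.timeseries import TimeSeriesDict
    from gwpy.plot import Plot

    chans = ["H1:SQZ-OPO_REFL_DC_POWER", "H1:SQZ-OPO_TRANS_DC_POWER"]  # hypothetical
    data = TimeSeriesDict.get(chans, "2024-05-01", "2024-10-02")

    # If transmitted power is flat while reflected power climbs, the launched
    # power (roughly their sum, ignoring losses) has to climb with it.
    plot = Plot(data[chans[0]], data[chans[1]], separate=True, sharex=True)
    plot.show()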
I'm tagging DetChar here because when I went to the table this morning I realized that the lights have been on in SQZT0 since last Tuesday (80266); they are off now as of 17:00 UTC on October 2nd. Apologies if there is any change in 60 Hz or sidebands during this time.
1411920195 Not sure why we lost lock. This was not a lockloss related to the FSS glitching. 20mph wind and SEI config is in USEISM still.
Wed Oct 02 08:08:28 2024 Fill completed in 8min 20secs
Jordan confirmed a good fill curbside.
EY trip level is @ 8-1/8 running at 8-1/4
EX trip level is @ 7 running at 7-3/16
Corner trip level is @ 5-5/16 running at 5-3/4
Dust monitor pumps are running smoothly. End station pump 0108112950 had a seized bearing. A new bearing was installed and it is now back in rotation.
In the LVEA, the heating coils have had their first stages manually enabled. This will prevent unnecessary staging in and out of heating stages which would cause coupling into the IFO. This will require some additional monitoring of temps in the LVEA.
TITLE: 10/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 18mph Gusts, 11mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
IFO is re-locking at OFFLOAD_DRMI_ASC after a lockloss from state 575 LASER_NOISE_SUPPRESSION. I reverted owl operation so IFO_NOTIFY is now IDLE, but took H1_MANAGER back in control by requesting INIT, LOW_NOISE.
Locklosses during the night:
Gerardo, Jordan, Travis, Janos
Reacting to the issue in aLog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80370, the HAM1 AIP was checked for controller and pump failures. After trying a couple of controllers, it was clear that the pump had broken down. Shortly after the maintenance period, locking was impossible because of the high seismic activity, so the vacuum team took the opportunity and swapped the pump (as it would normally have been impossible during the 4 hours of the maintenance period). The AIP was swapped with a noble diode pump - this was not exactly intentional, but turned out to be a happy accident, as it seemingly works much better than the Starcell or even the Galaxy pumps. However, as the noble diode pump has positive polarity, a positive polarity controller was needed: the only available piece is an Agilent IPC Mini (see the attached picture), which works well, but in the MEDM screen it appears to be faulty due to its wiring differences.
All in all, the HAM1/HAM2 twin-annulus system was pumped with the 2 AIPs and an Aux cart - Turbo pump bundle. The noble diode pump stabilized very nicely (at 5-7E-7 Torr, which is unusually low), so eventually the Aux cart - Turbo pump bundle was switched off at 6:55 pm PT. Since then, the 2 AIPs continue to decrease the annulus pressure, which is indeed very nice, so practically we are back to normal. In the meantime, Gerardo quickly modified an appropriate Varian controller to have positive polarity, so at the next opportunity the vacuum team will swap it with the Agilent controller, and the MEDM screen will then also read normally. Note that the Aux cart - Turbo bundle remains there until this swap happens.
While the ion pump was being replaced we managed to trip the cold cathode (PT100B) that gives us the signal for the vacuum pressure internal to HAM1. We found the CC off last night, but since the IFO was collecting data we decided to wait until we were out of lock; I turned the CC back on this morning. See trend data attached.
Diode pumps are typically faster than triode pumps including Starcell cathode types. Nice work!
Removed the Agilent IPC Mini controller that was temporarily installed on Tuesday, and replaced it with a positive (+) MiniVac controller. Attached is a trend of current load for both controllers, HAM1 and HAM2.
Note for the vacuum team, we still have the aux cart connected to HAM2 annulus ion pump isolation valve.
LVEA has been Swept.
Ready to start locking now.
Travis noticed that the paging system was buzzing in the offices this morning. At 16:25 UTC we unplugged both the paging system and the PSL phone, which had been missed yesterday. Tagging DetChar in case that caused any excess noise in last night's locks.
Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. Those are visible in the summary pages too around hour 12 of this plot.
Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in a period with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.
Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find if this noise is correlated to some environmental or other changes.
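For anyone repeating this check, a minimal sketch of the median-whitened spectrogram described above (gwpy assumed; the FFT/stride settings are guesses, and the strain channel used here is the CALIB_STRAIN_CLEAN channel named later in this thread):

    from gwpy.timeseries import TimeSeries

    # ~1 hour of strain starting at the GPS time quoted above
    strain = TimeSeries.get("H1:GDS-CALIB_STRAIN_CLEAN", 1410607604, 1410607604 + 3600)

    # ASD spectrogram, whitened by the median to bring out the transient bumps
    specgram = strain.spectrogram(30, fftlength=10, overlap=5) ** (1 / 2.)
    plot = specgram.ratio('median').plot(norm='log', vmin=0.5, vmax=10)
    ax = plot.gca()
    ax.set_ylim(10, 60)  # the bumps sit near multiples of ~11.6 Hz
    plot.show()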
I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found.
I first started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can re-visit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured maybe this could at least narrow down a station where there is more noise at these frequencies.
What I found was:
I was able to run lasso on a narrower strain blrms (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz blrms of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain blrms of the same band via lasso. In the attached pdf, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret, though. There seem to be roughly 5 peaks in the aux channel blrms, but only 2 major ones in the strain blrms. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say if that is significant or not, since I am only looking at about an hour's worth of data.
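A rough sketch of this kind of fit, a 21-25 Hz BLRMS of a couple of aux channels regressed against the strain BLRMS with scikit-learn's Lasso (the channel names are abbreviated as in the text above and may not be exact, the strides and alpha are guesses, and the real study used the full CS/EX/EY channel set):

    import numpy as np
    from sklearn.linear_model import Lasso
    from gwpy.timeseries import TimeSeries, TimeSeriesDict

    start, end = 1410607604, 1410607604 + 3600   # ~1 hour around the noise
    aux_names = [
        "H1:ISI_HAM8_BLND_GS_X",  # abbreviated names from this entry;
        "H1:ISI_HAM8_BLND_GS_Y",  # the real list covered many more channels
    ]

    def blrms(ts, flow=21, fhigh=25, stride=60):
        """Band-limited RMS: bandpass, then RMS over fixed-length strides."""
        return ts.bandpass(flow, fhigh).rms(stride)

    target = blrms(TimeSeries.get("H1:GDS-CALIB_STRAIN_CLEAN", start, end)).value
    aux = TimeSeriesDict.get(aux_names, start, end)
    features = np.column_stack([blrms(aux[n]).value for n in aux_names])

    # Standardize, then fit a sparse linear model of the strain BLRMS
    target = (target - target.mean()) / target.std()
    features = (features - features.mean(0)) / features.std(0)
    model = Lasso(alpha=1e-3).fit(features, target)
    print("r^2 =", model.score(features, target))
    for name, coef in zip(aux_names, model.coef_):
        print(name, coef)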
I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.
I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.
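A minimal sketch of one way to check that temporal correlation (band-limited RMS of the strain next to the accelerometer RMS over the same span; the 1000-1400 window is assumed to be UTC, and the strides/bands are guesses):

    from gwpy.timeseries import TimeSeries
    from gwpy.plot import Plot

    start, end = "2024-09-17 10:00", "2024-09-17 14:00"  # assumed UTC window

    strain = TimeSeries.get("H1:GDS-CALIB_STRAIN_CLEAN", start, end)
    acc = TimeSeries.get("H1:PEM-CS_ACC_IOT2_IMC_Y_DQ", start, end)

    # 10 s RMS trends: low-frequency strain noise vs accelerometer activity
    strain_blrms = strain.bandpass(20, 30).rms(10)
    acc_rms = acc.rms(10)

    # Stack the two so coincident transients line up in time
    plot = Plot(strain_blrms, acc_rms, separate=True, sharex=True)
    plot.show()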
WP 12080. After turning the HWS lasers back on in 80043, today I reinstalled the masks. To get the right amount of light on the CCD camera, ITMX is set to 1 Hz and ITMY to 20 Hz. New references were taken at 15:50 UTC, 30 minutes after we lost lock, so the IFO wouldn't have been 100% cold.
HWS is aligned and working well.
Plots are attached comparing 40 seconds, 120 seconds and 20 minutes after we powered up to 60W input.
Comparing to March 76385, the ITMX main point absorber looks to be heating up more than it has in the past, but the main IFO beam also looks a little lower and more offset than in March (origin cross is the same), so that could be causing the point absorber to look different. The ITMX P2L is the same as in March and the Y2L is 0.1 higher.
Noticed the ITMY HWS code had stopped, error attached. While we were out of observing today, I restarted it.