We went into No SQZ from 15:34 UTC to 15:42 UTC. I checked the NLG as in alog 76542.
OPO Setpoint | Amplified Max | Amplified Min | UnAmp | Dark | NLG |
---|---|---|---|---|---|
80 | 0.0134871 | 0.00017537 | 0.0005911 | -2.57e-5 | 21.9 |
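For reference, a minimal sketch of how the NLG column can be reproduced from the others, assuming NLG is the dark-corrected ratio of amplified-max to unamplified power (an assumption on my part, but it reproduces the 21.9 above):

```python
# Hedged sketch: NLG taken as the dark-corrected ratio of amplified-max
# to unamplified OPO power (assumed formula; values from the table above).
amplified_max = 0.0134871
unamp = 0.0005911
dark = -2.57e-5

nlg = (amplified_max - dark) / (unamp - dark)
print(f"NLG = {nlg:.1f}")  # prints: NLG = 21.9
```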
TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1's been locked 16.5+hrs (even rode through an M6.0 Russia EQ an hr ago!).
OVERNIGHT: SQZ dropped H1 from Observing for 4min due to the PMC. NO Wake up Calls!
ALSO: Thursday calibration from 0830 to noon (local time); calibration will be deferred until later in the morning, with an Elenna task and/or Robert task starting things off.
Did the Dust Monitor check; a new (to me at least) problem showed up for DR1 (Diode Room 1).
TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, 7 hours. Calm evening.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 13:52
16:48 | PSL | RyanS, Jason | Optics Lab | YES | NPRO! | 00:43
22:36 | ISS | Keita, JennyQ | Optics Lab | YES | ISS array align | 00:53
22:46 | ISS | Rahul | Optics Lab | YES | Join ISS array work | 00:05
00:02 | FAC | Tyler | X1 beamtube | N | Checks | 00:30 |
Closes FAMIS 28426, last checked in alog 87272.
ETMX's Veff error bars are pretty big, as expected. The values look fairly stable over time.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
One ETMx glitch lockloss, and a fire observed on Rattlesnake.
LOG:
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
At 2047 UTC, H1 went out of Observing due to the SQZ PMC. It was almost back, but then H1 had a lockloss at 2050 UTC due to an ETMx glitch. Recovery was fairly quick, but I needed to tweak ETMx & mostly TMSx. Then DRMI looked bad, so I went through CHECK MICH FRINGES + PRMI, after which H1 got back to OBSERVING automatically.
FAMIS 27425
pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.
Jennie W, Keita, Rahul
Monday:
Keita and I re-optimised the array input pointing using the tilt actuator on the PZT steering mirror and the translation stage.
After taking a vertical and horizontal coupling measurement, he realised that the DC values were very low when we optimised the pointing to improve the vertical coupling. Looking at the QPD, the cursor was in the bottom half, so we could not use the QPD y-readout channel to work out the 'A' channel for either measurement (where the TF is B/A).
So for the A channel in the TF for the vertical coupling we had to use
A = QPD_X/(cos(QPD_angle*pi/180))/sqrt(2)/Calib_X, where A is the time series for the TF, 'QPD_angle' is the angle between the horizontal dither axis and the QPD x-axis, and Calib_X is the calibration of motion on the QPD in the x-axis in V/mm (LHO alog #85897).
And for the A channel in the TF for the horizontal coupling we had to use
A = QPD_X/(cos(QPD_angle*pi/180))/Calib_X.
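For clarity, a minimal sketch of the two constructions above in Python (variable names are illustrative; qpd_x, qpd_angle_deg and calib_x stand for the quantities just defined):

```python
import numpy as np

def a_channel(qpd_x, qpd_angle_deg, calib_x, vertical):
    """Build the TF 'A' (drive) time series from the QPD x readout.

    qpd_x         : QPD x-channel time series [V]
    qpd_angle_deg : angle between the horizontal dither axis and the QPD x-axis [deg]
    calib_x       : QPD x-axis calibration [V/mm]
    vertical      : True for the vertical-coupling TF (extra 1/sqrt(2) factor)
    """
    a = qpd_x / np.cos(np.deg2rad(qpd_angle_deg)) / calib_x
    if vertical:
        a = a / np.sqrt(2)
    return a  # motion in mm; the TF is then B/A
```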
The data is plotted here.
Yesterday Keita and I double-checked the beam size calculation I did on 26th June, when we reset the alignment to the ISS array from the laser after it got bumped (we think). The calculated beam radius on PD1 (the one with no transmissions through the mirrors) was 0.23 mm in the x direction and 0.20 mm in the y direction. The calculated beam size on the QPD was 0.20 mm in the x direction and 0.19 mm in the y direction. The waist should be close to this point (behind the plane of the array photodiodes) as the Rayleigh range is 9 cm in the x direction and 10 cm in the y direction.
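As a quick sanity check (my assumptions: 1064 nm light and the Gaussian-beam relation z_R = pi*w0^2/lambda), the quoted Rayleigh ranges are the right size for these spot radii; the values below come out slightly high because the true waist is a bit smaller than the spots measured at the PD plane:

```python
import numpy as np

WAVELENGTH = 1064e-9  # m; assuming the usual 1064 nm NPRO light

def rayleigh_range_cm(w0_mm):
    """Rayleigh range z_R = pi * w0^2 / lambda for a Gaussian waist w0."""
    w0 = w0_mm * 1e-3  # mm -> m
    return np.pi * w0**2 / WAVELENGTH * 100  # m -> cm

# Spot radii measured at the QPD plane; the waist sits behind this plane
# and is slightly smaller, so these mildly overestimate z_R.
for axis, w_mm in [("x", 0.20), ("y", 0.19)]:
    print(f"{axis}: w = {w_mm} mm -> z_R ~ {rayleigh_range_cm(w_mm):.0f} cm")
# x: ~12 cm, y: ~11 cm, consistent in scale with the 9-10 cm quoted above
```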
This check is because our beam calibration, as reported in this entry, seems to be at least a factor of 2 off from Mayank and Shiva's measurements reported here (DCC LIGO-T2500077).
Since we already knew the QPD was slightly off from our measurements on the 6th of October, Keita and Rahul went in and retook the calibration measurements of volts on the QPD versus mm of motion on the QPD.
In the process Keita noticed that the ON-TRAK amplifier for the QPD had the H indicator lit and so was saturating. He turned the input pump current down from 130 mA to 120 mA and the gain value on the amplifier from G3 (64k ohm) to G2 (16k ohm). The QPD was also recentred on the input pointing position where we had good vertical and horizontal coupling, as we had left it in the position we found on Monday where it was off in yaw.

We had to do several iterations of alignment, switching between vertical and horizontal dither, and still could only find a place where the coupling of PDs 1-4 was optimised; PDs 5-8 have bad coupling at this point. At this position we also took calibration measurements where we scanned the beam and noted down the voltage on the QPD X, Y and SUM channels.
Keita notes that for the QPD to be linear, the voltages need to be within +/- 7 V.
I will attach the final set of measurements in a comment below.
We left the alignment in this state with respect to the bullseye QPD readout.
The coupling measurement from the last measurements we took on Tuesday is here, and the calibration of the motion on the QPD is here.
I was calibrating the above data using the Calib_X and Calib_Y values individually instead of by sqrt(Calib_X^2 + Calib_Y^2).
Fixed this in the attached graph.
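A minimal sketch of that correction, with illustrative numbers only (the real calibration values are in the linked measurement):

```python
import numpy as np

# Illustrative per-axis QPD calibrations [V/mm]; the real values are in
# the linked calibration measurement, not these.
calib_x, calib_y = 8.0, 6.0

# Before (incorrect): each axis scaled by its own calibration alone.
# After (corrected): scale by the quadrature sum of the two calibrations.
calib = np.sqrt(calib_x**2 + calib_y**2)  # -> 10.0 V/mm for these numbers
```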
Also, 3 of the PDs are starting to near the edge of their aligned range, which can be seen by looking at the spread of DC values on the array PDs in this graph.
Wed Oct 08 10:06:51 2025 INFO: Fill completed in 6min 48secs
Gerardo addressed the TC-B issue due to ice buildup. Fill completed using the working TC-A.
As H1 made its first attempt at squeezing at 2150 UTC, when H1 was at NLN, SQZ_MANAGER was in an odd state. Although it passed through the Inject Squeezing state, it hadn't actually attempted squeezing (the SQZ ndscope FOM showed no action/changes). It looked like SQZ_MANAGER was requesting DOWN, but it was not able to complete the DOWN.
I've not seen this before, so I took the opportunity to follow the new SQZ Flow Chart....
All the other SQZ nodes were already DOWN (SQZ_MANAGER was requesting DOWN, but was not in the (green) DOWN state). These nodes were LOCKED: PMC & SHG.
Started going through the SQZ Flow Chart Camilla passed on to operators recently (T2500325). NOTE: I did not take all SQZ nodes to DOWN; PMC & SHG were nominal/LOCKED, so I kept them there since this was the first step of the Flow Chart (but was this the reason SQZ_MANAGER wasn't in the DOWN state?). In hindsight, I probably should have taken these down to truly follow the instructions, at any rate...
Going through this flow chart, I got to the step where we check the SHG power to see if it is less than 90 (it was down at ~85). At this point I adjusted the SHG TEC temp to increase the SHG power and made it up to ~119, but then attention switched to SQZ_MANAGER and why it wasn't DOWN. Around this time Oli & Tony offered assistance.
At this point I started relocking SQZ normally and discovered the OPO_LR was going into "siren mode": basically a fast cycle of going to DOWN, plus the notification about "pump fiber rej power in ham7 high...". Addressing the latter "common notification" involved working with the SQZT0 waveplates (this was very likely due to the big adjustment UP in SHG power I had made).
Tony & Oli (who had experience w/ these waveplate adjustments) needed several iterations of adjusting SQZT0 waveplates (due to the increased SHG power): the 1/2 & 1/4 waveplates downstream of the SHG Launch beamsplitter path (see image #1) AND the 1/2 waveplate upstream of the SHG Rejected + Launch paths (see image #2). This took some time because of how much the SHG power had been increased earlier and because every waveplate adjustment unlocked the PMC & SHG.
Eventually they were able to get the SHG_FIBR_REJECTED_DC_POWERMON & SHG_LAUNCH_DC_POWERMON signals to their nominal values (these signals are also in Tony's screenshots, i.e. images #1 & #2). After 30+ min of adjusting, the SQZ was able to lock, and H1 was automatically taken to Observing as soon as the new SHG TEC SETTEMP was accepted in SDF (see image #3). H1 went to Observing just as Sheila was walking into the Control Room, in time to assess and get a rundown of what happened.
Many thanks to Oli & Tony for the help and Tony's screenshots!
It looks like the issue was that SQZ_MANAGER was staying in DOWN rather than moving up to the requested FRS_READY_IFO, as the beam diverter had been left open after the 87342 tests. Re-requesting SQZ_MANAGER to DOWN would have closed the beam diverter and allowed it to try to relock.
Corey correctly increased the available SHG power with the TEC; however, this meant there was too much power going into the SHG fiber for the OPO to lock. This is surprising to me: I would have expected it to lock with no ISS and then unlock when the ISS couldn't get the correct OPO trans power, but that wasn't the case. Oli then correctly adjusted the power control waveplate (PICO I #3) to reduce the power going into the fiber; this is annoying to do as it unlocks everything each time the waveplate is touched, hence it not being in the T2500325 flowchart.
This is an unusual reason why squeezing didn't work, so no changes have been made to anything.
TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
H1's been locked 1.25 hrs. Quiet environmentally. 2nd day in a row of having the LAB1 500nm dust alarm (ringing but not active) in the Optics Lab.
Overnight:
1056-1319 UTC: DOWN due to a lockloss from an M5.0 Guatemala earthquake. H1 spent 49 min trying to get to DRMI with no luck before running an alignment, and then the alignment was plagued by a glitching SRM with (2) SRM watchdog trips (TJ was awakened by this and alogged it this morning). After the SRM hubbub (glitches from 1215-1228 UTC), TJ was able to have H1 automatically make it back up to Observing.
SRM M1-M3 had tripped during initial alignment while trying to lock SRY. This has been the case for the last few weeks. Even after the WD trips, the SRM model will continue to saturate for several minutes. After it failed on its own a few times, I requested ALIGN_IFO to DOWN and adjusted SRM in P to get AS_A brighter and the camera looking better. This worked; I ended up moving it by 26 urad.
For whatever reason, trying to trend what was happening with SRM right before the trips freezes my ndscope. Everything else is running fine. I'll look into this back on site.
Just made it past DRMI, all looks good.
Looks like SRY locked but then immediately lost it, perhaps right on the cusp of having enough signal. ALIGN_IFO goes to DOWN and starts to turn things off, but the L drive from SRCL is still on for ~2.5 seconds. Even after it gets turned off, SRM is still swinging. (attachment 1)
I'm a bit confused how Y is seen to be moving at 1 Hz so much when we are only driving L. Looks like this issue has only been happening for a few weeks when I trend the SRM WD trips. We have tripped not only during SRY locking but a few times during DRMI acquisition as well. The timing is very suspicious given the SRM sat amp swap from Sept 23 (alog 87103, alog 87105).
TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, just over 7 hours. PEM-MAG-INJ has a CFC.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD | 13:52
23:52 | PSL | Jennie, Rahul | Optics lab | LOCAL | ISS Array work, in at 23:00 | 00:18 |
Gosh---I thought I posted my Day shift Summary, but obviously did not---posting it here:
TITLE: 10/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
Mostly straightforward maintenance with all activity logged below. The shift ended with some SQZ locking trouble, but after an hour of working the issue, H1 was back to Observing.
LOG:
This morning while in the CER for other reasons, I heard a coil driver on HAM4 ISI making a ticking/chirping noise. Fil suggested this was probably a fan, so we've pulled the chassis to check it out. We've put in a spare and will probably run that until next maintenance.
Chassis pulled is S1100320; the spare we put in is S1103567. FRS: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=35547
We swapped the old chassis back in yesterday, after Fil replaced the fan last week.
Elenna noticed this OPO temp change made SQZ worse at the yellow BLRMs. I then ran SQZ_OPO_LR's SCAN_OPOTEMP state, which moved the OPO temp further in the same direction. This unlocked SQZ (which it shouldn't have), but it did make the yellow BLRMs better.