TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 05:05 UTC
Overall a calm shift with one slightly annoying lockloss during an OPO temperature optimization. The reacquisition was slow due to the environment (high winds and an earthquake) but was ultimately fully automatic after an initial alignment. Lockloss alog 82215.
Other than this, the SQZ OPO temperature had to be adjusted, so I went out of OBSERVING for 6 minutes (5:05 UTC to 5:11 UTC) and adjusted it successfully, bringing the range from ~138 to ~158 Mpc (screenshot attached).
Otherwise, the microseism seems to be coming down slowly and high winds have calmed down.
LOG:
None
Lockloss of unknown cause, but it is reasonable to assume it was either microseism or squeezer related (though I'm unsure whether SQZ can cause a lockloss like that).
The lockloss happened as I was adjusting the SQZ OPO temperature, since over the last hour there had been a steady decline in the H1:SQZ-CLF_REFL_RF6_ABS_OUTPUT channel. After noticing this, I went out of OBSERVING and into CORRECTIVE MAINTENANCE to fix it. The adjustment was actually working (the channel signal was getting higher with each tap) when the LL happened. This coincided with a 5.2 EQ in Ethiopia, which I do not think was the cause but may have moved things while in this high-microseism state. I've attached the trends from the SQZ overview and OPO temp, including the improvements made. If it is possible to induce a LL by changing the OPO temp, then this is likely what happened, though the LL did not happen while the temp was being adjusted, as the 3rd screenshot shows.
Seems that the microseism, the earthquake and SQZ issues also coincided with some 34mph gusts so I will conclude that it was mostly environmental.
Short update on locking: after a very slow initial alignment, locking took some time to get through DRMI but managed after going to PRMI (and losing lock due to BS in between). We are now sitting at DARM_OFFSET but signals have not converged after 10 minutes due to a passing 5.0 from Guatemala (which I believe we will survive).
As Ibrahim said, the OPO temp adjustment would not cause a lockloss.
However, we can see that at this time on Friday, and two days before, the SQZ angle servo and ASC got into a strange ~13 minute oscillation when the OPO temperature was bad and the SQZ angle was around 220deg. See attached plots. We are not sure why this is. Now that we are a week on from the OPO crystal move 82134, the OPO temperature is becoming more stable but will still need adjusting for the next ~week.
TITLE: 01/11 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
IFO Has been locked for 9 hours.
It's been a perfect day for Observing.
Nothing to really report.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
16:09 | FAC | Kim | Optics Lab | n | technical cleaning | 16:39 |
22:12 | VAC | Jordon & Janos | EX | No | Going to EX Mech room to check & get Vacuum Equipment | 22:43 |
23:37 | PCAL | Mr. Llamas | PCAL Lab | YES | Starting a PCAL lab measurement | 23:48 |
TITLE: 01/11 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.48 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 15:24 UTC
Jennie W, Sheila,
On Wednesday, Sheila and I compared the measurement of HAM 6 throughput taken in April 2024 (where we stepped the DARM offset and compared the power at the ASC port that is sensitive to DARM with the power on the DCPDs) against the HAM 6 efficiency for this light predicted by the known losses as obtained from the squeezer noise budget.
We found that combining these gives a prediction of 12.2% unknown losses, which roughly matches the 10.4% unknown losses predicted from squeezing measurements (alog 82097).
HAM6 throughput measured in April 2024, from alog 79146: 80.2%
HAM6 known losses (google sheet): (1-0.00072)*(1-0.015)*(1-0.0096)*(1-0.044)*(1-0.02) = 91.3% expected HAM6 throughput
Unknown HAM6 losses based on this measurement = 1-0.802/0.913 = 12.2%
Yesterday I looked at Camilla and Elenna's DARM step measurement from 21st October 2024, and found the unknown loss is 10.2 % using the HAM6 throughput measured in October which matches the 10.4% unknown losses predicted from squeezing measurements (alog # 82097).
HAM6 throughput measured in October 2024 from alog #82204: 82 %
Unknown HAM6 losses based on this measurement: 1-0.82/0.913 = 10.2%
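As a quick cross-check of the arithmetic above, here is a minimal sketch (not the actual noise budget code) that reproduces the expected throughput from the known losses and the unknown-loss numbers for both measurements:

```python
# Sketch of the HAM6 unknown-loss arithmetic above. The individual known losses
# are the google-sheet values quoted in this entry; the measured throughputs are
# the April 2024 (alog 79146) and October 2024 (alog 82204) numbers.
import numpy as np

known_losses = [0.00072, 0.015, 0.0096, 0.044, 0.02]
expected_throughput = np.prod([1 - loss for loss in known_losses])   # ~0.913

for label, measured in [("April 2024", 0.802), ("October 2024", 0.82)]:
    unknown = 1 - measured / expected_throughput
    print(f"{label}: unknown HAM6 loss = {unknown:.1%}")
# Prints ~12.2% (April) and ~10.2% (October), matching the numbers above.
```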
Fri Jan 10 10:10:06 2025 INFO: Fill completed in 10min 3secs
Jordan confirmed a good fill curbside. TCmins [-92C, -90C] OAT (1C, 34F)
TITLE: 01/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.62 μm/s
QUICK SUMMARY:
H1 just recently locked and went into Observing 7 minutes before I walked in.
Everything looks like it is working well at first glance, except H1:PEM-CS_DUST_LAB2 still doesn't work.
TITLE: 01/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Very quiet shift with only one brief drop from observing. H1 has now been locked for 7 hours.
TITLE: 01/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Calibration and commissioning time in the morning. One lockloss after that, where I requested an initial alignment and it ran that as well as main locking on its own. The squeezer issues this morning seem to be fixed. The useism is quite high, but we were able to relock.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:08 | OPS | LVEA | LVEA | N | LASER SAFE | 15:07 |
16:54 | SQZ | Sheila, Camilla | LVEA | LOCAL | SQZ table work | 18:15 |
19:42 | PSL | Betsy, Rick, Rahul | Opt Lab | n | Location scouting | 20:42 |
21:51 | PCAL | Tony, Francisco | PCAL lab | local | Check on lab | 21:52 |
23:41 | VAC | Janos | EX | n | Mech room work | 01:41 |
TITLE: 01/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 6mph Gusts, 3mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.70 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over an hour.
ETM glitch tag, looks like there was some glitchy behavior from ETMX just before the lock loss.
useism is getting to the point that it might be tough to relock, we shall see.
I requested it to start with an initial alignment and then it completed that and main locking without intervention. Back to Observing at 2253UTC
Sheila, Camilla
After TJ found that the OPO was struggling to get enough SHG pump power to lock 82196, Sheila repeated Daniel's 82057 SHG work, as he had increased H1:SQZ-SHG_GR_DC_POWERMON from 95 to 125mW but it is now down to 92mW: she tried steering the beam with pico moves and adjusting the temperature, but it improved to only 93mW.
We went out to SQZT0 and Sheila adjusted the alignment into the OPO pump fiber with the OPO unlocked, bringing the OPO REFL DC power from 1.7 to 1.9mW. We also moved the SHG wave plate further to maximize the amount of light. Now the CLF could lock with controlmon around 5.8V. Much better and closer to the center of the range, so it should avoid this morning's 82196 issues.
Homodyne Work:
To take the data below, the SEED power was reduced back to 0.6mW, and then:
Type | NLG | SQZ dB @ 1kHz | Angle | DTT Ref | Notes |
---|---|---|---|---|---|
Dark Noise | N/A | N/A | | ref 0 | This and shot noise was noisy <300Hz at times when the OPO was locking or scanning, unsure why. |
ASQZ | 11.2 | 14.2 | 241 | ref 2 | opo_grTrans_setpoint_uW = 80uW |
SQZ | 11.2 | -6.8 | 154 | ref 3 | reduced LO loop gain from 15 to -1 |
SQZ | 14.3 | -6.1 | 182 | ref 4 | opo_grTrans_setpoint_uW = 120uW; reduced LO loop gain from -1 to -12 |
ASQZ | 14.3 | 17.8 | 216 | ref 5 | |
ASQZ | 16.0 | 19.1 | 210 | ref 6 | opo_grTrans_setpoint_uW = 140uW |
SQZ | 16.0 | | 191 | ref 7 | |
SQZ | 16.0 | -6.8 | 190 | ref 8 | increased LO loop gain from -12 to -1 as was looking peaky |
Shot Noise | N/A | N/A | | ref 1 | Blocked SEED, LO only. |
We measured ~6.5dB of SQZ on the HD and up to 19dB of ASQZ. Plot attached.
Next time we work on the HD, we should measure the LO loop to know the correct gain to use.
SQZ data set
UTC | Type | CLF Phase | DTT ref |
---|---|---|---|
19:02:00 - 19:08:00 | No SQZ | N/A | ref 0 |
19:14:00 - 19:20:00 (6mins) | FDS SQZ | 177deg | ref 1 |
19:21:00 - 19:25:00 (4mins) | FDS Mid + SQZ | 215deg | ref 2 |
19:26:00 - 19:30:00 (4mins) | FDS Mid - SQZ | 125deg | ref 3 |
19:32:00 - 19:36:00 (4mins) | FDS ASQZ | (-sign) 108deg | ref 4 |
19:40:00 - 19:43:00 (3mins) | FIS ASQZ | (-sign) 108deg | ref 5 |
19:43:15 - 19:45:15 (2mins) | FIS ASQZ +10deg | (-sign) 118deg | ref 6 |
19:45:30 - 19:47:30 (2mins) | FIS ASQZ -10deg | (-sign) 98deg | ref 7 |
19:48:00 - 19:51:00 (3mins) | FIS SQZ | 177deg | ref 8 |
19:51:30 - 19:54:30 (3mins) | FIS Mid + SQZ (check, small low freq glitch at 19:53:49) | 215deg | ref 9 |
19:55:00 - 19:58:00 (3mins) | FIS Mid - SQZ | 125deg | ref 10 |
In 83022 we found that we had pump depletion during these measurements, so the OPO trans powers were actually lower than reported.
I've imported Camilla's data from dtt and subtracted the dark noise. As Camilla notes above, the NLG measurements here don't nicely fit a model with a consistent threshold power. Instead of using the NLG data, I've allowed the threshold power to be one of the parameters that varies in the fit, along with total efficiency and phase noise. In principle one could also use this data to estimate the technical noise that limits squeezing, but I haven't tried that with just a few points here.
The attached plot shows the NLG measurements vs what we would expect given the threshold power that the squeezing and anti-squeezing fits suggest. This model suggests that the total efficiency of the homodyne squeezing is only 81%, which seems too low.
This script is available here: quantumnoisebudgeting, but needs a lot of cleaning up.
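For a sense of what this fit does, below is a minimal sketch, not the actual quantumnoisebudgeting script, that fits the SQZ/ASQZ levels from the homodyne table above (using the nominal opo_grTrans setpoints as pump powers, even though pump depletion means the real values were somewhat lower) with threshold power, total efficiency, and phase noise as free parameters, assuming the standard single-mode OPO model; the initial guesses and bounds are illustrative assumptions.

```python
# Minimal sketch of fitting homodyne SQZ/ASQZ levels with OPO threshold power,
# total efficiency, and phase noise as free parameters (standard single-mode
# OPO model; not the actual quantumnoisebudgeting code).
import numpy as np
from scipy.optimize import least_squares

# (nominal pump power in uW, measured dB) pairs taken from the table above.
pump_uW = np.array([80.0, 120.0, 140.0])
sqz_dB  = np.array([-6.8, -6.1, -6.8])
asqz_dB = np.array([14.2, 17.8, 19.1])

def model_dB(pump, P_th, eta, theta_rms):
    """Squeezed/anti-squeezed noise (dB re. shot noise) vs pump power."""
    x = np.sqrt(pump / P_th)                      # normalized pump amplitude
    v_sqz  = 1 - eta * 4 * x / (1 + x) ** 2
    v_asqz = 1 + eta * 4 * x / (1 - x) ** 2
    # phase noise mixes some anti-squeezing into the squeezed quadrature
    c, s = np.cos(theta_rms) ** 2, np.sin(theta_rms) ** 2
    return (10 * np.log10(v_sqz * c + v_asqz * s),
            10 * np.log10(v_asqz * c + v_sqz * s))

def residuals(params):
    m_sqz, m_asqz = model_dB(pump_uW, *params)
    return np.concatenate([m_sqz - sqz_dB, m_asqz - asqz_dB])

# Illustrative starting point and bounds (threshold must exceed the highest pump power).
fit = least_squares(residuals, x0=[200.0, 0.85, 0.02],
                    bounds=([150.0, 0.0, 0.0], [1000.0, 1.0, 0.3]))
P_th, eta, theta = fit.x
print(f"threshold ~{P_th:.0f} uW, efficiency ~{eta:.2f}, "
      f"phase noise ~{np.degrees(theta):.1f} deg")
```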
All wrapped up, returned to Observing at 2002UTC.
Thu Jan 09 10:03:57 2025 INFO: Fill completed in 3min 55secs
Jordan confirmed a good fill curbside. TCmins [-90C, -88C] OAT (1C, 34F)
Broadband and simulines ran without squeezer today. There was also a small earthquake that started to roll through at 1651UTC during this measurement.
Simulines start:
PST: 2025-01-09 08:36:47.309046 PST
UTC: 2025-01-09 16:36:47.309046 UTC
GPS: 1420475825.309046
Simulines end:
PST: 2025-01-09 09:00:29.590371 PST
UTC: 2025-01-09 17:00:29.590371 UTC
GPS: 1420477247.590371
Files:
2025-01-09 17:00:29,514 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,524 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,528 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,535 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250109T163648Z.hdf5
2025-01-09 17:00:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250109T163648Z.hdf5
TITLE: 01/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 124Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.63 μm/s
QUICK SUMMARY: Locked for 16 hours, useism up slightly from yesterday. The range has been trending down and we've been going in and out of Observing since Tony had to intervene. Looks like something in the squeezer is unlocking. It just unlocked again as I'm typing this up. Investigation ongoing.
Since the squeeze guardians reported that it has been unlocking from the OPO LR node, I adjusted the OPO temperature hoping that it would help. The log of that node is incredibly long and it is very difficult to find what actually happened. From SQZ_OPO_LR: "2025-01-09_15:34:33.836024Z SQZ_OPO_LR [LOCKED_CLF_DUAL.run] USERMSG 0: Disabling pump iss after 10 lockloss couter. Going to through LOCKED_CLF_DUAL_NO_ISS to turn on again."
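Since digging through a long Guardian node log by hand is tedious, a small filter like the sketch below could pull out just the USERMSG and lockloss-related lines; the log path and keywords here are placeholders for illustration, not the actual Guardian tooling.

```python
# Hypothetical helper for skimming a long Guardian node log: print only the lines
# that mention USERMSG or lockloss-related keywords. The path and keywords below
# are placeholders, not the real Guardian log location.
from pathlib import Path

LOG = Path("/tmp/SQZ_OPO_LR.log")                      # placeholder path
KEYWORDS = ("USERMSG", "lockloss", "LOCKED_CLF_DUAL")  # strings worth surfacing

for line in LOG.read_text().splitlines():
    if any(key in line for key in KEYWORDS):
        print(line)
```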
I also should have mentioned that there is calibration and commissioning planned for today starting at 1630UTC (0830 PT).
The opo temperature adjustment did help our range, but it just dropped again. SQZ team will be contacted.
The green SHG pump power needed to get the same 80uW from the OPO has increased from 14mW to 21mW since the OPO crystal swap on Tuesday. The CONTROLMON was at the bottom of its range trying to inject this level of light, hence the OPO locking issues, plot.
In the past we've gotten back to Observing by following 70050 and lowering the OPO trans set point. But as we haven't yet maxed out the amount of power in the SHG-to-OPO path, we tuned this up with H1:SYS-MOTION_C_PICO_I motor 2 instead. Following 70050 remains a temporary fix if the OPO is having issues and H1:SQZ-OPO_ISS_CONTROLMON is too low.
It's surprising that the amount of pump light needed has increased this quickly, and quicker than after the last spot move plot, but our OPO refl is still low compared to before, meaning that the new crystal spot still has lower losses than the last. We might need to realign the SHG pump fiber to increase the amount of light we can inject.
We can work on improving Guardian log messages.
For reference here is a link with a summary of somewhat recent status for the LIGO SUS Fiber Puller: alog #80406
Fiber pulling was on a bit of a hiatus with a busy Fall/Winter of Ops shifts and travel. This week I returned to the Fiber Puller lab to pull as many fibers as possible (I have some free days this/next week). Thought I would note today's work because of some odd behavior.
Yesterday I ended up spending most of the time making new Fiber Stock (we were down to 2, and about 12 more were made). I also packaged up Fiber Cartridge hardware returned from LLO and stored it here to get re-Class-B-ed. Ended the day yesterday (Jan 7) pulling 2025's first fiber: S2500002. This fiber was totally fine, with no issues.
Today, I did some more lab clean-up but then around lunch I worked on pulling another fiber.
Issue with RESET Position
The first odd occurrence was with the Fiber Puller/Labview app. When I RESET the system to take a fresh Fiber Stock, the Fiber Puller/Labview moved the Upper & Lower Stages to a spot where the Upper Stage/Fiber Cartridge Bracket was too "high" wrt the Fixed Lower Bracket, so I was not able to install a new Fiber Stock. After closing the Labview app and a couple of "dry pulls", I was FINALLY able to get the system to the correct RESET position. (Not sure what the issue was here!)
Laser Power Looks Low
Now that the Fiber Stock was set to be POLISHED & PULLED, I worked on getting the Fiber Stock aligned (this is a step unique to our setup here at LHO). As I put the CO2 beam on the Fiber Stock, the first thing I noticed was very dim light on the stock (as seen via both cameras of the Fiber Puller). This was at a laser power of 55.5% (of the 200W "2018-new" laser installed for the Fiber Puller last summer). The alignment should not have changed for the system, so I wondered if there were issues with the laser settings. I noticed I had the laser controller at the "ANV" setting (it should be "MANUAL"), but nothing improved the laser power.
Miraculously, the laser power (as seen with the cameras) returned to what I saw yesterday for S2500002, so I figured it was a glitch and moved on to a POLISH. But within a few minutes, while keeping an eye on the laser power on the stock via the cameras, the laser spot once again became dim. I did nothing and just let the POLISH finish (about 15 min). Fearing there was a power-related issue, I decided to run the Pull (which takes less than a minute), but at a higher power (75% vs the 65% which had been the norm the last few months).
S2500003 pulled fine and passed.
I thought we were good, but when I set up for S2500004, I noticed more low and variable laser power. I worked on this fiber later in the afternoon after lunch. This one started with power which looked good to me (qualitatively), so I continued with set-up. But in a case of déjà vu, the laser power noticeably dipped as seen via the cameras. For this fiber I decided to continue, but since S2500003 had a low-ish violin fundamental frequency (~498Hz), I figured I would return to a power of 65% for the Pull, hoping the laser power was hot enough for the pull and not cool enough to cause the fiber to break during the pull (something which happened with our old and dying 100W laser that we replaced in 2024).
Pulled the fiber and it all looked fine, profiled the fiber, and then ran an analysis on the fiber: it FAILED.
This is fine. Fibers fail. Since Aug we have pulled 29 fibers and this was the 5th FAIL we have had.
This is where I ended the day, but I'm logging this work only for some early suspicions with the laser. Stay Tuned For More!
Overall comments I wanted to add:
Elenna, Camilla
Ran the automatic DARM offset sweep via Elenna's instructions (took <15 minutes):
cd /ligo/gitcommon/labutils/darm_offset_step/
conda activate labutils
python auto_darm_offset_step.py
DARM offset moves recorded to /ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Oct_21_23_37_44_UTC.txt
Reverted the tramp SDF diffs afterwards, see attached. Maybe the script should be adjusted to do this automatically.
I just ran Craig's script to analyze these results. The script fits a contrast defect of 0.742 mW using the 255.0 Hz data and 0.771 mW using the 410.3 Hz data. This value is lower than the previous ~1 mW on July 11 (alog 79045), which matches up nicely with our reduced frequency noise since the OFI repair (alog 80596).
I attached the plots that the code generates.
This result then estimates that the homodyne angle is about 7 degrees.
Last year I added some code to plot_darm_optical_gain_vs_dcpd_sum.py to calculate the losses through HAM 6.
Sheila and I have started looking at those again post-OFI replacement.
Just attaching the plot of power at the anti-symmetric port (ie. into HAM 6) vs. power after the OMC as measured by the DCPDs.
The plot is found in /ligo/gitcommon/labutils/darm_offset_step/figures/ and is also on the last page of the pdf Elenna linked above.
From this plot we can see that the power into HAM 6 (P_AS) is related to the power at the output DCPDs as follows:
P_AS = 1.220*P_DCPD + 656.818 mW
where the second term is light that will be rejected by the OMC, plus that which gets through the OMC but is insensitive to DARM length changes.
The throughput between the anti-symmetric port and the DCPDs is 1/1.22 = 0.820. So that means 18% of the TM00 light that we want at the DCPDs is lost through HAM 6.
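To make the arithmetic explicit, here is a minimal sketch (not the plot_darm_optical_gain_vs_dcpd_sum.py code itself) that turns the quoted slope into the HAM6 throughput and loss numbers above:

```python
# Sketch of the throughput arithmetic above, using the quoted linear relation
# P_AS = slope * P_DCPD + offset from the DARM offset step measurement.
slope, offset_mW = 1.220, 656.818

throughput = 1 / slope      # fraction of DARM-sensitive TM00 light reaching the DCPDs
loss = 1 - throughput
print(f"HAM6 throughput = {throughput:.3f}, loss = {loss:.1%}")   # ~0.820 and ~18%

# In practice the slope and offset come from a straight-line fit of the measured
# (P_DCPD, P_AS) pairs at each DARM offset step, e.g. numpy.polyfit(p_dcpd, p_as, 1).
```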