Observing at 146Mpc and have been Locked for 6.5 hours now. The wind is a bit elevated and has been going up and down, but the ifo has been handling it well, and the wind should die down a bit over the next couple of hours.
The ETMX roll mode is looking good, as are violins.
I edited the userapps/.../sqz/h1/scripts/SCAN_PSAMS.py script to use the 350Hz BLRMS rather than the high frequency ones to set SQZ angle. If locked at the start of commissioning tomorrow, we should pause SQZ_MANAGER guardian and then run this script.
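For anyone curious what the change amounts to, the logic is just "scan the angle, watch the 350 Hz BLRMS, keep the best point." A minimal sketch of that idea is below; this is not the actual script, and the channel names are assumptions for illustration only.

# Sketch of the BLRMS-based SQZ angle scan (not the real SCAN_PSAMS.py; channel names assumed).
import time
from epics import caget, caput

BLRMS_CHAN = 'H1:SQZ-OMC_BLRMS_350HZ'              # hypothetical 350 Hz BLRMS readback
ANGLE_CHAN = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'  # SQZ angle setpoint (assumed)

def scan_sqz_angle(angles, settle=30):
    """Step the SQZ angle and record the 350 Hz BLRMS at each point."""
    results = []
    for ang in angles:
        caput(ANGLE_CHAN, ang)
        time.sleep(settle)                         # let the BLRMS average settle
        results.append((ang, caget(BLRMS_CHAN)))
    best = min(results, key=lambda r: r[1])[0]     # minimize the 350 Hz BLRMS
    caput(ANGLE_CHAN, best)
    return results, best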
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 18mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.14 μm/s
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Recovered from the earlier lockloss!
An Initial_Alignment was run, and we got back up to Nominal_Low_Noise at 20:46 UTC
I did have to accept some SDF diffs for the SEI system and reached out to Jim about it. He mentioned that some of those channels should not be monitored.
When the SEI ENV went back to calm and dropped us from Observing a few moments later, we unmonitored those channels.
We got back to Observing at 21:04 UTC
21:10 UTC we fell out of Observing because of a SQZ issue, returning to Observing at 21:24 UTC
SUS ETMY Roll Mode is growing!
H1:SUS-ETMY_M0_DARM_DAMP_R_GAIN was changed to account for an interesting Roll mode on ETMX, and accepted in SDF. YES, ya read that correctly, and I didn't make a mistake here: changes to ETMY damped a Roll mode on ETMX.
The Ops Eve Shifter should revert this change before handing off the IFO to the Night Owl Op.
H1 has been locked for 2 hours and 40 minutes and is currently OBSERVING.
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:22 | FAC | LVEA is LASER HAZARD | LVEA | YES | LVEA is LASER HAZARD ദ്ദി(⎚_⎚) | 22:54 |
15:46 | Fac | Mitchel | Mech Mezz | N | Checking the dust pump | 16:56 |
15:49 | FAC | Tyler | VAC prep | N | Staging tools & sand paper | 16:04 |
15:57 | FAC | Tyler | EY | N | Checking on bee box | 16:17 |
17:44 | EE | Fil & Eric | WoodShop | N | Working on cabling chiller yards mon channels. | 19:44 |
17:50 | FAC | Kim | H2 | N | Technical cleaning. | 18:30 |
18:39 | VAC | Gerardo, Jordan | LVEA | - | Bolt tightening and pump evaluation | 19:19 |
18:44 | PSL | RyanS | CR | N | RefCav alignment | 18:51 |
19:06 | PEM | Robert | LVEA | - | Damping vacuum pumps | 19:22 |
20:04 | PEM | Camilla | LVEA | Yes | Turning off an SR785 that was left on. | 20:09 |
TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 24mph Gusts, 12mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
Currently Observing and have been Locked for almost 2.5 hours. Keeping an eye on the roll mode and it's still low.
Elenna, Camilla, Jenne, Ryan Short, Tony
We saw an increase in the DCPD monitor screen, which the roll monitor identified as the ETMX roll mode. The existing settings from a long time ago used AS A YAW to damp roll modes. I attempted to damp this by actuating on ETMY, which seems to have worked with a gain of 20 (settings in screenshot).
Camilla trended the monitor, and sees that this started to ring up slowly around 7 am today.
We've gone back to observing with this gain setting, but we plan to turn it off when this looks well damped (if we are still locked by the end of the eve shift, we should turn it off so the owl operator doesn't get woken up by SDF diffs).
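If it helps whoever turns this off later: reverting is just zeroing the one gain channel named in this entry. A sketch assuming pyepics is below; in practice do it with the appropriate ramp and then deal with the SDF diff.

# Sketch: turn off the temporary ETMY roll damping gain (channel name from this entry).
from epics import caput
caput('H1:SUS-ETMY_M0_DARM_DAMP_R_GAIN', 0)   # was 20 while damping the ~13.7 Hz roll mode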
Since we were able to damp it by actuating on ETMY, this is likely the ETMY roll mode, which suggests that either the ASC or the LSC feedforward is driving it.
I just took a look at the SRCL feedforward, and I see a pair of high Q zeros/poles (Q ~ 1600!) at about 13.5 Hz that I missed (facepalm). That seems a bit low, since this roll mode appears to be around 13.7 Hz, but we should still remove that anyway. We can take care of this tomorrow during commissioning and that might prevent the driving of this mode.
I can't think of any ASC control that would have a significant drive at 13.7 Hz.
I just removed two high Q features at 6.5 and 13.5 Hz that were in the SRCL feedforward. I kept the same filter but removed the features, so there should be no SDF or guardian changes. My hope is that this will prevent the roll mode from being rung up, so I have turned the gain off and SDFed it. Attached is a screenshot of the SDF change for ETMY roll.
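For context on why a Q ~ 1600 feature at 13.5 Hz matters near a 13.7 Hz roll mode, here is a standalone sketch (not the actual SRCL FF filter) showing how narrow and tall such a resonant pole pair is:

# Illustrative resonance at 13.5 Hz with Q ~ 1600 (roughly the feature described above).
import numpy as np
from scipy import signal

f0, Q = 13.5, 1600.0
w0 = 2 * np.pi * f0
num = [w0**2]                        # normalized for unity DC gain
den = [1.0, w0 / Q, w0**2]

f = np.linspace(13.0, 14.0, 5001)    # zoom in around the 13.7 Hz roll mode
w, h = signal.freqs(num, den, worN=2 * np.pi * f)

peak_db = 20 * np.log10(np.max(np.abs(h)))   # ~64 dB for Q = 1600
width = f0 / Q                               # full width ~ 8 mHz
print(f"peak gain ~ {peak_db:.1f} dB, width ~ {width * 1e3:.1f} mHz")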
Unfortunately, it appears that the ETMY roll mode is still ringing up, so the SRCL feedforward is not the cause. Another possibility is the ASC. The CHARD P control signal is larger now around 13.7 Hz than it was in April. The attached plot shows a reference trace from April and live trace from last night's lock. I don't know if this is enough to drive this mode. The bounce roll notches are engaged on ETMY L2, and have 40 dB attenuation for the roll mode between 13.4 and 13.7 Hz.
Kiet, Robert, and Carlos
We report the results of calibrating the LEMI magnetometers in the Vault outside of the X arm;
We went out on May 15th, 2025 and took the following measurements, each lasting 2 minutes.
Far field injection: to calibrate the LEMI
1) 17:36:45 UTC; X-axis far-field injection; without preamp on the Bartington
2) 17:43:23 UTC; X-axis far-field injection; without preamp on the Bartington
3) 18:01:53 UTC, X-axis far-field injection; without preamp on the Bartington
4) 18:04:15 UTC, X-axis far-field injection; without preamp on the Bartington
5) 18:23:25 UTC; Y-axis far-field injection; with preamp on the Bartington
6) 18:26:00 UTC; Y-axis far-field injection; with preamp on the Bartington
The preamp gain is 20; all injections are done at 20 Hz. The coil used for far-field injection has 26 turns and 3.2 Ohms.
The voltage used to drive the injection coil for the far-field injections was Vp-p = 13.2 +- 0.1 V. It was windy out, so we decided to use the preamp on the Bartington magnetometer.
The LEMI channels used for this analysis: H1:PEM-VAULT_MAG_1030X195Y_COIL_X_DQ; H1:PEM-VAULT_MAG_1030X195Y_COIL_Y_DQ
Bartington calibration
7) 18:49:38 UTC; with preamp on the Bartington
8) 18:52:50 UTC; with preamp on the Bartington
The voltage that was used to drive the injection coil: Vp-p = 1.88 +- 0.01 V.
We inserted the Bartington magnetometer into the center of a cylindrical coil (1000 Ohms, 55 turns over 0.087 m) to calibrate its z axis.
The final result of the LEMI calibration, after taking all the measurements into account, is (9.101 +- 0.210)*10^-13 Tesla/count. There is a 20% difference between this measurement and the measurement taken pre-O3. Robert noted that when taking the previous measurements, the calibrating magnetometer was not fully isolated from the LEMI. This time they are completely independent.
We recommend that analyses using LEMI data use the calibration value of 9.101 * 10^-13 Tesla/count with a +- 5% uncertainty, to be conservative.
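For anyone applying this number, the conversion is just a multiplicative factor on the raw LEMI counts. A minimal sketch using gwpy is below; the calibration value is from this entry, the GPS time is a placeholder, and the channel name is one of the two listed above.

# Apply the LEMI calibration (counts -> Tesla) to a stretch of vault magnetometer data.
from gwpy.timeseries import TimeSeries

LEMI_CAL = 9.101e-13          # Tesla per count, from this measurement (+- ~5%)
chan = 'H1:PEM-VAULT_MAG_1030X195Y_COIL_X_DQ'

t0 = 1400000000               # placeholder GPS start; use the time of one 2-minute injection
raw = TimeSeries.get(chan, t0, t0 + 120)   # raw LEMI data in counts
bfield = raw * LEMI_CAL                    # calibrated magnetic field in Tesla
print(bfield.asd(fftlength=8))             # spectrum of the calibrated field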
Using Camilla's No SQZ time yesterday, I made a comparison to the No SQZ time from Mar 26 2025 16:01 UTC (GPS time 1427040078).
Note: I'm not sure how much of an effect this has, but Camilla's No SQZ time was before yesterday's calibration update.
Here we see a broadband loss of range of 2-7 Mpc from around 50 Hz and above.
After H1 lost lock this morning, the vacuum team started some work on HAM1, which gave me an opportunity to do a much-needed adjustment of the FSS RefCav alignment. As I expected, this was a very straightforward alignment where I was able to get immediate and significant improvements by walking the beam into the RefCav mostly in pitch, but some in yaw also (all with the IMC offline). My results:
A great improvement; even better than the alignment was before the drop-off over the weekend.
Lockloss @ 18:33 UTC after 12.5 hr lock stretch - link to lockloss tool
Looks to be from some very local sudden ground motion. USGS has nothing to report yet, but site seismometers and Picket Fence certainly saw activity.
Now looks to be due to a M4.4 EQ from Fort St. John, Canada.
Camilla and I took H1 out of observing from 18:18 to 18:21 UTC to run the 'SCAN_SQZANG_FDS' state in SQZ_MANAGER to touch up the SQZ angle and gain some range. Scope attached of the scan where one can see low frequency BLRMS improved while high frequency got worse, but overall BNS range is better.
Comparing SQZ times from this morning, Jun 11 2025 (GPS time 1433689084), and a 15 min span from March 26th 2025 (GPS time 1427073110).
Command run: python3 range_compare.py 1433689084 1427073110 --span 900
Today's time is well calibrated, well thermalized.
We can see new peaks in the ASD, and some of our old peaks are higher, especially from 10-15 Hz, 25-30 Hz, 500-600 Hz, and around 2 kHz.
We also seem to have a broadband decrease in both sensitivity and range; the sensitivity loss is small at most frequencies but spans a wide frequency range.
Unfortunately, our range has dropped off from 80 Hz to 2 kHz by up to ~15 Mpc. See the first page.
The good news is that there are some very slight gains in range in the 50-80 Hz band. See the 3rd page.
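For reference, a stripped-down version of this kind of comparison (not the actual range_compare.py; it assumes gwpy and the GDS strain channel) looks roughly like:

# Sketch: compare BNS inspiral range between two 15-minute spans (assumed channel and tools).
from gwpy.timeseries import TimeSeries
from gwpy.astro import inspiral_range

CHAN = 'H1:GDS-CALIB_STRAIN'   # assumed strain channel
SPAN = 900                     # seconds, matching --span 900 above

def bns_range(gps):
    data = TimeSeries.get(CHAN, gps, gps + SPAN)
    psd = data.psd(fftlength=8, overlap=4)
    return inspiral_range(psd, fmin=10)

print(bns_range(1433689084))   # Jun 11 2025 SQZ time (from this entry)
print(bns_range(1427073110))   # Mar 26 2025 SQZ time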
Wed Jun 11 10:10:49 2025 INFO: Fill completed in 10min 46secs
Good fill verified by Gerardo curbside.
TITLE: 06/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 140Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 5mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 is still locked 8 Hours and 30 minutes later!
All systems seem to be functioning.
There was talk of doing a calibration measurement, which I started right after making sure there wasn't anyone still working inside the LVEA.
I ran a PCAL BroadBand with this command:
pydarm measure --run-headless bb
2025-06-11 07:44:58,555 config file: /ligo/groups/cal/H1/ifo/pydarm_cmd_H1.yaml
2025-06-11 07:44:58,571 available measurements:
pcal: PCal response, swept-sine (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_SS__template_.xml)
bb : PCal response, broad-band (/ligo/groups/cal/H1/ifo/templates/PCALY2DARM_BB__template_.xml)
The BroadBand finished, but I did not run the Simulines. The Calibration gurus believed we don't need it before Observing because our calibration
"monitoring lines show a pretty good uncertainty for LHO this morning: https://gstlal.ligo.caltech.edu/grafana/d/StZk6BPVz/calibration-monitoring?orgId=1&var-DashDatasource=lho_calibration_monitoring_v3&var-coh_threshold=%22coh_threshold%22%20%3D%20%27cohok%27%20AND&var-detector_state=&from=1749629890225&to=1749652518797 Roughly +/-2% wiggle "
~Joe B
Clicked the button for Observing, and we went right into Observing without any SDF issues!
Went into observing at 14:57 UTC
There are messages, though, mostly from the SEI system, all of which are setpoint changes; see the SPM DIFFS for HAMs 2, 3, 4, and 5.
But these have not stopped us from getting into Observing.
I have attached a screenshot of the broadband measurement from this morning. It shows that the calibration uncertainty is within +-2%, which means that our new calibration is excellent!
For those who want to plot the latest PCAL broadband, you can use a template that I have saved in /opt/rtcds/userapps/release/cal/h1/dtt_templates/PCAL_BB_template.xml (aka [userapps] cal/h1/dtt_templates/)
In order to use this template, you must find the GPS time of the start of the broadband measurement, which I found today by converting the timestamp in Tony's post above into GPS time. This template pulls data from NDS2 because it uses GDS channels, so you will also need to go to the "Input" tab and put your current GPS time in the "Epoch stop" entry within the "NDS2 selection" box. The current time will hopefully be after the start time of the broadband measurement, which ensures that the full span of data you need is requested from NDS2. If you don't do this, the template will give you an error when you try to run it.
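For the timestamp-to-GPS step, any of the usual converters work; for example, a one-liner with gwpy (sketch only, with a placeholder UTC time standing in for the one from Tony's post):

# Convert a UTC timestamp to GPS time.
from gwpy.time import to_gps
print(to_gps('2025-06-11 07:45:00'))   # placeholder; use the broadband start time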
TITLE: 06/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 135 Mpc
INCOMING OPERATOR: None
SHIFT SUMMARY:
We are Observing! We've been Locked for 1 hour. We got to NOMINAL_LOW_NOISE an hour ago after a couple locklosses with Elenna's help (see below) and then took a couple unthermalized broadband calibration measurements (84959, 84960). I also just adjusted the sqz angle and was able to get better squeezing for the 1.7 kHz band, but the 350 Hz band squeezing is very bad. I am selecting DOWN so that if we unlock, we don't relock.
Early in the relocking process, we were having issues with DRMI and PRMI not catching, even though we had really good DRMI flashes. I finally gave up and went to run an initial alignment, but we had a bit of a detour when an error in SDFing caused Big Noise (TM) to be sent into PM1, tripping the software WD and then the HAM1 ISI and HAM1 HEPI. Once we got that figured out, we went through a full initial alignment with no issues.
Relocking, we had two locklosses from LOWNOISE_ASC at the same spot. Here are their logs (first, second). There were no ASC oscillations before the locklosses, so it doesn't seem to be due to the 1 Hz issues from earlier (849463). Looking at the logs, they both happened right after turning on FM4 for DHARD P, DcntrlLP. Elenna took a look at that filter and noticed that the ramp-on time might be too short, changed it from 5 s to 10 s, and updated the wait time in the guardian to match. She loaded that all in, and it worked!!
As a strange aside, after the second LOWNOISE_ASC lockloss, I went into manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine. This was a weird issue??
LOG:
23:30UTC Locked and getting data for the new calibration
23:43 Lockloss
- Started an initial alignment, trying to do automatically after PRC align was bypassed in the state graph (84950)
- Tried relocking, couldn't get DRMI or PRMI to catch, even with really good DRMI flashes
- Went to manual initial alignment to just do PRX by hand, but saw the HAM1 ISI IOP DACKILL had tripped
- Then HAM1 HEPI tripped, and I had to put PM1 in SAFE because huge numbers were coming in through the LOCK filter
- It was due to an SDF error and was corrected
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Lockloss from LOWNOISE_ASC for unknown cause (no ringup)
- Tried going to manual IA to align PRX, but there was no light on ASC-AS_A. Left Manual IA, went through DOWN and SDF_REVERT again, then back into manual IA, and found the same issue at PRX. Looked at the ASC screen and noticed that the fast shutter was closed. Selected OPEN for the fast shutter, and it opened fine.
06:03 NOMINAL_LOW_NOISE
06:07 Started BB calibration measurement
06:12 Calibration measurement done
06:36 BB calibration measurement started
06:41 Calibration measurement done
07:02 Back into Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:50 | VAC | Gerardo | LVEA | YES | Climbing around on HAM1 | 00:58 |
Unfortunately, since we don't want the ifo trying to relock all night if we lose lock, I have to select DOWN, but that means that the request for ISC_LOCK is not in the right spot for us to stay Observing. So we won't be Observing overnight, but we will be locked (at least until we lose lock, at which point we will be in DOWN).
Here is some more information about some of the problems Oli faced last night and how they were fixed.
PM1 saturations:
Unfortunately, this problem was an error on my part. Yesterday, Sheila and I were making changes to the DC6 centering loop, which feeds back to PM1. As a part of updating the loop design, I SDFed the new filter settings, but inadvertently also SDFed the input of DC6 to be ON in safe. We don't want this; SDF is supposed to revert all the DC centering loop inputs to OFF when we lose lock. Since I made this mistake, a large junk signal came in through the input of DC6 and then was sent to the suspension, which railed PM1 and then tripped the HAM1 ISI. Once I realized what was happening, I logged in and had Oli re-SDF the inputs of DC6 P and Y to be OFF.
You can see this mistake in my attached screenshot of the DC6 SDF; I carelessly missed the "IN" among the list of differences.
DHARD P filter engagement:
In order to avoid some control instabilities, Sheila and I have been reordering some guardian states. Specifically, we moved the LOWNOISE ASC state to run after LOWNOISE LENGTH CONTROL. This should not have caused any problems, except Oli noticed that we lost lock twice at the exact same point in the locking process, right at the end of LOWNOISE ASC when the DHARD P low noise controller is engaged, FM4. I attached the two guardian logs Oli sent me demonstrating this.
I took a look at the FM4 step response in foton, and noticed that the step response is actually quite long, and the ramp time of the filter was set to 5 seconds. I also looked at the DARM signal right before lockloss, and noticed that the DARM IN1 signal had a large motion away from zero just before lockloss, like it was being kicked. My hypothesis is that the impulse of the new DHARD P filter was kicking DARM during engagement. This guardian state used to be run BEFORE we switched the coil drivers to low bandwidth, so maybe the low bandwidth coil drivers can't handle that kind of impulse.
I changed the ramp time of the filter to 10 seconds, and we proceeded through the state on the next attempt just fine.
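As a general sanity check for this kind of problem, one can compare a filter's step-response settling time to the planned ramp time before engaging it. Below is a generic sketch with scipy using a stand-in low-pass (the real FM4 DcntrlLP design lives in foton, and its parameters are not reproduced here):

# Sketch: estimate how long a filter's step response takes to settle, to compare to the ramp time.
import numpy as np
from scipy import signal

fs = 16384                                        # Hz, front-end model rate
b, a = signal.butter(4, 0.5, fs=fs, btype='low')  # stand-in slow low-pass, assumed parameters

t = np.arange(int(20 * fs)) / fs
_, y = signal.dstep((b, a, 1.0 / fs), t=t)
y = np.squeeze(y)

# Last time the step response is still more than 1% away from its final value
settle_t = t[np.abs(y - y[-1]) > 0.01 * abs(y[-1])][-1]
print(f"step response settles in ~{settle_t:.1f} s; "
      "if that is comparable to the ramp time, the ramp is probably too short")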
Francisco, Elenna, help online from Joe B
We used the thermalized calibration measurement that Tony took in alog 84949, and ran the calibration report, generating report 20250610T224009Z. We had previously done this process for a slightly earlier calibration measurement with guidance from Joe. Upon inspection of the report, Joe recommended that we change the parameter is_pro_spring
from False to True, which significantly improved the fit of the calibration. The report that Tony uploaded in his alog includes that fit change. Since we were happy with this fit, Francisco reran the pydarm report, this time requesting the generation of the GDS filters. After this completed, we inspected the comparison of the FIR filters with the DARM model, and saw very good agreement between 10 and 1000 Hz.
Two things we want to point out are that the nonsens filter fits included a lot of ripple at low frequency, but it still looks small enough that we think it is "ok". We also saw some large line features at high frequency in the TST filters, which Joe had previously assured us were ok.
While online with Joe, we had also confirmed that the DARM actuation parameters, such as gains and filters, matched in three locations: in the suspension model itself, in the CAL CS model, and in the pydarm ini file.
Since we confirmed this was all looking good, Francisco and I proceeded with the next steps, which we followed from Jeff's alog here, 83088. We ran these commands in this order:
pydarm commit 20250610T224009Z --valid
pydarm export --push 20250610T224009Z
pydarm upload 20250610T224009Z
pydarm gds restart
At this point, Jeff notes that he had to wait about 12 minutes, running "pydarm gds status", before running the broadband measurement to confirm the calibration is good. Francisco and I also knew we needed to check the status of the calibration lines on grafana. However, a few minutes after we started the clock on this wait time, the IFO lost lock.
We think the calibration is good, but we have not actually been able to confirm this, which means we cannot go into observing tomorrow (Wednesday) before making this confirmation.
Doing so requires some locked time with calibration lines on and a broadband injection for a final verification of this new calibration. The hope is that we can achieve this tonight, but if not, we must do so tomorrow before going into observing. (Note: because of the different rules of "engineering" data versus "observing" data, we could go into observing mode tonight without this confirmation).
We confirmed this new calibration is good in this alog: 84963.
I am going to add a few more details and thoughts about this calibration here:
Currently, we are operating with a digital offset in SRCL, which is counteracting about 1.4 degrees of SRCL detuning. Based on the calibration measurement, operating with this offset seems to have compensated most of the anti-spring that has been previously evident in the sensing function. However, our measurements still show non-flat behavior at low frequency, which was actually best fit with a spring (aka "pro-spring"). However, the full behavior of this feature appears more like some L2A2L coupling. It may be worthwhile to test out this coupling by trying different ASC gains and running sensing function measurements.
Joe pointed out to me this morning in the cal lines grafana, and we also saw in the very early broadband measurement last night (84959), that the calibration looks very bad just at the start of lock, with uncertainties nearing 10%. This seems to level off within about 30 minutes of the start of the lock. Since that is pretty bad, we might want to consider what to do on the IFO side to compensate. Maybe our SRCL offset is too large for the first 30 minutes of lock, or there is something else we can do to mitigate this response.
Just watching the grafana for this recent lock acquisition, it took about 1 hour for the uncertainty of the 33 Hz line to drop from 8% to 2%.
No SQZ time taken today, 21:47:00 UTC to 21:59:00 UTC.
2 seconds of data in this span are missing.
Our tools can't pull the entire span for this No SQZ time.
Johnathan looked into the 2 seconds that are missing from the 12 minute data stretch:
"I don't have good news for you. There is a 2s gap in there at 1433627912-1433627913 on H-H1_llhoft. The H-H1_HOFT_C00 is worse, I don't see frames in the 1433627.... range at all." ~ Johnathan H.
We were able to salvage 674 seconds from this time that can be useful.
Useful GPS time: 1433627238 - 1433627912
Here is an executive summary of the ASC changes that have been made, why they were made, and the potential impact on the noise:
I ran a coherence measurement in NLN with all the arm loops, MICH, and PRC2 ASC. There is some coherence with CHARD P and with MICH P and Y that would be worth investigating with a noise budget injection.
I just added the DHARD P boost back into the guardian and engaged it on the way up for this lock. No issues.
TITLE: 06/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 was in IDLE when I arrived.
I will start trying to lock now.
I've changed the sign of the damping gain for ITMX 13 in lscparams from +0.2 to -0.2 after seeing it damp correctly in 2 lock stretches. The VIOLIN_DAMPING GRD could use a reload to see this change.
I have loaded the violin damping guardian, since the setting RyanC found still works.
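For reference, the lscparams change is just a sign flip on one entry; the sketch below only illustrates the kind of structure involved (the actual layout of lscparams.py may differ).

# Illustrative only: the ITMX mode-13 damping gain sign flip described above.
violin_damping_gains = {
    'ITMX': {
        13: -0.2,   # was +0.2; flipped after it was seen to damp correctly in 2 lock stretches
    },
}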