The filter cavity was having a hard time locking, and it looks like the FC green trans power has been dropping slowly over the last 3 weeks, which seems to be caused by a drop in the SHG power from around 100-110 mW down to 77 mW.
I adjusted the picos and the SHG temperature, and recovered the SHG power to 86 mW, so there is still a lot of missing power. The filter cavity locked after this.
I've added a template for this adjustment to userapps/sqz/h1/Templates/ndscope/SHG_alignment_temp_adjust.yaml
Jennie W, Ryan C, Sheila
Ryan and Jennie saw that the filter cavity had trouble locking again. We looked at the filter cavity transmission and launched power now compared to 10 days ago. The launched power has dropped to 71% of what it was (after my slight improvement this morning), and the transmitted power is 69% of what it was; since 0.69/0.71 ≈ 0.97, the cavity throughput itself is nearly unchanged. This means that the main reason the filter cavity transmission has decreased is the lower injected power (due to the SHG power drop), not a filter cavity alignment problem.
The guardian has a checker that considers the filter cavity locked in green when the transmission is above 60uW; we lowered this threshold to 50uW in the GR_LOCKED checker. With the lower transmission the power was sometimes dropping below 60uW, so this should help. We will see what happens with the SHG power over the next few days.
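For illustration, the threshold logic amounts to something like the sketch below (a minimal Python sketch; the channel name and the structure of the real GR_LOCKED checker are assumptions, not the actual SQZ guardian code):

    import epics

    GR_TRANS_THRESHOLD_UW = 50.0  # lowered from 60 uW

    def fc_green_locked():
        # hypothetical channel name for the FC green transmission
        trans_uw = epics.caget('H1:SQZ-FC_TRANS_GREEN_UW')
        return trans_uw is not None and trans_uw > GR_TRANS_THRESHOLD_UW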
Lockloss at 17:02 UTC in NLN while we were waiting for the FC to lock.
TITLE: 06/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 4mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.05 μm/s
QUICK SUMMARY:
14:46 UTC DRMI is struggling when ASC starts; I'm going to run a manual_IA
At the time of the request for help (10:22 UTC), we were in CHECK_AS_SHUTTER, where it was presumably stuck at SHUTTER_FAIL, which I encountered again at 15:22 UTC. It's been in that state since 09:05 UTC, when we lost lock from 25W and the SHUTTER GRD reported "No kick... peak GS13 signal = 51.226"
The shutter did not trigger in this last lockloss; it looks like the power heading to the AS port was not high enough to trigger it.
The lockloss_shutter_check guardian checks for a kick in the HAM6 GS13s anytime we lose lock with more than 25kW circulating power in the arms. In this case we had just reached 100kW circulating power, so the guardian expected the shutter to trigger. This lockloss looks unusual in that there isn't an increase in the power going to the AS port right before the lockloss.
Ryan ran the shutter test and the shutter is working correctly now.
I think that this is probably the result of an unusual lockloss happening at somewhat lower circulating power than usual. We should probably edit the logic in IFO notify to call for operator assistance whenever the shutter is in the failed state.
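In pseudocode, the check described above boils down to something like this (a hedged sketch; the kick threshold and return values are assumptions, not the real lockloss_shutter_check code):

    CIRC_POWER_MIN_KW = 25.0   # only expect a shutter kick above this
    GS13_KICK_MIN = 100.0      # peak GS13 level treated as a kick (assumed)

    def shutter_check(circulating_power_kw, peak_gs13):
        if circulating_power_kw <= CIRC_POWER_MIN_KW:
            return 'OK'        # low power: no kick expected
        if peak_gs13 >= GS13_KICK_MIN:
            return 'OK'        # shutter fired and kicked the GS13s
        return 'SHUTTER_FAIL'  # e.g. "No kick... peak GS13 signal = 51.226"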
This is OK; the AS port went dark and stayed there for about 70ms after the lockloss, and there was no excessive power surge that would have caused the fast shutter to be triggered.
The maximum power of ~1.4W was observed ~160ms after the lockloss, which is well below the threshold for the analog FS trigger (3 to 4W, I don't remember the exact number).
If something similar happened with 60W, though, FS might have been triggered.
Attached is the estimate of the power coming into HAM6 using two different sensors (PEM-CS_ADC_5_19 = the HAM6 power sensor in the AS camera can, which monitors power before the fast shutter, and ASC-AS_A_DC_NSUM, which monitors after it). Neither of these has hardware whitening, and neither saturated (AS_A was close to saturation, but the HAM6 power sensor saturation threshold is about 570W when the beam diverter is open, 5.7kW if closed).
Note that the calibration of the PEM channel is a factor of 10 smaller than in observing mode (0.177W/ct, see alog 81112 and the git repo for the lockloss tool) because the beam diverter (90:10) was open.
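As a worked example of that calibration note (assuming 0.177W/ct is the observing-mode factor, per the parenthesis above): with the 90:10 diverter open the sensor sees roughly 10x more light, so each ADC count corresponds to 10x less power into HAM6.

    CAL_OBSERVING = 0.177                     # W/count, beam diverter closed
    CAL_DIVERTER_OPEN = CAL_OBSERVING / 10.0  # ~0.0177 W/count for this event

    adc_counts = 79.0                         # hypothetical readout
    print(f'{adc_counts * CAL_DIVERTER_OPEN:.2f} W into HAM6')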
TITLE: 06/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 04:13 UTC
Quite the frustrating shift in which I encountered:
LOG:
None
We disabled SQZ ASC for the night (not FC ASC), to prevent it from running away again overnight until someone can check what happened.
It has been a difficult relock following our last lockloss, due to a few reasons all related to the inability to lock DRMI:
1. Small local EQ showing up on both primary and secondary microseism.
2. ALSX PLL Crystal Beatnote insufficient for ALSX lock - this comes and goes, and was happening largely in the first hour of lock acquisition. There were 9 locklosses at this state in IR, PRMI, DRMI and, of course, LOCKING_ALS. I attempted to cycle through "enable/disable" for the ALSX FIBR LOCK on the ALSX PLL page, but this did not help. The issue seems to have largely gone away on its own 45 minutes into trying.
3. DRMI/PRMI general alignment. The flashes are good, and I've seen DRMI lock in worse conditions. I touched BS, PRM, PR2, SRM, and SR2 to optimize the alignment further, but this proved useless.
The first thing I did after the lockloss was a full initial alignment, since I heard that the PRMI ASC was fixed. Before my shift we had 45 minutes of issues locking DRMI, and the initial alignment did not seem to fix this. After 2 hours of trying to touch suspensions, I decided to do a step-through of initial alignment. I still allowed PRMI ASC to happen automatically (though I will do it manually if this lock attempt fails). This has just finished and I'm relocking now.
DRMI locked almost immediately after a second, manual initial alignment. Strange.
Lockloss from state 522 (LOWNOISE_ASC) due to a 2.9 earthquake in southern Oregon. DRMI again is not locking despite touching suspensions and going through PRMI and MICH. The flashes are just fine.
Re-locking now and if this continues to fail for another 20 minutes, I will do yet another manual initial alignment, which is so far the only thing that seems to work.
TITLE: 06/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Currently relocking and stuck at DRMI, which does not want to lock, on top of some earlier ALSX crystal frequency issues. Ibrahim is working on trying to fix that.
When we went back into Observing after commissioning today at 21:20 UTC, we had accidentally left the beam diverter open and didn't notice for 20 minutes, so the beam diverter was open between 21:20 and 21:41 UTC. Once we noticed, we went out of Observing, closed it, and then went back into Observing. Tagging detchar.
LOG:
14:30 Observing and have been Locked for 7.5 hours
14:38 Lockloss
15:40 NOMINAL_LOW_NOISE
21:20 Observing after commissioning end
- We had accidentally left the beam diverter open so this data may not be very good
21:41 Out of Observing to close the beam diverter
21:44 Back into Observing with everything looking good
22:29 Lockloss
Just want to specify that the only beam diverter left open was the POP beam diverter! All other usually-closed beam diverters were closed.
In preparation for the SatAmp swap (ECR E2400330), I've written a script that allows us to easily compare the noise performance before and after a satellite amplifier swap. It takes in a suspension name, ifo, and two gpstimes; grabs the DAMP IN data for all dofs; regresses out the ISI GS13 noise; and then compares the leftover noise between the two gps times.
We wanted to have the script divide out the loop suppression, and Jeff looked into which suspensions we've taken valid open loop TFs for (85289). For the suspensions with valid OLTF measurements we were able to export the loop suppression to divide out (saved in the suspensions' Data folders); the others don't have valid measurements. So the script divides the loop suppression out of the regressed sensor noise data when a valid measurement exists, and otherwise just shows the regressed data.
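In spirit, the regression step is a frequency-domain subtraction of the coherent GS13 contribution. The real implementation is the MATLAB function noted below; this is just a minimal single-witness Python sketch of the idea (the faithful version would be a MISO regression over all GS13 dofs):

    import numpy as np
    from scipy.signal import welch, coherence

    def regress_out_witness(target, witness, fs, nperseg=4096):
        # PSD of the damped sensor signal and its coherence with the witness
        f, p_tt = welch(target, fs=fs, nperseg=nperseg)
        _, coh = coherence(target, witness, fs=fs, nperseg=nperseg)
        # remove the coherent part; what's left estimates the sensor noise
        p_resid = p_tt * (1.0 - coh)
        return f, np.sqrt(p_resid)  # residual ASD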
Here is an example of the suspension noise before and after a satellite amplifier swap for SR3 that was done at LLO last summer (72417). There was no loop suppression for us to divide out here, so the plots just show the DAMP IN and the leftover sensor noise.
Here is another example for LHO PR3. This one is only meant to be an example of the plots with the loop suppression removed, as the sat amp is the same for both gps times. There is some weirdness around where the peaks were supposed to be removed by the loop suppression - some dofs still show thin peaks. The loop suppression didn't remove them correctly, and we think this might be because the loop suppression assumes a SISO loop, while we may have cross coupling that is then not accounted for when we divide by the loop suppression.
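One way to state that caveat (our notation, not from the script): for a SISO damping loop with open-loop gain G(f), the in-loop sensor sees the suppressed signal, so the free noise is recovered with a scalar correction,

    x_inloop(f) = x_free(f) / (1 + G(f))   =>   x_free = (1 + G) * x_inloop

whereas with cross coupling the loop is MIMO and the correction is a matrix operation,

    x_free(f) = (I + G(f)) * x_inloop(f)   (G now a matrix over dofs)

so dividing each dof by its own 1/(1+G_ii) ignores the off-diagonal terms, which is exactly the kind of thing that could leave residual peaks.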
The script is located in /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/damp_regression_compare.m, and is written as a function so it can be called from elsewhere. It has been committed as r12362.
The data that's pulled for the gps time and timespan for the sus dofs and chamber GS13s takes a while to grab the first time, so I had it save the data in the respective suspension Data folders (ex. /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Data/dampRegress_H1PR3_M1_1405123218_1200.mat). Then if you need to run that gps time + span again, it'll get the data right away.
The results get saved in the suspension's Results folder (ex. /ligo/svncommon/SusSVN/sus/trunk/HLTS/H1/PR3/SAGM1/Results/allDampRegressCompare_H1SUSPR3_M1_NoiseComparison_1405123218vs1433376018-1200.pdf).
WP12637 Install HWS Camera Controls
TJ, Camilla, Dave:
I made a slight change to my original Dec 2023 hws_camera_control.py code: it now uses the python epics module to do the cagets directly, and also adds caputs which try, in a non-blocking way, to send the camera's status to a central EPICS IOC.
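With pyepics, caput is non-blocking by default (wait=False), so the status report doesn't hold up the control loop. For example (channel name hypothetical):

    import epics
    # returns without waiting for the IOC to confirm completion
    epics.caput('H0:HWS-ETMX_CAM_STATUS', 1, wait=False)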
The central IOC code is hws_camera_status_ioc.py which runs on cdsioc1 under puppet control.
The camera control code runs every minute in an infinite loop. Each loop it checks the status of H1 using the Guardian ISC_LOCK state (greater than or equal to 600 denotes H1 locked). If H1 is locked and the camera is on, it turns the camera off. If H1 is not locked and the camera is off, it turns the camera on.
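A minimal sketch of that loop (the Guardian state channel name is assumed from the usual Guardian convention, and the camera on/off mechanics are placeholders; the real hws_camera_control.py differs in detail):

    import time
    import epics

    LOCKED = 600  # ISC_LOCK state >= 600 denotes H1 locked

    def camera_is_on():
        ...  # placeholder for the real camera status query

    def set_camera(on):
        ...  # placeholder: enable/disable frame acquisition, then
             # caput the new status to the central IOC (non-blocking)

    while True:
        state = epics.caget('H1:GRD-ISC_LOCK_STATE_N')
        if state is not None:
            if state >= LOCKED and camera_is_on():
                set_camera(False)  # locked: camera off, no noise created
            elif state < LOCKED and not camera_is_on():
                set_camera(True)   # unlocked: camera on, take images
        time.sleep(60)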
A mini-overview of the 4 HWS cameras (ITMX, ITMY, ETMX, ETMY) is available on the CDS Overview screen. The block is divided into 4 segments, each camera is shown either in SKYBLUE (camera is enabled, images are being taken) or GREEN (camera is disabled, no noise being created).
Note that the color scheme relates to what is nominal for OBSERVE. Green indicates the cameras have power, but their frame acquisition has been disabled.
Clicking on the HWS block on the CDS Overview opens a detailed screen (see attachment). For each camera this shows if the camera is powered (ETMY is the only one powered down) and if the camera is enabled. The control code also reports its current time, its uptime, its start time and details of where and how it is running.
Second attachment shows the four tmux sessions running the control code, and the arguments used.
Calibration suite was run after 5 hours of thermalization. Before the measurement was run, work had been done to update the SRCL offset (85362).
Broadband
Start: 2025-06-26 20:35:38 UTC
End: 2025-06-26 20:40:49 UTC
Data: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250626T203538Z.xml
Simulines
Start: 2025-06-26 20:42:09 UTC
End: 2025-06-26 21:05:29 UTC
Data: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250626T204210Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250626T204210Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250626T204210Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250626T204210Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250626T204210Z.hdf5
Current version of the pydarm report can be found at /ligo/groups/cal/H1/reports/20250626T204210Z_prospring/H1_calibration_report_20250626T204210Z.pdf. We are investigating further into why the calibration is so different now.
After inspecting the sensing function from the initially generated report, we changed the model to an anti-spring and started the fit at 8 Hz, to see if we could get a better calibration from this measurement.
First, we set the is_pro_spring parameter in the H1_pydarm.ini file from True to False. We expected this parameter to make the model (orange line) fit the measurement (green dots) better. Overall, there was no noticeable change in the sensing model compared to the initial report, as seen in the first figure (CAL_SENSING_MODEL_ANTISPRING_20250626.png). Additionally, the sensing MCMC corner plots were not Gaussian (contrary to what is instructed in T2400215 section 2.4.2), as seen in the second figure (CAL_SENSING_MCMC_CORNER_ANTISPRING_20250626.png).
To improve the sensing model, we decreased the sensing parameter mcmc_fmin from 10 to 8 Hz. Even though the sensing function is slightly worse at low frequencies compared to the initial report (the one from Oli's alog), the sensing corner plot shows more Gaussian behavior. Additionally, the uncertainty is still within our 10% budget (see the snippet of the calibration monitor from grafana, CAL_MONITOR_GRAFANA_POST_CALMEAS_20250626.png), so we will pause this investigation for now.
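For reference, the two changes amount to something like this in H1_pydarm.ini (a sketch; the section name and exact layout are assumptions):

    [sensing]
    # was True: model the detuning as an anti-spring
    is_pro_spring = False
    # was 10 Hz: start the MCMC fit at 8 Hz
    mcmc_fmin = 8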
The updated report is attached as a PDF file. The measurement has not been tagged at the time of posting this comment. We have yet to understand why the model is struggling with the fit parameters.
Signal railed at about 5:18 PM local time. I checked trend data for PT120 and PT180 and no pressure rise was noted inside the main volume. Attached is a 3-day trend of the pump behavior; it has been very glitchy for a long while already.
System will be evaluated as soon as possible. AIP last replaced in 2015, see aLOG 18261.
(Jordan V., Gerardo M.)
Today we replaced the MKS gauge at FC-C-1, the first 6-way cross inside the filter cavity tube enclosure. We installed serial number 390F00490, twice, yes two times. It turns out that the flange has some scratches on the knife edge, and it was not going to seal regardless of the effort we put into it. Once the gauge was removed, the scratches had transferred to the copper gasket. We replaced it with serial number 390F00495, and this one seems to be doing well. The new conflat was leak tested; no leak was detectable above 2.42e-10 torr*l/sec.
The old gauge serial number is 390F00406 with a date code of June 2021.
Additional pictures of the knife edge damage/dirty flange from manufacturer.
More photos of the MKS390 gauge due to newly found features.
We found some features internal to the gauge (see attached photos); maybe when welding the conflat to the gauge body they did not use shielding gas inside the gauge.
For future reference, we did a test on the gauge with an annealed copper gasket; no leaks were detected above 1.0e-10 torr*l/sec. So, if this gauge is deemed good we can use it; we are contacting the vendor with lots of questions. Serial number 390F00495 is the gauge featured in the attached photos.
WP12623 h1asc add fast channels to DAQ
Elenna, Dave:
A new h1asc model was rev-locked and installed. Four new fast DQ channels were added to the DAQ (channel, rate):
> H1:ASC-DC6_P_IN1_DQ, 256
> H1:ASC-DC6_P_OUT_DQ, 512
> H1:ASC-DC6_Y_IN1_DQ, 256
> H1:ASC-DC6_Y_OUT_DQ, 512
DAQ restart needed.
WP12570 Restart Digivideo Cameras with latest pylon
Patrick, Jonathan, Dave:
Jonathan updated pylon on h1digivideo[4,5,6] and restarted all the camera servers on these machines. This should fix the bug of stuck open files accumulating when the camera connection is interrupted.
No DAQ restart needed.
Add PID SMOO channels to vacuum SDF
Dave:
Prior to today's h0vacly restart I added the missing CP PID-control SMOO channels to the vacuum SDF monitor.req and safe.snap files. SDF was restarted 08:29. No DAQ restart needed.
WP12577, 12608, 12615 Upgrade LY Vacuum Controls
Janos, Gerardo, Patrick, Jonathan, Erik, Dave:
Patrick installed a new h0vacly system this morning. Main items are:
Please see Patrick's alog for details.
An extended DAQ restart was required: renaming Ion Pump raw minute trend files for uninterrupted lookback, and constructing new PT100 (HAM1) raw minute trends following the upgrade of h0vaclx last Tuesday (17th June 2025).
DAQ Restart
Jonathan, Erik, Patrick, Dave:
Immediately following the restart of h1asc at 11:52, the DAQ was restarted (the sequence is shown in the restart log below).
It was at this late point that I remembered that the temporary H1 version of PT100B is no longer needed, and indeed this channel has had no data since the removal of the PT100B Volts channel from h0vacly. However, since it is still in the EDC, we need to continue running the temporary IOC until the next DAQ restart. I've removed it from edcumaster.txt as a reminder.
GPS Leap Seconds Updates
Jonathan, Erik, Dave:
Erik's FAMIS task reminded us that the leap seconds files' expiration date of 30 June 2025 is rapidly approaching. Although no leap seconds are to be applied, the files need to be updated to reset their expiration dates. Please see Erik and Jonathan's alog for more details.
DNS testing
Erik:
Erik used ns1 (the backup DNS server) to see if we can reproduce the error whereby loss of connection to GC caused internal CDS name resolution issues. The error did not reproduce.
Vacuum Ion Pump channel name changes (old-name, new-name)
H0:VAC-FCES_IP23_II123_AIP_IC_VOLTS | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_VOLTS |
H0:VAC-FCES_IP23_II123_AIP_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_VOLTS_ERROR |
H0:VAC-FCES_IP23_II123_AIP_IC_MA | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_MA |
H0:VAC-FCES_IP23_II123_AIP_IC_MA_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_MA_ERROR |
H0:VAC-FCES_IP23_II123_AIP_IC_LOGMA | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_LOGMA |
H0:VAC-FCES_IP23_II123_AIP_IC_LOGMA_ERROR | H0:VAC-FCES_IPFCC9_IIC9_AIP_IC_LOGMA_ERROR |
H0:VAC-FCES_IP23_VI123_AIP_PRESS_TORR | H0:VAC-FCES_IPFCC9_VIC9_AIP_PRESS_TORR |
H0:VAC-FCES_IP23_VI123_AIP_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCC9_VIC9_AIP_PRESS_TORR_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_VOLTS | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_VOLTS |
H0:VAC-FCES_IP24_II124_AIP_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_VOLTS_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_MA | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_MA |
H0:VAC-FCES_IP24_II124_AIP_IC_MA_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_MA_ERROR |
H0:VAC-FCES_IP24_II124_AIP_IC_LOGMA | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_LOGMA |
H0:VAC-FCES_IP24_II124_AIP_IC_LOGMA_ERROR | H0:VAC-FCES_IPFCD1_IID1_AIP_IC_LOGMA_ERROR |
H0:VAC-FCES_IP24_VI124_AIP_PRESS_TORR | H0:VAC-FCES_IPFCD1_VID1_AIP_PRESS_TORR |
H0:VAC-FCES_IP24_VI124_AIP_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCD1_VID1_AIP_PRESS_TORR_ERROR |
H0:VAC-FCES_IP25_CS187_STATUS | H0:VAC-FCES_IPFCH8A_CSH8A_STATUS |
H0:VAC-FCES_IP25_II187_IC_VOLTS | H0:VAC-FCES_IPFCH8A_IIH8A_IC_VOLTS |
H0:VAC-FCES_IP25_II187_IC_VOLTS_ERROR | H0:VAC-FCES_IPFCH8A_IIH8A_IC_VOLTS_ERROR |
H0:VAC-FCES_IP25_II187_IC_AMPS | H0:VAC-FCES_IPFCH8A_IIH8A_IC_AMPS |
H0:VAC-FCES_IP25_II187_IC_AMPS_ERROR | H0:VAC-FCES_IPFCH8A_IIH8A_IC_AMPS_ERROR |
H0:VAC-FCES_IP25_VI187_PRESS_TORR | H0:VAC-FCES_IPFCH8A_VIH8A_PRESS_TORR |
H0:VAC-FCES_IP25_VI187_PRESS_TORR_ERROR | H0:VAC-FCES_IPFCH8A_VIH8A_PRESS_TORR_ERROR |
Tue24Jun2025
LOC TIME HOSTNAME MODEL/REBOOT
11:52:26 h1asc0 h1asc <<< Elenna's new asc model
not shown, shutdown of both TW0 and TW1 for file manipulation at this point
12:17:51 h1daqdc0 [DAQ] <<< 0-leg restart
12:18:03 h1daqfw0 [DAQ]
12:18:03 h1daqtw0 [DAQ]
12:18:05 h1daqnds0 [DAQ]
12:18:12 h1daqgds0 [DAQ]
12:18:15 h1susauxb123 h1edc[DAQ] <<< edc restart with new vacuum channel list
12:24:09 h1daqdc1 [DAQ] << 1-leg restart
12:24:22 h1daqfw1 [DAQ]
12:24:22 h1daqtw1 [DAQ]
12:24:23 h1daqnds1 [DAQ]
12:24:31 h1daqgds1 [DAQ]
12:25:06 h1daqgds1 [DAQ] <<< gds1 second restart