h1cam26 (BS) was restarted Thu 12:56 PDT after stopping late Wed at 22:23. It ran for less than two days before it stopped responding again Fri night at 22:30.
I restarted it again this morning at 11:07 following the standard procedure.
H1 is currently stuck at ADS TO CAMERAS (same as last night); see attached screenshot.
The CAMERA_SERVO node has messages of:
"ASC-CAM_PIT1 (& YAW1)_INMON is stuck! Do Not Advance!"
Will do some searching to see what the issue is.
NOTE: Sheila's alog mentioned that PIT1 had been updated.
Ryan S. messaged me that the issue is the BS camera being down (similar to alog 79648 from Thursday), so I contacted Dave and he will bring this BS camera back up.
As soon as Dave brought the camera back, I reselected CAMERA SERVO ON (for the CAMERA_SERVO node); this unstalled the node and had ISC_LOCK get us to NLN within a minute... but of course, PI24 immediately started to ring up! (It looked like the SUS PI node damped it down, though.)
Arrived to see H1's ISC_LOCK at ADS TO CAMERAS, but no signals on the front ndscope and no DARM line on the DARM FOM. The SUS violin screen looked like every mode was OFF or not damping. Eventually, the telltale sign was that MC Trans was flashing (so the IMC was NOT locked) and MC REFL was really bright (with the OPS Overview saying the PSL was at ~61 W).
OK, here's a rough timeline for last night (I used the handy "guardctrl log" command-line tool, which Camilla helped me find; I also attached the guardctrl log of all notes for the minute around the lockloss):
I will probably run an Initial Alignment and return to the game plan of locking H1 and running measurements at NLN.
Sat Aug 24 08:13:53 2024 INFO: Fill completed in 13min 49secs
TC mins were -159C for today's fill; the outside temp is down to 13C (56F). I've increased the trip temps from full-summer (-140C) to late-summer (-130C). We typically run at this level until mid-October.
TITLE: 08/24 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: n/a
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Trying to figure out the state of H1: it was in automatic operation and has been stuck at ADS_TO_CAMERAS since before 11pm local. (My main confusion is why I don't see any signals on nuc30: no DARM signal and no signals on the PRMI.sb ndscope. I also don't see Odd.)
WAIT.
OK, H1 is DOWN, but Guardian seems to think it's trying to get through ADS to Cameras and Inject Squeezing... my realization was that MC Trans was flashing! Then I saw that MC Refl is really bright (I'm assuming close to the 60.8 W Guardian had taken us to via ISC_LOCK).
With regard to the weekend operators' to-do list (aka alog 79760):
Calibrations:
Camera Set Points:
TITLE: 08/24 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY: Currently relocking and at MOVE_SPOTS. Max power is currently set to 61 W in lsc_params (in actuality we are getting 60.8 W out) in the hopes of avoiding the new PI. High winds today made locking hard, but we finally got into NLN for an hour, and we were able to take (unthermalized) calibration measurements (79676), figure out where the new PI is coming from, and start trying to update the camera offsets. While running the script that checks for the best camera offset locations, we lost lock, and I have been trying to get us relocked since then. We kept having locklosses in the middle states (~200-420 s), and this last time Sheila recommended fixing the alignment by sitting in OFFLOAD_DRMI_ASC and adjusting SRM and PRM to raise POP18 and lower POP90 (although I mostly focused on raising POP18); adjusting them seems to have helped us get past this trouble area. Also, OMC_LOCK has been having a little trouble locking the cavity, but it eventually catches on its own. For tonight, I am setting H1_MANAGER to LOW_NOISE but leaving IFO_NOTIFY off.
LOG:
23:30 Just lost lock from LOWNOISE_ASC
00:59 NOMINAL_LOW_NOISE
01:06 Went to NLN_CAL_MEAS to run calibration
01:35 Calibration measurements finished, went back to NOMINAL_LOW_NOISE
02:00 Lockloss
02:13 Lockloss from ACQUIRE_DRMI_1F
02:24 Lockloss from CHECK_MICH_FRINGES
02:26 Started an initial alignment
03:05 Initial alignment done, relocking
03:20 Lockloss from RESONANCE
03:50 Lockloss from CARM_150_PICOMETERS
04:01 Lockloss from RESONANCE
04:13 Lockloss from CARM_TO_REFL
04:22 Lockloss from ENGAGE_DRMI_ASC
- Starting an initial alignment
04:41 Initial alignment done, relocking
04:53-04:58 Sat in OFFLOAD_DRMI_ASC and adjusted SRM and PRM to raise POP18
** These calibration measurements were started a few minutes after reaching NLN. The IFO was NOT thermalized!!
Calibration monitor screenshot
Broadband (2024/08/24 01:06 - 01:11 UTC)
File:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240824T010639Z.xml
Simulines (2024/08/24 01:13 - 01:35 UTC)
Screenshot of simulines error from the end of the measurement
Files:
/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240824T011302Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240824T011302Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240824T011302Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240824T011302Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240824T011302Z.hdf5
Reconciling some ASC observe diffs. Attached screenshot shows the change.
No SRC1 offset anymore, new POP A offsets, and a change in the tramp for the WFS offset.
There are still diffs for the ADS LO PIT and YAW matrix. I don't know why that is so I am not changing them.
After reaching NLN this morning, I accepted some SDFs for ALSE{X,Y} new green references and WFS PZT biases - attachments 1 and 2
TITLE: 08/23 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in ACQUIRE_PRMI and MAINTENANCE
There have been 2 initial alignments (fully auto), many locklosses due to ASC reasons, and many more due to wind gusts (to the best of our knowledge). We were in NLN for a total of 46 minutes today! Additionally, the LVEA has been swept in prep for observing (alog 79666). The commissioners are gathering a weekend operator task list.
We've had a rough day trying to get locked due to the following reasons:
PI Mode Mania (alog 79665)
PI modes since we have been back are getting rung up right after we enter NLN. While the culprit is usually mode 24, mode 31 does sound on Verbal too. Camilla and Naoki tried to ring them up to investigate and were not able to ring up mode 24, implying a potential slight change in the PI modes. We could not get locked to investigate this further, but we think it might be related to the short (<1 hr 30 min) locks we are getting.
Wind Speed Story:
Wind gusts have been over 30 mph for the last few hours, which has made getting locked extremely difficult, with easily over 10 failed lock attempts from ALS (usually Y) to DRMI. While this is annoying, it is also a known impediment.
OMC Trans Camera Conundrum:
The OMC Trans camera has been observed to shake/flicker in an unknown state (we saw it once today and a few times yesterday). By the time we were ready to investigate when this flicker occurs, we started having our locking issues. We did rule out camera auto-exposure problems.
POP18 Peculiarity (alog 79663):
POP18 is particularly low, and we have been moving PRM and SRM in order to maintain DRMI until we can offload. A potential plan is to fix it in the LVEA (ISCT1), but since we were unable to investigate while locked, this is being delayed to another day, given that the issue still persists.
What we want to do once locked:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | sys | h1 | lho | YES | LVEA is laser HAZARD | 18:24 |
16:28 | PEM | Robert, Genevieve, Millie | EX/EY | N | Seismic measurements | 20:27 |
17:39 | PCAL | Rick | PCAL Lab | Yes | Checking/getting parts | 20:03 |
18:07 | PCAL | Tony | PCal Lab | Local | Parts | 20:03 |
19:11 | OPS | Camilla | LVEA | Yes | LVEA Pre-Obs Sweep | 19:31 |
22:17 | OPS | Camilla | LVEA | YES | Turning off monitor | 22:21 |
22:43 | PCAL | Llamas | PCal Lab | Yes | Power cycling the KVMs for a remote measurement | 22:55 |
We can edit this list as needed.
We are having trouble locking this afternoon because of the wind, but we have some requests for the operators this weekend if they are able to relock.
We've changed the nominal locking power to 61W, in the hopes that this might avoid the PI or let us pass through it quickly.
When we first get to NLN, please take the IFO to NLN_CAL_MEAS and run a calibration sweep. If we stay locked for ~1.5 hours, please re-run, and if we are ever locked for more than 3 hours, please re-run again.
After the calibration has run, we would like to check the camera set points, since we saw earlier today that they are not optimal, and that might be related to our PI problem. We already updated the offset for PIT1, but we'd like to check the others. We modified the script from 76695 to engage +20 dB filters in all these servos to speed up the process. Each DOF should take a little more than 15 minutes. We'd like these run with the POP beam diverter open so we can see the impact on POP18 and POP90. The operators can run this by going to /ligo/gitcommon/labutils/beam_spot_raster and typing python camera_servo_offset_stepper.py 1 -s now (and once 1 completes, run 2 and 3 if there's time).
I've added 3 ndscope templates that you can use to watch what these scripts do. The templates are in userapps/asc/h1/templates/ndscope/move_camera_offsets_{BS, DOF2_ETMX, DOF3_ETMY}.yml. We'd like to see if any of these can increase the arm powers or POP18. If a better value is found, it can be set as the default by updating lscparams.py lines 457 to 465 (see the illustrative sketch below).
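For reference, a purely illustrative sketch of what setting a new default might look like; the actual structure of lscparams.py (and of lines 457 to 465) may differ, and every name and value below is made up:

    # Hypothetical illustration only -- not the real lscparams.py contents.
    # The dictionary name, DOF keys, and values are all placeholders; the
    # idea is to replace the relevant entry with the optimum offset that
    # camera_servo_offset_stepper.py finds.
    camera_servo_offsets = {
        'PIT1': -0.15,
        'PIT2':  0.02,
        'PIT3':  0.08,
        'YAW1':  0.11,
    }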
The injections for LSC feedforward can also be taken after the tasks Sheila mentions here. Do these measurements after at least 2-3 hours of lock.
The templates used for these measurements are found in "/opt/rtcds/userapps/release/lsc/h1/scripts/feedforward"
Steps:
Putting in another request for LSC feedforward measurements. Please disregard the above instructions and instead follow these:
There is no need to take any other measurements at this time! I have copied the exact filenames from the folder. Do not change the filename when you save.
TITLE: 08/23 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 26mph Gusts, 22mph 5min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
Currently at ACQUIRE_DRMI_1F. We've been working on locking, but progress has been slow due to high winds. A few minutes ago we were at LOWNOISE_ASC, but lost lock. Max power is currently set in lsc_params as 61W to hopefully bypass the new PI. Continuing locking attempts.
I am taking another look at what steps in lock acquisition take a very long time. See Georgia's previous alog on this: 76398
I noticed some of the power up steps seem to take longer than they should. I found some timers that are longer than necessary, so I am shortening them.
Just a note about many of the longer waits and tight convergence thresholds: powering up to 75 W was sometimes a very shaky process that required much tighter convergence in the soft loops, since runaway instabilities would otherwise often cause locklosses. We have previously relaxed some of these tight constraints to speed up locking, but we didn't catch them all. This should take care of more of them (but maybe not all).
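For context on where such timers live: Guardian states typically gate progression with the node's timer dictionary. Below is a minimal sketch of that pattern, assuming the standard Guardian API; the state name and duration are invented, and this is not actual ISC_LOCK code.

    from guardian import GuardState

    class POWER_UP_SETTLE(GuardState):
        """Hypothetical state illustrating the timer pattern being shortened."""

        def main(self):
            # The timer starts when the state is entered; reducing this
            # number is the kind of change described above.
            self.timer['settle'] = 30  # seconds

        def run(self):
            # run() is polled repeatedly; the timer evaluates True once it
            # expires, and returning True lets the node advance.
            return self.timer['settle']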
I shortened LOWNOISE_ASC further. All the ASC loop changes now happen in one step. The damping loop changes happen in two separate steps; combining the damping loops the last time we made changes to this state caused problems, so those still need to be separate.
This change caused a lockloss. I have reverted the guardian code to break up these steps.
After the HAM7/FC GVs were opened this morning, I aligned FC1/2 and FC GR is now locked. FC GR trans is ~110, which is reasonable.
I measured the NLG following 76542. The measured NLG is 16 (0.247/0.0154 = 16), which is a bit lower than the last measurement in 78000. The OPO temperature is very different from before, as shown in the attached SDF.
Camilla, Oli
We looked into the weird results we were seeing in the ITMX in-lock SUS charge measurements. The coherence for both the bias and length drives is always good when the bias is on, but is bad (usually below 0.09) for both drives when the excitations run with the bias off.
I compared the coherence value outputs of the python script versus the matlab script (screenshot); although the values are calculated slightly differently and are not exactly the same, they are reasonably close, so we can say there is no issue with how we are calculating the coherences.
Next, we used ndscope to look at the latest excitations and measurements, from July 9th (ndscope - bias on, ndscope - bias off). Plotting L3_LOCK_BIAS_OUTPUT, L3_DRIVEALIGN_L2L_OUTPUT, L3_LVESDAMON_UL_OUT_DQ, and L3_ESDAMON_DC_OUT_DQ for both ITMX and ITMY: if there were an issue with the excitations not going through, we would expect to see nothing on the ESDAMON and LVESDAMON channels, but we do see the excitations on ITMX.
We were still confused as to why we would see the excitations go through the ESDAMON channels but still have such low coherence, so for the July 9th measurements we compared the ITMX measurements to ITMY in dtt, looking at how each excitation showed up in DARM and what the coherence was. When the bias was on, both the bias drive and length drive measurements look as we expect, with the drive in its respective channel, a peak seen in DARM at that frequency, and a coherence of 1 at that frequency (bias_drive_bias_on, length_drive_bias_on). However, in the comparisons with the bias off, we can see the excitations in their channels for both ITMX and ITMY, but while ITMY has the peak in DARM like the bias-on measurements, ITMX is missing this peak in DARM (bias_drive_bias_off, length_drive_bias_off). The coherence between DARM and the excitation channel is also not 1 on ITMX.
We showed these results to Sheila, and she said that these results for ITMX with the bias off make sense if there is no charge built up on the ITM, which would be the first time this has been the case! So thankfully there are no issues with the excitations or the script.
We will be changing the analysis script to still run the analysis even if the coherence is low, and will add a note explaining what that means.
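A rough sketch of what that change could look like (the threshold value, function, and variable names here are assumptions, not the actual script):

    import warnings

    COHERENCE_THRESHOLD = 0.9  # assumed cutoff, not from the real script

    def report_transfer_function(tf, coherence):
        """Return the measured TF, warning (rather than bailing) on low coherence."""
        if coherence < COHERENCE_THRESHOLD:
            warnings.warn(
                f"coherence {coherence:.2f} < {COHERENCE_THRESHOLD}: with the "
                "bias off this can simply mean there is little charge on the "
                "optic, so the analysis still runs and the result is flagged."
            )
        return tf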
Hey LHO, the in-lock charge measurement script is another script (besides the A2L and calibration scripts) that I overhauled last year due to my deep dissatisfaction with the existing code.
I'll point you to the aLog from when I ran it: 68310.
Among the huge number of behaviors that I corrected, I implemented many lessons I learned while implementing simuLines for LHO: your DARM units are a very small number, and you must explicitly cast the DARM data to np.float64 in order to have the TFs and coherences (in particular the coherences) calculate correctly. I've had to repeat this lesson to at least 4 people writing code for LHO's calibration group now, because it trips people up again and again; it is not an obvious thing to do, and something I solved through sheer brute force (it took a lot to convince Louis, since he initially refused to believe it).
In particular, inside the digestData function of /ligo/svncommon/SusSVN/sus/trunk/QUAD/L1/Common/Scripts/InLockChargeMeasurements/process_single_measurement.py, you will see me casting the gwpy data to float64 on lines 50 and 51, followed by some sampling-rate tricks to get the coherence to calculate correctly with gwpy's coherence call, as well as with gwpy's average_fft calls.
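As a minimal sketch of that lesson, assuming gwpy is available (the channel names, GPS times, and FFT parameters below are placeholders, and this is not the actual digestData code):

    import numpy as np
    from gwpy.timeseries import TimeSeries

    start, end = 1372000000, 1372000120  # placeholder GPS times

    darm  = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
    drive = TimeSeries.get('H1:SUS-ITMX_L3_LOCK_BIAS_OUT_DQ', start, end)

    # DARM samples are tiny numbers; per the note above, the intermediate
    # products of the coherence estimate misbehave unless the data are
    # explicitly cast to float64 first.
    darm  = darm.astype(np.float64)
    drive = drive.astype(np.float64)

    # Put both series at a common sample rate so the FFT segmentation
    # lines up (standing in for the "sampling rate tricks" mentioned above).
    rate  = min(darm.sample_rate.value, drive.sample_rate.value)
    darm  = darm.resample(rate)
    drive = drive.resample(rate)

    coh = drive.coherence(darm, fftlength=10, overlap=5)
    print(coh.max())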
Hope it helps!
Thanks Vlad, we'll have a look at that.
While looking at these measurements, we realized that we were not using the same bias setting for all the quads (ITMY is at around half bias). We want to change this using the attached code, but first we will run the charge measurements so we can directly compare before and after the vent.
and again...
h1cam26 stopped running at 15:25 (it only ran for 4 hr 18 min). I power-cycled the camera at 15:41.