Broadband:
Start: 1433619864 / 2025-06-10 12:44:06
Stop: 1433620174 / 2025-06-10 12:49:16
Simulines:
GPS start: 1433620215.301202
GPS stop: 1433621616.185126
2025-06-10 20:13:18,020 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250610T194958Z.hdf5
2025-06-10 20:13:18,028 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250610T194958Z.hdf5
2025-06-10 20:13:18,035 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250610T194958Z.hdf5
2025-06-10 20:13:18,040 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250610T194958Z.hdf5
2025-06-10 20:13:18,045 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250610T194958Z.hdf5
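For reference, GPS seconds like those above can be converted to UTC with a couple of lines of Python (a minimal sketch using astropy; the gpstime package available on CDS machines does the same job):

    # Convert the broadband start/stop GPS times quoted above to UTC.
    # astropy handles the GPS-to-UTC leap-second offset internally.
    from astropy.time import Time
    for gps in (1433619864, 1433620174):
        print(gps, '->', Time(gps, format='gps').utc.iso)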
I'm working on going through some Observe SDFs, so that we're ready for observing soon.
Jim is currently working on going through many of the SEI SDFs. The rest of the diffs I need to check with other commissioners to be sure about before we clear them, but I think we're getting close to having our SDFs cleared!
h1tcshws SDFs attached.
Reverted Baffle PDs to what they were 3 months ago (attached); unsure why they would have changed.
SQZ ADF frequency SDFs accepted; we do not know why these would have been accepted at the values of -600 that they've been at for some of the past 2 weeks.
ASC SDFs were from changes to DC6, cleared.
Cleared these SDFs for the phase changes for LSC REFL A and B.
I trended these, and see that FM2 was on in all three of these last time we were in observing, so these must have been erroneously accepted in the observing snap.
I also accepted the HAM7_DK_BYPASS time from 1200 to 999999 after checking with Dave, as attached.
I went down to End Y to retrieve the USB stick that I had remotely copied the c:\slowcontrols directory on h1brsey to, and also to try to connect h1brsey to the KVM switch in the rack. I eventually realized that what I thought was a VGA port on the back of h1brsey was probably not one. Instead, I found some odd-seeming wiring running from what I am guessing is an HDMI or DVI port on the back of h1brsey to some kind of converter device, and then to a USB port on a network switch. I'm not sure what this is about, so I am attaching pictures.
Tue Jun 10 10:12:01 2025 INFO: Fill completed in 11min 57secs
Around 11:15 local I observed an outlier on the VEA temperature trend. Zone 2 (Y output) appeared to be running beyond its norm. Because Eric was troubleshooting a heater coil in this particular zone (per WP 12589) this morning, this was not terribly surprising, but I decided to investigate anyway. According to FMCS, heating stage 1 was manually forced on. It appeared to hold at least a 40% heating command in this condition. I don't recall a reason for this being manually enabled, nor did Eric. Since disabling it, the heating command has dropped from 40% to 0 and supply temperatures have fallen from 73F to 58F. This might cause a sharper than usual course correction, but I would expect zone 2 to fall in line with the rest of the VEA by day's end. E. Otterman T. Guidry
Sheila, Elenna, Camilla
Sheila was questioning whether something is drifting for us to need an initial alignment after the majority of relocks. Elenna and I noticed that BS PIT moves a lot both while powering up / moving spots and while in NLN. It is unclear from the BS alignment inputs plot what's causing this.
This was also happening before the break (see below), and the operators were similarly needing more regular initial alignments before the break too. One year ago this was not happening (plot).
These large BS PIT changes began 5th-6th July 2024 (plot). This is the day shift log from the time the first lock like this happened, 5th July 2024 19:26 UTC (12:26 PT): 78877; at the time we were doing PR2 spot moves. There was also a SUS computer restart (78892), but that appeared to be a day after this started happening.
Sheila, Camilla
This reminded Sheila of a time in the past when we were heating a SUS, causing the bottom mass to pitch and the ASC to move the top mass to counteract it. Then, after lockloss, the bottom mass would slowly go back to its nominal position.
We do see this on the BS since the PR2 move, see attached (top 2 left plots). See in the green bottom mass oplev trace: when the ASC is turned off on lockloss, the BS moves quickly and then slowly moves again over the next ~30 minutes; we do not see similar behavior on PR3. Attached is the same plot before the PR2 move. Below is a list of other PR2 positions we tried; all of the other positions have also produced this BS drift. The total PR2 move since the good place is ~3500 urad in yaw.
To avoid this heating and BS drift, we should move back towards a PR2 YAW closer to 3200. But we moved PR2 to avoid the spot clipping on the scraper baffle, e.g. 77631, 80319, 82722, 82641.
I did a bit of alog archaeology to re-remember what we'd done in the past.
To put back the soft turn-off of the BS ASC, I think we need to:
Camilla made the good point that we probably don't want to implement this and then have the first trial of it be overnight. Maybe I'll put it in sometime Monday (when we again have commissioning time), and if we lose lock we can check that it did all the right things.
I've now implemented this soft let-go of BS pit in the ISC_DRMI guardian, and loaded. We'll be able to watch it throughout the day today, including while we're commissioning, so hopefully we'll be able to see it work properly at least once (eg, from a DRMI lockloss).
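As a rough illustration only (not the actual ISC_DRMI code), a 'soft let-go' of an ASC loop in guardian-style Python amounts to ramping the loop gain to zero over several seconds instead of cutting it instantly; the filter name and ramp time below are hypothetical:

    # Hypothetical sketch of a gentle ASC hand-off, in the style of guardian
    # code that uses ezca. The filter name and 30 s ramp are placeholders.
    from ezca import Ezca
    ezca = Ezca()  # inside a real guardian node this object is provided for you

    def soft_let_go_bs_pit(ramp_time=30):
        # Ramp the (hypothetical) BS pitch ASC gain to zero over ramp_time seconds;
        # wait=False returns immediately so the state machine keeps running.
        bs_pit_loop = ezca.get_LIGOFilter('ASC-MICH_P')
        bs_pit_loop.ramp_gain(0, ramp_time=ramp_time, wait=False)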
This 'slow let-go' mode for BS pitch certainly makes the behavior of the BS pit oplev qualitatively different.
In the attached plots, the sharp spike up and decay down behavior around -8 hours is how it had looked for a long time (as Camilla notes in previous logs in this thread). Around -2 hours we lost lock from NomLowNoise, and while we do get a glitch upon lockloss, the BS doesn't seem to move quite as much and is mostly flattened out after a shorter amount of time. I also note that this time (-2 hours ago) we didn't need to do an initial alignment (which was done at the -8 hours ago time). However, as Jeff pointed out, we held at DOWN for a while to reconcile SDFs, so it's not quite a fair comparison.
We'll see how things go, but there's at least a chance that this will help reduce the need for initial alignments. If needed, we can tweak the time constant of the 'soft let-go' to make the optical lever signal stay flatter overall.
The SUSBS SDF safe.snap file is saved with FM1 off, so that it won't get turned back on in SDF revert. The PREP_PRMI_ASC and PREP_DRMI_ASC states both re-enable FM1 - I may need to go through and ensure it's on for MICH initial alignment.
RyanS, Jenne
We've looked at a couple of times that the BS has been let go of slowly, and it seems like the cooldown time is usually about 17 minutes until it's basically done and settled where it wants to be for the next acquisition of DRMI. Attached is one such example.
In contrast, a day or so ago Tony had to do an initial alignment. On that day, it seemed like the BS took much longer to get to its quiescent spot. I'm not yet sure why the behavior is different sometimes.
Tony is working on taking a look at our average reacquisition time, which will help tell us whether we should make another change to further improve the time it takes to get the BS to where it wants to be for acquisition.
Last night Corey ran a simulines measurement shortly into the start of the lock, 84908. This measurement was mainly done as a test to confirm simulines wasn't breaking the lock, so we were not well thermalized. We can first report that simulines did not break the lock, so the previous lockloss that occurred during the simulines measurement was likely unrelated to simulines itself.
GPS start of measurement: 1433563079
I did a time machine on the calibration monitor screen for the GPS start of the measurement.
I was able to generate a report by running pydarm report --skip-gds
I have attached the generated PDF from this report, and I took a screenshot of the first page since there is an interesting result. The sensing function shows a large spring, which is probably because we are operating with a significant SRCL offset, designed to compensate for 1.4 degrees of SRCL detuning (84794).
However, it is important to remember that this calibration measurement was made while the IFO was unthermalized, whereas the SRCL offset measurement linked here was performed with the IFO thermalized.
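For context, in a pyDARM-style sensing model an SRC detuning shows up as an optical-spring term of roughly this form (a generic expression, not fit values from this measurement):

    C(f) \propto \frac{f^2}{f^2 + f_s^2 - i\,f\,f_s/Q}

where f_s is the optical spring frequency and Q its quality factor. With no detuning f_s goes to zero and this term is flat in frequency; a significant detuning (or an offset compensating for one) gives a nonzero f_s, which is the kind of 'spring' feature visible in the sensing function plot.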
The results from this calibration report are saved in /ligo/groups/cal/H1/reports/20250610T035741Z/
Camilla and I swept the LVEA this morning between locks. The VAC team still has the HAM1 pumps to turn off and valve in, as well as a setup with a laptop on the Y arm near the manifold, and a turbo on the output arm. They will get to this later in the day. Other notable things found on our walk through:
The heating coil in zone 2A of the LVEA was repaired this morning. There will be some variation in the trending while the PI loop adjusts.
The CDS SDF has had 1 diff for the past 4 days because the second outlet of the EY Tripplite power-strip was turned on around 9am Friday 06jun2025. Strangely, EY's lights were not turned on at all on Friday, so either this was a mistake or something had been plugged into the outlet ahead of time.
Asking around the control room, no one knows why this was turned on. Because driving to EY and entering the VEA is invasive on locking, we elected to turn it off for now.
Following the drop in LN2 level over the last few days (see plot) we decided to bump up the alarm level from 80% to 85% on all CPs to give the vacuum team an earlier warning that the PID controls system is not able to maintain a nominal level.
The alarms system was restarted at 08:16 with the following change:
Channel name="H0:VAC-LX_CP2_LT150_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP2 Pump LN2 Level">
Channel name="H0:VAC-MY_CP3_LT200_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP3 Pump LN2 Level">
Channel name="H0:VAC-MX_CP5_LT300_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP5 Pump LN2 Level">
Channel name="H0:VAC-MX_CP6_LT350_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP6 Pump LN2 Level">
Channel name="H0:VAC-EY_CP7_LT400_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP7 Pump LN2 Level">
Channel name="H0:VAC-EX_CP8_LT500_PUMP_LEVEL_PCT" low="8.5e+01" high="9.9e+01" description="CP8 Pump LN2 Level">
TITLE: 06/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 1mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
H1 was in IDLE when I arrived.
I will start trying to lock now.
I've changed the sign of the damping gain for ITMX mode 13 in lscparams from +0.2 to -0.2 after seeing it damp correctly in 2 lock stretches. The VIOLIN_DAMPING GRD could use a reload to pick up this change.
I have loaded the violin damping guardian, since the setting RyanC found still works.
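For illustration, a sign flip like this in a lscparams-style settings dictionary is a one-character change along these lines (the structure and key names here are made up, not the actual lscparams.py):

    # Hypothetical violin-mode damping settings in the style of lscparams.py.
    vio_damp = {
        'ITMX': {
            13: {'gain': -0.2},   # sign flipped from +0.2 after it damped correctly
        },
    }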
TITLE: 06/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: n/a tonight
SHIFT SUMMARY:
LOG:
Headline summary: We are very nearly back to NLN, only prevented from returning by the violin modes, which are too high to engage the OMC whitening. We have not yet been able to calibrate because of a very fast lockloss of unknown source.
The alog was down for most of the afternoon and evening, so I will do my best here to copy messages from the Mattermost chat, which served as the temporary alog.
Minor struggles in returning to full lock:
Once we achieved full power, we proceeded to try to solve some of the final instabilities left over from last week. Changes made to avoid locking stability problems:
The plan was to try testing simulines again; however, we have had multiple very fast locklosses with no known origin. Two have happened shortly after arriving at OMC whitening. These are not coming from ASC, and I don't see any ringup in DARM or the LSC loops.
Had a look at the fast locklosses from OMC_WHITENING; the lockloss tool tags windy for both (~20 mph, so not bad), and we are waiting for the violins to damp before engaging OMC_WHITENING.
Note: although these aren't slow ring-ups, these are our typical type of locklosses and are not IMC fast locklosses.
I noticed that the LLCV is railing at its top value, 100% open; it can't open any further. This is a known issue, but it appears as if the valve is reaching 100% sooner than expected, i.e. when the tank is almost half full. First, I'm going to try to re-zero the LLCV actuator and await the results. The first attachment is a 2-day plot of the LLCV railing today and yesterday. The second plot is a 3-year history looking at the tank level and the LLCV; it rails at 100% a few times.
BURT restore? PID tuning ok? CP2 @ LLO PID parameters attached for comparison.
Thanks Jon. However, this system has a known issue: it turns out that the liquid level control valve is not suitable for the job, which is why it reaches 100% sooner rather than later. But it appears as if something slipped; now it reaches 100% at a higher level, and this is why I want to re-zero the actuator.
Attached is the Fill Control for CP7. The issue was first mentioned in aLOG 4761, but I never found out who discovered it; it is only briefly mentioned by Kyle. Another entry is in aLOG 59841.
Dirty solution to the issue of the LLCV getting railed at 100% open: we used the bypass valve, opened it up by 1/8 of a turn, and that did the job. Not in a single shot, but eventually we settled on that turn number. The PID took over and managed to settle at around 92% open for the LLCV. Today we received a load of LN2 for the CP7 tank. We are still going to calibrate the actuator.
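To illustrate the railing behavior in control terms: once the commanded valve position hits its 100% limit, the loop has no authority left no matter how large the level error gets. Below is a minimal PI-with-clamp sketch (the gains, setpoint, and level are made up, not the CP7 tuning):

    # Toy PI controller with the output clamped to the 0-100% LLCV range.
    def pi_step(level, setpoint, state, kp=2.0, ki=0.05, dt=1.0):
        error = setpoint - level
        state['integral'] += error * dt
        raw = kp * error + ki * state['integral']
        command = min(max(raw, 0.0), 100.0)   # the valve cannot open past 100%
        if command != raw:
            # crude anti-windup: stop integrating while the valve is railed
            state['integral'] -= error * dt
        return command

    state = {'integral': 0.0}
    print(pi_step(level=45.0, setpoint=92.0, state=state))  # rails at 100.0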
(This is Oli)
Once we had been at 60W for two hours, I started a calibration measurement. I started with Simulines since yesterday I had gotten a broadband measurement done (84808). A couple of minutes into the measurement, we lost lock. The cause is unknown, but I've attached the output from the simulines measurement (txt).
The lockloss happened as the calibration signals were ramping on, see attached. The first glitch was in L3, see attached.
Attaching a trend of the OMC DCPD sum during the lockloss. The plot suggests the DCPDs were not the cause of the lockloss.
[Jenne Louis Matt]
The change in the filters used by the calibration created a 10% calibration error. Louis is trying to fix this, but in an effort to have a way to revert to the previous filters without losing lock, Jenne came up with a cdsutils way to revert. Essentially it toggles off the new filters (FM10 in H1:OMC-DCPD_A0 and H1:OMC-DCPD_B0) and steps the demod phase (H1:OMC-LSC_PHASEROT) in steps of 1 degree, 77 times, with a 0.065 sec delay between each step (77 steps spread over ~5 seconds). The filter toggles have a 5 second ramp time, which sets the step delay in the cdsutils step, and the 77 degrees is the difference between the old demod phase and the new. Hopefully this avoids a lockloss in the event we need to revert, but it may not. *fingers crossed*
Here is the command:
cdsutils switch H1:OMC-DCPD_A0 FM10 OFF; cdsutils switch H1:OMC-DCPD_B0 FM10 OFF; cdsutils step H1:OMC-LSC_PHASEROT 1,77 -s 0.065
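For reference, a rough Python equivalent of that one-liner using ezca (a sketch only; the cdsutils command above is what was actually prepared):

    # Same revert sequence: switch off the new AA filters (5 s ramp) while
    # walking the demod phase back 77 degrees in 1-degree steps over ~5 s.
    import time
    from ezca import Ezca
    ezca = Ezca()  # picks up the IFO prefix from the site environment

    ezca.switch('OMC-DCPD_A0', 'FM10', 'OFF')
    ezca.switch('OMC-DCPD_B0', 'FM10', 'OFF')
    for _ in range(77):
        ezca['OMC-LSC_PHASEROT'] += 1
        time.sleep(0.065)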
UPDATE:
It worked :) The anti-aliasing filters are off and the OMC-LSC phase rotation has returned to nominal.
Joe, Francisco, and I got confused about the reverting and changing of the OMC phase rotation around this time.
I opened up an ndscope to see what happened.