LHO General
ryan.short@LIGO.ORG - posted 07:40, Monday 15 September 2025 - last comment - 09:48, Monday 15 September 2025(86921)
Ops Day Shift Start

TITLE: 09/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 3mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: I'll start by taking H1 through an initial alignment, then relocking up through CARM_5_PICOMETERS, since it sounds like that's still the furthest stable state H1 can reach.

Comments related to this report
sheila.dwyer@LIGO.ORG - 09:05, Monday 15 September 2025 (86923)

Attempt 1:

Reached CARM_5_PICOMETERS with the guardian. With the TR_CARM offset at -52, we could then set the TR_CARM gain to 2.1, then step the TR_CARM offset to -56. After that we could set the DHARD P gain to -30 and the DHARD Y gain to -40.

Then we ran CARM_TO_TR with the guardian. We could step the TR_REFLAIR9 offset to -0.03 and things looked stable, but things started to ring up at 2 Hz when we stepped to -0.02.

It seems like the increased DHARD gain helped keep things more stable than last night.
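
For reference, a minimal sketch of the manual stepping above, written as ezca commands rather than guardian code; the channel names (LSC-TR_CARM_OFFSET, LSC-TR_CARM_GAIN, ASC-DHARD_P_GAIN, ASC-DHARD_Y_GAIN) and the Ezca() setup are assumptions based on the signal names in this entry, not lines from ISC_LOCK.py:

# Hedged sketch only: the manual stepping described above as ezca commands.
# Channel names and the Ezca() setup are assumptions, not taken from ISC_LOCK.py.
import time
from ezca import Ezca

ezca = Ezca()  # assumes the IFO prefix is picked up from the environment

def step(channel, value, settle=10):
    """Set a channel, then pause to watch the buildups before the next step."""
    ezca[channel] = value
    time.sleep(settle)

step('LSC-TR_CARM_OFFSET', -52)   # offset where guardian left us at CARM_5_PICOMETERS
step('LSC-TR_CARM_GAIN', 2.1)     # then raise the TR_CARM gain
step('LSC-TR_CARM_OFFSET', -56)   # then step the offset further
step('ASC-DHARD_P_GAIN', -30)     # increased DHARD P gain
step('ASC-DHARD_Y_GAIN', -40)     # increased DHARD Y gain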

camilla.compton@LIGO.ORG - 09:48, Monday 15 September 2025 (86924)

Plot attached of the lockloss; Elenna pointed out that we need to look at the faster channels. The oscillation started once the REFLAIR9 offset was -0.02 or higher. It's a 17-18 Hz wobble, also seen growing in all the LSC signals, which makes more sense given the fast frequency.

This same 17 Hz LSC wobble was also seen in the last lockloss last night (plot).

Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:15, Sunday 14 September 2025 - last comment - 08:25, Monday 15 September 2025(86920)
OPS Eve Shift Summary

TITLE: 09/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:

IFO is in IDLE for CORRECTIVE MAINTENANCE

The mission today was to get to RESONANCE. We got really close!

The idea was that the calculated TR_CARM offset was likely incorrect, so we had to step the offset manually to maintain stability as we went toward resonance. The attempts below are therefore all from CARM_5_PICOMETERS.

While sitting in CARM_TO_REFL during handoff, took RF_DARM TF (attached).

I'm stealing Ryan S's format for his IFO troubleshooting today since it was so easy to read! (Thanks Ryan!)

Reference for healthy CARM to RESONANCE transition pre-outage: 2025/09/09 20:01 UTC, GPS: 1441483301

Relocked,

Relocked,

Ran Initial Alignment - fully auto

Relocked,

Relocked,

The plan now was to try stepping the TR_CARM gain up as we stepped the TR_CARM offset down further from -52, to keep the UGF consistent and avoid a lockloss. Sheila planned to measure the UGF each time we stepped it (a rough sketch of this stepping pattern is included below, after this summary).

Relocked,

IFO in IDLE per Sheila's Instruction. We will try again tomorrow.
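
A rough sketch of the offset/gain stepping pattern planned above, pausing for a UGF measurement between steps; the channel names and the (offset, gain) pairs are illustrative assumptions, not guardian values:

# Hedged sketch of the planned TR_CARM offset/gain co-stepping, pausing between
# steps for a manual UGF measurement. Channel names and step values are assumptions.
from ezca import Ezca

ezca = Ezca()  # assumes the IFO prefix comes from the environment

# Step the offset down further from -52 while raising the gain to keep the UGF
# roughly constant (pairs are placeholders, not measured values).
steps = [(-52, 1.0), (-54, 1.5), (-56, 2.1)]

for offset, gain in steps:
    ezca['LSC-TR_CARM_OFFSET'] = offset
    ezca['LSC-TR_CARM_GAIN'] = gain
    input('Measure the TR_CARM UGF, then press Enter for the next step...')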

LOG:

None

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 08:25, Monday 15 September 2025 (86922)

Here are two screenshots to show our last attempt last night.

The second one shows the TR_REFL9 error signal as we stepped the TR_CARM offset closer to resonance. There is some noise in the TR_REFL9 signal due to alignment fluctuations coupling in when we are not on resonance. As we get closer to resonance we increase the TR_CARM gain to compensate for the dropping optical gain of the side-of-fringe signal, and the REFL signal gets a little quieter. The offset gets set to zero the error signal right before the handoff, and as you can see the handoff to the error signal works well and the TR_REFL9 signal becomes quiet once it is in loop. Looking at the TR_CARM signal after the handoff, you can see that the arm powers fluctuate because they are now seeing the noise from TR_REFL9, which is now in loop.

When the offset is removed, TR_CARM goes to about 58. All the buildups become wobbly when the offset is removed to go to resonance. This suggests something is wrong with the TR_REFLAIR9 error signal, which could be due to an alignment problem or an issue with the sensor itself.

 

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 17:10, Sunday 14 September 2025 (86919)
Ops Day Shift Summary

TITLE: 09/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: More troubleshooting today on H1 locking, primarily focused on getting through the CARM_TO_REFL transition and RESONANCE states. See my morning alog for details on the first half of the day; the rest I list here:

Generally locking has been very easy today up through CARM_5_PICOMETERS and alignment has been good.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Sunday 14 September 2025 (86918)
OPS Eve Shift Start

TITLE: 09/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

IFO is DOWN for CORRECTIVE MAINTENANCE

Ryan S has tried an exhaustive combination of lock attempts (I will link the alog when posted) to get beyond where we were yesterday, which was losing lock upon loading the matrix needed to get from CARM_5_PICOMETERS through CARM_TO_REFL to RESONANCE.

I don't have any ideas of what to try, but I will be in communication with commissioners.

H1 General
ryan.short@LIGO.ORG - posted 11:48, Sunday 14 September 2025 (86917)
Summary of Morning Locking Attempts

Below is my outline of locking attempts so far this morning, with any changes I made in Guardian or otherwise in bold:

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 11:26, Sunday 14 September 2025 (86916)
summary of recent changes

Summary of what we've changed so far since the overnight lock where IMC visibility degraded (starting some notes here; plan to add links later).

LHO VE
david.barker@LIGO.ORG - posted 10:36, Sunday 14 September 2025 (86915)
Sun CP1 Fill

Sun Sep 14 10:08:17 2025 INFO: Fill completed in 8min 14secs

 

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:47, Sunday 14 September 2025 (86914)
Ops Day Shift Start

TITLE: 09/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 30mph Gusts, 24mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY: Windy and rainy morning on-site. I'll start by running H1 through a fresh initial alignment, then continue troubleshooting where Ibrahim left off last night.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 21:01, Saturday 13 September 2025 (86913)
OPS Eve Shift Summary

TITLE: 09/14 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY:

IFO is in IDLE for CORRECTIVE MAINTENANCE

The "good" news:

The DHARD P loop has been closing successfully and automatically since we first closed it this afternoon. We can lock all the way to CARM_5_PICOMETERS quite quickly. DRMI has also been good to me this evening.

The bad news:

We thought we were losing lock at RESONANCE, but we were really losing lock at CARM_5_PICOMETERS.

Stepping through this state has shown that it's only at the end, where the matrix involving REFL_BIAS, TR_REFL9, and TR_CARM is loaded, that we lose lock roughly 10 s later due to SRM quickly becoming unstable.

I investigated this via alogs 86909, 86910, 86911, and 86912, which are comments to my OPS shift start (alog 86908). After being led down a rabbit hole of ramp times (from three other occasions where that was the problem), I can confirm that this isn't it. Curiously, lock was lost faster with a longer ramp time.

With confirmation from TJ and Jenne, I'm leaving IFO in IDLE with the plan being to solve this problem in tomorrow's shift.

I do feel like we're close to solving this problem or at least figuring out where the issue lies. Specific details are in the alogs listed below.

LOG:

None

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:39, Saturday 13 September 2025 - last comment - 20:27, Saturday 13 September 2025(86908)
OPS Eve Shift Start

TITLE: 09/13 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 17mph Gusts, 11mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

IFO is DOWN in CORRECTIVE MAINTENANCE

There was a lot of good work done today, primarily rephasing REFLAIR (alog 86903) and closing the DHARD_P loop. This has allowed us to easily get past DRMI. It seems that we are still losing lock around resonance due to a fast 35 Hz ringup.

The Procedure for the EVE:

  1. Try DHARD_WFS and step to resonance (the state before the lockloss). If DHARD_WFS doesn't work:
    1. Comment out lines 2430 and 2404 of ISC_LOCK.py.
    2. Then go to CARM_5_PICOMETERS and turn the DHARD_P gain to -1, then -10. This is slow.
  2. If DHARD_WFS works, keep going and step through states by hand (see the sketch after this list):
    1. Once at the state before the one you want to step through,
    2. run "guardian -i ISC_LOCK", then:
      1. copy the ezca lines,
      2. copy the intrix load lines,
      3. do NOT copy "self." lines or the library - that's handled by guardian -i.
    3. Copy each line one at a time and confirm it works. Once the state is done, go to MANUAL and request the state AFTER the desired state.
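
As a concrete but hedged illustration of step 2, this is roughly what pasting a state's lines into the interactive guardian shell looks like; apart from the intrix assignment quoted in the comments below, the lines are illustrative and the real ones should be copied from ISC_LOCK.py:

# Run from a terminal:  guardian -i ISC_LOCK
# This opens a Python shell with ezca and the ISC_LOCK module available, so a
# state's lines can be pasted in one at a time.

# 1. Paste the state's ezca lines, e.g. (channel from the guardian block quoted below):
ezca['LSC-TR_REFLAIR9_OFFSET'] = -0.03

# 2. Paste the input-matrix assignments, then the matrix load line the state
#    itself uses, e.g. the assignment that later proved to trigger the ringup:
ISC_library.intrix['REFLBIAS', 'TR_REFL9'] = 2.0

# 3. Do NOT paste lines that touch self.* (timers, counters, myoffset) or that
#    reload the library - guardian -i provides the library, and self.* only
#    exists inside the state object.

# 4. Once the state's work is done, go to MANUAL and request the state AFTER
#    the one just stepped through, so guardian does not re-run it.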

What has happened so far:

The same PR_G ringup caused locklosses at RESONANCE and CARM_TO_REFL, so I'm going to the state before, CARM_5_PICOMETERS, and will follow the blueprint above. Thankfully, DHARD_P has now worked twice.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 17:10, Saturday 13 September 2025 (86909)

Writing this before I attempt to look at ISC_library but:

Our last stable state took us to CARM_5_PICOMETERS. I stayed there just to be sure it was stable.

I then followed the procedure and stepped through the next state, CARM_TO_REFL.

  • Lines 2701-2710 contain both ezca calls and self.* state, so I couldn't run that if-block's ezca lines on their own - I may contact someone about this, but the lines were:
if self.NN < 20:
    # accumulate LSC-TR_REFLAIR9_INMON over 20 samples, 0.2 s apart
    self.myoffset = self.myoffset + ezca['LSC-TR_REFLAIR9_INMON']
    self.timer['wait'] = 0.2
    self.NN += 1
else:
    # average the accumulated samples and set the offset to cancel the error signal
    self.myoffset = self.myoffset / self.NN
    ezca['LSC-TR_REFLAIR9_OFFSET'] = round(-1*self.myoffset,3)
    self.counter += 1
  • I continued stepping through
  • On the last line, which was "ISC_library.intrix['REFLBIAS', 'TR_REFL9'] = 2.0", the same 35 Hz ringup occurred, causing a lockloss.

I will begin investigating what I can, but I think this is an issue with loading the matrix from ISC_library, implying something is up with REFLBIAS or TR_REFL9? Somewhat out of my depth, but I will investigate alogs, the wiki, ISC_library, and anything else I can.

Either way, it seems that we thought we were losing lock at RESONANCE, but it was really CARM_TO_REFL's last line, unless I got something wrong in the if-block I didn't run through, which I will also investigate.

ibrahim.abouelfettouh@LIGO.ORG - 18:29, Saturday 13 September 2025 (86910)

In looking at the problem lines loaded by the matrix, I see REFL_BIAS, TR_REFL9 and TR_CARM.

According to ISC_library.py, these are REFL_BIAS=9, TR_REFL9 = 27 and TR_CARM = 26. I don't know what this means yet.

Then, I looked at alogs hoping to find the last time something like this happened, which from vague memory was during post-vent recovery this year (with Craig and Georgia):

Looking for REFL_BIAS, I found Sheila's 2018 alog 43623 talking about the TR_CARM transition and how SRM would saturate first, which was what I was experiencing. The specific passage: "TR_CARM transition: This has been unreliable for the last several days, our most frequent reason for failure in locking. We found that the problem was a large transient when the REFL_BIAS (TR CARM) path is first engaged, which would saturate SRM M1. We looked at the old version of the guardian and found that this used to be ramped on over 2 seconds, but has been ramped over 0.1 seconds recently. This transition has been successful both of the times we tried it since changing the ramp time; in one try there was no transient, and in the other there was a transient but we stayed locked."

Looking for TR_REFL9, I found Craig's alog 84679 from the aforementioned post-vent recovery. It was the same issue with ramp time, specifically referencing the few lines of ISC_LOCK that I posted above. He moved the ramp time to 2 s, which I confirmed is still in place. He has a picture of the lockloss (attached below) and it looks very similar to the ones we have been having. I trended the same channels he did and found that after this 2 s ramp, PR_GAIN would become unstable after 10 s (attached below). I also found Elenna's alog 84569 (where I was also on shift) dealing with the same issue, but from before Craig had increased the ramp time. The deja vu was real. But I'm unsure this is the same issue, because the ramp time is indeed already 2 s.
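
For context, a hedged sketch of what that ramp-time change amounts to when pasted into the guardian -i shell (where ezca is available); _TRAMP is the standard filter-module ramp-time field, but its use here for this offset is an assumption rather than a line from ISC_LOCK.py:

# Hedged sketch: ramp the TR_REFLAIR9 offset on over 2 s instead of stepping it
# instantly. The _TRAMP field is the standard filter-module ramp time; its use
# here is an assumption, not a quote from ISC_LOCK.py.
myoffset = 0.03  # placeholder for the averaged LSC-TR_REFLAIR9_INMON value
ezca['LSC-TR_REFLAIR9_TRAMP'] = 2.0                        # 2 s ramp, as in Craig's fix
ezca['LSC-TR_REFLAIR9_OFFSET'] = round(-1 * myoffset, 3)   # as in the quoted guardian block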

Looking for TR_CARM, I found a recent alog from Elenna discussing CARM, REFL, and DHARD WFS (alog 86469); it's unknown whether it is relevant. I will read it more closely.

While I'm being led to just increase the ramp time further, I truly doubt this will change much, since 2 s is already quite a long ramp time and doesn't really explain much. Given all that, I'm going to try it before looking further, since we've gotten back to this point in the time it took to write up and investigate this.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 19:08, Saturday 13 September 2025 (86911)

As expected, that did not fix it; rather, it seemed as though the longer ramp time caused a weird offset in REFLAIR9, which didn't happen last time. I checked the log and confirmed that the ramp time was 4 s instead of 2 s, but I'm struggling to see it in the data (attached).

There seems to be a 25 Hz signal on POP_A as soon as I increased the ramp time. Also, there wasn't really an instability upon turning on CARM_TO_REFL: it was stable for 9 s until the lockloss, showing a kick 3 s beforehand rather than a gradual degradation. The lockloss also wasn't as violent.

With the tiniest font, maybe an even longer ramp time would work? Very unconvincing.

I don't have many ideas right now, but I'm reading the alog and the wiki, and then devising a plan based on an understanding of how this part of lock acquisition works.

ibrahim.abouelfettouh@LIGO.ORG - 20:27, Saturday 13 September 2025 (86912)

I found an alog from Craig about REFLAIR9 (alog 43784). I'm now somewhat more convinced that this involves the REFLAIR9_OFFSET and POP_LF.

The relevant line from that alog: "we have made it through this transition every time now, but the PR_GAIN number on the striptool still spikes, meaning the POP_LF power changes rapidly through this transition, which is probably bad." That sounds relevant.

I'm still confused about why turning on the REFLAIR9_OFFSET causes an instability in PR_GAIN, which causes an instability in SRM, which causes a lockloss.

I'm sufficiently confused at this stage so will attempt to go through the usual lockloss again to see if it's the same. Then I'll try stepping through again as a sanity check.

So far, I've only changed ramp times, immediately changing them back.

One thing that was confusing me is what "REFL_BIAS" means: the REFL_BIAS gain is 86, but the REFL_BIAS value in ISC_library is 9. Never mind; I think one refers to the location of the element within the matrix (its index), whereas the other is the value associated with that element.
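
To make that distinction concrete, a toy sketch; the index numbers are the ones quoted above from ISC_library.py, but the dictionary is purely illustrative and is not how ISC_library actually stores the matrix:

# Toy illustration only: names like REFL_BIAS = 9 are positions (element
# numbers) in the input matrix, while numbers like 2.0 or the gain of 86 are
# values associated with those elements. Indices quoted from ISC_library.py;
# the dict below is not how ISC_library really represents the matrix.
NAMES = {'REFL_BIAS': 9, 'TR_CARM': 26, 'TR_REFL9': 27}  # matrix positions

intrix = {}  # toy stand-in for the real input matrix
intrix[(NAMES['REFL_BIAS'], NAMES['TR_REFL9'])] = 2.0    # the value stored at position (9, 27)

print(intrix[(9, 27)])  # -> 2.0: the value lives at (9, 27); it is neither 9 nor 27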

LHO General
ryan.short@LIGO.ORG - posted 14:36, Saturday 13 September 2025 (86906)
Ops Day Shift Summary

TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The main efforts of today have been trying to get past DHARD_WFS. DRMI is now less troublesome since Sheila and Elenna rephased REFLAIR and some dark offsets were updated, and we are able to make it up to DARM_TO_RF consistently without intervention other than an occasional SRM alignment touchup. See the rephasing alog comments for details on locking efforts this afternoon.
LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
18:39 | CDS | Richard | CER | N | Checking RF | 18:46
H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 12:45, Saturday 13 September 2025 (86902)
SRC Dark Offsets Updated

E. Capote, R. Short

While locking DRMI several times this morning, I noticed that the SRC loops are what have been pulling the alignment away once DRMI ASC engages. I have put INP1, PRC1, and PRC2 back into Guardian to be turned on during DRMI ASC, and so far they have been working well. Elenna looked into this and noticed that the dark offsets for sensors used for SRC needed to be updated, so she did so; the accepted SDFs are attached (all changes in the screenshot were accepted except for the last two in the list, related to RF36).

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 11:44, Saturday 13 September 2025 - last comment - 16:00, Saturday 13 September 2025(86903)
rephasing REFLAIR (PRMI and DRMI locking diode), further locking progress

Elenna, Ryan S, Sheila

This morning Ryan and Elenna were having difficulty getting DRMI to lock, so we locked PRMI and checked the phase of REFLAIR with the same template/PRM excitation described in 84630.

The 45 MHz phase changed by 7 degrees and the 9 MHz phase changed by 5 degrees. The attached screenshot shows that the rephasing of RFL45 did have an impact on the OLG and improved the phase margin. In that measurement we also added 25% to the MICH gain.

We accepted the phases in SDF so they were in effect at our next (quick) DRMI lock, but not the gain change.

When we next locked DRMI, we measured the MICH OLG again, and here it seems that the gain is a bit high. We haven't changed the gain in the guardian since this seemed to work well, but the third attached screenshot shows the loop gain.

After this we went to DARM_TO_RF and manually increased the TR_CARM offset to -7. The REFL power dipped to 97% of its unlocked value while the arm cavity transmission was 24 times the single-arm level, so following the plot from 62110 we'd estimate our power recycling gain was between 20 and 30.

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 12:42, Saturday 13 September 2025 (86904)

We made two attempts to close the DHARD WFS. In the first, we stepped the TR_CARM offset to -7 and tried the guardian steps. Looking at the lockloss, it looked like the sign of both DHARD loops was wrong and was pulling us to a bad alignment.

In the second attempt, we stayed at the TR_CARM offset of -3 (from DARM_TO_RF) and tried to engage them manually. The yaw loop was fine and we were able to use the normal full gain. The pitch loop did seem to have the wrong sign, so we tried flipping it. The guardian would step this gain from -1 to -10 at the end of the DHARD_WFS state; we stepped it from 1 to 3, which seemed to be driving the error signal to zero, but we lost lock partway through this.

ryan.short@LIGO.ORG - 14:35, Saturday 13 September 2025 (86905)

I have made three more attempts through DHARD_WFS, all unsuccessful. Each time, once in DARM_TO_RF, I manually engaged DHARD_Y's initial gain, watched the error signal converge, then increased to the final gain without issue. I then engaged DHARD_P's initial gain with the opposite sign and watched the error signal slowly converge over many minutes. In the first two attempts, whenever I increased the DHARD_P gain, the control signal would soon start moving away from zero (away from the direction it came from) and there would be a lockloss.

On the third attempt I did the same as before, but this time stepped the CARM offset from -3 to -5 before engaging DHARD_P; ultimately this attempt didn't work either. I noticed that once the DHARD_P error signal crosses zero, DHARD_Y starts running away. If I'm fast enough, I can turn the gains to zero before a lockloss happens, then bring the buildups back up by adjusting PRM. This juggling act of turning the DHARD loops on and off while adjusting PRM went on for a while, but inevitably ended in a lockloss.
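
A hedged sketch of the engage-watch-zero pattern described above; the ASC channel names are assumptions, the gain values are illustrative (echoing the numbers in these comments), and the "watch" step is left to the operator:

# Hedged sketch of manually engaging the DHARD loops and zeroing the gains if a
# runaway starts. Channel names are assumptions; gain values are illustrative.
import time
from ezca import Ezca

ezca = Ezca()  # assumes the IFO prefix comes from the environment

def engage_and_watch(gain_chan, initial, final, settle=120):
    """Engage a loop at an initial gain, wait while the error signal converges,
    then step up to the final gain."""
    ezca[gain_chan] = initial
    time.sleep(settle)  # operator watches the error signal converge here
    ezca[gain_chan] = final

def panic_off(*gain_chans):
    """Zero the loop gains quickly if the control signals start running away."""
    for chan in gain_chans:
        ezca[chan] = 0

engage_and_watch('ASC-DHARD_Y_GAIN', -1, -10)  # yaw with its normal sign
engage_and_watch('ASC-DHARD_P_GAIN', 1, 3)     # pitch with the flipped sign, stepped 1 -> 3
# panic_off('ASC-DHARD_P_GAIN', 'ASC-DHARD_Y_GAIN')  # if a runaway starts, zero both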

sheila.dwyer@LIGO.ORG - 16:00, Saturday 13 September 2025 (86907)

Elenna, Ibrahim, Ryan, Sheila

We had one more attempt at locking. We were able to close the DHARD Y WFS with the guardian, and we stepped DHARD P as we stepped up the CARM offset. Elenna fixed up DRMI along the way as well.

We engaged the DHARD P loop with the usual sign and gain when the TR_CARM offset was -35.  Then we let the guardian continue, and lost lock in the CARM_TO_ANALOG state. 

Ibrahim has a few plans of what to try next.
