H1 SQZ
ryan.short@LIGO.ORG - posted 20:01, Sunday 02 February 2025 (82599)
SHG Temperature Adjusted

At 01:02 UTC, H1 dropped from observing due to the SHG dropping out, and it would not come back on its own. I brought SQZ_MANAGER to 'DOWN' and raised the SHG temperature (H1:SQZ-SHG_TEC_SETTEMP) from 34.6 to 35.2 to raise the OPO ISS control signal and SHG green power back up to around 2.8 and 120mW, respectively. After that, I requested SQZ_MANAGER back to 'FREQ_DEP_SQZ' and everything came back without issue. Once I accepted the new SHG temperature in SDF, H1 returned to observing at 01:13 UTC.
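(For reference, a minimal sketch of what checking the readbacks around this kind of setpoint bump looks like from a workstation, assuming pyepics is available; the OPO ISS control-signal channel name below is my guess, the other channels are the ones quoted here and in alog 82581.)

    from epics import caget, caput
    import time

    SETTEMP   = 'H1:SQZ-SHG_TEC_SETTEMP'
    GREEN_PWR = 'H1:SQZ-SHG_GR_DC_POWERMON'    # SHG green power monitor
    OPO_CTRL  = 'H1:SQZ-OPO_ISS_CONTROLMON'    # assumed name for the OPO ISS control signal

    print('setpoint before:', caget(SETTEMP))
    caput(SETTEMP, 35.2)        # raised from 34.6 in this instance
    time.sleep(30)              # give the TEC and the powers time to settle
    print('SHG green power:', caget(GREEN_PWR))
    print('OPO ISS control:', caget(OPO_CTRL))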

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:32, Sunday 02 February 2025 (82597)
OPS Day Shift Summary

TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 20:31 UTC

Many, many issues today, all SQZ-related. In short, our range is terrible and we know it is SQZ-related.

Most of the story can be found in alog 82588 (and comments) but to summarize:

Otherwise, Dave did a vacstat restart due to a glitch - alog 82595

LOG:

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:04, Sunday 02 February 2025 (82596)
Ops Eve Shift Start

TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 18mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.41 μm/s
QUICK SUMMARY: H1 has been locked for 3.5 hours. Ibrahim is bringing me up to speed on the SQZ issues of the morning.

H1 CDS
david.barker@LIGO.ORG - posted 13:51, Sunday 02 February 2025 (82595)
VACSTAT BSC3 sensor glitch detected, service restarted

VACSTAT detected a single BSC3 sensor glitch at 13:34 this afternoon. Last glitch was 20 days ago. I restarted vacstat_ioc.service on cdsioc0 at 13:46 and disabled HAM6's gauge.

H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 11:10, Sunday 02 February 2025 - last comment - 12:57, Sunday 02 February 2025(82592)
Lockloss 19:02 UTC

Unknown cause lockloss. Sheila and I were troubleshooting a host of SQZ mysteries when this happened (not actually touching anything, just discussing). It's tempting to say it's SQZ-related, since all of our issues this weekend have been, but the cause is as yet unknown.

Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 12:57, Sunday 02 February 2025 (82594)

Back to NLN as of 20:31 UTC

Sheila and I troubleshot more SQZ stuff, ultimately deciding to turn off SQZ ANG ADJUST since it was oscillating the phase, and thus our range, unnecessarily. I edited sqzparams.py to turn it off and then had to edit SQZ ANG guardian to make DOWN the nominal state. Now we're observing but will need to manually change the angle to optimize range, which I just did and will do again once thermalized.
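(For context, the change amounts to something like the sketch below; the flag name in sqzparams.py and the exact guardian module layout are assumptions based on the description above, not the actual diff.)

    # sqzparams.py (sketch): flag consumed by the SQZ guardian code
    use_sqz_ang_adjust = False   # was True; stops the automatic angle adjustment

    # SQZ_ANG.py guardian module (sketch): make DOWN the nominal state so the
    # node idles there instead of servoing the angle
    nominal = 'DOWN'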

LHO VE
david.barker@LIGO.ORG - posted 10:12, Sunday 02 February 2025 (82590)
Sun CP1 Fill

Sun Feb 02 10:09:12 2025 INFO: Fill completed in 9min 9secs

TCmins [-95C, -93C] OAT (2C, 35F) DeltaTempTime 10:09:14

Images attached to this report
H1 SQZ (Lockloss, SQZ)
ibrahim.abouelfettouh@LIGO.ORG - posted 08:41, Sunday 02 February 2025 - last comment - 16:47, Sunday 02 February 2025(82588)
SQZ Morning Troubles (Lockloss 16:17)

SQZ Morning Troubles

After reading Tony's alog, I suspected that this was the SHG power being too low, causing the lock issues. I trended the SHG power (which Sheila had adjusted yesterday) while adjusting the SHG temperature to maximize it. This got SQZ locked almost immediately - screenshot below.

However, despite SQZ locking, it looked like in trying to lock for hours the SQZ_ANG_ASC had gone awry and, as the wiki puts it, "ran away." I requested the SQZ_ANG reset guardian state but this didn't work. Then I saw another SQZ angle reset state, "RESET_SQZ_ANG_FDS", which immediately broke guardian: the node went into error and a bunch of settings in ZM4 and ZM5 got changed. I reloaded the SQZ guardian and successfully got back to FREQ_DEP_SQZ. I saw there were some SDF changes that had occurred, so I undid them, making guardian complain less and less. Then there were two TCS SDF OFFSETs, the ZM4/5 M2 SAMS OFFSETs, which were off by 100V and sitting at their max of 200V. I recalled that when SQZ would drop out, it would notify something along the lines of "is this thing on?". I then assumed that because this slider was at its max, all I'd need to do was revert this change, which happened over 3 hours ago. I did that and then we lost lock immediately.

What I think happened in order:

  1. We get to NLN; SQZ FC can't lock due to low SHG power, which had degraded from the set point of 2.2 that Sheila and I had found down to 1.45.
  2. Guardian maxes out its ability to attempt locking by using the ZMs and SQZ ANG ASC (if those aren't the same thing) and sits there, with the FC not locked and unable to move any further.
  3. Tony gets called, realizes the issue, and can only get to a certain level before LOCK_LO_FDS rails, I assume due to the above change. This means that even if we readjust the power, the ZM sliders would have to be adjusted.
  4. I get to site, change the temp to maximize SHG power, and SQZ locks, but in this weird state where ZM4 and ZM5 have saturated M2 SAMS OFFSETs. I readjust, but the slider change is so violent that it causes a lockloss.

Now I'm relocking, having reverted the M2 SAMS OFFSET changes that guardian made and having maximized the SHG power via temperature (which, interestingly, had to go down, contrary to yesterday). I've also attached a screenshot of the SDF change related to AWC-ZM4_M2_SAMS_OFFSET because I'm about to revert the change on the assumption that it is erroneous.

What I find interesting is that lowering the SHG temp increases the SHG power first linearly, then, after some arbitrary value, exponentially, but only for one big and final jump (kind of like a temperature resonance, if you can excuse my science fiction). You can see this in the SHG DC POWERMON trend, which shows a huge final jump.
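(For anyone wanting to look at this offline, a sketch of pulling the same trend back with gwpy; the time window here is approximate.)

    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:SQZ-SHG_TEC_SETTEMP', 'H1:SQZ-SHG_GR_DC_POWERMON']
    data = TimeSeriesDict.get(chans, '2025-02-02 12:00', '2025-02-02 17:00')
    plot = data.plot()
    plot.savefig('shg_temp_vs_green_power.png')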

Images attached to this report
Comments related to this report
ibrahim.abouelfettouh@LIGO.ORG - 10:05, Sunday 02 February 2025 (82589)

Recovered from Lockloss and got to OBSERVING automatically - 17:58 UTC.

The only SDF change I made was re-adjusting the SHG temperature to maximize its power.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 10:48, Sunday 02 February 2025 (82591)

Investigating further, I realize that once I fixed the SHG temp issue, the flashing notification was "SQZ ASC AS42 not on??", for which I should have followed alog 71083. Instead, SDF led me to the specific issue with the ASC. I will continue to monitor and investigate. Since the range is fluctuating, I will probably go into commissioning to try to optimize squeezing once thermalized.

sheila.dwyer@LIGO.ORG - 11:44, Sunday 02 February 2025 (82593)

Ibrahim summarized this over the phone for me, and here's our summary of 4 things that need a fix or looking into (sometime):

  • The SQZ angle shouldn't be adjusted when there is no AS42 light. The ASC has a check for this (no AS42 light means the light from the squeezer is so misaligned it isn't reaching the AS WFS or the OMC), but SQZ_ANG_ADJUST seems not to check.
  • The SQZ_MANAGER RESET_SQZ_ANG_FDS state has some kind of typo that made the SQZ manager go into error (log attached: main sets self.timer['wait'], but run then refers to self['wait']; this state must never have been run before). I fixed the typo and loaded it but haven't tested it. See the sketch after this list.
  • RESET_SQZ_ASC sets the ZM4 and ZM5 P and Y lock gains to 1, but that is not what they are normally set to. We could either rewrite this to check the gains and reset them, or move these gains into the servo filters (I haven't done either of these things this morning).
  • Why did the PSAMs offsets get set to 200V at 5:49 AM local time? 
    • I looked at this; it seems that SQZ_MANAGER was just sitting in FDS_READY_IFO at the time, the strain gauge servo settings didn't change, and the two offset changes happened at exactly the same time, so SDF seems like the most likely thing to have changed these. I had trouble figuring out which SDF table these are in. The strain gauge does seem to be at the same location it was before all this happened, so the PSAMS should be at the same curvature.
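For the second bullet, the corrected guardian pattern is roughly the following (a sketch with an illustrative timer name and duration, assuming the usual GuardState layout):

    from guardian import GuardState

    class RESET_SQZ_ANG_FDS(GuardState):
        def main(self):
            # ... reset the squeeze-angle servo here ...
            self.timer['wait'] = 5       # arm a timer in main

        def run(self):
            # the typo had self['wait'] here, which raises an error;
            # polling self.timer['wait'] returns True once the timer expires
            return self.timer['wait']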
Non-image files attached to this comment
ryan.short@LIGO.ORG - 16:47, Sunday 02 February 2025 (82598)

In regard to Sheila's final bullet point above, the change to the PSAMS offsets happened as a result of Tony's misclick (alog 82585), where he brought SQZ_MANAGER to 'SET_ZM_SLIDERS_IFO [52]', as reported by the Guardian log at that time:

ryan.short@cdsws25[~]: guardctrl log -a 1422539317 -b 1422539318 SQZ_MANAGER
2025-02-02_13:48:19.274841Z SQZ_MANAGER [DOWN.run] timer['pause'] done
2025-02-02_13:48:19.445441Z SQZ_MANAGER EDGE: DOWN->SET_ZM_SLIDERS_IFO
2025-02-02_13:48:19.445441Z SQZ_MANAGER calculating path: SET_ZM_SLIDERS_IFO->SET_ZM_SLIDERS_IFO
2025-02-02_13:48:19.449362Z SQZ_MANAGER executing state: SET_ZM_SLIDERS_IFO (52)
2025-02-02_13:48:19.450448Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.enter]
2025-02-02_13:48:19.491549Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-FC2_M1_OPTICALIGN_P_OFFSET => 252.9999999999991
2025-02-02_13:48:19.546438Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-ZM4_M1_OPTICALIGN_P_OFFSET => -559.3071
2025-02-02_13:48:19.708738Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:SUS-FC2_M1_OPTICALIGN_Y_OFFSET => 43.900000000000425
2025-02-02_13:48:19.792385Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:AWC-ZM4_M2_SAMS_OFFSET => 200
2025-02-02_13:48:19.837611Z SQZ_MANAGER [SET_ZM_SLIDERS_IFO.main] ezca: H1:AWC-ZM5_M2_SAMS_OFFSET => 200

I also show this interaction in the attached ndscope.
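(Side note: the -a/-b arguments to guardctrl log are GPS seconds; a quick sketch of getting them with gwpy's time converter, assuming the guardian log timestamps are UTC.)

    from gwpy.time import to_gps

    a = int(to_gps('2025-02-02 13:48:19'))   # 1422539317
    print(a, a + 1)                          # values passed to guardctrl log -a/-b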

Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:36, Sunday 02 February 2025 (82587)
OPS Day Shift Start

TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.45 μm/s
QUICK SUMMARY:

IFO is at NLN as of 12:44 UTC

SQZ is failing to lock the FC per Tony's OWL alog. I am investigating now.

H1 OpsInfo (SQZ)
anthony.sanchez@LIGO.ORG - posted 07:06, Sunday 02 February 2025 - last comment - 07:33, Sunday 02 February 2025(82585)
Owl Operator assistance Required Sunday morning

TITLE: 02/02 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.37 μm/s
QUICK SUMMARY:
I got a call to help H1 get relocked. When I logged in to start helping, we were in POWER_10W.
Once we reached NOMINAL_LOW_NOISE ... we lost lock before Observing could be reached.
Once H1 got past DRMI_1F, I tried to get some Z's.

I allowed H1 some time to relock; once it got back to NOMINAL_LOW_NOISE again, the SQZ manager was stuck on the beam diverter step, and hence H1 called me back.

After poking around and trying to recreate my steps from the last time this happened (alog 82192), I started reading the troubleshooting guide for SQZ: https://cdswiki.ligo-wa.caltech.edu/wiki/Troubleshooting%20SQZ

I tried to request FDS_READY_IFO [50] and misclicked [52], which changed the alignment of the FC2 mirrors and ZM4 (not any other ZM mirrors!).

Since then, I have narrowed the problem down to SQZ_FC not locking anymore, likely because FC2 and ZM4 have now been moved.
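(One way to avoid this kind of misclick is to make the request by name from a terminal rather than from the MEDM screen; a sketch assuming the usual GRD-<node>_REQUEST channel convention.)

    from epics import caput, caget

    caput('H1:GRD-SQZ_MANAGER_REQUEST', 'FDS_READY_IFO')
    print(caget('H1:GRD-SQZ_MANAGER_REQUEST', as_string=True))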
 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 07:33, Sunday 02 February 2025 (82586)SQZ

OK, after returning FC2 and ZM4 back to their prior positions, I am now getting the beam diverter error message again, which is where I started when the IFO called me the second time.

LHO General
ryan.short@LIGO.ORG - posted 22:00, Saturday 01 February 2025 (82584)
Ops Eve Shift Summary

TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Largely uneventful shift with one lockloss and a somewhat lengthy relock. Wind hasn't been too bad and microseism is steady. H1 has now been locked for just over 2 hours.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 18:27, Saturday 01 February 2025 - last comment - 20:01, Saturday 01 February 2025(82582)
Lockloss @ 02:03 UTC

Lockloss @ 02:03 UTC - link to lockloss tool

No obvious cause, but there's an ETMX glitch immediately prior. This ends a lock stretch of 16.5 hours.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 20:01, Saturday 01 February 2025 (82583)

H1 back to observing at 03:53 UTC.

I went immediately into an initial alignment to see if the ALS dropout issues were seen, and they were not. Since I believe this is the third time we've seen IA make the dropouts go away, is there something different about IA that fixes them?

Even with an alignment, relocking still took a while as there were ETMX saturations starting at DARM_TO_RF that eventually caused a lockloss during CARM_OFFSET_REDUCTION. Also, DRMI took a while to lock even with decent flashes.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:37, Saturday 01 February 2025 (82581)
OPS Day Shift Summary

TITLE: 02/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 09:29 UTC (15 hr lock!) with two drops from OBSERVING

OBS Drop 1: Planned Saturday Calibration Sweep - alog 82579 - CALIBRATION from 19:32 to 20:15 UTC (~45 mins)

OBS Drop 2: SQZ opo lockloss - PREVENTATIVE MAINTENANCE from 21:51 to 23:31 UTC (~1hr 40 mins). Details below.

Troubleshooting with Camilla and Sheila led to the same conclusion: the SHG power was too low. Sheila determined that it was as high as it could go, but tuning the SHG temperature gave it some extra power, which is ultimately what allowed the SQZ-OPO CONTROLMON signal to get back to a lockable number. This will likely need to be readjusted if the OPO/SQZ have issues locking, since it is slowly going down in counts (though it may have leveled out; check the screenshot). The instructions are: Sitemap -> SQZ -> SQZT0 -> colorful SHG box -> Set Temperature, adjusted such that H1:SQZ-SHG_GR_DC_POWERMON is maximized in counts (picture below of Sheila's adjustments). Accepted SHG temp change SDF attached.
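(A rough sketch of that maximization done from a terminal with pyepics; the scan range, step size, and dwell time are illustrative only, and in practice this is done slowly from the MEDM screen while watching the trend.)

    from epics import caget, caput
    import time

    SETTEMP  = 'H1:SQZ-SHG_TEC_SETTEMP'
    POWERMON = 'H1:SQZ-SHG_GR_DC_POWERMON'

    best_power, best_temp = caget(POWERMON), caget(SETTEMP)
    for temp in (34.8, 34.9, 35.0, 35.1, 35.2):   # illustrative scan range
        caput(SETTEMP, temp)
        time.sleep(60)                            # let the TEC and green power settle
        power = caget(POWERMON)
        if power > best_power:
            best_power, best_temp = power, temp
    caput(SETTEMP, best_temp)                     # leave the SHG at the best setpoint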

Other SQZ Adjustments:

Other: the H2 building had a temperature excursion overnight that prompted Dave to troubleshoot with me, first remotely and then in person. He came on site to look at the issue with the HVAC not keeping the temperature down (the set point was 67F but the temperature went all the way to over 80F). Since power cycling failed, Jonathan and Dave agreed to turn off non-essential computers and electronics until they can investigate the issue more thoroughly during the week. I've attached the H2 Building Environment MEDM, which shows the temperature peaking at 85F before the electronics were turned off and the thermostats power cycled. The blue trace shows the air flow failing to adjust correctly with the temperature, and the red trace shows the temperature.

LOG:

None

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 16:08, Saturday 01 February 2025 (82580)
Ops Eve Shift Start

TITLE: 02/02 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 14mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY: H1 has been locked for 14.5 hours.
