H1 PSL
oli.patane@LIGO.ORG - posted 09:29, Thursday 13 November 2025 - last comment - 11:07, Thursday 13 November 2025(88087)
IMC_LOCK stuck in FAULT due to FSS oscillation

During PRC Align, the IMC unlocked and couldn't relock because the FSS was oscillating badly - PZT MON was swinging all over the place - and I couldn't even take the IMC to OFFLINE or DOWN because the PSL ready check was failing. To clear the oscillation, I turned off the autolock for the Loop Automation on the FSS screen, re-enabled it after a few seconds, and then we were able to go to DOWN fine and I relocked the IMC.

TJ said this has happened to him and to a couple other operators recently.


Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:07, Thursday 13 November 2025 (88090)OpsInfo

Took a look at this; see attached trends. What happened here is that the FSS autolocker got stuck between States 2 and 3 due to the oscillation. The autolocker is programmed such that, if it detects an oscillation, it jumps immediately back to State 2 to lower the common gain and then ramps it back up to hopefully clear the oscillation. It does this via a scalar multiplier on the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB back to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front-end model. The problem is that 0 dB is generally not low enough to clear the oscillation, so the autolocker gets stuck in this State 2/State 3 loop and has a very hard time getting out of it. This is seen in the lower-left plot of H1:PSL-FSS_AUTOLOCK_STATE: it never reaches State 4 but continuously bounces between States 2 and 3, and the autolocker never lowers the common gain slider, as seen in the center-left plot. If this happens, turning the autolocker off and then on again is most definitely the correct course of action.
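
As a rough illustration of why the ramp gets stuck, here is a minimal Python sketch of the behavior described above. The real logic lives in the front-end C code; the ramp step and the oscillation check here are illustrative assumptions.

    # Minimal sketch, not the actual front-end code. The ramp step and the
    # oscillation check are assumptions for illustration.
    def ramp_common_gain(slider_db=15.0, step=0.05, oscillating=lambda: False):
        """Ramp a 0-to-1 multiplier on the common gain slider value."""
        multiplier = 0.0                   # State 2: effective gain knocked down to 0 dB
        while multiplier < 1.0:            # State 3: ramping the multiplier back up
            if oscillating():
                multiplier = 0.0           # oscillation seen -> jump back to State 2
                continue
            multiplier = min(1.0, multiplier + step)
        return multiplier * slider_db      # State 4: full gain (15 dB here) restored

    # Because the ramp bottoms out at 0 dB (multiplier = 0) rather than the
    # slider minimum of -10 dB, an oscillation that persists at 0 dB keeps the
    # loop bouncing between States 2 and 3 indefinitely.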

We also have an FSS guardian node that raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations. So why not use it during lock acquisition? Because when an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker fight each other. This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.

Talking with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on toggle if/when the autolocker gets stuck in this State 2/State 3 loop. On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing it. For OPS: in the interim, if this happens again, please do not wait for the oscillation to clear on its own. If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom. If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
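
As a starting point for that discussion, here is a minimal sketch of what such a guardian check might look like. It is only a sketch: the enable channel name (H1:PSL-FSS_AUTOLOCK_ON) and the thresholds are assumptions; only H1:PSL-FSS_AUTOLOCK_STATE appears in the trends above.

    # Sketch only, not a working guardian state. The enable channel name and
    # the thresholds are assumptions.
    import time

    def kick_stuck_autolocker(ezca, bounce_limit=10, poll=0.5, settle=2.0):
        """Toggle the FSS autolocker off/on if it bounces between States 2 and 3."""
        bounces = 0
        last = ezca['PSL-FSS_AUTOLOCK_STATE']
        while bounces < bounce_limit:
            state = ezca['PSL-FSS_AUTOLOCK_STATE']
            if state == 4:
                return False                # reached full gain; nothing to do
            if {state, last} == {2, 3}:
                bounces += 1                # another State 2 <-> 3 transition
            last = state
            time.sleep(poll)
        ezca['PSL-FSS_AUTOLOCK_ON'] = 0     # the same fix an operator would apply:
        time.sleep(settle)                  # off, wait a little bit...
        ezca['PSL-FSS_AUTOLOCK_ON'] = 1     # ...then back on
        return True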

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 08:10, Thursday 13 November 2025 (88085)
EY vacuum glitch at beam tube ion pump 03:00:16 Thu 13 Nov 2025 PST

VACSTAT detected a vacuum glitch at 03:00:24 this morning, originating at PT427 (the EY beam tube ion pump station, about 1,000 feet from EY). The pressure rapidly increased from 1.4e-09 to 1.4e-07 Torr, then rapidly pumped back down to nominal in 8 seconds. The glitch was detected soon afterwards by all of the gauges at EY; they only increased from around 1.0e-09 to 2.0e-09 Torr and took around 25 minutes to pump back down.

The glitch was seen at MY at a much reduced amplitude about 6 minutes after the event.

H1 was not locked at the time.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:37, Thursday 13 November 2025 - last comment - 08:22, Thursday 13 November 2025(88084)
Ops Day Shift Start

TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 3mph Gusts, 0mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 1.22 μm/s 
QUICK SUMMARY:

Currently in DOWN due to excessively high secondary microseism. We're supposed to have calibration measurements and commissioning today. I'll try for a bit to get us back up, but I doubt we'll get past DRMI since conditions are worse now than they were last night for Ryan or TJ.

Comments related to this report
david.barker@LIGO.ORG - 08:22, Thursday 13 November 2025 (88086)

I restarted VACSTAT at 08:18 to clear its alarm. Tyler resolved the RO alarm at 06:17 and now the CDS ALARM is GREEN again.

H1 General
thomas.shaffer@LIGO.ORG - posted 02:54, Thursday 13 November 2025 (88083)
Ops Owl Update

The useism continues to grow. I'll keep the IFO in DOWN and check again in a few hours to see where things stand.

LHO General
ryan.short@LIGO.ORG - posted 22:01, Wednesday 12 November 2025 (88082)
Ops Eve Shift Summary

TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: TJ
SHIFT SUMMARY: H1 was happily observing until mid-shift, when the microseism got too high and caused a lockloss. We haven't been able to make it past DRMI since then due to the ground motion. I'm leaving H1 in 'DOWN' since it's not having any success, but TJ says he'll check on things overnight.
LOG:

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 19:19, Wednesday 12 November 2025 (88081)
Lockloss @ 02:44 UTC

Lockloss @ 02:44 UTC - link to lockloss tool

Possibly caused by ground motion, as everything looked to be moving quite a lot at the time and the secondary microseism band has been rising this evening. Environment plots on the lockloss tool also show a quick increase in motion at the time of the lockloss.

H1 General
oli.patane@LIGO.ORG - posted 16:36, Wednesday 12 November 2025 - last comment - 17:08, Wednesday 12 November 2025(88079)
Ops Day Shift End

TITLE: 11/13 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Observing at 140 Mpc and have been Locked for over 16.5 hours. One short drop out of Observing due to the squeezer losing lock, but it relocked automatically and we've been Observing ever since. GRB-Short E617667 came in today at 21:29 UTC.
LOG:

15:30UTC Observing and Locked for over 7.5 hours
    15:52 Out of Observing due to SQZ unlock
    15:56 Back into Observing
    21:29 GRB-Short E617667

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
16:00 | FAC | Randy | YTube | n | Caulking up those joints | 22:57
17:04 | FAC | Kim | MX | n | Tech clean | 18:34
18:21 |  | Sheila, Kar Meng | Optics Lab | y(local) | OPO prep (Sheila out 18:57) | 19:10
18:51 | VAC | Gerardo | MX | n | Looking for case | 20:07
20:09 |  | Corey | Optics Lab | n | Cleaning optics | 23:33
21:26 |  | Kar Meng | Optics Lab | y(local) | OPO prep | 23:28
22:01 |  | RyanS | Optics Lab | n | Cleaning optics | 23:33
22:10 |  | TJ | Optics Lab | n | Spying on Corey and RyanS | 22:22
22:59 |  | Matt | Optics Lab | n | Grabbing wipes | 23:16
23:34 |  | Matt | Prep Lab | n | Putting wipes away | 23:35
Comments related to this report
david.barker@LIGO.ORG - 17:08, Wednesday 12 November 2025 (88080)

Tyler has the reverse osmosis water conditioning system offline overnight. The CDS alarm system has an active cell-phone bypass for this channel, which expires tomorrow afternoon. This should be the only channel in CDS ALARM.

Bypass will expire:
Thu Nov 13 05:05:22 PM PST 2025
For channel(s):
    H0:FMC-CS_WS_RO_ALARM

LHO General
ryan.short@LIGO.ORG - posted 16:02, Wednesday 12 November 2025 (88078)
Ops Eve Shift Start

TITLE: 11/13 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.56 μm/s 
QUICK SUMMARY: H1 has been locked for 16 hours and observing the whole day.

LHO VE
david.barker@LIGO.ORG - posted 11:52, Wednesday 12 November 2025 (88077)
Wed CP1 Fill

Wed Nov 12 10:08:17 2025 INFO: Fill completed in 8min 13secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 AOS
oli.patane@LIGO.ORG - posted 09:18, Wednesday 12 November 2025 (88075)
ISI CPS Noise Spectra Check Weekly FAMIS

Closes FAMIS#27534, last checked in alog 87787

I was supposed to do this last week but was out, so I'm doing it now. It was last done on 10/28, so this compares against measurements from two weeks ago.

Nothing of note, everything looks very similar to how it looked a couple weeks ago.

Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 07:38, Wednesday 12 November 2025 - last comment - 07:45, Wednesday 12 November 2025(88073)
Ops Day Shift Start

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.37 μm/s 
QUICK SUMMARY:

Observing at 148 Mpc and have been Locked for over 7.5 hours. Currently in a stand-down due to Superevent S251112cm, which came in at 15:19 UTC on 11/12.

Comments related to this report
oli.patane@LIGO.ORG - 07:45, Wednesday 12 November 2025 (88074)

Looks like we had a few events come in last night besides the superevent:

11/12 06:29 UTC GRB-Short E617425

11/12 13:35 UTC GRB-Short E617519

LHO General
ryan.short@LIGO.ORG - posted 22:03, Tuesday 11 November 2025 (88072)
Ops Eve Shift Summary

TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: A completely uneventful shift with H1 observing throughout. Now been locked for almost 7 hours.

H1 CDS
jonathan.hanks@LIGO.ORG - posted 17:21, Tuesday 11 November 2025 (88071)
WP 12876 Update the run number server to help test the new frame writer

As per WP 12876, I updated the run number server with a new version to allow more testing of the new frame writer (h1daqfw2) without allowing it to modify external state in CDS.

The run number server tracks channel-list configuration changes in the frames. The basic idea is that the frame writers create a checksum/hash of the channel list, send it to the run number server, and get back a run number to include in the frame. This is then used by the NDS1 server to optimize multi-frame queries: if it knows the configuration hasn't changed, some of the structures can be re-used between frames, which has been measured at about a 30% speed-up.

This update added a new port/interface that the server listens on. It behaves a little differently: it will only return the current run number (or 0 if the hash doesn't match) and will not increment the global state, so it is safe for a test system to use.
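
As a sketch of the idea only: the frame writer is not written in Python, and the wire format below (a hex digest in, a decimal run number back) is an assumption rather than the real protocol.

    # Illustrative client-side sketch; the hash choice and wire format are
    # assumptions, not the real frame writer protocol.
    import hashlib
    import socket

    def get_run_number(channel_list: bytes, host: str, port: int) -> int:
        """Hash the channel list and ask the run number server for a run number.

        On the new read-only port, the server returns the current run number if
        the hash matches (or 0 if it doesn't) without incrementing any global
        state, so a test frame writer can call this safely.
        """
        digest = hashlib.sha256(channel_list).hexdigest()
        with socket.create_connection((host, port)) as sock:
            sock.sendall(digest.encode() + b"\n")
            reply = sock.recv(64)
        return int(reply.strip() or 0)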

Now the new frame writer can automatically query the run number server to get the correct value to put in the frame (previously we had been setting it via an EPICS variable). One step closer to the new frame writer being in production.


H1 SUS
ryan.short@LIGO.ORG - posted 16:39, Tuesday 11 November 2025 (88070)
In-Lock SUS Charge Measurement - Weekly

FAMIS 28431, last checked in alog87986

New data points for all quads this week, measurement trends attached.

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:29, Tuesday 11 November 2025 (88069)
Ops Day Shift End

TITLE: 11/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently Observing at 150 Mpc and have been Locked for just over an hour. Maintenance day was pretty light, but on the way back up there was some testing done using the ETMY ESD (88065), so it took us a bit longer to get back to Observing. Besides that, getting back up wasn't bad at all.
LOG:

15:30UTC Locked and Out of Observing due to magnetic injections
15:40 Back to Observing
15:45 Out of Observing due to SUS charge measurements
16:01 Back into Observing after SUS charge measurements finished
16:01 Out of Observing to change the OM2 setpoint and unlock the IFO
16:03 I unlocked the IFO for maintenance

19:58 Started relocking
    - initial alignment
    - lockloss from LOWNOISE_COIL_DRIVERS due to commissioning
23:17 NOMINAL_LOW_NOISE
    23:28 Observing                                   

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
15:46 | FAC | Randy | YARM | n | Caulking beam tubes | 23:00
16:05 | PEM | Sam, Alicia | HAM1 | n | Installing accelerometers | 17:00
16:06 | FAC | Nellie, Kim | LVEA | n | Tech clean | 16:59
16:12 | ISC | Matt | CR | n | IFO at 60W !!!!!!!!!!!!!!!!!!! | 16:59
16:12 | VAC | Jordan | EY | n | Purge air line build | 16:15
16:16 | VAC | Gerardo, Jordan | LVEA | n | Checking CP1 and for HAM1 gate valve | 16:39
16:19 | PEM | Rene | LVEA | n | Helping with HAM1 accelerometer install | 17:00
16:30 | FAC | Tyler | MX, EX, EY | n | Roof inspections | 17:44
16:34 | VAC | Travis | EY | n | Purge air line build | 19:46
16:35 | FAC | Mitchell | LVEA | n | Putting away and grabbing parts | 16:59
16:39 | VAC | Jordan, Gerardo | EY | n | Purge air line work | 19:24
17:00 | FAC | Nellie, Kim | EY | n | Tech clean | 18:31
17:13 | EE | Fil | EX, EY | n | Prepping for 28-bit DAC | 19:06
17:14 | PEM | Sam, Rene, Alicia | LVEA | n | Moving accelerometers | 17:44
17:16 |  | Richard | LVEA | n | Walking around | 17:36
17:28 |  | Betsy | CER | n | Inventory/looking for stuff | 18:28
17:45 | 3IFO | Tyler | LVEA, MX, MY | n | 3IFO checks | 18:29
18:29 | FAC | Tyler | YARM backside | n | Inspect well water piping | 19:07
18:31 | FAC | Nellie, Kim | EX | n | Tech clean | 19:59
19:06 | EE | Fil | CER | n | Checks | 20:02
19:41 |  | Mitchell | LVEA | n | Looking at parts | 19:46
19:58 |  | RyanC | LVEA | n | Sweep | 20:10
20:50 |  | Kar Meng | Optics Lab | n | Looking at components | 21:00
21:07 |  | Corey | Optics Lab | n | Cleaning optics | 00:13
21:39 |  | Rahul | Optics Lab | n | Wrapping up optics | 21:59
22:18 |  | RyanS | Optics Lab | n | Cleaning optics | 00:09
22:20 | SUS | Kar Meng | Optics Lab | y(local) | OPO prep work | 23:46
23:58 |  | Betsy | Optics Lab | n | Looking for parts | 00:14
00:08 |  | Kar Meng | Optics Lab | y(local) | OPO prep work | ongoing
LHO General
ryan.short@LIGO.ORG - posted 16:15, Tuesday 11 November 2025 (88068)
Ops Eve Shift Start

TITLE: 11/12 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.26 μm/s 
QUICK SUMMARY: H1 has been locked for about an hour following Tuesday maintenance day.

H1 GRD (CDS)
thomas.shaffer@LIGO.ORG - posted 14:53, Tuesday 14 January 2025 - last comment - 10:17, Wednesday 12 November 2025(82273)
h1guardian1 machine reboot and point back at nds0

WP12274

FAMIS28946

We rebooted the h1guardian1 machine today for three reasons:

  1. Point the machine back at nds0 as the primary nds server
    • I noticed the other day that guardian was still defining the NDS server order with nds1 as primary and nds0 as secondary. I'm not entirely sure when this was changed, but it was maybe 2 years ago (alog66834).
    • This was done by changing the NDSSERVER definition in the /etc/guardian/local-env file (see the sketch after this list).
  2. Relieve any stale processes that might latch the GPS leap second data.
  3. Quarterly machine reboot FAMIS task
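
For reference, the changed line in /etc/guardian/local-env looks something like the following. This is a hypothetical example: the hostnames and port are assumptions, the convention being a comma-separated, priority-ordered list of host:port pairs with the primary server first.

    # Hypothetical NDSSERVER definition in /etc/guardian/local-env; exact
    # hostnames and port are assumptions. The first entry is the primary.
    NDSSERVER=h1nds0:8088,h1nds1:8088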

All 168 nodes came back up and Erik confirmed that nds0 was seeing the traffic after the machine reboot.

Comments related to this report
erik.vonreis@LIGO.ORG - 10:17, Wednesday 12 November 2025 (88076)

Server order is set in the guardian::lho_guardian profile in Puppet.
