Reports until 22:30, Sunday 16 June 2024
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 22:30, Sunday 16 June 2024 - last comment - 00:58, Monday 17 June 2024(78478)
Lockloss

Lockloss @ 06/17 05:28 UTC from unknown causes. Definitely not from wind or an earthquake

Comments related to this report
oli.patane@LIGO.ORG - 23:46, Sunday 16 June 2024 (78479)

06:45 Observing

oli.patane@LIGO.ORG - 00:58, Monday 17 June 2024 (78480)

Haven't figured out the cause. This lockloss generally follows the pattern we have seen for other 'DARM wiggle'** locklosses, but there are a couple of extra things I noted and want to have recorded, even if they mean nothing.

Timeline (attachment1, attachment2-zoomed, attachment3-unannotated)
Note: I am taking the lockloss as starting at 1402637320.858, since that is when we see DARM and ETMX L3 MASTER_OUT lose and fail to regain control. The times below are milliseconds before this time.

  • 152 - 147ms before LL (yellow box): EX L3 MASTER_OUT channel has a small but sharp drop. This might not actually be anything significant, but the slope of this little drop is much steeper than how EX L3 usually moves. This is not seen by DARM.
  • 116 - 95ms before LL (blue): LSC_DARM_IN1 and DCPD{A,B} are suddenly a bit noisier.
  • 96 - 68ms before LL (pink): glitch.
  • 68 - 1ms before LL: DARM and DCPDs go back to looking normal (the classic requirement of what makes it a DARM wiggle).
  • 0ms (green): DARM/EX L3 MASTER_OUT/DCPDs lose control -> lockloss.
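As a cross-check of the timeline above, the quoted GPS lockloss time can be converted to UTC with only the standard library. This is an illustrative sketch, assuming the fixed 18 s GPS-UTC leap-second offset (valid since 2017, including June 2024), not any LIGO tool:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_LEAP_OFFSET = 18  # seconds; fixed offset valid since 2017, including June 2024

def gps_to_utc(gps_seconds: float) -> datetime:
    """Convert a GPS timestamp to UTC using the fixed 2024 leap-second offset."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_UTC_LEAP_OFFSET)

lockloss_gps = 1402637320.858
print(gps_to_utc(lockloss_gps).isoformat())  # 2024-06-17T05:28:22.858000+00:00

# The annotated features, expressed as absolute GPS times:
for ms_before in (152, 116, 96, 68, 0):
    print(f"{ms_before:4d} ms before LL -> GPS {lockloss_gps - ms_before / 1000:.3f}")
```

This confirms GPS 1402637320.858 matches the 06/17 05:28 UTC time quoted at the top of this entry.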

It also kind of looks like ASC-CSOFT_P_OUT and ASC-DSOFT_P_OUT get higher in frequency in the ten seconds before the lockloss (attachment4), which is something I had previously noticed happening in the 2024/05/01 13:19 UTC lockloss (attachment5). However, that May 1st lockloss was NOT a DARM wiggle lockloss.

** DARM wiggle - when there is a glitch seen in DARM and ETMX L3 MASTER_OUT, then DARM goes back to looking normal before losing lock within the next couple hundred milliseconds.


Images attached to this comment
LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 21:13, Sunday 16 June 2024 (78477)
HVAC Fan Vibrometers Check FAMIS

Closes FAMIS#26310, last checked 78362

Corner Station Fans (attachment1)
- All fans are looking normal and within range.

Outbuilding Fans (attachment2)
- All fans are looking normal and within range.

Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 19:03, Sunday 16 June 2024 - last comment - 20:39, Sunday 16 June 2024(78475)
Lockloss

Lockloss at 06/17 01:37 UTC - looks like it may have been due to a jump in the wind speed?

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 20:39, Sunday 16 June 2024 (78476)

03:39 UTC Observing

H1 DetChar (DetChar)
sukanta.bose@LIGO.ORG - posted 17:41, Sunday 16 June 2024 (78474)
Data Quality Shift Report 2024-06-03 to 2024-06-09

Link to report.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:31, Sunday 16 June 2024 - last comment - 16:47, Sunday 16 June 2024(78472)
OPS Day Shift Summary

TITLE: 06/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO was in NLN and OBSERVING as of 06:05 UTC (21hr 37 min lock) but is NOW in CORRECTIVE_MAINTENANCE while we briefly restart the ADS Camera.

About 5 minutes before my shift ended, the ADS Pitch1 Inmon and Yaw1 Inmon got stuck. It seems they keep trying to turn on but can't get past TURN_ON_CAMERA_FIXED_OFFSET. This has happened before (alog 77499), and it is likely that the cameras just need to be restarted. We did not lose lock. Oli (incoming Op) has called Dave and they are working on it.

LOG:

None

Comments related to this report
oli.patane@LIGO.ORG - 16:47, Sunday 16 June 2024 (78473)

Dave restarted the camera servo for camera 26 and we are back in Observing as of 23:43 UTC

H1 General
oli.patane@LIGO.ORG - posted 16:20, Sunday 16 June 2024 (78471)
Ops EVE Shift Start

TITLE: 06/16 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 11mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:

Observing and locked for 21.5 hours.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 13:30, Sunday 16 June 2024 (78470)
OPS Day Midshift Update

IFO is in NLN and OBSERVING (Now 18hr 35 min lock!)

Nothing else of note.

LHO VE
david.barker@LIGO.ORG - posted 10:13, Sunday 16 June 2024 (78469)
Sun CP1 Fill

Sun Jun 16 10:10:34 2024 INFO: Fill completed in 10min 30secs

Note: TCs did not reach -200C because of lower outside temps this morning (15C, 59F).
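The fill criterion implied above (thermocouples should dip below -200C during a good fill) can be sketched as a simple check. This is purely illustrative; the function name and threshold handling are hypothetical, not the actual CP1 fill-monitoring code:

```python
def cp1_fill_check(tc_minima_c, threshold_c=-200.0):
    """Flag a CP1 fill whose thermocouples never reached the -200C target.

    tc_minima_c: minimum temperature (deg C) each TC reached during the fill.
    """
    return "OK" if min(tc_minima_c) <= threshold_c else "DID NOT REACH TARGET"

# Warmer outside air (15C this morning) can leave the TCs short of -200C:
print(cp1_fill_check([-185.0, -192.0]))  # DID NOT REACH TARGET
print(cp1_fill_check([-205.0, -198.0]))  # OK
```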

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:39, Sunday 16 June 2024 (78468)
OPS Day Shift Start

TITLE: 06/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

IFO has been in NLN and OBSERVING since 06:06 UTC (12hr 45 min lock).

NUC27 Glitch screen is giving a warning: "Cluster is down, glitchgram is not updated", which I haven't seen before.

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 01:06, Sunday 16 June 2024 (78467)
Ops EVE Shift End

TITLE: 06/16 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: We are Observing at 158 Mpc and have been locked for 6 hours now. The only issues I had today were the OPO ISS maxing out at 54uW, meaning it couldn't catch at 80uW and the setpoint needed adjusting, and then some trouble getting PRMI and DRMI to lock while relocking after the 06/16 00:01 UTC lockloss, which makes sense since the wind was still a bit high at the time. After that the night has been quiet.
LOG:

23:00 Detector relocking and at DARM_TO_RF

23:39 NOMINAL_LOW_NOISE
- SQZ OPO ISS pump having trouble locking. The OPO transmission couldn't go higher than 54.6uW
    - Adjusted the OPO temp, but the current temp was the best, so I put it back. Reloaded the OPO Guardian (so I had changed nothing!). The OPO was able to get up to 72uW after this, but still not to 80uW.
    - Naoki came on teamspeak and lowered the threshold to 70, and it caught very soon after that.
00:01 While dealing with the OPO, we lost lock. We had been locked for 22 minutes

00:29 Lockloss from ACQUIRE_DRMI
00:30 Started an initial alignment
00:51 Initial alignment done, relocking
01:04 Lockloss from ACQUIRE_DRMI
01:52 NOMINAL_LOW_NOISE
01:56 Observing
02:01 Our range was low so I took us out of Observing and ran the sqz tuning guardian states
02:10 Back to Observing, with a 7Mpc increase in range

05:25 Left Observing and started calibration measurements
05:53 Calibration measurements done, running sqz alignment (new optic offsets accepted)

06:05 Back into Observing
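The OPO ISS issue logged at 23:39 above comes down to a threshold comparison: the ISS only catches once the OPO transmission reaches the setpoint. A minimal illustrative sketch (the function name and logic are hypothetical, not the actual SQZ Guardian code):

```python
def opo_iss_catches(opo_trans_uw: float, setpoint_uw: float) -> bool:
    """The ISS pump lock 'catches' only once OPO transmission reaches the setpoint."""
    return opo_trans_uw >= setpoint_uw

# Transmission was maxing out near 72uW, so the nominal 80uW setpoint never caught;
# lowering the threshold to 70uW (as Naoki did) allowed it to catch.
print(opo_iss_catches(72.0, 80.0))  # False
print(opo_iss_catches(72.0, 70.0))  # True
```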

Images attached to this report
H1 CAL
oli.patane@LIGO.ORG - posted 22:57, Saturday 15 June 2024 (78465)
Calibration Measurements June 16, 2024

Calibration was run between 06/16 05:25 and 05:53 UTC

Calibration monitor screenshot

Broadband (2024/06/16 05:25 - 05:30 UTC)

File: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240616T052532Z.xml

Simulines (2024/06/16 05:32 - 05:53 UTC)

Files:

/ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240616T053211Z.hdf5
/ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240616T053211Z.hdf5


Images attached to this report
H1 CAL
oli.patane@LIGO.ORG - posted 22:27, Saturday 15 June 2024 - last comment - 23:05, Saturday 15 June 2024(78464)
Dropped Observing for Calibration

06/16 05:24 UTC I took us out of Observing to run a calibration sweep that we weren't able to run earlier.

Comments related to this report
oli.patane@LIGO.ORG - 23:05, Saturday 15 June 2024 (78466)

06/16 06:05 UTC Back into Observing after running calibration sweep and tuning squeeze

H1 General
oli.patane@LIGO.ORG - posted 19:50, Saturday 15 June 2024 (78463)
Ops Eve Midshift Status

Currently Observing at 152 Mpc and have been Locked for 55 mins. We had a lockloss at 06/16 00:01 UTC, 22 minutes into NOMINAL_LOW_NOISE, and relocking required me to help with PRMI and DRMI, but we were able to get up eventually. The wind is slowly going down and is around 25mph now.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 17:07, Saturday 15 June 2024 - last comment - 19:15, Saturday 15 June 2024(78460)
Lockloss

Lockloss @ 06/16 00:01 UTC (LDAS currently down so no link). We had only been locked for 20 minutes and I was trying to get the SQZ OPO to lock, so we had not been in Observing. There was a small jump up in wind at that time, but I'm not sure whether that would have caused the LL.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 18:58, Saturday 15 June 2024 (78461)

01:56 UTC Observing

oli.patane@LIGO.ORG - 19:15, Saturday 15 June 2024 (78462)

Went back out of Observing for ten minutes (02:01 - 02:10 UTC) to tune sqz because our range was 147Mpc. Now back in Observing at 154 Mpc

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:32, Saturday 15 June 2024 (78459)
OPS Day Shift Summary

TITLE: 06/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is relocking and is currently at LOWNOISE_COIL_DRIVERS.

Wind gusts have been high since the last lockloss. Per usual, I would put H1 in DOWN when the wind went higher than 30mph and wait for opportunistic low-wind periods to lock ALS - here's a timeline since the last LL. The only SDF revert I had to accept was in my midshift alog 78457.
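The 30mph rule described here can be sketched as a simple gate. The threshold comes from this log; the function itself is hypothetical, not actual Guardian or operator tooling:

```python
def hold_in_down(gust_mph: float, threshold_mph: float = 30.0) -> bool:
    """Hold H1 in DOWN while wind gusts exceed the 30mph locking threshold."""
    return gust_mph > threshold_mph

for gust in (45.0, 35.0, 28.0):
    print(f"{gust:.0f}mph gusts -> {'DOWN' if hold_in_down(gust) else 'try locking'}")
```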

All times are UTC

19:31 - NLN/Observing Lockloss

19:38 - ALS Lockloss

19:39 - DOWN due to 40-50mph winds

21:43 - Attempt to re-lock due to wind speeds below 30mph

22:08 - Two failures at MICH_FRINGES and no ability to lock PRMI, let alone DRMI - decided to run an initial alignment

22:25 - Initial alignment done fully auto, attempting to relock - winds picking up, about to surpass 30mph

22:41 - DOWN again after 2 ALS locklosses and 3 IR Locklosses, with winds around 35mph.

22:54 - Saw a small opportunistic sub-30mph calm period (that only lasted 10 mins) and decided to try locking again - it worked!

23:02 - DRMI locked!

23:30 - Shift ended while in LOWNOISE_COIL_DRIVERS

LOG:

None
