H1 AOS
sheila.dwyer@LIGO.ORG - posted 10:51, Thursday 16 January 2025 - last comment - 14:00, Tuesday 28 January 2025(82312)
pause added to guardian state 557

Camilla, Sheila, Erik

Erik points out that we've lost lock 54 times since November in guardian states 557 and 558 (the transition from ETMX / low noise ESD ETMX states).

We thought that part of the problem with this state was a glitch caused when the boost filter in DARM1 FM1 is turned off. That motivated Erik's change to the filter ramping on Tuesday (82263), which was later reverted after two locklosses that happened 12 seconds after the filter ramp (82284).

Today we added 5 seconds to the pause after the filter is ramped off (previously the filter ramp time and the pause were both 10 seconds long; now the filter ramp time is still 10 seconds but the pause is 15 seconds).  We hope this will let us better tell whether the problem is the filter ramp itself or something that happens immediately afterwards.
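
For reference, the change amounts to the pattern sketched below. This is only a guardian-style illustration, not the actual ISC_LOCK code; the state name and exact calls are hypothetical, and ezca is assumed to be provided by the guardian environment.

from guardian import GuardState

POST_RAMP_PAUSE = 15   # seconds, previously 10; the FM1 ramp time is still 10 s

class TURN_OFF_DARM1_BOOST(GuardState):   # hypothetical state name
    def main(self):
        # switch the DARM1 boost off; the filter bank's ramp runs for 10 s
        ezca.switch('LSC-DARM1', 'FM1', 'OFF')
        self.timer['post_ramp_pause'] = POST_RAMP_PAUSE

    def run(self):
        # hold here until the 15 s pause has elapsed, then let the state complete
        return self.timer['post_ramp_pause']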

Comments related to this report
camilla.compton@LIGO.ORG - 16:52, Monday 27 January 2025 (82493)

In the last two lock acquisitions, we've had the fast DARM and ETMY L2 glitch 10 seconds after DARM1 FM1 was turned off. Plots attached from Jan 26th and zoom, and Jan 27th and zoom.  We expect this means the fast glitch is from FM1 turning off, but we've seen this glitch come and go in the past, e.g. 81638, where we thought we had fixed the glitch by never turning on DARM FM1, but we were actually still turning FM1 on, just later in the lock sequence. 

In the locklosses we saw on Jan 14th (82277) after the OMC change (plot), I don't see the fast glitch, but there is a larger, slower glitch that causes the lockloss. One difference between that date and recently is that the SUS counts are double the size. We always have the large slow glitch; when the ground is moving more, do we struggle to survive it? Did the 82263 h1omc change fix the fast glitch from FM1 turning off (which seems to come and go), and were we just unlucky with the slower glitch and high ground motion the day of the change?

You can see from the attached microseism plot that the microseism was much worse around Jan 14th than it is now.

Images attached to this comment
erik.vonreis@LIGO.ORG - 14:00, Tuesday 28 January 2025 (82508)

Around 2025-01-21 22:29:23 UTC (GPS 1421533781) there was a lockloss in ISC_LOCK state 557 that happened before FM1 was turned off.

It appears to have happened about 15 seconds after executing the code block where self.counter == 2.  This is about halfway through the 31-second wait period before executing the self.counter == 3 and 4 blocks.

See attached graph.
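
For readers unfamiliar with the idiom, the counter/timer structure being referred to looks roughly like the sketch below (illustrative only, not the real state 557 code; the state name is a stand-in).

from guardian import GuardState

class TRANSITION_FROM_ETMX(GuardState):   # hypothetical stand-in for state 557
    def main(self):
        self.counter = 1
        self.timer['wait'] = 0

    def run(self):
        if not self.timer['wait']:
            return False                  # still inside a wait period
        if self.counter == 1:
            # ... first block of actuation ...
            self.counter += 1
        elif self.counter == 2:
            # ... second block of actuation ...
            self.counter += 1
            self.timer['wait'] = 31       # the lockloss hit about 15 s into this wait
        elif self.counter in (3, 4):
            # ... third and fourth blocks ...
            self.counter += 1
        else:
            return True                   # all blocks done, state complete
        return False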

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:10, Thursday 16 January 2025 (82310)
Thu CP1 Fill

Thu Jan 16 10:06:15 2025 INFO: Fill completed in 6min 12secs

Jordan confirmed a good fill curbside. TCs started high around +30C so trip temp was raised to -30C for today's fill. TCmins [-55C, -54C] OAT (2C, 36F).

Images attached to this report
H1 General (DetChar)
camilla.compton@LIGO.ORG - posted 09:18, Thursday 16 January 2025 - last comment - 09:35, Thursday 16 January 2025(82306)
Lights had been on in LVEA since Tuesday

Robert and I just went into the LVEA for commissioning activities and the lights were already on. We expect they had been left on since Tuesday. 

Comments related to this report
david.barker@LIGO.ORG - 09:35, Thursday 16 January 2025 (82309)

Opened FRS33087 to potentially install a Gneiss environment monitor in the LVEA to read light levels via EPICS.
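
Once such a monitor exists, reading it would presumably look something like the following (pyepics sketch; the channel name and threshold are made up for illustration):

from epics import caget

lux = caget('H0:PEM-LVEA_LIGHT_LEVEL_LUX')   # hypothetical channel name
if lux is not None and lux > 1.0:            # threshold is arbitrary
    print(f'LVEA lights appear to be on ({lux:.1f} lux)')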

H1 CAL
anthony.sanchez@LIGO.ORG - posted 09:13, Thursday 16 January 2025 (82305)
Calibration Sweep

Latest Calibration:
gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime

notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml saved
diag> quit
EXIT KERNEL

2025-01-16 08:36:40,405 bb measurement complete.
2025-01-16 08:36:40,405 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250116T163130Z.xml
2025-01-16 08:36:40,405 all measurements complete.

gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini;gpstime
PST: 2025-01-16 08:40:33.517638 PST
UTC: 2025-01-16 16:40:33.517638 UTC
GPS: 1421080851.517638

2025-01-16 17:03:33,281 | INFO | 0 still running.
2025-01-16 17:03:33,281 | INFO | gathering data for a few more seconds
2025-01-16 17:03:39,283 | INFO | Finished gathering data. Data ends at 1421082236.0
2025-01-16 17:03:39,501 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2025-01-16 17:03:39,501 | INFO | Commencing data processing.
2025-01-16 17:03:39,501 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2025-01-16 17:04:16,833 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,840 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,845 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,850 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250116T164034Z.hdf5
2025-01-16 17:04:16,854 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250116T164034Z.hdf5
ICE default IO error handler doing an exit(), pid = 1289567, errno = 32
PST: 2025-01-16 09:04:16.931501 PST
UTC: 2025-01-16 17:04:16.931501 UTC
GPS: 1421082274.931501

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 07:51, Thursday 16 January 2025 (82304)
Thursday Morning Shift report

TITLE: 01/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 13mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:
IFO Locked for 24 hours!
Planned commissioning time today from 16:30-19:30 UTC, where we will drop from observing for commissioning and calibration activities.


Alarm handler:
PSL Dust 101 & 102

Red but not actively alarming:
Vacuum alert: H0:VAC-LX_Y0_PT110_MOD1_PRESS_TORR
Trending this channel back, it looks like it's been red for days, and this alog mentions that it's not currently running.


H1 CDS
david.barker@LIGO.ORG - posted 22:00, Wednesday 15 January 2025 - last comment - 09:33, Thursday 16 January 2025(82303)
Single Long Range Dolphin IPC receive errors on end station models for channels sent by the h1lsc model

At 17:48:23 Wed 15jan2025 PST, all end station receivers of long-range Dolphin IPC channels originating from h1lsc saw a single IPC receive error.

The models h1susetm[x,y], h1sustms[x,y] and h1susetm[x,y]pi all receive a single channel from h1lsc and recorded a single receive error at the same time. No other end station models receive from h1lsc.

On first investigation there doesn't appear to be anything going on with h1lsc at this time to explain this.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:19, Thursday 16 January 2025 (82307)

FRS33085 is an umbrella ticket covering any IPC errors seen during O4.

Yesterday's IPC receive error was the fourth occurrence during O4; we are averaging roughly one every six months.

david.barker@LIGO.ORG - 09:33, Thursday 16 January 2025 (82308)

I have cleared the end station SUS errors with a DIAG_RESET while H1 was out of observing.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Wednesday 15 January 2025 (82302)
OPS Eve Shift Summary

TITLE: 01/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 14:57 UTC (15 hr lock!)

Ultra-smooth shift - nothing of note.

LOG:

None

H1 General
anthony.sanchez@LIGO.ORG - posted 16:46, Wednesday 15 January 2025 (82301)
Wednesday Day Shift end

TITLE: 01/16 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Great Day of Observing!
Dropped down to Commissioning twice:
Once due to an earthquake causing the SQZr to lose lock, but we stayed in NLN.
And another time due to an Xtreme PI damping event; once again we stayed in NLN.

LOG:

Start Time System Name Location Laser_Haz Task End Time
17:45 PSL Ryan C PSL Chiller N Checking the PSL Chiller water level 18:15
17:46 FAC Kim Optics Lab Yes Technical cleaning 16:46
17:56 PEM Ryan C VAC Prep n Tracing Dust mon issues and wires 18:16
19:05 PCAL Francisco & Kim PCAL Yes Technical Cleaning 19:34
21:13 R&R JC Overpass N Walking over to the Overpass 21:23
22:46 VAC Janos EX n Work in maintenance room 23:09
00:28 PCAL Francisco PCAL Lab Yes PCAL measurements 01:08


LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:06, Wednesday 15 January 2025 (82300)
OPS Eve Shift Start

TITLE: 01/16 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.34 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 14:57 UTC (9 hr lock!)

H1 SUS
anthony.sanchez@LIGO.ORG - posted 15:42, Wednesday 15 January 2025 (82299)
Xtreme PI Damping

At 23:39:06 UTC, Xtreme PI damping took us out of Observing.
We got back into Observing at 23:40:06 UTC.


H1 SQZ
anthony.sanchez@LIGO.ORG - posted 15:18, Wednesday 15 January 2025 (82298)
SQZr unlocked by Earthquake?

TITLE: 01/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.35 μm/s
QUICK SUMMARY:

Dropped from observing at 15:22:22:40 UTC because the SQZ system lost lock. We did not drop out of Nominal_Low_Noise.
Looking slightly more into this, it looks like a 5.7 mag earthquake may have unlocked the SQZ system.

Images attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 12:29, Wednesday 15 January 2025 (82296)
ALL* plotallsus_tfs.m scripts now plot cross-coupling

As an update to 80919, I have finished editing all* plotallsus_tfs.m scripts to now include the plotting of cross-coupling. There is an on/off boolean, along with a matrix, near the top of each script that lets you choose whether or not to plot cross-coupling, and between which DOFs.

This can help when we want to check for cross-coupling before vs after a period of time/vent/etc.

 

* When I say all I mean every sus matlab script whose name is a variation of plotallsus_tfs.m or plotallsus_tfs_M1.m. I did not update the plotallsus_spectra.m or plotallsus_tfs_{some other stage}.m since those are not used as often. All changes verified to work and committed to svn.
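
For anyone curious about the structure without opening the scripts, the selection works conceptually like the snippet below. This is only a Python illustration of the idea; the real scripts are MATLAB, and the matrix entries here are arbitrary.

plot_cross_coupling = True                  # on/off boolean near the top of each script
dofs = ['L', 'T', 'V', 'R', 'P', 'Y']

# 1 = plot the transfer function from drive DOF (row) to response DOF (column)
cross = [
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
]

pairs = [(drive, resp)
         for i, drive in enumerate(dofs)
         for j, resp in enumerate(dofs)
         if cross[i][j] and (plot_cross_coupling or i == j)]
print(pairs)   # pairs whose transfer functions would be plotted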

H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 10:51, Wednesday 15 January 2025 (82294)
Comparisons of GDS-CALIB_STRAIN_CLEAN and OMC-PI_DCPD_64KHZ_AHF_DQ at times with many narrow lines near violin modes

This comparison was suggested to help evaluate whether the narrow line contamination seen around violin mode regions during ring-ups is related to aliasing. The idea (as I follow it) is that because the 64 kHz channel has different aliasing, the noise ought to look different in the two channels if it is aliasing-related. In fact the two channels look similar, albeit not identical, which looks like evidence against the aliasing hypothesis. However, this test may not catch all the places where aliasing could occur, so it may not be conclusive. See the associated detchar-request issue for ongoing discussion.

The first two plots are from 2023, during time periods which were previously identified as having line contamination around the violin modes (71800, 79825). The last plot is from just a few days ago, on Jan 12.
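
For anyone wanting to reproduce this kind of comparison, a gwpy sketch along these lines should work (the GPS span and frequency band below are placeholders, not the times used for the attached plots):

from gwpy.timeseries import TimeSeries

start, end = 1420000000, 1420000600   # placeholder GPS times during a ring-up

strain = TimeSeries.get('H1:GDS-CALIB_STRAIN_CLEAN', start, end)
dcpd64 = TimeSeries.get('H1:OMC-PI_DCPD_64KHZ_AHF_DQ', start, end)

asd_strain = strain.asd(fftlength=8, overlap=4)
asd_dcpd = dcpd64.asd(fftlength=8, overlap=4)

plot = asd_strain.plot(label='GDS-CALIB_STRAIN_CLEAN')
ax = plot.gca()
ax.plot(asd_dcpd, label='OMC-PI_DCPD_64KHZ_AHF_DQ')
ax.set_xlim(480, 530)     # approximate violin-mode band
ax.set_yscale('log')
ax.legend()
plot.save('violin_band_comparison.png')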

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:24, Wednesday 15 January 2025 (82293)
Wed CP1 Fill, manually cancelled due to TCmins too high

Wed Jan 15 10:06:29 2025 ALERT: Fill done (errors) in 6min 25sec

TCs started very high, around +40C, and therefore the TCmins [-28C, -27C] did not drop below the trip temp (-60C). After the TCs had flatlined I manually cancelled the fill.

OAT (-1C, 31F) and foggy.

Images attached to this report
H1 PSL (PSL)
anthony.sanchez@LIGO.ORG - posted 09:35, Wednesday 15 January 2025 - last comment - 09:49, Wednesday 15 January 2025(82290)
PSL_Chiller: Check PSL chiller.
Diag Main now has a PSL_Chiller message telling me to check the PSL chiller.
I ran the PSL Status script to hopefully get some insight.

Laser Status:
    NPRO output power is 1.842W
    AMP1 output power is 70.15W
    AMP2 output power is 137.0W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 28 days, 23 hr 15 minutes
    Reflected power = 25.55W
    Transmitted power = 102.4W
    PowerSum = 127.9W

FSS:
    It has been locked for 0 days 3 hr and 32 min
    TPD[V] = 0.781V

ISS:
    The diffracted power is around 4.2%
    Last saturation event was 0 days 3 hours and 32 minutes ago


Possible Issues:
	PMC reflected power is high
	Check chiller (probably low water)



The PSL probably wants some water and someone to chill with.
The plots below suggest it might be a little lonely. 

 
Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 09:41, Wednesday 15 January 2025 (82291)
I just got the verbal alarms alert to Check the PSL Chiller.
ryan.crouch@LIGO.ORG - 09:49, Wednesday 15 January 2025 (82292)PSL

I topped off the PSL chiller with ~100mL of water.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 08:06, Wednesday 15 January 2025 - last comment - 13:36, Wednesday 15 January 2025(82287)
Wed Morning Shift

TITLE: 01/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.35 μm/s
QUICK SUMMARY:

When I walked in H1 had been locked and Observing for 33 minutes!

Unknown lockloss this morning at 13:17 UTC
H1 Manny relocked without assistance this morning and no one was even woken up.

 

PS: It's a little icy out in the parking lot, so be prepared stepping out of your car.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 13:36, Wednesday 15 January 2025 (82297)SUS

ETMY mode 1's damping wasn't going well this morning and the mode was slowly rising (it did this a few times over the previous weekend as well). I changed the phase from +60 to +30 and flipped the sign of the gain (+0.1 -> -0.1); it has been damping for the past hour and has damped past the point where it was turning around with the previous settings.

H1 CDS
david.barker@LIGO.ORG - posted 17:12, Tuesday 14 January 2025 - last comment - 10:43, Wednesday 15 January 2025(82282)
CDS Maintenance Summary: Tuesday 14th January 2025

WP12272 h1omc0 new RCG, quadratic filter ramping

Erik, Dave:

The h1omc0 models were built against RCG 5.31, which introduces quadratic smoothing to ramped filter switching. All the models running on this frontend have the new RCG (h1iopomc0, h1omc, h1omcpi).

The overview was modified to show that h1omc0 has a different RCG than h1susex by colour-coding the RCG version: dark blue = 5.31 (quadratic filter ramp and variable duotone frequency), light blue = 5.30 (LIGO DAC), and green = 5.14 (standard).
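
As a side note on the quadratic smoothing: the toy numpy sketch below illustrates why a smoothed ramp can glitch less than a linear one (its slope is zero at both ends of the transition). The actual RCG 5.31 ramp shape may differ from this.

import numpy as np

def linear_ramp(t, T):
    x = np.clip(t / T, 0.0, 1.0)
    return 1.0 - x                                        # gain 1 -> 0 over T seconds

def quadratic_ramp(t, T):
    x = np.clip(t / T, 0.0, 1.0)
    s = np.where(x < 0.5, 2 * x**2, 1 - 2 * (1 - x)**2)  # zero slope at x = 0 and x = 1
    return 1.0 - s

T = 10.0                               # 10 s, as used for the DARM1 FM1 ramp
t = np.linspace(0.0, T, 1001)
for name, ramp in [('linear', linear_ramp), ('quadratic', quadratic_ramp)]:
    g = ramp(t, T)
    dg = np.gradient(g, t)
    print(f'{name:9s} slope at start/end: {dg[0]:+.3f} / {dg[-1]:+.3f}')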

WP12274 h1guardian1 reboot

TJ, Erik:

TJ rebooted h1guardian1 to reload all the nodes. The hope is that this will eliminate the leap-second warnings we have been seeing on certain nodes.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:43, Wednesday 15 January 2025 (82295)

Tue14Jan2025
LOC TIME HOSTNAME     MODEL/REBOOT
08:06:21 h1omc0       h1iopomc0   <<< Install RCG5.31
08:06:35 h1omc0       h1omc       
08:06:49 h1omc0       h1omcpi     


17:30:10 h1omc0       h1iopomc0   <<< Revert back to RCG5.30
17:30:24 h1omc0       h1omc       
17:30:38 h1omc0       h1omcpi     
