LHO General
thomas.shaffer@LIGO.ORG - posted 15:27, Thursday 20 November 2025 (88188)
Ops Day Shift End

TITLE: 11/20 Day Shift: 1530-2330 UTC (0730-1530 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Locked for 13.5 hours. PEM characterization continues. The useism has been on the rise over the last few hours.
LOG:                              

Start Time | System | Name | Location | Laser_Haz | Task | Time End
15:47 | FAC | Nellie | Opt lab | n | Tech clean | 16:09
15:48 | FAC | Randy | Yarm | n | Y1 beam tube sealing | 23:01
18:45 | FAC | Kim, Nellie | OSB | n | Rolling up roll-up door | 19:05
18:50 | PEM | Robert, Genevieve | LVEA | n | Move the speaker cable | 19:27
19:30 | PEM | Robert, Genevieve | LVEA | n | Readjusting the shaker rod | 19:43
20:05 | IO | Corey | Opt Lab | n | JAC optics cleaning | 22:05
21:05 | PEM | Robert, Genevieve, Carlos | LVEA | n | Moving shaker to BSC3 | 21:29
21:22 | SQZ | Sheila, Kar Meng | Opt Lab | LOCAL | OPO testing | 21:50
21:24 | CDS | Marc | MY | n | Grabbing parts | 23:22
22:51 | SUS | Oli | CER | n | Writing down a number | 22:55
23:05 | PEM | Robert, Genevieve, Sam | LVEA | n | Look at and alter the shaker | 23:13
23:23 | SQZ | Sheila, Kar Meng | Opt Lab | - | Looking for tool pan in area past card reader, then going to optics lab for more OPO work | 01:23
LHO VE
david.barker@LIGO.ORG - posted 10:17, Thursday 20 November 2025 (88187)
Thu CP1 Fill

Thu Nov 20 10:09:15 2025 INFO: Fill completed in 9min 11secs

 

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:30, Thursday 20 November 2025 - last comment - 09:01, Thursday 20 November 2025(88184)
Ops Day Shift Start

TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.30 μm/s 
QUICK SUMMARY: Locked for 6 hours but it looks like the squeezer has an issue. Working on it now. Dust alarm in the diode room, but other than that, no alarms overnight.

Comments related to this report
thomas.shaffer@LIGO.ORG - 07:56, Thursday 20 November 2025 (88185)

Notifications from the SQZ suite of guardians:

    SQZ_MANAGER - request Reset_SQZ_ASC_FDS

    SQZ_OPO_LR - "pump fiber rej power in ham7 high, nominal 35e-3, align fiber pol on sqzt0."

    SQZ_LO_LR - "LOW OMC_RF3 power < -22! Align SQZ-OMC_RF3, or more CLF power if aligned."

This seems like a similar situation to alog 87361.

 

thomas.shaffer@LIGO.ORG - 09:01, Thursday 20 November 2025 (88186)

I ended up doing these things:

  • Cycling SQZ_MANAGER to DOWN and back. Same results.
  • Setting SQZ_OPO_LR to Scan_OPOTEMP.
  • Adjusting the SHG rejected fiber power with the half- and quarter-wave plates, but I wasn't sure that I was actually making anything better. While I was reducing SQZ-SHG_FIBR_REJECTED_DC_POWERMON from 0.75 down to near 0.1 (the guardian notification says it should be around 0.035), H1:SQZ-SHG_REJECTED_DC_POWERMON was still drifting away at the same rate. I ended up reverting these changes (see the trending sketch below).
  • At this point the only SQZ guardian notification was SQZ_LO saying that there might be a misalignment issue, since there was no 3 MHz at the OMC. Sheila walked in and moved ZM4 and ZM6 back to a time when we had a better SQZ lock, and the notification went away.
  • Squeezing still looked bad, but Sheila said the squeeze angle was off and might have to do with her script from last night. She touched that up and we were back.
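For anyone repeating this, a quick way to check whether a wave plate adjustment is actually helping is to trend the two rejected-power channels together. A minimal gwpy sketch (the start/end times here are placeholders):

    # Sketch: trend the SHG rejected-power monitors around an adjustment.
    # Assumes NDS access from a CDS workstation; times are placeholders.
    from gwpy.timeseries import TimeSeriesDict

    channels = [
        'H1:SQZ-SHG_FIBR_REJECTED_DC_POWERMON',
        'H1:SQZ-SHG_REJECTED_DC_POWERMON',
    ]
    data = TimeSeriesDict.get(channels, 'Nov 20 2025 15:30', 'Nov 20 2025 17:00')

    plot = data.plot()
    plot.gca().set_ylabel('Rejected power [arb.]')
    plot.savefig('shg_rejected_power.png')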

 

H1 PEM
ryan.short@LIGO.ORG - posted 21:53, Wednesday 19 November 2025 - last comment - 00:06, Thursday 20 November 2025(88182)
Magnetic Injection Suite Started

Since I saw H1 was locked, I started the full magnetic injection suite (49 injections total over the 7 coils across the site) at 05:40 UTC using the PEM_MAG_INJ Guardian. This dropped H1 out of observing, but it should return to observing once the injections finish. The suite will finish in roughly 2 hours, or earlier if the IFO loses lock. I'll check things later tonight in case this poses any issues.
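For reference, kicking off a suite like this by hand amounts to writing the Guardian node's request channel and watching its state. A minimal ezca sketch (the state names here are placeholders, not the node's actual state list):

    # Sketch: request the PEM_MAG_INJ Guardian node and poll until done.
    # INJECTING/COMPLETE are placeholder state names.
    import time
    from ezca import Ezca

    ezca = Ezca(ifo='H1')
    ezca['GRD-PEM_MAG_INJ_REQUEST'] = 'INJECTING'   # start the suite

    while ezca['GRD-PEM_MAG_INJ_STATE'] != 'COMPLETE':
        time.sleep(10)    # poll every 10 s
    print('Injection suite finished')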

Comments related to this report
ryan.short@LIGO.ORG - 00:06, Thursday 20 November 2025 (88183)

Injection suite finished at 08:00 UTC and H1 returned to observing.

LHO General
ryan.short@LIGO.ORG - posted 20:10, Wednesday 19 November 2025 - last comment - 21:08, Wednesday 19 November 2025(88180)
Ops Eve Shift Summary

TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY: PEM measurements were running smoothly until a lockloss @ 01:28 UTC. Since H1 is having a bit of trouble relocking (locklosses during LASER_NOISE_SUPPRESSION and DRMI) for reasons I'm unsure of, I've set things to run overnight the same as last night, with H1_MANAGER attempting to keep H1 up but not call for help. I've also set Sheila's SQZ angle stepper script to start at 09:00 UTC (01:00 PST) running on cdsws39, with the hope that H1 is locked by then (a minimal scheduling sketch follows).
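The delayed start is nothing fancy; something like this minimal sketch (the script name comes from Sheila's later post, and the timing logic is generic Python):

    # Sketch: sleep until 09:00 UTC, then launch the SQZ angle stepper script.
    import subprocess
    import time
    from datetime import datetime, timedelta, timezone

    target = datetime.now(timezone.utc).replace(hour=9, minute=0,
                                                second=0, microsecond=0)
    if target < datetime.now(timezone.utc):
        target += timedelta(days=1)   # 09:00 UTC tomorrow if already past

    time.sleep((target - datetime.now(timezone.utc)).total_seconds())
    subprocess.run(['python', 'SQZ_ANG_stepper.py'])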
LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
01:13 | CAL | Tony | PCal Lab | N | Changing spheres | 01:21
01:40 | PEM | Robert, Sam, Genevieve | EX -> LVEA | N | Moving shaker | 02:50
02:12 | VAC | Gerardo | LVEA | N | Checking AIP controller | 03:02
Comments related to this report
sheila.dwyer@LIGO.ORG - 21:08, Wednesday 19 November 2025 (88181)

IFO locked and was in NLN, but not observing, because I'd left a pico controller enabled and it was causing an SDF diff.  I disabled it and now we are in observing. 

LHO General
ryan.short@LIGO.ORG - posted 16:40, Wednesday 19 November 2025 (88179)
Ops Eve Shift Start

TITLE: 11/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.33 μm/s 
QUICK SUMMARY: H1 has been locked for about 30 minutes and PEM measurements are ongoing.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:28, Wednesday 19 November 2025 (88178)
Ops Day Shift End

TITLE: 11/20 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Currently at low noise; PEM characterization ran all day and one SQZ measurement was started. There was one lockloss during the shift, caused by trying to turn the corner station sensor correction off while in lock to let Robert and a UW visitor look at the ITMY seismometer. Didn't work. Relocking required an initial alignment and then waiting at DRMI while we still had SC off, but it was all automatic.
LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
16:25 | PEM | Robert, Sam, Genevieve | CR/LVEA | n | PEM measurements, in and out of the LVEA all day | 22:05
17:07 | FAC | Kim, Nellie | OSB | n | Opening receiving roll-up door | 17:22
17:48 | PEM | Robert, Sam | EX | n | Setup measurement | 19:33
19:34 | SEI | Richard, UW | desert | n | Checking on the vault seismometers | 20:25
19:42 | SYS | Betsy | Opt Lab | n | Check on lab | 19:47
20:22 | IO | Corey | Prep lab | n | JAC table enclosure | 22:58
20:57 | SQZ | Sheila | CR | n | SQZ meas. | 22:07
21:40 | VAC | Travis | EY | n | Check in mech room | 22:04
21:56 | SQZ | Sheila, Kar Meng | Opt Lab | n | OPO unboxing | 23:00
22:04 | PEM | Sam, Genevieve, Jennie | LVEA | n | Move speaker | 22:40
22:06 | PEM | Robert, UW Paul | LVEA | n | Look at seismometer | 23:21
22:13 | PEM | Richard | LVEA | n | Join Robert and Paul | 22:39
23:22 | PEM | Robert, Sam, Genevieve | EX | n | Set up shaker | 00:11
H1 DAQ
jonathan.hanks@LIGO.ORG - posted 14:47, Wednesday 19 November 2025 (88177)
WP 12886 continued, work on h1daqdc0

Jonathan and Yuri

This is a continuation of WP 12886. This is the last physical step in the control room/MSR for the reconfiguration. We ran a fiber from the warehouse patch panel to h1daqdc0 and replaced the network card with a newer card. We will be renaming h1daqdc0, likely to h1daqkc0 (daqd kafka connector), and converting it to send the IFO data into LDAS as part of the NGDD project.

Next steps on this work are to set up the software and to install the corresponding fiber patch in the LDAS room.
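For context, the NGDD data path is Kafka-based, so the renamed machine will be running a producer of roughly this shape. Purely an illustration (broker address, topic name, and frame file are invented placeholders, not the NGDD design):

    # Illustration: a minimal kafka-python producer, in the spirit of a
    # daqd kafka connector. Broker, topic, and file are invented.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers='h1daqkc0:9092')

    with open('H-H1_R-1400000000-64.gwf', 'rb') as f:   # hypothetical frame
        producer.send('h1-raw-frames', value=f.read())
    producer.flush()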

 

H1 ISC
elenna.capote@LIGO.ORG - posted 14:22, Wednesday 19 November 2025 (88176)
Checking LSC and ASC safe SDFs

Checking the safe SDF settings for the LSC and ASC models.

ASC model not-monitored channels are here. No outstanding SDF diffs while unlocked.

LSC model not-monitored channels are here. Also no outstanding SDF diffs while unlocked.

ASCIMC has no unmonitored channels. No safe diffs with the IFO unlocked.

Images attached to this report
H1 PEM
ryan.crouch@LIGO.ORG - posted 13:18, Wednesday 19 November 2025 (88174)
New dust monitor Huddle Test results

I've run a few different huddle tests with the new TemTop dust monitors against 3 different MetOne GT521s units. The first tests, in the Optics Lab and Diode Room, showed some discrepancies in spike times, but that could be due to the different sample cadences of the monitors: the PMS21 and GT521s both sample for 60 seconds then hold for 300 s, while the PMD331 has no built-in hold time, so it samples continuously or manually (a resampling sketch for comparing such series follows below). The PMS21 usually reads an order of magnitude or two lower than the other two, which makes me wonder if it needs a scale factor, but it also occasionally sees big spikes that the others don't, which is confusing. The flow rate is listed as 0.1 CFM for the PMS21, and the GT521s and PMD331 are also listed at 0.1 CFM (cubic feet per minute, equal to 2.83 L/min), which is what I read when running the flow test on all of the DMs. Note: the times do not account for daylight saving, so each y-axis timestamp is actually an hour behind the actual PST.
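For comparing instruments with different sample/hold cadences, resampling both onto a common time grid before overlaying them helps separate real timing differences from cadence artifacts. A minimal pandas sketch (file and column names are placeholders for however the data is logged):

    # Sketch: put two dust monitors with different cadences on a common grid.
    import pandas as pd

    pms21 = pd.read_csv('pms21.csv', parse_dates=['time'], index_col='time')
    gt521s = pd.read_csv('gt521s.csv', parse_dates=['time'], index_col='time')

    # 5-minute bins (~ the 60 s sample + 300 s hold cycle) so spikes land
    # in the same bin even if the start times are offset.
    common = pd.concat(
        [pms21['counts'].resample('5min').mean().rename('PMS21'),
         gt521s['counts'].resample('5min').mean().rename('GT521s')],
        axis=1)
    print(common.corr())   # crude check of how well the two track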

Test 1 - Optics Lab:

I tested the TemTops one at a time in the Optics lab, the results of the PMD331 and the results of the PMS21

Test 2 - Diode Room:

I tested the TemTops at the same time in the Diode room; I started off with only the PMS21, then added the PMD331. For only the PMS21 we saw these counts, for only the PMD331 we saw these counts, and for both of them we saw these counts, all against a MetOne GT521s.

Test 3 - Control Room:

I tested three dust monitors at the same time in the control room (I grabbed a spare pumped GT521s from the storage racks by OSB receiving; it's our last properly working spare). I did a day of samples with holds enabled and a day of continuous sampling. When the dust monitors were sampling at slightly different intervals, we saw the peaks offset from each other, such as at 11-18 ~07:00 PST at the right of the plot. During the continuous testing we can see the peaks from everyone coming into the control room for the end-of-O4 celebration, though I'm not sure why there's some time between the peaks. Zooming in on this plot to cut out the large peaks from said celebration, we can see the PMD331 and the MetOne GT521s following each other pretty closely, but the PMS21 wasn't really reading much; there are small bumps around where the peaks from the other DMs are. Adding a scale factor of 10 to the PMS21 counts yields a better-looking plot; playing with the scale factor until the PMS21 counts looked more in line with the other DMs, I got to a scale factor of 40 for this plot. (A least-squares fit over the overlapping samples, sketched below, would pin this factor down without hand-tuning.)
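A minimal sketch of that fit (assumes the two series are already aligned on a common grid, e.g. as in the resampling sketch above):

    # Sketch: least-squares scale factor between two overlapping series.
    import numpy as np

    def fit_scale(pms21: np.ndarray, gt521s: np.ndarray) -> float:
        """Factor k minimizing ||k*pms21 - gt521s||^2."""
        return float(np.dot(pms21, gt521s) / np.dot(pms21, pms21))

    # e.g. k = fit_scale(common['PMS21'].to_numpy(),
    #                    common['GT521s'].to_numpy())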

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 13:08, Wednesday 19 November 2025 (88175)
FDS SQZ angle scan data set

In breaks in the PEM work this morning I improved the script I used last night so that I don't have to do as many things by hand. The script is in sheila.dwyer/SQZ/automate_dataset/SQZ_ANG_stepper.py; I'll try to clean it up a bit more and add it to userapps soon.

It's running now and will take 5 minutes of no-SQZ time, then do the steps needed to measure nonlinear gain (it doesn't give a result, but the data from those times can be looked at later). Then it takes angle steps: 10 degree steps on the steep side of the ellipse and 30 degree steps on the slow side (where the SQZ angle changes slowly with demod angle).

The current version should take 1 hour to take 3 minutes of data at each point, or 1.5 hours for 4 minutes of data.

It's now running for FDS with nominal PSAMS settings; this can be aborted if PEM wants to do injections before it's over.
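For anyone curious before the script lands in userapps, the stepping pattern described above is roughly this shape (a sketch only; the demod-phase channel name, step ranges, and dwell are assumptions, not what the real script uses):

    # Sketch of the angle-stepping pattern: fine 10 deg steps on the steep
    # side of the ellipse, coarse 30 deg steps on the slow side, with a
    # dwell at each point. Channel name and ranges are assumptions.
    import time
    from ezca import Ezca

    ezca = Ezca(ifo='H1')
    DWELL = 3 * 60    # 3 minutes of data per point

    steps = list(range(0, 60, 10)) + list(range(60, 180, 30))
    for angle in steps:
        ezca['SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'] = angle   # assumed channel
        time.sleep(DWELL)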

LHO VE
david.barker@LIGO.ORG - posted 10:39, Wednesday 19 November 2025 (88173)
Wed CP1 Fill

Wed Nov 19 10:09:29 2025 INFO: Fill completed in 9min 25secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:20, Wednesday 19 November 2025 (88171)
h1tcshwssdf restarted to remove duplicate channels

FRS36084 Dave, TJ:

TJ discovered that some HWS SLED settings were being monitored by multiple slow-SDF systems (h1syscstcssdf and h1tcshwssdf). The former's monitor.req is computer-generated, the latter's is hand-built.

There were 4 duplicated channels: H1:TCS-ITM[X,Y]_HWS_SLEDSET[CURRENT,TEMPERATURE]

I removed these channels from tcs/h1/burtfiles/h1tcshwssdf_[monitor.req, safe.snap, OBSERVE.snap] and restarted the pseudo-frontend h1tcshwssdf on h1ecatmon0 at 09:00 Wed 19 Nov 2025 PST. The number of monitored channels dropped from 64 to 60 as expected.

I did a full scan of all the monitor.req files and confirmed that there were only these 4 channels duplicated.

I'll add a check to cds_report.py to look for future duplications.
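The check can be as simple as counting channel names across every monitor.req. A minimal sketch (the search path is a placeholder):

    # Sketch: find channels monitored by more than one slow-SDF system.
    from collections import Counter
    from pathlib import Path

    counts = Counter()
    for req in Path('/opt/rtcds/userapps').rglob('*monitor.req'):  # placeholder path
        for line in req.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith('#'):
                counts[line.split()[0]] += 1

    for chan, n in sorted(counts.items()):
        if n > 1:
            print(f'{chan} appears in {n} monitor.req files')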

LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Wednesday 19 November 2025 - last comment - 09:34, Wednesday 19 November 2025(88168)
Ops Day Shift Start

TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.39 μm/s 
QUICK SUMMARY: Locked for 13 hours, but it looks like we only got into observing 45 min ago due to an SDF diff. Lights are on at MY; the Y_BT AIP is still in error, as mentioned in Ryan's log. The plan for today is to continue PEM characterization.

Comments related to this report
david.barker@LIGO.ORG - 09:34, Wednesday 19 November 2025 (88172)

Because the CDS WIFIs will always be on from now onwards, I've accepted these settings into the CDS SDF.

H1 PSL
oli.patane@LIGO.ORG - posted 09:29, Thursday 13 November 2025 - last comment - 09:14, Wednesday 19 November 2025(88087)
IMC_LOCK stuck in FAULT due to FSS oscillation

During PRC Align, the IMC unlocked and couldn't relock due to the FSS oscillating a lot - PZT MON was showing it moving all over the place - and I couldn't even take the IMC to OFFLINE or DOWN due to the PSL ready check failing. To try and fix the oscillation issue, I turned off the autolock for the Loop Automation on the FSS screen, re-enabled it after a few seconds, and then we were able to go to DOWN fine, and I was able to relock the IMC.

TJ said this has happened to him and to a couple other operators recently.

 

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:07, Thursday 13 November 2025 (88090)OpsInfo

Took a look at this, see attached trends. What happened here is the FSS autolocker got stuck between states 2 and 3 due to the oscillation. The autolocker is programmed to, if it detects an oscillation, jump immediately back to State 2 to lower the common gain and ramp it back up to hopefully clear the oscillation. It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front end model. The problem here is that 0 dB is not generally low enough to clear the oscillation, so it gets stuck in this State 2/State 3 loop and has a very hard time getting out of it. This is seen in the lower left plot, H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3; the autolocker does not lower the common gain slider, as seen in the center-left plot. If this happens, turning the autolocker off then on again is most definitely the correct course of action.

We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10 dB, which is low enough to clear the majority of oscillations. So why not use this during lock acquisition? When an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker will fight each other. This conflict makes lock acquisition take much longer, several tens of minutes, so the guardian node is not engaged during RefCav lock acquisition.

Talking with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop.  On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this.  For OPS: In the interim, if this happens again please do not wait for the oscillation to clear on its own.  If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom.  If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.
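A sketch of what that guardian-side handling could look like (H1:PSL-FSS_AUTOLOCK_STATE is the monitor channel named above; the AUTOLOCK_ON switch channel is an assumed name, not a confirmed one):

    # Sketch: if the autolocker looks stuck (state never settling at 4),
    # toggle it off and on, mirroring the manual procedure above.
    import time

    def stuck_in_loop(ezca, window=30):
        """Crude check: state never reaches 4 over `window` seconds."""
        t0 = time.time()
        while time.time() - t0 < window:
            if ezca['PSL-FSS_AUTOLOCK_STATE'] >= 4:
                return False
            time.sleep(1)
        return True

    def toggle_autolocker(ezca, wait=5):
        ezca['PSL-FSS_AUTOLOCK_ON'] = 0   # assumed switch channel name
        time.sleep(wait)                  # "wait a little bit"
        ezca['PSL-FSS_AUTOLOCK_ON'] = 1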

Images attached to this comment
jason.oberling@LIGO.ORG - 09:14, Wednesday 19 November 2025 (88170)

Slight correction to the above. The autolocker did not get stuck between states 2 and 3, as there is no path from state 3 to state 2 in the code. What's happening is the autolocker goes into state 4, detects an oscillation, then immediately jumps back to state 2; so this is a loop from states 2 -> 3 -> 4 -> 2, due to the oscillation and the inability of the autolocker gain ramp to effectively clear it. This happens at the clock speed of the FSS FE computer, while the channel that monitors the autolocker state is only a 16 Hz channel, so the monitor channel is nowhere close to fast enough to see all of the state changes the code is going through during an oscillation.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 16:32, Monday 25 August 2025 - last comment - 15:34, Thursday 20 November 2025(86555)
new version of noise budget code: WIP

I've made a fresh start on the noise budget code, using much of the work and many of the functions from aligoNB, but in a code that should be overall simpler for us to work with.  This exists in a repo here: simplenb.

Noises estimated using the excess power method can be read in either using GPS times or from a dtt template. .xml files are tracked using git lfs and read in by dttxml. If a list of GPS times is provided, the code will download the data using gwpy and save the ASDs as a .pkl file, so that the data doesn't need to be downloaded each time the budget is run; deleting the .pkl file from the local copy will cause it to redownload the data if the user wants to change times or resolution (a sketch of this caching pattern is directly below). The excess power projection is done using the frequency resolution of the dtt file; for budgets made from a list of GPS times the FFT length is currently hard-coded to 3 seconds. The noises are rebinned to a frequency vector specified for the budget after the projections are made.
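A minimal sketch of that caching pattern (channel and times are placeholders; FFT length fixed at the hard-coded 3 seconds):

    # Sketch: download once with gwpy, cache the ASD as .pkl, reuse later.
    import pickle
    from pathlib import Path
    from gwpy.timeseries import TimeSeries

    def cached_asd(channel, start, end, cache='asd.pkl', fftlength=3):
        path = Path(cache)
        if path.exists():            # delete the .pkl to force a refetch
            return pickle.loads(path.read_bytes())
        data = TimeSeries.get(channel, start, end)
        asd = data.asd(fftlength=fftlength, overlap=fftlength / 2)
        path.write_bytes(pickle.dumps(asd))
        return asd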

Thermal noises are imported from gwinc. Quantum noises are also calculated using gwinc, which is why this code needs to be run in an environment that uses Kevin's superQK branch of gwinc. I've added a function save_quantum_params to my quantum noise modeling repo that takes a template of a yaml file, including all the parameters you think are important for modeling quantum noise, and an ifo struct, and saves a yaml file with the current parameters. This function is also available in quantum_utils.py in this repo, so that people can use it with any method they like of generating a gwinc model of quantum noise to incorporate into this noise budget. I think it is best to leave the quantum noise budgeting separate from this repo for now, because the code for that is still evolving. For now I've used the quantum parameters where all the missing squeezing is explained by frequency-dependent loss from 85942; the next step is to add mode mismatches to that model.
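In spirit, save_quantum_params just walks the template and pulls the matching values out of the ifo struct; a hedged sketch only (the template layout here is an assumption, and the real function lives in the quantum noise modeling repo):

    # Sketch of the save_quantum_params idea: fill a yaml template with the
    # current values from a gwinc ifo struct. Template layout is assumed to
    # be {'Squeezer': ['AmplitudedB', ...], ...} with section/key names.
    import yaml

    def save_quantum_params(template_path, ifo, out_path):
        with open(template_path) as f:
            template = yaml.safe_load(f)
        params = {sec: {key: float(ifo[sec][key]) for key in keys}
                  for sec, keys in template.items()}
        with open(out_path, 'w') as f:
            yaml.safe_dump(params, f)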

Running the budget.py script will generate noise budget plots; you can choose how many layers deep to go in plotting sub-budgets by setting that parameter in the make_all_plots function call in the final lines. The plots attached were made using the injections that Camilla made this morning (86550) and a reference time of 20:00 UTC.

Still to do:

Non-image files attached to this report
Comments related to this report
alexandra.mitchell@LIGO.ORG - 15:34, Thursday 20 November 2025 (88189)
We used the simplenb code to make a plot for the Stanford seismic NSF proposal; I've attached it below for reference.

Some things we found along the way while getting this to run properly as someone offsite without a lot of CDS experience:
* We struggled with LFS and eventually downloaded the needed files manually; we couldn't get them to download via git for some reason.
* The superQK environment is a branch of pygwinc: https://git.ligo.org/gwinc/pygwinc
* The aligoNB repo is also needed.
* If working offsite, some lines in excess_power_utils under def import_dtt_calib should be edited to point to the correct directory (one way to avoid per-machine edits is sketched below).
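On the last point, one low-effort alternative to editing the source per machine is an environment-variable override; a sketch of what that could look like in excess_power_utils (the variable name is invented):

    # Sketch: allow an environment variable to override the hard-coded
    # dtt directory. SIMPLENB_DTT_DIR is an invented name.
    import os

    DTT_DIR = os.environ.get('SIMPLENB_DTT_DIR', '/path/on/cds/machines')

Then offsite users just set SIMPLENB_DTT_DIR before running the budget.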

Hope this is useful for anyone trying to do the same offsite! 

Non-image files attached to this comment