H1 General
ryan.crouch@LIGO.ORG - posted 07:28, Tuesday 04 March 2025 (83149)
OPS Tuesday day shift start

TITLE: 03/04 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 5mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.41 μm/s
QUICK SUMMARY:

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:17, Tuesday 04 March 2025 (83148)
Workstations updated

Workstations were updated and rebooted. This was an OS packages update; Conda packages were not updated.

LHO FMCS
anthony.sanchez@LIGO.ORG - posted 21:46, Monday 03 March 2025 (83146)
Fan Vibrometer FAMIS task 26364

FAMIS 26364

Fan Vibrometer trends.
There was a small bump in MR_FAN_3_170 about two and a half days ago.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 17:17, Monday 03 March 2025 (83145)
Monday Eve Shift Start

TITLE: 03/04 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 9mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.20 μm/s
QUICK SUMMARY:

H1 has been locked for 5 hours and 45 minutes.
All systems seem to be running smoothly with the exception of the LVEA WAP.

H1 ISC (CAL)
elenna.capote@LIGO.ORG - posted 17:00, Monday 03 March 2025 (83144)
New calibration only accounts for some of the range difference

Our range is lower than usual, and it appears that the overall DARM noise changed roughly coincident with the calibration update (see log 83088). The calibration update has improved the calibration accuracy at some frequencies and worsened it at others: calibration error below 40 Hz has been reduced from ±4% to within 1%, but calibration error above 50 Hz has increased from within 1% to 2-3% (I am eyeballing these values off the plot in the linked alog).

I took Jeff's calibration transfer functions from the linked alog and I applied them to the GDS-CALIB_STRAIN_NOLINES channel from a time just before the calibration update and a time after. I used our new range difference calculation method to compare the range from before the calibration change to the range after.

Method:

I chose a "before" time, when the GDS CLEAN range appeared to be around 160 Mpc and an "after" time from this past weekend. I used times that started after 3 hours of lock to ensure we were thermalized. I was careful to look for times with no large glitches, and used median averaging to calculate the PSDs.

before time = 1423272193 (Feb 10 12:22:55 PST)

after time = 142495718 (Mar 2 05:26:08 PST)

I exported the transfer functions shown in this plot (**when exporting, I noticed that the refs used for the phase got mixed up, I believe the blue phase trace corresponds to the black magnitude trace and vice versa**). For the "before" time, I used the black trace labeled "Pre-calibration change" and for the "after" time I used the red trace labeled "Post GDS TDCF Burn-in".

I pulled 30 minutes of data from the times listed above, and used GDS-CALIB_STRAIN_NOLINES for my calculations.

The "uncorrected" data is simply GDS-CALIB_STRAIN_NOLINES * 4000 m (that is, calibrated strain converted into meters with no Pcal correction)

The "corrected" data is (GDS-CALIB_STRAIN_NOLINES * 4000 m) / pcal transfer function, where the pcal transfer function is the R_model / R_true exported from the DTT template above.

The PCAL transfer function is only well-measured from about 9 Hz to 450 Hz, so I cropped the PSDs to those frequencies. Finally, I used the normalized range difference method to calculate the cumulative range difference between the before and after calibration update times for both the "uncorrected" and "corrected" data.
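As a rough illustration of this procedure (not the script actually used), the sketch below fetches GDS-CALIB_STRAIN_NOLINES with gwpy, converts it to displacement, builds median-averaged PSDs, optionally divides out the exported R_model / R_true magnitude from a hypothetical two-column text file, and forms the cumulative range ratio over 9-450 Hz using the f^(-7/3)/S(f) inspiral weighting. The channel access method, FFT parameters, and file names are all assumptions.

    import numpy as np
    from gwpy.time import to_gps
    from gwpy.timeseries import TimeSeries

    ARM_LENGTH = 4000.0  # m, per the conversion used above

    BEFORE_GPS = 1423272193                      # Feb 10 12:22:55 PST, from above
    AFTER_GPS = to_gps('2025-03-02 13:26:08')    # Mar 2 05:26:08 PST in UTC (the GPS value above is truncated)

    def displacement_psd(gps_start, duration=1800):
        """Fetch calibrated strain, convert to meters, return a median-averaged PSD."""
        strain = TimeSeries.get('H1:GDS-CALIB_STRAIN_NOLINES', gps_start, gps_start + duration)
        return (strain * ARM_LENGTH).psd(fftlength=8, overlap=4, method='median')

    def pcal_correct(psd, tf_file):
        """Divide by the exported R_model/R_true magnitude (hypothetical two-column text file)."""
        f_tf, mag = np.loadtxt(tf_file, usecols=(0, 1), unpack=True)
        ratio = np.interp(psd.frequencies.value, f_tf, mag)
        return psd / ratio**2                    # PSD scales as the amplitude correction squared

    def cumulative_range_ratio(psd_before, psd_after, f_lo=9, f_hi=450):
        """Cumulative ratio of inspiral ranges, integrating f^(-7/3)/S(f) upward from f_lo."""
        b, a = psd_before.crop(f_lo, f_hi), psd_after.crop(f_lo, f_hi)
        f = b.frequencies.value
        w_b = np.cumsum(f**(-7.0 / 3.0) / b.value)   # constant df cancels in the ratio
        w_a = np.cumsum(f**(-7.0 / 3.0) / a.value)
        return f, np.sqrt(w_a / w_b)                 # range ~ sqrt of the SNR^2 integral

    f, ratio = cumulative_range_ratio(displacement_psd(BEFORE_GPS), displacement_psd(AFTER_GPS))
    print('cumulative range change up to %g Hz: %.2f%%' % (f[-1], 100 * (ratio[-1] - 1)))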

Results:

I believe these results indicate that there is excess noise present in DARM that is unrelated to the calibration change; see the first attached plot. I converted the result into a percent difference, because the overall Mpc units refer to the integrated range from only 9-450 Hz, so it's not really comparable with our sensemon range calculation. This plot shows that the range increased by about 0.5% between 30-50 Hz, which is present in both the uncorrected and corrected calibrated strain. However, above 50 Hz the range is worse, and it's here where the difference between the old and new calibration is also evident. Without applying the pcal correction, the range is lower by nearly 2%, but only by about 1% with the pcal correction.

Since this is a frequency-dependent effect, it is difficult to say what our overall sensemon range would be if we still had the old calibration and/or we didn't have this excess noise. However, I think it is fair to say that the excess noise has reduced our range by about 1% and the new calibration by another 1%.

I also added a plot that compares these two times: the first is GDS-CALIB_STRAIN_NOLINES * 4000 m and the second is (GDS-CALIB_STRAIN_NOLINES * 4000 m) / pcal transfer function.

Images attached to this report
Non-image files attached to this report
LHO General
corey.gray@LIGO.ORG - posted 16:33, Monday 03 March 2025 (83136)
Mon DAY Ops Summary

TITLE: 03/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

H1 started the day Observing and then transitioned to Commissioning, but the thermalized H1 only lasted 21 min of Commissioning before a lockloss.

Relocking was surprisingly non-trivial, with DRMI causing most of the grief, but after a second Initial Alignment was run, DRMI managed to lock. It still seems unhealthy (even with our quieter microseism).

LOG:

H1 SUS
oli.patane@LIGO.ORG - posted 13:45, Monday 03 March 2025 (83142)
PI31 sitting higher after thermalization since EY ring heater change on Feb 21

Since the ring heater power for EY was raised (82962) in order to move away from PI8/PI24, whose noise seemed to be coupling into DARM (82961), PI31 has become elevated as the detector thermalizes (ndscope1). The PI31 RMS, which usually sits at or below 2, starts ringing up about two hours into the lock, and by four hours into the lock it reaches a new value of around 4, where it sits for the rest of the lock (ndscope2). Once it is at this new value, it will every so often quickly ring up before being damped within a minute. For the first couple of locks after the ring heater change this ringup was happening every 1-1.5 hours, but it has since shifted to ringing up every 30-45 minutes.

The channels where we see the RMS amplitude increases and the quick ringups are the same channels (the PI31 versions) where we were seeing the glitches in PI24 and PI8 that were affecting the range (PI8/PI24 versus PI31). So changing the EY ring heater power shifted us away from PI8 (10430 Hz) and PI24 (10431 Hz), but towards PI31 (10428 Hz). Luckily, neither these ringups nor the higher RMS that PI31 sits at after we thermalize appear to have an effect on the range (comparing range drop times and glitchgram times to PI31 ringup times and the downconverted signal from the DCPDs). They also don't seem to be related to any of the locklosses we've had since the ring heater change.
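For reference, a minimal sketch of how one might flag these ringups offline is below; the channel name and threshold are placeholders (I have not checked the actual PI31 RMS channel name), and it assumes gwpy/NDS2 data access.

    from gwpy.timeseries import TimeSeries

    PI31_RMS_CHANNEL = 'H1:SUS-PI_PROC_COMPUTE_MODE31_RMSMON'   # placeholder channel name
    THRESHOLD = 4.0                                             # above the usual ~2 baseline

    def find_ringups(gps_start, gps_end):
        """Return GPS times where the PI31 RMS rises above THRESHOLD."""
        rms = TimeSeries.get(PI31_RMS_CHANNEL, gps_start, gps_end)
        above = rms.value > THRESHOLD
        times = rms.times.value
        # rising edges: above threshold now, but not in the previous sample
        return [t for t, hi, prev in zip(times[1:], above[1:], above[:-1]) if hi and not prev]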

Images attached to this report
H1 TCS
ryan.crouch@LIGO.ORG - posted 12:15, Monday 03 March 2025 (83140)
TCS Chiller Water Level Top Off - Biweekly

Closes FAMIS 27810, last checked in alog82945

For CO2X the level was at 29.4, I added 160 ml to get to 29.9.

For CO2Y the level was at 10.0, I added 150 ml to get it to 10.5.

The Dixie cup was empty; no signs of current water dripping to be seen.

LHO VE
david.barker@LIGO.ORG - posted 10:37, Monday 03 March 2025 (83139)
Mon CP1 Fill

Mon Mar 03 10:12:33 2025 INFO: Fill completed in 12min 30secs

Gerardo confirmed a good fill curbside.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:42, Monday 03 March 2025 - last comment - 08:07, Monday 03 March 2025(83134)
Mon DAY Ops Transition

TITLE: 03/03 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:

When the alarm went off and I checked H1, I saw that it had sadly been down for about an hour, so I figured I'd need to come in to run an alignment to bring H1 back. But automation was already on it with the alignment, and I walked in to find H1 already in Observing for about 15 min! (H1 was knocked out by a M4.5 EQ on the SE tip of Orcas Island.)

Secondary (&Primary) µseism continue their trends downward; secondary is now squarely below the 95th percentile line (between 50th & 95th).

Monday Commissioning is slated to start in about 45 min (at 1630 UTC).

Comments related to this report
corey.gray@LIGO.ORG - 08:07, Monday 03 March 2025 (83137)

Reacquisition after the EQ lockloss did not have an SRC Noise Ring Up during OFFLOAD_DRMI_ASC.

Images attached to this comment
H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 22:10, Sunday 02 March 2025 (83133)
Sunday Eve Shift End

TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

H1 has been locked for coming up on 5 hours without much issue. It's been a nice and quiet night.
I do see that the DCPD signals are slowly diverging, but I don't know the cause of that.
The Stand Down query Failure is still present on the OPS_Overview screen.
The Unifi APs are still disconnected. Tagging CDS.

H1 CDS (CDS, DetChar)
anthony.sanchez@LIGO.ORG - posted 17:26, Sunday 02 March 2025 - last comment - 17:48, Sunday 02 March 2025(83131)
Unifi APs reporting INV instead of OFF in LVEA.

Trying to get into Observing tonight, Verbals is telling me that the WAP is on in the LVEA.
lvea-unifi-ap has been down since Tuesday, which may be because the WAP was turned off for maintenance.

The same thing is happening in the CER.


Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 17:48, Sunday 02 March 2025 (83132)CDS

Logging into the Unifi WAP control, I see that the LVEA-UNIFI-AP and CER-UNIFI-AP Wireless Access Points (WAPs) are NOT on and were last seen 5 days ago.
These WAPs are both listed as Disconnected | disabled, whereas all of the other turned-off access points say they are connected | disabled.
Neither of these WAPs responds to the MEDM click to turn them on or off.
Both of the WAPs are unpingable, whereas the connected-but-disabled WAPs are still pingable even while they are disabled.

I guess this is "good" for observing, since the WiFi cannot be turned on in the LVEA, but on Tuesday we are likely going to want those APs to work again.
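For what it's worth, a quick way to repeat the reachability check on Tuesday is sketched below (hostnames are assumed; substitute the real AP names or addresses):

    import subprocess

    APS = ['lvea-unifi-ap', 'cer-unifi-ap']   # assumed hostnames

    for ap in APS:
        # one echo request with a two-second timeout (Linux ping flags)
        ok = subprocess.run(['ping', '-c', '1', '-W', '2', ap],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).returncode == 0
        print(ap, 'pingable' if ok else 'unreachable')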

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 17:13, Sunday 02 March 2025 (83129)
Sunday Eve Shift Start

TITLE: 03/03 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 7mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.41 μm/s
QUICK SUMMARY:
H1 was locked for 16 hours and 17 minutes before a lockloss just before my shift.
Corey and I agreed that an Initial_Alignment was in order to get H1 relocked quickly.

H1 reached Nominal_Low_Noise @ 1:12 UTC

The Stand Down query Failure is still visible on the OPS_Overview screen at the top of NUC20.


LHO General
corey.gray@LIGO.ORG - posted 16:33, Sunday 02 March 2025 (83127)
Sun DAY Ops Summary

TITLE: 03/02 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Quiet shift on this Oscar Night 2025, with a lockloss in the last 40 min of the shift. And with the lower primary+secondary microseism compared to 24 hrs ago, it was much harder to see 0.12-0.15 Hz oscillations in LSC/SUS signals! What a difference a day makes.
LOG:

H1 PSL
anthony.sanchez@LIGO.ORG - posted 16:20, Sunday 02 March 2025 (83130)
PSL

PSL Weekly Status Report FAMIS 26358
Laser Status:
    NPRO output power is 1.857W
    AMP1 output power is 70.47W
    AMP2 output power is 139.8W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 26 days, 3 hr 51 minutes
    Reflected power = 22.27W
    Transmitted power = 106.0W
    PowerSum = 128.3W

FSS:
    It has been locked for 0 days 0 hr and 18 min
    TPD[V] = 0.7976V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 18 minutes ago


Possible Issues: None reported

LHO VE
david.barker@LIGO.ORG - posted 10:20, Sunday 02 March 2025 (83128)
Sun CP1 Fill

Sun Mar 02 10:14:17 2025 Fill completed in 14min 14secs

Trying out a new colour scheme with the delta channels to distinguish between 30S, 60S and 120S chans.

Images attached to this report
H1 ISC (IOO, PSL)
mayank.chaturvedi@LIGO.ORG - posted 19:09, Wednesday 26 February 2025 - last comment - 14:40, Monday 03 March 2025(83077)
Opened a new ISS PD array

Jennie Siva Keita Mayank

Following our previous attempt here, we opened a new ISS PD array (S.N. 1202965).
This unit is in great condition, i.e.:

1) No sign of contamination.
2) All the optics are intact (No chipping)

We tried interfacing the QPD-cable S1203257 with the QPD but it turned out that they are not compatible.
We will look for the updated version of the QPD cable.   

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:49, Thursday 27 February 2025 (83091)EPO

More photos I took of the unboxed unit,

Keita holding part of QPD connector that connects to cable,

zoom in of part of prisms close to PD array to show they don't look damaged like the previous one we unboxed,

DCC and serial number of the baseplate (this is a different part for each observatory due to differing beam heights).

Keita explaining the QPD cable clamp to Shiva (right) and Mayank (left).

View of optics with periscope upper mirror on the left.

View of part of prisms close to periscope.

View of back of array and strain relief.

plus a picture of a packaged optic that was sitting on top of this capsule while it was in the storage cupboard.


Images attached to this comment
jennifer.wright@LIGO.ORG - 16:02, Thursday 27 February 2025 (83099)

For future reference, all the ISS arrays and their serial numbers are listed in the DCC entry for the assembly drawing LIGO-D1101059-v5.

matthewrichard.todd@LIGO.ORG - 14:40, Monday 03 March 2025 (83143)

[Matthew Mayank Siva Keita]

On Friday (2025-02-28) we moved the optics onto taller posts so that we do not have to pitch the beam up too much (in hindsight, we probably would have been okay doing this) when we align the beam into the input port of the ISS array. We have not aligned the beam yet and should most likely re-profile it (though we may not need to) to ensure that the planned lens position is correct.

We also spent some time checking the electronics box for proper connections and polarity; then we tested the upper row of PDs (the 4 top ones) by plugging each cathode/anode into the respective port. On the output DSUB we used a breakout board and put each channel onto an oscilloscope; it seems that all four PDs in the top row are functioning as anticipated.


Important Note:

Keita and I looked at the "blue glass" plates that serve as beam dumps, but just looking at the ISS array we do not know how to mount them properly. We think there may be some component missing that clamps them to the array. So we repackaged the blue-glass in its excessive lens paper.

Images attached to this comment
H1 ISC
jim.warner@LIGO.ORG - posted 11:19, Monday 03 February 2025 - last comment - 10:32, Monday 03 March 2025(82608)
ESD glitch limit added to ISC_LOCK

During commissioning this morning, we added the final part of the ESD glitch limiting by adding the actual limit to ISC_LOCK. I added a limit value of 524188 to the ETMX_L3_ESD_UR/UL/LL/LR filter banks, which are the upstream part of the 28-bit DAC configuration for SUS ETMX. These limits are engaged in LOWNOISE_ESD_ETMX, but turned off again in PREP_FOR_LOCKING.

In LOWNOISE_ESD_ETMX I added:

            log('turning on esd limits to reduce ETMX glitches')
            for limits in ['UL','UR','LL','LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

So if we start losing lock at this step, these lines could be commented out. The limit turn-off in PREP_FOR_LOCKING is probably benign.
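For completeness, the turn-off in PREP_FOR_LOCKING presumably mirrors the snippet above; a guess at what it looks like (not copied from ISC_LOCK.py) is:

            log('turning off esd limits')
            for limits in ['UL','UR','LL','LR']:
                ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_off('LIMIT')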

Diffs have been accepted in sdf.

I think the only way to tell if this is working is to wait and see if we have fewer ETMX glitch locklosses, or if we start riding through glitches that have caused locklosses in the past.

Comments related to this report
camilla.compton@LIGO.ORG - 11:48, Monday 03 February 2025 (82609)

Using the lockloss tool: we've had 115 Observe locklosses since Dec 01, and 23 of those were also tagged ETM glitch, which is around 20%.

camilla.compton@LIGO.ORG - 12:15, Monday 10 February 2025 (82723)SEI

Since Feb 4th, we've had 13 locklosses from Observing, 6 of these tagged ETM_GLITCH: 02/10, 02/09, 02/09, 02/08, 02/08, 02/06.

sheila.dwyer@LIGO.ORG - 11:30, Tuesday 11 February 2025 (82743)

Jim, Sheila, Oli, TJ

We are thinking about how to evaluate this change. In the meantime we made a comparison similar to Camilla's: in the 7 days since this change, we've had 13 locklosses from observing, with 7 tagged by the lockloss tool as ETM glitch (and more than that identified by operators), compared to the 7 days before the change, when we had 19 observe locklosses, of which 3 had the tag.

We will leave the change in for at least another week to get more data on what its impact is.

jim.warner@LIGO.ORG - 10:32, Monday 03 March 2025 (83138)

I forgot to post this at the time: we took the limit turn-on out of the guardian on Feb 12, with the last lock ending at 14:30 PST, so locks since that date have had the filters engaged; but since they multiply to 1, they shouldn't have an effect without the limit. We ran this scheme from Feb 3 17:40 UTC until Feb 12 22:15 UTC.

Camilla asked about turning this back on; I think we should do that. All that needs to be done is uncommenting the lines (currently 5462-5464 in ISC_LOCK.py):

            #log('turning on esd limits to reduce ETMX glitches')
            #for limits in ['UL','UR','LL','LR']:
            #    ezca.get_LIGOFilter('SUS-ETMX_L3_ESD_%s'%limits).switch_on('LIMIT')

The turn-off of the limit is still in one of the very early states of ISC_LOCK, so nothing beyond accepting new SDFs should be needed.
