H1 General
corey.gray@LIGO.ORG - posted 09:43, Friday 05 July 2024 (78880)
M5.0 EQ Off BC Coast Knocks H1 Out Of Lock (During COMMISSIONING/PR2 Move)

Commissioning started at 1600 UTC (9am local), but at 1637 UTC a magnitude 5.0 earthquake near British Columbia knocked H1 out of lock (this is the area where we have been having earthquakes over the last day or so).

LHO General
corey.gray@LIGO.ORG - posted 07:38, Friday 05 July 2024 (78876)
Fri Ops Day Transition

TITLE: 07/05 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1 has been locked for 2.25 hrs, with 2-3 hr lock stretches starting about 15 hrs ago (with issues before that, as noted in previous logs, mainly after this week's Tuesday maintenance). Seeing small EQ spikes in recent hours after the big EQ about 22 hrs ago. Low winds and microseism. On the drive in there was a thin layer of smoke on the horizon in about all directions, post-4th of July, with no active or nearby plumes observed.

H1 SQZ
ryan.short@LIGO.ORG - posted 02:40, Friday 05 July 2024 - last comment - 10:16, Friday 05 July 2024(78875)
SQZ TTFSS Input Power Too High - Raised Threshold

H1 called for assistance at 08:45 UTC because, while it was able to lock up to NLN, it could not inject squeezing due to an error with the SQZ TTFSS. The specific error reported was "Fiber trans PD error," and the fiber trans screen showed a "Power limit exceeded" message. The input power to the TTFSS (SQZ-FIBR_TRANS_DC_POWER) was indeed too high at 0.42 mW, where the high power limit was set at 0.40 mW. Trending this back a few days, it seems the power jumped up on the morning of July 3rd (I suspect when the fiber pickoff in the PSL was aligned) and has been floating around that high power limit ever since. I'm not exactly sure why it was an issue this time, as we've had several hours of observing time since then.

I raised the high power limit from 0.40mW to 0.45mW, the TTFSS was able to lock without issue, SQZ_MANAGER brought all nodes up, and squeezing was injected as usual. I then accepted the new high power limit in SDF (attached) for H1 to start observing at 09:20 UTC.

Since this feels like a Band-Aid solution just to get H1 observing tonight, I encourage someone with more knowledge of the SQZ TTFSS to look into it as time allows.
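
For reference, a minimal sketch of how this trend could be reproduced offline, assuming gwpy with NDS2 access from a LIGO workstation; the time span is illustrative and only the channel name and the 0.40 mW limit come from this entry:

    # Sketch: trend the TTFSS input power over the past few days and flag time
    # spent above the old 0.40 mW limit. Assumes gwpy/NDS2 access; span is illustrative.
    from gwpy.timeseries import TimeSeries

    chan = "H1:SQZ-FIBR_TRANS_DC_POWER"
    data = TimeSeries.get(chan, "2024-07-01", "2024-07-05")

    limit = 0.40   # old high-power limit [mW]
    frac_over = (data.value > limit).mean()
    print(f"max = {data.max().value:.3f} mW; "
          f"{100 * frac_over:.1f}% of samples above {limit} mW")

    plot = data.plot(ylabel="Power [mW]")
    plot.gca().axhline(limit, color="r", linestyle="--")
    plot.savefig("sqz_fibr_trans_power.png")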

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:16, Friday 05 July 2024 (78881)

Vicky checked that we can have a maximum of 1 mW as the sum of both fibers (H1:SQZ-FIBR_PD_DC_POWER; from the PD User Manual, p. 14) to stay in the linear operating range. To be safe for staying in observing, we've further increased the "high" threshold to 1 mW.
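
For reference, a minimal sketch of a live check of that condition, assuming pyepics access, mW readbacks, and my reading that the two DC channels below are the per-fiber powers whose sum should stay under ~1 mW:

    # Sketch: check the combined power on the two fiber PDs against the ~1 mW
    # linear-range limit quoted above. Assumes pyepics and mW readbacks.
    from epics import caget

    pvs = ["H1:SQZ-FIBR_TRANS_DC_POWER", "H1:SQZ-FIBR_PD_DC_POWER"]
    powers = {pv: caget(pv) for pv in pvs}
    total = sum(powers.values())

    for pv, val in powers.items():
        print(f"{pv}: {val:.3f} mW")
    print(f"sum: {total:.3f} mW "
          f"({'within' if total < 1.0 else 'outside'} the ~1 mW linear range)")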

Images attached to this comment
H1 SQZ (SQZ)
ryan.crouch@LIGO.ORG - posted 01:00, Friday 05 July 2024 - last comment - 12:37, Friday 05 July 2024(78867)
OPS Thursday eve shift summary

TITLE: 07/05 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: An earthquake lockloss, then a PI lockloss. Currently at MAX_POWER.

Lock1:
Lock2:

Lock3:

 

To recap for SQZ: I have unmonitored 3 SQZ channels on syscssqz (H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON) that keep dropping us out of observing, until their root issue can be fixed (the "Fiber trans PD error" -- too much power on FIBR_TRANS?). I noticed that each time the gains change, it also drops our cleaned range.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:37, Friday 05 July 2024 (78887)

It seems that, as you found, the issue was the max power threshold. Once Ryan raised the threshold in 78881, we didn't see this happen again (plot attached). I've re-monitored these 3 SQZ channels (SDFs attached): H1:SQZ-FIBR_SERVO_COMGAIN, H1:SQZ-FIBR_SERVO_FASTGAIN, and H1:SQZ-FIBR_LOCK_TEMPERATURECONTROLS_ON, with the TEMPERATURECONTROLS_ON change accepted.

It's expected that the CLEAN range would drop, as that range only reports when the GRD-IFO_READY flag is true (which isn't the case when there are SDF diffs).

Images attached to this comment
H1 General (Lockloss, SUS)
ryan.crouch@LIGO.ORG - posted 23:38, Thursday 04 July 2024 (78874)
06:37 UTC

PI ring-up lockloss? PI28 and PI29 could not be damped by the guardian, and we eventually lost lock.

https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1404196646

H1 SEI
ryan.crouch@LIGO.ORG - posted 21:46, Thursday 04 July 2024 (78873)
H1 ISI CPS Noise Spectra Check - Weekly

Closes FAMIS25997, last checked in alog78550

ITMX_ST2_CPSINF_H1 has gotten noisier at high frequency

Everything else looks the same as previously.

Non-image files attached to this report
H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 19:15, Thursday 04 July 2024 - last comment - 20:56, Thursday 04 July 2024(78870)
02:13 UTC lockloss

Lost lock from an earthquake

Comments related to this report
ryan.crouch@LIGO.ORG - 19:35, Thursday 04 July 2024 (78871)

XARM kept giving "fiber polarization error" and was in the CHANGE_POL state; neither I nor the guardian could get H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT below 14 using the polarization controller. I called Sheila and she suggested turning on H1:ALS-X_FIBR_LOCK_LOGIC_FORCE, which fixed it!
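
For reference, a minimal sketch of that workaround, assuming pyepics (caget/caput) access and that LOGIC_FORCE is a simple 0/1 switch (an assumption on my part; check the MEDM screen before using):

    # Sketch of the workaround above: if the fiber polarization stays above the
    # 14% threshold, force the lock logic on. Assumes pyepics access.
    from epics import caget, caput

    pol = caget("H1:ALS-X_FIBR_LOCK_FIBER_POLARIZATIONPERCENT")
    if pol is not None:
        print(f"fiber polarization: {pol:.1f}%")
        if pol > 14:
            # polarization controller can't bring it below threshold; force the logic
            caput("H1:ALS-X_FIBR_LOCK_LOGIC_FORCE", 1)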

ryan.crouch@LIGO.ORG - 20:56, Thursday 04 July 2024 (78872)

03:56 UTC Observing

H1 SEI (SEI)
neil.doerksen@LIGO.ORG - posted 18:35, Thursday 04 July 2024 - last comment - 09:14, Friday 12 July 2024(78869)
Earthquake Analysis: Similar on-site wave velocities may or may not cause lockloss. Why?

It seems that earthquakes causing similar magnitudes of ground motion on site may or may not cause lockloss. Why is this happening? We should expect similar events either always or never to cause lockloss. One suspicion is that common versus differential motion might lend itself better to keeping or breaking lock.

- Lockloss is defined as H1:GRD-ISC_LOCK_STATE_N going to 0 (or near 0).
- I correlated H1:GRD-ISC_LOCK_STATE_N with H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON peaks between 500 and 2500 μm/s (a minimal sketch of this check is given at the end of this entry).
- I manually scrolled through the data from the present back to 2 May 2024 to find events.
    - Manual, because 1) I wanted to start with a small sample size and quickly see if there was a pattern, and 2) I need to find events that caused lockloss, then go and find similarly sized events for which we kept lock.
- Channels I looked at include:
    - IMC-REFL_SERVO_SPLITMON
    - GRD-ISC_LOCK_STATE_N
    - ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON ("CS_PEAK")
    - SEI-CARM_GNDBLRMS_30M_100M
    - SEI-DARM_GNDBLRMS_30M_100M
    - SEI-XARM_GNDBLRMS_30M_100M
    - SEI-YARM_GNDBLRMS_30M_100M
    - SEI-CARM_GNDBLRMS_100M_300M
    - SEI-DARM_GNDBLRMS_100M_300M
    - SEI-XARM_GNDBLRMS_100M_300M
    - SEI-YARM_GNDBLRMS_100M_300M
    - ISI-GND_STS_ITMY_X_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Y_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_Z_BLRMS_30M_100M
    - ISI-GND_STS_ITMY_X_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Y_BLRMS_100M_300M
    - ISI-GND_STS_ITMY_Z_BLRMS_100M_300M
    - SUS-SRM_M3_COILOUTF_LL_INMON
    - SUS-SRM_M3_COILOUTF_LR_INMON
    - SUS-SRM_M3_COILOUTF_UL_INMON
    - SUS-SRM_M3_COILOUTF_UR_INMON
    - SUS-PRM_M3_COILOUTF_LL_INMON
    - SUS-PRM_M3_COILOUTF_LR_INMON
    - SUS-PRM_M3_COILOUTF_UL_INMON
    - SUS-PRM_M3_COILOUTF_UR_INMON

        - ndscope template saved as neil_eq_temp2.yaml

- 26 events: 14 lockloss, 12 kept lock (3 or 4 of the lockloss events may have non-seismic causes).

- After using CS_PEAK to find the events, I have so far used the ISI channels to analyse them.
    - The SEI channels were created last week (only 2 events are captured in those channels so far).

- Conclusions:
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *lost* lock:
        - In SEI 30M-100M,
            - 4 have z-axis-dominant motion, with either no motion or strong z-motion in SEI 100M-300M.
            - 2 have y-axis-dominated motion, with a lot of activity in SEI 100M-300M and y-motion dominating some of the time.
    - There are 6 CS_PEAK events above 1,000 μm/s in which we *kept* lock:
        - In SEI 30M-100M,
            - 5 have z-axis-dominant motion, with only general noise in SEI 100M-300M.
            - 1 has z-axis-dominant motion near the peak in CS_PEAK and strong y-axis-dominated motion starting 4 min prior to the CS_PEAK peak; it too has only general noise in SEI 100M-300M. This x- or y-motion starting about 4 min before the CS_PEAK peak has been observed in 5 events -- Love waves precede Rayleigh waves, so could these be Love waves?
    - All events below 1,000 μm/s which lose lock seem to have dominant y-motion in either or both of SEI 30M-100M and 100M-300M. However, the sample size is not large enough to convince me that shear motion is what is causing lockloss, but it is large enough to convince me to find more events and verify. (Some plots attached.)
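
For reference, a minimal sketch of the peak/lock-state correlation referenced in the method bullets above, assuming gwpy with NDS2 access; the in-band thresholds come from this entry, while the event-separation and 30-minute lockloss window are illustrative choices (in practice minute trends would be kinder than full-rate data over two months):

    # Sketch: find CS_PEAK excursions in [500, 2500] um/s and check whether
    # ISC_LOCK dropped to (near) zero within 30 minutes of each one.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    start, end = "2024-05-02", "2024-07-05"   # survey span used above
    peak = TimeSeries.get("H1:ISI-GND_STS_CS_Z_EQ_PEAK_OUTMON", start, end)
    lock = TimeSeries.get("H1:GRD-ISC_LOCK_STATE_N", start, end)

    # candidate EQ samples: peak ground velocity between 500 and 2500 um/s
    mask = (peak.value >= 500) & (peak.value <= 2500)
    times = peak.times.value[mask]

    # collapse contiguous samples into single events (>1 hr apart)
    events = [t for i, t in enumerate(times) if i == 0 or t - times[i - 1] > 3600]

    for t in events:
        window = lock.crop(t, t + 1800)
        lost = bool(np.any(window.value < 2))   # "near 0" taken as below state 2 here
        print(f"GPS {t:.0f}: lockloss within 30 min of peak: {lost}")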

Images attached to this report
Comments related to this report
beverly.berger@LIGO.ORG - 09:08, Sunday 07 July 2024 (78921)DCS, SEI

In a study with student Alexis Vazquez (see the poster at https://dcc.ligo.org/LIGO-G2302420), we found that there was an intermediate range of peak ground velocities in EQs where lock could be either lost or maintained. We also found some evidence that lockloss in this case might be correlated with high microseism (either ambient or caused by the EQ). See the figures in the linked poster under Findings and Validation.

neil.doerksen@LIGO.ORG - 09:14, Friday 12 July 2024 (79070)SEI

One of the plots (2nd row, 2nd column) has the incorrect x-channel on some of the images (all posted images happen to be correct). The patterns reported may not be correct; I will reanalyze.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:34, Thursday 04 July 2024 (78868)
Thursday OPS Day Shift End

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

Today H1 was down for ALS maintenance and replacement of the ALS X PD, as described in Daniel's alog (78864).
Once they returned, I started an initial alignment and then started locking.
Observing was reached at 23:28 UTC.
There have been a number of earthquakes right off the coast of Vancouver Island, B.C., today.

LOG:

Start Time | System | Name          | Location   | Laser Haz | Task                                | End Time
16:08      | SAF    | LVEA          | LVEA       | YES       | LVEA IS LASER HAZARD                | 10:08
17:31      | PEM    | Robert        | EX         | N         | Going to EX, not inside the VEA     | 17:44
18:11      | ALS    | Daniel, Keita | EX         | Yes       | Troubleshooting ALS beatnote issues | 21:11
23:26      | FAC    | Tony          | Water tank | N         | Closing water diverting valves      | 23:26
H1 General
ryan.crouch@LIGO.ORG - posted 16:05, Thursday 04 July 2024 (78866)
OPS Thursday eve shift start

TITLE: 07/04 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s 
QUICK SUMMARY:

H1 ISC
daniel.sigg@LIGO.ORG - posted 15:24, Thursday 04 July 2024 - last comment - 10:22, Friday 05 July 2024(78864)
ISCTEX Beatnote alignment improved

Keita Daniel

We found that the transimpedance gain setting of the ALS-X_FIBR_A_DC PD was wrong (changed it from 20k to 2k). In turn, this meant that 20 mW of light was actually on this PD.

After looking at the beatnote amplitude directly at the PD and finding it to be far too small, we decided to swap the PD with a spare (new PD S/N S1200248, old PD S/N S1200251). However, this did not improve the beatnote amplitude. (The removed PD was put back into the spares cabinet.)

We then looked for clipping and found that the beam on the first beam sampler after the fiber port was close to the edge. We moved the sampler so the beam is closer to the center of the optic. We also found the beam on the polarizing cube in the fiber path to be low. We moved the cube downwards to center the beam. After realigning the beam onto the broadband PD, the beatnote amplitude improved drastically. This alignment seems very sensitive.

We had to turn the laser power in the beatnote path down from 20 mW to about 6 mW on the broadband PD.

This required a recalibration of the ALS-X_LASER_IR_PD photodiode. The laser output power in IR is about 60mW.

The beatnote strength as read from the MEDM screens is now 4-7 dBm. It still seems to vary.

Comments related to this report
keita.kawabe@LIGO.ORG - 15:48, Thursday 04 July 2024 (78865)

To recap, the fundamental problem was alignment (it was probably close to clipping before, and started clipping over time due to temperature drift or whatever). Also, the PBS mount, or maybe the post holder for the fiber-beam mount, is not really great; a gentle push with a finger will flex something and change the alignment enough to change the beat note. We'll have to watch for a while to see if the beat note stays high enough.

The wrong transimpedance value in MEDM was not preventing the PLL from locking, but it was annoying: H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE was 20000 even though the interface box gain was 1. This kind of thing confuses us and slows down troubleshooting. Whenever you change the gain of the BBPD interface box, please don't forget to change the transimpedance value at the same time (gain 1 = 2k transimpedance, gain 10 = 20k).
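
For reference, a minimal sketch of that bookkeeping, assuming pyepics access; the gain-to-transimpedance mapping is the one given above, and the helper function name is illustrative only:

    # Sketch: keep the MEDM transimpedance record in step with the BBPD
    # interface-box gain (mapping from above: gain 1 -> 2k, gain 10 -> 20k).
    from epics import caput

    GAIN_TO_TRANSIMPEDANCE = {1: 2000, 10: 20000}

    def record_bbpd_gain(gain):
        """Update the DC transimpedance record after changing the box gain by hand."""
        if gain not in GAIN_TO_TRANSIMPEDANCE:
            raise ValueError(f"unexpected BBPD interface box gain: {gain}")
        z = GAIN_TO_TRANSIMPEDANCE[gain]
        caput("H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE", z)
        print(f"H1:ALS-X_FIBR_A_DC_TRANSIMPEDANCE set to {z} (box gain {gain})")

    # e.g. after switching the box back to gain 1:
    record_bbpd_gain(1)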

We took a pair of small pliers from the EE shop and forgot to bring them back from EX (sorry).

Everything else should be back to where it was. The Thorlabs power meter box was put on Camilla's desk.

Images attached to this comment
keita.kawabe@LIGO.ORG - 10:22, Friday 05 July 2024 (78882)

It's still good, right now it's +5 to +6 dBm.

Too early to tell, but we might be going back and forth diurnally between +3-ish and +7-ish dBm. A 4 dB power variation is big (a factor of ~2.5).
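
For reference, that factor follows from the standard dB-to-power-ratio relation; a quick check:

    # quick check of the factor quoted above: a 4 dB swing as a power ratio
    ratio = 10 ** (4 / 10)
    print(round(ratio, 2))   # -> 2.51, i.e. roughly a factor of 2.5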

If this is diurnal, it's probably explained by alignment drift, i.e. we're not yet sitting close to the global maximum. It's not worth touching up the alignment unless this becomes a problem, but if we do decide to improve it some time in the future, remember that we will have to touch both the PBS and the fiber launcher (or lens).

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 15:14, Thursday 04 July 2024 (78863)
ALSY SDF screenshots of work done by Daniel and Keita today

6 channels were accepted in the SDF diffs after the ALS adjustments done today.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 13:14, Thursday 04 July 2024 (78862)
Strange drops from Observing

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 5min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Keita and Daniel have come to site and gone to EX to adjust H1:ALS-X_FIBR_A_DEMOD_RFMON, which should allow a better beat note. Work Permit: 11962

I was looking at Verbal Alarms and noticed that the Intention Bit was flipped back and forth a large number of times from 12:02 UTC until 12:57 UTC.
I checked the SQZ manager during that time and didn't see much motion, so I'm still not sure what caused this.

Keita & Daniel have come back from End X for a snack and let me know that the photodiode is likely busted, and they are going to go back out there and swap it.
I have accepted some ALS SDF diffs.

Hopefully they can swap it out today and we can get locked sometime today.
 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:29, Thursday 04 July 2024 (78861)
CDS HW Status IOC: added check for IOP timing card status

The CDS HW STAT IOC was restarted this morning; the new code checks the status of the LIGO Timing Card in the I/O chassis.

LHO VE
david.barker@LIGO.ORG - posted 10:27, Thursday 04 July 2024 (78860)
Thu CP1 Fill

Thu Jul 04 10:17:39 2024 INFO: Fill completed in 17min 35secs

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:01, Thursday 04 July 2024 (78859)
Thursday OPS Day Shift Start

TITLE: 07/04 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:
We had been locked for 7+ hours until an earthquake unlocked us.
When I arrived, the IFO was unlocked and ALS_XARM was stuck on CHECK_CRYSTAL_FREQ.
I'm still trying to figure out how to make the H1:ALS-X_FIBR_LOCK_BEAT_FREQUENCY better.

Images attached to this report