H1 IOO
elenna.capote@LIGO.ORG - posted 11:44, Friday 12 September 2025 (86879)
Change in IMC visibility

A quick calculation of the IMC visibility:

Before power outage:

IMC refl when offline = 46.9

IMC refl when online at 2 W = 0.815

1 - 0.815/46.9 = 0.982

After power outage:

IMC refl when offline = 46.5

IMC refl when online at 2 W = 2.02

1 - 2.02/46.5 = 0.957

So we think we have lost 2.6% of the visibility.
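
For completeness, the same arithmetic as a tiny Python snippet (values copied from above):

```python
# Minimal sketch of the visibility arithmetic above (values copied from this entry).
def imc_visibility(refl_offline, refl_online):
    """Visibility = 1 - (locked IMC REFL power / unlocked IMC REFL power)."""
    return 1.0 - refl_online / refl_offline

before = imc_visibility(46.9, 0.815)   # ~0.983
after = imc_visibility(46.5, 2.02)     # ~0.957
print(f"before: {before:.3f}, after: {after:.3f}, lost: {before - after:.1%}")
```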

H1 DetChar (DetChar, DetChar-Request)
elenna.capote@LIGO.ORG - posted 11:32, Friday 12 September 2025 - last comment - 10:20, Monday 15 September 2025(86878)
Detchar request: help understanding our IMC-related glitches.

I posted the following message in the Detchar-LHO mattermost channel:

Hey detchar! We could use a hand with some analysis on the presence and character of the glitches we have been seeing since our power outage Wednesday. They were first reported here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=86848 We think these glitches are related to some change in the input mode cleaner since the power outage, and we are doing various tests like changing alignment and power, engaging or disengaging various controls loops, etc. We would like to know if the glitches change from these tests.

We were in observing from roughly GPS time 1441604501 to 1441641835 after the power outage, with these glitches and broadband excess noise from jitter present. The previous observing period from roughly GPS 1441529876 to 1441566016 was before the power outage and these glitches and broadband noise were not present, so it should provide a good reference time if needed.

After the power outage, we turned off the intensity stabilization loop (ISS) to see if that was contributing to the glitches. From 1441642051 to 1441644851, the ISS was ON. Then, from 1441645025 to 1441647602 the ISS was OFF.

Starting from 1441658688, we decided to leave the input mode cleaner (IMC) locked with 2 W input power and no ISS loop engaged. Then, starting at 1441735428, we increased the power to the IMC from 2 W to 60 W, and engaged the ISS. This is where we are sitting now. Since the interferometer has been unlocked since yesterday, I think the best witness channels out of lock will be the IMC channels themselves, like the IMC wavefront sensors (WFS), which Derek reports are a witness for the glitches in the alog I linked above.
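
In case it is useful, here is a rough sketch of how the ISS-on/ISS-off stretches above could be pulled for one of the IMC WFS channels, assuming gwpy and NDS/frame access; the stored channel name may carry a different suffix (e.g. _DQ) than shown:

```python
from gwpy.timeseries import TimeSeries

# GPS stretches from this entry: ISS second loop ON, then OFF (IMC locked at 2 W)
segments = {"ISS on": (1441642051, 1441644851),
            "ISS off": (1441645025, 1441647602)}

# Example witness channel from the IMC; the stored name may differ slightly
channel = "H1:IMC-WFS_A_DC_YAW_OUT"

asds = {}
for label, (start, end) in segments.items():
    data = TimeSeries.get(channel, start, end)      # fetch via NDS2/frames
    asds[label] = data.asd(fftlength=8, overlap=4)  # amplitude spectral density

plot = asds["ISS on"].plot(label="ISS on")
ax = plot.gca()
ax.plot(asds["ISS off"], label="ISS off")
ax.set_xscale("log")
ax.set_yscale("log")
ax.legend()
plot.savefig("imc_wfs_iss_on_vs_off.png")
```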

Comments related to this report
elenna.capote@LIGO.ORG - 10:20, Monday 15 September 2025 (86927)

To add to this investigation:

We attenuated the power on IMC refl, as reported in alog 86884. We have not gone back to 60 W since, but it would be interesting to know a) if there were glitches in the IMC channels at 2 W before the attenuation, and b) if there were glitches at 2 W after the attenuation. We can also take the input power to 60 W without locking to check if the glitches are still present.

H1 IOO
sheila.dwyer@LIGO.ORG - posted 11:30, Friday 12 September 2025 (86877)
IMC OLG measurement comparison

Ryan S, Keita, Oli, Sheila

Using the instructions found here, we used the netgpib script to get data from an IMC OLG measurement (already checked yesterday, 86852).

Oli and I adapted Craig's quick TF plot script to plot two TFs on top of each other, and found some data from Oct 31st 2024 where the IMC OLG was measured with 2 W input power (80979).

We adjusted the gain of the 2024 measurement so that its first point matches today's measurement. The IMC cavity pole is at 8.8 kHz, so since this measurement cuts off at 10 kHz it will be difficult to get information about the IMC cavity pole from it.

This script is in sheila.dwyer/IOO/IMC_OLG/quick_2tfs_plot.py; legend entries and file names are hardcoded rather than passed as arguments.
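
For anyone without access to that script, here is a rough sketch of the same kind of overlay, with the 2024 trace gain-matched to the first point of today's measurement. The file names and column ordering here are assumptions, not what the script actually uses:

```python
import numpy as np
import matplotlib.pyplot as plt

def load_tf(path):
    # Assumed column order: frequency [Hz], magnitude [dB], phase [deg]
    freq, mag_db, phase_deg = np.loadtxt(path, unpack=True)
    return freq, mag_db, phase_deg

# Hypothetical file names; the real exports from the netgpib script may differ
f_new, mag_new, ph_new = load_tf("imc_olg_2025-09-12.txt")
f_old, mag_old, ph_old = load_tf("imc_olg_2024-10-31.txt")

# Shift the 2024 magnitude so its first point matches today's first point
mag_old = mag_old + (mag_new[0] - mag_old[0])

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(f_new, mag_new, label="2025-09-12, 2 W")
ax_mag.semilogx(f_old, mag_old, label="2024-10-31, 2 W (gain-matched)")
ax_mag.set_ylabel("Magnitude [dB]")
ax_mag.legend()
ax_ph.semilogx(f_new, ph_new)
ax_ph.semilogx(f_old, ph_old)
ax_ph.set_ylabel("Phase [deg]")
ax_ph.set_xlabel("Frequency [Hz]")
fig.savefig("imc_olg_comparison.png")
```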

 

Non-image files attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:53, Friday 12 September 2025 (86876)
Fri CP1 Fill

Fri Sep 12 10:08:39 2025 INFO: Fill completed in 8min 35secs

 

Images attached to this report
LHO General
ryan.short@LIGO.ORG - posted 07:41, Friday 12 September 2025 - last comment - 08:19, Friday 12 September 2025(86871)
Ops Day Shift Start

TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.16 μm/s
QUICK SUMMARY: No observing time for H1 overnight. Investigations will continue today into issues caused by the site power outage on Wednesday.

Comments related to this report
ryan.short@LIGO.ORG - 07:59, Friday 12 September 2025 (86873)

Things looking relatively stable overnight. Some trends and current snapshot of MC TRANS and REFL attached.

Images attached to this comment
sheila.dwyer@LIGO.ORG - 08:19, Friday 12 September 2025 (86875)

Here's an image of a time when the IMC was locked at 2W before the power outage to compare to: archive screenshot

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 19:20, Thursday 11 September 2025 (86870)
OPS Eve Shift Summary

TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: NONE
SHIFT SUMMARY:

IFO is in DOWN and MAINTENANCE

We are still working on figuring out what happened post-outage that is yielding a glitchy interferometer when locked.

Due to the risk of damaging the interferometer, the decision has been made to cancel the OWL shift, shorten the EVE shift and stay in DOWN until tomorrow.

PSL dust is fluctuating but has been increasing in the last 2-3 hours. I assume this is due to the PSL work that was done 2-3 hours ago; this has not been an issue in the past, but I've attached a plot.

LOG:

None

Images attached to this report
LHO FMCS (PEM)
ibrahim.abouelfettouh@LIGO.ORG - posted 19:01, Thursday 11 September 2025 (86869)
Checking HVAC Fans - Weekly FAMIS 26686

Closes FAMIS 26686. Last checked in alog 86718

Everything below threshold with the outage clearly visible as a blip (mostly in MY and MX fans).

Plots attached.

Images attached to this report
H1 ISC
oli.patane@LIGO.ORG - posted 18:09, Thursday 11 September 2025 - last comment - 08:01, Friday 12 September 2025(86866)
Glitch rate from overnight lock

This was originally pointed out by Camilla or Elenna earlier today, but I wanted to record it here in case it can help us figure out what the issue is. During last night's low-range lock after the power outage (2025-09-11 03:00:47 - 17:42:34 UTC), our glitch rate was much higher than it typically is, and the glitches were mainly confined to several specific frequencies (main summary page, glitches page). I've been able to pin down some of these frequencies, but there are some lines whose exact frequencies I haven't been able to narrow down yet.

Here are the frequencies (in Hz) I confirmed, as well as guesses for the other lines:
16.91
24-ish
29.37
37.32
47.41
60-ish
76-ish
90-ish
120-ish
156.97
199.44
250-ish
321.95
409.04
510-ish
660.30

I've plotted the frequencies lined up next to each other, as well as the difference between each frequency and the previous one, and we can see a slow exponential increase in the spacing between the glitch lines. The yellow points are the ones that are only approximate, not exact values.
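
For reference, the spacing plot can be reproduced from the list above with a few lines (approximate values included as-is):

```python
import numpy as np
import matplotlib.pyplot as plt

# Glitch line frequencies (Hz) from the list above; "-ish" values are approximate
freqs = np.array([16.91, 24, 29.37, 37.32, 47.41, 60, 76, 90, 120,
                  156.97, 199.44, 250, 321.95, 409.04, 510, 660.30])
diffs = np.diff(freqs)   # spacing between each line and the previous one

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.plot(freqs, "o")
ax1.set_xlabel("Line number")
ax1.set_ylabel("Line frequency [Hz]")
ax2.plot(freqs[1:], diffs, "o")
ax2.set_xlabel("Line frequency [Hz]")
ax2.set_ylabel("Spacing to previous line [Hz]")
fig.tight_layout()
fig.savefig("glitch_line_spacing.png")
```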

Additionally, once we turned the ISS Second Loop off at 16:55 UTC, the glitches previously appearing between 500 and 1000 Hz stopped almost altogether, the glitches at 409 Hz and below became a lot more common and louder, and we also saw some extra glitches start above 4000 Hz. We understand the glitches above 4000 Hz, but we aren't sure why the glitches between 500 and 4000 Hz would stop when we did this.

Hoping this might help shine a light on some possible electronics issue?

Images attached to this report
Comments related to this report
derek.davis@LIGO.ORG - 08:01, Friday 12 September 2025 (86874)

The exponential behavior noted in this alog is related to how frequencies are chosen for the sine-Gaussian wavelets used by Omicron. This type of frequency behavior is what we would expect for broadband glitches, and unfortunately, it does not relate to their physical source in this case. 
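
As a toy illustration of this point: on a geometrically (log) spaced frequency grid, the spacing between neighboring frequencies grows by a constant ratio, which looks like a slow exponential increase when plotted against line number. The grid parameters below are made up and are not Omicron's actual tiling:

```python
import numpy as np

# Toy geometric frequency grid from 10 Hz to 1 kHz (spacing parameters are made up,
# not Omicron's actual tiling), to show why successive spacings grow "exponentially".
grid = np.geomspace(10, 1000, num=20)
spacing = np.diff(grid)
print(np.round(grid, 1))
print(np.round(spacing, 1))                      # spacings grow steadily
print(np.round(spacing[1:] / spacing[:-1], 3))   # ~constant ratio, i.e. geometric growth
```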

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 17:14, Thursday 11 September 2025 (86867)
OPS Eve Shift Start

TITLE: 09/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 22mph Gusts, 15mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.16 μm/s
QUICK SUMMARY:

IFO is DOWN for MAINTENANCE

Put briefly: something is broken and we don't know why. You can use this alog as a reference for the ideas and tests that our experts have tried today:

Much like yesterday, guesses involved the IMC, so Sheila and Ryan went into the PSL but found no immediate culprits. The IFO was extremely glitchy while observing, so we made the call not to try locking at the risk of damaging something. If commissioners think of anything to test, they will let me know.

Relevant alogs from throughout the day:

Alogs are still being written as it's been a long and busy day. I will add more throughout the shift.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:42, Thursday 11 September 2025 (86865)
Thursday Ops Day shift defeat

TITLE: 09/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

H1 started the day locked and observing, but something was strange: H1 h(t) triggers across multiple frequencies.
We went to commissioning and skipped the calibration because the IFO was obviously not running well.
Sheila and Oli went out on the floor to inject some test signals into H1:IMC-REFL_SERVO_COMEXCEN.

Sheila, Elenna, Camilla, Ryan, Daniel, Oli, & TJ spent most of the day trying to find out why the IFO is behaving so abnormally.
A journalist came into the control room for a quick tour with Fred.

There are fears that we may accidentally burn something if we try to lock, and perhaps the EVE and OWL shifts may be canceled.
Dust counts in the anti-room were very high today. After TJ went to the mech mezzanine to check and adjust the dust monitor pumps, he discovered that the vacuum lines for the dust monitor pumps were not securely connected, which could explain the dust. After that the dust levels did seem to fall.

Ryan & Sheila went into the PSL room to see if there were any obvious issues.

Rahul checked out the input arm suspensions and didn't find anything strange.
 

LOG:

Start Time | System | Name | Location | Laser Haz | Task | End Time
14:57 | FAC | Nellie | Optics lab | N | Technical cleaning | 15:13
16:06 | IMC | Sheila, Elenna, Oli | LVEA | N | Checking on the IMC | 18:06
17:54 | SEI | Jim | LVEA | N | Tripping HAM3 | 18:24
18:48 | PSL & PEM | Ryan S | LVEA | N | Checking the PSL environmental controls | 19:18
20:25 | JAC | Corey | Optics / vac lab | N | Working on JAC table | 20:25
20:43 | IMC | Daniel & Elenna | LVEA | N | Checking for oscillations on IMC | 21:13
22:11 | PSL | Sheila & Ryan S | PSL room | Yes | Checking for burnt smell in PSL | 22:56

 

 

H1 PEM
thomas.shaffer@LIGO.ORG - posted 14:45, Thursday 11 September 2025 (86863)
24 days with dust monitor 10 high on BSC2

Continuing from alog 86631, here's 24 days with DM10 mounted up high on BSC2 on a catwalk. The 3 Tuesdays are labeled, and I also plotted DM6 as a comparison; it has not moved and has stayed in the mega cleanroom.

Images attached to this report
H1 IOO
sheila.dwyer@LIGO.ORG - posted 14:15, Thursday 11 September 2025 (86860)
IMC transmission degraded after power outage, and in 60W locks since the power outage

Comparing now to a time before the power outage, the ISS diffracted power is the same (3.6%). The PMC transmission has dropped by 2%, and the PMC reflection has increased by 3.5%.

Looking at IMC refl relative to before the power outage, when the IMC was locked with 2 W input power:

Time | IMC refl | Refl, % of before outage | MC2 trans | MC2 trans, % of before outage
9/10 8:12 UTC (before power outage, 2 W IMC locked) | 0.74 | | 317 |
9/11 00:22 UTC (IMC relocked at 2 W after outage) | 1.15 | 155% | 312 | 98%
9/11 1:50 UTC (after first 60 W and quick lockloss, IMC relocked at 2 W) | 1.27 | 171% | 310 | 97%
9/11 18:17 UTC (after overnight 60 W lock; one IMC ASC loop off) | 2.06 | 278% | 278 | 87%
9/11 19:22 UTC (after all IMC ASC on) | 2.13 | 287% | 301 | 95%
9/11 21:17 UTC (after sitting at 2 W for 3 hours) | 2.03 | 274% | 303 | 95%

The attached screenshot shows that the ISS second loop increased the IMC input power during the overnight 60W lock to keep the IMC circulating power constant. 

Images attached to this report
H1 ISC
oli.patane@LIGO.ORG - posted 13:58, Thursday 11 September 2025 (86861)
Comparing IMC WFS during IMC offline before and after outage...but then it changed

After the lockloss today, we took the IMC offline for a bit, and I moved the IMC PZTs back to around where they had been before the outage. The time when we had the IMC offline today was September 11, 2025 18:07:32 - 18:11:32 UTC. I then found the last time before the outage when we had the IMC offline, which was September 02, 2025 17:03:02 - 17:19:02 UTC, and verified that the pointing for MC1 P and Y was about the same and that the IMC PZTs were in the same general area.

I then looked at IMC-WFS_{A,B}_DC_{PIT,YAW,SUM}_OUT during these times. The dtt, where blue is the Sept 2 time and red is the time from today, seems to show similar traces. The ndscopes (Sept 2, Sept 11) show many of those channels to be in the same place, but a few aren't: H1:IMC-WFS_A_DC_PIT_OUT has changed by 0.3, H1:IMC-WFS_A_DC_YAW_OUT has changed by 0.6, and H1:IMC-WFS_B_DC_YAW_OUT has changed by 0.07. However, after this time, we relocked the IMC for a while and then went back offline, between 19:25:22 - 19:31:22 UTC, and these values changed. I've added that trace to the dtt in green, and it still looks about the same. Many of the WFS values have changed, though.
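
A rough sketch of this comparison (mean WFS DC values over the two offline windows), assuming gwpy data access; the stored channel names may carry a different suffix than shown:

```python
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeriesDict

# WFS DC channels looked at above; the stored names may differ slightly
channels = [f"H1:IMC-WFS_{wfs}_DC_{dof}_OUT"
            for wfs in "AB" for dof in ("PIT", "YAW", "SUM")]

windows = {"before (Sep 02)": ("2025-09-02 17:03:02", "2025-09-02 17:19:02"),
           "after (Sep 11)":  ("2025-09-11 18:07:32", "2025-09-11 18:11:32")}

means = {}
for label, (start, end) in windows.items():
    data = TimeSeriesDict.get(channels, to_gps(start), to_gps(end))
    means[label] = {name: float(ts.value.mean()) for name, ts in data.items()}

for name in channels:
    delta = means["after (Sep 11)"][name] - means["before (Sep 02)"][name]
    print(f"{name}: changed by {delta:+.3f}")
```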

Images attached to this report
H1 SEI
jim.warner@LIGO.ORG - posted 13:23, Thursday 11 September 2025 - last comment - 09:53, Tuesday 16 September 2025(86859)
HAM3 H2 CPS is noisy, has been for a while

I don't really think this is related to the poor range, but it seems that one of the CPSs on HAM3 has excess high-frequency noise and has been noisy for a while.

The first image shows 30+ day trends of the 65-100 Hz and 130-200 Hz BLRMS for the HAM3 CPS. Something happened about 30 days ago that caused the H2 CPS to get noisy at higher frequency.

The second image shows RZ location trends for all the HAM ISIs for the last day, around the power outage. HAM3 shows more RZ noise after the power outage.

The last image shows ASDs comparing the HAM2 and HAM3 horizontal CPSs. HAM3 H2 shows much more noise above 200 Hz.

Since finding this, I've tried power cycling the CPS on HAM3 and reseating the card, but so far that has not fixed the noise. Since this has been going on for a while, I will wait until maintenance to either fix or replace the card for this CPS.
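
For reference, a minimal sketch of the band-limited RMS computation behind plots like the first image, with synthetic data standing in for the CPS channel (real channel names omitted rather than guessed):

```python
import numpy as np
from scipy.signal import welch

def blrms(data, fs, band):
    """Band-limited RMS of a time series, computed from its Welch PSD."""
    freqs, psd = welch(data, fs=fs, nperseg=int(8 * fs))
    mask = (freqs >= band[0]) & (freqs < band[1])
    df = freqs[1] - freqs[0]
    return np.sqrt(np.sum(psd[mask]) * df)

# Synthetic white noise standing in for a CPS channel fetched from frames/NDS
fs = 2048
t = np.arange(0, 600, 1 / fs)
data = np.random.normal(size=t.size)
for band in [(65, 100), (130, 200)]:
    print(band, blrms(data, fs, band))
```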

Images attached to this report
Comments related to this report
jim.warner@LIGO.ORG - 09:53, Tuesday 16 September 2025 (86959)

I've replaced the noisy CPS and adjusted the CPS setpoint to maintain the global yaw alignment, meaning I looked at the free-hanging (iso loops off) position before and after the swap and changed the RZ setpoint so the delta between the isolated and free-hanging RZ position was the same with the new sensor. The new sensor doesn't show either the glitching or the high-frequency noise that the old sensor had. I also changed the X and Y setpoints, but those only changed by a few microns and should not affect IFO alignment.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 11:40, Thursday 11 September 2025 - last comment - 15:13, Thursday 11 September 2025(86857)
Corner station dust monitors were recording high values overnight due to central pump offline

Tony, TJ, Dave:

After the power outage the CS dust monitors (Diode room, PSL enclosure, LVEA) started recording very large numbers (~6e+06 PCF). TJ quickly realized this was most probably a problem with the central pump and resolved that around 11am today.

2-day trend attached.

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 15:13, Thursday 11 September 2025 (86864)PEM

The pump was running especially hot and the gauge showed no vacuum pressure. I turned off the pump and checked the hose connections. The filter for the bleed valve was loose, the bleed screw was fully open, and one of the pump filters was very loose. After checking these I turned it back on and it immediately pulled down to 20 inHg. I trimmed it back to 19 inHg and rechecked a few hours later to confirm it had stayed at that pressure. The pump was also running much cooler at that point.

H1 ISC (PSL)
camilla.compton@LIGO.ORG - posted 11:19, Thursday 11 September 2025 - last comment - 14:01, Thursday 11 September 2025(86855)
Looking into if ISS 2nd Loop is causing IFO Glitches

Sheila, Ryan S, Tony, Oli, Elenna, Camilla, TJ ...

From Derek's analysis 86848, Sheila was suspicious that the ISS second loop was causing the glitches, see attached. Also attached is the ISS turning on in a normal lock vs. in this low-range glitchy lock; it's slightly more glitchy in this lock.

The IMC OLG was checked 86852.

Sheila and Ryan unlocked the ISS 2nd loop at 16:55 UTC. This did not cause a lockloss, although Sheila found that the IMC WFS saw a shift in alignment, which is unexpected.

See the attached spectra of IMC_WFS_A_DC_YAW and LSC_REFL_SERVO (identified in Derek's search); red is with the ISS 2nd loop ON, blue is OFF. There are no large differences, so maybe the ISS 2nd loop isn't to blame.

We lost lock at 17:42 UTC; it's unknown why, but the buildups started decreasing 3 minutes before the lockloss. It could have been from Sheila changing the alignment of the IMC PZT.

Sheila found that the IMC REFL DC channels are glitching whether the IMC is locked or unlocked, plot attached. But this seems to have been the case even before the power outage.

Once we unlocked, Oli put the IMC PZTs back to their locations from before the power outage; attached is what the IMC cameras look like locked at 2 W after the WFS converged for a few minutes.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 11:26, Thursday 11 September 2025 (86856)

Here are three spectrograms of the IMC WFS DC YAW signal: before the power outage, after the power outage with the ISS on, after the power outage with the ISS off.

I don't see glitches in the before outage spectrogram, but I do see glitches in BOTH the ISS on and ISS off spectrograms. The ISS off spectrogram shows more noise in general but it appears that it has the same amount of nonstationary noise.

Images attached to this comment
camilla.compton@LIGO.ORG - 12:05, Thursday 11 September 2025 (86858)

IM4 trans with the IMC locked at 2 W before the power outage was PIT, YAW: 0.35, -0.11. After the outage it was PIT, YAW: 0.39, -0.08. Plot attached. These are changes of 0.04 in PIT and 0.03 in YAW.

Oli and Ryan found that the alignment on the IMC REFL WFS was the same but PSL-ISS_SECONDLOOP was different; this makes us think the alignment change is in the MC or the IMs. The IM OSEMs haven't changed considerably.

Images attached to this comment
camilla.compton@LIGO.ORG - 14:01, Thursday 11 September 2025 (86862)PSL

Ryan, Sheila

Ryan and Sheila noticed that the PSL-PMC_TEMP changed ~0.5 deg without the setpoint changing. After looking into this, we compared to the April power outage, where it came back 1 deg cooler. We therefore don't think this is causing us any issues. Both power outages are plotted in the attachment.

We did notice, though, that some glitches in PMC-TEMP_OUT started ~31st May 2025 and have been present since, plot attached. Tagging PSL.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 12:24, Wednesday 10 September 2025 - last comment - 07:51, Friday 12 September 2025(86827)
H1 is down due to power outage.

We had a site-wide power outage around 12:11 local time. Recovery of CDS has started.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:40, Wednesday 10 September 2025 (86828)

I've turned the alarm system off; it was producing too much noise.

We are recovering front end models.

david.barker@LIGO.ORG - 13:51, Wednesday 10 September 2025 (86831)

Jonathan, Erik, Richard, Fil, Patrick, EJ, TJ, RyanS, Dave:

CDS is recovered. CDSSDF showing WAPs are on, FMCSSTAT showing LVEA temp change.

Images attached to this comment
david.barker@LIGO.ORG - 13:52, Wednesday 10 September 2025 (86832)

Alarms are back on (currently no active alarms). I had to restart the locklossalert.service; it had gotten stuck.

richard.mccarthy@LIGO.ORG - 07:51, Friday 12 September 2025 (86872)

The BPA dispatcher on duty said they had a breaker at the Benton substation open and reclose. At that time, they did not have a known cause for the breaker operation. Hanford Fire called to report a fire off Route 4 by Energy Northwest near the 115 kV BPA power lines. After discussions with the BPA dispatcher, the bump on the line (or breaker operation) may have been caused by a fault on the BPA 115 kV line, which also caused the fire. BPA was dispatching a line crew to investigate.

 
