H1 SUS (CDS, IOO, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:48, Monday 15 December 2025 (88512)
h1sush2b SUS models have OPTICALIGN alignment offsets saved in their safe.snaps
J. Kissel


Saved opticalign alignment sliders for SUS on the h1sush2b computer in the h1susim and h1sushtts models:
    h1susim
        H1SUSIM1
        H1SUSIM2
        H1SUSIM3
        H1SUSIM4
    h1sushtts (which will become h1susham1)
        H1SUSRM1
        H1SUSRM2
        H1SUSPM1

Ready for the h1sush2a + h1sush2b = h1sush12 IO chassis merge!
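
For the record, here's a minimal sketch (not the site SDF tooling) of how one could read back the OPTICALIGN offsets that should now be captured in the safe.snaps, assuming pyepics; the channel-name pattern is my assumption based on the usual H1 SUS conventions, not taken from the snap files themselves:

    # Sketch only: cross-check the OPTICALIGN offsets now stored in the
    # safe.snaps. Channel-name pattern is an assumed H1 SUS convention.
    from epics import caget  # pyepics

    OPTICS = ['IM1', 'IM2', 'IM3', 'IM4', 'RM1', 'RM2', 'PM1']

    for optic in OPTICS:
        for dof in ('P', 'Y'):
            ch = f'H1:SUS-{optic}_M1_OPTICALIGN_{dof}_OFFSET'  # assumed pattern
            print(ch, caget(ch))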
Images attached to this report
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 08:43, Monday 15 December 2025 (88511)
h1sush2a SUS models have OPTICALIGN alignment offsets saved in their safe.snaps
J. Kissel

Saved opticalign alignment sliders in the H1SUSMC1, MC3, PRM, and PR3 models, notably after I offloaded the IMC WFS (see LHO:88510).
Images attached to this report
H1 IOO (CDS, ISC, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:36, Monday 15 December 2025 (88510)
IMC WFS Offloaded
J. Kissel

In prep for the sush12 upgrade, I've offloaded the IMC WFS to the MC1, MC2, and MC3 OPTICALIGN sliders. This was done with the built-in "MCWFS_OFFLOADED" state in the IMC_LOCK guardian. There wasn't much alignment change requested by the WFS control signal; just ~5 counts on MC1 and MC3.
I'll now save the alignment offsets into the SUS safe.snaps.
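
For anyone repeating this from a script rather than MEDM, a hedged sketch of requesting the same offload state via the Guardian request channel (assuming the standard H1:GRD-<NODE>_REQUEST / _STATE convention):

    # Sketch: request the built-in offload state from IMC_LOCK and poll
    # until the node reaches it. Channel names assume the standard
    # Guardian H1:GRD-<NODE>_REQUEST / _STATE convention.
    import time
    from epics import caput, caget

    caput('H1:GRD-IMC_LOCK_REQUEST', 'MCWFS_OFFLOADED')
    while caget('H1:GRD-IMC_LOCK_STATE', as_string=True) != 'MCWFS_OFFLOADED':
        time.sleep(1)  # wait for the node to reach the requested state
    print('IMC WFS offloaded to the MC OPTICALIGN sliders')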
LHO General
thomas.shaffer@LIGO.ORG - posted 07:35, Monday 15 December 2025 - last comment - 12:19, Monday 15 December 2025(88508)
Ops Day Shift Start

TITLE: 12/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 25mph Gusts, 19mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.44 μm/s 
QUICK SUMMARY: Gate valves 5 & 7 are closed; the plan is to vent the CS and start the SUS CDS upgrade. Vent meeting at 8:30 in the control room today to coordinate.

Comments related to this report
thomas.shaffer@LIGO.ORG - 07:56, Monday 15 December 2025 (88509)

Noting the current state of select systems:

  • SEI
    • HAM7 - ISI tripped due to vent work
    • HAM3 - Looks like the ISI tripped Dec 13 0723 UTC. Untripping now.
    • As vent work progresses today, we will transition HAM1&2 into safe states
  • SUS
    • All SUS damped
    • OPO is in an odd state where the guardian says it's in SAFE, but the master switch is on and signals are going out. I'm guessing this is a result of Friday's vent activities; I will confirm with that team.
  • CDS
    • No major errors or systems in odd states
  • VAC
    • Gate valves 5&7 closed
    • Currently the CS is still under vacuum.
betsy.weaver@LIGO.ORG - 12:19, Monday 15 December 2025 (88516)
Fully into vent mode now; here is the current plan for the next few days.
Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:23, Sunday 14 December 2025 (88507)
Sun CP1 Fill

Sun Dec 14 10:06:15 2025 INFO: Fill completed in 6min 11secs

 

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:41, Saturday 13 December 2025 (88506)
Sat CP1 Fill

Sat Dec 13 10:01:33 2025 INFO: Fill completed in 1min 32secs

Quick fill; CP1 has been burping LN2 over the past day.

Images attached to this report
H1 SQZ (SQZ)
karmeng.kwan@LIGO.ORG - posted 16:28, Friday 12 December 2025 - last comment - 17:38, Friday 12 December 2025(88503)
HAM7 VOPO swap progress

[Keita, Daniel, Karmeng]

The particle counter is acting up (black screen when we move the stand, and high particle counts for the first three measurements). Picture attached for comparison with a handheld counter.

We removed all three cables connected to the old OPO; the PZT cable is wrapped with foil as a marker.

The old OPO was removed, wrapped, and kept in the bag used to store the new OPO. The new OPO is placed in the chamber, but not bolted down.

Images attached to this report
Comments related to this report
daniel.desantis@LIGO.ORG - 16:46, Friday 12 December 2025 (88504)

The trick to disconnecting the OPO cables was to install one flat dog on the front-right side of the OPO base, then remove the flat dog we had placed on the rear-left side of the OPO assembly earlier this week. This allowed us to slide the OPO back a bit and angle it up slightly so that the connectors were accessible and could be removed. Keita was able to hold the OPO in this position with one hand and loosen the jacking screws with the other (we thought this would be safer than trying to hold the OPO vertically above the VIP). We have a photo of this that Kar Meng may post later.

keita.kawabe@LIGO.ORG - 17:38, Friday 12 December 2025 (88505)

I first tried to undo the screws for the PZT cable connector on the OPO without lifting the OPO, but managed to hit the SFI1 (the one close to the -Y door) with a steel Allen key twice. The wrench is tiny, but we didn't want to risk repeating that many times.

Initially the OPO position was VERY tightly constrained in all directions (like within a 0.5mm range). In addition to the dog clamps installed as the position reference, which restricted the motion in the -X and +Y directions, there was no room to move in +X and -Y (and not much in +Z) either, because the metal ferrule at the back of the PZT connector would hit the SFI1. Lifting the OPO would also badly kink the PZT cable. That's why we changed the position references of the OPO from (one left, one back) to (one left, two front).

After that, I was able to push the OPO in the +Y direction and lift the entire thing (with the cables still attached). I thought about having a second person undo the connectors while I held the OPO mid-air, but Daniel came up with the idea that I would only lift the front edge of the OPO, tilting it just enough so that the cable connectors and the Allen key stayed safely above the SFI1. Since I didn't have to bear the load of the OPO while undoing the connectors, it was much safer than my alternative.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:22, Friday 12 December 2025 (88502)
OPS Day Shift Summary

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

IFO is in IDLE for Planned Engineering

TL;DR - The last few big remaining lock health checks were successfully done.

First, initial alignment ran successfully and automatically.

We then started our day with 5 lock steps we wanted to confirm were working, in case we could not fully lock:

To get around the SRC1 P/Y instability, which would take time we don't have to fix, we locked while manually touching SRM at different times, notably: DRMI_LOCKED_CHECK_ASC, PREP_ASC_FOR_FULL_IFO, MOVE_SPOTS. However, after not being able to get past MOVE_SPOTS (we couldn't keep RFPOP90 low enough with SRM touches), we tried a different method, suggested by Jenne, to get through the remaining checks:

  1. Stop at 25W, touch SRM (counts on H1:LSC-POPAIR_B_RF90_I_NORM_MON are good if under 12; see the monitoring sketch after this list)
  2. Manual to Lownoise Coil Drivers - Successful.
  3. Auto until LOWNOISE_ESD_ETMX - Successful.
  4. Manual to OMC Whitening, damp violins - Successful. Tony damped the violins.
  5. Manual to Laser Noise Suppression - Mostly successful - Ryan S took care of this with advice from Keita about the scaling of the IMC_REFL and LSC_REFL IN1/2 and Fast Gain Servo Gains.
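
For reference, a small sketch of how one might watch the step-1 criterion from the command line (pyepics assumed; the 12-count threshold comes straight from step 1 above):

    # Sketch: poll the POP RF90 buildup used as the SRM-touch criterion
    # in step 1. Ctrl-C to stop.
    import time
    from epics import caget

    CH = 'H1:LSC-POPAIR_B_RF90_I_NORM_MON'
    THRESHOLD = 12  # counts, from the step list above

    while True:
        val = caget(CH)
        print(f'{CH} = {val:.1f}  [{"OK" if val < THRESHOLD else "TOUCH SRM"}]')
        time.sleep(2)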

For a very comprehensive summary of locking progress, check out Ryan S's Wonderful Alog 88498.

Lock Acquisitions:

  1. Lockloss due to SRC1 P/Y at PREP_ASC_FOR_FULL_IFO. SRM railed prior.
  2. Lockloss due to Technical Cleaning at HAM2. HAM2 WD Tripped. Cleaning stopped after this.
  3. Lockloss due to failed SRM adaptation (with ops acting as SRC1) at MOVE_SPOTS
  4. Lockloss at Laser Noise Suppression upon activation of IMC_REFL Fast Gain at 25W
  5. Lockloss at Laser Noise Suppression upon activation of IMC_REFL Fast Gain at 25W - though this time we confirmed it was likely the fast gain since Keita and Ryan S stepped through the state with gains scaled to 25W.

Meanwhile, the day's activity LOG:

Start Time System Name Location Laser_Haz Task End Time
15:57 SAFETY LASER HAZ STATUS LVEA N LVEA is LASER SAFE ദ്ദി(•_•) 10:51
15:38 FAC Nellie and Kim LVEA HAM7, HAM2, HAM1 N Technical Cleaning 17:01
16:05 VAC Jordan LVEA N Purge Air Meas. 16:19
18:21 EE Marc CER N Hi-Voltage HAM7 18:29
18:53 SQZ Kar Meng, Daniel, Keita LVEA N HAM7 OPO 19:53
21:03 TCS Matt Optics, Vac Prep N CHETA work 23:18
21:51 VAC Jordan LVEA N Closing gate valves, turning on HAM1 cleanroom 23:19
22:21 ISC Jennie LVEA N Looking for camera mount 22:47
22:21 JAC Daniel LVEA N JAC cabling 00:21
22:25 SQZ Keita, Kar Meng, Daniel D. LVEA N HAM7 OPO swap 00:18
22:48 JAC Marc LVEA N JAC cabling 00:48
23:11 FIT Masayuki Arms N On a run 23:41
23:53 JAC Jennie Opt Lab N Looking for parts 00:53
00:04 JAC Masayuki Opt Lab N Joining Jennie 01:04
H1 General (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 16:18, Friday 12 December 2025 (88498)
The Final Day of H1 Locking for 2025

I. Abouelfettouh, T. Sanchez, K. Kawabe, M. Todd, R. Short

Ibrahim kicked off the locking attempts this morning after an initial alignment. It sounds like one lockloss was during PREP_ASC due to SRM alignment running away (before the "human servo" was implemented), and another was simply due to cleaning activities near HAM1/2. More details in his shift summary.

Matt set the X-arm ring heaters back to their nominal settings after having used inverse filters yesterday; see alog88494.

While relocking, we noticed that PRC1_P seemed to be pulling alignment in the wrong direction after engaging DRMI ASC, so I turned it off and aligned PRM by hand. During ENGAGE_ASC, I noticed ADS PIT3 was taking a long time to converge, so after all ASC and soft loops had finished, I checked the POP_A offsets, and they indeed needed updating. Pitch was a bit different, which explains why PRC1_P was misbehaving. I've accepted these in SDF; see screenshot. During this whole relocking stretch, Ibrahim had been keeping SRM well aligned since SRC1 ASC is still disabled, but that proved too difficult during MOVE_SPOTS and we lost lock.

On the next attempt, we were able to get all the way to 25W automatically (with Ibrahim again acting as the human SRC1 servo). Instead of trying to keep up with the spot move, we jumped to LOWNOISE_COIL_DRIVERS, where I watched the coil driver states successfully change for all optics (PRM, PR2, SRM, SR2, BS, ETMY, ETMX, ITMY, and ITMX). Then, we were able to simply return ISC_LOCK to auto and request LOWNOISE_ESD_ETMX, which exercised the lownoise ESD transitions for both ETMs. This worked without issue. We then planned to test OMC whitening, so we jumped to the OMC_WHITENING state, where Tony and Ibrahim began damping violin modes, which were very rung up. Before the violins were able to damp low enough to turn on OMC whitening, we decided that rather than waiting, we should first try the REFL B transition done in LASER_NOISE_SUPPRESSION. We turned off violin damping, I commented out the step of engaging the ISS secondloop in LASER_NOISE_SUPPRESSION (we would test this later, but couldn't at this point since we only had 25W of input power), and we jumped down to LASER_NOISE_SUPPRESSION. The state ran without issue until the very end, when the IMC REFL servo fast gain is stepped up as the IMC REFL input gains are stepped down; this last step of the state, which is only there to potentially survive earthquakes better, caused a lockloss.

The final locking attempt of the day began the same as the one before, with 25W being achieved automatically, jumping to LOWNOISE_COIL_DRIVERS, going through the lownoise ESD states, and jumping up to OMC_WHITENING. Contrary to before, we waited here while damping violins until the OMC whitening was turned on. I'd argue Guardian turned this on a bit prematurely, as the OMC DCPDs immediately saturated, but the IFO did not lose lock. Violin modes damped quickly and soon the saturation warnings subsided. Our plan after confirming the OMC whitening was working was to try LASER_NOISE_SUPPRESSION again, but after talking through it and looking at Guardian code with Keita, we decided we should use some different gain settings to compensate for the fact that we were again only at 25W. We eventually decided that 8 dB more gain was needed on the input to the IMC and LSC REFL common mode boards, which Guardian adjusts during this state. I started going through the steps of LASER_NOISE_SUPPRESSION by hand, but raising the IMC REFL servo IN1 gain to 17 dB instead of 9 dB and the LSC REFL servo IN1 and IN2 gains to 14 dB instead of 6 dB. I didn't get all the way to 14 dB for LSC REFL, as we started hearing test mass saturation warnings, so I stopped at 10 dB instead. The last step of the state is to lower each of the IMC REFL input gains as you increase the IMC REFL fast gain, but on the fourth iteration of the step, we lost lock. It's possible the fast gain should also have been scaled due to the lower input power, but at least this was confirmed to be the problem step, as it's the same place we lost lock on the previous attempt.
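
To illustrate the problem step (channel names here are my assumptions, not the actual Guardian code), the end of LASER_NOISE_SUPPRESSION trades IMC REFL input gain for fast gain in 1 dB increments, something like:

    # Illustration only: step the IMC REFL servo input gain down while
    # stepping the fast gain up, 1 dB at a time. Both attempts lost
    # lock on one of these iterations. Channel names are assumed.
    import time
    from epics import caget, caput

    IN1_GAIN = 'H1:IMC-REFL_SERVO_IN1GAIN'    # assumed channel name
    FAST_GAIN = 'H1:IMC-REFL_SERVO_FASTGAIN'  # assumed channel name

    for _ in range(4):                          # step count illustrative
        caput(IN1_GAIN, caget(IN1_GAIN) - 1)    # dB down on the input
        caput(FAST_GAIN, caget(FAST_GAIN) + 1)  # dB up on the fast path
        time.sleep(3)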

After this last lockloss, we tested functionality of the ISS secondloop by ensuring PRM was misaligned, raising the input power to 62W, and using the IMC_LOCK Guardian to close the secondloop. This worked without issue and everything was returned to normal.

Even though we were not able to fully recover H1 to low noise today, overall we believe we have confirmed the functionality of the main systems in question following the power outage last week and various CDS/Beckhoff changes this week. Arm gate valves are now closed in preparation for the HAM1 vent on Monday, and we plan to see H1 again in its full glory roughly mid-February 2026.

Images attached to this report
LHO General
jordan.vanosky@LIGO.ORG - posted 15:19, Friday 12 December 2025 (88501)
HAM1 Cleanroom turned on in prep for vent

I powered on the HAM1 cleanroom at ~15:10 PST in prep for next week's HAM1 vent.

H1 CDS
david.barker@LIGO.ORG - posted 15:11, Friday 12 December 2025 (88500)
GV5 and GV7 removed from alarms

To prevent the alarms system from being permanently RED, I have removed the GV5 and GV7 channels while we are vented.

Current alarm is for the PT114B cold cathode (CP1), which tripped during the closing; we expect it to "catch" soon.

LHO VE
jordan.vanosky@LIGO.ORG - posted 15:09, Friday 12 December 2025 (88499)
GV5 & GV7 Hard Closed

Per WP 12926

GV-7 & GV-5 hard closed at 14:16 PST and 14:34 PST respectively. Metal valve to gate annulus opened as well.

PT-114 and PT-134 Pirani interlocks tripped while I was nearby opening the annulus valves. I have re-enabled them from the control room.

FC-V3 (BSC3) and FC-V4 closed as well to isolate the FCT.

Plot of PT-120B (BSC2), PT-124B (CP-1) and PT-144B (CP-2) during valve closure attached.

Images attached to this report
H1 ISC (SQZ)
marc.pirello@LIGO.ORG - posted 10:36, Friday 12 December 2025 (88497)
Turned off HV for HAM7

We turned off the HV bypass for upcoming HAM7 work. The power is located in the racks on the Mezzanine in the Mechanical Room.

I followed Fil's instructions by first turning off the 24V for the interlock chassis, followed by removal of the bypass from the interlock chassis, and then switching off the SQZ_PZT power. Finally, I switched off the SQZ_TTFSS power, so that all high voltage is off.

Marc, Keita

LHO VE
david.barker@LIGO.ORG - posted 10:30, Friday 12 December 2025 (88496)
Fri CP1 Fill

Fri Dec 12 10:08:13 2025 INFO: Fill completed in 8min 10secs

 

Images attached to this report
H1 General
matthewrichard.todd@LIGO.ORG - posted 09:06, Friday 12 December 2025 (88494)
Yesterday's XARM ring heater reset

Yesterday morning during locking attempts I noticed the ring heaters in the XARM were set incorrectly, and ETMX's was completely off. I'm not completely sure why this happened, but my guess at this point is that after we used the inverse filters last week, someone accepted SDFs or reverted to the safe values; the inverse filters were turned off, but the nominal set values were not restored. I'll try to understand whether we can add this check to the TCS_RH_PWR guardian (a sketch of what such a check might look like follows below).

Anyway, I reused the inverse filters last night for ETMX and set them back to nominal this morning with the same set power.
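
Here's a sketch of the kind of set-value consistency check I have in mind; the channels and nominal values below are placeholders, not the real TCS configuration:

    # Placeholder sketch of a set-power consistency check for the
    # TCS_RH_PWR guardian. Channels and nominals are hypothetical;
    # 'ezca' and 'notify' are provided by the Guardian runtime.
    from guardian import GuardState

    NOMINAL_RH_PWR = {  # hypothetical nominal set powers, in watts
        'H1:TCS-ETMX_RH_UPPERPOWERSET': 0.0,
        'H1:TCS-ETMX_RH_LOWERPOWERSET': 0.0,
    }

    class CHECK_RH_SETTINGS(GuardState):
        def run(self):
            for ch, nominal in NOMINAL_RH_PWR.items():
                if abs(ezca[ch] - nominal) > 1e-3:
                    notify('%s = %s, expected %s' % (ch, ezca[ch], nominal))
                    return False  # hold here until settings are restored
            return True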

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:34, Friday 12 December 2025 (88493)
OPS Day Shift Start

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.30 μm/s 
QUICK SUMMARY:

IFO is LOCKING

Plan for the morning is to continue locking attempts until ~2PM

H1 SUS (CDS)
ryan.short@LIGO.ORG - posted 18:29, Thursday 11 December 2025 - last comment - 10:07, Friday 12 December 2025(88491)
ITMX Coil Driver State Binary Readback Issue

A. Effler, R. Short

People had noticed an issue earlier with SRM's coil driver binary IO state (see alog88486), so other optics were checked. ITMX's monitors showed some concerning colors (see first screenshot), but after taking spectra of the quad's top-mass OSEMs, comparing similar lock states before and after the power outage last week (see second screenshot), I am confident in saying the actual behavior of ITMX is unchanged and this boils down to a readback issue on the binary IO. This evidence is backed up by the fact that ITMX has been correctly aligned in our locking attempts, and Anamaria and I cycled the state requests a few times and saw expected behavior from the suspension. Dave says he and others will be looking into this.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:07, Friday 12 December 2025 (88495)

ITMX's BIO readback started deviating this Wed at 15:05. At this time we were restarting the DAQ for a second time to install a new h1ascimc model and add the new Beckhoff JAC channels to the EDC. If you trend H1:SUS-ITMX_BIO_M0_MON it was nominal going into the DAQ restart, and in deviation when the DAQ came back, presumably a coincidence.

By this time on Wednesday all hardware changes had been completed (asc0 upgrade, Beckhoff chassis work), hence the DAQ restart.

Trending back, it looks like this also happened in March/April this year, just before the end of O4b (01apr2025). It started Sun 30mar2025 and ended Wed 02apr2025. I don't see any record of whether it was fixed on 02apr2025 or fixed itself.
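
For anyone who wants to repeat the trend, a sketch using GWpy (times are illustrative; NDS2 access assumed):

    # Sketch: fetch and plot the BIO readback around the Wednesday DAQ
    # restart. Times are illustrative; NDS2 access is assumed.
    from gwpy.timeseries import TimeSeries

    data = TimeSeries.get('H1:SUS-ITMX_BIO_M0_MON',
                          '2025-12-10 14:00', '2025-12-10 16:00')
    plot = data.plot(ylabel='ITMX BIO M0 readback')
    plot.savefig('itmx_bio_m0_mon.png')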
