H1 SQZ (SQZ)
karmeng.kwan@LIGO.ORG - posted 16:28, Friday 12 December 2025 - last comment - 17:38, Friday 12 December 2025(88503)
HAM7 VOPO swap progress

[Keita, Daniel, Karmeng]

The particle counter was acting up (black screen when we moved the stand, and high particle counts for the first three measurements). Picture attached for comparison with the handheld counter.

We removed all three cables connected to the old OPO; the PZT cable is wrapped with foil as a marker.

The old OPO was removed, wrapped, and kept in the bag used to store the new OPO. The new OPO is placed in the chamber, but not bolted down.

Images attached to this report
Comments related to this report
daniel.desantis@LIGO.ORG - 16:46, Friday 12 December 2025 (88504)

The trick to disconnecting the OPO cables was to install one flat dog on the front-right side of the OPO base, then remove the flat dog we had placed on the rear-left side of the OPO assembly earlier this week. This allowed us to slide the OPO back a bit and angle it up slightly so that the connectors were accessible and could be removed. Keita was able to hold the OPO in this position with one hand and loosen the jacking screws with the other (we thought this would be safer than trying to hold the OPO vertically above the VIP). We have a photo of this that Kar Meng may post later.

keita.kawabe@LIGO.ORG - 17:38, Friday 12 December 2025 (88505)

I first tried to undo the screws for the PZT cable connector on the OPO without lifting the OPO, but managed to hit the SFI1 (the one close to the -Y door) with a steel allen key twice. The wrench is tiny but we didn't want to repeat it many times.

Initially the OPO position was VERY tightly constrained in all directions (within about a 0.5 mm range). In addition to the dog clamps installed as the position reference, which restricted the motion in the -X and +Y directions, there was no room to move in +X and -Y (and not much in +Z) either, because the metal ferrule at the back of the PZT connector hit the SFI1. Lifting the OPO would have badly kinked the PZT cable. That's why we changed the position references of the OPO from (one left, one back) to (one left, two front).

After that, I was able to push the OPO in the +Y direction and lift the entire thing (with the cables still attached). I thought about having a second person undo the connectors while I held the OPO mid-air, but Daniel came up with the idea that I only lift the front edge of the OPO, tilting it just enough that the cable connectors and the allen key stayed safely above the SFI1. I didn't have to bear the load of the OPO while undoing the connectors, so it was much safer than my alternative.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:22, Friday 12 December 2025 (88502)
OPS Day Shift Summary

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

IFO is in IDLE for Planned Engineering

TLDR - Last few big remaining lock health checks were successfully done.

First, initial alignment ran successfully and automatically.

We then started our day with 5 locking steps we wanted to confirm were working, in case we could not fully lock:

To get around the SRC1 P/Y instability, which would take time we don't have to fix, we locked while manually touching SRM at different points, notably DRMI_LOCKED_CHECK_ASC, PREP_ASC_FOR_FULL_IFO, and MOVE_SPOTS. However, after being unable to get past MOVE_SPOTS (we couldn't keep RF POP90 low enough with SRM touches), we tried a different method, suggested by Jenne, to get through the remaining checks:

  1. Stop at 25W, touch SRM (counts on H1:LSC-POPAIR_B_RF90_I_NORM_MON good if under 12; see the sketch after this list)
  2. Manual to Lownoise Coil Drivers - Successful.
  3. Auto until LOWNOISE_ESD_ETMX - Successful.
  4. Manual to OMC Whitening, damp violins - Successful. Tony damped violins.
  5. Manual to Laser Noise Suppression - Mostly successful - Ryan S took care of this with advice from Keita about the scaling of the IMC_REFL and LSC_REFL IN1/2 and Fast Gain Servo Gains.
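
As a side note, the step-1 check above amounts to watching a single EPICS channel against the 12-count threshold; a minimal sketch of that (assuming pyepics and channel access from a workstation, not part of any Guardian code) is:

```python
# Minimal sketch: watch POPAIR_B RF90 while touching up SRM by hand.
# Assumes pyepics and EPICS channel access; threshold is the ~12 counts
# quoted in step 1 above.
import time
from epics import caget

CHANNEL = "H1:LSC-POPAIR_B_RF90_I_NORM_MON"
THRESHOLD = 12.0  # counts, "good if under 12"

while True:
    counts = caget(CHANNEL)
    if counts is None:
        print("no reply from channel access")
    elif counts > THRESHOLD:
        print(f"RF90 = {counts:.1f} counts -- touch up SRM")
    else:
        print(f"RF90 = {counts:.1f} counts -- OK")
    time.sleep(1)
```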

For a very comprehensive summary of locking progress, check out Ryan S's Wonderful Alog 88498.

Lock Acquisitions:

  1. Lockloss due to SRC1 P/Y at Prep_ASC_FOR_FULL_IFO. SRM railed prior.
  2. Lockloss due to Technical Cleaning at HAM2. HAM2 WD Tripped. Cleaning stopped after this.
  3. Lockloss due to failed SRM adaptation (with ops acting as SRC1) at MOVE_SPOTS
  4. Lockloss at Laser Noise Suppression upon activation of IMC_REFL Fast Gain at 25W
  5. Lockloss at Laser Noise Suppression upon activation of IMC_REFL Fast Gain at 25W - though this time we confirmed it was likely the fast gain since Keita and Ryan S stepped through the state with gains scaled to 25W.

Meanwhile,

LOG:                                                                                                                                                                                                      

Start Time System Name Location Lazer_Haz Task Time End
15:57 SAFETY LASER HAZ STATUS LVEA N LVEA is LASER SAFE ദ്ദി (•_•) 10:51
15:38 FAC Nellie and Kim LVEA HAM7, HAM2, HAM1 N Technical Cleaning 17:01
16:05 VAC Jordan LVEA N Purge Air Meas. 16:19
18:21 EE Marc CER N Hi-Voltage HAM7 18:29
18:53 SQZ Kar Meng, Daniel, Keita LVEA N HAM7 OPO 19:53
21:03 TCS Matt Optics, Vac Prep N CHETA work 23:18
21:51 VAC Jordan LVEA N Closing gate valves, turning on HAM1 cleanroom 23:19
22:21 ISC Jennie LVEA N Looking for camera mount 22:47
22:21 JAC Daniel LVEA N JAC cabling 00:21
22:25 SQZ Keita, Kar Meng, Daniel D. LVEA N HAM7 OPO swap 00:18
22:48 JAC Marc LVEA N JAC cabling 00:48
23:11 FIT Masayuki Arms N On a run 23:41
23:53 JAC Jennie Opt Lab N Looking for parts 00:53
00:04 JAC Masayuki Opt Lab N Joining Jennie 01:04
H1 General (ISC, OpsInfo)
ryan.short@LIGO.ORG - posted 16:18, Friday 12 December 2025 (88498)
The Final Day of H1 Locking for 2025

I. Abouelfettouh, T. Sanchez, K. Kawabe, M. Todd, R. Short

Ibrahim kicked off the locking attempts this morning after an initial alignment. It sounds like one lockloss was during PREP_ASC due to SRM alignment running away (before the "human servo" was implemented), and another was simply due to cleaning activities near HAM1/2. More details in his shift summary.

Matt set the X-arm ring heaters back to their nominal settings after having used inverse filters yesterday; see alog88494.

While relocking, we noticed that PRC1_P seemed to be pulling alignment in the wrong direction after engaging DRMI ASC, so I turned it off and aligned PRM by-hand. During ENGAGE_ASC, I noticed ADS PIT3 was taking a long time to converge, so after all ASC and soft loops had finished, I checked the POP_A offsets, and they indeed needed updating. Pitch was a bit different, so this explains why PRC1_P was misbehaving. I've accepted these in SDF, see screenshot. During this whole relocking stretch, Ibrahim had been keeping SRM well aligned as SRC1 ASC is still disabled, but that proved too difficult during MOVE_SPOTS and we lost lock.

On the next attempt, we were able to get all the way to 25W automatically (with Ibrahim again acting as the human SRC1 servo). Instead of trying to keep up with the spot move, we jumped to LOWNOISE_COIL_DRIVERS, where I watched the coil driver states successfully change for all optics (PRM, PR2, SRM, SR2, BS, ETMY, ETMX, ITMY, and ITMX). Then, we were able to simply return ISC_LOCK to auto and request LOWNOISE_ESD_ETMX, which exercised the lownoise ESD transitions for both ETMs. This worked without issue. We then planned to test OMC whitening, so we jumped to the OMC_WHITENING state, where Tony and Ibrahim began damping violin modes, which were very rung up. Before the violins were able to damp low enough to turn on OMC whitening, we decided that rather than waiting, we should try the REFL B transition done in LASER_NOISE_SUPPRESSION first. We turned off violin damping, I commented out the step of engaging the ISS secondloop in LASER_NOISE_SUPPRESSION (we would test this later, but couldn't at this point since we only had 25W of input power), and jumped down to LASER_NOISE_SUPPRESSION. The state ran without issue until the very end, where the IMC REFL servo fast gain is stepped up as the IMC REFL input gains are stepped down; this is the last step of the state, only there to potentially survive earthquakes better, and it caused a lockloss.

The final locking attempt of the day began the same as the one before, with 25W being achieved automatically, jumping to LOWNOISE_COIL_DRIVERS, going through the lownoise ESD states, and jumping up to OMC_WHITENING. Contrary to before, we waited here while damping violins until the OMC whitening was turned on. I'd argue Guardian turned this on a bit prematurely as the OMC DCPDs immediately saturated, but the IFO did not lose lock. Violin modes damped quickly and soon the saturation warnings subsided. Our plan after confirming the OMC whitening was working was to try LASER_NOISE_SUPPRESSION again, but after talking through it and looking at Guardian code with Keita, we decided we should use some different gain settings to compensate for the fact that we were again only at 25W. We eventually settled on 8 dB of additional gain being needed on the inputs to the IMC and LSC REFL common mode boards, which Guardian adjusts during this state. I started going through the steps of LASER_NOISE_SUPPRESSION by-hand, but raising the IMC REFL servo IN1 gain to 17 dB instead of 9 dB and the LSC REFL servo IN1 and IN2 gains to 14 dB instead of 6 dB. I didn't get all the way to 14 dB for LSC REFL as we started hearing test mass saturation warnings, so I stopped at 10 dB instead. The last step of the state is to lower each of the IMC REFL input gains as you increase the IMC REFL fast gain, but on the fourth iteration of the step, we lost lock. It's possible the fast gain should also have been scaled due to the lower input power, but at least this was confirmed to be the problem step, as it's the same place we lost lock on the previous attempt.
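
As a sanity check on the 8 dB figure (assuming, which the log above doesn't state, that the nominal gains correspond to roughly 60 W input and that the REFL error-signal amplitude scales linearly with input power):

```python
# Back-of-envelope check of the extra common-mode-board input gain at 25 W.
# Assumptions: nominal gains correspond to ~60 W input, and the error-signal
# amplitude scales linearly with power, so the shortfall is 20*log10(P_nom/P_now).
import math

p_nominal = 60.0  # W, assumed nominal input power
p_now = 25.0      # W, where we stopped

extra_db = 20 * math.log10(p_nominal / p_now)
print(f"extra gain: {extra_db:.1f} dB")          # ~7.6 dB, consistent with the 8 dB used
print(9 + round(extra_db), 6 + round(extra_db))  # IMC IN1 -> 17 dB, LSC IN1/IN2 -> 14 dB
```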

After this last lockloss, we tested functionality of the ISS secondloop by ensuring PRM was misaligned, raising the input power to 62W, and using the IMC_LOCK Guardian to close the secondloop. This worked without issue and everything was returned to normal.

Even though we were not able to fully recover H1 to low noise today, overall we believe we have confirmed the functionality of the main systems in question following the power outage last week and various CDS/Beckhoff changes this week. Arm gate valves are now closed in preparation for the HAM1 vent on Monday, and we plan to see H1 again in its full glory roughly mid-February 2026.

Images attached to this report
LHO General
jordan.vanosky@LIGO.ORG - posted 15:19, Friday 12 December 2025 (88501)
HAM1 Cleanroom turned on in prep for vent

I powered on the HAM1 cleanroom at ~3:10 PST in prep for next week's HAM1 vent.

H1 CDS
david.barker@LIGO.ORG - posted 15:11, Friday 12 December 2025 (88500)
GV5 and GV7 removed from alarms

To prevent the alarms system from being permanently RED, I have removed the GV5 and GV7 channels while we are vented.

The current alarm is for the PT114B cold cathode (CP1), which tripped during the closing; we expect it to "catch" soon.

LHO VE
jordan.vanosky@LIGO.ORG - posted 15:09, Friday 12 December 2025 (88499)
GV5 & GV7 Hard Closed

Per WP 12926

GV-7 & GV-5 hard closed at 14:16 PST and 14:34 PST respectively. Metal valve to gate annulus opened as well.

PT-114 and PT-134 pirani interlocks tripped while I was nearby opening the annulus valves. I have re-enabled them from the control room.

FC-V3 (BSC3) and FC-V4 closed as well to isolate the FCT.

Plot of PT-120B (BSC2), PT-124B (CP-1) and PT-144B (CP-2) during valve closure attached.

Images attached to this report
H1 ISC (SQZ)
marc.pirello@LIGO.ORG - posted 10:36, Friday 12 December 2025 (88497)
Turned off HV for HAM7

We turned off the HV bypass for upcoming HAM7 work.  The power is located in the racks on the Mezzanine in the Mechanical Room.

I followed Fil's instructions: first turning off the 24V for the interlock chassis, then removing the bypass from the interlock chassis, then switching off the SQZ_PZT power.  Finally, I switched off the SQZ_TTFSS power, so all high voltage is off.

Marc, Keita

LHO VE
david.barker@LIGO.ORG - posted 10:30, Friday 12 December 2025 (88496)
Fri CP1 Fill

Fri Dec 12 10:08:13 2025 INFO: Fill completed in 8min 10secs

 

Images attached to this report
H1 General
matthewrichard.todd@LIGO.ORG - posted 09:06, Friday 12 December 2025 (88494)
Yesterday resetting XARM

Yesterday morning during locking attempts I noticed the ring heaters in the XARM were set incorrectly, and ETMX was completely off. I'm not completely sure why this happened, but my guess at this point is that after we used the inverse filters last week, someone accepted SDFs or reverted to the safe values, so the inverse filters were turned off but the set values were not restored to their nominal state. I'll try to understand whether we can add this check to the TCS_RH_PWR guardian.

Anyway, I reused the inverse filters last night for ETMX and set them back to nominal this morning with the same set power.
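
As a rough illustration of what the TCS_RH_PWR guardian check mentioned above could look like (a sketch only, using Guardian's ezca interface; the channel names and nominal powers below are placeholders, not the real TCS channel list):

```python
# Hypothetical sketch of a ring-heater consistency check for a Guardian node.
# Channel names and nominal values are placeholders and would need to be
# replaced with the real TCS RH setpoint channels and nominal powers.
NOMINAL_RH_POWER = {
    'ETMX': 1.0,  # W, placeholder
    'ITMX': 0.4,  # W, placeholder
}

def check_ring_heaters(ezca, tolerance=0.01):
    """Return (optic, setpoint, nominal) for any RH set power off nominal."""
    bad = []
    for optic, nominal in NOMINAL_RH_POWER.items():
        setpoint = ezca['TCS-%s_RH_SETPOWER' % optic]  # placeholder channel name
        if abs(setpoint - nominal) > tolerance:
            bad.append((optic, setpoint, nominal))
    return bad
```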

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:34, Friday 12 December 2025 (88493)
OPS Day Shift Start

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.30 μm/s 
QUICK SUMMARY:

IFO is LOCKING

Plan for the morning is to continue locking attempts until ~2PM

H1 DAQ
jonathan.hanks@LIGO.ORG - posted 18:30, Thursday 11 December 2025 (88490)
WP 12930, test an alternate configuration for ECR E2500314

We have been seeing a low rate of data drops with the frame writer serving as both the frame writer and the data concentrator for the DAQD 0 systems.  This work is to try an alternate configuration: moving the gds broadcaster to be a data concentrator and making the frame writer single-purpose again.

The data drops are due to messages from the front end arriving too late and being discarded.  We are not seeing them on the DAQD 1 leg, only the 0 leg.  This leads us to suspect the issue is the load (and thus the responsiveness to input messages) of the FW machine.

Today I adjusted the system such that GDS0 became the data concentrator and the broadcaster.

I've attached a diagram of the current layout.

The main changes:

  1. Link TW0 to GDS0; this was done first to ensure the data link worked well and TW0 kept functioning.
    1. Add a 10G card into gds0 and two
    2. Moved TW0 to read data from gds0
  2. Move the FE input into the gds machine and reconfigure the FW machine to just be a receiver

The final migration followed this rough order

The control room monitoring and MEDM screens still expect a DC, FW, TW, NDS, and GDS.  So gds0, as the DC, is exporting DAQ-DC0 variables, and we are running an EPICS proxy IOC which maps DAQ-GDS0 channels to DAQ-DC0 for now.  This will change in the future.  Most of the DC0 variables will probably be taken over by the cps_recv process (this is in testing on the large test stand at LLO).
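
For illustration only, a stripped-down sketch of what such a proxy IOC could look like in Python (pcaspy serving the DC0-named PVs, pyepics reading the GDS0 ones); the PV names are placeholders and this is not the code actually running at the site:

```python
# Illustrative sketch of an EPICS proxy IOC: read H1:DAQ-GDS0_* PVs and
# re-serve them under H1:DAQ-DC0_*. The PV names below are placeholders.
from pcaspy import SimpleServer, Driver
from epics import caget

PVS = ['TOTAL_CHANS', 'DATA_RATE', 'UPTIME_SECONDS']  # placeholder subset

prefix = 'H1:DAQ-DC0_'
pvdb = {name: {'prec': 0} for name in PVS}

class ProxyDriver(Driver):
    def update(self):
        # mirror each GDS0 value into the corresponding DC0 PV
        for name in PVS:
            value = caget('H1:DAQ-GDS0_' + name)
            if value is not None:
                self.setParam(name, value)
        self.updatePVs()

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = ProxyDriver()

while True:
    server.process(0.1)  # handle channel access requests
    driver.update()      # refresh mirrored values
```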

There were a few daqd restarts to make sure everything was working.  Data is flowing and fw0 and fw1 are producing identical frames.

The plan is to let this run through to January and evaluate the error rate.

Images attached to this report
H1 SUS (CDS)
ryan.short@LIGO.ORG - posted 18:29, Thursday 11 December 2025 - last comment - 10:07, Friday 12 December 2025(88491)
ITMX Coil Driver State Binary Readback Issue

A. Effler, R. Short

People had noticed earlier an issue with SRM's coil driver binary IO state (see alog88486), so other optics were checked. ITMX's monitors showed some concerning colors (see first screenshot), but after taking spectra of the quad's top-mass OSEMs comparing similar lock states before and after the power outage last week (see second screenshot), I am confident in saying the actual behavior of ITMX is unchanged and this boils down to a readback issue on the binary IO. This evidence is backed up by the fact that ITMX has been correctly aligned in our locking attempts, and Anamaria and I cycled the state requests a few times and saw expected behavior from the suspension. Dave says he and others will be looking into this.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:07, Friday 12 December 2025 (88495)

ITMX's BIO readback started deviating this Wed at 15:05. At this time we were restarting the DAQ for a second time to install a new h1ascimc model and add the new Beckhoff JAC channels to the EDC. If you trend H1:SUS-ITMX_BIO_M0_MON it was nominal going into the DAQ restart, and in deviation when the DAQ came back, presumably a coincidence.

By this time on Wednesday all hardware changes had been completed (asc0 upgrade, Beckhoff chassis work), hence the DAQ restart.

Trending back, it looks like this also happened in March/April this year, just before the end of O4b (01apr2025). It started Sun 30mar2025 and ended Wed 02apr2025. I don't see any record of whether it was fixed on 02apr2025 or fixed itself.

H1 SUS (ISC)
oli.patane@LIGO.ORG - posted 18:25, Thursday 11 December 2025 (88429)
SR3, PR3, LPY Estimator ON vs OFF times + LSC/ASC comparisons

Overall: The main change we are seeing in the LSC and ASC signals with the PR3 and SR3 estimators ON is the lowering of the 1 Hz resonance in ASC, which is known to be a bit of a problem peak.

We've been wanting times within the same lock where we take quiet times with the SR3 and PR3 estimators in various ON/OFF configurations so we can get a better look at possible differences in ISC signals, since the difference we are looking for, if any, is very small and the frequency response changes from lock to lock. We finally were able to get some time on November 24th. I made sure to get two sets of ALL ON times, one at the beginning and one at the end, and then also two sets of ALL OFF times, similarly one at the beginning and one at the end, to get a better idea of what changes between the two are really due to the estimator configurations.

Times:

SR3, PR3 Est ALL ON (one set before & one set after other sets)
start: 1448060052
end: 1448060452

start: 1448063252
end: 1448063616

SR3, PR3 Est ALL OFF (one set before & one set after other sets)
start: 1448060469
end: 1448060977

start: 1448063631
end: 1448063846

SR3, PR3 Est JUST L
start: 1448060995
end: 1448061621

SR3, PR3 Est JUST P
start: 1448061643
end: 1448062049

SR3, PR3 Est JUST Y
start: 1448062067
end: 1448062407

SR3, PR3 Est LP
start: 1448062846
end: 1448063239

SR3, PR3 Est PY
start: 1448062423
end: 1448062828

I used my script /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/estimator_ISC/estimator_isc_comparison.m to plot ASDs for LSC (CTRL) and ASC (ERR) channels, as well as the LSC CAL channels and the ASC diode channels where an ASC channel is made up of more than one diode (e.g. INP1 uses REFL A 45I and REFL B 45I). I very quickly realized that the change between ON and OFF for each individual dof, or for LP or PY, is not large enough to accurately say whether there is a difference, so we're just going to look at all estimator dofs ON vs all OFF.

These result plots can be found in /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/estimator_ISC/H1/Results/ as revision r12809. On the plots, Light Green 'ON' is next to Dark Green 'OFF' in time, and Light Blue 'ON' is next to Dark Blue 'OFF' in time, so Light vs Dark Green and Light vs Dark Blue are the useful comparisons for showing drops in noise when the estimator is turned on.
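
For a quick look outside the MATLAB tools, here is a minimal gwpy sketch comparing the first ALL ON and ALL OFF stretches above; the stored channel name (CHARD P with a _DQ suffix) and the FFT settings are my assumptions, not what the script above uses:

```python
# Minimal sketch: ASD comparison of one ASC channel for the first
# "ALL ON" vs "ALL OFF" segments listed above. Channel name and FFT
# parameters are assumptions.
from gwpy.timeseries import TimeSeries

channel = 'H1:ASC-CHARD_P_OUT_DQ'  # assumed stored channel name
segments = {
    'estimators ON':  (1448060052, 1448060452),
    'estimators OFF': (1448060469, 1448060977),
}

asds = {}
for label, (start, end) in segments.items():
    data = TimeSeries.get(channel, start, end)
    asds[label] = data.asd(fftlength=50, overlap=25, method='median')

plot = asds['estimators ON'].plot(label='estimators ON')
ax = plot.gca()
ax.plot(asds['estimators OFF'], label='estimators OFF')
ax.set_xlim(0.1, 10)
ax.legend()
plot.savefig('chard_p_estimator_on_off.png')
```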

LSC
LSC, LSC-CAL (Length, Length Zoomed in)
- Don't show much improvement
    - Maybe MICH at 1 Hz has improved a bit with the estimators ON
    - PRCL shows slight decrease in noise between 3.5-6 Hz with the estimators ON
ASC
PITCH
REFL Diodes ASC - INP1, CHARD Pitch (would be affected by PR3):
- Don't show much improvement
- INP1 P shows slightly lower noise between 2.5-3.5 Hz
    - Similar to what was seen in INP1 P OUT in August: 86640
- Very small decrease in 1 Hz peak in CHARD
    - ASC-REFL_A_RF45_I_PIT sees a small decrease at 1 Hz while ASC-REFL_B_RF45_I_PIT does not see anything
AS Diodes ASC - MICH, SRC1, SRC2, DHARD Pitch (would be affected by SR3):
- Improvement seen in the 1 Hz resonance by 1.5-2x (except SRC1)
POP Diodes ASC - PRC1, PRC2 Pitch (would be affected by PR3):
- PRC1 P sees slight improvement seen in the 1 Hz resonance

YAW
REFL Diodes ASC - INP1, CHARD Yaw (would be affected by PR3):
- Don't show much improvement
- CHARD Y sees slight decrease between 4-6 Hz with the estimators ON
AS Diodes ASC - MICH, SRC1, SRC2, DHARD Yaw (would be affected by SR3):
- Don't show much improvement
POP Diodes ASC - PRC1, PRC2 Yaw (would be affected by PR3)
- PRC2 Y sees slight decrease between 4-6 Hz with the estimators ON
    - Similar to what was seen in PRC2 Y OUT in August: 86640

Non-image files attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 18:01, Thursday 11 December 2025 (88488)
Thursday Ops Shift Start

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.41 μm/s 
SHIFT SUMMARY:

Initial_Alignment completed quickly this morning with no intervention.
LL @ Start_TR_CARM

NO_IR found message when at CARM_TO_TR
Manualed back to Start_TR_CARM
LL @ Start TR CARM again

Matt noticed that the ETMX Ring Heater had not been on for 36 hours.

Reconfig of DAQ 0 leg, trying to pull a machine out. See Alog by Jonathan ***pending****
DC0 Status indicators may blink red on CDS Overview. 

TR_CARM Issue update: 
There was a perception that Beckhoff may have a "sticky slider" issue, so Dr. Driggers pushed all buttons associated with the Corner 5 Beckhoff ~18:15 UTC.
Sticky slider idea was ruled out @ 20:03 UTC.
Many hours later... it turned out to be the PLL Boost, which Dr. Driggers found, that was stopping us from getting past Start_TR_CARM.

Error with GDS0 @ 20:54 UTC was Jonathan doing DC0 reboots.

Violin Damping Guardian node restart ~21:25 UTC.
Manually setting violin damping settings. Lost lock when applying damping to EY mode 8.
The Guardian gateway needed to be restarted to solve this issue. Thank you Jonathan!
Psych! The Violin Guardian issue is back! Turns out it's a Python 2.7 -> Python 3+ issue. Thanks Dave!
 
LHO's IFO made it to MOVE_SPOTS before lockloss.

Errors with GDS0 @ 00:29 UTC & 01:09 UTC were Jonathan doing DC0 reboots again and finding an error.
Jonathan is doing another "zero leg" restart @ 1:20 UTC. Alog still pending, but here is a work permit.

It is currently 02:00 UTC and the locking team has called it for the night after losing lock at MAX_POWER.
The lockloss was from an ASC ring-up at around 275 kW of circulating power in the arms.


LOG:                          

Start Time System Name Location Lazer_Haz Task Time End
15:57 SAFETY LASER HAZ STATUS LVEA NO LVEA is LASER SAFE ദ്ദി (•_•) 10:51
15:57 FAC Kim & Nellie LVEA Y Technical Cleaning 16:33
17:23 JAC Jennie W, & Co JAC Lab N Working on JAC 19:23
17:39 FAC Randy X-arm N Clearing the path of tumbleweeds 19:39
17:48 Tours Cassidy & tour CTRL-Overpass N Giving a tour 18:48
17:52 LASER Safety Travis LVEA, FCES, EY N Updating laser status signs around the site 18:49
18:22 SQZ Karmeng & Daniel LVEA SQZt7 YES Working on HAM7 upgrades 18:57
19:26 JAC/Cheta Betsy Optics lab N Dropping off parts. 19:56
19:27 CDS Jonathan MSR N Rebooting DC0 & making CDS_Overview angry 20:57
19:31 ISC Jeff & Jennie D LVEA Y Checking on status of PSL ISCT racks. 20:11
19:42 JAC J.W. JAC Lab N Jac Work 19:48
22:15 FAC Tyler OSB Receiving N Forklifting something quickly over to OSB Receiving. 22:35
22:27 Laser Trans Sheila & Anamaria LVEA Yes Transitioning the LVEA to Laser Hazard. 22:57
22:53 VAC Gerardo & Jordan LVEA N Getting parts 23:13
23:46 PEM RyanC PCAL lab N Get data from dust monitor huddle test 00:01
00:10 CDS Jonathan MSR N Working on the DNS and DC0 servers 00:50
Images attached to this report
H1 SUS (CDS)
jeffrey.kissel@LIGO.ORG - posted 16:42, Thursday 11 December 2025 - last comment - 17:20, Thursday 11 December 2025(88486)
Allegedly Stuck SRM M3 UL Low-pass Binary IO Switch
J. Kissel, J. Driggers, A. Effler

Trying to figure out why the SRC1 loop, which is the fast AS_A_RF72 loop that controls SRM, is being finicky, I pulled open the H1 SUS SRM coil driver binary IO state request screen (see attached) and found that the UL low pass (LP) bit on the M3 stage is allegedly stuck -- it doesn't change when I go from state 1 (LP OFF, ACQ OFF) to state 3 (LP ON, ACQ OFF). The rest of the coils on the M3 stage change when receiving the same request.

I say "allegedly" because it can be either the actual switch is stuck (the BO chassis) or the readback has failed / is stuck (the BI chassis).

To be investigated...
Images attached to this report
Comments related to this report
anamaria.effler@LIGO.ORG - 17:20, Thursday 11 December 2025 (88487)

It seems this bit has been stuck on and off since Feb 21 (for state 1 of M3, 0 is good and 1 is bad in the attached ndscope). Also attached are spectra of the FASTIMONs, which are after the analog switches. The state 1 spectra look the same as before, and the state 3 spectra looked the same anyway, so it's more likely that the readback has gone bad.

I further checked that PRM/PR2/BS/ITMX/ITMY do the correct BIO switching from high noise to low noise, according to their readbacks. I have not double checked the spectra.

Images attached to this comment
H1 General
matthewrichard.todd@LIGO.ORG - posted 11:17, Thursday 11 December 2025 - last comment - 18:35, Thursday 11 December 2025(88480)
Morning Locking Attempts

M. Todd, J. Driggers, J. Kissel, S. Dwyer, T. Sanchez, D. Sigg


For posterity's sake, I thought it would be nice to record some of the actions and efforts the locking team was doing this morning.

Sheila noticed we were losing lock at START_TR_CARM, so we relocked DRMI and tried again, going back to PREP_TR_CARM and making sure things were reset. Upon trying to advance to START_TR_CARM, the ALS-C_REFL_DC_BIAS_GAIN was set before losing lock again.

This raised suspicion about the IMC_COMM_CARM path, and it was thought that the changes to Beckhoff over the last few days could have caused some of these problems. To double check that everything in this path is set to the values being "reported", Jenne went through and toggled every button and slider in the path as a "sticky slider" approach to solving this. The chassis work involved corners 4 and 5, but Jeff narrowed these issues down to the corner 5 chassis -- here is a wiring diagram of the chassis: D1100683.

We called in Daniel to see what his thoughts were on these issues.

Comments related to this report
sheila.dwyer@LIGO.ORG - 14:01, Thursday 11 December 2025 (88484)

We eventually locked TR_CARM by going through the guardian steps slowly.  We noticed that when we stepped through slowly, the ALS COMM VCO would pull the mode cleaner away, causing the TR_CARM path to rail.  We disabled the VCO internal servo that uses the frequency comparator to keep the VCO at a fixed frequency.  This has been used in the past for our successful transitions, but seemed to be railing each time this morning.

We left the ALS not shuttered, and ran the QPD servos while ASC and soft loops were engaged, with Tony acting as a SRC1 servo.  After all the ASC had engaged, the camera set points still looked good; I set the QPD offsets back to their values from SDF, and the camera offsets still looked good.  The guardian has been reset to shutter ALS as normal next time we relock.  The POP A yaw offset was off slightly, so I set it to -0.43 rather than -0.4.

We then transitioned to DC_READOUT using the guardian without any intervention, or known fixes for the problems we were having.  

We are able to turn on the SRC1 yaw loop, but the SRC1 pitch loop pulls the sideband buildups off.  I've added the SRC1 yaw loop back into ENGAGE_ASC_FOR_FULL_IFO.

jenne.driggers@LIGO.ORG - 18:02, Thursday 11 December 2025 (88489)

Re: Matt's alog this morning, I was worried that we had a sticky slider situation after the beckhoff work (which apparently isn't really a thing here, but was a thing at the 40m).  We moved sliders and flipped switches on the IMC common mode board, the IFO REFL common mode board, and the summing node. Probably that wasn't the issue.

After a few other unsuccessful TR_CARM attempts, I trended and found that during the successful by-hand attempt Sheila mentions, we had forgotten to turn off H1:ALS-C_COMM_PLL_BOOST, so we did the whole transition with the boost left on.  I have now set the guardian to leave the boost on, and we have now gotten through this several times without any intervention.  I also added turning off the ALS COMM VCO by clicking the Ext button (and resetting it to Int in PrepForLocking); however, it's possible that that wasn't necessary, since guardian already had changing the On/Off switch as part of these states.  The real key seems to be leaving H1:ALS-C_COMM_PLL_BOOST on.

As Sheila said, DC readout seems to just be working fine, no intervention needed.  Total mystery why it hadn't been working on Tuesday.

We have now powered up to 25 W two times!  Even if we're not able to get farther than this, there are very few items left that would need to be checked using the full IFO (e.g., the ISS second loop can be checked using IMC-only).

SRC1 P is still out of the guardian.  One time I was able to close it using the offset from lownoise ASC, at Elenna's suggestion.  But, we still lost lock in MoveSpots.  The other time I was by-hand watching SRM pitch. No need to move it up to 25W, and I think I picked the wrong direction during move spots, so we still lost lock.

We got up past 25W a third time.  This time, rather than doing MOVE_SPOTS, I manual-ed and did RF9 and then RF45 modulation depth reduction.  Both of those were fine, although the 45 MHz reduction did confuse my Jenne-in-the-loop SRM alignment loop. At some point, it became clear that SRC1 yaw was pulling us away, so we turned that off and I started dealing with SRM yaw alignment as well as SRM pit alignment.  We then did move spots with both SRC1 pit and yaw open and me watching them.  That seemed fine.  We then started going to MAX_POWER.  I think we got up to 50W, but then we got a nasty ASC ringup and lost lock.

The only analog things that we haven't tested yet are (I think) coil driver switching, ESD switching, OMC whitening switching, and ISS second loop engagement. Tony had checked PCals earlier in one of our locks, and they were fine.

It sounds like there will be some folks around to perhaps try locking again tomorrow.  However, as Betsy pointed out, we now have a pretty small list of things that *haven't* been tested, so even if we aren't able to get to full NLN we have very few things to be suspicious of when relocking next calendar year after the vent. The ISS second loop we can test using IMC-only.  Anamaria and Jeff were thinking through how we could check (using, e.g., the IMON channels) the sus actuator switching. I think that we could also try OMC whitening switching using a single-bounce OMC.

ryan.short@LIGO.ORG - 18:35, Thursday 11 December 2025 (88492)

Due to it giving Jenne trouble and pulling alignment away, I have commented out SRC1_Y from ENGAGE_ASC. This means the "human servo" will need to be in-use for now for SRM, especially during powerup and the spot move.

I've attached the ASC trends around where the ringup happened during the powerup. Looks like this is the known ~0.45 Hz instability and it's possible we were just unlucky.

The plan for tomorrow is to start locking just like we did this last time (fully auto with someone monitoring SRM alignment) and see how far we can get. If we're unable to fully reach low noise, there are ways we can check certain systems for functionality before we plan to close the arm gate valves mid-afternoon.

Images attached to this comment