H1 SEI (OpsInfo)
jim.warner@LIGO.ORG - posted 10:48, Tuesday 09 December 2025 (88440)
HAM7 ISI is locked

Once Vac finished pulling the -Y door and the 2 access ports on the +Y side, I went out and locked the ISI. The A, B, and C lockers were fine; the D locker couldn't be fully engaged, which I think is a known issue for this ISI. I turned it until I started feeling uncomfortable resistance, so D is partially engaged.

LHO VE
david.barker@LIGO.ORG - posted 10:17, Tuesday 09 December 2025 (88439)
Tue CP1 Fill

Tue Dec 09 10:13:17 2025 INFO: Fill completed in 13min 13secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:34, Tuesday 09 December 2025 - last comment - 09:41, Tuesday 09 December 2025(88435)
CDS Power-outage Recovery Update

CDS is almost recovered from last Thursday's power outage. Yesterday Patrick and I started the IOCs for:

picket_fence, ex_mains, cs_mains, ncalx, h1_observatory_mode, range LED, cds_aux, ext_alert, HWS ETMY dummy

I had to modify the picket fence code to hard-code IFO=H1 because the python cdscfg module was not working on cdsioc0.
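For context, here is a minimal, hypothetical sketch of this kind of workaround (the real picket fence code and the cdscfg interface may differ): prefer the site-configuration lookup, and fall back to a hard-coded IFO when the module is unusable on a given host.

```python
# Hypothetical sketch only -- not the actual picket fence code.
# Prefer the cdscfg site lookup; fall back to a hard-coded IFO if it fails.
try:
    import cdscfg                               # was not working on cdsioc0
    IFO = getattr(cdscfg, "IFO", None) or "H1"  # attribute name is an assumption
except Exception:
    IFO = "H1"                                  # hard-coded fallback, as described above
```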

We are keeping h1hwsey offline, so I restarted the h1hwsetmy_dummy_ioc service.

The EDC disconnection list is now down to just the HWS ETMX machine (84 channels); we are waiting for access to EX to reboot it.

Jonathan replaced the failed 2TB disk in cdsfs0.

Jonathan recovered the DAQ for the SUS Triple test stand in the staging building.

I swapped the power supplies for the env monitors between MY and EY; EY has been stable since.

 

Comments related to this report
david.barker@LIGO.ORG - 09:35, Tuesday 09 December 2025 (88436)

I took the opportunity to move several IOCs from being hand-run to systemd control on cdsioc0, configured by puppet. As mentioned, some needed IFO=H1 hard-coded due to cdscfg issues.

david.barker@LIGO.ORG - 09:41, Tuesday 09 December 2025 (88437)

CDS Overview.

Note there is a bug in the H1 Range LED display: a negative range is showing as 9 Mpc.

GDS still needs to be fully recovered.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 07:55, Tuesday 09 December 2025 (88434)
Maintenance Tuesday Start

TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 6mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.54 μm/s 
QUICK SUMMARY:
 SUS in-lock charge measurements did not run because the IFO was unlocked.
HAM7 Door Bolts have been loosened & door is ready to come off.

Potential Tuesday Maintenance Items:
CDS - Log in and check on vacuum system computer (Patrick)
18-bit DACs in iscey should be replaced with 20-bit DACs???
Beckhoff upgrades
installing SUS front-end model infrastructure for JM1 and JM3, and renaming the h1sushtts.mdl to h1sush1.mdl
RELOCK CHECKPOINT IMC
18-bit DACs in h1oaf should be replaced with 20-bit DACs
h1sush2a/h1sush2b >> sush12 consolidation
Upgrade susb123 to LIGO DACs
Add sush6 chassis
DEC 8 - FAC - annual fire system checks around site
MON - RELOCKING IFO  Reached DRMI last night
TUES AM - HAM7 Pull -Y Door
FARO work at BSC2 (Jason, Ryan C)
 

H1 DetChar (DetChar)
joan-rene.merou@LIGO.ORG - posted 22:58, Monday 08 December 2025 - last comment - 13:51, Tuesday 09 December 2025(88433)
Hunting down the source of the near-30 Hz combs with magnetometers
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Anamaria Effler, Robert Schofield]

This is a continuation of the work performed to mitigate the set of near-30 Hz and near-100 Hz combs as described in DetChar issue 340 and lho-mallorcan-fellowship/-/issues/3, as well as the work in alogs 88089, 87889, and 87414.

In this search, we moved around two magnetometers provided to us by Robert. Given our previous analyses, we thought the likely source of the combs would be either the electronics room or the LVEA near the input optics. We placed the two magnetometers in a total of more than 70 positions. At each position, we left the magnetometers still for at least 2 minutes, enough to produce ASDs from 60 seconds of data, recording the Z direction (parallel to the cylinder). For each position, we recorded the data shown in the following plot:

  

That is, we compute the ASD using a 60 s FFT and check its amplitude at the frequency of the first harmonic of the largest of the near-30 Hz combs, the fundamental at 29.9695 Hz. Then we compute the median of the ASD over the surrounding +/- 5 Hz, and save the ASD value at 29.9695 Hz as the "peak amplitude" and the ratio of the peak to that median as a sort of "SNR" or "peak-to-noise ratio".
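As an illustration, here is a minimal sketch of that figure of merit using scipy (not the actual analysis script; the sample rate and data handling are placeholders):

```python
# Sketch of the "peak amplitude" and "peak-to-noise" estimate described above:
# 60 s FFT, ASD value at the comb fundamental, and its ratio to the median ASD
# over the surrounding +/- 5 Hz.
import numpy as np
from scipy.signal import welch

F_COMB = 29.9695   # Hz, fundamental of the largest near-30 Hz comb

def comb_peak_stats(data, fs):
    """Return (peak amplitude, peak-to-noise ratio) around F_COMB."""
    f, pxx = welch(data, fs=fs, nperseg=int(60 * fs))   # 60 s segments
    asd = np.sqrt(pxx)
    peak = asd[np.argmin(np.abs(f - F_COMB))]
    noise = np.median(asd[np.abs(f - F_COMB) <= 5.0])   # median of the +/- 5 Hz band
    return peak, peak / noise
```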

Note that we also checked the permanent magnetometer channels. In order to compare them to the rest, we multiplied the ASD of the magnetometers that Robert gave us by a hundred so that all of them had units of Tesla.

After saving the data for all the positions, we produced the following two plots. The first one shows the peak-to-noise ratio for all the positions we checked around the LVEA and the electronics room:

  

Here the X and Y axes are simply image pixels. The color scale indicates the peak-to-noise ratio of the magnetometer at each position. The LVEA background has been taken from LIGO-D1002704.
Note that some points slightly overlap with others; this is because in some cases we checked different directions or positions in the same rack.

From this SNR plot it can be seen that the source of the comb appears to be around the PSL/ISC racks. Things become clearer if we also look at the peak amplitude (not the ratio), as shown in the following figure:

  

Note that in this figure the color scale is logarithmic. Looking at the peak amplitudes, there is one particular position in the H1-PSL-R2 rack whose amplitude is around 2 orders of magnitude larger than the others. This position also had the largest peak-to-noise ratio.

This position, which we have tagged as "Coil", corresponds to placing the magnetometer inside a coil of white cables behind the H1-PSL-R2 rack, as shown in this image:

  

We put the magnetometer there because we had also found the peak amplitude to be around 1 order of magnitude larger than at any other position when the magnetometer was on top of a set of white cables that run from inside the room towards the rack and then up towards a destination we are not sure of:

  

This image shows the magnetometer on top of the cables on the ground behind the H1-PSL-R2 rack; the white ones at the top of the image appear to show the peak at its highest. It could be that the peak is louder at the coil because so many cables wound into a coil will generate a stronger magnetic field.

This is the current status of the hunt. These white cables might indicate that the source of these combs is the interlocking system, which differs between L1 and H1 and has a chassis in the H1-PSL-R2 rack. However, we still need to trace exactly where these white cables go and try turning things on and off based on what we find, in order to see if the combs disappear.
Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 13:51, Tuesday 09 December 2025 (88442)PSL

The white cables in question are mostly for the PSL enclosure environmental monitoring system, see D1201172 for a wiring diagram (page 1 is the LVEA, page 2 is the Diode Room).  After talking with Alicia and Joan-Rene there are 11 total cables in question: 3 cables that route down from the roof of the PSL enclosure and 8 cables bundled together that route out of the northern-most wall penetration on the western side of the enclosure (these are the 8 pointed out in the last picture of the main alog).  The 3 that route from the roof and 5 of those from the enclosure bundle are all routed to the PSL Environmental Sensor Concentrator chassis shown on page 1 of D1201172, which lives near the top of PSL-R2.  This leaves 3 of the white cables that route out of the enclosure unaccounted for.  I was able to trace one of them to a coiled up cable that sits beneath PSL-R2; this particular cable is not wired to anything and the end isn't even terminated, it's been cleanly cut and left exposed to air.  I haven't had a chance to fully trace the other 2 unaccounted cables yet, so I'm not sure where they go.  They do go up to the set of coiled cables that sits about half-way up the rack, in between PSL-R1 and PSL-R2 (shown in the next-to-last picture in the main alog), but their path from there hasn't been traced yet.

I've added a PSL tag to this alog, since evidence points to this involving the PSL.

H1 ISC
jenne.driggers@LIGO.ORG - posted 18:55, Monday 08 December 2025 - last comment - 09:46, Tuesday 09 December 2025(88432)
Locked as far as DRMI

[Anamaria, RyanS, Jenne, Oli, RyanC, MattT, JeffK]

We ran through an initial alignment (more on that in a moment), and have gotten as far as getting DRMI locked for a few minutes.  Good progress, especially for a day when the environmental conditions have been much less than favorable (wind, microseism, and earthquakes).  We'll try to make more progress tomorrow after the wind has died down overnight. 

During initial alignment, we followed Sheila's suggestion and locked the green arms.  The COMM beatnote was still very small (something like -12 dBm).  PR3 is at the slider/OSEM position it was in before the power outage. We set X arm ALS to use only the ETM_TMS WFS, and not the camera loop.  We then walked the ITM to try to improve the COMM beatnote.  When I did it, I had thought that I only got the COMM beatnote up to -9 dBm or so (which is about where it was before the power outage), but later it seems that maybe I went too far and it's all the way up at -3 dBm.  We may consider undoing some of that ITM move.  The ITMX, ETMX, and TMSX yaw OSEM values nicely matched where they had been before the power outage.  All three suspensions' pitch OSEMs are a few urad different, but going closer to the pre-outage values made the COMM beatnote worse, so I gave up trying to match the pitch OSEMs.  

We did not reset any camera setpoints, so probably we'll want to do the next initial alignment (if we do one) using only ETM_TMS WFS for Xgreen.  

The rest of initial alignment went smoothly, after we checked that all other optics' sliders were in their pre-outage locations.  Some were tens of urad off on the sliders, which doesn't make sense.  We had to help the alignment in several places by hand-aligning the optics a bit, but made no by-hand changes to control servos or dark offsets or anything like that.  

When trying to lock, we struggled to hold Yarm locked and lock COMM and DIFF until the seismic configuration auto-switched to the microseism state.  Suddenly things were much easier.  

We caught DRMI lock twice on our first lock attempt, although we lost DRMI lock during ASC.  We were also able to lock PRMI, but lost lock while I was trying to adjust PRM.  Later, we locked PRMI again and were able to offload the PRMI ASC (to PRM and BS).  

The wind has picked back up and it's a struggle to catch and hold DRMI lock, so we're going to try again tomorrow.

 

Comments related to this report
ryan.short@LIGO.ORG - 09:46, Tuesday 09 December 2025 (88438)

During this process, I also flipped the "manual_control" flag in lscparams so that ALS will not scan alignment on its own and ISC_LOCK won't automatically jump to PRMI from DRMI or MICH from PRMI.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 17:23, Monday 08 December 2025 - last comment - 17:37, Tuesday 09 December 2025(88430)
HAM7 is vented

(Randy, Jordan, Travis, Filiberto, Gerardo)

We closed four small gate valves: two at the relay tube (RV-1 and RV-2) and two at the filter cavity tube between BSC3 and HAM7 (FCV-1 and FCV-2).  The purge air system has been on since last week, with a dew point of -66 °C reported by the dryer tower and -44.6 °C measured chamber side.  Particulate was measured at the port by the chamber: zero for all sizes.  The HAM7 ion pump was valved out. 

Filiberto helped us make sure the high voltage was off at HAM7; we double-checked with procedure M1300464.  Then the system was vented per procedure E2300169 with no issues.

Other activities at the vented chamber:

Currently the chamber has the purge air active at a very low setting.

 

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:37, Tuesday 09 December 2025 (88447)VE

(Randy, Travis, Jordan, Gerardo)

The -Y door was removed with no major issues, other than the usual O-ring sticking to the flat flange; it stuck around the bottom part of the door, from about 5 to 8 o'clock.  Both blanks were removed and the ports were covered with an aluminum sheet.

Note, the soft cover will rub against ZM3 if the external jig to pull the cover is not used.

H1 General
ryan.crouch@LIGO.ORG - posted 16:35, Monday 08 December 2025 (88419)
OPS Monday Day shift summary

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

HAM7 was prepped and then vented today, and there are 4 bolts left on each door; the HAM7 HV was turned off (alog 88421). I changed the nuc30 DARM FOM to the NO_GDS template in the launch.yaml file. Team CDS has been working their way through the EDC channel disconnects; the list shrinks every time I look at it.

We wanted to lock for health checks today but the Earth disagreed: a 7.6 from Japan rumbled in around 14:00 UTC, then a 6.7 from the same region at 22:00 UTC, and the wind started to pick up around 21:00 UTC. Windy.com reports the wind will increase/remain elevated until it peaks around 10-11 PM PST (06-07 UTC), then it should start to decrease. Ground motion and wind are still elevated as of the end of the shift.

LOG:                                                                                                                                                                                                                                          

Start Time System Name Location Laser_Haz Task End Time
15:45 FAC Nellie LVEA N->Y->N Tech clean 18:21
16:03 FAC Kim LVEA N->Y->N Tech clean 18:21
16:14 FAC Randy LVEA N Door prep, HAM6/7 17:07
16:44 SAF Sheila LVEA N -> Y LASER HAZARD transition to HAZARD 16:53
16:53   LASER HAZARD LVEA Y LVEA IS LASER HAZARD 18:00
16:54 ISC Sheila LVEA Y SQZT7 work 17:53
17:08 CAL Tony PCAL lab Y PCAL measurement 18:28
17:13 EE Fil Mid/EndY N Power cycle electronics, timing issue 17:46
17:36 EE Marc, Daniel LVEA Y Check on racks by SQZT7 18:46
17:47 EE Fil LVEA Y Join Marc 18:13
17:55 ISC Matt Prep lab N Checks, JOT lab 18:28
18:05 VAC Travis LVEA N Prep for HAM7 vent 19:20
18:10 EE Fil LVEA n Shutting HV off for HAM7 19:10
18:17 SAF Richard LVEA N Check on FIl and Marc 18:28
18:21 VAC Gerardo LVEA N HAM7 checks 19:36
18:29 CAL Tony, Yuri FCES N   19:19
20:29 CDS Dave MidY and EndY N Plug in switch 21:16
20:46 CAL Tony PCAL lab LOCAL Grab goggles 20:48
21:48 VAC Randy LVEA N Door bolts 22:58
22:05 VAC Gerardo LVEA N HAM7 doors 23:20
22:12 VAC Jordan LVEA N HAM7 door bolts 22:38
22:16 FAC Tyler +1 MY, EY N Fire inspections 00:06
22:18 ISC Matt Prep Lab N Parts on CHETA table 22:53
22:43 VAC Travis LVEA N Join door crew 23:19
22:58   Anamaria, Rene, Alicia LVEA N Checks by PSL 23:30
23:55 CAL Tony PCAL lab LOCAL Take a quick picture 00:03
23:57 ISC Jennie Prep lab, LVEA N Gather parts Ongoing

18:32 UTC SEI_CONF back to AUTO from MAINTENANCE where it was all weekend

18:58 UTC HAM7 ISI tripped

22:03 UTC Earthquake mode as a 6.6 from Japan hit us

H1 SUS (CDS, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 16:14, Monday 08 December 2025 (88415)
Weekend ETMY Software Watchdog Trips were Because of L2 to R0 Longitudinal Tracking not being blocked by USER WD
J. Kissel, J. Warner

Trending around this morning to understand the reported ETMY software watchdog (SWWD) trips over the weekend (LHO:88399 and LHO:88403), Jim and I conclude that -- while unfortunate -- nothing in software, electronics or hardware is doing anything wrong or broken; we just had a whopper Alaskan earthquake (see USGS report for EQ us6000rsy1 at 2025-12-06 20:41:49 UTC) and had a few big aftershocks. 

Remember, since the upgrade to the 32-channel, 28-bit DACs last week, both end stations' DAC outputs will "look CRAZY" to all those who are used to looking at the number of counts of a 20-bit DAC. Namely, the maximum number of counts is a factor of 2^8 = 256x larger than previously, saturating at +/- 2^27 = +/- 134217728 [DAC counts] (as opposed to +/- 2^19 = +/- 524288 [DAC counts]).
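A quick sanity check of those full-scale numbers (just the arithmetic, not site code):

```python
# Full-scale DAC count comparison, old 20-bit vs. new 28-bit cards.
old_full_scale = 2**19   # +/- 524288 counts
new_full_scale = 2**27   # +/- 134217728 counts
print(new_full_scale // old_full_scale)   # -> 256, i.e. 2**8 times larger
```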

The real conclusion: Both SWWD thresholds and USER WD Sensor Calibration need updating; they were overlooked in the change of the OSEM Sat Amp whitening filter from 0.4:10 Hz to 0.1:5.3 Hz per ECR:E2400330 / IIET:LHO:31595.
The watchdogs use a 0.1 to 10 Hz band-limited RMS as their trigger signal, and the digital ADC counts they use (calibrated into either raw ADC voltage or microns, [um], of top mass motion) will see a factor of anywhere from 2x to 4x increase in RMS value for the same OSEM sensor PD readout current. In other words, the triggers are "erroneously" a factor of 2x to 4x more sensitive to the same displacement.
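To see roughly where that 2x to 4x comes from, here is a small illustration (my own sketch, not the site calibration) comparing the magnitude of a unity-DC-gain 0.1:5.3 Hz zero:pole stage against the old 0.4:10 Hz stage across the watchdog band:

```python
# Ratio of the new (0.1:5.3 Hz) to old (0.4:10 Hz) sat-amp whitening magnitude
# over the 0.1-10 Hz watchdog band, each stage normalized to unity DC gain.
# Illustrative only; the real compensation lives in the SUS filter files.
import numpy as np
from scipy import signal

def unity_dc_zero_pole(fz, fp):
    wz, wp = 2 * np.pi * fz, 2 * np.pi * fp
    return [1.0 / wz, 1.0], [1.0 / wp, 1.0]   # H(s) = (1 + s/wz) / (1 + s/wp)

w = 2 * np.pi * np.logspace(-1, 1, 200)        # 0.1 to 10 Hz
_, h_old = signal.freqs(*unity_dc_zero_pole(0.4, 10.0), worN=w)
_, h_new = signal.freqs(*unity_dc_zero_pole(0.1, 5.3), worN=w)
ratio = np.abs(h_new / h_old)
print(round(ratio.min(), 1), round(ratio.max(), 1))   # spans roughly 1.4x to 3.8x
```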

As these two watchdog trigger systems are currently mis-calibrated, I put all references to their RMS amplitudes in quotes, i.e. ["um"]_RMS for the USER WDs and ["mV"]_RMS for the SWWDs, and quote a *change* in value when possible.
Note -- any quotes of OSEM sensors (i.e. the OSEM basis OSEMINF_{OSEM}_OUT_DQ and EULER basis DAMP_{DOF}_IN1_DQ channels) in [um] are correctly calibrated, and the ground motion sensors (and any band-limited derivatives thereof; the BLRMS and PeakMons) are similarly well-calibrated.

Also: The L2 to R0 tracking went into oscillation because the USER WDs didn't trip. AGAIN -- we really need to TURN OFF this loop programmatically until high in the lock acquisition sequence. It's too hidden -- from a user interface standpoint -- for folks to realize that it should never be used, and is always suspect, when the SUS system is barely functional (e.g. when we're vented, or after a power outage, or after a CDS hardware / software change, etc.)

Here's the timeline leading up to the first SUS/SEI software watchdog trip, which helped us understand that there's nothing wrong with the software / electronics / hardware; instead, it was the giant EQ that tripped things originally, and subsequent trips were because of an overlooked watchdog trigger sensor vs. threshold mis-calibration coupled with the R0 tracking loops.
2025-12-04 
    20:25 Sitewide Power Outage.
    22:02 Power back on.

2025-12-05
    02:35 SUS-ETMY watchdog untripped, suspension recovery
    20:38 SEI-ETMY system back to FULLY ISOLATED (large gap in recovery between SUS and SEI due to SEI GRD non-functional because the RTCDS file system had not yet recovered)
    20:48 Locking/Initial alignment start for recovery.

2025-12-06 
    20:41:49 Huge 7.0 Mag EQ in Alaska

    20:46:30 First P- and S-waves hit the observatory; corner station peakmon (in Z) is around 15 [um/s]_peak (30-100 mHz band)
             SUS-ETMY sees this larger motion; motion on the M0 OSEM sensors in the 0.1 to 10 Hz band increases from 0.01 ["um"]_RMS to 1 ["um"]_RMS.
             SUS-SWWD, using the same sensors in the same band but calibrated into ADC volts, goes from 0.6 ["mV"]_RMS to ~5 ["mV"]_RMS

    20:51:39 ISI-ETMY ST1 USER watchdog trips because the T240s have tilted off into saturation, killing ST1 isolation loops
             SUS-ETMY sees the large DC shift in alignment from the "loss" of ST1, and 
             SUS-ETMY sees the very large motion, increasing to ~100 ["um"]_RMS (with USER WD threshold set to 150 ["um"]_RMS) -- USER WD never trips. But -- peak motion is oscillating to the 300 ["um"]_peak range (but not close to saturating the ADC.)
             SUS-SWWD reports an RMS voltage increase to 500 ["mV"]_RMS (with the SWWD threshold set to 110 ["mV"]_RMS) -- this starts the alarm count-down of 600 [sec] = 10 [min].

    20:51:40 ISI-ETMY ST2 USER watchdog trips ~0.5 sec later as the GS13s go into saturation, and actuators try hard to keep up with the "missing" ST1 isolation
             SUS-ETMY really starts to shake here. 

    20:52:36 The peak Love/Rayleigh waves hit the site, with the corner station Z motion peakmon reporting 140 [um/s], and the 30 - 100 mHz BLRMS reporting 225 [um/s].
             At this point it's clear from the OSEMs that the mechanical system (either the ISI or the QUAD) is clanking against the earthquake stops, as the OSEMs show saw-tooth-like waveforms. 

    20:55:39 SWWD trips for suspension, shutting off suspension DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             The SUS is still ringing down naturally, recovering from the still-large EQ motion and the uncontrolled ISI.
    
    20:59:39 SWWD trips for seismic, shutting off all DAC output for HEPI and ISI ETMY
             SUS-ETMY OSEMs don't really notice -- it's still naturally ringing down with a LOT of displacement. There is a noticeable small alignment shift as HEPI sloshes to zero.

    21:06    SUS-ETMY SIDE OSEM stops looking like a saw-tooth, the last one to naturally ring-down. After this all SUS looks wobbly, but normal.
             ISI-ETMY ST2 GS-13 stops saturating
 
    21:08    SUS-ETMY LEFT OSEM stops exceeding the SWWD threshold, the last one to do so.

2025-12-07
    00:05    HPI-ETMY and ISI-ETMY User WDs are untripped, though it was a "tripped again ; reset" messy restart for HPI because we didn't realize that the SWWD needed to be untripped.
             The SEI manager state was trying to get back to DAMPED, which includes turning on the ISO loops for HPI.
             Since no HPI or ISI USER WDs know about the SWWD DAC shut-off, they "can begin" to do so, "not realizing" there is no physical DAC output.
             The ISI's local damping is "stable" without DACs because there's just not a lot that these loops do and they're AC coupled.
             HPI's feedback loops, which are DC coupled, will run away.

    00:11    SUS and SEI SWWD is untripped

    00:11:44 HPI USER WD untripped, 

    00:12    RMS of OSEM motion begins to ramp up again; the L / P OSEMs start to show an oscillation at almost exactly 2 Hz.
             The R0 USER WD never tripped, which allowed the H1 SUS ETMY L2 (PUM) to R0 (TOP) DC coupled longitudinal loop to flow out to the DAC.
             with the Seismic system in DAMPED (HEPI running, but ST1 and ST2 of the ISIs only lightly damped), and
             with the M0 USER WD still tripped and the main chain without any damping or control,
             after HEPI turned on, causing a shift in the alignment of the QUAD, changing the distance / spacing of the L2 stage, and
             the L2 "witness" OSEMs started feeding back the undamped main chain L2 to the reaction chain M0 stage, and slowly begain oscillating in positive feedback. see R0 turn ON vs. SWWD annotated screenshot.
             Looking at the recently measured open loop gain of this longitudinal loop -- taken with the SUS in its nominally DAMPED condition and the ISI ISOLATED -- there's a damped mode at 2 Hz.
             It seems very reasonable that this mode is a main chain mode, which when undamped would destroy the gain margin at 2 Hz and go unstable. See the R0Tracking_OpenLoopGain annotated screenshot from LHO:87529.
             And as this loop pushes on the main chain, with an only-damped ISI, it's entirely plausible that the R0 oscillation coupled back into the main chain, causing a positive feedback loop.
             
    
    00:22    The main chain OSEM RMS exceeds the SWWD threshold again, as the positive feedback gets out of control peaking around ~300 ["mV"]_RMS, and the USER WD says ~100 ["um"]_RMS. Worst for the pitch / longitudinal sensors, F1, F2, F3.
             But again, this does NOT trip the R0 USER WD, because the F1, F2, F3 R0 OSEM motion is "only" 80 ["um"]_RMS, still below the 150 ["um"]_RMS limit.

    00:27    SWWD trips for suspensions AGAIN as a result, shutting off all DAC output -- i.e. damping loops and alignment offsets -- and sending the warning that it'll trip the ISI soon.
             THIS kills the L2 to R0 positive feedback, since the R0 drive no longer makes it out to the DAC.
    
    00:31    SWWD trips for seismic AGAIN, shutting off all DAC output for HEPI and ISI ETMY

    15:59    SWWDs are untripped, and because the SUS USER WD is still tripped, the same L2 to R0 instability happens again.
             This is where the impression that "the watchdogs keep tripping; something is broken" comes in.
             
    16:16    SWWD for sus trips again
    
    16:20    SWWD for SEI trips again 

2025-12-08
    15:34    SUS-ETMY USER WD is untripped, main chain damping starts again, and recovery goes smoothly.
    
    16:49    SUS-ETMY brought back to ALIGNED
    
Images attached to this report
Non-image files attached to this report
H1 SUS
oli.patane@LIGO.ORG - posted 15:54, Monday 08 December 2025 (88428)
Updated SUS watchdog BANDLIM filter files for SUS with updated 0.1:5.3 Hz satamps

Jeff alerted me that we had never updated the SUS watchdog compensation filters for the suspension stages with the upgraded satamps (ECR E2400330). In the SUS watchdog filter banks, in the BANDLIM bank, FM6 contains the compensation filter for the satamps. I used a script to go through and update all of these for the suspensions and stages with their precise compensation filter values (the measured responses of each satamp channel live in /ligo/svncommon/SusSVN/sus/trunk/electronicstesting/lho_electronics_testing/satamp/ECR_E2400330/Results/), then loaded the new filter files in. This filter module was updated for:

Images attached to this report
H1 CDS
patrick.thomas@LIGO.ORG - posted 14:53, Monday 08 December 2025 (88426)
Started PLC and IOC for EX, CS power monitoring
The Visual Studio 2017 Community Edition installed on the end X machine (10.105.0.31) said my license had expired (it should be free), but I was unable to log into my account to renew it. I ended up reinstalling TwinCAT and selecting the option to install the Visual Studio shell as well, since that comes with TwinCAT and does not require an account. I was able to open the solution in that and run it.

The machine at the corner station (10.105.0.27) only had the shell installed, and I had no trouble with it.
H1 SEI
ryan.short@LIGO.ORG - posted 14:26, Monday 08 December 2025 (88425)
SEI_DIFF Now Ignoring HAMs 7 & 8

Jenne and I noticed that the SEI_DIFF node was reporting "chambers not nominal" and was stuck in its 'DOWN' state as a result. We realized this was because the HAM7 ISI is tripped, likely due to vent prep and the impending door removal, so we removed HAMs 7 and 8 from the list of chambers that are checked in SEI_DIFF's "isi_guardstate_okay()" function. After loading the node, SEI_DIFF successfully went to 'BSC2_FULL_DIFF_CPS'.
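For illustration, here is a hypothetical sketch of the kind of check that was relaxed (the real SEI_DIFF guardian code, its node interface, and the nominal state name differ); the vented chambers are simply left out of the list that gets checked:

```python
# Hypothetical sketch -- not the actual SEI_DIFF guardian code.
CHECKED_CHAMBERS = ['HAM2', 'HAM3', 'HAM4', 'HAM5', 'HAM6',
                    'BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY']   # HAM7, HAM8 removed

def isi_guardstate_okay(isi_states, nominal='FULLY_ISOLATED'):
    """True if every checked chamber's ISI guardian reports its nominal state."""
    return all(isi_states.get(chamber) == nominal for chamber in CHECKED_CHAMBERS)
```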

H1 DAQ
daniel.sigg@LIGO.ORG - posted 13:50, Monday 08 December 2025 (88423)
TwinCAT Oddities

The picomotor controllers were not working. The software side looked ok, but there was no physical drive signal. The TwinCAT system showed an error message about "nonsensical priority order of the PLC tasks". In the past, we ignored these messages without any problems. After fixing this issue and re-activating the system, it started working again. Not sure if it just needed a restart, or if the priority order has now become important. More investigation needed.

H1 DAQ
daniel.sigg@LIGO.ORG - posted 13:45, Monday 08 December 2025 (88422)
Atomic clock reset

The atomic clock has been resynchronized with GPS. The tolerance has been reduced to <1000ns again.

Images attached to this report
H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 12:26, Monday 08 December 2025 - last comment - 15:18, Monday 08 December 2025(88420)
"Ignore SQZ" Flag Enabled

Since the SQZ laser is now off in preparation for the HAM7 vent, and we still want to keep trying to lock the IFO in the meantime, I've switched the "ignore_sqz" flag in lscparams.py from False to True. ISC_LOCK has been loaded.
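As an illustration of the kind of guard this flag provides (a hypothetical sketch; the real ISC_LOCK / SQZ_MANAGER logic differs, and the nominal state name here is an assumption):

```python
# Hypothetical sketch -- not the actual ISC_LOCK guardian code.
import lscparams

def sqz_checks_pass(sqz_manager_state):
    """Skip all SQZ_MANAGER checks while the ignore flag is raised."""
    if lscparams.ignore_sqz:
        return True
    return sqz_manager_state == 'FREQ_DEP_SQZ'   # assumed nominal state name
```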

Comments related to this report
ryan.short@LIGO.ORG - 13:58, Monday 08 December 2025 (88424)

This sent ISC_LOCK into error in a few places, so I've flipped the flag back and will revisit the logic at a later time.

ryan.short@LIGO.ORG - 15:18, Monday 08 December 2025 (88427)

I've fixed the logic in ISC_LOCK, so now ignoring SQZ_MANAGER with the flag raised works as intended. I'm leaving the "ignore_sqz" flag True, and all changes have been loaded.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 18:50, Tuesday 04 November 2025 - last comment - 17:46, Monday 08 December 2025(87966)
Kobelco Compressor and Dry Air Skid Functionality Test

I ran the dry air system through its quarterly test (FAMIS task).  The system was started around 8:20 am local time and turned off by 11:15 am.  The system achieved a dew point of -50 °F; see the attached photo taken towards the end of the test.  Noted that we may be running low on oil at the Kobelco compressor; checking with the vendor on this.  The picture of the oil level was taken while the system was off.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:46, Monday 08 December 2025 (88431)VE

(Jordan, Gerardo)

We added some oil to the Kobelco reservoir with the compressor off.  We added about 1/2 gallon to get the level up to the half mark; see the attached photo of the level, taken after the compressor had been running for 24+ hours.  The level is now at nominal.

Images attached to this comment