H1 General
anthony.sanchez@LIGO.ORG - posted 18:22, Tuesday 09 December 2025 (88449)
Final Maintenance Tuesday Ops Update.

TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 9mph Gusts, 7mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.39 μm/s 
QUICK SUMMARY:

All planned CDS work : Status postponed in favor of locking.
DEC 8 - FAC - annual fire system checks around site : Status "Completed.... For now." ~ Tyler
MON TUES - RELOCKING IFO : Status "In Progress"
TUES AM - HAM7 Pull -Y Door : Status Completed
FARO work at BSC2 (Jason, Ryan C) : Status postponed in favor of locking.
HAM7 - in-chamber work to swap OPO assembly : Status "In Progress"
Notes: 
Fire Pump 1 on at 23:01 UTC

SUS-C1 -18V power supply dead; Fil C. is replacing it.
Dave took down the corresponding AA & AI chassis and brought them back up.

Fire pump 1 back on 23:16 UTC
Fire pump 2 on at 23:19 UTC
Fire Pump 1 is back on again 23:21 UTC

High voltage for HAM6 was accidentally shut off, which is why the fast shutter no longer worked. Turned back on at 23:53 UTC.

16:17 All SEI and all SUS at the corner station taken to safe to prepare for restarting all CS models.
Waiting for the HAM7 OPO team to give the green light for model restarts on HAM7.
... Oops... ZM4 & ZM5 models were accidentally restarted with H1SUSSQZOUT before we heard from the OPO team.

TCS X&Y CO2 TRIP imminent!? Verbals scrolled too fast to get a time.... Dave was restarting those models at the time.

End stations and mid stations were also restarted since the X & Y ES both tripped due to a Dolphin issue. The ES were not taken to safe.
"Might as well restart them all" ~ Dave B.

SUS ETMX was shaken hard enough to trip the watchdog, probably because it was not taken to safe before the reboots.

 

H1 CDS
jonathan.hanks@LIGO.ORG - posted 18:06, Tuesday 09 December 2025 (88448)
Restarted all models

Dave, Jonathan,

We restarted all the models tonight, starting at 16:30 and finishing at 17:40.  This was due to unexpected model behavior where EPICS outputs did not match daqd data.  We are working to understand the mechanism.  Our supposition is that it was due to a slow /opt/rtcds on model startup.  We did not need to restart any daqd processes, which points to this issue being internal to the models.  Dave will fill in a few more details in a comment.

H1 SQZ (SQZ)
karmeng.kwan@LIGO.ORG - posted 17:26, Tuesday 09 December 2025 - last comment - 21:02, Tuesday 09 December 2025(88443)
HAM7 VOPO replacement activity (iris setup)

[Sheila, Betsy, Anamaria, Eric, Daniel, Karmeng]

First time opening HAM7.

Removed the contamination control horizontal wafer.
Tools and iris are setup next to the chamber. 
Dust count in the chamber and in the clean room are good/low.
4 irises placed: after ZM1 and at CLF REFL. The ZM3 and ZM4 irises are roughly placed and will need realignment after CDS is powered on. The pump reflection is hard to place, so we will not put an iris there.

 

Comments related to this report
eric.oelker@LIGO.ORG - 21:02, Tuesday 09 December 2025 (88454)

We took some photos to help us determine where to place the irises and how to route the new in-vacuum cabling.  Linking them here since they might be useful when planning for future vents:

https://photos.app.goo.gl/xYCUhbqwxzZVwe7c6

H1 SUS
oli.patane@LIGO.ORG - posted 17:00, Tuesday 09 December 2025 - last comment - 11:10, Wednesday 10 December 2025(88445)
Estimators seemingly caused 0.6 Hz oscillations again

Jeff, Oli

Earlier, while trying to relock, we were seeing locklosses preceded by a 0.6 Hz oscillation seen in the PRG. Back in October we had a time where the estimator filters were installed incorrectly and caused a 0.6 Hz lock-stopping oscillation (87689). Even though we haven't made any changes to the estimators in over a month now, I decided to try turning them all off (PR3 L/P/Y, SR3 L/P/Y). During the next lock attempt, there were no 0.6 Hz oscillations seen. I checked the filters and settings and everything looks normal, so I'm not sure why this was happening.

I took spectra of the H1:SUS-{PR3,SR3}_M1_ADD_{L,P,Y}_TOTAL_MON_DQ channels for each suspension and each DOF during two similar times before and after the power outage. I wanted the After time to be while we were in MICROSEISM, since it seems like the ifo isn't liking the normal WINDY SEI_ENV right now, so I wanted both the Before and After times to be in a SEI_ENV of MICROSEISM and the same ISC_LOCK states. I chose the After time to be 2025/12/09 18:54:30 UTC, when we were in an initial alignment, and then found a Before time of 2025/11/22 23:07:21 UTC.
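For reference, a minimal sketch of how this kind of before/after ASD comparison can be reproduced with gwpy (the channel choice, 600 s data stretch, and 50 s FFT length are assumptions for illustration, not the exact parameters used):

from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

CHAN = "H1:SUS-PR3_M1_ADD_P_TOTAL_MON_DQ"   # one of the eight channels above
DUR = 600                                    # seconds per spectrum (an assumption)

times = {
    "before": to_gps("2025-11-22 23:07:21"),
    "after": to_gps("2025-12-09 18:54:30"),
}

asds = {}
for label, start in times.items():
    data = TimeSeries.get(CHAN, start, start + DUR)
    # fftlength=50 s gives the 0.02 Hz BW quoted for the later comparison
    asds[label] = data.asd(fftlength=50, overlap=25)

# ratio near the suspect band: values near 1 mean no change across the outage
print((asds["after"] / asds["before"]).crop(0.5, 0.8))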

Here are the spectra for PR3 and SR3 for those times. PR3 looks fine for all DOF, and SR3 P looks to be a bit elevated between 0.6 - 0.75 Hz, but it doesn't look like it should be enough of a difference to cause oscillations.

Then, while talking to Jeff, we discovered that the overall noise in the total damping for L and P changed depending on the seismic state we were in, so I made a comparison between the MICROSEISM and CALM SEI_ENV states (PR3, SR3). The USEISM time was 2025/12/09 12:45:26 UTC and the CALM time was 2025/12/09 08:54:08 UTC, with a BW of 0.02 Hz. The only difference in the total drive is seen in L and P, where it's higher below 0.6 Hz when we are in CALM.

So during those 0.6 Hz locklosses earlier today, we were in USEISM. Is it possible that the estimators combined with the USEISM state create an instability?

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 08:51, Wednesday 10 December 2025 (88456)

This is possibly true. The estimator filters are designed/measured using a particular SEI environment, so it is expected that they would underperform when we change the SEI loops/blends.

Additionally, we use the GS13 signal for the ISI-->SUS transfer function. It might be the case that the GS13 being more or less in-loop vs. out-of-loop does something to the transfer functions. I don't have any mathematical conclusions from it yet, but Brian and I will think about it.

jeffrey.kissel@LIGO.ORG - 11:10, Wednesday 10 December 2025 (88458)SEI, SUS
I'm pretty confident that the estimators aren't a problem, or are at least a red herring.

Just clarifying the language here -- "oscillation" is an overloaded term. And remember, we're in "recovery" mode from last Thursday's power outage -- so literally *everything* is suspect, wild guesses are being thrown around like flour in a bakery, and we only get brief, unrepeatable evidence that something's wrong, separated by tens of minutes.

The symptom was "we're trying 6 different things at once to get the IFO going. Huh -- the ndscope time-series of the IFO build-ups while locking looked to grow exponentially to lock-loss in one lock stretch, and in another it just got noisier halfway through the lock stretch. What happened? Looks like something at 0.6 Hz."

We're getting to "that point" in the lock acquisition sequence maybe once every 10 minutes.
There's an entire rack's worth of analog electronics that went dark in the middle of this, as one leg of its DC power failed (LHO:88446).
The microseism is higher than usual and we're between wind storms, so we're trying different ISI blend configurations (LHO:88444).
We're changing around global alignment because we think suspensions moved again during the "big" HAM2 ISI trip at the power outage (LHO:88450).
There's an IFO-wide CDS crash after a while that requires all front-ends to be rebooted, with the suspicion that our settings configuration file tracking system might have been bad (LHO:88448)...

Everyone in the room thinks "the problem" *could* be the thing they're an expert in, when it's likely a convolution of many things.

Hence, Oli trying to turn OFF the estimators.
And near that time, we switched the configuration of the sensor correction / blend filters of all the ISIs (switching the blends from WINDY to MICROSEISM -- see LHO:88444).

So -- there was
    - only one, *maybe* two instances where an "oscillation" is seen, in the sense of "positive feedback" or "exponential growth of control signal."
    - only one "oscillation" where it's "excess noise in the frequency region around 0.6 Hz," but the check of whether it actually *is* 0.6 Hz again isn't rigorous.

That happens to be the frequency of the lowest L and P modes of the HLTSs, PR3 and SR3.
BUT -- Oli shows in their plots that:
    - Before vs. after the power outage, when looking at times when the ISI platforms are in the same blend state, the PR3 and SR3 control is the same.
    - Comparing the control request when the ISI platforms are in microseism vs. in windy shows the expected change in control authority from the ISI input, as the change in shape of the ASD of PR3 and SR3 between ~0.1 and ~0.5 Hz matches the change in shape of the blends.

Attached is an ndscope of all the relevant signals -- or at least the signals in question, for verbal discussion later.


Images attached to this comment
H1 SUS (CDS, SUS)
jeffrey.kissel@LIGO.ORG - posted 15:54, Tuesday 09 December 2025 (88446)
H1 SUS-C1's -18V_DC Power Fails (MC2, PR2, SR2's Coil Drivers, AAs, AIs, and BI/BO Chassis)
J. Driggers, R. Short, D. Barker, J. Kissel, R. McCarthy, M. Pirello, F. Clara
WP 12925
FRS 36300

The power supply for the negative rail (-18V) of the SUS-C1 rack in the CER -- the right power supply in VDC-C1, U3-U1 -- failed on 2025-12-09 22:55:40 UTC. This rack houses the coil drivers, AAs, AIs, and BI/BO chassis -- all the analog electronics for SUS-MC2, SUS-PR2, SUS-SR2.

We found the issue via DIAG_MAIN, which said "OSEMs in Fault" calling out MC2, PR2, and SR2.
We confirmed that the IMC wasn't locking, and MC2 couldn't push the IMC through fringes.
Also, the OSEM PD inputs on all stages of these suspensions were digital zero (not even ADC noise).
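As an aside, this "digital zero" diagnosis can be scripted; here's a hedged sketch using gwpy, where the channel name pattern and the exact test are illustrative assumptions:

from gwpy.timeseries import TimeSeries

def is_digital_zero(channel, start, duration=10):
    """A dead ADC path reads a constant exact zero; a live-but-quiet
    channel still shows ADC noise (nonzero sample-to-sample variance)."""
    data = TimeSeries.get(channel, start, start + duration)
    return data.value.std() == 0.0 and data.value.mean() == 0.0

# e.g. for one of the affected OSEM PD channels (name is an assumption):
# is_digital_zero("H1:SUS-MC2_M1_OSEMINF_T1_IN1_DQ", failure_gps)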

Marc/Fil/Richard were quick on the scene and with the replacement. 
We brought the HAM3 and HAM4 ISIs to ISI_DAMPED_HEPI_OFFLINE in prep for a front-end model / IO chassis restart if necessary.
Un-managed the MC2, PR2, and SR2 guardians by bringing their MODE to AUTO.
Used those guardians to bring those SUS to SAFE.
- Richard/Marc/Fil replaced the failed -18V power supply.
- While there, the +18 V supply had already been flagged in Marc's notes for replacement, so we replaced that as well (see D2300167).

Replaced failed +18V power supply S1201909 with new power supply S1201944
Replaced failed -18V power supply S1201909 with new power supply S1201957.

The rack was powered back up, and the suspensions and seismic were restored by 2025-12-09 23:31 UTC. The suspensions appear fully functional.

Awesome work team!
H1 SEI
jim.warner@LIGO.ORG - posted 14:05, Tuesday 09 December 2025 (88444)
Blend filters used for H1 ISI Stage 1

Because we are back into the time of year when the ground motion can change a lot, I'm posting a quick reminder of what the blend filter low-passes look like for the main two blends we use for horizontal dofs on St1 of the ISI. The attached figure shows bode plots of the BSC St1 low-passes on the right and the HAM St1 low-passes on the left. Blue lines are the ~100 mHz blends we use for the microseism states, red lines are the 250 mHz blends we use for the "windy" states. The blends are the main component that changes between calm/windy and useism; I think we use the same sensor correction for both states. I won't go into detail about what these plots mean, this is an iykyk kind of post. Sorry.
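For non-SEI readers, here is a generic illustration of what a complementary blend pair looks like -- this is NOT the installed H1 filter design, just a simple third-order complementary pair with the crossover moved between 100 mHz and 250 mHz:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

f = np.logspace(-2, 1, 500)        # 10 mHz to 10 Hz
w = 2 * np.pi * f

fig, ax = plt.subplots()
for fc, color, label in [(0.10, "b", "100 mHz"), (0.25, "r", "250 mHz")]:
    wc = 2 * np.pi * fc
    den = [1, 3 * wc, 3 * wc**2, wc**3]                 # (s + wc)^3
    lp = signal.TransferFunction([wc**3], den)          # "displacement" low-pass
    hp = signal.TransferFunction([1, 3 * wc, 3 * wc**2, 0], den)  # HP = 1 - LP
    for sys, style, name in [(lp, "-", "LP"), (hp, "--", "HP")]:
        _, mag, _ = signal.bode(sys, w)
        ax.semilogx(f, mag, color + style, label=f"{label} {name}")
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel("Magnitude [dB]")
ax.legend()
plt.show()

The point of the complementary construction (HP = 1 - LP) is that the two sensor paths always sum to unity, so moving the crossover only trades which sensor's noise dominates in which band.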

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 13:10, Tuesday 09 December 2025 (88441)
Maintenance Tuesday Update.

TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 21mph Gusts, 17mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.45 μm/s 
QUICK SUMMARY:

DEC 8 - FAC - annual fire system checks around site : In Progress
MON Tues - RELOCKING IFO : In Progress
TUES AM - HAM7 Pull -Y Door : Completed
FARO work at BSC2 (Jason, Ryan C) : Postponed in favor of locking?
HAM7 - in-chamber work to swap OPO assembly : In Progress.

All other work Status is currently unknown.
---------------------------------------------------------------------------------------------------

Notes:

Fire alarm went off at 16:58 UTC.
HVAC system down; temperature will climb everywhere, says Tyler @ 17:15 UTC.
HVAC fans turned back on @ 17:18 UTC. Tyler says the air handlers are back online; should be just a little blip in temp.

Tumbleweeds at EY piled up too high to allow access to EY. Chris has since cleared this.

Fire alarm in MSR going off @ 18:07 UTC

Initial alignment process:
Held ISC_LOCK in IDLE.
Forced the X arm to lock, moved ITMX to increase the COMM beat note, then offloaded it.
When we tried for an initial alignment, we skipped the green arms in initial alignment.
Pushed PRM to get locked while we were in PRC align.
Pushed BS when MICH_BRIGHT was aligning.
Pushed SRM to help SRC alignment.

Locking: 
Jumped straight to Green Arms Manual and aimed for Offload_DRMI_ASC. 

Jenne D. offloaded a "PR2 OSEM equivalent" by hand during one of our locks that got DRMI locked but lost lock at Turn_On_BS_Stage2.
Another lockloss at Turn_On_BS_Stage2.

We made it past Turn_On_BS_Stage2 when Jenne D. told PRC1 & SRC1 not to run in ASC.
H1 has been losing lock at a number of places before power-up.
  

H1 SEI (OpsInfo)
jim.warner@LIGO.ORG - posted 10:48, Tuesday 09 December 2025 (88440)
HAM7 ISI is locked

Vac finished pulling the -Y door and the 2 access ports on the +Y side, so I went out and locked the ISI. The A, B, and C lockers were fine; the D locker couldn't be fully engaged, which I think is a known issue for this ISI. I just turned it until I started feeling uncomfortable resistance, so D is partially engaged.

LHO VE
david.barker@LIGO.ORG - posted 10:17, Tuesday 09 December 2025 (88439)
Tue CP1 Fill

Tue Dec 09 10:13:17 2025 INFO: Fill completed in 13min 13secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:34, Tuesday 09 December 2025 - last comment - 09:41, Tuesday 09 December 2025(88435)
CDS Power-outage Recovery Update

CDS is almost recovered from last Thursday's power outage. Yesterday Patrick and I started the IOCs for:

picket_fence, ex_mains, cs_mains, ncalx, h1_observatory_mode, range LED, cds_aux, ext_alert, HWS ETMY dummy

I had to modify the picket fence code to hard-code IFO=H1, since the python cdscfg module was not working on cdsioc0.
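The hard-coding was along these lines (a sketch; the cdscfg accessor named here is hypothetical):

try:
    import cdscfg
    IFO = cdscfg.get_ifo()    # hypothetical accessor; broken on cdsioc0
except Exception:
    IFO = "H1"                # hard-coded fallback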

We are keeping h1hwsey offline, so I restarted the h1hwsetmy_dummy_ioc service.

The EDC disconnection list is now down to just the HWS ETMX machine (84 channels), we are waiting for access to EX to reboot.

Jonathan replaced the failed 2TB disk in cdsfs0.

Jonathan recovered the DAQ for the SUS Triple test stand in the staging building.

I swapped the power supplies for env monitors between MY and EY, EY has been stable since.

 

Comments related to this report
david.barker@LIGO.ORG - 09:35, Tuesday 09 December 2025 (88436)

I took the opportunity to move several IOCs from hand-running to systemd control on cdsioc0 configured by puppet. As mentioned, some needed hard-coding IFO=H1 due to cdscfg issues.

david.barker@LIGO.ORG - 09:41, Tuesday 09 December 2025 (88437)

CDS Overview.

Note there is a bug in the H1 Range LED display: a negative range is showing as 9 Mpc.

GDS still needs to be fully recovered.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 07:55, Tuesday 09 December 2025 (88434)
Maintenance Tuesday Start

TITLE: 12/09 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 11mph Gusts, 6mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.54 μm/s 
QUICK SUMMARY:
SUS in-lock charge measurements did not run due to the IFO being unlocked.
HAM7 Door Bolts have been loosened & door is ready to come off.

Potential Tuesday Maintenance Items:
CDS - Log in and check on vacuum system computer (Patrick)
18-bit DACs in iscey should be replaced with 20-bit DACs???
Beckhoff upgrades
installing SUS front-end model infrastructure for JM1 and JM3, and renaming the h1sushtts.mdl to h1sush1.mdl
RELOCK CHECKPOINT IMC
18-bit DACs in h1oaf should be replaced with 20-bit DACs
h1sush2a/h1sush2b >> sush12 consolidation
Upgrade susb123 to LIGO DACs
Add sush6 chassis
DEC 8 - FAC - annual fire system checks around site
MON - RELOCKING IFO  Reached DRMI last night
TUES AM - HAM7 Pull -Y Door
FARO work at BSC2 (Jason, Ryan C)
 

H1 DetChar (DetChar)
joan-rene.merou@LIGO.ORG - posted 22:58, Monday 08 December 2025 - last comment - 13:51, Tuesday 09 December 2025(88433)
Hunting down the source of the near-30 Hz combs with magnetometers
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Anamaria Effler, Robert Schofield]

This is a continuation of the work performed to mitigate the set of near-30 Hz and near-100 Hz combs, as described in DetChar issue 340 and lho-mallorcan-fellowship/-/issues/3, as well as the work in alogs 88089, 87889 and 87414.

In this search, we have been moving around two magnetometers provided to us by Robert. Given our previous analyses, we thought the possible source of the combs would be around either the electronics room or the LVEA close to the input optics. We have moved these two magnetometers around to cover a total of more than 70 positions. In each position, we left the magnetometers alone and still for at least 2 minutes, enough to produce ASDs using 60 seconds of data, recording the Z direction (parallel to the cylinder). For each one of the positions, we recorded the data shown in the following plot

  

That is, we compute the ASD using a 60 s FT and check its amplitude at the frequency of the first harmonic of the largest of the near-30 Hz combs, the fundamental at 29.9695 Hz. Then we compute the median of the surrounding ±5 Hz and save the ASD value at 29.9695 Hz as the "peak amplitude", and the ratio of the peak against the median as a sort of "SNR" or "peak-to-noise ratio".

Note that we also checked the permanent magnetometer channels. However, in order to compare them to the rest, we multiplied the ASD of the magnetometers that Robert gave us by a hundred so that all of them had units of Tesla.
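In code terms, the per-position statistic is computed like this (a sketch assuming a numpy ASD array at 1/60 Hz resolution; variable names are ours):

import numpy as np

F_COMB = 29.9695   # fundamental of the largest near-30 Hz comb [Hz]

def comb_peak_stats(freqs, asd, portable=True):
    """Return the "peak amplitude" and "peak-to-noise ratio" at 29.9695 Hz."""
    if portable:
        asd = asd * 100.0                  # rescale Robert's magnetometers to Tesla
    peak = asd[np.argmin(np.abs(freqs - F_COMB))]   # ASD at nearest bin to the comb
    band = (freqs >= F_COMB - 5) & (freqs <= F_COMB + 5)
    noise = np.median(asd[band])           # median of the surrounding ±5 Hz
    return peak, peak / noise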

After saving the data for all the positions, we have produced the following two plots. The first one shows the peak to noise ratio of all positions we have checked around the LVEA and the electronics room:

  

The X and Y axes are simply the image pixels. The color scale indicates the peak-to-noise ratio of the magnetometer in each position. The background LVEA layout has been taken from LIGO-D1002704.
Note that some points slightly overlap with others; this is because in some cases we checked different directions or positions in the same rack.

From this SNR plot, the source of the comb appears to be around the PSL/ISC racks. Things become clearer if we also look at the peak amplitude (not the ratio), as shown in the following figure:

  

Note that in this figure the color scale is logarithmic. Looking at the peak amplitudes, there is one particular position in the H1-PSL-R2 rack whose amplitude is around 2 orders of magnitude larger than at the other positions. Note that this position also had the largest peak-to-noise ratio.

This position, which we have tagged as "Coil", has the magnetometer inside a coil of white cables behind the H1-PSL-R2 rack, as shown in this image:

  

What led us to put the magnetometer there is that we had also found the peak amplitude to be around 1 order of magnitude larger than at any other magnetometer position when it was on top of a set of white cables that go from inside the room towards the rack and up towards we-are-not-sure-where:

  

This image shows the magnetometer on top of the cables on the ground behind the H1-PSL-R2 rack; the white ones at the top of the image appear to show the peak at its highest. It could be that the peak is louder in the coil because having so many cables in a coil arrangement will generate a stronger magnetic field.

This is the current status of the hunt. These white cables might indicate that the source of these combs is the interlock system, which differs between L1 and H1 and has a chassis in the H1-PSL-R2 rack. However, we still need to track down exactly where these white cables go and try turning things on and off based on what we find, in order to see if the combs disappear.
Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 13:51, Tuesday 09 December 2025 (88442)PSL

The white cables in question are mostly for the PSL enclosure environmental monitoring system, see D1201172 for a wiring diagram (page 1 is the LVEA, page 2 is the Diode Room).  After talking with Alicia and Joan-Rene there are 11 total cables in question: 3 cables that route down from the roof of the PSL enclosure and 8 cables bundled together that route out of the northern-most wall penetration on the western side of the enclosure (these are the 8 pointed out in the last picture of the main alog).  The 3 that route from the roof and 5 of those from the enclosure bundle are all routed to the PSL Environmental Sensor Concentrator chassis shown on page 1 of D1201172, which lives near the top of PSL-R2.  This leaves 3 of the white cables that route out of the enclosure unaccounted for.  I was able to trace one of them to a coiled up cable that sits beneath PSL-R2; this particular cable is not wired to anything and the end isn't even terminated, it's been cleanly cut and left exposed to air.  I haven't had a chance to fully trace the other 2 unaccounted cables yet, so I'm not sure where they go.  They do go up to the set of coiled cables that sits about half-way up the rack, in between PSL-R1 and PSL-R2 (shown in the next-to-last picture in the main alog), but their path from there hasn't been traced yet.

I've added a PSL tag to this alog, since evidence points to this involving the PSL.

H1 ISC
jenne.driggers@LIGO.ORG - posted 18:55, Monday 08 December 2025 - last comment - 09:46, Tuesday 09 December 2025(88432)
Locked as far as DRMI

[Anamaria, RyanS, Jenne, Oli, RyanC, MattT, JeffK]

We ran through an initial alignment (more on that in a moment), and have gotten as far as getting DRMI locked for a few minutes.  Good progress, especially for a day when the environmental conditions have been much less than favorable (wind, microseism, and earthquakes).  We'll try to make more progress tomorrow after the wind has died down overnight. 

During initial alignment, we followed Sheila's suggestion and locked the green arms.  The comm beatnote was still very small (something like -12 dBm).  PR3 is in the slider/osem position that it was in before the power outage. We set Xarm ALS to use only the ETM_TMS WFS, and not use the camera loop.  We then walked ITM to try to improve the COMM beatnote.  When I did it, I had thought that I only got the comm beatnote up to -9 dBm or so (which is about where it was before the power outage), but later it seems that maybe I went too far and it's all the way up at -3 dBm.  We may consider undoing some of that ITM move.  The ITMX, ETMX, and TMSX yaw osem values nicely matched where they had been before the power outage.  All three suspensions' pitch osems are a few urad different, but going closer to the pre-outage place made the comm beatnote worse, so I gave up trying to match the pitch osems.

We did not reset any camera setpoints, so probably we'll want to do the next initial alignment (if we do one) using only ETM_TMS WFS for Xgreen.  

The rest of initial alignment went smoothly, after we checked that all other optics' sliders were in their pre-outage locations.  Some were tens of urad off on the sliders, which doesn't make sense.  We had to help the alignment in several places by hand-aligning the optics a bit, but no by-hand changes to controls servos or dark offsets or anything like that.

When trying to lock, we struggled to hold Yarm locked and lock COMM and DIFF until the seismic configuration auto-switched to the microseism state.  Suddenly things were much easier.  

We caught DRMI lock twice on our first lock attempt, although we lost DRMI lock during ASC.  We were also able to lock PRMI, but lost lock while I was trying to adjust PRM.  Later, we locked PRMI and were able to offload the PRMI ASC (to PRM and BS).

The wind has picked back up and it's a struggle to catch and hold DRMI lock, so we're going to try again tomorrow.

 

Comments related to this report
ryan.short@LIGO.ORG - 09:46, Tuesday 09 December 2025 (88438)

During this process, I also flipped the "manual_control" flag in lscparams so that ALS will not scan alignment on its own and ISC_LOCK won't automatically jump to PRMI from DRMI or MICH from PRMI.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 17:23, Monday 08 December 2025 - last comment - 17:37, Tuesday 09 December 2025(88430)
HAM7 is vented

(Randy, Jordan, Travis, Filiberto, Gerardo)

We closed four small gate valves: two at the relay tube, RV-1 and RV-2, and two at the filter cavity tube, between BSC3 and HAM7, FCV-1 and FCV-2.  The purge air system had been on since last week, with a dew point reported by the dryer tower of -66 °C, and -44.6 °C measured chamber-side.  Particulate was measured at the port by the chamber: zero for all sizes.  The HAM7 ion pump was valved out.

Filiberto helped us out with making sure the high voltage was off at HAM7; we double-checked with procedure M1300464.  Then the system was vented per procedure E2300169 with no issues.

Other activities at the vented chamber:

Currently the chamber has the purge air active at a very low setting.

 

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:37, Tuesday 09 December 2025 (88447)VE

(Randy, Travis, Jordan, Gerardo)

The -Y door was removed with no major issues, aside from the usual O-ring sticking to the flat flange; it stuck around the bottom part of the door, 5-8 o'clock.  Other than that, no issues.  Both blanks were removed and the ports were covered with an aluminum sheet.

Note, the soft cover will rub against ZM3 if the external jig to pull the cover is not used.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 18:50, Tuesday 04 November 2025 - last comment - 17:46, Monday 08 December 2025(87966)
Kobelco Compressor and Dry Air Skid Functionality Test

I ran the dry air system through its quarterly test, a FAMIS task.  The system was started around 8:20 am local time and turned off by 11:15 am.  The system achieved a dew point of -50 °F; see the attached photo taken towards the end of the test.  Noted that we may be running low on oil at the Kobelco compressor; checking with the vendor on this.  The picture of the oil level was taken while the system was off.

Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 17:46, Monday 08 December 2025 (88431)VE

(Jordan, Gerardo)

We added some oil to the Kobelco reservoir with the compressor off.  We added about 1/2 gallon to bring the level up to the half mark; see the attached photo of the level, taken after the compressor had been running for 24+ hours.  The level is now at nominal.

Images attached to this comment