Reports until 09:35, Wednesday 15 January 2025
H1 PSL (PSL)
anthony.sanchez@LIGO.ORG - posted 09:35, Wednesday 15 January 2025 - last comment - 09:49, Wednesday 15 January 2025(82290)
PSL_Chiller: Check PSL chiller.
Diag Main now has a PSL_Chiller message telling me to check the PSL chiller.
I ran the PSL Status script to hopefully get some insight.

Laser Status:
    NPRO output power is 1.842W
    AMP1 output power is 70.15W
    AMP2 output power is 137.0W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 28 days, 23 hr 15 minutes
    Reflected power = 25.55W
    Transmitted power = 102.4W
    PowerSum = 127.9W

FSS:
    It has been locked for 0 days 3 hr and 32 min
    TPD[V] = 0.781V

ISS:
    The diffracted power is around 4.2%
    Last saturation event was 0 days 3 hours and 32 minutes ago


Possible Issues:
	PMC reflected power is high
	Check chiller (probably low water)



PSL probably wants some water and someone to chill with.
The plots below make it seem like it might be a little lonely.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 09:41, Wednesday 15 January 2025 (82291)
I just got the verbal alarms alert to Check the PSL Chiller.
ryan.crouch@LIGO.ORG - 09:49, Wednesday 15 January 2025 (82292)PSL

I topped off the PSL chiller with ~100mL of water.

H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 08:06, Wednesday 15 January 2025 - last comment - 13:36, Wednesday 15 January 2025(82287)
Wed Morning Shift

TITLE: 01/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.35 μm/s
QUICK SUMMARY:

When I walked in H1 had been locked and Observing for 33 minutes!

Unknown lockloss this morning at 13:17 UTC.
H1 relocked without assistance this morning and no one was even woken up.


PS: It's a little icy out in the parking lot, so be prepared when stepping out of your car.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 13:36, Wednesday 15 January 2025 (82297)SUS

ETMY mode 1's damping wasn't going well this morning; it was slowly rising (it did this a few times over the previous weekend as well). I went from +60 of phase to +30 and flipped the sign of the gain (+0.1 -> -0.1). It has been damping for the past hour and has damped past where it was turning around with the previous settings.
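
For reference, a minimal sketch of this kind of sign flip via the standard guardian ezca interface (the ezca calls are the real API; the damping filter name SUS-ETMY_L2_DAMP_MODE1 is an assumption, not checked against the running model):

    import ezca as ezca_mod
    ezca = ezca_mod.Ezca(prefix='H1:')

    # Read the current damping gain, e.g. +0.1 before the change above.
    gain = ezca['SUS-ETMY_L2_DAMP_MODE1_GAIN']

    # Flip the sign with a gentle ramp so the drive doesn't step abruptly.
    ezca.get_LIGOFilter('SUS-ETMY_L2_DAMP_MODE1').ramp_gain(-gain, ramp_time=5, wait=True)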

H1 TCS
camilla.compton@LIGO.ORG - posted 07:59, Wednesday 15 January 2025 - last comment - 12:13, Tuesday 21 January 2025(82288)
AOM Drive output now connected to CO2X table feedthrough

Camilla, TJ, Marc, Fil. WP#12281

We attached the AOM drive cable from the back of the D1300649 chassis to the lowest TEST point on the PEM feedthrough (photo) on the CO2X table. We used two barrel connectors (photos attached) to do this, as it looks like there used to be an AOM driver on the table that the signal went into before going to the AOM.

We thought we could use the digital filters in the h1tcscs model to create a loop with this output and feed it to the Synrad UC-2000 PWM controller (which needs 0-10 VDC). The max of CTRL2 was capped at +/-2 counts (unsure why), and this was actually +/-0.6 V on the BNC as measured with an oscilloscope. We'll need to increase this by ~x10 to get the PWM to work. Reverted the changed SDFs.
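
A back-of-envelope sketch of that scaling (the +/-2-count cap, +/-0.6 V measurement, and 0-10 VDC UC-2000 requirement are from above; the arithmetic itself is only illustrative):

    # Measured: the +/-2-count cap on CTRL2 appears as +/-0.6 V at the BNC.
    volts_per_count = 0.6 / 2.0        # ~0.3 V per count at the feedthrough
    # A ~x10 boost brings the 0.6 V max to ~6 V of drive; spanning the
    # UC-2000's full 0-10 VDC input at the current scaling would need:
    print(10.0 / volts_per_count)      # ~33 counts of output swing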

There was an unknown cable, also labeled AOM drive, coming into the table but not connected to anything (photo).

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:13, Tuesday 21 January 2025 (82380)

Matt Todd and I checked that neither of these BNC barrels was grounded to the CO2X table.

LHO General (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Tuesday 14 January 2025 (82286)
OPS Eve Shift Summary

TITLE: 01/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 05:40 UTC (20 min lock!)

Overall, bad recovery from maintenance. Here’s what happened in 7 short stories:

  1. Late LN2 Fill: NORCO arrived to fill CP8 at 00:34 UTC (16:34 PT) over 3 hours late while we were locking. Jim made EX as robust as possible such that we would survive their truck moving through. This worked though the BRS was all over the place as expected. Due to them driving <5mph as requested, they took 20 minutes to arrive at EX, then over 1 hr to do the fill and then another 20 minutes to get back. They left site at 02:34 UTC (18:34 PT). After NORCO left, I undid Jim’s seismic configuration changes to go back to nominal.
  2. Changes made prevented locking: Earlier in the day, (before NORCO arrived), we lost lock at state 557/8, TRANSITION_FROM_ETMX. I was told by the DAY operator that if this happened, I was to revert Camilla’s changes in alog 82277 with regard to the DARM Boost. Since Camilla and Sheila were in the room when this happened, they decided not to do it since it could have just been bad luck. Erik had also made changes regarding ramp times that affect this state. Camilla and Sheila left for the day and then NORCO arrived. While NORCO was filling, we lost lock again at state 557, TRANSITION_FROM_ETMX. Camilla told me to revert her changes and Jenne told Erik to revert his. This was done by 1:31 UTC (17:31 PT).
  3. Changes reverted prevented locking: We could not lock ALS until NORCO left the EX area; ALS finally locked at 02:10 UTC (while NORCO was en route from EX). Locking then proceeded automatically until state 500, PREP_DC_READOUT_TRANSITION, where for 10 minutes I was getting the error message “OMC not ready”. I then called Camilla, who didn’t answer. I then called Erik, who was still on site and came to help troubleshoot the issue (since his changes involved resetting OMs, the idea was that maybe this was related, which it likely is not). After some fiddling to no avail, I called Sheila (the first person on the call list), who walked me through the OMC troubleshooting process.
  4. Troubleshooting with Sheila and Erik: I informed Sheila that the error was that the OMC TF was not letting guardian continue. We did a manual OMC scan while waiting at ASC_QPD_ON to see that there indeed were triplets, with 2 sidebands and one carrier with the right magnitude. We let guardian lock it and it did, yielding the expected signal magnitude for the DCPD_SUM and PZT2 Monitors. Looking closely at the error, Erik realized that it was the phase that was below the threshold (150) and was measuring at 144. Sheila, Erik and I then decided to lower the phase threshold to 140. This worked immediately until we also lost lock just as immediately when we transitioned to DARM_TO_DC_READOUT, which saturated DCPD, EX, EY, IX and HAM6. This happened at 04:13 UTC (20:42 PT). At 4:49 UTC, I reverted the phase changes to OMC_LOCK guardian, since clearly something else was awry.
  5. “Did you try turning it off and back on?”: After the lockloss and the reversion, we went up to lock again, preparing to face the same issue, DTT template and TF in hand and then… it worked. Perhaps the lockloss from earlier was from transitioning the OMC with whitening to the OMC without whitening (which I did after achieving the full OMC lock during troubleshooting). Now our next hurdle is state 557/558 (TRANSITION_FROM_ETMX).
  6. The DARM Boost is actually still on: It turns out that the DARM boost had to be turned off in two places: rather than just line 3058 (alog 82277), there was another, later boost turn-on at line 3774. Sheila discovered this during troubleshooting at 5:15 UTC. Since we didn’t want to tempt H1 by turning it off abruptly, we chose to leave it on and see what would happen at 557/558. After some anticipatory sighs, it worked! We automatically went all the way to NLN.
  7. 05:40 UTC - reached NLN! We had one node that wasn’t right: the FSS was in INIT and didn’t know how to go to IDLE. It wasn’t on the SDF page (no other SDF diffs). Sheila manually took it to IDLE, which worked. Strangely enough, we also got a “check PSL chiller” alert, but since I know this alert has some weird timing feature with respect to its thresholds, I will just tag PSL.

Big thanks to Erik and Sheila who helped a lot with troubleshooting.

LOG:

None

H1 CDS
erik.vonreis@LIGO.ORG - posted 18:24, Tuesday 14 January 2025 (82284)
H1OMC models were reverted

The changes in 82263 were reverted.

H1OMC models were reverted to version 5.30 of the RCG, which uses the linear ramp. The IFO was consistently losing lock after a filter ramp-down.

H1 CDS
david.barker@LIGO.ORG - posted 17:12, Tuesday 14 January 2025 - last comment - 10:43, Wednesday 15 January 2025(82282)
CDS Maintenance Summary: Tuesday 14th January 2025

WP12272 h1omc0 new RCG, quadratic filter ramping

Erik, Dave:

h1omc0 models were built against RCG 5.31, which introduces quadratic smoothing to ramped filter switching. All the models running on this frontend have the new RCG (h1iopomc0, h1omc, h1omcpi).

The overview was modified to show that h1omc0 has a different RCG than h1susex by colour-coding the RCG version: dark blue = 5.31 (quadratic filter ramp and variable duotone frequency), light blue = 5.30 (LIGO DAC), and green = 5.14 (standard).

WP12274 h1guardian1 reboot

TJ, Erik:

TJ rebooted h1guardian1 to reload all the nodes. The hope is that this will eliminate the leap-second warnings we have been seeing on certain nodes.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 10:43, Wednesday 15 January 2025 (82295)

Tue14Jan2025
LOC TIME HOSTNAME     MODEL/REBOOT
08:06:21 h1omc0       h1iopomc0   <<< Install RCG5.31
08:06:35 h1omc0       h1omc       
08:06:49 h1omc0       h1omcpi     


17:30:10 h1omc0       h1iopomc0   <<< Revert back to RCG5.30
17:30:24 h1omc0       h1omc       
17:30:38 h1omc0       h1omcpi     

H1 TCS
camilla.compton@LIGO.ORG - posted 17:10, Tuesday 14 January 2025 (82281)
Added adjustable fiber collimator to HWS ETMY, SLED left off.

TJ, Camilla WP12277.

Started at EY the work done on the EX HWS in 81734. Swapped the fiber collimator for a CFCS11-A adjustable SMA fiber collimator. Still need to swap to a 50um fiber, remove HWS-L3, and realign. SLED left off.

Images attached to this report
H1 TCS
camilla.compton@LIGO.ORG - posted 17:03, Tuesday 14 January 2025 - last comment - 15:16, Thursday 16 January 2025(82262)
CO2Y RIN at CW and PWM

Repeated 82151, with the H1:IOP-OAF_L0_MADC{2,3}_TP_CH{10-13} 65 kHz channels on CO2Y. WP#12261.

Plots attached of the DC and AC out channels. These signals are straight from the PD in counts, before the filtering to undo the D1201111 pre-amp listed in 81868. PWM is at 5kHz, as can be seen in the spectrum.

Strangely, when adding cursors to the "laser on CW and PWM @50% signals" plot, the DC channel shows a factor-of-30 difference but the AC channel shows a factor-of-5 difference. We would expect these to be the same.
Also, the 95% PWM signal has lower broadband noise than the 25% and 50% PWM signals on the DC channel but higher on the AC channel.
  • CW = 3880 counts, 49.1W laser power ~ RIN 1.5e-5 but limited by electronics noise. 
  • @50% PWM = 3500 counts, 30.0W laser power ~ RIN 5e-5
  • @25% PWM = 1800 counts, 14.4W laser power 
  • @95% PWM = 3880 counts, 46.7W laser power 
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 15:16, Thursday 16 January 2025 (82319)

I misread the graph; for CW 100%:

  • CW = 3880 counts, 49.1W laser power ~ RIN 1.5e-6 but limited by electronics noise (6e-3/3880). This is higher than the 6e-7 measured in 81863.
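
A minimal sketch of the arithmetic behind that corrected number (all values quoted from the entry):

    asd_counts = 6e-3   # electronics-noise-limited ASD in counts/rtHz
    dc_counts = 3880    # DC level of the PD signal in counts
    print(asd_counts / dc_counts)   # ~1.5e-6 /rtHz, cf. 6e-7 in alog 81863
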
H1 General
ryan.crouch@LIGO.ORG - posted 16:31, Tuesday 14 January 2025 (82274)
OPS Tuesday day shift summary

TITLE: 01/14 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Currently relocking, we just lost lock at LOWNOISE_ESD_ETMX.
LOG:                                                                                                                                                                           

Start Time  System  Name  Location  Laser_Haz  Task  End Time
22:08 OPS LVEA LVEA N LASER SAFE 16:01
15:48 FAC Kim, Nelly EX n Tech clean 16:46
16:01 FAC Chris XARM n Big Green versus tumbleweeds, finished at 23:00 17:02
16:16 CAL Sheila CR n IM4 trans cal check 17:01
16:17   Camilla, Mitchell LVEA (WB) n Looking for parts 16:24
16:31 VAC Janos, Jordan, Travis, JC MX, EX n Air supply replacement 22:36
16:39 PSL Mayank, Sivananda, Rick, Rahul, Keita LVEA (H2 PSL+tour) n Grabbing parts (Keita out 18:21) 18:48
16:40 PCAL Tony PCAL Lab y(local) Preparing stuff to ship 17:31
16:47 FAC Kim, Nelly EY n Tech clean 18:09
16:48 EE Fil CER n Checking for OMC0 necessities 17:05
16:52   Christina OSB Receiving n Forklifting stuff into the bins 18:01
17:04 FAC Chris, pest control LVEA, EX, MX, EY, MY, FCES n Pest control 20:11
17:05 EE Fil EX, EY YES, n Checking all racks 19:56
17:26 FAC Eric, contractor LVEA n Patching wall holes 19:28
17:36 PCAL Francisco PCAL Lab y(local) Grabbing stuff for PCAL meas 17:52
17:45 PEM RyanC EX, FCES n Checking dust monitors 18:31
17:49 PCAL Francisco EX YES PCAL measurements 20:02
18:09 FAC Kim, Nelly LVEA n Tech clean 19:08
18:11 TCS Camilla LVEA n Adjusting CO2 power 18:22
18:30 SEI Jim CR n Testing filters on BSCs 20:26
18:34 IAS RyanC LVEA n Setting up FARO for next week 18:47
18:37 TCS Camilla LVEA n CO2 laser work 19:10
18:52 PCAL Rick, Mayank, Sivananda EX YES Tour 20:01
19:11 HWS TJ, Camilla EY n Adding new collimator to HWS 20:20
19:41 VAC Janos, JC EY, MY, LVEA YES Fitting new exhaust filters for roughing pipes 23:53
20:02 PCAL Francisco PCAL Lab y(local) Dropping stuff off 20:32
20:07 PCAL Rick, Sivananda, Mayank PCAL Lab y(local) tour 20:59
20:37 EE Fil LVEA n Rack checks 22:29
20:47 SEI Jim CR n Tests on ETMX 22:27
21:06 OPS Camilla LVEA YES Transitioning LVEA to HAZARD 21:27
21:27 TCS Camilla, TJ LVEA Y HWS table work 22:19
22:19 OPS Camilla LVEA Y -> N SAFE transition 22:26
22:46 TCS Camilla LVEA N TCSY adjustment 22:53
23:10 EE Daniel LVEA N HAM1 investigation 23:32
H1 SEI
jim.warner@LIGO.ORG - posted 16:17, Tuesday 14 January 2025 (82279)
Changes to SEI LARGE_EQ state

During one of the large EQs over the weekend, all of the BSC-ISIs tripped. I checked one of the chambers, and the trip was due to a large low-frequency drive railing the stage 2 horizontal actuators. The earthquake was big enough that SEI_ENV went to LARGE_EQ, though well after the IFO lost lock and also after the ISIs started tripping. In SEI_ENV I've reduced the threshold on the peak mon to 6000 (down from 10000), about 1.5x the largest EQ we've ever ridden out. I've also changed the stage 2 blends used in this state to some 1.5 Hz blends, which roll off the GS13s at low frequency much more aggressively. I don't think these changes would have been enough to prevent the ISI trips for this particular earthquake; the ISIs started tripping while the peak mon was around 4000, so there must have been a large amount of ground motion that was out of band for that channel.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:03, Tuesday 14 January 2025 (82278)
OPS Eve Shift Start

TITLE: 01/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 0mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY:

IFO is LOCKING at MOVE_SPOTS

The LN2 truck has yet to arrive, so it may cause a lockloss when it does.

H1 ISC (OpsInfo)
camilla.compton@LIGO.ORG - posted 15:50, Tuesday 14 January 2025 - last comment - 17:26, Tuesday 14 January 2025(82277)
Re-added DARM boost that was removed because of glitch after CDS filter ramping changes

In 81638 we removed a DARM1 FM1 boost because a glitch when ramping it off was causing locklosses during ESD transitions in preparation to switch back to ETMX. Today the CDS team updated the H1OMC0 models with improved filter ramping: 82263. We hope this will allow us to keep the boost on, which gives us more range against high microseism while relocking (it's always off by NLN).

Uncommented line 3058 of ISC_LOCK.py and reloaded: ezca.get_LIGOFilter('LSC-DARM1').turn_on('FM1'). If we have locklosses at ISC_LOCK state 557 or 558, the operator can re-comment this line out. Tagging OpsInfo.
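
The turn_on line itself is quoted above; here is a sketch of how it sits in the guardian code (the surrounding comments are illustrative, not the actual ISC_LOCK.py context):

    # ISC_LOCK.py, ~line 3058: DARM1 boost on while relocking (off by NLN).
    # Re-comment this line and reload the node (e.g. via the LOAD button on
    # the guardian MEDM screen) if locklosses recur at states 557/558.
    ezca.get_LIGOFilter('LSC-DARM1').turn_on('FM1')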

Comments related to this report
camilla.compton@LIGO.ORG - 16:56, Tuesday 14 January 2025 (82280)

Sheila, Camilla, Erik

We lost lock 12 s after this DARM1 FM1 filter was turned off; we're not sure if the filter changes are the cause. We are trying to relock again.

We think we were a little confused and had been turning on FM1 the whole time, as it was still being turned on in PREP_DC_READOUT_TRANSITION. Unsure if it was just luck that the glitch disappeared when we made the change on Dec 5th. Will look into it more...

ibrahim.abouelfettouh@LIGO.ORG - 17:26, Tuesday 14 January 2025 (82283)

Re-commented out line 3058 today at 1:25 UTC after losing lock at the same state, LOWNOISE_ESD_ETMX (558).

H1 ISC (CAL, CDS, ISC)
jeffrey.kissel@LIGO.ORG - posted 15:34, Tuesday 14 January 2025 (82268)
Data in OMC DCPD Test Point Readbacks Limited by Single Precision above 7 kHz when IFO is in Nominal Low Noise (and NOT limited anywhere by ADC Noise)
J. Kissel, T. Sanchez, E. Goetz, L. Dartez

*EDIT* This limitation only applies to pulling out data with test points from the A0/B0 filter outputs, because those are recorded in single precision, not double precision. The actual data for all internal calculations is double precision, and in fact the final calibrated gravitational wave strain is both calculated in, and then stored in frames as, double precision.

Back in July 2024, I started to characterize the super-16 kHz-Nyquist-frequency data off the OMC DCPDs with the live 524 kHz channels. See LHO:78516 for the whole story, but we got stalled when we ran into what we believed was some sort of single-precision numerical noise, limiting at the equivalent of 1e-12 [A/rtHz] DCPD current or 1e-6 [V/rtHz] ADC voltage.

In LHO:78559, we ruled out single-precision calculation of the ASD when we ran the same DCPD signal through a special version of DTT which uses double precision in its pwelch algorithm. That version of the data proved that the high-frequency limitation is still there, and is NOT the precision of DTT.
In that same data set, we also showed that asking DTT to remove the mean, i.e. the large DC component of the signal, also did NOT have any impact on this limit.

And it's in removing the mean that we reveal / confirm that it is a single-precision limit in the test point readbacks.
Check out the 1st attachment, which is the data from LHO:78559 but with no DTT calibration applied. That means the channels are calibrated into the units of whatever comes off of the front end -- in this case milliamps, or [mA].

The DC value of the test point channel during the time of measurement was ~20 [mA].

The front end computes all its filtering in double precision, but the readbacks of the products of those calculations are single precision. An IEEE 754 32-bit base-2 floating-point (single precision) channel has 24 bits of significand (excluding the sign and exponent) to hold the entire frequency-dependent content of a time series that has a DC value of ~20 [mA]. The (front-end filtering algorithm?) rounds the 20.43 [mA] DC component up to the nearest 2^n value, i.e. 2^5 = 32 ["mA"].
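
A quick numerical check of that step (numpy only; np.spacing gives the ULP, the least-significant-bit step size at a given value):

    import numpy as np
    delta = np.spacing(np.float32(20.43))   # single-precision ULP near 20.43
    print(delta, 32 / 2**24)                # both 1.9073486e-06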

Eq. 2.2 of the Liquid Instruments article on quantization noise suggests that the amplitude spectral density of 1 bit spread across the 0 to 2^18 Hz (f_Nyquist) frequency range over which we care is
    n_{ASD,RMS} = sqrt( DELTA^2 / (12 * f_Nyquist) )
                = DELTA * sqrt( 1 / (12 * f_Nyquist) )
where 
    - DELTA is the minimum step resolution (i.e. the peak value / number of significant bits),
    - the factor of 1/12 comes from the expectation value of the noise power, derived from the integral of the product of the instantaneous noise power and the probability that that power is distributed across one (specifically the least significant) bit
        (and we use the square root because we want the amplitude, not the power),
    - the factor of 1/f_Nyquist comes from spreading out the (presumably frequency-independent) power over the entire frequency range
        (and again, we use the square root because we want the amplitude, not the power).

In line-by-line math, that's
     [[ 32 ["mA"_DC]                peak value
        * (1/2^24)                  significant bits in single precision ]]
     * [[ 1 bit
        * (1/sqrt(12))              expectation value of quantization noise amplitude spread across 1 bit
        * (1/sqrt(2^18 Hz))         quantization noise spread across the 0-to-Nyquist frequency range ]]

     = [ 32 / (2^24) ] * sqrt[ 1 / (12 * 2^18) ]

     = 1.07539868e-9 ["mA"/rtHz]
 
This number is *exactly* the high-frequency asymptote we see. BINGO!
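
A minimal numerical sketch of the same floor (numpy/scipy only; the 20.43 [mA] DC value and 524 kHz rate are from the entry, and the small added dither is an assumption so the quantization error is well randomized):

    import numpy as np
    from scipy.signal import welch

    fs = 2**19                                     # 524 kHz test-point rate
    x = 20.43 + 5e-6*np.random.randn(fs)           # 1 s of ~20 mA DC + tiny noise [mA]
    x32 = x.astype(np.float32).astype(np.float64)  # round-trip through single precision

    f, pxx = welch(x32 - x, fs=fs, nperseg=2**14)  # PSD of the quantization error
    print(np.sqrt(pxx[10:].mean()))                # ~1.1e-9 mA/rtHz, cf. 1.075e-9 derived above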

The next step was then to create pick-off paths of the ADC channels and high-pass them -- i.e. remove the large DC component of the signal -- in the front end. Importantly, this has to be the *first* filter, so the DC component is removed before any other calculation is done. We added the infrastructure and installed the filtering (LHO:78956, LHO:78975), but had not come back to the data until today.

Today, we're finally looking at the front-end OMC DCPD data that's been high-pass filtered with a 5th-order, 1 Hz high-pass with 40 dB stop-band attenuation and 1 dB ripple:
    ellip("HighPass",5,1,40,1)
The odd (5th) order means the DC component is completely suppressed, leaving only the remaining frequency-dependent accumulated RMS, which is 5.8901e-4 [mA_RMS], to set the upper limit of the dynamic range and define the precision limit.

SIDE QUEST -- Fractional numbers are much less intuitive to "just round up to the nearest power of 2^n," so the equivalent of "converting 20.43 [mA] to 32 ["mA"]" for 5.8901e-4 is instead a process of:
    Converting 5.8901e-4 to floating point binary, 
        # sign exponent fraction
          0 01110100 00110100110110111110111
        # round up the fraction part
          0 01110100 01000000000000000000000
        # convert back to decimal
        0.00061035156
That takes the quantization limit down to 
     = [ 6.1035156e-4 / (2^24) ] * sqrt[ 1 / (12 * 2^18) ] 
     = 2.05e-14 ["mA"/rtHz]
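
The SIDE QUEST rounding can be checked numerically too (standard-library struct plus numpy; all values quoted from above):

    import struct
    import numpy as np

    bits = struct.unpack('>I', struct.pack('>f', np.float32(5.8901e-4)))[0]
    print(f"{bits:032b}")    # sign | 8-bit exponent | 23-bit fraction

    # Round the fraction up to 01000...0 as in the SIDE QUEST:
    up = (bits & 0xFF800000) | 0x00200000
    print(struct.unpack('>f', struct.pack('>I', up))[0])   # 0.0006103515625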


Take a look at the 2nd attachment, which compares the normal nominal OMC DCPD data to this 1 Hz high-passed data. One can now see all of the AA-filtered data all the way out to the 262 kHz Nyquist frequency, because that data only goes as low as 1e-13 [mA].

Finally, in the last attachment, we re-cast this into ADC noise units to show where the data lies against the trace we'd used as a benchmark before. Note *this* comparison is a false comparison, because the digital AA filtering is applied after the ADC noise is added. So don't read anything into the fact that the resolved noise goes below the ADC noise -- it's just a guide to the eye and a benchmark to remind folks that the numerical precision limit is *not* ADC noise.

In conclusion, if we want to investigate the OMC DCPD data above 7 kHz with test points, we need to make sure to use a version where the data is high-passed significantly.
So, now we can actually begin doing that...
Images attached to this report
H1 SEI
thomas.shaffer@LIGO.ORG - posted 15:26, Tuesday 14 January 2025 (82276)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS26026

Last week's report - alog82184. All spectra look good to me and agree with last week's report.


Non-image files attached to this report
H1 TCS
thomas.shaffer@LIGO.ORG - posted 15:21, Tuesday 14 January 2025 (82275)
TCS chiller sock filters replaced

FAMIS31405

I replaced both sock filters for fresh ones and inspected the radiator air filters.

H1 GRD (CDS)
thomas.shaffer@LIGO.ORG - posted 14:53, Tuesday 14 January 2025 (82273)
h1guardian1 machine reboot and point back at nds0

WP12274

FAMIS28946

We rebooted the h1guardian1 machine today for 3 things:

  1. Point the machine back at nds0 as the primary nds server
    • I noticed the other day that guardian was still defining the chosen NDS servers with nds1 as primary and nds0 as secondary. I'm not entirely sure when this was changed, but maybe 2 years ago (alog66834).
    • This was done by changing the NDSSERVER definition in the /etc/guardian/local-env file (see the sketch after this list).
  2. Relieve any stale processes that might have latched the GPS leap-second data.
  3. Quarterly machine reboot FAMIS task
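
A sketch of the kind of line changed in /etc/guardian/local-env (the NDSSERVER variable name is from the entry; the exact hostnames and port are assumptions):

    # /etc/guardian/local-env -- nds0 listed first so it is primary
    NDSSERVER=h1nds0:8088,h1nds1:8088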

All 168 nodes came back up and Erik confirmed that nds0 was seeing the traffic after the machine reboot.

Displaying reports 3621-3640 of 83537.Go to page Start 178 179 180 181 182 183 184 185 186 End