Reports until 14:47, Wednesday 19 November 2025
H1 DAQ
jonathan.hanks@LIGO.ORG - posted 14:47, Wednesday 19 November 2025 (88177)
WP 12886 continued, work on h1daqdc0

Jonathan and Yuri

This is a continuation of WP 12886.  This is the last physical step in the control room/MSR for the reconfiguration.  We ran a fiber from the warehouse patch panel to h1daqdc0 and replaced the network card with a newer card.  We will be renaming h1daqdc0, likely to h1daqkc0 (daqd kafka connector), and converting it to send the IFO data into LDAS as part of the NGDD project.

Next steps for this work are to work on the software setup and to install the corresponding fiber patch in the LDAS room.

 

H1 ISC
elenna.capote@LIGO.ORG - posted 14:22, Wednesday 19 November 2025 (88176)
Checking LSC and ASC safe SDFs

Checking the safe SDF settings for the LSC and ASC model.

ASC model not-monitored channels are here. No outstanding SDF diffs while unlocked.

LSC model not-monitored channels are here. Also no outstanding SDF diffs while unlocked.

ASCIMC has no unmonitored channels. No safe diffs with IFO unlocked.

Images attached to this report
H1 PEM
ryan.crouch@LIGO.ORG - posted 13:18, Wednesday 19 November 2025 (88174)
New dust monitor Huddle Test results

I've run a few different huddle tests with the new TemTop dust monitors against 3 different MetOne GT521s. The first tests I ran in the Optics Lab and Diode Room showed some discrepancies between the spike times, but that could be due to a difference in sample times between the two dust monitors. The PMS21 and GT521s both sample for 60 seconds then hold for 300 s, but the PMD331 does not have a built-in hold time, so it samples continuously or manually. The PMS21 usually reads an order of magnitude or two lower than the other two, which makes me wonder if it needs a scale factor, but it also occasionally sees big spikes that the others don't see, which is confusing. The flow rate is listed as 0.1 CFM on the PMS21, and the GT521s and the PMD331 are also listed as having flow rates of 0.1 CFM. CFM = cubic feet per minute; 0.1 CFM is also equal to 2.83 L/min, which is what I read when running the flow test on all of the DMs. *Also, the times do not account for daylight savings, so each y-axis timestamp is actually an hour behind the actual PST.*
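As a quick sanity check on the flow-rate conversion quoted above, using only the numbers from this entry:

```python
# Quick check of the flow-rate conversion quoted above.
LITERS_PER_CUBIC_FOOT = 28.3168

flow_cfm = 0.1  # listed flow rate for the PMS21, GT521s, and PMD331
flow_lpm = flow_cfm * LITERS_PER_CUBIC_FOOT  # cubic feet/min -> liters/min

print(f"{flow_cfm} CFM = {flow_lpm:.2f} L/min")  # ~2.83 L/min, matching the flow test
```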

Test 1 - Optics Lab:

I tested the TemTops one at a time in the Optics Lab; see the results of the PMD331 and the results of the PMS21.

Test 2 - Diode Room:

I tested the TemTops at the same time in the Diode Room; I started off with only the PMS21, then I added the PMD331. For only the PMS21 we saw these counts, for only the PMD331 we saw these counts, and for both of them we saw these counts, all against a MetOne GT521s.

Test 3 - Control Room:

I tested three dust monitors at the same time in the control room (I grabbed a spare pumped GT521s from the storage racks by OSB receiving; it's our last properly working spare). I did a day of samples with holds enabled and a day of continuous sampling. When the dust monitors were sampling at slightly different intervals we saw the peaks offset from each other, such as at 11-18 ~07:00 PST at the right of the plot. During the continuous testing we can see the peaks from everyone coming into the control room for the end-of-O4 celebration; I'm not sure why there's some time between the peaks. Zooming in on this plot to cut out the large peaks from said celebration, we can see the PMD331 and the MetOne GT521s following each other pretty closely, but the PMS21 wasn't really reading much; there are small bumps around where the peaks from the other DMs are. Adding a scale factor of 10 to the PMS21 counts yields a better-looking plot; playing with the scale factor until the PMS21 counts looked more in line with the other DMs, I got to a scale factor of 40, giving this plot.
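Rather than tuning the scale factor by eye, one can fit a single multiplicative correction between the PMS21 and a reference GT521s trace; a minimal sketch with made-up count arrays (the real data would come from the exported dust monitor trends):

```python
import numpy as np

# Hypothetical per-sample particle counts; in practice these would come from
# the exported dust monitor trends.
gt521s_counts = np.array([120., 150., 900., 300., 180.])
pms21_counts  = np.array([  3.,   4.,  22.,   8.,   5.])

# Find the single scale factor that best matches the PMS21 to the GT521s
# in a least-squares sense, then compare it to the hand-tuned values (10, 40).
scale = np.dot(pms21_counts, gt521s_counts) / np.dot(pms21_counts, pms21_counts)
print(f"best-fit scale factor: {scale:.1f}")
print("scaled PMS21 counts:", scale * pms21_counts)
```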

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 13:08, Wednesday 19 November 2025 (88175)
FDS SQZ angle scan data set

In breaks in the PEM work this morning I improved the script I used last night so that I don't have to do as many things by hand.  The script is in sheila.dwyer/SQZ/automate_dataset/SQZ_ANG_stepper.py; I'll try to clean it up a bit more and add it to userapps soon.

It's running now and will take 5 minutes of no-sqz time, then do the steps needed to measure nonlinear gain (it doesn't give a result, but it records times that can be looked at later).  Then it takes angle steps: 10 degree steps on the steep side of the ellipse and 30 degree steps on the shallow side of the ellipse (where SQZ angle changes slowly with demod angle).

This current version should take 1 hour to take 3 minutes of data at each point, or 1.5 hours for 4 minutes of data.

It's now running for FDS with nominal psams settings; this can be aborted if PEM wants to do injections before it's over.
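For reference, a minimal sketch of what the angle-stepping loop can look like, using pyepics and a placeholder channel name; this is not the actual SQZ_ANG_stepper.py, which also handles the NLG measurement steps and guardian state changes:

```python
import time
from epics import caget, caput  # pyepics; the real script may use ezca/guardian utilities

PHASE_CHAN = 'H1:SQZ-CLF_DEMOD_PHASE'  # placeholder channel name, not the real one
DWELL = 3 * 60                          # seconds of data at each angle

def step_angles(steep_step=10, shallow_step=30, dwell=DWELL):
    """Step the demod phase in 10 deg steps on the steep side of the ellipse
    and 30 deg steps on the shallow side, recording a time for each step so
    the data can be cut up later."""
    start = caget(PHASE_CHAN)
    times = []
    # steep side: fine steps; shallow side: coarse steps (offsets are illustrative)
    offsets = list(range(0, 60, steep_step)) + list(range(60, 180, shallow_step))
    for off in offsets:
        caput(PHASE_CHAN, start + off)
        times.append((start + off, time.time()))
        time.sleep(dwell)
    caput(PHASE_CHAN, start)  # put the demod phase back when done
    return times
```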

LHO VE
david.barker@LIGO.ORG - posted 10:39, Wednesday 19 November 2025 (88173)
Wed CP1 Fill

Wed Nov 19 10:09:29 2025 INFO: Fill completed in 9min 25secs

 

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 09:20, Wednesday 19 November 2025 (88171)
h1tcshwssdf restarted to remove duplicate channels

FRS36084 Dave, TJ:

TJ discovered that some HWS SLED settings were being monitored by multiple slow-SDF systems (h1syscstcssdf and h1tcshwssdf). The former's monitor.req is computer generated; the latter's is hand-built.

There were 4 duplicated channels: H1:TCS-ITM[X,Y]_HWS_SLEDSET[CURRENT,TEMPERATURE]

I removed these channels from tcs/h1/burtfiles/h1tcshwssdf_[monitor.req, safe.snap, OBSERVE.snap] and restarted the pseudo-frontend h1tcshwssdf on h1ecatmon0 at 09:00 Wed 19nov2025 PST. The number of monitored channels dropped from 64 to 60 as expected.

I did a full scan of all the monitor.req files and confirmed that there were only these 4 channels duplicated.

I'll add a check to cds_report.py to look for future duplications.
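A minimal sketch of the kind of duplicate-channel scan this could be; it assumes the monitor.req files are plain channel lists, one name per line, and the search path is only illustrative:

```python
from collections import Counter
from pathlib import Path

def find_duplicate_channels(req_files):
    """Return channels that appear in more than one monitor.req file."""
    counts = Counter()
    for req in req_files:
        # Assume one channel name per line, ignoring blanks and comments.
        chans = {line.strip() for line in Path(req).read_text().splitlines()
                 if line.strip() and not line.startswith('#')}
        counts.update(chans)
    return {chan: n for chan, n in counts.items() if n > 1}

# Illustrative usage: scan every monitor.req under a userapps-style tree.
req_files = Path('/opt/rtcds/userapps').rglob('*monitor.req')
for chan, n in sorted(find_duplicate_channels(req_files).items()):
    print(f"{chan} is monitored by {n} SDF systems")
```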

LHO General
thomas.shaffer@LIGO.ORG - posted 07:36, Wednesday 19 November 2025 - last comment - 09:34, Wednesday 19 November 2025(88168)
Ops Day Shift Start

TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.39 μm/s 
QUICK SUMMARY: Locked for 13 hours, but it looks like we only got into observing 45 min ago due to an SDF diff. Lights are on at MY, and the Y_BT AIP is still in error as mentioned in Ryan's log. The plan for today is to continue PEM characterization.

Comments related to this report
david.barker@LIGO.ORG - 09:34, Wednesday 19 November 2025 (88172)

Because the CDS WIFIs will always be on from now onwards, I've accepted these settings into the CDS SDF.

H1 SQZ
sheila.dwyer@LIGO.ORG - posted 06:49, Wednesday 19 November 2025 (88167)
sqz data set FIS, nominal psams, script left running

unamplified seed: 0.0055, amplified seed: 0.052, deamplified: 0.00073, NLG = 9.45; seems too small

no sqz: 1447564297 - 1447565492
reduced seed, adjusted temp: amplified max: 0.00547, minimum: 8.3e-5, unamplified: 2.54e-4, NLG = 21.5
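The NLG values above are consistent with the ratio of amplified to unamplified seed level; a quick check with the numbers from this entry:

```python
# NLG as the ratio of amplified seed level to unamplified seed level,
# using the numbers quoted above.
print(0.052 / 0.0055)      # ~9.45  (first measurement, "seems too small")
print(0.00547 / 2.54e-4)   # ~21.5  (after reducing the seed and adjusting temp)
```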

FIS, ran scan sqz angle kHz, CLF 6 demod phase left at 149.7 degrees. 1447566037

I started the script to scan squeezing angles with FIS, Ryan Short gave me a pointer on how to have my script change guardian states (puts SQZ ANGLE SERVO to IDLE now).  It is set up to try the current angle +/-10 degrees, run through a bunch of angles in 20 degree steps, flip the CLF sign and run through angles again.  When finished it should request frequency dependent squeezing again and Ryan has set things up so the IFO should go to observing when that happens.  

The script ran and completed, but the IFO didn't go to observing when it completed because I forgot to turn off the pico controller after I lowered the seed.  Here is the log of times:

log of changes (angle [deg] : GPS time):
149.7 : 1447566755.0 
159.7 : 1447566995.0 
139.7 : 1447567235.0 
200.0 : 1447567475.0 
180.0 : 1447567715.0 
160.0 : 1447567956.0 
140.0 : 1447568195.0 
120.0 : 1447568436.0 
100.0 : 1447568676.0 
80.0 : 1447568916.0 
60.0 : 1447569156.0 
40.0 : 1447569396.0 
20.0 : 1447569636.0 
0.0 : 1447569876.0 
180.0 : 1447570116.0 
160.0 : 1447570356.0 
140.0 : 1447570596.0 
120.0 : 1447570837.0 
100.0 : 1447571076.0 
80.0 : 1447571317.0 
60.0 : 1447571557.0 
40.0 : 1447571797.0 
20.0 : 1447572037.0 
0.0 : 1447572277.0 

 

Images attached to this report
LHO General (VE)
ryan.short@LIGO.ORG - posted 21:59, Tuesday 18 November 2025 (88166)
Ops Eve Shift Summary

TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
SHIFT SUMMARY:

H1 had a lockloss from an unknown source early on in my shift, but after an alignment, relocking went easily. Since then, I've been tuning magnetic injection parameters to be used later this week. Also had an alarm for an annulus ion pump at the Y-arm BTM, so I phoned Travis as I was unsure how critical this was. He took a look into it and believes addressing this can wait until morning.

I've set the remote owl operator so that H1 will relock overnight if it unlocks for some reason so that it's ready to go in the morning, but no calls will be made for help. H1 will also start observing, if it can, on the off-chance there's a potential candidate signal.

Sheila has started a script to change the SQZ angle while in frequency independent squeezing, which should take about 90 minutes and will put settings back to nominal when done, at which point H1 should go into observing.

H1 DAQ
jonathan.hanks@LIGO.ORG - posted 17:48, Tuesday 18 November 2025 (88165)
WP 12886 Reconfigure daqd systems on the DAQD 0 leg for ECR E2500314

Jonathan, Dave, Erik, Richard,

As per WP 12886 we reconfigured the DAQD 0 computers.  The goal was to combine the functionality of the data concentrator (DC0) and frame writer (FW0) into one machine.  This frees up the other machine for use with pushing data into the NGDD (Next Generation Data Delivery) kafka brokers in LDAS.

At this point h1daqdc0 is no longer the data concentrator; h1daqfw0 is now both the data concentrator and the frame writer.

This required a few other changes:

Once this work was done and h1daqfw2 was shown to be producing identical frames to h1daqfw[01], work was able to move on to the consolidation.

The basic steps:

The H1:DAQ-DC0 EPICS variables are used in many places, so Dave and Jonathan configured h1daqfw0 to output the H1:DAQ-DC0 EPICS variables, and put a small IOC together to output the set of H1:DAQ-FW0 variables we need.  This is an area we need to revisit.  One likely approach is to make use of a feature in cps_recv that outputs the daqd STATUS, CRC_CPS, CRC_SUM variables and then to move the daqd on FW0 back to outputting the FW0 variables.
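For illustration only, a small soft IOC of the kind described here could look something like the following, assuming pcaspy; the actual implementation and exact channel names may differ:

```python
from pcaspy import SimpleServer, Driver  # assumption: a pcaspy-based soft IOC

prefix = 'H1:DAQ-FW0_'                   # illustrative prefix/channel names
pvdb = {
    'STATUS':  {'type': 'int', 'value': 0},
    'CRC_CPS': {'type': 'int', 'value': 0},
    'CRC_SUM': {'type': 'int', 'value': 0},
}

class Fw0Driver(Driver):
    """Serves the handful of FW0 variables that downstream tools expect."""
    def __init__(self):
        super().__init__()

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = Fw0Driver()
    while True:
        server.process(0.1)   # handle Channel Access requests every 100 ms
```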

This work validates that we can combine the DC and FW computers into one.

The next step is to turn the old DC machine into the producer for NGDD data.  We will run the fiber to connect it through to LDAS later this week and work on setting that system up.
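As a rough illustration of the producer side (broker address, topic name, and payload format are all placeholders, not the actual NGDD configuration):

```python
from kafka import KafkaProducer  # kafka-python, used here purely for illustration

# Placeholder broker and topic; the real NGDD configuration will differ.
producer = KafkaProducer(bootstrap_servers='ldas-kafka.example:9092')

def publish_block(topic: str, gps_start: int, data: bytes) -> None:
    """Publish one block of IFO data, keyed by its GPS start time."""
    producer.send(topic, key=str(gps_start).encode(), value=data)
    producer.flush()

publish_block('H1_raw', 1447566037, b'...frame or data block payload...')
```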

My checklist for this is in the DCC https://dcc.ligo.org/T2500385

LHO General
ryan.short@LIGO.ORG - posted 15:46, Tuesday 18 November 2025 (88164)
Ops Eve Shift Start

TITLE: 11/18 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 13mph Gusts, 9mph 3min avg
    Primary useism: 0.06 μm/s
    Secondary useism: 0.47 μm/s 
QUICK SUMMARY: H1 has been locked for almost 2 hours and team PEM is running measurements.

LHO General
thomas.shaffer@LIGO.ORG - posted 15:42, Tuesday 18 November 2025 (88152)
Ops Day Shift End

TITLE: 11/18 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Happy end to O4! After 4 hours of maintenance today, we relocked mostly automatically with one hiccup in the SQZ_FC guardian, which was using a DC0 GPS channel that temporarily didn't exist due to the 0-leg data concentrator work that is still ongoing. We have been locked for almost 2 hours now and the PEM team is at work running end-of-run characterization.
LOG:

Start Time | System | Name | Location | Laser Haz | Task | Time End
15:37 | FAC | Randy | Yarm | n | BTE sealing MY -> CS | 22:54
16:00 | VAC | Norco | EY | n | LN2 fill | 18:51
16:20 | VAC | Norco | MY | n | LN2 fill CP3 - fill complete | 20:46
16:20 | FAC | Erik | EX | n | Glycol inspect | 16:45
16:34 | CDS | Fil | LVEA | n | JAC cable pull from CER to HAM1 | 20:01
16:35 | SUS | Ryan C | CR/EY/EX | n | SUS oplev charge meas. | 18:51
16:36 | FAC | Tyler | CS, Mids | n | 3IFO checks | 18:06
16:39 | VAC | Travis, Jordan | EY | n | Purge air line work | 18:32
16:43 | VAC | Gerardo | CS | n | Run purge air | 19:16
16:47 | FAC | Nellie, Kim | LVEA | n | Tech clean | 18:12
16:54 | CDS | Marc | LVEA | n | Joining Fil in the cable pulling | 20:04
17:19 | PEM | Robert, Sam, Genevieve | LVEA | n | Move a shaker | 18:49
17:43 | CDS | Richard | LVEA | n | Check on the HAM1 cable pull | 18:05
17:47 | PROP | Christina | EY | n | Check at EY | 18:17
18:04 | VAC | Travis, Gerardo | EX | n | Turning on compressor | 19:20
18:12 | FAC | Nellie | EY | n | Tech clean | 19:12
18:12 | FAC | Kim | EX | n | Tech clean | 19:53
18:50 | PEM | Sam | LVEA | n | Taking pictures | 18:59
19:31 | FAC | Kim | FCES | n | Tech clean | 20:00
19:32 | VAC | Travis | EX | n | Turn compressor off | 19:51
19:49 | PSL | Jason | CR | n | Ref cav tweak | 20:00
20:01 | PEM | Robert | LVEA | n | Setup meas. | 20:46
20:16 | SUS | Oli | CR | n | IFO quiet time | 20:27
20:16 | PEM | Ryan S | CR | n | PEM mag inj | 22:05
20:28 | PEM | Sam | LVEA | n | More pictures | 20:31
21:04 | ISC | Daniel | CER | n | Picture | 21:08
H1 DetChar (DetChar)
emmanuelalejandro.avila@LIGO.ORG - posted 15:31, Tuesday 18 November 2025 (88163)
Data Quality Shift Report 2025-11-10 to 2025-11-10

Link to report here.

Summary:

H1 IOO
daniel.sigg@LIGO.ORG - posted 12:38, Tuesday 18 November 2025 (88162)
JAC electronics chassis

We started installing electronics chassis for JAC:

Still missing:

The necessary coax cables were also installed and terminated where possible.

H1 PSL
jason.oberling@LIGO.ORG - posted 11:59, Tuesday 18 November 2025 (88161)
PSL FSS RefCav Remote Beam Alignment Tweak

The RefCav Refl spot was off this morning and the TPD had been trending down the last several days, so I took the opportunity to tweak the beam alignment into it from the Control Room.  The ISS was ON for this work.

When I started the RefCav TPD was ~0.520 V and when I finished the TPD was ~0.548 V.  I then noticed the ISS was diffracting ~4.5%, which is higher than its norm, so I adjusted the ISS RefSignal to -2.01 V from -2.00 V.  This brought the diffracted power % down to around its normal value of ~4%.  At this diffraction % the RefCav TPD is now ~0.550 V and the PMC Trans is now ~106.9 W.

H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 11:00, Tuesday 18 November 2025 (88157)
OPLEV charge measurements, ETMX, ETMY

I took the OPLEV charge measurements for both of the ends this morning. I noticed that both of the scripts leave the L3 bias offset switch off when it should be on.

I ran into the same issue as I had previously, with ETMX constantly saturating no matter what amplitude I used. Jeff suggested I look at the biases and see if they're correct; the biases used for ETMX were [-9, -4, 0, 4, 9] and I changed them to [-8, -4, 0, 4, 8] in ESD_Night_ETMX.py, which stopped the overflows. The L3 bias offset on ETMX is -8.9 while we're DOWN; as soon as the bias is changed to -9 I started seeing overflows. Before I changed the bias offset list I was trying some different amplitudes and was getting overflows at what looked like ~10-15k/sec for everything. I Ctrl+C'd out of a measurement and saw that the overflows did not stop, which led me to check the L3 bias offset, which had not been correctly restored to -8.9 as it's supposed to be. After changing the L3 bias offset list in the measurement code I then had to fine-tune the amplitude, starting low. Amplitudes of 10000, 12000, and 20000 weren't enough; I ended up at 100,000, which gave me the best measurement I've seen in months. LR did still have some big error bars. The charge is high on LR and UR but looks low at UL and LL, and the recent trend actually looks to be decreasing; looking at the year-long trend it actually looks fairly stable considering the massive error bars on a lot of the measurements. Every quadrant is under +100 [V].
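For reference, the fix amounts to editing the list of bias values the measurement steps through (the variable name here is illustrative, not necessarily what ESD_Night_ETMX.py uses):

```python
# L3 bias values stepped through during the ESD charge measurement.
# biases = [-9, -4, 0, 4, 9]   # old list; the -9 endpoint caused constant overflows
biases = [-8, -4, 0, 4, 8]     # new list, which stopped the overflows
```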

 

ETMY also initially had some saturations, but reducing the gain by 20% stopped them and still yielded good coherence. ETMY's charge is slightly high; it's just under +100 [V] on 2/3 of the quadrants, and UR is the only quadrant below +/- 50 [V]. It has risen since May, but zooming out to the whole year it doesn't look to have much of a trend; May wasn't a great measurement.

Images attached to this report
H1 ISC
matthewrichard.todd@LIGO.ORG - posted 10:58, Tuesday 18 November 2025 (88155)
Summary Table of OMC Scan and Gouy Phase measurements

M. Todd, J. Wright, S. Dwyer


Here is my attempt to summarize as many as possible of the OMC scan measurements of the input beam overlap with the OMC mode, as well as the PRC and SRC Gouy phases, all at different thermal states.

Measurement | Time | Test Masses | CO2 [W] | Ring Heater (per segment) [W] | SR3 [W] | OM2 [W] | FOM | aLOG
OMC Scan - Single Bounce off of ITMY | 1443895154 | Cold | 0 | 0 | 0 | 0 | Mismatch = 8.3% | 87461
OMC Scan - Single Bounce off of ITMX | 1443894875 | Cold |  | 0.45 | 0 | 0 | Mismatch = 10.4% | 87461
OMC Scan - Single Bounce off of ITMY | 1443889943 | Cold | 1.7 | 0 | 0 | 0 | Mismatch = 10.3% | 87461
OMC Scan - Single Bounce off of ITMX | 1443894875 | Cold | 1.7 | 0.45 | 0 | 0 | Mismatch = 13.5% | 87461
OMC Scan - Single Bounce off of ITMY | 1431450536 | Cold | 0 | 0 | 5 | 0 | Mismatch = 7.6% | 85661
OMC Scan - Single Bounce off of ITMY | 1403543046 | Cold | 0 | 0 | 0 | 4.6 | Mismatch = 6.6% | 78701
OMC Scan - Single Bounce off of ITMX | 1431449762 | Cold | 0 | 0.45 | 5 | 0 | Mismatch = 9.6% | 85661
OMC Scan - Single Bounce off of ITMY | 1431474471 | Cold | 0 | 0 | 5 | 4.6 | Mismatch = 3.1% | 85698
OMC Scan - Single Bounce off of ITMX | 1431474101 | Cold | 0 | 0.45 | 5 | 4.6 | Mismatch = 5.1% | 85698
OMC Scan - Single Bounce off of ITMY | 1444515634 | Hot-ish | 1.7 | 0 | 0 | 0 | Mismatch = 7.1% | 87461
OMC Scan - Single Bounce off of ITMX | 1444515312 | Hot-ish | 1.7 | 0.45 | 0 | 0 | Mismatch = 8.9% | 87461
OMC Scan - SQZ Beam | 1446952255 | - | - | - | - | 4.6 | Mismatch = 6.8% | 88060
OMC Scan - SQZ Beam | 1447088389 | - | - | - | - | 0 | Mismatch = 2.8% | 88088
Gouy Phase - PRC | 1255227492 | Cold | ITMY = 0.9, ITMX = 0.8 | ITMY = 1.4, ITMX = 0.5 | 0 | 0 | OneWay Gouy Phase = 23.2 [deg] | 52504
Gouy Phase - PRC | 1354415805 | Cold | 0 | 0 | 0 | 0 | OneWay Gouy Phase = 20.7 [deg] | 66215
Gouy Phase - SRC | 1354410195 | Cold | 0 | 0 | 0 | 0 | OneWay Gouy Phase = 19.9 [deg] | 66211
Gouy Phase - SRC | 1255907203 | Cold | ITMY = 0.9, ITMX = 0.8 | ITMY = 1.4, ITMX = 0.5 | 0 | 0 | OneWay Gouy Phase = 25.5 [deg] | 52658
Gouy Phase - SRC | 1255829128 | Cold | ITMY = 0.9, ITMX = 0.8 | ITMY = 1.4, ITMX = 0.5 | 4 | 0 | OneWay Gouy Phase = 29 [deg] | 52641

 

H1 PSL
oli.patane@LIGO.ORG - posted 09:29, Thursday 13 November 2025 - last comment - 09:14, Wednesday 19 November 2025(88087)
IMC_LOCK stuck in FAULT due to FSS oscillation

During PRC Align, the IMC unlocked and couldn't relock due to the FSS oscillating a lot - PZT MON was showing it moving all over the place, and I couldn't even take the IMC to OFFLINE or DOWN due to the PSL ready check failing. To try and fix the oscillation issue, I turned off the autolock for the Loop Automation on the FSS screen and after a few seconds re-enabled the autolocking; then we were able to go to DOWN fine, and then I was able to relock the IMC.

TJ said this has happened to him and to a couple other operators recently.

 

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:07, Thursday 13 November 2025 (88090)OpsInfo

Took a look at this, see attached trends.  What happened here is the FSS autolocker got stuck between states 2 and 3 due to the oscillation.  The autolocker is programmed to, if it detects an oscillation, jump immediately back to State 2 to lower the common gain and ramp it back up to hopefully clear the oscillation.  It does this via a scalar multiplier of the FSS common gain that ranges from 0 to 1, which ramps the gain from 0 dB to its previous value (15 dB in this case); it does not touch the gain slider, it does it all in a block of C code called by the front end model.  The problem here is that 0 dB is not generally low enough to clear the oscillation, so it gets stuck in this State 2/State 3 loop and has a very hard time getting out of it.  This is seen in the lower left plot, H1:PSL-FSS_AUTOLOCK_STATE: it never gets to State 4 but continuously bounces between States 2 and 3; the autolocker does not lower the common gain slider, as seen in the center-left plot.  If this happens, turning the autolocker off then on again is most definitely the correct course of action.

We have an FSS guardian node that also raises and lowers the gains via the sliders, and this guardian takes the gains to their slider minimum of -10dB which is low enough to clear the majority of oscillations.  So why not use this during lock acquisition?  When an oscillation is detected during the lock acquisition sequence, the guardian node and the autolocker will fight each other.  This conflict makes lock acquisition take much longer, several 10s of minutes, so the guardian node is not engaged during RefCav lock acquisition.

Talking with TJ this morning, he asked if the FSS guardian node could handle the autolocker off/on if/when it gets stuck in this State 2/State 3 loop.  On the surface I don't see a reason why this wouldn't work, so I'll start talking with Ryan S. about how we'd go about implementing and testing this.  For OPS: In the interim, if this happens again please do not wait for the oscillation to clear on its own.  If you notice the FSS is not relocking after an IMC lockloss, open the FSS MEDM screen (Sitemap -> PSL -> FSS) and look at the autolocker in the middle of the screen and the gain sliders at the bottom.  If the autolocker state is bouncing between 2 and 3 and the gain sliders are not changing, immediately turn the autolocker off, wait a little bit, and turn it on again.

Images attached to this comment
jason.oberling@LIGO.ORG - 09:14, Wednesday 19 November 2025 (88170)

Slight correction to the above.  The autolocker did not get stuck between states 2 and 3, as there is no path from state 3 to state 2 in the code.  What's happening is the autolocker goes into state 4, detects an oscillation, then immediately jumps back to state 2; so this is a loop from states 2 -> 3 -> 4 -> 2 due to the oscillation and the inability of the autolocker gain ramp to effectively clear it.  This happens at the clock speed of the FSS FE computer, while the channel that monitors the autolocker state is only a 16 Hz channel.  So the monitor channel is nowhere close to fast enough to see all of the state changes the code is going through during an oscillation.
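To make that loop concrete, here is a purely schematic sketch of the behavior described above; this is not the actual front-end C code, and the split of work between the states and the ramp rate are guesses:

```python
def autolocker_step(state, oscillation_detected, ramp):
    """One pass of the 2 -> 3 -> 4 -> 2 oscillation loop described above
    (schematic only).  'ramp' is the 0-to-1 scalar multiplier on the common
    gain, so the gain goes from 0 dB up to its previous slider value (15 dB)."""
    if state == 2:
        ramp = 0.0                       # drop the gain multiplier back to 0 (0 dB)
        state = 3
    elif state == 3:
        ramp = min(1.0, ramp + 0.01)     # ramp the gain back up toward 15 dB
        if ramp >= 1.0:
            state = 4
    elif state == 4 and oscillation_detected:
        state = 2                        # 0 dB is often not low enough, so this repeats
    return state, ramp
```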

H1 SEI
jim.warner@LIGO.ORG - posted 18:35, Tuesday 21 October 2025 - last comment - 09:06, Wednesday 19 November 2025(87634)
BRSY not damping like it should, bypassed, not used for now

Something happened with BRSY this morning during maintenance that caused it to ring up more than normal and now the damping is not behaving quite as expected. For now, I have paused the ETMY sensor correction guardian with the BRSY out of loop and turned off the output of the BRS so it won't be used for eq mode, should that transition happen.

So far today, I did a bunch of "recapturing frames" in the BRS C# code, which has usually fixed this issue in the past. We also restarted the Beckhoff computer, then the PLC, C# code, and EPICS IOC. This did not recover the BRS either. Marc, Fil and I went to EY and looked at the damping drive in the enclosure, and I think it was behaving okay. When the drive came on, the output would reach ~1.8 V, then go to 0 V when it turned off.

I've contacted UW and we will take a look at this again tomorrow. 

Comments related to this report
jim.warner@LIGO.ORG - 13:32, Wednesday 22 October 2025 (87653)

Looked at this with Michael and Shoshana and the BRS is damped down now. Still not sure what is wrong, but we have a theory that one side of the capacitive damper is not actuating. This seems to work okay when the velocities are either low or very high, but if they are moderate the high gain damping doesn't work well enough to get the BRS below a certain threshold, and instead keeps the BRS moderately rung up. We adjusted the damping on/off thresholds so the high gain damping will turn off at higher velocities.

I will try to do some tests with it next week to see if we can tell if one side of the damper is working better than the other. For now, we should be able to bring the BRS back in loop.

thomas.shaffer@LIGO.ORG - 09:06, Wednesday 19 November 2025 (88169)

I've accepted these thresholds in SDF since it seems that this is the new normal.

Images attached to this comment