Reports until 18:30, Thursday 11 December 2025
H1 DAQ
jonathan.hanks@LIGO.ORG - posted 18:30, Thursday 11 December 2025 (88490)
WP 12930, test an alternate configuration for ECR E2500314

We have been seeing a low rate of data drops with the frame writer serving as both the frame writer and the data concentrator for the DAQD 0 systems. This work is to try an alternate configuration: moving the GDS broadcaster to be the data concentrator and making the frame writer single-purpose again.

The data drops are due to messages from the front end arriving too late and being discarded. We are not seeing them on the DAQD 1 leg, only the 0 leg. This leads us to suspect the issue is the load (and thus the responsiveness to input messages) of the FW machine.

Today I adjusted the system such that GDS0 became the data concentrator and the broadcaster.

I've attached a diagram of the current layout.

The main changes:

  1. Link TW0 to GDS0. This was done first to ensure the data link worked well and TW0 kept functioning.
    1. Added a 10G card into gds0 and two
    2. Moved TW0 to read data from gds0
  2. Moved the FE input into the gds machine and reconfigured the FW machine to just be a receiver

The final migration followed the rough order above.

The control room monitoring and MEDM screens still expect a DC, FW, TW, NDS, and GDS. So gds0, as the DC, needs to export DAQ-DC0 variables, and we are running an EPICS proxy IOC which maps the DAQ-GDS0 channels to DAQ-DC0 for now. This will change in the future. Most of the DC0 variables will probably be taken over by the cps_recv process (this is in testing on the large test stand at LLO).
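For illustration, here is a minimal sketch of the kind of channel-name proxy described above: serve DAQ-DC0-named PVs whose values track the corresponding DAQ-GDS0 channels. This is not the production proxy IOC; the pcaspy/pyepics approach and the channel suffixes are assumptions made for the example.

```python
# Minimal sketch (not the production proxy IOC) of re-exporting DAQ-GDS0
# status channels under the DAQ-DC0 names the MEDM screens expect.
# The channel suffixes below are illustrative assumptions.
import epics                               # pyepics client, monitors the GDS0 PVs
from pcaspy import SimpleServer, Driver    # serves the DC0-named PVs

SRC_PREFIX = 'H1:DAQ-GDS0_'
DST_PREFIX = 'H1:DAQ-DC0_'
SUFFIXES = ['STATUS', 'UPTIME_SECONDS']    # example subset only

pvdb = {suffix: {'prec': 0} for suffix in SUFFIXES}

class ProxyDriver(Driver):
    def __init__(self):
        super().__init__()
        # One camonitor per source channel; push every update into the served PV.
        self.monitors = [
            epics.PV(SRC_PREFIX + suffix,
                     callback=lambda value=None, _sfx=suffix, **kw: self.forward(_sfx, value))
            for suffix in SUFFIXES
        ]

    def forward(self, suffix, value):
        self.setParam(suffix, value)
        self.updatePVs()

server = SimpleServer()
server.createPV(DST_PREFIX, pvdb)
driver = ProxyDriver()

while True:
    server.process(0.1)                    # serve Channel Access requests
```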

There were a few daqd restarts to make sure everything was working.  Data is flowing and fw0 and fw1 are producing identical frames.

The plan is to let this run through to January and evaluate the error rate.

Images attached to this report
H1 SUS (CDS)
ryan.short@LIGO.ORG - posted 18:29, Thursday 11 December 2025 (88491)
ITMX Coil Driver State Binary Readback Issue

A. Effler, R. Short

People had noticed an issue earlier with SRM's coil driver binary IO state (see alog88486), so other optics were checked. ITMX's monitors showed some concerning colors (see first screenshot), but after taking spectra of the quad's top-mass OSEMs comparing similar lock states before and after the power outage last week (see second screenshot), I am confident in saying the actual behavior of ITMX is unchanged and this boils down to a readback issue on the binary IO. This evidence is backed up by the fact that ITMX has been correctly aligned in our locking attempts, and Anamaria and I cycled the state requests a few times and saw expected behavior from the suspension. Dave says he and others will be looking into this.

Images attached to this report
H1 SUS (ISC)
oli.patane@LIGO.ORG - posted 18:25, Thursday 11 December 2025 (88429)
SR3, PR3, LPY Estimator ON vs OFF times + LSC/ASC comparisons

Overall: The main change we are seeing in the LSC and ASC signals with the PR3 and SR3 estimators ON is the lowering of the 1 Hz resonance in ASC, which is known to be a bit of a problem peak.

We've been wanting times within the same lock where we take quiet times with the SR3 and PR3 estimators in various ON/OFF configurations so we can get a better look at possible differences in ISC signals, since the difference we are looking for, if any, is very small and the frequency response changes from lock to lock. We finally were able to get some time on November 24th. I made sure to get two sets of ALL ON times, one at the beginning and one at the end, and then also two sets of ALL OFF times, similarly one at the beginning and one at the end, to get a better idea of what changes between the two are really due to the estimator configurations.

Times:

SR3, PR3 Est ALL ON (one set before & one set after other sets)
start: 1448060052
end: 1448060452

start: 1448063252
end: 1448063616

SR3, PR3 Est ALL OFF (one set before & one set after other sets)
start: 1448060469
end: 1448060977

start: 1448063631
end: 1448063846

SR3, PR3 Est JUST L
start: 1448060995
end: 1448061621

SR3, PR3 Est JUST P
start: 1448061643
end: 1448062049

SR3, PR3 Est JUST Y
start: 1448062067
end: 1448062407

SR3, PR3 Est LP
start: 1448062846
end: 1448063239

SR3, PR3 Est PY
start: 1448062423
end: 1448062828

I used my script /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/estimator_ISC/estimator_isc_comparison.m to plot ASDs for the LSC (CTRL) and ASC (ERR) channels, as well as the LSC CAL channels and the ASC diode channels when an ASC channel is made up of more than one diode (e.g. INP1 uses REFL A 45I and REFL B 45I). I very quickly realized that the change between ON and OFF for each individual dof, or for LP or PY, is not large enough to accurately say whether there is a difference, so we're just going to look at all estimator dofs ON vs all OFF. The resulting plots can be found in /ligo/svncommon/SusSVN/sus/trunk/Common/MatlabTools/estimator_ISC/H1/Results/ as revision r12809. On the plots, Light Green 'ON' is next to Dark Green 'OFF' in time, and Light Blue 'ON' is next to Dark Blue 'OFF' in time, so Light vs Dark Green and Light vs Dark Blue are useful for demonstrating drops in noise when the estimator is turned on.
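As an aside, the same kind of ON-vs-OFF comparison can be sketched in Python with gwpy. The actual analysis used the MATLAB script above; the channel name and FFT parameters here are illustrative assumptions only.

```python
# Sketch of an estimator ON vs OFF ASD comparison with gwpy.  The real
# analysis used the MATLAB script noted above; the channel name and FFT
# parameters here are illustrative assumptions, not the exact settings.
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:ASC-CHARD_P_OUT_DQ'            # example ASC channel (assumed name)

# First "ALL ON" and first "ALL OFF" stretches from the list above (GPS times)
on_start, on_end = 1448060052, 1448060452
off_start, off_end = 1448060469, 1448060977

asd_on = TimeSeries.get(CHANNEL, on_start, on_end).asd(fftlength=32, overlap=16)
asd_off = TimeSeries.get(CHANNEL, off_start, off_end).asd(fftlength=32, overlap=16)

# A ratio below 1 near 1 Hz would indicate the estimators lowering the resonance.
ratio = asd_on / asd_off
print(ratio.crop(0.5, 2))
```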

LSC
LSC, LSC-CAL (Length, Length Zoomed in)
- Don't show much improvement
    - Maybe MICH at 1 Hz has improved a bit with the estimators ON
    - PRCL shows slight decrease in noise between 3.5-6 Hz with the estimators ON
ASC
PITCH
REFL Diodes ASC - INP1, CHARD Pitch (would be affected by PR3):
- Don't show much improvement
- INP1 P shows slightly lower noise between 2.5-3.5 Hz
    - Similar to what was seen in INP1 P OUT in August: 86640
- Very small decrease in 1 Hz peak in CHARD
    - ASC-REFL_A_RF45_I_PIT sees a small decrease at 1 Hz while ASC-REFL_B_RF45_I_PIT does not see anything
AS Diodes ASC - MICH, SRC1, SRC2, DHARD Pitch (would be affected by SR3):
- Improvement seen in the 1 Hz resonance by 1.5-2x (except SRC1)
POP Diodes ASC - PRC1, PRC2 Pitch (would be affected by PR3):
- PRC1 P sees a slight improvement in the 1 Hz resonance

YAW
REFL Diodes ASC - INP1, CHARD Yaw (would be affected by PR3):
- Don't show much improvement
- CHARD Y sees a slight decrease between 4-6 Hz with the estimators ON
AS Diodes ASC - MICH, SRC1, SRC2, DHARD Yaw (would be affected by SR3):
- Don't show much improvement
POP Diodes ASC - PRC1, PRC2 Yaw (would be affected by PR3)
- PRC2 Y sees a slight decrease between 4-6 Hz with the estimators ON
    - Similar to what was seen in PRC2 Y OUT in August: 86640

Non-image files attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 18:01, Thursday 11 December 2025 (88488)
Thursday Ops Shift Start

TITLE: 12/12 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
INCOMING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.41 μm/s 
SHIFT SUMMARY:

Initial_Alignment completed quickly this morning with no intervention.
LL @ Start_TR_CARM

NO_IR found message when at CARM_TO_TR
Manualed back to Start_TR_CARM
LL @ Start TR CARM again

Matt noticed that ETMX Ring Heater has not been on for 36 hours. 

Reconfig of DAQ 0 leg, trying to pull a machine out. See Alog by Jonathan ***pending****
DC0 Status indicators may blink red on CDS Overview. 

TR_CARM Issue update: 
There is a perception that Beckhoff may have a "sticky slider" issue, so Dr. Driggers pushed all buttons associated with the Corner 5 Beckhoff ~18:15 UTC.
The sticky slider idea was ruled out @ 20:03 UTC.
Many hours later... apparently it was the PLL Boost, which Dr. Driggers found, that was stopping us from getting past Start_TR_CARM.

error with GS0 @ 20:54 UTC was Jonathan doing DC0 reboots.

Violin Damping Guardian node restart ~21:25 UTC.
Manually setting violin damping settings. Lost lock when applying damping to EY mode 8.
The Guardian gateway needed to be restarted to solve this issue. Thank you Jonathan!
Psych! The Violin Guardian issue is back! Turns out it's a Python 2.7 -> Python 3+ issue. Thanks Dave!
 
LHO's IFO made it to MOVE_SPOTS before lockloss.

Errors with GS0 @ 00:29 UTC & 01:09 UTC were Jonathan doing DC0 reboots again and finding an error.
Jonathan is doing another "zero leg" restart @ 1:20 UTC. Alog still pending, but here is a work permit.

It is currently 02:00 UTC and the Locking team has called it for the night after losing lock at Max Power.
Lockloss was from an ASC ring up around 275 kW of recirculating power in the arms. 


LOG:                          

Start Time System Name Location Laser_Haz Task End Time
15:57 SAFETY LASER HAZ STATUS LVEA NO LVEA is LASER SAFE ദ്ദി (•_•) 10:51
15:57 FAC Kim & Nellie LVEA Y Technical Cleaning 16:33
17:23 JAC Jennie W, & Co JAC Lab N Working on JAC 19:23
17:39 FAC Randy X-arm N Clearing the path of tumbleweeds 19:39
17:48 Tours Cassidy & tour CTRL-Overpass N Giving a tour 18:48
17:52 LASER Safety Travis LVEA, FCES, EY N Updating laser status signs around the site 18:49
18:22 SQZ Karmeng & Daniel LVEA SQZt7 YES Working on HAM7 upgrades 18:57
19:26 JAC/Cheta Betsy Optics lab N Dropping off parts. 19:56
19:27 CDS Jonathan MSR N Rebooting DC0 & making CDS_Overview angry 20:57
19:31 ISC Jeff & Jennie D LVEA Y Checking on status of PSL ISCT racks. 20:11
19:42 JAC J.W. JAC Lab N Jac Work 19:48
22:15 FAC Tyler OSB Receiving N Forklifting something quickly over to OSB Receiving. 22:35
22:27 Laser Trans Sheila & Anamaria LVEA yes Transitioning the LVEA to Laser Hazard. 22:57
22:53 VAC Gerardo & Jordan LVEA N Getting parts 23:13
23:46 PEM RyanC PCAL lab N Get data from dust monitor huddle test 00:01
00:10 CDS Jonathan MSR N Working on the DNS and DC0 servers 00:50
Images attached to this report
H1 SUS (CDS)
jeffrey.kissel@LIGO.ORG - posted 16:42, Thursday 11 December 2025 - last comment - 17:20, Thursday 11 December 2025(88486)
Allegedly Stuck SRM M3 UL Low-pass Binary IO Switch
J. Kissel, J. Driggers, A. Effler

Trying to figure out why the SRC1 loop, which is the fast AS_A_RF72 loop that controls SRM, is being finicky, I pulled open the H1 SUS SRM coil driver binary IO state request screen (see attached) and found that the UL low pass (LP) bit on the M3 stage is allegedly stuck -- it doesn't change when I go from state 1 (LP OFF, ACQ OFF) to state 3 (LP ON, ACQ OFF). The rest of the coils on the M3 stage change when receiving the same request.

I say "allegedly" because it can be either the actual switch is stuck (the BO chassis) or the readback has failed / is stuck (the BI chassis).

To be investigated...
Images attached to this report
Comments related to this report
anamaria.effler@LIGO.ORG - 17:20, Thursday 11 December 2025 (88487)

Seems this bit has been stuck on and off since Feb 21 (for state 1 of M3, 0 is good and 1 is bad in the attached ndscope). Also attached are the spectra of the FASTIMONs, which are after the analog switches. The state 1 spectra look the same as before, and the state 3 spectra looked the same anyway, so it is more likely that the readback has gone bad.

I further checked that PRM/PR2/BS/ITMX/ITMY do the correct BIO switching from high noise to low noise, according to their readbacks. I have not double checked the spectra.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 16:12, Thursday 11 December 2025 (88485)
SUS Violin mode EPICS IOC crashing on writes, fixed by moving to python2

RyanS, Jonathan, Dave:

The EPICS IOC cds_aux_ioc.py was crashing when Guardian started writing to its PVs this afternoon. 

This IOC was discovered to not be running after the power outage, its channels were missing from EDC.

I took the opportunity to "do the right thing" and add cds_aux_ioc to cdsioc0's systemd using puppet earlier in the week. Everything looked to be working, for example EDC connected to all 160 channels.

As mentioned, as soon as the Guardian violin node attempted to write to a PV it crashed the IOC. Systemd then restarted the service 10 minutes later and the cycle repeated.

To further confuse us, attempting to read the PVs from the guardian machine failed. If we set the CA_ADDR vars we could get a connection to cdsioc0, but these are not normally set. We then thought that an EPICS gateway must be missing (server on the FE LAN, client on the CDS LAN). Jonathan found and started the gateway, and we got to the point where Guardian could read the PVs but crashed the IOC on write.
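For reference, the kind of by-hand connectivity check this involved looks roughly like the following sketch. The address and PV name are placeholders, and "CA_ADDR vars" is taken here to mean the standard EPICS_CA_ADDR_LIST / EPICS_CA_AUTO_ADDR_LIST environment variables.

```python
# Rough sketch of the connectivity test described above: point Channel Access
# directly at a specific server and see whether a PV connects.  The address
# and PV name are placeholders, not the real ones.
import os
os.environ['EPICS_CA_ADDR_LIST'] = '10.0.0.1'    # placeholder for cdsioc0's address
os.environ['EPICS_CA_AUTO_ADDR_LIST'] = 'NO'     # must be set before importing epics

import epics
pv = epics.PV('H1:SUS-EXAMPLE_AUX_CHANNEL')      # placeholder PV name
print(pv.wait_for_connection(timeout=5.0), pv.get())
```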

I could not find any alog/wiki description of where cds_aux_ioc had been running, but on reflection the fact that it is python2 code hinted at an old script machine.

I tried running the code on h1fescript0 with success; it is not crashing on writes.

For now cds_aux_ioc.py is running in a TMUX session as user controls on h1fescript0

We then undid all the good work we did this week: stopped the gateway, stopped and disabled the systemd service on cdsioc0. I now need to take it out of puppet and delete it from cdsioc0.

TODO: convert script to python3, run on an updated FE script machine.

H1 General
matthewrichard.todd@LIGO.ORG - posted 11:17, Thursday 11 December 2025 - last comment - 18:35, Thursday 11 December 2025(88480)
Morning Locking Attempts

M. Todd, J. Driggers, J. Kissel, S. Dwyer, T. Sanchez, D. Sigg


For posterity's sake, I thought it would be nice to record some of the actions and efforts the locking team was doing this morning.

Sheila noticed we were losing lock at START_TR_CARM, so we relocked DRMI and tried again, going back to PREP_TR_CARM and making sure things were reset. Upon trying to advance to START_TR_CARM, the ALS-C_REFL_DC_BIAS_GAIN was set before losing lock again.

This raised suspicion about the IMC_COMM_CARM path, and it was thought that the changes to Beckhoff over the last few days could have caused some of these problems. To double check that everything in this path is set to the values that are being "reported", Jenne went through and toggled every button and slider in the path as a "sticky slider" approach to solving this. The chassis work involved corners 4 and 5, but Jeff narrowed these issues down to the corner 5 chassis -- here is a wiring diagram of the chassis: D1100683.

We called in Daniel to see what his thoughts were on these issues.

Comments related to this report
sheila.dwyer@LIGO.ORG - 14:01, Thursday 11 December 2025 (88484)

We eventually locked TR_CARM by going through the guardian steps slowly. We noticed that when we stepped through slowly, the ALS COMM VCO would pull the mode cleaner away, causing the TR_CARM path to rail. We disabled the VCO internal servo that uses the frequency comparator to keep the VCO at a fixed frequency. This has been used in the past for our successful transitions, but seemed to be railing each time this morning.

We left the ALS not shuttered and ran the QPD servos while ASC and soft loops were engaged, with Tony acting as an SRC1 servo. After all the ASC had engaged, the camera set points still looked good; I set the QPD offsets back to their values from SDF, and the camera offsets still looked good. The guardian has been reset to shutter ALS as normal next time we relock. The POP A yaw offset was off slightly; I set it to -0.43 rather than -0.4.

We then transitioned to DC_READOUT using the guardian without any intervention, or known fixes for the problems we were having.  

We are able to turn on the SRC1 yaw loop, but the SRC1 pitch loop pulls the sideband build-ups off. I've added the SRC1 yaw loop back into ENGAGE_ASC_FOR_FULL_IFO.

jenne.driggers@LIGO.ORG - 18:02, Thursday 11 December 2025 (88489)

Re: Matt's alog this morning, I was worried that we had a sticky slider situation after the beckhoff work (which apparently isn't really a thing here, but was a thing at the 40m).  We moved sliders and flipped switches on the IMC common mode board, the IFO REFL common mode board, and the summing node. Probably that wasn't the issue.

After a few other unsuccessful TR_CARM attempts, I trended and it turns out that during the time we had been successful doing things by hand, which Sheila mentions, we had forgotten to turn off the H1:ALS-C_COMM_PLL_BOOST, so we did the whole transition with the boost left on. I have now set the guardian to leave the boost on, and we have now gotten through this several times without any intervention. I also added turning off the ALS COMM VCO by clicking the Ext button (and resetting it to Int in PrepForLocking); however, it's possible that that wasn't necessary, since guardian already had changing the On/Off switch as part of these states. The real key seems to be leaving on the H1:ALS-C_COMM_PLL_BOOST.

As Sheila said, DC readout seems to just be working fine, no intervention needed.  Total mystery why it hadn't been working on Tuesday.

We have now powered up to 25 W two times!  Even if we're not able to get farther than this, there are very few items left that would need to be checked using the full IFO (eg, the ISS second loop can be checked using IMC-only).  

SRC1 P is still out of the guardian. One time I was able to close it using the offset from lownoise ASC, at Elenna's suggestion, but we still lost lock in MoveSpots. The other time I was watching SRM pitch by hand; there was no need to move it up to 25 W, and I think I picked the wrong direction during move spots, so we still lost lock.

We got up past 25W a third time.  This time, rather than doing MOVE_SPOTS, I manual-ed and did RF9 and then RF45 modulation depth reduction.  Both of those were fine, although the 45 MHz reduction did confuse my Jenne-in-the-loop SRM alignment loop. At some point, it became clear that SRC1 yaw was pulling us away, so we turned that off and I started dealing with SRM yaw alignment as well as SRM pit alignment.  We then did move spots with both SRC1 pit and yaw open and me watching them.  That seemed fine.  We then started going to MAX_POWER.  I think we got up to 50W, but then we got a nasty ASC ringup and lost lock.

The only analog things that we haven't tested yet are (I think) coil driver switching, ESD switching, OMC whitening switching, and ISS second loop engagement. Tony had checked PCals earlier in one of our locks, and they were fine.

It sounds like there will be some folks around to perhaps try locking again tomorrow. However, as Betsy pointed out, we now have a pretty small list of things that *haven't* been tested, so even if we aren't able to get to full NLN we have very few things to be suspicious of when relocking next calendar year after the vent. The ISS second loop we can test using IMC-only. Anamaria and Jeff were thinking through how we could check the sus actuator switching (using, e.g., the IMON channels). I think that we could also try OMC whitening switching using a single-bounce OMC.

ryan.short@LIGO.ORG - 18:35, Thursday 11 December 2025 (88492)

Due to it giving Jenne trouble and pulling alignment away, I have commented out SRC1_Y from ENGAGE_ASC. This means the "human servo" will need to be in-use for now for SRM, especially during powerup and the spot move.

I've attached the ASC trends around where the ringup happened during the powerup. Looks like this is the known ~0.45 Hz instability and it's possible we were just unlucky.

The plan for tomorrow is to start locking just like we did this last time (fully auto with someone monitoring SRM alignment) and see how far we can get. If we're unable to fully reach low noise, there are ways we can check certain systems for functionality before we plan to close the arm gate valves mid-afternoon.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:33, Thursday 11 December 2025 (88481)
Thu CP1 Fill

Thu Dec 11 10:16:18 2025 INFO: Fill completed in 16min 14secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 DAQ
daniel.sigg@LIGO.ORG - posted 10:15, Thursday 11 December 2025 (88478)
TwinCAT setup

A quick summary of the current TwinCAT setup:

Any change of hardware needs to be reflected in the Altium workflow, which serves as the basis for the system project via the provided scripts.

The Altium script will generate H1EcatC1_NetList.xml. ProcessTcNetList.ps1 will then use the netlist as an input to generate H1EcatC1_BoxList.xml and H1EcatC1_Mapping.xml. 
(All located in C:\SlowControls\TwinCAT3\Source\Interferometer\H1EcatC1\Configure).
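A quick way to sanity-check the generated files after running the scripts is simply to confirm they are present and parse as XML. This is a sketch only; it uses the file names and directory from above but makes no assumption about the element names inside the files.

```python
# Sketch: confirm the generated configuration files are present and well-formed.
# Only the file names and directory from above are used; the XML schema itself
# is specific to the slow-controls tools and is not inspected here.
import xml.etree.ElementTree as ET
from pathlib import Path

cfg = Path(r'C:\SlowControls\TwinCAT3\Source\Interferometer\H1EcatC1\Configure')
for name in ('H1EcatC1_NetList.xml', 'H1EcatC1_BoxList.xml', 'H1EcatC1_Mapping.xml'):
    root = ET.parse(cfg / name).getroot()
    print(f'{name}: root <{root.tag}> with {len(list(root))} top-level elements')
```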

We have updated the spare Beckhoff computer to this version. The upgrade of the main Beckhoff computer is pending.

LHO VE
david.barker@LIGO.ORG - posted 09:41, Thursday 11 December 2025 (88477)
Wed CP1 Fill

Wed Dec 10 10:11:15 2025 INFO: Fill completed in 11min 12secs

Late entry for yesterday's CP1 fill.

Images attached to this report
H1 ISC
jeffrey.kissel@LIGO.ORG - posted 09:16, Thursday 11 December 2025 (88476)
Thanks DIAG_MAIN! Found ASC_REFL_9_I Analog Whitening Settings didn't match the Digital Compensation; From Beckhoff Restart
J. Kissel, S. Dwyer

As we resume power-outage recovery lock acquisition now that the environment is more suitable than it's been in a week (LHO:88473), we (as in DIAG_MAIN) found this morning that the ASC_REFL_9_I analog whitening state (whose setting is managed by Beckhoff) didn't match the digital compensation (whose setting is managed by the h1asc SDF system). We suspect that this is a symptom of yesterday's Beckhoff (TwinCAT) computer reboots that were necessary to support the Beckhoff chassis upgrades for JAC (LHO:88463).

Not sure where/if these analog whitening settings are in the Beckhoff SDFs, but I've trended
    - H1:ASC-REFL_A_RF9_I1_SWSTAT      digital compensation setting status
    - H1:ASC-REFL_A_RF9_WHITEN_SET_1   analog setting status
and have now reset them to "normal," with 2 stages of whitening ON, and FM1 and FM2 compensating them.
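Schematically, the kind of cross-check DIAG_MAIN does can be sketched like this; the decoding of the SWSTAT bits and of the analog setting value below is an assumption for illustration, not the actual DIAG_MAIN test.

```python
# Sketch only -- not the real DIAG_MAIN test.  Reads the analog whitening
# setting and the digital filter-module switch status and flags a mismatch.
# The decoding of SWSTAT bits and of the analog setting value is assumed.
import epics

analog = epics.caget('H1:ASC-REFL_A_RF9_WHITEN_SET_1')   # analog setting status
swstat = epics.caget('H1:ASC-REFL_A_RF9_I1_SWSTAT')      # digital compensation status

fm1_fm2_on = bool(int(swstat) & 0b11)     # assume FM1/FM2 show up in the low bits
two_stages_on = int(analog) == 2          # assume the analog value counts stages
if fm1_fm2_on != two_stages_on:
    print('ASC_REFL_9_I: analog whitening and digital compensation disagree')
```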
Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 08:57, Thursday 11 December 2025 (88474)
SQZ SHG locking

Yesterday we had trouble locking the SHG after the Beckhoff restart. The scan range for the PZT was correctly restored to 45-100 V after the restart, but that no longer seems to be the right range to find a resonance. This morning I changed the range to 0-100 V, which allowed us to lock with the PZT voltage around 50 V. I've now set it to 20-100 V and accepted this in SDF.

Images attached to this report
H1 PEM (DetChar, ISC, SEI, SUS)
jeffrey.kissel@LIGO.ORG - posted 08:51, Thursday 11 December 2025 - last comment - 09:06, Thursday 11 December 2025(88473)
It's been ... WINDY.
J. Kissel

Post the Dec 4th power outage, we've had an EPIC week of windstorms that have inhibited the recovery effort, which has delayed upgrade progress. The summary pages (on their 24-hour cadence) and the OPS logs / environment summary don't really convey this well, so here's a citable link to show how bad last Friday (12/05), Monday (12/08), and Wednesday (12/10) were in terms of wind. Given the normal work weekend, it means that we really haven't had a conducive environment to recover from even a normal lockloss, let alone a 2-hour site-wide power outage.

The attached screenshot is of the MAX minute trends (NOT the MEAN, to convey how bad it was) of wind speed at each station in UTC time. 
The 16:00 UTC hour mark is 08:00 PST -- the rough start of the human work day, so the vertical grid is marking the work days.
The arrow (and period where there's red-dashed 0 MPH no data) shows the 12/04 power outage.
The horizontal bar shows the weekend when we humans were trying to recover ourselves and not the IFO.
Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:06, Thursday 11 December 2025 (88475)
Oh right -- and also on Monday, even though the wind wasn't *that* bad, the Earth was mad from the aftershocks of the 7.0 mag Alaskan EQ, and there were end-station Software Watchdog trips related to it that -- because of an oversight in watchdog calibration -- scared everyone into thinking we should "stand down until we figure out if this was because of the hardware upgrades or the power outage." See LHO:88399 and LHO:88415. So, Monday was a wash for environmental reasons too.

Images attached to this comment
H1 SQZ (SQZ)
daniel.desantis@LIGO.ORG - posted 17:22, Wednesday 10 December 2025 - last comment - 11:23, Thursday 11 December 2025(88461)
HAM7 Work Wednesday 12/10

[Sheila, Eric, Kar Meng, Daniel]

At the time of removing the soft cover, the particle count was ~2. We began with the OPO dither locked. We also updated the ramp time on the PSAM controllers to 100 ms (they were at 1 ms). It seems like the PSAMS are reading back the correct value (matching the setpoint/target).

Betsy provided us with the screws for locking down the VIP; they are #10 button head screws but they seem to fit in the slot for the locking mechanism. We have no washers for the #10 screw, which is not ideal. The #10 button head is not sufficiently wide and only engaged on one side of the slot. As a compromise, a 1/4" washer was tried to buffer this. Unfortunately, this was thick enough to prevent the screw from engaging on the threads, so two of the other bolts were installed without washers. One of the bolts that was installed is the correct bolt/washer. Once all the screws were in, the thumb screws were tightened and the OSEMs were checked. Pitch, yaw, and roll are all less than 1 urad different. Vertical displacement is 100 um different.

After finishing locking down: particle count was ~ 60

We were thinking about moving the second iris on the sqz path from between ZM3 and FC1 to between ZM2 and ZM3 so that we could see the retroreflection from the FC, but we abandoned this idea because Sheila thinks it won't be that helpful considering how difficult it seems it would be to move it there. As far as irises go, all the irises we need to install in the HAM7 chamber have been placed. We have irises installed on the transmission path to the homodyne and the CLF path on T7. It looks like the homodyne/transmission path is slightly misaligned relative to the irises placed before the vent (see images attached). We also still need to install one final iris on the green pump REFL path on T7 before we remove the OPO. We could not install this today because we could not get the SHG to lock.

We were having quite a bit of trouble dither locking the OPO, so the seed/clf input alignment may be a bit off. TEM01 and TEM00 have very similar dip fractions. By adjusting the locking code, Sheila was able to get the OPO locked to the TEM00 mode. 

At the end of the day, we adjusted the iris between ZM3 and FC1 for better centering and placed 2 dogs around the OPO.

Images attached to this report
Comments related to this report
karmeng.kwan@LIGO.ORG - 11:23, Thursday 11 December 2025 (88482)

[Daniel, Karmeng]

We also placed an iris after the periscope on the SQZT7 table to constrain the green beam reflected off the OPO.

Images attached to this comment
H1 DetChar (DetChar)
joan-rene.merou@LIGO.ORG - posted 22:58, Monday 08 December 2025 - last comment - 11:44, Thursday 11 December 2025(88433)
Hunting down the source of the near-30 Hz combs with magnetometers
[Joan-Rene Merou, Alicia Calafat, Sheila Dwyer, Anamaria Effler, Robert Schofield]

This is a continuation of the work performed to mitigate the set of near-30 Hz and near-100 Hz combs, as described in DetChar issue 340 and lho-mallorcan-fellowship/-/issues/3, as well as the work in alogs 88089, 87889 and 87414.

In this search, we have been moving around two magnetometers provided to us by Robert. Given our previous analyses, we thought the possible source of the combs would be around either the electronics room or the LVEA close to the input optics. We have moved these two magnetometers around to a total of more than 70 positions. In each position, we left the magnetometers alone and still for at least 2 minutes, enough to produce ASDs using 60 seconds of data, recording the Z direction (parallel to the cylinder). For each one of the positions, we recorded the data shown in the following plot:

  

That is, we compute the ASD using a 60 s FT and check the amplitude of the ASD at the frequency of the first harmonic of the largest of the near-30 Hz combs, the fundamental at 29.9695 Hz. Then, we compute the median over the surrounding +-5 Hz and save the ASD value at 29.9695 Hz as the "peak amplitude" and the ratio of the peak against the median as a sort of "SNR" or "peak-to-noise ratio".

Note that we also checked the permanent magnetometer channels. However, in order to compare them to the rest, we multiplied the ASDs of the magnetometers that Robert gave us by a hundred so that all of them had units of Tesla.
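In schematic terms, the per-position numbers are computed roughly as follows. This is a sketch only: the sample rate, data access, and the x100 rescaling for the permanent magnetometers are simplified, and the fake-data example at the bottom is purely illustrative.

```python
# Sketch of the peak-amplitude / peak-to-noise computation described above,
# using scipy's Welch estimate with 60 s segments of magnetometer data.
import numpy as np
from scipy.signal import welch

F_COMB = 29.9695        # fundamental of the strongest near-30 Hz comb [Hz]

def peak_and_snr(data, fs, f0=F_COMB, half_band=5.0):
    # 60 s segments -> 1/60 Hz resolution, matching the 60 s FTs used here
    freqs, psd = welch(data, fs=fs, nperseg=int(60 * fs))
    asd = np.sqrt(psd)
    peak = asd[np.argmin(np.abs(freqs - f0))]
    band = np.abs(freqs - f0) <= half_band
    noise = np.median(asd[band])
    return peak, peak / noise   # "peak amplitude" and "peak-to-noise ratio"

# Illustrative example with fake data containing a line at the comb frequency:
fs = 2048.0
t = np.arange(0, 120, 1 / fs)
fake = 1e-12 * np.random.randn(t.size) + 5e-12 * np.sin(2 * np.pi * F_COMB * t)
print(peak_and_snr(fake, fs))
```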

After saving the data for all the positions, we have produced the following two plots. The first one shows the peak to noise ratio of all positions we have checked around the LVEA and the electronics room:

  

Here, the X and Y axes are simply the image pixels. The color scale indicates the peak-to-noise ratio of the magnetometer in each position. The background LVEA layout has been taken from LIGO-D1002704.
Note that some points slightly overlap with other ones; this is because in some cases we have checked different directions or positions in the same rack.

It can be seen from this SNR plot that the source of the comb appears to be around the PSL/ISC racks. Things become clearer if we also look at the peak amplitude (not the ratio), as shown in the following figure:

  

Note that in this figure, the color scale is logarithmic. It can be seen how, looking at the peak amplitudes, there is one particular position in the H1-PSL-R2 rack whose amplitude is around 2 orders of magnitude larger than the other positions. Note that this position also had the largest peak to noise ratio. 

This position, which we have tagged as "Coil", corresponds to putting the magnetometer into a coil of white cables behind the H1-PSL-R2 rack, as shown in this image:

  

The reason that led us to put the magnetometer there is that we had also found the peak amplitude to be around 1 order of magnitude larger than at any other position when the magnetometer was on top of one set of white cables that go from inside the room towards the rack and up towards somewhere we have not yet identified:

  

This image shows the magnetometer on top of the cables on the ground behind the H1-PSL-R2 rack; the white ones at the top of the image appear to show the peak at its highest. It could be that the peak is louder in the coil because so many cables in a coil arrangement generate a stronger magnetic field.

This is the current status of the hunt. These white cables might indicate that the source of these combs is the interlock system, which differs between L1 and H1 and which has a chassis in the H1-PSL-R2 rack. However, we still need to track down exactly where these white cables go and try turning things on and off based on what we find, in order to see if the combs disappear.
Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 13:51, Tuesday 09 December 2025 (88442)PSL

The white cables in question are mostly for the PSL enclosure environmental monitoring system, see D1201172 for a wiring diagram (page 1 is the LVEA, page 2 is the Diode Room).  After talking with Alicia and Joan-Rene there are 11 total cables in question: 3 cables that route down from the roof of the PSL enclosure and 8 cables bundled together that route out of the northern-most wall penetration on the western side of the enclosure (these are the 8 pointed out in the last picture of the main alog).  The 3 that route from the roof and 5 of those from the enclosure bundle are all routed to the PSL Environmental Sensor Concentrator chassis shown on page 1 of D1201172, which lives near the top of PSL-R2.  This leaves 3 of the white cables that route out of the enclosure unaccounted for.  I was able to trace one of them to a coiled up cable that sits beneath PSL-R2; this particular cable is not wired to anything and the end isn't even terminated, it's been cleanly cut and left exposed to air.  I haven't had a chance to fully trace the other 2 unaccounted cables yet, so I'm not sure where they go.  They do go up to the set of coiled cables that sits about half-way up the rack, in between PSL-R1 and PSL-R2 (shown in the next-to-last picture in the main alog), but their path from there hasn't been traced yet.

I've added a PSL tag to this alog, since evidence points to this involving the PSL.

joan-rene.merou@LIGO.ORG - 11:44, Thursday 11 December 2025 (88483)
[Joan-Rene, Alicia]

Yesterday we tried disconnecting the PSL Environmental Sensor Concentrator, where some of the suspicious white cables go, but no change was seen in the comb amplitude.

Continuing our search with the magnetometer in the same rack, we found that the comb is quite strong when the magnetometer is put beside the power supply near the top of the rack:



So it may be that these lines are transmitted elsewhere through this power supply.
We connected a voltage divider to the same channel we were using for the magnetometer (H1:PEM-CS_ADC_5_23_2K_OUT_DQ):




Two dark green cables come out of this power supply; the first one goes to the H1-PSL-R1 rack:



However, the comb did not appear as strong when we put the magnetometer beside the chassis where that cable leads.

On the other hand, the comb does appear strong if we follow the other dark green cable, which goes to this object:



Jason told us it may be related to the interlock system.

Following the white cables that go from this object, it would appear that they go into the coil, where we saw that the comb was very strong.

We think it would be interesting to see what here can be turned off, to check whether the comb then disappears.
Images attached to this comment
H1 PEM (CDS)
oli.patane@LIGO.ORG - posted 17:26, Tuesday 02 December 2025 - last comment - 10:22, Thursday 11 December 2025(88320)
Added DAC calibration filters for PEM EX

h1pemex was upgraded from an 18-bit to a 20-bit DAC today, so we needed to make sure we had a calibration correction. The new filter banks that Jeff had put in (88321) for the calibration correction are called H1:PEMEX_EX_DACOUTF_1, H1:PEMEX_EX_DACOUTF_2, H1:PEMEX_EX_DACOUTF_3, and H1:PEMEX_EX_DACOUTF_4. I installed a filter called 20BitDAC, which is a gain(4), in FM10 of each of these filter banks, loaded them in, and turned them on along with the input/output/gain of the filter banks. I've accepted these changes in SDF safe.
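For context, the factor of 4 is just the ratio of DAC counts per volt between the 20-bit and 18-bit converters, assuming the same output span (the usual +/-10 V is assumed here):

```python
# Where the gain(4) comes from: a 20-bit DAC has 4x the counts per volt of an
# 18-bit DAC over the same output span (assumed here to be the usual +/-10 V).
v_span = 20.0                       # volts, -10 V to +10 V
cts_per_volt_18 = 2**18 / v_span    # ~13107 counts/V
cts_per_volt_20 = 2**20 / v_span    # ~52429 counts/V
print(cts_per_volt_20 / cts_per_volt_18)    # 4.0
```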

Comments related to this report
oli.patane@LIGO.ORG - 10:22, Thursday 11 December 2025 (88479)

Late update, but I got the channel names wrong. The new channels are called H1:PEM-EX_DACOUTF_{1,2,3,4}.
