LHO FMCS (PEM)
ryan.short@LIGO.ORG - posted 15:13, Friday 20 June 2025 (85208)
HVAC Fan Vibrometers Check - Weekly

FAMIS 26395, last checked in alog85021

All fans look largely unchanged compared to last week and are all within noise thresholds.

Images attached to this report
H1 ISC (PSL)
elenna.capote@LIGO.ORG - posted 14:35, Friday 20 June 2025 - last comment - 14:57, Friday 20 June 2025(85206)
CARM OLG and gain change

Sheila measured the CARM gain during vent recovery at a thermalized time (84944), and it seemed low, so today we plugged in the SR785 and measured the CARM OLG at the start of lock.

Right at the beginning of lock, the UGF was 14.4 Hz, which is fine except that we will lose gain from thermalization. I bumped up the gain by 1 dB on each REFL A and B slider ("H1:LSC-REFL_SUM_{A,B}_IN2GAIN") and remeasured; the UGF was about 14.3 Hz. Then I increased the gain by another 2 dB on each slider and remeasured: 16 Hz. This is where we want to be, although I imagine it will decay more as we thermalize.
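For reference, the dB slider steps map onto linear loop-gain factors like this (generic conversion, nothing site-specific):

# dB sliders scale the loop gain by 10**(dB/20)
def db_to_linear(db):
    return 10 ** (db / 20.0)

print(db_to_linear(1))   # ~1.12, after the first +1 dB bump
print(db_to_linear(3))   # ~1.41, after the additional +2 dB (3 dB total)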

This means that in NLN, we want our CARM gain settings to be 15 dB on each slider, which I accepted in SDF. However, I have not yet put them in the guardian because I am trying to figure out where that actually should be adjusted.

I have attached the three measurements I took. The title of the first one is misleading since I used the wrong template, but the time stamps on all are accurate.

First measurement

Second measurement, both gains up +1 dB

Third measurement, both gains up +2 dB more

SDF in observe

Procedural notes for future me to measure the OLG and plot the measurement:

> cd ligo/gitcommon/psl_measurements/templates

> conda activate psl

> python ../code/SRmeasure.py carm_olg_template.yml

> python ../code/quick_tf_plot.py ../data/carm_olg/[filename]
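If the repo script isn't handy, something like the following gives a quick look at the saved transfer function (a sketch only: the three-column frequency/magnitude/phase layout is an assumption, so check the header SRmeasure.py actually writes):

import numpy as np
import matplotlib.pyplot as plt

# assumed columns: frequency [Hz], magnitude [dB], phase [deg]
freq, mag_db, phase_deg = np.loadtxt('../data/carm_olg/[filename]', unpack=True)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(freq, mag_db)
ax1.set_ylabel('Magnitude [dB]')
ax2.semilogx(freq, phase_deg)
ax2.set_ylabel('Phase [deg]')
ax2.set_xlabel('Frequency [Hz]')
plt.show()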

I unplugged the SR785 before observing.

Images attached to this report
Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:57, Friday 20 June 2025 (85207)

I see now, with Keita's help, that I changed the "wrong" gain sliders, because I should have adjusted the "H1:LSC-REFL_SERVO_IN{1,2}GAIN" sliders. However, just based on how things are connected, I don't think it is having an overall different effect right now. But, to do this properly, I am updating the guardian code on line 5882:

if ezca['LSC-REFL_SERVO_IN1GAIN'] < 6:
    ezca['LSC-REFL_SERVO_IN1GAIN'] += 1
    ezca['LSC-REFL_SERVO_IN2GAIN'] += 1
 
That "6" had been 3, but Sheila had lowered that value from 6 to 3 during the O4a/b break (see 76751) following some measurements. Now, I am wondering if we should have lowered that gain at all during that break.
 
That means, on the next lockloss, the ISC_LOCK guardian must be loaded, and then when we go to observing, there will be SDF diffs in the H1:LSC-REFL_SERVO_IN{1,2}GAIN and H1:LSC-REFL_SUM_{A,B}_IN2GAIN channels. These differences should be accepted. Tagging OpsInfo.
H1 ISC
corey.gray@LIGO.ORG - posted 14:21, Friday 20 June 2025 - last comment - 14:24, Friday 20 June 2025(85204)
SDF Diffs Post Today's Commissioning/Troubleshooting Work

After today's commissioning and troubleshooting work we had a list of SDFs to address. Elenna and I both addressed these.

Elenna will add a comment to this alog with the SDFs she addressed.

I took care of the SDF Diff for  SQZ ADF.  Kevin Kuns noticed this channel was not restored to its  OBSERVING value; he took it to 320, but SDF flagged this because it should actually be 322, so I REVERTed this channel (H1:SQZ-ADF_VCXO_CONTROLS_SETFREQUENCYOFFSET) to 322 (see attached).

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:24, Friday 20 June 2025 (85205)

I don't understand what the ALS diffs are, but I reverted them because I think they may have occurred because Sheila and I left the green shutters open today to check the offsets.

I reverted all CAL gain tramps. Francisco had changed them due to a test.

I also SDFed the REFLAIR gains as done in alog 85201, but didn't take a screenshot.

Images attached to this comment
H1 ISC
elenna.capote@LIGO.ORG - posted 13:21, Friday 20 June 2025 (85203)
ENGAGE ASC locklosses due to high CHARD P gain

We have been having large glitches and sometimes locklosses from ENGAGE ASC FOR FULL IFO. I have tracked the problem down to the CHARD P gain coming on at full gain at the start of the state. This is an error we made when trying to recover from the vent: we commented out a 30 dB gain filter in the late part of the state, and then set the new full gain as the gain that engages at the start of the state. I have now edited the guardian to engage CHARD P with a gain that is 30 dB less than the final gain ('ENGAGE_ASC_FOR_FULL_IFO_initial'), and then ramp to the final gain at the end of the state ('ENGAGE_ASC_FOR_FULL_IFO_final'). These gains are set in lscparams.
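For reference, a rough sketch of that behavior in guardian/ezca style (not the actual ISC_LOCK code; the lscparams dictionary layout below is an assumption, and ezca is the object the guardian environment provides):

import lscparams

# Start of ENGAGE_ASC_FOR_FULL_IFO: engage CHARD P 30 dB below the final gain
ezca['ASC-CHARD_P_GAIN'] = lscparams.gain['ENGAGE_ASC_FOR_FULL_IFO_initial']
ezca.switch('ASC-CHARD_P', 'INPUT', 'ON')

# End of the state: ramp CHARD P up to the final gain
ezca.get_LIGOFilter('ASC-CHARD_P').ramp_gain(
    lscparams.gain['ENGAGE_ASC_FOR_FULL_IFO_final'], ramp_time=10, wait=False)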

Also, we put back the ITMX spot position to center in pitch, instead of the strange offset we added during vent recovery.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:03, Friday 20 June 2025 - last comment - 13:09, Friday 20 June 2025(85201)
DRMI acquisition sensing adjusted

Corey, Elenna, Sheila

Recently we have been having trouble locking DRMI even when the flashes seem good. We stopped to check some DRMI phasing, similar to what we've done for PRMI in 85080 and 84630.

The first screenshot shows the PRCL OLG, which has a large wiggle from 20-40 Hz before we started (REFLAIR 9 phase -21). We ran the template in userapps/lsc/h1/templates/phase_REFLAIR.xml to send a 300 count excitation to PRM M3 at 132 Hz, and phased this into I for REFLAIR 45 (to reduce PRCL cross coupling to the MICH loop) and into I for REFLAIR 9. Then we checked the TF of PRM to both I signals to estimate the matrix element we should use to cancel the PRM coupling to the SRCL error signal, and updated this from -1.8 to -1.7.
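A minimal sketch of the matrix-element arithmetic (my own reconstruction of the step described above; the numbers are placeholders, not the measured TFs):

import numpy as np

# Complex TFs at the 132 Hz PRM M3 excitation (placeholder values)
tf_prm_to_reflair45_I = 1.7 + 0.0j   # coupling into the SRCL sensor
tf_prm_to_reflair9_I = 1.0 + 0.0j    # coupling into the PRCL sensor

# Input-matrix element that subtracts the PRM/PRCL contribution from SRCL
element = -np.real(tf_prm_to_reflair45_I / tf_prm_to_reflair9_I)
print(element)   # -1.7 with these placeholders, matching the value installed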

These three things have reduced the wiggle. We are hoping this will help with DRMI acquisition; our first two attempts have gone well, but we will wait to collect some statistics.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 13:09, Friday 20 June 2025 (85202)

SDFs for phases

These will also need to be updated in OBSERVE.

Images attached to this comment
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 12:03, Friday 20 June 2025 - last comment - 10:39, Tuesday 24 June 2025(85198)
SRM Transverse OSEM, Mounted on Opposite Side, has incorrect OSEM2EUL / EUL2OSEM matrix sign; Inconsequential, but should be Fixed for Sanity's Sake.
J. Kissel

I'm building up some controls design documentation for the derivations of the OSEM2EUL / EUL2OSEM matrices for existing suspension types (see G2402388), in prep for deriving new ones for e.g. the HRTS and if we upgrade any SUS to use the 2-DOF AuSEMs.

In doing so, I re-remembered that the HLTS, HSTS, OMC controls arrangement poster (E1100109), which defines the now-called "6HT" OSEM arrangement in T2400265, calls out two possible positions for the transverse sensor, "side" and "opposite side," which I'll abbreviate as SD and OS from here on.

If the transverse sensor is mounted in the SD position, then as the suspension moves in +T, the OSEM further occults the LED beam, creating a more negative ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be -1.0.
If the transverse sensor is mounted in the OS position, then as the suspension moves in +T, the OSEM opens up revealing more LED beam, creating a more positive ADC signal. Thus, the OSEM2EUL matrix's SD to T element should be +1.0.
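In other words, the sign rule is simply a function of the mounting position (a minimal sketch of the convention above, not site code):

def transverse_sign(mount):
    """OSEM2EUL (SD -> T) element for a transverse OSEM mounting position."""
    if mount == 'SD':
        return -1.0   # flag occults more LED light as the SUS moves in +T
    if mount == 'OS':
        return +1.0   # flag reveals more LED light as the SUS moves in +T
    raise ValueError("mount must be 'SD' or 'OS'")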

Yesterday, not *actually* remembering that the HLTSs PR3 and SR3 and two of the 9 HSTSs, SR2 and SRM, use OS as their transverse sensor, and missing the note from Betsy in the abstract of E1100109 to look at each SUS's Systems Level SolidWorks assembly for the transverse sensor location assignment (prior to this morning it was not in red, nor did it explicitly call out which suspensions have their transverse sensor mounted in the OS position), I was worried that we'd missed this when defining the sign of *all* HLTS / HSTS / OMCS OSEM2EUL / EUL2OSEM matrices and had assumed they were all installed as SD OSEMs with -1.0 OSEM2EUL and EUL2OSEM matrix elements.

Below, I inventory the status with 
    - suspension name, 
    - a reference to picture of the transverse OSEM (or the corresponding flag w/o the OSEM), 
    - confirming SW drawing does match the picture, 
    - the current value / sign of the OSEM2EUL / EUL2OSEM matrix element (H1:SUS-${OPTIC}_M1_OSEM2EUL_2_6 or H1:SUS-${OPTIC}_M1_EUL2OSEM_6_2)
    - a conclusion of "all good" or what's wrong.


Optic   T Sensor Mount  aLOG pic        SW check        OSEM2EUL/EUL2OSEM       Conclusion
MC1     SD              LHO:6014        D0901088 g      -1.0            all good
MC3     SD              LHO:39098       D0901089 g      -1.0            all good
PRM     SD              LHO:39682       D0901090 g      -1.0            all good
PR3     OS              LHO:39682       D0901086 g      +1.0            all good

MC2     SD              LHO:85195       D0901099 g      -1.0            all good
PR2     SD              LHO:85195       D0901098 g      -1.0            all good

SR2     OS              LHO:41768       D0901128 g      +1.0            all good

SRM     OS              LHO:60515       D0901133 g      -1.0            OSEM2EUL/EUL2OSEM wrong!
SR3     OS              LHO:60515       D0901132 g      +1.0            all good

FC1     SD              LHO:61710       D1900364 g      -1.0            all good
FC2     SD              LHO:65530       D1900368 g      -1.0            all good

OMC     SD              LHO:75529       D1300240 g      -1.0            all good (see also G1300086)


So, as the title of this aLOG states, we've got the sign wrong on SRM.

Shouldn't we have discovered this with the "health check TFs?"
Why doesn't this show as a difference in the "plant" ("health check") transfer functions when comparing against other SUS that have the sign right?
Why *don't* we need a different sign on SRM's transverse damping loop? 

Because the sign in the EUL2OSEM drive and the OSEM2EUL sensed motion is self-consistent:

When SRM EUL2OSEM matrix requests to drive in +T as though it had an OSEM coil in the "SD" position, it's actually driving in -T because the OSEM coil is in the OS position. 
On the other side, the OSEM2EUL matrix corrects for a "SD" OSEM, with "more negative when it moves in +T," and has the (incorrect) -1.0 in the OSEM2EUL matrix. But since the SUS is actually moving in -T, making the flag occult more of the OS OSEM LED beam and yielding a more negative ADC signal, the -T is reported as +T in the DAMP bank because of the minus sign in the "SD" OSEM2EUL matrix.
So the phase between DAMP OUT and DAMP IN at DC is still zero, as though "everything was normal," because requested physical drive +T is sensed as +T.

Thus the sensor/drive phase is zero at DC like every other HSTS, and we can use the same -1.0 feedback sign as every other DOF and every other HSTS.
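As a toy check of that bookkeeping (my own illustration, not site code):

# mount: +1.0 if the OSEM senses/drives +T positively (OS position), -1.0 for SD
def dc_loop_sign(mount, eul2osem, osem2eul):
    physical_T = eul2osem * mount    # physical motion per requested +T drive
    adc = mount * physical_T         # OSEM ADC response to that motion
    reported_T = osem2eul * adc      # what the DAMP input sees
    return reported_T                # +1 means requested +T reads back as +T

# SRM today: OS mount (+1) with both matrix elements incorrectly -1.0
print(dc_loop_sign(mount=+1.0, eul2osem=-1.0, osem2eul=-1.0))   # 1.0
# A correctly configured OS suspension (e.g. SR2, SR3)
print(dc_loop_sign(mount=+1.0, eul2osem=+1.0, osem2eul=+1.0))   # 1.0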

Does this matter for the IFO?
No. This sensor is only used to damp transverse motion, i.e. transverse to the main IFO beam. If there are no defects on the SRM optic HR surface, and the transverse displacement doesn't span a large fraction of the beam width, then there should be no coupling into L, P, or Y, which are the DOFs to which the IFO should be sensitive.
This is corroborated by Josh's recent work, where he measured the coupling of the "SD" OSEM drive (actually in the OS position, driving -T) and found it to be negligible; see LHO:83277, specifically this SRM plot.
Not only is the IFO not sensitive to the transverse drive from the OSEM, but also the absolute sign of whether it's +T or -T doesn't matter since there's no *other* sensors that measure this DOF that we'd have to worry about comparing signs against.

Should we fix it?
I vote yes, but with a low priority, perhaps during maintenance when we have the time to gather a "post transverse sensor sign change fix" set of transfer functions.
Non-image files attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 10:39, Tuesday 24 June 2025 (85279)
H1 SUS SRM's Basis Transformation Matrix elements for Transverse has been rectified as of 2025-06-24. See LHO:85277.
H1 ISC (GRD, OpsInfo)
elenna.capote@LIGO.ORG - posted 11:33, Friday 20 June 2025 - last comment - 16:46, Friday 20 June 2025(85200)
PRC Align in ALIGN IFO looks good

While Corey ran an initial alignment, I went to PREP_PRC_ALIGN to check the WFS error signals for that state. Both pitch and yaw looked good, and I confirmed the signals cross zero when expected, so I turned on the PRC ALIGN state and the initial alignment engaged and offloaded properly. Seems like we can use PRC align now!

I tagged OpsInfo so operators know. I recommend that the operators just pay attention during PRC Align over the next few days just to ensure nothing is going wrong (watch build ups as the ASC engages and confirm it's going in the right direction).

Comments related to this report
ryan.short@LIGO.ORG - 16:46, Friday 20 June 2025 (85212)OpsInfo

Since PRC_ALIGN should be working again, I removed the edge in the INIT_ALIGN Guardian between 'INPUT_ALIGN_OFFLOADED' and 'MICH_BRIGHT_ALIGNING' that was added in alog84950 so that automatic initial alignments will now include PRC.
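For context, guardian state graphs are defined by an edges list of (from, to) tuples, so this amounts to deleting one tuple (a sketch using the state names from this comment; the rest of the node code is unchanged):

edges = [
    # ...other INIT_ALIGN edges unchanged...
    # ('INPUT_ALIGN_OFFLOADED', 'MICH_BRIGHT_ALIGNING'),  # removed: path now routes through PRC_ALIGN again
]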

LHO VE
david.barker@LIGO.ORG - posted 11:11, Friday 20 June 2025 (85199)
Fri CP1 Fill

Fri Jun 20 10:07:53 2025 INFO: Fill completed in 7min 49secs

 

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:40, Friday 20 June 2025 (85197)
DARM offset stepping, unrelated lockloss

Corey and I ran the DARM offset stepping script using Jennie Wright's instructions from 85136.

Writing data to data/darm_offset_steps_2025_Jun_20_15_11_46_UTC.txt
Moving DARM offset H1:OMC-READOUT_X0_OFFSET to 4 at 2025 Jun 20 15:11:46 UTC (GPS 1434467524)

We lost lock 2 minutes after this script finished. Kevin was running an ADF sweep, but it was not doing anything at the time of the lockloss, so this seems like an unrelated lockloss.

 

H1 PSL (PSL)
corey.gray@LIGO.ORG - posted 10:39, Friday 20 June 2025 (85196)
PSL Status Report (FAMIS #26427)

This is for FAMIS #26427.
Laser Status:
    NPRO output power is 1.844W
    AMP1 output power is 70.07W
    AMP2 output power is 140.3W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 0 hr 25 minutes
    Reflected power = 23.23W
    Transmitted power = 105.5W
    PowerSum = 128.7W

FSS:
    It has been locked for 0 days 0 hr and 5 min
    TPD[V] = 0.8216V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 32 minutes ago


Possible Issues:
    PMC reflected power is high

LHO General
corey.gray@LIGO.ORG - posted 07:38, Friday 20 June 2025 (85193)
Fri Ops Day Transition

TITLE: 06/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.11 μm/s
QUICK SUMMARY:

H1's been locked 8.75 hrs with no Owl shift notifications and fairly quiet environmental conditions (breezes picking up a little, just over 10 mph).

After the lockloss Oli noted at the end of their shift, H1 relocked fine without needing another alignment.

Commissioning is also scheduled from 8:00 to 14:00 PDT today.

H1 General (SQZ)
oli.patane@LIGO.ORG - posted 22:08, Thursday 19 June 2025 (85192)
Ops Eve Shift End

TITLE: 06/20 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently relocking and in FIND_IR.

We've been having short locks all day, two of them during my shift, with the locklosses seemingly having different or unknown causes. None of them seem to be caused by anything environmental, as wind and ground motion have been low (secondary useism was a bit elevated, but has been coming down and is nowhere near the amount that would typically give us problems). We've also now had two locklosses during the last two relock attempts from ENGAGE_ASC_FOR_FULL_IFO, at the spot where we've been having that glitchiness in the PR gain since coming back from the vent. Also, like Ibrahim mentioned in his end of shift alog (85186), DRMI has been taking a really long time to catch even though the flashes have been great. It's been taking us down to CHECK_MICH multiple times, and each time the alignment looks great.

During the relock following the 2025-06-20 00:09 UTC lockloss (85188), once we got back up to NLN, the squeezer was still working on trying to relock. It wasn't able to lock the OPO and was giving the messages "Scan timeout. Check trigger level" and "Cannot lock OPO. Check pump light on SQZT0". However, I just tried re-requesting LOCKED_CLF_DUAL on the OPO guardian, and it locked fine on the first try. So I guess that wasn't actually a problem? Tagging SQZ anyway.


LOG:

23:30 Relocking
23:59 NOMINAL_LOW_NOISE
    00:01 Observing
00:09 Lockloss
    - Running a manual initial alignment
    - Lockloss at ENGAGE_ASC_FOR_FULL_IFO
    - Lockloss from CARM_TO_TR
02:05 NOMINAL_LOW_NOISE
    02:11 Observing
03:32 Lockloss
    - Running a manual initial alignment
    - Lockloss at ENGAGE_ASC_FOR_FULL_IFO
   

Start Time System Name Location Lazer_Haz Task Time End
23:36 FAC Tyler XARM n Checking on his beloved bees 23:52
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 20:33, Thursday 19 June 2025 (85191)
Lockloss

Lockloss at 2025-06-20 03:32 UTC after 1.5 hours locked. Not sure why we're having all these short locks today

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 17:12, Thursday 19 June 2025 - last comment - 19:12, Thursday 19 June 2025(85188)
Lockloss

Lockloss at 2025-06-20 00:09 UTC after only 10 minutes locked

Comments related to this report
oli.patane@LIGO.ORG - 19:12, Thursday 19 June 2025 (85190)

02:11 Back to Observing

H1 CAL
ibrahim.abouelfettouh@LIGO.ORG - posted 09:06, Thursday 19 June 2025 - last comment - 15:28, Friday 20 June 2025(85179)
Calibration Sweep 06/19 - Failed - Lockloss during Simulines

Headless Start: 1434382652

2025-06-19 08:42:24,391 bb measurement complete.
2025-06-19 08:42:24,391 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250619T153714Z.xml
2025-06-19 08:42:24,392 all measurements complete.
Headless End: 1434382962

Simulines Start: 1434383042

2025-06-19 15:55:14,510 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 8.99, and amplitude 0.53965, is finished. GPS start and end time stamps: 1434383704, 1434383727
2025-06-19 15:55:14,510 | INFO | Scanning frequency 10.11 in Scan : L3_SUSETMX_iEXC2DARMTF on PID: 2067315
2025-06-19 15:55:14,511 | INFO | Drive, on L3_SUSETMX_iEXC2DARMTF, at frequency: 10.11, is now running for 28 seconds.
2025-06-19 15:55:17,845 | ERROR | IFO not in Low Noise state, Sending Interrupts to excitations and main thread.
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:LSC-DARM1_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L2_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L3_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:CAL-PCALY_SWEPT_SINE_EXC
2025-06-19 15:55:17,846 | ERROR | Ramping Down Excitation on channel H1:SUS-ETMX_L1_CAL_EXC
2025-06-19 15:55:17,846 | ERROR | Aborting main thread and Data recording, if any. Cleaning up temporary file structure.
PDT: 2025-06-19 08:55:22.252424 PDT
UTC: 2025-06-19 15:55:22.252424 UTC
Simulines End GPS: 1434383740.252424

Could not generate a report using the wiki instructions (probably due to incomplete sweep).

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:28, Friday 20 June 2025 (85209)

Here is a screenshot of the broadband measurement. The calibration still looks good!

Images attached to this comment
H1 General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 08:57, Thursday 19 June 2025 - last comment - 19:21, Thursday 19 June 2025(85178)
Lockloss 15:55 UTC

Lockloss whilst running simulines. Since this has happened in the last two weeks, caused by the sweep, and there are no other immediate causes, the sweep probably caused this lockloss. We were in Observing for nearly 10 hrs.

Comments related to this report
oli.patane@LIGO.ORG - 19:21, Thursday 19 June 2025 (85189)

I wasn't able to confirm for sure that the lockloss was caused by the calibration sweep. It looks like the QUAD channels all had an excursion at the same time, before the lockloss was seen in DARM (ndscope1). It does look like in the seconds before the lockloss, there were two excitations ramping up (H1:SUS-ETMX_L2_CAL_EXC_OUT_DQ at ~6 Hz (ndscope2) and H1:SUS-ETMX_L3_CAL_EXC_OUT_DQ at ~10 Hz (ndscope3)). Their ramping up is not seen in the QUAD MASTER_OUT channels, however, so it's hard to know if those excitations were the cause.

Images attached to this comment
H1 SUS (ISC, SEI, TCS)
jeffrey.kissel@LIGO.ORG - posted 16:59, Thursday 30 November 2017 - last comment - 09:19, Friday 20 June 2025(39590)
B&K Hammer Done on HAM3 Baffles, SR2 Cage, and Primary/Final/Large in-vac TCS Steering Mirrors in BSC2
J. Kissel, S. Pai

Siddhesh and I B&K Hammer'ed the new MC2 and PR2 Scraper Baffles, the SR2 Cage, and the Primary/Final/Large(?) in-vac TCS Steering Mirrors in BSC2. More details, pictures, and results to follow.

Notes for myself later: 
- SR2 cage accelerometer was oriented with X Y Z aligned with the IFO global coordinates. 
- All other measurements had acc Y aligned with IFO Y, and acc Z aligned with IFO X.
Comments related to this report
jeffrey.kissel@LIGO.ORG - 09:19, Friday 20 June 2025 (85195)
Adding an oldie-but-goldie picture of MC2 (background) and PR2 (foreground) taken by Conor Mow-Lowry the day of this B&K exercise. The picture is taken from the -Y HAM3 door, so IFO +X and the beam splitter are to the right, and +L for both suspensions is to the left as their HR surfaces point back toward HAM2 (to the left).

Critical to today's yak shaving: the OSEM measuring Transverse at the top mass (M1) stage is toward the camera for both suspensions, i.e. in their +T direction, which means it's the "SIDE," or SD OSEM, per E1100109 (rather than the OPPOSITE SIDE, or OS).

(Yak-shaving: while putting together the derivation of the OSEM2EUL matrices in G2402388, I'm reminded that some HSTS were assembled with the Transverse sensor in the OS position, and I'm on the hunt as to which ones -- and making sure the (SD to T) and (T to SD) elements of the OSEM2EUL and EUL2OSEM matrices, respectively, are correctly -1.0 if SD and +1.0 if OS.)

MC2 and PR2's OSEM2EUL matrix value H1:SUS-[MC2,PR2]_M1_OSEM2EUL_2_6 is correctly -1.0.
Images attached to this comment