Reports until 08:43, Thursday 09 October 2025
H1 SQZ
camilla.compton@LIGO.ORG - posted 08:43, Thursday 09 October 2025 - last comment - 11:36, Thursday 09 October 2025(87385)
No SQZ Time, Checked NLG

We went into No SQZ from 15:34 UTC to 15:42 UTC. I checked the NLG as in 76542.

OPO Setpoint | Amplified Max | Amplified Min | UnAmp     | Dark     | NLG
80           | 0.0134871     | 0.00017537    | 0.0005911 | -2.57e-5 | 21.9
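
For reference, the NLG above is consistent with the usual dark-offset-subtracted ratio (assuming the same method as 76542); a minimal sketch:

# Rough NLG check (assumed method, per 76542): dark-subtract the amplified
# maximum and the unamplified level, then take their ratio.
amp_max = 0.0134871   # amplified max
unamp   = 0.0005911   # unamplified
dark    = -2.57e-5    # dark
nlg = (amp_max - dark) / (unamp - dark)
print(f"NLG = {nlg:.1f}")   # ~21.9, matching the table above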
Comments related to this report
camilla.compton@LIGO.ORG - 11:36, Thursday 09 October 2025 (87391)

Elenna noticed this OPO temp change made SQZ worse at the yellow BLRMs. I then ran SQZ_OPO_LR's SCAN_OPOTEMP state, which moved the OPO temp further in the same direction. This unlocked SQZ (which it shouldn't have), but it did make the yellow BLRMs better.

LHO General
corey.gray@LIGO.ORG - posted 07:41, Thursday 09 October 2025 - last comment - 08:50, Thursday 09 October 2025(87382)
Thurs DAY Ops Transition

TITLE: 10/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.18 μm/s 
QUICK SUMMARY:

H1's been locked 16.5+hrs (even rode through an M6.0 Russia EQ an hr ago!).  

OVERNIGHT:  SQZ dropped H1 from Observing for 4min due to the PMC.  NO Wake up Calls!

ALSO:  Thurs Calibration window is from 830-noon (local time), but Calibration will be deferred to later in the morning so Elenna's and/or Robert's tasks can start things off.

Comments related to this report
corey.gray@LIGO.ORG - 08:50, Thursday 09 October 2025 (87386)PEM

Did the Dust Monitor Check and found a new (to me at least) problem with DR1 (Diode Room 1).

H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Wednesday 08 October 2025 (87381)
OPS Wednesday EVE shift summary

TITLE: 10/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, 7 hours. Calm evening.
LOG:                                                                                                                    

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD ദ്ദി(⎚_⎚) | 13:52
16:48 | PSL | RyanS, Jason | Optics lab | YES | NPRO! | 00:43
22:36 | ISS | Keita, JennyQ | Optics lab | YES | ISS array align | 00:53
22:46 | ISS | Rahul | Optics lab | YES | Join ISS array work | 00:05
00:02 | FAC | Tyler | X1 beamtube | N | Checks | 00:30
H1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 20:28, Wednesday 08 October 2025 (87380)
Weekly In-Lock SUS Charge Measurement FAMIS28426

Closes FAMIS28426, last checked in alog87272

ETMX's Veff error bars are pretty big, as expected. The values look fairly stable over time.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 16:26, Wednesday 08 October 2025 (87368)
Wed DAY Ops Summary

TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

One ETMx glitch lockloss, and a fire was observed on Rattlesnake.
LOG:

H1 SEI
thomas.shaffer@LIGO.ORG - posted 16:14, Wednesday 08 October 2025 (87378)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS27393

Nothing new, all looks OK.

Last week's task

Non-image files attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 16:04, Wednesday 08 October 2025 (87377)
OPS Wednesday EVE shift start

TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 10mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.15 μm/s 
QUICK SUMMARY:

H1 General
corey.gray@LIGO.ORG - posted 15:33, Wednesday 08 October 2025 (87376)
H1 Lockloss From Infamous ETMx Glitch (Recovery Was Fine)

At 2047 UTC, H1 went out of Observing due to the SQZ PMC.  It was almost back, but then H1 had a lockloss at 2050 UTC due to an ETMx glitch.  Recovery was fairly quick, but I needed to tweak ETMx & mostly TMSx.  Then DRMI looked bad, so I went through CHECK MICH FRINGES + PRMI.  H1 then got back to OBSERVING automatically.

H1 PSL
ryan.short@LIGO.ORG - posted 13:34, Wednesday 08 October 2025 (87374)
PSL Cooling Water pH Test - Monthly

FAMIS 27425

pH of PSL chiller water was measured to be between 10.0 and 10.5 according to the color of the test strip.

H1 IOO (ISC, PSL)
jennifer.wright@LIGO.ORG - posted 10:56, Wednesday 08 October 2025 - last comment - 18:13, Wednesday 08 October 2025(87373)
ISS measurements on Monday and Tuesday this week

Jennie W, Keita, Rahul

Monday:

Keita and I re-optimised the array input pointing using the tilt actuator on the PZT steering mirror and the translation stage.

After taking a vertical and a horizontal coupling measurement, he realised that the DC values were very low when we optimised the pointing to improve the vertical coupling. Looking at the QPD, the cursor was in the bottom half, so we cannot use the QPD y-readout channel to work out the 'A' channel for either measurement (where the TF is B/A).

So for the A channel in the TF for the vertical coupling we had to use 
A = QPD_X/(cos(QPD_angle*pi/180))/sqrt(2)/Calib_X, where A is the time series for the TF, 'QPD_angle' is the angle between the horizontal dither axis and the QPD x-axis, and Calib_X is the calibration of motion on the QPD in the x-axis in V/mm (LHO alog #85897).

And for the A channel in the TF for the horizontal coupling we had to use 
A = QPD_X/(cos(QPD_angle*pi/180))/Calib_X.
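
A minimal sketch of this A-channel construction (variable names are illustrative, not actual channel names):

import numpy as np

def a_channel(qpd_x, qpd_angle_deg, calib_x, vertical=True):
    """Return the 'A' time series (mm) used as the TF denominator (TF = B/A)."""
    a = qpd_x / np.cos(np.deg2rad(qpd_angle_deg)) / calib_x
    if vertical:
        a /= np.sqrt(2)   # extra 1/sqrt(2) for the vertical-dither projection
    return a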

The data is plotted here.


Yesterday Keita and I double-checked the beam size calculation I did on 26th June when we reset the alignment to the ISS array from the laser after it got bumped (we think). The calculated beam size on PD1 (the one with no transmissions through the mirrors) was a 0.23 mm beam radius in the x direction and 0.20 mm in the y direction. The calculated beam size on the QPD was 0.20 mm in the x direction and 0.19 mm in the y direction. The waist should be close to this point (behind the plane of the array photodiodes), as the Rayleigh range is 9 cm in the x direction and 10 cm in the y direction.
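
As a quick sanity check (not part of the original calculation, and assuming 1064 nm light), the quoted Rayleigh ranges imply a waist slightly smaller than the spot sizes measured at the PD/QPD plane, which is consistent with the waist sitting behind the photodiodes:

import numpy as np

lam = 1064e-9                        # assumed wavelength [m]
for z_r in (0.09, 0.10):             # quoted Rayleigh ranges [m]
    w0 = np.sqrt(z_r * lam / np.pi)  # implied waist radius, w0 = sqrt(z_R*lambda/pi)
    print(f"z_R = {z_r*100:.0f} cm -> w0 = {w0*1e3:.2f} mm")
# 9 cm -> ~0.17 mm, 10 cm -> ~0.18 mm, vs 0.19-0.23 mm spots at the PD/QPD plane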

This check is because our beam calibration, as reported in this entry, seems to be at least a factor of 2 off from Mayank and Shiva's measurements reported here (DCC LIGO-T2500077).

Since we already know the QPD was slightly off from our measurements on the 6th October, Keita and Rahul went in and retook the calibration measurements of volts on the QPD to mm on the QPD.

In the process Keita noticed that the ON-TRAK amplifier for the QPD had the H indicator lit and so was saturating. He turned the input pump current down from 130 mA to 120 mA and the gain value on the amplifier from G3 (64k ohm) to G2 (16k ohm). The QPD was also recentred on the input pointing position where we had good vertical and horizontal coupling, as we had left it in the position we found on Monday where it was off in yaw. We had to do several iterations of alignment, switching between vertical and horizontal dither, and still could only find a place where the coupling of PDs 1-4 was optimised; PDs 5-8 have bad coupling at this point. At this position we also took calibration measurements where we scanned the beam and noted down the voltage on the QPD X, Y, and SUM channels.

Keita notes that for the QPD to be linear the voltages need to be below +/- 7V.
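
A rough sketch of how such a V/mm calibration could be pulled from the scan data (the numbers below are placeholders, not our measurements), keeping only points inside the +/- 7 V linear range:

import numpy as np

pos_mm  = np.array([-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3])   # beam position [mm], placeholder
qpd_x_v = np.array([-6.1, -4.2, -2.0, 0.1, 2.1, 4.1, 6.0])   # QPD X readout [V], placeholder

linear = np.abs(qpd_x_v) < 7.0   # stay inside the linear range noted above
calib_x, offset = np.polyfit(pos_mm[linear], qpd_x_v[linear], 1)
print(f"Calib_X ~ {calib_x:.1f} V/mm")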

I will attach the final set of measurements in a comment below.

We left the alignment in this state with respect to the bullseye QPD readout.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:27, Wednesday 08 October 2025 (87375)

The coupling measurement from the last measurements we took on Tuesday is here, and the calibration of the motion on the QPD is here.

Images attached to this comment
jennifer.wright@LIGO.ORG - 18:13, Wednesday 08 October 2025 (87379)

I was calibrating the above data using the Calib_X and Calib_Y values instead of by sqrt(Calib_X^2 + Calib_Y^2).

Fixed this in the attached graph.
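
In other words (placeholder numbers, just to show the correction), the calibration in the denominator should be the quadrature sum of the two single-axis calibrations:

import numpy as np

calib_x, calib_y = 20.0, 21.0                        # V/mm, placeholders
calib_combined = np.sqrt(calib_x**2 + calib_y**2)    # use this instead of Calib_X or Calib_Y alone
# A = qpd_signal / np.cos(np.deg2rad(qpd_angle)) / calib_combined  (vertical: also /sqrt(2))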

Also, 3 of the PDs are getting near the edge of their aligned range, which can be seen by looking at the spread of DC values on the array PDs in this graph.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:17, Wednesday 08 October 2025 (87372)
Wed CP1 Fill

Wed Oct 08 10:06:51 2025 INFO: Fill completed in 6min 48secs

Gerardo addressed the TC-B issue due to ice buildup. Fill completed using the working TC-A.

Images attached to this report
H1 SQZ (SQZ)
corey.gray@LIGO.ORG - posted 08:36, Wednesday 08 October 2025 - last comment - 10:01, Wednesday 08 October 2025(87361)
Squeezer Not Squeezing After Maintenance

As H1 made its first attempt at squeezing at 2150 UTC, when H1 was at NLN, the SQZ_MANAGER was in an odd state.  Although it passed through the Inject Squeezing state, it hadn't even attempted squeezing (the SQZ ndscope FOM showed no action/changes).  It looked like SQZ_MANAGER was requesting DOWN, but it was not able to complete the DOWN.

I've not seen this before, so I took the opportunity to follow the new SQZ Flow Chart....

All these nodes were already DOWN (SQZ_MANAGER was requesting DOWN, but not in the (green) DOWN state)

These nodes were LOCKED:  PMC & SHG

Started going through the SQZ Flow Chart Camilla passed on to operators recently (T2500325). NOTE:  I did not take all SQZ nodes to DOWN...PMC & SHG were nominal/LOCKED, so I kept them there since this was the first step of the Flow Chart (but was this the reason the SQZ_MANAGER wasn't in the DOWN state?). In hindsight, I probably should have taken these down to truly follow the instructions; at any rate...

Going through this flow chart, I got to the step where we check the SHG power to see if it is less than 90 (it was down at ~85). At this point I adjusted the SHG TEC temp to increase the SHG power and made it up to ~119, but then attention switched to the SQZ_MANAGER and why it wasn't DOWN. Around this time Oli & Tony offered assistance.

At this point we started relocking SQZ normally and discovered the OPO_LR was going into "siren mode"---basically a fast cycle of going to DOWN, plus it had the notification about "pump fiber rej power in ham7 high...". Addressing the latter "common notification" involved working with the SQZT0 waveplates (this was very likely due to the big adjustment UP in SHG power I made).

Tony & Oli (who had experience w/ these waveplate adjustments) needed several iterations of adjusting SQZT0 waveplates (due to the increased SHG power): the 1/2 & 1/4 waveplates downstream of the SHG Launch beamsplitter path (see image #1) AND the 1/2-waveplate upstream of the SHG Rejected + Launch paths (see image #2). This took some time because of how much the SHG power was increased earlier and because every waveplate adjustment unlocked the PMC & SHG.

Eventually they were able to get the SHG_FIBR_REJECTED_DC_POWERMON & SHG_LAUNCH_DC_POWERMON signals to their nominal values (these signals are also in Tony's screenshots, i.e. images #1 & #2).  After 30+ min of adjusting, the SQZ was able to lock & H1 was automatically taken to Observing as soon as the new SHG TEC SETTEMP was accepted in SDF (see image #3).  H1 went to Observing just as Sheila was walking into the Control Room, so she was able to assess and get a rundown of what happened.

Many thanks to Oli & Tony for the help and Tony's screenshots!

 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 10:01, Wednesday 08 October 2025 (87371)

It looks like the issue was that SQZ_MANAGER was staying in DOWN rather than moving up to the requested FRS_READY_IFO, as the beam divertor had been left open after the 87342 tests. Re-requesting SQZ_MANAGER to DOWN would have closed the beam divertor and allowed it to try to relock.

Corey correctly increased the available SHG power with the TEC; however, this meant that there was too much power going into the SHG fiber for the OPO to lock. This is surprising to me: I would have expected it to lock with no ISS and then unlock when the ISS couldn't get the correct OPO trans power, but that wasn't the case.  Oli then correctly adjusted the power control wave plate (PICO I #3) to reduce power going into the fiber; this is annoying to do as it unlocks everything each time the waveplate is touched, hence not being in the T2500325 flowchart.

This is an unusual reason why squeezing didn't work, so no changes have been made to anything.

LHO General
corey.gray@LIGO.ORG - posted 07:44, Wednesday 08 October 2025 (87366)
Wed DAY Ops Transition

TITLE: 10/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 3mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.16 μm/s 
QUICK SUMMARY:

H1's been locked 1.25 hrs.  Quiet environmentally.  2nd day in a row of having a dust alarm for the LAB1 500nm alarm (ringing but not active) in the Optics Lab.

Overnight

1056-1319 UTC: DOWN due to a lockloss from an M5.0 Guatemala earthquake.  H1 spent 49 min trying to get to DRMI with no luck before running an alignment, and then the alignment was plagued by a glitching SRM with (2) SRM watchdog trips (TJ was awakened by this and alogged it this morning).  After the SRM hubbub (glitches from 1215-1228 UTC), TJ was able to have H1 automatically make it back up to Observing.

H1 General
thomas.shaffer@LIGO.ORG - posted 05:44, Wednesday 08 October 2025 - last comment - 09:43, Wednesday 08 October 2025(87365)
Ops Owl Update

SRM M1-3 had tripped during initial alignment while trying to lock SRY. This has been the case for the last few weeks. Even after the WD trips, the SRM model will continue to saturate for several minutes. After it failed on its own a few times, I requested ALIGN_IFO to DOWN and adjusted SRM in P to get AS_A brighter and the camera looking better. This worked. I ended up moving it by 26 urad.

For whatever reason, trying to trend what was happening with SRM right before the trips freezes my ndscope. Everything else is running fine. I'll look into this back on site.

Just made it past DRMI, all looks good.

Comments related to this report
thomas.shaffer@LIGO.ORG - 09:43, Wednesday 08 October 2025 (87370)SUS

Looks like SRY locked but then immediately lost it, perhaps right on the cusp of having enough signal. ALIGN_IFO goes to DOWN and starts to turn things off, but the L drive from SRCL is still on for ~2.5 seconds. Even after it gets turned off SRM is still swinging. attachment 1

I'm a bit confused how Y is seen to be moving at 1 Hz so much when we are only driving L. It looks like this issue has only been happening for a few weeks when I trend the SRM WD trips. We have tripped not only during SRY locking but a few times during DRMI acquisition as well. The timing is very suspicious, lining up with the SRM sat amp swap from Sept 23 - alog87103 alog87105.

Images attached to this comment
H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Tuesday 07 October 2025 - last comment - 11:03, Wednesday 08 October 2025(87364)
OPS Tuesday EVE shift summary

TITLE: 10/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, just over 7 hours. PEM-MAG-INJ has a CFC.
LOG:                                                                                                                                                                                                                                                                                                                                                              

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
22:52 | SAF | Laser HAZARD | LVEA | YES | LVEA is Laser HAZARD ദ്ദി(⎚_⎚) | 13:52
23:52 | PSL | Jennie, Rahul | Optics lab | LOCAL | ISS Array work, in at 23:00 | 00:18
Comments related to this report
corey.gray@LIGO.ORG - 11:03, Wednesday 08 October 2025 (87367)

Gosh---I thought I posted my Day shift Summary, but obviously did not---posting it here:


TITLE: 10/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

Mostly straightforward Maintenance with all activity below.  The shift ended with some SQZ locking trouble, but after an hour of working the issue, H1 was back to Observing.
LOG:

  • 1430-1807 Xarm BTE inspections from corner to MX all day (randy)
  • 1439 reboot FOMs (eric)
  • 1440 PAUSE on ITMx/y CO2 PWR Guardians for Camilla and Matt T meas
  • 1447-1525 Kim & Dawn walkthru LVEA @8am PDT
  • 1452-1520 disassembly of blue  A-Frame gantry (chris, eric)
  • 1452-1551 EY cleaning (nellie)
    • 1529 EX cleaning (kim.dawn)
    • 1613-1723 LVEA cleaning (nellie)
    • 1646-1723 LVEA cleaning (kim)
    • 1756-1831 FCES cleaning (nellie,kim)
  • 1500 Maintenance Begins!
  • 1500 OMC Scans (mattT)
    • 1518 psl rack visit (MattT)
  • 1517 Site travels for safety/3ifo/periodic checks (tyler)
  • 1520 SR3/PR3 models updated (dave)----Related to upcoming SR3/PR3 work by Jeff/Oli
  • 1524-1612 FCES hvac (eric)
  • 1530-1841 EX Air Handler (AHU bearing replacement (chris)
    • 1611-1841 eric joining
  • 1533-1653 LN2 Norco truck arrives for MY (cp3)
    • 1555-1735 LN2 Norco truck arrives, but goes down the X-arm (from Trello, I thought they would be at EY CP7)....3 min later, I saw them backing up---assuming they were going to EY!
  • 1534-1620 floor measurements & parts (betsy)
    • 1648-1730 Heading back out (betsy)
  • 1548-1558 CO2x TCS laser needing to be turned back on (camilla, sheila, matt)
  • 1553-1836 MY & MX emergency/roughing pump work (travis)
    • 1624-1836 Janos joining
  • 1607-1628 Laser test set-up in Optics Lab (jason)
  • 1608-1622 HAM4 ISI off for 20min (jim.marc)
  • 1608-1632 Checking (JAC?) parts (daniel)
    • 1617-1632 Jenny joining
    • 1656-1712 Back out to get mechanical shutter (daniel)
  • 1637-1733 Various PSL tweak ups (jason, ryan)
  • 1659-1739 HAM6 HEPI sensor work [ISI to DAMPED] (jim)
  • ~1700-1921 PR3/SR3 work begins (jeff, oli, dave)
    • 1714 Restarts complete (dave)
  • 1711-1811 Check new SUS rack in lvea (marc)
  • 1713-1827 ISS PD Array/laser HAZARD (keita)
    • 1753-1827 rahul joining
    • 2156 Keita returning & 2234 Rahul/Jenny returning
  • 1715 HAM7 WD trip (jim said it wasn't him), and reisolated
  • 1718-2009 moving vertex (bsc1/2/3) accelerometers in/out of LVEA a few trips (robert)
  • 1743 new-NPRO characterization in optics lab (jason.ryan)
    • 2024-2113 Out for lunch
  • 1753-1800 leave parts grab (corey)
  • 1753-1822 checking isc drawers (jenny)
  • 1809-1919 VEA CDS laptops search & update (tony)
  • 1810-1852 Swap sled on Hartman table (camilla, tj)
  • 1852-1858 TJ Sweeping LVEA
  • 1921-1945 Initial Alignment started
  • 1937 Delivery truck pulls up to Gate (I mentioned signs point to LSB for deliveries, but they said they did not see a sign).  Steered truck back to LSB Receiving.
  • 1947 Locking begins
    • 2029 Lockloss at LOWNOISE ASC (it was determined this was caused by some Guardian changes from Maintenance---Elenna found the issue and updated Guardian for the next lock).
    • Next lock had horrible DRMI alignment, tried PRMI +tweaks with not much luck, so ran a Check Mich Fringes + PRMI and eventually locked PRMI at 2057; DRMI finally locked at 2112 (but I did tweak the SRM after waiting 10min!)
    • 2151 Back to NLN, BUT the SQZ was not Squeezing.  Roughly an hour later, with a lot of help from Oli & Tony, we were able to lock the SQZ and go back to Observing.
      • (will alog specifics for this later) 
  • 2045 Cassidy had a class tour in the Control Room
H1 SEI
jim.warner@LIGO.ORG - posted 12:20, Tuesday 30 September 2025 - last comment - 09:10, Wednesday 08 October 2025(87226)
HAM4 ISI coil driver found making a ticking noise, probably a fan

This morning while in the CER for other reasons, I heard a coil driver on HAM4 ISI making a ticking/chirping noise. Fil suggested this was probably a fan, so we've pulled the chassis to check it out. We've put in a spare and will run that until next maintenance probably.

Chassis pulled is S1100320, the spare we put in is S1103567. Frs is https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=35547 

Comments related to this report
jim.warner@LIGO.ORG - 09:10, Wednesday 08 October 2025 (87369)

We swapped the old chassis back in yesterday, after Fil replaced the fan last week.
