Reports until 09:12, Tuesday 08 July 2025
H1 SUS
oli.patane@LIGO.ORG - posted 09:12, Tuesday 08 July 2025 (85606)
SUS OLTFs taken for ITMY R0

We decided to quickly take OLTFs for the ITM R0 stages since we didn't have any yet.

Note: These measurements were taken with the 0.4:10 (old) satamp

ITMY R0
- Measurements taken with suspension in HEALTH_CHECK but with damping loops on
    - optic align offsets off, L2->R0 damping off, etc
- We needed to lower the excitation amplitude for V (10 -> 2.5) and Y (300 -> 10) to keep the suspension DAC from overflowing and saturating. In the case of V, we could get full measurements with the excitation amplitude at 5, but the coherence was poor, so I lowered the amplitude to 2.5, which made sure the OSEMs weren't being overdriven. These excitation filters were originally matched to the ETM R0 ones, but we had to adjust them.
Data: /ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ITMY/SAGR0/Data/2025-07-08_1510_H1SUSITMY_R0_WhiteNoise_{L,T,V,R,P,Y}_0p02to50Hz_OpenLoopGainTF.xml r12395
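
For reference, here is a minimal sketch (with synthetic data standing in for the DTT channels, so none of this is the actual measurement code) of the kind of drive/response coherence check used to judge whether a lowered excitation amplitude still gives a usable measurement:

    # Minimal sketch (synthetic data, not the real DTT channels): judge whether a
    # lowered white-noise excitation still gives usable coherence over the band.
    import numpy as np
    from scipy.signal import coherence, lfilter

    fs, dur = 256.0, 600.0                       # sample rate [Hz], duration [s]
    rng = np.random.default_rng(0)
    exc = rng.normal(size=int(fs * dur))         # white-noise drive (stand-in)
    resp = lfilter([0.05], [1.0, -0.95], exc)    # toy suspension-like response
    resp += 0.02 * rng.normal(size=resp.size)    # plus some sensor noise

    f, coh = coherence(exc, resp, fs=fs, nperseg=4096)
    band = (f >= 0.02) & (f <= 50.0)             # band of the 0p02to50Hz templates
    print(f"median coherence in band: {np.median(coh[band]):.2f}")
    # If this drops well below ~0.9 after lowering the drive, the excitation is
    # probably too weak (or buried in ambient motion) for a clean measurement.

In practice this is just the coherence trace already shown in the DTT template; the sketch only spells out the threshold logic.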

H1 AOS
jeffrey.kissel@LIGO.ORG - posted 09:05, Tuesday 08 July 2025 (85605)
H1 SUS ITMX R0 Reaction Chain Open Loop Gain TFs
J. Kissel

Continuing on the campaign of gathering open loop gain (and loop suppression & closed loop gain TFs), I measured H1SUSITMX's reaction chain R0 damping loops this morning -- see
    /ligo/svncommon/SusSVN/sus/trunk/QUAD/H1/ITMX/SAGR0/Data/
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_L_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_P_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_R_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_T_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_V_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1530_H1SUSITMX_R0_WhiteNoise_Y_0p02to50Hz_OpenLoopGainTF.xml

SUS was in the new HEALTH_CHECK state, so R0 tracking is OFF and alignment offsets are OFF, but damping loops are *on.*
SEI / HPI / ISI was FULLY_ISOLATED, in its best performing state.

Data analysis and commentary to come.
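
For readers less familiar with these templates, here is a minimal sketch of the generic cross-spectral transfer-function estimate that this kind of white-noise measurement relies on. The arrays below are synthetic stand-ins; the real drive/response channel handling and the open-loop-gain bookkeeping live inside the DTT .xml templates listed above.

    # Minimal sketch of the generic estimate behind a broadband TF measurement:
    # TF(f) ~= CSD(drive, response) / PSD(drive), averaged over many segments.
    import numpy as np
    from scipy.signal import csd, welch

    fs = 256.0
    rng = np.random.default_rng(1)
    drive = rng.normal(size=int(fs * 600))                     # injected white noise
    kernel = np.exp(-np.arange(256) / 32.0)                    # toy loop response
    response = np.convolve(drive, kernel, mode="full")[:drive.size]
    response += 0.05 * rng.normal(size=drive.size)             # readback noise

    f, Pxy = csd(drive, response, fs=fs, nperseg=4096)
    _, Pxx = welch(drive, fs=fs, nperseg=4096)
    tf = Pxy / Pxx                                             # complex TF estimate
    print(f"|TF| at {f[10]:.2f} Hz: {abs(tf[10]):.3f}, "
          f"phase: {np.degrees(np.angle(tf[10])):.1f} deg")
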
H1 SUS
jeffrey.kissel@LIGO.ORG - posted 09:02, Tuesday 08 July 2025 (85604)
H1SUSTMSX M1 Damping Loop Open Loop Gain TFs
J. Kissel

Continuing on the campaign of gathering open loop gain (and loop suppression & closed loop gain TFs), I measured that for H1SUSTMSX M1 damping loops this morning -- see

   /ligo/svncommon/SusSVN/sus/trunk/TMTS/H1/TMSX/SAGM1/Data/
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_L_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_P_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_R_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_T_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_V_0p02to50Hz_OpenLoopGainTF.xml
        2025-07-08_1525_H1SUSTMSX_M1_WhiteNoise_Y_0p02to50Hz_OpenLoopGainTF.xml

SUS was in the HEALTH_CHECK state, with alignment offsets and its companion QUAD's L2/TMS tracking off, but damping loops turned back ON.

Data analysis and commentary to come.
H1 General
oli.patane@LIGO.ORG - posted 07:32, Tuesday 08 July 2025 - last comment - 07:33, Tuesday 08 July 2025(85602)
Ops Day Shift Start

TITLE: 07/08 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.05 μm/s
QUICK SUMMARY:

Currently we are in an initial alignment. I'll let that finish and then put the detector in idle

 

Comments related to this report
oli.patane@LIGO.ORG - 07:33, Tuesday 08 July 2025 (85603)

It was only doing input align so I just took the detector to IDLE once that offloaded

H1 AOS
erik.vonreis@LIGO.ORG - posted 07:01, Tuesday 08 July 2025 (85601)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

H1 General
corey.gray@LIGO.ORG - posted 06:21, Tuesday 08 July 2025 - last comment - 12:58, Tuesday 08 July 2025(85600)
H1 Owl Shift 545amPDT Wake Up Call: Due To Not Being Able To Get Past DRMI For Over 3hrs

Got a notification for H1 assistance.  H1 dropped from observing just under 3.5 hrs ago.  After a couple of PRMI cycles, DRMI did lock within 40 min, but immediately lost lock at DRMI LOCKED CHECK ASC.  Then an Initial Alignment was immediately run in just under an hour (@1024UTC).  H1 attempted locking again but had another lockloss at DRMI LOCKED CHECK ASC; after that, H1 made it up to CARM 5 PICOMETERS before the next lockloss, and the attempt after that reached a CARM OFFSET REDUCTION lockloss.  Then, after no luck locking PRMI and being down over 3 hrs, I got the wake-up call.

It's sounding like H1 has been having this Locking ailment as of late.

Going to let it finish a 2nd IA I started @1253UTC after seeing that PRMI looked fine (flashes looked good on camera and are over 100 counts for POP18/90).  Initial Alignment just completed and the locking clock is restarting.

(Which is good, because my NoMachine session has intermittently had the symptom that the mouse pointer does not match its actual location: I'll have the pointer over a spot, e.g. the "X" to close an MEDM window, but when I try to click it, the pointer in the NoMachine session is actually about 10" to the left (or right---basically somewhere else!), so it's hard to operate in the session. The mouse is fine on my laptop's own monitor.  I've had this issue off and on for a few months.  With that said, after about 15 min my mouse now works OK in NoMachine!)

H1 is now back to DRMI (after the recent Alignment).  Camera flashes look centered and POP18/90 flashes are just under 200.  Leaving H1 for automatic operations while I see if I can get back to sleep.  At 1325UTC/625amPDT, DRMI locked, btw.

Comments related to this report
oli.patane@LIGO.ORG - 12:58, Tuesday 08 July 2025 (85622)

It looks like before Corey was called, there were some locking attempts that got up past DRMI: two locklosses from DRMI_LOCKED_CHECK_ASC (which we have figured out were due to some SRC1 offsets being turned on by accident), then one lockloss from CARM_5_PICOMETERS, then two locklosses from CARM_OFFSET_REDUCTION. So it makes a lot more sense why it didn't call for three hours!

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 22:02, Monday 07 July 2025 (85599)
Ops Eve Shift Summary

TITLE: 07/08 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Very quiet shift with H1 observing after getting relocked. I was originally going to drop observing at some point to try SQZ alignment and angle scans to bring the BNS range up, but since the range has actually been improving over the course of this lock stretch and all four detectors are up and observing, I opted not to and we will see how things change overnight. H1 has now been locked and observing for 4.5 hours.

H1 PSL
ryan.short@LIGO.ORG - posted 18:07, Monday 07 July 2025 (85598)
PSL 10-Day Trends

FAMIS 31093

Not much to report this week other than the FSS TPD has drifted down again slightly.

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 17:08, Monday 07 July 2025 (85596)
DRMI locking is a struggle- maybe not an alignment issue

Today we've had to run many initial alignments and mess around with mirror alignments to cajole DRMI into locking. In the first part of the day, we were coming back from a big earthquake which had kept us down for hours. I don't know what we can attribute the struggle to lock to, but it can't be related to the BS heating since we had been down for many hours.

Later in the day, we were relocking after a few hours up, and just after the lockloss we proceeded to DRMI locking. The alignment was very poor. The guardian quickly took us to PRMI and then MICH FRINGES because of the very bad alignment. This is perhaps a sign that the BS slow release method isn't working so well, since it should keep the alignment decent just after lockloss. I cleared the history on the BS M1 pitch lock bank, and proceeded to help the MICH FRINGES and PRMI state by hand. Once we got back to DRMI we were able to lock relatively quickly.

I am suggesting that our DRMI locking problems are "maybe not an alignment issue" because this morning after several alignments, DRMI took a very long time to lock, and just after lockloss in the afternoon, where the slow release method should help keep the alignment good, the alignment was very bad. Maybe we should look into the triggers or engagement of the locking to see if there is a problem there. Just based on my experience this afternoon, I would want to turn off the BS slow release, or recheck that it is doing what we want it to do.

H1 General
anthony.sanchez@LIGO.ORG - posted 17:05, Monday 07 July 2025 (85594)
Day shift report

TITLE: 07/07 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:

Relocking notes:
Today started off with an Initial_Alignment that was interrupted by earthquake mode being reactivated.
This prompted another Initial_Alignment.
While trying to relock there were many attempts at the DRMI -> PRMI -> CHECK_MICH_FRINGES cycle, which yielded no locking results.
Another Initial Alignment happened, which allowed us to lock DRMI, but we caught a lockloss at LOW_NOISE_COIL_DRIVERS, and on the next attempt at OMC_WHITENING.

After that H1 bounced around through DRMI -> PRMI -> CHECK_MICH_FRINGES again until another Initial_Alignment was run.

We finally got back to NLN at 20:54 UTC and were locked for a little over 2 hours before the 4pm Lockloss struck.


LOG:                                                                           

Start Time | System | Name          | Location      | Laser Haz | Task                                       | Time End
15:31      | SQZ    | Matt & Sheila | SQZT0         | Local     | Adjusting the OPO Crystal                  | 18:25
15:38      | FAC    | Kim & Nelly   | MX & MY       | N         | Kim, technical cleaning at MX; Nelly at MY | 16:41
16:28      | PEM    | Sam & Robert  | LVEA HAM1 & 2 | N         | Installing accelerometer                   | 17:28
20:32      | PEM    | Robert        | LVEA          | N         | Turning off PEM apparatuses                | 20:34
20:48      | ISS    | Rahul         | Optics Lab    | Local     | Working in Optics Lab, likely on ISS       | 20:53
21:47      | VAC    | Jordan        | MX            | N         | Getting rack                               | 22:17
 

H1 ISC
elenna.capote@LIGO.ORG - posted 17:02, Monday 07 July 2025 - last comment - 12:57, Tuesday 08 July 2025(85593)
Strange behavior from SRC1 offsets in DRMI ASC

Today while in DRMI ASC, and while trying to debug other problems with DRMI acquisition, Ryan, Tony, and I saw that the DRMI ASC started pulling the buildups in a bad direction, which made no sense. We were trying to figure out which loops were the culprit when I saw that the SRC1 offsets were engaged. These offsets had been put in place during the problems with the OFI, and we don't run with these offsets in full lock anymore. I turned the offsets off and the buildups started moving in the good direction again. This is very confusing, because we've probably been running like this for ages without a problem; today it was suddenly a problem. I commented out the lines in the ISC_DRMI guardian state PREP_DRMI_ASC where these offsets are turned on and loaded.

Comments related to this report
elenna.capote@LIGO.ORG - 12:57, Tuesday 08 July 2025 (85623)

I made a minor mistake here: I only commented out the lines in ISC_DRMI where the offset is SET, but I didn't comment out the lines where the offset is turned ON. However, ISC_LOCK also sets a random offset in SRC1 without turning it on, so we ended up in a situation where ISC_LOCK set a weird offset and then ISC_DRMI turned it ON. This meant that today the DRMI ASC came on in a very strange way and pulled the alignment far off. I have now commented out all lines in both ISC_DRMI and ISC_LOCK that set these offsets or turn them on. Hopefully this won't be an issue again.
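
For context, a toy sketch of the set-versus-switch-on pattern described above (the channel name and numbers are made up; this is not the actual ISC_DRMI/ISC_LOCK guardian code):

    # Toy illustration of why commenting out only the "set" line wasn't enough:
    # the later switch-on line happily enables whatever value is in the bank.
    bank = {"ASC-SRC1_P_OFFSET": 0.0, "ASC-SRC1_P_OFFSET_SWITCH": "OFF"}

    def isc_lock_like_state(b):
        # Writes an offset value but never switches it on (the "weird offset").
        b["ASC-SRC1_P_OFFSET"] = -150.0

    def isc_drmi_like_state(b, set_offset=True, enable_offset=True):
        # Originally both set the offset value *and* switched it on.
        if set_offset:
            b["ASC-SRC1_P_OFFSET"] = 50.0
        if enable_offset:
            b["ASC-SRC1_P_OFFSET_SWITCH"] = "ON"

    isc_lock_like_state(bank)
    isc_drmi_like_state(bank, set_offset=False)  # only the "set" line commented out
    print(bank)  # stale -150.0 offset is now switched ON -> ASC pulls alignment off

Commenting out both the value-setting and the switch-on lines in both guardians removes this dependence on whatever value happens to be left in the bank.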

LHO General
ryan.short@LIGO.ORG - posted 16:20, Monday 07 July 2025 - last comment - 17:55, Monday 07 July 2025(85591)
Ops Eve Shift Start

TITLE: 07/07 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY: H1 just lost lock after a couple hours of being locked, so starting relocking now.

Comments related to this report
ryan.short@LIGO.ORG - 17:06, Monday 07 July 2025 (85595)

Lockloss @ 23:14 UTC after almost 2.5 hrs locked - link to lockloss tool

No obvious cause.

ryan.short@LIGO.ORG - 17:55, Monday 07 July 2025 (85597)

H1 back to observing at 00:46 UTC. Automatic relock, except that a small adjustment to FC2 pitch was needed to lock the filter cavity.

I also ran a 'SCAN_SQZANG_FDS' once at low noise to improve the SQZ angle, which helped bring the BNS range up to about 142Mpc, but not quite back to the 150Mpc we were hoping for. Perhaps a SQZ alignment scan would've helped too, but I did not take the time to do that here.

H1 SQZ
matthewrichard.todd@LIGO.ORG - posted 12:20, Monday 07 July 2025 - last comment - 16:35, Monday 07 July 2025(85589)
OPO crystal move and NLG

 M. Todd, S. Dwyer


We moved the OPO crystal around again, following the work we did last week. We had much better luck this time with the translation stage controller (until the end, where we seemed to lose where we were).

After we took the measurement at -1100, we decided that, given the hysteresis in the controller, the better way to go back to a "position" was to set our OPO temperature to the value that corresponded to that position and move the controller until we re-found the green/IR co-resonance.

We tried this, and then got veeery lost, but eventually found our way back closer to our high NLG and ~31.4 OPO crystal temp, though it was not at the position we expected it to be at. This method may be refined in the future: instead of doing finite steps with the controller to map out the NLG parameter space, we could sweep the OPO temp and follow it with the controller position via the co-resonance (a rough sketch of this is below).
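
Roughly, the refined procedure would look something like the following sketch. Everything here is a stand-in: the function names, temperatures, and return values are illustrative only, not site code; the real moves would go through the translation stage controller and the OPO temperature servo.

    # Sketch of the proposed sweep-and-follow procedure (stand-in functions).
    def set_opo_temp_setpoint(temp_c):
        print(f"temperature servo setpoint -> {temp_c:.2f} C")

    def step_stage_to_coresonance():
        # Would nudge the translation stage until green/IR co-resonance is
        # re-found and return the stage position it landed on.
        return 0

    def measure_nlg():
        # Would take the amplified vs. unamplified measurement at this spot.
        return 1.0

    results = []
    for temp in (30.3, 30.6, 30.9, 31.2, 31.4):   # coarse sweep, values illustrative
        set_opo_temp_setpoint(temp)
        pos = step_stage_to_coresonance()
        results.append((temp, pos, measure_nlg()))
    print(results)   # map of (temp, co-resonant stage position, NLG) rather than raw counts
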

Regardless, we are at a much higher NLG than when we started!

Position | Max      | Thermistor | Green Launch | Unamplified | Dark      | NLG   | Pthres | P   | Notes
0        | 6.70E-02 | 30.28      | 29.4         | 7.11E-03    | -2.70E-05 | 9.39  | 117.51 | 105 | 9:22:03 AM
100      | 4.52E-02 | 30.14      | 22.5         | 7.11E-03    | -2.70E-05 | 6.34  | 124.67 | 105 | 9:45:12 AM
-100     | 5.98E-02 | 30.214     | 27.7         | 7.11E-03    | -2.70E-05 | 8.38  | 119.22 | 105 | 9:59:40 AM
-300     | 7.83E-02 | 30.42      | 30.7         | 7.11E-03    | -2.70E-05 | 10.97 | 115.53 | 105 | 10:07:48 AM
-500     | 1.08E-01 | 30.719     | 30           | 7.11E-03    | -2.70E-05 | 15.14 | 112.43 | 105 | 10:16:46 AM
-700     | 1.20E-01 | 31.164     | 27           | 7.11E-03    | -2.70E-05 | 16.82 | 111.64 | 105 | 10:26:05 AM
-900     | 2.11E-01 | 31.397     | 31           | 7.11E-03    | -2.70E-05 | 29.57 | 108.68 | 105 | 10:33:52 AM
-1100    | 1.62E-01 | 31.348     | 27           | 7.11E-03    | -2.70E-05 | 22.70 | 109.84 | 105 | 10:48:45 AM
-2880    | 1.70E-01 | 31.364     | 27           | 7.11E-03    | -2.70E-05 | 23.82 | 109.60 | 105 |
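
As a sanity check on the table, the NLG column appears consistent with Max / (Unamplified - Dark); e.g. for the Position 0 row:

    # Assumed relation, consistent with the table rows: NLG = Max / (Unamplified - Dark)
    max_amp, unamp, dark = 6.70e-2, 7.11e-3, -2.70e-5
    print(max_amp / (unamp - dark))   # ~9.39, matching the NLG column for Position 0
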
Comments related to this report
sheila.dwyer@LIGO.ORG - 16:35, Monday 07 July 2025 (85592)OpsInfo

I think that what has been happening is that we have slowly drifted away from our phase matching temperature in the OPO as we have moved spots.  Conceptually, we'd like to adjust the translation stage for co-resonance with the temperature servo set to keep the crystal at the phase matching temperature.  In reality, the actual temperature that we get for a particular temperature servo set point is not the same for different crystal positions because of local heating from green absorption.  This last week's experience leaves me more convinced that we can't rely on the translation stage counts for information about where we are in the crystal.

In the end, we did improve the nonlinear gain today, but our squeezing angle servo was not doing well in the previous lock and seemed to be causing large range fluctuations.  We dropped out of observing for a few moments to turn off the servo and scan the squeezing angle, but this didn't restore us to a good range.  Now we have lost lock for some other reason, and I've set the OPO trans power set point down to 80 uW (it was 105), which according to RF6 gives us a similar NLG to what we had before the crystal move.  I've adjusted the OPO temperature for this green power.

For operators: When we relock, we will want to run scan sqz ang again before going to observing, and we might need to run it again later once the IFO has thermalized.

 

H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 11:55, Monday 07 July 2025 (85588)
18:52 UTC lockloss

18:52 UTC lockloss while we were in OMC_WHITENING damping violins

H1 DetChar (DetChar)
sreedevimohan.s@LIGO.ORG - posted 10:22, Monday 07 July 2025 (85586)
DQ Shift report for the week from 2025-06-16 to 2025-06-22

Below is the summary of the DQ shift for the week from 2025-06-16 to 2025-06-22

The full DQ shift report with day-by-day details and plots is available here.

H1 CAL (CAL)
ryan.crouch@LIGO.ORG - posted 18:25, Thursday 03 July 2025 - last comment - 12:37, Monday 07 July 2025(85543)
Thursday calibration measurement

We had been locked for just over 3 hours; the circulating power was ~378.5 kW in each arm, a little under the usual 380 kW.

Broadband:

Start: 2025-07-04 00:50:39

Stop: 2025-07-04 00:55:50

Data: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250704T005039Z.xml

Simulines:

Start: 2025-07-04 00:56:56.978617 UTC // GPS: 1435625834.978617

Stop: 2025-07-04 01:20:13.539672 UTC // GPS: 1435627231.539672

Data:

2025-07-04 01:20:13,381 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,389 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,394 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,398 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20250704T005657Z.hdf5
2025-07-04 01:20:13,403 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20250704T005657Z.hdf5

 

Images attached to this report
Non-image files attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 12:37, Monday 07 July 2025 (85590)

PCAL broadband results attached.

Images attached to this comment