H1 General
oli.patane@LIGO.ORG - posted 16:27, Friday 27 September 2024 (80335)
Ops Eve Shift Start

TITLE: 09/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 13mph Gusts, 8mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.52 μm/s
QUICK SUMMARY:

Currently relocking, at LOWNOISE_LENGTH_CONTROL. Once we get to NLN we will run a broadband calibration sweep as part of our ongoing effort to update and fix the calibration.

LHO General
corey.gray@LIGO.ORG - posted 16:02, Friday 27 September 2024 (80333)
Fri Ops DAY Shift Summary

TITLE: 09/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR:  n/a
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 10mph Gusts, 4mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.54 μm/s
QUICK SUMMARY:

H1 CAL (CAL)
corey.gray@LIGO.ORG - posted 14:03, Friday 27 September 2024 (80332)
H1 Calibration Measurement (simulines only)

Ran a simulines calibration measurement for Louis and then stayed out of Observing a little longer while they worked on the calibration results.

Measurement NOTES:

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 13:39, Friday 27 September 2024 (80331)
Ops Day Mid-shift Report

There was some commissioning time this morning to get the SRCL offset we needed before running a much-needed calibration measurement. We then lost lock while running simulines. The IFO relocked automatically and we are now running simulines again.

H1 CAL
thomas.shaffer@LIGO.ORG - posted 11:13, Friday 27 September 2024 (80330)
Calibration sweep started but we lost lock as simulines started

Once Sheila pushed a new SRC detuning, I ran a broadband measurement, then started simulines. We lost lock a little over a minute after starting, not sure of the cause yet.

Simulines start:

PDT: 2024-09-27 11:08:44.473513 PDT
UTC: 2024-09-27 18:08:44.473513 UTC
GPS: 1411495742.473513
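
For reference, the UTC/GPS stamps above are related by the usual conversion, e.g. with gwpy (a minimal sketch, assuming gwpy is installed):

from gwpy.time import to_gps, from_gps

print(to_gps('2024-09-27 18:08:44.473513'))  # -> 1411495742.473513
print(from_gps(1411495742))                  # -> 2024-09-27 18:08:44 UTC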
 

Lock loss at 18:10 UTC

 

Images attached to this report
H1 General (Lockloss, PSL)
thomas.shaffer@LIGO.ORG - posted 10:17, Friday 27 September 2024 - last comment - 10:42, Friday 27 September 2024(80328)
Lock loss early this morning seems to be FSS related

Lockloss 1411472854

The lock loss tool tagged FSS oscillation. Trending H1:PSL-FSS_FAST_MON_OUT_DQ shows that something started to get noisy 35 seconds before the lock loss. I didn't see any other strangeness in our usual LSC, ASC, ETMX, DARM signals.
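
A minimal sketch (assuming gwpy and NDS2 access) of trending that channel around the lockloss to look for the noise ramp-up; the 10-100 Hz band below is an illustrative choice, not necessarily the band used for the attached plot:

from gwpy.timeseries import TimeSeries

t0 = 1411472854            # lockloss GPS time from this entry
data = TimeSeries.get('H1:PSL-FSS_FAST_MON_OUT_DQ', t0 - 120, t0 + 5)

# band-limited RMS in 1 s strides; a step up roughly 35 s before t0 would
# match the "started to get noisy" observation
blrms = data.bandpass(10, 100).rms(1)
print(blrms.times[blrms.value.argmax()], blrms.max())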

Images attached to this report
Comments related to this report
thomas.shaffer@LIGO.ORG - 10:42, Friday 27 September 2024 (80329)

Unlike what Ryan C saw on Sept 21, the IMC refl servo splitmon and tidal seem stable when the FSS starts to get noisy and the PC mon channel starts to drift.

Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 09:17, Friday 27 September 2024 - last comment - 14:56, Friday 27 September 2024(80318)
SRCL detuning with FIS again

Took another data set of FIS (frequency-independent squeezing) with different SRCL offsets, to try to set the SRCL detuning for the calibration measurement, similar to 79903.

 

Images attached to this report
Comments related to this report
sheila.dwyer@LIGO.ORG - 14:56, Friday 27 September 2024 (80334)

Here are some plots of this code, made by borrowing heavily from Vicky's repo here and from the Noise budget repo.  I will put this into a repo soon, perhaps here.

The first plot shows the spectra with different SRCL offsets in, always with the squeezing angle optimized for kHz squeezing.  The no-squeezing model isn't well verified; I've used an SRCL detuning of 0, which we know isn't correct.  We subtract this no-squeezing model from the no-squeezing measurement to estimate the non-quantum noise, shown in gray here.  The SRC detuning doesn't change this estimate much without squeezing injected.

The next plot is a re-creation of Vicky's brontosaurus plot, as in 79951.  The non-quantum noise estimate is subtracted from each of the FIS curves, which are then plotted in dB relative to the no-squeezing model.  Each of those shows a squeezing data set with a model, where I adjusted the SRCL offset in the model by hand based on this plot.  The subtraction is needed to make the impact of the SRCL offset clear.
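
A minimal numpy sketch of that subtraction (array names like nosqz_meas, nosqz_model, and fis_meas are placeholders for ASDs on a common frequency vector):

import numpy as np

# non-quantum noise estimate: quadrature-subtract the no-squeezing quantum
# model from the no-squeezing measurement
non_quantum = np.sqrt(np.maximum(nosqz_meas**2 - nosqz_model**2, 0))

def quantum_db(fis_meas):
    # remove the non-quantum part from a FIS spectrum, then express the
    # remaining (quantum) noise in dB relative to the no-squeezing model
    quantum = np.sqrt(np.maximum(fis_meas**2 - non_quantum**2, 0))
    return 20 * np.log10(quantum / nosqz_model)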

The final plot shows the linear fit of the SRC detuning to SRCL offset, which gives us the SRCL offset we should use to move toward 0 detuning (-191 counts).
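
The fit itself is a minimal numpy step (offsets and detunings below are placeholders for the per-dataset SRCL offsets and fitted SRC detunings):

import numpy as np

slope, intercept = np.polyfit(offsets, detunings, 1)
offset_for_zero_detuning = -intercept / slope   # quoted result: about -191 counts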

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 09:06, Friday 27 September 2024 (80327)
Fri CP1 Fill

Fri Sep 27 08:14:05 2024 INFO: Fill completed in 14min 1secs

Images attached to this report
LHO General
thomas.shaffer@LIGO.ORG - posted 07:56, Friday 27 September 2024 (80326)
Ops Day Shift Start

TITLE: 09/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.44 μm/s
QUICK SUMMARY: Locked for one and a half hours, some shorter locks overnight as well. Verbal didn't mention any PIs for the last lock loss, but the length of the lock is suspicious. More investigation needed. Our range isn't optimal and the squeezing looks poor in the higher frequencies based on the nuc33 FOM.

 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:03, Thursday 26 September 2024 (80325)
OPS Eve Shift Summary

TITLE: 09/27 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is LOCKING at ENGAGE_ASC_FOR_FULL_IFO. Fully auto so far...

Shift was mostly quiet. Despite locking issues today, the IFO locked pretty automatically following Tony's initial alignment. We did lose lock during LOWNOISE_ESD_ETMX (558), though guardian brought it to NLN shortly after and we were in observing swiftly.

In terms of PI modes, there was one very harsh ringup 23 minutes into NLN (or 34 mins after MAX_POWER). This gave three verbal PI 24 alarms, but the damping, even at maximum, was able to bring it down. No other ringups.

There was one lockloss at 04:08 UTC, probably attributable to the environment: a combination of rising secondary microseism and winds over 35mph - alog 80323.

TCS work from today left 2 SDF diffs (screenshot attached).
LOG:

Start Time System Name Location Lazer_Haz Task Time End
23:39 PCAL Tony, Neil EX N PCAL Computer Acquisition 23:51
00:21 PEM Robert Y-arm N Looking for parts 00:21

 

Images attached to this report
H1 SUS (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:36, Thursday 26 September 2024 (80324)
Weekly In-Lock SUS Charge Measurement - FAMIS 28372

Weekly In-Lock SUS Charge Measurement - Closes FAMIS 28372

Observations:

Images attached to this report
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:15, Thursday 26 September 2024 (80323)
Lockloss 04:08 UTC

Lockloss after a 3hr lock. A few details:

From the above, I think this lockloss may be environmentally caused rather than due to the PI issues we have been experiencing. Now having trouble locking ALS due to high winds.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:30, Thursday 26 September 2024 (80322)
Thursday OPS Day Shift End

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

OMC issues
EtherCAT issues

NLN reached at 20:27 UTC!

CAL and PEM teams were able to get some commissioning done in the latter half of the day.... but....
PI 24 ring up.... -> Lockloss before Sheila could finish her work. :(
ETMY ring heater turned on, so ETMY will start to drift.

 

LOG:

Start Time System Name Location Lazer_Haz Task Time End
23:58 SAF H1 LVEA YES LVEA is laser HAZARD Open ViewPort on HAM3! 18:24
15:02 FAC Christina Recycling bins N Using the fork lift to move some heavy items into recycling. 16:02
16:08 PEM Robert LVEA Yes Removed viewport and Setting up for commissioning 17:08
17:46 Fac Karen Optics lab & Vac Prep N Technical cleaning 18:31
18:35 EE Sigg & Fernando LVEA Yes Checking pinout for EtherCAT 20:09
19:58 EE Sigg, Fil HAM7 racks Yes Removing and replacing Whitening boards 19:38
20:00 EE Fernando & Fil LVEA Yes OMC DCPD troubleshooting 20:09
20:53 EE Fil MidY N Gathering parts to fix the spare OMC Whitening Chassis 21:53
21:12 PEM Robert LVEA YES HAM2 PEM Injections 22:12
22:47 PEM Robert LVEA Yes Putting the Viewport back on and turn PEM equipment off. 23:47

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:10, Thursday 26 September 2024 (80321)
OPS Eve Shift Start

TITLE: 09/26 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 10mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

IFO is in MAINTENANCE (Calibration and OMC Whitening Chassis work) but now just trying to get locked.

We're in initial alignment now and prepping to lock for observing!

 

H1 TCS
sheila.dwyer@LIGO.ORG - posted 15:37, Thursday 26 September 2024 (80320)
increased ETMY ring heater

To try to avoid the 10kHz PI (MODE 24), I've increased the power on both segments of the ETMY ring heater from 1W to 1.1W.  I did this after the calibration measurements, when the PI was already ringing up, and we've now lost lock from the PI.  (For recent history, see 80299.)

H1 AOS
robert.schofield@LIGO.ORG - posted 15:20, Thursday 26 September 2024 (80319)
PR2 scraper baffle reflection at 19mW, was 17mW

I measured the power of the beam coming out of the HAM3 illuminator viewport and found it to be 19mW, compared to the 17mW measured in alog 78878. The beam is a reflection off the scraper baffle of the portion of the PR2-to-PR3 beam that is clipped by the baffle's aperture. We had minimized it from 47mW to 17mW for the referenced alog, and wanted to see if it was clipping more - not much.

H1 CAL
anthony.sanchez@LIGO.ORG - posted 15:05, Thursday 26 September 2024 (80317)
Calibration Sweep Complete

pydarm measure --run-headless bb
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml saved
diag> quit
EXIT KERNEL

2024-09-26 14:28:43,409 bb measurement complete.
2024-09-26 14:28:43,409 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240926T212332Z.xml
2024-09-26 14:28:43,410 all measurements complete.
anthony.sanchez@cdsws29:

 

21:30 UTC gpstime;python /ligo/groups/cal/src/simulines/simulines/simuLines.py -i /ligo/groups/cal/H1/simulines_settings/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1_asc.ini;gpstime
PDT: 2024-09-26 14:30:05.062848 PDT
UTC: 2024-09-26 21:30:05.062848 UTC
GPS: 1411421423.062848

2024-09-26 21:53:35,296 | INFO | Finished gathering data. Data ends at 1411422832.0
2024-09-26 21:53:36,077 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2024-09-26 21:53:36,077 | INFO | Commencing data processing.
2024-09-26 21:53:36,077 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.

2024-09-26 21:53:36,077 | INFO | It is SAFE TO RETURN TO OBSERVING now, whilst data is processed.
2024-09-26 21:53:36,077 | INFO | Commencing data processing.
2024-09-26 21:53:36,077 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-09-26 22:01:43,741 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,754 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,764 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,774 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240926T213006Z.hdf5
2024-09-26 22:01:43,784 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240926T213006Z.hdf5
ICE default IO error handler doing an exit(), pid = 2864258, errno = 32
PDT: 2024-09-26 15:01:43.864814 PDT
UTC: 2024-09-26 22:01:43.864814 UTC
GPS: 1411423321.864814
anthony.sanchez@cdsws29:

Images attached to this report
H1 ISC (CDS, ISC)
keita.kawabe@LIGO.ORG - posted 12:24, Thursday 26 September 2024 - last comment - 14:54, Thursday 26 September 2024(80309)
OMC whitening switching issue (Tony, TJ, JoeB, Sheila, Fil, Patrick, Daniel, Keita among others)

This morning Tony and TJ had a hard time locking the OMC.

We've found that the OMC DCPD A and B outputs are very asymmetric only when there was a fast transient (1st attachment), but not when the OMC length was slowly brought close to resonance (2nd attachment), which suggested a whitening problem.

The transfer function from OMC DCPD_A to B suggested that the switchable hardware whitening was ON for DCPD_A and OFF for B, when it was supposed to be OFF for both. The 3rd attachment shows the transfer function from DCPD_A to B, and the 4th attachment shows the anti-whitening filter shape.
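
A minimal sketch of how such an A-to-B transfer function can be estimated offline (assuming gwpy/scipy and NDS access; the channel names and GPS span below are illustrative): if both whitening states match, the magnitude is flat, while a single stuck stage shows up as the whitening shape.

from gwpy.timeseries import TimeSeriesDict
from scipy.signal import csd, welch
import numpy as np

chans = ['H1:OMC-DCPD_A_OUT_DQ', 'H1:OMC-DCPD_B_OUT_DQ']   # assumed channel names
data = TimeSeriesDict.get(chans, 1411400000, 1411400060)    # illustrative GPS span
a = data[chans[0]].value
b = data[chans[1]].value
fs = data[chans[0]].sample_rate.value

f, Pab = csd(a, b, fs=fs, nperseg=int(4 * fs))
_, Paa = welch(a, fs=fs, nperseg=int(4 * fs))
tf = Pab / Paa                                              # A -> B transfer function estimate
print(np.abs(tf[:10]))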

Switching ON the anti-whitening only for DCPD_A made the frequency response flat. Trying to switch the analog whitening ON and OFF by toggling H1:OMC-DCPD_A_GAINTOGGLE didn't change the hardware whitening status; it's totally stuck.
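
A minimal pyepics sketch of that toggle-and-check (assuming pyepics; the readback only reflects the software request, so the hardware state still has to be confirmed from the A-to-B transfer function, which is what appeared stuck here):

from epics import caget, caput
import time

chan = 'H1:OMC-DCPD_A_GAINTOGGLE'
before = caget(chan)
caput(chan, 0 if before else 1)   # flip the request (assumes a 0/1 toggle value)
time.sleep(1)
print(chan, ':', before, '->', caget(chan))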

We tried to lock the IFO using only DCPD_B, but the IFO unlocked for some reason.

After the IFO lost lock, people on the floor found that the problem is in the whitening chassis, not the BIO. It's not clear if we can fix the board in the chassis (which is preferable) or have to swap the whitening chassis (less preferable, as the calibration group would need to measure the analog TF and generate a compensation filter).

We'll update as we make progress.

 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 12:57, Thursday 26 September 2024 (80310)

Fernando, Fil, Daniel

DCPD whitening chassis fixed.

We diagnosed a broken photocoupler in the DCPD whitening chassis. Since the photocoupler is located on the front interface board, we opted to swap this board with the one from the spare. This means the whitening transfer function should not have changed. Since we switched the front interface board together with the front panel, the serial number of the chassis has (temporarily) changed to that of the spare.

The in-vacuum DCPD amplifiers were powered off for 30-60 minutes while the repair took place, so they will need some time to thermalize.

filiberto.clara@LIGO.ORG - 13:32, Thursday 26 September 2024 (80312)

Unit installed is S2300003. The front panel and front interface board were removed/borrowed from S2300004.

louis.dartez@LIGO.ORG - 14:54, Thursday 26 September 2024 (80316)CAL
N.B. S2300004 and S2300002 have been characterized and fit already. See LHO:71763 and LHO:78072 for the S2300004 and S2300002 zpk fits, respectively.

Should the OMC DCPD whitening chassis need to be fully swapped, we already have the information we need to install the corresponding compensation filters in the front end and in the pyDARM model to accommodate that change. This, of course, relies on the expectation that the electronics' response has not materially changed in the interim.
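
As a minimal sketch of what installing such a compensation filter amounts to (the zpk values below are placeholders, not the fits from LHO:71763 or LHO:78072): invert the fitted analog whitening response and discretize it for the front-end rate with a bilinear transform.

import numpy as np
from scipy import signal

fs = 16384.0                              # front-end sample rate
# hypothetical analog whitening fit: one zero at 1 Hz, one pole at 10 Hz
zeros = 2 * np.pi * np.array([-1.0])
poles = 2 * np.pi * np.array([-10.0])
gain = 10.0

# anti-whitening (compensation) is the inverse: swap zeros and poles, invert the gain
zd, pd, kd = signal.bilinear_zpk(poles, zeros, 1.0 / gain, fs)
sos = signal.zpk2sos(zd, pd, kd)          # second-order sections for the digital filter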
