Reports until 12:35, Friday 14 June 2024
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 12:35, Friday 14 June 2024 (78439)
OPS Day Midshift Update

IFO is in NLN and OBSERVING as of 19:26 UTC

Mini-Events Today:

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 12:29, Friday 14 June 2024 (78438)
a2l script test again, slowed down

Since the yaw ASC cross couplings to DARM were high in our last lock, Ibrahim and I took a couple of minutes before going into observing to run the script that TJ edited yesterday (78419) again.  The attached screenshot shows that during our first run the ADS values were still drifting when the script moved on to measuring the next step.  This might have been because the IFO had locked recently, or because the script needs to wait longer.  We added a 30 second wait after the A2L gains are changed, before the values are checked, by adding +30 on line 185; after this it looked like things had settled well.
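As a minimal sketch of the settle-then-check pattern described above (hypothetical channel names and helper function; not the actual a2l script):

import time
from ezca import Ezca   # LIGO EPICS access layer used by our scripts

ezca = Ezca()

def set_a2l_gain_and_check(gain_channel, demod_channel, new_gain, settle=30):
    # write the new A2L gain, then wait for the ADS demod output to settle
    # before reading it back, rather than measuring immediately
    ezca[gain_channel] = new_gain
    time.sleep(settle)   # the extra +30 s added on line 185 of the script
    return ezca[demod_channel]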

We ran the script for all test masses yaw only from the terminal using: python a2l_min_multi.py --quads ETMX ETMY ITMX ITMY --dofs Y 

After the script ran, there were many SDF diffs, which Ibrahim has screenshots of.  It would be good to edit the end of the script so that we get fewer of these diffs each time we run it.  We added the new values to lscparams and loaded ISC_LOCK.

Here are our first and second run results. 

*************************************************
RESULTS (first run)
*************************************************
          ETMX Y
Initial:  4.94
Final:    4.95

          ETMY Y
Initial:  0.94
Final:    1.01

          ITMX Y
Initial:  2.89
Final:    2.89

          ITMY Y
Initial:  -2.51
Final:    -2.39
Diff:     0.12

*************************************************
RESULTS (second run)
*************************************************
          ETMX Y
Initial:  4.95
Final:    4.91
Diff:     -0.04

          ETMY Y
Initial:  1.01
Final:    1.00
Diff:     -0.01

          ITMX Y
Initial:  2.89
Final:    2.85
Diff:     -0.04

          ITMY Y
Initial:  -2.39
Final:    -2.45
Diff:     -0.06

Images attached to this report
H1 SYS
sheila.dwyer@LIGO.ORG - posted 10:54, Friday 14 June 2024 (78432)
fast shutter checks before, during, and after pressure spikes

Here are some plots that show the functioning of the fast shutter before, during, and after the June 7th pressure spikes.  The fast shutter functions the same way before and after the pressure spikes.  However, in the locking attempts with the HAM6 alignment different, the beam going to AS_C was clipped, and this meant that the fast shutter didn't block the beam on AS_A and AS_B as quickly as it normally does.

The first attachment shows a lockloss from June 6th, the lockloss before the pressure spikes started.  The fast channel that records power on the shutter trigger diode (H1:ASC-AS_C_NSUM_OUT_DQ) is calibrated into Watts arriving in HAM6.  Normally these NSUM channels are normalized by the input power scaling, but as the simulink screenshot shows, that is not done in this case.  Using the time at which the power on AS_A is blocked, the shutter triggered when there was 0.733 W arriving in HAM6, and the light on AS_A, which is behind the shutter, is blocked.  There is a bounce, where the beam passes by the shutter again 51.5 ms after it first closes; this bounce lasts 15 ms, and in that time the power into HAM6 rises to 1.3 W.  In total, AS_A (and AS_B) were saturated for 24 ms.  This pattern is consistent in several locklosses from higher powers, with the normal alignment on AS_C, both before and after the pressure spikes in HAM6.
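For illustration only (assumed channel names and a crude threshold; not the analysis code used for these plots), the trigger power and the AS_A blocked time can be pulled out of the fast channels along these lines:

from gwpy.timeseries import TimeSeriesDict

start, end = 1401500000, 1401500010      # placeholder GPS times bracketing a lockloss
chans = ['H1:ASC-AS_C_NSUM_OUT_DQ',      # shutter trigger diode, calibrated to W into HAM6
         'H1:ASC-AS_A_DC_NSUM_OUT_DQ']   # assumed name for the AS_A sum channel
data = TimeSeriesDict.get(chans, start, end)
as_c, as_a = data[chans[0]], data[chans[1]]

# first time the AS_A light is blocked (crude 10% threshold), and the power
# arriving in HAM6 at that moment
t_block = as_a.times[as_a.value < 0.1 * as_a.value.max()][0]
p_trigger = as_c.value_at(t_block)
print(t_block, p_trigger)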

In the first lockloss with a pressure spike, 78308, the interferometer input power was 10 W rather than the usual 60 W, and the alignment into HAM6 was in an unusual state.  The shutter triggered when the input power was 0.3 W according to AS_C, which was clipped at the time and so is underestimating the power into HAM6.  The trend of power on AS_A and AS_B was different this time, with what looks like two bounces and a total of 35 ms during which AS_A was saturated.  The first bounce is about 14 ms after the shutter first triggers, but the beam isn't unblocked enough to saturate the diode; a second bounce saturates the diode 36 ms after the shutter first closed and lasted 26 ms, during which time the power into HAM6 rose to 0.55 W.  The power on AS_C peaked about 250 ms after the shutter triggered, at about 1 W onto AS_C.  Keita is going to look at the energy deposited into HAM6 in typical locklosses, where AS_C is not clipping, as that will be more accurate. The pressure spike shows up on H0:VAC-LY_RT_PT152_MOD2_PRESS_TORR about 1.5 seconds after the power peaks on AS_C, and peaks at 1.1e-7 Torr.

At the next pressure spike, the interferometer was locked with 60 W input power, and AS_A was saturated for 80 ms before the shutter triggered, with no bouncing.  This time the pressure rise was recorded 2 seconds after the lockloss, and 1.8 seconds after the power on AS_C peaked; this was a larger pressure spike than the first, at 1.3e-7 Torr.

The third pressure spike was also a 60 W lockloss, with AS_A saturated for 80 ms and no bounce from the AS shutter visible.  The pressure rise was recorded 2.1 seconds after the lockloss, and was 3.1e-7 Torr.

The final attachment shows the first lockloss after we reverted the alignment on AS_C, when the fast shutter behavior follows the same pattern seen before the pressure spikes happened.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:18, Friday 14 June 2024 (78434)
Fri CP1 Fill

Fri Jun 14 10:09:56 2024 INFO: Fill completed in 9min 53secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 PSL (PSL)
ibrahim.abouelfettouh@LIGO.ORG - posted 07:40, Friday 14 June 2024 (78431)
PSL Weekly Report - Weekly FAMIS 26252

Closes FAMIS 26252


Laser Status:
    NPRO output power is 1.813W (nominal ~2W)
    AMP1 output power is 66.77W (nominal ~70W)
    AMP2 output power is 137.7W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 16 days, 20 hr 27 minutes
    Reflected power = 21.32W
    Transmitted power = 106.0W
    PowerSum = 127.3W

FSS:
    It has been locked for 0 days 18 hr and 48 min
    TPD[V] = 0.8414V

ISS:
    The diffracted power is around 2.7%
    Last saturation event was 0 days 18 hours and 48 minutes ago


Possible Issues:
    PMC reflected power is high

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:32, Friday 14 June 2024 (78430)
OPS Day Shift Start

TITLE: 06/14 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.12 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING (17 hrs 28 mins)

H1 SQZ (SQZ)
corey.gray@LIGO.ORG - posted 05:25, Friday 14 June 2024 - last comment - 10:27, Monday 17 June 2024(78428)
SQZ OPO ISS Hit Its Limit and Took H1 Out Of Observing

Woke up to see that the SQZ_OPO_LR Guardian had the message:

"disabled pump iss after 10 locklosses. Reset SQZ-OPO_ISS_LIMITCOUNT to clear message"

Followed 73053, but did NOT need to touch up the OPO temp (it was already at its max value); then took SQZ Manager back to FRE_DEP_SQZ, and H1 went back to OBSERVING.

Comments related to this report
corey.gray@LIGO.ORG - 05:38, Friday 14 June 2024 (78429)

Received wake-up call at 4:40am PDT (11:40 UTC).  Took a few minutes to wake up, then logged into NoMachine.  Spent some time figuring out the issue, and ultimately did an alog search to find steps to restore SQZ (found an alog by Oli which pointed to 73053).  Once SQZ relocked, H1 was automatically taken back to OBSERVING at 5:17am (12:17 UTC).

camilla.compton@LIGO.ORG - 11:05, Friday 14 June 2024 (78435)

Sheila, Naoki, Camilla. We've adjusted this so it should automatically relock the ISS.

The IFO went out of observing because of the OPO, without the OPO Guardian going down: the OPO stayed locked and just turned its ISS off. We're not sure what the issue with the ISS was; SHG power was fine, as the controlmon was 3.5, which is near the middle of the range. Plot attached. It didn't reset until Corey intervened.

Sheila and I changed the logic in SQZ_OPO_LR's LOCKED_CLF_DUAL state so that now, if the ISS lockloss counter* reaches 10, it will go to LOCKED_CLF_DUAL_NO_ISS, where it turns off the ISS before trying to relock the ISS to get back to LOCKED_CLF_DUAL. This will drop us from observing but should resolve itself in a few minutes. Naoki tested this by changing the power to make the ISS unlock.
The message "disabled pump iss after 10 locklosses. Reset SQZ-OPO_ISS_LIMITCOUNT to clear message." has been removed, and the wiki has been updated. It shouldn't get caught in a loop, since in ENGAGE_PUMP_ISS, if its lockloss counter reaches 20, it will take the OPO to DOWN. A rough sketch of the new logic is below the footnote.

* this isn't really a lockloss counter, more of a count of how many seconds the ISS is saturating.
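A rough sketch of the new state logic described above (assumed structure and setter channel; not the literal SQZ_OPO_LR code):

from guardian import GuardState   # ezca is provided by the guardian runtime

class LOCKED_CLF_DUAL(GuardState):
    def run(self):
        # if the ISS saturation counter reaches 10, jump to a state that turns
        # the ISS off and re-engages it, instead of leaving it disabled
        if ezca['SQZ-OPO_ISS_LIMITCOUNT'] >= 10:
            return 'LOCKED_CLF_DUAL_NO_ISS'
        return True

class LOCKED_CLF_DUAL_NO_ISS(GuardState):
    def main(self):
        ezca['SQZ-OPO_ISS_LOOP_ON'] = 0   # hypothetical switch for turning the ISS off
        ezca['SQZ-OPO_ISS_LIMITCOUNT'] = 0
        return True
    def run(self):
        return 'ENGAGE_PUMP_ISS'   # relock the ISS, then return to LOCKED_CLF_DUAL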

Images attached to this comment
camilla.compton@LIGO.ORG - 15:23, Friday 14 June 2024 (78445)

Worryingly, the squeezing got BETTER while the ISS was unlocked; plot attached of DARM, SQZ BLRMS, and range BLRMS.

In the current lock, the SQZ BLRMS are back to the good values (plot). Why was the ISS injecting noise last night? Has this been a common occurrence? What is a good way of monitoring this? Perhaps coherence between DARM and the ISS; a sketch of such a check is below.
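One hedged way to check this with gwpy (the ISS channel name here is an assumption, and the two channels may need resampling to a common rate first):

from gwpy.timeseries import TimeSeries

start, end = 1402400000, 1402401024   # placeholder GPS span
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
iss  = TimeSeries.get('H1:SQZ-OPO_ISS_CONTROLMON', start, end)   # assumed channel name

coh = darm.coherence(iss, fftlength=8, overlap=4)   # coherence spectrum with 8 s FFTs
plot = coh.plot(xscale='log', ylabel='Coherence')
plot.savefig('darm_iss_coherence.png')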

Images attached to this comment
camilla.compton@LIGO.ORG - 10:27, Monday 17 June 2024 (78488)

Checked on this in 78486. We think that the SQZ OPO temperature or angle wasn't well tuned for the green OPO power at this time; when the OPO ISS was off, the SHG launch power dropped from 28.8mW to 24.5mW (plot). It was just chance that SQZ was happier here.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 01:02, Friday 14 June 2024 (78427)
Ops Eve Shift Summary

TITLE: 06/14 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Quiet shift with H1 locked and observing for the duration. There was one errant Picket Fence trigger (see midshift log) and we rode out a M5.9 EQ from the southern Atlantic, but otherwise uneventful. H1 has now been locked for 11 hours.
LOG: No log for this shift.

H1 General (SEI)
ryan.short@LIGO.ORG - posted 20:49, Thursday 13 June 2024 - last comment - 16:54, Monday 08 July 2024(78426)
Ops Eve Mid Shift Report

State of H1: Observing at 157Mpc, locked for 6.5 hours.

Quiet shift so far except for another errant Picket Fence trigger to EQ mode just like ones seen last night (alog78404) at 02:42 UTC (tagging SEI).

Images attached to this report
Comments related to this report
edgard.bonilla@LIGO.ORG - 13:46, Friday 14 June 2024 (78440)SEI

That's about two triggers in a short time. If the false triggers are an issue, we should consider triggering on picket fence only if there's a Seismon alert.

jim.warner@LIGO.ORG - 10:13, Monday 24 June 2024 (78620)

The picket fence-only transition was commented out last weekend, on the 15th, by Oli. We will now only transition on picket fence signals if there is a live Seismon notification.

edgard.bonilla@LIGO.ORG - 16:54, Monday 08 July 2024 (78946)SEI

Thanks Jim,

I'm back from my vacation and will resume work on the picket fence to see if we can fix these errant triggers this summer.

H1 CAL (CAL, ISC)
francisco.llamas@LIGO.ORG - posted 18:06, Thursday 13 June 2024 (78425)
Drivealign L2L gain changes using kappa_tst script - an attempt

SheilaD, FranciscoL, ThomasS [Remote: LouisD]

Today, we tried to evaluate the effect on H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT of changing H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN. In summary, we were not able to collect enough data, and we want to do this again with a quiet IFO.

We used the script KappaToDrivealign.py to change the value of H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN. The script worked as intended, but it has to round off the value it writes to the drivealign to the digits that EPICS can take. H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed from 184.65 to 186.53576 (1.012%). We are changing the gain to reduce the difference between the model of the actuation function and the measurement of \DeltaL_{res}. We did not change H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN because that would have a net-zero change between measurement and model. We were not able to assess the effect of our change by the end of the commissioning period, so we reverted our changes.
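A hedged sketch of the idea behind KappaToDrivealign.py (the scaling direction and rounding are assumptions; check the actual script before using this):

from ezca import Ezca

ezca = Ezca()

kappa_tst = ezca['CAL-CS_TDEP_KAPPA_TST_OUTPUT']      # measured actuation scale factor
old_gain  = ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN']   # 184.65 at the time
# rescale the drivealign gain to pull kappa_TST back toward 1, rounded to a
# value EPICS can represent (direction convention assumed here)
new_gain = round(old_gain / kappa_tst, 5)
ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = new_gain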

kappa_tst_crossbar.png shows a trend of the channels we are concerned with. The crossbar on KAPPA_TST shows that the value drops shortly after the gain of DRIVEALIGN_L2L was reverted. However, the uncertainty from H1:CAL-CS_TDEP_PCAL_LINE3_UNCERTAINTY increased during this period, which makes these values less reliable. The data between the vertical dashed lines on KAPPA_TST in kappa_tst_deltat.png shows that there was a period of time during which KAPPA_TST was closer to 1, along with a lower uncertainty on PCAL_LINE3.

There are two ways we could repeat this measurement: with 25 minutes of quiet interferometer time, we can monitor the changes of KAPPA_TST after changing the DRIVEALIGN_L2L gain, *or* we can measure simulines before and after changing DRIVEALIGN_L2L. The latter method would better illustrate the stability of the UGF, but it would take around 40 minutes.

To recap, the data we collected today was not reliable enough because it had a high uncertainty, and there was not enough time to clearly see the effect of our changes. We will try again by either monitoring KAPPA_TST or running simulines measurements.

Images attached to this report
H1 General (Lockloss)
camilla.compton@LIGO.ORG - posted 17:12, Thursday 13 June 2024 (76052)
Comparing unknown locklosses in O4 to O3.
Using a lot of assumptions, I estimated that our lockloss rate in O4a is similar to O3a but much better than O3b.
The O4b rate is worse, but we are only 2 months in, so it's not fair to compare.

I've also looked at the number of hours we've lost from unknown locklosses, which make up the majority of our locklosses at around 65%, and how many theoretical GWs we could have missed because of these.

O4b so far: 110 LHO locklosses from observing in O4b [1] x 65% unknown [2] x 2 hours down per lockloss = 143 hours of downtime from unknown locklosses. 925 observing hours in O4b so far (60% of O4b [3], from April 10th).

O4a: 350 LHO locklosses from observing in O4a [1] x 65% unknown [2] x 2 hours down per lockloss = 450 hours of downtime from unknown locklosses. 5700 observing hours in O4a (67% of O4a [3]) with 81 candidates in O4a [4] = 1 candidate / 70 hours, so 450 hours / 70 observing hours per candidate ~ 6.5 GW candidates lost due to unknown locklosses. The Ops team notes that we had considerably more locklosses during the first ~month of O4a while using 75W input power.

O3b: 247 LHO locklosses from observing in O3b [5] x 65% unknown [using same as O4a] x 2 hours down per lockloss = 320 hours of downtime from unknown locklosses. 2790 observing hours in O3b (79% of O3b [3]) with 35 events [6] = 1 event / 80 hours, so 320 hours / 80 observing hours per event ~ 4 events lost due to unknown locklosses.

O3a: 182 LHO locklosses from observing in O3a [5] x 65% unknown [using same as O4a] x 2 hours down per lockloss = 235 hours of downtime from unknown locklosses. 3120 observing hours in O3a (71% of O3a [3]) with 44 events [6] = 1 event / 70 hours, so 235 hours / 70 observing hours per event ~ 3.3 events lost due to unknown locklosses.

We currently don't track downtime per lockloss, but we could think about tracking it; 2 hours is a guess, and it may be as low as 1 hour. The arithmetic above is sketched below for reference.
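A small script reproducing the arithmetic above (all inputs are the estimates quoted in this entry):

# downtime and missed-candidate estimates from unknown locklosses
runs = {
    # run: (observing locklosses, observing hours, candidates/events)
    'O4b': (110, 925, None),   # too early in the run to count candidates
    'O4a': (350, 5700, 81),
    'O3b': (247, 2790, 35),
    'O3a': (182, 3120, 44),
}
frac_unknown = 0.65        # fraction of locklosses with unknown cause
hours_per_lockloss = 2     # guessed downtime per lockloss (could be as low as 1)

for run, (locklosses, obs_hours, events) in runs.items():
    downtime = locklosses * frac_unknown * hours_per_lockloss
    if events is None:
        print(f"{run}: {downtime:.0f} hours of downtime from unknown locklosses")
    else:
        missed = downtime / (obs_hours / events)   # events lost = downtime / (hours per event)
        print(f"{run}: {downtime:.0f} hours of downtime, ~{missed:.1f} events missed")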

[1] https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/ using date and Observing filters.
[2] O4 lockloss Googlesheet
[3] https://ldas-jobs.ligo.caltech.edu/~DetChar/summary/O4a/ or O4b or O3b or O3a
[4] https://gracedb.ligo.org/superevents/public/O4a/
[5] G2201762 O3a_O3b_summary.pdf
[6] https://dcc.ligo.org/LIGO-G2102395
LHO General (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 16:23, Thursday 13 June 2024 (78418)
Ops Day Shift End

TITLE: 06/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Commissioning and calibration this morning, followed by a lock loss. Relocking was not fully automated for two reasons: I touched ETMX Y by 0.3 urad and it caught, then moved PRM to avoid an initial alignment and get PRMI to lock. Locked for 2 hours now.
LOG:

For operators: I put back the MICH Fringes and PRMI flags that Sheila and company set to False (no auto MICH or PRMI) last weekend (alog 78319), so it should try MICH and PRMI on its own again.

LHO General
ryan.short@LIGO.ORG - posted 16:06, Thursday 13 June 2024 (78423)
Ops Eve Shift Start

TITLE: 06/13 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY: H1 has been locked and observing for 2 hours; things are running smoothly after the commissioning window earlier today.

H1 ISC
thomas.shaffer@LIGO.ORG - posted 14:03, Thursday 13 June 2024 - last comment - 11:44, Friday 14 June 2024(78419)
Converted A2L script to run all optics/dofs simultaneously if desired

I took the script that we have been using to run our A2L and converted it to run the measurements for all quads and degrees of freedom at the same time, or any subset, as desired. The new script is (userapps)/isc/h1/scripts/a2l/a2l_min_multi.py. Today Sheila and I tested it for all quads, Y only, with the results below. These values were accepted in SDF, updated in lscparams.py, and ISC_LOCK was reloaded. More details about the script are at the bottom of this log.

Results for ETMX Y
Initial:  4.99
Final:    4.94
Diff:     -0.05

Results for ETMY Y
Initial:  0.86
Final:    0.94
Diff:     0.08

Results for ITMX Y
Initial:  2.93
Final:    2.89
Diff:     -0.04

Results for ITMY Y
Initial:  -2.59
Final:    -2.51
Diff:     0.08

The script we used to use was (userapps)/isc/common/scripts/decoup/a2l_min_generic_LHO.py, which was, I think, originally written by Vlad B. and then changed by Jenne to work for us at LHO. I took this and changed a few things around to call the optimiseDOF function for each desired quad and dof under a ThreadPool class from multiprocess, running all of the measurements simultaneously (a minimal sketch of this structure follows the frequency table below). We had to move or change filters in the H1:ASC-ADS_{PIT,YAW}{bank#}_DEMOD_{SIG, I, Q} banks so that each optic and dof is associated with a particular frequency, and we used ADS banks 6-9. These frequencies needed to be spaced far enough apart but still within our area of interest. We also had to engage notches for all of these potential lines in the H1:SUS-{QUAD}_L3_ISCINF_{P,Y} banks (FM6 & FM7). We also accepted the ADS output matrix values in SDF for these new banks with a gain of 1.

This hasn't been tested for all quads and both P&Y, so far only Y.

optic_dict = {'ETMX': {'P': {'freq': 31.0, 'ads_bank': 6},
                       'Y': {'freq': 31.5, 'ads_bank': 6}},
              'ETMY': {'P': {'freq': 28.0, 'ads_bank': 7},
                       'Y': {'freq': 28.5, 'ads_bank': 7}},
              'ITMX': {'P': {'freq': 26.0, 'ads_bank': 8},
                       'Y': {'freq': 26.5, 'ads_bank': 8}},
              'ITMY': {'P': {'freq': 23.0, 'ads_bank': 9},
                       'Y': {'freq': 23.5, 'ads_bank': 9}},
              }
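A minimal sketch of the parallel structure described above, using the optic_dict from this entry and a simplified optimiseDOF signature (the real script uses the ThreadPool from the multiprocess package and differs in detail):

from multiprocessing.pool import ThreadPool

def optimiseDOF(optic, dof, freq, ads_bank):
    # placeholder for the per-optic A2L measurement: drive the ADS line at
    # 'freq' through bank 'ads_bank', step the A2L gain, and minimise the
    # demodulated error signal
    ...

def run_all(quads=('ETMX', 'ETMY', 'ITMX', 'ITMY'), dofs=('Y',)):
    jobs = [(q, d, optic_dict[q][d]['freq'], optic_dict[q][d]['ads_bank'])
            for q in quads for d in dofs]
    # one thread per quad/dof so all measurements run at the same time
    with ThreadPool(len(jobs)) as pool:
        return pool.starmap(optimiseDOF, jobs)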
Comments related to this report
sheila.dwyer@LIGO.ORG - 11:44, Friday 14 June 2024 (78437)

Here's a screenshot of the ASC coherence after TJ ran this script yesterday; there is still high coherence between yaw ASC and DARM.

 

Images attached to this comment
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 11:27, Thursday 13 June 2024 - last comment - 16:33, Friday 19 July 2024(78413)
DARM offset step

I ran the DARM offset step code starting at:

2024 Jun 13 16:13:20 UTC (GPS 1402330418)

Before recording this time stamp, it records the current PCAL line settings and makes sure notches for the 2 PCAL frequencies are set in the DARM2 filter bank.

It then puts all the PCAL power into these lines at 410.3 and 255Hz (giving them both a height of 4000 counts), and measures the current DARM offset value.

It then steps the DARM offset and waits 120 s at each step; a minimal sketch of this stepping loop is below.
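As a hedged illustration of the stepping loop (assumed offset values and channel access; the actual code is auto_darm_offset_step.py, linked below):

import time
from ezca import Ezca

ezca = Ezca()

offsets = [8, 10, 12, 14, 16]   # hypothetical list of DARM offsets to step through
for offset in offsets:
    ezca['OMC-READOUT_X0_OFFSET'] = offset
    time.sleep(120)             # dwell 120 s at each step so the PCAL lines can be measured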

The script stopped at 2024 Jun 13 16:27:48 UTC (GPS 1402331286).

In the analysis the PCAL lines can be used to calculate how the optical gain changes at each offset.

See the attached traces, where you can see that H1:OMC-READOUT_X0_OFFSET is stepped and the OMC-DCPD_SUM and ASC-AS_C respond to this change.

Watch this space for analysed data.

The script sets all the PCAL settings back to nominal after the test, from the record it took at the start.

The script lives here:

/ligo/gitcommon/labutils/darm_offset_step/auto_darm_offset_step.py

The data lives here:

/ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2024_Jun_13_16_13_20_UTC.txt

 

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 11:10, Friday 14 June 2024 (78436)

See the results in the attached pdf also found at

/ligo/gitcommon/labutils/darm_offset_step/figures/plot_darm_optical_gain_vs_dcpd_sum/all_plots_plot_darm_optical_gain_vs_dcpd_sum_1402330422_380kW__Post_OFI_burn_and_pressure_spikes.pdf

The contrast defect is 0.889 ± 0.019 mW and the true DARM offset zero is at 0.30 counts.

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 16:11, Monday 15 July 2024 (79144)

I plotted the power at the antisymmetric port, as in this entry, to find the loss term between the input to HAM6 and the DCPDs, which in this case is (1/1.652) = 0.605, with 580.3 mW of light at the AS port insensitive to DARM length changes.

Non-image files attached to this comment
victoriaa.xu@LIGO.ORG - 16:33, Friday 19 July 2024 (79251)ISC, SQZ

From Jennie's measurement of 0.88 mW contrast defect, and dcpd_sum of 40mA/resp = 46.6mW, this implies an upper bound on the homodyne readout angle of 8 degrees.

This readout angle can be useful for the noise budget (ifo.Optics.Quadrature.dc=(-8+90)*np.pi/180) and analyzing sqz datasets e.g. May 2024, lho:77710.

 

Table of readout angles "recently":

Era  | Date          | total_dcpd_light (dcpd_sum = 40 mA) | contrast_defect | homodyne_angle | alog
O4a  | Aug 2023      | 46.6 mW                             | 1.63 mW         | 10.7 deg       | lho71913
ER16 | 9 March 2024  | 46.6 mW                             | 2.1 mW          | 12.2 deg       | lho76231
ER16 | 16 March 2024 | 46.6 mW                             | 1.15 mW         | 9.0 deg        | lho77176
O4b  | June 2024     | 46.6 mW                             | 0.88 mW         | 8.0 deg        | lho78413
O4b  | July 2024     | 46.6 mW                             | 1.0 mW          | 8.4 deg        | lho79045

 

##### quick python terminal script to calculate #########

# craig lho:65000
contrast_defect   = 0.88    # mW  # measured on 2024 June 14, lho78413, 0.88 ± 0.019 mW
total_dcpd_light  = 46.6    # mW  # from dcpd_sum = 40mA/(0.8582 A/W) = 46.6 mW
import numpy as np
darm_offset_power = total_dcpd_light - contrast_defect
homodyne_angle_rad = np.arctan2(np.sqrt(contrast_defect), np.sqrt(darm_offset_power))
homodyne_angle_deg = homodyne_angle_rad*180/np.pi # degrees
print(f"homodyne_angle = {homodyne_angle_deg:0.5f} deg\n")


##### To convert between dcpd amps and watts if needed #########

# using the photodetector responsivity (like R = 0.8582 A/W for 1064nm)
from scipy import constants as scc
responsivity = scc.e * (1064e-9) / (scc.c * scc.h)
total_dcpd_light = 40/responsivity  # so dcpd_sum 40mA is 46.6mW
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 15:24, Monday 10 June 2024 - last comment - 16:45, Thursday 13 June 2024(78346)
Report of the Observed Vacuum Pressure Anomalities (06/06/2024 local)

On Friday 06/07/2024 Dave Barker sent an email to the vacuum group noting 3 spikes on the pressure of the main vacuum envelope, I took a closer look at the 3 different events and noticed that the events correlated to the IFO losing lock.  I contacted Dave, and together we contacted the operator, Corey, who made others aware of our findings.

The pressure "spikes" were noted by different components integral to the vacuum envelope.  Gauges noted the sudden rise in pressure, and almost at the same time ion pumps reacted to the rise in pressure.  The outgassing was noted at all stations, very noticeably at the mid stations, and with less effect at both end stations, in both cases with a delay.

The largest spike for all 3 events is noted at the HAM6 gauge; we do not have a gauge at HAM5 or HAM4.  The gauge near HAM6 is the one on the relay tube that joins HAM5/7 (PT152), with the restriction of the relay tube; the next gauge is at BSC2 (PT120), however the spike there is not as "high" as the one noted on the HAM6 gauge.

A list of aLOGs made by others related to the pressure anomalies and their findings:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78308
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78320
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78323
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78343

Note: the oscillation visible on the plot of the outer stations (Mids and Ends) is the diurnal cycle, nominal behavior.

Images attached to this report
Comments related to this report
michael.zucker@LIGO.ORG - 13:13, Tuesday 11 June 2024 (78371)

Of the live working gauges, PT110 appears closest to the source based on time signature. This is on the HAM5-7 relay tube and only indirectly samples HAM6*.  It registered a peak of 8e-6 Torr (corrected from 2e-6) with a decay time of order 17 s (corrected from 30 s). Taking a HAM as the sample volume (optimistic), this indicates at least 0.08 torr-liters (corrected from 0.02) of "something" must have been released at once.  The strong visible signal at the mid- and end-stations suggests it was not entirely water vapor, as this should have been trapped in the CPs. 

For reference, a mirror in a 2e-6 Torr environment intercepts about 1 molecular monolayer per second (an order-of-magnitude check is sketched below). Depending on the sticking fraction, each of these gas pulses could deposit of order 100 monolayers of contaminant on everything. 
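A quick order-of-magnitude check of these numbers (assuming water-like molecules at room temperature and a HAM volume of roughly 10 m^3; these assumptions are mine, not from the entry):

import numpy as np
from scipy import constants as scc

P = 2e-6 * 133.322        # 2e-6 Torr in Pa
T = 295.0                 # K
m = 18 * scc.atomic_mass  # kg, water-like molecule
flux = P / np.sqrt(2 * np.pi * m * scc.k * T)   # impingement rate, molecules / m^2 / s
monolayer = 1e19          # molecules per m^2, order of magnitude
print(flux / monolayer)   # ~1 monolayer per second at 2e-6 Torr

ham_volume_liters = 10e3             # ~10 m^3 taken as the sample volume
print(8e-6 * ham_volume_liters)      # ~0.08 torr-liters for an 8e-6 Torr peak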

The observation that the IFO still works is comforting; maybe we should feel lucky. However it seems critical to understand how (for example) the lock loss energy transient could possibly hit something thermally unstable, and to at least guess what material that might be.  Recall we have previously noted evidence of melted glass on an OMC shroud.

Based on the above order-of-magnitude limits, similar gas pulses far too small to see on pressure gauges could be damaging the optics. 

It would be instructive to compare before/after measures of arm, MC, OMC, etc. losses, to at least bound any acquired absorption.

 

*corrected, thanks Gerardo

jordan.vanosky@LIGO.ORG - 14:03, Tuesday 11 June 2024 (78374)

Corner RGA scans were collected today during maintenance, using the RGA on the output tube. The RGA volume has been open to the main volume since the last pumpdown ~March 2024, but the electronics head/filament was turned off because the small fan on the electronics head was not spinning during observing. We were unable to connect to the HAM6 RGA, through either the RGA computer in the control room or locally at the unit with a laptop. Only the output tube RGA is available at this time.

A small aux cart and turbo was connected to the RGA volume on the output tube, then the RGA volume was isolated from the main volume and the filament turned on. The filament had warmed for ~2 hours prior to the RGA scans being collected.

RGA Model: Pfeiffer PrismaPlus

AMU Range: 0-100

Chamber Pressure: 1.24E-8 Torr on PT131 (BSC3), and 9.54E-8 Torr on PT110 (HAM6). NOTE: Cold cathode gauge interlocks tripped during filming activities in the LVEA today, so the BSC2 pressure was not recorded.

Pumping Conditions: 4x 2500 l/s Ion Pumps and 2x 10^5 l/s cryopumps, HAM6 IP and HAM7/Relay tube

SEM voltage: 1200V

Dwell time: 500ms

Pts/AMU: 10

RGA volume scans were collected with the main volume valve closed, only pumping with the 80 l/s turbo aux cart

Corner scans were collected with the main volume valve open and the aux cart valve closed

Comparison to March 2024 scan provided as well.

RGA is still powered on and connected to main volume with a continuous 0-100AMU scan sweep at 5ms dwell time

Images attached to this comment
richard.mccarthy@LIGO.ORG - 07:34, Wednesday 12 June 2024 (78385)

Richard posting from Robert S.

I had a work permit to remove viewports, so I opened the two viewports on the -Y side of HAM6. I used one of the bright LED arrays at one viewport and looked through the other viewport so everything was well lit.  I looked for any evidence of burned spots, most specifically on the fast shutter or in the area where the fast shutter directs the beam to the cylindrical dump.  I did not see a damaged spot, but there are a lot of blocking components, so that is not surprising. I also looked at OM1, which is right in front of the viewports. I looked for burned spots on the cables etc. but didn't see any. I tried to see if there were any spots on the OMC shroud, or around OM2 and OM3, the portions that I could see. I didn't see anything, but I think it's pretty unlikely that I could have seen something.

jordan.vanosky@LIGO.ORG - 11:48, Wednesday 12 June 2024 (78390)

Repeated the 0-100 AMU scans of the corner today, after the filament had a full 24 hours to warm up. Same scan parameters as the June 11th scans above. Corner pressure 9.36e-9 Torr on PT120.

Dwell time 500 ms

Attached is a comparison to yesterday's scan, and to the March 4th 2024 scan after the corner pumpdown.

There is a significant decrease in AMU 41, 43 and 64 compared to yesterday's scan.

Images attached to this comment
jordan.vanosky@LIGO.ORG - 16:45, Thursday 13 June 2024 (78424)

Raw RGA text files stored on DCC at T2400198

H1 ISC
jennifer.wright@LIGO.ORG - posted 11:52, Monday 10 June 2024 - last comment - 16:19, Thursday 13 June 2024(78074)
Optical Gain Changes before and during O4b

Jennie W, Sheila

 

Sheila wanted me to look at how our optical gain has been doing since the burn on the OFI, which we think happened around the 22nd of April.

Before this happened we made a measurement of the OMC alignment using dithers on the OMC ASC degrees of freedom. We got a set of new alignment offsets for the OMC QPDs that would have increased our optical gain but did not implement these at the time.

After the OFI burn we remeasured these alignment dithers and found a similar set of offsets that would improve our optical gain. Thus I have assumed that we would have achieved this optical gain increase before the OFI burn if we had implemented the offset changes then.

Below is the sequence of events then a table stating our actual or inferred optical gain and the date on which it was measured.

 

Optical gain before vent as tracked by kappa C: 2024/01/10 22:36:26 UTC is 1.0117 +/- 0.0039

Optical gain after vent: 2024/04/14 03:54:38 UTC is 1.0158 +/- 0.0028, optical gain if we had improved OMC alignment = 1.0158 + 0.0189 = 1.0347

SR3 yaw position and SR2 yaw and pitch positions were changed on the 24th April ( starting 17:18:15 UTC time) to gain some of our throughput back.

The OMC QPD offsets were changed on 1st May (18:26:26 UTC time) to improve our optical gain - this improved kappa c by 0.0189.

Optical gain after spot on OFI moved due to OFI damage: 2024-05-23 06:35:25 UTC 1.0051 +/- 0.0035

SR3 pitch and yaw positions and the SR2 pitch and yaw positions were changed on 28th May (starting at 19:14:34 UTC time).

SR3 and SR2 positions moved back to pre-28th values on 30th May (20:51:03 UTC time).

So we should still be able to gain around 0.0296 (~3%) in optical gain, provided we stay at the spot on the OFI we had post 24th April:

SR2 Yaw slider = 2068 uradians
SR2 Pitch slider = -3 uradians

SR3 Yaw slider = 120 uradians

SR3 Pitch slider = 438 uradians
 

Date         | kappa_C | Optical Gain [mA/pm] | Comments
10th January | 1.0117  | 8.46                 | O4a
14th April   | 1.0347  | 8.66                 | Post vent
23rd May     | 1.0051  | 8.41                 | Post OFI 'burn'

 

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 12:05, Monday 10 June 2024 (78350)

I'm not really sure why, but our optical gain is particularly good right now.  And, it's still increasing even though we've been locked for 12+ hours. 

The other times in this plot where the optical gain is this high are around April 5th (well before the OFI incident) and May 30th.

Images attached to this comment
jenne.driggers@LIGO.ORG - 12:23, Monday 10 June 2024 (78353)

Actually, this *might* be related to the AS72 offsets we've got in the guardian now.  Next time we're commissioning, we should re-measure the SR2 and SRM spot positions.

Images attached to this comment
jennifer.wright@LIGO.ORG - 16:19, Thursday 13 June 2024 (78376)

Jennie W, Sheila, Louis

 

I recalculated the optical gain for pre-vent, as I had mixed up the time in PDT with the UTC time for this measurement; it was actually from the 11th January 2024.

Also the value I was using for OMC-DCPD_SUM_OUT/LSC-DARM_IN1 in mA/counts changes over time, and the optical gain reference value in counts/m also changes between before the vent, April, and now.

Louis wrote a script that grabs the correct front-end calibration (when this is updated, the kappa C reference is updated) and the measured transfer from OMC-DCPD_SUM_OUT to the DARM loop error point (DARM_IN1).

Instructions for running the code can be found here.

The code calculates: current optical gain = kappa_C * reference optical gain * 1e-12 / (DARM_IN1 / OMC-DCPD_SUM)

In units: [mA/pm] = [counts/m] * [m/pm] / [counts/mA]
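A hedged sketch of this calculation (the function and argument names are mine, not from Louis's script):

def optical_gain_mA_per_pm(kappa_c, ref_optical_gain_counts_per_m, darm_counts_per_dcpd_mA):
    # [mA/pm] = (dimensionless) * [counts/m] * [m/pm] / [counts/mA]
    return kappa_c * ref_optical_gain_counts_per_m * 1e-12 / darm_counts_per_dcpd_mA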

All of the kappa_Cs in the table below and all of the optical gains were calculated by Louis's script, except for the 14th April.

I calculated the optical gain on the 14th April assuming, as in the entry above, that we would have got a ~0.0189 increase in kappa C if we had implemented (before the 14th April) the OMC alignment offsets that we in fact implemented post OFI burn, on 1st May.

I went to these reference times and measured the coupled cavity pole f_cc and the arm cavity circulating power P_circ.

I also checked the OMC offset and OMC-DCPD_SUM at these times (which shouldn't change).

Date                      | kappa_C | f_c [Hz]  | Optical Gain [mA/pm] | P_Circ X/Y                      | OMC Offset          | OMC-DCPD_SUM       | Comments
11th January 06:36:26 UTC | 1.0122  | 441 +/- 7 | 8.33                 | 368 +/- 0.6 kW                  | 10.9405 +/- 0.0002  | 40 mA +/- 0.005 mA | O4a (time actually 11/01/2024 06:36:26 UTC)
14th April 03:54:54 UTC   | 1.0257  | 391 +/- 7 | 8.70                 | 375 +/- 0.8 kW                  | 10.9402 +/- 0.0002  | 40 mA +/- 0.003 mA | Post vent
23rd May 06:35:41 UTC     | 1.0044  | 440 +/- 6 | 8.52                 | 382 +/- 0.4 kW / 384 +/- 0.4 kW | 10.9378 +/- 0.00002 | 40 mA +/- 0.006 mA | Post OFI 'burn'
12th June 09:43:04 UTC    | 1.0171  | 436 +/- 7 | 8.62                 | 379 +/- 0.6 kW / 380 +/- 0.5 kW | 10.9378 +/- 0.00002 | 40 mA +/- 0.004 mA | Post HAM6 pressure spike

In summary we have (8.62/8.70) *100 = 99.1% of the optical gain that we could have achieved before the OFI burn, and our current optical gain  is (8.62/8.33)*100 = 103.5 % of that before the vent.

We do not appear to be doing worse in optical gain since the vacuum spikes last week.

 
 
Images attached to this comment