H1 General
anthony.sanchez@LIGO.ORG - posted 16:25, Wednesday 25 September 2024 (80298)
Wednesday Ops Shift End

TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:

The day was full of short locks that all ended with PI ring-ups.

Lockloss from a PI 24 ring-up:
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1411331841
Screenshot attached. Apparently the Lockloss page has no locklosses attributed to PI ring-ups.

Relocking:
I did an Initial Alignment.
Wind is gusting above 40 mph.
Holding ISC_LOCK in IDLE until the tumbleweeds land.

LOG:

Start | System | Name | Location | Laser Haz | Task | End
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | 18:24
15:19 | PCAL | Karen & Francisco | PCAL lab | Yes | Technical cleaning and storing parts | 15:37
16:45 | FAC | Eric | EX & EY | N | Turning down the bug/bunny alarm volume at the chiller yards | 17:26
17:28 | PEM | Robert | SR10 & Alabama | N | Checking along SR10 & Alabama for road noise | 21:28
21:02 | VAC | Travis | LVEA HAM6 | Yes | Checking a feedthrough | 21:22
Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:19, Wednesday 25 September 2024 (80297)
OPS Eve Shift Start

TITLE: 09/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 39mph Gusts, 25mph 3min avg
    Primary useism: 0.07 μm/s
    Secondary useism: 0.32 μm/s
QUICK SUMMARY:

IFO is in DOWN due to ENVIRONMENT.

High winds with gusts around 40 mph. Will re-attempt lock acquisition when wind speeds are lower.

H1 ISC
francisco.llamas@LIGO.ORG - posted 15:42, Wednesday 25 September 2024 (80267)
Sensing function from AS_A Offset off

Sheila, Louis, Francisco

SUMMARY: On Thursday, September 19 2024, the low-frequency error of the sensing function magnitude decreased (see figure 1) when H1:ASC-AS_A_DC_YAW_OFFSET was turned off.

We turned off H1:ASC-AS_A_DC_YAW_OFFSET prior to making a calibration measurement (LHO:80180), given our observations from LHO:80063. Figure 3 shows a change in magnitude of ~5% from turning off AS_A_Y, in comparison to measurement 20240914T183802Z ("Saturday cal. meas.", the most recent measurement prior to our change), and figure 2 confirms an uncertainty of less than 3% for the frequencies of interest (see table). The calibration measurements used in this log were done with a thermalized interferometer -- see LHO:79691, LHO:80057, LHO:80061, LHO:80093, LHO:80159, LHO:80180.

I'm adding figures 4, 5, and 6 to summarize the coupling between DARM and DHARD_Y in LHO:80063. In figure 4, the trace where AS_A_DC_YAW_OFFSET = 0 (red trace, bottom plot) shows minimal coupling with DARM (top plot). The frequencies at which the DHARD_Y PSD magnitude was minimized match the injections in the following table:

Frequency (Hz) | Channel name
15.6 | H1:SUS-ETMY_L1_CAL_LINE_FREQ
16.4 | H1:SUS-ETMY_L2_CAL_LINE_FREQ
17.1 | H1:CAL-PCALY_PCALOSC1_OSC_FREQ
17.6 | H1:SUS-ETMY_L3_CAL_LINE_FREQ

which indicates a coupling from DARM to DHARD_Y.

To contextualize, we are interested in understanding (and suppressing) the parasitic cross-coupling between the ASC and DARM loops (similar to LHO:50498 and, more recently, LHO:78606). The cross-coupling can be coming from DARM and affecting ASC, or coming from ASC and affecting DARM. We are using the sensing function, instead of the actuation function, to rule out other coupling mechanisms. A technical note describing the parasitic cross-coupling is in development.

DESCRIPTION OF THE FIGURES

Fig 1 - out_10-30Hz: Sensing TF ranging from 10 to 30 Hz for different calibration measurement reports.
Fig 2 - unc_10-30Hz: Relative uncertainty of each TF from figure 1.
Fig 3 - ratio_10-30Hz: Change in magnitude of each TF from figure 1, using report from 20240919T153719Z as reference.
Fig 4 - darm_and_dhardy_psd: PSDs of (top) LSC-DARM_IN1_DQ and (bottom) DHARD_Y_OUT_DQ for each value of ASC-AS_A_DC_YAW_OFFSET (AS_A_Y for reference).
Fig 5 - dhardy_psd_and_coh: PSD of DHARD_Y and coherence of DHARD_Y_OUT_DQ/LSC-DARM_IN1_DQ for each value of AS_A_Y. Note the coherence when AS_A_Y = 0 (red trace).
Fig 6 - dhardy_tf: TF of DHARD_Y_OUT_DQ/LSC-DARM_IN1_DQ. Even though the phase changes substantially at AS_A_Y = 0, the coherence in figure 5 indicates that the coupling between DHARD_Y and DARM is low.
Figs 7, 8, and 9 are, respectively, full-range references of figures 1, 2, and 3.
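
For reference, a minimal gwpy sketch of how PSDs and a coherence like those in figures 4-6 can be computed; the GPS span and FFT parameters below are placeholders, not the settings used for the figures:

from gwpy.timeseries import TimeSeriesDict

# Hypothetical stretch of data with the AS_A_Y offset off
start, end = 1410000000, 1410000600
data = TimeSeriesDict.get(
    ['H1:LSC-DARM_IN1_DQ', 'H1:ASC-DHARD_Y_OUT_DQ'], start, end)
darm = data['H1:LSC-DARM_IN1_DQ']
dhard = data['H1:ASC-DHARD_Y_OUT_DQ']

psd_darm = darm.psd(fftlength=64, overlap=32)    # cf. fig 4, top
psd_dhard = dhard.psd(fftlength=64, overlap=32)  # cf. fig 4, bottom
# cf. fig 5: low coherence at the calibration-line frequencies
# indicates weak DHARD_Y/DARM coupling
coh = dhard.coherence(darm, fftlength=64, overlap=32)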

Images attached to this report
H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:40, Wednesday 25 September 2024 (80295)
changing input power to 61W

Oli is preparing an alog about the history of our PIs since the OFI vent. 

Since we have lost lock to PI 24 more and more frequently over the last week, and all of today's locks have been short, I've changed the input power to 61 W, which did help with this PI when we did it in early September.

H1 General
anthony.sanchez@LIGO.ORG - posted 12:43, Wednesday 25 September 2024 (80293)
Wednesday mid day report.

TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 18mph Gusts, 11mph 3min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.31 μm/s
QUICK SUMMARY:

Robert is out at SR10 & Alabama, along the easement on the shoulder. We should make sure he's still OK...

After this lockloss we started relocking again, and the DRMI signals looked terrible and were pulling away.
Sheila had us stop in OFFLOAD_DRMI_ASC. I touched up SRM yaw and turned off the following (see the sketch below):
H1:ASC-SRC1_P_SW1
H1:ASC-SRC1_Y_SW1
H1:ASC-SRC2_P_SW1
H1:ASC-SRC2_Y_SW1
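
For the record, a hedged sketch of how those switches could be flipped from a Guardian/ezca context; this assumes the SW1 channels above map to the INPUT switches of these filter modules (the actual change may have been made from MEDM):

from ezca import Ezca

ezca = Ezca()  # picks up the H1 prefix from the environment
for fm in ['ASC-SRC1_P', 'ASC-SRC1_Y', 'ASC-SRC2_P', 'ASC-SRC2_Y']:
    ezca.switch(fm, 'INPUT', 'OFF')  # disengage each loop's input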

NLN Reached at 19:06 UTC
OBSERVING Reached at 19:08:35 UTC

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 11:07, Wednesday 25 September 2024 (80292)
Lockloss

Lockloss @ 09/25 17:52 UTC from a PI24 ring-up. Tony valiantly tried to damp it but was unsuccessful.

LHO VE
david.barker@LIGO.ORG - posted 08:18, Wednesday 25 September 2024 (80289)
Wed CP1 Fill

Wed Sep 25 08:14:14 2024 INFO: Fill completed in 14min 10secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 08:05, Wednesday 25 September 2024 (80288)
Wednesday Morning Ops Shift Start

TITLE: 09/25 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 8mph Gusts, 5mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:

H1 was unlocked and stuck in locking green arms for 3 hours.
I'm not sure why yet, but I will be trying to lock and will probably find out.
I've requested an initial alignment since there was an earthquake last night.
Both ALSX & ALSY locked easily enough, 61% for X and 95% for Y. ALSX DOF3 just nosedived for some reason, though.

But eventually we got to PRC aligning without intervention, so I guess it's all working well.

Images attached to this report
H1 General
corey.gray@LIGO.ORG - posted 00:17, Wednesday 25 September 2024 (80286)
Took H1 To Observing & Then Lockloss

I happened to be unable to sleep, checked on H1, and saw that it looked like a rough day after maintenance. I also saw that H1 went immediately into the NLN_CAL_MEAS state and GRD IFO was set to COMMISSIONING; H1 was also in Managed Operation.

I didn't see any notes in the alog to stay out of observing for the night--Ibrahim noted there were some calibration issues. 

So I took H1 to Observing. (There was an SDF diff for ISC EY; accepted, and snapshot attached.)

But then there was also an EQ alert from Guatemala, which took us down.

I'm still unsure of the plan for the night. I am setting ISC_LOCK for NLN, leaving H1 in Automatic Operation, and taking GRD IFO back to Observing (vs. Commissioning). Going to try to sleep.

Images attached to this report
LHO General (CAL)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Tuesday 24 September 2024 (80282)
OPS Eve Shift Summary

TITLE: 09/25 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Calibration
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

IFO is in MAINTENANCE and RELOCKING at FIND_IR

Shift was dominated by locking and calibration issues. Fought ALS to get to NLN, then lost lock right before shift end during impromptu CAL maintenance (though the lockloss seems to be unrelated to the maintenance).

ALS Issues:

ALSY is still having locking issues. Thankfully, Keita was on-site when this happened and was able to help me out of it by shifting an offset (COMM Offset (V)) in ALS_CUST_CM_REFL - screenshot attached. This visibly allowed ALSY to catch, but the offset had to be moved to its maximum slider position (-10V). The issue wasn't low counts, as we had had earlier today, but that ALSY was not locking at the maximum of its counts (0.83 cts, but locking closer to 0.6 and sometimes at 0.2, which is likely a different higher-order mode).

Weirder yet, after fixing this neither PRMI nor DRMI was able to lock, which prompted me to start an initial alignment. The initial alignment reset the offset from -10V back to the nominal -0.025V, BUT ALS was indeed able to lock, and quite quickly. In fact, initial alignment was fully automated, and so was normal locking, all the way up to NLN!

I'll keep the offset at its normal -0.025V instead of Keita's -10V fix since we didn't end up needing it.

Other than that, Keita went into commissioning for a very short <2 min to make/revert an ALSY change, which I believe is related to alog 80280. I've attached the accepted SDF diff.

Calibration Issues:

Louis (and later, Joe B) got in contact with the LHO control room while I was powering up and asked to reset the calibration due to a very off/non-nominal calibration. Keita approved 1 hr of CALIBRATION_MAINTENANCE. This was done, but in the process we had to reload GDS and reset DTT, neither of which caused any issues. We attempted to run a broadband measurement (cal monitor attached), but weirdly enough, the CAL lines did not disappear when I entered NLN_CAL_MEAS. By this time I had already run the BB, but by instruction from calibration I was told to cancel it. Cancelling it via Ctrl-C and closing the terminal didn't work, so we tried to wait until the measurement was done. This did not work either, and it went over the 5 min time by another 5 minutes. At this point, we went the awg line-clear route and forced the lines to close. So now we are back to the earlier calibration state, which has a bad 15% error. Louis and Joe B are attempting to revert it to a better state, which has <10% error. They were having difficulties with GDS, so they had to reload coefficients again, resulting in downed DTT again (for 12 mins). This happened successfully! Riding on this great luck, we lost lock 5 mins later...

Other:


LOG:

None

Images attached to this report
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 21:49, Tuesday 24 September 2024 (80284)
Lockloss 04:45 UTC

Lockloss during the calibration broadband sweep during the impromptu eve-shift cal maintenance; the sweep is unlikely to be the cause. IY saturated.

H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:28, Tuesday 24 September 2024 - last comment - 20:00, Friday 27 September 2024(80281)
Lockloss 01:16 UTC

Lockloss during post-event standdown. Unknown cause, not environmental. Relocking now but having difficulties with both ALSY and ALSX.

Comments related to this report
oli.patane@LIGO.ORG - 20:00, Friday 27 September 2024 (80336)

Weirdly-shaped glitch seen in ASC, LSC/quads

Images attached to this comment
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 17:58, Tuesday 24 September 2024 (80280)
ALSY investigation (not finished)

Since ALS (both X and Y) have been sub-optimal to say the least forever, and since ALS difficulty is apparently a large part of our relocking time according to people, I started investigating ALSY.

I didn't have time to finish anything and in the end I reverted everything back for now. I'll make similar measurements again next Tuesday for ALSX.

H1 CDS (CAL, CDS)
erik.vonreis@LIGO.ORG - posted 16:52, Tuesday 24 September 2024 (80279)
H1:CAL-CALIB_REPORT_ID_INT momentarily bumped

At 23:48:02 UTC, I incremented H1:CAL-CALIB_REPORT_ID_INT for one second, then decremented it back to its original value, to force a save of the current CALIB_REPORT hash values, which had been blocked by a full drive.
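
A minimal sketch of that bump-and-restore, assuming pyepics channel access (the actual operation may have been performed differently):

import time
import epics

ch = 'H1:CAL-CALIB_REPORT_ID_INT'
val = epics.caget(ch)      # read the current report ID
epics.caput(ch, val + 1)   # increment to trigger the save
time.sleep(1)
epics.caput(ch, val)       # restore the original value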

H1 General
ryan.crouch@LIGO.ORG - posted 16:31, Tuesday 24 September 2024 (80269)
OPS Tuesday day shift summary

TITLE: 09/24 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147 Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: The LVEA remains in laser hazard. Maintenance ended a little late due to SR3 issues, and locking was also a small struggle as ALS-Y wouldn't lock because it had low IR and green power. After the ALS-Y issue was solved, relocking has been great.
LOG:

Start | System | Name | Location | Laser Haz | Task | End
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | 18:24
14:34 | FAC | Chris | Weld shop | N | Housekeeping, move forklift | 17:06
14:54 | FAC | Kim | EndX | Y | Tech clean | 16:27
14:54 | FAC | Karen | EndY | N | Tech clean | 15:53
14:58 | FAC | Nelly | HAM shack | N | Tech clean | 15:59
15:00 | CAL | Tony | PCAL lab | Y | PCAL work | 15:27
15:04 | CAL | Dripta | PCAL lab | Y | Check with Tony | 15:27
15:05 | ALS | Keita | EndY | N | Mode matching investigation | 19:38
15:19 | SUS | Jason, Oli | LVEA | Y | SR3 oplev centering, dead laser :( and translation stage | 20:03
15:19 | FAC | Eric, Tyler | Fire pump | N | Fire pump testing | 16:04
15:34 | CAL | Tony, Dripta | EndX | Y | PCAL measurement | 19:09
15:37 | FAC | Betsy | LVEA | Y | Talk to Jason | 15:48
15:47 | SQZ | Sheila | SQZ0 | Y | Pump align | 18:13
16:04 | FAC | Eric | Mech room | N | Heater coil repair | 18:35
16:09 | SEI | Jim | Office/EndX remote | N | BRS-X adjustments | 17:55
16:16 | VAC | Travis | LVEA | Y | Check feedthroughs | 16:51
16:21 | FAC | Richard | LVEA | Y | Check with Jason | 16:37
16:26 | EE | Marc & Fernando | EndX | Y | SUS DAC checks | 17:13
16:27 | FAC | Kim, Karen | HAM shack | N | Tech clean | 17:09
17:09 | FAC | Kim & Karen | LVEA | Y | Tech clean | 18:07
17:12 | EE | Fil | MSR | N | Looking under the floors, wire routing for strobe light | 20:18
17:12 | FAC | Christina | Receiving | N | Forklift | 17:46
17:12 | FAC | Tyler | LVEA, mids | Y | 3IFO checks | 17:49
17:36 | FAC | Chris | LVEA | Y | Load-bearing labels for storage racks | 18:32
18:06 | PSL | Fernando | PSL racks | N | Checks | 18:25
18:12 | VAC | Janos, Jordan, Travis | FCT | N | Push a clean room to the FCES | 18:34
18:24 | SEI | Jim | CR | N | ETMX BLND checks/tests | 19:11
18:59 | OPS | RyanC | LVEA | Y | Run FARO laptop out to Jason | 19:12
19:43 | ALS | Keita | EndX | N | Take pictures of racks | 20:06
21:57 | ALS | Keita | EndY | Y | ALS-Y table adjustment | 22:33
22:33 | CDS | Tony | EndX | N | WiFi investigation | 23:10

Busy maintenance day; the completed work includes:

Team SR3 was the last one out and made sure the lights were off

Lock#1:

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:03, Tuesday 24 September 2024 (80278)
OPS Eve Shift Start

TITLE: 09/24 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 6mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.29 μm/s
QUICK SUMMARY:

IFO is in MAINTENANCE and MOVE_SPOTS

After Keita fixed the ALSY locking issues, we were able to automatically finish initial_alignment and are currently on track for a fully auto NLN acquisition.

H1 CDS
david.barker@LIGO.ORG - posted 15:35, Tuesday 24 September 2024 - last comment - 21:40, Tuesday 24 September 2024(80277)
CDS Maintenance Summary: Tuesday 24th September 2024

WP12093 Add h1iopsusex new LIGO DAC to the SWWD

Ryan C, Erik, Dave:

A new h1iopsusex model was built and installed using the custom rcg-9eb07c3200.

The new IOP model was installed using the procedure:

We decided to reboot rather than just restart the models because in the past this had not worked and we ended up rebooting anyway. Since the IOP restart necessitated a full model restart, rebooting did not slow the process much.

Now if the software watchdog (SWWD) on h1iopsusex were to initiate a local DACKILL, it will kill all six local DACs: three 18-bit, two 20-bit, and the one LIGO 28-bit.

WP12101 Reboot h1digivideo servers

Erik, Dave:

Due to a slow memory leak in the old digivideo servers h1digivideo[0,1,2], h1digivideo2's memory usage had crept up to 95% after an uptime of about 7 months (2024 trend attached).

We rebooted all three machines at 12:22 following the conclusion of h1alsey work (the h1alsey model receives ITMY camera data from h1digivideo2).

At the time of writing, the memory usages are: 0 = 20%, 1 = 22%, 2 = 18%.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 21:40, Tuesday 24 September 2024 (80283)

Tue24Sep2024
LOC TIME HOSTNAME     MODEL/REBOOT
12:23:23 h1susex      h1iopsusex  
12:23:36 h1susex      h1susetmx   
12:23:49 h1susex      h1sustmsx   
12:24:02 h1susex      h1susetmxpi 
 

H1 DetChar (DetChar, Lockloss)
bricemichael.williams@LIGO.ORG - posted 11:33, Thursday 12 September 2024 - last comment - 16:04, Wednesday 30 October 2024(80001)
Lockloss Channel Comparisons

-Brice, Sheila, Camilla

We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.

We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:

  1. only Observe and Refined tags (plots, histogram)
  2. only Observe, Refined, and Windy tags (plots, histogram)
  3. only Observe, Refined, and Earthquake tags (plots, histogram)
  4. Observe, Refined, and Microseism tags (note: all of these also have an EQ tag, and all but the last 2 have an anthropogenic tag) (plots, histogram)

(Issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a DCC entry for them: G2401806)

We wrote a Python script to pull the data of various auxiliary channels for the 15 seconds before a lockloss. A graph is created for each channel, a trace for each lockloss time is stacked on each graph, and the graphs are saved to a PNG file. All traces have been shifted so that the time of lockloss is at t = 0.
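
A minimal sketch of that per-channel stacking, assuming gwpy data access; the channel subset and lockloss time below are placeholders:

import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

channels = ['H1:ASC-DHARD_P_IN1_DQ', 'H1:ASC-DHARD_Y_IN1_DQ']
lockloss_gps = [1411331841]  # e.g. times from the lockloss tool

for chan in channels:
    fig, ax = plt.subplots()
    for t0 in lockloss_gps:
        ts = TimeSeries.get(chan, t0 - 15, t0)  # 15 s ending at lockloss
        # shift the time axis so the lockloss lands at t = 0
        ax.plot(ts.times.value - t0, ts.value, label=str(t0))
    ax.set(xlabel='Time to lockloss [s]', title=chan)
    ax.legend()
    fig.savefig(chan.replace(':', '_') + '.png')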

Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken from between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate PNG file from the plots.

These channels are provided via a list inside the Python file and can be easily adjusted to fit a user's needs. We used the following channels:

channels = [
    'H1:ASC-AS_A_DC_NSUM_OUT_DQ', 'H1:ASC-DHARD_P_IN1_DQ', 'H1:ASC-DHARD_Y_IN1_DQ',
    'H1:ASC-MICH_P_IN1_DQ', 'H1:ASC-MICH_Y_IN1_DQ', 'H1:ASC-SRC1_P_IN1_DQ',
    'H1:ASC-SRC1_Y_IN1_DQ', 'H1:ASC-SRC2_P_IN1_DQ', 'H1:ASC-SRC2_Y_IN1_DQ',
    'H1:ASC-PRC2_P_IN1_DQ', 'H1:ASC-PRC2_Y_IN1_DQ', 'H1:ASC-INP1_P_IN1_DQ',
    'H1:ASC-INP1_Y_IN1_DQ', 'H1:ASC-DC1_P_IN1_DQ', 'H1:ASC-DC1_Y_IN1_DQ',
    'H1:ASC-DC2_P_IN1_DQ', 'H1:ASC-DC2_Y_IN1_DQ', 'H1:ASC-DC3_P_IN1_DQ',
    'H1:ASC-DC3_Y_IN1_DQ', 'H1:ASC-DC4_P_IN1_DQ', 'H1:ASC-DC4_Y_IN1_DQ',
]
Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 17:03, Wednesday 25 September 2024 (80294)DetChar, Lockloss

After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 s before lockloss from the analysis. This is because (in the original post's plots) the H1:ASC-AS_A_NSUM_OUT_DQ channel had most of its last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss causing a runaway in the channels. I also combined the ground-motion locklosses (EQ, Windy, and Microseism) into one set of plots (45 locklosses) and left the only-Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel (see the sketch after this list):

  1. Blue:
    • the max displacement from zero between 1 second and 0.1 seconds before lockloss, for each lockloss
    • the data runs from 1 second before until 0.1 seconds before lockloss, for each lockloss
    • the histogram is of the max displacement from zero for each lockloss
    • the counts are weighted as 1/(number of locklosses in this data set) (i.e., the total number of counts in the histogram)
  2. Red:
    • I took all the data points from 8 seconds before until 2 seconds before lockloss, for each lockloss
    • I then down-sampled from 256 Hz to a 16 Hz sampling rate by taking every 16th data point
    • the histogram is of the displacement from zero of these down-sampled points
    • the counts are weighted as 1/(number of down-sampled data points for each lockloss) (i.e., the total number of counts in the histogram)
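
A hedged sketch of those two weighted histograms for one channel, assuming each lockloss contributes a 256 Hz array ending at the lockloss (all names are placeholders):

import numpy as np
import matplotlib.pyplot as plt

def blue_red_hist(segments, fs=256, bins=50):
    """segments: list of 1-D arrays sampled at fs, each ending at lockloss."""
    # blue: max |x| between 1 s and ~0.1 s before lockloss, one entry per lockloss
    blue = np.array([np.abs(s[-fs:-fs // 10]).max() for s in segments])
    # red: |x| of every 16th sample (256 Hz -> 16 Hz), 8 s to 2 s before lockloss
    red = np.concatenate([np.abs(s[-8 * fs:-2 * fs:16]) for s in segments])
    plt.hist([blue, red], bins=bins, stacked=True, color=['blue', 'red'],
             weights=[np.full(blue.size, 1 / blue.size),  # 1/N_locklosses
                      np.full(red.size, 1 / red.size)],   # 1/N_samples
             label=['last 1 s (max per lockloss)', '8-2 s before (all samples)'])
    plt.legend()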

Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground-motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of counts in that bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (I am going to look next at times without locklosses for comparison).


Images attached to this comment
bricemichael.williams@LIGO.ORG - 14:17, Wednesday 09 October 2024 (80568)DetChar, Lockloss

I started looking at the DC2 channel and the REFL_B channel, to see if there is a threshold on REFL_B that can be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times into different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06; Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they get classified as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean we are underestimating the number of locklosses above the threshold. This could also be suppressing possible bimodal distributions for other histograms. (Looking into debugging this; see the sketch below.)
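
For illustration, a hedged sketch of the threshold split; one plausible form of the logic bug described above is taking max(x) instead of max(|x|), which sends locklosses whose largest excursion is negative to the "Less" group (names here are hypothetical):

import numpy as np

def above_threshold(last_second, threshold=0.06):
    # "max displacement from zero" needs the absolute value;
    # last_second.max() alone would miss large negative excursions
    return np.abs(last_second).max() > threshold

greater, less = [], []
for gps, seg in locklosses.items():  # hypothetical {gps: last-second DC2 data}
    (greater if above_threshold(seg) else less).append(gps)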

I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.

Based on the plots, I think a threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 in the last 3 seconds prior to lockloss.

Images attached to this comment
bricemichael.williams@LIGO.ORG - 11:30, Tuesday 15 October 2024 (80678)DetChar, Lockloss

Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the plot.

The histograms were unaffected by the issue.

Images attached to this comment
bricemichael.williams@LIGO.ORG - 16:04, Wednesday 30 October 2024 (80949)

Added the code to GitLab.

H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 16:01, Friday 30 August 2024 - last comment - 10:55, Wednesday 25 September 2024(79822)
9.5 Hz comb triplet status after PSL control box moved to separate power supply

Ansel Neunzert, Evan Goetz, Owen (Zhiyu) Zhang

Summary

Following the move of PSL control box 1 to a separate power supply (see LHO aLOG 79593), we searched the recent Fscan spectra for any evidence of the 9.5 Hz comb triplet artifacts. The configuration change seems promising: there is strong evidence that it has had a positive effect. However, there are a few important caveats to keep in mind.

Q: Does the comb improve in DARM?

A: Yes. However, it has changed/improved before (and later reversed the change), so this is not conclusive by itself.

Figures 1-4 show the behavior of the comb in DARM over O4 so far. Figures 1 and 2 are annotated with key interpretations, and Figure 2 is a zoom of Figure 1. Note that the data points are actually the maximum values within a narrow spectral region (+/- 0.11 Hz, 20 spectral bins) around the expected comb peak positions. This is necessary because the exact frequency of the comb shifts unpredictably, and for high-frequency peaks this shift has a larger effect.
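
As an illustration of that peak-tracking, a hedged numpy sketch (the search half-width follows the text; the tooth spacing, names, and tooth count are placeholders):

import numpy as np

def comb_peak_heights(freqs, spectrum, spacing=9.5, n_teeth=200, half_width=0.11):
    """Max spectral value within +/- half_width Hz of each expected tooth."""
    heights = []
    for k in range(1, n_teeth + 1):
        sel = np.abs(freqs - k * spacing) <= half_width  # ~20 Fscan bins
        heights.append(spectrum[sel].max() if sel.any() else np.nan)
    return np.array(heights)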

Based on these figures, there was a period in O4b when the comb’s behavior changed considerably, and it was essentially not visible at high frequencies in daily spectra. However, it was stronger at low frequencies (below 100 Hz) during this time. This is not understood, and in fact has not been noted before. Maybe the coupling changed? In any case, it came back to a more typical form in late July. So, we should be aware that an apparent improvement is not conclusive evidence that it won’t change again.

However, the recent change seems qualitatively different. We do not see evidence of low or high frequency peaks in recent days. This is good news.

Q: Does the comb improve in known witness channels?

A: Yes, and the improvement is more obvious here, including in channels where the comb has previously been steady throughout O4. This is cause for optimism, again with some caveats.

To clarify the situation, I made similar history plots (Figures 5-8) for a selection of channels that were previously identified as good witnesses for the comb. (These witness channels were initially identified using coherence data, but I’m plotting normalized average power here for the history tracks. We’re limited here to using channels that are already being tracked by Fscans.)

The improvement is more obvious here, because these channels don’t show the kind of previous long-term variation that we see in the strain data. I looked at two CS magnetometer channels, IMC-F_OUT_DQ, and LSC-MICH_IN1. In all cases, there’s a much more consistent behavior before the power supply isolation, which makes the improvement that much more convincing.

Q: Is it completely gone in all known witness channels?

A: No, there are some hints of it remaining.

Despite the dramatic improvements, there is subtle evidence of the comb remaining in some places. In particular, as shown in Figure 9, you can still see it at certain high frequencies in the IMC-F_OUT channel. It’s much improved from where it was before, but not entirely gone.

Images attached to this report
Comments related to this report
ansel.neunzert@LIGO.ORG - 10:55, Wednesday 25 September 2024 (80290)DetChar

Just an update that this fix seems to be holding. Tracking the comb height in weekly-averaged spectra shows clear improvement (plot attached). The combfinder has not picked up these combs in DARM recently, and when I spot-check the daily and weekly spectra I see no sign of them by eye, either.

Images attached to this comment