Reports until 16:06, Tuesday 17 December 2024
H1 CDS
david.barker@LIGO.ORG - posted 16:06, Tuesday 17 December 2024 (81881)
TW0 raw minute trend offload completed

WP12249

Jonathan, Dave:

The offloading of the past 6 months of raw minute trend files from h1daqtw0 SSD-RAID to permanent archive on HDD-RAID is completed.

nds0 restart 11:41 Mon 16dec2024  
file copy 12:06 Mon 16dec2024 - 12:59 Tue 17dec2024 24hr 53min
nds0 restart 13:27 Tue 17dec2024  
old file deletion 13:38 - 15:56 Tue 17dec2024 2hr 18min

TW0 raid usage went from 91% to 2%. Jonathan made the DAQ puppet change to configure nds0's daqdrc file with the new archive.

H1 CDS
david.barker@LIGO.ORG - posted 14:36, Tuesday 17 December 2024 (81878)
DUST Monitor LAB2 not working

Currently Dust LAB2 is not working, 0.3u count is NaN, 0.5u count is flatline zero.

FRS32930

H1 PSL (PSL)
ryan.crouch@LIGO.ORG - posted 12:59, Tuesday 17 December 2024 (81873)
PSL Status Report - Weekly

Laser Status:
    NPRO output power is 1.842W
    AMP1 output power is 70.15W
    AMP2 output power is 138.6W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 0 days, 2 hr 46 minutes
    Reflected power = 23.78W
    Transmitted power = 105.4W
    PowerSum = 129.2W

FSS:
    It has been locked for 0 days 0 hr and 33 min
    TPD[V] = 0.8228V

ISS:
    The diffracted power is around 3.2%
    Last saturation event was 0 days 0 hours and 33 minutes ago


Possible Issues:
    PMC reflected power is high; it's a little higher after the FSS/PMC work today

Images attached to this report
H1 ISC
thomas.shaffer@LIGO.ORG - posted 12:56, Tuesday 17 December 2024 (81872)
SRY OLG measured

Sheila D, TJ

To help understand why we've been having inconsistent locking of SRY lately, Sheila and I took an OLG of SRCL with SRY locked this morning. I stopped ALIGN_IFO at WFS_CENTERING_SRY so it was locked but not fully aligned. Compared to the last reference (July 2018) that was on the template, things look a bit worse off. We plan to have someone look at this and try to tune the loop a bit better.

Images attached to this report
H1 General
ryan.crouch@LIGO.ORG - posted 12:37, Tuesday 17 December 2024 (81871)
LVEA swept

The LVEA has been swept, the FARO is left plugged in in the East bay.

H1 PSL
ryan.short@LIGO.ORG - posted 12:33, Tuesday 17 December 2024 (81865)
PSL FSS On-Table Alignment (WP 12250)

We've been seeing the FSS RefCav TPD signal dropping over recent weeks (not surprising for this time of year), so I went into the PSL enclosure this morning to tune up the FSS path alignment in advance of the holiday break.

To start, I attempted to touch up alignment into the PMC remotely using the picomotors right before it with the ISS off, but I wasn't able to get much of any improvement, so I turned the ISS back on and made my way out to the enclosure. Once there, I started with a power budget of the FSS path using the 3W-capable Ophir stick head:

The largest (and least surprising) area of power loss I noticed was in the AOM diffraction efficiencies, so I started there. I adjusted the AOM stage itself, mostly in pitch, to improve the single-pass, and M21 to improve the double-pass. I also checked that the beam was passing nicely through the FSS EOM (it was, no adjustment needed here). The final power budget:

Having made good improvements, I proceeded to adjust M23 and M47, the picomotor-controlled mirrors before the RefCav, to align the beam back onto the alignment iris at the input of the RefCav. That done, I then instructed the FSS autolocker to lock the RefCav. As seen before and noted most explicitly in alog81780, the autolocker could briefly grab the TEM00 mode but then lose it. I lowered the autolocker's State 2 delay (which determines how long to wait before turning on the temperature control loop after finding resonance) from 1.0 seconds to 0.5, and the autolocker was immediately successful. I've accepted this shorter delay time in SDF; screenshot attached. The TPD was reporting a signal of 0.515 V with the RefCav locked, so I used the picomotors to improve alignment, finishing with a TPD signal of 0.830 V.

Seeing the beam spot on the RefCav REFL camera was now more than half out of view, I rotated the camera in its mount slightly to center the image. I then unlocked the FSS and used M25 to tweak up the alignment onto the RFPD, improving the voltage when using a multimeter from 0.370 V to 1.139 V, then locked the FSS again to get an RFPD voltage of 0.213 V. This gives a visibility of the RefCav of 81.3%. I wrapped up in the enclosure, turned the environmental controls back to science mode, and returned to the control room. After about an hour while maintenance activities were finishing, I turned the ISS back on; currently diffracting around 3.3%.
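The visibility figure follows directly from the two RFPD voltages quoted above (values from this entry):

```python
# RefCav visibility from the RFPD voltages above (values from this entry)
v_unlocked = 1.139  # V, RFPD with the FSS unlocked
v_locked = 0.213    # V, RFPD with the FSS locked
visibility = 1 - v_locked / v_unlocked
print(f"{visibility:.1%}")  # ≈ 81.3%
```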

This closes WP 12250.

Images attached to this report
H1 GRD
sheila.dwyer@LIGO.ORG - posted 12:32, Tuesday 17 December 2024 (81870)
minor edit to ALS arm guardians

While Oli was doing initial alignment, I saw that the Y arm guardian was in the state ENABLE_WFS although the arm wasn't locked and wasn't well enough aligned to lock. Looking at the code, there was no check that the arm was locked before the locking state returned true; it only checked for errors. I've changed the return true to happen only if H1:ALS-Y_REFL_LOCK_STATUS = 1.
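The changed check amounts to something like the following (channel name from this entry; the dict-style `ezca` getter and the function name are stand-ins for the real Guardian environment, not the actual guardian source):

```python
# Minimal sketch of the new LOCKING-state completion check (channel name from
# this entry; the dict-style `ezca` getter and function name are stand-ins for
# the real Guardian environment, not the actual guardian source)
def y_arm_locking_done(ezca):
    """Return True only when the Y arm is actually locked,
    rather than merely error-free."""
    return ezca['H1:ALS-Y_REFL_LOCK_STATUS'] == 1
```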

After doing this I caused an issue by trying to cycle through these states while the INIT_ALIGN guardian was still managing the arm guardians.  The X arm had already run the WFS and completed, but the initial alignment guardian saw that it wasn't locked, and requested it to scan_alignment.  Perhaps this check could be made to only happen if the arm hasn't already offloaded, or if the arm has been in the locking state for a certain amount of time.

H1 CDS
david.barker@LIGO.ORG - posted 11:41, Tuesday 17 December 2024 - last comment - 13:13, Tuesday 17 December 2024(81867)
Slow controls Beckhoff device issue

Fil, Fernando, Patrick, Dave:

We are investigating a Beckhoff device error on the CS-AUX PLC, DEV1 on the CDS Overview. Device went into error at 11:21 PST.

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 12:18, Tuesday 17 December 2024 (81869)

Robert has powered down the chamber illuminator control chassis in the LVEA. This chassis permits remote control of the chamber illuminators via ethernet, which is not needed during O4. There is a worry that these chassis, even when in stand-by mode, could be a source of RF noise.

On the CDS overview I will change DEV1's display logic to be GREEN if the dev count is 21 of 23 and RED if any other value.
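The new indicator logic amounts to a simple threshold check (the count of 21 is from this entry; the function name is illustrative, not the actual overview code):

```python
# Sketch of the updated DEV1 indicator logic (threshold from this entry;
# the function name is illustrative, not the actual CDS overview code)
def dev1_color(device_count):
    return "GREEN" if device_count == 21 else "RED"
```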

david.barker@LIGO.ORG - 13:13, Tuesday 17 December 2024 (81874)

New CDS Overview has GREEN DEV1 when device count = 21, RED otherwise.

H1 TCS
camilla.compton@LIGO.ORG - posted 10:29, Tuesday 17 December 2024 - last comment - 10:52, Wednesday 08 January 2025(81863)
CO2 Lasers Spectrum Locked vs Unlocked

Looked at the spectrum of the 50W CO2 lasers on the fast VIGO PVM 10.6 ISS detectors when the CO2 is locked (using the laser's PZT) and unlocked/free-running: time series attached. Small differences <6Hz, see spectrum attached.

The filter banks suggest that the _OUT signals used have been converted from counts to volts and de-whitened. Looking at the spectrum from 50Hz to 1kHz and using the DC level of 7e-3, we get a RIN of 4.5e-9 / 7e-3 = 6e-7.
This is better than the L5L free-running RIN of 1e-5 at 100Hz and 1e-6 at 1kHz measured in CIT#389
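The RIN estimate above is just the spectrum level divided by the DC level; as a sketch (numbers read off this entry):

```python
# RIN estimate from the numbers above (values from this entry's spectrum)
asd_v = 4.5e-9   # V/rtHz, level of the DC _OUT spectrum, 50 Hz - 1 kHz
dc_v = 7e-3      # V, DC level
rin = asd_v / dc_v
print(f"RIN ≈ {rin:.0e} /rtHz")  # ≈ 6e-07 /rtHz
```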
Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 12:20, Wednesday 18 December 2024 (81868)

Gabriele, Camilla.

We are not sure if this measurement makes sense. 

Attached is the same spectrum with the CO2X laser turned off to see the dark noise. It appears that the measurement is limited by the dark noise of the diode above 40Hz. The ITMX_CO2_ISS_IN_AC channel dark noise is actually above the level when the laser is on, which doesn't make sense to me. 

  • Is there anything strange happening in models or chassis? No.
    • The filters in the DC _OUT channels include a CountstoVolts gain(0.000610352) and UndoGain gain(0.00196078) which is 1/510 from the PD electronics D1201111 as expected
    • The AC _OUT channel includes a CountstoVolts gain(0.000610352) and de-whiten zpk([20], [0.01], 0.011322, "n").
    • There is nothing else in the h1hsccs model.
    • Looking at the TCS block diagram E1100892, the ISS PD outputs go through the TCS ISS and Interface Chassis D1300649 (D1300015 shows there is nothing happening to the signals inside this chassis), before going to the ADC. So the model should see the signals as if they were coming straight from the PD DB9 output + counts-to-volts gain. 
  • Does DC value of 7e-3 V make sense? Yes.
    • In CIT#389 we saw -1V (DCMON, so gain of 255) for 60mW of laser power (can jump by a factor of ~2 depending on wavelength/polarization/how the laser is feeling). From this, expect 1mV from the diode (before electronics gain) to be 15mW of laser power. So the DC value measured of 7e-3 V would be around 100mW of power on the PD (within a factor of 2: 50mW to 200mW). This makes sense as we would expect around 250mW on the ISS PD (50W out of laser x 99/1 BS x 50/50 BS).
  • Does Dark noise value make sense: Unsure.
    • In CIT#541, Gabriele shows DC dark noise values of 4e-7 /rHz, this is straight out of the diode (DCMON so x gain of 255) so the comparable value to our _OUT channels would be 1.5e-9 /rHz. This is a factor of 3 different from the 5e-9/rHz dark value we measure at LHO on the DC OUT.
    • Debatable math: The power noise of the diode is 1e-8 W/rHz (from Keita's computations attached to CIT#549). So the RIN should be noise-limited to 1e-8 W/rHz / 0.250W = 4e-8 /rHz. We are measuring down to 5e-9/rHz on the DC channel, which is a factor of ~10 lower than should be possible.
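The back-of-envelope numbers in the bullets above can be collected in one place (all values quoted from CIT#389 and CIT#549 as cited in this entry):

```python
# Sanity-check arithmetic from the bullets above (values quoted from
# CIT#389 and CIT#549 in this entry; all back-of-envelope)
dcmon_gain = 255                    # DCMON electronics gain (CIT#389)
v_per_mW = (1.0 / dcmon_gain) / 60  # V/mW at the bare diode: 1 V DCMON per 60 mW
p_est_mW = 7e-3 / v_per_mW          # power implied by the 7e-3 V DC level
rin_floor = 1e-8 / 0.250            # /rtHz: 1e-8 W/rtHz diode noise at ~250 mW
print(round(p_est_mW), rin_floor)   # ≈ 107 mW (i.e. ~100 mW), 4e-08 /rtHz
```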
Images attached to this comment
camilla.compton@LIGO.ORG - 10:52, Wednesday 08 January 2025 (82181)

Gabriele and I checked that the H1:TCS-ITM{X,Y}_CO2_ISS_{IN/OUT}_AC filter: de-whiten zpk([20], [0.01], 0.011322, "n"), is as expected from the PD electronics D1201111, undoing the gain of 105dB with the turning point around 20Hz, foton bode plot attached.

This means that both the AC and DC outputs should be the voltage out of the photodetector before the electronics, where the PD shunt resistance was measured to be 66 Ohms.

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 10:08, Tuesday 17 December 2024 (81864)
Tue CP1 Fill

Tue Dec 17 10:02:58 2024 INFO: Fill completed in 2min 56secs

Quick fill, coincident with dewar filling.

Images attached to this report
H1 General (Lockloss)
camilla.compton@LIGO.ORG - posted 08:45, Tuesday 17 December 2024 (81862)
Range drop before lockloss at 11:34UTC could have been due to unknown glitches

Ryan S noticed that the range drop before the lockloss at 11:34UTC could be related to the glitches we've been seeing, as Omicron sees similar glitches (screenshot). DARM looks worse 30-100Hz (plot).

Before the lockloss we see a 26-28Hz wobble in DARM (not in the other LSC loops) (plot).

Images attached to this report
H1 General
camilla.compton@LIGO.ORG - posted 07:37, Tuesday 17 December 2024 (81861)
OPS TUES Day Shift Start

TITLE: 12/17 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.39 μm/s
QUICK SUMMARY:

IFO re-locked itself twice overnight and is currently running magnetic injections before we start maintenance activities.

Slight strangeness that the IFO mode was in "relocking" H1:ODC-OBSERVATORY_MODE = 21 when I arrived.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Monday 16 December 2024 (81860)
OPS Eve Shift Summary

TITLE: 12/17 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
LOG:

IFO is in INITIAL_ALIGNMENT and ENVIRONMENT

Still recovering from the 7.4 EQ today in Vanuatu. After this, there were many 5+ Mag aftershocks and our primary microseism is still above the 0.1 line. We briefly left EQ mode for 5 or so minutes before another earthquake (Atlantic Ocean this time) caused us to reactivate.

I began locking at 05:22 UTC and managed to get to PRMI but had no flashes. A few minutes later, another EQ hit and we lost lock. At this point, I started initial alignment. I believe IFO will be able to lock very soon since seismic conditions are coming down quickly.

Lockloss Alog 81859

Other:

As the Pacific plate shook, Robert took the time to conclude his viewport work, meaning we are capable of transitioning to LASER SAFE once again. I have informed the morning operator independently as well.

H1 PEM (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 18:07, Monday 16 December 2024 (81859)
Lockloss 01:58 UTC

Interesting LL that happened after the detection of a 7.4 mag EQ but before any picket fence or STS detection. So I don't think it's EQ-caused, but it seems suspicious. Either way, we're riding out a huge EQ right now and will attempt locking afterwards.

H1 PSL
ryan.short@LIGO.ORG - posted 19:12, Wednesday 11 December 2024 - last comment - 14:33, Tuesday 17 December 2024(81780)
FSS Autolocker Investigations

J. Oberling, R. Short

This afternoon, Jason and I started to look into why the FSS has been struggling to relock itself recently. In short, once the autolocker finds a RefCav resonance, it's been able to grab it, but loses it after about a second. This happens repeatedly, sometimes taking up to 45 minutes for the autolocker to finally grab and hold resonance on its own (which led me to do this manually twice yesterday). We first noticed the autolocker struggling when recovering the FSS after the most recent NPRO swap on November 22nd, which led Jason to manually lock it in that instance.

While looking at trends of when the autolocker both fails and is successful in locking the RefCav, we noticed that the fastmon channel looks the most different between the two cases. In a successful RefCav lock (attachment 1), the fastmon channel will start drifting away from zero as the PZT works to center on the resonance, but once the temperature loop turns on, the signal is brought back and eventually settles back around zero. In unsuccessful RefCav lock attempts (attachments 2 and 3), the fastmon channel will still drift away, but then lose resonance once the signal hits +/-13V (the limit of the PZT as set by the electronics within the TTFSS box) before the temploop is able to turn on. I also looked back to a successful FSS lock with the NPRO installed before this one (before the problems with the autolocker started, attachment 4), and the behavior looks much the same as with successful locks with the current NPRO.

It seems that with this NPRO, for some reason, the PZT is frequently running out of range when trying to center on the RefCav resonance before the temploop can turn on to help, but it sometimes gets lucky. Jason and I took some time familiarizing ourselves with the autolocker code (written in C and unchanged in over a decade) to give us a better idea of what it's doing. At this point, we're still not entirely sure what about this NPRO is causing the PZT to run out of range, but we do have some ideas of things to try during a maintenance window to make the FSS lock faster:

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:33, Tuesday 17 December 2024 (81876)

As part of my FSS work this morning (alog81865), I brought the State 2 delay down from 1 second to 0.5, and so far today every FSS lock attempt has been grabbed successfully on the first try. I'll leave in this "Band-Aid" fix until we find a reason to change it back.

H1 PEM (DetChar, PEM, TCS)
robert.schofield@LIGO.ORG - posted 18:06, Thursday 14 November 2024 - last comment - 10:19, Thursday 19 December 2024(81246)
TCS-Y chiller is likely hurting Crab sensitivity

Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use  a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.

Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).

I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air. 

Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound. 

Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.

For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.

Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:12, Monday 25 November 2024 (81472)DetChar, TCS

This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y but this did make the chiller wobbly so we placed thinner foam under CO2X.

Images attached to this comment
keith.riles@LIGO.ORG - 08:10, Thursday 28 November 2024 (81525)DetChar
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion.

Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
Images attached to this comment
camilla.compton@LIGO.ORG - 15:02, Tuesday 03 December 2024 (81598)DetChar, TCS

This morning at 17:00UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect it much; we had the chillers off for a long period on October 25th (80882) when we flushed the chiller line, and the issue was seen before that date. 

Opened FRS 32812.

There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 73704

camilla.compton@LIGO.ORG - 11:27, Thursday 05 December 2024 (81634)TCS

Between 19:11 and 19:21 UTC, Robert and I swapped the foam from under the CO2Y chiller (it was flattened and no longer providing any damping) to new, thicker foam and 4 layers of rubber. Photos attached. 

Images attached to this comment
keith.riles@LIGO.ORG - 06:04, Saturday 07 December 2024 (81663)
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:53, Tuesday 10 December 2024 (81745)TCS

I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.

These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both X and Y chillers, these have been in the full open position, with the lever pointed straight up. The Y chiller has been running with 4.0gpm, so our only change was a lower flow rate. The X chiller has been at 3.7gpm already, and the manual states that these chillers shouldn't be run below 3.8gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on back, I didn't want to lower it further and left it as is.

Two questions came from this:

  1. Why are we running so close to the 3.8gpm minimum?
  2. Why is the flow rate for the X chiller so low?

The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.

Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Friday 13 December 2024 (81806)
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? 

Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.



Images attached to this comment
camilla.compton@LIGO.ORG - 11:34, Tuesday 17 December 2024 (81866)TCS

TJ touched the CO2 flow on Dec 12th around 19:45UTC (81791), so the flow rate was further reduced to 3.55 GPM. Plot attached.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 14:16, Tuesday 17 December 2024 (81875)

The flow of the TCSY chiller was further reduced to 3.3gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.

keith.riles@LIGO.ORG - 10:19, Thursday 19 December 2024 (81902)
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.

Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18 
Images attached to this comment
H1 ISC
sheila.dwyer@LIGO.ORG - posted 09:20, Monday 06 May 2024 - last comment - 16:31, Tuesday 17 December 2024(77641)
DRMI PRCL gain change

Sheila, Jenne, Tony, Camilla

We've had locklosses in DRMI because the PRCL gain has been too high when locked on REFL1F.  Tony looked and thinks that this started on 77583, the day of our big shift in the output alignment.

Today we acquired DRMI with half the gain in the PRCL input matrix for 1F, this one acquisition was fast.  I've attached the OLG measurements for PRCL and MICH after the change. 

Tony is working on making histograms of the DRMI acquisition times, before the 23rd, from the 23rd to today, and eventually a histogram from today for the next few weeks to evaluate if this change has an impact on the DRMI acquisition times.

Jenne also found that our POP18 buildup seems higher in DRMI since the 23rd.

Images attached to this report
Comments related to this report
jenne.driggers@LIGO.ORG - 11:45, Friday 10 May 2024 (77763)

I'm no longer quite so sure about the conclusion that Pop18 is higher, or at least enough to really matter.

Here are 2 screenshots that I made extremely quickly, so they are not very awesome, but they can be a placeholder until Tony's much more awesome version arrives.  They both have the same data, displayed 2 ways.

The first plot is pop18 and kappaC versus time.  The x-axis is gpstime, but that's hard to interpret, so I made a note on the screenshot that it ranges from about April 20th (before The Big Shift) to today.  Certainly during times when the optical gain was low, Pop18 was also low.  But, Pop18 is sometimes high even before the drop in optical gain.  So, probably it's unrelated to The Big Shift.  That means that the big shift in the output arm is not responsible for the change in PRCL gain (which makes sense, since they should be largely separate).

The second plot is just one value versus the other, to see that there does seem to be a bit of a trend that if kappaC is low, then definitely Pop18 is low.  But the opposite is not true - if pop18 is low kappaC isn't necessarily low.

The last attachment is the jupyter notebook (you'd have to download it and fix up the suffix to remove .txt and make it again a .ipynb), with my hand-typed data and the plots.

Images attached to this comment
Non-image files attached to this comment
sheila.dwyer@LIGO.ORG - 09:10, Monday 13 May 2024 (77805)

I actually didn't load the guardian at the time of this change, so it didn't take effect until today.

So, we'd like histograms of DRMI acquisition times from before April 23rd, from April 23rd until today, and for a few weeks from today.

anthony.sanchez@LIGO.ORG - 16:31, Tuesday 17 December 2024 (81879)

Using the Summary pages I was able to get a quick google sheet to give me before and after Histograms of how long ISC_LOCK was in DRMI 1F.

https://docs.google.com/spreadsheets/d/1xVmvYJdEq8GfKVcSzPj1fyeS95cIWrAkFjI23AxtsAQ/edit?gid=286373047#gid=286373047

First Sheet's data is from before Nov 18th 2024, consisting of 100 gpstimes and durations where ISC_LOCK was in AQUIRE_DRMI_1F.
Second Sheet's data is from after Nov 18th 2024, consisting of 100 gpstimes and durations where ISC_LOCK was in AQUIRE_DRMI_1F.

Interesting notes about ISC_LOCK.
ISC_LOCK will request PRMI or Check MICH Fringes somewhere between 180 seconds and 600 seconds, depending on how much light is seen on AS_AIR.
If AS_AIR sees flashes above 80, then ISC_LOCK will not kick us out of DRMI until 600 seconds.
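As described, the fallback timing amounts to something like the following (the 180 s / 600 s durations and the AS_AIR flash threshold of 80 are from the notes above; the names are illustrative, not the ISC_LOCK source, and the real logic may be more graded than a single switch):

```python
# Sketch of the DRMI fallback timing described above (thresholds from this
# entry; names are illustrative, not the actual ISC_LOCK guardian code)
def drmi_timeout_s(as_air_flash_peak):
    """Seconds before ISC_LOCK gives up on DRMI and requests PRMI/MICH fringes."""
    return 600 if as_air_flash_peak > 80 else 180
```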

So it looks like one of the changes that happened on or around Nov 18th made the flashes on AS_AIR higher, but we are still not actually locking DRMI.
We had fewer ACQUIRE_DRMI durations over 180 seconds before Nov 18th's changes.

Images attached to this comment