Reports until 12:25, Monday 30 September 2024
H1 SQZ
camilla.compton@LIGO.ORG - posted 12:25, Monday 30 September 2024 (80373)
AS42 sensing matrix re-measured

Sheila, Camilla

Continuing what Naoki/Vicky started, I moved ZM4, ZM6, and ZM5 by -50 urad and then +50 urad and recorded the change in AS_A/B_AS42_PIT/YAW. It might have been easier if I'd first adjusted the AS_A/B AS42 offsets so that the PIT/YAW outputs started out zeroed...

Plot attached of ZM4, ZM5 and ZM6. 

There is some cross coupling, and ZM5 gave very strange results in pitch and yaw, with overshoot and the same direction of AS42 response recorded for different directions of alignment move. This suggests we should use ZM4 and ZM6, as we do for our SCAN_SQZ_ALIGNMENT script.

Sensing/Input matrices calculated using /sqz/h1/scripts/ASC/AS42_sensing_matrix_cal.py

Using ZM4 and ZM6.
PIT Sensing Matrix is:
[[-0.0048 -0.0118]
 [ 0.0071  0.0016]]
PIT Input Matrix is:
[[ 21.02496715 155.05913272]
 [-93.29829172 -63.07490145]]
YAW Sensing Matrix is:
[[-0.00085 -0.009  ]
 [ 0.0059   0.0029 ]]
YAW Input Matrix is:
[[  57.2726375   177.74266811]
 [-116.52019354  -16.78680754]]

Images attached to this report
H1 General
ryan.short@LIGO.ORG - posted 12:04, Monday 30 September 2024 (80375)
Ops Day Mid Shift Report

H1 went out of observing from 15:35 to 18:37 UTC for planned commissioning activities, which included PRCL FF testing, an update to the SQZ_PMC Guardian, shaker tests at HAM1, ITMY compensation plate sweeps, and measuring the AS42 sensing matrix by moving ZMs.

H1 has now been locked for 8 hours.

H1 CAL
louis.dartez@LIGO.ORG - posted 11:48, Monday 30 September 2024 (80372)
Troubleshooting Cal
This is a late entry to list everything we checked on Friday afternoon (which spilled over to roughly midnight CT).

-- Current Status --
LHO is currently running the same cal configuration as in 20240330T211519Z. The error reported over the weekend using the PCAL monitoring lines suggests that LHO is well within the 10% magnitude & 10 degree error window after about 15 minutes into each lock (attached image). Here is a link to the 'monitoring line' grafana page that we often look at to keep track of the PCALY/GDS_CALIB_STRAIN response at the monitoring line frequencies. The link covers Saturday to today. This still suggests the presence of calibration error near 33Hz that is not currently understood but, as mentioned earlier, it's now well within the 10%/10deg window. We think that the SRCL offset adjustment in LHO:80334 and LHO:80331 accounts for the largest contribution to the improvement of the calibration at LHO in the past week. 
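
As an illustration of the kind of single-line comparison behind these monitoring-line error estimates, here is a hedged numpy sketch: it demodulates two time series at a calibration line frequency and compares the complex amplitudes. The synthetic signals, the 33 Hz line frequency, and the ratio convention are assumptions for illustration, not the production pipeline that feeds the grafana page.

import numpy as np

def line_amplitude(data, fs, f_line):
    # Complex amplitude of a single line via digital demodulation:
    # multiply by a complex oscillator and average; the factor of 2
    # recovers the amplitude of a real sinusoid.
    t = np.arange(len(data)) / fs
    return 2 * np.mean(data * np.exp(-2j * np.pi * f_line * t))

fs, f_line, dur = 16384, 33.0, 60          # sample rate (Hz), line freq (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
# Stand-ins for GDS_CALIB_STRAIN and the PCAL reference at the line frequency.
strain = 1.05 * np.sin(2 * np.pi * f_line * t + 0.05)
pcal   = 1.00 * np.sin(2 * np.pi * f_line * t)

ratio = line_amplitude(strain, fs, f_line) / line_amplitude(pcal, fs, f_line)
print(f"magnitude ratio: {abs(ratio):.3f}, phase: {np.degrees(np.angle(ratio)):.2f} deg")
# Values within 1 +/- 0.10 and +/- 10 deg would sit inside the window quoted above.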

-- What's Been Tried --
No attempts by CAL (me, JoeB, Vlad) to correct the 33Hz error have been successful. Many things were tried throughout the week (mostly on Tuesday & Friday) without being properly logged. Here's a non-exhaustive list of the checks we tried (in no particular order):

  • We cross-checked every front-end filter and gain parameter in the pyDARM model file against what's installed at LHO. Other than non-impacting foton file changes (e.g. out-of-path module changes), the only mismatch we identified is that the bias voltage in pyDARM was set to 3.3 instead of the IFO's 3.25.
  • We quickly implemented SRC detuning by copying the LLO setup. I did this by making on-the-fly changes to pyDARM on a local copy of the repo. I'll have to follow up with what these exact adjustments entailed. This seemed to do what we expected but it did not significantly impact the large 33Hz error.
  • Some GDS pipeline-affecting parameters made their way into the pydarm ini at LHO. These changes were made around the time of the Cal F2F over the summer. I reverted them, regenerated GSTLAL filters and installed them in the GDS pipeline. This didn't help. The commits in question are d2bfe349, 8717a9f5, and dc7208e0.
  • We don't believe the delays being reported by the pyDARM fits of the actuation delay on the PUM stage. We made adjustments to the fitting parameters for this stage to increase the fit range to roughly 400Hz in the hopes that pyDARM would produce a somewhat believable number. More details on this item as time allows; we are going to tune our injections to reduce uncertainty on these measurements in particular. This tuning will need to be done in tandem with other tunings (e.g. reducing PCAL at low frequencies to avoid leakage from polluting simultaneous higher frequency measurements).
  • We made similar adjustments to the sensing fit parameters. We extended the low end of the fit range to better capture the detuning near 10Hz in the fit.
  • Once it approached midnight on Friday night, we reverted everything and left the IFO in the state we found it in. This process is also not straightforward and requires creating a new 'fake' pyDARM report each time due to recently-discovered bugs in the pyDARM infrastructure; these bugs are being tracked here.
-- What Changed? --
We think the SRC change(s) on September 5 (LHO:79929, also discussed in LHO:79903) line up well with the time frame at which the 33Hz error issue popped up. Here is a screenshot showing that the 33Hz error showed up on September 5. I've also attached a shot of ndscope showing the SRCL offset change on 09/05 from -175cts to -290cts here. cal_better_after_srcl_changed.png shows the error tracked by the monitoring lines before and after the SRCL offset was changed back to -190cts. The last image is a scope shot of that change (here).
Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 11:01, Monday 30 September 2024 (80374)
PSL 10-Day Trends

FAMIS 31053

The NPRO power, amplifier powers, and a few of the laser diode signals inside the amps show some drift over the past several days that tracks together, sometimes inversely. Looking at past checks, this relationship isn't new, but the NPRO power doesn't normally drift around this much. Since it's still not a huge difference, I'm mostly just noting it here and don't think it's a major issue.

The rise in PMC reflected power has definitely slowed down, and has even dropped noticeably in the past day or so by almost 0.5W.

Images attached to this report
H1 SQZ
camilla.compton@LIGO.ORG - posted 10:16, Monday 30 September 2024 (80371)
SQZ PMC PZT checker added to SQZ_MANAGER

The operator team has been noticing that we are dropping out of observing for the PMC PZT to relock (80368, 80214, 80206...).

There's already a PZT_checker() in SQZ_MANAGER at FDS_READY_IFO and FIS_READY_IFO to check that the OPO and SHG PZTs are not at the end of their range (the OPO PZT relocks if not between 50-110V, the SHG PZT if not between 15-85V). If a PZT is out of range, the checker requests that loop to unlock and relock. This is to force them into a better place before we go into observing.

I've added the PMC to the PZT_checker; it will relock if its PZT is outside of 15-85V (full range is 0-100V). SQZ_MANAGER reloaded. Plan to take the GRD through DOWN and back to FDS.
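
For illustration only, here is a sketch of the kind of range check described above. This is not the real SQZ_MANAGER code; the voltage-reading and relock-request hooks are placeholders, while the voltage windows are the ones quoted in this entry.

# Illustrative sketch, not the actual SQZ_MANAGER PZT_checker.
PZT_RANGES = {           # acceptable voltage windows quoted in this entry
    "OPO": (50, 110),
    "SHG": (15, 85),
    "PMC": (15, 85),     # newly added; full PMC PZT range is 0-100 V
}

def pzt_checker(read_voltage, request_relock):
    """read_voltage(name) -> volts; request_relock(name) asks that loop to relock."""
    out_of_range = []
    for name, (lo, hi) in PZT_RANGES.items():
        volts = read_voltage(name)
        if not lo <= volts <= hi:
            # Force the PZT back into a better place before going to observing.
            request_relock(name)
            out_of_range.append((name, volts))
    return out_of_range   # empty list means all PZTs are in a good place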

Images attached to this report
LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 09:50, Monday 30 September 2024 (80370)
HAM2 Annulus Ion Pump Failure

The annulus ion pump failed at about 4:04 AM. Since the annulus volume is shared by HAM1 and HAM2, the other pump is actively working on the system, as noted on the attached trend plot.

The system will be evaluated as soon as possible to determine whether the pump or the controller needs replacing.

Images attached to this report
H1 ISC
camilla.compton@LIGO.ORG - posted 09:29, Monday 30 September 2024 - last comment - 11:32, Thursday 03 October 2024(80369)
PRCL FF measurements taken as FM6 fit didn't reduce PRCL noise.

I tried Elenna's FM6 from 80287; this made the PRCL coupled noise worse, see the first attached plot.

Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt, all in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward.
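
For context, a hedged scipy sketch of the kind of transfer-function and coherence estimate behind exported tf/coherence text files like these; the synthetic signals and FFT parameters are assumptions, not the settings in the DTT templates.

import numpy as np
from scipy import signal

fs, dur = 2048, 120
rng = np.random.default_rng(0)
excitation = rng.normal(size=int(fs * dur))             # injected drive
b, a = signal.butter(2, 50 / (fs / 2))                  # stand-in plant
response = signal.lfilter(b, a, excitation) + 0.1 * rng.normal(size=int(fs * dur))

f, Pxy = signal.csd(excitation, response, fs=fs, nperseg=4 * fs)
_, Pxx = signal.welch(excitation, fs=fs, nperseg=4 * fs)
_, coh = signal.coherence(excitation, response, fs=fs, nperseg=4 * fs)

tf = Pxy / Pxx                                          # H1-style transfer-function estimate
# A larger injection amplitude raises the response above the background noise,
# which is what pushes the coherence up at and above 50 Hz.
np.savetxt("tf_example.txt", np.column_stack([f, np.abs(tf), np.angle(tf), coh]))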

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 13:50, Monday 30 September 2024 (80377)

Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.

elenna.capote@LIGO.ORG - 11:32, Thursday 03 October 2024 (80444)

The "correct" filter today in FM7 was tested today and still didn't work. Possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.

LHO VE
david.barker@LIGO.ORG - posted 08:31, Monday 30 September 2024 (80367)
Mon CP1 Fill

Mon Sep 30 08:11:07 2024 INFO: Fill completed in 11min 3secs

Jordan confirmed a good fill curbside.

Because of cold outside temps (8C, 46F) the TCs only just cleared the -130C trip. I have increased the trip temps to -110C, the early-winter setting.

Images attached to this report
H1 ISC (ISC, Lockloss, PSL)
ryan.short@LIGO.ORG - posted 08:13, Monday 30 September 2024 - last comment - 09:20, Monday 30 September 2024(80358)
Looking into Recent Locklosses with FSS_OSCILLATION Tags

I don't consider this a full investigative report into why we've been having these FSS_OSCILLATION tagged locklosses, but here are some of my quick initial findings:

Please feel free to add comments of anything else anyone finds.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:20, Monday 30 September 2024 (80366)

Two interesting additional notes:

  1. The first lockloss with the FSS_OSCILLATION tag was on Tuesday September 17th, during the fire alarm test; we've since had 28 more locklosses (from all states) with this tag. Could either the Tuesday maintenance activities (ETMX DAC swapped) in alog 80153 or the fire alarm lockloss 1410638748 have started this?
  2. In the FSS_OSCILLATION tagged locklosses (and some that aren't tagged), H1:ASC-AS_A_DC_NSUM_OUT_DQ and H1:IMC-TRANS_OUT_DQ are losing lock at the same time, which is very rare in O4. Iain Morton checked in G2401576 (tagged locklosses "SAME") that we'd had < 3 of these locklosses in the whole of O4 before the emergency vent. In G2201762 O3a_O3b_summary.pdf, I called this a TOGETHER and FAST lockloss, and we saw it exclusively in the second half of O3b. A normal lockloss has AS_A losing lock > 100ms before the IMC. See the attached plot of the last 3 NLN locklosses: the left plot is normal and the right two are the strange type where the IMC loses lock at the same time (a sketch of this timing check is below).
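
A minimal sketch of that timing check, assuming the channel data (e.g. H1:ASC-AS_A_DC_NSUM_OUT_DQ and H1:IMC-TRANS_OUT_DQ around the lockloss time) has already been fetched as numpy arrays; the thresholds and how the data are loaded are assumptions, and only the ~100 ms window comes from the description above.

import numpy as np

def drop_time(data, t, threshold):
    """Time of the first sample where `data` falls below `threshold` (or None)."""
    below = np.flatnonzero(data < threshold)
    return t[below[0]] if below.size else None

def classify_lockloss(as_a, imc_trans, t, as_thresh, imc_thresh, together_window=0.1):
    """Return 'TOGETHER/FAST' if both channels drop within ~100 ms, else 'normal'."""
    t_as = drop_time(as_a, t, as_thresh)
    t_imc = drop_time(imc_trans, t, imc_thresh)
    if t_as is None or t_imc is None:
        return "no drop found"
    return "TOGETHER/FAST" if abs(t_imc - t_as) < together_window else "normal"
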
Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 07:37, Monday 30 September 2024 (80365)
Ops Day Shift Start

TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.13 μm/s
QUICK SUMMARY: H1 has been locked and observing for almost 4 hours. One lockloss overnight, which doesn't have an obvious cause but looks to have a sizeable ETMX glitch. Commissioning time is scheduled today from 15:30 to 18:30 UTC.

H1 General
oli.patane@LIGO.ORG - posted 22:04, Sunday 29 September 2024 (80363)
Ops Eve Shift End

TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Reacquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Trying to relock and at LOCKING_ALS. Relocking from the previous lockloss wasn't too bad so hopefully this relocking goes smoothly as well. We had superevent candidate S240930aa during my shift!
LOG:

23:00UTC Observing and have been Locked for 2 hours

23:32 Lockloss
    00:09 Started an initial alignment after cycling through CHECK_MICH_FRINGES
    00:47 Initial alignment done, relocking
01:28 NOMINAL_LOW_NOISE
01:32 Observing

04:00 Superevent S240930aa

04:54 Lockloss

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
21:43 | PEM | Robert | Y-arm | n | Testing equipment | 23:33
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:59, Sunday 29 September 2024 - last comment - 21:16, Monday 30 September 2024(80364)
Lockloss

Lockloss @ 09/30 04:54UTC

We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.

Comments related to this report
ryan.short@LIGO.ORG - 09:56, Monday 30 September 2024 (80368)PSL, SQZ

It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before the lockloss) was the SQZ PMC PZT exceeding its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, and Guardian is usually successful in bringing things back so that H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.

However, when looking at Guardian logs around this time, I noticed that one of the first things that could indicate a cause for the lockloss came from the IMC_LOCK Guardian, which at 04:54:00 UTC reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). It's unclear as of yet why this would've happened and caused this lockloss.

Other than the ISS, I don't notice any other obvious cause for this lockloss.

Images attached to this comment
oli.patane@LIGO.ORG - 21:16, Monday 30 September 2024 (80384)

After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not; see attached.

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 16:37, Sunday 29 September 2024 (80357)
Ops Day Shift Summary

TITLE: 09/29 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: H1 was locked this morning until winds picked up, which kept it down through midday. Was able to get relocked this afternoon, but we just had another lockloss as I'm drafting this log.

LOG:

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | Ongoing
17:01 | PEM | Robert | Y-arm | n | Testing equipment | 18:34
21:43 | PEM | Robert | Y-arm | n | Testing equipment | Ongoing
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:36, Sunday 29 September 2024 - last comment - 18:30, Sunday 29 September 2024(80360)
Lockloss

Lockloss @ 09/29 23:32UTC. Ryan S and I had been watching the FSS channels glitching, but Ryan closed them literally seconds before the lockloss happened, so we're not yet sure if it was the FSS, though we think it might've been. Wind is also above 30mph and increasing, so it could've been that too.

Comments related to this report
oli.patane@LIGO.ORG - 18:05, Sunday 29 September 2024 (80361)

Even though the wind was (just a bit) over threshold, it definitely looks like the FSS (green, lower right) was the cause of the lockloss

Images attached to this comment
oli.patane@LIGO.ORG - 18:30, Sunday 29 September 2024 (80362)

09/30 01:30UTC Observing

H1 General
oli.patane@LIGO.ORG - posted 16:05, Sunday 29 September 2024 (80359)
Ops Eve Shift Start

TITLE: 09/29 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 27mph Gusts, 16mph 3min avg
    Primary useism: 0.10 μm/s
    Secondary useism: 0.15 μm/s
QUICK SUMMARY:

Observing at 150Mpc and have been Locked for 2 hours. Winds are still above 25mph but microseism is getting lower.

H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 10:16, Sunday 29 September 2024 - last comment - 14:11, Sunday 29 September 2024(80354)
Lockloss @ 16:51 UTC

Lockloss @ 16:51 UTC - link to lockloss tool

Cause looks to me to be from high winds; gusts just hit up to 40mph and suspensions were moving right before the lockloss in a familiar way.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 14:11, Sunday 29 September 2024 (80356)

H1 back to observing at 21:09 UTC. After a few hours, thanks to a lull in the wind, H1 was able to get relocked. Fully automatic acquisition.
