TITLE: 02/22 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 3mph Gusts, 0mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.45 μm/s
QUICK SUMMARY:
H1's been locked almost 4hrs. H1 has not shown the glitchy behavior for the last 12hrs (on Omicron) and our violins are looking much better, too!
Environmentally, secondary microseism has continued its increase and is now squarely at the 95th percentile; winds are calm, but there was a small windstorm 3-5 hours ago with gusts above 20 mph.
Calibration is scheduled for 1130 PST (1930 UTC).
TITLE: 02/22 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 was locked for 6 hours and 58 minutes...
And then... an unknown lockloss 15 minutes before the end of the shift.
https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi?event=1424238353
I had H1 do an Initial alignment and started normal locking before leaving.
The IY mode 5 violin mode may need to be adjusted again, as it was previously set to FM6, FM8, FM10 with a gain of +0.01 instead of its nominal state, which was working well.
LOG:
No Log
Strange dip in H0:VAC-MY_FAN2_270 two and a half days ago.
Averaging Mass Centering channels for 10 [sec] ...
2025-02-21 19:13:29.266265
There are 16 T240 proof masses out of range ( > 0.3 [V] )!
ETMX T240 2 DOF X/U = -0.872 [V]
ETMX T240 2 DOF Y/V = -1.013 [V]
ETMX T240 2 DOF Z/W = -0.359 [V]
ETMY T240 3 DOF X/U = 0.313 [V]
ITMX T240 1 DOF X/U = -1.715 [V]
ITMX T240 1 DOF Y/V = 0.441 [V]
ITMX T240 1 DOF Z/W = 0.52 [V]
ITMX T240 2 DOF Y/V = 0.337 [V]
ITMX T240 3 DOF X/U = -1.779 [V]
ITMY T240 3 DOF X/U = -0.723 [V]
ITMY T240 3 DOF Z/W = -2.288 [V]
BS T240 1 DOF Y/V = -0.325 [V]
BS T240 3 DOF Z/W = -0.397 [V]
HAM8 1 DOF X/U = -0.307 [V]
HAM8 1 DOF Y/V = -0.421 [V]
HAM8 1 DOF Z/W = -0.687 [V]
All other proof masses are within range ( < 0.3 [V] ):
ETMX T240 1 DOF X/U = 0.091 [V]
ETMX T240 1 DOF Y/V = 0.038 [V]
ETMX T240 1 DOF Z/W = 0.101 [V]
ETMX T240 3 DOF X/U = 0.107 [V]
ETMX T240 3 DOF Y/V = 0.05 [V]
ETMX T240 3 DOF Z/W = 0.051 [V]
ETMY T240 1 DOF X/U = 0.129 [V]
ETMY T240 1 DOF Y/V = 0.217 [V]
ETMY T240 1 DOF Z/W = 0.287 [V]
ETMY T240 2 DOF X/U = -0.034 [V]
ETMY T240 2 DOF Y/V = 0.248 [V]
ETMY T240 2 DOF Z/W = 0.096 [V]
ETMY T240 3 DOF Y/V = 0.168 [V]
ETMY T240 3 DOF Z/W = 0.19 [V]
ITMX T240 2 DOF X/U = 0.229 [V]
ITMX T240 2 DOF Z/W = 0.295 [V]
ITMX T240 3 DOF Y/V = 0.172 [V]
ITMX T240 3 DOF Z/W = 0.164 [V]
ITMY T240 1 DOF X/U = 0.13 [V]
ITMY T240 1 DOF Y/V = 0.166 [V]
ITMY T240 1 DOF Z/W = 0.083 [V]
ITMY T240 2 DOF X/U = 0.054 [V]
ITMY T240 2 DOF Y/V = 0.286 [V]
ITMY T240 2 DOF Z/W = 0.192 [V]
ITMY T240 3 DOF Y/V = 0.123 [V]
BS T240 1 DOF X/U = -0.126 [V]
BS T240 1 DOF Z/W = 0.193 [V]
BS T240 2 DOF X/U = -0.002 [V]
BS T240 2 DOF Y/V = 0.106 [V]
BS T240 2 DOF Z/W = -0.047 [V]
BS T240 3 DOF X/U = -0.099 [V]
BS T240 3 DOF Y/V = -0.283 [V]
Assessment complete.
Averaging Mass Centering channels for 10 [sec] ...
2025-02-21 19:16:03.265814
There are 1 STS proof masses out of range ( > 2.0 [V] )!
STS EY DOF X/U = -2.336 [V]
All other proof masses are within range ( < 2.0 [V] ):
STS A DOF X/U = -0.46 [V]
STS A DOF Y/V = -0.842 [V]
STS A DOF Z/W = -0.555 [V]
STS B DOF X/U = 0.251 [V]
STS B DOF Y/V = 0.955 [V]
STS B DOF Z/W = -0.313 [V]
STS C DOF X/U = -0.86 [V]
STS C DOF Y/V = 0.793 [V]
STS C DOF Z/W = 0.694 [V]
STS EX DOF X/U = 0.016 [V]
STS EX DOF Y/V = -0.038 [V]
STS EX DOF Z/W = 0.123 [V]
STS EY DOF Y/V = -0.066 [V]
STS EY DOF Z/W = 1.362 [V]
STS FC DOF X/U = 0.192 [V]
STS FC DOF Y/V = -1.1 [V]
STS FC DOF Z/W = 0.662 [V]
Assessment complete.
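For reference, a minimal sketch of the kind of check behind the output above: average each mass-centering channel over 10 s and flag anything beyond the quoted threshold. The channel names and the use of cdsutils here are illustrative assumptions, not the actual assessment script.

import cdsutils

T240_LIMIT = 0.3    # [V] T240 out-of-range threshold quoted above
AVG_SECONDS = 10    # averaging window used by the assessment

# Hypothetical mass-centering monitor channels, one per proof-mass DOF
channels = [
    'H1:ISI-ETMX_ST1_T240_2_DOF_X_MON',
    'H1:ISI-ETMX_ST1_T240_2_DOF_Y_MON',
]

for chan in channels:
    value = cdsutils.avg(AVG_SECONDS, chan)   # 10 s average of the channel
    flag = 'OUT OF RANGE' if abs(value) > T240_LIMIT else 'ok'
    print('%s = %.3f [V]  %s' % (chan, value, flag))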
TITLE: 02/22 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.40 μm/s
QUICK SUMMARY:
H1 is currently locked and has been Observing for 1.5 hours.
Our plan is to continue to Observe for the rest of the night.
I am told I likely need to keep an eye on PI mode 8 and on violin mode IY mode 5, as the latter may require non-nominal settings.
PI mode 8 channel to watch: H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON
Since the Ring Heater settings were changed, I'll likely have to do an Initial Alignment after each Lockloss.
Everything else seems to be functioning normally.
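For keeping an eye on the PI mode 8 channel named above, something like the gwpy snippet below can pull a recent trend of it; gwpy availability and the alert level are my assumptions, with only the channel name taken from this entry.

from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

CHANNEL = 'H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON'
ELEVATED = -1.0   # placeholder level; compare against a known-quiet stretch in practice

end = int(tconvert('now'))
data = TimeSeries.get(CHANNEL, end - 3600, end)   # last hour of data
peak = data.max().value
note = '(elevated, worth a look)' if peak > ELEVATED else '(quiet)'
print('%s peak over the last hour: %.2f %s' % (CHANNEL, peak, note))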
TITLE: 02/21 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
LOG:
WP12339
Dave:
The offload of the past 6 months of raw minute trend files from h1daqtw1's SSD-RAID to permanent storage is complete (disk usage was reduced from 92% to 2%).
To test the new h1digivideo4 server we moved four production cameras from servers running the old software to the new server. This required changes to the camera overview MEDM, which up to this time had been hand edited.
I took this opportunity to write a python program to generate the camera overview MEDM (see attached).
The main changes are:
. cameras are sorted alphabetically by name, not by camera number.
. cameras are not grouped by the server machine they run on.
. camera data is stored in a yaml database. Camera details can be viewed by pressing the INFO buttons.
. process control buttons are provided for each camera (VID0 = h1digivideo0, etc). Servers running the new code have green text (VID3 and VID4).
SITEMAP.adl has been upgraded to use the new overview. To open the old overview there is a "Traditional Overview" button provided in the bottom right corner.
Paths for the generator code and the yaml database file are shown at the bottom of the window.
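As a rough illustration of the generator described above (not Dave's actual code), the core of such a script is just loading the yaml database, sorting cameras alphabetically, and emitting a block per camera; the yaml schema, file paths, and plain-text output here are placeholders, since the real program writes MEDM widgets.

import yaml

with open('cameras.yaml') as f:      # hypothetical database path
    cameras = yaml.safe_load(f)      # e.g. {'MC1': {'server': 'h1digivideo4', 'number': 26}, ...}

lines = []
for name in sorted(cameras):         # sorted by camera name, not camera number
    info = cameras[name]
    # The real generator emits MEDM widgets here (label, INFO button,
    # per-server process control buttons); this sketch just lists them.
    lines.append('%-12s server=%s' % (name, info.get('server', '?')))

with open('camera_overview.txt', 'w') as f:
    f.write('\n'.join(lines) + '\n')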
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) looks to be working for now - see attached screenshot.
The other settings I tried, which did not work, are listed below:
zero phase Gain 0.01 (IY05 increasing, IY06 decreasing).
-30deg phase gain 0.01 (IY05 increasing, IY06 decreasing).
We lost lock, and during the next lock Corey applied the above settings (in bold), which seem to be working fine. Hence I will continue with these for the next few lock stretches. I am not committing them to lscparams yet since things can still change.
FM6 + FM8 + FM10 Gain +0.01 (+30deg phase) have been committed to lscparams for ITMY mode 05/06.
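As a sketch of how settings like these might be applied from a Guardian/ezca session (the filter bank name below is my assumption for ITMY violin mode 05, not verified, and the commands are illustrative rather than the exact ones used):

from ezca import Ezca

ezca = Ezca()                        # picks up the IFO from the environment
bank = 'SUS-ITMY_L2_DAMP_MODE5'      # assumed bank name for ITMY violin mode 05

ezca.switch(bank, 'FM6', 'FM8', 'FM10', 'ON')   # engage the +30 deg phase filters
ezca.write(bank + '_GAIN', 0.01)                # small positive damping gain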
We lost lock because of PI 24 in our last lock. We are also seeing a correlation between PI channels and loud glitches in H1 (Derek made some nice plots in 82961).
We've stepped up the EY ring heater, both upper and lower segments, from 1.2 W to 1.3 W. It was previously stepped up in December (81890, 81891) and in September (80320).
We made the step after Corey had the IFO locked with all ASC on; to deal with the alignment shift, we will probably need an initial alignment after the next lockloss.
Fri Feb 21 10:10:02 2025 INFO: Fill completed in 9min 59secs
Jordan confirmed a good fill curbside. TCmins [-77C, -75C] OAT (+1C, 33F) DeltaTempTime 10:10:05
There have been a few other alogs about this already:
H1's range is slightly improved after the initial drop (possibly due to PRCL and A2L tuning in the commissioning window), but the large glitches are still present. They were not present in the early part of yesterday's lock, but re-appeared after TJ took a calibration measurement around 22 UTC (they were present all night and caused a retraction of a GW alert), and they are not present so far in the current lock, which was automatic. Looking at the summary page plots, the glitches that come and go are those with SNR between 10 and 30, at 60 Hz and just below 50 Hz. (I can't read the precise frequency from the summary page plot to know whether these glitches line up well with the PR3 roll mode or not.) When the glitches are on, they have a rate of something like 2e-2 Hz (for SNR between 10 and 20), which means roughly a glitch every 50 seconds, more if we include the SNR 20-30 glitches. Hveto doesn't identify an auxiliary channel that can veto these glitches.
Looking at the range plot, there seem to be pretty regular drops in range when the glitches at 50 and 60 Hz are present; zooming in, the spacing between these range drops is 5-6 minutes, although I'm using a channel that only updates every minute. Time series of the DCPD sum or ESD drive channels don't show an obvious way to get a better handle on the timing of these larger, less frequent glitches.
The correlation with the PI channel found by Jane's Lasso run (82944) seems to continue: the PI channel keeps being rung up in the time periods when the range has these drops every 5 minutes.
Editing to add:
It looks like the MODE8 channel first became elevated like this on Feb 5th, and there have been a number of instances since where this channel was elevated along with glitches at 60 Hz and just below 50 Hz:
Following up Sheila's investigation, I've made a set of slides comparing the glitching seen in the strain channel against the 10.4 kHz PI channel for many of the times highlighted above. We can see a clear correlation between this channel and the presence of the glitching in strain.
Within the glitchy periods, there seem to be correlations between the MODE8 (TMS X QPDs bandpassed from 10 kHz to 10.8 kHz) and MODE24 (DCPDs) monitors and the range drops that are somewhat regular during these glitchy periods (see screenshot).
We don't have an equivalent monitor set up for TMS Y QPDs.
Daniel and I looked at this time in DTT watching exponential averages, and it does seem that the mode at 10432 Hz is going up and down by a few orders of magnitude; at least once this seems to happen after the glitch that shows up in the GW band. The mode directly below it is also going up and down. Watching this with exponential averaging tends to crash DTT, but some screenshots are attached.
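For anyone wanting to reproduce this kind of check offline, a bandpassed RMS trend in the style of the MODE8 monitor (10-10.8 kHz bandpass, then RMS) can be sketched with gwpy as below; the channel name and GPS times are placeholders, and any channel acquired fast enough to cover 10.8 kHz would work.

import numpy as np
from gwpy.timeseries import TimeSeries

raw = TimeSeries.get('H1:EXAMPLE_HIGH_RATE_CHANNEL', 1424200000, 1424200600)
band = raw.bandpass(10000, 10800)   # isolate the 10-10.8 kHz band
rms = band.rms(1.0)                 # 1 s RMS trend of the bandpassed signal
log_rms = np.log10(rms.value)       # compare against NORMLOG10RMSMON-style traces
print(log_rms.max())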
HEPI pump stations are running smoothly, no new leaks.
EX Trip is 6-7/8 running at 7
EY Trip is 8-1/8, running at 8-3/16; will adjust level Monday with Jim
Corner Trip is 5-5/16 running at 5-5/8
Laser Status:
NPRO output power is 1.847W
AMP1 output power is 71.66W
AMP2 output power is 141.6W
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 16 days, 20 hr 5 minutes
Reflected power = 23.88W
Transmitted power = 105.7W
PowerSum = 129.6W
FSS:
It has been locked for 0 days 3 hr and 18 min
TPD[V] = 0.7434V
ISS:
The diffracted power is around 3.3%
Last saturation event was 0 days 3 hours and 18 minutes ago
Possible Issues:
PMC reflected power is high
Now that I have seen in alogs () that ITMY modes 5 & 6 have returned to causing grief, I am doing a morning check after 2 hrs of lock. Currently, ITMY mode 05 was taken to the following via guardian:
Will continue monitoring.
Since IY06 was growing consistently, I have switched off damping for some time and will look for another setting.
I tried the following settings this morning and none of them worked:
-60 deg phase gain -0.01 (IY05 decreasing, IY06 increasing)
-30 deg phase gain -0.01 (IY05 increasing, IY06 increasing)
+30 deg phase gain -0.01 (IY05 increasing, IY06 increasing)
TJ, Oli
Starting around 02/20 at 11:30 UTC, the range had a step down, where it stayed for the rest of the lock. After this step down, the range was also much noisier than it had been before the step (ndscope1). Jane Glanzer ran Lasso for us during this lock stretch (lasso), and the top channel that came back with the highest correlation to this range drop was H1:SUS-PI_PROC_COMPUTE_MODE8_NORMLOG10RMSMON, with the other channels all having much lower correlation coefficients. This was weird to us because we bandpass and downconvert mode 8 to monitor it, but we don't actively watch or damp it, and we don't even turn on its PLL. When you plot the correlated channel along with related PLL channels (SUS-PI_PROC_COMPUTE_MODE8_PLL_AMP_FILT_OUTPUT, SUS-PI_PROC_COMPUTE_MODE8_PLL_LOCK_ST, and SUS-PI_PROC_COMPUTE_MODE8_PLL_ERROR_SIG), you can see there was some weird noise in several of these channels that started when the range dropped (ndscope2).
I tried plotting a series of PI_PROC_COMPUTE_MODE channels for every mode (there are 32 in total), and out of all of them, only mode 8 and mode 24 showed any change in their respective channels around the time of the range drop (ndscope3). That it is only these two is very interesting. PI mode 8 comes from the TRX QPD and has a bandpass between 10 and 10.8 kHz. Like I mentioned earlier, we do not actively do anything to this channel. Mode 24, on the other hand, we definitely monitor and are damping a lot of the time. Mode 24 is read in from the DCPDs and has its own bandpass, with the PI centered around 10.431 kHz. It is damped by using the ESD on ETMY. Mode 24 actually has more channels that correlate better and have larger amplitudes than mode 8, but the mode 8 NORMLOG10RMSMON correlated best with lasso over the entire lock stretch.
Zooming into when the range drop started yesterday, we actually see that the large drop in range happened about 12 minutes before we see the huge error signals in the mode 24 PLL, and it is at the very beginning of the rise in mode 8 and 24 NORMLOG10RMSMON (ndscope4).
Zooming out, this excess noise in those PI channels seems to have started early on February 5th, i.e. after relocking following the February 4th maintenance day (ndscope5). On other days, though, the range doesn't seem to have been affected by this noise, at least nowhere near the amount it was affected yesterday. Sometimes during these periods of noise the range won't be very good (below 155), but other times we'll see this noise in modes 8 and 24 and still be right around or above 160.
Since it looks like the range drop yesterday started before the PI channels really started changing, we're pretty sure the issue is somewhere else and is just bleeding into these two downconverted channels. Because these two PI channels go through bandpasses in the 10 kHz regime, there might be something in that frequency range. It is interesting, though, that another PI channel we actively monitor, mode 31, which is also in the same frequency area (centered at 10.428 kHz), is read from the OMC DCPDs, and is damped using ETMY, all just like PI24, doesn't seem to show any coupling into its channels.
Either way, pretty good leads were made this morning towards finding the actual cause of this range drop. TJ and Camilla looked into the many glitches that appeared then at 60 and 48 Hz, and noted that the line at 46.1 Hz had grown, which is a known PR3 roll mode (82924).
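For context on the Lasso step mentioned above, the ranking idea is just a sparse linear fit of the range against many auxiliary channel trends; the sketch below uses scikit-learn with placeholder arrays and is not the actual lasso tool Jane ran.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# aux: (n_minutes, n_channels) minute trends of auxiliary channels
# rng: (n_minutes,) range in Mpc; both are placeholders here
aux = np.random.randn(600, 50)
rng = np.random.randn(600)

X = StandardScaler().fit_transform(aux)          # put channels on an equal footing
y = rng - rng.mean()

model = Lasso(alpha=0.1).fit(X, y)               # sparse fit: most coefficients go to zero
ranking = np.argsort(np.abs(model.coef_))[::-1]  # largest |coefficient| = most correlated
print('top channel indices:', ranking[:5])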
The glitching related to this range drop appears to have subsided in the most recent lock. Comparison of glitchgrams from
When these glitches were occurring, they appeared on roughly a 6-minute cadence.