VACSTAT issued alarms for a BSC vacuum glitch at 02:13 this morning. The attachment shows the main VACSTAT MEDM, the latch plots for BSC3, and a 24-hour trend of BSC2 and BSC3.
The glitch is a square wave, 3 seconds wide; the pressure rises from 5.3e-08 to 7.5e-08 Torr for those 3 seconds. Neighbouring BSC2 shows no glitch in the trend.
This looks like a PT132_MOD2 sensor glitch.
vacstat_ioc.service was restarted on cdsioc0 at 09:04 to clear the latched event.
This is the second false-positive VACSTAT PT132 glitch detected:
02:13 Tue 01 Oct 2024
02:21 Sun 01 Sep 2024
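For reference, a minimal sketch (not VACSTAT itself) of how one could pull the gauge trend around these two latch times with gwpy to compare the square-wave glitches. The PT132 channel name below is an assumption and should be checked against the vacuum MEDM; the latch times should also be checked for local vs UTC.

```python
# Sketch only: trend the (assumed) PT132 pressure channel around the two latches.
from gwpy.time import to_gps
from gwpy.timeseries import TimeSeries

PT132 = 'H0:VAC-LY_Y1_PT132B_PRESS_TORR'    # assumed channel name, check the MEDM
events = ['2024-10-01 02:13', '2024-09-01 02:21']   # latch times from above

for event in events:
    gps = to_gps(event)
    trend = TimeSeries.get(PT132, gps - 120, gps + 120)
    print(event, 'min %.2e Torr  max %.2e Torr' % (trend.min().value, trend.max().value))
```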
The FSS glitching isn't seen in this lockloss, which ended an 11-hour lock, but our old friend the DARM wiggle reared its head again. (Attachment 1)
Useism came up fast, so many of the signals are moving a bit more, but DARM sees a 400 ms ring-up that I don't see elsewhere. Again, the FSS didn't seem to be glitching at this time.
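One quick way to chase this kind of ring-up is a Q-scan of DARM (and the same span in other suspect channels). Below is a sketch only: the GPS time is a placeholder, not the actual lockloss time, and the calibrated strain channel is used for DARM.

```python
# Sketch: Q-scan of DARM around a (placeholder) lockloss time to look at the ring-up.
from gwpy.timeseries import TimeSeries

t0 = 1412345678          # placeholder GPS time, replace with the real lockloss time
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', t0 - 8, t0 + 2)
qspec = darm.q_transform(outseg=(t0 - 1, t0 + 0.5), frange=(10, 512))
plot = qspec.plot()
plot.gca().set_yscale('log')
plot.savefig('darm_ringup_qscan.png')
```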
TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.73 μm/s
QUICK SUMMARY: The IFO had just lost lock at low noise coil drivers as I arrived. I put it into IDLE, and maintenance is already starting. Useism has grown to above the 90th percentile, perhaps the cause of the lockloss, but I still need to look into this.
Workstations and wall displays were updated and rebooted. This was an OS package update. Conda packages were not updated.
TITLE: 10/01 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Observing at 140Mpc and have been locked for 7 hours. Range is still low, unfortunately, but it's been a quiet evening and we did get another superevent!
LOG:
23:00 Observing and have been Locked for 1 hour
23:47 Superevent S240930du
We've been Observing since before the start of my shift and have been Locked for 5 hours. Our range hasn't gotten better, and I haven't had a chance to tune up the squeezing. Squeezing is pretty bad, with both the 350Hz and 1.7kHz bands around -1.2; the lowest they got was ~-1.5 a couple of hours into the lock. Besides the abysmal range, everything is going well.
1. The average duty cycle for Hanford this week was 69.25%, with a high of 95.6% on Friday.
2. The BNS range fluctuated between 150-160 Mpc; the lower end (150 Mpc) is due to high winds.
3. Strain lines in h(t): some of the strain lines, especially at lower frequency, can be explained by ground motion. Noise lines appear in the h(t) spectrogram every day at 500Hz; most of them can be correlated with H1 getting locked back, but some of these persist. I have attached a plot highlighting it.
4. Some glitch clusters happen right before losing lock and can be explained by comparing to alogs; others (the one Tuesday) are unexplained.
5. Locklosses: some can be explained by increased ground motion (e.g. the Monday, Wednesday, and Friday locklosses). First FSS_OSCILLATION lockloss from NLN in a year on Wednesday 18th Sept. Another lockloss tagged FSS on Thursday. Lockloss with FSS oscillation tag on Saturday.
6. PEM tab: most of the noise in the corner BSCs happens when H1 is not observing. Noise in acoustics can be related to environmental factors like wind. Recurring noise line in BSC1 motion X (and sometimes other BSCs too) at ~11 Hz every day starting at 15:00 UTC.
7. H1:ASC-DHARD_Y_OUT_DQ and H1:PEM-EX_EFM_BSC9_ETMX_Y_OUT_DQ are present multiple times in the hveto tab; there are also some new channels.
8. The fscan plots have appeared and the line count varies from 600 to ~850, with an average of 684.
TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Planned commissioning time this morning, then a lockloss early afternoon with a simple relocking process afterwards made for a relatively light day.
H1 has now been locked and observing for 1.5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | Ongoing |
16:45 | CAL | Tony, Jackie | PCal Lab | local | Measuring | 17:26 |
17:20 | FAC | Kim | MX | n | Technical cleaning | 18:27 |
17:32 | PEM | Robert | LVEA | - | Shaker tests | 18:32 |
TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Observing at 140 Mpc and have been Locked for 1.5 hours. Might drop out of Observing to run sqz align if Livingston loses lock or also drops out.
WP 12095
Damaged lightning rod and mounting bracket on the OSB viewing platform replaced.
Lockloss @ 20:26 UTC - link to lockloss tool
This would appear to be another FSS-related lockloss, as evidenced by the IMC and arms losing lock at the same time and some glitches in the FSS fast channel; nothing else jumps out to me as another cause.
H1 back to observing at 22:04 UTC. I needed to adjust SRM slightly to lock DRMI, but otherwise a fully automated relock.
Sheila, Camilla
As Naoki/Vicky started, I moved ZM4, ZM6, and ZM5 by -50urad and then +50urad and recorded the change in AS_A/B_AS42_PIT/YAW. It may have been easier if I'd changed the AS_A/B AS42 offsets to zero the PIT/YAW outputs to start with...
Plot attached of ZM4, ZM5 and ZM6.
There is some cross coupling, and ZM5 gave very strange results in pitch and yaw, with overshoot and the same AS42 direction recorded for different directions of alignment. This suggests we should use ZM4 and ZM6, as we do for our SCAN_SQZ_ALIGNMENT script.
Sensing/Input matrices calculated using /sqz/h1/scripts/ASC/AS42_sensing_matrix_cal.py
Using ZM4 and ZM6.
PIT Sensing Matrix is:
[[-0.0048 -0.0118]
[ 0.0071 0.0016]]
PIT Input Matrix is:
[[ 21.02496715 155.05913272]
[-93.29829172 -63.07490145]]
YAW Sensing Matrix is:
[[-0.00085 -0.009 ]
[ 0.0059 0.0029 ]]
YAW Input Matrix is:
[[ 57.2726375 177.74266811]
[-116.52019354 -16.78680754]]
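As a sanity check (this is not the AS42_sensing_matrix_cal.py script itself), the quoted input matrices are, to the precision shown, just the inverses of the measured sensing matrices. A minimal numpy check, with the row/column ordering (AS_A/AS_B vs ZM4/ZM6) assumed:

```python
# Sketch: the input matrix should be the inverse of the sensing matrix.
import numpy as np

pit_sensing = np.array([[-0.0048, -0.0118],
                        [ 0.0071,  0.0016]])
print(np.linalg.inv(pit_sensing))
# [[ 21.025  155.059]
#  [-93.298  -63.075]]   <- matches the PIT input matrix above

yaw_sensing = np.array([[-0.00085, -0.009 ],
                        [ 0.0059 ,  0.0029]])
print(np.linalg.inv(yaw_sensing))
# [[  57.273  177.743]
#  [-116.520  -16.787]]  <- matches the YAW input matrix above
```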
H1 went out of observing from 15:35 to 18:37 UTC for planned commissioning activities, which included PRCL FF testing, an update to the SQZ_PMC Guardian, shaker tests at HAM1, ITMY compensation plate sweeps, and measuring the AS42 sensing matrix by moving ZMs.
H1 has now been locked for 8 hours.
This is a late entry to list everything we checked on Friday afternoon (which spilled to roughly midnight CT).

-- Current Status --
LHO is currently running the same cal configuration as in 20240330T211519Z. The error reported over the weekend using the PCAL monitoring lines suggests that LHO is well within the 10% magnitude & 10 degree error window after about 15 minutes into each lock (attached image). Here is a link to the 'monitoring line' grafana page that we often look at to keep track of the PCALY/GDS_CALIB_STRAIN response at the monitoring line frequencies. The link covers Saturday to today. This still suggests the presence of calibration error near 33Hz that is not currently understood but, as mentioned earlier, it's now well within the 10%/10deg window. We think that the SRCL offset adjustment in LHO:80334 and LHO:80331 accounts for the largest contribution to the improvement of the calibration at LHO in the past week.

-- What's Been Tried --
No attempts by CAL (me, JoeB, Vlad) to correct the 33Hz error have been successful. Many things were tried throughout the week (mostly on Tuesday & Friday) that went without being properly logged. Here's a non-exhaustive list of checks we've tried (in no particular order):
FAMIS 31053
The NPRO power, amplifier powers, and a few of the laser diode powers inside the amps show some drift over the past several days that tracks together, sometimes inversely. Looking at past checks, this relationship isn't new, but the NPRO power doesn't normally drift around this much. Since it's still not a huge difference, I'm mostly just noting it here and don't think it's a major issue.
The rise in PMC reflected power has definitely slowed down, and has even dropped noticeably in the past day or so, by almost 0.5W.
The operator team has been noticing that we are dropping out of observing for the PMC PZT to relock (80368, 80214, 80206...).
There's already a PZT_checker() in SQZ_MANAGER at FDS_READY_IFO and FIS_READY_IFO to check that the OPO and SHG PZTs are not at the end of their range (the OPO PZT relocks if it's not between 50-110V, the SHG PZT if not between 15-85V). If they are out of range, it requests them to unlock and relock. This is to force them into a better place before we go into observing.
I've added the PMC to the PZT_checker; it will relock if it's outside of 15-85V (full range is 0-100V). SQZ_MANAGER reloaded. Plan to take the GRD through DOWN and back to FDS.
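For context, a hedged sketch of the kind of range check described above. This is not the real SQZ_MANAGER PZT_checker() code, and the PZT readback channel names here are guesses:

```python
# Guardian-style sketch (not SQZ_MANAGER itself) of a PZT voltage range check.
from ezca import Ezca

ezca = Ezca()   # guardian-style EPICS access

# (PZT readback channel, low V, high V): flag for relock if outside the band
PZT_LIMITS = [
    ('SQZ-OPO_PZT_VOLTS', 50, 110),   # assumed channel name
    ('SQZ-SHG_PZT_VOLTS', 15, 85),    # assumed channel name
    ('SQZ-PMC_PZT_VOLTS', 15, 85),    # newly added PMC check (full range 0-100V)
]

def pzts_needing_relock():
    """Return the PZTs currently outside their allowed voltage band."""
    bad = []
    for chan, lo, hi in PZT_LIMITS:
        volts = ezca[chan]
        if not lo < volts < hi:
            bad.append((chan, volts))
    return bad
```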
I tried Elenna's FM6 from 80287, but this made the PRCL coupled noise worse; see the first attached plot.
Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt. All in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward
Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.
The "correct" filter in FM7 was tested today and still didn't work, possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.
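For reference, a scipy sketch of the coherence-weighted transfer function estimate behind exports like the tf.txt files above. The data here are synthetic stand-ins, not the DTT templates named above:

```python
# Sketch with synthetic data: estimate coherence and the transfer function from an
# excitation channel to DARM, keeping only the high-coherence bins.
import numpy as np
from scipy import signal

fs = 2048
rng = np.random.default_rng(0)
exc = rng.standard_normal(fs * 120)                        # stand-in for the PRCL excitation
darm = 2.0 * exc + 0.1 * rng.standard_normal(fs * 120)     # stand-in for DARM

f, coh = signal.coherence(exc, darm, fs=fs, nperseg=4 * fs)
_, pxy = signal.csd(exc, darm, fs=fs, nperseg=4 * fs)
_, pxx = signal.welch(exc, fs=fs, nperseg=4 * fs)
tf = pxy / pxx                                             # H1 estimate, exc -> darm

good = coh > 0.9                                           # trust the TF only where coherent
np.savetxt('tf_sketch.txt',
           np.column_stack([f[good], np.abs(tf[good]), np.degrees(np.angle(tf[good]))]))
```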
We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.
It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before the lockloss) was the SQZ PMC PZT exceeding its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, and Guardian is usually successful in bringing things back so that H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.
However, when looking at Guardian logs around this time, I noticed that one of the first things that could indicate a cause for the lockloss came from the IMC_LOCK Guardian, which at 04:54:00 UTC reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). Unsure as of yet why this would've happened and caused this lockloss.
Other than the ISS, I don't notice any other obvious cause for this lockloss.
After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not; see attached.
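For anyone repeating this comparison, a sketch of overlaying the ISS signals against DARM around the lockloss. The GPS time is a placeholder and the ISS channel names are guesses, not verified:

```python
# Sketch: fetch (assumed) PSL/ISS channels plus DARM around the lockloss and plot them.
from gwpy.timeseries import TimeSeriesDict
from gwpy.plot import Plot

t0 = 1412345678   # placeholder: GPS time of the 04:54:01 UTC lockloss
channels = [
    'H1:PSL-ISS_DIFFRACTION_AVG',        # assumed channel name
    'H1:PSL-ISS_SECONDLOOP_OUTPUT_DQ',   # assumed channel name
    'H1:GDS-CALIB_STRAIN',
]
data = TimeSeriesDict.get(channels, t0 - 30, t0 + 5)
plot = Plot(*data.values(), separate=True, sharex=True)
plot.savefig('iss_vs_darm_lockloss.png')
```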