No new damage noted. First pic is EX, second is EY.
h1iscey frontend server was replaced because of a possible bad PCI slot that was preventing connection to one of the Adnaco PCIe expansion boards in the IO Chassis; see here: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80035
The new frontend is running without issue, connected to all four Adnacos.
The new frontend model is Supermicro X11SRL-F, the same as the old frontend.
Tue Oct 01 08:09:24 2024 INFO: Fill completed in 9min 20secs
VACSTAT issued alarms for a BSC vacuum glitch at 02:13 this morning. Attachment shows main VACSTAT MEDM, the latch plots for BSC3, and a 24 hour trend of BSC2 and BSC3.
The glitch is a square wave, 3 seconds wide. The pressure rises from 5.3e-08 to 7.5e-08 Torr for these 3 seconds. Neighbouring BSC2 shows no glitch in the trend.
Looks like a PT132_MOD2 sensor glitch.
vacstat_ioc.service was restarted on cdsioc0 at 09:04 to clear the latched event.
This is the second false-positive VACSTAT PT132 glitch detected.
02:13 Tue 01 Oct 2024
02:21 Sun 01 Sep 2024
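Not part of the alog itself, but as a minimal sketch of the kind of excursion flagged here (a few-second square-wave step on an otherwise flat pressure trend): the channel cadence, relative threshold, and maximum width below are assumptions, not the actual VACSTAT logic.

import numpy as np

def find_glitches(pressure, dt=1.0, rel_threshold=1.3, max_width=10.0):
    """Return (start_index, width_seconds) for short excursions above baseline."""
    baseline = np.median(pressure)
    above = pressure > rel_threshold * baseline
    glitches = []
    i = 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            width = (j - i) * dt
            if width <= max_width:   # short step, not a sustained pressure rise
                glitches.append((i, width))
            i = j
        else:
            i += 1
    return glitches

# Example with the numbers from the latch plot: 3 samples at 7.5e-8 Torr
trend = np.full(600, 5.3e-8)
trend[120:123] = 7.5e-8
print(find_glitches(trend))   # -> [(120, 3.0)]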
The FSS glitching isn't seen in this lockloss, which ended an 11-hour lock, but our old friend the DARM wiggle reared its head again. (Attachment 1)
Useism came up fast, so many of the signals are moving a bit more, but DARM sees a 400ms ringup that I don't see anywhere else. Again, the FSS didn't seem to be glitching at this time.
TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Preventive maintenance
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.73 μm/s
QUICK SUMMARY: The IFO had just lost lock at low noise coil drivers as I arrived. I put it into IDLE, and maintenance is already starting. Useism has grown to over the 90th percentile, which is perhaps the cause of the lockloss, but I still need to look into this.
Workstations and wall displays were updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 10/01 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Observing at 140Mpc and have been locked for 7 hours. Range is still low unfortunately, but it's been a quiet evening, and we did get another superevent!
LOG:
23:00 Observing and have been Locked for 1 hour
23:47 Superevent S240930du
We've been Observing since before the start of my shift and have been Locked for 5 hours. Our range hasn't gotten better, and I haven't had a chance to tune up the squeezing. Squeezing is pretty bad, with both the 350Hz and 1.7kHz bands around -1.2; the lowest they reached was ~-1.5 a couple of hours into the lock. Besides the abysmal range, everything is going well.
1. The average duty cycle for Hanford for this week was 69.25%, with a high of 95.6% on Friday.
2. The BNS range fluctuated between 150-160 Mpc; the lower end (150 Mpc) is due to high winds.
3. Strain lines in h(t): some of the strain lines, especially at lower frequency, can be explained by ground motion. Noise lines appear in the h(t) spectrogram every day at 500Hz; most of them can be correlated with times just after H1 gets locked back, but some persist. I have attached a plot highlighting this.
4. Some glitch clusters happen right before losing lock and can be explained by comparing to alogs; others (the one on Tuesday) are unexplained.
5. Locklosses: some locklosses can be explained by increased ground motion (e.g. the Monday, Wednesday, and Friday locklosses). First FSS_OSCILLATION lockloss from NLN in a year on Wednesday 18th Sept. Another lockloss tagged FSS on Thursday. Lockloss with the FSS oscillation tag on Saturday.
6. PEM tab: most of the noise in the corner BSCs happens when H1 is not observing. Noise in acoustics can be related to environmental factors like wind. Recurring noise line in BSC1 motion X (and sometimes other BSCs) at ~11 Hz every day starting at 15:00 UTC.
7. H1:ASC-DHARD_Y_OUT_DQ and H1:PEM-EX_EFM_BSC9_ETMX_Y_OUT_DQ are present multiple times in the hveto tab; there are also some new channels.
8. The fscan plots have appeared, and the line count varies from 600 to ~850 with an average of 684.
TITLE: 09/30 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Planned commissioning time this morning, then a lockloss early afternoon with a simple relocking process afterwards made for a relatively light day.
H1 has now been locked and observing for 1.5 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LVEA | YES | LVEA is laser HAZARD | Ongoing |
16:45 | CAL | Tony, Jackie | PCal Lab | local | Measuring | 17:26 |
17:20 | FAC | Kim | MX | n | Technical cleaning | 18:27 |
17:32 | PEM | Robert | LVEA | - | Shaker tests | 18:32 |
TITLE: 09/30 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 139Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.17 μm/s
QUICK SUMMARY:
Observing at 140 Mpc and have been Locked for 1.5 hours. Might drop out of Observing to run sqz align if Livingston loses lock or also drops out.
WP 12095
Damaged lightning rod and mounting bracket on the OSB viewing platform replaced.
Lockloss @ 20:26 UTC - link to lockloss tool
This would appear to be another FSS-related lockloss: the IMC and arms lost lock at the same time, we see some glitches in the FSS fast channel, and nothing else jumps out to me as another cause.
H1 back to observing at 22:04 UTC. I needed to adjust SRM slightly to lock DRMI, but otherwise a fully automated relock.
I tried Elenna's FM6 from 80287; this made the PRCL coupled noise worse, see first attached plot.
Then Ryan turned off the CAL lines and we retook the preshaping (PRCLFF_excitation_ETMYpum.xml) and PRCL injection (PRCL_excitation.xml) templates. I took the PRCL_excitation.xml injection with the PRCL FF off and increased the amplitude from 0.02 to 0.05 to increase coherence above 50Hz. Exported as prclff_coherence/tf.txt and prcl_coherence/tf_FFoff.txt. All in /opt/rtcds/userapps/release/lsc/h1/scripts/feedforward
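Not the DTT templates used above, but as a rough illustration of the coherence check: assuming the injection period is saved as a two-column text file (file name, sample rate, and column layout are all assumptions here), something like this estimates the excitation-to-PRCL coherence above 50 Hz.

import numpy as np
from scipy.signal import coherence

fs = 16384  # assumed sample rate of the LSC channels
# Assumed file: column 0 = excitation drive, column 1 = PRCL error signal
data = np.loadtxt("prcl_injection_timeseries.txt")
exc, prcl = data[:, 0], data[:, 1]

# Welch coherence estimate, ~0.25 Hz resolution with 4-second segments
f, coh = coherence(exc, prcl, fs=fs, nperseg=fs * 4)
band = (f > 50) & (f < 200)
print(f"median coherence 50-200 Hz: {np.median(coh[band]):.2f}")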
Elenna pointed out that I tested the wrong filter; the new one is actually FM7, labeled "new0926". We can test that on Thursday.
The "correct" filter in FM7 was tested today and still didn't work, possibly because I still didn't have the correct pre-shaping applied in this fit. I will refit using the nice measurement Camilla took in this alog.
We went out of Observing at 04:53:56, and then lost lock four seconds later at 04:54:01.
It looks like the reason for dropping observing at 04:53:56 UTC (four seconds before the lockloss) was the SQZ PMC PZT exceeding its voltage limit, so it unlocked and Guardian attempted to bring things back up. This has happened several times before, and Guardian is usually successful in bringing things back so that H1 returns to observing within minutes, so I'm not convinced this is the cause of the lockloss.
However, when looking at Guardian logs around this time, I noticed that one of the first things that could indicate a cause for the lockloss came from the IMC_LOCK Guardian, which at 04:54:00 UTC reported "1st loop is saturated" and opened the ISS second loop. While this process was happening over the course of several milliseconds, the lockloss occurred. Indeed, it seems the drive to the ISS AOM and the second-loop output suddenly dropped right before the first loop opened, but I don't notice a significant change in the diffracted power at this time (see attached screenshot). It's still unclear why this would have happened and caused the lockloss.
Other than the ISS, I don't notice any other obvious cause for this lockloss.
After comparing Ryan's channels to DARM, I'm still not sure whether this lockloss was caused by something in the PSL or not, see attached.
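As an aside, a minimal sketch of the kind of channel-vs-DARM comparison described above, assuming NDS access through gwpy. The GPS time and the ISS channel name are placeholders, not the ones actually used in this investigation.

from gwpy.timeseries import TimeSeries

t_lockloss = 1411793658  # placeholder GPS time; substitute the real lockloss time
channels = [
    "H1:GDS-CALIB_STRAIN",           # calibrated strain (DARM)
    "H1:PSL-ISS_SECONDLOOP_OUTPUT",  # assumed name for an ISS second-loop channel
]
for chan in channels:
    ts = TimeSeries.get(chan, t_lockloss - 10, t_lockloss + 2)
    fig = ts.plot()
    fig.savefig(chan.replace(":", "_") + ".png")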
Edit: Neil knows how to make links now. Tony and Camilla were instrumental in this revelation.
This is an update to my previous post (Neil's previous post on this topic) about looking for possible explanations for why similar seismic wave velocities on-site may or may not knock us out of lock.
The same channels are used as before, in addition to:
SEI-*ARM_GND_BLRMS_30M_100M
SEI-*ARM_GND_BLRMS_100M_300M
where * is C, D, X, or Y.
I have looked at all earthquake events in O4b, keeping only the ones which knocked us out of lock; this is to simplify the pattern search, for now. Here are the results.
Total events : 29
Events with ISI y-motion dominant : 11 (30-100 mHz) : 8 (100-300 mHz)
Events with ISI x-motion dominant : 3 (30-100 mHz) : 0 (100-300 mHz)
Events with ISI z-motion dominant : 9 (30-100 mHz) : 19 (100-300 mHz)
Events with ISI xy-motion dominant : 1 : 0 (Both axes are similar in amplitude.)
Events with ISI yz-motion dominant : 0 : 1
Events with ISI xz-motion dominant : 0 : 1
Events with ISI xyz-motion dominant : 5 : 0
Total SEI-*ARM recorded events : 8
CARM dominant events : 7 : 8
C/XARM dominant events : 1 : 0
Conclusion: in the 30-100 mHz band, it is equally likely for either z- or y-axis motion to be dominant. In the 100-300 mHz band, z-axis motion is dominant about twice as often as y-axis motion during lockloss.
Clearly, common modes are a common (^_^) cause of lockloss.
Note that velocity amplitudes should be explored more.
NOTE: All frequency bands are in mHz, not Hz.
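As an aside (not part of the original analysis), here is a minimal sketch of a per-event dominance check for the arm BLRMS channels listed above, assuming the channels follow the SEI-*ARM_GND_BLRMS_* pattern and are fetched with gwpy; the "H1:" prefix, the GPS window, and the use of the peak value as the comparison statistic are assumptions.

from gwpy.timeseries import TimeSeries

AXES = ["C", "D", "X", "Y"]

def dominant_arm_channel(gps_start, gps_end, band="30M_100M"):
    """Return the arm axis whose GND BLRMS peaks highest over the window."""
    peaks = {}
    for ax in AXES:
        chan = f"H1:SEI-{ax}ARM_GND_BLRMS_{band}"  # assumed channel pattern
        ts = TimeSeries.get(chan, gps_start, gps_end)
        peaks[ax] = ts.max().value
    return max(peaks, key=peaks.get)

# Usage with a placeholder GPS window around one lockloss:
# print(dominant_arm_channel(1411793000, 1411794000, band="100M_300M"))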