Ryan S, Ibrahim
Unexpected lockloss is being investigated, but it is confirmed to be neither the IMC nor the environment. Ryan S reports that it is looking like the ETM glitch.
FAMIS 31059
Late entry; these are trends taken on Monday as usual but I neglected to post them until now.
Since troubleshooting of laser glitches is still ongoing, several things in the PSL have been changing more than usual, including enclosure incursions, temperature changes, and ISS diffracted power increases. The only unexpected thing of note is that PMC reflected power seems to have been rising slowly over the past few days, but I think it's too soon to tell if this is a similar increase to what we were seeing here pre-NPRO swap or if it's related to laser troubleshooting. Will certainly be keeping an eye on this.
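To put a number on that slow rise, one could fit a drift rate to a minute-trend of the PMC refl channel. A minimal sketch with synthetic data (the numbers and channel behavior below are made up for illustration, not actual H1 trends):

```python
import numpy as np

# Hypothetical minute-trend of PMC reflected power over 4 days (made-up numbers)
rng = np.random.default_rng(1)
days = np.linspace(0, 4, 200)
refl = 18.0 + 0.3 * days + 0.05 * rng.standard_normal(days.size)

# Least-squares linear fit gives the drift rate in W/day
slope, offset = np.polyfit(days, refl, 1)
print(f"PMC refl drift: {slope:+.3f} W/day from a baseline of {offset:.1f} W")
```

In practice the trend would come from NDS minute-trend data rather than the synthetic array above.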
Fri Nov 15 10:12:59 2024 INFO: Fill completed in 12min 55secs
TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.35 μm/s
QUICK SUMMARY:
IFO is LOCKING at PRMI_ASC
When I arrived, Guardian had recently finished an initial alignment and was at CHECK_IR. Microseism is much lower than in the last few days.
Locklosses overnight:
1415716992 not a PSL/IMC problem; the IMC loses lock 260 ms after the IFO. From observing, not sure what caused it.
1415703738 also not a PSL/IMC problem; the IMC loses lock 240 ms after the IFO. From observing, not sure what caused it.
1415697803 not a PSL/IMC problem; there was a small earthquake while we were in the ESD transitions. This lockloss is listed as being from state 558 (the one in which Elenna increased a ramp time to avoid locklosses, 81260); however, the lockloss actually happened in the state before, when DARM was still controlled by the ITMX ESD. That state is simply less robust to large ground motion.
Ibrahim looked at the long time between the earthquake and relocking: about an hour of that was waiting in ready for the ground motion to come down (which the guardian now does independently). Ibrahim then saw that there were twenty-some PRMI locklosses in a row, each about 2 seconds after PRMI locked. (We will look into this more.)
We also looked back at Corey's shift, and we think the lockloss during the ETM transitions at 2:05 was not a PSL/IMC glitch (noted as LOCK #2 in Corey's alog). This means we think we have had roughly 17 hours without a lockloss due to the PSL. We will wait and see how today and the weekend go.
As a reminder, we saw FSS glitches yesterday morning, then did several things before this 17 hour stretch without glitch locklosses.
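The ordering checks above (did the IMC drop before or after the IFO?) come down to comparing first threshold-crossing times in two fast channels. A minimal sketch with synthetic data; the channel behavior, sample rate, and 0.5 threshold are illustrative, not the actual lockloss-tool logic:

```python
import numpy as np

def first_drop_time(t, x, threshold):
    """Return the time of the first sample where x falls below threshold,
    or None if it never does."""
    below = np.flatnonzero(x < threshold)
    return t[below[0]] if below.size else None

# Synthetic example: IFO power proxy drops at t=1.000 s, IMC (MC2 trans) at t=1.260 s
fs = 1000.0                      # 1 kHz sample rate
t = np.arange(0, 2, 1 / fs)
ifo = np.where(t < 1.000, 1.0, 0.0)
imc = np.where(t < 1.260, 1.0, 0.0)

t_ifo = first_drop_time(t, ifo, 0.5)
t_imc = first_drop_time(t, imc, 0.5)
delay_ms = 1e3 * (t_imc - t_ifo)
print(f"IMC dropped {delay_ms:.0f} ms after the IFO")
```

A positive delay (IMC follows the IFO) is what argues against a PSL/IMC cause.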
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Started the shift by beginning locking for H1. Had a couple of locklosses (as posted earlier), but made it to NLN on the 3rd attempt. Had a lockloss a few minutes ago due to an M6.7 EQ from the south Pacific. Have been touching base with TJ (who will be on OWL); he wanted to receive OWL alerts, and if locking is rough overnight, he'll switch to the PMC/FSS tests.
LOG:
After tonight's Initial Alignment, H1's been fairly good with getting through the bulk of ISC_LOCK, but have had 2 consecutive locklosses a couple of states after MAX POWER:
Locking Notes:
0003-0043 INITIAL ALIGNMENT (w/ ALSy needing touch up by hand)
0044 LOCK#1
LOCK #2: DRMI looked its usual ugly self. Needed to run CHECK MICH FRINGES; the BS was definitely the culprit for the nasty alignment and was fixed. PRMI & DRMI both locked immediately.
Will continue locking for the next 2.5-3hrs, and then take H1 to IDLE and leave FSS & PMC -ON- for the night.
But if H1 makes it to NLN, will contact Louis or Joe B for a ~15min calibration check.
Vicky, Ryan, Sheila
Zooming in on this 2:05 UTC lockloss, MC2 trans dropped about 150ms after the IFO lost lock, so we think this was not due to the usual PSL/IMC issue, even though there was a glitch in the FSS right before the lockloss.
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect this to affect it much, as we had the chillers off for a long period on October 25th (80882) when we flushed the chiller line, and the issue was seen before that date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it had flattened and was no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ adjusted the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected. Attachments: 1) Usual daily h(t) spectral zoom near Crab band - December 18 2) Zoom-out for December 7, 16 and 18 overlain 3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets 4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC) 5) Accelerometer spectrum for December 16 6) Accelerometer spectrum for December 18
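Locating the offending line in these spectra amounts to finding the spectral maximum inside a narrow band around the Crab frequency (~59.2 Hz). A minimal sketch with a synthetic line; the sample rate, band edges, and 59.45 Hz line frequency are made up for illustration:

```python
import numpy as np

def peak_in_band(x, fs, f_lo, f_hi):
    """Return the frequency of the largest spectral amplitude in [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    asd = np.abs(np.fft.rfft(x))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(asd[band])]

# Synthetic chiller line at 59.45 Hz buried in white noise
fs, dur = 256.0, 64.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 59.45 * t) + 0.05 * rng.standard_normal(t.size)

pk = peak_in_band(x, fs, 59.0, 60.0)
print(f"peak found near {pk:.3f} Hz")
```

Running the same band search on spectra from before and after a flow change would quantify how far the peak moved.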
TITLE: 11/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 15mph Gusts, 10mph 3min avg
Primary useism: 0.05 μm/s
Secondary useism: 0.68 μm/s
QUICK SUMMARY:
H1 was down for troubleshooting all day shift and at the end of TJ's shift he started an Initial Alignment which we are running now.
The goal for today is similar to last night's shift: will try locking from 4:30 to 9pm.
Operator NOTE from TJ: When restoring for locking--order is PMC -> FSS -> ISS (as they are listed on the Ops Overview)
Environmental Notes: µseism is worse than last night (is clearly higher than the 95th percentile & touching "1 count" on FOM). Had been windy most of the day, but has become calmer in the last hour.
Initial Alignment Note (since it has just completed while I've been trying to write this alog for the last 45min!): ALSy wasn't great after INCREASE FLASHES, Elenna touched up the ETMy by hand and this immediately helped-----EVEN WITH HIGH MICROSEISM!
TITLE: 11/15 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The entire shift was dedicated to troubleshooting the laser glitching issue. We also had high winds and high useism, so it was good timing. The wind has died down and the troubleshooting has ended for the day so we are starting initial alignment.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 08:21 |
16:22 | ISC | Sheila | LVEA | yes | Unplugging cable for IMC feedback | 16:42 |
16:47 | TCS | Camilla | LVEA | yes | Power cycle TCSY chassis | 17:04 |
17:44 | EE | Fernando | MSR | n | Continuing work on the ISC backup computer | 22:38 |
18:05 | PSL | Patrick, Ryan, Fil | LVEA | yes | Power cycle PSL Beckhoff computer | 18:26 |
18:39 | PSL/CDS | Sheila, Fil, Marc, Richard | LVEA | yes | 35MHz swap or wiring | 19:09 |
19:18 | PSL | Ryan | LVEA | yes | Reset noise eater | 19:49 |
19:58 | TCS | TJ, Camilla | LVEA | YES | Checking TCSY cables | 20:58 |
19:58 | PSL | Vicky | LVEA | YES | Setting up PSL scope & poking cables | 20:58 |
22:43 | PSL | Jason, Sheila | LVEA | yes | PMC meas. | 00:20 |
22:44 | PSL | Ryan | LVEA | yes | PMC meas | 23:14 |
22:44 | PSL | Vicky | LVEA | yes | Setting up sr785 | 00:21 |
Vicky and Jason measured the PMC olg, and I grabbed the data from the SR785 for them. The plots are attached. The second measurement is a zoomed in version of the first.
Looks like the feature above 5 kHz is around the same frequency as the peak we are seeing in the intensity and frequency noise (alogs 80603, 81230)
These are the steps I took to get the data:
> cd /ligo/gitcommon/psl_measurements
> conda activate psl
> python code/SRmeasure.py -i 10.22.10.30 -a 10 --getdata -f data/name
This will save your data in the data folder as "name_[datetime string].txt"
To confirm connection before running, try
> ping 10.22.10.30
you should get something like
PING 10.22.10.30 (10.22.10.30) 56(84) bytes of data.
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.26 ms
64 bytes from 10.22.10.30: icmp_seq=1 ttl=64 time=1.54 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=0.748 ms
64 bytes from 10.22.10.30: icmp_seq=2 ttl=64 time=1.03 ms (DUP!)
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=0.730 ms
64 bytes from 10.22.10.30: icmp_seq=3 ttl=64 time=1.02 ms (DUP!)
^C
--- 10.22.10.30 ping statistics ---
3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.730/1.054/1.538/0.282 ms
----
Press Ctrl-C to exit. If you don't get these messages, you are probably not plugged in properly.
To plot the data (assuming that you are measuring transfer functions), use
> python code/quick_tf_plot.py data/name_[datetime str].txt
Craig has lots of great options in this code to make nice labels, save the plot in certain places, etc. if you want to get fancy. Also, he has other scripts that will plot spectra, multiple spectra or multiple tfs.
If you want to measure from the control room, there are yaml templates that run different types of measurements, such as carm or imc olgs.
We restarted the calibration pipeline with a new configuration ini such that it no longer subtracts the 60 Hz line (and harmonics up to 300 Hz). The configuration change is recorded in this commit: https://git.ligo.org/Calibration/ifo/H1/-/commit/53f2e892a38cfb18815912c33b1f1b8385cfff62 I restarted the pipeline at around 9:35 am PST, around the same time this was done at LLO (LLO:74051). The IFO was down at the time, so I left a request with the H1 operators to contact both me and Joe B. when H1 is back at NLN but before going to Observing mode, so that whoever responds first can confirm that the GDS restart is behaving as expected. Initial checks at LLO indicate that things are working properly, which is promising.
Joe B., Louis D., Corey G.
Corey called as soon as H1 reached NLN. The gstlal-calibration pipeline restart with 60 Hz subtraction turned off looks like it's behaving as expected, so we gave Corey the green light from Cal to go into Observing. The 60 Hz line and its harmonics up to 300 Hz look good (i.e. NOLINES looks identical to STRAIN since subtraction for those lines was turned off).
Ryan, Jason, Patrick, Filiberto
As part of troubleshooting the PSL, we hardware power cycled the PSL Beckhoff computer in the diode room this morning, along with all of the associated diode power supplies and a chassis in the LVEA. I had guessed that everything would autostart, but I was wrong, so I took the opportunity to set it up to do so. This required putting a shortcut to the EPICS IOC startup script in the C:\TwinCAT\3.1\Target\StartUp directory (see attached screenshots), and selecting an option in the TwinCAT Visual Studio project to autostart the TwinCAT runtime.
We software restarted the computer again to test this, and after logging in, the Beckhoff runtime and PLC code started, along with the EPICS IOC, but the visualization did not. I found documentation pointing to the location of the executable that starts the visualization, and added a shortcut to that to the startup directory as well. We didn't have time to restart the computer again to see if that would autostart correctly.
For some reason there seemed to be issues with processes reconnecting to the EPICS IOC channels. I tested running caget on the Beckhoff computer itself and got a message about connecting to two different instances of the channel, and a couple of pop-up windows related, I think, to allowing network access, which I said to allow. caget worked, although it gave a blank space for the value, so I tried it again with an invalid channel name, which it correctly gave an error for. On the Linux workstation we were using, the MEDM screens were not reconnecting, even after closing and reopening them, but again caget worked. We had to restart the entire medm process for it to reconnect. The EDCU and SDF also had issues reconnecting, and they had to be restarted too.
As Patrick mentioned, channel access clients which had been connected to the IOC on h1pslctrl0 would not reconnect after its restart.
The EDC stayed in its disconnected state for almost an hour, even though cagets on h1susauxb123 itself were connecting, albeit with "duplicate list entry" warnings:
(diskless)controls@h1susauxb123:~$ caget H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR
Warning: Duplicate EPICS CA Address list entry "10.101.0.255:5064" discarded
H1:SYS-ETHERCAT_PSL_INFO_TPY_TIME_HOUR 18
The restart of the DAQ EDC did not go smoothly: I had added a missing channel to H1EPICS_CDSRFM.ini (WP12195) in preparation for next Tuesday's maintenance, so the EDC came back with a channel list different from that of the rest of the DAQ. I reverted this file change and a second EDC restart was successful.
11:38:35 h1susauxb123 h1edc[DAQ]
11:46:17 h1susauxb123 h1edc[DAQ]
The slow controls h1pslopcsdf system was also unable to reconnect to the 4 PSL WD channels it monitors. This was restarted at 12:08 14nov2024 PST.
Erik found that MEDM on some workstations would continue to show white-screen for h1pslctrl0 channels and a full restart of MEDM was needed to resolve this.
TJ, Vicky, Camilla
Vicky set up an oscilloscope on a cable similar to the PMC mixer, and we watched the second trend of Sheila's PSL ndscope. The largest cause of repeatable glitches from touching cables was the ISS AOM 80MHz cable, which we found loose and tightened; it sounds like the PSL team found the PSL side of this cable loose and tightened it on Tuesday too. Times are noted below in case we want to go back and look at the raw data.
Thu Nov 14 10:15:06 2024 INFO: Fill completed in 15min 3secs
Jordan confirmed a good fill curbside.
New Vacuum section on CDS Overview shows CP1 LLCV percentage open as a red bar. Bar limits are between 40% and 100% open so normally you won't see any red in this widget.
Sheila, Vicky, Elenna, TJ, Marc, Filiberto, Richard, Daniel, Jason, Ryan Short
The IMC did not stay locked overnight, after Corey left it locked at 2W with the ISS on (screenshot from TJ). Vicky noticed that the SQZ 35MHz LO monitor sometimes sees something going on before the IMC loses lock (screenshot from Elenna). A few days ago Nutsinee flagged this squeezer 35MHz LO monitor channel; it does show increased noise when the FSS is unlocked, which doesn't make sense, but it seems this is at least partially due to cross talk (when we intentionally unlock the FSS, there is extra noise in this channel).
A bit before 8:29 I unplugged the cable from the IMC servo to the PSL VCO, and we left the FSS and PMC locked with the ISS off. At 8:31 Pacific time the FSS came unlocked, and glitches were visible in the PMC mixer and HV (screenshot). The new channel plugged in on Tuesday, H1:PSL-PWR_HPL_DC_OUT_DQ, might show some extra noise, more visible in the zoomed-in screenshot. There are some small glitches seen in the SQZ 35MHz LO monitor at the time of the reference cavity glitches; the squeezer 35MHz LO is shared by the SQZ and PSL PMCs.
At 8:44 we unlocked the FSS and sat with only the PMC locked; a few seconds later we had a few glitches in the mixer and HV. PMC-alone screenshot.
The PSL was powered down to restart the Beckhoff, from a few minutes before 11 until 11:30 or so.
A few minutes before 11 Pacific time, Marc, Filiberto, and Richard went to the CER, measured the power out of the 35MHz source (11.288 dBm), and adjusted the Marconi setting to match the power measured on the RF meter, 11.313 dBm. There is a 10MHz signal, locked to GPS through the timing system, plugged into the back of the Marconi; Daniel says he thinks the Marconi is locked to that source if it is plugged in.
At 19:33 UTC (11:33 PST) the PMC was relocked after the Beckhoff reboot with the 35MHz source changed.
Does not look like an IMC/PSL lockloss; trends attached. The NPRO temp has been glitching this morning (not drastically), which has been causing the EOM drive to be generally higher, but these glitches were not happening at the time of this lockloss. The shape of the signal on AS_A is indicative of an arms lockloss, and it happens before the IMC loses lock according to MC2_TRANS. Additionally, the lockloss tool shows glitches on ETMX starting up to a second before the lockloss, although the ETM_GLITCH tag was not applied in this case as the glitches did not meet the required threshold.
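A simplified, hypothetical version of that glitch-count tagging check (the threshold, window, and tag criterion below are made up for illustration, not the actual lockloss-tool values):

```python
import numpy as np

def count_glitches(x, fs, threshold, t_lockloss, window):
    """Count samples exceeding |threshold| in the `window` seconds before t_lockloss."""
    i1 = int(t_lockloss * fs)
    i0 = max(0, i1 - int(window * fs))
    return int(np.sum(np.abs(x[i0:i1]) > threshold))

# Synthetic ETMX drive proxy: quiet signal with one glitch 0.5 s before a
# "lockloss" at t = 3 s
fs = 512.0
t = np.arange(0, 4, 1 / fs)
x = 0.01 * np.sin(2 * np.pi * 5 * t)
x[int(2.5 * fs)] = 1.0

n = count_glitches(x, fs, threshold=0.5, t_lockloss=3.0, window=1.0)
tag = n >= 3   # hypothetical minimum count for applying an ETM_GLITCH-style tag
print(n, tag)
```

Here a single glitch is seen but the (made-up) count criterion is not met, so no tag would be applied, analogous to this lockloss.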
(The ndscope template I'm using lives in my home directory ~/templates/psl_glitch_hunting.yaml)