TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is DOWN due to MICROSEISM/ENVIRONMENT since 22:09 UTC
The first 6-7 hrs of the shift were very calm and we were in OBSERVING for the majority of the time.
The plan is to stay in DOWN and intermittently try to lock, but the last few attempts have resulted in 6 pre-DRMI locklosses with 0 DRMI acquisitions. Overall, microseism is just very high.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48 |
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09 |
TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT_USEISM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.78 μm/s
QUICK SUMMARY:
Currently in DOWN and trying to wait out the microseism a bit. Thankfully the wind has gone back down.
Trying to gather more info about the nature of these M3 SRM WD trips in light of OWL Ops being called (at least twice in recent weeks) to press one button.
Relevant Alogs:
Part 1 of this investigation: 81476
Tony OWL Call: alog 81661
TJ OWL Call: alog 81455
TJ OWL Call: alog 81325
It's mentioned in a few other OPS alogs, but with no new information.
Next Steps:
Lockloss @ 12/07 22:09 UTC. Possibly due to a gust of wind, since the wind at EY jumped from the low 20s to almost 30 mph in the same minute as the lockloss. A possible contributor could also be the secondary microseism, which has been rising quickly over the last several hours and is now up to 2 μm/s.
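As a quick way to double-check the environment at the lockloss time, a minimal gwpy sketch like the one below could pull the secondary-microseism trend for the hours leading up to it. This is an illustration I'm adding, not part of the original entry; the BLRMS channel name is my assumption of the usual LHO ground-motion channel and should be verified before use.

# Sketch only: trend secondary microseism up to the 12/07 22:09 UTC lockloss.
# Assumes gwpy and NDS2/frame data access; the channel name is an assumption.
from gwpy.time import tconvert
from gwpy.timeseries import TimeSeries

t0 = tconvert("2024-12-07 22:09")  # lockloss time (UTC) -> GPS
chan = "H1:ISI-GND_STS_ITMY_Z_BLRMS_100M_300M"  # assumed 0.1-0.3 Hz (secondary useism) band
useism = TimeSeries.get(chan, t0 - 6 * 3600, t0)  # the hours leading up to the lockloss
print(useism.max())  # compare against the ~2 um/s quoted above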
Calibration sweep done using the usual wiki.
Broadband Start Time: 1417641408
Broadband End Time: 1417641702
Simulines Start Time: 1417641868
Simulines End Time: 1417643246
Files Saved:
2024-12-07 21:47:09,491 | INFO | Commencing data processing.
2024-12-07 21:47:09,491 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-12-07 21:47:46,184 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,196 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,200 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,205 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241207T212404Z.hdf5
ICE default IO error handler doing an exit(), pid = 2104592, errno = 32
PST: 2024-12-07 13:47:46.270025 PST
UTC: 2024-12-07 21:47:46.270025 UTC
GPS: 1417643284.270025
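For reference, the GPS stamps above can be cross-checked against UTC with a couple of lines of gwpy. This is a convenience sketch I'm adding, assuming gwpy is available on the workstation:

from gwpy.time import tconvert

for label, gps in [("Broadband start", 1417641408), ("Broadband end", 1417641702),
                   ("Simulines start", 1417641868), ("Simulines end", 1417643246)]:
    print(label, tconvert(gps))      # GPS -> UTC datetime
print(tconvert(1417643284.270025))   # should match the UTC stamp printed by the script above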
Sat Dec 07 10:09:18 2024 INFO: Fill completed in 9min 15secs
TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 2mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.44 μm/s
QUICK SUMMARY:
IFO is in NLN and OBSERVING as of 11:47 UTC (4 hrs)
There was one lockloss last night, plus the known issue where the SUS SRM WD trips during initial alignment; OWL was called (alog 81661) to untrip it.
TITLE: 12/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Aligning
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 5mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
IFO stuck in initial alignment because of an SRM watchdog (H1:SUS-SRM_M3_WDMON_STATE) trip.
The watchdog tripped while we were in initial alignment, not before, and was not due to ground motion.
I logged in, discovered the trip, reset the watchdog, and reselected myself for Remote OWL notifications.
SUS SDF DRIVEALIGN L2L gain change accepted.
Just commenting that this is not a new issue. TJ and I were investigating it earlier and our early thought was that SRM was catching on the wrong mode during SRC alignment in ALIGN_IFO, either during the re-alignment of SRM (pre-SRC align) or after the re-misalignment of SRM. This results in the guardian thinking that SRC is aligned when it actually is not, which leads to saturations and trips. Again, we think this is the case as of 11/25 but are still investigating. I have an alog about it here: 81476.
TITLE: 12/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 160 Mpc and have been Locked for 16.5 hours. Quiet shift where nothing happened and we were just Observing the entire time.
LOG:
00:30 Observing and have been Locked for 11 hours
Currently observing at 155 Mpc and have been Locked for 14.5 hours. Quiet evening with nothing to report.
J. Freed,
Update on Double Mixer Progress 81593: proceeded through step 2a of the double mixer test plan T2400327. Initial results suggest a possible noise improvement compared to the other options in the area of interest for SPI.
DM_PN1.pdf shows the phase noise test run in step 2a of the double mixer test plan. While not a true 1-to-1 comparison of the phase noise performance of the double mixer compared to the other options (step 2b is for that), it shows that adding the double mixer into this system improves phase noise performance by a factor of 2-5 from 100 Hz to 20 kHz (except for the peaks). Of note, there is a large peak centered around the 4096 Hz region that is of interest to SPI. As there was only a cursory attempt to properly phase match the signals in the internals of the double mixer for this initial test, the 4096 Hz sideband was not properly removed.
A possible cause of this phase mismatch is that the phase delayer inside the double mixer (ZMSCQ-2-90B) causes a phase delay of about 89.82 degrees at 80 MHz rather than the 90 degrees we are expecting.
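As a rough back-of-the-envelope check (my addition, not part of the measurement): for an ideal image-reject/single-sideband scheme with matched I/Q amplitudes, a quadrature phase error $\varphi$ (in radians) limits the suppression of the unwanted sideband to roughly

$$\mathrm{IRR} = \cot^2\!\left(\frac{\varphi}{2}\right) \approx \frac{4}{\varphi^2},$$

so $\varphi = 90^\circ - 89.82^\circ = 0.18^\circ \approx 3.1\,\mathrm{mrad}$ would by itself correspond to roughly 56 dB of image rejection; amplitude imbalance and any residual external phase mismatch enter on top of this and can dominate in practice.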
Possible fixes
The very low frequency region (<0.05 Hz) contains DC noise caused by the external phase mismatch between the double mixer and the reference source for the phase noise measurements. It is not an indication of double mixer drift; drift has not yet been investigated.
TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 14:14 UTC (11 hr lock).
Of note:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
19:03 | PSL | Jason | Optics Lab | Local | NPRO Diagnostics Prep | 19:26 |
19:40 | EE | Fil | Receiving Door | N | Parts transport | 20:40 |
19:48 | PSL | Jason | Optics Lab | Yes | NPRO Diagnostics | 20:07 |
TITLE: 12/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
We're Observing and have been Locked for 11 hours. Winds are low and secondary microseism is not too bad.
Closes FAMIS#26345, last checked 81548
Corner Station Fans (attachment1)
- All fans are looking normal and within range.
Outbuilding Fans (attachment2)
- MY FAN1 accelerometer 1 has become a little more chaotic-looking over the past three days, but is still well within range. All other fans are looking normal and are within range.
The way that I used to run bruco didn't work for me today. With help from Elenna and Camilla I was able to run it, so here is what I did:
ssh into ssh.ligo.org, select 4 for LHO cluster and c for ldas.pcdev1
made a fresh clone of https://git.ligo.org/gabriele-vajente/bruco in a directory called new_bruco
cd /home/sheila.dwyer/new_bruco/bruco
python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1417500178 --length=400 --outfs=4096 --fres=0.1 --dir=/home/sheila.dwyer/public_html/brucos/GDS_1417500178 --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 --excluded=share/lho_excluded_channels_O4.txt
The bruco now appears at: https://ldas-jobs.ligo-wa.caltech.edu/~sheila.dwyer/brucos/GDS_1417500178/
This is for a quiet time overnight last night.
Input jitter does have a contribution, as Robert suspected based on looking at the DARM spectrum. Jenne looked at cleaning, and plans to try out a new cleaning during Monday's commissioning window.
Note to Operators: During the CP1 fill (which starts daily at 10am) the CDS ALARM system shows RED because the LN2 Liquid Level Control Valve (LLCV) is ramped up to 100% open. The alarm system puts this channel into alarm when its value exceeds 50%. This alarm should clear within 1 minute of the end of the fill.
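Purely as an illustration of the threshold logic described above (my addition; the channel name below is a placeholder, not the real CP1 LLCV channel), a check could look like:

from epics import caget  # pyepics; assumes EPICS access from a CDS workstation

LLCV_CHANNEL = "PLACEHOLDER:CP1_LLCV_PERCENT_OPEN"  # hypothetical name, replace with the real channel
ALARM_THRESHOLD = 50.0  # percent open; per the note above, the alarm triggers above this

value = caget(LLCV_CHANNEL)
if value is None:
    print("channel unreachable")
elif value > ALARM_THRESHOLD:
    print(f"LLCV {value:.0f}% open -> CDS ALARM shows RED (expected during the daily 10am fill)")
else:
    print(f"LLCV {value:.0f}% open -> below the 50% threshold, alarm clear")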
TITLE: 12/06 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
LOG:
H1 called me to help with SDF issues.
SUS ETMX and ITMX SDFs
These seemed like they were changed randomly in the night, but it also seemed like reverting them might have unlocked the IFO, so I accepted them.
But now that I'm looking back at Francisco's alog, I think some may have been intentional.
More than one H1 call tonight:
H1 called me earlier in the night, and I slept through the calls because I did not realize my phone was not properly set up for this owl shift before I went to sleep.
Sheila, Camilla
Looking into last night's issues, there were two sets of SDFs that stopped us from going into observing:
If we lose lock today we need to: load ISC_LOCK and ALS_XARM, and accept 198.6 as the H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and the TRAMPs as 2 s.
If we don't lose lock by the end of the day, we should drop out of observing, change the L2L and TRAMPs, reload the guardians, and accept the SDFs.
https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=32847
FRS Ticket Logging the 3hrs 30 mins out of OBSERVING
Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).
I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air.
Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound.
Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.
For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.
This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion. Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to affect things much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before this date.
Opened FRS 32812.
There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).
Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
I changed the flow of the TCSY chiller from 4.0 gpm to 3.7 gpm.
These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller has already been at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could easily be missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.
Two questions came from this:
The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.
TJ touched the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 gpm. Plot attached.
The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab nebula.
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.
Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18