Reports until 16:27, Saturday 07 December 2024
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:27, Saturday 07 December 2024 (81672)
OPS Day Shift Summary

TITLE: 12/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is DOWN due to MICROSEISM/ENVIRONMENT since 22:09 UTC

The first 6-7 hrs of the shift were very calm and we were in OBSERVING for the majority of the time.

The plan is to stay in DOWN and intermittently try to lock, but the last few attempts have resulted in 6 pre-DRMI locklosses with 0 DRMI acquisitions. Overall, microseism is just very high.

LOG:                                                                                                                                                                       

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
23:03 | PEM | Robert | LVEA | YES | Finish setting up for Monday | 23:48
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09
H1 General
oli.patane@LIGO.ORG - posted 16:21, Saturday 07 December 2024 (81671)
Ops Eve Shift Start

TITLE: 12/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT_USEISM
    Wind: 15mph Gusts, 9mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.78 μm/s
QUICK SUMMARY:

Currently in DOWN and trying to wait out the microseism a bit. Thankfully the wind has gone back down.

H1 ISC (SUS)
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Saturday 07 December 2024 (81670)
Investigating SRM M3 WD Trips During Initial Alignment Part 2

Trying to gather more info about the nature of these M3 SRM WD trips in light of OWL Ops being called (at least twice in recent weeks) to press one button.

Relevant Alogs:

Part 1 of this investigation: 81476

Tony OWL Call: alog 81661

TJ OWL Call: alog 81455

TJ OWL Call: alog 81325

It's mentioned in a few other Ops alogs, but those add no new information.

Next Steps:

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 14:11, Saturday 07 December 2024 (81667)
Lockloss

Lockloss @ 12/07 22:09 UTC. Possibly due to a gust of wind, since the wind speed at EY jumped from the lower 20s to almost 30 mph in the same minute as the lockloss. A possible contributor could also be the secondary microseism, which has been rising quickly over the last several hours and is now up to 2 um/s.

H1 CAL (CAL)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:03, Saturday 07 December 2024 (81666)
Calibration Sweep 12/07

Calibration sweep done using the usual wiki.

Broadband Start Time: 1417641408

Broadband End Time: 1417641702

Simulines Start Time: 1417641868

Simulines End Time: 1417643246

Files Saved:

2024-12-07 21:47:09,491 | INFO | Commencing data processing.
2024-12-07 21:47:09,491 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-12-07 21:47:46,184 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,191 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,196 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,200 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20241207T212404Z.hdf5
2024-12-07 21:47:46,205 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20241207T212404Z.hdf5
ICE default IO error handler doing an exit(), pid = 2104592, errno = 32
PST: 2024-12-07 13:47:46.270025 PST
UTC: 2024-12-07 21:47:46.270025 UTC
GPS: 1417643284.270025
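For convenience, here is a minimal sketch of converting the GPS times quoted above to UTC (just gwpy's standard conversion, not part of the measurement scripts; run anywhere gwpy is installed):

# Minimal sketch: convert the calibration sweep GPS times to UTC with gwpy.
# from_gps handles the GPS-UTC leap-second offset.
from gwpy.time import from_gps

times = {
    "Broadband start": 1417641408,
    "Broadband end":   1417641702,
    "Simulines start": 1417641868,
    "Simulines end":   1417643246,
}
for label, gps in times.items():
    print(f"{label}: GPS {gps} -> {from_gps(gps)} UTC")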

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 10:12, Saturday 07 December 2024 (81665)
Sat CP1 Fill

Sat Dec 07 10:09:18 2024 INFO: Fill completed in 9min 15secs

 

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:52, Saturday 07 December 2024 (81664)
OPS Day Shift Start

TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 2mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.44 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING as of 11:47 UTC (4 hrs)

There was one lockloss last night and a known issue where SUS SRM WD trips during initial alignment. OWL was called (alog 81661) to untrip it.

H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 03:03, Saturday 07 December 2024 - last comment - 15:26, Saturday 07 December 2024(81661)
SRM Watchdog trip

TITLE: 12/07 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Aligning
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.28 μm/s
QUICK SUMMARY:

IFO stuck in initial alignment because of an SRM watchdog (H1:SUS-SRM_M3_WDMON_STATE) trip.
The watchdog tripped while we were in initial alignment, not before, and was not due to ground motion.


I logged in and discovered the trip, reset the watchdog, and reselected myself for Remote OWL notifications.
 

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 03:49, Saturday 07 December 2024 (81662)SUS

SUS SDF drivealign L2L gain change accepted.

Images attached to this comment
ibrahim.abouelfettouh@LIGO.ORG - 15:26, Saturday 07 December 2024 (81669)

Just commenting that this is not a new issue. TJ and I were investigating it earlier, and our early thinking was that SRM was catching on the wrong mode during SRC alignment in ALIGN_IFO, either during the re-alignment of SRM (pre-SRC align) or after the re-misalignment of SRM. This leads the guardian to think that SRC is aligned when it actually isn't, which results in saturations and trips. Again, we think this is the case as of 11/25 but are still investigating. I have an alog about it here: 81476.

H1 General
oli.patane@LIGO.ORG - posted 22:00, Friday 06 December 2024 (81660)
Ops Eve Shift End

TITLE: 12/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Currently Observing at 160 Mpc and have been Locked for 16.5 hours. Quiet shift where nothing happened and we were just Observing the entire time.
LOG:

00:30 Observing and have been Locked for 11 hours

H1 General
oli.patane@LIGO.ORG - posted 20:03, Friday 06 December 2024 (81659)
Ops EVE Midshift Status

Currently observing at 155 Mpc and have been Locked for 14.5 hours. Quiet evening with nothing to report

X1 DTS
joshua.freed@LIGO.ORG - posted 17:09, Friday 06 December 2024 (81658)
Initial Noise results for Double Mixer

J. Freed, 

 

Update on the Double Mixer progress (81593): proceeded through step 2a of the double mixer test plan T2400327. Initial results suggest a possible noise improvement compared to the other options in the area of interest for SPI.

DM_PN1.pdf shows the phase noise test run in step 2a of the Double Mixer test plan. While not a true 1-to-1 comparison of the phase noise performance of the double mixer compared to other options (step 2b is for that), it shows that adding the double mixer into this system improves phase noise performance by a factor of 2-5 from 100 Hz to 20 kHz (except for the peaks). Of note, there is a large peak area centered around the 4096 Hz region that is of interest to SPI. As there was only a cursory attempt to properly phase match the signals inside the double mixer for this initial test, the 4096 Hz sideband was not properly removed.

A possible cause of this phase mismatch is that the phase delayer inside the double mixer (ZMSCQ-2-90B) produces a phase delay of about 89.82 degrees at 80 MHz rather than the 90 degrees we are expecting.
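As a back-of-envelope check (my own estimate, not from the test plan), the standard image-rejection formula gives the residual sideband suppression expected from that quadrature error alone, assuming perfect amplitude balance between the I and Q paths:

# Sketch: residual sideband suppression of an image-reject (double) mixer with a
# quadrature phase error, assuming equal I/Q amplitudes.
# Standard formula for equal amplitudes: IRR = (1 + cos(phi)) / (1 - cos(phi)).
import numpy as np

phase_error_deg = 90.0 - 89.82          # ~0.18 deg error reported for the ZMSCQ-2-90B
phi = np.deg2rad(phase_error_deg)
irr = (1 + np.cos(phi)) / (1 - np.cos(phi))
print(f"Image rejection ~ {10 * np.log10(irr):.0f} dB")   # ~56 dB for 0.18 deg

Any amplitude imbalance between the two paths would degrade this rejection further, which may be part of why the 4096 Hz sideband is still visible.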

Possible fixes

The very low frequency region (<0.05 Hz) contains DC noise caused by the external phase mismatch between the double mixer and the reference source used for the phase noise measurements. It is not an indication of double mixer drift; drift has not yet been investigated.

Non-image files attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:30, Friday 06 December 2024 (81656)
OPS Day Shift Summary

TITLE: 12/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 14:14 UTC (11 hr lock).

Of note:

LOG:                                                                                                                                                                                                                                   

Start Time | System | Name | Location | Lazer_Haz | Task | Time End
19:03 | PSL | Jason | Optics Lab | Local | NPRO Diagnostics Prep | 19:26
19:40 | EE | Fil | Receiving Door | N | Parts transport | 20:40
19:48 | PSL | Jason | Optics Lab | Yes | NPRO Diagnostics | 20:07
Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:23, Friday 06 December 2024 (81657)
Ops Eve Shift Start

TITLE: 12/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

We're Observing and have been Locked for 11 hours. Winds are low and secondary microseism is not too bad.

LHO FMCS (PEM)
oli.patane@LIGO.ORG - posted 14:21, Friday 06 December 2024 (81655)
HVAC Fan Vibrometers Check FAMIS

Closes FAMIS#26345, last checked 81548

Corner Station Fans (attachment1)
- All fans are looking normal and within range.

Outbuilding Fans (attachment2)
- MY FAN1 accelerometer 1 has looked a little more chaotic over the past three days, but is still well within range. All other fans are looking normal and within range.

Images attached to this report
H1 DetChar
sheila.dwyer@LIGO.ORG - posted 12:59, Friday 06 December 2024 (81653)
running bruco, new instructions

The way that I used to run bruco didn't work for me today.  With help from Elenna and Camilla I was able to run it, so here is what I did:

ssh into ssh.ligo.org, select 4 for LHO cluster and c for ldas.pcdev1

made a fresh clone of https://git.ligo.org/gabriele-vajente/bruco in a directory called new_bruco

cd /home/sheila.dwyer/new_bruco/bruco

python -m bruco --ifo=H1 --channel=GDS-CALIB_STRAIN_CLEAN --gpsb=1417500178 \
    --length=400 --outfs=4096 --fres=0.1 \
    --dir=/home/sheila.dwyer/public_html/brucos/GDS_1417500178 \
    --top=100 --webtop=20 --plot=html --nproc=20 --xlim=7:2000 \
    --excluded=share/lho_excluded_channels_O4.txt

The bruco now appears at: https://ldas-jobs.ligo-wa.caltech.edu/~sheila.dwyer/brucos/GDS_1417500178/

This is for a quiet time overnight last night.

Input jitter does have a contribution, as Robert suspected based on looking at the DARM spectrum. Jenne looked at cleaning, and plans to try out a new cleaning during Monday's commissioning window.

H1 CDS (OpsInfo)
david.barker@LIGO.ORG - posted 10:22, Friday 06 December 2024 - last comment - 13:20, Friday 06 December 2024(81650)
CDS alarm status during CP1 fill

Note to Operators: During the CP1 fill (which starts daily at 10am) the CDS ALARM system shows RED because the LN2 Liquid Level Control Valve (LLCV) is ramped up to 100% open. The alarm system puts this channel into alarm when its value exceeds 50%. This alarm should clear within 1 minute of the end of the fill.
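For reference, the alarm condition amounts to a simple threshold check on the LLCV open percentage; a minimal sketch is below (the channel name here is a placeholder, not the real LLCV channel):

# Minimal sketch of the >50% alarm condition during the CP1 fill.
# NOTE: the channel name is a placeholder, not the actual LLCV EPICS channel.
from epics import caget   # pyepics

LLCV_CHANNEL = "H1:EXAMPLE-CP1_LLCV_PERCENT_OPEN"
ALARM_THRESHOLD = 50.0     # percent open

value = caget(LLCV_CHANNEL)
if value is not None and value > ALARM_THRESHOLD:
    print(f"ALARM: LLCV at {value:.0f}% open (expected during the daily 10am fill)")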

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 13:20, Friday 06 December 2024 (81654)

Following user feedback, the IFO range "7 segment LED" display on the CDS overview is now GREEN when H1 is in OBSERVE, and orange otherwise.

Images attached to this comment
H1 General (SUS)
anthony.sanchez@LIGO.ORG - posted 06:26, Friday 06 December 2024 - last comment - 12:23, Friday 06 December 2024(81643)
OWL Shift Call

TITLE: 12/06 Owl Shift: 0600-1530 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
LOG:
H1 called me to help with SDF issues.
SUS ETMX and ITMX SDFs

These looked like they had been changed randomly in the night, but it also seemed like reverting them might unlock the IFO, so I accepted them.
But now that I'm looking back at Francisco's alog, I think some may have been intentional.
 

More than 1 IFO H1 call tonight:
H1 called me earlier in the night and I slept through the calls because I did not realize my phone was not properly set up for this owl shift before I went to sleep.
 

 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 09:29, Friday 06 December 2024 (81646)

Sheila, Camilla

Looking into last night's issues, there were two sets of SDF diffs that stopped us from going into observing:

  • L2L gain: this was changed in lscparams but ISC_LOCK was not reloaded, so the SDF was accepted at the new value but the guardian set us back to the old value when we relocked.
    • Francisco has added notes to the 81630 code to remember to edit lscparams, reload ISC_LOCK, and accept in SDF.
  • TRAMPS: the new SCAN_ALIGNMENT code 81597 had hard-coded TRAMPs of 3 when the nominal is 2. Only the X arm ran scan alignment, so the SDF diffs were only in TMSX and ETMX.
    • Sheila checked that the code doesn't depend on them and edited ALS_ARM.py to set these SCAN_ALIGNMENT TRAMPs to 2 s.

If we lose lock today we need to: load ISC_LOCK and ALS_XARM, accept 198.6 as H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN, and accept the TRAMPs as 2 s. If we don't lose lock by the end of the day, we should drop out of observing, change the L2L and TRAMPs, reload the guardians, and accept the SDFs.
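For context, here is an illustrative guardian-style sketch of why the reload matters (this is not the actual ISC_LOCK code, and the lscparams attribute name is made up): the running node keeps the lscparams values it imported at load time, so editing the file alone changes nothing until the node is reloaded, and SDF then flags the mismatch on the next lock.

# Illustrative only - not the real ISC_LOCK state.
from guardian import GuardState
import lscparams   # site parameter file, cached in the node until a LOAD is requested

class SET_DRIVEALIGN(GuardState):
    def main(self):
        # ezca is provided by the guardian runtime inside a node; this writes
        # whatever value lscparams held when the node was last (re)loaded.
        ezca['SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN'] = lscparams.etmx_l3_drivealign_l2l_gain
        return True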

ibrahim.abouelfettouh@LIGO.ORG - 12:23, Friday 06 December 2024 (81652)

https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=32847

FRS Ticket Logging the 3hrs 30 mins out of OBSERVING

H1 PEM (DetChar, PEM, TCS)
robert.schofield@LIGO.ORG - posted 18:06, Thursday 14 November 2024 - last comment - 10:19, Thursday 19 December 2024(81246)
TCS-Y chiller is likely hurting Crab sensitivity

Ansel reported that a peak in DARM that interfered with the sensitivity of the Crab pulsar followed a similar time frequency path as a peak in the beam splitter microphone signal. I found that this was also the case on a shorter time scale and took advantage of the long down times last weekend to use  a movable microphone to find the source of the peak. Microphone signals don’t usually show coherence with DARM even when they are causing noise, probably because the coherence length of the sound is smaller than the spacing between the coupling sites and the microphones, hence the importance of precise time-frequency paths.
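For anyone wanting to repeat this kind of check, a minimal gwpy sketch of comparing a microphone spectrum against DARM is below (this is not Robert's actual analysis; the channel names and GPS span are placeholders to be replaced with the microphone of interest and the down time used here):

# Sketch: compare a PEM microphone spectrum with DARM over the same stretch.
# Channel names and GPS times are placeholders.
from gwpy.timeseries import TimeSeries

start, end = 1415500000, 1415500600
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
mic = TimeSeries.get('H1:PEM-CS_MIC_LVEA_BS_DQ', start, end)

# Fine frequency resolution helps follow a slowly drifting narrow peak
darm_asd = darm.asd(fftlength=64, overlap=32)
mic_asd = mic.asd(fftlength=64, overlap=32)

# Coherence is often absent for acoustic sources (as noted above), but is cheap to check
coh = darm.coherence(mic, fftlength=64, overlap=32)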

Figure 1 shows DARM and the problematic peak in microphone signals. The second page of Figure 1 shows the portable microphone signal at a location by the staging building and a location near the TCS chillers. I used accelerometers to confirm the microphone identification of the TCS chillers, and to distinguish between the two chillers (Figure 2).

I was surprised that the acoustic signal was so strong that I could see it at the staging building - when I found the signal outside, I assumed it was coming from some external HVAC component and spent quite a bit of time searching outside. I think that this may be because the suspended mezzanine (see photos on second page of Figure 2) acts as a sort of soundboard, helping couple the chiller vibrations to the air. 

Any direct vibrational coupling can be solved by vibrationally isolating the chillers. This may even help with acoustic coupling if the soundboard theory is correct. We might try this first. However, the safest solution is to either try to change the load to move the peaks to a different frequency, or put the chillers on vibration isolation in the hallway of the cinder-block HVAC housing so that the stiff room blocks the low-frequency sound. 

Reducing the coupling is another mitigation route. Vibrational coupling has apparently increased, so I think we should check jitter coupling at the DCPDs in case recent damage has made them more sensitive to beam spot position.

For next generation detectors, it might be a good idea to make the mechanical room of cinder blocks or equivalent to reduce acoustic coupling of the low frequency sources.

Non-image files attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:12, Monday 25 November 2024 (81472)DetChar, TCS

This afternoon TJ and I placed pieces of damping and elastic foam under the wheels of both the CO2X and CO2Y TCS chillers. We placed thicker foam under CO2Y, but this made the chiller wobbly, so we placed thinner foam under CO2X.

Images attached to this comment
keith.riles@LIGO.ORG - 08:10, Thursday 28 November 2024 (81525)DetChar
Unfortunately, I'm not seeing any improvement of the Crab contamination in the strain spectra this week, following the foam insertion.

Attached are ASD zoom-ins (daily and cumulative) from Nov 24, 25, 26 and 27.
Images attached to this comment
camilla.compton@LIGO.ORG - 15:02, Tuesday 03 December 2024 (81598)DetChar, TCS

This morning at 17:00 UTC we turned the CO2X and CO2Y TCS chillers off and then on again, hoping this might change the frequency they are injecting into DARM. We do not expect it to change much: we had the chillers off for a long period on 25th October (80882) when we flushed the chiller line, and the issue was seen before this date.

Opened FRS 32812.

There were no explicit changes to the TCS chillers between O4a and O4b, although we swapped a chiller for a spare chiller in October 2023 (73704).

camilla.compton@LIGO.ORG - 11:27, Thursday 05 December 2024 (81634)TCS

Between 19:11 and 19:21 UTC, Robert and I swapped the foam under the CO2Y chiller (it was flattened and no longer providing any damping) for new, thicker foam and 4 layers of rubber. Photos attached.

Images attached to this comment
keith.riles@LIGO.ORG - 06:04, Saturday 07 December 2024 (81663)
Thanks for the interventions, but I'm still not seeing improvement in the Crab region. Attached are daily snapshots from UTC Monday to Friday (Dec 2-6).
Images attached to this comment
thomas.shaffer@LIGO.ORG - 15:53, Tuesday 10 December 2024 (81745)TCS

I changed the flow of the TCSY chiller from 4.0gpm to 3.7gpm.

These Thermoflex1400 chillers have their flow rate adjusted by opening or closing a 3-way valve at the back of the chiller. For both the X and Y chillers, these have been in the fully open position, with the lever pointed straight up. The Y chiller has been running at 4.0 gpm, so our only change was a lower flow rate. The X chiller was already at 3.7 gpm, and the manual states that these chillers shouldn't be run below 3.8 gpm, though this was a small note in the manual and could be easily missed. Since the flow couldn't be increased via the 3-way valve on the back, I didn't want to lower it further and left it as is.

Two questions came from this:

  1. Why are we running so close to the 3.8gpm minimum?
  2. Why is the flow rate for the X chiller so low?

The flow rate has been consistent for the last year+, so I don't suspect that the pumps are getting worn out. As far back as I can trend they have been around 4.0 and 3.7, with some brief periods above or below.

Images attached to this comment
keith.riles@LIGO.ORG - 07:52, Friday 13 December 2024 (81806)
Thanks for the latest intervention. It does appear to have shifted the frequency up just enough to clear the Crab band. Can it be nudged any farther, to reduce spectral leakage into the Crab? 

Attached are sample spectra from before the intervention (Dec 7 and 10) and afterward (Dec 11 and 12). Spectra from Dec 8-9 are too noisy to be helpful here.



Images attached to this comment
camilla.compton@LIGO.ORG - 11:34, Tuesday 17 December 2024 (81866)TCS

TJ touched the CO2 flow on Dec 12th around 19:45 UTC (81791), so the flow rate was further reduced to 3.55 GPM. Plot attached.

Images attached to this comment
thomas.shaffer@LIGO.ORG - 14:16, Tuesday 17 December 2024 (81875)

The flow of the TCSY chiller was further reduced to 3.3 gpm. This should push the chiller peak lower in frequency and further away from the Crab Nebula.

keith.riles@LIGO.ORG - 10:19, Thursday 19 December 2024 (81902)
The further reduced flow rate seems to have given the Crab band more isolation from nearby peaks, although I'm not sure I understand the improvement in detail. Attached is a spectrum from yesterday's data in the usual form. Since the zoomed-in plots suggest (unexpectedly) that lowering flow rate moves an offending peak up in frequency, I tried broadening the band and looking at data from December 7 (before 1st flow reduction), December 16 (before most recent flow reduction) and December 18 (after most recent flow reduction). If I look at one of the accelerometer channels Robert highlighted, I do see a large peak indeed move to lower frequencies, as expected.

Attachments:
1) Usual daily h(t) spectral zoom near Crab band - December 18
2) Zoom-out for December 7, 16 and 18 overlain
3) Zoom-out for December 7, 16 and 18 overlain but with vertical offsets
4) Accelerometer spectrum for December 7 (sample starting at 18:00 UTC)
5) Accelerometer spectrum for December 16
6) Accelerometer spectrum for December 18 
Images attached to this comment