LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:12, Wednesday 19 February 2025 (82915)
OPS Eve Shift Start

TITLE: 02/20 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 144Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.60 μm/s
QUICK SUMMARY:

IFO is in NLN and COMMISSIONING

The plan is to optimize SQZ via SQZ angle adjustment, followed by a PRCL Open Loop Gain Measurement, since there is some evidence that PR is experiencing noise at certain problem frequencies. Then we go back to OBSERVING.

 

H1 General
thomas.shaffer@LIGO.ORG - posted 14:17, Wednesday 19 February 2025 (82913)
Back to Observing 2133 UTC

Back to observing after a lock loss, some commissioning time, and work on fixing the ETMX transition lock losses.

useism is still high and our range is a bit low at 145Mpc. If the range doesn't improve with more thermalization, we will take it out of observing for some tuning.

H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:45, Wednesday 19 February 2025 (82912)
transition to ETMX low noise DARM control causing too low a light level on DCPDs with microseism high

Oli, TJ, Sheila

There have been several locklosses over the last day from the LOWNOISE_ESD_ETMX state, which happened while the gain was ramping down on ITMX DARM control and ramping up on ETMX control.  This is similar to what Elenna was trying to avoid by adjusting ramp times in 81260 and 81195, which was also at a time when the microseism was high.

Oli and I found that the problem with some of our transitions today was that the power on the DCPDs was dropping too low during the initial transition. We lost lock when it dropped below 1 mA; in one of the successful transitions it was as low as 4 mA.  We edited the guardian to not turn off the DARM boost (DARM1 FM1) before making the transition; instead we turn it off directly after transitioning control back to ETMX, before the other filter changes that happen in this state.

This is the boost that we thought was causing locklosses when ramping off, 81638, which motivated Erik's quadratic ramping change 82263, which was then reverted in 82284 and 82277.  Today Oli and I increased the ramp time on this filter from 10 to 30 seconds.  We have made the guardian wait the full 30 seconds for this ramp, so this is making us wait longer in this state.
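To make the reordering concrete, here is a minimal sketch of the idea, assuming the guardian-style ezca interface; this is not the actual ISC_LOCK guardian code, and the gain hand-off step and channel names are illustrative placeholders only.

    # Illustrative sketch only -- not the actual ISC_LOCK guardian code.
    # In a real guardian state the `ezca` object is provided by the framework;
    # the gain hand-off below is a placeholder for the real transition steps.
    import time

    BOOST_RAMP_SECONDS = 30  # DARM1 FM1 ramp time, raised today from 10 s

    def lownoise_esd_etmx_sketch(ezca):
        # Hand DARM control back to ETMX before touching the boost, so the
        # DCPD sum does not sag as far during the cross-fade.
        ezca['LSC-DARM1_GAIN'] = 1.0          # placeholder hand-off step
        # Only now ramp off the DARM boost, and wait out the full ramp
        # (real guardian code would use a state timer rather than sleep).
        ezca.switch('LSC-DARM1', 'FM1', 'OFF')
        time.sleep(BOOST_RAMP_SECONDS)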

The attached screenshot shows the transition with the boost on (left) and off (right); with the boost off, the wiggle in the DCPD sum is about 1 mA rather than 15 mA.

Oli is thinking about adding a check for DCPD sum dropping low to the lockloss tool.
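For reference, a hypothetical sketch of what such a check could look like, pulling the DCPD sum around a lockloss with gwpy; the channel name is an assumption and the 1 mA threshold comes from this entry, so this is not the lockloss-tool implementation itself.

    # Hypothetical sketch of a "DCPD sum dropped low" check, not the actual
    # lockloss-tool code. The channel name is an assumption; verify it.
    from gwpy.timeseries import TimeSeries

    DCPD_SUM_CHANNEL = 'H1:OMC-DCPD_SUM_OUT_DQ'   # assumed channel name
    LOW_THRESHOLD_MA = 1.0                        # per this entry, <1 mA lost lock

    def dcpd_sum_dropped_low(lockloss_gps, lookback=30):
        """Return True if the DCPD sum dipped below threshold before the lockloss."""
        data = TimeSeries.get(DCPD_SUM_CHANNEL,
                              lockloss_gps - lookback, lockloss_gps)
        return data.value.min() < LOW_THRESHOLD_MA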

Images attached to this report
H1 SEI
ryan.crouch@LIGO.ORG - posted 12:19, Wednesday 19 February 2025 (82911)
FAMIS ISI CPS check

HAM7 & 8 look less noisy at high frequency, same with the BSCs, especially ETMY.

Non-image files attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 11:00, Wednesday 19 February 2025 (82908)
SQZ pico controller has been on

Nutsinee pointed us to 66765, where the squeezer pico was left on and caused a large line.

Indeed, the HAM7/SQZT7 pico has been on since Jan 7th, when we were making homodyne measurements 82153. This must have been accepted in SDF, but I don't see any alog about it on the 7th.

LHO VE
david.barker@LIGO.ORG - posted 10:11, Wednesday 19 February 2025 (82906)
Wed CP1 Fill

Wed Feb 19 10:05:46 2025 INFO: Fill completed in 5min 43secs

TCmins [-62C, -60C] OAT (+1C, 34F) DeltaTempTime 10:05:47

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 08:33, Wednesday 19 February 2025 (82903)
Lock loss 1556UTC and commissioning start

We had a lock loss at 1556UTC (1424015786) tagged as an ETM glitch. We are going to take this "opportunity" to start moving the spot on PR2 in the direction to find the other edge of clipping. After this we will try to lock again and work on PRCL OLG, SQZ, and other issues we can address within the commissioning window of 1700-2000UTC (9-12PT).

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 08:19, Wednesday 19 February 2025 (82902)
CDS Maintenance Summary: Tuesday 18th February 2025

WP12321 Add FMCSSTAT channels to EDC

Erik, Dave:

Recently FMCS-STAT was expanded to monitor FCES, EX and EY temperatures. These additional channels were added to the H1EPICS_FMCSSTAT.ini file. A DAQ and EDC restart was required.

WP12339 TW1 raw minute trend file offload

Dave:

The copy of the last 6 months of raw minute trends from the almost full SSD-RAID on h1daqtw1 was started. h1daqnds1 was temporarily reconfigured to serve these data from their temporary location while the copy proceeds.

A restart of h1daqnds1 daqd was needed, this was done when the DAQ was restarted for EDC changes.

WP12333 New digivideo server and network switch, move 4 cameras to new server

Jonathan, Patrick, Fil, Dave:

A new Cisco POE network switch, called sw-lvea-aux1, was installed in the CER below the current sw-lvea-aux. This is a dual-powered switch; both power supplies are DC powered. Note that sw-lvea-aux has one DC and one AC power supply; this has been left unchanged for now.

Two multimode fiber pairs were used to connect sw-lvea-aux1 back to the core switch in the MSR.

For testing, four relatively unused cameras were moved from h1digivideo1 to the server h1digivideo4. These are MC1 (h1cam11), MC3 (h1cam12), PRM (h1cam13) and PR3 (h1cam14).

The new server IOC is missing two EPICS channels (_XY and _AUTO) compared with the old IOC. To green up the EDC due to these missing channels, a dummy IOC is being run (see alog).

The MC1, MC3, PRM and PR3 camera images on the control room FOM (nuc26) started showing compression issues, mainly several seconds of smeared green/magenta horizontal stripes every few minutes. This was tracked to the CPU resources being maxed out, and has been temporarily fixed by stopping one of these camera viewers.

EY Timing Fanout Errors

Daniel, Marc, Jonathan, Erik, Ibrahim, Dave:

Soon after lunchtime the timing system started flashing RED on the CDS overview. Investigation tracked this down to the EY fanout, port_5 (numbering from zero, so the sixth physical port). This port sends the timing signal to h1iscey's IO Chassis LIGO Timing Card.

Marc and Dave went to EY at 16:30 with spare SFPs and timing card. After swapping these out with no success, the problem was tracked to the fanout port itself. With the original SFPs, fiber and timing card, using port_6 instead of port_5 fixed the issue.

For the initial SFP switching, we just stopped all the models on h1iscey (h1iopiscey, h1iscey, h1pemey, h1caley, h1alsey). Later, when we replaced the timing card, h1iscey was fenced from the Dolphin fabric and powered down.

The operator put all EY systems (SUS, SEI and ISC) into a safe mode before the start of the investigation.

DAQ Restart

Erik, Dave:

The 0-leg restart was non-optimal. A new EDC restart procedure was being tested, whereby both trend-writers were turned off before h1edc was restarted, to prevent channel hopping, which causes outlier data.

The reason for the DAQ restart was an expanded H1EPICS_FMCSSTAT.ini

After the restart of the 0-leg it was discovered that there were some naming issues with the FMCS STAT FCES channels. Erik regenerated the H1EPICS_FMCSSTAT.ini and the EDC/0-leg were restarted again.

Following both 0-leg restarts, FW0 spontaneously restarted itself after running only a few minutes.

When the EDC and the 0-leg were stable, the 1-leg was restarted. During this restart NDS1 came up with a temporary daqdrc serving TW1 past data from its temporary location.

Reboots/Restarts

Tue18Feb2025
LOC TIME HOSTNAME     MODEL/REBOOT
09:45:03 h1susauxb123 h1edc[DAQ] <<< first edc restart, incorrect FCES names
09:46:02 h1daqdc0     [DAQ] <<< first 0-leg restart
09:46:10 h1daqtw0     [DAQ]
09:46:11 h1daqfw0     [DAQ]
09:46:12 h1daqnds0    [DAQ]
09:46:19 h1daqgds0    [DAQ]
09:47:13 h1daqgds0    [DAQ] <<< GDS0 needed a restart
09:52:58 h1daqfw0     [DAQ] <<< Spontaneous FW0 restart


09:56:21 h1susauxb123 h1edc[DAQ] <<< second edc restart, all channels corrected
09:57:44 h1daqdc0     [DAQ] <<< second 0-leg restart
09:57:55 h1daqfw0     [DAQ]
09:57:55 h1daqtw0     [DAQ]
09:57:56 h1daqnds0    [DAQ]
09:58:03 h1daqgds0    [DAQ]


10:03:00 h1daqdc1     [DAQ] <<< 1-leg restart
10:03:12 h1daqfw1     [DAQ]
10:03:13 h1daqnds1    [DAQ]
10:03:13 h1daqtw1     [DAQ]
10:03:21 h1daqgds1    [DAQ]
10:04:07 h1daqgds1    [DAQ] <<< GDS1 restart


10:04:48 h1daqfw0     [DAQ] <<< Spontaneous FW0 restart


17:20:37 h1iscey      ***REBOOT*** <<< power up h1iscey following timing issue on fanout port
17:22:17 h1iscey      h1iopiscey  
17:22:30 h1iscey      h1pemey     
17:22:43 h1iscey      h1iscey     
17:22:56 h1iscey      h1caley     
17:23:09 h1iscey      h1alsey     
 

LHO General
thomas.shaffer@LIGO.ORG - posted 07:33, Wednesday 19 February 2025 (82901)
Ops Day Shift Start

TITLE: 02/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 141Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
    SEI_ENV state: USEISM
    Wind: 1mph Gusts, 0mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.68 μm/s
QUICK SUMMARY: Rough night for locking. We are currently in a 45 min lock, range trending down. useism is peaking above 1 um/s, so we might have a rough day as well. There is planned commissioning time today from 1700-2000UTC (9-12PT).

H1 General
ryan.crouch@LIGO.ORG - posted 06:56, Wednesday 19 February 2025 - last comment - 07:04, Wednesday 19 February 2025(82899)
OPS OWL assistance

H1 called for help again at 14:50 UTC because the TO_NLN timer expired. By the time I logged in, we were ready to go into Observing.

14:53 UTC Observing

Comments related to this report
ryan.crouch@LIGO.ORG - 07:04, Wednesday 19 February 2025 (82900)

There was a high-stage lockloss between the last 2 acquisitions, at state 558.

H1 General (CAL, SQZ)
ryan.crouch@LIGO.ORG - posted 01:28, Wednesday 19 February 2025 - last comment - 15:33, Tuesday 11 March 2025(82898)
OPS OWL report SDF diffs

To get into Observing I had to accept some SDF diffs for SQZ and PCALY. There was also still a PEM CS excitation point open. There was a notification about a PCALY OFS servo malfunction, so I looked at it; it was railed at -7.83, so I toggled it off and back on, which brought it back to a good value. I also did not receive a call; a voicemail just appeared.

09:21 UTC observing

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 11:18, Wednesday 19 February 2025 (82909)SQZ

H1:SQZ-LO_SERVO_IN1GAIN was left at -15 by accident; reverted to -12 and saved in SDF.

francisco.llamas@LIGO.ORG - 15:33, Tuesday 11 March 2025 (83310)

DriptaB, FranciscoL

SDF diffs for PCALY were incorrect. The timing of these changes matches the h1iscey reboot done the same day (82902). Today, around 19:00 UTC (almost three weeks later), we used EPICS values from the Pcal calibration update done in September (80220) to revert the changes. Saved the changes in OBSERVE and SAFE.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:07, Tuesday 18 February 2025 (82897)
OPS Eve Shift Summary

TITLE: 02/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:

IFO is in MAINTENANCE and LOCKING_ALS

Locking has been near-impossible due to microseism and opportunistic earthquakes.

Essentially I have been dealing with the same issues the whole shift. IFO alignment is actually quite good, since we have caught DRMI over 10 times automatically and very quickly, but we end up losing lock due to clear instability caused by EQs that are exacerbated by secondary microseism of over 1 micron/sec.

Three of this shift's EQs have been over 5.0, and we have lost lock around the arrival of the S or R waves each time, with 2 of those being at TRANSITION_FROM_ETMX, around an hour into acquisition, resulting in the long lock acquisitions we're experiencing. Other than TRANSITION, we've lost lock at or before DRMI with ASC and LSC signals oscillating a lot.

As I type this, we just lost lock for the 3rd time at around LOWNOISE_ESD_ETMX, so there is potentially something wrong with this state specifically, although CPS signals show noise at the same time as the lockloss. The good news is that I haven't been experiencing ALS issues at all this shift.

LOG:

Start Time System Name Location Laser_Haz Task Time End
19:34 SAF Laser Haz LVEA YES LVEA is laser HAZARD!!! 06:13
15:57 FAC Kim, Nelly EY, EX, FCES n Tech clean 18:40
17:27 CDS Jonathan, Fil, Austin MSR, CER n Move temp switch and camera server 20:10
17:28 VAC Ken EY n Disconnect compressor electrical 19:49
17:35 VAC Travis EX n Compressor work 18:33
17:46 VAC Gerardo, Jordan MY n CP4 check 19:16
17:49 SQZ Sheila, Camilla LVEA YES SQZ table meas. 19:39
18:19 SUS Jason, Ryan S LVEA YES PR3 OpLev recentering at VP 21:05
18:19 SUS Matt, TJ CR n ETMX TFs for rubbing 21:21
18:37 FAC Tyler Opt Lab n Check on lab 19:03
18:39 CDS Erik CER n Check on switch 19:16
18:41 FAC Kim LVEA yes Tech clean 19:51
18:56 VAC Travis Opt Lab n Moving around flow bench 19:03
18:58 PEM Robert LVEA yes Check on potential view ports 19:51
19:04 FAC Tyler Mids n Check on 3IFO 19:25
20:21 VAC Gerardo LVEA yes Checking cable length on top of HAM6 20:50
20:27 CDS Fil EY n Updating drawings 22:14
20:46 PEM Robert LVEA yes Viewport checks 21:26
20:51 VAC Janos EX n Mech room work 21:25
21:18 OPS TJ LVEA - Sweep 21:26
21:19 FAC Chris X-arm n Check filters 22:54
22:44 PEM Robert LVEA yes Setup tests 00:06
22:46 SQZ Sheila, Camilla, Matt LVEA yes SQZ meas at racks 00:06
00:19 CDS Dave, Marc EY N Timing card issue fix 01:19
H1 CDS
david.barker@LIGO.ORG - posted 21:39, Tuesday 18 February 2025 (82896)
Dummy IOC running to green up the EDC

Jonathan, Patrick, Dave:

Following the move of the MC1, MC3, PRM and PR3 cameras from the old h1digivideo1 server to h1digivideo4, which runs the new EPICS IOC, two channels per camera were no longer present. This meant the EDC was running with 8 disconnected channels.

To "green up" the EDC until such time as we can restart the DAQ, I am running a dummy IOC on cdsws33 which serves these channels.

These cameras have sequential numbers, CAM[11-14], and the eight channels in question are:

H1:VID-CAM11_AUTO 
H1:VID-CAM11_XY
H1:VID-CAM12_AUTO
H1:VID-CAM12_XY
H1:VID-CAM13_AUTO
H1:VID-CAM13_XY
H1:VID-CAM14_AUTO 
H1:VID-CAM14_XY   
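For illustration, a placeholder server along these lines could be written with pcaspy; this is only a hypothetical sketch that serves static values for the eight channels, not the actual dummy IOC running on cdsws33.

    # Hypothetical placeholder-IOC sketch (not the actual dummy IOC on cdsws33).
    # Serves static values for the eight missing camera channels so the EDC
    # sees them as connected.
    from pcaspy import SimpleServer, Driver

    PREFIX = 'H1:VID-'
    pvdb = {}
    for cam in range(11, 15):
        pvdb['CAM%d_AUTO' % cam] = {'type': 'int', 'value': 0}
        pvdb['CAM%d_XY' % cam] = {'type': 'float', 'value': 0.0}

    class DummyDriver(Driver):
        """No custom behavior; the PVs just hold their default values."""
        pass

    if __name__ == '__main__':
        server = SimpleServer()
        server.createPV(PREFIX, pvdb)
        driver = DummyDriver()
        while True:
            server.process(0.1)   # serve channel access requests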
 

H1 CDS
david.barker@LIGO.ORG - posted 21:34, Tuesday 18 February 2025 - last comment - 11:57, Wednesday 19 February 2025(82895)
EY timing error tracked to bad timing fanout port

Daniel, Patrick, Jonathan, Erik, Fil, Ibrahim, Marc, Dave:

Starting around lunchtime, and getting more frequent after 2pm, the timing system was showing errors with EY's fanout port_5 (the sixth port). This port sends timing to h1iscey's IO Chassis timing card.

At EY, Marc and I replaced the SFPs in the fanout port_5 and in h1iscey's timing card. At this point we could not get port_5 to sync. We tried replacing the timing card itself, but no sync was possible using the new SFPs. Installing the original SFPs restored the sync, but the timing problem was still there. Moving to the unused port_6 (the seventh physical port) of the fanout fixed the problem. We put the original timing card back into the IO Chassis, so at this point all the hardware was original and the fanout SFP had been moved from port_5 to port_6.

 

Comments related to this report
david.barker@LIGO.ORG - 11:57, Wednesday 19 February 2025 (82910)
H1 CAL (CAL)
vladimir.bossilkov@LIGO.ORG - posted 10:03, Tuesday 18 February 2025 - last comment - 12:03, Thursday 20 February 2025(82878)
Calibration sweeps losing lock.

I reviewed the weekend lockloss where lock was lost during the calibration sweep on Saturday.

I've compared the calibration injections and what DARM_IN1 is seeing [ndscopes], relative to the last successful injection [ndscopes].
Looks pretty much the same, but DARM_IN1 is even a bit lower because I've excluded the last frequency point in the DARM injection, which sees the least loop suppression.

It looks like this time the lockloss was a coincidence. BUT. We desperately need to get a successful sweep to update the calibration.
I'll be reverting the cal sweep INI file, in the wiki, to what was used for the last successful injection (even though it includes that last point which I suspected caused the last 2 locklosses), out of an abundance of caution, hoping the cause of the locklosses is something more subtle that I'm not yet catching.

Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 09:08, Wednesday 19 February 2025 (82904)

Despite the lockloss, I was able to use the log file saved in /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/ (the log file used as input into simulines.py) to regenerate the measurement files.

As you can imagine, the points where the data is incomplete are missing, but 95% of the sweep is present and the fitting all looks great.
So it is in some way reassuring that, in case we lose lock during a measurement, the data gets salvaged and processed just fine.

Report attached.

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 12:03, Thursday 20 February 2025 (82933)CAL

How to salvage data from any failed simulines injection attempt:

  • simulines silently dumps log files into this directory: /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/{IFO}/ for IFO=L1,H1
  • navigating there, you will be greeted by a log of the outputs of simulines for every single time it has ever been run. The one you are interested in can be identified by the time, as the file name format is the same as the measurement and report directory time-name format.
  • running the following will automagically populate .hdf5 files in the calibration measurement directories that the 'pydarm report' command searches in for new measurements:
    • './simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/{time-name}.log'
    • for time-name resembling 20250215T193653Z
    • where './simuLines.py' is the simulines executable and can have some full path like the calibration wiki does: './ligo/groups/cal/src/simulines/simulines/simuLines.py'
LHO General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:01, Friday 14 February 2025 - last comment - 10:16, Wednesday 19 February 2025(82824)
OPS Eve Shift Summary

TITLE: 02/15 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Locking
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

IFO is in NLN and OBSERVING as of 04:16 UTC.

EDIT: IFO is LOCKING at SDF_REVERT after a 05:59 UTC Lockloss

Overall calm shift with one lockloss due to an Earthquake (alog 82821).

Other:

LOG:

None

Comments related to this report
david.barker@LIGO.ORG - 10:16, Wednesday 19 February 2025 (82907)

Lockloss is coincident with a vacuum glitch at the X-arm beamtube ion pump at X6 (100m from EX).

Images attached to this comment
H1 CAL (Lockloss)
ryan.crouch@LIGO.ORG - posted 18:10, Monday 10 February 2025 - last comment - 09:17, Wednesday 19 February 2025(82704)
Calibration locklosses

I took a look at the locklosses during the calibration measurements over the past week. Looking at DARM right before the locklosses, both times a large feature grows around ~42 Hz right before the lockloss. Sat LL Thur LL

Thursday:

DARM_IN was dominated by the 42 Hz long oscillation and a ~505 Hz short oscillation until the LL; DARM_OUT was dominated by the violin harmonic at ~1020 Hz.

Saturday:

DARM_IN had a long and a short oscillation, at the fundamental violin modes (~510 Hz) and at ~7.5 Hz; DARM_OUT was dominated by the violin harmonic at ~1020 Hz.

I'm not sure how to/where to see exactly what frequencies the simulines were injecting during and before the lockloss.
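For what it's worth, one way to check what was being driven is to pull a channel around the lockloss and look at a spectrogram. Below is a rough gwpy sketch; the _DQ channel name for the DARM_IN1 signal shown in the scopes is an assumption, the GPS time is just an example taken from this page, and ideally one would look at the actual excitation channels listed in the simulines INI/log files instead.

    # Rough sketch of inspecting injected frequencies around a lockloss.
    # The channel name is an assumption; the real excitation channels are
    # listed in the simulines INI/log files.
    from gwpy.timeseries import TimeSeries

    CHANNEL = 'H1:LSC-DARM_IN1_DQ'   # assumed _DQ name for DARM_IN1
    lockloss_gps = 1424015786        # example GPS time taken from this page

    data = TimeSeries.get(CHANNEL, lockloss_gps - 120, lockloss_gps)
    specgram = data.spectrogram(stride=4, fftlength=4, overlap=2) ** (1/2.)
    plot = specgram.plot(norm='log')
    plot.savefig('darm_in1_spectrogram.png')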

Images attached to this report
Comments related to this report
vladimir.bossilkov@LIGO.ORG - 10:03, Tuesday 11 February 2025 (82739)

Looking into what's going awry.
I pushed for a change of calibration sweep amplitudes on the Pcal and the PUM (which had been tested a couple of months back), which was added to the calibration sweep wiki last week, labeled informatively as "settings_h1_20241005_lowerPcal_higherPUM.ini".

Both of these sweeps were very near the end, where Pcal is driving at 7.68 Hz and PUM is driving at either 42.45 Hz or 43.6 Hz, which should clarify the source of the signals you are pointing out in this aLog.

The driving amplitude of the Pcal at 7.68 Hz is about 20% lower than in the injections that were being run the week before, deliberately done to reduce kicking the Pcal during ramping and thus broadband coupling into DARM, which would affect other measurement frequencies, like the L1 stage which is driving at ~12 Hz at this time.
The driving amplitude of the PUM at ~42 Hz is unchanged from injections that had been running up until last week.

Not seeing any SUS stage saturating at the lock losses. Presently unconvinced the lock losses are related to the new sweep parameters.

vladimir.bossilkov@LIGO.ORG - 12:14, Tuesday 11 February 2025 (82746)

Both locklosses coincided with the ramping ON of the final DARM1_EXC at 1200 Hz

vladimir.bossilkov@LIGO.ORG - 09:17, Wednesday 19 February 2025 (82905)CAL

Tagging CAL properly
