Reports until 20:04, Wednesday 29 January 2025
H1 General
oli.patane@LIGO.ORG - posted 20:04, Wednesday 29 January 2025 (82541)
Ops EVE Midshift Status

Currently Observing at 156Mpc and have been Locked for over 3.5 hours. Quiet evening, nothing to report.

LHO General
corey.gray@LIGO.ORG - posted 16:34, Wednesday 29 January 2025 (82524)
Wed DAY Ops Summary

TITLE: 01/29 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:

Had 2 locklosses this shift which were not trivial.  The first lockloss had issues with ALS & Finding IR; this was eventually addressed with an Initial Alignment, but it still took about 2.5 hrs to lock.  After the 2nd lockloss, ALSX had issues with its WFS not engaging (this happened both during locking and a Manual Alignment), but once an Initial Alignment was run the ALSX WFS were fine.

EX HVAC continues to run with limited heater coils, but temperatures are OK.  If there is an issue, we should contact Tyler (he has not heard back from the vendor about purchasing more heater coils).
LOG:

H1 General
oli.patane@LIGO.ORG - posted 16:17, Wednesday 29 January 2025 - last comment - 16:32, Wednesday 29 January 2025(82539)
Ops Eve Shift Start

TITLE: 01/30 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 2mph Gusts, 0mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.37 μm/s
QUICK SUMMARY:

Currently relocking and at MOVE_SPOTS. Relocking has been going okay this time, aside from a bit of waiting at the beginning for ALS to behave and one instance where the ALSX WFS didn't turn on because ALSX had locked on the wrong mode.

Comments related to this report
oli.patane@LIGO.ORG - 16:32, Wednesday 29 January 2025 (82540)

00:30 UTC Observing

H1 TCS
thomas.shaffer@LIGO.ORG - posted 14:48, Wednesday 29 January 2025 (82538)
TCS Chiller Water Level Top Off - Biweekly

FAMIS27807

No water was added; both levels looked to be at the top of their reservoirs. All filters looked clean, and there was no water in the Dixie leak detection device.

Interestingly, water hasn't been added since Jan 7 according to the T2200289 sheet, but levels have stayed the same.

H1 SUS
oli.patane@LIGO.ORG - posted 14:24, Wednesday 29 January 2025 (82537)
In-Lock SUS Charge Measurements FAMIS

Closes FAMIS#28390, last checked 82163

Measurements analyzed for the data taken January 28th. Nothing looking too wild.

ITMX  ITMY  ETMX  ETMY

Images attached to this report
H1 SEI
thomas.shaffer@LIGO.ORG - posted 14:14, Wednesday 29 January 2025 (82536)
H1 ISI CPS Noise Spectra Check - Weekly

FAMIS26028

As noted in the last few checks, ITMX ST1 V1 and ITMY ST1 V2 seem to be higher than other dofs, and this time they are slightly above last week's check as well (alog82366).

Non-image files attached to this report
H1 General
corey.gray@LIGO.ORG - posted 11:38, Wednesday 29 January 2025 (82533)
Lockloss at 1722utc

Lockloss (ETMx Glitch tag) at 1722utc.

Relocking was not trivial.  Had issues with consecutive LOCKING ALS locklosses (pauses appeared to help), then had ALS COMM Fine Tuning issues where TRX could not get above 0.6 (which is too low).  Ultimately, a full Initial Alignment did the trick.  Now finally getting ready to power up H1.  It's been over 2 hrs so far; hoping to make it back up within 45 min.

H1 PSL
jason.oberling@LIGO.ORG - posted 10:15, Wednesday 29 January 2025 (82530)
PSL ISS RefSignal Adjustment

Saw Corey's note re: "ISS Diffracted Power Low," so I took the opportunity provided by the recent lockloss to adjust the ISS RefSignal.  The RefSignal changed from -1.90 V to -1.88 V, which brought the diffracted power % to ~4% from just below 3%.  The diffracted power % changed due to the enclosure slowly returning to thermal equilibrium after yesterday's FSS tune-up, which changed the amount of power transmitted through the PMC.

LHO VE
david.barker@LIGO.ORG - posted 10:10, Wednesday 29 January 2025 (82529)
Wed CP1 Fill

Wed Jan 29 10:05:45 2025 INFO: Fill completed in 5min 42secs

TCmins[-66C, -65C] OAT (-3C, 27F) DeltaTempTime 10:05:47. TC-B is nominal again.

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:45, Wednesday 29 January 2025 (82527)
Slow channel outliers at time of DAQ restart (11:30 Tuesday) if using default NDS

If you are trending slow channels which are acquired by the EDC spanning the time of the EDC+DAQ restart yesterday (11:30 Tue 28 Jan 2025) and you are using the default NDS (h1daqnds1) you may see outlier data for a period of about 10 minutes. This is because we staggered the DAQ restart to preserve front-end data, starting with the 0-leg and EDC first, and then the 1-leg later after fw0 was back to writing frames. 

The EDC change yesterday was to add/remove mechanical room vacuum channels, and the vacuum systems are at the beginning of the EDC DAQ configuration file. The result is that after h1edc was restarted at 11:28, from h1daqfw1's perspective almost all the EDC channels moved relative to each other (a channel hop), meaning that for about 10 minutes each channel's data was actually being written under a nearby channel's name.

h1daqfw0 does not have this issue because h1edc was restarted while it was down, such that when it started writing frames again it was running with the new h1edc configuration. Ditto for raw minute trends on h1daqtw0.
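The channel-hop effect can be sketched in a few lines. This is purely illustrative, not actual DAQ code; the channel names and the index-based writer are made up to show why inserting a channel near the front of the list shifts every later channel's data under its neighbor's name:

```python
# Writer built its mapping from the OLD channel ordering.
old_order = ["VAC_A", "VAC_B", "TEMP_ROOF", "WIND"]

# Producer (restarted EDC) now sends samples in the NEW ordering,
# with a channel added near the front of the configuration file.
new_order = ["VAC_A", "VAC_NEW", "VAC_B", "TEMP_ROOF", "WIND"]
stream = [f"data_{name}" for name in new_order]

# The stale writer labels incoming samples by their old positions:
mislabelled = {old_order[i]: stream[i] for i in range(len(old_order))}

# TEMP_ROOF now carries VAC_B's data -- an apparent outlier for
# anyone trending TEMP_ROOF across the restart.
print(mislabelled["TEMP_ROOF"])  # data_VAC_B
```

Once the writer is restarted with the new configuration (as happened naturally for h1daqfw0), the indices agree again and the mislabelling stops.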

Switching your NDS from the default h1daqnds1 to h1daqnds0 removes this issue. The attached plots show a 24 hour trend of the outside temperature in degC obtained from nds1 and nds0. The nds1 command is:

ndscope --light -t "24 hours ago" -t now H1:PEM-CS_TEMP_ROOF_WEATHER_DEGC

and the nds0 command is:

ndscope --nds h1daqnds0:8088 --light -t "24 hours ago" -t now H1:PEM-CS_TEMP_ROOF_WEATHER_DEGC


Images attached to this report
H1 PEM (OpsInfo)
corey.gray@LIGO.ORG - posted 08:08, Wednesday 29 January 2025 - last comment - 09:59, Wednesday 29 January 2025(82525)
LAB1 Dust Monitor Looks Dead & Needs Power Cycle

Ran the dust monitor check this morning and noticed LAB2 had this note:  Error: data set contains 'not-a-number' (NaN) entries

Comments related to this report
corey.gray@LIGO.ORG - 08:18, Wednesday 29 January 2025 (82526)

Dave mentioned this dust monitor has been down since Christmas time and giving odd data (vs. showing signs of being "dead"/offline).  The plan was to get it replaced, so this is a known issue.  Not sure where it is (Optics Lab or Vacuum Prep?).

ryan.crouch@LIGO.ORG - 09:59, Wednesday 29 January 2025 (82528)

The dust monitor is located in the Vacuum Prep area, which is adjacent to the PCAL lab past the garb room. I've power cycled it, restarted the IOC, and unplugged and replugged it from its network switch. It probably needs to be swapped, but I don't have any internally pumped spares left right now, only pumpless spares that hook up to vacuum pumps. There is an internally pumped dust monitor in the LVEA that is currently unused (LVEA dust monitor 5, turned off during observing as its pump is noisy) that I could potentially swap it for.

LHO General
corey.gray@LIGO.ORG - posted 07:38, Wednesday 29 January 2025 (82523)
Wed DAY Ops Transition

TITLE: 01/29 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 2mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.19 μm/s
QUICK SUMMARY:

H1's been locked 9.5hrs and even stayed locked through a local M4.2 eq off the WA/BC coast.  H1 was dropped out of Observing from 1053-1055 due to PI24.

Getting some ISS Diffracted Power Low messages every few seconds, and have had some EX glitches from Verbal over the last 4 hrs.

Looks like the EX temperature drop & recovery returned to normal around 12 hrs ago (300utc), and it's a balmy 17degF (-8degC) out!

H1 General (ISC)
oli.patane@LIGO.ORG - posted 22:06, Tuesday 28 January 2025 (82522)
Ops Eve Shift End

TITLE: 01/29 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Just started Observing a few minutes ago after relocking. Relocking went relatively well. Two locklosses during my shift and the only problem with relocking both times was waiting for ALSY and ALSX to calm down so we could lock them. ALSX has an excuse because of the temperature excursion at EX (which is now not too bad - the heating overshot a bit but is coming back down and the rate at which the temperature is changing isn't too bad anymore). However, ALSY doesn't have a good excuse and was also causing issues. We noticed this started happening around January 22nd or 23rd. Almost every lockloss, ALSY does this thing (ndscope) where it will be okay, then gets worse and unlocks, then goes back to being okay, and repeat. A few days ago we checked if it was being caused by the WFS, but the WFS seemed fine and turning them off doesn't stop ALSY from acting strange. Tagging ISC
LOG:

01:00 Lockloss
    - ALSX kept losing lock (probably due to temperature changes at EX)
    - Let ALS lock and WFS converge more before continuing
    - Stuck doing MICH so ran an initial alignment
02:59 NOMINAL_LOW_NOISE
03:02 Observing


04:45 Lockloss
    - ALSX and ALSY kept losing lock
    - Let ALS lock and WFS converge more before continuing
05:56 NOMINAL_LOW_NOISE
05:59 Observing

Start Time System Name Location Lazer_Haz Task Time End
22:41 FAC eric EX - checking on temperature drop 00:11
23:11 FAC tyler MechRoom.Mids.EY - spare heater coil search 00:35
Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 20:45, Tuesday 28 January 2025 - last comment - 21:59, Tuesday 28 January 2025(82520)
Lockloss

Lockloss @ 01/29 04:44 UTC after 1.75 hours locked

Comments related to this report
oli.patane@LIGO.ORG - 21:59, Tuesday 28 January 2025 (82521)

05:59 Observing

H1 General (Lockloss, SEI)
ryan.crouch@LIGO.ORG - posted 18:10, Sunday 26 January 2025 - last comment - 11:52, Wednesday 29 January 2025(82476)
2025-01-26 11:42 UTC Lockloss

Lockloss tool tags anthropogenic; there was a ground motion spike in the 0.3 - 1.0 Hz band that matches up with the lockloss. We lose lock right around the peak of the CS_Z motion.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 11:52, Wednesday 29 January 2025 (82534)PEM

Tagging PEM

H1 CAL (Laser Safety)
anthony.sanchez@LIGO.ORG - posted 10:29, Friday 22 November 2024 - last comment - 11:08, Wednesday 29 January 2025(81416)
PCAL Lab LASER Powered OFF for Ceiling light fixture replacement

On Tuesday NOV 19th Eric started the replacement of the ceiling light fixtures in the PCAL Lab.
Francisco and I grabbed 3 HAM Door covers to stretch out over the PCAL table to minimize dust particles on the PCAL optical bench.
I also put all the Spheres away in the cabinet and made sure that they all had their aperture covers on.
I went in to shutter the laser using the internal PCAL TX module shutter, and the shutter stopped working.
I then just powered off the laser, removed the shutter, and repaired the shutter in the EE lab.
Put in an FRS ticket: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=31730
The shutter is repaired and just waiting on the FAC team to finish the light fixture replacement before it can be reinstalled.

Update: Friday NOV 22nd, there are 2 more light fixtures in the PCAL LAB that need to be replaced. One directly above the PCAL Optics table.
PCAL Laser Remains turned off with the key out. 

Comments related to this report
anthony.sanchez@LIGO.ORG - 11:08, Wednesday 29 January 2025 (82531)

Since this event the Lab shutter inside the Tx module hasn't been reading back correctly. Today Fil and I figured out why, after replacing the switching regulator LM22676.

There are reed switches (Meder 5-B C9) on the bottom of the PCB that get switched on via a magnet mounted on the side of the shutter door.
One of these reed switches is stuck in the "closed" position, which leaves the OPEN readback LED on all the time. Parts incoming.

