Currently Observing at 156Mpc and have been Locked for over 3.5 hours. Quiet evening, nothing to report.
TITLE: 01/29 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 137Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Had 2 locklosses this shift which were not trivial. The first lockloss had issues with ALS & finding IR; this was eventually addressed with an Initial Alignment, but it still took about 2.5 hrs to lock. After the 2nd lockloss, ALSX had issues with its WFS not engaging (this happened both for locking and for a Manual Alignment), but once an Initial Alignment was run the ALSX WFS were fine.
EX HVAC continues to run with limited heater coils, but temperatures are OK. If there is an issue, we should contact Tyler (he has not heard back from the vendor about purchasing more heater coils).
LOG:
TITLE: 01/30 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 0mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.37 μm/s
QUICK SUMMARY:
Currently relocking and at MOVE_SPOTS. Relocking has been going okay this time, aside from a bit of waiting at the beginning for ALS to behave and one instance where the ALSX WFS didn't turn on because ALSX had locked on the wrong mode.
Closes FAMIS #28390, last checked in alog 82163
Analyzed the measurements taken January 28th. Nothing looks too wild.
FAMIS #26028
As noted in the last few checks, ITMX ST1 V1 and ITMY ST1 V2 seem to be higher than other dofs, and this time they are slightly above last week's check as well (alog82366).
Lockloss (ETMx Glitch tag) at 1722 UTC.
Relocking was not trivial. Had issues with consecutive LOCKING_ALS locklosses (pauses appeared to help), and then had ALS COMM fine-tuning issues where TRX could not get above 0.6 (which is too low). Ultimately, a full Initial Alignment did the trick. Now finally getting ready to power up H1. It has been over 2 hrs so far, and I hope to be back up within 45 min.
Saw Corey's note re: "ISS Diffracted Power Low," so I took the opportunity provided by the recent lockloss to adjust the ISS RefSignal. The RefSignal changed from -1.90 V to -1.88 V, which brought the diffracted power % to ~4% from just below 3%. The diffracted power % changed because the enclosure is slowly returning to thermal equilibrium after yesterday's FSS tune up, which changed the amount of power transmitted through the PMC.
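For reference, this kind of setpoint bump can also be scripted through EPICS channel access. Below is a minimal sketch using pyepics; the channel names are assumptions for illustration only (check the ISS MEDM screen for the ones actually in use):

from epics import caget, caput

# Channel names below are placeholders/assumptions, not confirmed ISS channels
refsignal_chan = 'H1:PSL-ISS_REFSIGNAL'
diffracted_chan = 'H1:PSL-ISS_DIFFRACTION_AVG'

print('RefSignal before:', caget(refsignal_chan), 'V')
print('Diffracted power before:', caget(diffracted_chan), '%')

# Step the RefSignal from -1.90 V to -1.88 V to bring diffracted power back toward ~4%
caput(refsignal_chan, -1.88)

print('Diffracted power after:', caget(diffracted_chan), '%')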
Wed Jan 29 10:05:45 2025 INFO: Fill completed in 5min 42secs
TCmins [-66C, -65C] OAT (-3C, 27F) DeltaTempTime 10:05:47. TC-B is nominal again.
If you are trending slow channels acquired by the EDC, spanning the time of the EDC+DAQ restart yesterday (11:30 Tue 28 Jan 2025), and you are using the default NDS (h1daqnds1), you may see outlier data for a period of about 10 minutes. This is because we staggered the DAQ restart to preserve front-end data, starting with the 0-leg and EDC first, and then the 1-leg later after fw0 was back to writing frames.
The EDC change yesterday was to add/remove mechanical room vacuum channels, and the vacuum system channels are at the beginning of the EDC DAQ configuration file. The result is that after h1edc was restarted at 11:28, from h1daqfw1's perspective almost all of the EDC channels moved relative to each other (a channel hop), meaning that for each channel a nearby channel's data was being written instead for about 10 minutes.
h1daqfw0 does not have this issue because h1edc was restarted while it was down, such that when it started writing frames again it was running with the new h1edc configuration. Ditto for raw minute trends on h1daqtw0.
Switching your NDS from the default h1daqnds1 to h1daqnds0 removes this issue. The attached plots show a 24 hour trend of the outside temperature in degC obtained from nds1 and nds0. The nds1 command is:
ndscope --light -t "24 hours ago" -t now H1:PEM-CS_TEMP_ROOF_WEATHER_DEGC
and the nds0 command is:
ndscope --nds h1daqnds0:8088 --light -t "24 hours ago" -t now H1:PEM-CS_TEMP_ROOF_WEATHER_DEGC
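The same nds0-vs-nds1 comparison can also be done in a script with the nds2 client. Here is a minimal sketch, assuming the nds2-client Python bindings are available; the GPS times are placeholders to be filled in for the window of interest:

import nds2

# Connect directly to the 0-leg NDS server instead of the default h1daqnds1
conn = nds2.connection('h1daqnds0', 8088)

# Placeholder GPS times; fill in the 24-hour window spanning the EDC/DAQ restart
gps_start = 1422000000
gps_stop = gps_start + 24 * 3600

# Fetch the roof temperature channel; data served from the 0-leg avoids the channel-hop outliers
bufs = conn.fetch(gps_start, gps_stop, ['H1:PEM-CS_TEMP_ROOF_WEATHER_DEGC'])
print(bufs[0].data)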
Ran the dust monitor check this morning and noticed LAB2 had this note: Error: data set contains 'not-a-number' (NaN) entries
Dave mentioned this dust monitor has been down since around Christmas and has been giving odd data (rather than showing signs of being "dead"/offline). The plan was to get it replaced, so this is a known issue. Not sure where it is located (Optics Lab or Vacuum Prep?).
The dust monitor is located in the Vacuum Prep area, which is adjacent to the PCAL lab past the garb room. I've power cycled it, restarted the IOC, and unplugged and replugged it from its network switch. It probably needs to be swapped, but I don't have any internally pumped spares left right now, only pumpless spares that hook up to vacuum pumps. There is an internally pumped dust monitor in the LVEA (LVEA dust monitor 5) that is currently unused and is turned off during observing because its pump is noisy; I could potentially swap it in.
TITLE: 01/29 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.19 μm/s
QUICK SUMMARY:
H1's been locked 9.5hrs and even stayed locked through a local M4.2 eq off the WA/BC coast. H1 was dropped out of Observing from 1053-1055 due to PI24.
Getting some ISS Diffracted Power Low messages every few seconds, and have had some EX glitches from Verbal over the last 4 hrs.
Looks like the EX temperature drop & recovery returned to normal around 12 hrs ago (0300 UTC), and it's a balmy 17 degF (-8 degC) out!
TITLE: 01/29 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Just started Observing a few minutes ago after relocking. Relocking went relatively well. There were two locklosses during my shift, and the only problem with relocking both times was waiting for ALSY and ALSX to calm down so we could lock them. ALSX has an excuse because of the temperature excursion at EX (which is now not too bad - the heating overshot a bit but is coming back down, and the rate at which the temperature is changing isn't too bad anymore). However, ALSY doesn't have a good excuse and was also causing issues. We noticed this started happening around January 22nd or 23rd. After almost every lockloss, ALSY does this thing (see ndscope) where it will be okay, then gets worse and unlocks, then goes back to being okay, and repeats. A few days ago we checked whether it was being caused by the WFS, but the WFS seemed fine, and turning them off doesn't stop ALSY from acting strange. Tagging ISC
LOG:
01:00 Lockloss
- ALSX kept losing lock (probably due to temperature changes at EX)
- Let ALS lock and WFS converge more before continuing
- Stuck doing MICH so ran an initial alignment
02:59 NOMINAL_LOW_NOISE
03:02 Observing
04:45 Lockloss
- ALSX and ALSY kept losing lock
- Let ALS lock and WFS converge more before continuing
05:56 NOMINAL_LOW_NOISE
05:59 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
22:41 | FAC | eric | EX | - | checking on temperature drop | 00:11 |
23:11 | FAC | tyler | MechRoom.Mids.EY | - | spare heater coil search | 00:35 |
Lockloss @ 01/29 04:44 UTC after 1.75 hours locked
05:59 Observing
Lockloss tool tags anthropogenic; there was a ground motion spike in the 0.3 - 1.0 Hz band that matches up with the lockloss. We lose lock right around the peak of the CS_Z motion.
Tagging PEM
On Tuesday NOV 19th, Eric started the replacement of the ceiling light fixtures in the PCAL Lab.
Francisco and I grabbed 3 HAM Door covers to stretch out over the PCAL table to minimize dust particles on the PCAL optical bench.
I also put all the Spheres away in the cabinet and made sure that they all had their aperture covers on.
I went in to shutter the laser using the internal PCAL TX module shutter, and the shutter stopped working.
I then just powered off the laser, removed the shutter, and repaired the shutter in the EE lab.
Put in an FRS ticket: https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=31730
The shutter is repaired and is just waiting for the FAC team to finish the light fixture replacement before it can be reinstalled.
Update: Friday NOV 22nd, there are 2 more light fixtures in the PCAL Lab that need to be replaced, one directly above the PCAL optics table.
PCAL Laser Remains turned off with the key out.
Since this event, the lab shutter inside the Tx module hasn't been reading back correctly. Today Fil and I figured out why after replacing the switching regulator LM22676.
There is a reed switch Meder 5-B C9 on the bottom of the PCB that gets switched on via a magnet mounted on the side of the shutter door.
One of these reed switches is stuck in the "closed" position, which leaves the OPEN readback LED on all the time. Parts incoming.
00:30 UTC Observing