I started testing the PCALX_STAT Guardian node today.
/opt/rtcds/userapps/release/cal/h1/guardian/PCALX_STAT.py
It created a new ini file, but the DAQ was not restarted after this new ini file was created.
As it currently stands, this is a draft of the final product that will be tested for a week and further refined.
This Guardian node does not make any changes to the IFO; its only job is to determine whether the PCALX arm is broken. TJ has already added it to the Guardian Ignore list.
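Since the node only reports status and never writes to the IFO, its core logic reduces to comparing readbacks against thresholds. Below is a minimal sketch of that kind of check; the function name, power values, and tolerance are entirely hypothetical illustrations, not the actual contents of PCALX_STAT.py.

```python
# Hypothetical sketch of a status-only check like a monitoring Guardian
# node might perform: compare PCAL power readbacks against a nominal
# value and report a status string, with no channel writes.

def pcal_arm_status(tx_power_w, rx_power_w, nominal_w=0.35, tol=0.2):
    """Return 'OK' if both PCAL power readbacks are within a fractional
    tolerance of nominal, else 'BROKEN'. All numbers are placeholders."""
    def within(value):
        return abs(value - nominal_w) <= tol * nominal_w
    return 'OK' if within(tx_power_w) and within(rx_power_w) else 'BROKEN'
```

In a real Guardian node this check would live inside a `GuardState.run` method and read EPICS channels, but the decision logic is the same shape.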
TITLE: 11/19 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Secondary microseism has been consistently high (above the 90th percentile) for the whole day, has risen even more in the last few hours of the shift, and has reached the top of the plot. It will likely get even worse tomorrow as the storm off the Pacific coast closes in. Wind has been pretty low today. ITMY5/6 is still kind of high.
As of 23:30 UTC we're back to sitting in idle, we unlocked the FSS, ISS, and IMC just leaving the PMC locked.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:41 | SAF | Laser | LVEA | YES | LVEA is laser HAZARD | 17:55 |
16:08 | FAC | Eric | EndX | N | Glycol piping famis task, upright fallen portapotty by FCES | 16:50 |
16:14 | TCS | TJ, Camilla | LVEA | Y | CO2Y table cable jiggling | 17:01 |
16:17 | FAC | Kim & Karen | FCES | N | Tech clean | 16:31 |
16:19 | FAC | Tyler | LVEA | N | Verify strobes in H2/PSL area | 16:28 |
16:20 | FAC | Tyler + Cascade Fire | OSB, Ends | N | Fire alarm testing | 18:54 |
16:27 | FAC | Tyler, Tristan | EndX | N | Fire alarm testing | 18:47 |
16:30 | FAC | Kim | EndX | N | Tech clean | 17:19 |
16:31 | FAC | Karen | EndY | N | Tech clean | 17:35 |
17:00 | ISC | Sheila | LVEA | Y | Unplug ISS Feed forward | 17:08 |
16:34 | VAC | Jordan | LVEA | Y->N | Prep new scroll pumps, gauge swap | 20:04 |
16:41 | FAC | Chris + Fire | LVEA | Y | Check fire ext. | 17:10 |
16:59 | VAC | Gerardo | LVEA | Y->N | Join Jordan, Scroll pumps | 20:04 |
17:03 | OPS | Camilla | LVEA | Y -> N | LASER HAZARD transition | 17:20 |
17:08 | FAC | Chris + Cascade fire | EndY | N | Fire checks, End then mid | 18:54 |
17:10 | VAC | Janos | LVEA | Y | Join VAC team | 17:53 |
17:20 | CDS | Marc, Fil | EndX | N | DAC swap | 18:12 |
16:45 | EPO | Corey+1 | EndY | N | Pictures | 17:15 |
17:30 | FAC | Kim | FCES | N | Tech clean | 18:00 |
17:32 | VAC | Janos | LVEA | N | Pump/Gauge work | 18:35 |
17:36 | FAC | Karen | FCES | N | Tech clean | 18:00 |
17:42 | OPS | Oli | LVEA | N | Grab power meter | 17:47 |
17:47 | SEI | Jim | CR | N | BS and ITM sei tests | 18:22 |
17:00 | ISC | Daniel | LVEA | Y | Investigations, scope installation | 17:52 |
17:56 | SAF | LASER | LVEA | N | LVEA is LASER SAFE | 20:46 |
17:57 | ALS | Camilla, Oli | EndY | Y | Beam profiling | 19:54 |
18:01 | FAC | Richard | LVEA | N | Walk around | 18:21 |
18:02 | ISC | Keita | LVEA | N | Cable checks | 18:29 |
18:16 | ISC | Sheila | CER | N | Plug and unplug FF cable | 18:40 |
18:22 | VAC | Janos | EndX then EndY | N | Scroll pumps | 19:25 |
18:22 | SEI | Jim | LVEA Biergarten / CR | N | working on Seismic sub systems | 19:55 |
17:30 | EPO | Corey+1 | LVEA | N | B-Roll | 18:25 |
18:37 | FAC | Kim & Karen | LVEA | N | Tech clean, Kim out 19:37 | 19:53 |
18:54 | FAC | Tyler+Cascade | Mids | N | Fire checks | 20:20 |
19:25 | VAC | Janos | LVEA | N | Join VAC team | 20:04 |
19:56 | SEI | Jim | LVEA | N | Take pictures | 20:06 |
20:03 | OPS | Oli | LVEA | N | Return power meter | 20:11 |
20:11 | CAL | Tony | PCAL lab | Y | Make it laser safe | 21:07 |
20:40 | CAL | Francisco | PCAL lab | Y | PCAL work | 22:28 |
20:43 | OPS/TCS | TJ | LVEA | N-> Y -> N | HAZARD TRANSITION then CO2Y table adjustment, back to safe | 21:13 |
20:48 | TCS | Camilla | LVEA | Y | Join TJ, CO2Y table | 21:13 |
21:17 | SAF | LVEA LASER SAFE | LVEA | N | LVEA IS LASER SAFE | 01:17 |
21:31 | ISC | Keita | LVEA | N | AS_B SEG3 testing | 21:49 |
21:47 | OPS | Oli | LVEA | N | Sweep | 22:13 |
23:37 | PSL | RyanS, Jason | CER, PSL racks | N | Pull cable |
IFO/Locking:
Some of the maintenance work completed includes:
We did not do a DAQ restart
Oli, Camilla WP12203. Repeat of some of the work done in 2019: EX: 52608, EY: 52636, older: part 1, part 2, part 3.
We misaligned ITMY and turned off the ALS-Y QPD servo with H1:ALS-Y_PZT_SWITCH and placed the Ophir Si scanning slit beam profiler to measure both the 532nm ALSY outgoing beam and the ALSY return beam in the HWS path.
The outgoing beam was a little oblong in the measurements but looked pretty clean and round by eye; the return beam did not! Photos of the outgoing and return beams are attached. The outgoing beam was 30mW, the return beam 0.75mW.
Attached are the 13.5% and D4sigma measurements; I also have photos of the 50% measurements if needed. Distances are measured from the optic where the HWS and ALS beams combine, ALS-M11 in D1400241.
We had previously removed HWS-M1B and HWS-M1C and translated HWS-M1A from what's shown in D1400241-v8 to remove clipping.
TJ, Camilla
We expanded on these measurements today and measured the positions of the lenses and mirrors in both ALS and HWS beampaths and took beamscan data further from the periscope, where the beam is changing size more. Data attached for today and all data together calculated from the VP. Photo of the beamscanner in the HWS return ALS beam path also attached.
Oli, Camilla
Today we took some beam measurements between ALS-L6 and ALS-M9; these are in the attached documents with today's data and all the data. The horizontal A1 measurements seemed strange before L6. We're unsure why, as further downstream, where the beam is larger and easier to see by eye, it looks round.
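Beamscan widths like the D4sigma data above are typically fit to the Gaussian beam-width formula w(z) = w0*sqrt(1 + ((z - z0)/zR)^2). Here is a sketch of such a fit with made-up placeholder data (not the actual ALS measurements), assuming a 532 nm beam:

```python
import numpy as np
from scipy.optimize import curve_fit

def beam_width(z, w0, z0, lam=532e-9):
    """Gaussian beam radius vs distance; w0 = waist, z0 = waist location."""
    zR = np.pi * w0**2 / lam          # Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

# placeholder scanner data: distances [m] and widths [m] with 1% noise
rng = np.random.default_rng(0)
z = np.array([0.1, 0.3, 0.5, 0.8, 1.2])
w = beam_width(z, 250e-6, 0.2) * (1 + 0.01 * rng.standard_normal(z.size))

# fit waist size and position (wavelength held fixed at the default)
popt, _ = curve_fit(beam_width, z, w, p0=[200e-6, 0.0])
w0_fit, z0_fit = popt
```

Fitting both horizontal and vertical cuts separately would also flag astigmatism like the oblong outgoing beam noted above.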
TITLE: 11/19 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: USEISM
Wind: 10mph Gusts, 8mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 1.47 μm/s
QUICK SUMMARY: Due to very rapidly rising microseism, H1 will not be locking this evening or overnight, which allows us to run a long test for PSL glitch hunting by leaving just the PMC locked (no FSS, ISS, IMC). The DB37 cable was unplugged from the FSS fieldbox at 23:45 UTC.
Camilla C., TJ
Recently, the CO2Y laser that we replaced on Oct 22 has been struggling to stay locked for long periods of time (alog81271 and trend from today). We've found loose or bad cables in the past that have caused us issues, so we went out on the table today to double check that they are all OK.
The RF cables that lead into the side of the laser can slightly impact the output power when wiggled, in particular the ones with a BNC connector, but not to the point that we think it would be causing issues. The only cable that we found loose was for the PZT that goes to the head of the laser. The male portion of the SMA that comes out of the laser head was loose and cannot be tightened from outside of the laser. We verified that the connection from this to the cable was good, but wiggling it did still introduce glitches in the PZT channel. I don't think we've convinced ourselves this is the problem, though, because the PZT doesn't seem to glitch when the laser loses lock; instead it runs away.
An unfortunate consequence of the cable wiggling was that one of the Beckhoff plugs at the feedthrough must have been unseated slightly, causing our mask flipper readbacks to read incorrectly. The screws for this plug were not working, so we just pushed the plug back in to fully seat it and all seemed to work again.
We are still not sure why we've been having these lock losses lately; the 2nd and 3rd attachments show a few of them from the last day or so. They remind me of back in 2019 when we saw this - example1, example2. The fix was ultimately a chiller swap (alog54980), but the flow and water temperature seem more stable this time around. Not completely ruling it out yet, though.
We've only had two relocks in the last two weeks since we readjusted the cables. This is within its normal behavior. I'll close FRS32709 unless this suddenly becomes unstable again. Though there might be a larger problem of laser stability, I think closing this FRS makes sense since it references a specific instance of instability.
Both X & Y tend to have long stretches where they don't need to relock, and periods where they have issues staying locked (attachment 2). Unless there are obvious issues with chiller supply temperature, loose cables, wrong settings, etc., I don't think we have a great grasp of why it loses lock sometimes.
LVEA has been swept. Just had to unplug a(n already off) PEM function generator and turn off the lights for the HAM5/6 giant cleanroom.
Today we swapped the PT154 gauge on the FC-A section that was reported faulty in alog 81078 with a brand new gauge, same make/model.
FCV1 & 2 were closed to isolate the FCA section, and the angle valve on the A2 cross closed to isolate the gauge. Volume was vented with dry nitrogen and gauges swapped. CF connection was helium leak checked, no He signal above the HLD background ~2e-10 Torr-l/s.
Closing WP 12200
Set up a scope near the PSL rack. The channels are FSS test2, PMC mixer out, ISS PDB, and IMC servo test 1. The trigger has been connected to the IMC REFL shutter. The shutter usually triggers upon a lock loss, or when more than ~4W is detected in reflection of the IMC.
On 22 Nov 2024 around 20:30 PT, I disconnected the remote scope and all its input BNCs, and I unplugged the power strip for the PSL remote scope + 785 + Agilent because Ryan was close to relocking.
Sheila, writing for a large crew (Jason, Vicky, Daniel, Keita, Marc, Rick)
This morning we spent about 3 hours with the IMC offline and the ISS autolocker requested to off (from 16:30 UTC), then unplugged the IMC feedback to the IMC VCO (from 17:08 UTC to 11:54 UTC).
In the attached screenshot you can see a few large disturbances during this time, which show up with an amplitude of 0.1 on the PMC mixer channel, large oscillations in the FSS fast monitor (30 counts), dips in the reference cavity transmission, and glitches in the PMC high voltage. The second screenshot shows a zoomed-in version of the problem times.
We also noticed that unplugging the feedback from the IMC to the VCO changed the reference cavity transmitted power by about 1%. Daniel suggests this might be OK because the change in laser frequency causes a small change in alignment out of the AOM.
We set the FSS autolocker to off at 19:54 UTC. This doesn't actually disable the FSS, but we can track times when the reference cavity would go through resonance by watching the trans PD. At 21:19 UTC, Ryan Crouch started locking the IFO as the winds are supposed to be calm this afternoon (we did not see any glitches in this short test but need a longer one).
We plan to continue this PMC-only test overnight when the wind is supposed to come back up.
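For hunting disturbances like the 0.1-amplitude events on the PMC mixer channel, a simple threshold-crossing scan over the recorded data is often enough. Here is a sketch on synthetic data; the sample rate, amplitudes, and threshold are illustrative, not the actual channel parameters.

```python
import numpy as np

def glitch_times(t, x, threshold):
    """Return times where |x| first crosses above threshold (rising edges)."""
    above = np.abs(x) > threshold
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return t[edges]

# synthetic channel: small sinusoid with one injected 0.1-amplitude glitch
fs = 256.0
t = np.arange(0, 10, 1 / fs)
x = 0.01 * np.sin(2 * np.pi * 1.0 * t)
x[(t > 3.0) & (t < 3.02)] += 0.1

times = glitch_times(t, x, 0.05)   # finds the injected glitch near t = 3 s
```

On real data one would fetch the channel with NDS/gwpy for the test window and tune the threshold to sit above the quiet-time noise floor.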
Related alog: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81320
Some time ago (July 9, 2024) the analog DC gain of H1:ASC-AS_B_DC was set from "High" to "Low" when I was looking to extract more information from the WFSs about the power getting past the Fast Shutter when the shutter bounces down after high-power lock losses. Since this is no longer necessary (see plots in alog 81130), and since keeping it means we have to either adjust the dark offset once in a while and/or adjust the "shutter closed" threshold for the Fast Shutter test (alog 81320), I set the gain switch back to "HIGH" and disabled the +20dB digital gain in H1:ASC-AS_B_DC_SEG[1234] FM4.
Interestingly, when I flipped the gain switch, the SEG3 output didn't change (1st attachment). It could be that this specific DC channel is stuck at HIGH or LOW, but it could also be that the analog offset happens to be really small for that channel. I cannot test it right now as we're leaving the IMC unlocked. As soon as the light comes back I will go to the floor and test.
The second attachment is the relevant MEDM screen (showing the state it should be in now) and the third is a picture of the interface chassis (the switch I flipped is circled in red, and it's supposed to be HIGH now).
After the IMC was relocked I went to the floor, opened the fast shutter, switched the analog gain switch back and forth several times and confirmed that all segments responded correctly.
I accepted the change in SDF in both SAFE and OBSERVE to keep FM4 off.
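For reference, the disabled +20 dB digital gain is a linear factor of 10; whether that exactly matches the analog High/Low gain ratio is an assumption on my part, not stated in the log. A quick check of the arithmetic:

```python
import math

def db_to_linear(db):
    """Convert an amplitude gain in dB to a linear factor (20*log10 convention)."""
    return 10 ** (db / 20)

factor = db_to_linear(20)   # +20 dB -> linear factor of 10
```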
It looks like usually the PSL PMC + FSS can stay locked for long periods of time, for example >1 month over the emergency vent in August. See the attached trends of the # of locked days and hours in the last few months (ndscope saved as /ligo/home/victoriaa.xu/ndscope/PSL/pmc_fss_can_stay_locked_over_emergency_o4b_vent.yaml).
The last time the PMC was locked for almost a month ended on Sept 23, 2024. Since then the PSL PMC has not stayed locked for over 5 days, but this is most likely due to commissioning tests and debugging which started around then.
Some trends showing several PSL signals over O4b, and the end of O4a.
Key points- PMC + FSS stayed locked continuously during both the O4a/b break (Feb 2024), and the emergency vent (Aug 2024), with minimal glitching in PMC TRANS and Ref Cav Trans PD (FSS-TPD).
Trying to compare PSL-related spectra between the original O4 NPRO and the current NPRO using DTT is kinda confusing.
Comparing a time before with the O4 NPRO, 21 Oct 2024 04:55 UTC (blue, 169 Mpc), against now with the O3 NPRO, 16 Nov 2024 08:20 UTC (red, 169.9 Mpc).
For the fast mon spectra in the top left plot, FSS FAST MON OUT looks 2-5x noisier now than before. H1:PSL-FSS_FAST_MON_OUT_DQ is calibrated into Hz/rtHz using zpk = ([], [10], [1.3e6]) (see 81251; this makes the H1 and L1 spectra into comparable Hz/rtHz units, 81210).
But for FSS-TPD (ref cav trans), it looks like there's some extra 1-10Hz noise; otherwise the trans spectra might be quieter. Similarly confusing, the ISS AOM control signal looks quieter. There's no clear takeaway from just these spectra on how to compare the O3/O4 NPROs.
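The zpk = ([], [10], [1.3e6]) calibration above can be evaluated numerically. A sketch using scipy, assuming the pole is specified in Hz (converted to rad/s for `freqs_zpk`) and the raw ASD is multiplied by the response magnitude; the exact DTT gain convention is an assumption here:

```python
import numpy as np
from scipy.signal import freqs_zpk

# zpk = ([], [10], [1.3e6]): no zeros, one pole at 10 Hz, overall gain 1.3e6
f = np.logspace(-1, 3, 400)              # frequency axis [Hz]
w = 2 * np.pi * f                        # rad/s for freqs_zpk
zeros, poles, gain = [], [-2 * np.pi * 10], 1.3e6

_, h = freqs_zpk(zeros, poles, gain, w)
mag = np.abs(h)   # multiply the raw counts/rtHz ASD by this to get Hz/rtHz
```

The response is flat below 10 Hz at roughly 1.3e6/(2*pi*10) and rolls off as 1/f above, so the calibration mostly rescales and tilts the raw spectrum.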
Bypass will expire:
Tue Nov 19 10:45:37 PM PST 2024
For channel(s):
H0:FMC-CS_FIRE_PUMP_1
H0:FMC-CS_FIRE_PUMP_2
Tue Nov 19 10:12:46 2024 INFO: Fill completed in 12min 42secs
WP12201
Marc, Fil, Dave, Ryan
h1susex is powered down to restore cabling from the new 28bit LIGO-DAC to the original 20bit General Standards DACs.
Procedure:
Put h1iopseiex SWWD into long bypass
Safe the h1susetmx, h1sustmsx and h1susetmxpi models
Stop all models on h1susex
Fence h1susex from the Dolphin fabric
Power down h1susex.
D. Barker, F. Clara, M. Pirello
The swap to the original 20-bit DAC is complete. Here are the steps we took to revert from LD32 to GS20:
Included images of front and rear of rack prior to changes.
Tue19Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
09:59:38 h1susex h1iopsusex
09:59:51 h1susex h1susetmx
10:00:04 h1susex h1sustmsx
10:00:17 h1susex h1susetmxpi
2024 Nov 12
Neil, Fil, and Jim installed an HS-1 geophone in the biergarten (image attached). The HS-1 is threaded to a plate and the plate is double-sided taped to the floor. The signal was non-existent; a pre-amplifier must be installed to boost the signal.
2024 Nov 13
Neil and Jim installed an amplifier (SR560) to boost the HS-1 signal (images attached). The circuitry was checked to ensure the signal makes it to the racks. However, when left alone there is no signal coming through (image attached, see blue line labelled ADC_5_29). We suspect the HS-1 is dead. The HS-1 and amplifier are now out of the LVEA; the HS-1's baseplate is still installed. We can check one or two more things, or wait for more HS-1s to compare.
Fil and I tried again today; we couldn't get this sensor to work. We started from the PEM rack in the CER, plugging the HS-1 through the SR560 into the L4C interface chassis and confirming the HS-1 would see something when we tapped it. We then moved out to the PEM bulkhead by HAM4 and again confirmed the HS-1/SR560 combo still showed signal when tapping the HS-1. Then we moved to the biergarten and plugged in the HS-1/SR560 right next to the other seismometers. Watching the DAQ readout of the HS-1 and one of the Guralps I have connected to the PEM AA, we could see that both sensors responded when I slapped the ground near the seismometers, but the HS-1 signal was barely above what looks like electronics noise, while the Guralp showed lots of signal that looked like ground motion. We tried gains from 50-200 on the SR560; none of them really seemed to improve the SNR of the HS-1. The HS-1 is still plugged in overnight, but I don't think this particular sensor is going to measure much ground motion.
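The tap test above amounts to comparing the channel's RMS during the tap against its quiet-time RMS. A sketch of that comparison on synthetic data (the noise levels are made up, not the actual HS-1 or Guralp numbers):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def tap_snr(x_quiet, x_tap):
    """Ratio of RMS in a tap window to RMS in a quiet window; a healthy
    sensor should show a ratio well above 1 when tapped."""
    return rms(x_tap) / rms(x_quiet)

# synthetic example: electronics-noise floor plus a strong tap response
rng = np.random.default_rng(1)
quiet = 0.5 * rng.standard_normal(1024)
tap = quiet + 5.0 * rng.standard_normal(1024)
ratio = tap_snr(quiet, tap)
```

A ratio barely above 1, as seen with the HS-1 next to a Guralp showing clear ground motion, points at the sensor rather than the readout chain.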
One check for broken sensors - A useful check is to be sure you can feel the mass moving when the HS-1 is in the correct orientation. A gentle shake in the vertical, inverted, and horizontal orientations will quickly reveal which orientation is correct.
After discussions with control room team: Jason, Ryan S, Sheila, Tony, Vicky, Elenna, Camilla
Conclusions: The NPRO glitches aren't new. Something changed to make us less able to survive them in lock. The NPRO was swapped, so it isn't the issue. Looking at the timing of these "IMC" locklosses, they are caused by something in the IMC or upstream (81155).
Tagging OpsInfo: Premade templates for looking at locklosses are in /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/PSL_lockloss_search_fast_channels.yaml and will come up with command 'lockloss select' or 'lockloss show 1415370858'.
Updated the list of things that have been checked above and attached a plot where I've split the IMC-only tagged locklosses (orange) from those tagged both IMC and FSS_OSCILLATION (yellow). The non-IMC ones (blue) are the normal lock losses and are (mostly) the only ones we saw before September.
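Splitting locklosses by tag, as in the attached plot, is a simple partition over the event list. A sketch with a made-up event list (the times and tag sets are illustrative, not real lockloss records):

```python
# Partition lockloss events into IMC-only, IMC+FSS_OSCILLATION, and other,
# mirroring the orange/yellow/blue split described above.

def split_by_tags(events):
    """events: iterable of (gps_time, set_of_tags)."""
    imc_only, imc_fss, other = [], [], []
    for gps, tags in events:
        if 'IMC' in tags and 'FSS_OSCILLATION' in tags:
            imc_fss.append(gps)
        elif 'IMC' in tags:
            imc_only.append(gps)
        else:
            other.append(gps)
    return imc_only, imc_fss, other

# hypothetical example events
events = [
    (1415370858, {'IMC'}),
    (1415371000, {'IMC', 'FSS_OSCILLATION'}),
    (1415372000, {'WINDY'}),
]
imc_only, imc_fss, other = split_by_tags(events)
```

With the real lockloss tool's tag data, the three lists would feed directly into the scatter plot colors.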