FAMIS26004
Same as the last time this was run (alog79474), the following seem elevated:
FAMIS26283
Laser Status:
NPRO output power is 1.831W (nominal ~2W)
AMP1 output power is 64.55W (nominal ~70W)
AMP2 output power is 138.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 20 hr 28 minutes
Reflected power = 20.62W
Transmitted power = 105.1W
PowerSum = 125.8W
FSS:
It has been locked for 1 day, 19 hr and 20 min
TPD[V] = 0.5346V
ISS:
The diffracted power is around 3.1%
Last saturation event was 0 days 0 hours and 0 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
FSS TPD is low
ISS diffracted power is high
I've lowered the alarm level for PT120B (BSC2) from 2.0e-06 to 2.0e-07. The attached two-day trend of PT120 shows last night's excursion.
Config:
Channel name="H0:VAC-LY_Y1_PT120B_PRESS_TORR" low="1.0e-10" high="2.0e-07" description="VE gauge, PT120B BSC2 CC"
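For context, below is a minimal sketch of checking this channel against the new limits from a control room workstation, assuming pyepics and EPICS channel access to the H0 vacuum channels are available; this is only an illustration, not the actual alarm-handler code.

# Sketch only: read PT120B and compare against the alarm limits quoted above.
from epics import caget

CHANNEL = "H0:VAC-LY_Y1_PT120B_PRESS_TORR"
LOW, HIGH = 1.0e-10, 2.0e-07   # low/high alarm limits from the config line above

pressure = caget(CHANNEL)
if pressure is None:
    print(f"Could not read {CHANNEL}")
elif pressure > HIGH:
    print(f"{CHANNEL} = {pressure:.2e} Torr exceeds the high alarm of {HIGH:.1e}")
elif pressure < LOW:
    print(f"{CHANNEL} = {pressure:.2e} Torr is below the low alarm of {LOW:.1e}")
else:
    print(f"{CHANNEL} = {pressure:.2e} Torr is within the alarm limits")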
Sun Aug 18 08:08:47 2024 INFO: Fill completed in 8min 43secs
(Richard M., Dave B, Gerardo M.)
While at home I noticed that the pressure trend for PT120 (the BSC2 pressure gauge) showed a sudden rise in pressure, see trend. I called Richard M. and we tried to diagnose the system from off site; instead I decided to drive to the site and assess the situation. I found two turbo pumps off; the scroll pump for the turbo at XBM ("X" beam manifold) was still running, while the one at YBM was off. I closed the isolation gate valves for both systems, then restarted both units. No issues were encountered during the restart, and both turbo pumps reached full speed very quickly. After waiting about 15 minutes, the set points were set and the system was valved back in to the main volume.
Checked all other turbo pumps (output mode cleaner tube, HAM6 and HAM7); they were all running and doing well.
We have 4 aux carts pumping on different systems and they were good too.
I checked all ion pump controllers and they were good.
We still do not know what caused the two turbos to power off; we will look into it further.
Leaving site now.
Sat Aug 17 08:09:27 2024 INFO: Fill completed in 9min 23secs
TITLE: 08/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: None
SHIFT SUMMARY: Got the ion pumps pumping today, so the pressure has gone down quite a bit. Also did a quick check of AS_C and the fast shutter using single bounce (79575). The IMC is LOCKED, and ITMs are MISALIGNED.
LOG:
14:30 In Corrective Maintenance, IMC in OFFLINE
- I adjusted the OPTICALIGN OFFSET values for MC1, MC2, and MC3 until the driftmon values were back to what they had been yesterday before the computer crash. Then I took IMC_LOCK to LOCKED and it caught easily.
16:30 Ion pumps turned on
18:00 - 19:00 Single bounce measurement
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:32 | VAC | Gerardo, Travis | LVEA | n | Possibly turning on ion pumps | 17:02 |
16:55 | SUS | RyanC | CR | N | OPLEV charge measurements, EX & EY | 18:23 |
17:03 | | Betsy | LVEA | n | Going around HAM7 (in @ 16:50) | 17:03 |
18:21 | VAC | Gerardo, Fil | MY | n | Putting down cable and picking up cable (allegedly different cables) | 18:55 |
19:05 | VAC | Gerardo, Travis | LVEA | n | Adding another ion pump | 19:13 |
20:51 | TOUR | Genevieve+1 | LVEA | n | tour | 21:41 |
22:25 | VAC | Gerardo | EY | n | Taking a photo | 22:57 |
22:57 | VAC | Gerardo | LVEA | n | A check before he leaves | 23:57 |
Evan Goetz, Alan Knee, Ansel Neunzert (on behalf of the line list contributors, see LIGO-T2400204)

Summary: Version 1 of the lines and combs list of non-astrophysical artifacts provides insight into the long duration narrow spectral artifacts in the O4a H1 data. In summary, H1 data has nearly 7x more of these artifacts than L1, mostly due to the much higher proliferation of comb artifacts. H1 data also suffers from a large number of narrow artifacts in ~100 Hz bands near test mass violin resonance harmonics, usually during times of rung up violin resonances. Without careful handling of the data in frequency bands near the violin modes, nearly 18% of the 10 Hz - 2 kHz band is contaminated; more careful handling reduces this to about 6% of the band (roughly equivalent to L1). The L1 detector does not suffer from these same artifacts.

Details: After compiling a list of vetted, non-astrophysical artifacts into the version 1 lines and combs list for O4a, we extracted statistics for each detector in order to compare and contrast the data quality from this perspective, and to determine which of the artifacts are most problematic for long duration narrowband searches (CW or narrowband directed stochastic searches). New for this observing run, the lines list has two new features: 1) for artifacts with known time variation (turn on/turn off) where we can establish reasonable metrics for their presence in the data, we also provide a segment list of the times those artifacts are present; 2) a new "type 3" artifact that is not simply a narrow frequency band for a single artifact, but rather a band (typically 1 Hz or larger) containing many narrow artifacts that degrade the data quality within the band.

Type                                               H1      L1
-----------------------------------------------------------------
Total number of artifacts                          1721    251
Number of vetted combs                             26      16
Comb lines                                         1510    168
Type 3 artifacts                                   29      0
Vetoed band percentage (excl. type 3 artifacts)    5.8%    6.3%
Vetoed band percentage (incl. type 3 artifacts)    17.9%   6.3%

Most problematic H1 artifacts:
1) Type 3 artifacts: non-stationary line artifacts impacting ~100 Hz bands around test mass violin resonances - some of these are partially understood as calibration lines mixing with violin resonances (future aLOG), but not all of them are understood
2) 9.5 Hz comb triplet: a comb of line triplets present from 10 Hz to 2000 Hz (seen weakly at L1) - a potential cause has been identified in the PSL flow sensors (LHO aLOG 79533)
3) ~11.11 Hz comb, seen in both H1 and L1 - not understood; it was also seen at H1 in O3
4) ~11.9 Hz comb, seen in both H1 and L1 - not understood (see also checks against various changes); also seen in O3 H1 data

H1 artifacts that have been mitigated:
1) Hartmann wavefront sensor (HWS) combs near 1 Hz, 5 Hz, and 7 Hz have been mitigated
2) The OM2 heater driver ~1.66 Hz comb has been mitigated (a 1.1086 Hz comb also appeared briefly)

Ongoing detailed analysis (future aLOG) of the violin mode region contamination indicates that at least some of the lines are due to calibration line + violin mode mixing. We are not yet sure whether this explains all of the contaminated time periods in O4a, nor all of the contaminated lines, and we have not done a systematic comparison with L1. However, at the overview level, the mixing of calibration lines and violin resonances seems much worse in H1 than in L1. Perhaps this is somehow related to the new signal readout chain installed ahead of O4, which is different from the readout at L1.
The number of lines should also instill some caution about further efforts to add calibration monitoring lines to the DARM control loop, as the added lines would greatly increase the number of artifacts in the spectrum; this needs to be resolved before further lines are added. We anticipate further understanding of the O4a data with continued investigation, so the number of artifacts known to be non-astrophysical may go up.
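For reference, the "vetoed band percentage" quoted in the table above is simply the fraction of the 10 Hz - 2 kHz band covered by the union of the vetoed intervals. Below is a minimal Python sketch of that bookkeeping; the line-list format (center frequency plus half-width per artifact) is an assumption for illustration, not the actual list format or the code used to produce the numbers above.

def vetoed_fraction(artifacts, f_lo=10.0, f_hi=2000.0):
    """Fraction of [f_lo, f_hi] covered by the union of vetoed intervals.

    artifacts: iterable of (center_hz, half_width_hz) pairs (assumed format).
    """
    # Clip each artifact's interval to the analysis band
    intervals = []
    for center, half_width in artifacts:
        lo = max(center - half_width, f_lo)
        hi = min(center + half_width, f_hi)
        if hi > lo:
            intervals.append((lo, hi))
    # Merge overlapping intervals so overlapping artifacts are not double counted
    intervals.sort()
    covered = 0.0
    cur = None
    for lo, hi in intervals:
        if cur is None:
            cur = [lo, hi]
        elif lo <= cur[1]:
            cur[1] = max(cur[1], hi)
        else:
            covered += cur[1] - cur[0]
            cur = [lo, hi]
    if cur is not None:
        covered += cur[1] - cur[0]
    return covered / (f_hi - f_lo)

# Example: a 0.1 Hz-wide line at 60 Hz plus a 1 Hz-wide type-3 band at 500 Hz
print(vetoed_fraction([(60.0, 0.05), (500.0, 0.5)]))   # ~0.00055, i.e. ~0.06% of the band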
Late entry from yesterday:
- The controllers of the IP1, IP2, IP3, and IP14 ion pumps were replaced with the old rebuilt Varian controllers in order to prepare for valving the ion pumps in (IP4 still has its original Varian controller). This is why these ion pumps turned red on the vacuum MEDM screen.
- The new instrument air scroll compressor came online: it was valved in, and the Kobelco was valved out. Huge kudos to Tyler, Eric, and the FAC team!
Today's activities:
- The large ion pumps were valved in. IP1 and IP2 went first - these pumps have chevron baffles, so the immediate gas rush was somewhat mitigated by valving these two in first. Then IP3 and IP4, and after a few hours, IP14 (HAM6).
- The pressure immediately went down from 3.7E-7 to 1.7E-7 Torr and is still dropping. The ion current in the IPs has not started to go down just yet.
- The Kobelco was switched off.
The pressures:
- Corner (PT-120): 1.6E-7 Torr
- HAM6: 3.4E-7 Torr
- HAM7: 9.4E-8 Torr
As soon as the pressure according to PT-120 reaches 8E-8 Torr, the gate valves will be opened. HAM7 and the filter cavity will follow after a few hours. Huge congratulations to Gerardo, Travis, and everyone who helped the vacuum team achieve this fast pumpdown; the plan of opening up on Monday (the 19th) is still on.
Oli P, TJ S
Single Bounce AS_C level
To check that we can get light through to HAM6, we brought the IFO to the same alignment as in alog79193, where they also had the HAM5 and HAM6 HEPIs locked. We started with the fast shutter closed. With the alignments the same, we went into a single bounce configuration with ITMY aligned and went up to 10 W. There was no light on AS_C initially, so Oli had to move SR2 about 200 urad before there was light on the sensor; SR2 then moved about 60 urad to center on AS_C. With AS_C centered, H1:ASC-AS_C_SUM_OUTPUT was at 0.0078, versus the 0.0045 reported on July 17 in the linked alog above (a 73% increase). We ran into some P motion when we turned sensor correction on, but maybe this is expected while we have some HEPIs locked. The AS Air camera had the slightest bit of a spot in its top left corner when we were centered.
 | Aug 16 | July 17 | May 7 | April 17 |
AS_C_SUM_OUTPUT | 0.0078 | 0.0045 | | |
AS_C_NSUM_OUT | 0.0230 | 0.017 | 0.0226 | 0.0227 |
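As a quick check of the quoted 73% figure, the fractional-change arithmetic on the Aug 16 and July 17 AS_C_SUM_OUTPUT values from the table above:

# Sketch of the percent-increase arithmetic, values taken from the table above
aug_16, jul_17 = 0.0078, 0.0045
print(f"{(aug_16 / jul_17 - 1) * 100:.0f}% increase")   # -> 73% increase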
Fast Shutter Test
While centered on AS_C we opened the fast shutter. We immediately saw light on AS_A and AS_B (1600 and 2000 counts respectively). We closed the shutter and it went away. 1st attachment
The fast shutter still exhibits a bounce when closing, as discussed in alog79397. In this instance, after the fast shutter initially blocked the beam, 37 ms later it let light through for another 10 ms, then closed for 38 ms, then let one more bit of light through for 5 ms - the 2nd attachment shows this more clearly. AS_C also seems to see some of this, perhaps from scattered light.
Alan Knee, Camilla Compton, Evan Goetz
We changed the SQZ ADF frequency from 322 Hz to 10 kHz where it should not impact long duration searches (figure), using the command 'python /opt/rtcds/userapps/release/sqz/h1/scripts/ADF/setADF.py -f 10000'. This line appeared in DARM during O4a when it was set to 1300 Hz. It was moved from 1300 Hz to 322 Hz around 1397588061 (2024/04/19 18:54:03 UTC, figure), and was visible in Fscan spectra in O4b (figure).
This line was previously turned "off" on 2024/04/04 (see 76962), but was still visible in DARM (at reduced amplitude) due to the RF driver being left on. You can see the change in behaviour in the attached line height plots. The first shows the height of the 1300 Hz line during April 2024, where the line is at first "on", decreases in strength after April 4, and then disappears on April 19 when it was moved to 322 Hz. The second shows the height at 322 Hz, which appears on April 19.
I ran the OPLEV charge measurements for both the ETMs this morning.
On ETMX the charge still looks to be decreasing towards zero on all DOF/quads.
On ETMY the charge looks fairly stable, hovering just above 0 and 50 on all the DOFs/quads, where it has been the past few measurements.
Closes FAMIS#26323; last checked in alog 79458
Corner Station Fans (attachment1)
- All fans are looking normal and within range.
Outbuilding Fans (attachment2)
- All fans are looking normal and within range.
We got a "Check PSL chiller" verbal alarm this morning, so I did exactly that. The water level was about halfway between max and min, but not in alarm. I added 175 mL to bring it back to max. The filter on the wall is starting to look less pristine than it once was, but I'm not sure at what point it needs to be replaced. All else looks good.
Fri Aug 16 08:07:59 2024 INFO: Fill completed in 7min 55secs
Gerardo confirmed a good fill curbside.
TITLE: 08/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 19mph Gusts, 15mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
Going to turn the ion pumps on today and start some basic alignment work.
We had a failure of h1omc0 at 16:37:24 PDT which precipitated a Dolphin crash of the usual corner station system.
System recovered by:
Fencing h1omc0 from Dolphin
Complete power cycle of h1omc0 (front end computer and IO Chassis)
Bypass SWWD for BSC1,2,3 and HAM2,3,4,5,6
Restart all models on h1susb123, h1sush2a, h1sush34 and h1sush56
Reset all SWWDs for these chambers
Recover SUS models following restart.
Cause of h1omc0 crash: Low Noise ADC Channel Hop
Right as this happened, LSC-CPSFF got much noisier, but there was no motion seen by peakmon or by HAM2 GND-STS in the Z direction (ndscope). After everything was back up, it was still noisy. Probably nothing weird, but I still wanted to mention it.
Also, I put the IMC in OFFLINE for the night since it was now having trouble locking and was showing a bunch of fringes. Tagging Ops, aka tomorrow morning's me.
FRS31855 Opened for this issue
LOGS:
2024-08-15T16:37:24-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.717510] rts_cpu_isolator: LIGO code is done, calling regular shutdown code
2024-08-15T16:37:24-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.718821] h1iopomc0: ERROR - A channel hop error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.817798] h1omcpi: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu kernel: [11098181.817971] h1omc: ERROR - An ADC timeout error has been detected, waiting for an exit signal.
2024-08-15T16:37:25-07:00 h1omc0.cds.ligo-wa.caltech.edu rts_awgtpman_exec[28137]: aIOP cycle timeout
Reboot/Restart Log:
Thu15Aug2024
LOC TIME HOSTNAME MODEL/REBOOT
16:49:17 h1omc0 ***REBOOT***
16:50:45 h1omc0 h1iopomc0
16:50:58 h1omc0 h1omc
16:51:11 h1omc0 h1omcpi
16:53:56 h1sush2a h1iopsush2a
16:53:59 h1susb123 h1iopsusb123
16:54:03 h1sush34 h1iopsush34
16:54:10 h1sush2a h1susmc1
16:54:13 h1susb123 h1susitmy
16:54:13 h1sush56 h1iopsush56
16:54:17 h1sush34 h1susmc2
16:54:24 h1sush2a h1susmc3
16:54:27 h1susb123 h1susbs
16:54:27 h1sush56 h1sussrm
16:54:31 h1sush34 h1suspr2
16:54:38 h1sush2a h1susprm
16:54:41 h1susb123 h1susitmx
16:54:41 h1sush56 h1sussr3
16:54:45 h1sush34 h1sussr2
16:54:52 h1sush2a h1suspr3
16:54:55 h1susb123 h1susitmpi
16:54:55 h1sush56 h1susifoout
16:55:09 h1sush56 h1sussqzout
I turned the CO2s back on today and CO2X came back to its usual 53W, but CO2Y came back at 24W. We've seen in the past that it will jump up a handful of watts overnight after a break, so maybe we will see something similar here. If not, we will have to investigate this loss. Trending the output of this laser, it has definitely been dropping in the last year, but we should be higher than 24W.
Sure enough, the power increased overnight and we are back to 34W. This is still low, but in line with the power loss that we've been seeing. Camilla is looking into the spare situation and we might swap it in the future.
We searched for home on both CO2 lasers, took them back to minimum power, and then asked the CO2_PWR guardian to go to NOM_ANNULAR_POWER. This gave 1.73W on CO2X and 1.71W on CO2Y (1.4W before bootstrapping).
There is a script for this (/psl/h1/scripts/RotationStage/CalibRotStage.py), so we should change to this method next time.