The Picket Fence display on Nuc5 was updated and restarted. There were some minor code changes. The only significant change was a switch from HWUD station back to HLID.
TITLE: 08/20 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 3mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: Pumpdown of corner volume continues. Some light maintenance activities planned for the day as well.
Workstations were updated and rebooted. This was an OS package update; conda packages were not updated.
Jordan V., Gerardo M., Travis S., Janos Cs. The PT-154 MKS wide-range gauge on the small FCT section between HAM7 and BSC3 was replaced due to failure, and that small volume is now being pumped down. As the pressure of that FCT section between GVs FCV-1 and FCV-2 is supposed to be around 1-2E-8 Torr, it will be pumped overnight; it is currently at 6.1E-8 Torr. Tomorrow it will be valved in to the main volume. After this, this small FCT section will be valved together with HAM7 and the relay tube (FCV-1 to be opened). A leak check was also done, and no leak bigger than 1E-10 Torr was detected.
Naoki, Camilla.
After the SQZ laser was turned on last week, today we locked the PMC and SHG with no issues. We struggled with OPO locking (CLF dual). In the end Naoki could lock it by hand, but the trigger wasn't triggering; we will continue to troubleshoot tomorrow. The OPO dither lock worked well, but only after we swapped the H1:SQZ-OPO_IR_LSC_SERVO_GAIN sign from -10 to 10: strange!
I put the ZM4/5 PSAMS and the ZM4/5/6, ZM2/3, and FC1 alignments back to the same values as in alog79193. Today's plot and the July 17th plot are attached. There was only around 0.3mW on the IR trans when we expect 1mW, so we may need some more alignment moves. We have the SQZT7 iris to help if needed.
CLF input power was lower than before. After I increased the CLF input power, the OPO could be locked by guardian.
After Camilla changed the ZM2 PSAMS in 79625, we can see 1.3mW seed trans on SQZT7 with 75mW seed. However, the seed dither lock did not work. It seems that the OPO PZT offset after the guardian scan does not match the seed resonance well. I found that there is an intentional PZT offset for hysteresis at line 728 of the OPO guardian, as follows:
ezca['SQZ-OPO_SERVO_SLOWOUTOFS'] += 0.08 # hysteresis
I flipped the sign of this offset from 0.08 to -0.08 and the dither lock works. I am not sure if this is the correct way to fix this issue, but I tested the dither lock several times and all worked.
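For reference, a minimal sketch of the change (the surrounding guardian code may differ; only the sign of the hysteresis offset was flipped):

# Previous hysteresis compensation applied to the OPO PZT slow output after the guardian scan:
# ezca['SQZ-OPO_SERVO_SLOWOUTOFS'] += 0.08 # hysteresis
# Flipped sign so the post-scan offset lands on the seed resonance:
ezca['SQZ-OPO_SERVO_SLOWOUTOFS'] -= 0.08 # hysteresis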
The dither lock gain needs to be reverted to -10.
TITLE: 08/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Tony-ish
SHIFT SUMMARY: HAM5 and 6 got unlocked today.
LOG:
14:30 In Corrective maintenance
16:00 HAM5/6 HEPI unlocked
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:18 | VAC | Jordan | LVEA | n | Pump checks | 15:28 |
15:23 | FAC | Karen, Kim | LVEA | n | Tech clean | 16:20 |
15:52 | SEI | Jim | LVEA | n | Unlocking HAM5/6 | 16:20 |
16:52 | FAC | Karen | EY | n | Tech clean | 17:31 |
19:35 | FAC | Kim | LVEA Receiving | n | Moving stuff | 19:57 |
19:45 | VAC | Janos | LVEA | n | Replacing gauge | 19:57 |
20:22 | SEI | Jim, Mitchell | LVEA | n | Taking HEPI pump stations down | 21:32 |
20:40 | PSL | RyanS, Fil | CR,LVEA | n | Shutting down PSL to get rid of comb | 21:04 |
20:53 | VAC | Jordan, Travis, Gerardo | LVEA | n | Swapping gauge (Travis out 22:34) | 22:40 |
20:59 | VAC | Janos | LVEA | n | Helping swap gauge | 22:28 |
21:09 | PEM | Genevieve, Sam | EX, EY | n | Taking photos of PEM equipment | 22:42 |
23:10 | PCAL | Francisco, Miriam | EX | YES | Transitioning to LasHaz & centering beam spot for tomorrow | ongoing |
Camilla, Oli
We looked into the weird results we were seeing in the ITMX In-Lock SUS charge measurements. When the bias is on, the coherence for both the bias and length drives is always good, but when the excitations run with the bias off, the coherences for both drives are bad (usually below 0.09).
I compared the coherence value outputs of the python script versus the matlab script (screenshot), and although the values are calculated slightly differently and are not exactly the same, they are reasonably close, so we can say that there is not an issue with how we are calculating the coherences.
Next, we used ndscope to look at the latest excitations and measurements from July 09th (ndscope-bias on, ndscope-bias off), plotting L3_LOCK_BIAS_OUTPUT, L3_DRIVEALIGN_L2L_OUTPUT, L3_LVESDAMON_UL_OUT_DQ, and L3_ESDAMON_DC_OUT_DQ for both ITMX and ITMY. If there were an issue with the excitations not going through, we would expect to see nothing on the ESDAMON and LVESDAMON channels, but we do see them on ITMX.
We were still confused as to why we would see the excitations go through the ESDAMON channels but still have such low coherence, so we compared the ITMX measurements to ITMY in dtt for the July 09th measurements, looking at how each excitation showed up in DARM and what the coherence was. When the bias was on, both the bias drive and length drive measurements look as we expect, with the drive in their respective channels, a peak seen in DARM at that frequency, and a coherence of 1 at that frequency (bias_drive_bias_on, length_drive_bias_on). However, in the comparisons with the bias off, we can see the excitations in their channels for both ITMX and ITMY, but while ITMY has the peak in DARM like the bias-on measurements, ITMX is missing this peak in DARM (bias_drive_bias_off, length_drive_bias_off). The coherence between DARM and the excitation channel is also not 1 on ITMX.
We showed these results to Sheila and she said that these results for ITMX with the bias off make sense if there is no charge built up on the ITM, which would be the first time this has been the case! So there are no issues with the excitations or script thankfully.
We will be changing the analysis script so that it still runs the analysis even if the coherence is low, and will add a note explaining what that means.
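A minimal sketch of the kind of change we have in mind (the function name and threshold are placeholders, not the script's actual code):

def check_coherence(coherence_at_line, threshold=0.9):
    # Warn rather than abort when the line coherence is low; with the bias off,
    # low coherence can indicate low charge rather than a broken measurement.
    if coherence_at_line < threshold:
        print('WARNING: coherence %.2f is below %.1f; continuing the analysis, '
              'but the charge estimate should be interpreted with care.'
              % (coherence_at_line, threshold))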
Hey LHO, the in-lock charge measurement script is another script, besides the A2L scripts and calibration scripts, that I overhauled last year due to my deep dissatisfaction with the existing code.
I'll point you to the aLog when I ran it: [68310].
Among the huge number of behaviors that I corrected, I implemented many lessons learned in implementing simuLines for LHO: your DARM units are a very small number, and you must explicitly cast the DARM data to np.float64 in order to have the TFs and coherences (in particular the coherences) calculate correctly. I've had to repeat this lesson to at least 4 people writing code for LHO in the calibration group, because it trips people up again and again, it is not an obvious thing to do, and it is something I solved through sheer brute force (it took a lot to convince Louis, since he initially refused to believe it).
In particular, inside the "digestData" function of "/ligo/svncommon/SusSVN/sus/trunk/QUAD/L1/Common/Scripts/InLockChargeMeasurements/process_single_measurement.py" you will see me casting the gwpy data to float64 on lines 50 and 51, followed by some sampling-rate tricks to get the coherence to calculate correctly with gwpy's coherence call, as well as with how gwpy handles average_fft calls.
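For illustration, a minimal sketch of the cast (not the actual digestData code; the channel names and times below are placeholders):

import numpy as np
from gwpy.timeseries import TimeSeries

start, end = 1404000000, 1404000600  # placeholder GPS times
darm = TimeSeries.fetch('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)  # placeholder DARM channel
exc = TimeSeries.fetch('H1:SUS-ITMX_L3_LOCK_BIAS_EXCMON', start, end)  # placeholder drive channel
# DARM samples are very small numbers; cast explicitly to float64 so the TF and
# coherence estimates are not corrupted by the dtype of the fetched data.
darm = darm.astype(np.float64)
exc = exc.astype(np.float64)
# Bring both series to a common sampling rate before the coherence call.
if exc.sample_rate != darm.sample_rate:
    exc = exc.resample(darm.sample_rate)
coh = darm.coherence(exc, fftlength=16, overlap=8)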
Hope it helps!
Thanks Vlad, we'll have a look at that.
While looking at these measurements we realized that we were not using the same bias setting for all the quads (ITMY was at around half bias). We want to change this using the attached code, but first we will run the charge measurements to directly compare before and after the vent.
I've combined the existing Python and MATLAB scripts that are used for calibrating the PSL rotation stage into one cohesive, cleaner Python script: /opt/rtcds/userapps/release/psl/h1/scripts/RotationStage/CalibRotStage.py
The script takes two arguments to run the full process of sweeping the stage and running the calibration fitting: python CalibRotStage.py -m -c
To skip the rotation stage sweep or calibration fitting steps, simply omit the -m or -c arguments, respectively.
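As a rough sketch, the two flags could be handled along these lines (hypothetical; the actual argument handling in CalibRotStage.py may differ):

import argparse

parser = argparse.ArgumentParser(description='PSL rotation stage calibration')
parser.add_argument('-m', action='store_true', help='sweep the rotation stage and record power readings')
parser.add_argument('-c', action='store_true', help='run the calibration fit on the sweep data')
args = parser.parse_args()

if args.m:
    print('Running rotation stage sweep...')  # sweep step would run here
if args.c:
    print('Running calibration fit...')  # fitting step would run here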
The full process for running a PSL rotation stage calibration now goes as follows:
J. Kissel, IIET Ticket 31266
Conversations are still brewing about what to do about DuoTone timing signals showing up in DARM -- presumably through the OMC DCPD's ADC card (see LHO:77579). Recall that we've narrowed down the coupling mechanism to direct channel-to-channel cross coupling within the ADC (see LHO:78238). As such, one of the options on the table is to reduce the amplitude of the DuoTone timing signal, which at 5 [V_pp] is using 1/4 of the ADC channel's range -- rather loud, IMHO (see LHO:78218).
Daniel and Erik did a whole bunch of work at the start of O4 to get the front-end-code-computed ADC timing measure to report more accurately, with a better algorithm in the front end (see LHO:67693 and LHO:67545), so I figured I'd show the trend of a few important front ends' answers, since I haven't seen such a trend:
(1) the h1omc0 segregated IO chassis which houses the GW DCPDs,
(2) the h1lsc0 computer which *used* to house the DCPDs prior to O4,
(3 & 4) the two end-station ISC computers which house, among other things, the PCAL systems.
The answers from the long-term trend:
(1) 0.12 +/- 0.035 or +/- 0.08 [usec]
(2) 7.05 +/- 0.031 or +/- 0.06 [usec]
(3) 61.6 +/- 0.15 or +/- 0.31 [usec]
(4) 61.6 +/- 0.15 or +/- 0.31 [usec]
where I'm eye-balling (via y-cursor at least) the long-term trend of the mean, the middle of the max/min excursions, and the worst of the max/min excursions. These answers (the means) are all as expected, but it's good to have some numbers on the max/min excursions, which are, presumably, noise. Zooming in closer, one can see on the minute-timescale trend of a gravitational wave event that the estimation of the timing error is about as precise.
R. Short, F. Clara
In the ongoing effort to mitigate the 9.5Hz comb recently found to be coming from the PSL flow meters (alog79533), this afternoon Fil put the PSL control box in the LVEA PSL racks on its own separate 24V bench-top power supply. Once I had shut down the PSL in a controlled manner (in the order ISS, FSS, PMC, AMP2, AMP1, NPRO, chiller), Fil switched CB1 to the separate power supply, and I then brought the system back up without any issues. I'm leaving the ISS off for now while the system warms back up, and I'm leaving WP12051 open until enough data can be collected to say whether this separate supply helps with the 9.5Hz comb or not.
Please be aware of the new power supply cable running from under the table outside the PSL enclosure to behind the racks; Fil placed a cone here to warn of the potential trip hazard.
This looks promising! I have attached a comparison with a high-resolution daily spectrum from July (orange) vs yesterday (black), zoomed in on a strong peak of the 9.5 Hz comb triplet. Note that the markers tag the approximate average peak position of the combs from O4a, so they are a couple of bins off from the actual positions of the July peaks.
FAMIS 21224
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
FAMIS 21188
The PSL was taken down last Tuesday as part of Jason and Fil's comb hunt (alog79533), which shows up in basically every trend six days ago.
The ISS was not turned back on when the PSL was brought back up last week, so Jason enabled it this morning, which explains the lower PMC transmitted power over the past week or so. However, even with the ISS now on, the PMC is putting out ~1.6W less power compared to before it was unlocked last week.
The FSS TPD has been trending down over the past several weeks and alignment looks poor on the camera, so sometime today or tomorrow I'll touch up the RefCav alignment.
About a week ago the PSL enclosure main-room Axis camera image froze mid-frame twice over two days in a row on the LVEA FOM machine (lveapslfom) but not on the control room FOM (nuc21).
Last night both the enclosure anti-room and main-room images on the control room FOM froze mid-frame but the LVEA FOM was fine.
In all cases I remote-desktopped to the machine and reloaded the web page to clear the error.
Mon Aug 19 08:09:15 2024 INFO: Fill completed in 9min 11secs
TITLE: 08/19 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 4mph Gusts, 1mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
Vacuum pressure in BSC3 is around 2.04e-07 Torr (it needs to be around 8.8e-8 Torr), so probably no gate valves today. Over the weekend an earthquake from Japan tripped the ISI watchdogs for ITMY, BS, ETMX, and ETMY; those have been untripped by TJ.
FAMIS26004
Same as for the last time this was run (alog79474), the following seem elevated:
FAMIS26283
Laser Status:
NPRO output power is 1.831W (nominal ~2W)
AMP1 output power is 64.55W (nominal ~70W)
AMP2 output power is 138.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 5 days, 20 hr 28 minutes
Reflected power = 20.62W
Transmitted power = 105.1W
PowerSum = 125.8W
FSS:
It has been locked for 1 day 19 hr and 20 min
TPD[V] = 0.5346V
ISS:
The diffracted power is around 3.1%
Last saturation event was 0 days 0 hours and 0 minutes ago
Possible Issues:
AMP1 power is low
PMC reflected power is high
FSS TPD is low
ISS diffracted power is high
I've lowered the alarm level for PT120B (BSC2) from 2.0e-06 to 2.0e-07. A two-day trend of PT120 is attached showing last night's excursion.
Config:
Channel name="H0:VAC-LY_Y1_PT120B_PRESS_TORR" low="1.0e-10" high="2.0e-07" description="VE gauge, PT120B BSC2 CC"
Sun Aug 18 08:08:47 2024 INFO: Fill completed in 8min 43secs
Oli P, TJ S
Single Bounce AS_C level
To check that we can get light through to HAM6, we brought the IFO to the same alignment as in alog79193, where they also had HAM5,6 HEPI locked. We started with the fast shutter closed. With alignments the same, we went into a single bounce configuration with ITMY aligned and went up to 10W. There was no light on AS_C initially, so Oli had to move SR2 about 200urads before there was light on the sensor; SR2 then moved about 60urads to center on AS_C. With AS_C centered, H1:ASC-AS_C_SUM_OUTPUT was at 0.0078 vs the 0.0045 reported on July 17 in the linked alog above (a 73% increase). We ran into some P motion when we turned sensor correction on, but maybe this is expected while we have some HEPIs locked. The AS Air camera had the slightest bit of a spot in its top left corner when we were centered.
 | Aug 16 | July 17 | May 7 | April 17 |
---|---|---|---|---|
AS_C_SUM_OUTPUT | 0.0078 | 0.0045 | | |
AS_C_NSUM_OUT | 0.0230 | 0.017 | 0.0226 | 0.0227 |
Fast Shutter Test
While centered on AS_C we opened the fast shutter. We immediately saw light on AS_A and AS_B (1600 and 2000 counts, respectively). When we closed the shutter it went away (1st attachment).
The fast shutter still exhibits a bounce when closing, as discussed in alog79397. In this particular instance, after the fast shutter initially blocked the beam, 37ms later it allowed light through for another 10ms, then closed for 38ms, then let one more bit of light through for 5ms; the 2nd attachment shows this better. AS_C also seems to see some of this, perhaps from scattered light.