TJ, Camilla. WP12162. Continuing work in 76030
We installed a CFCS11-A adjustable SMA fiber collimator on the 50um fiber for the ETMX HWS fiber-coupled LED source M530F2. This improved the size of the beam, but it was still too large even at the edge of the collimator's adjustment range.
We removed HWS-L3 (D1800270), which made the beam a better size but still a little large, ~10-15mm diameter at the periscope. We couldn't see a change in the beam by adjusting HWS-L2 (on the translation stage). We aligned the system using the retroreflection of the ALS beam: aligned the ALS beam to the output coupler with HWS-MS1, and then adjusted the fiber collimator (mounted in a mirror mount) to align the HWS output to the final iris. Confirmed we were getting a reflection off ETMX by mis-aligning it ~20urad. Note that the beam looks cleaner than usual (photo), probably because the GV is closed so we are getting no retroreflection off ITMX. Set the frequency to 1Hz. Replaced the mask photo and started the HWS code with new references.
The ETMX HWS 520nm beam is now injected into the vacuum, where it hasn't been for ~months. Tagging DetChar.
To do: measure the beam profile of the beam out of the fiber and re-calculate an imaging solution of the ETMX HR surface.
This afternoon, I went back down to EX and measured the beam profile of the beam out of the collimator. Results attached. Photos of the beam profile after the source, after the HWP, and after the BS are attached.
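For reference, the imaging-solution re-calculation in the to-do above is essentially a ray-transfer (ABCD) matrix exercise: choose lens focal lengths and spacings so that the B element of the ETMX-HR-to-HWS matrix vanishes (position at the sensor then independent of angle at the optic). A minimal sketch with entirely hypothetical focal lengths and distances; the real numbers would come from the measured beam profile and the D1800270 layout:

    import numpy as np

    def free_space(d):
        """Ray-transfer matrix for propagation over distance d (metres)."""
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):
        """Ray-transfer matrix for a thin lens of focal length f (metres)."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Hypothetical layout: ETMX HR -> lens 1 -> lens 2 -> HWS sensor.
    # None of these numbers are the real HWS path lengths; they only
    # illustrate the imaging condition.
    d1, f1, d2, f2, d3 = 4000.0, 2.0, 1.5, 0.25, 0.5

    M = free_space(d3) @ thin_lens(f2) @ free_space(d2) @ thin_lens(f1) @ free_space(d1)
    A, B = M[0, 0], M[0, 1]

    # Imaging condition: B == 0, so a ray's position at the sensor does not
    # depend on its angle at the ETMX HR surface. A is then the transverse
    # magnification, which sets the plate scale on the HWS.
    print(f"B = {B:.3e} m/rad (want ~0),  magnification A = {A:.3f}")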
GV5 and GV7 were soft closed at ~10am local time today to facilitate equipment craning for crane inspections. They were opened at ~11am local time at the completion of the vertex crane inspection.
This morning, Ryan let me know that he got a "Check PSL chiller" verbal alarm. Upon checking the chiller I saw the water level was a little low but not at minimum; however, the usual oscillations in the water level as the chiller cycles were getting a little close to it, which is likely what caused the alarm. I added 175 mL of water to the chiller to bring the water level back up to the MAX line.
Closes FAMIS27804
CO2X was found at 29.5; I added 200 mL to get it to right under the MAX line at 30.3.
CO2Y was found at 8; I added 250 mL to get it to just under the MAX line at 9.4.
Early Monday morning I was alerted to the lag compressor alarm on the instrument air compressor. Both compressors were running and the tank pressure was increasing until it reached cut-out pressure. I inspected for any signs of an air leak but didn't find anything obvious. I observed the compressor and found that when the air dryer swapped desiccant towers it began blowing down a lot of air. The poly-tube connections beneath each dryer had a build-up of ice. I cleared the ice out as best as I could and observed the dryer for several more cycles, but the problem did not recur. We will continue to monitor the performance of the air dryer.
We added some code to recognize the more recent timing board firmware revisions.
Tue Dec 10 10:10:41 2024 INFO: Fill completed in 10min 38secs
Today's TC-mins only just exceeded the -70C trip level (-75C,-74C). For tomorrow's fill I've increased the trip to -65C.
Camilla, Robert, WP12232
While looking for coupling of the CO2 chillers into DARM at the Crab pulsar 59Hz (81246), and also the CO2s' effect on SQZ (72244), this morning we turned off the CO2s and stayed locked ~40 minutes, which is longer than we expected. Unsure of the lock loss cause, as maintenance activities had started.
We aimed to see if the 59Hz peak's coupling to DARM changed (needs ~20+ minutes of data), but the accelerometers showed that, by increasing the CO2 power dumped into the water-cooled beam dump, the load on the chillers changed and the 59Hz peak moved ~0.1Hz, which means it would be very difficult to see in DARM.
In the attached plot, you can see that the SQZ (especially at high frequency) got worse without the CO2s. The SQZ ASC also changed the alignment, mainly in YAW; does that mean that the CO2s need some alignment improvements in YAW? Looking at the HWS signals, the optics' substrate absorption changed as expected and started to level out, although the simulation didn't expect them to level out yet.
In future, to repeat this test, we could dump the ~1.7W with a normal beam dump before the periscope, so that the load on the chiller isn't changed but no CO2 is injected, and anything that could backscatter light after the periscope is blocked. Alternatively, we could shake the table. Still, the VPs themselves could be a source of scatter, as they are only AR coated at 10.6um, meaning they reflect up to 15% of 1064nm per surface (DCC D1100439).
TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 9mph Gusts, 6mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.42 μm/s
QUICK SUMMARY:
Running the range comparison for both hours of the bad range this morning vs an hour of good range earlier in the same lock. The noise looks to be largely above ~30Hz.
Workstations were upgraded and rebooted. This was an OS package upgrade; conda packages were not upgraded.
TITLE: 12/10 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: Currently Observing at 158 Mpc and have been Locked for 4 hours. The range is okay now, but we did go through a patch of low range that looks very similar to the low range from the last lock. For both this lock and the previous one, the low range started around the same time after locking, 1 hour and 20 minutes after the lock started (ndscope). I couldn't find anything with the squeezing or the jitter diaggui.
LOG:
00:26 Running an initial alignment
01:26 Initial alignment done, relocking
02:06 NOMINAL_LOW_NOISE
02:09 Observing
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
21:59 | CDS | Erik, Jonathan | MSR | N | VM Server Setup | 23:58 |
00:07 | CDS | Jonathan, Erik | LVEA | remote | Moving virtual machines | 00:39 |
00:08 | PCAL | Tony | PCAL | y(local) | Getting stuff ready for tomorrow | 00:29 |
00:22 | PEM | Robert | LVEA | YES | Putting viewport covers back on | 01:22 |
Closes FAMIS#26020, last checked 81615
TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is LOCKING and in an EARTHQUAKE environment state.
We had a few good hours of locking today but as with yesterday, earthquakes have been rampant. Here are details:
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09 |
17:08 | FAC | Karen | MY | N | Technical Cleaning | 17:42 |
17:09 | PEM | Robert | LVEA | YES | Viewport setup | 18:09 |
19:54 | PEM | Robert | LVEA | YES | Recording lock acquisition from viewport | 20:13 |
21:59 | CDS | Erik, Jonathan | MSR | N | VM Server Setup | 23:58 |
22:34 | FAC | Tyler | EX | N | Cryopump check | 21:34 |
22:48 | EE | Fil | Receiving Rollup | N | Item transport | 23:48 |
00:07 | CDS | Jonathan, Erik | LVEA | remote | Moving virtual machines | 00:35 |
00:08 | PCAL | Tony | PCAL | y(local) | Getting stuff ready for tomorrow | 00:29 |
00:22 | PEM | Robert | LVEA | YES | Putting viewport covers back on | 01:22 |
Attached is DARM for the no SQZ test time (~10 minute averages). It seems like the noise stopped before the test started. We are seeing worse DARM around 20Hz with SQZ.
Lockloss @ 12/09 23:12 UTC due to a large earthquake from Nevada. Sitting in DOWN until ground motion goes down.
12/10 2:09 UTC Back to Observing
Camilla C, TJ S
This morning we had another period where our range was fluctuating by almost 40Mpc, previously seen on Dec 1 (alog81587) and further back in May (alog78089). Camilla and I decided to turn off both TCS CO2s for a short period just to completely rule them out, since previously there was correlation between these range dips and a TCS ISS channel. We saw no positive change in DARM during this short test, but we didn't want to go too long and lose lock. The CO2s were requested to have no output power from 16:12:30-16:14:30 UTC.
The past times that we have seen this range loss, the H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ channel and the H1:SQZ-FC_LSC_DOF2_OUT_DQ channel had noise that correlated with the loss, but the ISS channel showed nothing different this time (attachment 2). We also were in a state of no squeezing at the time, so it's possible that this is a completely different type of range loss.
DetChar, could we run Lasso or check on HVETO for a period during the morning lock with our noisy range?
Here is a link to a lasso run during this time period. The two channels with the highest coefficients are a midstation channel H1:PEM-MY_RELHUM_ROOF_WEATHER.mean and a HEPI pump channel H1:HPI-PUMP_L0_CONTROL_VOUT.mean.
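For anyone unfamiliar with it, a lasso run is an L1-regularized linear regression of the range against auxiliary minute-trend channels, so channels with non-zero coefficients are candidates for correlation with the range drop. A rough offline sketch of the same idea, assuming gwpy and scikit-learn are available; the channel names below are just the two reported above, the range channel and the NDS trend-channel syntax are assumptions, and the production lasso tool on the summary pages does this properly:

    # Minimal lasso-style regression of range against auxiliary minute trends.
    # Channel names and trend syntax are assumptions, not the real tool's config.
    import numpy as np
    from gwpy.timeseries import TimeSeriesDict
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import StandardScaler

    start, end = 1417880000, 1417890000   # placeholder GPS times
    range_chan = 'H1:CDS-SENSMON_CAL_SNSW_EFFECTIVE_RANGE_MPC.mean,m-trend'
    aux_chans = [
        'H1:PEM-MY_RELHUM_ROOF_WEATHER.mean,m-trend',
        'H1:HPI-PUMP_L0_CONTROL_VOUT.mean,m-trend',
    ]

    data = TimeSeriesDict.fetch([range_chan] + aux_chans, start, end)
    y = data[range_chan].value
    X = np.column_stack([data[c].value for c in aux_chans])

    # Standardize so the L1 penalty treats channels with different units fairly.
    X = StandardScaler().fit_transform(X)
    y = (y - y.mean()) / y.std()

    model = Lasso(alpha=0.1).fit(X, y)
    for chan, coef in zip(aux_chans, model.coef_):
        print(f"{coef:+.3f}  {chan}")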
Back in March (76269) Jeff and I updated all the suspension watchdogs (besides OFIS, OPOS, and HXDS, since those were already up to date) to use better BLRMS filtering and to output in µm. We set the suspension watchdog thresholds to values between 100 and 300 µm, but these values were set somewhat arbitrarily since there was previously no way to see how far the stages move during different scenarios. We had upped a few of the thresholds after having some suspensions trip when they probably shouldn't have, and this is a continuation of that.
During the large earthquake that hit us on December 5th, 2024 18:46 UTC, all ISI watchdogs tripped as well as some of the stages on several suspensions. After a cursory look, all suspensions that tripped only had either the bottom or bottom+penultimate stage trip, meaning that with the exception of the single suspensions, the others' M1 stage damping should have stayed on.
We wanted to go through and check whether the trips may have just been because of the movement from the ISIs tripping. If that is the case, we want to raise the suspension watchdog thresholds for those stages so that these suspensions don't trip every single time their ISI trips, especially if the amount that they are moving is still not very large.
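As a rough illustration of this check (the actual comparisons below were done in ndscope against the watchdog BLRMS channels), one could fetch an OSEM signal around the lockloss, band-limit it, and take the peak RMS to compare against the threshold. A minimal sketch assuming gwpy; the channel name, GPS time, and band edges are placeholders rather than the real watchdog filter settings:

    # Minimal sketch: peak band-limited RMS of a suspension OSEM signal
    # around a lockloss, to compare against a watchdog threshold in um.
    # Channel name, GPS time, and the 0.1-10 Hz band are placeholders,
    # not the actual watchdog configuration.
    from gwpy.timeseries import TimeSeries

    chan = 'H1:SUS-MC3_M3_OSEMINF_UL_OUT_DQ'   # assumed example channel
    t_lockloss = 1417463200                     # placeholder GPS time

    data = TimeSeries.fetch(chan, t_lockloss - 30, t_lockloss + 300)
    blrms = data.bandpass(0.1, 10).rms(stride=1)   # 1 s RMS in the band

    print(f"max BLRMS = {blrms.value.max():.1f}")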
Suspension stages that tripped:
Triples:
- MC3 M3
- PR3 M2, M3
- SRM M2, M3
- SR2 M2, M3
Singles:
- IM1 M1
- OFI M1
- TMSX M1
MC3 (M3) (ndscope1)
When the earthquake hit and we lost lock, all stages were moving due to the earthquake, but once HAM2 ISI tripped 5 seconds after the lockloss, the rate at which the OSEMs were moving quickly accelerated, so the excess motion looks to mainly be due to the ISI trip (ndscope2).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 86 | 150 (unchanged) |
M2 | 150 | 136 | 175 |
M3 | 150 | 159 | 200 |
PR3 (M2, M3) (ndscope3)
Looks to be the same issue as with MC3.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 72 | 150 (unchanged) |
M2 | 150 | 162 | 200 |
M3 | 150 | 151 | 200 |
SRM (M2, M3) (ndscope4)
Once again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 84 | 150 (unchanged) |
M2 | 150 | 165 | 200 |
M3 | 150 | 174 | 225 |
SR2 (M2, M3) (ndscope5)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM4 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 102 | 150 (unchanged) |
M2 | 150 | 182 | 225 |
M3 | 150 | 171 | 225 |
IM1 (M1) (ndscope6)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM2 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 175 | 225 |
OFI (M1) (ndscope7)
Again, the OSEMs are moving after the lockloss due to the earthquake, but the rate they were moving at greatly increased once HAM5 saturated and the ISI watchdogs tripped.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 209 | 250 |
TMSX (M1) (ndscope8)
This one seems a bit questionable - it looks like some of the OSEMs were already moving quite a bit before the ISI tripped, and there isn't as clear a point where they started moving more once the ISI had tripped (ndscope9). I will still be raising the suspension trip threshold for this one, just because it doesn't need to be raised very much and is within a reasonable range.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 100 | 185 | 225 |
We just had an earthquake come through and trip some of the ISIs, including HAM2, and with that, IM2 tripped (ndscope1). I checked to see if the movement in IM2 was caused by the ISI trip, and sure enough it was (ndscope2). I will be raising the suspension watchdog threshold for IM2 up to 200.
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 152 | 200 |
Yet another earthquake! The earthquake that hit us December 9th 23:10 UTC tripped almost all of our ISIs, and we had three suspension stages trip as well, so here's another round of trying to figure out whether they tripped because of the earthquake or because of the ISI trips. The three suspensions that tripped are different from the ones I had updated the thresholds for earlier in this alog.
I will not be making these changes right now since that would knock us out of Observing, but the next time we are down I will make the changes to the watchdog thresholds for these three suspensions.
Suspension stages that tripped:
- MC2 M3
- PRM M3
- PR2 M3
MC2 (M3) (ndscope1)
It's hard to tell for this one what caused M3 to trip (ndscope2), but I will up the threshold here for M3 anyway, since even if the trip was directly caused by the earthquake, the ISI tripping definitely wouldn't have helped!
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 88 | 150 (unchanged) |
M2 | 150 | 133 | 175 |
M3 | 150 | 163 | 200 |
PRM (M3) (ndscope3)
For this one it's pretty clear that it was because of the ISI tripping (ndscope4).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 44 | 150 (unchanged) |
M2 | 150 | 122 | 175 |
M3 | 150 | 153 | 200 |
PR2 (M3) (ndscope5)
Again, for this one it's pretty clear that it was because of the ISI tripping (ndscope6).
Stage | Original WD threshold (µm) | Max BLRMS reached after lockloss (µm) | New WD threshold (µm) |
---|---|---|---|
M1 | 150 | 108 | 150 (unchanged) |
M2 | 150 | 129 | 175 |
M3 | 150 | 158 | 200 |
Using the darm_integral_compare.py script from the NoiseBudget repo (NoiseBudget/aligoNB/production_code/H1/darm_integral_compare.py) as a starting point, I made a version that is simplified and easy to run for when our range is low and we want to compare range vs frequency with a previous time.
It takes two start times, supplied by the user, and for each time it grabs the DARM data between the start time and an end time of starttime+5400 seconds (1.5 hours). Using this data it calculates the inspiral range integrand and returns two plots (pdf) - one showing the range integrand plotted against frequency for each set of data (png1), and a second showing DARM for each set of data, along with a trace showing the cumulative difference in range between the two sets as a function of frequency (png2). These are saved both as pngs and in a pdf in the script's folder.
This script can be found at gitcommon/ops_tools/rangeComparison/range_compare.py. To run it you just need to supply the gps times for the two stretches of time that you want to compare, although there is also an optional argument you can use if you want the length of data taken to be different from the default 5400 seconds. The command used to generate the PDF and PNGs attached to this alog was as follows: python3 range_compare.py --span 5000
I apparently didn't do a very good job of telling you how to run this and forgot to put the example times in the command, so here's a clearer (actually complete) explanation.
To find the script, go to:
cd /ligo/gitcommon/ops_tools/rangeComparison/
and then to run the script:
python3 range_compare.py [time1] [time2]
where time1 and time2 are the gps start times for the two stretches of time that you want to compare. The default time span it will run with for each time is 5400 seconds after the start time, but this can be changed by using the --span option followed by the number of seconds you want. For example, the plots from the original alog were made by running the command python3 range_compare.py --span 5000 1414349158 1414586877
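For reference, here is a stripped-down sketch of the kind of calculation range_compare.py performs (fetch strain for each stretch, compute a PSD, and form the BNS inspiral range integrand), assuming gwpy is available; the channel name, FFT settings, and masses are assumptions and not necessarily what the script itself uses:

    # Minimal sketch of the range-integrand comparison, assuming gwpy.
    # Channel name, FFT settings, and BNS masses are assumptions; see
    # gitcommon/ops_tools/rangeComparison/range_compare.py for the real thing.
    import numpy as np
    from gwpy.timeseries import TimeSeries
    from gwpy.astro import inspiral_range_psd

    def range_integrand(t0, span=5400, chan='H1:GDS-CALIB_STRAIN'):
        """Return the BNS inspiral range integrand (Mpc^2 / Hz) vs frequency."""
        strain = TimeSeries.fetch(chan, t0, t0 + span)
        psd = strain.psd(fftlength=16, overlap=8, method='median')
        return inspiral_range_psd(psd, snr=8, mass1=1.4, mass2=1.4, fmin=10)

    integ_1 = range_integrand(1414349158)   # example times from this alog
    integ_2 = range_integrand(1414586877)

    # Cumulative range vs frequency for each time and their difference
    # (this is what the second plot of range_compare.py shows).
    df = integ_1.df.value
    cum_1 = np.sqrt(np.cumsum(integ_1.value * df))
    cum_2 = np.sqrt(np.cumsum(integ_2.value * df))
    print(f"total range difference: {cum_1[-1] - cum_2[-1]:.2f} Mpc")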