Matt Todd, Camilla C. Continued from 82281. Layout D1400241.
We swapped the HWS fiber from 200um to 50um and used the CFCS11-A adjustable SMA fiber collimator to make the beam as small as possible (at the limit of the collimator); it was still larger than when this work was done at EX (81734), too large to align.
Instead we took some beam profile measurements of the beam out of the collimator; they are attached along with a photo of the table (L3 had already been removed). There were some strange wings on the beam profiler image, making the D4Sigma values unusable; photo attached.
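As an aside on why wings make the D4Sigma numbers unusable: the second-moment definition weights the tails of the profile heavily, so even a faint pedestal inflates the reported width. A synthetic one-dimensional sketch (illustrative values only, not the actual profiler data):

# Minimal sketch of the D4-sigma (second-moment) beam width, showing how a
# faint pedestal ("wings") blows up the value. Synthetic data only.
import numpy as np

x = np.linspace(-5e-3, 5e-3, 2001)             # [m] transverse coordinate
w = 0.5e-3                                     # [m] 1/e^2 radius of the core beam
core  = np.exp(-2 * x**2 / w**2)               # clean Gaussian profile
wings = 0.01 * np.exp(-2 * x**2 / (5 * w)**2)  # 1% broad pedestal ("wings")

def d4sigma(profile):
    """D4-sigma width: 4 x sqrt of the intensity-weighted variance."""
    x0  = np.sum(x * profile) / np.sum(profile)
    var = np.sum((x - x0)**2 * profile) / np.sum(profile)
    return 4 * np.sqrt(var)

print('D4sigma, clean beam      :', d4sigma(core))          # ~2*w = 1.0 mm
print('D4sigma, beam with wings :', d4sigma(core + wings))  # noticeably larger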
Left the HWS laser off as it isn't aligned.
I've done some fitting; it seems like an oddly small beam at the waist...but it's what the fit shows. I'll also try to confirm this by looking at the divergence angle coming from the 50um fiber and the lens strength of the collimator.
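For that cross-check, a minimal geometric-optics sketch of what the 50um fiber plus collimator should give; the focal length and fiber NA below are assumed catalog-style values (the "11" in CFCS11-A suggests roughly an 11 mm focal length), not measurements:

# Rough geometric-optics estimate of the collimated beam out of the HWS fiber
# collimator. Sketch only -- focal length, fiber NA, and wavelength-independent
# geometric approximations are assumptions, not measured values.
f_collimator = 11e-3   # [m] assumed focal length of the CFCS11-A
fiber_na     = 0.22    # assumed NA of the 50 um multimode fiber
core_dia     = 50e-6   # [m] fiber core diameter

beam_dia   = 2 * f_collimator * fiber_na   # collimated beam diameter, ~4.8 mm
divergence = core_dia / f_collimator       # residual full-angle divergence, ~4.5 mrad

print(f"collimated beam diameter ~ {beam_dia*1e3:.1f} mm")
print(f"residual divergence      ~ {divergence*1e3:.1f} mrad")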
Tue Jan 28 10:04:13 2025 INFO: Fill completed in 4min 11secs
Gerardo confirmed a good fill curbside. TC-B continues to be non-nominal.
TCmins [-79C, -35C] OAT (-2C, 28F), DeltaTempTime 10:04:22 DeltaTemp 61.1C (just squeaked in, trip is 60.0)
TITLE: 01/28 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1's been locked 4.5 hrs and just finished magnetic injections; it was in Observing until Maintenance, but has just been taken out for in-lock SUS charge measurements!
Workstations updated and rebooted. This was an OS packages update. Conda packages were not updated.
TITLE: 01/28 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We stayed locked the whole shift, ~9 hours.
LOG: No log.
21:18 UTC Observing
05:14 - 05:22 UTC dropped observing as the SQZer lost lock and relocked. The FC struggled a little to relock.
TITLE: 01/28 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: A lockloss and commissioning time this morning, but once that wrapped up relocking went smoothly and H1 has been observing for just over 3 hours.
LOG:
Start Time | System | Name | Location | Laser Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER HAZ | LVEA | YES | LVEA is Laser HAZARD | Ongoing |
16:35 | PEM | Robert | LVEA | - | Closing up viewports | 17:23 |
17:49 | PCAL | Francisco | Pcal Lab | Local | Grabbing target | 17:58 |
18:24 | ISC | Robert, Sheila, Jennie | LVEA | Y | Looking into viewports | 18:39 |
19:53 | FAC | Tyler | LVEA | - | Grab lift batteries | 20:13 |
20:29 | FIT | Matt | X-arm | n | On a run | 20:57 |
21:55 | TCS | Camilla, Matt | Opt Lab | n | Opening a bag | 22:21 |
22:51 | CAL | Tony | PCal Lab | Local | PCal measurement | 23:48 |
TITLE: 01/27 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 4mph Gusts, 2mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
FAMIS 31070
Since the chiller swap last week, several trends (mostly for cooling) are slightly different even though the chiller setpoints were set the same as on the previous chiller.
PMC REFL continues its upward trend. The RefCav alignment is tanking, but that will be fixed with an on-table FSS alignment tomorrow during maintenance.
Summary: H1 was out of observing from 16:31 to 21:18 UTC due to a lockloss and commissioning.
Soon after dropping observing for regularly scheduled commissioning time this morning, H1 lost lock at 16:34 UTC due to a M4.2 EQ from Idaho. Commissioners then decided to still utilize commissioning time for PR2 spot moves with H1 unlocked (see other logs, especially 82481). Once most activities had finished, I ran H1 through an initial alignment and started locking, which went smoothly and automatically.
It was taking suspiciously long to converge the full IFO ASC at 2W, and Ryan and I saw that it was the PRM ADS that was taking forever to converge. This usually occurs because the POP A offsets are not set right, so the PRM doesn't go to the right place in DRMI ASC.
I saw that the POP A offsets were OFF, probably left that way after this alog: 81849.
I set the offsets where we want them and turned them on. These settings are SDFed. This will improve locking time by a small amount.
Mayank, Sheila, Jennie W, Ryan S, Elenna, Jenne, Camilla, Robert.
Follow on from 82401, mostly copied Jenne's 77968.
As soon as we started setup, the IFO unlocked from an EQ and we decided to do this with the IFO unlocked.
Ryan locked green arms in initial alignment and offloaded. Took ISC_LOCK to PR2_SPOT_MOVE (when you move PR3 it calculates and moves PR2, PRM and IM4).
Green beatnotes were low but improved when we started moving. Steps taken: moved PR3 yaw with sliders until the ALS_C_COMM_A beatnote decreased to ~-14, then used pico A_8_X to bring it back. Repeated until PR3 M1 pitch was 1-2urad off, and then Mayank brought pitch back with the PR3 sliders. Repeated moving the PR3 yaw sliders and picos.
Started with PR3 (pitch, yaw) at (-122, 96) and went to (-125.9, 39.5), aiming for yaw at -34. But at (-124, 68) we lost the beam on AS_AIR, whoops.
Once we realized that we had fallen off AS_AIR, we took PR3 back to the last point where we had light on it (68urad on the yaw slider), ignoring the green arms, with the plan of moving back to 38urad by moving SR2 to keep light on AS_AIR. Moved SR2 in single bounce (ITMY misaligned) to increase light on AS_AIR. We couldn't go any further in PR3 yaw while keeping light on AS_AIR, so we decided to revert the green picos to work with 68urad on PR3. We took PR3 back to (-125.9, 39.5) and reversed our steps of sliders and picos.
After we got here, Ryan offloaded the green arms and we tried to go to initial alignment. No flashes in initial alignment on the X arm or Y arm (touched BS for the Y arm). We would usually be able to improve alignment while watching AS_AIR, but the beam wasn't clearly on AS_AIR. Improved the beam on AS_AIR by moving SR2/3.
Ran SR2 align in the ALIGN_IFO GRD. This seemed to make some clipping worse; are we clipping in SR2? Still working on SR2 alignment. Maybe we should update the ISC_LOCK PR2_SPOT_MOVE state to have SR2/3 follow align so that we don't lose AS_C and AS_AIR.
Regarding "Improved beam on AS_AIR by moving SR2/3."
We found that by moving SR2/3 by hand, or by engaging SR2 align and moving SR3 (SR2 follows), we can make improvements in the AS_C NSUM and the AS_C yaw position, but that clearly does not fix all the problems with input align and the terrible beam shape we saw on the AS AIR camera. This leads us to believe that part of the problem is upstream, as in even if we fix everything at the output, we may have caused some other clipping problem in the PRC.
Sheila and I tried adjusting the pointing of PR2 to see if we could improve the input align issues, but that seemed to have very little effect.
I think that a possible reason why our PR2 spot moves have gone poorly is because the PR2 spot move function in the guardian does not have the correct constants to ensure the spot moves on PR2 and not on the other optics.
Looking back in time, the PR2 spot move function was first written by Stefan in 2016 (as I can find, see: 28420, 28442). Looking at his code and the current guardian code which was originally copied from his code, the adjustment values are exactly the same:
pitPR3toPR2=-9.2;
yawPR3toPR2=+9.2;
pitPR3toIM4=56;
yawPR3toIM4=11;
pitPR3toPRM=1.5;
yawPR3toPRM=2.2;
These differ from the values you would calculate from the ray transfer matrix, which Stefan notes in a comment in 28442. My guess is that the difference in those values is related to whatever calibration we add into the optics sliders.
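To make explicit how I assume these constants are used (a rough sketch, not the actual ISC_LOCK guardian code): a commanded PR3 slider step is multiplied by each ratio to get the compensating moves on PR2, PRM, and IM4.

# Illustrative sketch of how the PR2 spot-move coupling constants are
# presumably applied (not the guardian code itself).
coupling = {
    'PR2': {'pit': -9.2, 'yaw': +9.2},
    'IM4': {'pit': 56.0, 'yaw': 11.0},
    'PRM': {'pit':  1.5, 'yaw':  2.2},
}

def compensating_moves(d_pr3_pit, d_pr3_yaw):
    """Return slider moves for each optic given a PR3 slider step [urad]."""
    return {optic: {'pit': c['pit'] * d_pr3_pit,
                    'yaw': c['yaw'] * d_pr3_yaw}
            for optic, c in coupling.items()}

# Example: a -1 urad yaw step on PR3 implies roughly -9.2 urad on PR2 yaw.
print(compensating_moves(0.0, -1.0))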
Also, Jeff updated the IM slider calibrations to microradians last April, see: 77211. I can't find any alog (so far) that reports an update to the PR2 spot move values in the guardian to account for this recalibration.
Sheila pointed me to the May 20, 2024 spot move where she says she updated the adjustment values from the guardian: 77949. However, the values she uses in this alog are not reported, and it doesn't look like the guardian numbers actually changed. I looked at the saved ndscope from that time and eyeballed the values to be approximately:
yawPR3toIM4 = 0.875
yawPR3toPR2 = 10
yawPR3toPRM = 2
You can check the attached screenshot to see how I calculated these values. These numbers are clearly different compared to the numbers above, so I don't really know what happened here. But, it seems that if we want to do a PR2 spot move again, we should check to make sure we are adjusting the optics appropriately.
You can also find the template for this scope in "/ligo/home/sheila.dwyer/ndscope/PR2_spot_move_jennie.yaml" if you want to check yourself.
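If you'd rather compute the ratios than eyeball them, something like the following should work (a sketch assuming gwpy access and that these are the right top-mass OPTICALIGN slider channels; the time window is also only approximate):

# Sketch: estimate the effective PR3->PR2 yaw ratio from archived slider
# trends. Channel names and time window are assumptions.
from gwpy.timeseries import TimeSeriesDict

start, end = '2024-05-20 15:00', '2024-05-20 19:00'   # assumed window around 77949
chans = ['H1:SUS-PR3_M1_OPTICALIGN_Y_OFFSET',
         'H1:SUS-PR2_M1_OPTICALIGN_Y_OFFSET']

data  = TimeSeriesDict.get(chans, start, end)
d_pr3 = data[chans[0]].value[-1] - data[chans[0]].value[0]
d_pr2 = data[chans[1]].value[-1] - data[chans[1]].value[0]
print('yawPR3toPR2 ~', d_pr2 / d_pr3)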
J. Kissel (with help from E. von Reis, D. Barker)
ECR E2500017, IIET Ticket LHO:33143, WP 12302
I'm extending the infrastructure I installed in July 2024 (LHO:78956) to include replicas of the OMC DCPD GW channels' infrastructure further down the chain, namely to send the output of the 524 kHz test infrastructure over IPC to the 16 kHz OMC model so the channels can be stored in the frames. This is the result of the last two weeks' worth of use of the existing prototype infrastructure, and the finding that there may be some discrepancies between the offline down-sampled versions of the 524 kHz outputs and the 16 kHz online down-sampled versions (see LHO:82420).
The simulink models touched for this update are, under /opt/rtcds/userapps/release/:
cds/h1/models/h1iopomc0.mdl
omc/h1/models/h1omc.mdl
omc/common/models/omc.mdl
In the attached collection of screenshots, I show "before" vs. "after" for these models and the parts affected:
(1) Top level of the h1iopomc0 model, before vs. after
(2) Inside the OMC top_names block of the h1iopomc0 model, before vs. after
(3) Inside the OMC_DCPD block of the h1iopomc0.mdl model, before vs. after
(4) Top level of the h1omc.mdl model, before vs. after
(5) Inside the OMCNEW block of the omc.mdl library (which is used and renamed as just OMC in the h1omc.mdl model), before vs. after
(6) Inside the OMC_DCPD block of the OMCNEW block, before vs. after
I note that as part of (6) I've reorganized the primary DCPD A and DCPD B SUM and NULL outputs so the layout is much more legible than the previous version; again see the before vs. after.
More details to follow in the comments.
Because these models run on the h1omc0 front end, they all need to be compiled on the h1build machine with a special environment. To create the special environment, after logging in as controls to the h1build machine, set the following bash environment variables:
$ export RCG_SRC=/opt/rtcds/rtscore/advligorts-5.3.1_ramp
$ export RTS_VERSION=5.3.0~~dual_tone_frequency
Then you can do the standard
$ rtcds build h1iopomc0
... etc. for each model.
Pending DAQ Changes for new models:
h1iopomc0:
h1omc:
Today we re-engaged the 16k digital AA filter in the A and B DCPD paths, then re-updated the calibration on the front end and in the gstlal-calibration (GDS) pipeline before returning to Observing mode.
### IFO Changes ###
* We engaged FM10 in H1OMC-DCPD_A0 and H1OMC-DCPD_B0 (omc_dcpd_filterbanks.png). We used the command in LHO:82440 to engage the filters and step the OMC lock demod phase (H1:OMC-LSC_PHASEROT) from 56 to -21 degrees (a 77 degree change). The 77 degree shift is necessary to compensate for the fact that the additional 16k AA filter in the DCPD path introduces a 77 degree phase shift at 4190 Hz, the frequency of the dither line that the OMC lock servo is locked to (omc_lock_servo.png). All of these changes (the FM10 toggles and the new OMC demod phase value) have been saved in the OBSERVE and SAFE SDFs.
* It was noted in the control room that the range was quite low (153 Mpc) and we remembered that we might want to tune the squeezer again as Camilla had done yesterday (LHO:82421). We have not done this.
* Preliminary analysis of data taken with this newly installed 16k AA filter engaged suggests that the filter is helping (LHO:82420).
### Calibration Changes ###
We pushed a new calibration to the front end and the GDS pipeline based on the measurements in 20250123T211118Z. In brief, here are a few things we learned/did:
- The inverse optical gain (1/Hc) filter changes are not being exported to the front end at all. This is a bug.
- We included the following delays in the actuation path:
  uim_delay = 23.03e-6 [s]
  pum_delay = 0 [s]
  tst_delay = 20.21e-6 [s]
  These values are stored in the pydarm_H1.ini file.
- The pyDARM parameter set also contains a value of 198.664 for tst_drive_align_gain, which is in line with CALCS (H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN) and the ETMX in-loop path (H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN).
- There is still a 5% error at 30 Hz that is not fully understood yet. Broadband pcal2darm comparison plots will be posted in a comment.
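For reference, the demod phase bookkeeping and the size of those actuation delays expressed as phase are easy to check with simple arithmetic (an illustrative sketch using only the numbers quoted above):

# Quick arithmetic checks on the numbers quoted above (illustrative only).
# OMC demod phase: the new 16k AA filter adds ~77 deg at the 4190 Hz dither
# line, so the demod phase moves from 56 deg to 56 - 77 = -21 deg.
old_phase_deg = 56.0
aa_phase_deg  = 77.0
print('new OMC-LSC_PHASEROT ~', old_phase_deg - aa_phase_deg, 'deg')

# Actuation path delays expressed as phase at 30 Hz: phase = 360 * f * tau.
for name, tau in [('uim', 23.03e-6), ('pum', 0.0), ('tst', 20.21e-6)]:
    print(f'{name} delay -> {360.0 * 30.0 * tau:.3f} deg at 30 Hz')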
I'm attaching a PCALY2DARM comparison to show where the calibration is now compared against what it was before the cal-related work started. At present (dark blue) we have a 5% error magnitude near 30Hz and roughly a 2degree maximum error in phase. The pink trace shows a broadband of PCALY to GDS-CALIB_STRAIN on Saturday, 1/25. This is roughly 24hrs after the cal work was done and I plotted it to show that the calibration seems to be holding steady. The bright green trace is the same measurement taken on 1/18, which is before the recent work to integrate the additional 16k AA filter in the DCPD path began. All in all, we've now updated the calibration to compensate for the new 16k AA filter and have left the calibration better than it was when we found it. More discussion related to the cause of the large error near 30Hz is to come.
Camilla, Sheila, Erik
Erik points out that we've lost lock 54 times since November in guardian states 557 and 558 (transition from ETMX / low noise ESD ETMX).
We thought that part of the problem with this state was a glitch caused when the boost filter in DARM1 FM1 is turned off, which motivated Erik's change to the filter ramping on Tuesday 82263, which was later reverted after two locklosses that happened 12 seconds after the filter ramp, 82284.
Today we added 5 seconds to the pause after the filter is ramped off (previously the filter ramp time and the pause were both 10 seconds long, now the filter ramp time is still 10 seconds but the pause is 15 seconds). We hope this will allow us to better tell if the filter ramp is the problem or something that happens immediately after.
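Schematically, the timing change looks like this (a sketch only, assuming the pause timer starts when the ramp starts; the filter-switching call is a placeholder, not the real ISC_LOCK code):

# Schematic sketch of the DARM1 FM1 ramp-off plus pause timing (not the
# actual guardian code; switch_off_fm1() is a hypothetical stand-in).
import time

RAMP_TIME  = 10   # [s] DARM1 FM1 ramp-off time (unchanged)
PAUSE_TIME = 15   # [s] pause after starting the ramp (was 10 s, now 15 s)

def switch_off_fm1(ramp_time):
    """Hypothetical stand-in for the real filter-module switch-off call."""
    print(f'ramping DARM1 FM1 off over {ramp_time} s')

switch_off_fm1(RAMP_TIME)
time.sleep(PAUSE_TIME)   # 5 s of margin beyond the ramp: a glitch ~10 s after
                         # the ramp now lands inside the pause, not the next step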
In the last two lock acquisitions, we've had the fast DARM and ETMY L2 glitch 10 seconds after DARM1 FM1 was turned off. Plots attached from Jan 26th and zoom, and Jan 27th and zoom. We expect this means the fast glitch is from FM1 turning off, but we've seen this glitch come and go in the past, e.g. 81638, where we thought we had fixed the glitch by never turning on DARM_FM1, but we were still turning FM1 on, just later in the lock sequence.
In the locklosses we saw on Jan 14th (82277) after the OMC change (plot), I don't see the fast glitch, but there is a larger, slower glitch that causes the lockloss. One difference between that date and recently is that the SUS counts are double the size. We always have the large slow glitch, but when the ground is moving more we struggle to survive it? Did the 82263 h1omc change fix the fast glitch from FM1 turning off (which seems to come and go), and we were just unlucky with the slower glitch and high ground motion on the day of the change?
Can see from the attached microseism plot that it was much worse around Jan 14th than now.
Around 2025-01-21 22:29:23 UTC (gps 1421533781) there was a lock-loss in the ISC_LOCK state 557 that happened before FM1 was turned off.
It appears to have happened about 15 seconds after executing the code block where self.counter == 2. This is about halfway through the 31 second wait period before the self.counter == 3 and 4 blocks execute.
See attached graph.
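For context, the counter/timer idiom referenced above generally looks like the following (a generic sketch of the guardian pattern; the state name and block contents are placeholders, not the actual ISC_LOCK code):

# Schematic sketch of the guardian self.counter / self.timer idiom.
from guardian import GuardState

class TRANSITION_SKETCH(GuardState):   # hypothetical state name
    def main(self):
        self.counter = 1
        self.timer['wait'] = 0

    def run(self):
        if not self.timer['wait']:
            return False            # still inside a wait period
        if self.counter == 1:
            # ... first block of switching ...
            self.counter += 1
        elif self.counter == 2:
            # ... block after which the 2025-01-21 lockloss occurred,
            # about 15 s into the 31 s wait that follows it ...
            self.timer['wait'] = 31
            self.counter += 1
        elif self.counter in (3, 4):
            # ... later blocks ...
            self.counter += 1
        else:
            return True             # state complete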
I've updated the PEM_MAG_INJ and SUS_CHARGE Guardian nodes to not run their associated injections and drop H1 from observing if there is an active stand down alert from SNEWS, Fermi, Swift, or KamLAND (for supernova, GRB, and neutrino alerts). As a reminder, these normally start at 7:20am local time on Tuesdays if H1 is locked.
Due to a typo in the SUS_CHARGE Guardian (my fault), the in-lock charge measurements have not run since I implemented this change. I've fixed this and reloaded the node, so in-lock charge measurements should be back to running as usual at 7:45am Tuesday mornings if H1 is locked.
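Schematically, the check added to those nodes is of this form (a sketch with a hypothetical helper standing in for reading the real SNEWS/Fermi/Swift/KamLAND alert channels):

# Illustrative sketch only; stand_down_alert_active() is a hypothetical
# stand-in for the real alert-channel checks in PEM_MAG_INJ and SUS_CHARGE.
def stand_down_alert_active():
    """Hypothetical: return True if any external stand-down alert is active."""
    return False

def ready_to_inject(h1_locked):
    # Only run the injections (and drop observing) if H1 is locked and no
    # supernova/GRB/neutrino stand-down alert is active.
    return h1_locked and not stand_down_alert_active()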