TITLE: 02/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 2mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.15 μm/s
QUICK SUMMARY:
H1 has been locked for almost 5 Hours.
All systems look great, at first glance.
LHO plans to drop from Observing for calibration from 19:30-20:00 UTC.
TITLE: 02/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 51Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
H1 lockloss after a 14+hr lock. During relocking, took the opportunity to (1) LOAD VIOLIN_DAMPING & (2) address the POP_A P/Y offsets. Within 2-3min of getting to OBSERVING, GW candidate S250208ad rolled through.
There have been elevated PSL Dust counts throughout the shift.
LOG:
After the ETMx Glitch Lockloss, had the opportunity to address the POP_A Offsets via Sheila's alog (but she also phoned in to walk me through it!) :)
And then superevent S250208ad 2-min after getting to observe! :)
H1 just had an ETMx Glitch lockloss (after 14.25hr lock).
H1 MANAGER Ended An Alignment To Start Another Initial Alignment!
Some weirdness: I proactively ran an Initial Alignment, but while in MICH BRIGHT, ISC_LOCK took over and restarted an Initial Alignment at 0235utc!! (I had GRD IFO in MANAGED...but noticed that ISC_LOCK was also still MANAGED [by H1 MANAGER]). H1 MANAGER's log had a note that a "waiting for green" timer had completed, so H1 MANAGER thought we needed to run an alignment. :-/
At any rate, let this 2nd Alignment run through, and now back to locking.
I see that H1's violin mode is still high, and noticed that ITMy Mode5 is currently running with 0.0 gain + FM8 ON. To damp this down since the big ring-up on Tuesday, Ryan C found some settings which had been working (and after a few days of seeing them help, he updated lsc_params with these new settings), BUT we need to run a LOAD of VIOLIN_DAMPING to implement these settings next time we are out of OBSERVING (this LOAD slipped my mind after my lockloss a few minutes before the end of my shift).
In the meantime, I made the change by hand (~13.5hrs into our current lock):
TITLE: 02/08 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 51Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
H1 Stayed Locked the Entire Day! There were multiple times when we fell out of Observing. This was because the SUS_PI Guardian decided that we needed !!! XTREME_PI_DAMPING !!!, like Macho Man Randy Savage's new Mountain Dew commercial on repeat.
Sheila agreed that it was just a little too Xtreme, and changed the threshold for XTREME from 20 to 40 so we can get some Observing time in.
Other than that and Sheila's pico-ing on the POP QPDs, it's been a quiet but ! XTREME ! day.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE (•_•) | LVEA | SAFE! | LVEA SAFE!!! | 19:08 |
20:45 | ISS | Siva | Optics Lab | YES | Working Optics lab. | 23:49 |
00:44 | VAC | Janos & Mike | EX,MX, Mech room | N | Checking on Vac system | 02:44 |
TITLE: 02/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 146Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 11mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
For the handoff today, Tony mentioned the change to the threshold for PI Mode24. I also got a rundown from Sheila on alignment/offset items for DRMI due to the pico-ing onto the POP photodiodes (at the next lockloss, I will probably look at DRMI alignment and, if it looks rough (or if ALS has issues), I'll run an Initial Alignment).
Microseism continues to trend down to/below the 50th percentile. Breezes are under 15mph.
J. Kissel, ECR E2500017, IIET Ticket IIET:33143, WP 12302

As a follow-on to LHO:82509, where I installed the infrastructure that down-sampled the new-ish 524 kHz, filtered, pick-off paths of the OMC DCPDs in the same way as the primary GW path, here is the explicit list of channels that are now available. Be forewarned -- unlike the primary path 16 kHz channels, the configuration of the ADC input matrix and filters of the 524 kHz test banks are *not* controlled by the SDF system, and are not a part of the DARM loop, so "we" can do whatever we want with them, even during observing, without any impact on or change in the detector. In other words, you aren't guaranteed that these 16 kHz test channels will have the same signal processing forever. Please either check with me -- or preferably learn how to look at the configuration of EPICS records and filter modules yourself -- so you understand what to expect in the test pick-off channels when you compare them against their primary path counterparts. I can at least promise that when *I* make a change to these channels, it will be explicitly aLOGged, as I have been doing (something like LHO aLOGs 82384 or 82313). But -- I can't encourage this enough -- all EPICS records and filters are saved at all times, always and forever, so being able to look up and reconstruct the configuration from *that* is the end-all-be-all.

For now, the test paths are set up to produce results like in LHO:82512, and I outline the differences between the paths below. All of these channels are calibrated into [mA] on the DCPDs. We can work with you if you need these in different units, but for line hunting and comparative analyses like ASD ratios and transfer functions, the units really shouldn't mean much more to you than what you should label your y-axis. It may be useful to remind yourself of the analog side of these DCPD channels using the diagram from LHO:67644. Namely, below I mention "4CH average of analog voltage" because of the way the voltage from each DCPD is copied in analog and digitized on 4 separate ADC channels with the special "AA" chassis, D2300115, which is a normal AA chassis stripped of all filtering to be instead a pass-through-and-copy chassis.

H1:OMC-DCPD_A_OUT_DQ, H1:OMC-DCPD_B_OUT_DQ
- primary GW path versions of DCPD A and B,
- 16 kHz down-sampled version of the 524 kHz channels, because there's no filtering or gains applied in these 16 kHz banks,
- 4CH average of analog voltage,
- stored in the frames at single precision, but not limited by single-precision noise below ~7 kHz,
- has NO 1 Hz 5th-order elliptic high-pass filter, or else the DARM loop would be terribly unstable and the IFO wouldn't work,
- as of Friday 2025-01-24 has 2x 16 kHz digital AA filters and 1x 65 kHz digital AA filter,
- BLUE traces in LHO:82512.

H1:OMC-DCPD_16K_A1_OUT_DQ, H1:OMC-DCPD_16K_B1_OUT_DQ
- first copy test path of DCPD A and B,
- 16 kHz down-sampled version of the 524 kHz channels, because there's no filtering or gains applied in these 16 kHz banks,
- 4CH average of analog voltage (by setting the H1:OMC-ADC_INMTRX_1_9 and _2_10 elements to 1.0, with all others in the _1 and _2 rows set to 0.0),
- has the 1 Hz 5th-order elliptic high-pass filter ON to minimize the impact of single-precision noise (in the 524 kHz version that's NOT stored in the frames),
- as of Friday 2025-01-24 has NO digital AA filters,
- BROWN traces in LHO:82512.

H1:OMC-DCPD_16K_A2_OUT_DQ, H1:OMC-DCPD_16K_B2_OUT_DQ
- second copy test path of DCPD A and B,
- 16 kHz down-sampled version of the 524 kHz channels, because there's no filtering or gains applied in this 16 kHz bank,
- 4CH average of analog voltage (by setting the H1:OMC-ADC_INMTRX_3_9 and _4_10 elements to 1.0, with all others in the _3 and _4 rows set to 0.0),
- has the 1 Hz 5th-order elliptic high-pass filter ON to minimize the impact of single-precision noise (in the 524 kHz version that's NOT stored in the frames),
- as of Friday 2025-01-24 has 1x 16 kHz digital AA filter and 1x 65 kHz digital AA filter, and thus
- is a replica of how the primary path was set up PRIOR to 2025-01-24 (tho, the primary path never had the 1 Hz high-pass),
- GREEN traces in LHO:82512.

H1:OMC-DCPD_SUM_OUT_DQ, H1:OMC-DCPD_16K_SUM1_OUT_DQ, H1:OMC-DCPD_16K_SUM2_OUT_DQ
- These are the primary path and two copies of the SUM of the DCPDs, "summed" with specific balancing matrix coefficients, i.e. the channels H1:OMC-DCPD_MATRIX_1_1 and _1_2, H1:OMC-DCPD_MATRIX1_1_1 and _1_2, H1:OMC-DCPD_MATRIX2_1_1 and _1_2.
- At the moment, the A and B inputs to the matrix have all the configuration notes of the corresponding A and B channels mentioned above, and there is no filtering or gain applied in the TEST SUM banks, just like in the primary path.
- These channels are directly proportional to DARM_ERR [counts], which is eventually calibrated into H1:GDS-CALIB_STRAIN (and all downstream "cleaned" products). Unfortunately, that coefficient of proportionality changes at the ~1% level every lock stretch, or else I'd quote it explicitly, but it's roughly 2.47e6 [mA/ct] or 4.05e-7 [ct/mA].

H1:OMC-DCPD_NULL_OUT_DQ, H1:OMC-DCPD_16K_NULL1_OUT_DQ, H1:OMC-DCPD_16K_NULL2_OUT_DQ
- Similar to the SUM channels, these are the primary path and two copies of the DIFFERENCE of the DCPDs, "subtracted" with specific balancing matrix coefficients, i.e. the channels H1:OMC-DCPD_MATRIX_2_1 and _2_2, H1:OMC-DCPD_MATRIX1_2_1 and _2_2, H1:OMC-DCPD_MATRIX2_2_1 and _2_2.
- Here as well, at the moment, the A and B inputs to the matrix have all the configuration notes of the corresponding A and B channels mentioned above, and there is no filtering or gain applied in the TEST SUM banks, just like in the primary path.

I have not yet looked at the test-path SUM or NULL channels to understand anything about what's in there other than the obvious -- that they're replicas of the singular SUM and NULL channels that are *extremely* useful in commissioning the IFO. So, I anticipate these will be just as useful, because we can use these test paths to configure two *other* versions of how the DCPD signals are processed.
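For anyone wanting to do the comparative analyses mentioned above (e.g. an ASD ratio between a test path and its primary-path counterpart), a minimal gwpy sketch might look like the following. The channel names are taken from the list above; the time stretch and FFT parameters are placeholders, not part of any recommended configuration:

```python
from gwpy.timeseries import TimeSeriesDict

# Placeholder stretch of time; pick any observing segment of interest.
start, end = 'Feb 8 2025 16:00 UTC', 'Feb 8 2025 16:10 UTC'
chans = ['H1:OMC-DCPD_A_OUT_DQ', 'H1:OMC-DCPD_16K_A1_OUT_DQ']

data = TimeSeriesDict.get(chans, start, end)

# Both channels are calibrated into [mA], so the ratio is dimensionless.
asd_primary = data['H1:OMC-DCPD_A_OUT_DQ'].asd(fftlength=8, overlap=4)
asd_test1 = data['H1:OMC-DCPD_16K_A1_OUT_DQ'].asd(fftlength=8, overlap=4)

# Ratio highlights the configuration differences (e.g. the 1 Hz high-pass, AA filters).
ratio = asd_test1 / asd_primary
```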
The corner RGA (located on output arm between HAM4 and HAM5) lost connection to the control room computer and stopped collecting data around 6pm yesterday (2/6/25). The software gave an error stating "Driver Error: Run could not be stopped".
I could not ping the unit from the terminal, but Erik confirmed the port is still open on the network switch, so it seems to be an issue with the RGA electronics. Other RGAs connected to this computer can still be accessed.
I restarted the software and attempted to reconnect to the RGA, but no luck. I will have to wait until next Tuesday maintenance to troubleshoot. This unit had been collecting data for the past ~6 months without issue. I will perform a hardware reset at the next opportunity to try and bring the unit back online, otherwise we have a new PrismaPro we can replace this unit with during the next vent.
2/18/25
Today, Erik was able to reconfigure the IPs of the RGAs. Able to ping all three RGAs currently connected to network, and corner monitoring scans have resumed. Filaments are turned on for all three RGAs, Corner, HAM6 and EX. Reminder, both HAM6 and EX have 10 l/s pumps on RGA volume.
Starting on Feb 5th, we've had more ring-ups of MODE24. This doesn't seem related to the move of PR3: although it has been worse today, it started before the move.
We have been taken out of observing by extreme damping 15 times in the last 3 hours. So for right now I've increased the threshold from 20 to 40 to go to extreme damping. We will see if this avoids the need to go out, or just delays it by a few seconds.
Sheila, Matt, Mayank
Because of yesterday's PR2 move, the beam was very off-center on the ASC POP diodes. I believe that this was the cause of last night's locking difficulties 82678, rather than the arm alignment references, which should not have been affected (and weren't very different after being reset). This QPD is used in DRMI ASC, and the yaw loop was pushing in the wrong direction so that the beam was falling off the QPD. Turning off this loop and not PR2 can lead to some unusual alignment that might cause difficulty locking, which could be avoided by moving CHARD, even though the real problem was in the PRC.
We were dropped out of observing because of the extreme PI damping (more on that later), so we took the chance to pico on these QPDs. We used the poorly named ndscope template /sheila.dwyer/ndscope/ASC/Pico_pop_wfs.yaml (similar to 81849)
The strategy that worked for us was to use motor 5 (ASC POP steering 1) to center QPD B and motor 6 (ASC POP steering 2) to center QPD A, but we had to walk very far to get both beams on the QPDs at the same time; we ended up moving by about 17000 counts on both picos in yaw (in the attached screenshot you can see we picked the wrong strategy first).
We ended up with both QPDs centered around 0, and the sums are about 3% higher after our pico'ing than before. We reset the offsets to 0, and accepted this in both safe.snap and observe.snap. We have set PRC1 + PRC2 ASC to off in the DRMI guardian, because we need to set these offsets at 2W with all the ASC on.
Request for next time we lock: DRMI may not be well aligned because the PRC1+2 loops are off, but if initial alignment gets run that should fix it. It could probably also be fixed by running PRMI ASC, then locking DRMI. Next time we lock, we can trend what the POP_A QPD pitch and yaw are in the guardian state PREP_DC_READOUT and set the offsets to -1 * this number. Then we should edit lines 1030 and 1031 in ISC_DRMI to turn back on PRC1 and PRC2 ASC loops.
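As a sketch of that trending step (the channel names here are my guess at the POP_A QPD error-point channels and should be checked against the ASC model; the averaging time is arbitrary), something like the following cdsutils snippet could be run while ISC_LOCK sits in PREP_DC_READOUT to get the numbers for the new offsets:

```python
import cdsutils

# Average the POP_A QPD pitch/yaw for 30 s while in PREP_DC_READOUT.
# Channel names are hypothetical -- confirm before using.
avg_pit = cdsutils.avg(30, 'H1:ASC-POP_A_PIT_OUT16')
avg_yaw = cdsutils.avg(30, 'H1:ASC-POP_A_YAW_OUT16')

# Per the note above, the new offsets are -1 * the measured averages.
print('Suggested POP_A offsets: PIT = %.3f, YAW = %.3f' % (-avg_pit, -avg_yaw))
```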
Note for Detchar: Keita had pointed out that fixing this clipping on the QPDs could possibly fix our large glitches that looked like a scattered-light problem, which was apparent in range fluctuations in this morning's lock. We don't have much data yet since going back to observing; see the attached spectrogram.
Fri Feb 07 10:07:26 2025 INFO: Fill completed in 7min 22secs
Jordan confirmed a good fill curbside. TCmins [-68C, -34C] OAT (0C, 32F) DeltaTempTime 10:07:35
18:30 UTC H1 dropped from Observing due to a PI ring-up that caused the SUS_PI Guardian to invoke XTREME_PI_DAMPING.
The SUS_PI guardian did great work and Damped the PI 24 down in 2 minutes.
H1 returned back to Observing at 18:32 UTC.
This happened again at 18:36 UTC and we got back to Observing at 18:37 UTC
This happened again at 18:42 UTC and we returned to Observing at 18:44 UTC after once again XTREME_PI_DAMPING.
Another PI24 ring-up took us into Commissioning at 18:57 UTC.
This time we will stay in Commissioning for a few minutes so Sheila can pico the ASC POP QPDs.
I've also taken SQZ MAN to NO_SQUEEZING[7] and switched back to FREQ_DEP_SQZ[100] at 19:28 UTC.
Sheila & Matt are still Pico-ing.
TITLE: 02/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY:
H1 has been locked for over three and a half hours. We have a tall Violin.
SQZ
At 15:37 UTC we dropped from Observing because the SQZ subsystem dropped from FREQ_DEP_SQZ[100] to LOCK_OPO[28].
I opened up the SQZ Overview and saw that the Filter Cavity Guardian was working its way through the states without any issues, so I did not intervene.
We were back in Observing by 15:43 UTC.
TLDR: H1 would not lock due to alignment changes imposed earlier in the day, so after bringing alignment back, green references were updated and H1 returned to observing at 12:01 UTC. For OPS - An initial alignment MUST be run after the next lockloss due to updated green references.
H1 called for assistance at 07:48 UTC as it could not relock on its own (still trying to recover from Corey's shift). I discovered that H1 had run an initial alignment automatically, but every time it would lock DRMI, the ASC (specifically PRC1_Y) would pull alignment the wrong way. I first tried simply turning off the PRC1_Y loop, which worked to get through DRMI ASC, but then there would be a lockloss somewhere before or during ENGAGE_ASC. One of these times at ENGAGE_ASC, I paused in the state previous and went through ISC_LOCK line-by-line to turn on each ASC loop one at a time, but I eventually lost lock while turning on CHARD_Y. At this point, I reached out to Jenne for assistance, and she reminded me of the move_ARM_dev.py script in userapps/asc/ which would allow me to converge CHARD_Y before engaging the loop, which went well. I was then able to successfully go through every step of ENGAGE_ASC and continue locking.
While this was going on, Jenne speculated that the green alignment references had not been updated after the alignment changes during the commissioning period today, which would explain why alignment looked so bad following the initial alignment run earlier in the night. So, while waiting in PREP_DC_READOUT, I opened the ALS beam shutters to see if the arm would still lock in green to update the alignment references. Surprisingly, the alignments looked great and quickly locked on the TEM00 mode, so I ran the setEndGreenQPDOffsets_{X,Y}ARM.py scripts in userapps/als/ for about 5 minutes to set the QPD offsets, then updated the ITM camera offsets by setting them to be the average of the camera P/Y positions. All of these values have been accepted in both the SAFE and OBSERVE SDF tables (screenshots attached). Since the green references have been updated, Jenne encourages there to be an initial alignment run after the next lockloss to ensure the references are good. After all that, I re-shuttered the ALS beams and H1 continued locking without issue all the way to observing (sadly just before S250207bg).
See comments in 82683. We will leave these arm references in, but I don't think that was really the issue last night; it was mostly that the beam was falling off the POP_A QPD so that PRC1 ASC was not working, as Ryan noted.
Matt Todd, Jennie Wright, Sheila Dwyer
Today we lost lock right before the commissioning window, and so we made another effort at moving the spot on PR2 out of lock, correcting some mistakes made previously. Here's an outline of steps to take:
When relocking:
Today, we did not pico on these QPDs, but we need to. We will plan to do that Monday or Tuesday (next time we relock), and then we will need to update the offsets.
Today, I also forgot to revert the change to ISC_DRMI before we went to observing. So, I've now edited it to turn back on the PRC1 + PRC2 loops, but someone will need to load ISC_DRMI at the next opportunity.
We need to add one more step to this procedure: pico on the POP QPDs 82683
For more context, here's a brief history of where our spot has been:
Time period | PR3 yaw slider (urad) | PR2 Y2L coefficient | Spot position on PR2 [mm] (on +Y side of optic) |
---|---|---|---|
July 2018 until July 2024, except for a few days | 150 | -7.4 | 14.9 |
July 2024 - Feb 6 2025 | 100 | -6.25 | 12.588 |
May 21st 2024, and Feb 6th 2025 | -74 | -3 | 6 |
Today, we have some extra nonstationary noise between 20-50 Hz, which we hoped would be fixed by pico'ing on the POP QPDs but it hasn't been fixed, as you can see from the range and rayleigh statistic in the attachments.
Back in May 2024, we had an unrelated squeezer problem that caused some confusion: 78033. We were in this alignment from 5/20/24 at 19 UTC to 22:42 UTC on 5/23/24. We did not see this large glitchy behavior at that time, and there was a stretch of time when the range was 160, although there were also times when the range was lower.
[M. Todd, C. Compton, G. Vajente, S. Dwyer]
To understand the effect of the Relative Intensity Noise (RIN) of the CO2 laser (Access 5W L5L) proposed for CHETA on the DARM loop, we've done a brief study to check whether the addition of the RIN as displacement noise in deltaL will cause saturation at several key points in the DARM loop, such as the ESD driver and DCPDs. The estimates we've made of the RIN at these points are calibrated with the DARM model in pydarm, which models the DARM loop during Nominal Low Noise; however, appropriate checks have been made that these estimates are accurate, or at least overestimate the effects, during the lower-power stages (when the CHETA laser will be on).
This estimate is done by propagating displacement noise in deltaL (how CHETA RIN is modeled, m/rtHz) to counts RMS of the ESD DAC. The RMS value of this should stay below 25% or so of the saturation level of the DAC, which is 2**19. To do this, we multiply the loop suppressed CHETA RIN (calibrated into DARM) by the transfer functions mapping deltaL to ESD counts (all are calculated at NLN using pydarm).
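As a rough illustration of that bookkeeping (not the actual analysis code), here is a minimal numpy sketch; the frequency vector, the loop-suppressed CHETA RIN ASD, and the deltaL-to-ESD-counts transfer function below are dummy placeholders standing in for the pydarm products:

```python
import numpy as np

def asd_rms(freqs, asd):
    """RMS obtained by integrating an ASD (units/rtHz) over frequency."""
    return np.sqrt(np.trapz(asd**2, freqs))

# Placeholder stand-ins for the pydarm-derived products described above; in the
# real analysis these come from the DARM loop model and the CHETA RIN estimate.
freqs = np.logspace(1, 3, 1000)                    # 10 Hz - 1 kHz
cheta_rin_darm_asd = 1e-19 * np.ones_like(freqs)   # loop-suppressed RIN in DARM [m/rtHz] (dummy)
tf_deltaL_to_esd_cts = 1e20 / freqs**2             # deltaL [m] -> ESD DAC counts (dummy shape)

esd_cts_asd = cheta_rin_darm_asd * np.abs(tf_deltaL_to_esd_cts)   # [cts/rtHz]
esd_rms = asd_rms(freqs, esd_cts_asd)

dac_saturation = 2**19   # ESD DAC range in counts
print('ESD counts RMS is %.3f%% of saturation' % (100 * esd_rms / dac_saturation))
```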
The CHETA RIN in ESD cts RMS is 0.161% of the saturation level, and in L2 coil cts RMS is 1.098%, and in L3 coil cts RMS is 0.015%. It is worth noting that the CHETA RIN RMS at these points is around 10x higher than that which we expect with just DARM during NLN.
We also checked to make sure that the ESD cts RMS during power-up states is not higher than that during NLN, meaning the calibration using NLN values gives us a worst case scenario of the CHETA RIN impact on ESD cts RMS.
List of Figures:
1) Loop Model Diagram with labeled nodes
2) CHETA RIN in ESD cts RMS
3) CHETA RIN in L2coil cts RMS
4) CHETA RIN in L1coil cts RMS
5) DARM Open Loop Gain - pydarm
6) DARM Sensing Function - pydarm
7) DARM Control Function (Digitals) - pydarm
8) Transfer Function: L3DAC / DARM_CTRL - pydarm
9) Transfer Function: L2DAC / DARM_CTRL - pydarm
10) Transfer Function: L1DAC / DARM_CTRL - pydarm
11) ASD/RMS ESD cts during power-up states - diaggui H1:SUS-ETMX-L3_MASTER_OUT_UL_DQ
12) CHETA RIN ASD (raw)
This estimate is done by propagating displacement noise in deltaL (how CHETA RIN is modeled, m/rtHz) to counts RMS of the DCPD ADC. The RMS value of this should stay below 25% or so of the saturation level of the ADC, which is 2**15. To do this, we multiply the loop-suppressed CHETA RIN (calibrated into DARM) by the transfer functions mapping deltaL to DCPD ADC counts, using the filters in the Foton files. This gives us the whitened ADC counts, so by multiplying by the anti-whitening filter we get the unwhitened DCPD ADC cts RMS, which is what is at risk of saturation.
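Continuing the sketch above for the DCPD path (same caveat: placeholder arrays only, reusing freqs, cheta_rin_darm_asd, and asd_rms from the earlier snippet), the only extra step is the whitening/anti-whitening bookkeeping just described:

```python
# Placeholder stand-ins, as before (dummy shapes only):
tf_deltaL_to_dcpd_adc_cts = 1e18 / freqs**2    # deltaL [m] -> whitened DCPD ADC counts
antiwhitening_response = np.ones_like(freqs)   # response that undoes the whitening

# Unwhitened DCPD ADC counts ASD, per the paragraph above.
adc_cts_asd = (cheta_rin_darm_asd * np.abs(tf_deltaL_to_dcpd_adc_cts)
               * np.abs(antiwhitening_response))        # [cts/rtHz]
adc_rms = asd_rms(freqs, adc_cts_asd)

adc_saturation = 2**15   # DCPD ADC range in counts
print('DCPD ADC counts RMS is %.3f%% of saturation' % (100 * adc_rms / adc_saturation))
```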
The CHETA RIN in DCPD cts RMS is 3.651% of the saturation level. Again, it is worth noting that the CHETA RIN RMS at this point is around 10x higher than that which we expect with just DARM during NLN.
We also checked to make sure that the DCPD-A ADC channel is coherent with DARM_ERR. In short, it is up to 300Hz, where controls noise dominates our signal -- after 300Hz shot noise becomes the dominant noise source and reduces our coherence.
List of Figures:
1) Loop Model Diagram with labeled nodes
2) CHETA RIN in DCPD ADC cts RMS
3) Transfer Function: DCPD-ADC / DELTAL_CTRL
4) Coherence: DCPD-A / DARM_ERR
Calibrating CHETA RIN to ESD cts RMS
Calibrating CHETA RIN to DCPD ADC cts RMS
Previous related alogs:
1) alog 82456
Is the propagation of RIN into displacement consistent with the photothermal calculations done by Braginsky and Cerdonio? One can use Eq. 8 of Braginsky (1999) except with the replacement of the absorbed shot noise power 2 hbar omega_0 Wabs with the absorbed classical laser power. Then using
alpha = 0.6 ppm/K
sigma = 0.17
rho = 2200 kg/m^3
C = 700 J/(kg K)
r0 = 53 mm / sqrt(2)
I find sqrt(Sxx) = 1.6e-18 m/rtHz as the displacement from a single test mass assuming a CHETA RIN of 1e-5/rtHz and an absorbed power of 1 W.
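For reference, here is a short numerical check of that number. It assumes the adiabatic-limit form sqrt(Sx) = alpha*(1+sigma)/(pi*rho*C*r0^2) * sqrt(S_Wabs)/omega with sqrt(S_Wabs) = RIN * W_abs, and assumes an evaluation frequency of 100 Hz (not stated above); with those assumptions it reproduces ~1.6e-18 m/rtHz:

```python
import numpy as np

# Parameters from the comment above
alpha = 0.6e-6              # 1/K, thermal expansion coefficient
sigma = 0.17                # Poisson ratio
rho = 2200.0                # kg/m^3
C = 700.0                   # J/(kg K)
r0 = 53e-3 / np.sqrt(2)     # m, beam radius convention used above

# Assumed inputs (labeled assumptions):
rin = 1e-5                  # 1/rtHz, CHETA RIN
W_abs = 1.0                 # W, absorbed power
omega = 2 * np.pi * 100.0   # rad/s, assumed 100 Hz evaluation frequency

sqrt_Sx = alpha * (1 + sigma) / (np.pi * rho * C * r0**2) * rin * W_abs / omega
print('sqrt(Sxx) = %.2g m/rtHz' % sqrt_Sx)   # ~1.6e-18
```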
[M. Todd, E. Hall]
Indeed, the propagation of RIN into DARM laid out in T050064 is consistent with the work done by Braginsky and Cerdonio. The calibration follows the form in Figure 1.
Attached is a comparison plot of the two propagations, using the parameters set above in Evan's comment.
Updating this post with some busier plots that show how other CO2 laser noise is projected into the various stages, as well as adding flat RIN curve propagations to give an intuition as to which RINs we do not even need to worry about in NLN.
I've also reattached the codes used because of a correction to the way the ASD integration was being done.
The plots also extend to lower frequency to show the behavior of the RIN propagation to each channel (mostly falling off below 10 Hz). This is why we take the "RMS" value to be the integrated value of the ASD at 10 Hz, and compare that to the saturation limit. It also gives a better display of the RMS from DARM in NLN as propagated to the above channels, showing that overall the RIN should have a small effect on these drives and ADCs.
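As a small illustration of that convention (again with placeholder arrays, not the attached code), the "RMS at 10 Hz" is the cumulative RMS integrated downward from the top of the band and read off at 10 Hz:

```python
import numpy as np

# Placeholder curve standing in for any of the propagated-noise ASDs above.
freqs = np.logspace(0, 3, 2000)   # 1 Hz - 1 kHz
asd = 1.0 / freqs                 # dummy 1/f shape [cts/rtHz]

# Cumulative RMS: integrate the PSD from high frequency down to each frequency.
df = np.gradient(freqs)
cum_rms = np.sqrt(np.cumsum((asd**2 * df)[::-1])[::-1])
rms_at_10hz = np.interp(10.0, freqs, cum_rms)
print('RMS above 10 Hz: %.3g' % rms_at_10hz)
```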