TITLE: 02/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
Quiet shift for the most part until the waning minutes, when there was a lockloss. No obvious reason for the lockloss, but PI24 rang up a couple of minutes earlier, and there was an earthquake on its way in, though it looks small (the LOCKLOSS tool is still analyzing).
Before the lockloss, I contemplated going out of observing for possible SQZ attention (due to the lowish H1 range), but the range kept hovering back up, so I ultimately held off.
Similar to last night, have had hours of snow flurries, but only a dusting is sticking thus far.
LOG:
Tony Sanchez mounted a Cisco PoE switch in the MSR for initial setup and testing. Together we got the switch configured and hooked up to the core switch. This switch is named sw-lvea-aux1. It will move into the CER racks and will act as a second camera switch. Today we moved a test camera onto it and will be using this to continue our evaluation of h1digivideo4.
TITLE: 02/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: The new settings I found yesterday for ITMY modes 5/6 are still damping well today, so I've added them to lscparams; VIOLIN_DAMPING needs to be reloaded. The range and SQZer look to have been slightly degrading over the past 2 hours.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
17:16 | SAFETY | LASER SAFE (•_•) | LVEA | SAFE! | LVEA SAFE!!! | 19:08 |
16:43 | FAC | Eric | FCES | N | Temperature investigation | 17:45 |
17:05 | FAC | Kim | MY | N | Tech clean | 17:41 |
17:41 | FAC | Kim | MX | N | Tech clean | 18:38 |
19:06 | FAC | Kim | H2 | N | Tech clean | 19:22 |
19:25 | FAC | Ken | EndX | N | Grab wire cart | 19:48 |
19:51 | ALS | Sheila, Matt | LVEA | LOCAL | Adjust ALS beatnote ISCT1 | 20:07 |
22:51 | SQZ | Mayank | Optics lab | N | | Ongoing |
TITLE: 02/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Ryan C
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 10mph 3min avg
Primary useism: 0.06 μm/s
Secondary useism: 0.18 μm/s
QUICK SUMMARY:
Got the hand-off from RyanC (who had a nice GW candidate in the afternoon!), and on the To Do list is loading the ISC_DRMI guardian.
Currently, I have noticed the range drifting down. RyanC looked at the SQZ ndscope and noticed the SQZ BLRMS have been drifting up along with this range drop. I'm looking at the SQZ wiki for what to do when SQZ looks bad, so I'm ready to address it soon.
The ITMY mode 5 violin continues to ring down, and as I type, RyanC saved the settings which have been damping this mode down for the last 2 days.
Environmentally, it's chilly out (most of our snow melted), breezes are below 20 mph, and microseism continues to drift down, starting to touch the 50th percentile.
Added in temperatures for the VEAs for FCES, EX, and EY.
New channel prefixes are:
H1:CDS-FMCS_STAT_ZONEFCES
H1:CDS-FMCS_STAT_ZONEEXA
H1:CDS-FMCS_STAT_ZONEEXB
H1:CDS-FMCS_STAT_ZONEEXC
H1:CDS-FMCS_STAT_ZONEEXD
H1:CDS-FMCS_STAT_ZONEEYA
H1:CDS-FMCS_STAT_ZONEEYB
H1:CDS-FMCS_STAT_ZONEEYC
H1:CDS-FMCS_STAT_ZONEEYD
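These can be read with any EPICS client. A minimal sketch, assuming pyepics is installed and the site EPICS network is reachable; note that only the channel *prefixes* are given above, so the "_TEMP" suffix below is a hypothetical placeholder:

    # Read the new FMCS zone-temperature channels (sketch).
    from epics import caget

    prefixes = [
        "H1:CDS-FMCS_STAT_ZONEFCES",
        "H1:CDS-FMCS_STAT_ZONEEXA",
        "H1:CDS-FMCS_STAT_ZONEEYA",
    ]
    for prefix in prefixes:
        # "_TEMP" is a hypothetical suffix -- substitute the real field name
        print(prefix, caget(prefix + "_TEMP"))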
Matt Todd, Jennie Wright, Sheila Dwyer
Today we lost lock right before the commissioning window, so we made another effort at moving the spot on PR2 while out of lock, correcting some mistakes made previously. Here's an outline of steps to take:
When relocking:
Today, we did not pico on these QPDs, but we need to. We will plan to do that Monday or Tuesday (next time we relock), and then we will need to update the offsets.
Today, I also forgot to revert the change to ISC_DRMI before we went to observing. I've now edited it to turn the PRC1 + PRC2 loops back on, but someone will need to load ISC_DRMI at the next opportunity.
We need to add one more step to this procedure: pico on the POP QPDs (82683).
For more context, here's a brief history of where our spot has been:
Dates | PR3 yaw slider (urad) | PR2 Y2L coefficient | Spot position on PR2 [mm] (on +Y side of optic) |
---|---|---|---|
July 2018 until July 2024 (except for a few days) | 150 | -7.4 | 14.9 |
July 2024 - Feb 6 2025 | 100 | -6.25 | 12.588 |
May 21st 2024, and Feb 6th 2025 | -74 | -3 | 6 |
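One internal consistency check on the table above (my own observation, not part of the original entry): the spot position column is about -2.0 times the Y2L coefficient in every row, so either column can be sanity-checked from the other:

    # Sanity check: PR2 spot position [mm] vs. Y2L coefficient (table values).
    rows = [
        ("Jul 2018 - Jul 2024", -7.4, 14.9),
        ("Jul 2024 - Feb 6 2025", -6.25, 12.588),
        ("May 21 2024 / Feb 6 2025", -3.0, 6.0),
    ]
    for dates, y2l, spot_mm in rows:
        print(f"{dates}: spot/Y2L = {spot_mm / y2l:+.4f}")  # ~ -2.01 in each row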
Today, we have some extra nonstationary noise between 20 and 50 Hz, which we hoped would be fixed by pico'ing on the POP QPDs, but it hasn't been fixed, as you can see from the range and Rayleigh statistic in the attachments.
Back in May 2024, we had an unrelated squeezer problem that caused some confusion: 78033. We were in this alignment from 5/20/24 at 19 UTC until 5/23/24 at 15 UTC. We did not see this large glitchy behavior at that time, and there was a stretch of time when the range was 160 Mpc, although there were also times when the range was lower.
J. Kissel, at the prodding of S. Dwyer, A. Effler, D. Sigg, B. Weaver, and P. Fritschel

Context
The calibration of the DC alignment range / position of the ITMY CP, aka CPy, has been called into question recently under the microscope of "how misaligned is the ITMX CP, and do we have the actuation range to realign it?", given that it's been identified as a cause of excess scattered light (see e.g. LHO:82252 and LHO:82396).

What's in question / What Metrics Are Valid to Compare
Some work has been done in LHO:77557 to identify that we think CPy is misaligned "down," i.e. in positive pitch, by 0.55 [mrad] = 550 [urad]. Peter reminds folks in LHO:77587 that the DC range of the top mass actuators should be 440 [urad], and estimates that this drives ~45 [mA] through the coils, pointing to my calibration of the coil current readbacks from LHO:77545. But in that same LHO:77587, he calls out that the slider and OSEM calibrations into [urad] disagree by a factor of 1130 / 440 = 2.5x. #YUCK1. And Daniel points out that there's a factor of 2x error in my interpretation of the coil current calibration from LHO:77545. #YUCK2.

Note -- there's conversation about the optical lever readback disagreeing with these metrics as well, but the optical lever looks at the HR surface of the main chain test mass, so it's a false comparison to suggest that this is also "wrong." Yes, technically the optical lever beam hits and reflects off some portion of all surfaces of the QUAD, but by the time these spots all hit the optical lever QPD, they're sufficiently spatially separated that we have to choose one, and the install team works hard to make sure that they've directed the reflection off of the test mass HR surface onto the QPD and no other reflection. That being said, the fact that these optical lever readings of the test mass have been identified to be wrong in the past as well (see LHO:63833 for ITMX and LHO:43816 for ITMY) doesn't help the human sort out which wrong metrics are the valid ones to complain about in this context. #FACEPALM

So, yes, there are a lot of confusing metrics around here, and all the ones we *should* be comparing disagree -- seemingly by large factors of 2x to 4x. So let's try to sort out the #YUCKs.

Comparing the big picture of all the things that "should" be the same: #YUCK1
In our modeling and calibrating, we assume:
(1) All ITM reaction chains have the same dynamical response (in rotation, for the on-diagonal terms, that's in units of [rad]/[N.m]).
(2) All OSEM sensors on all suspensions have been normalized to have the same "ideal" calibration from ADC counts to [um].
(3) All mechanical arrangements of OSEMs are the same, so we can use the same lever arm to convert an individually sensed OSEM [um] into a rotation in [urad], and, vice versa, a requested [N.m] drive in the EULER basis creates the same force [N] at each OSEM coil.
(4) All OSEM top-mass actuator chains are the same (with 18 or 20 bit DACs, QTOP coil drivers, and 10x10 magnets), so the same DAC counts produce the same force at the OSEM's location.
In order to check "are the same sensors / actuators reporting different values for (ideally) the same mechanical system," I used our library of historical data; in yaw the chains agree well (see allquads_2025-02-06_AllITMR0Chains_Comparison_R0_Y-Y_TF_zoomed.pdf). However, for pitch, we do see a good bit of difference (allquads_2025-02-06_AllITMR0Chains_Comparison_R0_P-P_TF_zoomed.pdf).
Of course, we're used to looking at these plots over many orders of magnitude and calling what we see "good enough" once the resonances are all in the right place. But if I actually call out the DC magnitude of the transfer functions in the comparisons, you do see several factors of two, and differences between our four instantiations of the same suspension:

ITM R0 P2P TF DC magnitude (Model: 0.184782):

Suspension | Meas | Model/Meas | (approx) | L1ITMY / Others |
---|---|---|---|---|
L1 ITMX | 0.0675491 | 2.7355 | ~3x | 1.4028 |
L1 ITMY | 0.0939456 | 1.9501 | ~2x | 1 |
H1 ITMX | 0.0517654 | 3.5696 | ~3.5x | 1.8305 |
H1 ITMY | 0.0947546 | 1.9501 | ~2x | ~1 |
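For anyone re-deriving the Model/Meas column, a minimal sketch using the DC magnitudes quoted above (small differences from the table's rounded entries are possible):

    # Re-derive Model/Meas from the DC magnitudes in the table above.
    model_dc = 0.184782
    meas_dc = {
        "L1 ITMX": 0.0675491,
        "L1 ITMY": 0.0939456,
        "H1 ITMX": 0.0517654,
        "H1 ITMY": 0.0947546,
    }
    for sus, dc in meas_dc.items():
        # factors of ~2x to ~3.5x between model and measurement
        print(f"{sus}: Model/Meas = {model_dc / dc:.4f}")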
So, there is definitely something different about these -- ideally identical -- suspensions. I think it's an amazing testament to the install teams that both L1's and H1's ITMYs have virtually identical DC magnitudes (and AC transfer functions). Of course, "ideal," in terms of mechanics, is muddled by the cables that are laced thru the UIM / PUM / TST stages -- we've seen (from LONG ago) that specific (most) arrangements of the cables can stiffen the reaction chain, and setting the cables in such a way that they do *not* influence the pitch dynamics is hard -- see LHO:1769, LHO:2085, and LHO:2117. I attach the R0 P2P plot from LHO:2117 that shows how much influence the cabling *can* have. I had the impression that this was only an impact on the "3rd" mode of the transfer function, but when you actually look at it with these "factors of two at DC" in mind, the data clearly shows cabling impact on the DC stiffness as well, and again factors of two are possible with different cable arrangements.

So, in PITCH, when we request the actuators to push these suspensions at DC, we may get a different answer at the optic, i.e. the compensation plate, or CP. This may be some source of the disagreement between the OSEM *sensors* and the requested drive from the OSEM coil sliders.

Resolving how much current is being driven through the coils, as reported by the FASTIMON or RMSIMON channels: #YUCK2

(A) At H1, I can confirm that all the QUADs' top masses, both main chain and reaction chain, are using QTOP coil drivers, as designed, with no modifications -- see the e-Travelers within the "Quad Top Coil Drivers" serial numbers listed as related to H1 SUS C5 (S1301872).

(B) I was about to make the same claim of L1, but in doing the due diligence with L1 SUS C5 (S1105375), I see that the S1000369 Quad Top Driver was modified to give more drive strength on ITMY R0 F1, F2, F3 -- the pitch and yaw coils -- and there's no follow-up record suggesting it was reverted. The work permit from Stuart Aston mentioned in LLO:28375 indicates a request to increase the strength by 25%. The action is also documented by Carl Adams and Michael Laxen in LHO:28301. It would be helpful to confirm whether this mod is still in place, and if not, the e-Traveler should be updated with a record of the reverting. I'm guessing the mod is still in place, because there's mention of the serial number that was originally there being swapped in elsewhere in 2019 -- see LLO:46238.

(C) That being said, I can at least state confidently that all QUAD TOP coil drivers in play are using the same, original noise monitor circuit, D070480.

(D) Looking back at all the content on the DCC page, Daniel's right about my mis-calibration of the coil driver current monitor from LHO:77545. This darn monitor circuit will be the death of me.

The error comes from a misunderstanding of how the single-ended output of the current monitor circuit is piped into our differential ADCs -- namely the line '"single-ended voltage piped into only one leg of differential ADC" factor of two' -- because of which I added the factor of 2 [V_DF] / 1 [V_SE] to the calibration. If you look at the board's DB25 output J1 in the interconnect drawing, you can see that the "F" (for FASTIMON) and "S" (for slow RMSIMON) single-ended voltages are piped into the output DB25's positive pins, and the negative pins are connected to 0 V. This is a bit unusual, because typical LIGO differential ADC driver circuits copy and invert the single-ended voltage, piping the original single-ended voltage to the positive leg and the inverted copy to the negative leg, such that V_SE = V_{D+} = -V_{D-}.

Comparing these two configurations:
(i) piping a single-ended voltage into only one leg, and 0 V into the other, yields
    (V_{SE} - V_{REF}) - (0 - V_{REF}) = V_SE
(ii) copying and inverting the single-ended voltage yields
    (V_{SE+} - V_{REF}) - (V_{SE-} - V_{REF}) = (V_{SE} - V_{REF}) - (-V_{SE} - V_{REF}) = 2 V_SE

So, I'd used the (ii) configuration's calibration rather than (i), which is the case for the current monitors (and everything on that noise monitor board). The corrected RMSIMON calibration is thus

    calibration_QTOP [ct/A] = 2 * 40.00 [V/A] * (10e3 / 30e3) * 1 * (2^16 / 40 [ct/V])
                            = 4.3691e+04 [ct/A], or 43.691 [ct/mA], or 0.0229 [mA/ct]

Taking the values Peter shows in the F1 RMSIMON in his ndscope session in LHO:77587:

Slider [urad] | RMSIMON [ct] | RMSIMON [mA] |
---|---|---|
440 | 4022.74 | 92.0732 |
0 | 113.972 | 2.6086 |
Delta | 3908.77 | 89.4646 |

So, we're already driving a lot of coil current through the BOSEMs, if this calibration doesn't have any more flaws in it. I'd also like to super-confirm with LLO that they've still got 25% more range on their ITMY QUAD top coil driver, 'cause if they're consistently using any substantial amount of the supposed range, then they've been holding these BOSEMs at larger than 100 [mA] for a long time, which goes against Dennis' old modeled requirement (see LLO:13456). I'll follow up next Tuesday with some cold, hard measurements to back up the model of the coil driver and its current monitor.
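To make the corrected counts-to-current conversion easy to reuse, a minimal sketch that reproduces the numbers above (all constants are taken from this entry):

    # Corrected QTOP RMSIMON calibration, applied to the LHO:77587 values.
    ct_per_A = 2 * 40.00 * (10e3 / 30e3) * 1 * (2**16 / 40)  # = 4.3691e+04 [ct/A]
    ct_per_mA = ct_per_A / 1e3                               # = 43.691 [ct/mA]

    for slider_urad, rmsimon_ct in [(440, 4022.74), (0, 113.972)]:
        print(f"slider {slider_urad:>3} [urad]: {rmsimon_ct / ct_per_mA:.4f} [mA]")
    # -> 92.0732 and 2.6086 [mA], i.e. a delta of ~89.46 [mA], as in the table.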
Closes FAMIS#26029, last checked 82536
Compared to last week:
All corner station (except HAM8) ST1 H2 elevated at 5.8 Hz
All corner station (except HAM8) ST1 elevated in all sensors at 3.5 Hz
HAM5/6 H3 elevated at 8 Hz
ITMX ST2 H3 elevated between 5.5 and 8.5 Hz
ITMY ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
BS ST2 H1/H2/H3 elevated between 5.5 and 8 Hz
Thu Feb 06 10:05:35 2025 INFO: Fill completed in 5min 32secs
TCmins [-62C, -44C] OAT (-2C, 28F) DeltaTempTime 10:05:47
Overnight, the electric reheat in the ducting of the VEA continued heating the space even though the automation system was commanding the heat off. The space rose several degrees above setpoint, which made relocking the IFO infeasible. I checked the control circuit of the heater and didn't find any obvious problems, and once I re-energized the control circuit the heat remained off, making it difficult to find the cause of the problem. I watched the heater cycle normally per automation command, so for the time being it is working correctly. I will monitor it throughout the day.
Planned Saturday Calibration sweep done using the usual wiki.
Simulines start
PST: 2025-02-06 08:36:47.419400 PST
UTC: 2025-02-06 16:36:47.419400 UTC
GPS: 1422895025.419400
Simulines stop; we lost lock in the middle of the measurement.
PST: 2025-02-06 08:58:38.495605 PST
UTC: 2025-02-06 16:58:38.495605 UTC
GPS: 1422896336.495605
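For reference, a minimal sketch of the UTC-to-GPS bookkeeping for these stamps, assuming gwpy is available (its time module handles the leap-second accounting):

    # Cross-check the UTC <-> GPS timestamps above (sketch, assumes gwpy).
    from gwpy.time import to_gps, from_gps

    print(to_gps("2025-02-06 16:36:47.419400"))   # ~ 1422895025.419400
    print(from_gps(1422896336.495605))            # ~ 2025-02-06 16:58:38 UTC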
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 162Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
H1 Manager contacted me because we couldn't get into squeezing. The filter cavity was having trouble locking even in green; it was locking on the wrong modes (02 and 01), with FC_TRANS_C_LF_OUTPUT reading below 100. Referencing 72084, I paused the FC guardian, closed the servo for SQZ green, and checked the FC2 driftmon. FC2 had drifted A LOT over the past hours (ndscope). I tried moving the sliders until we were back at the location from 4 hours ago, when our squeezing was good, but this wasn't enough to get us back to a 00 mode, and moving the sliders around in other directions didn't get me closer either. I then checked the SQZ troubleshooting wiki and plotted the M3 witness channels along with the driftmons for both FC1 and FC2. I adjusted the sliders according to the witness channels for FC1 and we got back to a 00 mode! I then fiddled back and forth between the driftmon channels and the witness channels to maximize FC_TRANS_C_LF_OUTPUT until we looked good, then I unpaused the FC node and we were able to enter FDS and then Observing!
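For context, a heavily hedged sketch of the kind of comparison involved; the channel names below are hypothetical placeholders (the entry only refers to the FC2 "driftmon" and M3 witness channels generically), and the actual moves were made by hand on the alignment sliders:

    # Compare FC2 alignment readbacks against known-good reference values
    # (e.g. from ~4 hours earlier, when squeezing was good). Sketch only;
    # channel names are hypothetical, not verified site channels.
    from epics import caget

    good = {
        "H1:SUS-FC2_M1_DRIFTMON_P": 12.3,   # hypothetical reference values
        "H1:SUS-FC2_M1_DRIFTMON_Y": -4.5,
    }
    for chan, ref in good.items():
        now = caget(chan)
        print(f"{chan}: {now:+.2f} now, {ref:+.2f} good, delta {ref - now:+.2f}")
    # Then walk the sliders toward the reference while watching
    # FC_TRANS_C_LF_OUTPUT recover.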
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:
To Do List: Hit LOAD for VIOLIN_DAMPING guardian at next lockloss and use settings for IY5 that RyanC mentions in his summary.
Overall, ITMY modes 5/6 are slowly being damped down with Ryan's settings, and I have been monitoring to see that they continue to damp. Hopefully H1 can stay locked overnight to damp these modes down further.
Had about 60-90 min of snow falling and sticking tonight (only about 0.5" or less).
LOG:
For FAMIS #26359: All looks well for the last week for all site HVAC fans (see attached trends).
3-month trend for the SUS HWWDs.
Over this 3-month stretch, we get at least ONE bit for all FOUR test masses. No further action needed.
Attached are monthly TCS trends for CO2 and HWS lasers. (FAMIS link)
TITLE: 02/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 16mph Gusts, 11mph 3min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.28 μm/s
QUICK SUMMARY:
H1's been locked 5.75 hrs. RyanC mentioned the issues with the IY5 violin mode, but it sounds like he has a setting that is slowly damping it down. If there is a lockloss, I need to do a LOAD of the VIOLIN_DAMPING guardian (so IY5 damps with 0 gain) and then enter the settings which work for him.
Microseism has been trending down over the last 8-ish hours and winds look a little calmer compared to 24hrs ago.
Sheila, Camilla, follow on from 82640.
We made some more changes to SQZ_MANAGER to hopefully simplify it:
Saved and added to svn but not loaded.
Once reloaded, since states have been changed, any open SQZ_MANAGER MEDM screens should be closed and reopened.
I did load this today, there don't seem to have been any issues in this lock.
Sheila, Dave, Ryan Crouch, Tony
This afternoon after the maintenance window, when we first started using guardian, once the ASC safe.snap was loaded by SDF revert we started sending large signals to the quads. We found that this was due to the camera servos having their gains set to large numbers; this was set this way in the safe.snap file.
After I set these to zero in safe.snap (which is really down.snap), Ryan again went through the guardian DOWN, and this time we started to saturate the quads because of the arm ASC loops (which we probably didn't notice the first time because we ran DOWN as soon as we saw there was a problem, and DOWN would turn these off but not the camera servos).
Dave looked in the svn for this file, which he had committed this morning (see the diffs from this morning's svn commit). Looking through these, it seems like the safe.snap may somehow have been overwritten with the observe.snap file.
Dave reverted that to the file from 7 days ago, which has Elenna's changes to the POP QPD offsets. Then I reverted all the diffs, so that we set all settings back to 7 days ago except those that are not monitored.
After this, Mayank and I were using various initial alignment states to make some clipping checks, which Mayank will alog. We noticed that the INP1Y loop (to IM4) was oscillating, so we changed its gain from 10 to 40 on line 917 of ALIGN_IFO.py. We also saw that there is an oscillation in the PRC ASC if we sit in PRX, but we haven't fixed that. These should not be due to whatever our safe.snap problem is, we hope.
Edit to add: We looked at the last lockloss, when the guardian went through SVN revert at 7am yesterday, Feb 3rd. It looks like the camera gains were 0 in the safe.snap at that time, but they were 100 by the time we did the SDF revert at 20:51 UTC (1pm Pacific) today.