[Sheila, Eric, Kar Meng, Daniel]
At the time of removing the soft cover, the particle count was ~2. We began with the OPO dither locked. We also updated the ramp time on the PSAM controllers to 100 ms (they were at 1 ms). It seems like the PSAMS are reading back the correct value (matches setpoint/target).
Betsy provided us with the screws for locking down the VIP; they are #10 button head screws, which seem to fit in the slot for the locking mechanism. We have no washers for the #10 screw, which is not ideal. The #10 button head is not wide enough and only engaged on one side of the slot. As a compromise, a 1/4" washer was tried as a buffer. Unfortunately, this was thick enough to prevent the screw from engaging on the threads, so two of the other bolts were installed without washers. One of the bolts that was installed is the correct bolt/washer. Once all the screws were in, the thumb screws were tightened and the OSEMs were checked. Pitch, yaw, and roll all differ by less than 1 urad; vertical displacement differs by 100 um.
After finishing locking down: particle count was ~ 60
We were thinking about moving the second iris on the sqz path from between ZM3 and FC1 to between ZM2 and ZM3 so that we could see the retroreflection from the FC, but we abandoned this idea because Sheila thinks it won't be that helpful considering how difficult it seems it would be to move it there. As far as irises go, all the irises we need to install in the HAM7 chamber have been placed. We have irises installed on the transmission path to the homodyne and on the CLF path on T7. It looks like the homodyne/transmission path is slightly misaligned relative to the irises placed before the vent (see images attached). We also still need to install one final iris on the green pump REFL path on T7 before we remove the OPO. We could not install this today because we could not get the SHG to lock.
We were having quite a bit of trouble dither locking the OPO, so the seed/clf input alignment may be a bit off. TEM01 and TEM00 have very similar dip fractions. By adjusting the locking code, Sheila was able to get the OPO locked to the TEM00 mode.
At the end of the day, we adjusted the iris between ZM3 and FC1 for better centering and placed 2 dogs around the OPO.
[Daniel, Karmeng]
We also placed an iris after the periscope on the SQZT7 table to constrain the green beam reflected off the OPO.
Installed two 4-chn demod chassis and the in-vacuum interface chassis. This completes the chassis installation for JAC.
h1asc0 upgrade
Daniel, Jennie, Jeff, Fil, Jonathan, EJ, Dave:
An additional ADC was added to h1asc0's IO Chassis, Fil installed the corresponding AA chassis.
h1iopasc0 model was upgraded to add this ADC.
h1ascimc model was upgraded by Jennie and Jeff, this required a DAQ restart
h1lsc0 upgrade
Daniel, Jeff, Jennie, Dave:
New h1lsc and h1sqz models were installed. DAQ restart was required.
Beckhoff Upgrade
Daniel, Dave:
New ini files were installed for CSAUX, CSISC and CSTCS. DAQ+EDC restart was required
TW0 raw minute trends offload
Offloading of raw trends from tw0 completed at 04:30 this morning. I reconfigured nds0 and started the file deletion at 16:50.
DAQ Restart
Jonathan, Dave:
DAQ was restarted twice, first for model changes, second for further model changes and EDC change (Beckhoff).
On the first restart it was found that h1ascimc had removed two DQ channels used by GDS. These were returned for the second round of model/DAQ restarts.
I'm running a temporary HWS-ETMX dummy IOC on cdsioc0 to "green up" the EDC.
cds_status_ioc was upgraded to expect +1 ADC in the site wide ADC count.
Marc Daniel
We upgraded the EtherCAT Corner Station Chassis 5 according to D1200132-v4 and E1200077-v4. The corresponding software changes were also committed. This now includes all necessary upgrades to support JAC and most of the ones needed for BHD.
Closes FAMIS27617, last checked in alog88307
HAM7 looks very noisy as expected due to it being vented.
ITMX_ST1_CPSINF_V3 has some more lines at high frequency.
ETMX_ST1_CPSINF_V3 lines are larger at high frequency; the line at 30 Hz on ETMY_ST1 looks larger for multiple sensors.
TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 46mph Gusts, 34mph 3min avg
Primary useism: 0.14 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
Looks like someone woke up and ran an Initial_Alignment first thing this morning.
When I got in, H1 was in an Initial Alignment complete state, which is fantastic, but unfortunately the wind is howling on site right now.
I'm going to try to lock 3 times just to see if I can catch a lucky lock, but the wind forecast looks abysmal for locking today.
[RyanS, Jenne, all the other control room people, of whom there were many]
Summary: We can get to PREP_DC_READOUT_TRANSITION (at least once), but had trouble with OMC locking.
We tried some locking earlier in the day, first starting by doing the same trick as yesterday (alog 88432) and moving ITMX while ETM and TMS were controlled by green WFS to improve the COMM beatnote. However, I only ended up moving ITMX 0.3 urad in yaw. Because this was a small change, and Sheila reminded me that we can lock (when the wind is low) with COMM down at -10 dBm-ish, I decided that next lock we'd just use the camera setpoints (which we did, and it was successful). During this time, we had to disable SRC1 and PRC1 in DRMI ASC because they were pulling the alignment away, and we weren't really on the POP QPDs at all. Anyhow, we got to PREP_ASC_FOR_FULL_IFO twice, but the alignment never looked excellent. We tried ENGAGE_ASC_FOR_FULL_IFO once, and it was really bad and killed the lock. The other time, we think that the seismic state changing from useism to windy caused us trouble (but, in retrospect, it likely was only troublesome because the alignment was so poor).
We then realized that, after the big earthquakes from this weekend, our input pointing wasn't good. RyanS then set the IMs 1, 2, and 3 such that they matched their top mass osems (not necessarily their sliders). We then moved IM3 a little bit to get back to where we had been on IM4 Trans QPD before the power outage (pit of 0.239, yaw of -0.071). We were able to quite easily run through an initial alignment with this. During this and all subsequent initial alignments, we used the pre-power outage green setpoints (including cameras). The COMM beatnote was around -9 dBm, so that's pretty good.
Some time around here we had the -18 V failure, see alog 88446 for details.
Then, we did another initial alignment.
Then, the CDS team let us know that they needed to reboot all the models, see alog 88448 for details. After all the models were back, Ryan restarted the ALS_[X,Y]ARM guardians using "guardctrl restart NODE", so they would know how to start their AWGs in case SCAN_ALIGNMENT needs to be run.
We then restored sliders to just after one of our recent initial alignments, and Ryan then reset the IMs 1-3 to their top mass osems again, and again moved IM3 to get us back to the pre-power outage spot on IM4. ....And did yet another initial alignment.
After this, we finally were able to try to lock for the first time in several hours! And things went really, really quite well. We basically didn't touch anything at all (PRC1 and SRC1 still disabled in DRMI ASC), and were able to lock to PREP_FOR_DC_READOUT! Yes, you read that right, ENGAGE_ASC_FOR_FULL_IFO did just fine on its own. The buildups went down then came back again, so we were a bit scared, but it kept hold of everything and was able to converge. The PRM's ADS alignment took a loooonng time to converge, which we've seen before after a power outage (eg, alog 86944), so after all of the ASC was on (including SOFT_LOOPS) and converged, I reset the POP_A offsets, and Ryan accepted them in safe.snap.
(Later, after some relocks, we're able to use PRC1 in DRMI ASC now that its offsets have been set. But, still SRC1 is left out since it's pulling things away).
The violin modes are quite high, but not so bad that it's impossible to get 2W locked.
We went to PREP_DC_READOUT_TRANSITION, and noticed that we were having trouble locking the OMC. We're still not sure what's going on here, and we're going to leave it for the night. We're hoping to leave it at PREP_DC_READOUT_TRANSITION, however the second time that we did ENGAGE_ASC_FOR_FULL_IFO, something pulled us away and we lost lock. We'll let it try one more time.
OMC locking troubles and symptoms:
Screenshot attached of newly SDF accepted POP A offset values.
Okay, we tried *one* more time, this time doing the same skip-over-tune_offsets thing, but having remembered to do the critical first few lines in the main state. We seemed to successfully get onto DC readout, the ISC_LOCK state DC_READOUT finished, and we started to see the DARM trace on the wall come down. However, we lost lock pretty suddenly. Ryan and I are giving up for the night.
Also, during engage ASC for full IFO, as part of getting up to attempting DC readout, we turned SRC1 off by hand since it was starting to pull things away. Ryan by-hand aligned SRM to get us back to good alignment. Ryan tried turning on SRC1 pit after the SOFT loops were converged, but that immediately started pulling us away, so we turned it back off and re-touched up SRM.
Looking back at the locklosses at DC readout transition, Ryan's and my lockloss at ~8:30pm seemed to be successful through the matrix ramp and beyond. However, we got a bit of a ringup at ~18.5 Hz in LSC DARM.
Sheila's lockloss around 10:30pm is similar, although it did lose lock right at the end of the matrix transition. This lockloss had a DARM oscillation a little lower, closer to 14 Hz.
There is some data-retrieval issue, but once that gets solved, Oli plans to try re-running the lockloss tool so we can see if there is anything else suggestive of why we might have struggled.
TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
SEI_ENV state: MAINTENANCE
Wind: 9mph Gusts, 7mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.39 μm/s
QUICK SUMMARY:
All planned CDS Work : Status Postponed in favor of Locking.
DEC 8 - FAC - annual fire system checks around site : Status "Completed.... For now." ~ Tyler
MON TUES - RELOCKING IFO : " In Progress "
TUES AM - HAM7 Pull -Y Door : STATUS Completed
FARO work at BSC2 (Jason, Ryan C) : Status postponed in favor of Locking.
HAM7 - in-chamber work to swap OPO assembly : Status " In progress"
Notes:
Fire Pump 1 is on 23:01 UTC
SUSC1 -18V power supply dead, Fil C. is replacing it.
Dave took down corresponding AA & AI chassis and brought them back up.
Fire pump 1 back on 23:16 UTC
Fire pump 2 on at 23:19 UTC
Fire Pump 1 is back on again 23:21 UTC
High Voltage for HAM6 was accidentally shut off, which is why the Fast Shutter no longer worked. Turned back on at 23:53 UTC.
16:17 All SEI and all SUS at the Corner Station taken to safe to prepare for restarting all CS models.
Waiting for HAM7 OPO team to give Green Light for model restarts on HAM7.
... Oops... ZM4 & ZM5 models were accidentally restarted with H1SUSSQZOUT before we heard from the OPO team.
TCS X&Y CO2 TRIP Imminent!? Verbals scrolled too fast to get a time... Dave was restarting those models at the time.
End stations and mid stations were also restarted since the X & Y ES both tripped due to a Dolphin issue. The ES were not taken to safe.
"Might as well restart them all" ~ Dave B.
SUS ETMX was shaken hard enough to trip the watchdog, probably because they were not taken to safe before the reboots.
Dave, Jonathan,
We restarted all the models tonight, from 16:30 to 17:40. This was due to unexpected model behavior where EPICS outputs did not match daqd data. We are working to understand the mechanism. Our supposition is that it was due to a slow /opt/rtcds on model startup. We did not need to restart any daqd processes, which points to this issue being internal to the models. Dave will fill in a few more details in a comment.
[Sheila, Betsy, Anamaria, Eric, Daniel, Karmeng]
First time opening HAM7.
Removed the contamination control horizontal wafer.
Tools and iris are setup next to the chamber.
Dust count in the chamber and in the clean room are good/low.
4 irises placed: after ZM1, and at CLF REFL. The ZM3 and ZM4 irises are roughly placed and will need realignment after CDS is powered on. The pump reflection iris is hard to place, and we will not place an iris there.
We took some photos to help us determine where to place the irises and how to route the new in-vacuum cabling. Linking them here since they might be useful when planning for future vents:
Jeff, Oli
Earlier, while trying to relock, we were seeing locklosses preceded by a 0.6 Hz oscillation seen in the PRG. Back in October we had a time where the estimator filters were installed incorrectly and caused a 0.6 Hz lock-stopping oscillation (87689). Even though we haven't made any changes to the estimators in over a month now, I decided to try turning them all off (PR3 L/P/Y, SR3 L/P/Y). During the next lock attempt, there were no 0.6 Hz oscillations seen. I checked the filters and settings and everything looks normal, so I'm not sure why this was happening.
I took spectra of the H1:SUS-{PR3,SR3}_M1_ADD_{L,P,Y}_TOTAL_MON_DQ channels for each suspension and each DOF during two similar times before and after the power outage. I wanted the After time to be while we were in MICROSEISM, since it seems like maybe the ifo isn't liking the normal WINDY SEI_ENV right now, so I wanted both the Before and After times to be in a SEI_ENV of MICROSEISM and the same ISC_LOCK states. I chose the After time to be 2025/12/09 18:54:30 UTC, when we were in an initial alignment, and then found a Before time of 2025/11/22 23:07:21 UTC.
Here are the spectra for PR3 and SR3 for those times. PR3 looks fine for all DOF, and SR3 P looks to be a bit elevated between 0.6 - 0.75 Hz, but it doesn't look like it should be enough of a difference to cause oscillations.
Then, while talking to Jeff, we discovered that the overall noise in the total damping for L and P changed depending on the seismic state we were in, so I made a comparison between the MICROSEISM and CALM SEI_ENV states (PR3, SR3). The USEISM time was 2025/12/09 12:45:26 UTC and the CALM time was 2025/12/09 08:54:08 UTC, with a BW of 0.02. The only difference in the total drive is seen in L and P, where it's higher below 0.6 Hz when we are in CALM.
So during those 0.6 Hz locklosses earlier today, we were in USEISM. Is it possible that the combination of the estimators and the USEISM state creates an unstable combination?
This is possibly true. The estimator filters are designed/measured using a particular SEI environment, so it is expected that they would underperform when we change the SEI loops/blends.
Additionally, we use the GS13 signal for the ISI-->SUS transfer function. It might be the case that the different amount of in-loop/out-of-loop-ness of the GS13 might do something to the transfer functions. I don't have any math conclusions from it yet, but Brian and I will think about it.
I'm pretty confident that the estimators aren't a problem, or at least are a red herring. Just clarifying the language here -- "oscillation" is an overloaded term. And remember, we're in "recovery" mode from last Thursday's power outage -- so literally *everything* is suspect, wild guesses are being thrown around like flour in a bakery, and we only get brief, unrepeatable bits of evidence, separated by tens of minutes, that something's wrong. The symptom was: "we're trying 6 different things at once to get the IFO going. Huh -- the ndscope time-series of the IFO build-ups as we're locking looked to exponentially grow to lockloss in one lock stretch, and in another it just got noisier halfway through the lock stretch. What happened? Looks like something at 0.6 Hz." We're getting to "that point" in the lock acquisition sequence maybe once every 10 minutes. Meanwhile:
- There's an entire rack's worth of analog electronics that went dark in the middle of this, as one leg of its DC power failed (LHO:88446).
- The microseism is higher than usual and we're between wind storms, so we're trying different ISI blend configurations (LHO:88444).
- We're changing around global alignment because we think suspensions moved again during the "big" HAM2 ISI trip at the power outage (LHO:88450).
- There was an IFO-wide CDS crash after a while that required all front-ends to be rebooted, with the suspicion that our settings configuration file tracking system might have been bad (LHO:88448).
Everyone in the room thinks "the problem" *could* be the thing they're an expert in, when it's likely a convolution of many things. Hence, Oli trying to turn OFF the estimators. And near that time, we switched the configuration of the sensor correction / blend filters of all the ISIs (switching the blends from WINDY to MICROSEISM -- see LHO:88444). So:
- There was only one, *maybe* two events where an "oscillation" is seen, in the sense of "positive feedback" or "exponential growth of control signal."
- There was only one "oscillation" in the sense of "excess noise in the frequency region around 0.6 Hz," and the check that it actually *is* at 0.6 Hz isn't rigorous. That happens to be the frequency of the lowest L and P modes of the HLTSs, PR3 and SR3. BUT -- Oli shows in their plots that:
- Before vs. after the power outage, when looking at times when the ISI platforms are in the same blend state, PR3 and SR3 control is the same.
- Comparing the control request when the ISI platforms are in microseism vs. in windy shows the expected change in control authority from ISI input, as the change in shape of the ASD of PR3 and SR3 between ~0.1 and ~0.5 Hz matches the change in shape of the blends.
Attached is an ndscope of all the relevant signals -- or at least the signals in question, for verbal discussion later.
(Randy, Jordan, Travis, Filiberto, Gerardo)
We closed four small gate valves: two at the relay tube, RV-1 and RV-2, and two at the filter cavity tube, between BSC3 and HAM7, FCV-1 and FCV-2. The purge air system has been on since last week, with a dew point reported by the dryer tower of -66 °C, and -44.6 °C measured chamber side. Particulate was measured at the port by the chamber: zero for all sizes. The HAM7 ion pump was valved out.
Filiberto helped us out by making sure high voltage was off at HAM7; we double checked with procedure M1300464. Then the system was vented per procedure E2300169 with no issues.
Other activities at the vented chamber:
Currently the chamber has the purge air active at a very low setting.
(Randy, Travis, Jordan, Gerardo)
The -Y door was removed with no major issues, apart from the usual O-ring sticking to the flat flange; it stuck around the bottom part of the door, 5-8 o'clock. Other than that, no other issues. Both blanks were removed and the ports were covered with an aluminum sheet.
Note, the soft cover will rub against ZM3 if the external jig to pull the cover is not used.
It's been a few minutes so far. There is emergency lighting. Luckily since it was lunchtime there were no people out on the floor. Recovery Begins!
The power was out on Thursday, Dec 04 2025, from 20:25 UTC until 22:02 UTC (12:25 PST to 14:02 PST).
J. Freed,
I took phase noise measurements of the 2-channel Keysight 33600A waveform generator for its use in building the SPI Pathfinder in the optics lab before install. Going only off of the phase noise graphs, it is sufficient, as it shows results comparable to the SRS, whose phase noise was considered good enough for SPI Pathfinder.
Key.png shows the phase noise results. C1 and C2 are the phase noise results for Channel 1 and Channel 2 on the Keysight, respectively (setup shown below). Shown for comparison is the SRS SG392, which was suggested as a possible frequency source for SPI. The last measurement shown is the direct measurement of phase noise between the 2 channels of the Keysight. This measurement reflects the intended use case of the Keysight for SPI, as we need two tones at slightly different frequencies locked to each other, and SPI will be measuring the output phase difference. Note the 60 Hz peak, most likely caused by unclean AC power; this is why we are not using an AC-powered device in the final installation.
Screenshot2025-12-01at50150 PM.png shows the setup for the C1 and C2 measurements. The SRS value was found with the same setup, just replacing the Keysight 33600A with the SRS. The C1-C2 is a direct measurement made by plugging both channels into the BluePhase 1000. There is no 10 MHz Ext back attachment in this measurement, in order to best represent the Keysight's theoretical performance in the optics lab.
Edit to the other comment and the main post
After a discussion with Jeff, we figured the best course of action would be to have all the generators referenced to the same generator, in order to better compare the results, and to redo the plot in the previous comment to better reflect SPI's requirements as the baseline rather than the other way around.
PhaseNoiseSetUp.png is a picture of the setup; it is similar to the previous tests except that everything is referenced to the SRS. One thing I failed to mention in the previous tests, and which is not listed in the diagram, is that the divide-by-eight goes into another Distribution Amp before heading into the Ext. 10 MHz of the function generators. Also note that the Double Mixer (DM) does not have an Ext 10 MHz port. Instead it takes the 80 MHz signal from the first Distribution Amp, and sin and cos 4096 Hz signals from a CDS DAC through an AI chassis (see 81593). Also note that the OCXO measurement took a pickoff from the first Distribution Amp instead of some sort of extra DoT.
Keyrad2.png is the graph of phase noise measurements of the Double Mixer (DM) D2400315, a LIGO 80 MHz OCXO D080702 (both the DM and OCXO are used in the final SPI design), and a Keysight 33600A dual-source waveform generator, which will be used during the SPI build. The phase noise measurements were all referenced to an SRS SG382. Note that the BluePhase 1000 setup reports values in dBc/Hz; to convert to a more directly useful value for SPI, rad/rtHz, I used the conversion:
[rad/rtHz] = sqrt(2) * 10^([dBc/Hz]/20)
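As a sanity check of that conversion, here is a minimal Python sketch; the -100 dBc/Hz input level is a hypothetical example, not one of the measured values above:

```python
import math

def dbc_hz_to_rad_rthz(dbc_per_hz):
    """Convert single-sideband phase noise L(f) in dBc/Hz to a phase
    spectral density in rad/rtHz, per: sqrt(2) * 10^(dBc/20)."""
    return math.sqrt(2) * 10 ** (dbc_per_hz / 20)

# Hypothetical example: a -100 dBc/Hz level maps to ~1.4e-5 rad/rtHz
print(dbc_hz_to_rad_rthz(-100.0))
```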
The w/o. Ref. label denotes a measurement without the reference subtraction, separate from the SRS reference. In SPI, there is a reference interferometer that removes noise gained along each of the arms of the main SPI Mach-Zehnder. Mathematically speaking, this subtraction has an attenuation effect on our phase noise of:
Phase Noise = D/c * f * Phase noise(w/o. Ref.)
Where D is the length mismatch in the main arm between the reference and the main interferometers (~30 m), c is the speed of light, and f is the frequency of the phase noise. Put another way, the plots labeled (w/o Ref.) are the direct measurements of the phase noise, while plots without that label show the theoretical effect of the noise in our system. We will experimentally test this later, once SPI is installed, using injections, by altering the 4096 Hz CDS filter bank for the DM.
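To make the scaling concrete, a small Python sketch of that suppression, assuming D = 30 m as stated above (the 1 Hz evaluation point is purely illustrative):

```python
C = 299_792_458.0  # speed of light, m/s
D = 30.0           # assumed length mismatch between reference and main arms, m

def referenced_phase_noise(f_hz, pn_without_ref):
    """Apply the reference-interferometer suppression from the formula above:
    Phase Noise = D/c * f * Phase Noise(w/o Ref.)"""
    return (D / C) * f_hz * pn_without_ref

# Illustrative point: at f = 1 Hz, the raw noise is suppressed by D/c ~ 1e-7
print(referenced_phase_noise(1.0, 1.0))
```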
P.S. I have no idea why the OCXO noise is worse than the DM's. We expected it to be better. A possibility is that, since the DM measurements were taken more than half a year ago, one of the devices was "just having a bad day" today. Investigating this, while interesting, is a lower priority than other tasks, as the main goal of investigating the Keysight's noise performance was achieved.
Edits to previous post. Graph: x-axis label should be 'Frequency offset from 80 MHz (Hz)' and y-axis label should be 'dBc'.
Keyradwref.png and Keyradworef.png are the requirements for the phase noise of our oscillator with SPI having and not having a reference interferometer, respectively. In the final SPI Pathfinder install, we will have a reference interferometer, giving us much less stringent requirements on our oscillator's phase noise. But during the build, it may be necessary to run tests without a reference interferometer, so I plotted the without-reference requirement in case that situation ever does come up.