Today Francisco and I went down to the EX end station to do an End Station measurement using PS4.
We followed our trusty T1500062 procedure until almost the end, because Francisco needed to do a beam movement after the ES measurement.
The PS4/PS5 lab measurement used for this was made on 2024-12-10. I did make a PS4/PS5 measurement yesterday, but that was done using the new Lab method, which means that Lab measurement is not available on the master branch in the form required by the ES measurement analysis tools.
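For reference, the responsivity correction to end-station temperature quoted in the output below can be sketched as a linear model. This is my assumption of the form; the lab temperature and coefficient here are illustrative, and the relative uncertainty term u_rel is ignored:

```python
# Sketch of correcting the working-standard responsivity rho (V/W) from the
# lab temperature to the end-station temperature, assuming the linear model
# rho(T) = rho_lab + kappa * (T - T_lab). T_lab and the values below are
# illustrative, not the ones used by generate_measurement_data.py.
def correct_to_temperature(rho_lab, kappa, t_lab, t_es):
    """Linear temperature correction of the WS responsivity."""
    return rho_lab + kappa * (t_es - t_lab)

rho_es = correct_to_temperature(rho_lab=-4.7021, kappa=-2.69e-4,
                                t_lab=295.0, t_es=299.6)
assert abs(rho_es - (-4.70334)) < 1e-4
```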
python generate_measurement_data.py --WS "PS4" --date '2024-12-10'
Reading in config file from python file in scripts
../../../Common/O4PSparams.yaml
PS4 rho, kappa, u_rel on 2024-12-10 corrected to ES temperature 299.6 K :
-4.702394808525094 -0.0002694340454223 4.3277259408925024e-05
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1422121080
reading data at start_time: 1422121480
reading data at start_time: 1422121800
reading data at start_time: 1422122150
reading data at start_time: 1422122510
reading data at start_time: 1422122865
reading data at start_time: 1422122980
reading data at start_time: 1422123840
reading data at start_time: 1422124200
Ratios: -0.4589607901775236 -0.4689807789761762
writing nds2 data to files
finishing writing
Background Values:
bg1 = 9.269796; Background of TX when WS is at TX
bg2 = 5.192082; Background of WS when WS is at TX
bg3 = 9.119009; Background of TX when WS is at RX
bg4 = 5.361966; Background of WS when WS is at RX
bg5 = 9.037959; Background of TX
bg6 = 0.508605; Background of RX
The uncertainties reported below are relative standard deviations in percent
Intermediate Ratios
RatioWS_TX_it = -0.458961;
RatioWS_TX_ot = -0.468981;
RatioWS_TX_ir = -0.453407;
RatioWS_TX_or = -0.464488;
RatioWS_TX_it_unc = 0.085535;
RatioWS_TX_ot_unc = 0.091457;
RatioWS_TX_ir_unc = 0.089022;
RatioWS_TX_or_unc = 0.094409;
Optical Efficiency
OE_Inner_beam = 0.988084;
OE_Outer_beam = 0.990574;
Weighted_Optical_Efficiency = 0.989329;
OE_Inner_beam_unc = 0.057445;
OE_Outer_beam_unc = 0.061558;
Weighted_Optical_Efficiency_unc = 0.084198;
Martel Voltage fit:
Gradient = 1636.809688;
Intercept = 0.225736;
Power Imbalance = 0.978635;
Endstation Power sensors to WS ratios:
Ratio_WS_TX = -1.077654;
Ratio_WS_RX = -1.392043;
Ratio_WS_TX_unc = 0.053955;
Ratio_WS_RX_unc = 0.041453;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values :
GS = -5.135100; Gold Standard Value in (V/W)
WS = -4.702395; Working Standard Value in (V/W)
costheta = 0.988362; Cosine of angle of incidence
c = 299792458.000000; Speed of Light
End Station Values :
TXWS = -1.077654; Tx to WS Rel responsivity (V/V)
sigma_TXWS = 0.000581; Uncertainty of Tx to WS Rel responsivity (V/V)
RXWS = -1.392043; Rx to WS Rel responsivity (V/V)
sigma_RXWS = 0.000577; Uncertainty of Rx to WS Rel responsivity (V/V)
e = 0.989329; Optical Efficiency
sigma_e = 0.000833; Uncertainty in Optical Efficiency
Martel Voltage fit :
Martel_gradient = 1636.809688; Martel to output channel (C/V)
Martel_intercept = 0.225736; Intercept of fit of Martel to output (C/V)
Power Loss Apportion :
beta = 0.998895; Ratio between input and output (Beta)
E_T = 0.994100; TX Optical efficiency
sigma_E_T = 0.000419; Uncertainty in TX Optical efficiency
E_R = 0.995201; RX Optical Efficiency
sigma_E_R = 0.000419; Uncertainty in RX Optical efficiency
Force Coefficients :
FC_TxPD = 7.902395e-13; TxPD Force Coefficient
FC_RxPD = 6.183646e-13; RxPD Force Coefficient
sigma_FC_TxPD = 5.429896e-16; Uncertainty of TxPD Force Coefficient
sigma_FC_RxPD = 3.673209e-16; Uncertainty of RxPD Force Coefficient
data written to ../../measurements/LHO_EndX/tD20250128/
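A few of the printed numbers can be cross-checked by hand. A short Python sketch using values copied from the output above (the real analysis also applies the background subtraction listed earlier, so the intermediate-ratio check is only approximate):

```python
# Optical efficiency from the intermediate WS/TX ratios (approximate: the
# pipeline also subtracts the bg1..bg6 background levels before dividing).
ratio_it, ratio_ir = -0.458961, -0.453407
assert abs(ratio_ir / ratio_it - 0.988084) < 5e-4

# The weighted optical efficiency is the equal-weight mean of inner and outer
OE_inner, OE_outer = 0.988084, 0.990574
assert abs((OE_inner + OE_outer) / 2 - 0.989329) < 1e-6

# The apportioned TX and RX efficiencies multiply back to the total
E_T, E_R = 0.994100, 0.995201
assert abs(E_T * E_R - 0.989329) < 5e-6
```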
Martel measurement
WS in the Transmitter module.
WS in the Rx module
WS in the Rx module with Both Beams
PCAL ES trends pdf.
More data can be found here: https://git.ligo.org/Calibration/pcal/-/tree/master/O4/ES/measurements/LHO_EndX/tD20250128?ref_type=heads
Lockloss @ 01/29 01:00 UTC
03:02 Observing
TITLE: 01/28 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
Maintenance Day ended at roughly noon, followed by locking; H1 relocked fairly smoothly with no issues worth noting.
There has been a temperature drop at EX since this morning due to a heater coil failure. Tyler gave Oli and me a summary of the status of EX's recovery. The reheat temperature is now up at 80°F (it had been hovering at 60°F for the last 7 hrs); it will now oscillate a bit until it returns to normal operations. Tyler said to keep an eye on it and contact him if we lose our reheat air again.
LOG:
TITLE: 01/29 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 159Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 3mph Gusts, 1mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
Currently Observing at 159Mpc and have been Locked for 3.5 hours now. The work to fix the fan heating coil at EX just finished and the coils are running, just not at full capability. We have been doing okay even with the temperatures there still fluctuating. The average VEA temps had gone down 2.5°F over the past 7 hours, but they are now slowly going back up.
Ryan Crouch, Rahul Kumar
The assembly, balancing and testing of all 12 HRTS (HAM Relay Triple Suspension) for O5 is now complete. Currently all 12 HRTS have dummy optics installed at the bottom stage; once the mirrors are ready (with prisms bonded) they will be swapped in later this year. I am attaching several pictures from the lab showing all 12 HRTS staged on the optical bench. Later, six of the suspensions will be transported to LLO.
This plot compares the transfer function performance of all 12 HRTS across all 6 degrees of freedom. We are still analyzing this data and there is scope for improving a couple of them (especially the one highlighted in the green trace). Sometimes it is as simple as adjusting the flag position with respect to the LED/PD in the BOSEMs; other times it requires further fine-tuning of the balance and alignment of the blade springs.
The final two HRTS which we assembled are of the OM0 configuration. This has bottom mass (M3 stage) actuation using an AOSEM standoff assembly (as per D2300180_v2), as shown in the picture here. The magnets used at the M3 stage are 2.0 mm D x 0.5 mm T, SmCo. The transfer function results for both OM0 configurations are as follows: attachment01 and attachment02. Both look healthy when compared with the model.
Given below are the OLC, offsets and gains of the BOSEMs attached to OM0 sn02:
s/n | OLC   | Offset | Gain
622 | 31669 | 15834  | 0.947
639 | 32414 | 16207  | 0.925
637 | 28430 | 14215  | 1.055
632 | 27399 | 13699  | 1.094
684 | 26138 | 13069  | 1.147
698 | 32767 | 16383  | 0.915
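The offsets and gains above are consistent with the usual SUS normalization convention (my inference, not stated in the log): offset = OLC/2 and gain = 30000/OLC, so every BOSEM reads an effective open-light count of 30000. A minimal sketch under that assumption:

```python
# Assumed SUS BOSEM calibration convention (not stated in the log):
#   offset = half the open-light count (OLC)
#   gain   = 30000 / OLC, normalizing all sensors to an effective OLC of 30000
def bosem_cal(olc):
    """Return (offset, gain) for a BOSEM with open-light count `olc`."""
    return olc // 2, 30000.0 / olc

# Check against the first row of the table above (s/n 622)
offset, gain = bosem_cal(31669)
assert offset == 15834
assert abs(gain - 0.947) < 2e-3
```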
I remeasured the suspension associated with the lime green trace (2024-9-16), a suspended version of the HRTS with structure s/n 012. By adjusting the flags' centering and position I was able to improve the measurement results; the vertical DOF especially, and yaw also looks better. Previous measurement vs new measurement.
J. Kissel
With today's new infrastructure, which down-samples the output of the 524 kHz test banks to 16 kHz in the same way as the primary GW channel, we confirm that last week's addition of an extra 65 to 16 kHz AA filter improves the GW data stream by reducing several down-converted line features. Using the infrastructure configured as shown in the 1st attached screenshot, I compare a ~17-average, 64 sec FFT (~0.02 Hz bandwidth) ASD of
- RED: the 524 kHz version of the A1 path, which has NO anti-aliasing.
- BLUE: the 16 kHz version of the primary GW path, which now has 1x 65k and 2x 16k AA filters.
- GREEN: the 16 kHz version of the A2 path, which has the 1x 65k and 1x 16k AA filter configuration used prior to 2025-01-23.
- BROWN: the 16 kHz version of the A1 path, which has NO anti-aliasing.
See the 2nd attachment, focusing on the region where I see the most improvement, between 400 and 1500 Hz. The 3rd attachment shows the identical data set, but without DTT binning adjacent FFT bins as we normally do to aesthetically clean up the broad-band noise floor. One can see features in GREEN that are not there in RED, and also not in the improved BLUE, that line up with features in the unfiltered but down-sampled BROWN. More fruitful data to come from this infrastructure, I'm sure. I'm also sure that the CW group and DetChar must be happy with last week's change!
The (single precision) DTT session for this data set lives here:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Common/Electronics/H1/SensingFunction/OMCA/Data/
2025-01-28_OMC_DCPD_AA_Filtering_Comparison.xml
2025-01-28_OMC_DCPD_AA_Filtering_Comparison_w16kHzData.xml
where I've saved the session with the time series in case we want to corroborate this study with offline down-sampling to better understand the front-end algorithm.
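The effect being compared above can be reproduced offline with a toy example: decimating with no anti-alias filter folds an out-of-band line into the output band, while a low-pass ahead of the decimation suppresses it. A numpy sketch, with rates mimicking the 524 kHz to 16 kHz step but the tone frequency chosen purely for illustration (not taken from the real DCPD data):

```python
import numpy as np

# Toy demonstration of aliasing in down-sampling. Rates mimic the
# 524 kHz -> 16 kHz decimation (q = 32); the 100 kHz tone is illustrative.
fs_hi, q = 524288, 32
fs_lo = fs_hi // q                  # 16384 Hz; output Nyquist = 8192 Hz
n = 65536                           # 0.125 s of data
t = np.arange(n) / fs_hi
x = np.sin(2 * np.pi * 100e3 * t)   # tone far above the output Nyquist

# Naive decimation (no anti-aliasing): the tone folds to
# 100000 mod 16384 = 1696 Hz, appearing in-band at full amplitude
naive = x[::q]

# Windowed-sinc low-pass at the output Nyquist, then decimate
fc = (fs_lo / 2) / fs_hi            # normalized cutoff (cycles/sample)
m = 801
k = np.arange(m) - (m - 1) / 2
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(m)
filtered = np.convolve(x, h, mode="same")[::q]

def peak_amp(y):
    """Amplitude of the largest spectral line in y."""
    return 2 * np.abs(np.fft.rfft(y)).max() / len(y)

assert peak_amp(naive) > 0.5        # aliased line at ~full amplitude
assert peak_amp(filtered) < 0.05    # strongly suppressed by the AA filter
```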
WP12302 New h1iopomc0, h1omc and IPC
Jeff, Jonathan, Erik, EJ, Dave:
New h1iopomc0 and h1omc models were installed. A DAQ restart (the first of two) was performed soon afterwards due to INI changes for both models.
h1omc0 uses a custom RCG for duo-tone frequency selection and, in the past, filter ramp switching. The models had been built on h1build using the modified RTS_VERSION and RCG_SRC vars, but the h1iopomc0 model would not start because h1build has a different kernel version from h1omc's boot server (h1vmboot0). The models were rebuilt and installed using h1vmboot0, which fixed the problem.
export RTS_VERSION=5.3.0~~dual_tone_frequency
export RCG_SRC=/opt/rtcds/rtscore/advligorts-5.3.1_ramp
WP12283 Add Locklossalert slow channels to DAQ
Dave:
A new H1EPICS_LLA.ini file was generated containing all the non-string PVs hosted by the locklossalert IOC. This was added to edcumaster.txt.
These channels were added to the DAQ on the second DAQ restart.
WP12296 Vacuum Ion Pump IOC changes
Patrick, Gerardo, Dave:
Patrick made code changes on h0vacmr which resulted in some EPICS channels being removed, some being renamed and some being added.
For those being renamed, their minute trend data using the old name is being referenced using the new names.
Following list shows the oldname/newname mapping.
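The rename step performed while the trend writer was down can be sketched as below. The channel names in this map are hypothetical placeholders, not the real oldname/newname list:

```python
import os

# Hypothetical oldname -> newname mapping; the real list is given in the log.
RENAME_MAP = {
    "H0:VAC-MR_OLD_EXAMPLE": "H0:VAC-MR_NEW_EXAMPLE",
}

def rename_trend_files(trend_dir, mapping):
    """Rename raw minute-trend files so data archived under the old channel
    name is served under the new name (done while the trend writer is down)."""
    for old, new in mapping.items():
        src = os.path.join(trend_dir, old)
        dst = os.path.join(trend_dir, new)
        # Only rename if the source exists and we wouldn't clobber anything
        if os.path.exists(src) and not os.path.exists(dst):
            os.rename(src, dst)
```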
DAQ Restarts
Jonathan, Dave:
The DAQ was restarted for the first time soon after the new h1iopomc0 and h1omc models were installed. This was an unremarkable restart except that gds0 had to be restarted twice before its channel list synced up.
The DAQ was restarted a second time towards the end of maintenance once PCAL was done and the new h0vacmr IOC had been running for a while.
This restart followed a different procedure to permit renaming of the raw minute trend files while the trend writer was inactive (see above listing).
1. Restart 0-leg, except keep tw0 down
2. Restart EDC on h1susauxb123
3. Rename vacuum raw chan files
4. Start tw0
5. Restart gds0 if needed
6. Repeat the above for the 1-leg, except the EDC restart is not needed a second time.
All went well except that I had a copy-paste error in my raw minute trend renaming script, which required me to go onto the trend writers afterwards and fix a file naming issue.
Tue28Jan2025
LOC TIME HOSTNAME MODEL/REBOOT
08:24:20 h1omc0 h1iopomc0 <<< slow restart of h1iopomc0, it had to be rebuilt on h1vmboot0
08:25:00 h1omc0 h1omc
08:25:40 h1omc0 h1omcpi
08:31:21 h1daqdc0 [DAQ] <<< DAQ restart for OMC model changes
08:31:34 h1daqfw0 [DAQ]
08:31:34 h1daqtw0 [DAQ]
08:31:35 h1daqnds0 [DAQ]
08:31:42 h1daqgds0 [DAQ]
08:32:11 h1daqgds0 [DAQ] <<< gds0 restart
08:32:40 h1daqgds0 [DAQ] <<< gds0 needed a second restart
08:36:36 h1daqdc1 [DAQ]
08:36:48 h1daqfw1 [DAQ]
08:36:49 h1daqtw1 [DAQ]
08:36:50 h1daqnds1 [DAQ]
08:36:57 h1daqgds1 [DAQ]
08:37:30 h1daqgds1 [DAQ] <<< gds1 restart
11:27:24 h1daqdc0 [DAQ] <<< 0-leg restart keeping tw0 down
11:27:39 h1daqfw0 [DAQ]
11:27:40 h1daqnds0 [DAQ]
11:27:46 h1daqgds0 [DAQ]
11:28:26 h1susauxb123 h1edc[DAQ] <<< EDC restart, new VAC MR chans, added LLA chans
11:30:29 h1daqtw0 [DAQ] <<< delayed tw0 start following file renaming
11:31:53 h1daqgds0 [DAQ] <<< gds0 restart
11:33:43 h1daqdc1 [DAQ] <<< 1-leg restart keeping tw1 down
11:33:55 h1daqfw1 [DAQ]
11:33:55 h1daqnds1 [DAQ]
11:34:04 h1daqgds1 [DAQ]
11:35:48 h1daqtw1 [DAQ] <<< delayed tw1 start following file renaming
11:36:36 h1daqgds1 [DAQ] <<< gds1 restart
DAQ changes: left chevron = removed, right chevron = added.
J. Kissel, (with model installs from D. Barker)
ECR E2500017, IIET Ticket IIET:33143, WP 12302
This morning, Dave installed the model changes I prepped yesterday (LHO:82479), which install front-end infrastructure to downsample the output of the OMC DCPD 524 kHz test banks (originally installed in July 2024, LHO:78956), in light of the flurry of data analysis revealing confusing results in the last two weeks (LHO:82420 and LHO:82294). These model changes have been committed to the userapps svn repo under /opt/rtcds/userapps/release/
cds/h1/models/h1iopomc0.mdl rev 30549
omc/h1/models/h1omc.mdl rev 30548
omc/common/models/omc.mdl rev 30547
Post-install, I built up MEDM screens. For now, and for better or worse, all the screens are committed in the same location, /opt/rtcds/userapps/release/cds/h1/medm/
Overview screen: H1OMC_DCPD_TEST_OVERVIEW.adl
Subordinate screens linked from the overview:
H1IOPOMC0_524KHZ_ADC0_INMTRX.adl
H1IOPOMC0_524KHZ_DCPD_FILTERS.adl
H1OMC_16KHZ_DCPD_FILTERS.adl
H1OMC_DCPD_BALANCE_MATRIX_OVERVIEW.adl
H1OMC_16KHZ_DCPD_SUMNULL_FILTERS.adl
Because the model changes came with a channel name change for the 524 kHz test filter path,
H1:OMC-DCPD_A1 >> H1:OMC-DCPD_524K_A1
H1:OMC-DCPD_A2 >> H1:OMC-DCPD_524K_A2
H1:OMC-DCPD_B1 >> H1:OMC-DCPD_524K_B1
H1:OMC-DCPD_B2 >> H1:OMC-DCPD_524K_B2
I needed to re-install all the filtering in the foton file, so I grabbed all the filters shown in LHO:82384 from /opt/rtcds/lho/h1/chans/filter_archive/h1iopomc H1IOPOMC0_1421610658.txt and pushed them to the cited version of the userapps svn, /opt/rtcds/userapps/release/cds/h1/filterfiles/H1IOPOMC0.txt rev 30556.
Finally, I set all of the EPICS records as shown in the attached MEDM screenshots, accepted and unmonitored everything in the following SDF save files, and committed the files to svn:
- The h1iopomc0 model's safe and OBSERVE files are linked to the same file, /opt/rtcds/userapps/release/cds/h1/burtfiles/h1iopomc0/safe.snap, rev 30557.
- The h1omc model's safe and OBSERVE files are linked to
/opt/rtcds/lho/h1/target/h1omc/h1omcepics/burt/OBSERVE.snap -> /opt/rtcds/userapps/release/omc/h1/burtfiles/h1omc_OBSERVE.snap rev 30558
/opt/rtcds/lho/h1/target/h1omc/h1omcepics/burt/safe.snap -> /opt/rtcds/userapps/release/omc/h1/burtfiles/h1omc_down.snap rev 30558
We're now in observing, and Corey had to make no adjustments to the OMC's SDF save file. OK! That's all the pipe laying and masonry work -- now on to the science of looking at the differences between these channels!!
Closes FAMIS27660
For the CS dust monitors.
Clean & Bake: Passed both flow rate and zero count on the first try.
Staging BBSS lab: Passed both flow rate and zero count on the first try.
DR (diode room): Passed both flow rate and zero count on the first try.
LVEA:
5: Passed flow and zero count on the first try but it's left off as it's a pumped DM.
6: Passed both flow rate and zero count on the first try.
10: Passed both flow rate and zero count on the first try.
FCES: Passed both flow rate and zero count on the first try.
For the Labs.
LAB1 (Optics): Passed zero count on the first try but the flow was a little low, I increased it.
LAB2 (PCAL/Vac Prep): Passed both flow rate and zero count on the second try (1 ct of 0.03).
PSL: 101 (Anteroom), 102 (Laser room). I did not check these, RyanS will check them the next time he does a PSL excursion.
For the ends.
EndY: Passed both flow rate and zero count on the first try.
EndX: Passed the zero count on the first try but I had to increase the flow rate.
After a full docket of activities, we started to lock H1 around 1945 UTC (11:45am PT) with an initial alignment. DRMI locking took 9+ minutes, and H1 was back to Observing by 2110 UTC (1:10pm PT).
WP 12296
Patrick T., Gerardo M., Dave B.
This work has been completed. The PLC code on h0vacmr is now running git version 3687c4fc15a718dc18aa07996ebcc79a57df2810 at https://git.ligo.org/cds/ifo/beckhoff/lho-vacuum. The MEDM screens have been updated. Dave has completed the associated DAQ restart.
Since the vacuum PLC library code changed to add the Agilent IPCMini Ion Pump Controller, I built the updated library and installed it as part of this work. I ran into memory issues again, where it complained that too much memory was in use when I tried to open more than one Visual Studio instance. I also ran into a number of errors running the PowerShell script that generates the TwinCAT Visual Studio solution; a screenshot of one of them is attached. The others had more error messages, but I didn't catch them in a screenshot. The last attempt displayed an error, I think the one in the attached screenshot, but seemed to generate a working TwinCAT Visual Studio solution, so I left it at that and used it. While trying to get the script to run, I restarted the computer and cleaned out the modified and unversioned files in the git checkout.
Today I tuned up the FSS RefCav, as the TPD had been steadily falling over the last couple of weeks. I started with a power budget of the FSS path (leaving the ISS on and diffracting ~4%):
Clearly the single pass diffraction efficiency is well below its normal value of over 70%, so I proceeded to adjust the AOM alignment to improve it. Afterwards I adjusted the alignment of mirror M21 to improve the double pass diffraction efficiency. I then had to adjust the FSS EOM alignment to match the beam, it was almost clipping on the output aperture. The final power budget after all alignment tweaks (FSS In and AOM In were unchanged):
Try as I might, I could not find an AOM alignment with a higher diffraction efficiency (it usually hangs out in the mid-70% range). It might be time to start considering replacing the FSS AOM with a spare, as the peak single pass diffraction efficiency we've been able to achieve has been slowly falling over time.
Next I recovered the RefCav. Some alignment was needed using the RefCav input iris as a reference, and I realigned the RefCav Refl CCD camera to re-center the image. The RefCav locked without issue and I tweaked the alignment using the picomotor-controlled mirrors; the max TPD I could get while in the enclosure was 0.788 V, well below the maximums we've seen in the past. Next I aligned the RefCav Refl PD (since the beam alignment had shifted) and measured the RefCav visibility:
Visibility is also a little lower than usual, indicating there's some mode matching work to do (or maybe it's another indication that it's time to change the FSS AOM?). There was not much I could do about it in the short amount of time left in the maintenance window, so I moved on. With the lower than normal TPD I checked the FSS TF. The UGF is currently ~320 kHz, which is lower than the ~377 kHz we had at the end of the November NPRO swap. I left the FSS gains as they are (Common Gain of 14 dB, Fast Gain of 5 dB), but we could increase the Common Gain if we find we need more gain to compensate for the lower RefCav TPD. This completes the on-table RefCav alignment tweak.
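For reference, the visibility figure quoted above is typically computed from the REFL PD voltage off and on resonance. A minimal sketch (the voltages here are hypothetical examples, not the measured values):

```python
# Cavity visibility from the reflected-power photodiode: the fraction of
# incident light coupled into the cavity on resonance. Example voltages are
# hypothetical, not the measured RefCav values.
def visibility(v_unlocked, v_locked):
    """Visibility = (V_off_resonance - V_on_resonance) / V_off_resonance."""
    return (v_unlocked - v_locked) / v_unlocked

assert visibility(1.0, 0.25) == 0.75   # 75% of the light enters the cavity
```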
By the time I got to the Control Room the TPD had already fallen slightly, likely due to slight alignment changes as the enclosure returned to thermal equilibrium (this is normal). I tweaked the beam alignment from the Control Room and was able to get it back to ~0.775 V (this with the IMC locked). This completes LHO WP 12305.
Also, with the PSL cooling system flows not all the way back at their pre-chiller-swap levels, I very slightly increased the chiller flow rate (by decreasing the amount of internal bypass); it was oscillating between 2.7 and 2.8 lpm, and is now a solid 2.8 lpm (as read at the chiller front panel). The flow rate for the amplifiers returned to its usual 2.2 lpm, but the flows for the power meters and the return line are still slightly down from where they used to sit, by 0.1 lpm or less. We will monitor this and see how it affects the system (if at all; it's a very slight change).
Matt Todd, Camilla C. Continued from 82281. Layout D1400241.
We swapped the HWS fiber from 200um to 50um and used the CFCS11-A adjustable SMA fiber collimator to make the beam as small as possible (at the limit of the collimator); it was still larger than when this work was done at EX (81734), too large to align.
Instead we took some beam profile measurements of the beam out of the collimator; they are attached along with a photo of the table (L3 had already been removed). There were some strange wings on the beam profiler image making the D4Sigma values unusable; photo attached.
We left the HWS laser off as it isn't aligned.
I've done some fitting; it seems like an oddly small beam at the waist, but it's what the fit shows. I'll also try to confirm this by looking at the divergence angle coming from the 50um fiber and the lens strength of the collimator.
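As a rough version of that cross-check: for a multimode fiber the output half-angle is of order the fiber NA, so a collimator of focal length f produces a beam radius of roughly f * NA. A sketch with assumed values (a typical 50 um MM fiber NA of ~0.22 and a CFCS11-A focal length of ~11 mm; neither was measured here):

```python
# Rough estimate of the collimated beam radius out of a multimode fiber:
# half-angle divergence ~ NA, so radius ~ f * NA at the collimator output.
# NA and focal length below are assumed typical values, not measurements.
def collimated_radius(f_m, na):
    """Approximate collimated beam radius (m) for focal length f_m and fiber NA."""
    return f_m * na

r = collimated_radius(11e-3, 0.22)   # ~2.4 mm radius
assert abs(r - 2.42e-3) < 1e-5
```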
Camilla, Sheila, Erik
Erik points out that we've lost lock 54 times since November in the Guardian states 557 and 558 (TRANSITION_FROM_ETMX and LOWNOISE_ESD_ETMX).
We thought that part of the problem with this state was a glitch caused when the boost filter in DARM1 FM1 is turned off, which motivated Erik's change to the filter ramping on Tuesday (82263); that change was later reverted after two locklosses that happened 12 seconds after the filter ramp (82284).
Today we added 5 seconds to the pause after the filter is ramped off (previously the filter ramp time and the pause were both 10 seconds long, now the filter ramp time is still 10 seconds but the pause is 15 seconds). We hope this will allow us to better tell if the filter ramp is the problem or something that happens immediately after.
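The revised timing can be sketched as below. The numbers come from this log entry; the function itself is illustrative, not actual ISC_LOCK / Guardian code:

```python
# Illustrative timing of the revised state (values from this log entry;
# not actual ISC_LOCK / Guardian code).
RAMP_TIME = 10.0   # s: DARM1 FM1 ramp-off duration (unchanged)
PAUSE = 15.0       # s: pause started when the ramp-off is commanded (was 10 s)

def margin_after_ramp(ramp_time=RAMP_TIME, pause=PAUSE):
    """Seconds between the filter finishing its ramp and the state proceeding.

    A positive margin lets a glitch caused by the ramp itself be distinguished
    from one caused by whatever the state does next.
    """
    return pause - ramp_time

assert margin_after_ramp() == 5.0
```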
In the last two lock acquisitions, we've had the fast DARM and ETMY L2 glitch 10 seconds after DARM1 FM1 was turned off. Plots attached from Jan 26th (and zoom) and Jan 27th (and zoom). We expect this means the fast glitch is from FM1 turning off, but we've seen this glitch come and go in the past, e.g. 81638, where we thought we had fixed the glitch by never turning on DARM1 FM1; in fact we were still turning FM1 on, just later in the lock sequence.
In the lock losses we saw on Jan 14th (82277) after the OMC change (plot), I don't see the fast glitch, but there is a larger, slower glitch that causes the lockloss. One difference between that date and recently is that the SUS counts are double the size. We always have the large slow glitch; when the ground is moving more, do we struggle to survive it? Did the h1omc change (82263) fix the fast glitch from FM1 turning off (which seems to come and go), and were we just unlucky with the slower glitch and high ground motion on the day of the change?
Can see from the attached microseism plot that it was much worse around Jan 14th than now.
Around 2025-01-21 22:29:23 UTC (gps 1421533781) there was a lock-loss in the ISC_LOCK state 557 that happened before FM1 was turned off.
It appears to have happened about 15 seconds after executing the code block where self.counter == 2. This is about halfway through the 31-second wait period before executing the self.counter == 3,4 blocks.
See attached graph.