After a lock loss this morning, Sheila and I spent some time looking further into the mode spacing for ALS (alog80065). We each took an arm and looked at the mode spacing and peak heights at good alignments and at some not-as-good alignments. This exercise didn't yield any major revelations, but it did confirm that our Y arm definitely doesn't have perfect mode matching; the X arm seems to be better than the Y arm.
Attachment 1 - ALS Y 00 mode to 2nd-order mode spacing. This showed that yesterday Sheila and Ibrahim had missed the small 1st-order mode between these, and Sheila has since updated their FSR and FHOM values (alog80076).
Attachment 2 - An example of ALS Y locking below the max flashes, showing that some of our power is in the higher order modes.
Attachment 3 - ALS X equivalent of attachment 1, for comparison
Attachment 4 & attachment 5 - ALS X 1st order peak spacing and height for an okay alignment (4) vs a better alignment (5).
Attachment 6 & attachment 7 - Same as above but for the 2nd order modes.
Lockloss @ 16:36 UTC - link to lockloss tool
No obvious cause, but there's a small hit on ETMX about 100ms before the lockloss. We've often seen similar motion before locklosses like this.
H1 back to observing at 18:29 UTC. Fully automated relock after some brief ALS commissioning.
FAMIS 26295
Laser Status:
NPRO output power is 1.822W (nominal ~2W)
AMP1 output power is 64.33W (nominal ~70W)
AMP2 output power is 137.7W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 16 days, 21 hr 59 minutes
Reflected power = 22.18W
Transmitted power = 104.8W
PowerSum = 127.0W
FSS:
It has been locked for 0 days 8 hr and 13 min
TPD[V] = 0.8122V
ISS:
The diffracted power is around 1.5%
Last saturation event was 0 days 8 hours and 13 minutes ago
Possible Issues: (all are known)
AMP1 power is low
PMC reflected power is high
ISS diffracted power is low
Fri Sep 13 08:12:45 2024 INFO: Fill completed in 12min 41secs
TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 7mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over 6 hours. Looks like one automated relock overnight.
I just noticed a chat message from Erik that only one hour of Seismic blrms data was being displayed in the control room. This is because, after h1dmt1 was restarted, only 1 hour of trend data was available in the last 48 hours due to the mis-mounting of the /gds trends directory. The trend files are now available back to yesterday afternoon, so a restart of the blrms monitors should expand the displayed data to at least 16 hours. I tried to contact the control room via teamspeak, but I did not get a reply. I will wait until the control room is staffed to restart the blrms monitors. This should have no effect other than to cause the blrms monitor displays to update.
I have just restarted the Seismic monitors after clearing it with the control room. At present there should be 21 hours of data being displayed which corresponds to the amount of trend data that have been stored since the /gds mount was fixed (h1dmt1 was restarted). I don't see any problematic side effects. To get the full 48 hour history, we will have to restart the seismic monitors again once the trends have been written for 48 hours (tomorrow afternoon PDT).
TITLE: 09/13 Owl Shift: 0500-1430 UTC (2200-0730 PST), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 has been locked and observing for 9 hours and 24 minutes.
A whole lot of nothing has happened during this shift.
3:22 UTC GRB-Short E511196
Sheila, Louis, Francisco
Following LHO:80044, we measured ASC-DHARD_P(Y) for different values of ASC-AS_A_DC_YAW_OFFSET and found the lowest values of DHARD_Y at an offset of -0.05.
The "DTT_AS_A_*.png"s show the outputs of DHARD_DARM_240912 when changing AS_A_DC_OFFSET with a reference trace for when the offset was nominal (ASC-AS_A_DC_YAW_OFFSET = -0.15). Each trace has a marker at the two highest peaks in the ASC-DHARD power spectrum (the middle plot on the left). DTT_AS_A_neg0_05 shows the lowest value in the power spectrum compared to DTT_AS_A_neg0_10, while preserving coherence (lowest-left) and phase values (middle-rigt), compared to DTT_AS_A_0. AS_A_NDSCOPE corroborate on the times at which these measurements were done and the values for AS_A offset.
Confirming that the four TF traces corresponding to the reports were all taken with a thermalized interferometer.
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
IFO is in NLN and OBSERVING (since 19:43 UTC)
It's been a good day both for observing and commissioning.
OBSERVING:
We needed to do an initial alignment after the commissioning period ended, but were able to get to NLN fully automatically after that and have stayed there since.
COMMISSIONING:
Accepted SDFs attached.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:19 | FAC | Karen | Optics, Vac Prep Labs | N | Technical Cleaning | 15:42 |
16:08 | OPS | Mitchell, TJ | EX | N | Dust Monitor Pump Check | 17:08 |
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 has been locked and observing for the past 4 hours.
DMT seems to be having trouble retrieving data beyond the past hour for some reason.
The DMT PEM NUC trends have been replaced with a temporary ndscope.
Sheila, Ibrahim
Context: ALSY has been misbehaving (which on its own is not new). Usually, problems with ALS locking pertain to an inability to attain higher magnitude flashes. However, in recent locks we have consistently been able to reach values of 0.8-0.9 cts, which is historically very lockable, but ALSY has not been able to lock in these conditions. As such, Sheila and I investigated the extent of misalignment and mode-mismatching in the ALSY Laser.
Investigation:
We took two references: a "good" alignment, where ALSY caught swiftly with minimal swinging, and a "bad" alignment, where ALSY caught with frequent suspension swinging. We then compared their measured/purported higher-order mode widths and magnitudes. The two attached screenshots are from two recent locks (last 24 hrs) from which we took this data. We used the known free spectral range and g-factor along with the ndscope measurements to obtain the higher-order mode spacing and then compared this to our measurements. While we did not get exact integer values (mode number estimate column), we convinced ourselves that these peaks were indeed our higher-order modes (to an extent that will be investigated further). Having made that identification, we then calculated the measured power distribution across these modes.
The data is in the attached table screenshot (a copied-in text version was not very readable).
Findings:
Next:
Investigation Ongoing
TJ and I had a look at this again this morning and realized that yesterday we misidentified the higher-order modes. In Ibrahim's screenshot, there is a small peak between the 00 mode and the one that is 18% of the total; this small peak is the misalignment mode, while the mode with 18% of the total is the mode-mismatch mode. This fits with our understanding that part of the problem we have with ALSY locking is due to bad mode matching.
Attached is a quick script to find the arm higher-order mode spacing; the FSR is 37.52 kHz and the higher-order mode spacing is 5.86 kHz.
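For reference, here is a minimal sketch of that calculation (not the attached script). The arm length and test-mass radii of curvature below are assumed placeholder values, so the numbers it prints will not exactly reproduce the 37.52 kHz / 5.86 kHz quoted above:

import numpy as np

c = 299792458.0   # speed of light [m/s]
L = 3994.5        # arm length [m] (nominal)
R_itm = 1934.0    # ITM radius of curvature [m] (assumed placeholder)
R_etm = 2245.0    # ETM radius of curvature [m] (assumed placeholder)

fsr = c / (2 * L)                       # free spectral range
g1, g2 = 1 - L / R_itm, 1 - L / R_etm   # cavity g-factors (both negative here)

# transverse mode spacing; the arccos argument carries the sign of g1 (== sign of g2)
f_tms = fsr / np.pi * np.arccos(np.sign(g1) * np.sqrt(g1 * g2))

print(f"FSR = {fsr/1e3:.2f} kHz, transverse mode spacing = {f_tms/1e3:.2f} kHz")
for n in (1, 2):
    x = (n * f_tms) % fsr
    print(f"order {n}: {x/1e3:.2f} kHz above the 00 mode "
          f"({min(x, fsr - x)/1e3:.2f} kHz from the nearest 00 resonance)")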
-Brice, Sheila, Camilla
We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss and which channels are affected more than others.
We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:
(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)
We wrote a Python script to pull the data from various auxiliary channels for the 15 seconds before a lockloss. Graphs for each channel are created, a trace for each lockloss time is stacked on each of the graphs, and the graphs are saved to a png file. All the graphs have been shifted so that the time of lockloss is at t=0.
Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet microseism times (all taken from between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.
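For anyone who wants the flavor of it, a rough sketch of the approach (not the actual script; the channel and the GPS times are illustrative), using gwpy to pull a window before each lockloss and histogram the maximum excursion in the last second:

import numpy as np
import matplotlib.pyplot as plt
from gwpy.timeseries import TimeSeries

channel = 'H1:ASC-DC2_P_IN1_DQ'          # one of the aux channels in the list
lockloss_gps = [1400000000, 1400003600]  # placeholder lockloss GPS times

max_excursions = []
plt.figure()
for t0 in lockloss_gps:
    # 15 s of data ending at the lockloss, shifted so the lockloss is at t = 0
    ts = TimeSeries.get(channel, t0 - 15, t0)
    plt.plot(ts.times.value - t0, ts.value, alpha=0.5)
    # last second before lockloss (the 0.1 s exclusion of the later comment
    # would crop to t0 - 0.1 instead of t0)
    last = ts.crop(t0 - 1, t0)
    max_excursions.append(np.max(np.abs(last.value)))
plt.xlabel('Time relative to lockloss [s]')
plt.savefig(f'{channel.replace(":", "_")}_traces.png')

plt.figure()
# normalize counts by the number of lockloss times
weights = np.full(len(max_excursions), 1 / len(max_excursions))
plt.hist(max_excursions, bins=20, weights=weights)
plt.xlabel('Max |displacement| in the last second')
plt.savefig(f'{channel.replace(":", "_")}_hist.png')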
These channels are provided via a list inside the python file and can be easily adjusted to fit a user's needs. We used the following channels:
After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 s before lockloss from the analysis. This is because (in the original post's plots) the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of the last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss itself causing a runaway in the channels. I also combined the ground motion locklosses (EQ, Windy, and Microseism) into one set of plots (45 locklosses) and left the Observe (and Refined) tagged locklosses as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel:
Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in each bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe/Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (I am going to look next at times without locklosses for comparison).
I started looking at the DC2 channel and the REFL_B channel to see if there is a threshold on REFL_B that could be used for a new lockloss tag. I plotted the last eight seconds before lockloss for the various lockloss times. This time I split the times onto different graphs based on whether the DC2 max displacement from zero in the last second before lockloss was above 0.06 (based on the histogram in the previous comment): Greater = the max displacement is greater than 0.06, Less = the max displacement is less than 0.06. However, I discovered that some of the locklosses that are above 0.06 for the DC2 channel are failing the logic test in the code: they get treated as having a max displacement less than 0.06 and get plotted on the lower plots. I wonder if this is also happening in the histograms, but that would only mean that we are underestimating the number of locklosses above the threshold. This could be suppressing possible bimodal distributions for other histograms as well. (Looking into debugging this.)
I split the locklosses into 5 groups of 8 and 1 group of 5 to make it easier to distinguish between the lines in the plots.
Based on the plots, I think a reasonable threshold for H1:ASC-REFL_B_DC_PIT_OUT_DQ would be 0.06 within the last 3 seconds prior to lockloss.
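A minimal sketch of what such a tag check could look like (the function name is hypothetical; the channel and threshold are the ones discussed above):

import numpy as np
from gwpy.timeseries import TimeSeries

def refl_b_threshold_tag(lockloss_gps, threshold=0.06, window=3):
    """Hypothetical check: True if REFL_B DC PIT exceeded the threshold
    within `window` seconds before the lockloss."""
    ts = TimeSeries.get('H1:ASC-REFL_B_DC_PIT_OUT_DQ',
                        lockloss_gps - window, lockloss_gps)
    return bool(np.max(np.abs(ts.value)) > threshold)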
Fixed the logic issue for splitting the plots into pass/fail of the 0.06 threshold, as seen in the attached plot.
The histograms were unaffected by the issue.
Added the code to GitLab.
Lockloss during SQZ commissioning, during a suspect ZM4 move.
1410197873. Maybe squeezing can cause a lockloss... we lost lock 400 ms after an 80 urad ZM4 pitch move (ramp time 0.1 s), plot attached. Maybe this caused extra scatter in the IFO signal. We were seeing a scatter peak in DARM around 100 Hz from PSAMS settings near where we were.
Took a calibration sweep at ~13ish during an H1 NLN lock.
BB Start and End: 15:27 UTC, 15:35 UTC
File Names:
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240912T152702Z.xml
Simulines GPS Start and End: 1410190636, 1410192001
File Names:
File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T153646Z.hdf5
Calibration Monitor Screenshot taken right after hitting run on BB attached.
Camilla, Ibrahim, and I ran a second set of simulines. This is not a full calibration measurement. Screenshot of the monitor lines attached. We got the following output at the end of the measurement; note the 30 minutes between 'Ending lockloss monitor' and 'File written':
2024-09-12 17:16:07,510 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-09-12 17:48:05,831 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,845 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,856 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,866 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,876 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T165244Z.hdf5
Marc, Daniel
We measured the gain and phase difference between the new DAC and the existing 20-bit DAC in SUS ETMX. For this we injected 1 kHz sine waves and measured the gain and phase shifts between the two. We started with a digital gain value of 275.65 for the new DAC and adjusted it to 275.31 after the measurement to keep the gains identical. The new DAC implements a digital AI filter that has a gain of 1.00074 and a phase of -5.48° at 1 kHz, which corresponds to a delay of 15.2 µs.
This puts the relative gain (new/old) at 1.00074±0.00125 and the delay at 13.71±0.66 µs (see the short calculation after the table). The variations may be due to gain variations in the LIGO DAC, the 20-bit DAC, the ADC, or the AA chassis.
DAC Ch | Channel Name | Gain | Adjusted Gain | Diff (%) | Phase (°) | Delay (µs) |
0 | H1:SUS-ETMX_L3_ESD_DC | 1.00239 | 1.00114 | 0.11% | -5.29955 | -14.72 |
1 | H1:SUS-ETMX_L3_ESD_UR | 1.00026 | 0.99901 | -0.10% | -5.10734 | -14.19 |
2 | H1:SUS-ETMX_L3_ESD_LR | 1.00000 | 0.99875 | -0.12% | -4.93122 | -13.70 |
3 | H1:SUS-ETMX_L3_ESD_UL | 1.00103 | 0.99978 | -0.02% | -5.11118 | -14.20 |
4 | H1:SUS-ETMX_L3_ESD_LL | 1.00088 | 0.99963 | -0.04% | -5.21524 | -14.49 |
8 | H1:SUS-ETMX_L1_COIL_UL | 1.00400 | 1.00275 | 0.27% | -4.72888 | -13.14 |
9 | H1:SUS-ETMX_L1_COIL_LL | 1.00295 | 1.00170 | 0.17% | -4.88883 | -13.58 |
10 | H1:SUS-ETMX_L1_COIL_UR | 1.00125 | 1.00000 | 0.00% | -5.08727 | -14.13 |
11 | H1:SUS-ETMX_L1_COIL_LR | 1.00224 | 1.00099 | 0.10% | -4.92882 | -13.69 |
12 | H1:SUS-ETMX_L2_COIL_UL | 1.00325 | 1.00200 | 0.20% | -4.78859 | -13.30 |
13 | H1:SUS-ETMX_L2_COIL_LL | 1.00245 | 1.00120 | 0.12% | -4.55283 | -12.65 |
14 | H1:SUS-ETMX_L2_COIL_UR | 1.00175 | 1.00050 | 0.05% | -4.52503 | -12.57 |
15 | H1:SUS-ETMX_L2_COIL_LR | 1.00344 | 1.00219 | 0.22% | -5.00466 | -13.90 |
 | Average | 1.00199 | 1.00074 | 0.07% | -4.93611 | -13.71 |
 | Standard Deviation | 0.00125 | 0.00125 | 0.13% | 0.23812 | 0.66 |
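For reference, the delay column is just the measured phase converted at the 1 kHz injection frequency, e.g. for the average row:

f_inj = 1000.0          # injection frequency [Hz]
phase_deg = -4.93611    # average phase difference from the table
delay_us = phase_deg / 360.0 / f_inj * 1e6
print(f"{delay_us:.2f} us")   # -13.71 us, matching the table average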
FPGA filter is
zpk([585.714+i*32794.8;585.714-i*32794.8;1489.45+i*65519.1;1489.45-i*65519.1;3276.8+i*131031; \
3276.8-i*131031;8738.13+i*261998;8738.13-i*261998], \
[11555.6+i*17294.8;11555.6-i*17294.8;2061.54+i*26720.6;2061.54-i*26720.6;75000+i*93675; \
75000-i*93675;150000+i*187350;150000-i*187350;40000],1,"n")
A while ago Oli found that some earthquakes cause IMC splitmon saturations, possibly causing locklosses. I asked Daniel about this with respect to the tidal system to see if we could improve the offloading some. After some digging we found that some of the gains in the IMC get lowered to -32 dB at high power, greatly reducing the effective range of the IMC SPLITMON. He and Sheila decided that the best place to recover this gain was during the LASER_NOISE_SUPPRESSION state (575), so Sheila added some code to that state to redistribute some of the gain (lines 5778-5788):
self.gain_increase_counter = 0
if self.counter == 5 and self.timer['wait']:
    if self.gain_increase_counter < 7:  # increase the IMC fast gain by this many dB
        # redistribute gain in IMC servo so that we don't saturate splitmon in earthquakes, JW DS SED
        ezca['IMC-REFL_SERVO_IN1GAIN'] -= 1
        ezca['IMC-REFL_SERVO_IN2GAIN'] -= 1
        ezca['IMC-REFL_SERVO_FASTGAIN'] += 1
        time.sleep(0.1)
        self.gain_increase_counter += 1
    else:
        self.counter += 1
We tried running this, but an error in the code broke the lock. That's fixed now, the lines are commented out in ISC_LOCK, and we'll try again some other day.
This caused 2 locklosses, so it took a little digging to figure out what was happening. The idea is to increase H1:IMC-REFL_SERVO_FASTGAIN to compensate for reducing H1:IMC-REFL_SERVO_IN1GAIN and H1:IMC-REFL_SERVO_IN2GAIN, all analog gains used in IMC/tidal controls. It turns out there is a decorator used in almost every state of IMC_LOCK that sets H1:IMC-REFL_SERVO_IN1GAIN to some value, so when ISC_LOCK changed all 3 of these gains, IMC_LOCK was coming in afterwards and resetting FASTGAIN. This is shown in the attached trend: in the middle plot the IN1 and IN2 gains step down like they are supposed to, but the FASTGAIN does a sawtooth caused by two guardians controlling this gain. The decorator is IMC_power_adjust_func() in ISC_library.py and is called as @ISC_library.IMC_power_adjust in IMC_LOCK. The decorator just looks at the value of the FASTGAIN; Daniel suggests that it would be best for this decorator to look at all of the gains and do this a little smarter. I think RyanS will look into this, but it looks like redistributing gain in the IMC is not straightforward.
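A very rough sketch of the kind of decorator check Daniel is suggesting (illustrative only; this is not the actual IMC_power_adjust_func(), and it assumes the quantity worth preserving is the combined input + fast gain in dB):

def IMC_power_adjust_smarter(target_total_db):
    # illustrative assumption: track the total of the gains rather than FASTGAIN alone,
    # so a redistribution done by ISC_LOCK (IN1 down, FAST up) is left untouched
    in1 = ezca['IMC-REFL_SERVO_IN1GAIN']
    fast = ezca['IMC-REFL_SERVO_FASTGAIN']
    total = in1 + fast
    if total != target_total_db:
        # move only IN1 by the amount the total is off, instead of resetting it outright
        ezca['IMC-REFL_SERVO_IN1GAIN'] = in1 + (target_total_db - total)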
Tagging this with SPI. This would be good to compare against. The SPI should reduce HAM2-3 motion and reduce IMC length changes coming directly from ground motion. If the IMC drive is needed to match the arm length changes, then it won't help (unless we do some feedforward of the IMC control to the ISIs?).