TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: Short lock stretches today for reasons unknown. Fortunately, relocking has been mostly straightforward.
H1 is currently relocking, so far up to TRANSITION_FROM_ETMX.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:26 | PCAL | Tony, Karen | PCal Lab | Local | Technical cleaning | 16:36 |
16:05 | TCS | Camilla | Opt Lab | n | Packing parts | 16:49 |
16:42 | VAC | Travis | MX | n | Check nitrogen dewar | 17:12 |
17:16 | TCS | Camilla | Optics Lab | n | Packing Parts | 18:16 |
18:49 | PCAL | Tony | PCal Lab | Local | Testing | 19:27 |
21:12 | PCAL | Tony | PCal Lab | Local | Testing | 21:16 |
22:53 | TCS | Camilla | Opt Lab | n | Parts cleanup | 23:18 |
TITLE: 09/13 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 14mph Gusts, 10mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Currently relocking and at MAX_POWER.
Lockloss @ 22:13 UTC - link to lockloss tool
No obvious cause; we were only locked for about 90 minutes. I don't see the ETMX glitch prior to the lockloss that I've seen in the others today.
01:07 UTC Observing
FAMIS 26008
Nothing looks out of the ordinary, and everything is consistent with the last edition of this FAMIS task (alog79990).
Lockloss @ 18:47 UTC - link to lockloss tool
No obvious cause; we were only locked for 20 minutes. ETMX started moving very slightly 43ms before the lockloss, but otherwise I don't see anything else suspicious.
H1 back to observing at 20:42 UTC. Automated relock, except that I moved PRM to help during DRMI acquisition. One lockloss during TRANSITION_FROM_ETMX, but otherwise uneventful.
After a lock loss this morning, Sheila and I spent some time looking further into the mode spacing for ALS (alog80065). We each took an arm and looked at the mode spacing and peak heights at good alignments and some not-as-good alignments. This exercise didn't yield any major revelations, but did confirm that our Y arm definitely doesn't have perfect mode matching; the X arm seems to be better than the Y.
Attachment 1 - ALS Y 00 mode to 2nd order mode spacing. This showed that yesterday Sheila and Ibrahim missed the small 1st order mode between these, and Sheila has since updated their FSR and FHOM values (alog80076).
Attachment 2 - An example of ALS Y locking below the max flashes, showing that some of our power is in the higher order modes.
Attachment 3 - ALS X equivalent of the comparison in attachment 1
Attachment 4 & attachment 5 - ALS X 1st order peak spacing and height for an okay alignment (4) vs a better alignment (5).
Attachment 6 & attachment 7 - Same as above but for the 2nd order modes.
Lockloss @ 16:36 UTC - link to lockloss tool
No obvious cause, but there's a small hit on ETMX about 100ms before the lockloss. We've often seen similar motion before locklosses like this.
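For reference, a minimal gwpy sketch of this kind of pre-lockloss check; the GPS time and channel choice below are placeholders, not the values from this event:

from gwpy.timeseries import TimeSeries

t_lockloss = 1410270978  # hypothetical lockloss GPS time, not this event's
chan = 'H1:SUS-ETMX_L3_MASTER_OUT_UL_DQ'  # one ETMX L3 drive quadrant (assumed choice)

# Fetch a few seconds either side of the lockloss and look for a glitch
# in the ~100 ms before it.
data = TimeSeries.get(chan, t_lockloss - 4, t_lockloss + 1)

plot = data.plot()
ax = plot.gca()
ax.set_ylabel('ETMX L3 drive [cts]')
ax.axvline(t_lockloss, color='r', linestyle='--')
plot.savefig('etmx_before_lockloss.png')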
H1 back to observing at 18:29 UTC. Fully automated relock after some brief ALS commissioning.
FAMIS 26295
Laser Status:
NPRO output power is 1.822W (nominal ~2W)
AMP1 output power is 64.33W (nominal ~70W)
AMP2 output power is 137.7W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked for 16 days, 21 hr 59 minutes
Reflected power = 22.18W
Transmitted power = 104.8W
PowerSum = 127.0W
FSS:
It has been locked for 0 days 8 hr and 13 min
TPD[V] = 0.8122V
ISS:
The diffracted power is around 1.5%
Last saturation event was 0 days 8 hours and 13 minutes ago
Possible Issues: (all are known)
AMP1 power is low
PMC reflected power is high
ISS diffracted power is low
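For illustration, the comparison this check boils down to; the readings are the ones quoted above, and the nominal ranges are rough assumptions based on the quoted nominals:

# Readings from this check; ranges are assumptions, not official limits.
readings = {
    'NPRO output [W]': (1.822, 1.9, 2.1),
    'AMP1 output [W]': (64.33, 68.0, 72.0),
    'AMP2 output [W]': (137.7, 135.0, 140.0),
}

for name, (value, lo, hi) in readings.items():
    flag = 'OK' if lo <= value <= hi else 'out of range (known issue?)'
    print(f'{name}: {value} -> {flag}')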
Fri Sep 13 08:12:45 2024 INFO: Fill completed in 12min 41secs
TITLE: 09/13 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 7mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over 6 hours. Looks like one automated relock overnight.
I just noticed a chat message from Erik that only one hour of seismic BLRMS data was being displayed in the control room. This is because, after h1dmt1 was restarted, only 1 hour of trend data was available in the last 48 hours due to the mis-mounting of the /gds trends directory. The trend files are now available back to yesterday afternoon, so a restart of the BLRMS monitors should expand the displayed data to at least 16 hours. I tried to contact the control room via TeamSpeak, but I did not get a reply. I will wait until the control room is staffed to restart the BLRMS monitors. This should have no effect other than to cause the BLRMS monitor displays to update.
I have just restarted the seismic monitors after clearing it with the control room. At present there should be 21 hours of data being displayed, which corresponds to the amount of trend data that has been stored since the /gds mount was fixed (when h1dmt1 was restarted). I don't see any problematic side effects. To get the full 48-hour history, we will have to restart the seismic monitors again once 48 hours of trends have been written (tomorrow afternoon PDT).
TITLE: 09/13 Owl Shift: 0500-1430 UTC (2200-0730 PDT), all times posted in UTC
STATE of H1: Observing at 152Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 6mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 has been locked and observing for 9 hours and 24 minutes.
A whole lot of nothing has happened during this shift.
03:22 UTC GRB-Short E511196
Sheila, Louis, Francisco
Following LHO:80044, we measured ASC-DHARD_P(Y) for different values of ASC-AS_A_DC_YAW_OFFSET and found the lowest values of DHARD_Y at an offset of -0.05.
The "DTT_AS_A_*.png"s show the outputs of DHARD_DARM_240912 when changing AS_A_DC_OFFSET with a reference trace for when the offset was nominal (ASC-AS_A_DC_YAW_OFFSET = -0.15). Each trace has a marker at the two highest peaks in the ASC-DHARD power spectrum (the middle plot on the left). DTT_AS_A_neg0_05 shows the lowest value in the power spectrum compared to DTT_AS_A_neg0_10, while preserving coherence (lowest-left) and phase values (middle-rigt), compared to DTT_AS_A_0. AS_A_NDSCOPE corroborate on the times at which these measurements were done and the values for AS_A offset.
Confirming that the four TF traces corresponding to the reports were all taken with a thermalized interferometer.
TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 150Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
IFO is in NLN and OBSERVING (since 19:43 UTC)
It's been a good day both for observing and commissioning.
OBSERVING:
We needed to do an initial alignment after the commissioning period ended, but were able to get to NLN fully automatically after that and have stayed there since.
COMMISSIONING:
Accepted SDFs attached.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
15:19 | FAC | Karen | Optics, Vac Prep Labs | N | Technical Cleaning | 15:42 |
16:08 | OPS | Mitchell, TJ | EX | N | Dust Monitor Pump Check | 17:08 |
TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PDT), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 9mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.11 μm/s
QUICK SUMMARY:
H1 has been locked and observing for the past 4 hours.
DMT seems to be having trouble retrieving data beyond the past hour for some reason.
The DMT PEM NUC trends have been replaced with a temporary ndscope.
Sheila, Ibrahim
Context: ALSY has been misbehaving (which on its own is not new). Usually, problems with ALS locking pertain to an inability to attain higher-magnitude flashes. However, in recent locks we have consistently been able to reach values of 0.8-0.9 cts, which is historically very lockable, yet ALSY has not been able to lock in these conditions. As such, Sheila and I investigated the extent of misalignment and mode mismatch in the ALSY laser.
Investigation:
We took two references: a "good" alignment, where ALSY caught swiftly with minimal swinging, and a "bad" alignment, where ALSY caught with frequent suspension swinging. We then compared their measured/purported higher-order mode widths and magnitudes. The two attached screenshots are from two recent locks (last 24hrs) from which we took this data. We used the known free spectral range and g-factor along with the ndscope measurements to obtain the higher-order mode spacing and then compared this to our measurements. While we did not get exact integer values (mode number estimate column), we convinced ourselves that these peaks were indeed our higher-order modes (to an extent that will be investigated more). After confirming this, we calculated the measured power distribution for these modes; a sketch of the arithmetic follows below.
The data is in the attached table screenshot (copying it in directly was not very readable).
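A sketch of that arithmetic; the peak offsets and heights below are placeholders, not our measured values, and the FSR and mode spacing are the numbers given in the follow-up comment below:

import numpy as np

fsr = 37.52e3    # arm free spectral range [Hz]
f_tms = 5.86e3   # higher-order (transverse) mode spacing [Hz]

# (frequency offset from the 00 mode [Hz], peak height [arb.]) -- placeholders
peaks = [(0.0, 0.80), (5.9e3, 0.04), (11.6e3, 0.18)]

heights = np.array([h for _, h in peaks])
power_fraction = heights / heights.sum()

for (offset, _), frac in zip(peaks, power_fraction):
    n_est = (offset % fsr) / f_tms   # mode number estimate; ideally near-integer
    print(f'offset {offset/1e3:5.1f} kHz: n ~ {n_est:4.2f}, {frac:5.1%} of power')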
Findings:
Next:
Investigation Ongoing
TJ and I had a look at this again this morning and realized that yesterday we misidentified the higher-order modes. In Ibrahim's screenshot, there is a small peak between the 00 mode and the one that is 18% of the total; this small peak is the misalignment mode, while the mode with 18% of the total is the mode mismatch. This fits with our understanding that part of the problem we have with ALSY locking is due to bad mode matching.
Attached is a quick script to find the arm higher-order mode spacing: the FSR is 37.52 kHz and the higher-order mode spacing is 5.86 kHz.
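The attached script isn't reproduced here, but the standard calculation looks something like the sketch below; the cavity length and mirror radii of curvature are nominal aLIGO numbers (assumptions), so it won't exactly reproduce the 5.86 kHz quoted above:

import numpy as np

c = 299_792_458.0   # speed of light [m/s]
L = 3994.5          # arm cavity length [m] (nominal, assumed)
R_itm = 1934.0      # ITM radius of curvature [m] (nominal, assumed)
R_etm = 2245.0      # ETM radius of curvature [m] (nominal, assumed)

fsr = c / (2 * L)   # free spectral range

# Cavity g-factors; both are negative for the aLIGO arms, so take the
# -sqrt branch of the arccos for the transverse mode spacing.
g1 = 1 - L / R_itm
g2 = 1 - L / R_etm
f_tms = fsr * np.arccos(-np.sqrt(g1 * g2)) / np.pi

print(f'FSR = {fsr / 1e3:.2f} kHz')
# In a cavity scan the HOM peaks appear within one FSR of a 00 resonance,
# so fold each order back and report the distance to the nearest 00 peak.
for n in (1, 2):
    x = (n * f_tms) % fsr
    print(f'order {n}: {min(x, fsr - x) / 1e3:.2f} kHz from the nearest 00 peak')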
A while ago Oli found that some earthquakes cause IMC splitmon saturations, possibly causing locklosses. I asked Daniel about this with respect to the tidal system to see if we could improve the offloading some. After some digging, we found that some of the gains in the IMC get lowered to -32 dB at high power, greatly reducing the effective range of the IMC SPLITMON. He and Sheila decided that the best place to recover this gain was during the LASER_NOISE_SUPPRESSION state (575), so Sheila added some code to that state to redistribute some of the gain (lines 5778-5788):
self.gain_increase_counter = 0
if self.counter == 5 and self.timer['wait']:
    if self.gain_increase_counter < 7:  # increase the IMC fast gain by this many dB
        # redistribute gain in IMC servo so that we don't saturate splitmon in earthquakes, JW DS SED
        ezca['IMC-REFL_SERVO_IN1GAIN'] -= 1
        ezca['IMC-REFL_SERVO_IN2GAIN'] -= 1
        ezca['IMC-REFL_SERVO_FASTGAIN'] += 1
        time.sleep(0.1)
        self.gain_increase_counter += 1
    else:
        self.counter += 1
We tried running this, but an error in the code broke the lock. That's fixed now; the lines are commented out in ISC_LOCK, and we'll try again some other day.
This caused 2 locklosses, so it took a little digging to figure out what was happening. The idea is to increase H1:IMC-REFL_SERVO_FASTGAIN to compensate for reducing H1:IMC-REFL_SERVO_IN1GAIN and H1:IMC-REFL_SERVO_IN2GAIN, all analog gains used in IMC/tidal controls. It turns out there is a decorator used in almost every state of IMC_LOCK that sets H1:IMC-REFL_SERVO_IN1GAIN to some value, so when ISC_LOCK changed all 3 of these gains, IMC_LOCK was coming in after and resetting FASTGAIN. This is shown in the attached trend: on the middle plot the IN1 and IN2 gains step down like they are supposed to, but the FASTGAIN does a sawtooth caused by two guardians controlling this gain. The decorator is called IMC_power_adjust_func() in ISC_library.py and is applied as @ISC_library.IMC_power_adjust in IMC_LOCK. The decorator just looks at the value of the FASTGAIN; Daniel suggests that it would be best for this decorator to look at all of the gains and do this a little smarter. I think RyanS will look into this, but it looks like redistributing gain in the IMC is not straightforward.
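To make the failure mode concrete, here is a toy model of two controllers writing the same gain; all names are hypothetical and a plain dict stands in for the ezca channels, so this is an illustration of the interaction, not the actual Guardian code:

# ISC_LOCK trades gain from the input stages to FAST; an IMC_LOCK-style
# decorator then pins FAST back to nominal, producing the sawtooth.
gains = {'IN1': -32, 'IN2': -32, 'FAST': 10}

def isc_lock_step(g):
    """Move 1 dB from each input gain to the fast gain (net path gain unchanged)."""
    g['IN1'] -= 1
    g['IN2'] -= 1
    g['FAST'] += 1

def imc_lock_decorator_step(g, nominal_fast=10):
    """Only looks at one gain and resets it, undoing the redistribution."""
    g['FAST'] = nominal_fast

for _ in range(3):
    isc_lock_step(gains)
    print('after ISC_LOCK:', gains)
    imc_lock_decorator_step(gains)
    print('after IMC_LOCK:', gains)  # FAST sawtooths while IN1/IN2 keep falling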
tagging this with SPI. This would be good to compare against. The SPI should reduce HAM2-3 motion and reduce IMC length changes coming directly from ground motion. If the IMC drive needs to match the arm length changes, then it won't help (unless we do some feedforward of the IMC control to the ISIs?).