Reports until 16:30, Friday 24 January 2025
LHO General
ryan.short@LIGO.ORG - posted 16:30, Friday 24 January 2025 (82449)
Ops Day Shift Summary

TITLE: 01/25 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Few hours this morning spent working on calibration, then a lockloss caused another couple hours of reacquisition time this afternoon. H1 has been observing for almost 1.5 hours.

LOG:

Start Time System Name Location Laser_Haz Task Time End
17:16 SAFETY LASER HAZ LVEA YES LVEA is Laser HAZARD Ongoing
15:58 FAC Mitchell LVEA - Checking scissor lifts 16:19
16:19 FAC Kim Opt Lab N Technical cleaning 16:45
18:41 ISC Keita, Jennie, Mayank, Sivananda Opt Lab YES (local) ISS array work 20:24
H1 PSL
ryan.short@LIGO.ORG - posted 16:04, Friday 24 January 2025 (82451)
PSL Status Report - Weekly

FAMIS 26352

Laser Status:
    NPRO output power is 1.85W
    AMP1 output power is 70.23W
    AMP2 output power is 137.2W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 3 days, 4 hr 31 minutes
    Reflected power = 26.16W
    Transmitted power = 102.5W
    PowerSum = 128.6W

FSS:
    It has been locked for 0 days 1 hr and 44 min
    TPD[V] = 0.6629V

ISS:
    The diffracted power is around 3.6%
    Last saturation event was 0 days 3 hours and 19 minutes ago


Possible Issues:
    PMC reflected power is high
    FSS TPD is low

RefCav alignment will likely need to be fixed on-table next Tuesday (I can try touching it up with picos if there's some TOO downtime this weekend, but I don't expect to get much improvement). PMC Refl being high is nothing new.

H1 General
ryan.crouch@LIGO.ORG - posted 16:01, Friday 24 January 2025 - last comment - 16:59, Friday 24 January 2025(82450)
OPS Friday EVE shift start

TITLE: 01/25 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 16mph Gusts, 11mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 16:59, Friday 24 January 2025 (82454)SQZ

I dropped Observing from 00:49 to 00:56 to adjust the SQZer: I brought H1:SQZ-ADF_OMC_TRANS_PHASE back to -136 (alog 82421), and after the servo was done I adjusted the OPO temperature. I accepted the new phase in SDF.

Images attached to this comment
H1 ISC (PEM)
jennifer.wright@LIGO.ORG - posted 15:54, Friday 24 January 2025 (82447)
Moving PR2 spot analysis

Sheila, Jennie W, Ryan S

Summary: The camera servos were accidentally turned off the last time we moved PR3. This measurement is worth another try.

Analysis of why we lost lock the other day while doing the PR2 spot move in lock, which involved moving the PR3 yaw alignment and pico-ing to stay on the POP and POPAIR PDs; see image.

When we first started altering the yaw of PR3, at the first cursor, the circulating power in the arms started to increase, and around 17:22:02 UTC the circulating power began to go down, as did LSC-POP_A. About 30 minutes after this the circulating power began to recover as we stopped changing the PR3 position and the pico-motor position. We are not sure why this happened. After this period we started moving PR3 yaw down again; the circulating power and POP_A power decreased, and then we lost lock.

Over this period, when we were not actively changing the alignment, PR2 was still moving. So we checked the camera servos to see if they move PR2 (they don't), but we discovered that the camera servos had been switched off by the camera guardian, see image.

We realised that this was because the PR2_SPOT_MOVE guardian state we had ISC_LOCK in has a state number less than 577, which tripped this condition in the CAMERA_SERVO guardian.

The CAMERA_SERVO guardian went to state 500, as shown in the final row of the ndscope at the first cursor. The node then stalled there, because the PR2_SPOT_MOVE state in ISC_LOCK does not contain a call to the unstall-nodes function, instead of switching on the ADS servos and then trying to get back to the CAMERA_SERVO_ON state as in its state graph.

We altered the CAMERA_SERVO guardian to no longer turn off the camera servos when it thinks the IFO is unlocked (i.e. in a low-numbered state), since this should be handled by ISC_LOCK, which manages it.
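Schematically, the change amounts to removing a state-number threshold check. The sketch below is illustrative Python only, not the actual guardian code; the 577 threshold comes from the log, while the state numbers and names here are made up:

```python
# Hypothetical sketch of the CAMERA_SERVO logic change described above.
# Only the 577 threshold is from the log; everything else is illustrative.

CAMERA_SERVO_ON = 600   # made-up state number for illustration
TURN_SERVOS_OFF = 500   # state the node was seen going to

def camera_servo_target(isc_lock_state_number, old_behavior=False):
    """Decide whether CAMERA_SERVO should switch the camera servos off.

    Old behavior: any ISC_LOCK state numbered below 577 (which includes
    PR2_SPOT_MOVE) was treated as 'unlocked', so the servos got switched off.
    New behavior: CAMERA_SERVO no longer second-guesses the lock state;
    that is left to ISC_LOCK, which manages this node.
    """
    if old_behavior and isc_lock_state_number < 577:
        return TURN_SERVOS_OFF
    return CAMERA_SERVO_ON
```

With the old behavior, sitting in PR2_SPOT_MOVE (state number < 577) sends the node to the servos-off state; with the new behavior it stays in CAMERA_SERVO_ON.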

We still need to think about why our overall circulating power got better, then worse, several times during these changes, and why precisely we lost lock.

Images attached to this report
H1 General (Lockloss)
ryan.short@LIGO.ORG - posted 13:37, Friday 24 January 2025 - last comment - 15:16, Friday 24 January 2025(82445)
Lockloss @ 20:42 UTC

Lockloss @ 20:42 UTC - link to lockloss tool

No obvious cause, but the wind had recently picked up and looks like there was an ETMX glitch immediately before the lockloss.

Images attached to this report
Comments related to this report
ryan.short@LIGO.ORG - 15:16, Friday 24 January 2025 (82448)

H1 back to observing at 23:10 UTC. Longer acquisition due to lots of low-state locklosses with seemingly no explanation (e.g. ALS dropping out unexpectedly for both arms). Eventually the issues resolved themselves and relocking proceeded automatically.

H1 General
ryan.short@LIGO.ORG - posted 12:20, Friday 24 January 2025 (82443)
H1 Out of Observing for Calibration Fixes

H1 dropped observing from 17:16 to 20:14 UTC for fixes to the calibration. A log entry detailing specifically what was done is to come from Louis/Evan.

H1 DetChar
dishari.malakar@LIGO.ORG - posted 10:43, Friday 24 January 2025 (82442)
DQ shift report for 30 Dec 2024 - 5 Jan 2025

Summary of the report:

 

Full report: link.

LHO VE
david.barker@LIGO.ORG - posted 10:27, Friday 24 January 2025 (82441)
Fri CP1 Fill

Fri Jan 24 10:14:33 2025 INFO: Fill completed in 14min 29secs

Jordan confirmed a good fill curbside. TCmins [-91C, -90C] OAT (4C, 39F), deltaTempTime 10:14:22

Images attached to this report
H1 CAL
matthewrichard.todd@LIGO.ORG - posted 09:48, Friday 24 January 2025 (82440)
Calibration back to using new anti-aliasing filters in DCPD channels
[Evan Louis Matthew]

This morning, after Evan and Louis fixed the anti-aliasing filters, we used our 'lockloss-less' recipe to re-engage the filters while smoothly changing the demod phase. This required a slight adjustment to the arguments of the command from LHO:82430 (listed below).

The new filters seem to be working as expected and are not causing yesterday's calibration error. This transition was done without a lockloss.

This is a placeholder alog to state that the calibration and the ifo have been reverted to their nominal state from this morning. I'll follow up and edit this entry with additional details later tonight/ tomorrow morning.

To bring the filters back on and step the phase rotation back to the modified angle over the same ramp time as the filters' ramps:
cdsutils switch H1:OMC-DCPD_A0 FM10 ON; cdsutils switch H1:OMC-DCPD_B0 FM10 ON; cdsutils step -s 0.065 H1:OMC-LSC_PHASEROT -- -1.0,77
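For reference, a minimal Python sketch of the ramp that the `cdsutils step` command above requests, assuming (my reading of the CLI, not confirmed here) that `-1.0,77` means 77 increments of -1.0 deg with one step every 0.065 s (the `-s` interval):

```python
# Sketch of the phase-step schedule implied by
#   cdsutils step -s 0.065 H1:OMC-LSC_PHASEROT -- -1.0,77
# assuming "-1.0,77" = 77 steps of -1.0 deg, one every 0.065 s.

def step_schedule(start, step, count, dt):
    """Return (time, setpoint) pairs for an incremental EPICS step ramp."""
    return [(i * dt, start + i * step) for i in range(1, count + 1)]

# Start at the nominal +56 deg phase (see alog 82439 in this thread).
sched = step_schedule(start=56.0, step=-1.0, count=77, dt=0.065)
net_change = sched[-1][1] - 56.0   # total phase moved: -77 deg
ramp_time = sched[-1][0]           # total ramp duration: ~5 s
```

The net -77 deg step takes the phase rotation from the nominal +56 deg back to the modified -21 deg over roughly 5 seconds, matching the filter ramp time.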
LHO General (Lockloss)
ryan.short@LIGO.ORG - posted 07:41, Friday 24 January 2025 - last comment - 08:49, Friday 24 January 2025(82438)
Ops Day Shift Start

TITLE: 01/24 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY: H1 lost lock just a half hour ago at 15:09 from no obvious cause (link to lockloss tool) and is relocking; it just reached DRMI.

Comments related to this report
ryan.short@LIGO.ORG - 08:49, Friday 24 January 2025 (82439)ISC

H1 back to observing at 16:45 UTC. Had to help PRM during DRMI locking, but otherwise this was an automatic relock.

I updated OMC-LSC_PHASEROT from -21 to 56, as TJ pointed out in his alog from last night, and accepted it in both the SAFE and OBSERVE SDF tables (screenshots attached). Since the OMC was already locked by the time I did this, I just used the command from alog 82430, which worked and did not cause a lockloss. The incorrect value is possibly why the calibration overnight looked strange.

Images attached to this comment
H1 General
thomas.shaffer@LIGO.ORG - posted 03:17, Friday 24 January 2025 - last comment - 04:06, Friday 24 January 2025(82436)
Ops Owl Update

During relocking, H1 couldn't get a DRMI or PRMI, so it went to the Check_Mich_Fringes state, but we lost lock a few seconds into it. The LASER_PWR node was still moving up to the 10W that we use for Check_Mich when the 2W request came in, but the request was ignored while the node was moving; I'm not entirely sure why. As with many locklosses, the IMC lost lock, and since we were at 10W it couldn't relock. The IMC eventually relocked 2.5 hours later, long enough to give me a call. By the time I logged in, it had already started an initial alignment at 10W. I requested 2W for the PRC alignment step and then it finished initial alignment on its own.

All of the states in LASER_PWR that do the adjusting are "protected" guardian states, meaning that they have to return True before the node is allowed to move on. I can't remember exactly why, but I think this was because making a power request while another one is in progress would confuse the rotation stage. I would have expected that once this state was done it would have moved on to the 2W adjusting state, but it looks like it ignored that request entirely. I'll add this to my todo list to fix.
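As an illustration of the suspected failure mode (purely schematic; this is not the guardian code, and all names here are invented), a ramping node that ignores, rather than queues, requests made mid-ramp behaves like this:

```python
# Hypothetical sketch of a "protected" power-adjusting node that drops
# requests arriving while a ramp is in progress, instead of queuing them.

class LaserPowerNode:
    def __init__(self, power=2.0):
        self.power = power
        self.ramping = False
        self.target = power

    def request(self, watts):
        """Accept a new power target only when no ramp is running."""
        if self.ramping:
            return False          # request silently ignored mid-ramp (the bug)
        self.target = watts
        self.ramping = True
        return True

    def step_ramp(self, increment=1.0):
        """Move one step toward the target; clear the flag on arrival."""
        if self.power < self.target:
            self.power = min(self.power + increment, self.target)
        elif self.power > self.target:
            self.power = max(self.power - increment, self.target)
        if self.power == self.target:
            self.ramping = False
```

A node ramping 2W toward 10W that receives a 2W request mid-ramp drops it and finishes at 10W, which matches the observed behavior (stuck at 10W, IMC unable to relock). Queuing the last request and re-issuing it once the protected state returns True would avoid this.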

Comments related to this report
thomas.shaffer@LIGO.ORG - 04:06, Friday 24 January 2025 (82437)CAL

I had to accept an SDF diff for OMC-LSC_PHASEROT of -21. I'm actually thinking that this is the incorrect value and it should be the previous value of 56, but I don't want to risk losing lock.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 22:08, Thursday 23 January 2025 (82435)
Thursday Eve shift End

TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
After we got to Observing and everyone went home, nothing really happened after my last alog. It's been quiet ever since everyone left.

Since the lock clock was interrupted, as mentioned in my last alog, I should remind everyone:
This lock started at 19:41:24 UTC.
Thus H1 has been locked for 10+ hours.

Oh, also:
CALCS has some pending configuration changes according to the CDS Overview screen.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 18:41, Thursday 23 January 2025 (82434)
Thursday mid shift report

TITLE: 01/24 Eve Shift: 21:00-0600 UTC (1300-2200 PST), all times posted in UTC
STATE of H1: Observing at 150Mpc
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.19 μm/s
QUICK SUMMARY:
H1 has been locked for 7 hours as of  02:41:24 UTC
H1 is currently Observing.
 

End of Commissioning and CtrlZ:
The calibration team has been working all day on the OMC phase and gains.
Camilla touched up the SQZr settings and temp.
Robert covered the viewports, and we are almost ready to get back to Observing.

The calibration team now has to revert all of their changes.

GDS has been restarted a few times.

1:42 UTC: DCPD AA filters turned off and phase changed. No lockloss!!! YAY!!
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82433

After some final tweaks and some SDF accepts from Louis's changes, we got back to OBSERVING without a lockloss at 2:09:32 UTC!

 

This lock started at 19:41:24 UTC.
At 00:33 UTC I noticed that the Lock Clock FOM had crashed, so I relaunched the lock clock.
When it returned it only showed 30 minutes, even though we had not lost lock and had been locked for many hours.
All of the lock clocks, on the Calibration_Monitor and CDS_Overview, read the same 30 minutes.
According to this FOM screenshot of nuc28, we had been locked for 4 hours and 19 minutes at 00:01 UTC:
https://lhocds.ligo-wa.caltech.edu/cr_screens/archive/png/2025/01/23/16/nuc28-1.png
The lock clock crashed again; it may have coincided with a restart of GDS? Sorry, Louis.
After talking with Dave, this turned out to be a hand-edited puppet file issue:
when Dave started working on https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=82429, puppet started overwriting the file that he had changed.

H1 CAL
louis.dartez@LIGO.ORG - posted 18:09, Thursday 23 January 2025 (82433)
Calibration back to nominal - reverted everything to state before today's commissioning activities
J. Driggers, T. Sanchez, O. Patane, M. Todd, L. Dartez

This is a placeholder alog to state that the calibration and the ifo have been reverted to their nominal state from this morning. I'll follow up and edit this entry with additional details later tonight/ tomorrow morning.

In short:
- The OMC demod phase changes from LHO:82413 have been reverted using the steps in LHO:82430 successfully without breaking lock.
- The additional 16kHz AA filters in the DCPD A and B paths have been turned off.
- Camilla reverted her adjustments to the squeezer (LHO:82432)
- The calibration changes have been fully reverted.


There are many details to the several attempts we made to get things working in the calibration pipeline while keeping the AA filtering in place. I'll share more in an update ASAP.
H1 SQZ
camilla.compton@LIGO.ORG - posted 12:33, Thursday 23 January 2025 - last comment - 17:04, Thursday 23 January 2025(82421)
SQZ phase servo affected by OMC digital phase changes?

After the CAL team changed the OMC digital phase in 82413, the SQZ at the start of the lock went through an excursion much larger than usual, all the way to 5dB of ASQZ before settling around -3.5dB of SQZ (plot). We often have a SQZ excursion at the start of a lock, but it rarely gets above 0dB. We are unsure whether it makes sense that the OMC change would affect the SQZ like this, or whether we were just unlucky.

I paused SQZ_MANAGER and took SQZ_ANG_ADF to DOWN. I then tuned H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG for the best SQZ and adjusted H1:SQZ-ADF_OMC_TRANS_PHASE to bring H1:SQZ-ADF_OMC_TRANS_SQZ_ANG to zero, effectively setting the setpoint of the servo. Adjusted from -138 to -128 (sdf). We may need to change this again, or revert before going back to observing, if the SQZ isn't good with a more thermalized IFO.

Also used this time to reduce the HAM7 rejected SHG power. Enabling (not moving) the picos unlocked SQZ, which is surprising.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 12:49, Thursday 23 January 2025 (82422)CAL, ISC, SQZ
The supposition of *how* the change in digital anti-aliasing might have impacted the ADF / SQZ loop is as follows:
- The actuation / excitation of the ADF / SQZ tuning system is a single line driven into the ADF's VCXO at 322.0 Hz.
- That ADF field beats against the IFO field, which is influenced by the DARM control loop.
- The new digital AA filter causes -4.5 deg phase loss (and an increase in amplitude of 0.1%) at 322.0 Hz, and changes the DARM open loop gain in that same way.
- Thus the 322 Hz DEMOD of the IFO vs. SQZ beatnote field that's picked off at 3.125 MHz needs a phase adjustment.

If that model of what's happening is right (I don't really know what I'm talking about), I would have expected the ADF SQZ DEMOD phase to need exactly 4.5 [deg] of phase change, much like the OMC LSC SERVO's DEMOD phase needed changing at its modulation frequency (see LHO:82413).

Camilla's main log here suggests a (-138) - (-128) = 10 [deg] phase shift.

I very much welcome other mechanisms/models of what has happened if this change in SQZ behavior is a result of the OMC DCPD digital filter change.
Images attached to this comment
camilla.compton@LIGO.ORG - 15:25, Thursday 23 January 2025 (82428)

Once we'd been locked for 4 hours, I again tweaked the SQZ angle by finding the angle on either side of good SQZ, going to the middle, and then adjusting the ADF phase (diff attached), so that the total phase change since the OMC changes is the 4 deg that Jeff predicted.

SQZ at 1 kHz and above still isn't as good as I remember, so we touched up the OPO temp while in observing, as in 80461; no change was needed.

Images attached to this comment
camilla.compton@LIGO.ORG - 17:04, Thursday 23 January 2025 (82432)

Reverted back to -128 deg, ready for the CAL team to revert their changes.

H1 ISC (CAL, DetChar, ISC)
evan.goetz@LIGO.ORG - posted 12:27, Thursday 23 January 2025 - last comment - 16:59, Thursday 23 January 2025(82420)
Preliminary look at 524k and 16k DCPD data before and after additional anti-alias filtering
E. Goetz, J. Kissel, L. Dartez

In previous aLOGs (see, e.g., LHO aLOG 82405), we had to decimate the 524 kHz data offline in order to evaluate the improvement of TEST channels with changes to anti-alias filtering. We expected to see a reduction in artifacts with the addition of 1 extra 65k-to-16k decimation filter. Attached is a figure showing the before/after ratio of ASDs calculated in DTT from the DCPD A0 channel (not the TEST A channels). This figure shows some improvements (though perhaps hard to see visually) and introduces more questions, especially compared to a plot like the one in LHO aLOG 82405, also attached here.

Statistics:
Bins above 1% before = 33416
Bins above 1% after = 30273
Bins above 1% before, f < 2000 Hz = 9011
Bins above 1% after, f < 2000 Hz = 8362
Mean before = 1.1087
Mean after = 1.0651
Mean before, f < 2000 Hz = 1.0756
Mean after, f < 2000 = 1.0495
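As a concrete illustration of the statistic quoted above, here is a minimal pure-Python sketch; the function name is mine, and the real analysis presumably operates on arrays exported from DTT:

```python
# Sketch of the "bins above 1%" statistic: the number of ASD-ratio bins
# deviating from unity by more than 1%, plus the mean ratio, optionally
# restricted to f < fmax (e.g. 2000 Hz as in the numbers above).

def ratio_stats(freqs, ratio, fmax=None):
    pairs = [(f, r) for f, r in zip(freqs, ratio) if fmax is None or f < fmax]
    above = sum(1 for _, r in pairs if abs(r - 1.0) > 0.01)
    mean = sum(r for _, r in pairs) / len(pairs)
    return above, mean
```

Applied to the full band and then with `fmax=2000.0`, this yields the four pairs of numbers listed above (bins above 1%, mean ratio) for the before and after datasets.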

So we do see that the number of bins above 1% has gone down, as expected (good), but the raw number of bins above 1% is much different than our expectations; Figure 1 simply seems far noisier than Figure 2. What is the cause of this? It would seem to imply that the IOP downsampling is not simply grabbing 1 value for every 32 samples in a consistent manner, or perhaps it is the way DTT grabs and exports the PSD data. I'll have to keep digging, but this seems strange.
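To make the folding concrete, a small sketch (the helper name is mine; the sample rates are the DCPD rates discussed in this thread): pick-one-of-32 decimation from 524288 Hz to 16384 Hz folds any content above the new Nyquist of 8192 Hz back into band unless it is filtered out first.

```python
# Sketch of aliasing under naive decimation: a tone above the new Nyquist
# appears at a folded frequency in the downsampled stream.

def aliased_frequency(f, fs_new):
    """Apparent frequency of a tone at f Hz after resampling at fs_new Hz."""
    f = f % fs_new
    return min(f, fs_new - f)

# E.g. a 20 kHz artifact in the 524288 Hz stream shows up at 3616 Hz
# in the 16384 Hz stream without adequate anti-alias filtering.
f_alias = aliased_frequency(20000.0, 16384.0)
```

This is why extra suppression above ~8 kHz in the A0/B0 decimation path directly reduces the in-band artifacts being counted in the statistics above.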
Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 14:06, Thursday 23 January 2025 (82425)

This may be improved by using the double-precision version of diaggui, 'diaggui_test', which creates much less noisy ASDs, especially at higher frequencies.

 

In the attached image, the single-precision ASD is on the left and the double-precision ASD is on the right.

Images attached to this comment
evan.goetz@LIGO.ORG - 16:59, Thursday 23 January 2025 (82431)
There may be a DTT export precision issue at play here with the ASD as Erik suggests. I wanted to carry out a time series analysis offline, so I exported all of the data before and after for the 16k (H1:OMC-DCPD_A_OUT_DQ) and 524k (H1:OMC-DCPD_A0_OUT) channels. Then I computed the PSD of the 524k channel and 16k channel, plus downsampling the 524k channel and computing the PSD.

Then I plotted the ratio of the 16k PSD to the 524k PSD (cut off at the 16k Nyquist) to inspect the data for excess noise before and after the addition of the extra 65k-to-16k downsampling filter. I don't understand the red curve, but the blue curve seems reasonable, as do the black and grey curves. The blue curve shows excess noise that is then suppressed by the additional filter, as seen in the absence of large ratio values in the black and grey curves.

This result shows that the extra filtering is helpful, but until we can push a new calibration, we'll have to hold off on adding it.
Images attached to this comment
H1 ISC (CAL, ISC)
jeffrey.kissel@LIGO.ORG - posted 12:09, Tuesday 21 January 2025 - last comment - 13:02, Friday 24 January 2025(82375)
Digital Anti-Aliasing Options for 524kHz OMC DCPD Path
J. Kissel, E. Goetz, L. Dartez

As mentioned briefly in LHO:82329 -- after discovering that there is a significant amount of aliasing in the 524 kHz version of the OMC DCPD signals when down-sampled to 16 kHz -- Louis and Evan tried versions of the (test, pick-off, A1, A2, B1, and B2) DCPD signal paths with two copies, each, of the existing 524 kHz to 65 kHz and 65 kHz to 16 kHz AA filters, as opposed to one. In this aLOG, I'll refer to these filters as "Dec65k" and "Dec16k," or for short in the plots attached, "65k" and "16k."

Just restating the conclusion from LHO:82329 :: Having two copies of these filters -- and thus a factor of 10x more suppression in the 8 to 32 kHz region and 100x more suppression in the 32 to 232 kHz region -- seems to dramatically reduce the amount of aliasing.
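The "10x / 100x more suppression" from doubling the filters follows from the fact that cascaded transfer functions multiply, so the attenuation in dB doubles. A generic one-pole illustration (not the actual Dec65k/Dec16k designs, whose details are in G2202011):

```python
import math

# Cascading two identical filters squares |H|, i.e. doubles the dB
# attenuation. Shown here with a generic single-pole low-pass, purely
# to illustrate the scaling; corner and test frequencies are arbitrary.

def one_pole(f, fc):
    """H(f) for a single-pole low-pass with corner frequency fc."""
    return 1.0 / (1.0 + 1j * f / fc)

def db(x):
    return 20.0 * math.log10(abs(x))

f, fc = 10000.0, 1000.0
single = db(one_pole(f, fc))         # one copy of the filter
double = db(one_pole(f, fc) ** 2)    # two identical copies in series
```

So a band where one copy already provides ~10x (20 dB) of suppression gets ~100x (40 dB) from two copies, at the cost of doubling the phase loss at low frequency, which is the DARM loop concern discussed below.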

Recall these filters were designed with lots of compromises in mind -- see all the details in G2202011.

Upon discussion of applying this "why don't we just add MOAR FIRE" 2xDec65k and 2xDec16k option to the primary signal path, there were concerns about
    - DARM open loop gain phase margin, and
    - Computational turn-around time for the h1iopomc0 front-end process.

I attach two plots to help facilitate that discussion,
    (1st attachment) Bode plot of various combinations of the Dec65k and Dec16k filters.
    (2nd attachment) Plot of the CPU timing meter over the weekend, during which these filters were installed and ON in the 4x test banks on the same computer.

For (1st) :: Here we show the high-frequency suppression above 1000 Hz and the phase loss around 100 Hz for several simple combinations of filtering. The weekend configuration of two copies each of the 65k and 16k filters is shown in BLACK; the nominal configuration of one copy is shown in RED. In short, all these combinations incur less than 5 deg of phase loss around the DARM UGF. Louis is going to do some modeling to show the impact of these combinations on the DARM loop stability via plots of open loop gain and loop suppression. We anecdotally remember that the phase margin is "pretty tight," sub-30 [deg], but we'll wait for the plots.

For (2nd) :: With the weekend configuration of filters, with eight more filters running (the copies of the 65k and 16k, duplicated in each of the A1, A2, B1, B2 banks), the extremes of the CPU clock-cycle turnaround time did increase, from "never above 13 [usec]" to "occasionally hitting 14 [usec]" out of the ideal 1/2^16 s = 15.26 [usec], which is rounded up on the GDSTP MEDM screen to an even 16 [usec]. This is to say that we can probably run with 4 more filters in the A0 and B0 banks, though that may limit how much filtering can be in the A1, A2, B1, B2 banks for future testing. Also, no one has really looked at what happens to the gravitational-wave channel when the timing of the CPU changes or gets near the ideal clock-cycle time; namely, the basic question "Are there glitches in the GW data when the CPU runs longer than normal?"
Images attached to this report
Comments related to this report
erik.vonreis@LIGO.ORG - 13:28, Thursday 23 January 2025 (82424)

Unless a DAC, ADC, or IPC timing error occurs, a long IOP cycle time will not affect the data. The models have some buffering, so they can even suffer an occasional long cycle time beyond the maximum without affecting data.

h1iopomc0's average cycle time is about 8 us (see the IO Info button on the GDS TP screen), so it can probably run with a consistent max cycle time well beyond 15 us without affecting data.

jeffrey.kissel@LIGO.ORG - 13:02, Friday 24 January 2025 (82444)
Here, in the 1st attachment, is a two-week trend of the H1IOPOMC0 front-end (DCUID 179) CPU timing during this period's flurry of activity installing, turning on, and using many different combinations of (relatively low-Q, low-order, low-SOS-count) filters. While the minute trend of the primary "CPU_METER" channel is creeping up, the "CPU_AVG" has only incremented once, to the 8 [usec] that Erik quotes above.

FYI these channels can be found displayed on MEDM in the IOP's GDS_TP screen, following the link to "IO Info" and looking at the "CPU PROCESSING TIMES" section at the top middle. See second attachment.
Images attached to this comment