Reports until 07:34, Tuesday 16 September 2025
LHO General
thomas.shaffer@LIGO.ORG - posted 07:34, Tuesday 16 September 2025 (86957)
Ops Day Shift Start

TITLE: 09/16 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 13mph Gusts, 9mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.14 μm/s
QUICK SUMMARY: No alarms found this morning. Planned lighter maintenance today so we can get back to locking and understanding the power losses.

H1 CDS
erik.vonreis@LIGO.ORG - posted 06:54, Tuesday 16 September 2025 (86956)
Workstations updated

Workstations were updated and rebooted.  This was an OS packages update.  Conda packages were not updated.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:21, Monday 15 September 2025 - last comment - 10:18, Tuesday 16 September 2025(86954)
OPS Eve Shift Summary

Literally everyone (to name who I know/can recall would be unfair)

TITLE: 09/16 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: IDLE
INCOMING OPERATOR: NONE
SHIFT SUMMARY:

IFO is in IDLE and DOWN for CORRECTIVE MAINTENANCE, but IFO was OBSERVING for ~1 hr.

Lockloss was intentional in order to avoid potentially harmful locklosses and issues throughout the night. We actually got to NLN and OBSERVING though!

It's not exactly over yet, since we have only been locked for 2 hours; problems may still be present and dormant. Nevertheless, we got here from an outrageous outage recovery and a nasty ISC/lock reacquisition that took 5 days to get back into observing.

The main things:

How we got to NLN:

Alog 86951 summarizes the lock acquisition. We just sat there for a bit at OMC_WHITENING as violins fell (and continue to).

After NLN:

LOG:

None

 

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 10:18, Tuesday 16 September 2025 (86961)

Just want to add some notes about a few of these SDFs

In this alog I accepted the TCS SIM and OAF jitter SDFs incorrectly. The safe restore had restored old values and I mixed up the "set point" and "epics value" columns here (a mistake I have made before and will likely make again). I should have reverted these values last week instead of accepting them.

Luckily, I was able to look back at Matt's TCS sim changes and I have the script that set the jitter cleaning coeffs, so I was able to reset the values and sdf them in safe. Now they are correctly SDFed in observe as well.

H1 CAL (CAL)
ibrahim.abouelfettouh@LIGO.ORG - posted 22:20, Monday 15 September 2025 - last comment - 11:07, Tuesday 16 September 2025(86955)
Post Outage Thermalized NLN BroadBand Calibration

Start Time: 1442034629

End Time: 1442034940

 

2025-09-15 22:15:22,374 bb measurement complete.
2025-09-15 22:15:22,375 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20250916T051011Z.xml
2025-09-15 22:15:22,375 all measurements complete
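
The start/end stamps above are GPS seconds. As a quick sanity check, a stdlib-only conversion (assuming the current 18 s GPS-UTC leap-second offset) recovers the UTC time embedded in the measurement filename and the sweep duration:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_LEAP = 18  # GPS-UTC offset in seconds, constant since 2017

def gps_to_utc(gps_seconds):
    """Convert GPS seconds to UTC (ignores sub-second and any future leap seconds)."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_UTC_LEAP)

start, end = 1442034629, 1442034940
print(gps_to_utc(start).strftime("%Y%m%dT%H%M%SZ"))  # 20250916T051011Z, matching the xml name
print(end - start)  # 311 second measurement span
```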

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 11:07, Tuesday 16 September 2025 (86963)

Here is the pcal broadband compared to the broadband taken after we pushed the calibration on 8/28. Overall looks ok.

Images attached to this comment
H1 ISC
ibrahim.abouelfettouh@LIGO.ORG - posted 19:29, Monday 15 September 2025 (86951)
Locking: Getting LASER_NOISE_SUPPRESSION

Ibrahim, Elenna, Jenne, Ryan S

We made it to LASER_NOISE_SUPPRESSION!

Here's how we got here again and here's what we're monitoring

After much-needed help from Ryan S, Elenna, and Jenne with initial alignment (a manual initial alignment was needed), we managed to get back to CARM_5_PICOMETERS but encountered the same issues with an SRM ringup upon getting to RESONANCE, which caused a lockloss.

Elenna guided me through the following combination of steps tried this morning:

1. Get to CARM_5_PICOMETERS

2. Set TR_CARM_OFFSET to -52

3. Set the REFLAIR_B_RF27 I PRCL matrix element to 1.6× its current value. Press LOAD MATRIX. This is reached by going to LSC->LSC_OVERVIEW->AIR3F (under the input matrices area on the left of the screen; look for REFLAIR_B_RF27 on the left).

4. Continue locking. SRM rang up as before and saturated once, but with the new gain it did not cause a lockloss.
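
The numeric part of steps 2-3 boils down to a rescale; as a hedged sketch only (placeholder value, not the real matrix element, and no substitute for actually pressing LOAD MATRIX):

```python
TR_CARM_OFFSET = -52   # step 2 target value
PRCL_GAIN_SCALE = 1.6  # step 3: multiply the REFLAIR_B_RF27 -> PRCL element by this

def new_matrix_element(current, scale=PRCL_GAIN_SCALE):
    """Return the rescaled input-matrix element per step 3 (illustrative only)."""
    return current * scale

print(new_matrix_element(1.0))  # 1.6
```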

This worked, and now we're at OMC_WHITENING with high violins (so understandable, sorry fibers).

Now, I'm monitoring H1:IMC-REFL_DC_OUT16 and watching for an increase in power, which would be bad. We've been at 63W (the new 60) for 18 minutes; here's a plot of that.

I don't know the calibration, but CDS OVERVIEW is reading out 147 Mpc.
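
A minimal sketch of the drift watch described above (assumed logic, not the actual monitoring tool): compare a trailing average of the IMC REFL channel against its level when the watch started.

```python
def refl_increasing(samples, window=60, tol=0.05):
    """Flag an upward drift: trailing mean exceeds the initial mean by > tol (fractional)."""
    head = sum(samples[:window]) / window
    tail = sum(samples[-window:]) / window
    return tail > head * (1 + tol)

steady = [22.0] * 300                          # flat REFL power: no flag
drift = [22.0 + 0.02 * i for i in range(300)]  # slow ramp up: flagged
print(refl_increasing(steady), refl_increasing(drift))  # False True
```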

Images attached to this report
H1 CDS
erik.vonreis@LIGO.ORG - posted 17:49, Monday 15 September 2025 (86950)
ISC C RF Amp 24M1 power-monitoring beckhoff

[Fil, Jeff, Dave, Erik, Patrick]

As part of the hunt for problems related to the power outage on September 10, Dave noticed that the channel H1:ISC-RF_C_AMP24M1_POWEROK had moved from 1 before the outage to 0 after.

Jeff determined that the output of the amplifier was nominal, so the problem was likely with Beckhoff and not with the amplifier itself.

Fil inserted a breakout board between the amp and the Beckhoff cable and recorded a voltage drop from 3.3 to 2.2 volts on the power-ok signal (pin 7).

Further testing showed the problem is definitely at the Beckhoff end. The associated terminal may need to be replaced.

H1 PSL (OpsInfo)
jason.oberling@LIGO.ORG - posted 17:07, Monday 15 September 2025 - last comment - 09:17, Tuesday 16 September 2025(86949)
PSL Inspection After Power Outage

J. Oberling, K. Kawabe

This afternoon we went into the PSL enclosure to inspect things after last week's power outage.  We concentrated on the IOO side of the PSL table, downstream of the PMC.  Our results:

We did a visual inspection with both the IR viewer and the IR-sensitive Nikon camera and did not find any obvious signs of damage on any of the optical surfaces we had access to; the only ones we couldn't see were the crystal surfaces inside the ISC EOM; we could see everything else.  I looked at the optics between Amp2 and the PMC and everything there looked normal, no signs of anything amiss.

While the beam was not perfectly centered on every optic, we saw no clipping anywhere in the beam path.  The irises after the bottom periscope mirror were not well centered, but they've been that way for a while, so we didn't have a good reference for assessing alignment in that path (these irises were set after the O4 PSL upgrade, but there have been a couple of alignment shifts since then and the irises were not reset).  For reference, the beam is in the -X direction on the HWP in the power control rotation stage and in the -Z direction (but centered horizontally) on the PZT mirror after the power control stage.

We do have a good alignment reference on the ALS path (picked off through mirror IO_MB_M2, the mirror just before the ISC EOM), as those irises were set as part of the HAM1 realignment during this year's vent.  By my eye the first iris looked a tiny bit off in yaw (-Y direction) and pitch (+Z direction), while the second iris looked perfectly centered.  We found this odd, so Keita used the IR-sensitive camera to get a better angle on both irises and took some pictures.  With the better angle the beam looked well centered in yaw and maybe a little off in pitch (+Z direction) on that first iris, so I think my eye was influenced by the angle from which I was viewing the iris.  The second iris still looked very well centered.

Edit to add: Since the ALS path alignment looks good, to me this signals that there was not an appreciable alignment shift as a result of the change in PMC temperature.  If the PMC were the source of the alignment shift we would see it in both the main IFO and ALS paths; if there is a misalignment in the main IFO path, its source is not the PMC.  Upon further reflection, a more accurate statement is: if the PMC is the source of an alignment shift, the shift is too small to be seen on the PSL table (but not necessarily too small to be seen by the IMC).

The other spot of note is the entrance aperture for the ISC EOM.  It's really bright so it's hard to make a definitive determination, but it could be argued there's a very slight misalignment going into the ISC EOM.  I couldn't make anything out with the IR viewer, but Keita's picture shows the typical ring around the aperture a little brighter on the left side versus the right.  Despite this, there is no clipping in the beam, as we set up a WinCAM beam profiler to check.

The WinCAM was set behind IO_AB_L4, which is the lens immediately behind the bottom periscope mirror (this is the path that goes to the IMC_IN PD).  The attached picture shows what the beam looks like there.  No signs of clipping in the beam, so it's clearing all the apertures in the beam path.  I recall doing a similar measurement at this spot several years ago, but a quick alog search yields nothing.  I'll do a deeper dive tomorrow and add a comment should I find anything.

So to summarize, we saw no signs of damage to any visible optical surfaces.  We saw no clear evidence of a misalignment in the beam; the ALS path looks good, and nothing in the main IFO path looks suspicious outside of the ISC EOM entrance aperture (a lack of good alignment irises makes this a little difficult to assess; once we get the IFO back to a good alignment we should reset those irises).  We saw no clipping in the beam.

Keita has many pictures that he will post as a comment to this log.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 09:17, Tuesday 16 September 2025 (86952)

For PSL table layout, see https://dcc.ligo.org/D1300348/.

  • Picture 1 and 2: PMC input coupler shows some scattering on the front surface as well as HR surface but nothing unusual. Same thing for the output coupler. (The output coupler clamp plate receives a ghost beam, Jason remembers that this was always the case.)
  • Picture 3, 4 and 5: HWP (IO-MB-HWP1) and the polarizer for the manual power adjustment, the splitter for EOM-ALS: Not centered but far from clipping.
  • Picture 6 and 7: EOM input looks a bit off-centered in YAW to the left, but not extraordinarily so. Pictured from different viewing positions to make sure that this is not some kind of photograph artifact.
  • Picture 8: EOM output. Hard to say anything except that you can see a ghost beam on the EOM case to the right of the output aperture in this picture.
  • Picture 9: Lenses look OK. Note that you see three spots in L1, but two of them are actually the EOM output and the ghost beam on the EOM case seen through the lens.
  • Picture 10: Corner mirror (IO-MB_M3) in front of the motorized HWP rotator.
  • Picture 11: Motorized HWP rotator. Some scattered light (?) is hitting the mount. 
  • No picture for TFPs, they were very hard to photograph, but the first TFP looked OK when viewed using the IR viewer.
  • Picture 12 shows the steering mirror below the first TFP that receives the rejected light.
  • Very hard to picture any scattered light from the periscope mirrors when the power was set to 2W, which already tells us that there's no scattering/clipping at the 1% or even 0.1% level (on these optics).
  • Could not take a picture of the PZT mirror, very hard to have a good view of the mirror surface.
  • Picture 13 and 14: These are between the L2 lens and the corner mirror, IRIS1 is close to L2 and IRIS2 is about a foot downstream. Both look pretty good, if anything the beam might be a bit high on IRIS1.
  • Picture 15: This is an iris right behind the lens for the bottom periscope mirror transmission (IO_AB_L4).  (Update later. It looks like an iris between the PZT mirror and the bottom periscope mirror though that doesn't change the conclusion.) It's off-centered, but Jason thinks that this iris was set a long, long, long time ago and cannot be trusted.
  • Picture 16: Wincam was placed between the iris pictured above and IO_AB_BS1.
    • The distance from the downstream face of the lens holder of IO_AB_L4 to the front of the attenuator stack was measured to be 76.5mm.
    • The Wincam sensor to the front of the stack is known to be 72.3mm.
    • The post holder for Wincam was left on the table so people can put Wincam in the same position later if necessary.
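
From the two distances in the bullets above, the WinCAM sensor plane can be re-placed relative to the IO_AB_L4 lens holder (assuming both measurements are along the beam axis and simply add):

```python
holder_to_stack = 76.5  # mm, IO_AB_L4 lens-holder downstream face to front of attenuator stack
sensor_to_stack = 72.3  # mm, known depth of the WinCAM sensor behind the stack front
# Sensor plane sits this far downstream of the lens-holder face:
print(round(holder_to_stack + sensor_to_stack, 1))  # 148.8 mm
```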

 

Images attached to this comment
LHO General
ryan.short@LIGO.ORG - posted 17:07, Monday 15 September 2025 (86945)
Ops Day Shift Summary

TITLE: 09/15 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: More progress today towards recovering H1 back to low noise! We've been able to make it up to LASER_NOISE_SUPPRESSION once so far and updated alignment offsets. Unfortunately we did see the slow increase of IMC REFL again once increasing the input power to 63W during LNS, and shortly after there was an IMC-looking lockloss. Since then, Jason and Keita inspected optics in the PSL and didn't find anything alarming. We attempted to reproduce the IMC REFL behavior with just the IMC locked up to 63W with the ISS secondloop on and off, but did not see it, interestingly. We then decided to try locking again, so an initial alignment with the new green offsets is ongoing.
LOG:

Start Time System Name Location Laser_Haz Task End Time
15:47 FAC Nellie MY N Technical cleaning 16:40
16:40 FAC Kim MX N Technical cleaning 17:31
16:41 FAC Randy MY N   18:25
16:44 ISC Elenna LVEA N Plugging in freq injection cable 16:49
17:49 FAC Kim H2 N Technical cleaning 18:00
21:15 PEM TJ EY N Checking DM vacuum pump 22:13
21:16 PSL Jason, Keita PSL Encl Local Optics inspection 23:11
21:25 VAC Gerardo LVEA N Check HAM6 pump 21:33
21:37 VAC Gerardo MX N Inspect insulation 23:12
21:40 CDS Erik, Fil LVEA/CER N Checking RF chassis 22:13
H1 CDS
david.barker@LIGO.ORG - posted 16:32, Monday 15 September 2025 (86948)
third iteration of slow controls channel changes before/after power glitch

After finding the error code changes for the AMP24M1 sensor, Erik suggested listing channels which have a single value before and after, but where the value changed.

This table shows a summary of the channel counts; the detailed lists are in the attached text files. Non-zero to zero is called "dead", varying to flatline is called "broken", and a single value that changed to a different single value is called "suspicious".

System   num_chans   num_dead   num_broken   num_suspicious
aux-cs   23392       10         10           221
aux-ex   1137        0          0            7
aux-ey   1137        0          0            7
isc-cs   2618        1          0            14
isc-ex   917         0          0            10
isc-ey   917         0          0            9
tcs-cs   1729        1          4            39
tcs-ex   353         0          1            7
tcs-ey   353         0          3            7
sqz-cs   3035        0          4            33
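
The three-way classification summarized above can be sketched as follows (assumed logic, reconstructed from the definitions: "dead" = non-zero to zero, "broken" = varying to flatline, "suspicious" = single value to a different single value):

```python
def classify(before, after):
    """Classify a channel from its pre- and post-glitch sample lists (illustrative sketch)."""
    varies = lambda xs: len(set(xs)) > 1
    if varies(before) and not varies(after):
        return "broken"      # varying -> flatline
    if not varies(before) and not varies(after):
        if before[0] != 0 and after[0] == 0:
            return "dead"    # non-zero -> zero
        if before[0] != after[0]:
            return "suspicious"  # single value -> different single value
    return "ok"

print(classify([5, 5], [0, 0]),      # dead
      classify([1, 2, 3], [4, 4, 4]),  # broken
      classify([7, 7], [9, 9]))      # suspicious
```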

 

Non-image files attached to this report
H1 PEM
thomas.shaffer@LIGO.ORG - posted 16:22, Monday 15 September 2025 (86947)
Possible contamination came from dust monitors after power outage

Quick summary: I tested my theory that the dust monitor pump ran backwards and spewed contaminant by using a pump at EY, and I was able to get it to run backwards.

A day after the power outage Dave noticed that the PSL dust counts were still very elevated, so I went to check on the corner station dust monitor vacuum pump and found it in an odd state (alog 86857). The pump was running very hot, read 0inHg on the dial, and had some loose connections. I turned it off, tightened things up, and turned it back on with good pressure. Thinking more about it after I had walked away, my theory is that the pump started to run backwards when the power went out, as the low pressure of the vacuum pulled the pump in reverse. The power then came back on and the motor started and continued in that direction.

Today I wanted to check on the end station pumps, so I took the opportunity to bring a pump that needed to be rebuilt anyway to the end station and try to recreate my theory above. I found no pump at EX, so I went to EY, where I found the pump completely locked up and the motor (not the pump) very hot. I unplugged this, hooked up the one that needed a rebuild, and plugged it in. It pulled to -12inHg. I then tried a few times unplugging the power, listening to hear if it started to spin backwards, then plugging it back in. The third time I got it: it was running backwards, pushing a bit of air out of the bleed valve, and the pressure read 0inHg.

I didn't test at the dust monitor end of this system whether it was spewing out any contaminant (most likely graphite from the vanes in the pump), but I'd guess that if it was running backwards overnight it would make enough for us to notice. I'm looking into check valves, in-line filters, or some type of relay to avoid this in the future.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:19, Monday 15 September 2025 (86946)
OPS Eve Shift Start

TITLE: 09/15 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 9mph Gusts, 5mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.09 μm/s
QUICK SUMMARY:

IFO is in IDLE for CORRECTIVE MAINTENANCE

Progress! Today, we were able to get to LASER_NOISE_SUPPRESSION!

Witness to the culprit (In Jenne's words): The IMC_REFL power increases with IMC power increase.

There was a dust monitor that blew back some dust into the PSL that we think may have been the culprit, but Jason and Keita just came back from the PSL after looking for signs of this contamination and found none.

The plan now is to look at the IMC_REFL power increase again by setting the IMC power to 60W and then to 63W, whilst also trying a new combo.

Did IMC_REFL increase? We shall find out.

H1 General
oli.patane@LIGO.ORG - posted 15:08, Monday 15 September 2025 (86938)
PRM camera at 2W before vs after power outage

Since the power outage, when we've been in the higher 2W locking states, we have been seeing some 'breathing' of a spot in the upper right of the PRM camera. Before the power outage, there was some scattering seen there, but it looked different (now it looks like a thumbprint and has a clear outline when fully there) and didn't 'breathe' like we see now.

Pre-outage (2025-09-09 07:01 UTC) - in PREP_DC_READOUT

Post-outage (2025-09-15 19:44 UTC) - In PREP_DC_READOUT

Images attached to this report
H1 ISC
elenna.capote@LIGO.ORG - posted 14:22, Monday 15 September 2025 - last comment - 15:20, Monday 15 September 2025(86940)
Summary of lock attempt

Following the steps as detailed here: 86935 we were able to get to the engage ASC full IFO state.

I ran the code for engaging IFO ASC by hand, and there were no issues. I did move the alignment around by hand to make sure the buildups were good and the error signals were reasonable. Ryan reset the green references once all the loops, including soft loops engaged.

We held for a bit at 2W DC readout to confer on the plan. We decided to power up and monitor IMC REFL, and we checked that the IMC REFL power made sense.

I ran guardian code to engage the camera servos so we could see what the low frequency noise looked like. It looked much better than it did the last time we were here!

We then stopped just before laser noise suppression. With IMC REFL down by half, we adjusted many gains up by 6 dB. We determined that the check on line 5939, where the guardian verifies that the IMC REFL gain is below 2, should now verify that it is below 8. I updated and loaded the guardian.
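
The 6 dB bump matches the factor-of-two drop, assuming the usual 20·log10 amplitude convention for these servo gains:

```python
import math

def db_to_factor(db):
    """Linear amplitude factor for a gain change in dB (20*log10 convention)."""
    return 10 ** (db / 20)

def factor_to_db(factor):
    """Gain change in dB for a linear amplitude factor."""
    return 20 * math.log10(factor)

print(round(db_to_factor(6), 3))  # ~1.995, i.e. almost exactly x2
print(round(factor_to_db(2), 2))  # 6.02 dB for a factor of 2
```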

We ran laser noise suppression with no issues.

Then, I realized that we actually want to increase the power out of the PSL so that the power at IM4 trans matches the value before the power outage; due to the IMC issues, that power has dropped from about 56 W to about 54 W.

I opened the ISS second loop with the guardian, and then stepped up PSL requested power from 60 W to 63 W. This seemed to get us the power out we wanted.

Then, while we were sitting at this slightly higher power, we had a lockloss. The lockloss appears to be an IMC lockloss (as in the IMC lost lock before the IFO).

The IMC REFL power had been increasing, which we expected from the increase of the input power. However, it looks like the IMC refl power was increasing even more than it should have been. This doesn't make any sense.

Since we were down, we again took the IMC up to 60 W and then 63 W. We do not see the same IMC refl power increase that we just saw when locked.

I am attaching an ndscope. I used the first time cursor to show when we stepped up to 63 W. You can see that between this first time cursor and the second time cursor, the IMC REFL power increases and the IM4 trans power drops. However, the ISS second loop was NOT on. We also did NOT see this behavior when we stepped up to 60 W during the power-up sequence. Finally, we could not replicate this behavior when we held in DOWN and increased the input power with the IMC locked.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:20, Monday 15 September 2025 (86943)

It is possible that our sensitivity is back to nominal. Here is a comparison of three lock times, first before the power outage, second during the lock just after the power outage, and third after we turned off ADS today when locked.

These settings were not nominal for the final reference (green on the attached plot):

  • we had not yet run laser noise suppression (intensity stabilization, adjustments to carm)
  • no squeezing was injected
  • we had no OMC whitening engaged
  • beam diverters were open
  • not thermalized (blue and orange traces are thermalized)

The low frequency noise is not quite at the level of the "before outage" trace, but it is also not as bad as the orange trace.

Non-image files attached to this comment
H1 ISC (OpsInfo)
ryan.short@LIGO.ORG - posted 14:22, Monday 15 September 2025 - last comment - 15:57, Monday 15 September 2025(86941)
Green Alignment References Updated

After engaging full IFO ASC and soft loops this afternoon, I updated the ITM camera and ALS QPD offsets and accepted them in the appropriate SAFE SDF tables. After Beckhoff reboots and PSL optic inspections, we'll run an initial alignment to solidify these alignment setpoints. They will need to be accepted in the OBSERVE tables once eventually back to NLN.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 15:57, Monday 15 September 2025 (86944)

I checked the POP A offsets when the alignment converged and updated them.

Images attached to this comment
H1 General (ISC, Lockloss)
oli.patane@LIGO.ORG - posted 14:11, Monday 15 September 2025 - last comment - 15:12, Monday 15 September 2025(86939)
Last lockloss from LASER_NOISE_SUPPRESSION was IMC lockloss

We had a lockloss while in LASER_NOISE_SUPPRESSION (575), and looking at ASC-AS_A, the light on the PD dropped at the same time as DARM lost lock, so it was an IMC lockloss (the lockloss webpage is still inaccessible, but the command line tool worked for me after waiting a while).

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 15:12, Monday 15 September 2025 (86942)
Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 13:41, Monday 15 September 2025 (86937)
Slow controls channels which were varying before glitch and flat-lined after

The previous channel list was channels which were non-zero before the glitch and became zero afterwards.

I've extended the analysis to look for channels which were varying before the glitch and became flat-lined afterwards; the flat-line value is shown.

auxcs.txt:  H1:SYS-ETHERCAT_AUXCORNER_INFO_CB_QUEUE_2_PERCENT (varying→flat:8.000e-05)
auxcs.txt:  H1:SYS-TIMING_C_FO_A_PORT_12_NODE_GENERIC_PAYLOAD_1 (varying→flat:9.100e+06)
auxcs.txt:  H1:SYS-TIMING_C_FO_A_PORT_12_NODE_XOLOCK_MEASUREDFREQ (varying→flat:9.100e+06)
auxcs.txt:  H1:SYS-TIMING_C_FO_B_PORT_11_NODE_GENERIC_PAYLOAD_0 (varying→flat:1.679e+04)
auxcs.txt:  H1:SYS-TIMING_C_FO_B_PORT_11_NODE_GENERIC_PAYLOAD_17 (varying→flat:2.620e+02)
auxcs.txt:  H1:SYS-TIMING_C_FO_B_PORT_11_NODE_PCIE_HASEXTPPS (varying→flat:1.000e+00)
auxcs.txt:  H1:SYS-TIMING_C_FO_B_PORT_2_NODE_GENERIC_PAYLOAD_13 (varying→flat:5.830e+02)
auxcs.txt:  H1:SYS-TIMING_C_FO_B_PORT_3_NODE_GENERIC_PAYLOAD_13 (varying→flat:6.470e+02)
auxcs.txt:  H1:SYS-TIMING_X_GPS_A_DOP (varying→flat:3.000e-01)
auxcs.txt:  H1:SYS-TIMING_Y_GPS_A_DOP (varying→flat:3.000e-01)
sqzcs.txt:  H1:SQZ-FIBR_LOCK_BEAT_FREQUENCYERROR (varying→flat:2.000e+00)
sqzcs.txt:  H1:SQZ-FREQ_ADF (varying→flat:-3.200e+02)
sqzcs.txt:  H1:SQZ-FREQ_LASERBEATVSDOUBLELASERVCO (varying→flat:2.000e+00)
sqzcs.txt:  H1:SYS-ETHERCAT_SQZCORNER_CPUUSAGE (varying→flat:1.200e+01)
tcsex.txt:  H1:SYS-ETHERCAT_TCSENDX_CPUUSAGE (varying→flat:1.200e+01)
tcsey.txt:  H1:AOS-ETMY_BAFFLEPD_4_ERROR_CODE (varying→flat:6.400e+01)
tcsey.txt:  H1:AOS-ETMY_BAFFLEPD_4_ERROR_FLAG (varying→flat:1.000e+00)
tcsey.txt:  H1:AOS-ETMY_ERROR_CODE (varying→flat:1.000e+01)
 

H1 ISC
camilla.compton@LIGO.ORG - posted 12:20, Monday 15 September 2025 (86936)
IM4 Trans at 2W in, Before Outage comapred to Now

Comparing IM4 Trans alignment at 2W in before the power outage to now. Plot attached.

IM4 trans Pitch alignment is the same, Yaw alignment is 0.06 different. So alignment changes are minimal.

Power on IM4 trans is slightly (~3%) lower: NSUM power on IM4 trans was 1.891 for 2.026 W in (ratio 0.933); now it is 1.780 for 1.974 W in (ratio 0.902).
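
The ~3% figure follows directly from the two ratios quoted above:

```python
before = 1.891 / 2.026  # NSUM counts per watt in, pre-outage
after = 1.780 / 1.974   # same ratio now
drop_pct = 100 * (1 - after / before)
print(round(before, 3), round(after, 3), round(drop_pct, 1))  # 0.933 0.902 3.4
```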

Images attached to this report