H1 SUS
ryan.crouch@LIGO.ORG - posted 17:39, Friday 05 December 2025 - last comment - 17:39, Friday 05 December 2025(88392)
Percentage of time spent at [+] and [-] for L3_LOCK_BIAS_OFFSET on ETMX

I used Tony's and my statecounter.py to take a look at the past year of data using minute trends. Minute and second trends do some rounding that I had to account for in my search: ETMX goes from -8.9 to +6.05425 during a lock acquisition, and these average to ~ -1.42. I ended up searching the channel H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET for values above and below -2.0 with the following calls:

python3 statecounter.py -chan 'H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET' -value '-2.0' -trend 'm-trend' -operator '<=' -gpsstart '1416182400' -gpsstop '1447718400'
python3 statecounter.py -chan 'H1:SUS-ETMX_L3_LOCK_BIAS_OFFSET' -value '-2.0' -trend 'm-trend' -operator '>=' -gpsstart '1416182400' -gpsstop '1447718400'

This gave me an outfile.txt full of results in the format "idx (data_idx_start, data_idx_stop) gpsstart gpsstop duration", which I did some brief analysis on, yielding:

Percentage of the time the Bias offset was [+]: 43.70 %  *Past locked LOWNOISE_ESD_ETMY
Percentage of the time the Bias offset was [-]: 56.25 %   *Between PREP_FOR_LOCKING and LOWNOISE_ESD_ETMY
Total duration in [+]: 205.3 [Days], [-]: 159.5 [Days] over 365.0 [Days]
Data timespan missing due to minute rounding and FW restarts 0.1965 [Days] (0.054 % of total time).
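
A minimal sketch of the kind of brief analysis described above, assuming each search writes an output file with one segment per line in the quoted format and that the duration column is in seconds (the per-search file names here are hypothetical):

def total_duration_days(outfile):
    total_s = 0.0
    with open(outfile) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            try:
                total_s += float(parts[-1])   # duration is assumed to be the last column, in seconds
            except ValueError:
                continue                      # skip any header or malformed lines
    return total_s / 86400.0

span_days = (1447718400 - 1416182400) / 86400.0          # GPS span used in the searches above (365 days)
pos_days = total_duration_days("outfile_positive.txt")    # hypothetical output of the '>=' search
neg_days = total_duration_days("outfile_negative.txt")    # hypothetical output of the '<=' search

print(f"[+]: {pos_days:.1f} d ({100 * pos_days / span_days:.2f} %)")
print(f"[-]: {neg_days:.1f} d ({100 * neg_days / span_days:.2f} %)")
print(f"unaccounted: {span_days - pos_days - neg_days:.4f} d")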

Where the BIAS_OFFSET is changed from negative to positive has moved around a little over the past year, but it is always during one of the final states of ISC_LOCK, and it is always reset to negative in PREP_FOR_LOCKING.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 17:39, Friday 05 December 2025 (88393)

I haven't done the same search for ETMY, but looking through where it's set in ISC_LOCK and at ndscope, I can say that ETMY spends the large majority (>90%) of its time at -4.9. It only leaves that value for about 4 min 23 sec per acquisition: it gets changed to +9.3 during LOWNOISE_ESD_ETMY and is brought back to -4.9 in the next state, LOWNOISE_ESD_ETMX, as we switch back.

H1 PSL (OpsInfo)
jason.oberling@LIGO.ORG - posted 16:25, Friday 05 December 2025 (88389)
PSL Enclosure Environmental Controls in Weird State After Power Outage

Jenne sent me a Mattermost message this afternoon pointing out an odd "oscillation" in Amp2 output power, so I took a look.  Sure enough, it was doing something weird.  Ever since the power outage, on a roughly 2-hour period, the output power would drop slightly and come back up.  I looked at items that directly impact the amplifier: Water/amp/pump diode temperatures, pump diode operating currents, and pump diode output.  Everything looked good with the temperatures and operating currents, but the pump diode output for 3 of the 4 pump diodes (1, 2, and 4) showed the same periodic behavior as the amp output; every ~2 hours, the pump diode output would spike and come back down, causing the "oscillation" in Amp2 output power.  See first attachment.  But what was causing this behavior?

At first, I couldn't think of anything beyond, "Maybe the pump diodes are finally starting to fail..."  Gerardo, who was nearby at the time, reminded me that during the last power outage the H2 enclosure had an odd sound coming from its environmental control system, and that it was not showing up on the control panel for the system; the control panel showed everything as OFF, but when Randy climbed on top of the enclosure he found one of the anteroom fans moving very slowly and haltingly, and making a noise as it did so.  Turning the fans off at the control panel fixed the issue at the time.  So, I took a look at the signal for the PEM microphone in the PSL enclosure, which Gerardo also reminded me of, and sure enough, the mic was picking up more noise than it was before the power outage (see 2nd attachment).  Around the same time, from the front of the Control Room Sheila noted that Diag_Main was throwing an alarm about the PSL air conditioner being ON.  It had been doing this throughout the day, but every time I checked the AC temperature it was reading 70 °F, which was a little lower than normal but not as low as it reads when the AC is actually on (which is down around 67 °F or so).  This time, however, when I checked the AC temperature it was reading 68 °F.  Huh.  So I pulled up a trend of the PSL enclosure temps and sure enough, every 2 hours it looks like the AC comes on, drops the temperature a little, then turns off, and this behavior lines up with the "oscillation" in Amp2 output (see 3rd attachment; not much data for the enclosure temps since those come in through Beckhoff, which was recovered earlier this afternoon, but enough).

I went out and turned every PSL environmental item (HEPA fans, ACs, and make-up air) ON then OFF again and placed the enclosure back in Science Mode (HEPA fans and AC off, make-up air at 20%).  Won't know for sure if this cured the issue, as it's been happening on a 2-hour period, but looking at the PEM microphone in the enclosure shows promise.  The PEM mic is not picking up the extra noise, it's back to where it was before the power outage (see final 2 attachments).  Also encouraging, at the time of writing the AC temperatures are above the temperature where the ACs would kick on before I cycled the environmental controls.  I'll continue to monitor this over the weekend.
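
For reference, a minimal sketch of how one could confirm a ~2-hour periodicity in a minute-trend record like the Amp2 output. The array name and sample spacing are assumptions (it presumes the minute-trend samples have already been fetched into a NumPy array); this is not part of the procedure described above:

import numpy as np

# amp2_w: hypothetical 1-D array of Amp2 output power minute-trend means, one sample per 60 s
def dominant_period_hours(amp2_w, dt_s=60.0):
    x = amp2_w - np.mean(amp2_w)          # remove the DC level so the spectrum shows the modulation
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt_s)
    k = np.argmax(spec[1:]) + 1           # skip the zero-frequency bin
    return 1.0 / freqs[k] / 3600.0        # period of the strongest spectral line, in hours

# A value near 2.0 would corroborate the ~2-hour cycling seen in the trends.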

Images attached to this report
H1 CDS
jonathan.hanks@LIGO.ORG - posted 16:17, Friday 05 December 2025 - last comment - 17:14, Friday 05 December 2025(88390)
CDS Recovery report

Dave, Jonathan, Tony, operators, ...

This is a compilation of recovery actions based on a set of notes that Tony took while helping with recovery.  It augments the existing log entries 88381 and 88376.  Times are local time.

Thurs 4 Dec

At 12:25 PST power went out.  Tony and Jonathan had been working to shut down some of the systems so that they could have a graceful power off.  The UPS ran out around 1:17 PST.  At 2:02 the power came back.

Tony checked the status of the network switches, making sure they all powered on and we could see traffic flowing.

We started up the DNS/DHCP servers and made sure the FMCS system was coming up.

Then we got access to the management box.  We did this with a local console setup.

The first step was to get the file servers up; we needed both /ligo and /opt/rtcds.  We started on /opt/rtcds as that is what the FE computers need.  We turned on h1fs0 and made sure it was exporting file systems.  h1fs0 was problematic for us.  The /opt/rtcds file system is a ZFS file system.  We think the box came up, exported the /opt/rtcds path, and then got the zpool ready to use.  In the meantime another server came up and wrote to /opt/rtcds.  This appears to have happened before the ZFS filesystem could be mounted, so it created directories in /opt/rtcds and kept the ZFS filesystem (which had the full contents of /opt/rtcds) from mounting.  When we noticed this we deleted the stray /opt/rtcds contents on h1fs0, made sure the ZFS file system mounted, and re-exported it.  This gave all the systems a populated /opt/rtcds.  We had to reboot everything that had already started, as they ended up with stale file handles.  There were still problems with the mount: file system performance was very slow over NFS, while direct disk access was fast when testing on the file server.  We fixed this the next day after ruling out network congestion and errors.

We then turned on the h1daq*0 machines to make it possible to start recording data.  However they would need a reboot to clear issues with /opt/rtcds, and would need front end systems up in order to have data to collect.

Then we went to get /ligo started.  We logged onto cdsfs2.  As a reminder, cdsfs2,3,4,5 are a distributed system holding those files.  We don't start this much, so we had forgotten how; our notes hinted at it, and Dan Moraru helped here.  What we had to do was tell pacemaker (pcs) to leave maintenance mode, at which point it started the cdsfsligo IP address.  Dan did a zpool reopen to fix ZFS errors.  Then we restarted the nfs-server service.  At this point we had a /ligo file system.  We updated the notes on cdswiki as well so that we have a reminder for next time.  The system was placed back into maintenance mode (the failover is problematic).

The next step was to get the boot server running.  This is h1vmboot5-5, which lives on h1hypervisor.  It is a KVM-based system that does not use proxmox like our VM cluster does, so it took us a moment to get on; we ended up going in via the console and doing a virsh start on h1vmboot5-5.

Dave started front-ends at this point.  Operators were checking the IOC for power.

We started the 0 leg of the daq.

We started the proxmox cluster up and began starting ldap and other core services.  To get the VMs to start we had to do some work on cdsfs0, as the VM images are stored on a share there.  This was an unmount of the share, starting ZFS on cdsfs0, and a remount.

Ldap came up around 4:31.  This allowed users to start logging into systems as themselves.

Turned on the following vms

We powered on the epics gateway machine.

We needed to reboot h1daqscript0 to get its mounts right and to start the daqstat IOC.  This was around 5pm.  The overview showed that TW1 was down.  We needed to bring an interface up on h1daqdc1 and start a cds_xmit process so that data would flow to TW1.  We got TW1 working around 5:15pm.

Powered on h1xegw0 and fmcs-epics.  Note that with fmcs-epics the front button doesn't work (it is a Mac mini with a rack mount kit); you need to go to the back and find the power button there.

Reviewing systems, we turned off the old h1boot1 (there are no network connections, so having it powered on doesn't break anything, but it should be cleaned up).  Powered on the ldasgw0,1 machines so that h1daqnds could mount the raw trend archive.

The epics gateways did not start automatically, so we went onto cdsegw0 around 5:45 and ran the startup procedure.

The wap controller did not come up.  Something is electrically wrong (maybe a failed power supply).

At 6:01 Dave powered down susey for Fil.  It was brought back around 6:11.

Throughout this, Dave was working on starting models.  The /opt/rtcds mount was very slow and models would time out while reading their safe.snap files.  Eventually Dave got things going.

Patrick, Tony, and Jonathan did some work on the cameras while Dave was restarting systems.  We picked a camera and cycled its network switch port to power cycle the camera, then restarted the camera service.  However, this did not result in a video stream.

Friday

Jonathan found a few strange traffic flows while looking for slowdowns; these were not enough to cause the slowdowns we had.  h1daqgds1 did not have its broadcast interface come up and was transmitting the 1s frames out over its main interface and through to the router, so this was disabled until it could be looked at.  The dolphin manager was sending extra traffic to all daqd boxes trying to establish the health of the dolphin fabric; it was not a complete fabric as dc0 had not been brought back up, since it isn't doing anything at this point.  In response we started dc0 to remove another source of abnormal traffic from the network.

This was not enough to explain the slowdowns.  Further inspection showed that there was no abnormal traffic going to/from h1fs0 and that, other than the traffic noted above, there was no unexpected traffic on the switch.

It was determined the slowdowns were strictly NFS related.  We changed cdsws33 to get /opt/rtcds from h1fs1.  This was done to test the impact of restarting the file server.  After testing restarts with h1fs1 and cdsws33, we stopped the zfs-share service on h1fs0 (so there would not be competing NFS servers) and restarted nfs-kernel-server.  In general no restarts were required; access just returned to normal speed for anything touching /opt/rtcds.

After this, TJ restarted the guardian machine to try and put it into a better state.  All nodes came up except one, which he then restarted.

Dave restarted h1pemmx, which had been problematic.

We restarted h1digivideo2,4,5,6.  This brought all the cameras back.

Looking at the h1daqgds machines, the broadcast interface had not come back, so starting the enp7s0f1 interfaces and restarting the daqd fixed the transmission problems.  The 1s frames are flowing to DMT.  At Dan's request Jonathan took a look at a DMT issue after starting h1dmt1,2,3: the /gds-h1 share was not mounted on h1dmtlogin, so nagios checks were failing due to not having that data available.

The wap controller was momentarily brought back to life.  Its CMOS battery was dead, so it needed intervention to boot and needed a new date/time.  However, it froze while saving settings.

The external medm was not working.  Looking at the console for lhoepics, its CMOS battery had failed as well and it needed intervention to boot.

After starting ldasgw0,1 yesterday we were able to mount the raw minute trend archive to the nds servers.

Fw2 needed a reboot to reset its /opt/rtcds.

We also brought up more of the test stand today to allow Ryan Crouch and Rahul to work on the sustriple in the staging building.

A note, things using kerberos for authentication are going slower.  We are not sure why.  We have reached out to the LIAM group for help.

Comments related to this report
jonathan.hanks@LIGO.ORG - 17:14, Friday 05 December 2025 (88391)

I was able to get the wap controller back by moving its disks to another computer.

This is a reminder that we need to rebuild the controller, and do it in a vm.

LHO General
thomas.shaffer@LIGO.ORG - posted 15:02, Friday 05 December 2025 (88378)
Ops Day Shift End

TITLE: 12/05 Day Shift: 1530-2300 UTC (0730-1500 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We've continued to recover after the power outage yesterday. The major milestones were getting the /opt/rtcds/ file server reset and running back up to normal speed. This allowed us to get all guardian nodes up and running and start other auxiliary scripts and IOCs. The Beckhoff work wrapped up around noon and we were able to get the IMC locked and ready to start initial alignment just before 1300PT. We then ran into some IMC whitening issues, which have since been solved. The main issue at this point is that the COMM and DIFF beatnotes are very low. Investigation ongoing. See many other alogs for specific information on the power outage recovery.

H1 ISC
elenna.capote@LIGO.ORG - posted 13:43, Friday 05 December 2025 - last comment - 17:43, Friday 05 December 2025(88387)
PR3 yaw move to improve comm beatnote, ALS X alignment

While TJ was running initial alignment for the green arms, I noticed that the ALS X beam on the camera appeared to be too far to the right side of the screen. The COMM beatnote was at -20 dBm, when it is normally between -5 and -10 dBm. I checked both the PR3 top mass osems and the PR3 oplevs. The top mass osems did not indicate any significant change in position, but the oplev seems to indicate a significant change in the yaw position. PR3 yaw was around -11.8 but then changed to around -11.2 after the power outage. It also appears that the ALS X beam is closer to its usual position on the camera.

I decided to try moving PR3 yaw. I stepped 2 urad, which brought the oplev back to -11.8 in yaw and the COMM beatnote to -5 dBm. Previous slider value: -230.1, new slider value: -232.1.

The DIFF beatnote may not be great yet, but we should wait for beamsplitter alignment before making any other changes.

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:10, Friday 05 December 2025 (88388)

Actually, this may not have been the right thing to do. I trended the oplevs and top mass osems of ITMX and ETMX and compared their values during today's initial alignment before moving PR3, to the last initial alignment we did before the power outage. They are mostly very similar except for ETMX yaw.

             P then   P now   Y then   Y now
ITMX oplev    -7.7    -7.8      5.8     5.6
ETMX oplev     2.7     3.1    -11.5    -3.1

I put the PR3 yaw slider back to its previous value of -230.1 until we can figure out why ETMX yaw has moved so much.

sheila.dwyer@LIGO.ORG - 17:43, Friday 05 December 2025 (88394)

I moved PR3 yaw back to -231.9 on the slider.  This allowed us to see PRMI flashes on POPAIR.  We can revisit whether we really want to keep this alignment of PR3 on Monday.

H1 SEI
ryan.crouch@LIGO.ORG - posted 13:02, Friday 05 December 2025 (88386)
BRS Drift Trends -- Monthly

Closes FAMIS38811, last checked in alog87797.

We can see yesterday's outage on every plot as expected, as well as the BRS issues from October 21st. There doesn't look to be any trend in the driftmon, and ETMY looks to have still been slowly increasing in temperature before the outage. The aux plot looks about the same as during the last check, except that ETMY DAMP CTL looks to have come back at a different, lower spot; the EY SEI configuration looks like it is not quite fully recovered, so that may be why.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 12:47, Friday 05 December 2025 (88385)
All systems recovered enough for relocking

All systems have been recovered enough to start relocking at this point. We will not be fully relocking the IFO at this time, though, since the ring heaters have been off and they would take too long to thermalize. We will instead be running other measurements that do not need the ring heaters on.

There is a good chance that other issues will pop up, but we are starting initial alignment now.

H1 IOO
jenne.driggers@LIGO.ORG - posted 12:43, Friday 05 December 2025 (88384)
IMC locked

[Oli, Jenne]

Oli brought all of the IMC optics back to where they had been yesterday before the power outage.  We squiggled MC2 and the IMC PZT around until we could lock on the TEM00 mode, and let the WFS (with 10x usual gain) finish the alignment.  We offloaded the IMC WFS using the IMC guardian.  We then took the IMC to OFFLINE, and moved MC1 and the PZT such that the DC position on the IMC WFS matched a time from yesterday when the IMC was offline before the power outage. We relocked, let the alignment run again, and again offloaded using the guardian.   Now the IMC is fine, and ready for initial alignment soon.

H1 PEM
ryan.crouch@LIGO.ORG - posted 12:38, Friday 05 December 2025 (88383)
Dust monitor IOC restarted

I restarted all the dust monitor IOCs, they all came back nicely. I then reset the alarm levels using the 'check_dust_monitors_are_working' script.

H1 DAQ
daniel.sigg@LIGO.ORG - posted 12:34, Friday 05 December 2025 (88382)
Slow controls

Marc Daniel

We finished the software changes for EtherCAT Corner Station Chassis 4.

We found 2 issues related to the power outage:

Baffle PD chassis in EX has a bench supply that needs to be turned on by hand.

The system is back up and running.

H1 CDS
david.barker@LIGO.ORG - posted 11:35, Friday 05 December 2025 (88381)
CDS Status

Jonathan, EJ, Richard, TJ, Dave:

Slow /opt/rtcds file access has been fixed.

All Guardian nodes running again.

The long-range-dolphin issue was due to the end station Adnaco chassis being down, Richard powered these up and all IPCs are working correctly now.

Cameras are now working again.

Currently working on:

 . GDS broadcaster and DAQ min trend archived data

 . Alarms and Alerts

H1 PSL
oli.patane@LIGO.ORG - posted 10:20, Friday 05 December 2025 (88380)
PSL Status Report Weekly FAMIS

Closes FAMIS#27621, last checked 88194

Everything looking okay except PMC REFL being high. It does look like it's been higher since they recovered the PSL after yesterday's power outage, but we are also still recovering and more will still need to be done for the PSL anyway, so maybe this is expected/will be adjusted anyway.


Laser Status:
    NPRO output power is 1.83W
    AMP1 output power is 70.67W
    AMP2 output power is 139.9W
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 0 days, 17 hr 9 minutes
    Reflected power = 25.84W
    Transmitted power = 105.6W
    PowerSum = 131.5W

FSS:
    It has been locked for 0 days 16 hr and 19 min
    TPD[V] = 0.5027V

ISS:
    The diffracted power is around 4.0%
    Last saturation event was 0 days 0 hours and 0 minutes ago


Possible Issues:
    PMC reflected power is high

Images attached to this report
LHO General
tyler.guidry@LIGO.ORG - posted 09:13, Friday 05 December 2025 (88379)
FAC Recovery Post Power Outage
Following yesterday's power outage, primary FAC systems (HVAC, domestic water supply, fire alarm/suppression) all seem to be operating normally. Fire panels required acknowledgement of power supply failures, and battery backups did their job. There is an issue plaguing the supply fan drive at the FCES, which revisited us this morning, but it is highly unlikely that it arose from the outage. Temperatures are within normal ranges in all areas. Eric continues to physically walk systems down to ensure normal operation of points in systems not visible via FMCS.

C. Soike E. Otterman R. McCarthy T. Guidry
LHO General
thomas.shaffer@LIGO.ORG - posted 07:52, Friday 05 December 2025 (88377)
Ops Day Shift Start

TITLE: 12/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Planned Engineering
OUTGOING OPERATOR: None

QUICK SUMMARY: Still in the midst of recovery from the power outage yesterday. Looks like all suspensions stayed damped overnight except the BS which tripped at 0422UTC. SEI is still in a non-nominal state as Jim left it yesterday (alog88373). Many medm screens, scripts, other processes are running *very* slow. 

Recovery will continue, one forward step at a time.

H1 PSL (ISC, PSL)
keita.kawabe@LIGO.ORG - posted 13:57, Tuesday 28 October 2025 - last comment - 17:46, Friday 05 December 2025(87803)
ISS array S1202965 was put in storage (Rahul, Keita)

Related: https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=87729

We disconnected everything from the ISS array installation spare unit S1202965 and stored it in the ISS array cabinet in the vac prep area next to the OSB optics lab. See the first 8 pictures.

The  incomplete spare ISS array assy originally removed from LLO HAM2 (S1202966) was moved to a shelf under the work table right next to the clean loom in the optics lab (see the 9th picture). Note that one PD was pulled from that and was transplanted to our installation spare S1202965.

Metadata for both 2965 and 2966 were updated.

ISS second array parts inventory https://dcc.ligo.org/E2500191 is being updated.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 15:59, Wednesday 29 October 2025 (87835)

Rahul and I cleared the optics table so Josh and Jeff can do their SPI work.

Optics mounts and things were put in the blue cabinet. Mirrors, PBS and lenses were put back into labeled containers and in the cabinet in front of the door to the change area.

The butterfly module laser, the LD driver, and the TEC controller were put back in the gray plastic bin. There was no space in the cabinets/shelves, so it was put under the optics table closer to the flow bench area.

Single channel PZT drivers were put back in the cabinet on the northwest wall in the optics lab. Two channel PZT driver, oscilloscopes, a function generator and DC supplies went back to the EE shop.

OnTrack QPD preamp, its dedicated power transformer, LIGO's LCD interface for the QPD, and its power supply were put in a corner of one of the bottom shelves of the cabinet on the southwest wall.

Thorlabs M2 profiler and a special lens kit for that were given to Tony who stored them in the Pcal lab.

aLIGO PSL ISS PD array spare parts inventory E2500191 was updated.

keita.kawabe@LIGO.ORG - 17:46, Friday 05 December 2025 (88324)ISC, PSL

Final jitter coupling values etc. of the installation spare

I was baffled to find that I hadn't made an alog about it, so here it is. These, as well as other alogs written by Jennie, Rahul, or myself since May-ish 2025, will be added to https://dcc.ligo.org/LIGO-T2500077.

The positions of the PDs relative to the beam were adjusted further.

Multiple PDs were moved so that there is no huge outlier in the positions of the PDs relative to the beam. When Mayank and Siva were here, we used to do this using an IR camera to see the beam spot position. However, since then we have found that using the PD output itself to search for the edge of the active area is easier.
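
A minimal sketch of that edge-finding idea; the array names and the half-maximum criterion are assumptions for illustration, not the exact procedure used:

import numpy as np

# pos_mm: hypothetical scan positions of the beam; pd_v: the PD DC output at each position.
def active_area_edges(pos_mm, pd_v, frac=0.5):
    """Estimate the positions where the PD output crosses frac*max, i.e. the edges of the active area."""
    v = np.asarray(pd_v, dtype=float)
    p = np.asarray(pos_mm, dtype=float)
    level = frac * v.max()
    above = v >= level
    idx = np.flatnonzero(np.diff(above.astype(int)))   # samples where the signal crosses the level
    edges = []
    for i in idx:
        # linear interpolation between the two samples straddling the crossing
        f = (level - v[i]) / (v[i + 1] - v[i])
        edges.append(p[i] + f * (p[i + 1] - p[i]))
    return edges

# With two crossings, 0.5 * (edges[0] + edges[1]) approximates the PD centre along the scan axis.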

After the adjustments were made, the beam going into the ISS array was scanned vertically as well as horizontally while the PD outputs were recorded. See the first attachment. There are two noteworthy points.

1. PDs "look" much narrower in YAW than in PIT due to 45 degrees AOI only in YAW.

Relative alignment matters more for YAW because of this.

2. The YAW scan shows a second peak for most of the PDs, but only in one direction.

This was observed in Mayank/Siva's data too, but it wasn't understood back then. It is a design feature. The PDs sit behind an array plate as in the second attachment (the plate itself is https://dcc.ligo.org/D1300322). Red lines show the nominal beam lines, and they are pretty close to one side of the conical bores in the plate. Pink and blue arrows represent the beam shifted in YAW.

If the beam is shifted too far "to the right" in the figure (i.e. pink), it is blocked by the plate, but if the shift is "to the left" (i.e. blue) the beam is not blocked. It turns out the beam can graze along the bore, and when that happens, part of the broad specular reflection hits the diode.

See the third attachment, this was shot when PD1 (the rightmost in the picture) was showing the second peak while PD2 didn't.

(Note that the v2 plate which we use is an improvement over the v1 that actually blocked the beam when the beam is correctly aligned. However, there's no reason things are designed this way.)

Finding a beam spot position that has a good (small/acceptable) jitter coupling.

We used a PZT-driven mirror to modulate the beam position, which was measured by the array QPD connected to ON-TRAK OT-301 preamp as explained in this document in T2500077 (though it is misidentified as OT-310). 

See the fourth attachment, where a relatively good (small/acceptable) coupling was obtained. The numbers measured this time vs. April 2025 (Mayank/Siva's numbers) vs. February 2016 (T1600063-v2) are listed below. All in all, the horizontal coupling was better in April but the vertical is better now. Both now and Apr/2025 are better than Feb/2016.

PD | Horizontal [RIN/m]            | Vertical [RIN/m]
   | Now    Apr/2025   Feb/2016    | Now     Apr/2025   Feb/2016
   |        (phase NA) (phase NA)  |         (phase NA) (phase NA)
 1 |  6.9    0.8        20         |  -0.77   34.1       11
 2 |  7.1    2.7        83         |   5.1     2         25
 3 |  8.2    5.5        59         |   2.2     4.4       80
 4 |  8.8    2.3        33         |   0.30    1.1       21
 5 | -19     5.1        22         |  11      12.3       56
 6 | -14    12.9        67         |  16      30.4       44
 7 | -18    10.2        27         |   2.9    42.7       51
 8 | -19     5.3        11         |  12      52.1       54

Phase of the jitter coupling: You can mix and match to potentially lower jitter coupling further.

Only in "Now" column, the coupling is expressed as signed numbers as we measured the phase of the array PD output relative to the QPD output. Absolute phase is not that important but relative phase between the array PDs is important. The phase is not uniform across all diodes when the beam is well aligned. This means that you can potentially mix and match PDs to further minimize the jitter coupling. 

Using this particular measurement as an example, if you choose PD1/2/3/4 as the in-loop PD, the jitter coupling of the combined signal is roughly mean(6.9, 7.1, 8.2, 8.8) = 7.8 RIN/m horizontally and mean(-0.77, 5.1, 2.2, 0.3) = 1.7 RIN/m vertically.

However, if you choose PD1/3/4/7 (in analog land), the coupling is reduced to mean(6.9, 8.2, 8.8, -18) = 1.5 RIN/m horizontally and mean(-0.77, 2.2, 0.3, 2.9) = 1.2 RIN/m vertically.
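
As a worked illustration of the mix-and-match idea, a minimal sketch that brute-forces all 4-PD combinations from the signed "Now" couplings in the table above. Ranking by the summed magnitudes of the averaged horizontal and vertical couplings is an arbitrary choice for illustration, not the criterion actually used:

from itertools import combinations

# Signed couplings from the "Now" columns above, keyed by PD number.
horiz = {1: 6.9, 2: 7.1, 3: 8.2, 4: 8.8, 5: -19, 6: -14, 7: -18, 8: -19}
vert  = {1: -0.77, 2: 5.1, 3: 2.2, 4: 0.30, 5: 11, 6: 16, 7: 2.9, 8: 12}

def mean(vals):
    return sum(vals) / len(vals)

best = min(
    combinations(horiz, 4),
    # rank by the combined magnitude of the averaged horizontal and vertical couplings
    key=lambda c: abs(mean([horiz[p] for p in c])) + abs(mean([vert[p] for p in c])),
)
print(best, mean([horiz[p] for p in best]), mean([vert[p] for p in best]))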

You don't need to pre-determine the combination now; you should tune the alignment and measure the coupling in chamber to decide whether you want a different combination than 1/2/3/4.

Note: when monotonically scanning the beam position in YAW (or PIT) from edge to edge of the PDs, some PDs showed more than one phase flip. When the beam is clearly clipped at the edge (thus the coupling is huge), all diodes show the same phase, as expected. But that is not necessarily the case when the beam is well aligned, as you saw above. The reason for the sign flips when the beam is far from the edge of the PD is unknown, but it is probably something like particulates on the PD surface.

QPD adjustment after the good coupling position was established

The QPD was physically moved so the beam is very close to the center of the QPD. This can be used as a reference in chamber when aligning the beam to the ISS array.

After this, we manually scanned the beam horizontally and measured the QPD output. See the 5th attachment; the vertical axis is directly comparable to the normalized PIT/YAW of the CDS QPD module, assuming that the beam size on the QPD in the lab is close enough to that of the real beam in chamber (which it should be).
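
For context, the normalization referred to is the usual quadrant-sum form; a minimal sketch follows, where the segment-to-quadrant mapping and sign conventions are assumptions that should be checked against the actual CDS QPD module:

def qpd_normalized(upper_left, upper_right, lower_left, lower_right):
    """Return (pit, yaw) normalized by the total power on the quadrant PD (sign conventions assumed)."""
    total = upper_left + upper_right + lower_left + lower_right
    pit = ((upper_left + upper_right) - (lower_left + lower_right)) / total
    yaw = ((upper_right + lower_right) - (upper_left + lower_left)) / total
    return pit, yaw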

Images attached to this comment
H1 IOO (ISC, PSL)
jennifer.wright@LIGO.ORG - posted 18:17, Friday 03 October 2025 - last comment - 10:19, Friday 05 December 2025(87290)
ISS vertical calibration

Jennie W, Keita,

Since we don't have an easy way of scanning the input beam in the vertical direction, Keita used the pitch of the PZT steering mirror to do the scan and we read out the DC voltages for each PD.

The beam position can be inferred from the pictured setup - see photo. As the pitch actuator on the steering mirror is rotated, the allen key sitting in the hole in the pitch actuator moves up and down relative to the ruler.

height on ruler above table = height of centre of actuator wheel above table + sqrt((allen key thickness/2)^2 + (allen key length)^2) * sin(ang - delta_theta)

where ang is the angle the actuator wheel is at and delta_theta is the angle from the centre line of the allen key to its corner, which is used to point at the gradations on the ruler.
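
As a sketch, the same relation written as a function; the argument names follow the formula above, and the numerical values quoted in the later comment (88367) appear only in the example call:

import math

def ruler_height(ang_deg, delta_theta_deg, wheel_centre_height_mm, key_thickness_mm, key_length_mm):
    """Height indicated on the ruler for a given actuator-wheel angle, per the formula above."""
    radius = math.sqrt((key_thickness_mm / 2) ** 2 + key_length_mm ** 2)   # wheel centre to allen key corner
    return wheel_centre_height_mm + radius * math.sin(math.radians(ang_deg - delta_theta_deg))

# Example with the values given in the follow-up comment:
# ruler_height(ang_deg, delta_theta_deg, 160.0, 2.0, 80.6)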

The first measurement, from the alignment Keita found yesterday that minimised the vertical dither coupling, is shown. It shows the voltage on each PD vs. height on the ruler.

From this, and from the low DC voltages we saw on the QPD and some PDs yesterday, Keita and I realised we had gone too far to the edge of the QPD and some PDs.

So in the afternoon Keita realigned onto all of the PDs.

Today, as we were doing measurements on it, Keita realised we still had the small aperture piece in place on the array, so we moved that for our second set of measurements.

The plots of voltage vs. ruler position and voltage vs. pitch wheel angle are attached.

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 13:02, Monday 06 October 2025 (87323)

Keita did a few more measurements in the vertical scan after I left on Friday; attached is the updated scan plot.

He also then set the pitch to the middle of the range (165mm on the scale in the graph) and took a horizontal scan of the PD array using the micrometer that the PZT mirror is mounted on. See second graph.

Images attached to this comment
jennifer.wright@LIGO.ORG - 13:04, Monday 06 October 2025 (87324)

From the vertical scan of the PD array it looks like diodes 2 and 6, which are in a vertical line in the array, are not properly aligned. We are not sure if this is an issue with one of the beam paths through the beamsplitters/mirrors that split the light into the four directions for each vertical pair of diodes, or if these diodes are just aligned wrongly.

keita.kawabe@LIGO.ORG - 10:19, Friday 05 December 2025 (88367)

The above plots are not relevant any more as PD positions were adjusted since, but here are additional details for posterity.

  • "height" was always measured by a ruler which had a considerable zero point offset from the table surface. That's OK because the same offset appears on both sides of the equation and cancels with each other.
  • "height of centre of actuator wheel above table" =160mm.
  • "allen key thickness" = 2mm.
  • "allen key length" = 80.6mm.

Calculating the rotation angle of the knob doesn't mean anything by itself; it must be converted to a meaningful number like the displacement of the beam on the PD. This wasn't done for the plots above but was done for the plots with the final PD positions. (A sketch of that conversion is given after the list below.)

  • PZT mirror mount is Thorlabs KC-1-PZ series (not to be confused with KC-1-P series) and the manual actuator knob tilts the mirror by 0.4deg per revolution.
  • We haven't measured the distance from the PZT mirror to anything, but took pictures that are good enough to determine the distance to the array PDs.
    • Attached photos were shot during the beam profile measurement. Red and green lines are visual aids for the screw-hole grid ON THE TABLE SURFACE, and the yellow line is the beam path, which is pretty much directly above the red line. From the second picture, we see that the distance from the PZT mirror surface to the surface of the temporary mirror (SM1) is something like 9.5"+-0.5".
    • Before removing SM1 from the table, the distance from SM1 to the ISS array input aperture location (red cross in the third attachment) was measured to be 292mm.
    • The distance from the input aperture to the front surface of the periscope beam splitter on the ISS array is nominally 17.4mm (again see the third attachment).
    • Finally, from the front surface of the periscope beam splitter on the ISS array to one of the PDs in the array on the first floor is 139.2mm. ("PD1" in PD Array Plate problem.docx in https://dcc.ligo.org/LIGO-E1400231. Don't trust numbers for other PDs in the document as the effect of refraction is calculated incorrectly.)
    • In total, the distance from the PZT mirror to one array PD is ~690+-13mm, or 690*(1+-0.02)mm. The error bar is negligible for this purpose.
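
A minimal sketch of the conversion referenced before the list, assuming the quoted 0.4 deg/rev is the mirror tilt and that the reflected beam direction changes by twice the mirror tilt; both are assumptions to verify, not measured values from this work:

import math

KNOB_TILT_DEG_PER_REV = 0.4     # mirror tilt per knob revolution (KC1-PZ manual actuator, from the list above)
DISTANCE_MM = 690.0             # PZT mirror to array PD, from the geometry above

def beam_displacement_mm(knob_revolutions):
    """Approximate beam displacement on an array PD for a given knob rotation."""
    mirror_tilt_rad = math.radians(KNOB_TILT_DEG_PER_REV * knob_revolutions)
    beam_angle_rad = 2.0 * mirror_tilt_rad     # reflection doubles the angular change (assumption stated above)
    return DISTANCE_MM * math.tan(beam_angle_rad)

# e.g. beam_displacement_mm(0.25) for a quarter turn of the knob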
Images attached to this comment