Reports until 22:08, Tuesday 01 October 2024
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 22:08, Tuesday 01 October 2024 (80415)
Lockloss

TITLE: 10/02 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
LOG:
The first lock after the Tuesday maintenance was reached after the vacuum team finished and an initial alignment was completed; NLN was reached at 02:49 UTC.

A SQZ issue prevented Observing before a superevent candidate was announced.

5 minutes after H1 got into OBSERVING, a HAM6 saturation and sudden lockloss happened.

Unknown Lockloss https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/?event=1411875844

Relocking went well, but there was another HAM6 saturation and lockloss while relocking at ENGAGE_ASC_FOR_FULL_IFO.

NLN reached again at 05:02 UTC.
Observing reached again at 05:04 UTC.

LOG:

Start Time System Name Location Lazer_Haz Task Time End
22:22 PEM Robert, Anamaria Yarm N PEM inv 23:22
22:32 FAC Betsy LVEA N Parts grab 22:36
23:11 VAC Janos LVEA N Join team 01:07
00:00 PEM Robert Anna-Maria End Y N Setting up PEM tests 01:07
00:03 SEI Jim Remote N Testing Blend filters on SEI sys while watching the ALS PDs 02:03
01:08 PEM Robert & Anna Maria Alabama N Mag Field measurements 03:08
01:20 VAC Janos LVEA N Checking ION pump 03:20

 

Images attached to this report
H1 SQZ (Lockloss)
anthony.sanchez@LIGO.ORG - posted 20:52, Tuesday 01 October 2024 (80413)
SQZr OPO Pump Fiber issues

H1 had the ISS pump issue that gives DIAG_MAIN the "see alog 70050" message.

I started reading that alog chain again, and then superevent candidate S241002e was announced.

Vicky quickly jumped online to lend a hand and explain how she manages this particular issue.

"The problem is the pump fiber launched power > 35mW, which is set as the limit.
So the OPO GRD was trying to lock the OPO, but this required too much power into the fiber, so the guardians were bringing it down to protect the pump fiber from too much input power  

so i backed off  opo_grTrans_setpoint_uW = 75  (usually 80, this is found in sqzparams.py)  

with lower opo transmitted power, less power is needed into the fiber, so things can lock again.

This required also re-tuning the OPO TEC setpoint (which is the SDF you have accepted), and re-ran SCAN_SQZANG to optimize squeezing with this lower squeezing level  

probably the to-do for SQZ is re-optimize fiber coupling into the pump fiber. We need to fix this pump path anyways, so this fiber coupling should be re-optimized then if it can wait, and earlier if not

(also - i have been putting all the things into the LHO SQZ chat so the local SQZ ppl know what's up)"


 

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 20:12, Tuesday 01 October 2024 (80412)
SDF Diffs

Current SDF diffs that are stopping us from reaching Observing.
But the SQZer is throwing the ISS pump 70050 fit again.

Images attached to this report
LHO VE
janos.csizmazia@LIGO.ORG - posted 19:22, Tuesday 01 October 2024 - last comment - 14:46, Thursday 03 October 2024(80411)
HAM1(/HAM2) Annulus Ion Pump swap
Gerardo, Jordan, Travis, Janos

Reacting to the issue in aLog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80370, the HAM1 AIP was checked for controller and pump failures. After trying a couple of controllers, it was clear that the pump broke down.
Shortly after the maintenance period, because of the high seismic activity, locking was impossible, so the vacuum team took the opportunity and swapped the pump (as normally it would have been impossible during the 4 hours of the maintenance period).
The AIP was swapped with a noble diode pump - this was not exactly intentional, but turned out to be a happy accident, as it seemingly works much better than the Starcell or even the Galaxy pumps. However, as the noble diode pump has positive polarity, a positive polarity controller was needed: the only available piece is an Agilent IPC Mini (see it in the picture attached), which works well, but in the MEDM screen it appears to be faulty, due to its wiring differences.
All in all, the HAM1/HAM2 twin-annulus system was pumped with the 2 AIPs and an Aux cart - Turbo pump bundle. The noble diode pump stabilized very nicely (at 5-7E-7 Torr, which is unusually low), so eventually the Aux cart Turbo pump bundle was switched off - at 6:55 pm PT.
Since then, the 2 AIPs continue to decrease the annulus pressure, which is indeed very nice, so practically we are back to normal.
In the meantime, Gerardo quickly modified an appropriate Varian controller to have positive polarity, so at the next opportunity the vacuum team will swap it with the Agilent controller, and the MEDM screen will also be normal. Note that the Aux cart - Turbo bundle remains there until this swap happens.
Images attached to this report
Comments related to this report
gerardo.moreno@LIGO.ORG - 08:04, Wednesday 02 October 2024 (80416)VE

While the ion pump was being replaced, we managed to trip the cold cathode that gives us the signal for the vacuum pressure internal to HAM1, PT100B. I found the CC off last night, but since the IFO was collecting data I decided to wait until we were out of lock; I turned the CC back on this morning.  See trend data attached.

Images attached to this comment
jon.feicht@LIGO.ORG - 10:57, Wednesday 02 October 2024 (80427)
Diode pumps are typically faster than triode pumps including Starcell cathode types. Nice work!
gerardo.moreno@LIGO.ORG - 14:46, Thursday 03 October 2024 (80446)VE

Removed the Agilent IPC Mini controller that was temporarily installed on Tuesday, and replaced it with a positive (+) MiniVac controller.  Attached is a trend of current load for both controllers, HAM1 and HAM2.

Note for the vacuum team, we still have the aux cart connected to HAM2 annulus ion pump isolation valve.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 17:30, Tuesday 01 October 2024 - last comment - 09:28, Wednesday 02 October 2024(80410)
LVEA has been Swept

LVEA has been Swept.
Ready to start locking now.

 

Comments related to this report
camilla.compton@LIGO.ORG - 09:28, Wednesday 02 October 2024 (80424)DetChar

Travis noticed that the paging system was buzzing in the offices this morning. At 16:25UTC we unplugged both the paging system and PSL phone that had been missed yesterday. Tagging DetChar in case that caused any excess noise in last night's locks.

H1 PSL
jason.oberling@LIGO.ORG - posted 17:16, Tuesday 01 October 2024 (80409)
Investigating FSS Glitching

J. Oberling, R. Short, M. Pirello, F. Clara, F. Mera

Summary: We're pretty much back where we started, not sure where the problem is but sure there is a problem somewhere.

Since mid-September there have been an increasing number of glitches in the FSS.  These are seen in the FAST channel (signal to NPRO PZT), the PC channel (signal to phase-correcting EOM), and the TEMP channel (NPRO crystal operating temperature).  In addition, these glitches are the suspected cause of many locklosses in the last couple of weeks and have delayed relocking due to these glitches causing FSS oscillations.  In light of this we took a look at the FSS today.

At the start of maintenance we unlocked the IMC, leaving the FSS locked to only the Reference Cavity (RefCav).  I tweaked up the alignment then went out on the floor with Fernando and Ryan to take a look at the FSS transfer function.  I set up the Agilent network analyzer as normal, but the TF looked like garbage; I had to up the source power to 0 dBm (from our usual -20 dBm) to get a clear signal, and still the TF looked wrong.  See first attachment.  Seeing this, we filed a WP (12114) and pulled the TTFSS box from the PSL enclosure and took it to the EE shop.

The current TTFSS is old, and we were unable to find a test plan for it.  With Marc, Fil, and Fernando we began stepping through the circuit diagram, testing each component to make sure it was functioning as expected from the diagram (for example: "The gain should be x, is it?  No, but we might be injecting too high a frequency, see the filter in the feedback?  Lower the frequency, now is the gain OK?  Yes.").  This was somewhat time consuming, but we could find no single component that was clearly bad or not operating properly.  Visually everything looked ok, nothing was obviously blown or damaged.  Marc did clean some excess flux from some of the solder pads.  We then checked the LO and everything looked OK there as well; signal amplitude might be a little high but Fil didn't see anything worrisome.  Not knowing what else to do, I reinstalled the TTFSS box.

Once the box was installed I took a TF, this time from inside the enclosure using the Agilent unit we keep in there.  The TF looked just fine and as expected, see 2nd attachment.  With a common gain of 15 and a fast gain of 9 the FSS has a UGF of 452 kHz with 65° of phase margin; the peaks around 770 kHz are still present like normal.  We decided to check the crossover (where the NPRO PZT and the phase-correcting EOM meet) at ~20 kHz (this is set with the fast gain).  The 3rd attachment shows the crossover as found, with a fast gain of 9.  We tried raising the fast gain, but this made the crossover peak larger.  Lowering the fast gain made the peak smaller but also started to raise the overall signal (see 4th and 5th attachments, with a fast gain of 7 and 6, respectively).  We decided to leave the fast gain at 7 for now and will see how things behave (in the 3 hours between making this change and the time of writing this alog, behavior seems unchanged).  The picture with the fast gain at 7 does look a little worse than the initial one with fast gain at 9; not sure why, we went back and forth between them a couple times and 9 looked worse than 7 each time (maybe I got "lucky" with the camera shot?).  This done, we transitioned the enclosure back to Science Mode.

Before leaving the LVEA we tried one more thing.  Since we had a successful TF measurement inside the enclosure, we hooked up the Agilent unit by the PSL/ISC racks again and took a TF (since I had an apparent junk measurement with this same unit earlier in the day, see that 1st attachment again).  The TF looked perfectly fine, matched the measurement done inside the enclosure.  Not sure where the garbage TF from this morning came from, at this point I can only call it "user error of unknown origin."

So we've ended where we started, sure there's a problem but not sure where it is.  The current TTFSS appears to be operating correctly, but the FAST signal is still glitching; we even saw glitching with the IMC unlocked, although Ryan reports that it "seems" like the glitching is more frequent when the IMC is locked (maybe IFO feedback is exacerbating an existing issue...).  Marc and Fil have been testing a newer gen of TTFSS (not the same as used in SQZ, as that one is not compatible with the PSL as currently wired), so once that is qualified we will try switching to it (Peter King had it working with the RefCav in the optics lab, so it should work with the PSL).  We'll try that next.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:45, Tuesday 01 October 2024 (80407)
Tuesday Eve Shift Start

TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 21mph Gusts, 15mph 3min avg
    Primary useism: 0.08 μm/s
    Secondary useism: 0.81 μm/s
QUICK SUMMARY:
Secondary microseism is elevated.
The vacuum team is still working on an annulus ion pump swap on the floor of the LVEA.
Locking attempts will start once the VAC team is done and a sweep of the LVEA has been completed.
 

 

Images attached to this report
H1 PSL
ryan.short@LIGO.ORG - posted 16:42, Tuesday 01 October 2024 (80408)
PSL Cooling Water pH Test

FAMIS 21312

pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.

H1 General
ryan.crouch@LIGO.ORG - posted 16:31, Tuesday 01 October 2024 (80402)
OPS Tuesday day shift summary

TITLE: 10/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Microseism
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Maintenance ran a little over again, due to the TTFSS investigation. The ground is moving today: elevated microseism (alog 80392) and a 6.6 from Tonga tripped some ISI WDs. The DAQ was not restarted. The LVEA is now laser SAFE.
LOG:                                                                                                                                                       

Start Time System Name Location Lazer_Haz Task Time End
23:58 SAF H1 LVEA YES LVEA is laser HAZARD 15:39
14:33 CDS Erik EY n Swap ISC chassis 16:32
15:00 SYS Betsy LVEA yes Part hunt 15:05
15:01 FAC Karen, Kim, Nellie FCES n Tech clean 16:10
15:18 FAC Eric CS n HVAC heater testing 16:43
15:18 CDS Fil EY n Helping with ISCEY FE 15:22
15:21 VAC Janos, Jordan, Travis LVEA n HAM1 AIP troubleshooting 16:32
15:21 SAF Ryan C LVEA - Transitioning LVEA to laser safe 15:40
15:25 FAC Chris Mids n Potty transport 17:27
15:25 CDS Fil CER, LVEA n Check UPSs 16:19
15:40 PSL Jason, Ryan S LVEA n FSS TFs 21:21
15:41 SYS Richard EY n Check on things 17:23
15:49 ALS Keita EX,EY n ALS troubleshooting 19:36
16:04 FAC Chris site n Pest control escort 17:27
16:08 SYS Mitchell LVEA n Parts hunt 16:17
16:10 FAC Karen EY n Tech clean 17:09
16:10 FAC Kim, Nellie EX n Tech clean 17:45
16:20 CDS Fil Ends n Checking UPSs 16:47
17:02 VAC Travis LVEA n Parts hunt 17:05
17:23 VAC Gerardo, Jordan LVEA n HAM1 pump cart 18:19
17:42 SYS Richard EX n Checking on things 18:30
17:45 SYS Mitchell site n FAMIS tasks 19:03
17:46 FAC Kim, Karen LVEA n Tech clean 18:55
17:49 VAC Janos, VAT LVEA n Tour 17:49
18:00 FAC Chris LVEA n FAMIS tasks 18:21
18:05 VAC Janos LVEa n Checks 18:19
18:12 CDS Erik EndX N Restart CDS laptops 18:36
18:22 FAC Chris Ends N Checks 18:57
19:29 VAC Travis FCES N Clean area, prep for next weeks installs 20:07
19:44 VAC Gerardo LVEA N ION pump HAM1 19:57
21:13 SEI Jim LVEA N Move masses from receiving to LVEA 21:30
21:30 VAC Gerardo, Jordan LVEA N Pump swap annulus ion, Janos joined at 23:00 Ongoing
21:54 CAL Tony, Dripta PCAL lab Y PCAL work 23:03
22:22 PEM Robert, Anamaria Yarm N PEM inv 23:30

Some of the work that was completed is as follows:

A 6.6 from Tonga rolled through around 20:20 UTC; it started tripping watchdogs when the R-wave came through (peaked at just over 10 on NUC10)

20:49 UTC ISI BS stage 2, ITMY stage 1 & 2 watchdogs tripped

21:23 ISI HAM7 WD trip, maybe from Jim moving stuff around the LVEA? Verbal called out a glitch beforehand (all the CPS sensors saw it)

DIAG_MAIN message was flashing occasionally - TIDAL LIMITS: X_ARM_CTRL_OUTPUT within 10 percent of limit

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 13:21, Tuesday 01 October 2024 - last comment - 13:22, Tuesday 01 October 2024(80404)
CDS Maintenance Summary: Tuesday 1st October 2024

WP12099 h1iscey computer swap

Erik:

Recently we discovered that h1iscey (W2245) has a bad PCIe slot-7 on its motherboard, which is the cause of the A2 Adnaco issue in its IO Chassis. We have been skipping this slot in recent weeks.

h1iscey's front end computer was swapped with one from the test stand. See Erik's alog for details.

No model changes were required, no DAQ restart.

WP12108 laptop upgrade

Erik:

End station CDS laptops were upgraded.

Alarms reconfiguration

Dave:

The FMCS reverse osmosis (RO) has become very stable. The alarm settings were changed to alarm after 1 hour (was set to 1 day).

New CDS Overview

Dave:

New overview was released, see alog for details.

Comments related to this report
david.barker@LIGO.ORG - 13:22, Tuesday 01 October 2024 (80405)

Tue01Oct2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:23:11 h1iscey      ***REBOOT***
08:24:51 h1iscey      h1iopiscey  
08:25:04 h1iscey      h1pemey     
08:25:17 h1iscey      h1iscey     
08:25:30 h1iscey      h1caley     
08:25:43 h1iscey      h1alsey   

H1 CDS
david.barker@LIGO.ORG - posted 13:12, Tuesday 01 October 2024 (80403)
New CDS Overview

I have released a new CDS Overview, image is attached.

The previous overview was a hybrid of computer generated and hand drawn content. Specifically the upper 80% (the FE and DAQ section) was computer generated and the lower 20% was a mix of external files and hand drawn.

The new overview is completely generated by python code (generate_cds_overview_medm.py). This is reflected in its name change from H1CDS_APLUS_O4_OVERVIEW_CUST.adl to H1CDS-CDS_OVERVIEW_MACHINE_GEN.adl.

To open the new CDS overview, you need to kill all existing MEDMs and restart the sitemap from scratch, otherwise you will see a strange overview-within-an-overview screen.

New features:

Lower area has better layout, rows and columns align.

New IFO range "7 segment LED" section; hopefully the range can be more easily read from afar. Currently it has a static orange-on-black color scheme; a later version will turn green when in observe.

Color standardisation: related display buttons are light-blue on navy, shell execution buttons are light-green on dark-green.

Most areas can be clicked to open related displays.

To Do:

Complete the slow-controls and FMCS areas

Images attached to this report
LHO FMCS
eric.otterman@LIGO.ORG - posted 12:49, Tuesday 01 October 2024 (80401)
LVEA temperature trends
The LVEA temperature trends will stabilize throughout the day today. The excursions are a result of the work I did this morning to the zone reheats. 
H1 General
ryan.crouch@LIGO.ORG - posted 12:31, Tuesday 01 October 2024 (80399)
OPS Tuesday day shift update

The TTFSS investigation is still ongoing. Microseism has risen to well above the 90th percentile, and the wind is also rising, but it's only ~20 mph.

H1 General
nyath.maxwell@LIGO.ORG - posted 12:22, Tuesday 01 October 2024 (80398)
Alog, awiki, svn downtime due to VM host issues in GC
There was an issue today where the VM hypervisor running alog, svn, awiki, ldap, and some services lost its network.  It has been restored.
H1 PEM
ryan.crouch@LIGO.ORG - posted 10:41, Tuesday 01 October 2024 (80392)
High Microseism today

The microseism today has reached fully above the 90th percentile. Recreating Robert's plot from alog 74510, the seismometers show the largest phase difference between EX and the CS, and the second largest with EY, which suggests the source is motion from our coast. From windy.com there is currently a low-pressure system with 10-meter waves off the coast of WA, pretty much along the axis of the X-arm. These seismometers are also dominated by a ~0.12 Hz oscillation.
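The relative phase between two seismometer channels can be estimated from the angle of their cross-spectrum at the microseism peak. A minimal synthetic sketch, assuming NumPy and using two sinusoids as stand-ins for the CS and EX ground-motion channels (the real analysis would of course use the actual seismometer data):

```python
import numpy as np

fs = 1.0          # sample rate (Hz); ground-motion channels are slow
n = 4000          # samples, chosen so 0.12 Hz lands exactly on an FFT bin
t = np.arange(n) / fs
f_micro = 0.12    # dominant microseism frequency noted above
phi = 0.5         # true phase lag (rad) of the second sensor, for this demo

cs = np.sin(2 * np.pi * f_micro * t)          # stand-in for the CS seismometer
ex = np.sin(2 * np.pi * f_micro * t - phi)    # stand-in for EX, lagging by phi

# Cross-spectrum: the angle at the peak bin recovers the relative phase
cross = np.fft.rfft(cs) * np.conj(np.fft.rfft(ex))
k = int(np.argmax(np.abs(cross)))
freqs = np.fft.rfftfreq(n, d=1 / fs)
print(freqs[k], np.angle(cross[k]))   # ~0.12 Hz, ~0.5 rad
```

With real data one would average the cross-spectrum over segments (Welch-style) before taking the angle, since a single FFT of noisy ground motion gives a noisy phase estimate.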

Images attached to this report
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 10:41, Tuesday 01 October 2024 - last comment - 14:50, Thursday 03 October 2024(80396)
OPO PZT voltage seems related to low range overnight

Yesterday afternoon until 2 am, we had low range because the squeezing angle was not well tuned.  As Naoki noted in alog 78529, this can happen when the OPO PZT is at a lower voltage than we normally operate at. This probably could have been improved by running SCAN_SQZANG. 

I've edited the OPO_PZT_OK checker, so that it requires the OPO PZT to be between 70 and 110 V (it used to be 50 to 110V).  This might mean that sometimes the OPO has difficulty locking, (ie, 76642), which will cause the IFO to call for help, but that will avoid running with low range when it needs to run SCAN_SQZANG.
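The edited checker amounts to a window test on the PZT voltage. A minimal sketch of the logic described above (the function and constant names here are hypothetical stand-ins; the real check lives in the SQZ guardian code):

```python
# Hypothetical stand-in for the OPO_PZT_OK checker logic described above.
OPO_PZT_MIN_V = 70.0    # raised from 50 V in this change
OPO_PZT_MAX_V = 110.0   # unchanged upper limit

def opo_pzt_ok(pzt_volts: float) -> bool:
    """True if the OPO PZT voltage is in the band where SQZ angle stays well tuned."""
    return OPO_PZT_MIN_V <= pzt_volts <= OPO_PZT_MAX_V
```

The tradeoff noted above is visible here: tightening the lower bound rejects the low-voltage locks that correlate with bad squeezing angle, at the cost of the OPO occasionally failing the check and calling for help while locking.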

 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 14:50, Thursday 03 October 2024 (80447)

Reverted OPO checker back to 50-110V as we moved the OPO crystal spot. 

H1 AOS (DetChar)
keith.riles@LIGO.ORG - posted 08:48, Wednesday 04 September 2024 - last comment - 12:31, Tuesday 01 October 2024(79897)
Disturbance degrading Crab pulsar sensitivity
There is a bump disturbance in the H1 strain spectrum that is degrading the noise in the vicinity of the Crab pulsar. Attached are zoomed-in ASDs from recent dates (drawn from this CW figure of merit site). That band was pretty stable in O4a, but has been unstable in O4b. I can see hints of the bump as early as the end of March 2024 (see calendar navigation to sample dates).

I have poked around in Fscan plots and tried the handy-dandy Carleton coherence mining tool, but saw no obvious culprit PEM channels. Is there a new motor running somewhere (or an old motor with a drifting frequency)? 

Attachments:

Sample ASD snippets from August 1, September 1 & 2, along with a snippet from the end of O4a on January 16.

The red curve is from one day of averaging, and the black curve is the run-average up to that date.

The vertical bars show the approximate band of the Crab pulsar during the O4 run (taking into account Doppler modulations and long-term spin down).
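The band marked by those vertical bars can be estimated from the Crab's GW frequency (twice the spin frequency), the Earth's orbital Doppler modulation, and the secular spin-down. A back-of-envelope sketch with approximate, assumed values (the precise numbers come from the radio-timing ephemeris, not from this alog):

```python
# Rough estimate of the Crab search band; all numbers are approximate assumptions.
f_gw = 59.2          # Hz, ~2x the ~29.6 Hz spin frequency
doppler_frac = 1e-4  # Earth orbital velocity / c, fractional modulation
fdot_gw = -7.4e-10   # Hz/s, ~2x the spin-down rate
t_run = 1.5 * 365.25 * 86400  # ~1.5 years elapsed in O4, in seconds

df_doppler = f_gw * doppler_frac        # ~6 mHz modulation each way
df_spindown = abs(fdot_gw) * t_run      # ~35 mHz downward drift over the run

band = (f_gw - df_spindown - df_doppler, f_gw + df_doppler)
print(band)   # a band a few tens of mHz wide, as in the plots
```

The point of the estimate is that the band is only tens of mHz wide, so even a narrow instrumental bump sitting in it can spoil the Crab sensitivity.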
Images attached to this report
Comments related to this report
keith.riles@LIGO.ORG - 12:31, Tuesday 01 October 2024 (80400)
Correction: The graph labeled August 1 really applies to July 12, the last day of data before the long shutdown. Attached is what I should have posted instead (same curves with the correct label). Thanks to Sheila for spotting the error. 
Images attached to this comment
H1 SQZ
sheila.dwyer@LIGO.ORG - posted 16:22, Wednesday 31 May 2023 - last comment - 20:53, Tuesday 01 October 2024(70050)
what to do if the SQZ ISS saturates

I've set the setpoint for the OPO trans to 60 uW; this gives us better squeezing and a little bit higher range.  However, the SHG output power sometimes fluctuates for reasons we don't understand, which causes the ISS to saturate and knocks us out of observing.   Vicky and operators have fixed this several times; I'm adding instructions here so that we can hopefully leave the setpoint at 60 uW and operators will know how to fix the problem if it arises again. 

If the ISS saturates, you will get a message on DIAG_MAIN; the operators can then lower the setpoint to 50 uW.

1) Take SQZ out of the IFO by requesting NO_SQUEEZING from SQZ_MANAGER. 

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor.  In the sqzparams file you can set opo_grTrans_setpoint_uW to 50. Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint.  

3) This change in the circulating power means that we need to adjust the OPO temperature to get the best SQZ.  Open the OPO temp ndscope from the SQZ scopes drop-down menu on the SQZ overview (pink oval in screenshot).  Then adjust the OPO temp setting (green oval) to maximize the CLF-REFL_RF6_ABS channel, the green one on the scope.  

4) Go back to observing, by requesting FREQ_DEP_SQZ from SQZ_MANAGER.  You will have 2 SDF diffs to accept as shown in the screenshot attached. 
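The sequence of guardian requests in the steps above can be sketched as data; this is only an illustrative summary of the procedure (the node and state names are from this alog, the helper function is hypothetical), not code that exists in the guardian system:

```python
# Sketch of the ISS-saturation recovery sequence described above, expressed as
# the (guardian node, requested state) pairs an operator would make in order.
opo_grTrans_setpoint_uW = 50   # lowered from 60 in sqzparams.py (step 2)

RECOVERY_REQUESTS = [
    ("SQZ_MANAGER", "NO_SQUEEZING"),           # step 1: take SQZ out of the IFO
    ("SQZ_OPO_LR", "LOCKED_CLF_DUAL_NO_ISS"),  # step 2: relock OPO without the ISS
    ("SQZ_OPO_LR", "LOCKED_CLF_DUAL"),         # step 2: re-engage ISS at new setpoint
    ("SQZ_MANAGER", "FREQ_DEP_SQZ"),           # step 4: back toward observing
]

def next_request(done: int):
    """Return the next (node, state) pair, or None once the sequence is complete."""
    return RECOVERY_REQUESTS[done] if done < len(RECOVERY_REQUESTS) else None
```

Step 3 (re-tuning the OPO temperature) happens between the last two requests and is a manual adjustment, so it has no guardian request here.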

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 20:13, Monday 05 June 2023 (70162)

Update: in the SDF diffs, you will likely not see H1:SQZ-OPO_ISS_DRIVEPOINT change, and just the 1 diff for OPO_TEC_SETTEMP. The channel *ISS_DRIVEPOINT is used for commissioning but ISS stabilizes power to the un-monitored value which changes, H1:SQZ-OPO_ISS_SETPOINT.

Also, if the SQZ_OPO_LR guardian is stuck ramping in "ENGAGE_PUMP_ISS" (you'll see H1:SQZ-OPO_TRANS_LF_OUTPUT ramping), this is because the setpoint is too high to be reached, which is a sign to reduce "opo_grTrans_setpoint_uW" in sqzparams.py.

naoki.aritomi@LIGO.ORG - 16:56, Monday 07 August 2023 (72044)

Update for operators:

2) Reset the ISS setpoint by opening the SQZ overview screen and opening the SQZ_OPO_LR guardian with a text editor. In the sqzparams file you can set opo_grTrans_setpoint_uW to 60 (previously 50). Then load SQZ_OPO_LR, request LOCKED_CLF_DUAL_NO_ISS, and after it arrives re-request LOCKED_CLF_DUAL; this will turn the ISS on with your new setpoint. Check if the OPO ISS control monitor (H1:SQZ-OPO_ISS_CONTROLMON) is around 3 by opening SQZ OVERVIEW -> SQZT0 -> AOM +80MHz -> Control monitor (attached screenshot). If the control monitor is not around 3, repeat 2) and adjust the opo_grTrans_setpoint_uW to make it around 3.
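The "around 3" check on H1:SQZ-OPO_ISS_CONTROLMON reduces to a simple tolerance test. A minimal sketch, where the tolerance is an assumption (the alog only says "around 3") and in practice the value would be read live from EPICS rather than passed in:

```python
def iss_controlmon_ok(controlmon: float, target: float = 3.0, tol: float = 0.5) -> bool:
    """True if the OPO ISS control monitor is near its nominal operating point.

    target follows the alog above; tol is an assumed tolerance, not a documented one.
    """
    return abs(controlmon - target) <= tol
```

If this test fails after a relock, the instruction above is to nudge opo_grTrans_setpoint_uW and repeat step 2 until the monitor sits near 3.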

Images attached to this comment
anthony.sanchez@LIGO.ORG - 18:27, Sunday 10 September 2023 (72792)

Vicky has asked me to note that the line in sqzparams.py should stay at 80, since things are now tuned for 80 rather than 50 or 60.

Line 12: 
opo_grTrans_setpoint_uW = 80 #OPO trans power that ISS will servo to. alog 70050.

relevant alog:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=72791

anthony.sanchez@LIGO.ORG - 20:53, Tuesday 01 October 2024 (80414)

Latest update on how to deal with this SQZ error message with a bit more clarity:
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80413
