WP 11948
Symmetra batteries on the MSR UPS unit were replaced. All batteries installed are now the V66 version. The 16 batteries on bank 1 were replaced today; bank 2 batteries were replaced in 2022. Verified the unit shows all 32 batteries installed across both banks. Dave confirmed emails were being sent out from the unit.
D. Barker, F. Clara, M. Pirello, R. McCarthy
Looked at the wind fences this morning. No new damage. EY has a yoke (from an old repair) that broke a couple months back on the one panel we didn't replace, first attached image. This hasn't gotten any worse. EX still looks okay, second image.
Last week I did a major rewrite of the CDS HW reporting EPICS IOC to catch any future DAC FIFO errors, which are currently not being shown in the models' STATE_WORD. Part of that rewrite was to obtain all the front end data from the running DAQ configuration hash.
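As a rough illustration of that approach (this is not the actual IOC code), the sketch below builds the front-end model list from the DAQ ini directory (one plausible stand-in for reading the running DAQ configuration) and polls a per-model DAC FIFO status channel over EPICS with pyepics; the ini path and the channel naming pattern are assumptions for illustration only.

# Hedged sketch, not the production IOC: enumerate front-end models from the
# DAQ configuration on disk and poll a per-model DAC FIFO status channel.
# The ini directory and the channel pattern below are assumed placeholders.
import glob
import os
from epics import caget

DAQ_INI_DIR = "/opt/rtcds/lho/h1/chans/daq"   # assumed DAQ configuration location

def list_models(ini_dir=DAQ_INI_DIR):
    """Return front-end model names inferred from the DAQ ini filenames."""
    return sorted(
        os.path.splitext(os.path.basename(path))[0]
        for path in glob.glob(os.path.join(ini_dir, "*.ini"))
    )

def check_dac_fifo(models):
    """Read an (assumed) per-model DAC FIFO status channel; report nonzero values."""
    errors = {}
    for model in models:
        chan = f"H1:FEC-{model}_DAC_FIFO_STATUS"   # placeholder channel pattern
        value = caget(chan, timeout=1.0)
        if value:   # nonzero status gets reported
            errors[model] = value
    return errors

if __name__ == "__main__":
    for model, status in check_dac_fifo(list_models()).items():
        print(f"{model}: DAC FIFO status = {status}")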
Today I released a new version which:
TITLE: 07/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
As I was arriving, the IFO was just getting back into Nominal_Low_Noise at 14:26 UTC.
Took Observatory mode to Calibration for the in-lock charge measurements.
Everything seems to be currently functioning just fine.
Workstations were updated and rebooted. These were OS package updates. Conda packages were not updated.
TITLE: 07/02 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: High winds this evening have kept H1 down for the past 4 hours; gusts peaked around 45mph. Since they've calmed down a bit, I've started locking H1 and it's just reached ENGAGE_ASC_FOR_FULL_IFO.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | Ongoing |
01:29 | CAL | Francisco, Neil | PCal Lab | n | Getting equipment | 02:01 |
These are the times for the continuation of the project discussed in 78734. Unfortunately, we again lost lock before I finished.
Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V) |
---|---|---|---|---|---|
1403912193 | 1403912517 | 0 | 20 | 0 | -20 |
1403914518 | 1403914684 | 20 | -19 | -20 | 19 |
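For anyone wanting to inspect these spans offline, a minimal gwpy sketch along the lines below should work; note that the bias channel names shown are illustrative placeholders, not necessarily the exact channels used for the measurement.

# Minimal sketch (not part of the measurement itself): pull the ITM bias
# readbacks over the GPS spans listed above using gwpy. The channel names
# below are placeholders; substitute the actual bias channels.
from gwpy.timeseries import TimeSeriesDict

SPANS = [(1403912193, 1403912517), (1403914518, 1403914684)]
CHANNELS = [
    "H1:SUS-ITMX_L3_LOCK_BIAS_OUTPUT",  # placeholder name
    "H1:SUS-ITMY_L3_LOCK_BIAS_OUTPUT",  # placeholder name
]

for start, end in SPANS:
    data = TimeSeriesDict.get(CHANNELS, start, end)
    for name, ts in data.items():
        print(f"{name} {start}-{end}: {ts.value[0]:.1f} V -> {ts.value[-1]:.1f} V")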
Lockloss @ 00:59 UTC - link to lockloss tool
This ends a lock of 4h45m. I suspect the cause was wind; gusts have recently hit up to 45mph and there have been lots of glitches in the past hour or two.
A continuation of the previous measurements, where we saw a drop in green power at 34°C in both SHG 1 and 2 in a cavity setup.
To exclude the effect of the optical cavity, we took off the front mirror of the cavity and took the double-pass measurements under slightly different beam alignment. Then the single-pass measurement was done by removing the rear mirror. The single-pass measurement result fits the model, but the double-pass does not (not sure why).
We rebuilt SHG 2 and measured the phase-matching condition with a lower pump power of 10 mW (all previous measurements were done with 60 mW), and the same drop in power is seen at 34°C.
Conclusion: the sinc-curve phase-matching measurement is only reliable when done in a single-pass setup. The measurement done in a cavity setup is not definitive in diagnosing the crystal condition.
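For reference, the single-pass conversion efficiency follows sinc²(ΔkL/2), with the phase mismatch Δk roughly linear in the temperature detuning near the phase-matching point. A small sketch of that expected curve is below; the center temperature and bandwidth are made-up numbers, not fitted values from these measurements.

# Illustrative single-pass phase-matching curve: efficiency ~ sinc^2(dk*L/2),
# with the phase mismatch taken as linear in temperature detuning. The center
# temperature and bandwidth below are made-up numbers, not fitted values.
import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(28, 42, 500)        # crystal temperature [C]
T0 = 35.0                           # assumed phase-matching temperature [C]
dT_bw = 2.0                         # assumed temperature bandwidth [C]

# np.sinc(x) = sin(pi*x)/(pi*x), so dT_bw puts the first zeros at T0 +/- dT_bw
efficiency = np.sinc((T - T0) / dT_bw) ** 2

plt.plot(T, efficiency)
plt.xlabel("Crystal temperature [C]")
plt.ylabel("Normalized SHG efficiency")
plt.title("Expected single-pass sinc^2 phase-matching curve (illustrative)")
plt.show()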
FAMIS 20704
The NPRO has had a couple of sudden power jumps in the past week, seen also by output of AMP1 and less so from AMP2. The AMP LD powers didn't move at the same time as these jumps.
PMC transmitted power continues to fall while reflected power increases; when we swap the PMC tomorrow morning this should finally be put to rest.
Jason's brief incursion last Tuesday shows up clearly in environment trends.
TITLE: 07/01 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 16mph Gusts, 11mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over 3 hours. ITM bias change tests while observing have resumed for the afternoon.
TITLE: 07/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:17 UTC (3 hr xx min lock)
Very calm rest-of-shift after a speedy <1 hr 30 min lock acquisition.
Other:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
14:39 | FAC | Karen | Optics, Vac Prep | N | Technical Cleaning | 15:06 |
15:51 | PEM | Robert | LVEA | YES | Viewport Compensation Plate Experiment | 18:48 |
16:04 | FAC | Karen | MY | N | Technical Cleaning | 17:17 |
16:04 | FAC | Kim | MX | N | Technical Cleaning | 16:44 |
16:59 | FAC | Eric | EX Chiller | N | Glycol static and residual pressure check | 17:41 |
17:02 | FAC | Tyler | MX, MY, EX, EY | N | Glycol Check | 17:41 |
17:31 | FAC | Chris | EY | N | HVAC Work | 18:31 |
20:00 | FAC | Karen | Optics Lab | Local | Technical Cleaning | 20:00 |
20:01 | PCAL | Francisco | Optics Lab | Local | Testing relays | 21:52 |
21:47 | SQZ | Terry, Kar Meng, Camilla | Optics Lab | Local | SHG Work | 22:16 |
FAMIS 21042
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Naoki, Camilla
We continued the ZM4 PSAMS scan with hot OM2 in 78636. We changed the ZM4 PSAMS strain voltage from 7.5V to 5.5V and saw the squeezing and range improvement as shown in the first attachment. The second attachment shows the squeezing improvement. The squeezing level is 5.2dB around 2 kHz.
We also tried 4.5V ZM4 PSAMS, but it is worse than 5.5V. So we set the ZM4 PSAMS at 5.5V. The ZM5 PSAMS strain voltage is -0.78V.
Every time we changed ZM4 PSAMS, we compensated the ZM4 pitch alignment with the slider and ran SCAN_ALIGNMENT_FDS and SCAN_SQZANG.
18:15-18:35 UTC took MICH, PRCL, SRCL LSC noise budget injections, following instructions in 74681 and 74788. Last taken with cold OM2 in 78554.
Committed in ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings and on the aligoNB git. For the plot, used gpstime 1403879360 # Hot OM2, 2024/07/01 14:29 UTC, Observing Time, O4b, IFO locked 3h30m, 158 Mpc range
In addition to usual references, I added H1:CAL-CS_{MICH,PRCL,SRCL}_DQ and saved as ref13.
Comparing today's LSC noise budget to the cold OM2 LSC noise budget in 78554:
IFO is in LOCKING after an unknown LOCKLOSS at 18:55 UTC
The scheduled 8:30 - 11:30 PT (15:30 - 18:30 UTC) commissioning went well and finished at 11:36 PT.
Other:
After Sheila/Jennie's 78776 SRM move, the SRCL FF got worse. Compare green before SRM move to brown after move in attached plot. We tuned the SRCL FF gain from 1.18 to 1.14 to improve from brown to blue trace. If we see SRCL coherence, we could remeasure and fit the FF. Accepted in sdf and lscparams.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
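As a quick cross-check of these stamps, the UTC strings can be converted to GPS with gwpy; something like the snippet below should reproduce the GPS values logged above.

# Sanity check of the logged timestamps: convert the UTC strings to GPS with
# gwpy and compare against the GPS values recorded above.
from gwpy.time import to_gps

for label, utc, gps in [
    ("start", "2024-06-29 21:11:45.566107", 1403730723.566107),
    ("end",   "2024-06-29 21:33:08.154689", 1403732006.154689),
]:
    converted = float(to_gps(utc))
    print(f"{label}: to_gps -> {converted:.6f} (logged {gps:.6f})")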
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this frequently pops up and this is the first time I caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex is running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
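To make that comparison reproducible, a short gwpy sketch like the one below can pull the L1 and L2 EXCMON channels over a measurement window and check whether the L1 excitation ever actually ran; the GPS window here is a placeholder to be filled in with the actual June 27 measurement times.

# Sketch for comparing excitation monitors between runs: fetch the ETMX L1 and
# L2 CAL EXCMON channels over a measurement window and flag which stages were
# actually excited. The GPS window below is a placeholder.
from gwpy.timeseries import TimeSeriesDict

CHANNELS = [
    "H1:SUS-ETMX_L1_CAL_EXCMON",
    "H1:SUS-ETMX_L2_CAL_EXCMON",
]
START, END = 1403500000, 1403501500   # placeholder GPS span for the Jun 27 test

data = TimeSeriesDict.get(CHANNELS, START, END)
for name, ts in data.items():
    peak = abs(ts.value).max()
    print(f"{name}: max |value| = {peak:.3g}, excited = {peak > 0}")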
Emails were correctly sent to the cdsadmin mailing group from the UPS unit (ups-msr-0)