H1 General (Lockloss)
ryan.crouch@LIGO.ORG - posted 07:37, Friday 12 July 2024 - last comment - 08:22, Friday 12 July 2024(79060)
OPS Friday day shift start

TITLE: 07/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 08:22, Friday 12 July 2024 (79064)

Running the coherence low range check, CHARD_P, CHARD_Y, and MICH seem to have high coherence.
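For reference, a minimal sketch of the kind of coherence computation such a check performs, assuming gwpy/NDS data access; the channel names, GPS span, and FFT settings below are illustrative, not the actual script's configuration:

from gwpy.timeseries import TimeSeries

# Illustrative GPS span and channels
start, end = 1404800000, 1404800600
darm = TimeSeries.get('H1:GDS-CALIB_STRAIN', start, end)
chard_p = TimeSeries.get('H1:ASC-CHARD_P_OUT_DQ', start, end)

# Match sample rates before computing coherence
darm = darm.resample(chard_p.sample_rate)

# Coherence spectrum: values near 1 mark bands where the loop couples into DARM
coh = darm.coherence(chard_p, fftlength=8, overlap=4)
peak = coh.value.argmax()
print(coh.frequencies[peak], coh.value[peak])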

Images attached to this comment
H1 General
oli.patane@LIGO.ORG - posted 01:09, Friday 12 July 2024 (79058)
Ops Eve Shift End

TITLE: 07/12 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 148Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Mostly uneventful evening. After the lockloss, while trying to relock (without having run an initial alignment), DRMI unlocked twice while we were in ENGAGE_DRMI_ASC, with beamsplitter saturations both times; the second time we also got a BS ISI saturation (1st time ndscope, 2nd time ndscope). Not sure what that was about, since after I ran an initial alignment the next locking attempt was fine. After being in Observing for a little bit I decided to adjust SQZ because it was really bad. I had some issues the first time I ran it (clicked things out of order, even though that shouldn't matter), but eventually I got a lot more squeezing and a much better sensitivity at the higher frequencies.
LOG:

23:00 Relocking and at PARK_ALS_VCO
23:35 NOMINAL_LOW_NOISE
23:42 Started running simulines measurement to check if simulines is working
00:05 Simulines done
00:13 Observing

00:44 Earthquake mode activated due to EQ in El Salvador
01:04 Seismic to CALM

04:32 Lockloss
Relocking
    - 17 seconds into ENGAGE_DRMI_ASC, BS saturated and then LL
    - BS saturation twice in ENGAGE_DRMI_ASC, then ISI BS saturation, then LL
05:08 Started an initial alignment
05:27 IA done, relocking
06:21 NOMINAL_LOW_NOISE
06:23 Observing

06:38 Out of Observing to try and make sqz better because it's really bad
07:24 Observing                                                                                                            

Start Time | System | Name | Location | Laser_Haz | Task | Time End
23:10 | PCAL | Rick, Shango, Dan | PCAL Lab | y(local) | In PCAL Lab | 23:54
Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:33, Thursday 11 July 2024 - last comment - 23:24, Thursday 11 July 2024(79056)
Lockloss

Lockloss @ 07/12 04:32 UTC

Comments related to this report
oli.patane@LIGO.ORG - 23:24, Thursday 11 July 2024 (79057)

06:23 UTC Observing

LHO VE
david.barker@LIGO.ORG - posted 20:23, Thursday 11 July 2024 (79054)
Thu CP1 Fill

Thu Jul 11 08:11:57 2024 INFO: Fill completed in 11min 53secs

late entry from this morning

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 20:06, Thursday 11 July 2024 (79053)
Ops Eve Midshift Status

Observing at 157Mpc and have been locked for 3.5 hours. We rode out a 5.2 earthquake from El Salvador earlier. Wind is low and going down.

H1 ISC
camilla.compton@LIGO.ORG - posted 16:19, Thursday 11 July 2024 (79037)
Laser Noise aligoNB taken: Jitter, Frequency, Intensity

Jennie, Sheila, Camilla

These were last all done in March (76623, 76323); jitter was taken in June (78554). Committed to ligo/gitcommon/NoiseBudget/aligoNB/aligoNB/H1/couplings and to the aligoNB git.

Followed instructions in 70642 (new dB, see below), 74681, and 74788. We left CARM control on REFL B only for all three of these injection sets so that Sheila can create the 78969 projection plots.

 

Adjusting 70642 to switch CARM control from REFL A+B to REFL B only:

Notes on plugging in the CARM CMB excitation for the frequency injection: in PSL rack ISC-R4, plug the BNC from row 18 labeled AO-OUT-2 into the D0901881 Common Mode Servo on row 15, EXC on Excitation A.

 

Images attached to this report
Non-image files attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:18, Thursday 11 July 2024 (79046)
Ops Eve Shift Start

TITLE: 07/11 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 14mph Gusts, 12mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Currently relocking and at MOVE_SPOTS. Everything is looking good.

LHO General
thomas.shaffer@LIGO.ORG - posted 16:15, Thursday 11 July 2024 (79023)
Ops Day Shift End

TITLE: 07/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: We had a large and close 6.5M earthquake off the coast of Vancouver Island stop us from observing for most of the shift. This earthquake tripped all of the ISIs, but the only suspensions that tripped were IM1, SR3, and SRM. Getting back was fully auto(!), except for untripping WDs and some stops to allow for SEI testing. The delayed commissioning was then coordinated with LLO to start at 2030 UTC (1:30pm PT). A lock loss happened 1.5 hours into commissioning, the cause seeming to be one of those ETMX wiggles again. Relocking now has been fully auto with an initial alignment so far.
LOG:

Start Time | System | Name | Location | Laser_Haz | Task | Time End
15:47 | ISC | Jeff | CER | - | Take pictures of racks | 15:58
16:09 | - | Sabrina, Carlos, Milly | EX | n | Property survey in desert | 18:12
17:23 | CDS | Marc | EX | n | Swap cables at rack | 17:48
18:32 | ISC | Keita | LVEA | n | Replug AS_C cable | 18:33
18:38 | PCAL | Francisco, Cervane, Shango, Dan | PCAL Lab | local | PCAL meas. | 21:36
20:56 | ISC | Sheila | PSL racks | n | Plug in freq. noise excitation | 21:06
21:26 | ISC | Sheila, Camilla | PSL racks | n | Unplug noise excitation | 21:31
21:36 | PCAL | Francisco | PCAL lab | local | Flip a switch | 21:51
H1 ISC (ISC)
jennifer.wright@LIGO.ORG - posted 15:55, Thursday 11 July 2024 - last comment - 15:26, Friday 19 July 2024(79045)
DARM Offset step with hot OM2

We were only about two and a half hours into the lock when I did this test, due to our earthquake lockloss this morning.

I ran the

python auto_darm_offset_step.py

in /ligo/gitcommon/labutils/darm_offset_step

Starting at GPS 1404768828

See attached image.

Analysis to follow.

Returned DARM offset H1:OMC-READOUT_X0_OFFSET to 10.941038 (nominal) at 2024 Jul 11 21:47:58 UTC (GPS 1404769696)

DARM offset moves recorded to 
data/darm_offset_steps_2024_Jul_11_21_33_30_UTC.txt
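For context, a minimal sketch of what an offset-stepping script of this kind does; this is not the actual auto_darm_offset_step.py, and the step fractions and dwell time below are made up for illustration:

import time
from epics import caget, caput

CHANNEL = 'H1:OMC-READOUT_X0_OFFSET'
nominal = caget(CHANNEL)                # 10.941038 in this test
steps = [0.8, 0.9, 1.1, 1.2]            # hypothetical fractions of the nominal offset
dwell = 120                             # seconds to sit at each step

for frac in steps:
    caput(CHANNEL, nominal * frac)      # step the DARM offset
    time.sleep(dwell)                   # let the DCPD power and optical gain settle

caput(CHANNEL, nominal)                 # always return to the nominal offset at the end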

Images attached to this report
Comments related to this report
jennifer.wright@LIGO.ORG - 14:25, Friday 12 July 2024 (79080)

Here are the optical gain vs DCPD power and DARM offset vs optical gain plots, as calculated by ligo/gitcommon/labutils/darm_offset_step/plot_darm_optical_gain_vs_dcpd_sum.py.

The contrast defect is calculated from the height of the 410 Hz PCAL line in the output DCPDs at each offset step, and comes out to 1.014 +/- 0.033 mW.
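As a rough illustration of the line-height measurement (not the actual analysis code; the channel name, GPS span, and FFT settings are assumptions):

from gwpy.timeseries import TimeSeries

# One offset step, starting at the GPS time quoted in the parent entry
dcpd = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', 1404768828, 1404768828 + 120)

# ASD with a long FFT so the 410 Hz PCAL line sits in a narrow bin
asd = dcpd.asd(fftlength=60, overlap=30)

# Take the peak bin near 410 Hz as the line height for this step
band = asd.crop(405, 415)
peak = band.value.argmax()
print(band.frequencies[peak], band.value[peak])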

Non-image files attached to this comment
jennifer.wright@LIGO.ORG - 15:58, Monday 15 July 2024 (79130)

I added an additional plotting step to the code; it now makes this plot, which shows how the power at AS_C changes with the DARM offset power at the DCPDs. The slope of this graph tells us what fraction of the power is lost between the input to HAM6 (AS_C) and the DCPDs.

P_AS = 1.770 * P_DCPD + 606.5 mW

where the second term is light that will be rejected by the OMC, plus light that gets through the OMC but is insensitive to DARM length changes.

The loss term between the anti-symmetric port and the DCPDs is 1/1.77 = 0.565
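A sketch of the fit and the throughput number, using placeholder data points rather than the measured ones:

import numpy as np

# Placeholder points built from the quoted fit, not the measured data
p_dcpd = np.array([10.0, 20.0, 30.0, 40.0])       # DCPD sum power (mW)
p_as = 1.770 * p_dcpd + 606.5                     # AS_C power (mW)

slope, intercept = np.polyfit(p_dcpd, p_as, 1)    # P_AS = slope * P_DCPD + intercept
throughput = 1.0 / slope                          # fraction of DARM-sensitive light reaching the DCPDs
print(f"slope = {slope:.3f}, intercept = {intercept:.1f} mW, throughput = {throughput:.3f}")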

Non-image files attached to this comment
H1 OpsInfo
thomas.shaffer@LIGO.ORG - posted 15:43, Thursday 11 July 2024 (79044)
Minor changes to H1_MANAGER

I tested out two changes to H1_MANAGER today:

H1 General
thomas.shaffer@LIGO.ORG - posted 15:01, Thursday 11 July 2024 - last comment - 17:26, Thursday 11 July 2024(79038)
Lock loss 2154 UTC

Lock loss 1404770064

Lost lock during commissioning time, but we were between measurements, so it was caused by something else. Looking at the lock loss tool ndscopes, ETMX shows that movement we've been seeing a lot of just before the lock loss.

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:26, Thursday 11 July 2024 (79051)CAL

07/12 00:13 UTC Observing

There were changes made to the PCAL ramp times (PCALX, PCALY) at 21:48 UTC. At that time we were locked and commissioning.

I have reverted those changes.

Images attached to this comment
X1 SUS
ibrahim.abouelfettouh@LIGO.ORG - posted 13:38, Thursday 11 July 2024 - last comment - 21:56, Thursday 11 July 2024(79032)
BBSS Transfer Functions and First Look Observations

Ibrahim, Oli

Attached are the most recent (07-10-2024) BBSS Transfer Functions, taken following the latest RAL visit and rebuild. The Diaggui screenshots show the first round of measurements (01-05-2024) as a reference. The PDF shows these results with respect to expectations from the dynamical model. Here is what we think so far:

Thoughts:

The nicest-sounding conclusion here is that something is wrong with the F3 OSEM, because it is the only OSEM and/or flag involved in L, P, and Y (the less coherent measurements) but not in the others. F3 fluctuates and reacts much more erratically than the others, and in Y the F3 OSEM has the greatest proportion of actuation, more than in P and a higher magnitude than in L, so if there were something wrong with F3 we would expect to see it loudest in Y. This is exactly where we see the loudest ring-up. I will take spectra and upload them in another alog. This would account for all issues except the F1, LF, and RT OSEM drift, which I will plot and share in a separate alog.

Images attached to this report
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 21:56, Thursday 11 July 2024 (79055)

We have also now made a transfer function comparison between the dynamical model, the first build (2024/01/05), and following the recent rebuild (2024/07/10). These plots were generated by running $(sussvn)/trunk/BBSS/Common/MatlabTools/plotallbbss_tfs_M1.m for cases 1 and 3 in the table. I've attached the results as a pdf, but the .fig files can also be found in the results directory, $(sussvn)/trunk/BBSS/Common/Results/allbbss_2024-Jan05vJuly10_X1SUSBS_M1/. These results have been committed to svn.

Non-image files attached to this comment
H1 CAL
louis.dartez@LIGO.ORG - posted 07:10, Thursday 11 July 2024 - last comment - 22:48, Friday 12 July 2024(79019)
testing patched simulines version during next calibration measurement
We're running a patched version of simuLines during the next calibration measurement run. The patch (attached) was provided by Erik to try to get around what we think are awg issues introduced (or exacerbated) by the recent awg server updates (mentioned in LHO:78757).

Operators: there is nothing special to do; just follow the normal routine, since I applied the patch changes in place. Depending on the results of this test, I will either roll them back or work with Vlad to make them permanent (at least for LHO).
Non-image files attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 17:16, Thursday 11 July 2024 (79048)

Simulines was run right after getting back to NOMINAL_LOW_NOISE. The script ran all the way through until after "Commencing data processing", where it then gave:

Traceback (most recent call last):
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 712, in
    run(args.inputFile, args.outPath, args.record)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 205, in run
    digestedObj[scan] = digestData(results[scan], data)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 621, in digestData
    coh = np.float64( cohArray[index] )
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/series.py", line 609, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/gwpy/types/array.py", line 199, in __getitem__
    new = super().__getitem__(item)
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/astropy/units/quantity.py", line 1302, in __getitem__
    out = super().__getitem__(key)
IndexError: index 3074 is out of bounds for axis 0 with size 0

erik.vonreis@LIGO.ORG - 17:18, Thursday 11 July 2024 (79049)

All five excitations looked good on ndscope during the run.

erik.vonreis@LIGO.ORG - 17:22, Thursday 11 July 2024 (79050)

I also applied the following patch to simuLines.py before the run. The purpose is to extend the sine definition so that discontinuities don't happen if a stop command is executed late. If stop commands are all executed on time (the expected behavior), this change has no effect.

 

diff --git a/simuLines.py b/simuLines.py
index 6925cb5..cd2ccc3 100755
--- a/simuLines.py
+++ b/simuLines.py
@@ -468,7 +468,7 @@ def SignalInjection(resultobj, freqAmp):
     
     #TODO: does this command take time to send, that is needed to add to timeWindowStart and fullDuration?
     #Testing: Yes. Some fraction of a second. adding 0.1 seconds to assure smooth rampDown
-    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 1)
+    drive = awg.Sine(chan = exc_channel, freq = frequency, ampl = amp, duration = fullDuration + rampUp + rampDown + settleTime + 10)
     
     def signal_handler(signal, frame):
         '''

 

vladimir.bossilkov@LIGO.ORG - 07:33, Friday 12 July 2024 (79059)

Here's what I did:

  • Cloned simulines in my home directory
  • Copied the currently used ini file to that directory, overwriting default file [cp /ligo/groups/cal/src/simulines/simulines/newDARM_20231221/settings_h1_newDARM_scaled_by_drivealign_20231221_factor_p1.ini /ligo/home/vladimir.bossilkov/gitProjects/simulines/simulines/settings_h1.ini]
  • reran simulines on the log file [./simuLines.py -i /opt/rtcds/userapps/release/cal/common/scripts/simuLines/logs/H1/20240711T234232Z.log]

No special environment was used. Output:
2024-07-12 14:28:43,692 | WARNING | It is assumed you are parising a log file. Reconstruction of hdf5 files will use current INI file.
2024-07-12 14:28:43,692 | WARNING | If you used a different INI file for the injection you are reconstructing, you need to replace the default INI file.
2024-07-12 14:28:43,692 | WARNING | Fetching data more than a couple of months old might try to fetch from tape. Please use the NDS2_CLIENT_ALLOW_DATA_ON_TAPE=1 environment variable.
2024-07-12 14:28:43,692 | INFO | If you alter the scan parameters (ramp times, cycles run, min seconds per scan, averages), rerun the INI settings generator. DO NOT hand modify the ini file.
2024-07-12 14:28:43,693 | INFO | Parsing Log file for injection start and end timestamps
2024-07-12 14:28:43,701 | INFO | Commencing data processing.
2024-07-12 14:28:55,745 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240711T234232Z.hdf5
2024-07-12 14:29:11,685 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240711T234232Z.hdf5
2024-07-12 14:29:20,343 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240711T234232Z.hdf5
2024-07-12 14:29:29,541 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240711T234232Z.hdf5
2024-07-12 14:29:38,634 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240711T234232Z.hdf5


Seems good to me. Were you guys accidentally using some conda environment when running simulines yesterday? When running this I was in "cds-testing" (which is the default?!). I have had this error in the past due to borked environments [in particular scipy, which is the code ultimately responsible for the coherence], which is why I implemented the log parsing function.
The fact that the crash was on the coherence and not the preceding transfer function calculation rings the alarm bell that scipy is the issue. We experienced this once at LLO with a single bad conda environment that was later corrected, though I stubbornly kept running with a very old environment for a long time to make sure that error didn't come up.
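A quick way to sanity-check an environment's coherence machinery, assuming the failing step ultimately relies on scipy.signal (an assumption based on the reasoning above); this is an illustrative self-test, not part of simulines:

import numpy as np
import scipy.signal as sig

fs = 1024.0
t = np.arange(0, 64, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(t.size)

# Two noisy copies of the same 100 Hz line should be highly coherent there
f, coh = sig.coherence(x, y, fs=fs, nperseg=4096)
print(coh[np.argmin(abs(f - 100))])   # expect a value close to 1 in a healthy environment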

I ran this remotely, so I can't look at the PDF if I run 'pydarm report'.
I'll be in touch over TeamSpeak to get that resolved.

ryan.crouch@LIGO.ORG - 08:00, Friday 12 July 2024 (79061)

Attaching the calibration report

Non-image files attached to this comment
vladimir.bossilkov@LIGO.ORG - 08:06, Friday 12 July 2024 (79062)

There are a number of data points in this report that are WAY out there.

Did you guys also forget to turn off the calibration lines when you ran it?

Not marking this report as valid.

louis.dartez@LIGO.ORG - 08:34, Friday 12 July 2024 (79065)
Right, there was no expectation of this dataset being valid; the IFO was not thermalized and the cal lines remained on.

The goal of this exercise was to demonstrate that the patched simulines version at LHO can successfully drive calibration measurements, and to that end it was successful. LHO has recovered simulines functionality, and for now we can lay to rest the scary notion of regressing back to our 3-hour-long measurement scheme.
erik.vonreis@LIGO.ORG - 22:48, Friday 12 July 2024 (79089)

The run was probably done in the 'cds' environment. At LHO, 'cds' and 'cds-testing' are currently identical. I don't know the situation at LLO, but LLO typically runs with an older environment than LHO.

Since it's hard to stay pinned to fixed versions on conda-forge, it's likely that several packages are newer in the LHO cds environment than in LLO's.

H1 SUS (SUS)
rahul.kumar@LIGO.ORG - posted 11:18, Wednesday 10 July 2024 - last comment - 15:35, Thursday 11 July 2024(79003)
New settings for damping rung up violin mode ITMY mode 05 and 06

The following settings seem to be working for now; I will commit them to lscparams after a couple of IFO lock stretches.

New settings (ITMY05/06): FM5 FM6 FM7 FM10, Gain +0.01 (new phase -90 degrees); I might increase the gain later depending on how slow the damping is.

Nominal settings (ITMY05/06): FM6 FM8 FM10 Gain +0.02 (phase -30deg)

Given below are the settings I tried this morning that did not work:

1. no phase, 0.01 gain - increase
2. -30 phase, -0.01 gain - increase
3. +30 phase, 0.01 gain - increase
4. -30 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (both filters)
5. -60 phase, 0.01 gain - IY05 decreasing (both filters) IY06 increasing (only in narrow filter)

---

After talking to TJ, I have set the gain to zero in lscparams and saved it, but not loaded it since we are OBSERVING. Will load it once there is a suitable opportunity.

Images attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 16:30, Wednesday 10 July 2024 (79011)

The DARM spectra attached below show that both modes are slowly decreasing; next I will try bumping the gain up to 0.02.

Images attached to this comment
rahul.kumar@LIGO.ORG - 15:35, Thursday 11 July 2024 (79043)SUS

ITMY 05/06: FM5 FM6 FM7 FM10, Gain +0.02 has been saved in lscparams, and the violin mode Guardian has been loaded for the next lock.

H1 CDS (CAL, CDS, SUS)
erik.vonreis@LIGO.ORG - posted 10:21, Tuesday 25 June 2024 - last comment - 18:37, Thursday 11 July 2024(78644)
SUSH2A

[Dave, Erik]

Dave found that DACs in h1sush2a had been in a FIFO HIQTR state since 2024-04-09 11:33 UTC.

 

FIFO HIQTR means that the DAC buffers had more data than expected, so DAC latency would be proportionally higher than expected.

 

The models were restarted, which fixed the issue.

Comments related to this report
erik.vonreis@LIGO.ORG - 18:37, Thursday 11 July 2024 (79052)

The upper bound on sush2a latency for the first three months of O4B is 39 IOP cycles. At 2^16 cycles per second, that's a maximum of 595 microseconds.

At 1 kHz, that's 214 degrees of phase shift.

Normal latency is 3 IOP cycles, 46 microseconds, or 16 degrees of phase shift at 1 kHz.

The minimum latency while sush2a was in error was 4 cycles, 61 microseconds, or 23 degrees at 1 kHz.
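The arithmetic behind these numbers, as a small worked check (IOP cycles at 65536 Hz converted to microseconds and to phase at 1 kHz):

IOP_RATE = 2**16          # IOP cycles per second

def latency(cycles, f_signal=1e3):
    seconds = cycles / IOP_RATE
    return seconds * 1e6, 360.0 * f_signal * seconds   # microseconds, degrees of phase

for n in (3, 4, 39):      # normal, minimum-in-error, and worst-case cycle counts from above
    us, deg = latency(n)
    print(f"{n:2d} cycles -> {us:5.0f} us, {deg:5.1f} deg @ 1 kHz")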
 
