H1 CAL
louis.dartez@LIGO.ORG - posted 16:44, Tuesday 17 September 2024 - last comment - 08:52, Wednesday 18 September 2024(80155)
updated calibration to account for new ETMX L3 DACs
L. Dartez, T. Sanchez, J. Driggers, M. Pirello, J. Rollins, J. Betzwieser

The H1 calibration has been updated to account for the additional 15.2us delay due to the new LIGO DAC card used in the ETMX L3 path.
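
For a sense of scale, a pure time delay tau only contributes phase, exp(-2j*pi*f*tau); a quick back-of-the-envelope check (a sketch for illustration, not part of the calibration pipeline) shows the extra 15.2us amounts to about half a degree of actuation phase at 100 Hz:

import numpy as np

tau = 15.2e-6                        # extra ETMX L3 DAC delay [s]
f = np.array([30.0, 100.0, 300.0])   # representative DARM-band frequencies [Hz]
phase_deg = 360.0 * f * tau          # phase lag of a pure delay exp(-2j*pi*f*tau)
for fi, p in zip(f, phase_deg):
    print(f"{fi:6.1f} Hz: {p:.2f} deg")   # ~0.16, 0.55, 1.64 deg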

There was a bit of confusion and imperfect communication over how the new DACs were implemented, which contributed to some locking troubles after the maintenance period.
- First, the DACs were only installed for ETMX-L3 for now. With conflicting emails and alogs, I walked in thinking all 3 stages had their DACs swapped.
- The gains that I adjusted in the COILOUTF and ESDOUTF filter banks were not used. On L1 and L2 this is because there was no DAC swap; on L3 it's because the new DACs have a built-in gain that serves the same purpose as the ones I adjusted. As such, the '32bit DAC' FMs are ignored and can be purged.

Updating the calibration pipeline for this change required steps outside of the anticipated procedure.
Our last exported report, 20240330T211519Z, represents the state of the DARM model that is used to inform the front end and the GDS pipeline. I copied this report into 20240917T222803Z (this time stamp is not associated with any measurement files; I made it up as the time at which I was doing the copying) and updated the pydarm_H1.ini to include an additional 15.2us delay in the unknown actuation delay parameter for the X arm, bringing the total 'unknown' actuation delay to 30.2us. I then updated the id file to reflect the new report id and regenerated the report using pydarm report --regen 20240917T222803Z (I should have also used --skip-mcmc, but that's ok). Then I had to fix the id file _again_ after the generation before committing and uploading the report to LDAS.
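
For the record, the copy-and-regenerate procedure boils down to something like the following sketch (the reports path is illustrative, and the pydarm_H1.ini and id-file edits were done by hand):

import shutil, subprocess

src = '/path/to/reports/20240330T211519Z'   # last exported report (path illustrative)
dst = '/path/to/reports/20240917T222803Z'   # the hand-made report id described above
shutil.copytree(src, dst)
# edit pydarm_H1.ini in the copy: add 15.2us to the X-arm 'unknown' actuation
# delay (total 30.2us) and update the id file, then regenerate:
subprocess.run(['pydarm', 'report', '--regen', '20240917T222803Z'], check=True)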

Once the report was committed, I exported it to the front end. The front end value changes are all on the order of a percent or two due to rerunning the MCMC during the generation. Note to pydarm devs: I had to mark this report as 'valid' in order to get pydarm to export it. But I didn't want to mark it as valid, since we won't want this report to be considered as a unique measurement in the uncertainty budget. We'll need to remove the valid tag before preparing the uncertainty budget for O4b.

There was some trouble getting GDS restarted but Jamie and Jonathan jumped in to help with that. It was just an auth issue.
Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 08:52, Wednesday 18 September 2024 (80164)

To clarify L1 & L2 are driven by the new DAC. Gains don't need to be adjusted in the normal path, since they are adjusted in the new paths.

H1 General
anthony.sanchez@LIGO.ORG - posted 16:36, Tuesday 17 September 2024 (80153)
Tuesday Maintenance Ops Shift End

TITLE: 09/17 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 143Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:


This morning's Lockloss for maintenance: 2024-09-17 15:19

Relocking after the ETMX DAC swap:
TMSX was off by a lot, realigned by hand.
Turned off SEI maintenance ENV

Reverted sliders to GPS time 1410631523.
Adjusted sliders to get OSEMs back to a place where we had flashes.

Running Baffle_Align for TMSX

Using measured BPD values to center TMSX.
Old offset pitch -100.25911370237192 and yaw -107.50189039633354
New offset pitch -99.5599642875359 and yaw -108.61504480911252
H1:SUS-TMSX_M1_OPTICALIGN_P_OFFSET => -99.5599642875359
H1:SUS-TMSX_M1_OPTICALIGN_Y_OFFSET => -108.61504480911252
Turning off the test offsets to TMSX. (should be P -33.4, Y 30.5)
H1:SUS-TMSX_M1_TEST_P => OFF: OFFSET
H1:SUS-TMSX_M1_TEST_Y => OFF: OFFSET
TMSX dither alignment finished!
PZT Y TRIGGER NOT ON!
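
For reference, the offset updates above amount to EPICS channel writes like the following (a sketch using pyepics; the values are the ones quoted above, and the M1_TEST offset switches are toggled via the filter-module OFFSET buttons rather than a plain value write):

from epics import caput   # pyepics

# new TMSX M1 optic-alignment offsets derived from the measured BPD values
caput('H1:SUS-TMSX_M1_OPTICALIGN_P_OFFSET', -99.5599642875359)
caput('H1:SUS-TMSX_M1_OPTICALIGN_Y_OFFSET', -108.61504480911252)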


Running Baffle Align for ITMX

Using measured BPD values to center ITMX.
Old offset pitch -93.16362911805888 and yaw 109.68269245998749
New offset pitch -98.52206679818204 and yaw 107.62344936551601
H1:SUS-ITMX_M0_OPTICALIGN_P_OFFSET => -98.52206679818204
H1:SUS-ITMX_M0_OPTICALIGN_Y_OFFSET => 107.62344936551601
Turning off the test offsets to ITMX. (should be P -25.7, Y -19.0)
H1:SUS-ITMX_M0_TEST_P => OFF: OFFSET
H1:SUS-ITMX_M0_TEST_Y => OFF: OFFSET
ITMX dither alignment finished!
PZT Y TRIGGER NOT ON!


Baffle Align for ETMX was run,
then ETMX and TMSX were touched up by hand. Light on ALSX!!!

Locking started, reached PREP_DC_READOUT_TRANSITION

Fire Alarm! & Lockloss during the Fire drill.

Louis made some changes to the ETMX gains for L1, L2, & L3 while H1 was in DOWN after that lockloss. Please see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=80141
This caused an ALS X arm fault. We then realized that there was an error in the gain due to the DAC being 28-bit and not 32-bit; please see the alog from Marc.
Letting Marc and Louis chat, we all learned that there is already gain being applied to compensate for the extra bits, so Louis didn't need the gain changes. Thus he was adding more gain to the existing gain, which is good for guitars, but bad for IFOs.

Louis's gains have now been reverted.

This allowed us to get past locking ALS.

Poked the BS a little to catch DRMI.

NLN reached at 22:08 UTC!!!

22:12 UTC pydarm measure --run-headless bb was run from NLN_CAL_MEAS [700]

@ 23:15 UTC pydarm measure --run-headless bb was run again to compare with the first one that was taken.

Success!!! We are in Observing with the NEW DAC.




LOG:

Start Time System Name Location Laser_Haz Task Time End
23:58 SAF H1 LHO YES LVEA is laser HAZARD 18:24
15:05 HWS Camilla LVEA YES Opening the HWFS table 15:52
15:14 EE Fil LVEA & HAM Shaq Yes Helping Camilla with Power and working out in the FTCE. 17:52
15:15 VAC Gerardo Mech room n Ion pump work 15:48
15:15 EE Marc EX N DAC swap 16:35
15:22 IAS Jason LVEA laser hazard Changing farrow settings. 15:31
15:23 FAC Karen WoodShop n Technical cleaning 16:23
15:26 SEI Jim His office N BRS Damping. 17:26
15:28 FAC Chris EY N HVAC Filter swap. 17:25
15:31 FAC Contractor Arms N Outhouse maintenance. 18:31
15:38 VAC Norco CP2 (CS) n LN2 fill 17:38
16:30 PEM Genivive LVEA Yes Looking for PEM parts Accelerometer & Mic. 16:58
16:31 FAC Karen & kim LVEA Y Technical cleaning 17:11
16:54 PEM Sam & Genivive LVEA YES Looking for PEM parts 17:09
17:10 OPS TJ LVEA Yes LVEA Sweep 17:40
17:12 Access Laser Camilla , TJ, +2 LVEA yes Show Access Laser the CO2 lasers 18:12
17:54 PEM Genivive & Sam Ham Shaq N Checking PEM mic and sensors 18:36
18:09 FAC Chris EX n HVAC filter swap 19:04
H1 CDS
david.barker@LIGO.ORG - posted 16:31, Tuesday 17 September 2024 - last comment - 16:47, Tuesday 17 September 2024(80154)
h1susex new LIGO DAC needs to be added to the SWWD

If we plan to use the new LIGO 28AO32 DAC to drive ETMX L1, L2, L3 for a while, we should add this new DAC to the IOP model's software watchdog (SWWD) DACKILL list.

Currently, if the h1iopsusex SWWD is triggered for 15 mins, it will DACKILL the 18-bit and 20-bit DACs, but not the 28-bit.

This requires a restart of all the models on h1susex, but no DAQ restart is required.

Comments related to this report
jenne.driggers@LIGO.ORG - 16:47, Tuesday 17 September 2024 (80156)

Yes, I think that we can squeeze this in as a target-of-opportunity if the IFO is unlocked during business hours sometime.  We have enough other watchdogs (including the lockloss triggers that stop outputs even if guardian is not working) that I think it should be okay to not break the lock for this, or wake someone up in the middle of the night to do the restart.

EDIT: Dave just noted to me in an email that it can also wait until next Tuesday, if we don't get it in before then.

LHO General
corey.gray@LIGO.ORG - posted 16:23, Tuesday 17 September 2024 (80150)
Tues EVE Ops Transition

TITLE: 09/17 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: None
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 28mph Gusts, 24mph 5min avg
    Primary useism: 0.05 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

H1 CAL
anthony.sanchez@LIGO.ORG - posted 15:24, Tuesday 17 September 2024 - last comment - 16:22, Tuesday 17 September 2024(80149)
pydarm Broadband measurement!

notification: new test result
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T221248Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T221248Z.xml saved
diag> quit
EXIT KERNEL

2024-09-17 15:17:59,370 bb measurement complete.
2024-09-17 15:17:59,370 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T221248Z.xml
2024-09-17 15:17:59,371 all measurements complete.

Images attached to this report
Comments related to this report
anthony.sanchez@LIGO.ORG - 16:22, Tuesday 17 September 2024 (80151)

Second broadband measurement for comparison:

notification: new test result
notification: end of measurement
notification: end of test
diag> save /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T231505Z.xml
/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T231505Z.xml saved
diag> quit
EXIT KERNEL

2024-09-17 16:20:16,365 bb measurement complete.
2024-09-17 16:20:16,365 bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240917T231505Z.xml
2024-09-17 16:20:16,365 all measurements complete.

Images attached to this comment
H1 ISC (ISC)
marc.pirello@LIGO.ORG - posted 15:12, Tuesday 17 September 2024 - last comment - 20:25, Sunday 03 November 2024(80147)
SUS-ETMX Ligo DAC 32 (LD32) testing at EX (continued)

Building on work last week, we installed a 2nd PI AI chassis (S1500301) in order to keep the PI signals separate from the ESD driver signals.  Original PI AI chassis S1500299.

We routed LD32 Bank 0 through the first PI AI chassis to the L3 ESD drive, while keeping the old ESD driver signal driving the PI through the new PI AI chassis.

We routed the LD32 Bank 1 to the L2 & L1 suspension drive.

We did not route LD32 Bank 2 or Bank 3 to any suspensions.  The M0 and R0 signals are still being driven by the 18 bit DACs.

The testing did not go as smoothly as planned: a watchdog on DAC slot 5 (the L1 & L2 drive 20-bit DAC) continuously tripped the ESD reset line.  We solved this by attaching that open DAC port (slot 5) to the PI AI chassis to clear the WD error.

Looks like we made it to observing.

F. Clara, R. McCarthy, F. Mera, M. Pirello, D. Sigg

Comments related to this report
jenne.driggers@LIGO.ORG - 17:54, Tuesday 17 September 2024 (80157)DetChar-Request

Part of the implication of this alog is that the new LIGO DAC is currently installed and in use for the DARM actuator suspension (the L3 stage of ETMX).  Louis and the calibration team have taken the changes into account (see, e.g., alog 80155). 

The vision as I understand it is to use this new DAC for at least a few weeks, with the goal of collecting some information on how it affects our data quality.  Are there new lines?  Fewer lines?  A change in glitch rate?  I don't know that anyone has reached out to DetChar to flag that this change was coming, but now that it's in place, it would be helpful (after we've had some data collected) for some DetChar studies to take place, to help improve the design of this new DAC (that I believe is a candidate for installation everywhere for O5).

tabata.ferreira@LIGO.ORG - 20:25, Sunday 03 November 2024 (81042)DetChar

Analysis of glitch rate:

We selected Omicron transients during observing time across all frequencies and divided the analysis into two cases: (1) rates calculated using glitches with SNR>6.5, and (2) rates calculated using glitches with SNR>5. The daily glitch rate for transients with SNR greater than 6.5 is shown in Figure 1, with no significant difference observed before and after September 17th. In contrast, Figure 2, which includes all Omicron transients with SNR>5, shows a higher daily glitch rate after September 17th.

The rate was calculated by dividing the number of glitches per day by the daily observing time in hours.
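
A minimal sketch of that rate computation, assuming per-glitch day labels and SNRs (e.g. from an Omicron trigger table) plus a map of daily observing hours are already in hand:

import numpy as np

def daily_glitch_rate(glitch_day, glitch_snr, observing_hours, snr_min):
    """Glitches per observing hour for each day; observing_hours maps day -> hours."""
    glitch_day = np.asarray(glitch_day)
    glitch_snr = np.asarray(glitch_snr)
    rates = {}
    for day, hours in observing_hours.items():
        n = np.sum((glitch_day == day) & (glitch_snr > snr_min))
        rates[day] = n / hours if hours > 0 else np.nan
    return rates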

Images attached to this comment
LHO General
tyler.guidry@LIGO.ORG - posted 13:36, Tuesday 17 September 2024 (80143)
LHO Fire Drill
Per FAMIS request id 26936, an annual fire drill was performed today at 1:01 PDT. All regular fire alarm systems sustained function for 5 minutes at LEXC, LSB, and OSB. This included disabling all HVAC functions at the corner station; as such, a small temperature increase may be seen across the LVEA. I have verified all systems resumed normal operation after the alarms were disabled and the system reset. Expect temperatures to correct and return to normal shortly. 

R. McCarthy, T. Guidry, E. Otterman
H1 AOS (CAL)
louis.dartez@LIGO.ORG - posted 13:25, Tuesday 17 September 2024 - last comment - 14:30, Tuesday 17 September 2024(80141)
Engaged new COILOUTF filter gains for 32bit DACs on ETMX
The IFO was kicked back to DOWN while approaching CHECK_VIOLINS_BEFORE_POWERUP during the fire alarm. I kept the IFO in DOWN for a moment while I engaged the filter modules with the 32bitDAC gains mentioned in LHO:80017.

Screenshots attached.
Images attached to this report
Comments related to this report
louis.dartez@LIGO.ORG - 13:40, Tuesday 17 September 2024 (80144)
I saved the filter module changes in SDF. We'll need to revert if we end up swapping the DACs back out.
louis.dartez@LIGO.ORG - 14:09, Tuesday 17 September 2024 (80145)
It turns out that the new LIGODACs were only put in place for L3, not L1-L3. I've reverted my FM changes to L1 and L2. 

Furthermore, Marc explained to me that the new 32-bit DACs are effectively 28-bit DACs, so I've adjusted the gains in the L3 ESDOUTF filters to 2**10 from 2**14.
Images attached to this comment
louis.dartez@LIGO.ORG - 14:30, Tuesday 17 September 2024 (80146)
From discussion with Marc, it turns out that according to LHO:80023 there is a gain of 275.31 being applied to the output of the L3 LIGODACs. This gain accounts for the difference between the 20bit DACs and the new LIGO-DAC. A screenshot showing these gains is attached. 

We are leaving the gain of 4 engaged in the ESDOUTF filter banks for ETMX-L3, as this gain gets us from the old 18-bit configuration to the 20-bit configuration. We are *not* engaging the '32bitDAC' FMs I prepared last week, since the gain of 275.31 being applied by the DACs already takes care of getting us from the 20-bit configuration to the 32-bit (28-bit?) configuration.
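
Spelling out the bit arithmetic with the numbers quoted in this thread (the interpretation of the leftover factor is an assumption):

gain_esdoutf = 4          # engaged FM: 2**2, old 18-bit -> 20-bit configuration
gain_ligodac = 275.31     # applied inside the new DAC path (LHO:80023)
total = gain_esdoutf * gain_ligodac
print(total, 2**10)       # ~1101.2 vs 1024: close to the pure 18->28 bit shift of 2**10;
                          # the remaining ~7.5% is presumably a full-scale range factor (assumption)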

Jenne has accepted the FM changes in SDF.
Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 13:05, Tuesday 17 September 2024 - last comment - 13:15, Tuesday 17 September 2024(80140)
fire alarm!!!

Fire alarm. Assembling near the staging building.

Comments related to this report
anthony.sanchez@LIGO.ORG - 13:15, Tuesday 17 September 2024 (80142)

Fire Drill ...
Lost lock at PREP_DC_READOUT_TRANSITION.

H1 CDS
david.barker@LIGO.ORG - posted 12:36, Tuesday 17 September 2024 - last comment - 12:49, Tuesday 17 September 2024(80138)
CDS Maintenance Summary: Tuesday 17th September 2024

WP12091 Add new TEST_NOTIFY Guardian Node

TJ, Dave:

TJ created the new node and the new H1EDCU_GRD.ini. I updated H1EPICS_GRD.ini. DAQ+EDC restart needed.

WP12087 Add VACSTAT gauge channels to DAQ.

Dave:

I added the gauge metadata channels to H1EPICS_VACSTAT.ini. DAQ+EDC restart needed.

This failed due to channel name lengths exceeding 54 characters. This change was then backed out.

Verify PEM signal cabling to h1iopcdsh8 IO Chassis

Fil, Dave:

Fil verified the PEM cabling was as per h1pemh8. See Fil's alog for details.

New LIGO 28AO32 DAC driving ETMX L2 and ESD signals

Daniel, Richard, Marc, Fil, TJ, Tony, Dave, Erik, Jonathan, EJ

The 28AO32 DAC was wired to drive the SUS ETMX channels that had been driven by the 20-bit DACs. Existing AI chassis were used for these drives to avoid one more change.

Two additional AI chassis were installed in this rack. One to keep the first 20bit DAC in a good state, the second to drive the h1susetmxpi DAC drives (last two channels of second 20bit DAC).

DAQ Restart

Jonathan, Dave:

The DAQ + EDC were restarted for the Guardian and VACSTAT changes. This was a very messy restart.

DC0 did not restart; it found that the new VACSTAT channel names were too long.

I reverted the change to H1EPICS_VACSTAT and we did a second restart of the DAQ 0-leg and EDC.

FW0 spontaneously restarted itself after only writing a few full frames.

When we were sure FW0 was stable again, we restarted the 1-leg with no issues.

The name-too-long issue should have been caught by my check_daq_channels_validity.py code. I subsequently found a bug in the code: it was checking the running DAQ INI files rather than the new ones. This has been fixed and tested.
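
A minimal sketch of such a check, assuming the DAQ INI convention where each channel name is a section header (the 54-character limit is the one quoted above; check_daq_channels_validity.py itself is site code, this is just illustrative):

import configparser, sys

MAX_NAME_LEN = 54   # the channel-name length limit quoted above

def check_channel_name_lengths(ini_path):
    cfg = configparser.ConfigParser(strict=False)
    cfg.read(ini_path)
    too_long = [name for name in cfg.sections() if len(name) > MAX_NAME_LEN]
    for name in too_long:
        print(f"channel name too long ({len(name)} chars): {name}")
    return not too_long

if __name__ == '__main__':
    ok = all(check_channel_name_lengths(path) for path in sys.argv[1:])
    sys.exit(0 if ok else 1)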

Comments related to this report
david.barker@LIGO.ORG - 12:49, Tuesday 17 September 2024 (80139)

Tue17Sep2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:14:57 h1susauxb123 h1edc[DAQ] <<< First EDC restart, did not notice DC0 was staying down
08:18:43 h1daqdc0     [DAQ]  <<< Effectively second 0-leg restart after fixing H1EDC.ini
08:18:49 h1daqfw0     [DAQ]
08:18:49 h1daqnds0    [DAQ]
08:18:49 h1daqtw0     [DAQ]
08:19:06 h1daqgds0    [DAQ]


08:19:15 h1susauxb123 h1edc[DAQ]  <<< second EDC restart with fixed H1EDC.ini


08:21:38 h1daqfw0     [DAQ] <<< Spontaneous restart of FW0


08:30:34 h1daqdc1     [DAQ] <<< 1-leg restart
08:30:45 h1daqfw1     [DAQ]
08:30:46 h1daqtw1     [DAQ]
08:30:47 h1daqnds1    [DAQ]
08:30:56 h1daqgds1    [DAQ]
 

H1 AOS
filiberto.clara@LIGO.ORG - posted 12:10, Tuesday 17 September 2024 (80137)
FCES PEM Cabling

WP 12089

All FCES PEM cabling is now labeled. PEM sensors are connected to PEM chassis as follows:

  1. CH1 - HAM8 Microphone
  2. CH2 - HAM8 Accelerometer X
  3. CH3 - HAM8 Accelerometer Y
  4. CH4 - HAM8 Accelerometer Z
  5. CH5 - FCES Accelerometer Beam Tube
  6. CH6 - HAM8 Accelerometer Floor
  7. CH7 - HAM8 Mag X
  8. CH8 - HAM8 Mag Y
  9. CH9 - HAM8 Mag Z

Worth noting that the PEM AA Chassis shares ADC channels with the ISC AA Chassis. A special rear interface board is installed in the PEM AA Chassis.

D. Barker, F. Clara

H1 CDS
david.barker@LIGO.ORG - posted 12:01, Tuesday 17 September 2024 (80136)
28AO32 DAC MEDM widened to show full numbers

I've hand-edited the LIGO DAC MEDMs for the h1susetmx and h1iopsusex models so the large drive signals are no longer clipped at both ends.

Images attached to this report
H1 TCS
camilla.compton@LIGO.ORG - posted 08:55, Tuesday 17 September 2024 - last comment - 12:59, Wednesday 02 October 2024(80132)
HWS Masks Reinstalled and code restarted.

WP 12080: After turning the HWS lasers back on in 80043, today I reinstalled the masks. To get the right amount of light on the CCD camera, ITMX is set to 1Hz and ITMY to 20Hz. New references were taken at 15:50 UTC, 30 minutes after we lost lock, so the IFO wouldn't have been 100% cold. 

Comments related to this report
camilla.compton@LIGO.ORG - 15:48, Tuesday 17 September 2024 (80148)

HWS is aligned and working well.

Plots are attached comparing 40 seconds, 120 seconds and 20 minutes after we power to 60W input.

Comparing to March (76385), the ITMX main point absorber looks to be heating up more than it has in the past, but the main IFO beam also looks a little lower and more offset than in March (the origin cross is the same), so that could be causing the point absorber to look different. The ITMX P2L is the same as March and Y2L is 0.1 higher.

Images attached to this comment
camilla.compton@LIGO.ORG - 12:59, Wednesday 02 October 2024 (80431)

Noticed the ITMY HWS code had stopped, error attached. While we were out of observing today, I restarted it.

Images attached to this comment