H1 General (CDS)
anthony.sanchez@LIGO.ORG - posted 13:46, Thursday 26 September 2024 (80313)
SDF Diffs Accepted

After the EtherCAT reboot we had a few SDF Diffs that needed to be accepted.
Screenshot.

Images attached to this report
H1 ISC (CDS, ISC)
keita.kawabe@LIGO.ORG - posted 12:24, Thursday 26 September 2024 - last comment - 14:54, Thursday 26 September 2024(80309)
OMC whitening switching issue (Tony, TJ, JoeB, Sheila, Fil, Patrick, Daniel, Keita among others)

This morning Tony and TJ had a hard time locking the OMC.

We've found that the OMC DCPD A and B outputs were very asymmetric only when there was a fast transient (1st attachment), but not when the OMC length was slowly brought close to the resonance (2nd attachment), which suggested a whitening problem.

The transfer function from DCPD A to B suggested that the switchable hardware whitening was ON for DCPD_A and OFF for DCPD_B when it was supposed to be OFF for both. The 3rd attachment shows the transfer function from DCPD_A to B, and the 4th attachment shows the anti-whitening filter shape.

Switching ON the anti-whitening only for DCPD_A made the frequency response flat. Trying to switch the analog whitening ON and OFF by toggling H1:OMC-DCPD_A_GAINTOGGLE didn't change the hardware whitening status; it is completely stuck.
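For reference, the kind of flatness check described above can be sketched as below. This is illustrative only (not the template used in the control room); the sample rate, segment length, and synthetic data are placeholders for the real DCPD time series.

import numpy as np
from scipy.signal import csd, welch

fs = 16384          # assumed DCPD sample rate (Hz); placeholder
rng = np.random.default_rng(0)
# Placeholder time series; in practice these would be the DCPD A and B signals.
dcpd_a = rng.standard_normal(32 * fs)
dcpd_b = dcpd_a + 0.01 * rng.standard_normal(32 * fs)

def transfer_function(x, y, fs, nperseg=4096):
    """H1 estimator of the transfer function from x to y: CSD(x, y) / PSD(x)."""
    f, p_xy = csd(x, y, fs=fs, nperseg=nperseg)
    _, p_xx = welch(x, fs=fs, nperseg=nperseg)
    return f, p_xy / p_xx

f, tf = transfer_function(dcpd_a, dcpd_b, fs)
band = (f > 10) & (f < 1000)
# With the whitening in the same state on both channels the magnitude should be
# flat in band; a whitening-shaped slope means one switchable stage is stuck ON.
print("|TF| max/min in band:", np.abs(tf[band]).max() / np.abs(tf[band]).min())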

We tried to lock the IFO using only DCPD_B, but the IFO unlocked for some reason.

After the IFO lost lock, people on the floor found that the problem is in the whitening chassis, not the BIO. It's not clear if we can fix the board in the chassis (which is preferable) or have to swap the whitening chassis (less preferable, as the calibration group would need to measure the analog TF and generate a compensation filter).

We'll update as we make progress.

 

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 12:57, Thursday 26 September 2024 (80310)

Fernando, Fil, Daniel

DCPD whitening chassis fixed.

We diagnosed a broken photocoupler in the DCPD whitening chassis. Since the photocoupler is located on the front interface board, we chose to swap this board with the one from the spare. This means the whitening transfer function should not have changed. Since we swapped the front interface board together with the front panel, the serial number of the chassis has (temporarily) changed to that of the spare.

The in-vacuum DCPD amplifiers were powered off for 30-60 minutes while the repair took place, so they need some time to thermalize.

filiberto.clara@LIGO.ORG - 13:32, Thursday 26 September 2024 (80312)

Unit installed is S2300003. The front panel and front interface board were removed/borrowed from S2300004.

louis.dartez@LIGO.ORG - 14:54, Thursday 26 September 2024 (80316)CAL
N.B. S2300004 and S2300002 have been characterized and fit already. See LHO:71763 and LHO:78072 for the S2300004 and S2300002 zpk fits, respectively.

Should the OMC DCPD Whitening chassis need to be fully swapped, we already have the information we need to install the corresponding compensation filters in the front end and in the pyDARM model to accommodate that change. This, of course, rides on the expectation that the electronics have not materially changed in their response in the interim.

H1 General
anthony.sanchez@LIGO.ORG - posted 11:27, Thursday 26 September 2024 (80307)
Thurs Mid Shift update

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 6mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.22 μm/s
QUICK SUMMARY:

Locking notes:
Ran an initial alignment.
After initial alignment I started the locking process, and bounced to PRMI 3 times even though I had good DRMI flashes.
On the 3rd time of jumping to PRMI, I manually took the guardian back to Align_recycling mirrors, then requested NLN. That took us to DRMI, then I touched up SRM in yaw.
 
OMC issue in Prep_DC_READOUT_TRANSITION.
Tridents looking good, but the TUNE_OFFSETS OMC_LOCK Guardian state tunes away from the desired value.
TJ tried locking by hand...
OMC DCPDs are not balanced? (I'm not sure what they mean by this.)
Sheila suggests that it might be a whitening issue and Keita confirms this.
We are trying to switch which OMC DCPD whitening we are using. We are now using DCPD B, and plan to switch back to DCPD A later on. An effort was made to keep the configuration the same for the CAL team, so swapping the whitening chassis was not the best option. We are stuck in the "High State", which makes the output of the OMC DCPDs high. But the High State is where we need it to be for NLN.
Sheila and Keita did the DCPD switching to get around the whitening issue, just to have a lockloss in the next ISC_LOCK state, DARM_TO_DC_READOUT. :(

Relocking again got to PRMI....

17:43 UTC EtherCAT failure; Fernando, Fil, and Sigg are working on resolving the issue.

Current H1 status is Down for Corrective maintenance.

H1 CAL
louis.dartez@LIGO.ORG - posted 11:16, Thursday 26 September 2024 (80291)
Procedural issues in LHO Calibration this week
In the past few weeks we have seen rocky performance out of the Calibration pipeline and its IFO-tracking capabilities. Much, but not all, of this is due to [my] user error.

Tuesday's bad calibration state is a result of my mishandling of the recent drivealign L2L gain changes for the ETMX TST stage (LHO:78403,
LHO:78425, LHO:78555, LHO:79841).

The current practice adopted by LHO with respect to these gain changes is the following:

1. Identify that KAPPA_TST has drifted from 1 by some appreciable amount (1.5-3%), presumably due to ESD charging effects.
2. Calculate the necessary DRIVEALIGN gain adjustment to cancel out the change in ESD actuation strength. This is done in the DRIVEALIGN bank so that it's downstream enough to only affect the control signal being sent to the ESD. It's also placed downstream of the calibration TST excitation point.
3. Adjust the DRIVEALIGN gain by the calculated amount (if kappaTST has drifted +1% then this would correspond to a -1% change in the DRIVEALIGN gain).
3a. Do not propagate the new drivealign gain to CAL-CS.
3b. Do not propagate the new drivealign gain to the pyDARM ini model.

After step 3 above it should be as if the IFO is back in the state it was in when the last calibration update took place, i.e. as if no ESD charging had taken place (since it is being canceled out by the DRIVEALIGN gain adjustments).
It's also worth noting that after these adjustments the SUS-ETMX drivealign gain and the CAL-CS ETMX drivealign gain will no longer be copies of each other (see image below).
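To make the arithmetic in steps 1-3 concrete, here is an illustrative sketch with hypothetical numbers; this is not the on-site adjustment script, just the bookkeeping it describes.

# Illustrative arithmetic only, not the on-site script: the SUS DRIVEALIGN L2L
# gain is scaled to cancel the measured KAPPA_TST drift (steps 1-3 above).
kappa_tst = 1.010        # hypothetical: KAPPA_TST has drifted +1.0% from 1
sus_gain = 184.65        # hypothetical current SUS-ETMX L3 DRIVEALIGN L2L gain

# A +1% drift in actuation strength is cancelled by a -1% gain change:
new_sus_gain = sus_gain * (1.0 - (kappa_tst - 1.0))
print(f"DRIVEALIGN gain: {sus_gain:.3f} -> {new_sus_gain:.3f}")

# Per steps 3a/3b, this new value is NOT copied into CAL-CS or the pyDARM ini,
# so the SUS and CAL-CS gains intentionally diverge after the adjustment.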

The reasoning behind 3a and 3b above is that, by using these adjustments to counteract IFO changes (in this case ESD drift) since the last calibration, operators and commissioners in the control room can comfortably make the change without having to invoke the entire calibration pipeline. The other approach, adopted by LLO, is to propagate the gain changes to both CAL-CS and pyDARM each time and follow up with a fresh calibration push. That approach leaves less to 'be remembered', since CAL-CS, SUS, and pyDARM are always in sync, but comes at the cost of having to turn a larger crank each time there is a change.

Somewhere along the way I updated the TST drivealign gain parameter in the pyDARM model even though I shouldn't have. At this point, I don't recall if I was confused because the two sites operate differently or if I was just running a test and left this parameter changed in the model template file by accident and subsequently forgot about it. In any case, the drivealign gain parameter change made its way through along with the actuation delay adjustments I made to compensate for both the new ETMX DACs and for residual phase delays that haven't been properly compensated for recently (LHO:80270). This happened in commit 0e8fad of the H1 ifo repo. I should have caught this when inspecting the diff before pushing the commit but I didn't. I have since reverted this change (H1 ifo commit 41c516).

During the maintenance period on Tuesday, I took advantage of the fact that the IFO was down to update the calibration pipeline to account for all of the residual delays in the actuation path we hadn't been properly compensating for (LHO:80270). This is something that I've done several times before; a combination of the fact that the calibration pipeline has been working so well in O4 and that the phase delay changes I was instituting were minor contributed to my expectation that we would come back online to a better calibrated instrument. This was wrong. What I'd actually done was install a calibration configuration in which the CAL-CS drivealign gain and the pyDARM model's drivealign gain parameter were different. This is bad because pyDARM generates FIR filters that are used by the downstream GDS pipeline; those filters are embedded with knowledge of what's in CAL-CS by way of the parameters in the model file. In short, CAL-CS was doing one thing and GDS was correcting for another. 

-- 

Where do we stand?

At the next available opportunity, we will take another calibration measurement suite and use it to reset the calibration one more time, now that we know what went wrong and how to fix it. I've uploaded a comparison of a few broadband pcal measurements (image link). The blue curve is the current state of the calibration error. The red curve was the calibration state during the high-profile event earlier this week. The brown curve is from last week's Thursday calibration measurement suite, taken as part of the regularly scheduled measurements.

--
Moving forward, I and others in the Cal group will need to adhere more strictly to the procedures we've already had in place: 
1. double check that any changes include only what we intend at each step
2. commit all changes to any report in place immediately and include a useful log message (we also need to fix our internal tools to handle the report git repos properly)
3. only update the calibration while there is a thermalized IFO that can be used to confirm that things come back properly, or, if done while the IFO is down, require Cal group sign-off before going to observing
Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 10:55, Thursday 26 September 2024 - last comment - 13:13, Thursday 26 September 2024(80306)
Slow controls system down for investigation, alarms bypassed

Daniel has the Beckhoff slow controls system offline for investigation.

I've bypassed the following alarms:

Bypass will expire:
Thu Sep 26 10:54:08 PM PDT 2024
For channel(s):
    H1:PEM-C_CER_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK1_TEMPERATURE
    H1:PEM-C_MSR_RACK2_TEMPERATURE
    H1:PEM-C_SUP_RACK1_TEMPERATURE
 

Comments related to this report
david.barker@LIGO.ORG - 13:13, Thursday 26 September 2024 (80311)

All alarms are active again.

H1 CDS
david.barker@LIGO.ORG - posted 09:49, Thursday 26 September 2024 (80305)
New h1lsc filter loaded

I loaded the new H1LSC.txt filter file into h1lsc. This has added Elenna's "new0926" filter to PRCLFF. This filter is currently turned off and has not been switched on recently.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 08:26, Thursday 26 September 2024 (80303)
Thu CP1 Fill

Thu Sep 26 08:14:57 2024 INFO: Fill completed in 14min 52secs

Jordan confirmed a good fill curbside.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 07:56, Thursday 26 September 2024 (80301)
Thursday OPS Day Shift Start

TITLE: 09/26 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.23 μm/s
QUICK SUMMARY:

When I arrived the IFO was trying to lock itself after a lockloss from NLN.
Unknown Lockloss

While getting the screenshots and typing this up, the IFO went to PRMI twice, so I decided to run an initial alignment after that big gusty wind storm yesterday.
 

Images attached to this report
X1 SUS (SUS)
ryan.crouch@LIGO.ORG - posted 07:51, Thursday 26 September 2024 - last comment - 08:21, Thursday 26 September 2024(80296)
Comparison of all fully assembled A+ HRTS transfer functions

A follow-up to alog78711 with all of the completed assemblies (11 of 12).

I made 3 comparison plots: all suspensions (the legend on the 1st page maps measurement date to the corresponding sus S/N), both of the suspended versions, and all 9 of the freestanding versions (there is 1 left to be finished; we're waiting on a part rework/repair).

Non-image files attached to this report
Comments related to this report
rahul.kumar@LIGO.ORG - 08:21, Thursday 26 September 2024 (80302)

We are still working on fine tuning the results for two HRTS suspensions, measured on 08_30_2100 and 09_05_1800 (dark green and purple lines in the plot shown here), especially for the V and R DOFs. The magnitude (pages 03 and 04 here) is lower than the rest of the batch and there is also some cross coupling from the R DOF.

H1 ISC
elenna.capote@LIGO.ORG - posted 03:15, Thursday 26 September 2024 (80287)
PRCL coupling with time, new feedforward fit

Continuing in the investigation of the PRCL coupling and the attempts at applying a PRCL feedforward...

Since we have performed several injections into PRCL over the last few weeks, we can track the PRCL coupling. I made this plot comparing the DARM/PRCL transfer function from three different times: an injection performed before testing a new feedforward on Sept 16, an injection from the noise budget, and an injection done while updating the sensing matrix for SRCL and PRCL after rephasing POP9, Sept 23. The PRCL coupling is relatively stable from all of these tests.

However, the feedforward we have tried has not been working, which has been very confusing. I think I have discovered the reason why: in addition to the measure of the DARM/PRCL transfer function, there is a measure of the "preshaping", which includes the high pass filter we apply to the feedforward. Looking at my code, I found that I had been using the SRCL preshaping to calculate the PRCL coupling, which uses a different high pass filter (comparison of the three high pass filters). I should have been using the MICH preshaping measurement instead, since MICH and PRCL use the same high pass, and everything downstream from that point is the same. Sorry everyone for that mistake.
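Schematically, the bookkeeping described above looks like the following. This is a sketch only, not the actual fitting code; the array names are placeholders for the measurement exports.

import numpy as np

def feedforward_target(coupling_tf, preshaping_tf):
    """Target FF filter: cancel the measured DARM/PRCL coupling once it has been
    shaped by the preshaping path (the FF high pass plus everything downstream)."""
    return -np.asarray(coupling_tf) / np.asarray(preshaping_tf)

# darm_over_prcl, mich_preshaping, srcl_preshaping would be complex TFs on a
# common frequency vector. PRCL shares its high pass with MICH, so the MICH
# preshaping is the correct divisor; dividing by the SRCL preshaping (different
# high pass) gives the wrong target -- the mistake described above.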

So, when using the proper preshaping, a PRCL feedforward fit should work. Since we already have this data, I ran a quick fit of the feedforward and compared it to previous fits. Camilla's fit from July 11 was successful, and I can see that the low frequency gain and phase are very similar to the new fit. I believe one reason her fit no longer works is that the coupling shape up to about 70 Hz has changed. Meanwhile, the incorrect fits I tried recently are very different in magnitude and phase (shown in green as "old incorrect fit"), explaining why they failed.

This new fit requires about a factor of 5 more gain than Camilla's fit above 200 Hz, which I think should be ok. The PRCL injections see a large bandstop around 250 Hz, so we don't measure the coupling there. If we can turn off these bandstops we can probably get a much better measurement up to 1000 Hz, which will help this shaping.

Overall, the fit should subtract the noise by up to a factor of 10 from 10-30 Hz and by a factor of 3 on average up to 100 Hz. Based on our recent noise budget projections, which show that PRCL directly limits DARM noise up to 30 Hz, this will have a positive effect on the low frequency sensitivity.

The new filter is placed in the PRCL FF bank in FM6, labeled "new0926". Since we are in observing, I didn't load the model, just saved the filter. To test, load the filter bank and engage FM6 with a gain of 1, along with the FM10 highpass.

I recommend that this filter is tested along with the commissioning work for Thursday. It would be useful to have an injection before the filter is applied and after the filter is applied, each with enough averages to provide a good fit. If this feedforward filter doesn't work, we can refit. If it does, we can look at fitting iteratively for further subtraction. Please try to get about 30 averages for both measurements, and if possible, boost the injection strength above 50 Hz to get a bit better coherence.

If the feedforward doesn't work, please also additionally run a PRCL preshaping measurement. This means running with the PRCL FF input set OFF, with the high pass filter ON and a gain of 1. I think we have a template for this in the LSC feedforward folder. If not, the preshaping template of the MICH preshaping ("MICHFF") can be repurposed with the appropriate channels. This will help avoid the confusing mistake I previously made.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 22:00, Wednesday 25 September 2024 (80300)
OPS Eve Shift Summary

TITLE: 09/26 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY:

IFO is in NLN and OBSERVING since 03:09 UTC (2hr 10 min lock)

Wind was actively inhibiting locking for the first few hours of the shift. Gusts ranged from 35-48 mph. There were a few times when they dipped below 35 mph, at which point I began to lock, to no avail, losing lock usually at CHECK_IR or DRMI.

There was a lull with <20 mph winds ~1:40 UTC. We got all the way to MICH_FRINGES but the IFO was not able to lock PRMI or DRMI, so I began INITIAL_ALIGNMENT. From this point on, locking was fully automatic and we reached NLN approximately 1.5 hrs after the initial alignment started.

Other:

LOG:

None

H1 SUS
oli.patane@LIGO.ORG - posted 19:06, Wednesday 25 September 2024 (80299)
PI Ringups Since OFI Vent End

Since we've gotten back from the OFI vent, we've had chunks of time where we struggle with PIs ringing up (attachment1). During that time, we've made changes to filter bank settings and adjusted the max power in our efforts to tame them.

Timeline of PI ringups and associated changes (all times in UTC)
August 22-23rd (alog79673)
    - Multiple locklosses from PI24 (lockloss1,lockloss2,lockloss3)
    - Changes made to PI damping (alog79665)
August 24
    - nominal max power setting changed to 61W so as to bypass it for the weekend
August 27 (alog79753)
    - PI31 ringup (no lockloss)
August 31 (alog79836)
    - PI28 & PI29 ring up and cause lockloss
September 02 (alog79860)
    - PI28/29 ring up and caused lockloss
    - nominal max power changed back to 60W
    - PI28/29 ringup causing lockloss
    - PI28/29 ringup causing lockloss
    - PI28/29 ringup causing lockloss
        - These last three locklosses were with 60W
        - These four locklosses all happened one after the other.
September 17
    - PI24 ringup and lockloss
September 21
    - PI24 ringup and lockloss
September 23
    - PI24 ringup and lockloss
September 25
    - PI24 ringup and lockloss
    - PI24 ringup and lockloss
    - PI24 ringup and lockloss

Something weird, first noticed by TJ during the 17:52 UTC ringup today, was that PI24 looked a lot noisier than usual. I looked back at the previous PI24 ringups (attachment2) and noticed that only the ringups from today (September 25th) are this noisy compared to the other PI24 locklosses we've seen since the vent.
After this latest lockloss, the nominal max power has been increased back up to 61W (alog80295). We are discussing changes to ring heater power since it looks like the max power changes are enough to stop one mode from ringing up, but might be causing the others to start ringing up.

 

Images attached to this report
H1 AOS
jason.oberling@LIGO.ORG - posted 13:44, Tuesday 24 September 2024 - last comment - 11:18, Thursday 26 September 2024(80271)
SR3 Optical Lever

J. Oberling, O. Patane

Today we started to re-center the SR3 optical lever after SR3 alignment was reverted to its pre-April alignment.  That's not quite how it went down, however...

We started by hooking up the motor driver and moving the QPD around (via the crossed translation stages it is attached to), and could not see any improvement in the OpLev signal.  While moving the horizontal translation stage it suddenly stopped and started making a loud grinding noise, like it had hit its, or a, limit.  Not liking the sound of that, we set about figuring out fall protection so we could climb on top of HAM4 to investigate.  While the fall protection was getting figured out we took a look at the laser and found it dead.  No light, no life, all dead.  So we grabbed a spare laser from the Optics Lab and installed it (did not turn it on yet).

Once the fall protection was figured out I climbed on top of HAM4 and opened the OpLev receiver.  I couldn't visually see anything wrong with the stage.  It was near the center of its travel range, and nothing else looked like it was hung up.  I removed the QPD plate and the vertically mounted translation stage to get a better view of the stuck stage, and could still see nothing wrong.  Oli tried moving the stage with the driver and it was still making the loud noise, and the stage was not moving.  So it was well and truly stuck.  We grabbed one of the two spare translation stages from the EE shop (where Fernando was testing the remote OpLev recentering setup), tested it to make sure it worked (it did!), and installed it in the SR3 OpLev receiver.  The whole receiver was reassembled and the laser was turned on.  Oli slowly turned up the laser power while I watched for the beam, and once it was bright enough Oli then moved the translation stages to roughly center it on the QPD.

Something interesting: as Oli was turning up the laser power it would occasionally flash bright and then return to the brightness it was at before the flash.  They got it bright enough to see a SUM count of ~3k, and then re-centered the OpLev.  At this point I closed up the receiver and came down from the chamber.  I turned the laser power up to return the SUM counts to the ~20k it was at before the SR3 alignment shift and saw the SUM counts jump just like the beam would flash.  This happened early in the power adjustment (for example: started at ~3k SUM, adjusted up and saw a flash to ~15k, then back down to ~6k) but leveled off once the power was higher (I saw no jumps once the SUM counts were above 15k or so).  Maybe some oddness with a low injection current for the laser diode?  Not sure.  The OpLev is currently reading ~20k SUM counts and looks OK, but we'll keep an eye out to see if it remains stable or starts behaving oddly.

The SR3 optical lever is now fixed and working again.

New laser SN is 197-3, old laser SN is 104-1. SN of the new translation stage is 10371.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 11:18, Thursday 26 September 2024 (80308)

Forgot to add: once the translation stage became stuck, the driver was still recording movement, as the counts would change when we tried to move the stage, but the stage was clearly not moving. So the motor encoder for the stage was working while the stage itself was stuck.

H1 DetChar (DetChar, DetChar-Request)
gabriele.vajente@LIGO.ORG - posted 11:27, Wednesday 18 September 2024 - last comment - 17:47, Wednesday 02 October 2024(80165)
Scattered light at multiples of 11.6 Hz

Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. These are visible in the summary pages too, around hour 12 of this plot.

Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in a period with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.

Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find out if this noise is correlated with some environmental or other changes.

Images attached to this report
Comments related to this report
jane.glanzer@LIGO.ORG - 14:29, Thursday 26 September 2024 (80314)DetChar

I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found. 

I first started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can re-visit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured maybe this could at least narrow down a station where there is more noise at these frequencies.

What I found was:

  1. Didn't see excess noise in the EY or EX channels at ~11.6 Hz or at the second/third harmonics.
  2. Many CS channels had some excess noise around 11.6 Hz, less at the second/third harmonics.
  3. However, of the CS channels that DID have excess noise around 11.6 Hz and 23.2 Hz, HAM8 area popped up the most. Specifically these channels: H1:PEM-FCES_ACC_BEAMTUBE_FCTUBE_X_DQ, H1:ISI-HAM8_BLND_GS13Z_IN1_DQ, H1:ISI-HAM8_BLND_GS13X_IN1_DQ.
  4. HAM3 also popped up, and the Hveto results for this day had some glitches witnessed by H1:HPI-HAM3_BLND_L4C_RZ_IN1_DQ.
  5. Potential scatter areas: something near either HAM8 or HAM3?
Non-image files attached to this comment
jane.glanzer@LIGO.ORG - 12:33, Wednesday 02 October 2024 (80429)DetChar

I was able to run lasso on a narrower strain BLRMS (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz BLRMS of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain BLRMS of the same frequency band via lasso. In the attached pdf, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret though. There seem to be roughly 5 peaks in the aux channel BLRMS, but only 2 major ones in the strain BLRMS. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say if that is significant or not, since I am only looking at about an hour's worth of data.
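For reference, a minimal sklearn-style sketch of this kind of lasso fit; the channel count, regularization, and data here are synthetic placeholders, not the actual DetChar configuration.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# aux_blrms: (n_times, n_channels) band-limited RMS of HPI/ISI/PEM/SUS channels
# strain_blrms: (n_times,) band-limited RMS of the strain channel (same band)
rng = np.random.default_rng(0)
aux_blrms = rng.standard_normal((3600, 50))
strain_blrms = 2.0 * aux_blrms[:, 3] + 0.1 * rng.standard_normal(3600)

X = StandardScaler().fit_transform(aux_blrms)
y = (strain_blrms - strain_blrms.mean()) / strain_blrms.std()

model = Lasso(alpha=0.01).fit(X, y)
r2 = model.score(X, y)
top = np.argsort(np.abs(model.coef_))[::-1][:5]   # indices of top contributors
print(f"r^2 = {r2:.2f}, top channel indices: {top}")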

I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.

Non-image files attached to this comment
adrian.helmling-cornell@LIGO.ORG - 17:47, Wednesday 02 October 2024 (80437)DetChar

I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.

Non-image files attached to this comment
H1 DetChar (DetChar, Lockloss)
bricemichael.williams@LIGO.ORG - posted 11:33, Thursday 12 September 2024 - last comment - 17:03, Wednesday 25 September 2024(80001)
Lockloss Channel Comparisons

-Brice, Sheila, Camilla

We are looking to see if there are any aux channels that are affected by certain types of locklosses. Understanding whether a threshold is reached in the last few seconds prior to a lockloss can help determine the type of lockloss, as well as which channels are affected more than others.

We have gathered a list of lockloss times (using https://ldas-jobs.ligo-wa.caltech.edu/~lockloss/index.cgi) with:

  1. only Observe and Refined tags (plots, histogram)
  2. only Observe, Refined, and Windy tags (plots, histogram)
  3. only Observe, Refined, and Earthquake tags (plots, histogram)
  4. Observe, Refined, and Microseism tags (note: all of these also have an EQ tag, and all but the last 2 have an anthropogenic tag) (plots, histogram)

(issue: the plots for the first 3 lockloss types wouldn't upload to this aLog. Created a dcc for them: G2401806)

We wrote a python script to pull the data of various auxiliary channels 15 seconds before a lockloss. Graphs for each channel are created, a trace for each lockloss time is stacked on each of the graphs, and the graphs are saved to a png file. All the graphs have been shifted so that the time of lockloss is at t=0.

Histograms for each channel are created that compare the maximum displacement from zero for each lockloss time. There is also a stacked histogram based on 12 quiet-microseism times (all taken from between 4.12.24 0900-0930 UTC). The histograms are created using only the last second of data before lockloss, are normalized by dividing by the number of lockloss times, and are saved to a separate png file from the plots.

These channels are provided via a list inside the python file and can be easily adjusted to fit a user's needs. We used the following channels:

channels = ['H1:ASC-AS_A_DC_NSUM_OUT_DQ','H1:ASC-DHARD_P_IN1_DQ','H1:ASC-DHARD_Y_IN1_DQ','H1:ASC-MICH_P_IN1_DQ', 'H1:ASC-MICH_Y_IN1_DQ','H1:ASC-SRC1_P_IN1_DQ','H1:ASC-SRC1_Y_IN1_DQ','H1:ASC-SRC2_P_IN1_DQ','H1:ASC-SRC2_Y_IN1_DQ', 'H1:ASC-PRC2_P_IN1_DQ','H1:ASC-PRC2_Y_IN1_DQ','H1:ASC-INP1_P_IN1_DQ','H1:ASC-INP1_Y_IN1_DQ','H1:ASC-DC1_P_IN1_DQ', 'H1:ASC-DC1_Y_IN1_DQ','H1:ASC-DC2_P_IN1_DQ','H1:ASC-DC2_Y_IN1_DQ','H1:ASC-DC3_P_IN1_DQ','H1:ASC-DC3_Y_IN1_DQ', 'H1:ASC-DC4_P_IN1_DQ','H1:ASC-DC4_Y_IN1_DQ']
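A rough sketch of this workflow is below, using gwpy for data access (an assumption; the actual script may differ). The lockloss GPS times and the two-channel subset are placeholders.

import numpy as np
from gwpy.timeseries import TimeSeriesDict

channels = ['H1:ASC-AS_A_DC_NSUM_OUT_DQ', 'H1:ASC-DHARD_P_IN1_DQ']  # subset of the list above
lockloss_gps = [1410000000, 1410100000]   # placeholder lockloss GPS times

max_disp = {c: [] for c in channels}
for t0 in lockloss_gps:
    data = TimeSeriesDict.get(channels, t0 - 15, t0)   # 15 s of data before lockloss
    for c, ts in data.items():
        last_second = ts.crop(t0 - 1, t0)              # last second before lockloss
        max_disp[c].append(np.max(np.abs(last_second.value)))

# Histogram the per-lockloss maxima, normalized by the number of lockloss times.
for c, vals in max_disp.items():
    weights = np.full(len(vals), 1.0 / len(vals))
    counts, edges = np.histogram(vals, bins=20, weights=weights)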
Images attached to this report
Comments related to this report
bricemichael.williams@LIGO.ORG - 17:03, Wednesday 25 September 2024 (80294)DetChar, Lockloss

After talking with Camilla and Sheila, I adjusted the histogram plots. I excluded the last 0.1 sec before lockloss from the analysis. This is because (in the original post plots) the H1:ASC-AS_A_NSUM_OUT_DQ channel has most of the last-second (blue) histogram at a value of 1.3x10^5, indicating that the last second of data is capturing the lockloss causing a runaway in the channels. I also combined the ground motion locklosses (EQ, Windy, and microseism) into one set of plots (45 locklosses) and left the locklosses tagged only Observe (and Refined) as another set of plots (15 locklosses). Both groups of plots have 2 stacked histograms for each channel:

  1. Blue:
    • The data is from one second before until 0.1 seconds before lockloss, for each lockloss
    • The histogram is of the max displacement from zero for each lockloss
    • The counts are weighted as 1/(number of locklosses in this data set) (i.e. the total number of counts in the histogram)
  2. Red:
    • I took all the data points from eight seconds before until 2 seconds before lockloss for each lockloss.
    • I then down-sampled the data points from a 256 Hz to a 16 Hz sampling rate by taking every 16th data point.
    • The histogram is the displacement from zero of these down-sampled points
    • The counts are weighted as 1/(number of down-sampled data points for each lockloss) (i.e. the total number of counts in the histogram)

Take notice of the histogram for the H1:ASC-DC2_P_IN1_DQ channel for the ground motion locklosses. In the last second before lockloss (blue), we can see a bimodal distribution with the right grouping centered around 0.10. The numbers above the blue bars are the percentage of the counts in that bin: about 33.33% is in the grouping around 0.10. This is in contrast to the distribution for the Observe, Refined locklosses, where the entire (blue) distribution is under 0.02. This could indicate that a threshold could be placed on this channel for lockloss tagging. More analysis will be required before that (I am going to look next at times without locklosses for comparison).

 

Images attached to this comment
H1 General (CAL, ISC)
anthony.sanchez@LIGO.ORG - posted 15:06, Saturday 31 August 2024 - last comment - 09:23, Thursday 26 September 2024(79841)
ETMX Drive align L2L Gain changed

anthony.sanchez@cdsws29: python3 /ligo/home/francisco.llamas/COMMISSIONING/commissioning/k2d/KappaToDrivealign.py

Fetching from 1409164474 to 1409177074

Opening new connection to h1daqnds1... connected
    [h1daqnds1] set ALLOW_DATA_ON_TAPE='False'
Checking channels list against NDS2 database... done
Downloading data: |█████████████████████████████████████████████████████████████████████████████████████| 12601.0/12601.0 (100%) ETA 00:00

Warning: H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN changed.


Average H1:CAL-CS_TDEP_KAPPA_TST_OUTPUT is -2.3121% from 1.
Accept changes of    
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN from 187.379211 to 191.711514 and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN from 184.649994 to 188.919195
Proceed? [yes/no]
yes
Changing
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN and
H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN
H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN => 191.7115136134197
anthony.sanchez@cdsws29:

 

Comments related to this report
louis.dartez@LIGO.ORG - 16:20, Saturday 31 August 2024 (79845)
I'm not sure if the value set by this script is correct. 

KAPPA_TST was 0.976879 (-2.3121%) at the time this script looked at it. The L2L DRIVEALIGN GAIN in H1:SUS-ETMX_L3_DRIVEALIGN_L2L_GAIN was 184.65 at the time of our last calibration update. This is the time at which KAPPA_TST was set to 1. So to offset the drift in the TST actuation strength we should change the drivealign gain to 184.65 * 1.023121 = 188.919. This script chose to update the gain to 191.711514 instead; this is 187.379211 * 1.023121, with 187.379211 being the gain value at the time the script was run. At that time, the drivealign gain was already accounting for a 1.47% drift in the actuation strength (this has so far not been properly compensated for in pyDARM and may be contributing to the error we're currently seeing...more on that later this weekend in another post.). 

So I think this script should be basing corrections as percentages applied with respect to the drivealign gain value at the time when the kappa's were last set (i.e. just after the last front end calibration update) *not* at the current time.

Also, the output from that script claims that it also updated H1:CAL-CS_DARM_FE_ETMX_L3_DRIVEALIGN_L2L_GAIN, but I trended it and it hadn't been changed. Those print statements should be cleaned up.
louis.dartez@LIGO.ORG - 09:23, Thursday 26 September 2024 (80304)
To close out this discussion, it turns out that the drivealign adjustment script is doing the correct thing. Each time the drivealign gain is adjusted to counteract the effect of ESD charging, the percent change reported by kappa TST should be applied to the drivealign gain at that time rather than to what the gain was when the kappa calculations were last updated.
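Restating the two conventions numerically with the values quoted in this thread (plain arithmetic, not the KappaToDrivealign.py script itself):

kappa_tst = 0.976879             # -2.3121% from 1, as reported by the script
gain_at_last_cal = 184.649994    # drivealign gain when the kappas were last reset
gain_now = 187.379211            # drivealign gain at the time the script was run

correction = 1.0 - (kappa_tst - 1.0)   # 1.023121

# Relative to the last calibration update: 184.65 * 1.023121 ~= 188.92
gain_from_last_cal = gain_at_last_cal * correction
# Relative to the current gain (what the script does): 187.38 * 1.023121 ~= 191.71
gain_from_current = gain_now * correction
print(f"{gain_from_last_cal:.3f} vs {gain_from_current:.3f}")

# As concluded above, the second convention is the right one: each adjustment
# cancels only the kappa drift accumulated since the previous adjustment, so the
# percent change applies to the gain at the time the script is run.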