TITLE: 09/19 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
H1's been locked almost 6.5hrs with a range hovering above 160Mpc (w/ only a few EX saturations on Verbal).
TITLE: 09/18 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 21mph Gusts, 15mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.09 μm/s
QUICK SUMMARY:
H1 was locked when I arrived (has been locked for 1.5hrs). Winds are a little up, but less than last night. All is looking well thus far.
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 151Mpc
INCOMING OPERATOR: Corey
SHIFT SUMMARY: Observing at 155Mpc and have been locked for almost an hour. Quiet day with one lockloss but hands-off relocking.
LOG:
14:30 Relocking and at CARM_5_PICOMETERS
15:05 NOMINAL_LOW_NOISE
15:09 Observing
21:13 Lockloss
21:33 Lockloss from CHECK_MICH_FRINGES
21:38 Lockloss from FIND_IR
22:37 NOMINAL_LOW_NOISE
22:39 Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
15:03 | PEM | Robert | LVEA Receiving Door | n | Moving stuff | 15:05 |
15:30 | FAC | Eric | EX, EY | n | Checking glycol levels | 16:52 |
15:37 | FAC | Karen | Optics lab & Vac Prep | n | Technical cleaning | 15:53 |
19:12 | FAC | Kim | Receiving Door | n | Opening it | 20:12 |
20:37 | FAC | Tyler | MX, MY | n | 3IFO and Safety checks | 22:30 |
21:34 | FAC | Fil | Roof | n | Grabbing the lightning rod | 22:34 |
Jeff, Oli
I was looking over the Controls Summary Table and noticed a discrepancy between the Coil Driver Strength given in the table (11.9 mA/V) and the value in the TMTS Controls Design Description (9.943 mA/V). Using the make_OSEM_filter_model.m script along with looking at the driver circuit, Jeff and I were able to verify that the value shown on the Controls Design Description, 9.943 mA/V, was the correct value. The difference between the incorrect and correct value is over 19%!
We looked around at scripts that would use this coil driver value and found that the script we use for comparing transfer function measurements to the model, plotTMTS_dtttfs.m (found in $(sussvn)/TMTS/Common/MatlabTools/), had been using the wrong value to calculate the calibration, so that value was updated and committed to the svn. This correction will close the gap between the model and measurements. Thankfully that seems to be the only location where the wrong value was used.
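For reference, the arithmetic behind those numbers (values as quoted above):

# Quick check of the coil driver discrepancy described above (mA/V)
wrong, right = 11.9, 9.943
print(f"fractional error: {(wrong - right) / right:.1%}")  # ~19.7%, i.e. "over 19%"
# A transfer-function calibration computed with the wrong value is
# scaled by wrong/right relative to the model:
print(f"calibration scale factor: {wrong / right:.3f}")    # ~1.197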
Daniel, Marc, Fil, Richard, Erik, Dave:
Attached drawing shows the SUS ETMX DAC changes made yesterday.
Summary:
The new LIGO 28-bit DAC has replaced the h1susetmx model's 20-bit DAC signals. These comprise five L3 ESD signals, four L2 signals and four L1 signals.
The h1susetmx 18-bit DAC channels were not changed (the M0 and R0 signals).
The h1susetmxpi model's two 20-bit DAC signals were not changed, so no h1susetmxpi model change was needed.
The routing of the h1susetmx model's L3, L2 and L1 signals to the 28-bit DAC was done using an existing matrix. Gains were applied using existing filter-modules. No h1susetmx model change was needed, this was a hardware change.
Details:
Please reference the attached drawing and h1susetmx model snippet.
The h1susex front end comprises two ADCs, three 18-bit DACs, two 20-bit DACs and the new LIGO DAC. Setting the ADCs aside:
The third 18-bit DAC is only used by the h1sustmsx model, and so can be discounted.
The first two 18-bit DAC cards are used by h1susetmx (driving M0 and R0). These were not touched and are not applicable.
The first 20-bit DAC card is used by h1susetmx to drive L1 and L2 (four channels each).
The second 20-bit DAC card is shared between the h1susetmx and h1susetmxpi models. h1susetmx owns the first six channels, and drives five of them (L3-ESD DC+4QUAD). h1susetmxpi owns the last two channels (ESD left and right).
There are two types of Anti-Imaging (AI) chassis used here: the standard 18-bit/20-bit DAC AI (two inputs, each input driven by a separate DAC), and the Parametric Instability (PI AI) chassis. The PI-AI has one input (a block of eight channels) which is internally split into two blocks of six and two channels. The block of six channels is filtered normally and exits as channels 1-6 on the front panel ('OUT 1-6'); the block of two channels is PI filtered and exits on its own front connector ('Band Pass Ch 7 & 8'). See attached photo.
Before:
The first 20-bit DAC is connected to one half of a standard AI chassis. Its outputs drive the L1 and L2 signals.
The second 20-bit DAC is connected to a PI-AI chassis. The first 6 channels are driven by h1susetmx, the last two by h1susetmxpi.
The LIGO DAC is not connected to any AI chassis.
Now:
A second PI-AI chassis was installed in the rack; it is used to drive h1susetmxpi's DAC channels.
The first 20-bit DAC was disconnected from its AI (see note below).
The second 20-bit DAC was disconnected from the original PI-AI and connected to the new PI-AI. The PI driver cable was moved from the original to the new AI. This means that other than the AI chassis change, the h1susetmxpi model and its drives were unchanged.
The first 8 channels of the LIGO DAC are connected to the original PI-AI (with its L3/ESD field cabling intact). Therefore the ESD analog filters were not changed in the transition from 20-bit to 28-bit DAC.
The second 8 channels of the LIGO DAC are connected to the standard AI (was connected to first 20-bit DAC). This permits drive of L1 & L2.
The attached MEDM snapshot shows the matrix/gain settings. The first 5 channels drive the ESD; the second block of 8 drives L1/L2.
Complications:
Initially the first 20-bit DAC (h1susetmx L1 & L2) was going to be left disconnected from any AI. However, the lack of an AI watchdog signal put this DAC into error. Marc connected this DAC to a second input of a PI-AI chassis. This input does not internally connect to any filter block, but the interface card does supply the missing AI watchdog signal.
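As a reading aid (my summary, not a production configuration), the end state of the routing described above, sketched as a mapping of source -> (AI chassis, what it drives):

routing_now = {
    "LIGO DAC, first 8 channels":  ("original PI-AI", "L3 ESD DC + 4 quadrants"),
    "LIGO DAC, second 8 channels": ("standard AI",    "L1 & L2"),
    "second 20-bit DAC":           ("new PI-AI",      "h1susetmxpi ESD left/right"),
    "first 20-bit DAC":            ("spare PI-AI input", "nothing; connected only to satisfy the AI watchdog"),
}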
The IOP model's software watchdog (SWWD) needs to have the new LIGO DAC added to its DACKILL list, now that the new DAC is part of production.
Looking at data from a couple of days ago, there is evidence of some transient bumps at multiples of 11.6 Hz. These are visible in the summary pages too, around hour 12 of this plot.
Taking a spectrogram of data starting at GPS 1410607604, one can see at least two times where there is excess noise at low frequency. This is easier to see in a spectrogram whitened to the median. Comparing the DARM spectra in a period with and without this noise, one can identify the bumps at roughly multiples of 11.6 Hz.
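For reference, a minimal sketch (assuming a gwpy environment with data access; the channel choice and one-hour span are my assumptions) of the median-whitened spectrogram described above:

# Median-whitened spectrogram to make the ~11.6 Hz bumps stand out
from gwpy.timeseries import TimeSeries

data = TimeSeries.get('H1:GDS-CALIB_STRAIN', 1410607604, 1410607604 + 3600)
spec = data.spectrogram(30, fftlength=10, overlap=5) ** 0.5  # ASD spectrogram
# Divide each frequency bin by its median over time: quiet times sit
# near 1, so low-frequency transients stand out clearly
white = spec.ratio('median')
plot = white.plot(norm='log', vmin=0.5, vmax=10)
plot.gca().set_ylim(10, 100)  # the bumps live below ~100 Hz
plot.show()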
Maybe somebody from DetChar can run LASSO on the BLRMS between 20 and 30 Hz to find out if this noise is correlated with some environmental or other changes.
I took a look at this noise, and I have some slides attached to this comment. I will try to roughly summarize what I found.
I first started by taking some 20-30 Hz BLRMS around the noise. Unfortunately, the noise is pretty quiet, so I don't think lasso will be super useful here. Even taking a BLRMS for a longer period around the noise didn't produce much. I can revisit this (maybe take a narrower BLRMS?), but as a separate check I looked at spectra of the ISI, HPI, SUS, and PEM channels to see if there was excess noise anywhere in particular. I figured maybe this could at least narrow down a station where there is more noise at these frequencies.
What I found was:
I was able to run lasso on a narrower strain blrms (suggested by Gabriele), which made the noise more obvious. Specifically, I used a 21 Hz - 25 Hz blrms of auxiliary channels (CS/EX/EY HPI, ISI, PEM & SUS channels) to try to model a strain blrms of the same band via lasso. In the attached pdf, the first slide shows the fit from running lasso. The r^2 value was pretty low, but the lasso fit does pick up some peaks in the auxiliary channels that line up with the strain noise. In the following slides, I made time series plots of the channels that lasso found to be contributing the most to the re-creation of the strain. The results are a bit hard to interpret though. There seem to be roughly 5 peaks in the aux channel blrms, but only 2 major ones in the strain blrms. The top contributing aux channels are also not really from one area, so I can't say that this narrowed down a potential location. However, two HAM8 channels were among the top contributors (H1:ISI_HAM8_BLND_GS_X/Y). It is hard to say if that is significant or not, since I am only looking at about an hour's worth of data.
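A minimal sketch of that procedure (assuming a gwpy + scikit-learn environment; the channel list, span, and lasso penalty are my assumptions, not the actual configuration used):

# Model a 21-25 Hz strain BLRMS from aux-channel BLRMS via lasso
import numpy as np
from gwpy.timeseries import TimeSeries
from sklearn.linear_model import Lasso

start, end = 1410607604, 1410607604 + 3600

def blrms(channel):
    """Band-limited RMS: bandpass to 21-25 Hz, then RMS over 60 s strides."""
    ts = TimeSeries.get(channel, start, end)
    return ts.bandpass(21, 25).rms(60).value

# Stand-in list; the real fit spanned CS/EX/EY HPI, ISI, PEM & SUS channels
aux_channels = ['H1:ISI_HAM8_BLND_GS_X', 'H1:ISI_HAM8_BLND_GS_Y']
X = np.column_stack([blrms(c) for c in aux_channels])
y = blrms('H1:GDS-CALIB_STRAIN')

# Standardize so one lasso penalty applies sensibly to all channels
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = (y - y.mean()) / y.std()

model = Lasso(alpha=0.1).fit(X, y)
print('r^2 =', model.score(X, y))
print(dict(zip(aux_channels, model.coef_)))  # top contributors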
I did a rough check on the summary pages to see if this noise happened on more than one day, but at this moment I didn't find other days with this behavior. If I do come across it happening again (or if someone else notices it), I can run lasso again.
I find that the noise bursts are temporally correlated with vibrational transients seen in H1:PEM-CS_ACC_IOT2_IMC_Y_DQ. Attached are some slides which show (1) scattered light noise in H1:GDS-CALIB_STRAIN_CLEAN from 1000-1400 on September 17, (2) and (3) the scattered light incidents compared to a timeseries of the accelerometer, and (4) a spectrogram of the accelerometer data.
Wed Sep 18 08:10:41 2024 INFO: Fill completed in 10min 37secs
Jordan confirmed a good fill curbside.
TITLE: 09/18 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Relocking and at CARM_5_PICOMETERS. Not sure yet why we lost lock, but it might've been an earthquake.
Lockloss from earlier this morning, 2024-09-18 13:39UTC, wasn't caused by an earthquake, but by the FSS saturating. This is the first FSS_OSCILLATION lockloss that we've had from NLN in a year.
Back to Observing 15:09UTC.
Had to accept this sdf diff for ETMX_L3_ESDOUTF_LR to get into Observing. This change happened during TRANSITION_FROM_ETMX. Since this is our first relock since the DAC was changed out yesterday at EX, wondering if this was related at all to those changes.
I probably had a stray mouse click that turned that decimation filter off, and then did my SDF-accepting of the state of FMs 9 and 10 of the ESDOUTF filter banks. Thankfully it was just the decimation filter that I had wrong, which doesn't affect our sensitivity in any way.
Anyhow, I have updated the safe.snap file to accept the decimation filter being correctly ON (see attached). This will show up as an SDF diff again (opposite of what Oli posted this morning) in Observe and should be accepted. I don't want to set it in Observe right now, since accepting would require changing other things with the ETM, which I don't want to do right now. So, it'll stay as a diff.
TITLE: 09/17 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Calibration
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
Tony took H1 to Observing during our Ops hand-off, ending a long Maintenance. Winds were high during the afternoon, peaking at about 2315utc; H1 had lower range during this windy beginning of shift, but either thermalization and/or dying winds helped, with H1's range hovering around the "usual" 160Mpc.
Dropped from Observing for a thermalized calibration requested by Jenne (with Louis helping).
LOG:
Since H1 was thermalized in this lock, which followed a long/busy Maintenance Day, I was asked to run another calibration. Notified LLO & Virgo via chat & TeamSpeak to give them a heads-up about H1 dropping out of Triple Coincidence for 30min for calibration work.
Measurement NOTES:
This morning the In-Lock SUS Charge Measurements ran. Attached are the plots for all four Test Masses. Closing FAMIS 28371.
L. Dartez, T. Sanchez, J. Driggers, M. Pirello, J. Rollins, J. Betzwieser
The H1 calibration has been updated to account for the additional 15.2us delay due to the new LIGO DAC card used in the ETMX L3 path.
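For scale, a back-of-the-envelope sketch (not part of pydarm itself): a pure time delay tau contributes phase phi(f) = 360 * f * tau degrees, so the new delay matters most at the top of the calibrated band.

tau = 15.2e-6             # s, additional delay from the new LIGO DAC
for f in (10, 100, 1000):  # Hz
    print(f'{f:5d} Hz: {360 * f * tau:.3f} deg')
# -> 0.055 deg at 10 Hz, 0.547 deg at 100 Hz, 5.472 deg at 1 kHz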
A bit of confusion and imperfect communication over how the new DACs were implemented contributed to some locking troubles after the maintenance period.
- First, the DACs were only installed for ETMX-L3 for now. With conflicting emails and alogs, I walked in thinking all 3 stages had their DACs swapped.
- The gains that I adjusted in the COILOUTF and ESDOUTF filter banks were not used. On L1 and L2 this is because there was no DAC swap. On L3 it's because the new DACs have a built-in gain that serves the same purpose as the ones I adjusted. As such, the '32bit DAC' FMs are ignored and can be purged.
Updating the calibration pipeline for this change required steps outside of the anticipated procedure.
Our last exported report, 20240330T211519Z, represents the state of the DARM model that is used to inform the front end and the GDS pipeline. I copied this report into 20240917T222803Z (this time stamp is not associated with any measurement files; I made it up as the time at which I was doing the copying) and updated pydarm_H1.ini to include an additional 15.2 us delay in the unknown actuation delay parameter for the X arm, bringing the total 'unknown' actuation delay to 30.2us. I then updated the id file to reflect the new report id and regenerated the new report using pydarm report --regen 20240917T222803Z
(I should have used --skip-mcmc also but that's ok). Then I had to fix the id file _again_ after the generation before committing and uploading the report to ldas.
Once the report was committed, I exported it to the front end. The front end value changes are all on the order of a percent or two due to rerunning the MCMC during the generation. Note to pydarm devs: I had to mark this report as 'valid' in order to get pydarm to export it. But I didn't want to mark it as valid since we won't want this report to be considered as a unique measurement in the uncertainty budget. We'll need to remove the valid tag before preparing the unc budget for O4b.
There was some trouble getting GDS restarted but Jamie and Jonathan jumped in to help with that. It was just an auth issue.
To clarify: L1 & L2 are driven by the new DAC. Gains don't need to be adjusted in the normal path, since they are adjusted in the new paths.
Building on work last week, we installed a 2nd PI AI chassis (S1500301) in order to keep the PI signals separate from the ESD driver signals. The original PI AI chassis is S1500299.
We routed the LD32 Bank 0 through the first PI AI chassis to the ESD drive L3, while keeping the old ESD driver signal driving the PI through the new PI AI chassis.
We routed the LD32 Bank 1 to the L2 & L1 suspension drive.
We did not route LD32 Bank 2 or Bank 3 to any suspensions. The M0 and R0 signals are still being driven by the 18 bit DACs.
The testing did not go as smoothly as planned: a watchdog on DAC slot 5 (the L1 & L2 drive 20-bit DAC) continuously tripped the ESD reset line. We solved this by attaching that open DAC port (slot 5) to the PI AI chassis to clear the WD error.
Looks like we made it to observing.
F. Clara, R. McCarthy, F. Mera, M. Pirello, D. Sigg
Part of the implication of this alog is that the new LIGO DAC is currently installed and in use for the DARM actuator suspension (the L3 stage of ETMX). Louis and the calibration team have taken the changes into account (see, eg, alog 80155).
The vision as I understand it is to use this new DAC for at least a few weeks, with the goal of collecting some information on how it affects our data quality. Are there new lines? Fewer lines? A change in glitch rate? I don't know that anyone has reached out to DetChar to flag that this change was coming, but now that it's in place, it would be helpful (after we've had some data collected) for some DetChar studies to take place, to help improve the design of this new DAC (that I believe is a candidate for installation everywhere for O5).
Analysis of glitch rate:
We selected Omicron transients during observing time across all frequencies and divided the analysis into two cases: (1) rates calculated using glitches with SNR>6.5, and (2) rates calculated using glitches with SNR>5. The daily glitch rate for transients with SNR greater than 6.5 is shown in Figure 1, with no significant difference observed before and after September 17th. In contrast, Figure 2, which includes all Omicron transients with SNR>5, shows a higher daily glitch rate after September 17th.
The rate was calculated by dividing the number of glitches per day by the daily observing time in hours.
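A minimal sketch of that calculation (file names and column layout are hypothetical stand-ins for the Omicron trigger and observing-segment data):

# Daily glitch rate: triggers above an SNR cut per day / observing hours
import pandas as pd

triggers = pd.read_csv('omicron_triggers.csv')             # columns: date, snr
obs_hours = pd.read_csv('observing_hours.csv',             # columns: date, hours
                        index_col='date')['hours']

for snr_cut in (6.5, 5.0):
    daily_count = triggers[triggers['snr'] > snr_cut].groupby('date').size()
    rate = daily_count / obs_hours                         # glitches per hour
    print(f'SNR > {snr_cut}:')
    print(rate)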
Andrei, Sheila
This morning we are taking the commissioning time to get a sqz data set similar to 77133, with longer averaging time and no other changes to the IFO happening in parallel.
Data times:
After this data collection, we ran SCAN_SQZANG, which resulted in a demod phase of 184.35
In the attached screenshots the units are different in the top and bottom plots, I filed a dtt bug because the plot disappears if I try to change the units on the bottom plot: 403
Late post of the subtracted quantum noises (FIS (+FDAS) and FDS).
While the differences in dB look big on those plots, see these comparisons of just the quantum noise models relative to the total noise (FIS and FDS) -- below 100 Hz these are really marginal/small differences in the quantum noise, which by 100 Hz is a factor of 2-5 below the total DARM noise. Inferring low-frequency (<100 Hz) quantum noise parameters therefore requires quiet, glitch-free (or glitch-rejected) long averaging times to nail down. This helps show why figuring out low-frequency quantum noise models is tricky, given we can only measure the total noise in different sqz configurations. The analysis code so far is here.
Note that due to the glitch in the no-sqz time, median averaging is required; otherwise we need to truncate to before the glitch (e.g. 400 seconds vs the full 600). An update of the noise budget environment to a newer version of scipy, which median averaging requires, is underway so the noise budget code can use it.
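For reference, a minimal sketch of median PSD averaging with scipy (the feature motivating the environment update; the sample rate and data are stand-ins): scipy.signal.welch accepts average='median' in scipy >= 1.2, which keeps a single glitch from biasing the estimate.

import numpy as np
from scipy.signal import welch

fs = 16384                             # Hz, assumed sample rate
x = np.random.randn(600 * fs)          # stand-in for 600 s of DARM data
x[300 * fs:301 * fs] += 50 * np.random.randn(fs)  # injected "glitch"

f, psd_mean = welch(x, fs=fs, nperseg=10 * fs, average='mean')
f, psd_med = welch(x, fs=fs, nperseg=10 * fs, average='median')
# psd_mean is biased upward by the glitchy segment; psd_med tracks the
# quiet-time noise floor without truncating the data.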
I think several things related to quantum noise budgeting have changed since this sqz dataset in May 2024 (i.e. post-vent, including srcl detuning & fc detuning lho79929 amongst many other things) - so likely another sqz dataset in this vein is needed for more accurate quantum noise budgeting in O4b.
The parameters in the figure titles are what was used to produce the quantum noise trace. The model is rather "simple" at this stage; it does not yet include mode-mismatch, etc. The models do not match measurements well below 40-60 Hz, in part because of noisy measurements (note the short 10-min measurement times).
Notably, anti-squeezing does not match models well below 80 Hz; see anti-sqz vs. models after subtracting classical noise here. Including mode-mismatches may be able to resolve some of the discrepancies, but so far this is unclear.
22:39 Observing
Accepted Jenne's sdf diff in Observe