Jim and Fil alerted me that the MKS 390 gauge on the A2 cross of FC-A (between HAM7 and BSC3) was making a clicking noise. While I was elsewhere moving compressors around, Patrick and Fil attempted to reset it via software, but with no luck. H1 lost lock while I was looking at trends of this gauge, so I went to the LVEA and power cycled the gauge. After power cycling, the "State" light on the gauge changed to blinking alternating green and amber, where before power cycling it was blinking only green; the clicking sound is still present, although maybe at a reduced rate (see attached videos). Looking at trends, the gauge started to get noisy ~2 days ago and got progressively worse through today.
The vacuum overview MEDM is currently reporting this gauge as "nan".
Sorry for the confusion, we didn't attempt to reset it in software, just looked to see if it was reporting errors, which it was (see attached screenshots). Looking these codes up, it appears to be a hardware error which should be reported to the manufacturer.
TITLE: 11/06 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 10mph Gusts, 8mph 3min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.25 μm/s
QUICK SUMMARY:
Currently in DRMI and trying to relock.
11/06 00:50UTC Observing
Following the raising of the entire BBSS by 5mm (80931), we took transfer function measurements and compared them to the previous measurements from October 11th (80711), where the M1 blade height was the same but the BBSS had not yet been moved up by the 5mm. They are looking pretty similar to the previous measurements, but there are extra peaks in V2V that line up with the peaks in pitch.
The data for these measurements can be found in:
/ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Data/2024-10-31_2300_tfs/
The results for these measurements can be found in:
/ligo/svncommon/SusSVN/sus/trunk/BBSS/X1/BS/SAGM1/Results/2024-10-31_2300_tfs/
The results comparing the October 11th measurements to the October 31st measurements can be found in:
/ligo/svncommon/SusSVN/sus/trunk/BBSS/Common/Data/allbbss_2024-Oct11-Oct31_X1SUSBS_M1/
BBSS Status of Drift since moving the whole BBSS up from the Top Stage to compensate for the TM blades moving down (alog 80931)
Plots:
BBSSF1DriftPUMDoFsCurrentConfig: 6 day trend of LPY micron counts for the PUM.
BBSSF1DriftTMDoFsCurrentConfig: 6 day trend of LTVRPY micron counts for the Top Mass.
BBSSF1DriftTMDoFsPreviousConfig: 18 day trend of LTPRPY micron counts for Top Mass.
BBSSF1DriftTMBOSEMssPreviousConfig: 18 day trend of BOSEM counts for Top Mass.
BBSSF1DriftTMBOSEMsCurrentConfig: 6 day trend of BOSEM counts for Top Mass.
First Look Findings:
Daniel, Camilla, Vicky
After PSL laser frequency change 81073, we adjusted the SQZ + ALS X + ALS Y laser crystal temperatures again, as before in lho80922.
Again the steps we took:
Squeezer recovery at NLN was complicated after this temp change, but after fixing laser mode hopping and updating OPO temperature, SQZ is working again at NLN.
OPO TEC temperature adjusted from 31.468 C --> 31.290 C, to maximize seed NLG. Change is consistent with last time lho80926 (though not obvious that laser freq changes mean the red/green opo co-resonance should change, but ok).
Squeezer laser was likely modehopping at the new settings above. Symptoms:
- Screenshot of TTFSS beatnote before / after / afterAfter the PSL laser freq change.
- PMC_TRANS was 430 (nominal H1:SQZ-PMC_TRANS_DC_POWERMON~675)
- PMC_REFL was 370 (nominal H1:SQZ-PMC_REFL_DC_POWERMON~44)
- Bad PMC transmission meant not enough SHG power, and while OPO could lock, the OPO PUMP ISS could not reach the correct pump power. So aside from mode hopping being bad, it would have meant bad squeezing.
Camilla and I went to the floor. SQZ Laser controller changes made to avoid mode hopping:
This adjustment seemed quite sensitive to laser current and crystal temperature, with a tunable range somewhat smaller than I expected.
Compared to previous SQZ laser adjustments to avoid mode-hopping (see old laser 51238, 51135, 49500, 49327), this temperature adjustment seems small for the current (amps) adjustment, but it seemed to work.
PMC cavity scans look clean at these new settings (laser looks single-mode), see screenshot. For +/- 200 MHz steps of crystal frequency, still see a 160 MHz beatnote. Later could double-check a PMC cavity scan at +200 MHz just to be sure. PMC cavity scan template added to medm SQZT0 screen (! PMC) and at $(userapps)/sqz/h1/Templates/ndscope/pmc_cavity_scan.yaml.
R. Short, J. Oberling
Today we were originally planning to take a look at the TTFSS, but we noticed something before going into the enclosure that we think is the cause of some of the PSL instability since the completion of the NPRO swap. We were looking at some trends from yesterday's ISS OFF test, recreating the plots Ryan C. had made, when we noticed the ISS get angry and enter an unlock/relock cycle that it couldn't get out of. While it was doing this we saw that the PMC Refl power was up around 30W and "breathing" with PMC Trans; as PMC Refl would increase, PMC Trans would decrease, but the sum of the two was unchanged. We then unlocked the ISS.
As we sat and watched things for a bit after the ISS unlock, we saw that PMC Refl would change by many Watts as it was breathing. We tried unlocking the IMC, and the behavior was unchanged; we then tried unlocking the FSS RefCav, and the behavior was still unchanged. It was at this point we noticed that the moves in PMC Refl were matched by very small moves (1-2 mW) in the NPRO output power; as the NPRO output power went up, PMC Refl went down, and vice versa. This looked a whole lot like the NPRO was mode hopping, with 2 or more modes competing against each other. This would explain the instability we've seen recently.
The ISS PDs are located after the PMC, so any change in PMC output (like from competing modes) would get a response from the ISS. So if PMC Refl went up, as we had been seeing, and PMC Trans dropped in response, also as we had been seeing, the ISS would interpret this as a drop in power and reduce the ISS AOM diffraction in response. When the ISS ran out of range on the AOM diffraction it would become unstable and unlock; it would then try to lock again, see it "needed" to provide more power, run out of range on the AOM, and unlock. While this was happening the FSS RefCav TPD would get very noisy and drop, as power out of the PMC was dropping. You can see it doing this in the final 3 plots from the ISS OFF test.
To give the ISS more range we increased the power bank for the ISS by moving the ISS Offset slider from 3.3 to 4.1; this moved the default diffraction % from ~3% to ~4%. We also increased the ISS RefSignal so when locked the ISS is diffracting ~6% instead of ~2.5%.
To try to fix this we unlocked the PMC and moved the NPRO crystal temperature, via the FSS MEDM screen, to a different RefCav resonance to see if the mode hopping behavior improved. We went up 2 RefCav resonances and things looked OK, so we locked the FSS here and then the ISS. After a couple minutes PMC Refl began running away higher and the ISS moved the AOM lower in response until it ran out of range and unlocked. With the increased range on the diffracted power it survived a couple of these excursions, but then PMC Refl increased to the point where the ISS emptied its power bank again and unlocked. So we tried moving down by a RefCav resonance (so 1 resonance away from our starting NPRO crystal temperature instead of 2) and it was the same: things looked good, we locked the stabilization systems, and after a couple minutes PMC Refl would run away again (again the ISS survived a couple of smaller excursions, but then a large one would empty the bank and unlock it again). So we decided to try going up a 3rd resonance on the RefCav. Again, things looked stable, so we locked the FSS and ISS here and let it sit for a while. After almost an hour we saw no runaway on PMC Refl, so we're going to try operating at this higher NPRO crystal temperature for a bit and see how things go (we went 2 hours with the ISS OFF yesterday and saw nothing, so this comes and goes on potentially longer time scales).
The NPRO crystal temperature is now 25.2 °C (it started at 24.6 °C), and the temperature control reading on the FSS MEDM screen is now around 0.5 (it was at -0.17 when we started). We have left the ISS Offset at 4.1, and have moved the RefSignal so it is diffracting ~4%; this is to give the ISS a little more range since we've been seeing it move a little more with this new NPRO (should the crystal temperature change solve the PMC Refl runaway issue we could probably revert this change, but I want to keep it a little higher for now). Ryan S. will edit DIAG_MAIN to change the upper limit threshold on the diffracted power % to clear the "Diffracted power is too high" alert, and Daniel, Vicky, and Camilla have been re-tuning the SQZ and ALS lasers to match the new PSL NPRO crystal temperature. Also, the ISS 2nd loop will have to be adjusted to run with this higher diffracted power %.
Edit: Corrected the initial high PMC Refl from 40W to 30W. We saw excursions up towards 50W later on, as can be seen in the plots Ryan posted below.
Sadly, after about an hour and a half of leaving the PSL in this configuration while initial alignment was running for the IFO, we saw behavior similar to what was causing the ISS oscillations last night (see attachment). There's a power jump in the NPRO output power (while the FSS was glitching, but likely unrelated), followed by a dip in NPRO power that lines up with a ~2W increase in PMC reflected power, likely indicating a mode hop. This power excursion was too much of a change out of the PMC for the ISS, so the ISS lost lock. Since then, there have been small power excursions (up to around 18-19W), but nothing large enough to cause the ISS to give out.
As Jason mentioned, I've edited the range of ISS diffracted power checked by DIAG_MAIN to be between 2.5% and 5.5% as we'll be running with the ISS at around 4% at least for now. I also checked with Daniel on how to adjust the secondloop tuning to account for this change, and he advised that the ISS_SECONDLOOP_REFERENCE_DFR_CAL_OFFSET should be set to be the negative value of the diffracted power percentage, so I changed it to be -4.2 and accepted it in SDF.
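For reference, a minimal sketch of the kind of range check described above; this is not the actual DIAG_MAIN code (the real guardian test reads the ISS channel via ezca and yields messages), but the logic is of this form:

# Minimal sketch of the diffracted-power range check described above.
# Not the actual H1 DIAG_MAIN test; thresholds are the new 2.5%-5.5% limits.
DIFF_MIN, DIFF_MAX = 2.5, 5.5   # ISS diffracted power limits, in percent

def check_iss_diffracted_power(diff_percent):
    """Return a warning string if the diffracted power (%) is out of range, else None."""
    if diff_percent > DIFF_MAX:
        return 'Diffracted power is too high'
    if diff_percent < DIFF_MIN:
        return 'Diffracted power is too low'
    return None

# e.g. check_iss_diffracted_power(4.0) -> None, check_iss_diffracted_power(6.0) -> warning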
Last attachments are of the situation as we first saw it this morning before making adjustments, and a zoomed-out trend over our whole work this morning.
FAMIS 21614
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Tue Nov 05 10:06:12 2024 INFO: Fill completed in 6min 8secs
Footing prep and pour was completed at the end of last week. Today more forms are being set for the stem wall pour. It is my understanding that the stem wall could still be poured as early as the end of this week. T. Guidry
TITLE: 11/05 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 27mph Gusts, 17mph 3min avg
Primary useism: 0.08 μm/s
Secondary useism: 0.33 μm/s
QUICK SUMMARY:
H1 had a couple hours of Observing last night, and H1 was powering up as I walked in. Today is Maintenance Day with PSL, HAM6 and CDS items on the docket. LVEA is laser HAZARD.
Winds died down around midnight last night, and this is when H1 had chances at NLN. Microseism is mostly beneath the 95th percentile and continues to trend down.
Workstations were updated and rebooted. This was an OS package update; Conda packages were not updated.
IFO is in NLN and OBSERVING as of 11:15 UTC.
IMC/EX Rail/Non-Lockloss Lockloss Investigation:
In this order and according to the plots, this is what I believe happened.
I believe EX saturated, prompting the IMC to fault while guardian was in DRMI_LOCKED_CHECK_ASC (23:17 PT), putting guardian in a weird fault state but keeping it in DRMI, without unlocking (still a mystery). Ryan C noticed this (23:32 PT) and requested IMC_LOCK to DOWN (23:38 PT), tripping MC2's M2 and M3 (M3 first by a few ms). This prompted Guardian to call me (23:39 PT). What is strange is that even after Ryan C successfully put the IMC in DOWN, guardian was unable to realize that the IMC was in DOWN, and STAYED in DRMI_LOCKED_CHECK_ASC until Ryan C requested it to go to INIT. Only after that did the EX saturations stop. While the EX L3 stage is what saturated before the IMC, I don't know what caused EX to saturate like this. The wind and microseism were not too bad, so this could well be one of the other known glitches happening before all of this, causing EX to rail.
Here’s the timeline (Times in PT)
23:17: EX saturates. 300ms later, the IMC faults as a result of this.
23:32: Ryan C notices this weird behavior in the IMC lock and MC2 and texts me. He noticed that the IMC lost lock and faulted, but that this didn't prompt an IFO lockloss. Guardian was still at DRMI_LOCKED_CHECK_ASC, not registering that the IMC was unlocked and EX was still railing.
23:38: In response, Ryan C put IMC_LOCK to DOWN, which tripped MC2's M3 and M2 stages. This called me. I was experiencing technical issues logging in, so I ventured to site (made it on-site at 00:40 UTC).
00:00: Ryan C successfully downed IFO by requesting INIT. Only then did EX stop saturating.
00:40: I start investigating this weird IMC fault. I also untrip MC2 and start an initial alignment (fully auto). We lose lock at LOWNOISE_ESD_ETMX, seemingly due to large sus instability probably from the prior railing since current wind and microseism aren’t absurdly high. (EY, IX, HAM6 and EX saturate). The LL tool is showing an ADS excursion tag.
03:15: NLN and OBSERVING achieved. We got to OMC_Whitening at 02:36 but violins were understandably quite high after this weird issue.
Evidence in plots explained:
Img 1: M3 trips 440ms before M2. Doesn't say much, but it was suspicious before I found out that EX saturated first. This was the first thing I investigated since it was the reason for the call (I think).
Img 2: M3 and M2 stages of MC2 showing IMC fault beginning 23:17 as a result of EX saturations (later img). All the way until Ryan C downs IMC at 23:38, which is also when the WD tripped and when I was called.
Img 3: IMC lock faulting but NOT causing ISC_LOCK to lose lock. This plot shows that guardian did not put the IFO in DOWN or cause it to lose lock. The IFO is in DRMI_LOCKED_CHECK_ASC but the IMC is faulting. Even when Ryan C downed the IMC (which is where the crosshair is), this did not cause ISC_LOCK to go to DOWN. The end of the time axis is when Ryan C put the IFO in INIT, finally causing a lockloss and ending the railing in EX (00:00).
Img 4: EX railing for 42 minutes straight, from 23:17 to 00:00.
Img 5: EX beginning to rail 300ms before the IMC faults.
Img 6: EX L2 and L3 OSEMs at the time of the saturation. Interestingly, L2 doesn’t saturate but before the saturation, there is erratic behavior. Once this noisy signal stops, L3 saturates.
Img 7: EX L1 and M0 OSEMs at the time of the saturation, zoomed in. It seems that there is a noisy and loud signal, (possibly a glitch or due to microseism?) in the M0 stage that is very short which may have kicked off this whole thing.
Img 8: EX L1 and M0 OSEMs at the whole duration of saturation. We can see the moves that L1 took throughout the 42 minutes of railing, and the two kicks when the railing started and stopped.
Img 9 and 10: An OSEM from each stage of ETMX, including one extra from M0 (since signals were differentially noisy). Img 9 is zoomed in to try to capture what started railing first. Img 10 shows the whole picture with L3’s railing. I don’t know what to make of this.
Further investigation:
See what else may have glitched or lost lock first. Ryan C's induced lockloss, which ended the constant EX railing, doesn't seem to show up in the LL tool, so this would have to be done in order of likely suspects. I've never seen this behavior before, so I'd be curious to hear what this was if anyone else has seen it.
Other:
Ryan’s post EVE update: alog 81061
Ryan’s EVE Summary: alog 81057
Couple of strange things that happened before this series of events that Ibrahim has written out:
TJ Suggested checking on the ISC_DRMI node: seemed fine, was in DRMI_3F_LOCKED from 06:54 until DRMI unlocked at 17:17UTC, then it went to DOWN.
2024-11-05_06:54:59.650968Z ISC_DRMI [DRMI_3F_LOCKED.run] timer['t_DRMI_3f'] done
2024-11-05_07:17:40.442614Z ISC_DRMI JUMP target: DOWN
2024-11-05_07:17:40.442614Z ISC_DRMI [DRMI_3F_LOCKED.exit]
2024-11-05_07:17:40.442614Z ISC_DRMI STALLED
2024-11-05_07:17:40.521984Z ISC_DRMI JUMP: DRMI_3F_LOCKED->DOWN
2024-11-05_07:17:40.521984Z ISC_DRMI calculating path: DOWN->DRMI_3F_LOCKED
2024-11-05_07:17:40.521984Z ISC_DRMI new target: PREP_DRMI
2024-11-05_07:17:40.521984Z ISC_DRMI executing state: DOWN (10)
2024-11-05_07:17:40.524286Z ISC_DRMI [DOWN.enter]
TJ also asked what the H1:GRD-ISC_LOCK_EXECTIME was doing; this kept getting larger and larger (e.g. after 60s it was at 60), as if ISC_LOCK had hung, see attached (bottom left plot). It started getting larger at 6:54:50 UTC, which was the same time as the last message from ISC_LOCK, and reached a maximum of 3908 seconds (~65 minutes) before Ryan reset it using INIT. Another simpler plot here.
TJ worked out that this is due to a call to cdu.avg without a timeout.
The ISC_LOCK DRMI_LOCKED_CHECK_ASC convergence checker must have returned True, so it went ahead to the next lines, which contained a call to NDS via cdu.avg().
We've had previous issues with similar calls getting hung. TJ has already written a fix to avoid this, see 71078.
'from timeout_utils import call_with_timeout' was already imported, as it is used for the PRMI checker. I edited the calls to cdu.avg in ISC_LOCK to use the timeout wrapper:
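For illustration, the general pattern looks like the following sketch (this is not the actual timeout_utils implementation or its exact signature, and the usage line is hypothetical): run the blocking NDS average in a worker and give up after a timeout instead of letting the guardian state hang.

# Illustrative sketch only -- not the actual timeout_utils.call_with_timeout.
# Run the blocking call in a worker thread and return None if it doesn't
# finish in time, so the state can handle the failure instead of hanging.
import concurrent.futures

def call_with_timeout_sketch(func, *args, timeout=30, **kwargs):
    """Run func(*args, **kwargs); return None if it doesn't finish within timeout."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(func, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        # the hung call is abandoned in the background, not killed
        return None
    finally:
        pool.shutdown(wait=False)

# hypothetical usage pattern: instead of  avgs = cdu.avg(-5, channels)
# avgs = call_with_timeout_sketch(cdu.avg, -5, channels)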
I was about to go to bed when I glanced at the control room screenshots and saw that something looked wrong, verbal was full of MC2 and EX saturations and ETMX drivealign was constantly saturating along with the suspension. I saw ISC_LOCK was also in a weird state, as it thought it was in DRMI_LOCKED_PREP_ASC despite having lost the IMC and both ARMS.
I texted Ibrahim that something didn't look right. I logged in soon after and requested IMC_LOCK to DOWN to stop it, which may have been a bad move as it tripped the M2 and M3 watchdogs on MC2. After some texts with Ibrahim I tried to bring ISC_LOCK to DOWN; I had to send it to INIT to get it there. Ibrahim's taking over now.
TITLE: 11/05 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Wind
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: We've been down from wind for most of the day; winds are finally below 20mph for the 3-minute average as of the end of the shift.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
00:20 | PEM | Robert | LVEA | - | Moving magnetometer | 01:20 |
02:21 | PEM | Robert | CER | - | Move Magnetometer | 02:26 |
PSL/IMC investigations took place today while we were down for the wind (alog 81056). The secondary microseism is still decreasing, however slowly. It's been a windy shift; I had a ~2 hour window of >30mph winds where I was able to hold ALS, but I ran into issues at ENGAGE_ASC where we would have an IMC lockloss 13 seconds into the state every time.
See zoomed out trends from the 2 hour ISS_ON vs. 2 hour ISS_OFF test attached.
edit: fixed labels that were previously backwards (thanks Camilla). With ISS off, we had not seen glitches in yesterday's 2-hour test.
This alog is a continuation of previous efforts to correctly calibrate and compare the LSC couplings between Hanford and Livingston. Getting these calibrations right has been a real pain, and there's a chance that there could still be an error in these results.
These couplings are measured by taking the transfer function between DARM and the LSC CAL CS CTRL signal. All couplings were measured without feedforward. The Hanford measurements were taken during the March 2024 commissioning break. The Livingston MICH measurement is from Aug 29 2023 and the PRCL measurement from June 23 2023.
As an additional note, during the Hanford MICH measurement, both MICH and SRCL feedforward were off. However, for the Livingston MICH measurement, SRCL feedforward was on. For the PRCL measurements at both sites, both MICH and SRCL feedforward were engaged.
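As a rough illustration of the transfer function estimate described above (not the DTT templates actually used for these results), something like the following GWpy snippet would produce an equivalent spectrum. The GPS times are placeholders, the fftlength/overlap choices are arbitrary, and H1:LSC-MICH_OUT_DQ is only a stand-in for the LSC CAL CS CTRL channel used in the measurements.

# Rough GWpy sketch of a CTRL -> DARM transfer function estimate.
# NOT the DTT templates used here; GPS times and the CTRL channel
# name are placeholders/stand-ins.
from gwpy.timeseries import TimeSeries

start, end = 1394000000, 1394000600   # placeholder GPS span
darm = TimeSeries.get('H1:CAL-DELTAL_EXTERNAL_DQ', start, end)
ctrl = TimeSeries.get('H1:LSC-MICH_OUT_DQ', start, end)   # stand-in CTRL channel

# TF(f) = CSD(ctrl, darm) / PSD(ctrl); the calibration factors described
# below would still need to be applied to both channels.
tf = ctrl.csd(darm, fftlength=8, overlap=4) / ctrl.psd(fftlength=8, overlap=4)
print(abs(tf))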
The first plot attached shows a calibrated comparison between the MICH, PRCL and SRCL couplings at LHO.
The second plot shows a calibrated comparison between the Hanford and Livingston MICH couplings. I also included a line indicating 1/280, an estimated coupling level for MICH based on an arm cavity finesse of 440. Both sites have flat coupling between about 20-60 Hz. There is a shallow rise in the coupling above 60 Hz; I am not sure if that's real or some artifact of incorrect calibration. The Hanford coupling below 20 Hz has a steeper response, which perhaps looks like some cross coupling from SRCL (it looks about 1/f^2 to me). Maybe this is present because SRCL feedforward was off.
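For reference, the quoted 1/280 level appears to be the usual arm-cavity suppression factor of pi/(2*F); with a finesse F = 440:
pi / (2 * 440) ≈ 3.6e-3 ≈ 1/280 m/m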
The third plot shows a calibrated comparison between the Hanford and Livingston PRCL couplings. I have no sense of what this coupling should look like. If the calibration here is correct, this indicates that the PRCL coupling at Hanford is about an order of magnitude higher than Livingston. Whatever coupling is present has a different response between both sites, so I don't really know what to make of this.
The Hanford measurements used H1:CAL-DELTAL_EXTERNAL_DQ and the darm calibration from March 2024 (/ligo/groups/cal/H1/reports/20240311T214031Z/deltal_external_calib_dtt.txt).
The Livingston measurement used L1:OAF-CAL_DARM_DQ and a darm calibration that Dana Jones used in her previous work (74787, saved in /ligo/home/dana.jones/Documents/cal_MICH_to_DARM/L1_DARM_calibration_to_meters.txt)
LHO MICH calibration: I updated the CAL CS filters to correctly match the current drive filters. However, I made the measurement on March 11, before catching some errors in the filters. I incorrectly applied a 200:1 filter, and multiplied by sqrt(1/2) when I should have multiplied by sqrt(2) (76261). Therefore, my calibration includes a 1:200 filter and a factor of 2 to appropriately compensate for these mistakes. Additionally, my calibration includes a 1e-6 gain to convert from um to m, and an inverted whitening filter [100, 100:1, 1]. This is all saved in a DTT template: /ligo/home/elenna.capote/LSC_calibration/MICH_DARM_cal.xml
LLO MICH calibration: I started with Dana Jones' template (74787), and copied it over into my directory: /ligo/home/elenna.capote/LSC_calibration/LLO_MICH.xml. I inverted the whitening filter using [100,100,100,100,100:1,1,1,1,1] and applied a gain of 1e-6 to convert um to m.
LHO PRCL calibration: I inverted the whitening using [100,100:1,1] and converted from um to m with 1e-6.
LLO PRCL calibration: I inverted the whitening using [10,10,10:1,1,1] and converted from um to m with 1e-6.
I exported the calibrated traces to plot myself. Plotting code and plots saved in /ligo/home/elenna.capote/LSC_calibration
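As a cross-check of how those LHO MICH factors stack up, here is a standalone sketch (not the DTT template itself). It assumes DC-normalized zero/pole responses rather than foton's normalization, and reads "1:200" as a zero at 1 Hz over a pole at 200 Hz and the inverted whitening as zeros at [100, 100] Hz over poles at [1, 1] Hz.

# Standalone sketch of stacking the LHO MICH compensation factors listed above.
# Assumptions: DC-normalized (1 + i f/fz)/(1 + i f/fp) responses, not foton's
# normalization convention.
import numpy as np

def zp_response(f, zeros_hz, poles_hz):
    """DC-normalized zero/pole frequency response evaluated at frequencies f (Hz)."""
    h = np.ones_like(f, dtype=complex)
    for fz in zeros_hz:
        h *= 1 + 1j * f / fz
    for fp in poles_hz:
        h /= 1 + 1j * f / fp
    return h

f = np.logspace(0, 3, 500)                               # 1 Hz to 1 kHz

comp_1_200 = zp_response(f, [1.0], [200.0])              # compensates the wrong 200:1 filter
inv_whiten = zp_response(f, [100.0, 100.0], [1.0, 1.0])  # inverted whitening [100, 100:1, 1]
gain = 2.0 * 1e-6                                        # factor of 2 for the sqrt(1/2) vs sqrt(2) error, 1e-6 for um -> m

mich_cal = gain * comp_1_200 * inv_whiten                # multiply onto the raw CTRL trace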
Evan Hall has a nice plot of PRCL coupling from O1 in his thesis, Figure 2.16 on page 37. I have attached a screen grab of his plot. It appears as if the PRCL coupling now in O4 is lower than it was in this measurement (from, I am assuming, O1): eyeballing about 4e-4 m/m at 20 Hz now in O4 compared to about 2e-3 m/m at 20 Hz in Evan's plot, roughly a factor of 5 lower.
WP12186
Richard, Fil, Erik, Dave:
We performed a complete power cycle of the h1psl0. Note, this is not on the Dolphin fabric so no fencing was needed. Procedure was
The system was power cycled at 10:11 PDT. When the iop model started, it reported a timing error. The duotone signal (ADC0_31) was a flat line signal of about 8000 counts with a noise of a few counts.
Erik thought the timing card had not powered up correctly, so we did a second round of power cycles at 10:30 and this time the duotone was correct.
NOTE: the second ADC failed its AUTOCAL on both restarts. This is the PSL FSS ADC.
If we continue to have FSS issues, the next step is to replace the h1pslfss model's ADC and 16bit DAC cards.
[ 45.517590] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 0 : Took 181 ms : ADC AUTOCAL PASS
[ 45.705599] h1ioppsl0: ERROR - GSC_16AI64SSA : devNum 1 : Took 181 ms : ADC AUTOCAL FAIL
[ 45.889643] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 2 : Took 181 ms : ADC AUTOCAL PASS
[ 46.076046] h1ioppsl0: INFO - GSC_16AI64SSA : devNum 3 : Took 181 ms : ADC AUTOCAL PASS
We decided to go ahead and replace h1pslfss model's ADC and DAC card. The ADC because of the continuous autocal fail, the DAC to replace an aging card which might be glitching.
11:30 Powered system down, replace second ADC and second DAC cards (see IO Chassis drawing attached).
When the system was powered up we had good news and bad news. The good news, ADC1 autocal passed after the previous card had been continually failing since at least Nov 2023. The bad news, we once again did not have a duotone signal in ADC0_31 channel. Again it was a DC signal, with amplitude 8115+/-5 counts.
11:50 Powered down for a 4th time today, replaced timing card and ADC0's interface card (see drawing)
12:15 powered the system back up, this time everything looks good. ADC1 AUTOCAL passed again. Duotone looks correct.
Note that the new timing card duotone crossing time is 7.1 µs, and the old card had a crossing of 7.6 µs
Here is a summary of the four power cycles of h1psl0 we did today:
Restart | ADC1 AUTOCAL | Timing Card Duotone |
10:11 | FAIL | BAD |
10:30 | FAIL | GOOD |
11:30 | (new card) PASS | BAD |
12:15 | (new card) PASS | (new cards) GOOD |
Card Serial Numbers
Card | New (installed) | Old (removed) |
64AI64 ADC | 211109-24 | 110203-18 |
16AO16 DAC | 230803-05 (G22209) | 100922-11 |
Timing Card | S2101141 | S2101091 |
ADC Interface | S2101456 | S1102563 |
Detailed timeline:
Mon04Nov2024
LOC TIME HOSTNAME MODEL/REBOOT
10:20:14 h1psl0 ***REBOOT***
10:21:15 h1psl0 h1ioppsl0
10:21:28 h1psl0 h1psliss
10:21:41 h1psl0 h1pslfss
10:21:54 h1psl0 h1pslpmc
10:22:07 h1psl0 h1psldbb
10:33:20 h1psl0 ***REBOOT***
10:34:21 h1psl0 h1ioppsl0
10:34:34 h1psl0 h1psliss
10:34:47 h1psl0 h1pslfss
10:35:00 h1psl0 h1pslpmc
10:35:13 h1psl0 h1psldbb
11:43:20 h1psl0 ***REBOOT***
11:44:21 h1psl0 h1ioppsl0
11:44:34 h1psl0 h1psliss
11:44:47 h1psl0 h1pslfss
11:45:00 h1psl0 h1pslpmc
11:45:13 h1psl0 h1psldbb
12:15:47 h1psl0 ***REBOOT***
12:16:48 h1psl0 h1ioppsl0
12:17:01 h1psl0 h1psliss
12:17:14 h1psl0 h1pslfss
12:17:27 h1psl0 h1pslpmc
12:17:40 h1psl0 h1psldbb