Displaying reports 6441-6460 of 84169.
Reports until 10:43, Thursday 12 September 2024
H1 ISC (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 10:43, Thursday 12 September 2024 - last comment - 10:49, Thursday 12 September 2024(80059)
Lockloss 17:37 UTC

Lockloss during SQZ commissioning, during a suspect ZM4 move.

Comments related to this report
camilla.compton@LIGO.ORG - 10:49, Thursday 12 September 2024 (80060)

1410197873 Maybe squeezing can cause a lockloss... we lost lock 400 ms after an 80 urad ZM4 pitch move (ramp time 0.1 s), plot attached. Maybe this caused extra scatter in the IFO signal. We were seeing a scatter peak in DARM around 100 Hz from PSAMS settings near where we were.
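As a rough back-of-the-envelope check (my own illustration, not from the log — the lever arm below is a made-up number), a scattered-light fringe frequency from a moving scattering point can be estimated as f = 2v/λ:

```python
# Rough scattered-light fringe-frequency estimate for the ZM4 move.
# ASSUMPTIONS (illustrative, not measured): the 80 urad pitch ramp over
# 0.1 s moves the scattering point longitudinally via a HYPOTHETICAL
# 5 cm lever arm; laser wavelength is 1064 nm.
angle_rad = 80e-6          # 80 urad pitch move
ramp_s = 0.1               # ramp time from the log
lever_arm_m = 0.05         # HYPOTHETICAL beam offset from the pivot
wavelength_m = 1064e-9     # Nd:YAG laser wavelength

velocity = (angle_rad / ramp_s) * lever_arm_m   # longitudinal speed, m/s
fringe_hz = 2 * velocity / wavelength_m         # 2v/lambda fringe rate

print(f"fringe frequency ~ {fringe_hz:.0f} Hz")
```

With these assumed numbers the fringe lands in the same neighborhood as the ~100 Hz scatter peak mentioned above, which is at least consistent with the scatter hypothesis; the real lever arm is unknown.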

Images attached to this comment
H1 CAL (CAL)
ibrahim.abouelfettouh@LIGO.ORG - posted 09:14, Thursday 12 September 2024 - last comment - 11:29, Thursday 12 September 2024(80057)
Calibration Sweep 09/12

Took a calibration sweep at ~13 hours into the H1 NLN lock.

BB Start and End: 15:27 UTC, 15:35 UTC

File Names:

/ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20240912T152702Z.xml

 

Simulines GPS Start and End: 1410190636, 1410192001

File Names:

File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T153646Z.hdf5
File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T153646Z.hdf5

Calibration monitor screenshot (taken right after hitting run on the BB) attached.

Images attached to this report
Comments related to this report
francisco.llamas@LIGO.ORG - 11:29, Thursday 12 September 2024 (80061)

Camilla, Ibrahim and I ran a second set of simulines. This is not a full calibration measurement. Screenshot of the monitor lines attached. We got the following output at the end of the measurement; note the 30 minutes between 'Ending lockloss monitor' and 'File written':

2024-09-12 17:16:07,510 | INFO | Ending lockloss monitor. This is either due to having completed the measurement, and this functionality being terminated; or because the whole process was aborted.
2024-09-12 17:48:05,831 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,845 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,856 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,866 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20240912T165244Z.hdf5
2024-09-12 17:48:05,876 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20240912T165244Z.hdf5
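The ~30-minute gap above can be confirmed by parsing the timestamps, which use the default Python logging asctime format (`%Y-%m-%d %H:%M:%S,%f`); a minimal sketch:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S,%f"  # default Python logging asctime format

t_end_monitor = datetime.strptime("2024-09-12 17:16:07,510", FMT)
t_file_write = datetime.strptime("2024-09-12 17:48:05,831", FMT)

gap = t_file_write - t_end_monitor
print(f"gap between monitor end and file write: {gap.total_seconds()/60:.1f} min")
```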

Images attached to this comment
LHO VE
david.barker@LIGO.ORG - posted 08:17, Thursday 12 September 2024 (80056)
Thu CP1 Fill

Thu Sep 12 08:14:19 2024 INFO: Fill completed in 14min 15secs

Jordan confirmed a good fill curbside.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:44, Thursday 12 September 2024 (80055)
OPS Day Shift Start

TITLE: 09/12 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: EARTHQUAKE
    Wind: 14mph Gusts, 9mph 3min avg
    Primary useism: 0.19 μm/s
    Secondary useism: 0.29 μm/s
QUICK SUMMARY:

IFO is in NLN and OBSERVING since 01:49 UTC (13 hr lock!)

Just entered EQ mode due to 4 back-to-back earthquakes arriving from Mexico (4.6-5.1 Mag).

Planning on going into Calibration + Commissioning time at 8:20 PST (15:30 UTC).

H1 General
anthony.sanchez@LIGO.ORG - posted 22:14, Wednesday 11 September 2024 (80054)
Wednesday Ops Eve Shift End

TITLE: 09/12 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY:
Short 48 min lock when the shift started, then we let H1 try to lock itself. After 5 locklosses and an initial alignment we got to NLN.

1:47 UTC GRB-Short E511072
1:47 UTC GRB-Short E511073
1:48 UTC GRB-Short E511072
 
1:49 UTC Nominal_Low_Noise reached
1:49 UTC Observing reached


LOG:
No Log


PS here is a rainbow.

Images attached to this report
H1 General (Lockloss)
anthony.sanchez@LIGO.ORG - posted 17:11, Wednesday 11 September 2024 - last comment - 09:17, Thursday 12 September 2024(80052)
Wednesday Ops Eve Shift Start

TITLE: 09/11 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 147Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 21mph Gusts, 15mph 3min avg
    Primary useism: 0.04 μm/s
    Secondary useism: 0.10 μm/s
QUICK SUMMARY:

IFO was locked for 48 min when the shift started.
Everything was running smoothly until the lockloss of unknown cause.

 

Images attached to this report
Comments related to this report
oli.patane@LIGO.ORG - 09:17, Thursday 12 September 2024 (80058)
Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 16:33, Wednesday 11 September 2024 (80051)
OPS Day Shift Summary

TITLE: 09/11 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 145Mpc
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

Calibration Early Morning (8AM-10AM):

Lockloss during calibration, with probable reasons below. It happened as soon as simulines ran an L1_SUSETMX injection. Either:

So in short, no calibration work was done.

Squeeze Late Morning (10AM-12PM):

We got to NLN again at 17:17 UTC! Then a 6.3 EQ came through and we lost lock while the ground PEAKMON was reading 4.7 microns. We initially thought we would make it, but alas.

Sadly, no squeeze work was done either.

Earthquakey, Windstormy Quiet Noon (12-4:30PM)

After this 6.3 EQ, we were hit by two more 5.8's from Vanuatu while wind speeds steadily went over 30 mph from 19:20 UTC to 20:20 UTC. We stayed in this unstable state for about 3 hours, losing lock approximately 20 times pre-DRMI. Wind peaked at 38 mph, but once it went below ~28 mph we were able to lock, and quite quickly as well. The successful acquisition run only took 50 minutes!

As though the tree has to fall when nobody is listening, I left the room for 40 minutes for the OPS meeting and came back to a fully automatically locked IFO in NLN. We were in OBSERVING minutes later. I guess the wind also came down by 10 mph avg in that time, so scientifically, it was probably that.

Initial Alignment Weirdness: Ongoing Issue

During initial alignment's SRC align, and only since making the SR3 move, SRM, SR2, SQUEEZE_OUT and IFO_OUT saturate at times when they are not known to do so. Looking closely at ASC AS_A DC_SUM_OUT, this happens when the counts are sufficiently high but a glitch occurs that misaligns SRM very badly. Ryan C found a temporary solution by going into IFO_NOTIFY and then pausing at PREP_FOR_SRY for a few seconds.

Sheila and I investigated very briefly and found that the ASC trigger signal sometimes malfunctions, activating when the signal is well below its activation threshold. More investigation and monitoring to come.

Other:

IMC Gain Redistribution during LASER_NOISE_SUPPRESSION worked! I’ve unmonitored its SDF (by instruction) and we’re testing/monitoring it. SDF Screenshot attached.


LOG:

Start Time | System | Name    | Location                | Lazer_Haz | Task                                      | Time End
23:58      | SAF    | H1      | LHO                     | YES       | LVEA is laser HAZARD                      | 18:24
15:19      | FAC    | Karen   | Optics Lab              | N         | Technical Cleaning                        | 16:19
17:49      | SQZ    | Camilla | LVEA                    | YES       | Turn on Hartmann Wavefront Sensor chassis | 17:51
21:04      | EE     | Marc    | MY                      | N         | Part Search                               | 22:29
21:40      | VAC    | Janos   | Cryo: LVEA, MX/Y, EX/Y  | N         | Cryopump Check                            | 23:39
Images attached to this report
H1 PEM (PEM)
samantha.callos@LIGO.ORG - posted 15:35, Wednesday 11 September 2024 (80049)
CER ACs Testing

Samantha Callos, Robert Schofield

August 30, 2024
CER ACs turned off and on at the following times:

CER Bank 1 (Upstairs)

CER Bank 2 (Downstairs)

LHO General
tyler.guidry@LIGO.ORG - posted 15:07, Wednesday 11 September 2024 (80048)
VPW Heat Pump
The new Daikin heat pump that serves the clean side of the VPW has now been fully tied into the building and is running. One of the two cooling circuits appears to be DOA, as it is completely flat. In the coming days the Facilities team will pull a vacuum on the problematic lineset to assess whether it has a leak and, if so, to what degree. In the meantime, the unit will still heat and, to a lesser degree, cool the spaces.

E. Otterman C. Soike T. Guidry
LHO General (Lockloss)
ibrahim.abouelfettouh@LIGO.ORG - posted 14:50, Wednesday 11 September 2024 (80047)
OPS Day Midshift Update: Locking Troubles

We've been experiencing locking issues the whole day, briefly reaching NLN twice for <30 mins each time. Here's why:

Lock 1: Calib Injection cause

Lock 2: 6.3 EQ Cause

Lock Attempt 3: 38 mph gusts + back-to-back 5.8 mag EQs are preventing ALS from locking. We've lost lock approximately 8 times now on the way to DRMI, only grabbing DRMI once for <1 minute.

 

Specifics (to be repeated in the summary alog) below:

Calibration Early Morning Comissioning 8AM-10AM:

Lockloss during calibration, with probable reasons below. It happened as soon as simulines ran an L1_SUSETMX injection. Either:

So in short, no calibration work was done.

 

Squeeze Late Morning 10AM-12PM:

We got to NLN again at 17:17 UTC! Then a 6.3 EQ came through and we lost lock while the ground PEAKMON was reading 4.7 microns. We initially thought we would make it, but alas.

Sadly, no squeeze work was done either.

 

Earthquakey, Windstormy Quiet Noon 12PM-Now

After this 6.3 EQ, we were hit by two more 5.8’s from Vanuatu while wind speeds steadily went over 30mph from 19:20 UTC to 20:20 UTC.

We've done one initial alignment, having issues there too at SRC align, which has a weird glitch that only sometimes prevents SRM from catching, also causing SQZ_OUT and SR2/SRM to saturate and even tripping the SRM WD once. Pausing ALIGN_IFO at PREP_FOR_SRY seems to help, but for unknown reasons so far.

While the EQ has calmed down, reducing the ground motion to a lockable state, the wind speeds have picked up past 35 mph (which statistically means >2 hr to lock 95% of the time). ALS is barely locking and we haven't made it past LOCKING_ALS since 20:02 UTC (1 hr 48 min ago).

H1 ISC
francisco.llamas@LIGO.ORG - posted 14:48, Wednesday 11 September 2024 (80044)
A2L DHARD from AS_A Offsets

Sheila, Louis, Francisco

We changed the ASC-AS_A_DC_YAW_OFFSET from -0.15 to -0.3 and saw an increase in ASC-DHARD_Y power spectrum in the 10-30 Hz range.

Originally, we planned to make simulines and pcal BB injections with two different offset values for the WFS, given a thermalized and locked IFO. Since the IFO was not thermalized, we instead decided to make calibration measurements after changing the WFS offset, revert the offset, and make calibration measurements again. We lost lock during the calibration measurement. However, we used DTT to look at the data from the different offset values during NOMINAL_LOW_NOISE (as seen in AS_A_DC_YAW_OFFSET).

DHARD_DARM_PS_10-30_Hz shows the power spectrum from 10 Hz to 30 Hz for the channels of relevance. From this plot we see that the YAW dof is coupled to DARM such that the lines are visible. The increase in magnitude suggests that a big offset makes things worse (also seen in DHARD_DARM_TF_10-30_Hz), so we might try smaller offsets next time.

DTT template can be found in /ligo/home/francisco.llamas/A2L/DHARD_DARM.xml
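The kind of 10-30 Hz band comparison described above can be reproduced outside DTT with a band-limited RMS computed from a Welch PSD; a sketch with synthetic data (the signal here is a placeholder, not H1 data):

```python
import numpy as np
from scipy.signal import welch

def band_rms(x, fs, f_lo, f_hi):
    """Band-limited RMS of x between f_lo and f_hi via a Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f >= f_lo) & (f <= f_hi)
    df = f[1] - f[0]
    return np.sqrt(np.sum(pxx[band]) * df)

# Synthetic stand-in for a DHARD_Y-like spectrum: a 20 Hz line (in band)
# plus a 60 Hz line (out of band).
fs = 256.0
t = np.arange(0, 64, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

rms_10_30 = band_rms(x, fs, 10, 30)
print(f"10-30 Hz BLRMS: {rms_10_30:.3f}")
```

Comparing this number for data taken at the two AS_A offset values would quantify the "increase in magnitude" seen in the plots.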

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 13:44, Wednesday 11 September 2024 (80046)
Program to show who is logged into CDS remotely at a given time

Yesterday we added the remote access ioc (RACCESS) uid channels to the DAQ so they can be trended.

I have written a program called who_is_logged_in which takes any gpstime format and lists who was logged into CDS remotely at that time.

The epoch for these channels is noon Tue 10 Sept 2024.

Example:

who_is_logged_in "17:00 yesterday"

Who is logged remotely into CDS on Tue Sep 10 17:00:00 2024 PDT

cdsssh    Number of users 5
cdsssh    elenna.capote                 (1 session)
cdsssh    erik.vonreis                  (1 session)
cdsssh    ezekiel.dohmen                (1 session)
cdsssh    gerardo.moreno                (1 session)
cdsssh    louis.dartez                  (2 sessions)

cdslogin  Number of users 1
cdslogin  david.barker                  (1 session)

The code uses minute trend data, so you will get the login list rounded to the minute.

If you ask for who is logged in now, it defaults to 10 minutes ago because minute trend data is not immediately available, e.g.

who_is_logged_in now

Who is logged remotely into CDS on Wed Sep 11 13:32:16 2024 PDT

cdsssh    Number of users 7
cdsssh    elenna.capote                 (1 session)
cdsssh    erik.vonreis                  (1 session)
cdsssh    ezekiel.dohmen                (1 session)
cdsssh    gerardo.moreno                (1 session)
cdsssh    jim.warner                    (1 session)
cdsssh    louis.dartez                  (2 sessions)
cdsssh    tyler.guidry                  (1 session)

cdslogin  Number of users 2
cdslogin  david.barker                  (1 session)
cdslogin  root                          (1 session)
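The rounding behavior described above (minute-trend data, plus the 10-minute fallback for "now") can be sketched as pure GPS arithmetic; the 10-minute margin and minute rounding come from the description, while the function name and signature are my own assumptions:

```python
def resolve_trend_gps(requested_gps, now_gps, is_now=False, margin_s=600):
    """Pick the GPS second whose minute trend the lookup should read.

    Minute-trend data is only complete up to ~10 minutes ago, so a
    request for "now" is pushed back by margin_s; every request is then
    rounded down to the start of its minute, matching the tool's
    "rounded to the minute" login list.
    """
    gps = now_gps - margin_s if is_now else requested_gps
    return gps - (gps % 60)

# "now" at GPS 1410192001 -> read the minute trend from ~10 min earlier
print(resolve_trend_gps(0, 1410192001, is_now=True))
```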
 

H1 TCS (DetChar)
camilla.compton@LIGO.ORG - posted 11:48, Wednesday 11 September 2024 (80043)
HWS Lasers turned on again, HWS plates need to be reinstalled.

TJ, Camilla.

We turned on the HWS lasers via MEDM and the LVEA chassis; they came back with the same powers we turned them off with (1.7 mW for Y and 3.2 mW for X). Since we realigned SR3 to the pre-April alignment yesterday (80028), both beams are on the CCD cameras (picture attached). The HWS plates are still removed from when we started alignment work in 78083. We will need to replace these before getting HWS signals, then restart the HWS code.

The CO2X chassis tripped off with the "RTD or IR alarm" error when I turned on the HWS SLED chassis (same rack). Turned it back on with no issue (keyed off/on, gate button). We've seen similar behavior before. An upgrade is planned for these CO2 chassis as per FRS6639.

I found the LVEA, clean receiving entrance, and EE bay lights still on; I turned them off, but they would have been on during observing since yesterday. Tagging DetChar.

Images attached to this report
H1 ISC (SEI)
jim.warner@LIGO.ORG - posted 11:17, Wednesday 11 September 2024 (80042)
Another attempt at redistributing IMC gains, changes to IMC_LOCK & ISC_LOCK

There was an earthquake rumbling through the site from Papua New Guinea during commissioning this morning, so Sheila suggested we try the IMC gain redistribution again, as we got several notifications of IMC splitmon saturations. To do this, we commented out the IMC power adjust decorator in the IMC_LOCK guardian, unfortunately in the wrong state, so we again got bit by IMC_LOCK readjusting the IMC fast gain while we were trying to change the IMC fast gain and the IN1 and IN2 gains. After losing lock and catching the error, the decorator was commented out in the ISS_ON state of IMC_LOCK, and the gain redistribution was added to ISC_LOCK's LASER_NOISE_SUPPRESSION state. We're still waiting for the ground to calm down enough for the arms to stay locked, but we will try locking with the gain redistribution in place this time.

 

 

H1 ISC
sheila.dwyer@LIGO.ORG - posted 10:03, Wednesday 11 September 2024 (80039)
optical gain comparison after OFI repair

I realized that there isn't an explicit alog about this because Paul was asking, so I'm adding this trend to show that after the OFI repair our optical gain is better by about 2.1%, implying that the optical losses are reduced by about 4.3%. While the OM2 heater is now off and was on before the vent, that didn't have a large impact on optical gain the last time we changed it, in late June.
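A quick sanity check of the 2.1% → ~4.3% relation, assuming optical gain scales as the square root of the detected power (so a small fractional gain change roughly doubles in power/loss) — my assumption, stated for illustration:

```python
gain_improvement = 0.021                          # 2.1% optical gain increase
power_change = (1 + gain_improvement) ** 2 - 1    # gain ~ sqrt(power)
print(f"implied power/loss change: {power_change * 100:.1f}%")
```

This gives ~4.2%, consistent with the quoted 4.3% given that the 2.1% is itself rounded.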

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 09:32, Wednesday 11 September 2024 (80038)
Wed CP1 Fill

Wed Sep 11 08:12:51 2024 INFO: Fill completed in 12min 47secs

Images attached to this report
H1 CDS
david.barker@LIGO.ORG - posted 08:26, Wednesday 11 September 2024 - last comment - 09:23, Wednesday 11 September 2024(80035)
CDS Maintenance Summary: Tuesday 10th September 2024

WP12069 New CDS NAT router test

Jonathan:

The new CDS NAT router was tested successfully. It has been left in place as the production machine. Details in Jonathan's alog.

WP12067 Add PEM ADC to h1iscey

Fil, Erik, Dave:

The additional PEM ADC was added to h1iscey's IO Chassis. At the same time the failed A2 Adnaco backplane in the chassis was investigated.

After replacing backplanes and swapping fibers the problem was traced to the Adnaco adapter card in the front end computer, specifically the PCIe slot it was in (slot7, left-most as viewed from rear of computer).

h1iscey does not have any Contec binary IO cards, so it was possible to skip PCIe-slot7 and temporarily only use three Adnaco adapter cards.

The h1iopiscey model was modified to add the new ADC. A DAQ restart was required.

WP12076 Add VACSTAT EPICS channels to DAQ

Dave:

A new H1EPICS_VACSTAT.ini was added to the DAQ. EDC+DAQ restart required.

WP12075 Add RACCESS UID EPICS channels to DAQ

Dave:

H1EPICS_RACCESS.ini was expanded to add the UID channels. EDC+DAQ restart required.

WP12059 Add CDS HW Status EPICS channels to DAQ

Dave:

H1EPICS_CDSHWSTAT.ini was added to the DAQ. DAQ+EDC restart required.

DAQ Restart

Dave:

The DAQ was restarted 12:01 0-leg, 12:05 1-leg for the above changes. EDC was restarted at 12:02.

No problems with the restart, no second GDS restarts were required.

h1digivideo3 cameras briefly offline

Dave:

While tracing the digital video servers ethernet cabling I discovered the WorkStation-VLAN ethernet cable in eth0 of h1digivideo3 was not fully seated and its cameras went blue-screen.

After Erik alerted me to the issue, I pushed the ethernet cable in and it reseated with a satisfying snap, and the images started streaming again.

Comments related to this report
david.barker@LIGO.ORG - 09:21, Wednesday 11 September 2024 (80036)

h1iscey bad PCIe slot7 FRS32088

david.barker@LIGO.ORG - 09:23, Wednesday 11 September 2024 (80037)

Tue10Sep2024
LOC TIME HOSTNAME     MODEL/REBOOT
08:57:28 h1iscey      ***REBOOT*** <<< Add ADC and Debug A2 in IO Chassis
08:59:10 h1iscey      h1iopiscey  
10:01:35 h1iscey      ***REBOOT***
10:03:17 h1iscey      h1iopiscey  
10:07:24 h1iscey      ***REBOOT***
10:09:11 h1iscey      h1iopiscey  
10:09:24 h1iscey      h1pemey     
10:09:37 h1iscey      h1iscey     
10:29:13 h1iscey      ***REBOOT***
10:30:57 h1iscey      h1iopiscey  
10:31:10 h1iscey      h1pemey     
10:31:23 h1iscey      h1iscey     
10:31:36 h1iscey      h1caley     
10:31:49 h1iscey      h1alsey     


12:01:46 h1daqdc0     [DAQ] <<< 0-leg restart
12:01:58 h1daqfw0     [DAQ]
12:01:58 h1daqtw0     [DAQ]
12:01:59 h1daqnds0    [DAQ]
12:02:07 h1daqgds0    [DAQ]


12:02:58 h1susauxb123 h1edc[DAQ] <<< EDC restart


12:05:00 h1daqdc1     [DAQ] <<< 1-leg restart
12:05:12 h1daqfw1     [DAQ]
12:05:12 h1daqtw1     [DAQ]
12:05:13 h1daqnds1    [DAQ]
12:05:22 h1daqgds1    [DAQ]
 

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:14, Tuesday 10 September 2024 - last comment - 19:33, Wednesday 11 September 2024(80010)
PSAMS adjustments for SQZ-OMC mode matching

Vicky, Camilla

Repeated the 66946 PSAMS changes with SQZ-OMC scans, with a new method of dithering the PZT around the TEM02 mode and minimizing it. With this we improved the mode mismatch from 4% to 3%. It will be interesting to see if these settings are still better in full lock. Plots of the OMC scan attached, and the same plot zoomed in on the peaks attached.

Took OMC scans using template /sqz/h1/Templates/dtt/OMC_SCANS/Sept10_2024_PSAMS_OMC_scan_coldOM2.xml. Unlocked the OMC and set H1:OMC-PZT2_OFFSET to -50 (nominal is -17) before starting the scan.

Started with strategy 1: Lock OMC and Maximize TEM00 with PSAMS (alignment controlled with loops).
Changed to more sensitive strategy 2: Vicky put a dither on the OMC PZT around the TEM02 peak, awggui attached. Then minimize TEM02 peak with PSAMS (alignment controlled with loops).
 
ZM4/ZM5 PSAMS | TEM00  | TEM02   | Mismatch (% of TEM02) | Notes                                        | Ref on plot
------------- | ------ | ------- | --------------------- | -------------------------------------------- | -----------
5.5V/-0.8V    | 0.6362 | 0.02684 | 4.048%                | Starting                                     | 0 (pink)
4.0V/0.34V    | 0.6602 | 0.02332 | 3.412%                | Maximized TEM00                              | 1 (blue)
2.1V/0.2V     | 0.6611 | 0.02097 | 3.074%                | Minimized TEM02. This is 0V on the ZM4 PZT.  | 2 (green)
3.0V/0.85V    | 0.6609 | 0.02209 | 3.234%                | Minimized TEM02                              | 3 (orange)
4.0V/0.34V    | N/A    | N/A     | N/A                   | Minimized TEM02. Didn't scan; checked that we got similar results with the method of minimizing TEM02 rather than maximizing TEM00. | (no scan)
8.1V/-0.4V    | 0.6422 | 0.03432 | 5.073%                | Minimized TEM02. Chose similar ZM4 settings to what we found was good in full lock with cold OM2 in 76986. | 4 (cyan)
2.1V/-0.1V    | 0.6591 | 0.02162 | 3.176%                | Minimized TEM02                              | 5 (red)
2.1V/0.2V     | 0.6580 | 0.01954 | 2.884%                | Back to best ref 2 PSAMS values. LEAVING HERE. | 6 (brown)

Over 1V and under -1V on ZM5 is bad at most ZM4 strains (tested at 4.0V ZM4). For each step we adjusted ZM4 and then fine-adjusted ZM5.
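The mismatch column in the table follows from the peak heights as TEM02/(TEM00 + TEM02); a one-function sketch reproducing the first and last rows:

```python
def mode_mismatch_pct(tem00, tem02):
    """Mismatch as the TEM02 fraction of the total (TEM00 + TEM02) peak power."""
    return 100 * tem02 / (tem00 + tem02)

# First row of the table: 5.5V/-0.8V PSAMS
print(f"{mode_mismatch_pct(0.6362, 0.02684):.3f}%")
# Final (best) row: 2.1V/0.2V PSAMS
print(f"{mode_mismatch_pct(0.6580, 0.01954):.3f}%")
```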

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 14:58, Tuesday 10 September 2024 (80024)ISC, SQZ

Note OM2 is cold currently for this measurement (and since the vent it seems).

Images attached to this comment
camilla.compton@LIGO.ORG - 11:55, Wednesday 11 September 2024 (80045)

With the original PSAMS (5.5V/-0.8V) we had:

  • OMC locked 1-2 min, starting from 1410019031.
    • TRANS, DCPD_SUM_OUT = 0.65.
    • REFL, OMC-REFL_A_LF_OUT between (0.06,0.18)
  • OMC unlocked 30s, starting from 1410019336.
    • TRANS DCPD_SUM_OUT = 0.009.
    • OMC-REFL_A_LF_OUT between (0.86,0.99)

At the better PSAMS settings (2.1V/0.2V) plot attached:

  • OMC locked 1-2 min, starting from 1410027768.
    • TRANS DCPD_SUM_OUT = 0.67
    • OMC-REFL_A_LF_OUT between (0.036, 0.167)
  • OMC unlocked 1 min, starting from 1410028000.
    • TRANS DCPD_SUM_OUT = 0.0087
    • OMC-REFL_A_LF_OUT between (1.008, 0.87)
  • DARK OMC 1 min, starting from 1410028321.
    • TRANS DCPD_SUM_OUT = 0.0084
    • OMC-REFL_A_LF_OUT between (-0.02, -0.01)
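From the numbers above one can estimate the fraction of power coupled into the OMC as 1 - (REFL_locked - dark)/(REFL_unlocked - dark). This is my own rough estimate, using the midpoints of the quoted REFL ranges (a crude choice, since REFL also carries junk light and higher-order modes):

```python
def coupled_fraction(refl_locked, refl_unlocked, refl_dark=0.0):
    """Fraction of incident power coupled into the OMC, from REFL levels."""
    return 1 - (refl_locked - refl_dark) / (refl_unlocked - refl_dark)

# Better PSAMS settings (2.1V/0.2V), midpoints of the quoted ranges:
locked = (0.036 + 0.167) / 2      # OMC-REFL_A_LF_OUT, OMC locked
unlocked = (0.87 + 1.008) / 2     # OMC-REFL_A_LF_OUT, OMC unlocked
dark = (-0.02 + -0.01) / 2        # dark offset

print(f"coupled fraction ~ {coupled_fraction(locked, unlocked, dark):.2f}")
```

This lands near ~0.88, i.e. most of the light is coupled, with the residual REFL containing both the ~3% mode mismatch and any uncoupled sidebands/junk light.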
Images attached to this comment