Oli, Ibrahim
Closes FAMIS 28369. Last checked in alog 78994. These checks had been skipped for a few weeks due to the vent (inability to lock).
Analyses were completed by Oli, with plots/alog by me. I have been told that the weirdness in ITMX's plot is known and has been looked into and/or is expected.
Closes FAMIS 28452. Last checked in alog 28451.
The plots attached show H1 coming back after the emergency vent. Other than that, trends look normal.
TITLE: 09/03 Day Shift: 1430-2330 UTC (0730-1630 PDT), all times posted in UTC
STATE of H1: Observing at 142Mpc
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Currently Observing and have been Locked for 4.5 hours. We were aiming to stay locked during today's maintenance period; we did lose lock, but only from turning off the BRSs at the end stations. Relocking took a while because of the ongoing maintenance, but not much intervention was needed and we were relocked much earlier than we would have been on a typical maintenance day.
LOG:
14:30 Locked for 2 hours and running PEM measurements
14:42 Observing
14:45 Out of Observing for In-Lock SUS Charge measurements
15:03 Into Observing
15:20 Lockloss 25s after changing SEI_CONF to NOBRSXY_WINDY
- Locklosses from LOCKING_ALS x5 due to motion
16:06 Started an initial alignment
- ISI ETMX/Y ST2 SC changed from SC_OFF to CONFIG_FIR - immediately easier to lock ALS
16:32 Initial alignment done, relocking
16:38 Lockloss from ACQUIRE_DRMI_1F
16:49 Lockloss from CARM_150_PICOMETERS
17:18 Lockloss from LOWNOISE_ASC
17:56 Lockloss from CARM_OFFSET_REDUCTION
18:05 Stayed in DRMI_LOCKED_CHECK_ASC and adjusted SRM
18:12 Lockloss from DARM_TO_RF
18:55 NOMINAL_LOW_NOISE
19:05 Put squeezing back in, Observing
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:58 | SAF | H1 | LHO | YES | LVEA is laser HAZARD | 18:24 |
14:43 | FAC | Karen | OpticsLab | n | Tech clean | 15:07 |
15:17 | SEI | Neil | LVEA | YES | Mapping seismic array | 16:12 |
15:26 | REC | Christina | Watertank | n | Taking stuff to recycling | 15:52 |
15:31 | FAC | Karen | EY | n | Tech clean | 16:29 |
15:32 | FAC | Kim | EX | n | Tech clean | 16:26 |
15:32 | FAC | Nellie | HAM Shack | n | Tech clean | 16:35 |
15:42 | FAC | Chris + pest control | LVEA | YES | Checking traps | 16:12 |
15:51 | CDS | Jonathan | MSR | n | Setting up backup router | 17:52 |
16:04 | FAC | Twin City Metals | Near Hi-Bay | N | Picking up metal recycling bins, will be noisy | 18:04
17:05 | FAC | Karen, Kim | LVEA | YES | Tech clean | 18:00 |
17:10 | VAC | Janos, Travis | LVEA | YES | Looking for parts | 17:25 |
17:16 | FAC | Chris | LVEA | YES | FAMIS checks | 17:38 |
17:25 | VAC | Janos, Travis | MY, MX | n | Grabbing parts | 18:42 |
17:38 | FAC | Chris | EX, EY, HAM Shack | n | FAMIS | 18:34 |
18:28 | ISC | Sheila | LVEA | YES | Align POP on ISCT1 | 18:43 |
TITLE: 09/03 Eve Shift: 2300-0800 UTC (1600-0100 PDT), all times posted in UTC
STATE of H1: Observing at 143Mpc
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 17mph Gusts, 15mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.10 μm/s
QUICK SUMMARY:
IFO is in NLN and has been OBSERVING since 19:05 UTC
Nothing else to report
On Aug 30 the office area air handler was shut down and turned back on at the following times (UTC):
Off: 19:33:00
On: 19:38:30
Off: 19:43:00
On: 19:55:00
Off: 00:28:00
On: 00:38:00
Sam, Robert, Genevieve
On Aug 21 we installed an accelerometer on the floor under HAM8 (H1:PEM-FCES_ACC_VEA_FLOOR_Z_DQ) and one on the beamtube (H1:PEM-FCES_ACC_BEAMTUBE_FCTUBE_X). These channels will be up and running once Fil has had a chance to run some cables. On Aug 27 we reconnected the three HAM8 accelerometers (H1:PEM-FCES_ACC_HAM8_FC2_Z, H1:PEM-FCES_ACC_HAM8_FC2_Y, H1:PEM-FCES_ACC_HAM8_FC2_X) whose cables were previously mixed up. All of these accelerometers are now connected to the correct channels.
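A quick way to confirm that the reconnected accelerometers are recording sensible data is to pull a short stretch of each channel; below is a minimal sketch using gwpy (the choice of gwpy and the one-minute time span are my own illustration, not part of the original entry):

# Spot-check the reconnected HAM8 accelerometer channels (names from the entry above).
# The time span is arbitrary; any period after the Aug 27 reconnection would do.
from gwpy.timeseries import TimeSeriesDict

channels = [
    'H1:PEM-FCES_ACC_HAM8_FC2_X',
    'H1:PEM-FCES_ACC_HAM8_FC2_Y',
    'H1:PEM-FCES_ACC_HAM8_FC2_Z',
]

data = TimeSeriesDict.get(channels, 'Aug 28 2024 00:00', 'Aug 28 2024 00:01')
for name, ts in data.items():
    # A live, correctly wired channel should show a non-zero RMS at the expected sample rate.
    print(name, ts.sample_rate, ts.rms(stride=60)[0])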
I've written a script, /opt/rtcds/userapps/release/sqz/h1/scripts/switch_nom_sqz_states.py, which changes the nominal states of the SQZ Guardian nodes for use when it's decided that H1 will observe without SQZ, a process adapted from the ObservationWithOrWithoutSqueezing wiki. This is intended to save operators time, making the switch to a no-SQZ configuration quick so that we get back to observing promptly.
The script first commits any uncommitted changes in the SQZ Guardians to svn with an "as found" message before the nominal states are changed. Once they are changed, the nodes are loaded and SQZ_MANAGER is requested to NO_SQUEEZING, which should ensure all nodes end up in their intended states.
This script also adds the 'syscssqz' model to the EXCLUDE_LIST in the DIAG_SDF node when going to the no-SQZ configuration. This accommodates any SDF diffs that may appear as a result of SQZ misbehaving and allows H1 to go to observing while ignoring any diffs in this model. More models to exclude can be added in the script as desired.
Since the pysvn package the script uses to commit to svn is a Debian package not available in conda, all conda environments must be deactivated for this script to work. Hence, when running this script, use the 'noconda' bash wrapper found in userscripts. Calling the script to switch to the configuration where H1 will observe without SQZ looks as follows:
noconda python switch_nom_sqz_states.py without
This script can also be used to go back to the configuration where H1 observes with SQZ: simply replace the 'without' argument with 'with' when running it, and the nominal SQZ Guardian states will be changed back to normal and the SQZ models will be removed from DIAG_SDF's exclude list.
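For orientation, here is a stripped-down sketch of the workflow the script follows. This is not the actual switch_nom_sqz_states.py; the node list, the set_nominal_state() helper, and the specific GRD LOAD/REQUEST channel writes are my own assumptions.

# Illustrative sketch only -- not the real switch_nom_sqz_states.py.
import pysvn              # Debian package; run under the 'noconda' wrapper
from epics import caput   # pyepics

SQZ_GUARDIAN_PATH = '/opt/rtcds/userapps/release/sqz/h1/guardian/'

# Hypothetical mapping of node -> nominal state for the no-SQZ configuration.
NO_SQZ_NOMINALS = {
    'SQZ_MANAGER': 'NO_SQUEEZING',
    'SQZ_OPO_LR': 'DOWN',
}

def set_nominal_state(node, state):
    # Placeholder: the real script edits the node's guardian code so that
    # its nominal state becomes `state`.
    pass

# 1. Commit the SQZ guardian code "as found" before changing anything.
pysvn.Client().checkin([SQZ_GUARDIAN_PATH], 'as found')

# 2. Change the nominal state of each node and reload it.
for node, state in NO_SQZ_NOMINALS.items():
    set_nominal_state(node, state)
    caput('H1:GRD-{}_LOAD'.format(node), 1)   # assumes the standard guardian LOAD channel

# 3. Request NO_SQUEEZING so every node settles into the new configuration.
caput('H1:GRD-SQZ_MANAGER_REQUEST', 'NO_SQUEEZING')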
(Repost from alog81706 to document updates to this script)
I've updated this script so that it now also changes the nominal state for the SQZ_ANG_ADJUST node (previously this node was left out because its nominal state is set by a parameter in sqzparams).
I also added several models to the sqz_models list in the script that will allow for SDF diffs when going to the no-SQZ configuration. Since the IFO will be observing in a non-nominal configuration, the intent is to not accept SDF diffs associated with this temporary change. The full list of models that will now be excluded can be found in the sqz_models list in the script.
Currently Observing at 147Mpc and have been Locked for 50 minutes. We did lose lock during maintenance because turning off the BRSs at the end stations made us unable to hold the lock, but with Jim's help we changed ISI_ETMX/Y_ST2_SC from SC_OFF to CONFIG_FIR, which helped us relock; we were relocked by 19:05 UTC. The squeezer was also fixed, so we are squeezing again now.
As per WP 12069 I have installed a new system in the daq-2 rack slot 39, above the existing router. I have hooked it up to the admin vlan and have ipmi enabled on it. This is all of the network connectivity that will be enabled today. I have installed a Solarflare 10G card into the system for its eventual connection to the core switch. I had to replace one power supply which had failed; I pulled the replacement power supply from a spare unit on the shelves.
Install notes:
* I am installing the same version of vyos on this router as is used on the current router.
* The test stand log from the last time I did this: https://alog.ligo-la.caltech.edu/TST/index.php?callRep=15381
* I created a bootable thumb drive from the install iso and booted into a live image mode.
* After booting to the thumb drive and logging in I issued the 'install image' command:
* select the local disk (sda)
* automatically partition to a 40GB size
* default settings otherwise
* After the install, issue the reboot command.
To transfer the config over, I formatted a usb thumb drive as an ext4 filesystem and mounted it to /mnt. I then entered config mode, issued a 'save config_3_sep_2024', and exited out. I copied the /config/config_3_sep_2024 file to /mnt and unmounted /mnt. After moving the drive to the new router, I became root, mounted /mnt, copied the new config file to /home/vyos/config_3_sep_2024, and changed its ownership to the vyos user. At this time I also updated the config to have the correct hw mac addresses for this box. Then as the vyos user I entered config mode and issued 'load /home/vyos/config_3_sep_2024', then 'commit', then 'save' to make the config persist. I rebooted to make sure the config was properly saved. It took me two tries, as the first time I only committed the change and did not save it.
I have installed an optic on the router and powered it off. I will provide documentation for the operator on how to switch over to this router if there is a failure. The basic procedure is:
* Power off the old router (rack 5, slot 37) using the power button on the front.
* Go to the back of the rack.
* Move the pink cable from the old router to the new one (the port is labeled GB1 on both systems).
* Move the fiber from the old router to the new one (there is presently only one optic in each, so there should be no confusion).
* Go back to the front of the rack and power on the new router (rack 5, slot 39) using the power button on the front.
LIGO-T2300212 has been updated to reflect these changes.
While Oli was relocking I went to ISCT1 and checked the centering on POPAIR B (motivated by the observation that POP18 is low compared to before the vent, as well as the DC light on this diode, alog 79663). The beam wasn't well centered on the diode, and I moved it to be more centered while the IFO was locked at 22W on PRM (this didn't make much difference in the powers at 22W). This did improve the powers on POP18 after power-up by about 15-30%, but it doesn't recover us to the power levels we had in the earlier part of O4b. It seems that the degradation started around May 15. We might want to go to the table to check for clipping upstream (today I only touched the mirror in front of POP AIR B), or think about whether we need to touch that picomotor.
FAMIS 21311
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Now that we are trying to stay locked on some maintenance days, I've added a "LIGHT_MAINTENANCE" state to the SEI_ENV guardian. This state turns off the end station stage 1 sensor correction and all of the CPS_DIFF controls. It doesn't include all of the normal environmental tests, but it will do the LARGE_EQ transition that we added a while ago if the peakmon channel goes above 10 micron/s. It won't go to the normal eq state.
I don't think this will work as well when the microseism is high, but Oli has been able to do an alignment and work on getting the IFO locked while people were cleaning the end stations and working in the high bay.
Recovery to normal operations is the same as for the normal maintenance state: select AUTOMATIC on SEI_ENV, then INIT.
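For illustration, a hedged sketch of what a guardian state along these lines could look like. This is not the actual SEI_ENV code; the helper functions, the peakmon channel name, and the jump-target state name are placeholders.

from guardian import GuardState
# Note: `ezca` is provided by the guardian runtime inside a node's module.

PEAKMON_THRESHOLD = 10  # micron/s, per the description above

def turn_off_endstation_st1_sensor_correction():
    # Placeholder: switch off stage 1 sensor correction at the end stations.
    pass

def turn_off_cps_diff():
    # Placeholder: turn off the CPS_DIFF controls.
    pass

class LIGHT_MAINTENANCE(GuardState):
    request = True

    def main(self):
        # Put the platforms in the reduced-isolation maintenance configuration.
        turn_off_endstation_st1_sensor_correction()
        turn_off_cps_diff()

    def run(self):
        # Skip the usual environmental tests, but still jump to the
        # large-earthquake handling if ground motion gets too big.
        if ezca['ISI-GND_EQ_PEAK_MON'] > PEAKMON_THRESHOLD:   # placeholder channel name
            return 'LARGE_EQ'                                 # placeholder state name
        return True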
FAMIS 21271
After tuning up the FSS path in the enclosure last week (alog79736), the signal on the RefCav trans TPD has held steady and the PMC looks like it came back at around the same levels for reflected and transmitted power. The incursion is easily seen on several environmental trends.
No other major events of note.
Sheila, Naoki, Daniel
Overnight, the OPO was scanning for about 5 hours, during which time the 6MHz demod was seeing flashes from the CLF reflected off the OPO. This morning, we still see DC light on the diode, but no RF power on the demod channel. There aren't any errors on the demod medm screen.
We did a manual check that we have nonlinear gain using the seed (we can't use the guardian because of the RF6 problem), and it seems that we do have NLG, so the OPO temperature is correct.
Daniel found that the CLF frequency was far off from normal (5MHz) because the boosts were on in the CLF common mode board. Turning these off solved the issue. We've added a check in the OPO guardian's PREP_LOCK_CLF state: if this frequency is more than 50kHz off, the state will not return True and will give a notification to check the common mode board.
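As a rough illustration of the kind of check described (not the actual OPO guardian code; the channel name, nominal frequency value, and notify hook are placeholders):

# Hedged sketch of a CLF-frequency sanity check, in the spirit of PREP_LOCK_CLF.
CLF_FREQ_NOMINAL_HZ = 3.125e6    # placeholder for the nominal CLF frequency
CLF_FREQ_TOLERANCE_HZ = 50e3     # flag anything more than 50 kHz away from nominal

def clf_frequency_ok(ezca, notify=print):
    """Return True only if the measured CLF frequency is close to nominal."""
    measured = ezca['SQZ-CLF_SERVO_FREQUENCY']   # placeholder channel name
    if abs(measured - CLF_FREQ_NOMINAL_HZ) > CLF_FREQ_TOLERANCE_HZ:
        notify('CLF frequency far from nominal -- check the CLF common mode board')
        return False
    return True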
Starting 17:43 Fri 30jul2024 the DTS environment monitoring channels went flatline (no invalid error, just unchanging values).
We caught this early this morning when Jonathan rebooted x1dtslogin and the DTS channels did not go white-invalid. When x1dtslogin came back, we restarted the DTS cdsioc0 systemd services (dts-tunnel, dts-env) and the channels are active again.
Opened FRS31994
Tue Sep 03 08:11:49 2024 INFO: Fill completed in 11min 45secs
Jordan confirmed a good fill curbside. The low TC temperatures outside of the fill over the weekend were tracked to an ice build-up at the end of the discharge line, which has now been cleared. A 1-week trend of TC-A is also attached.
Workstations and displays were updated and rebooted. This was an os packages update. Conda packages were not updated.
Lockloss page
No warning or notice of an earthquake was given; it was a sudden small spike in ground motion.
It was observed in Picket Fence.
USGS didn't post this right away, but it was an M 4.2, 210 km W of Bandon, Oregon, right off the coast.
I took ISC_LOCK to Initial Alignment after a lockloss at CHECK_MICH_FRINGES.
Relocking now.
Interesting.