The login server for ssh access to the DTS has been changed from x1login to x1dtslogin. x1dtslogin authenticates users with their ligo.org credentials. To access the DTS, users still need an account created by the local system administrator.
The default NDS for the control room computers has been changed from h1nds0 to h1nds1. This takes effect with any new shell opened since 11:15 PST today.
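For scripts that talk to the NDS directly, here is a minimal sketch using the nds2 Python client; the port number, GPS start time and channel name are placeholders of mine for illustration, not part of the change:

import nds2

# Connect to the new default server (the port shown is an assumption;
# use whatever your existing NDS setting specifies).
conn = nds2.connection('h1nds1', 8088)

# Example fetch: 60 seconds of one channel. The GPS start time and
# channel name below are placeholders only.
gps_start = 1013000000
buffers = conn.fetch(gps_start, gps_start + 60, ['H1:PSL-OSC_XCHILALARM'])
print(buffers[0])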
Lost about a PSI over the weekend and have been unable to find any leaks since Tuesday. I have again shut all the valves I can, except those leading to BSC3, to help locate the problem area.
WP 3727, Alex, Dave, Jim
At 10:14 we shut down h1dc0 and replaced it with the new h1dc1. Jim activated a second 10GbE port on the H1 FE-DAQ switch and ran a second 10G FMM line to the DAQ rack. The new h1dc1 has three 10GbE PCIe cards (compared with only two in h1dc0). The extra card allows the front ends to split their DAQ data over the two input cards. Since each card has 16 MX end points and LHO will have a total of 31 front ends, we can ensure that no two front ends share an end point and interfere with each other's DAQ data on reboot.
The 10GbE PCIe card arrangement, as seen from the rear of h1dc1, is:
| top | IN1 | FMM-013-0001 |
| mid | OUT | FMM-007-0006 |
| lower | IN2 | FMM-013-0002 |
The PCI bus mapping put the output card (connected to the Fujitsu switch) between the input cards.
The MX startup script for the front ends was modified so that each front end sends its data to card IN1 or IN2 depending upon its location in the rtsystab file. We restarted the MX streams on all of the front ends (this did not require any model restarts), and all DAQ data was back by 11:34.
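The bookkeeping is straightforward: two input cards times 16 MX end points gives 32 slots for our 31 front ends, so every front end can have its own end point. The Python fragment below is only my illustration of that kind of assignment (the helper name and front end list are made up; the real logic lives in the MX startup script):

# Illustration only: split the front ends between the two input cards so
# that no two front ends share an MX end point. Order follows rtsystab.
ENDPOINTS_PER_CARD = 16

def assign_endpoints(frontends):
    """Map each front end to (card, end point) by its order in rtsystab."""
    assignments = {}
    for index, fe in enumerate(frontends):
        card = 'IN1' if index < ENDPOINTS_PER_CARD else 'IN2'
        assignments[fe] = (card, index % ENDPOINTS_PER_CARD)
    return assignments

# Hypothetical ordering (names for illustration; 31 front ends in total at LHO):
frontends = ['h1pemmx', 'h1susauxb6', 'h1susauxb123']
for fe, (card, ep) in assign_endpoints(frontends).items():
    print(f'{fe}: card {card}, end point {ep}')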
To test the new system, we power cycled various non-Dolphin, non-critical front ends: h1pemmx, h1susauxb6 and h1susauxb123. During each reboot, no DAQ data on any other system went bad.
Alex then upgraded the h1fw1/h1nds1 system and installed the following
Tests between the unmodified h1fw0 and the new h1fw1 systems show a significant speed-up in trend data retrieval. The speed-up in minute-trend writing quickly dissipated.
Attached are plots of dust counts > 0.3 microns and > 0.5 microns, in particles per cubic foot, requested from 5 PM Feb. 21 to 5 PM Feb. 22. Also attached are plots of the modes to show when they were running/acquiring data. Data was taken from h1nds0; 1440.0 minutes of trend displayed.
System is still sealed and I have only a few connections left to snoop. The leak is very small and may be a non-issue. I capped some drain and vent points this afternoon in case it is the valves that are passing some gas. There are also the accumulator bladders...maybe I can check for those...
In the famous words of a Mr. Kissel: "BOOM!"
Today's installation of PRM went extremely smoothly, as if we've had practice at this. No issues with any hardware or the placement of the suspension using the install arm and cookie cutter template. PRM is on its spacer and all dog clamps are securely holding it to the ISI table.
Filiberto is going to finish running the external PRM/PR3 cables from the D3 port to the satellite boxes, and we'll start plugging in the suspensions in the afternoon.
Pictures can be found on ResourceSpace near the one attached.
https://ligoimages.mit.edu/?c=1280
The chamber cleaning crew got the first half of the chamber floor brushed this morning. The surface is very rough and is ruining brushes in short order, so work is a little slow. The second half of the floor should be completed by the end of the work day. We'll start the vacuum, wipe-down, vacuum sequence on Monday.
Bubba, Mark L. and Eddie determined ideal placement of the four sections of the work platform yesterday. Today, the crew carefully craned the sections into place (mostly very tight spaces) around the chamber: this involved removing some cable trays and accelerometer plates so sections would fit. (Thank you, Richard and Hugh.) The sections were coupled together and then braces were installed to maintain some space between the work platform and HEPI.
The cleanrooms in the TMS lab space have been assembled for a couple of weeks. Today, the technical cleaning crew got a chance to begin deep cleaning in preparation for staging optics tables etc.
The 10^-4 torr*L/sec leak on BSC5's dome outer O-ring has been located and marked with white tape for future reference -> This leak will have to be addressed at some point in order to pump this volume with the nominal ion pump, i.e., BSC5's dome will need to come off -> This looks to be the only appreciable leak for this annulus volume -> Also, the venting of this annulus space did not affect the 1 x 10^-7 torr pressure indicated on PT510B.

I improved the 2 x 10^-6 torr*L/sec leak of the IP12 gate valve bonnet via - "wait for it!" - TIGHTENING the bonnet screws (it's what I do - this required turning down the o.d. of a 13 mm socket to a very thin wall thickness) to a more tolerable 1.5 x 10^-9 torr*L/sec. Some of these screws were found to be very loose - others, not so much. This make/model of valve, used to isolate the 2500 L/sec ion pumps, has noticeable as-built QC issues. Among these is the use of hex-head cap screws for the bonnet joint instead of 12-point cap screws, resulting in damning interferences which prevent the use of unmodified tools. I suspect that the as-found loose screws were loose due to these tool interferences. I'll scrutinize these joints (site-wide) as opportunities present themselves - chances are others are leaking for the same reason.
Excellent - good to hear that bonnet leak is fixed (or at least 1000 times smaller!)
john
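A quick check of the factor quoted above, using the numbers from the original entry: (2 x 10^-6 torr*L/sec) / (1.5 x 10^-9 torr*L/sec) ≈ 1.3 x 10^3, so "1000 times smaller" is, if anything, an understatement.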
The original plots, for the India HSTS I1-MC2 (aLOG #5550), were generated with the “meas.sensCalib” switch set to “true”. This switch should be set to “false” when plotting test data from the X1 Triples Test Stand. Attached are the power spectra plots for I1-MC2 with the switch set correctly.
Re: PSL chiller alarm mentioned below -- After the morning meeting today Richard added 1.33 cups of water to the PSL chiller. This removed the alarm condition.
H1:PSL-OSC_XCHILALARM went off near 3:56 AM and remains red after acknowledgement. The channel alarms when the temp exceeds 70F, but the value that's listed in the log file is 1.
ITMY OSEM5 and OSEM6 remain yellow as do SUSH34, SUSB123 and SUSB6.
Cheryl, Chris S. and I went down to Y End TMS lab to get the optic into a cake tin in prep for shipment to KAGRA (eventually). The process went smoothly with no magnet disturbance. We took pix of the optic (which was relatively clean) and the SUS cage. The exposed height adapter plate surface contained what Cheryl called a "forest of fibers": see photo below. Lots of particulate on other horizontal surfaces. The cake tin is now in the OSB Optics Lab. We removed the SUS cage and the dutch oven from the TMS lab.
[Kyohei W. and Kiwamu I.]
There was a concern that the actuation coefficient of the VCO --- which actuates the PSL laser frequency via the refcav for IMC locking --- was calibrated wrong, and hence that we might have been plotting a more-or-less inaccurate noise budget. Indeed, the coefficient was underestimated by approximately 34% according to a measurement we did today. Giacomo helped us out by confirming that the coefficient had indeed been miscalibrated [1]. Of course this brings our past noise budget of IMC_F slightly higher, but certainly this is not a major issue.
The measured VCO coefficient is 268302 Hz/V (see the attached figure made by Kyohei). Please use this number hereafter.

The good news is that our measured value agrees, within 9%, with that of Livingston, which is about 246 kHz [2]. Our measurement was done by applying various DC offsets at the input of the VCO box (2-pin LEMO) and looking at the frequency of one of the two outputs with a frequency counter. The input of the VCO box is designed to be differential, hence the x-axis of the plot is Vpos - Vneg.
[1] LHO alog #5534 Comments on "IMC noise"
[2] LLO alog #4645 "low range VCO installed"
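For reference, a sketch of the kind of analysis behind the attached plot: fit a line to the frequency counter readings versus the applied differential offset and take the slope as the actuation coefficient. The data values below are synthetic placeholders, not our measurement:

import numpy as np

# Placeholder data for illustration; substitute the actual frequency
# counter readings and applied offsets.
v_diff = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # Vpos - Vneg [V]
freq = 80e6 + 268e3 * v_diff                      # synthetic readings [Hz]

slope, _ = np.polyfit(v_diff, freq, 1)
print(f'VCO actuation coefficient: {slope:.0f} Hz/V')

# Agreement with the LLO value from [2]:
lho, llo = 268302.0, 246e3
print(f'Difference from LLO: {abs(lho - llo) / llo * 100:.1f} %')   # ~9 %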
Here is the data which I forgot to attach.
Attached are plots of dust counts > 0.3 microns and > 0.5 microns, in particles per cubic foot, requested from 5 PM Feb. 20 to 5 PM Feb. 21. Also attached are plots of the modes to show when they were running/acquiring data. Data was taken from h1nds0; 1440.0 minutes of trend displayed.
The new h1dc1 machine initially had only 12GB of RAM, compared with h1dc0's 24GB. Not wanting to touch h1dc0, since that is our fallback machine if h1dc1 has issues, we removed 3 x 8GB DIMMs from h0broadcast0 and reduced its data cache so it could work with 12GB (6 x 2GB) of RAM. We were then able to make h1dc1 a 24GB machine. We have asked John Z to check that the broadcaster is still functioning OK with the reduced memory.