BOSEMS:
- cabling issue found this AM
- BOSEMS installed and tested
- two were replaced
- TMSX now has 6 good BOSEMS
- BOSEM channel LEFT is reading a known good BOSEM (20K+ counts on any other channel) as 13K counts

Filiberto is at EX tracking down the issue. Last info from him was that all cables have checked out OK, and he's looking at the electronics.

PREP FOR MOVE:
- telescope safety support beams attached to the top assembly

STILL TO DO - before transfer functions:
- LEFT BOSEM channel fix
- center BOSEMS
- remove protective wipes from the table; note some have visible particulate from installation activities
- check cables are not touching wires
- lower genie to allow TMS to hang freely
I have to leave for a dentist appt. but expect to resume leak checking GV6 when I return (1530?).

The Corner vacuum equipment is being pumped only via the YBM turbo pump, which is being backed by a leak detector and CP2. IP1, IP2, IP5 and IP6 are valved out. This is a temporary pumping arrangement. Also, the instrument air used to control CP2 is currently not enabled, which is also a temporary state.
With expected pending approval of the WHAM6 ISI Optical Table position, we went in to fine-tune the elevation and level of the HAM6 Optical Table. We had also done a rotation last week as we assessed the horizontals, so the vertical needed to be revisited. Using LVEA vertical monument 802, the Optical Table was set to -220.1 ±0.1 mm Gz per D1100398 & E1000403. Three pages of survey, dial indicator, and load cell readings are attached.
I moved the vacuum VME crate for the X beam manifold (h0velx) over to the new boot server today, as specified in WP 4090. This move did not go as smoothly as the one for LY. Two issues:

1) The paths in the startup file pointed to /src - all the other files have referenced /cvs instead. I'm not sure why this one is different; it may be that the startup file for this crate had been updated with the 'new' fs scheme (/src) at some point, but the others never were (/cvs). In any event, I need to re-scrub all the startup files to verify they point to the correct place on vxboot (/cvs in this case).

2) The 2210 VME card failed on reboot - this is the relay/BIO board. The symptom was that the fail light would not go out once the crate was booted. Re-seating the card did not work, so the card was swapped out with a spare. One symptom of this failure was that the CP fill valve went to 100% open; it's not clear if this is the design intent or a side effect of how the card failed.

In any event, h0velx is now running as expected from the new server.
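For the re-scrub, something like the following sketch could flag any startup files that still reference /src; the vxboot mount point and the "startup*" glob here are assumptions for illustration, not the actual layout.

    #!/usr/bin/env python
    # Sketch: flag VME startup files that still reference /src instead of /cvs.
    # The vxboot root and the "startup*" glob are assumptions, not the real layout.
    import glob
    import os

    VXBOOT_ROOT = "/path/to/vxboot"   # hypothetical mount point of the boot server

    for path in glob.glob(os.path.join(VXBOOT_ROOT, "*", "startup*")):
        with open(path) as f:
            for num, line in enumerate(f, start=1):
                if "/src" in line:
                    print("%s:%d: %s" % (path, num, line.rstrip()))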
This work was done last Wednesday. An opportunity arose, so we (Jim & Mitchell) uncovered the upper ISI and located the 600 lbs of mass on the Keel. The location of the mass is per D1002191-v2. This will have to be removed for Cartridge installation and then replaced in chamber, but meanwhile the ISI is ready for all auxiliary payloads, balancing and integrated testing. The upper ISI was re-covered with C-3.
The LLO X-arm Pcal Periscope is wrapped outside the H2 PSL, waiting for the crate from LLO; we expect the crate to arrive this week.
The structure is on a four-wheel dolly and attached to a hoist.
Today we will start the assembly of LHO Y-arm inside the H2 PSL.
pablo
I added, compiled and installed the green arm initial alignment code as part of h1asc.mdl. We are at revision 5464. The installed version has a single ON/OFF switch to turn on the loops and integrators, as well as ramp the dither drives in and out.

Still TBD:
- IPCs to ETMs, TMSs and end station PZT actuators.
- MEDM screens for the setup.
For use in Robert Schofield's magnetic coupling measurements and general optical lever optimization, we were playing around with the whitening chassis to optimize the QPD signals going to the ADC. The optical lever QPD electronics have the ability to engage up to 3 levels of 10:1 whitening, all with a pole at 10 Hz and a zero at 1 Hz. These filters are turned on via a daughter board connected to the whitening chassis (D1001530), which is exactly the same as ISC's whitening chassis.

Figure 1 shows the spectra of the three levels of whitening to make sure the electronics work the way we think they do; note that we only go up to two levels of whitening, since engaging all three levels completely saturates the ADC. At two levels of whitening, we see that the higher frequency (~50 Hz) amplitude is comparable to the low frequency (~0.6 Hz) amplitude, but not yet dominating the RMS. However, after monitoring the optical lever for a few hours with two levels of whitening engaged, we saw that the ADCs railed briefly about once an hour - not good. So in the end, one stage of whitening was optimal, and of course a de-whitening filter was also implemented in digital land.

Figure 2 shows the whitened + de-whitened signal and compares it against the unfiltered signal to make sure we didn't screw up the digital de-whitening portion. Figures 3 and 4 compare the dark noise of one stage of whitening engaged vs. no stage of whitening engaged. There is about an order of magnitude difference in the signal-to-noise ratio between the two setups at >10 Hz, as we expected.
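As a rough illustration of what one 10:1 whitening stage does (zero at 1 Hz, pole at 10 Hz) and how the digital de-whitening undoes it, here is a sketch using scipy. The frequency range and the 2048 Hz sample rate are arbitrary assumptions for the example, not the actual front-end values, and this is not the real filter implementation.

    import numpy as np
    from scipy import signal

    # One 10:1 whitening stage: zero at 1 Hz, pole at 10 Hz
    # (unity gain at DC, gain of 10 above the pole).
    f = np.logspace(-1, 2, 500)                 # 0.1 Hz - 100 Hz (assumed range)
    z = [-2 * np.pi * 1.0]                      # zero at 1 Hz
    p = [-2 * np.pi * 10.0]                     # pole at 10 Hz
    k = 10.0
    _, h_white = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)

    # Digital de-whitening filter: the inverse (zero at 10 Hz, pole at 1 Hz),
    # discretized at an assumed 2048 Hz sample rate via the bilinear transform.
    fs = 2048.0
    zd, pd, kd = signal.bilinear_zpk([-2 * np.pi * 10.0], [-2 * np.pi * 1.0], 1.0 / k, fs)
    sos = signal.zpk2sos(zd, pd, kd)
    _, h_dewhite = signal.sosfreqz(sos, worN=f, fs=fs)

    # Whitening followed by de-whitening should be flat (magnitude ~1) in band.
    print(np.abs(h_white * h_dewhite)[:5])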
The computer in the EX TMS was swapped out for one that can run MEDM screens. BOSEM signals were displayed on the screen, but did not change with BOSEM flag alignment. We swapped cables at the TMS to verify that the cabling wasn't the problem. Jeff spent a fair amount of time on the phone with Dave looking into the software situation; the outcome was that it appears to be a hardware issue. Dave did some further diagnostics, but it's late enough in the day that a fix will come tomorrow.
Christina, Patrick, Sheila

We transitioned the PSL to commissioning mode at 2:45; it will stay in commissioning mode for the rest of the week.
EY prep work by Apollo continues
EY conduit work between chambers for instruments (Richard M & crew)
PCal Crew working in H2 PSL Room (Colin/Pablo)
cdsfs0 file server rebooted for RAID disk failure from the weekend
TMS BOSEM work (Cheryl/Jeff B)
Installing new TMS EX iMac (Cyrus)
Vacuum gauge pair for BSC10 disconnected for conduit work (Ken D.)
Sunday between 8am and 9am the cdsfs0 disk system went into read-only mode. I power cycled cdsfs0 this morning at 09:45 to fix the problem. We have a replacement raid controller to try, but taking cdsfs0 down for the swap out will be very intrusive for all CDS activities and so we have not scheduled this yet.
Some workstations may come back by themselves, others may need to be rebooted. I have restored most of the control room workstations.
Logic:

A) Watch H1:PSL-FSS_AUTOLOCK_STATE
   if = 4, set
      H1:IMC-REFL_SERVO_FASTEN = 1 and H1:IMC-REFL_SERVO_COMCOMP = 1
   otherwise set
      H1:IMC-REFL_SERVO_FASTEN = 0 and H1:IMC-REFL_SERVO_COMCOMP = 0

B) Watch H1:IMC-MC2_TRANS_SUM_INMON
   if > threshold, set
      ezcawrite H1:IMC-REFL_SERVO_IN1GAIN 0
      ezcawrite H1:IMC-REFL_SERVO_COMBOOST 1
      ezcawrite H1:SUS-MC2_M2_LOCK_L_GAIN 0.2
      ezcaswitch H1:SUS-MC2_M2_LOCK_L FM3 ON
      ezcawrite H1:SUS-MC2_M1_LOCK_L_GAIN 1
      ezcaswitch H1:SUS-MC2_M1_LOCK_L FM1 ON
   otherwise set
      echo unlocked $MCval $thresh
      ezcawrite H1:IMC-REFL_SERVO_IN1GAIN -10
      ezcawrite H1:IMC-REFL_SERVO_COMBOOST 0
      ezcawrite H1:SUS-MC2_M2_LOCK_L_GAIN 0
      ezcaswitch H1:SUS-MC2_M2_LOCK_L FM3 OFF
      ezcawrite H1:SUS-MC2_M1_LOCK_L_GAIN 0
      ezcaswitch H1:SUS-MC2_M1_LOCK_L FM1 OFF

I used THRESH_ON=1000, THRESH_OFF=900.

Note that EtherCAT + one command link to trigger the MC2 filters and gains could easily take care of this. For now, there are two scripts that do this in ioo/h1/scripts/imc/sballmer: FSSwatch and MClockwatch.

Performance: The typical time from FSS in state 4 to full low-noise IMC lock is about 1 to 2 sec, plus the 3 sec ramp time in the MC2 filters. The longest I have seen is about 7 sec.

PS: The FSS definitely has to be sped up.
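For illustration, the part B watch logic (with its ON/OFF hysteresis) could be approximated in Python with pyepics as in the sketch below. This is not the actual FSSwatch/MClockwatch code; the polling interval is an assumption, and the filter-module (FM3/FM1) switching is omitted since it goes through the filter switch registers rather than a simple value write.

    #!/usr/bin/env python
    # Rough sketch of the part B hysteresis logic; NOT the actual MClockwatch script.
    import time
    from epics import caget, caput

    THRESH_ON, THRESH_OFF = 1000, 900
    locked = False

    while True:
        mc_trans = caget("H1:IMC-MC2_TRANS_SUM_INMON")
        if not locked and mc_trans > THRESH_ON:
            # IMC power has built up: go to low-noise settings and engage MC2 paths
            caput("H1:IMC-REFL_SERVO_IN1GAIN", 0)
            caput("H1:IMC-REFL_SERVO_COMBOOST", 1)
            caput("H1:SUS-MC2_M2_LOCK_L_GAIN", 0.2)
            caput("H1:SUS-MC2_M1_LOCK_L_GAIN", 1)
            locked = True
        elif locked and mc_trans < THRESH_OFF:
            # Lock lost: note it and restore acquisition settings
            print("unlocked", mc_trans, THRESH_OFF)
            caput("H1:IMC-REFL_SERVO_IN1GAIN", -10)
            caput("H1:IMC-REFL_SERVO_COMBOOST", 0)
            caput("H1:SUS-MC2_M2_LOCK_L_GAIN", 0)
            caput("H1:SUS-MC2_M1_LOCK_L_GAIN", 0)
            locked = False
        time.sleep(0.1)   # polling interval is an assumption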
The last burt backup was at /2013/08/18/08:00. I also noticed a lot of other things (like ezcaswitch, ezcawrite or foton) stopped working on some ops machines. I suspect the whole issue is some full or improperly mounted disk.
I added a triggering feature similar to what the LSC has to the h1ascimc model - except that I kept it simple by defining only one trigger and one FM trigger and mask for all DoFs.

Relevant channels:

Setting the TRIGGER threshold for turning the WFS feedback ON and OFF:
H1:IMC-TRIG_THRESH_ON
H1:IMC-TRIG_THRESH_OFF
Corresponding monitor channel:
H1:IMC-TRIG_MON

Setting the TRIGGER threshold for turning DoF filters ON and OFF:
H1:IMC-DOF_FM_TRIG_THRESH_ON
H1:IMC-DOF_FM_TRIG_THRESH_OFF
Corresponding monitor channels:
H1:IMC-DOF_FM_TRIG_MON
H1:IMC-DOF_FM_TRIG_MON_WORD
Corresponding MASK channels:
H1:IMC-DOF_MASK_FMx
H1:IMC-DOF_MASK_MON

I set both up to trigger at the same level as the LSC (ON=1000, OFF=900), and triggered the integrators in FM2. This makes the ASCIMC WFS completely autonomous - no scripts required to start them up. The MEDM screen for setting up the thresholds is still TBD.
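Until that MEDM screen exists, the thresholds can presumably be set from the command line; a minimal pyepics sketch using the values quoted above (pyepics rather than ezcawrite is just a choice for the example):

    from epics import caput

    # WFS feedback trigger thresholds (same hysteresis levels as the LSC)
    caput("H1:IMC-TRIG_THRESH_ON", 1000)
    caput("H1:IMC-TRIG_THRESH_OFF", 900)

    # DoF filter-module trigger thresholds
    caput("H1:IMC-DOF_FM_TRIG_THRESH_ON", 1000)
    caput("H1:IMC-DOF_FM_TRIG_THRESH_OFF", 900)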
[Jamie, Kiwamu, Mark, Stefan]
On Friday, we were finally able to get the "new" Guardian supervisors running and supervising various components of the input mode cleaner. Guardian "supervisors" are the main guardian processes that run the guardian code for a particular domain/component/subsystem. In this case, we had three components (or "systems") running, delineated by channel access prefixes:
States for each of the systems were defined, and we were able to run the supervisors through their paces a bit to confirm that the behavior was more-or-less as expected. In fact, I would say everything really worked quite well, surpassing my expectations. Unfortunately we didn't quite get to the point where I felt comfortable leaving the supervisors running on their own, so I shut them down before I left on Friday evening.
Over the next couple of days I'll attempt to build out some of the infrastructure to start/stop/restart the supervisors at will, to ease their commissioning. I'll also work on documentation in preparation for the **Guardian review, Monday, August 26th, 12:00 PDT**.
A guardian "system" consists of states connected together into a directed state graph. It is described in a "system description directory", which is passed to a guardian supervisor process as its primary argument. When the supervisor is launched it instantiates its own EPICS channel server, which is used for accepting state requests and reporting status.
When the system is in a given system state, the supervisor is executing the run script for that state. If the state run script returns, the supervisor transitions to the next state in the sequence to reach the requested state. Once the requested state is reached, the system remains in that state until a new request is issued. If the state code exits with a "return target", the supervisor will transition to the target state. If the system is being run in "un-managed" mode, it will attempt to re-reach the original requested state on its own; this is known as "recovery". Otherwise, if the system is run in "managed" mode, the request will be reset to the recovery target and the system will wait for instructions from its manager.
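To make the structure concrete, here is a very stripped-down sketch of what a system description and the supervisor's path finding might look like. This is illustrative Python only, not the actual Guardian API; all the names here (State, EDGES, find_path, the toy readback) are assumptions.

    # Illustrative guardian-style sketch; NOT the real Guardian API.
    from collections import deque

    def cavity_power_above_threshold():
        # Hypothetical stand-in for an EPICS readback (e.g. MC2 trans power)
        return True

    class State(object):
        """One node in the directed state graph; run() plays the role of the state run script."""
        def run(self):
            return True          # True -> done, keep stepping toward the request

    class DOWN(State):
        def run(self):
            return True

    class ACQUIRING(State):
        def run(self):
            if not cavity_power_above_threshold():
                return "DOWN"    # a "return target": jump there, then recover
            return True

    class LOCKED(State):
        def run(self):
            if not cavity_power_above_threshold():
                return "DOWN"
            return None          # None -> not done; stay here and keep running

    # The directed state graph: which transitions the supervisor may take.
    EDGES = [("DOWN", "ACQUIRING"), ("ACQUIRING", "LOCKED"), ("LOCKED", "DOWN")]

    def find_path(edges, current, requested):
        """Breadth-first search for the state sequence from current to requested."""
        seen, queue = {current}, deque([[current]])
        while queue:
            path = queue.popleft()
            if path[-1] == requested:
                return path
            for a, b in edges:
                if a == path[-1] and b not in seen:
                    seen.add(b)
                    queue.append(path + [b])
        return None

    print(find_path(EDGES, "DOWN", "LOCKED"))   # ['DOWN', 'ACQUIRING', 'LOCKED']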
The supervisor process itself is now functioning as a true finite state machine. In its primary run state the supervisor runs the state code for the current system state (sorry for the overloading of the term "state": the supervisor has "states" of operation of its state machine that are distinct from the "states" of the system it is controlling).
Once we finally got things running, the new supervisor finite state machine architecture was working really quite well, even better than I expected. The system responded very quickly. We could issue requests, after which the supervisor would immediately calculate the path to the new requested state and ratchet through the state sequence to get there. We could easily stop at intermediate states to check status, and then instruct the system to continue on its way.
For things to work, there are really two aspects: there's the guardian supervisor itself, and the system description and its state run code. Beyond the behavior of the supervisor itself (which seemed to be working quite well), we managed to get the actual guardian code for the systems under test in a good enough state such that the graphs were well constructed and the state transitions made sense. We found some bugs in the supervisor that I was able to fix immediately, and we worked on the actual system state code until the systems were behaving properly.
We ran the IMC and SUS-MC2 supervisors in a managed mode, where we were manually coordinating their states to lock the IMC. Once we were happy with their behavior we were even able to run the SYS-IMC manager which was coordinating the transitions of IMC and SUS-MC2 to automatically lock the IMC, and recover the system back to the locked state. I will try to post descriptions of the systems we had working, including the system graphs and state descriptions, in the next couple of days.
However, things weren't working quite well enough that I felt comfortable leaving it running. The supervisor would occasionally get into a hung state that I was not immediately able to diagnose, and which required restarting the supervisor. The SYS-IMC manager also seemed to miss some transitions, likely due to buggy programming of the SYS-IMC state code. There are also a lot of missing features, and infrastructural work is still needed.
I'll be posting further as more of this stuff gets in place.
BOSEM mounting plate was tapped dry by Jim and it's working great.
All TMS SUS cables were connected.
All BOSEMs were reasonably centered, but their height was not adjusted because MEDM was not working on cdsimac2.
Not all safety structures were installed, but the one that matters (the safety wire) was. There was a problem that was fixed by bending the wire (see another alog).
I'll be on vacation next week, but basically everything is there for SUS test. Corey/Cheryl/Deepak need to do the following:
In parallel, you need to work on assembling the remaining safety structures.
Another ugly workaround.
The mounting position of the TMS vertical safety wire is shifted by about half an inch from the center of the TMS table. I'm guessing that the mounting hole position for the wire on the safety support beam, type 01 (D1100264), is wrong.
I'm bending the safety wire, as it is more like a thin rod, so that it clears the ring on the TMS telescope.
The first picture shows how the safety wire is bent and the safety wire clamp tilted to allow the wire to clear the ring.
The second picture shows that a plumb bob, which is suspended from the center of the safety wire clamp, is pointing almost at the edge of the ring.
The third picture shows how I held the plumb bob.
I cannot open MEDM sitemap from the dock.
When I click the terminal icon in the dock, a terminal window opens immediately but it takes more than 5 minutes before bash prompt is shown.
When the bash prompt finally comes up, a simple ls command takes 3 seconds.
The sitemap command from bash tries to launch something, and it takes about a minute or two before "cannot access file: /opt/rtcds/lho/h1/medm/SITEMAP.adl" is displayed.