I had to clear the message queues for the TestStand this morning because too many message queues had accumulated. The script to do so is 'rmq' in the scripts directory: '/opt/apps/scripts/'
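For reference, here is a minimal sketch of what a queue-clearing script like 'rmq' might do; I'm assuming the queues in question are System V IPC message queues, and I have not checked the actual script contents, so treat this as an illustration only:
#!/bin/bash
# remove every System V message queue visible to ipcs
# (only queues we have permission on will actually be deleted)
for id in $(ipcs -q | awk '$1 ~ /^0x/ {print $2}'); do
    ipcrm -q "$id"
done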
A quick check on DTT: I loaded a channel and hit Start with the default DTT settings (the channel was L1:QTS-Q1_MO_FACE1_IN1). After a few seconds of thinking, it gave a "Test timed-out" error at the bottom. Richard hinted at a timing problem...? Also, how can I load the "editable" medm? And, we are ready to calibrate channels; instructions on where to do this would be appreciated. BTW, all 12 Top Mass BOSEMs have been zeroed and are at 50% OLV, so signals and controls on the L1:QTS-Q1_MO and Q1_RO channels are live.
The attached document shows the transfer functions used to compute the LZMP. The numbers we have are good: 0.24 mm and 0.30 mm in the X and Y directions respectively. However, we need better coherence on the cross couplings in order to confirm these numbers. We are doing another measurement with many more averages (a 6-hour drive in each direction).
We have redone a low-frequency measurement (0.01 Hz to 0.1 Hz) with many more averages (222) in order to estimate the lower zero moment point offset (the distance between the actuators and the zero moment point of the rods) with more accuracy.
The data is stored in: 100721_LZMP_0p01to0p1Hz_M2M_test2_tfretrieve.mat
The script used to take the data is: TFcollect_100721_LZMP_0p01to0p1Hz_30avg.m (this script actually contains 220 averages, contrary to what its name says).
The results are presented in the attached plot. The offset estimate is lower than it was: yesterday's entry, based on a measurement using only 30 averages, gave X offset = 0.24 mm and Y offset = 0.30 mm, versus X offset = 0.14 mm and Y offset = 0.16 mm from this measurement. The numbers we get are very good, and maybe surprisingly low (that's less than 0.01" of offset). However, they are consistently low, so we can conclude that we are good.
(Eric A., Corey G.) Over the last few days, we've bypassed the original BNC feed-thrus on our Interface Board and are now using actual BNC feed-thrus. The issue is that the original BNC feed-thrus proved troublesome and sacrificed the performance of our Sensors. We've found that using the "real" feed-thrus (and also buying better/more expensive feed-thrus) has made our Sensors perform much better. Switching to the new feed-thrus has changed the DC value of our Sensors: at "zero", our Sensors were all under ~1000 counts, but with the new feed-thrus the zeroes have increased to around 10,000 counts. Because of this we had to re-gap the Sensors. This sort of change required us to re-do a few more Sensor Tests (according to document T1000329). New power spectra for the Displacement Sensors have been run. The measurement is located here:
/opt/svncommon/seisvn/seismic/HAM-ISI/X1/Data/unit_1/dtt/unit_1/dtt/20100720_all_disp_spectra.xml
Attached are plots of spectra with BOTH Sensor Mini-Racks ON, and with only one ON.
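If anyone wants to look at the measurement directly, it should open in DTT from the command line, e.g. (assuming diaggui is on the path of the workstation you are using):
diaggui /opt/svncommon/seisvn/seismic/HAM-ISI/X1/Data/unit_1/dtt/unit_1/dtt/20100720_all_disp_spectra.xml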
To complete the autoburt alog: how to restore from a snapshot file. Just 'cd' into the directory containing the snapshot files to be used for the restoration, then use the 'burtwb' command, which takes the snapshot file as an argument. For example, to restore g1isihamepics to the 14:01 snapshot of July 20th, 2010:
[controls@seiteststand 14:01]$ cd /data/autoburt/2010/07/20/14:01/
[controls@seiteststand 14:01]$ burtwb -f ./g1isihamepics2010072014:01.snap
I have created an autoburt for seiteststand. It works by scanning the target area for any autoBurt.req files. For each file it finds, it performs a burtrb and saves the snapshot file in the /data/autoburt area. The autoburt runs every hour at one minute past the hour (as a cronjob, as user controls, on seiteststand). If we need to run at a different rate, this is just a crontab change. Currently the script (/opt/rtcds/geo/g1/scripts/autoburt) uses the geo/g1 targets; when we move to X1 this will have to be edited. Here is an example of the data directories and snapshot file names:
[controls@seiteststand 14:01]$ pwd
/data/autoburt/2010/07/20/14:01
[controls@seiteststand 14:01]$ ls
g1isihamepics2010072014:01.snap  g1x01epics2010072014:01.snap
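For the record, the cron entry behind this looks something like the line below (a sketch; the schedule and script path are the ones quoted above, but the exact line in the controls crontab may differ slightly):
# controls crontab on seiteststand: run the autoburt at one minute past every hour
1 * * * * /opt/rtcds/geo/g1/scripts/autoburt
Changing the rate is just a matter of editing this line with 'crontab -e' as controls.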
Richard took another look at the EE QUAD test stand and fixed some current issues, such that signals are now back to expected values of ~32k counts. He also did some work on the satellite box for the AOSEMs, as he determined that jumpers were needed there.
The logbook will be unavailable from 10 to noon tomorrow. If you are working on an entry during that period, please save it as a draft before 10am Pacific time. The logbook will undergo some maintenance tomorrow morning to fix some layout issues with the quick search area for WebKit-based browsers (i.e. Safari and Chrome). It also appears that there is still an issue with log session times: simply extending the Shibboleth SP session time to a large value is insufficient. I will attempt another short-term fix tomorrow. Ultimately, it must move to a system that requires re-authentication when the Shibboleth session times out. However, that is not quite a quick fix, as we must ensure that no data can be lost when handling this. Likely the user will be given a notice that they must log in again, but their posts (if any were pending) will have been saved as drafts.
The logbook is going down for maintenance
Maintenance has finished on the aLOG. The following work was done:
* Updated CSS and HTML to better lay out the quick search fields on Safari, Chrome, Opera, and Internet Explorer.
* Updated the display logic to highlight search terms better. The search has always been case insensitive; now the highlighting is case insensitive and case preserving.
* Another fix to make sure the author name is always filled out. This is a temporary fix; a more correct fix will be needed.
* Updated the database to make sure author names matched usernames, ensuring that searching will work properly.
My script which generated the G1ECU.ini file had a bug and built a file with syntax errors in it. This is the reason for the "daqd respawning too fast" error: the daqd process was being autostarted by inittab and crashing out. I fixed the script (/opt/rtcds/geo/g1/scripts/create_edcu_ini.pl) and also changed it to generate a G0EDCU.ini file (not G1). At this point daqd started. However, the EDCU complained about another IOC at 10.12.0.13 which had all the channels 10.11.0.24 was serving. I found that eth1 was active with the 10.12.0.13 IP address, so the IOC was being found on both IP ports. I disabled eth1; we should watch that eth1 is not reinstated on the next reboot. Later, Corey was having nds issues, so I restarted daqd and nds together since they had a split start earlier in the day (see above).
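To watch for eth1 being reinstated after a reboot, something like the following should do (a sketch using standard tools; run the second command as root, and only if the interface did come back):
# check whether eth1 came back up with the 10.12.0.13 address
/sbin/ifconfig eth1
# if it did, take it down again
/sbin/ifconfig eth1 down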
(Richard, Dave, Corey) Came in this morning to find INVALID/WHITE medm screens on the iMac near our Test Stand rack in the Staging Bldg. I tried to ssh into the frontend to no avail. Since we were in this state and flying blind (diagnostics-wise), we tried several things to get the frontend restarted:
* Hit the RESET button on the front
* Hit the REBOOT button on the front
* Pulled out (both) power cords from the back
None of these actions did anything (some were tried several times). We had a monitor hooked up directly to the frontend, and after various steps we'd get the following error (right after a login prompt): INIT: Id "fb" respawning too fast: disabled for 5 minutes. Richard logged into the frontend via the login prompt and was finally able to sort of get it started. We still had issues, though. Richard then tracked this down to a network issue (I can't remember how he discovered it). He then unplugged the power to the network router on top of our rack. After this, we were mostly back. There was still a daq/framebuilder issue, and Dave resolved it.
Another get_data Issue
Towards the end of the day, I tried running a matlab measurement, but there were issues with get_data (our matlab script made 25 attempts at getting data, to no avail). I called up Dave, and he ultimately restarted our nds and daq (I believe he said one of them was old). That seemed to have done the trick, since I was able to get_data on my next attempt.
After doing another set of yaw adjustments to get the side BOSEMs in range at the Tablecloth, and jacking the TC up by a mm or 2, we continued work to install BOSEMs. So far, we have MO FACE 1, 2, and 3 aligned to 50% OLV, with the rest cabled and mounted to the Tablecloth. This also required the addition of washers behind the FACE magnet flag assemblies in order to get the flags into the BOSEM range. (I recall seeing this at LASTI, but had hoped it had been reconciled in the later drawing revisions. Guess not.) DV and the "InputSignals.adl" medm both proved healthy and useful for this work. Richard verified that the uncalibrated signals, which go from 0 to ~ -4000 counts, indeed correspond to 0-2V, so we can put some calibration on the channels eventually. Mark Barton came over and began working through input matrices and appropriate signs in various fields. AOSEM installation into Q1 is back on hold while John re-thinks the RGA scans, which showed a pretty good air leak. More of the same tomorrow.
I have begun a low-frequency (down to 0.001 Hz) measurement on the LHO X1 TestStand. This should take approx. 12 hrs, barring any earthquake or major seismic event triggering our watchdogs. It was begun tonight at approx. 18:45 (local). Please make no changes to filter banks or medm screens, and please do not use awgstream. Thanks.
So, I jumped the gun a little on the newly-calculated matrices from Jeff K. & Celine R. We decided to revert the matrices to the previous values for unit_1 measurement consistency. The new MATLAB versions of the matrix calculations I ran yesterday will be implemented for unit_2 and beyond... The previous values were restored via the bash scripts located in the directory '/opt/svncommon/seisvn/seismic/HAM-ISI/X1/Scripts' on the X1 TestStand. They are titled 'setgeo2cenmtrx', 'setdisp2cenmtrx', and 'setcont2actmtrx'.
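For anyone repeating the revert, the usage is roughly as sketched below; I'm assuming the scripts are executable and take no arguments, so check the script contents if in doubt:
cd /opt/svncommon/seisvn/seismic/HAM-ISI/X1/Scripts
./setgeo2cenmtrx
./setdisp2cenmtrx
./setcont2actmtrx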
I wrote a script to generate the G0EDCU.ini file from the model db files (every EPICS record is put into the ini file). I ran the script and noticed that we are only recording 2066 of the 4320 EPICS channels now in the system. On the next daqd restart, the number of channels should go up to 4320, and the full frame should get larger than the current 7.1 MB. Since ISI measurements are ongoing, we'll delay the daq restart until next week. The script to generate the EDCU.ini is: /opt/rtcds/geo/g1/scripts/create_edcu_ini.pl At the moment we should run this script by hand each time the hamisi model is changed.
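As a reminder of the by-hand procedure, it is roughly the following (a sketch only; I'm assuming the script takes no arguments, and the daqd restart is the one we are deferring to next week):
# after any hamisi model change, regenerate the EDCU ini
/opt/rtcds/geo/g1/scripts/create_edcu_ini.pl
# then restart daqd so the new channel list (and the larger full frame) takes effect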
Working on the damping loops. The controller will be made of:
- a compensation filter to account for the difference in the electronics between HAM 6 and X1
- the original damping loops from eLIGO
The plant, controller, open loop, closed loop, and sensitivity are shown in the attached figure. I have updated the G1ISIHAM.txt filter file and loaded the filters in the MEDM. It should be ready to go.
Corey turned on the damping loop installed this morning. It worked very well.
- the first figure of the attached document shows the power spectrum with damping ON and OFF
- the second figure compares the sensitivity ('Undamped'/'Damped') of LLO HAM (Aug 2008) and LHO Unit 1. The performance is very similar, which confirms that we can use the damping loops as they are (modulo the electronics change compensation). The plot also shows that the measured performance matches the prediction (the sensitivity curve posted this morning).
I am going to implement H2, H3, V1, V2, and V3.
All six damping loops are installed and running. Plant, controllers, open loops, closed loops, and sensitivity are attached. H1 (solid line), H2 (dashed line), and H3 (dash-dot line) are on the first plot. They are hard to distinguish, which is very good (very nice symmetry). V1 (solid line), V2 (dashed line), and V3 (dash-dot line) are on the second plot. The second document shows the power spectrum with damping on and off. Everything looks good.
Tonight, I had hopes of starting one of Fabrice's transfer functions overnight. Unfortunately, I was not able to, because I was never able to clear the Watchdog. As for what seemed to be causing the trips: for the most part, the H1 & V1 Actuators would immediately rail to 32k. This would cause two things: an Actuator WD trip and an H1/V1 Geophone WD trip. Additionally, the Actuator would remain RED in the "First Trig" state. I tried various tricks with gains, and turning inputs/outputs off, to no avail. I looked at the rack and nothing was amiss.
Reboot of seiteststand
At this point, I performed a reboot of the frontend. It was straightforward and there were no issues, but it didn't change the situation.
Matrices Re-filled
All of our filter bank paths are fairly simple (no filters engaged and gains of 1 all around). The oddities I found were in some of the matrices. In the CONT2ACT matrix there was a 1 in a place where there wasn't supposed to be one, and one of the matrix elements had a different sign from what it should be (see the attached image of the CONT2ACT matrix). Because of this, I decided I might as well run the scripts to fill the matrices. One more note: the DispAlign matrix was also empty, so I put 1's down its diagonal as well.
Matrix Set-Up Scripts
Jeff Kissel has some bash scripts which fill the HAM-ISI matrices (they are the ones used to fill the HAM6 ISI at both sites). Since Dave copied the epics bin area over to the seiteststand, we are now able to run bash scripts (as well as use commands such as "caput" & "caget"). Nic helped me edit the scripts to make them work (among some location-dependent changes, we commented out the --noprofile --norc values at the beginning of the script). I ran the scripts for the following matrices: Disp2Cen, Geo2Cen, & Cont2Act. After all of this, I was still not able to clear the Watchdog (or run our measurement). I've attached a snapshot of the Watchdog. Notice the "First Trig" of the Actuators (and how the Actuator Monitors are all 0). As soon as I clicked RESET, these monitors would show H1/V1 rail to 32k.
Just thought I'd add the location of these bash matrix scripts (Disp2Cen, Geo2Cen, & Cont2Act). They are located at: /opt/svncommon/seisvn/seismic/HAM-ISI/X1/Scripts/
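For illustration, these matrix scripts presumably boil down to a series of caput calls, one per matrix element, along the lines of the minimal sketch below (the channel name is made up for the example, so don't copy it literally; the real names are in the scripts themselves and on the medm screens):
#!/bin/bash
# set one element of a HAM-ISI input matrix via EPICS channel access
caput X1:ISI-HAM_DISP2CEN_1_1 1.0
# ...and so on for each element of Disp2Cen, Geo2Cen, and Cont2Act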