Run list production primer for RUN 12 pp500
This page details the steps followed to produce the 1st priority, 2nd priority, and discard run lists. These lists are generated for spin running periods and lay out which runs have few problems and should be produced first, which runs have more problems and should be produced second, and which runs should not be produced at all (This blog is stolen from Pagebs' blog).
The steps outlined here are those which were followed to create the 2012 500 GeV run lists. The runlist generation scripts can be found at RCF: /star/u/ypwang/disk01/pi0JetAna/jet500_2012/pp500_2012_runlist
Step 1: Create the Raw Run List (stolen from Pagebs' blog)
The first step is to create the raw run lists for your timeframe of interest. This is done using the loopMainAll.tcl script which can be found in topDir/mylist/ . To work properly, loopMainAll requires a directory named data/ to be present in the same directory, and data/ must contain two empty files named testlist.csv and testSummary.csv .
The script runs through all database entries in a set range, and each run which meets some general criteria is recorded in the two .csv files. The testlist.csv files are the ones given out to the individual QA'rs so they can assign status codes for each run and make comments. The file has the format: run number, QA'r status codes, QA'r comments, run setup name, shift leader status, and included detectors. When the file is first generated, the fields the QA'r is responsible for are left blank. The testSummary.csv files contain all the information found in the testlist.csv file and in addition list the fill, the date and time the run started, as well as the values for several scalers and triggers which are specified in loopMainAll.
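For illustration only, a freshly generated testlist.csv line might look like the following; the run number, setup name, and detector list here are hypothetical, and the QA'r status code and comment fields are still blank:
13078001, , , pp500_production_2012, Good, tpx btow etow esmd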
The loopMainAll script runs through all database entries in a given range; the range is defined using a database parameter called dataID. The range is determined by setting the variables my(lastDbId) and my(stopDbId) near the beginning of the script. To get the dataIDs corresponding to specific run numbers one can use the mysql query:
mysql -h dbbak.starp.bnl.gov --port 3411 RunLog -s -E -e " select dataID from runDescriptor where runNumber=#"
or run the command: ./getDataID.sh (after editing it with your first and last run numbers)
Once you know the range of run numbers to be listed, you can find the corresponding beginning and ending dataIDs and set them.
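As a hedged sketch, assuming the variable names quoted above, the range is set near the top of loopMainAll.tcl along these lines (the dataID values are placeholders, and which variable marks which end of the range should be checked in the script itself):
set my(lastDbId) 1234567   ;# placeholder dataID bounding one end of the run range
set my(stopDbId) 1299999   ;# placeholder dataID bounding the other end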
Run command: tclsh loopMainAll.tcl
The loopMainAll script will do some basic filtering of runs. The script will ignore runs which were marked as junk by RTS. The script will also ignore runs which are shorter than a minimum length, which was set to 180 sec in run 12. The minimum time is set in the line which begins: if { $myRun(totSec) .
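In Tcl the length cut amounts to something like the line below (the exact condition in the script may include additional tests):
if { $myRun(totSec) < 180 } { continue }   ;# skip runs shorter than the 180 s minimum used in run 12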
The script can also be set to ignore runs with certain keywords in the trigger setup name. This is useful for removing test runs, commissioning runs, and similar runs which are present throughout the data-taking period. The list of keywords to be excluded can be set in the excludeL array. CAUTION: the code will remove any run whose trigger setup name contains a string found in excludeL. So, for example, if you include the string 'prod' in the excludeL list the code will remove unwanted runs like 'prod_test', but it will also remove useful runs like 'production2011'.
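A hedged sketch of how such substring matching could look in Tcl; the keywords are illustrative and the variable holding the trigger setup name may be named differently in the actual script:
# illustrative exclusion list; tune the keywords to your own running period
set excludeL [list pedestal test tune]
# compare the trigger setup name of the current run against every keyword
set skip 0
foreach key $excludeL {
    if { [string match "*$key*" $trgSetupName] } { set skip 1 ; break }
}
# note: a keyword like 'prod' would match both 'prod_test' and 'production2011'
if { $skip } { continue }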
The loopMainAll script will also record the number of times various triggers fired during the run. The triggers to be listed can be specified in the 'my(TrgL)' array which is set near the beginning of the script. The retrieval of the trigger numbers as well as the optional prescaling is done in the 'getEvents' function. The number of events for each type of trigger is printed out to the testSummary.csv file.
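For example, the trigger list might be set near the top of the script along these lines (the trigger names are only illustrative; use whichever triggers matter for your data set):
set my(TrgL) [list JP0 JP1 JP2 AJP BHT3]   ;# illustrative trigger names only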
One final note about loopMainAll: most of the mysql commands in the code have a similar command above or below which is commented out. This is because the information in the database is archived to a backup location after the run is over, so one set of commands is for use while the run is ongoing and the other set is for use after the run is over and the information has been archived. The dBaseLoc variable stores the location of the database to use; the current run will always be at onldb.starp.bnl.gov and archived runs will be at dbbak.starp.bnl.gov. A complete list of database locations and port assignments for the current run and all archived runs can be found at: http://drupal.star.bnl.gov/STAR/comp/db/onlinedb/online-sever-port-map
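A minimal sketch of the two choices, assuming dBaseLoc is simply spliced into the mysql command line (the archived host and port match the query shown earlier; the online port should be taken from the port-map page linked above):
# after the run: archived database
set dBaseLoc "-h dbbak.starp.bnl.gov --port 3411"
# during the run: online database (look up the correct port on the port-map page)
# set dBaseLoc "-h onldb.starp.bnl.gov --port <port>"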
Step 2: Hand QA the List
Once the run lists have been generated they can be sent out to the individual QA'rs so that they may write in status codes and comments for each run. The list sent out for use by the QA'rs is the testlist.csv file. Each QA'r is usually responsible for one week of runs, so the list should be broken up accordingly.
For each run on his/her list, the QA'r must look at PPlots, the shift log, trigger rates, L0 and L2 monitoring pages, and any other sources of relevant information and determine the overall 'health' of the run. To help classify problems and standardize the QA, a set of status codes has been created which can be used to indicate specific problems with various subsystems. The list of status codes used in the 2009 200 GeV QA can be found at this web page: http://www.star.bnl.gov/protected/spin/sowinski/runQC_2009/ . New status codes can be added to account for new subsystems or new problems within subsystems. At the bottom of the above web page, examples of QA'd lists can be found. NOTE: it is important that QA'rs do not use commas to separate different status codes or in their comments; otherwise the scripts which parse and merge the lists later on will break. Also, only the first three fields in the list need to be kept; the trigger setup name, run status, and detector list fields are optional.
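For illustration only, a QA'd line using just the first three fields might look like this (the run number and status codes are hypothetical; note that neither the codes nor the comment contains a comma):
13078001, 15 31, BTOW crate off for first 5 min; otherwise ok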
Step 3: Merge the Lists
After the run lists have been QA'd, they need to be combined and merged with the testSummary.csv list generated in step 1. Before you run the script to merge the QA'd lists with testSummary.csv, it is a good idea to go through the QA'd lists to make sure there were no mistakes, like commas in the comments section. You should also make sure that the QA was done in a consistent way throughout the run. Some problems will persist across several reviewers' periods, and you should make sure that different reviewers handled the problem consistently, so that, for example, one reviewer doesn't kill runs with the problem while another reviewer treats it as minor. The original QA'd lists and the lists with their modifications can be stored anywhere; I stored them in topDir/QA/origList and topDir/QA/revisedList . You will also have many different QA lists, one for each week; these need to be concatenated into one master list which spans the entire run. I also keep this file in topDir/QA/revisedList .
The actual merging is done with the runMerge.tcl script which can be found in topDir/QA/merge . There are also three subdirectories needed in the same directory as runMerge.tcl: runsum/ , runsumout/ , and trimlists/ . The trimlists/ directory contains the truncated version of the master list which I keep in topDir/QA/revisedList . The runMerge.tcl script only takes in the first three fields of the QA'd lists (Run number, status codes, and comments) so any extra fields from the master QA'd list must be removed. I do this with the following 'awk' command:
awk -F, '{printf("%s, %s, %s\n",$1,$2,$3)}' input_list > output_list
The default name for the truncated master QA list is 'masterRunQA.dat'; it can be named anything, but the runMerge.tcl script will have to be changed accordingly. The runsum/ directory contains the testSummary.csv file which was created in step 1. The runMerge.tcl script will merge the run list found here with the truncated master run list found in trimlists/ . The final merged list goes to runsumout/ ; this is the list that will be used for sorting in the final step.
The runMerge.tcl script itself is pretty simple and shouldn't need much modification from year to year. The only thing that will need to change is a test of the number of words in a line. The script expects the lines for each run in testSummary.csv to be a certain length; if you change the output format of testSummary.csv, i.e. change the number of trigger values written out, you will need to change this test accordingly. The test is the if statement 'if { $nw!=20 || ... ' which appears on line 42. The script outputs several files in addition to the merged run list. The first two, yellpol.dat and bluepol.dat, are no longer used and can be ignored. The last file is merge_err.dat. It is important to check this file after the merging is done: if the word 'extralines' appears, followed by one or more lines from the QA'd run list, it may indicate that there was a problem merging those lines from the two files. You should go back and check by hand to make sure the comments in the output file match the comments from the QA'd run list for the runs indicated in merge_err.dat.
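One way the length check could look, purely as a hedged sketch; whether the real statement on line 42 counts whitespace-separated words or comma-separated fields, and what else it tests, should be checked against the actual code:
# count the comma-separated fields on one testSummary.csv line
set nw [llength [split $line ","]]
# 20 matches the current output format; adjust if trigger columns are added or removed
if { $nw != 20 } { continue }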
Step 4: Sort the List
The final step in this process is sorting the merged run list into first priority, second priority, and discard lists. The sorting is done by the prod_runlist.tcl script found in topDir/QA/prodlist . The script will output five files: prior.csv, proc.csv, discard.csv, nosort.csv, and sortSum.html . The first three files are the first priority, second priority, and discard lists respectively, nosort.csv contains runs which didn't get sorted into any of the three categories, and sortSum.html places the information on the number of triggers for the various priorities into tables in html format for inclusion on web pages.
The prod_runlist.tcl script has two major functions: sorting the master run list into the three priority lists and compiling statistics on the number of times various triggers fired in each list. All runs start out as first priority and are then demoted if certain conditions are met; the script provides many different ways to demote runs. Currently, runs can be demoted to second priority or discarded based on run type, status code, shift leader status, and the specific detectors present (or not present) in the run. The setup which was used to create the run 9 200 GeV lists can be found in the example script at topDir/QA/prodlist/ ; the specifics of how to sort runs into different categories will change year to year based on running conditions.
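The demotion logic itself is just a chain of tests. A purely illustrative sketch, in which the status code, variable names, and criteria are made up for the example:
# every run starts as first priority (1); demote to second (2) or discard (3) as problems appear
set priority 1
if { [lsearch $statusCodes 99] >= 0 } { set priority 3 }           ;# hypothetical fatal status code
if { $shiftLeaderStatus eq "Questionable" } { set priority 2 }     ;# hypothetical shift-leader demotion
if { [lsearch $detectorList "tpx"] < 0 } { set priority 3 }        ;# discard runs without the TPC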
The second major task that the script performs is listing the number of times various triggers fired for the runs in each priority list. The testSummary.csv file should record the number of times the triggers defined in 'my(TrgL)' in the loopMainAll.tcl script fired for each run. The prod_runlist.tcl script can sum these numbers for the runs in each priority and display the totals. The script will also read in and sum the number of events and daq files from the online runlog browser pages. Currently I have things set up to divide the totals into all runs, zdc_polarimetry runs, and runs with the tpx included for each priority list.
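Summing per priority is straightforward. A sketch, assuming the trigger counts for the current run have already been read into a hypothetical nEvents() array keyed by trigger name:
# accumulate trigger totals separately for each priority list
foreach trg $my(TrgL) {
    if { ![info exists trgSum($priority,$trg)] } { set trgSum($priority,$trg) 0 }
    incr trgSum($priority,$trg) $nEvents($trg)
}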