Run 9 BSMD Online Monitoring Documentation

For run 9, BSMD performance was monitored by looking at non-zero-suppressed events.  The code is archived at /star/institutions/mit/wleight/archive/bsmdMonitoring2009/.  This blog page focuses on the details of how to run the monitoring and on the role of the various pieces of code involved.  The actual monitoring plots produced are discussed here.

A few general notes to begin:

Since the code always ran out of the same directory, a number of directory paths have been hard-coded.  Please make sure to change these if necessary.

The code for actually reading in events is directly dependent on the RTS daq-reader structure: therefore it sits in StRoot/RTS/src/RTS_EXAMPLE/.

All compilation is done using makefiles.

Brief Description of the Codes:

This section lists all the pieces of code used and adds a one- or two-sentence description.  Below is a description of the program flow which shows how everything fits together.  Any other code files present are obsolete.

In the main folder:

runOnlineBsmdPSQA.py: The central script that runs everything.

onlBsmdPlotter.cxx and .h: The code that generates the actual monitoring plots.  To add new plots, create a new method and call it from the doQA method.

makeOnlBsmdPlots.C: Runs the onlBsmdPlotter code.

GetNewPedestalRun.C: Finds the newest BSMD pedestal run by querying the database.

onlBsmdMonConstants.h: Contains some useful constants.

cleanDir.py: Deletes surplus files (postscript files that have been combined into pdfs, for instance).  This script is NOT run by runOnlineBsmdPSQA.py and must be run separately.

In the folder StRoot/RTS/src/RTS_EXAMPLE/:

bsmdPedMonitor.C: Reads the pedestal file from evp, fills the pedestal histograms and saves them to file, and also generates ps and txt files describing the pedestals.

makeMapping.C: Creates a histogram that contains the mapping from fiber and rdo to softId and saves it to mapping.root.  Unless the mapping changes or the file mapping.root is lost there is no need to run this macro.

onlBsmdMonitor.C: Reads BSMD non-zero-suppressed events as they arrive and creates a readBarrelNT object to fill histograms of pedestal-subtracted ADC vs. softId and capId, as well as rates and the number of zero-suppressed events per module.  When the run ends, it saves the histograms to file and quits.

readBarrelNT.C and .h: This class, used by onlBsmdMonitor.C, does the actual work of filling the histograms.

testBsmdStatus.C: Checks the quality of the most recent pedestal run and generates QA ps files.

rts_example.C: This is not used but serves as the template for all the daq-reading code.

Program Flow:

The central script, runOnlineBsmdPSQA.py, has a number of options when it is started:

Script Options

-p: print.  This is a debugging option: it causes the code to print the ADC value for every channel.

-n: number of events (per run).  Occasionally it is useful for testing to limit the number of (non-zero-suppressed) events the monitoring program looks at.  During actual monitoring, we wished to catch all possible events, so the script was always initialized with -n 1000000: the actual number of non-zero-suppressed events in a run is a few hundred at most.

-t: this option is not used in the current version of the code.

-r: raw data.  This option was added during testing when online reading of pedestal files had not been implemented: if it is turned on then the code will not attempt to subtract pedestals from the data it reads.

-v: mpv cut.  If a channel has MPV greater than this value, it is considered to have a bad MPV.  The default is 5.

-m: mountpoint.  For monitoring, the mountpoint is always set to /evp/.  During tests, it may be useful not to have a mountpoint and read from a file instead, in which case no mountpoint will be given.

The standard initialization of the script for monitoring is then:

python runOnlineBsmdPSQA.py -n 1000000 -m /evp/

Usually the output is then piped to a log file.  For testing purposes, as noted above, the -m option can be left off and the mountpoint replaced with a file name.  (Side note: if neither a mountpoint nor a file name is given, the code will look for the newest file in the directory /ldaphome/onlmon/bsmd2009/daqData/, checking to be sure that the file does not have the same run number as that of the last processed run.  This last option was implemented to allow for continuous processing of daq files but never actually used.)
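The fallback behavior (no mountpoint and no file name given) can be sketched as follows.  This is a minimal Python sketch, not the archived code: the filename pattern (a run number embedded as digits) and the function name are assumptions, and the list of (filename, mtime) pairs stands in for a scan of /ldaphome/onlmon/bsmd2009/daqData/.

```python
import re

def find_newest_daq_file(files, last_run):
    """Pick the newest daq file whose run number differs from the
    last processed run, or None if every candidate matches it.
    `files` is a list of (filename, mtime) pairs."""
    # Walk the files from newest to oldest by modification time.
    for name, _ in sorted(files, key=lambda p: p[1], reverse=True):
        m = re.search(r'(\d+)', name)
        # Skip a file carrying the same run number as the last processed run.
        if m and int(m.group(1)) != last_run:
            return name
    return None
```

The same-run-number check is what prevents the script from reprocessing the file it just finished.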

The main body of the monitoring script is an infinite loop.  This loop first checks what the current BSMD pedestal run is: if it is newer than the one currently used, the script processes it to produce the pedestal histograms that will be used by the data processing code.  This process happens in several steps:

Pedestal Processing

Step 1: The script calls the C macro GetNewPedestalRun.C, passing it the time in Unix time format.  The macro then queries the database for all pedestal runs over the last week.  Starting with the most recent, it checks to see if the BSMD was present in the run: if so, the number of that run is written to the file newestPedRunNumber.txt and the macro quits.
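The selection logic of step 1 can be sketched in Python.  The tuple layout for the query result is hypothetical (the real macro queries the run database directly and writes the winner to newestPedRunNumber.txt); only the search order and the BSMD check follow the text above.

```python
def newest_bsmd_pedestal_run(runs, now, window=7 * 24 * 3600):
    """Return the most recent pedestal run within the last week that
    included the BSMD, or None.  `runs` stands in for the database
    query result as (run_number, unix_start_time, detector_set) tuples."""
    # Keep only runs from the last week (relative to `now`, in Unix time).
    recent = [r for r in runs if now - r[1] <= window]
    # Starting with the most recent, take the first run with the BSMD in.
    for run, _, detectors in sorted(recent, key=lambda r: r[1], reverse=True):
        if 'BSMD' in detectors:
            return run
    return None
```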

Step 2: The script opens newestPedRunNumber.txt and reads the run number there.  It then checks to see if the pedestals are up-to-date.  If not, it moves to step 3.
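The step-2 check amounts to comparing the run number in the file against the pedestal run the script is currently using; a minimal sketch (with the file contents passed in as text rather than read from disk):

```python
def pedestals_need_update(run_number_text, current_ped_run):
    """True if the run number in newestPedRunNumber.txt (passed in as
    text) differs from the pedestal run currently in use."""
    return int(run_number_text.strip()) != current_ped_run
```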

Step 3: The script moves to StRoot/RTS/src/RTS_EXAMPLE/ and calls bsmdPedMonitor, passing it the evp location of the newest pedestal run, /evp/a/runnumber/ (bsmdPedMonitor does take a couple of options but those are never used).  The main function of bsmdPedMonitor is to produce a root file containing three histograms (BSMDE, BSMDP, BPRS) each of which has a pedestal value for each softId-cap combination.  The pedestal monitoring code starts by reading in the BSMD and BPRS status table (StatusTable.txt) and the mapping histogram (from the file mapping.root) which gives the mapping from fiber and rdo to softId.  Then it goes into an infinite loop through the events in the file.  In a pedestal file, the pedestals are stored in the event with token 0: therefore the code rejects events with non-zero token.  Once it has the token-0 event, it loops through the fibers, grabs the pedestal and rms databanks for each fiber, and fills its histograms.  Finally the code generates a text file that lists every pedestal and several ps files containing the pedestals plotted vs softId and cap.
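The token-0 selection at the heart of bsmdPedMonitor can be sketched in Python.  The (token, banks) event layout is a stand-in for the evp stream, and the dict of per-fiber pedestal and rms lists is a simplification: the real C code fills ROOT histograms keyed by softId and cap via the mapping histogram.

```python
def extract_pedestals(events, nfibers):
    """Return per-fiber (ped, rms) pairs from the token-0 event, or
    None if no token-0 event is present.  `events` is an iterable of
    (token, banks) pairs with banks mapping fiber -> (ped_list, rms_list)."""
    for token, banks in events:
        if token != 0:
            continue  # in a pedestal file, only the token-0 event holds pedestals
        peds = {}
        for fiber in range(nfibers):
            ped, rms = banks[fiber]  # pedestal and rms databanks for this fiber
            peds[fiber] = list(zip(ped, rms))
        return peds
    return None
```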

Step 4: The script calls testBsmdStatus.C, passing it the root file just generated.  This macro checks each good channel to make sure that the pedestal and rms just obtained are within certain (hardcoded) limits.  If the number of channels with pedestal or rms outside of these limits is above a (hardcoded) threshold for a given crate, that crate is marked as bad.  A ps file is generated containing a table in which crates are marked as good or bad in green or red, along with several more ps files of softId vs cap histograms in which bad pedestals are marked in red.  The thresholds for whether a crate is good or bad are set on line 20 in the array badChannelThreshold: the first eight numbers are for the BSMD crates and the next four for the BPRS crates.  Right now the only way to change these is to edit them in the macro.
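The crate check reduces to a per-crate comparison against badChannelThreshold; a Python sketch, with placeholder threshold values (the actual numbers live on line 20 of the macro):

```python
# Placeholder values standing in for badChannelThreshold: the first eight
# entries are the BSMD crates, the last four the BPRS crates.
badChannelThreshold = [50] * 8 + [25] * 4

def crate_status(bad_channel_counts, thresholds=badChannelThreshold):
    """Mark each crate good (True) or bad (False): a crate is bad when
    its count of channels with out-of-limits pedestal or rms exceeds
    its threshold."""
    return [n <= t for n, t in zip(bad_channel_counts, thresholds)]
```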

Step 5: The postscript files are combined into pdfs (one for the pedestals and one for the statuses), which are, along with the text file, copied to the monitoring webpage.  The index.html file for the monitoring webpage is updated to reflect that there are new pedestals and to link to the files generated from the new pedestal file.  Finally, the variable containing the number of the old pedestal run is changed to that of the current pedestal run.
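The ps-to-pdf combination in step 5 might look like the following.  Ghostscript is an assumption here (the archived script may use ps2pdf or another tool); the function returns the command list so it can be inspected without actually running it.

```python
import subprocess

def combine_ps_to_pdf(ps_files, pdf_name, run=False):
    """Build (and optionally run) a ghostscript command that merges
    several postscript files into a single pdf."""
    cmd = ['gs', '-dBATCH', '-dNOPAUSE', '-q', '-sDEVICE=pdfwrite',
           '-sOutputFile=' + pdf_name] + list(ps_files)
    if run:
        subprocess.check_call(cmd)  # raises if ghostscript fails
    return cmd
```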

The next step is to call onlBsmdMonitor, which reads the events as they arrive.  This code has several options:

Daq Reading Options:

-d: allows you to change the built-in rts log level.

-D: turns printing on or off; corresponds to -p from the script.

-m:  mountpoint, the same as -m from the script.

-n: nevents, the same as -n from the script.

-l: last run, which tells the code what the last run was so that it knows when a new run is starting.

-r: raw data, same as -r from the script.

The script thus picks what options to give to onlBsmdMonitor from the options it was given, as well as from the value it has recorded for the last run number (1 if the script is just starting up).  Output from onlBsmdMonitor is piped to a log file named currentrun.log.  OnlBsmdMonitor also consists of an infinite loop.  As the loop starts, it gets the current event and checks whether we are in a new run, and if so whether this new run includes the BSMD.  If so, it initializes a new instance of the class readBarrelNT, which does the actual processing, and then processes the event.  If instead this is an already-running run, it checks to make sure that we have a new event, and if so processes it.  If we are in between runs or the event is not new, the loop simply restarts, in some cases after a slight delay.  The main task of onlBsmdMonitor is to obtain histograms of pedestal-subtracted ADC vs softId and capId for the BPRS, BSMDE, and BSMDP and save them to a root file.  It also histograms the rate at which events are arriving and the number of zero-suppressed events per module.  The root file additionally contains a status histogram which is obtained from the file StatusTable.txt.  When the run is over, the histograms are written to file and onlBsmdMonitor quits, writing out a line to its log file to indicate its status at the point at which it ends.
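The control flow of that loop can be sketched as one step over a plain dict.  The field names ('run', 'seq', 'has_bsmd') and the returned action strings are hypothetical; only the branching follows the description above.

```python
def monitor_step(state, event):
    """One pass of the monitoring loop: decide whether this event starts
    a new run, continues the current run, or should be ignored.
    `state` holds the current run and last event sequence number."""
    if event['run'] != state['run']:          # a new run is starting
        state['run'] = event['run']
        if not event['has_bsmd']:
            return 'skip: no BSMD in run'
        state['last_seq'] = event['seq']      # real code creates a new readBarrelNT here
        return 'new run: event processed'
    if event['seq'] != state.get('last_seq'):
        state['last_seq'] = event['seq']      # same run, genuinely new event
        return 'event processed'
    return 'wait'                             # not new: loop restarts, possibly after a delay
```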

The script now looks at the last line of the log file.  If this line indicates that the run had no BSMD, a new entry is created in the monitoring webpage's table to reflect this fact.  If not, the script first checks to see if we are in a new day: if so, a new line for this day is added to the webpage and a new table is started.  The script then extracts the number of non-zero-suppressed BSMD events, the total number of events in the run, the number of ZS events, and the duration of the run from the last line of the log file.  If the run is shorter than a (hardcoded) length cutoff, has no NZS BSMD events, or has fewer NZS BSMD events than a (hardcoded) threshold, the appropriate new line in the table is created, with each of the values obtained from the log file entered into the table, a link to the runlog added, and the problem with the run noted.  If the run passes all quality checks, makeOnlBsmdPlots is called.  This code generates the actual monitoring plots: the script then takes the resulting ps files, combines them into a pdf, and creates a new entry in the table with all the run information, a link to the pdf, a link to the run log, and a link to the pedestal QA pdf.  The plotting code must be given the mpv cut value and the elapsed time, as well as the name of the root file containing the histograms for the run.  Any new monitoring plots that are desired can be added by writing a method to create them in onlBsmdPlotter.h and onlBsmdPlotter.cxx and calling it from the method doQA.  Having thus completed a run, the loop restarts.
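The quality gate applied before makeOnlBsmdPlots is called can be sketched as follows.  The numeric cutoffs are placeholders for the hardcoded values in the script; the order of the checks follows the text above.

```python
def run_quality(nzs_events, duration_s, min_duration_s=60, min_nzs=10):
    """Classify a run before plotting: returns 'ok' or a short string
    naming the problem that will be noted in the webpage table."""
    if duration_s < min_duration_s:
        return 'run too short'
    if nzs_events == 0:
        return 'no NZS BSMD events'
    if nzs_events < min_nzs:
        return 'too few NZS BSMD events'
    return 'ok'
```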