How to Make EEmcTrees
Instructions on how to make EEmcTrees.
Preparation
Working Directory
Make some directory where you will be working. I'll assume the directory will be named "~/EEMC/2013-Feb/trees", though other names work just as well.

mkdir -p ~/EEMC/2013-Feb/trees
cd ~/EEMC/2013-Feb/trees
CVS Check Out
Next, check out the needed files from CVS.

cvs co StRoot/StEEmcPool/StEEmcTreeMaker/macros
cvs co StRoot/StEEmcPool/StEEmcTreeMaker/scheduler

If the above step results in errors, then you may need to renew your Kerberos tokens via

kinit
aklog

before issuing the "cvs co" commands. (Note: co is short for check out.)
Adjusting Directories
Issue the following commands to consolidate the directory structure. This is mainly to make life easier. Make sure you realize that some of the following commands end with [space][period].

mv StRoot/StEEmcPool/StEEmcTreeMaker/macros .
mv StRoot/StEEmcPool/StEEmcTreeMaker/scheduler .
rm -rf macros/CVS
rm -rf scheduler/CVS
rm -rf StRoot
Directories to Store Generated Log and Condor Files
Make directories to store the extra condor files that will be created and the log files that will be generated.
mkdir condor log
Trigger Simulator Data
In order for the simulated trigger to work correctly, you need to copy some data files. If you are not analyzing 2006 data, this step will need slight modification.

cp -rp /star/u/sgliske/Share/StarTrigSimuSetup/L0 .
cp -rp /star/u/sgliske/Share/StarTrigSimuSetup/L2 .
rm -rf L2/2008

You may need to modify the L2 trigger thresholds. Whatever values are in the files

L2/2006/algos/l2gamma_eemc.setup*

will be the values used for the L2gamma EEMC thresholds. The value on the line after "FLOATS" is the TP threshold, and the next line is the HT threshold.
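For illustration, using the set 'h' data values listed below, the threshold portion of such a setup file would then read as follows (the exact file layout is an assumption; consult the comments in the actual files):

FLOATS
4.2
6.2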
There are four files, for four different data-taking ranges. While there are comments in these files about when the values are appropriate, the code actually reads the file

L2/2006/db/L2TriggerIds.dat

to decide. The 2006 analysis currently uses
- 4.2 and 6.2 GeV (set 'h') for data
- 4.326 and 6.18 GeV (set 'i') for Monte Carlo
Making Part 1
You will need to edit the file

scheduler/makeEEmcTreesPart1.xml

and change the "OUTDIR" variable to the directory where you would like the resulting trees to be located.
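The exact syntax depends on how the template defines the variable, but the change amounts to a single line; assuming a csh-style assignment inside the template, it might look something like this (the path is a placeholder):

set OUTDIR = ~/EEMC/2013-Feb/trees/part1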
You can then submit a run by specifying the run number and the trigger version (which sets the L0 values and serves as a label for the L2 values you entered in the setup file). For example, run 7128061 with trigger threshold version 'h' can be submitted via
star-submit-template -template scheduler/makeEEmcTreesPart1.xml -entities RUN=7128061,TRIGVER=h
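If you have several runs to process, one option is a short csh loop over the run numbers (the run list here is purely illustrative):

foreach RUN ( 7128061 7128062 7128063 )
    star-submit-template -template scheduler/makeEEmcTreesPart1.xml -entities RUN=${RUN},TRIGVER=h
end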
Making Parts 2 and 3
You will need to edit the file

scheduler/makeEEmcTreesPart2and3.xml

and change the "LOCATION" variable to the directory where the part 1 files are located and where the part 2 and 3 files will be stored. This .xml file loops over all part 1 files, and if no part 2 file exists, then the part 2 and part 3 files are generated. It is set up to allow many instances to run simultaneously, and they will continue to run until all the part 2 files are made (or in progress) or until the job is removed from the queue.
Note: you may also have to adjust the trigger label in the commands
set FILE2=${FILE1:t:s/Part1.trig-h.root/Part2.trig-h/}.${ALGO}.root
set FILE3=${FILE1:t:s/Part1.trig-h.root/Part3.trig-h/}.${ALGO}.root
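For instance, if part 1 was produced with trigger threshold version 'i' rather than 'h', those two lines would become

set FILE2=${FILE1:t:s/Part1.trig-i.root/Part2.trig-i/}.${ALGO}.root
set FILE3=${FILE1:t:s/Part1.trig-i.root/Part3.trig-i/}.${ALGO}.root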
To adjust which clustering algorithms are used, find the line
<!-- Edit the following line to adjust for which algorithms trees are made. -->

and add the algorithm keys within the parentheses. One can also limit the part 1 files that are considered by adjusting the line below the comment
<!-- Edit the following line to restrict the number of files considered. -->

Also pay attention to the value of "nProcesses" and set it to a value reasonable for the number of runs that need parts 2 and 3 generated and for the time limit of the queue. Since there are no input files, "filesPerHour" is actually the total number of hours the job is expected to run.
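As a concrete illustration (the numbers are placeholders to tune to your situation): if about 20 runs still need parts 2 and 3 and each process may run for roughly 24 hours, the job tag could carry attributes along the lines of

<job nProcesses="20" filesPerHour="24" ...>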
There is also a flag to change from 0 to 1 if you are using MC instead of real data. There is a comment in the .xml file with details.
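The flag's actual name is given in that comment; purely as a hypothetical sketch of the kind of change involved:

# hypothetical variable name; check the comment in the .xml file for the real one
set isMC = 1    # 0 for real data, 1 for Monte Carlo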
Finally the job can be submitted via
star-submit scheduler/makeEEmcTreesPart2and3.xml
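Once submitted, the jobs can be monitored with the standard Condor tools, e.g.

condor_q -submitter $USER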