
Performance Benchmarks

I ran a couple of TStopwatch tests on the Run 5 common trees.  Here are the specs:

Hardware:  Core Duo laptop, 2.16 GHz

Trees:  805 runs, 26.2M events, 4.4 GB on disk

Languages:  CINT, Python, compiled C++

I also tested the impact of using a TEventList to select the ~11M JP1 and JP2 events needed to plot deta and dphi for pions and jets.  Here's a table of the results.  The times listed are CPU seconds and real seconds:

                  Chain init + TEventList generation    Process TEventList
    CINT          156 / 247                             1664 / 1909
    Python        156 / 257                             1255 / 1565
    Compiled C++  154 / 249                              877 / 1209

I also tried the Python code without a TEventList.  The chain initialization dropped to 50/70 seconds, but reading all 26M events took 1889/2183 seconds.  In the end the TEventList was definitely worth it, even though it took about 3 minutes to construct.

Conclusions:
  1. Use a TEventList.  My selection criteria weren't very restrictive (event fired JP1 or JP2), but I cut my processing time by > 30%.
  2. I had already compiled the dictionaries for the various classes and the reader in every case, but this small macro still got a strong performance boost from compilation.  I was surprised to see that the Python code was closer in performance to compiled C++ than to CINT.
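For reference, the ROOT idiom is to build the list once with chain.Draw(">>elist", cut), fetch it, and attach it with chain.SetEventList(elist) so later loops only touch passing entries.  Here's a ROOT-free toy sketch of the same pattern; the event records and trigger names are invented for illustration:

```python
# Toy illustration of the TEventList idea: pay once for a full scan that
# records which entries pass the trigger cut, then every later analysis
# pass loops only over those entries.  Data and field names are made up.
events = [
    {"trigger": "JP1", "pt": 3.1},
    {"trigger": "MB",  "pt": 0.4},
    {"trigger": "JP2", "pt": 7.8},
    {"trigger": "MB",  "pt": 1.2},
]

def build_event_list(events, accepted=("JP1", "JP2")):
    """One expensive full scan, analogous to chain.Draw(">>elist", cut)."""
    return [i for i, ev in enumerate(events) if ev["trigger"] in accepted]

elist = build_event_list(events)

# Subsequent passes skip rejected entries entirely, analogous to calling
# chain.SetEventList(elist) before processing the chain again.
selected_pt = [events[i]["pt"] for i in elist]
```

The up-front scan only pays off when you process the chain more than once or when the selection rejects a meaningful fraction of events, which matches the ~30% savings above.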

Introduction at Spin PWG meeting - 5/10/07

I've been working on a project to make the datasets from the various longitudinal spin analyses underway at STAR available in a common set of trees.  These trees would improve our ability to do the kind of correlation studies that are becoming increasingly important as we move beyond inclusive analyses in the coming years.

In our current workflow, each identified particle analysis has one or more experts responsible for deciding just which reconstruction parameters and cuts are used to determine a good final dataset.  I don't envision changing that.  Rather, I am taking the trees produced by those analyzers as inputs, picking off the essential information, and feeding it into a single common tree for each run.  I am also providing a reader class in StSpinPool that takes care of connecting the various branches and does event selection given a run list and/or trigger list.
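The reader's event selection boils down to a run-list/trigger-list filter.  A minimal sketch of that logic follows; the class name, field names, and run number are hypothetical, not the actual StSpinPool interface:

```python
# Minimal sketch of run-list / trigger-list event selection, in the spirit
# of the StSpinPool reader class.  All names here are hypothetical.
class CommonTreeReader:
    def __init__(self, run_list=None, trigger_list=None):
        # None means "accept everything" for that criterion
        self.run_list = set(run_list) if run_list else None
        self.trigger_list = set(trigger_list) if trigger_list else None

    def accept(self, event):
        """Return True if the event passes both the run and trigger cuts."""
        if self.run_list is not None and event["run"] not in self.run_list:
            return False
        if self.trigger_list is not None:
            # pass if any fired trigger is on the requested list
            if not (set(event["triggers"]) & self.trigger_list):
                return False
        return True

reader = CommonTreeReader(run_list=[6119039], trigger_list=["JP1"])
ok = reader.accept({"run": 6119039, "triggers": ["JP1", "MB"]})
bad = reader.accept({"run": 6119040, "triggers": ["JP1"]})
```

Keeping the selection in the reader means every analysis sees the same filtered event stream without duplicating cut logic.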

Features

  • Readable without the STAR framework
  • Condenses data from several analyses down to the most essential ~10 GB (Run 6)
  • Takes advantage of new capabilities in ROOT allowing fast fill/run/trigger selection

Included Analyses

  • Event information using StJetSkimEvent
  • ConeJets12 jets (StJet only)
  • ConeJetsEMC jets (StJet only)
  • charged pions (StChargedPionTrack)
  • BEMC neutral pions (TPi0Candidate)
  • EEMC neutral pions (StEEmcPair?) -- TODO
  • electrons * -- TODO
  • ...

Current Status

I'm waiting on the skimEvent reproduction to finish before releasing.  I've got the code to combine jets, charged pions, and BEMC pions, and I'm working with Jason and Priscilla on EEMC pions and BEMC electrons.

Embedding Notes, 3 May 2007

Notes on embedding test sets for CuCu, P06ib. I ran several sets of embedding test files at PDSF, named Piminus_00x_spectra.

Useful Condor commands

As RCF moves towards the Condor batch system, I thought I'd compile a list of useful commands here.  The ones with a * next to them should be run from the node on which you submitted your jobs.  The full Condor 6.8.3 manual is at

Pythia notes

Starsim at PDSF seems to pick up different libraries than at RCF (I guess this is no surprise).  The load of the default pythia library seems to fail for all tested library versions (SL06e, SL06f, starnew, starpro), and it fails silently.  Log file output includes

gstar_input: initializing the MPAR structure (event header such as Pythia process id etc)
*** Unknown command: ener
*** Unknown command: MSEL

A local checkout and compile of the pythia library seems to fix it.  Must replace the line in the kumac

gexec $STAR_LIB/apythia.so

with

This is a test

Testing upload. Restored options (with head on shoulders).

Dca Graphs

Drupal Wishlist

OK, after working on porting the BEMC webpages to Drupal over the past couple of days, I have a list of features that I hope we can implement in our Drupal installation:

  • Performance Optimizations:  I don't know if it's Drupal settings, PHP settings, or what, but this CMS is awfully slow at times.  I've set up a couple of Joomla installations at MIT that are much snappier.  This is the number one complaint I hear from other people.  It's going to be a running joke at EMC phone meetings before too long.
  • Trash Bin:  What do I do with pages that I decide I don't want?  I don't see any way to trash them, so for the moment the BEMC subsystem page has a "Trash Bin" of its own.  It sure would be nice if I could hide it, though.
  • Authorship changes:  Sometimes I create pages as kocolosk that would probably be better off as staruser pages, but I don't see any way to change the authorship myself, so I've resorted on at least one occasion to "trashing" the old page as best I could and then recreating a new page with a different author.  Very convoluted.
  • Restricted viewership:  I would like to post analysis results in Drupal, but I can't figure out how to restrict my pages to logged in users.  At the moment I have a folder called drupal_pics in protected/spin that I put plots in.  Users are then required to type a password to see them, but again, this is something Drupal must be able to handle by itself.

2006 Charged Pion ALL Projections

Statistical errors from MB || JP1 || JP2 from 2005:

Data - Monte Carlo Comparison, Take 2

I re-examined the data - pythia comparison in my analysis after some good questions were raised during my preliminary result presentation at today's Spin PWG meeting. In particular, there was some concern over occasionally erratic error bars in the simulation histograms and also some questions about the shape of the z-vertex distributions.
Set | Number | Field | Notes | QA