SVN Repository Access

We set up a Subversion repository at MIT to track a few pieces of software that many of us are using, but that don't fit into the STAR framework.

Browsing and Checking Out Code

http://deltag5.lns.mit.edu/viewvc/

will allow you to browse the contents of the repository.  You'll need to have a Subversion client installed in order to check out code.  The simplest way on a Mac is to do

fink install svn-client

although there are also binary .pkg installers floating around for most every platform if you'd prefer to go that route.  Then do

svn co http://deltag5.lns.mit.edu:8080/svn/modulename

Committing Changes

The web server doesn't do any authentication, so if you plan on committing changes to these packages you'll need to be added to the svnusers group on deltag5 and you'll also need to use ssh to get your working copy:

svn co svn+ssh://deltag5.lns.mit.edu/svnrep/modulename

In that case, make sure that your .bashrc on deltag5 adds /usr/local/bin to your $PATH.  Note that this method may ask you for your password as many as 4 times, so public-key authentication is your friend (see SSHKeychain for Macs).

For more information on Subversion (basically the successor to CVS), take a look at http://svnbook.red-bean.com/

Pulser Runs

Speaker : Vi Nham/Jim


Talk time : 15:30, Duration : 00:10

Status of the Detector

Speaker : Vi Nham


Talk time : 15:20, Duration : 00:10

Impressions on Nantes Meeting

Speaker : Jim/Howard


Talk time : 15:10, Duration : 00:10

Practice Talk for Berkeley Workshop

Speaker : Jonathan Bouchet


Talk time : 15:00, Duration : 00:10

How to run the transfers

The first step is to figure out what files you want to transfer and make some file lists for SRM transfers:

At PDSF, make the subdirectories ~/xfer, ~/hrm_g1, and ~/hrm_g1/lists.

Copy the files diskOrHpss.pl, ConfigModule.pm, and Catalog.xml from ~hjort/xfer into your xfer directory.
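If you prefer to script this setup, here is a minimal Python sketch of the same two steps; the directory and file names are exactly those listed above, and nothing else is assumed:

import os
import shutil

# Create the working directories used by the transfer scripts.
home = os.path.expanduser("~")
for d in ("xfer", "hrm_g1", os.path.join("hrm_g1", "lists")):
    os.makedirs(os.path.join(home, d), exist_ok=True)

# Copy the helper files from ~hjort/xfer into your own xfer directory.
src = os.path.expanduser("~hjort/xfer")
for name in ("diskOrHpss.pl", "ConfigModule.pm", "Catalog.xml"):
    shutil.copy(os.path.join(src, name), os.path.join(home, "xfer", name))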

SRM instructions for bulk file transfer to PDSF


These links describe how to do bulk file transfers from RCF to PDSF.

AOB

Speaker : All ( All )


Talk time : 12:45, Duration : 00:10

Removal of datasets on NFS disk

Speaker : Lidia Didenko ( BNL )


Talk time : 12:35, Duration : 00:10
Datasets removed:
  • auau200/hijing_382/b0_3/central/y2007/gheisha_on/p07ia
    pure hijing events without embedded B particle;
  • auau200/hijing_382/b0_3/central/y2004a/gheisha_on/trs_ie,

Current Simulation Production: status and plans

Speaker : Maxim Potekhin ( BNL )


Talk time : 12:25, Duration : 00:10
Current simulation wave: the 2007 status page is actively updated, and users can consult it daily.

Two major items for 2007:

  • Spin PWG request, quite large, 8M+ events total:

News from the front

Speaker : Stephen Trentalange ( UCLA )


Talk time : 12:10, Duration : 00:10

Embedding status and current activities

Speaker : Olga Barannikova ( UIC )


Talk time : 12:00, Duration : 00:10

Procedure


STAR-PMD calibration Procedure

Experimental High Energy Physics Group, Department of Physics, University of Rajasthan, Jaipur-302004

The present STAR PMD calibration methodology is as follows:

  1. The events are cleaned and hot cells are removed.
  2. Cells with no hits in their immediate neighbours are considered isolated hits and are stored in a TTree.
  3. The data for each cell, whenever it is found as an isolated cell, are collected, and the ADC distribution forms the MIP spectrum for that cell.
  4. The MIP statistics are then used for relative gain normalization.

Steps (1), (2), and (3) have been discussed in detail in the past. This writeup concentrates only on (4), i.e. the gain normalization procedure. It attempts to understand the variations in the factors affecting the gains, and to determine how frequently the gain factors should be determined.
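As an illustration of step (4) only, here is a short Python sketch of one way the relative gain factors could be formed from the per-cell MIP means; the grouping of cells into supermodules and the data structures are assumptions for the example, not the actual STAR-PMD code:

# Sketch of step (4): relative gain normalization from per-cell MIP spectra.
# mip_mean[cell] is assumed to hold the mean ADC of the isolated-cell (MIP)
# distribution collected in steps (1)-(3).

def relative_gains(mip_mean, cells_by_supermodule):
    """Return gain factor = <ADC>_cell / <ADC>_supermodule for each cell."""
    gains = {}
    for sm, cells in cells_by_supermodule.items():
        means = [mip_mean[c] for c in cells if c in mip_mean]
        if not means:
            continue
        sm_average = sum(means) / len(means)
        for c in cells:
            if c in mip_mean:
                gains[c] = mip_mean[c] / sm_average
    return gains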

Calibration


Systematic Uncertainty Studies

In the 2003+2004 jet cross section and A_LL paper we quoted a 5% systematic uncertainty on the absolute BTOW calibration.  For the 2005 jet A_LL paper there is some interest in reducing the size of this systematic.

I went back to the electron ntuple used to set the absolute gains and started making some additional plots.  Here's an investigation of E_{tower} / p_{track} versus track momentum.  I only included tracks passing directly through the center of the tower (R<0.003) where the correction from shower leakage is effectively zero.

Full set of electron cuts (overall momentum acceptance 1.5 < p < 20 GeV):

dedx>3.5 && dedx<4.5 && status==1 && np>25 && adc>2*rms && r<0.003 && id<2401

I forgot to impose a vertex constraint on these posted plots, but when I did require |vz| < 30 the central values didn't really move at all.
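For concreteness, here is a PyROOT sketch of how an E/p vs. p profile along these lines could be filled; the file name, ntuple name, and the branch names "ep" (tower E over track p) and "p" (track momentum) are assumptions, while the cut string is copied verbatim from above:

import ROOT

# Assumed file and ntuple names; only the cut string is taken from above.
f = ROOT.TFile.Open("electron_ntuple.root")
nt = f.Get("electrons")

cuts = ("dedx>3.5 && dedx<4.5 && status==1 && np>25 && "
        "adc>2*rms && r<0.003 && id<2401")

# Profile of E/p versus track momentum over the 1.5 < p < 20 GeV acceptance.
prof = ROOT.TProfile("ep_vs_p", "E/p vs p;p (GeV);E_{tower}/p_{track}",
                     25, 1.5, 20.0)
nt.Draw("ep:p>>ep_vs_p", cuts, "prof")
prof.Draw()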




Here are the individual slices in track momentum used to obtain the points on that plot:







Electrons with momentum up to 20 GeV were accepted in the original sample, but there are only ~300 of them above 6 GeV and the distribution is actually rather ugly.  Integrating over the full momentum range yields an E/p measurement of 0.9978 +- 0.0023, but as you can see the contributions from individual momentum slices scatter around 1.0 by as much as 4.5%.

Next Steps?  -- I'm thinking of slicing versus eta and maybe R (distance from center of tower).

Performance Benchmarks

I ran a couple of TStopwatch tests on the Run 5 common trees.  Here are the specs:

Hardware:  Core Duo laptop, 2.16 GHz

Trees:  805 runs, 26.2M events, 4.4 GB on disk

Languages:  CINT, Python, compiled C++

I also tested the impact of using a TEventList to select the ~11M JP1 and JP2 events needed to plot deta and dphi for pions and jets.  Here's a table of the results.  The times listed are CPU seconds / real seconds:

                 Chain init + TEventList generation    Process TEventList
CINT             156 / 247                             1664 / 1909
Python           156 / 257                             1255 / 1565
Compiled C++     154 / 249                              877 / 1209

I tried the Python code without using a TEventList.  The chain initialization dropped down to 50/70 seconds, but reading in all 26M events took me 1889/2183 seconds.  In the end the TEventList was definitely worth it, even though it took 3 minutes to construct one.
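For reference, here is a PyROOT sketch of the TEventList pattern described above; the tree name, file paths, and trigger variable names (jp1, jp2) are placeholders, not the actual names in the common trees or the reader:

import ROOT

# Build the chain (placeholder tree name and file paths).
chain = ROOT.TChain("commonTree")
chain.Add("trees/*.root")

timer = ROOT.TStopwatch()

# Generate the event list once -- this is the "Chain init + TEventList
# generation" column in the table above.
chain.Draw(">>jpEvents", "jp1 || jp2")
chain.SetEventList(ROOT.gDirectory.Get("jpEvents"))

# Later Draw/loop calls now only touch the selected ~11M events.
chain.Draw("deta")
timer.Print()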

Conclusions:
  1. Use a TEventList.  My selection criteria weren't very restrictive (event fired JP1 or JP2), but I cut my processing time by > 30%.
  2. I had already compiled the dictionaries for the various classes and the reader in every case, but this small macro still got a strong performance boost from compilation.  I was surprised to see that the Python code was closer to compiled C++ in performance than to CINT.

Performance of the Silicon Strip Detector of STAR -- "rehearsal"

Speaker : Jonathan Bouchet


Talk time : 11:20, Duration : 00:30

Follow-up on the CuCu re-production

Speaker : Jonathan Bouchet


Talk time : 11:10, Duration : 00:10

SSD Status Report

Speaker : Vi Nham


Talk time : 11:00, Duration : 00:10

Embedding description and policy (draft?)

Speaker : Jerome Lauret ( BNL )


Talk time : 16:00, Duration : 00:10