Quark Matter 2006 Plan

Quark Matter Plans

Timeline and task list for the strange and multi-strange particle spectra analysis of the Cu+Cu data.

 

Timeline

  • Quark Matter Abstract Submission Deadline - August 1st (Also for support applications)
  • Internal deadline to make corrected spectra available - end August
  • Quark Matter Abstract Selection - August 30th
  • Quark Matter Early registration & room reservation deadline - September 6th
  • Systematic and QA studies complete - end September
  • Proposed physics plots available for PWG discussion - mid October
  • Collaboration Meeting - November 7th
  • Quark Matter Conference - November 14th

Task List

Job | Comments | Time | Person(s) | Status
--- | --- | --- | --- | ---
Raw V0 Spectra | Cu+Cu 200: all centrality bins, highest possible pt, tune cuts | 2 weeks | Anthony | Ongoing
Embedding Request | Calculate required events | Now | Ant / Lee | To do
Embedding 1 | 1st priority: Λ, K0S, Ξ (Cu+Cu 200) | 4 weeks | Matt / Peter | In preparation
Event QA | Vertex inefficiency, fakes | - | Anthony | Done
Tracker QA | P05id/P06ib (TPT/ITTF) comparison, Λ analysis | - | Lee | To do
Feed-down | Ξ analysis (Cu+Cu 200) for Λ feed-down correction | - | Lee | To do
Corrected V0 Spectra | Needs: embedding, feed-down, systematic error study | 6 weeks | Anthony | To do
Thermal Fit | Also needs input from the spectra group | - | Ant / Sevil | To do
Multi-Strange Cu+Cu | Cu+Cu 200: Ξ, Ω & anti-particles | - | Marcelo / Jun | To do
Embedding 2 | Ξ, Ω, anti-Λ, anti-Ξ, anti-Ω | - | Matt / Peter | To do
Multi-Strange Au+Au | Year 4 Au+Au 200: higher stats → higher pt, finer centrality bins for comparison purposes | - | ??? | No personnel
Centrality | Redefine multiplicity cuts for centrality bins, redo Glauber calc. if required | - | Lee / Ant / other PWGs | To do
Extra things | Wishlist: Cu+Cu 62 GeV analysis, 22 GeV analysis | - | - | -

Please add comments, or edit the page and leave a message in the log, in particular if anyone would like to sign up for the open slots…

QM is over now so this list is no longer required.


Embedding Requirements Calculation


Here I go through a sample calculation, setting out the assumptions used.

Bottom-up calculation

Define desired error
The starting point is to define what statistical error we are aiming for on the correction for a particular pt bin in a spectrum. Obviously there is a contribution from the real data but here we are concerned with the statistical error from the embedding.
Say that we think a 5% error would be nice. That means that 400 reconstructed counts are required if the error is like √N. Actually it is a bit worse than that, because the numerator is constructed from more than one number with different weights, so more counts are required, ~500.
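A minimal sketch of this step in Python (the 5% target and the √N assumption are the ones stated above; variable names are just for illustration):

```python
# Counts needed for a given relative statistical error, assuming error ~ sqrt(N)/N.
target_rel_error = 0.05                            # aim for a 5% error per pt bin
n_if_pure_sqrtN = (1.0 / target_rel_error) ** 2    # = 400
n_reco_needed = 500                                # padded, since the correction mixes
                                                   # several counts with different weights
```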
Fold in likely efficiencies
The number that must be generated in that pt bin to achieve this then depends on the efficiency for that bin. For Au+Au minbias a typical efficiency plot might have efficiencies of approximately 5% for pt < 1 GeV, 10% for 1 < pt < 2 GeV, 15% for 2 < pt < 4 GeV and 40% for pt > 4 GeV. This is for a set of fairly loose cuts close to the default finder ones, loosening at pt = 4 because the finder cuts become less stringent at 3.5 GeV.
Therefore we find that the number generated per bin needs to be 10000, 5000, 3000 and 1250 in the pt ranges mentioned. Clearly the low pt part where the efficiency is lowest is driving the calculation.
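Roughly, in Python, using the illustrative efficiencies quoted above:

```python
# Generated counts needed per pt bin = required reconstructed counts / efficiency.
n_reco_needed = 500
efficiency = {"pt < 1": 0.05, "1 < pt < 2": 0.10, "2 < pt < 4": 0.15, "pt > 4": 0.40}
generated_per_bin = {r: round(n_reco_needed / e) for r, e in efficiency.items()}
# -> 10000, 5000, ~3300 and 1250, close to the rounded numbers quoted above;
#    the low-pt bins, where the efficiency is lowest, dominate the request.
```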
Effect of choice of bin size
For these numbers to be generated per bin we can ask how many particles per GeV we need. At higher pt we tend to use 500 MeV bins, but at lower pt 200 MeV bins are customary; I have even used 100 MeV bins in the d+Au analysis. Choosing 200 MeV bins leads to a requirement of 50000 particles per GeV at low pt, and so on. This is already looking like quite a large number…
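The per-GeV requirement at low pt then follows directly; a one-line check:

```python
# 200 MeV bins at low pt -> 5 bins per GeV, each needing ~10000 generated particles.
bins_per_gev = 1.0 / 0.2
generated_per_gev_low_pt = 10000 * bins_per_gev    # = 50000 particles per GeV
```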
Binning with centrality
We embed into minbias events and we'd like to have the correction as a function of the TPC multiplicity (equivalent to centrality). Normally we embed particles at a rate of 5% of the event multiplicity. The 50-60% bin is likely to be the most peripheral that is used in analysis. Here the multiplicity is small enough that we will only be embedding one particle per event. Therefore 50000 particles per GeV requires 50000 events in that particular centrality bin. The 50-60% bin is obviously around one tenth of the total events. I don't think we have a mechanism for choosing which events to embed into depending on their centrality so it means that 500k events per GeV are required.
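Putting the centrality argument in the same form (a sketch under the assumptions above: one embedded particle per event in the 50-60% bin, which holds roughly one tenth of the minbias events):

```python
# Events needed per GeV once the embedding is binned by centrality.
particles_per_gev = 50000
events_in_50_60_bin = particles_per_gev            # one embedded particle per event there
minbias_events_per_gev = events_in_50_60_bin * 10  # = 500000 minbias events per GeV
```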
Coverage in pt
We expect our spectra to go to at least 7 GeV, so it seems prudent to embed out to 10 GeV. For Λ these data might also be used for the proton feed-down analysis. This means that 5 million events are required!
Coverage in rapidity
Unfortunately we are not finished yet. We have previously used |y|<1.2 as our rapidity cut when generating, even though only |y|<0.5 is used in the analysis, so a further factor of 12/5 is required, giving a total of 12 million events per species!
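Chaining the pt reach and the rapidity dilution onto the previous number reproduces the headline total (same assumptions as above):

```python
# Total request: events per GeV x pt reach x rapidity dilution factor.
minbias_events_per_gev = 500_000
pt_reach_gev = 10                   # embed out to 10 GeV
rapidity_factor = 1.2 / 0.5         # generate |y|<1.2 but analyse |y|<0.5 -> 12/5
total_events = minbias_events_per_gev * pt_reach_gev * rapidity_factor
# -> 12,000,000 events per species
```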

Comments

This number of events is clearly unacceptably large. Matt has mentioned to me that we can do about 150 events per hour, so this represents about 80k CPU hours as well as an enormous data volume. Clearly we have to find ways to cut back the request somewhat. Some compromises are listed below; the cumulative arithmetic is sketched after the list.
  • Cut down rapidity range from |y|<1.2 to |y|<0.7 gaining factor 12/7 ≈ 1.7 → 7 million events
  • Settle for a 10% error in a bin rather than 5% gaining factor of 4 → 1.75 million events
  • Hope that efficiency is not a strong function of multiplicity, allowing us to combine mult. bins. Gain of factor 2? → 875k events
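A quick tally of the CPU estimate and the proposed compromises, applied in turn (a sketch with the factors quoted above, not a definitive plan):

```python
# Start from the 12M events per species derived above.
events = 12_000_000
cpu_hours = events / 150            # ~80,000 CPU hours at ~150 events/hour
events *= 7 / 12                    # |y|<1.2 -> |y|<0.7              -> ~7.0M
events /= 4                         # 10% error per bin instead of 5% -> ~1.75M
events /= 2                         # combine multiplicity bins       -> ~875k
```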