Strangeness

Strangeness (PWG pre-2008) is part of the Light flavor Spectra PWG

Lambda/K0S ratio - Year 2 and Year 4 Au+Au @ 200 GeV

Lambda/K0S ratio: The Lambda/K0S ratio in Year 2 and Year 4 Au+Au @ 200 GeV. The data to the left of the vertical line comes from Year 2, the data to the right from Year 4. This shows the extent and reach of the Year 4 data. The plot was first shown at Quark Matter 2006.

Quark Matter 2006 Plan

Timeline, task list for strange and multi-strange particle spectra analysis for Cu+Cu data.

Timeline

  • Quark Matter Abstract Submission Deadline - August 1st (Also for support applications)
  • Internal deadline to make corrected spectra available - end August
  • Quark Matter Abstract Selection - August 30th
  • Quark Matter Early registration & room reservation deadline - September 6th
  • Systematic and QA studies complete - end September
  • Proposed physics plots available for PWG discussion - mid October
  • Collaboration Meeting - November 7th
  • Quark Matter Conference - November 14th

Task List

Job | Comments | Time | Person(s) | Status
Raw V0 Spectra | Cu+Cu 200: all centrality bins, highest possible pt, tune cuts | 2 weeks | Anthony | Ongoing
Embedding Request | Calculate required events | Now | Ant / Lee | To do
Embedding 1 | 1st priority: Λ, K0S, Ξ (Cu+Cu 200) | 4 weeks | Matt / Peter | In preparation
Event QA | Vertex inefficiency, fakes | - | Anthony | Done
Tracker QA | P05id/P06ib (TPT/ITTF) comparison, Λ analysis | - | Lee | To do
Feed-down | Ξ analysis, Cu+Cu 200, for Λ feed-down correction | - | Lee | To do
Corrected V0 Spectra | Needs: embedding, feed-down, systematic error study | 6 weeks | Anthony | To do
Thermal Fit | Needs also input from the spectra group | - | Ant / Sevil | To do
Multi-Strange Cu+Cu | Cu+Cu 200: Ξ, Ω & anti-particles | - | Marcelo / Jun | To do
Embedding 2 | Ξ, Ω, anti-Λ, anti-Ξ, anti-Ω | - | Matt / Peter | To do
Multi-Strange Au+Au | Year 4 Au+Au 200: higher stats → higher pt, finer centrality bins for comparison purposes | - | ??? | No personnel
Centrality | Redefine multiplicity cuts for centrality bins, redo Glauber calc. if required | - | Lee / Ant / other PWGs | To do
Extra things | Wishlist: Cu+Cu 62 GeV analysis, 22 GeV analysis | - | - | -

Please add comments or edit, leaving a message in the log, in particular if anyone would like to sign up for the open slots…

QM is over now so this list is no longer required.

Embedding Requirements Calculation


Here I go through a sample calculation, setting out the assumptions used.

Bottom-up calculation

Define desired error
The starting point is to define what statistical error we are aiming for on the correction for a particular pt bin in a spectrum. Obviously there is a contribution from the real data, but here we are concerned with the statistical error from the embedding.
Say that we think a 5% error would be nice. That means that 400 reconstructed counts are required if the error is like √N. Actually it is a bit worse than that because the numerator is constructed from more than one number with different weights, so more counts are required, ~500.
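A one-line Python check of this counting argument (the extra ~25% is simply the padding described above for the weighted numerator, not a derived number):

    target_rel_error = 0.05
    n_required = 1.0 / target_rel_error**2   # 400 counts if the error goes as 1/sqrt(N)
    n_padded = round(1.25 * n_required)      # ~500, allowing for the weighted numerator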
Fold in likely efficiencies
The number that must be generated in that pt bin to achieve this then depends on the efficiency for that bin. For Au+Au minbias a typical efficiency plot might have efficiencies of approximately 5% for pt < 1 GeV, 10% for 1 < pt < 2 GeV, 15% for 2 < pt < 4 GeV and 40% for pt > 4 GeV. This is for a set of fairly loose cuts close to the default finder ones, loosening at pt = 4 because the finder cuts become less stringent at 3.5 GeV.
Therefore we find that the number generated per bin needs to be 10000, 5000, 3000 and 1250 in the pt ranges mentioned. Clearly the low pt part where the efficiency is lowest is driving the calculation.
Effect of choice of bin size
For these numbers to be generated per bin we can ask how many particles per GeV we need. At higher pt we tend to use 500 MeV bins, but at lower pt 200 MeV bins are customary; I have even used 100 MeV bins in the d+Au analysis. Choosing 200 MeV bins leads to a requirement for 50000 particles per GeV at low pt, etc. This is already looking like quite a large number…
Binning with centrality
We embed into minbias events and we'd like to have the correction as a function of the TPC multiplicity (equivalent to centrality). Normally we embed particles at a rate of 5% of the event multiplicity. The 50-60% bin is likely to be the most peripheral that is used in analysis. Here the multiplicity is small enough that we will only be embedding one particle per event, so 50000 particles per GeV requires 50000 events in that particular centrality bin. The 50-60% bin is obviously around one tenth of the total events. I don't think we have a mechanism for choosing which events to embed into depending on their centrality, so it means that 500k events per GeV are required.
Coverage in pt
We expect our spectra to go to at least 7 GeV, so it seems prudent to embed out to 10 GeV. For Λ these data might also be used for the proton feed-down analysis. This means that 5 million events are required!
Coverage in rapidity
Unfortunately we are not finished yet. We have previously used |y|<1.2 as our rapidity cut when generating, even when using |y|<0.5 in the analysis, so a further factor of 12/5 is required, giving a total of 12 million events per species!
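For reference, the whole bottom-up chain can be written out as a short Python sketch; the efficiencies, bin width, centrality fraction and pt reach are simply the assumed values quoted above, not official numbers:

    # Rough embedding-request estimate following the assumptions in the text above
    target_counts = 500                   # reconstructed counts per pt bin for ~5% error (400 padded to ~500)

    # assumed typical Au+Au minbias efficiencies per pt range (GeV)
    efficiency = {"pt<1": 0.05, "1<pt<2": 0.10, "2<pt<4": 0.15, "pt>4": 0.40}
    generated_per_bin = {r: target_counts / e for r, e in efficiency.items()}
    # -> 10000, 5000, ~3300, 1250 per bin; the low-pt range drives everything below

    bins_per_gev = 5                      # 200 MeV bins at low pt
    particles_per_gev = generated_per_bin["pt<1"] * bins_per_gev   # 50 000 particles per GeV

    # 50-60% centrality: one embedded particle per event, and that bin is ~1/10 of minbias
    events_per_gev = particles_per_gev * 10                        # 500 000 minbias events per GeV

    pt_max = 10                           # embed out to 10 GeV
    events_total = events_per_gev * pt_max                         # 5 million events

    rapidity_factor = 1.2 / 0.5           # generate in |y| < 1.2, analyse in |y| < 0.5
    events_per_species = events_total * rapidity_factor            # 12 million events per species
    print(f"{events_per_species:,.0f} events per species")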

Comments

This number of events is clearly unacceptably large. Matt has mentioned to me that we can do about 150 events per hour, so this represents about 80k CPU hours as well as an enormous data volume. Clearly we have to find ways to cut back the request somewhat. Some compromises are listed below, with a quick check of the numbers after the list.
  • Cut down rapidity range from |y|<1.2 to |y|<0.7 gaining factor 12/7 ≈ 1.7 → 7 million events
  • Settle for a 10% error in a bin rather than 5% gaining factor of 4 → 1.75 million events
  • Hope that efficiency is not a strong function of multiplicity, allowing us to combine mult. bins. Gain of factor 2? → 875k events
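Continuing the sketch above, the CPU estimate and the cumulative effect of these compromises can be checked in a few more lines (the 150 events/hour throughput is the figure quoted by Matt):

    events_per_species = 12_000_000
    cpu_hours = events_per_species / 150               # ~80k CPU hours at 150 events per hour

    after_rapidity = events_per_species / (1.2 / 0.7)  # |y| < 0.7:                  ~7 million events
    after_error = after_rapidity / 4                   # 10% rather than 5% error:   ~1.75 million events
    after_mult = after_error / 2                       # combined multiplicity bins: ~875k events
    print(f"{cpu_hours:,.0f} CPU hours; reduced request ~{after_mult:,.0f} events")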

Strangeness Phone Meeting 2007/5/14

15 May 2007 07:54:37

Attending: Matt, Helen, Jun, Marcelo, Marek, Ant, Lee, Christine, Betty, Sevil

Time | Talk | Presenter
12:00 | SQM Analyses Reports (00:25, 0 files) | Betty, Marek, Christine, Jiaxu, Ant, Jun
12:25 | Kaon discrepancy status (00:15, 2 files) | Lee, Jun
12:40 | EPOS and STAR d+Au data (00:15, 0 files) | Sevil
12:55 | AOB (00:05, 0 files) |

SQM Analysis Reports

Betty: Working on getting v2 for the 0-12% sample. This would be for h±, using nq scaling as before to estimate the Omega v2. Aihong is providing assistance.

Marek: Aim is to reproduce Jana's delta-phi result, but with an additional anti-merging correction, and to get delta-eta projections. Has switched to using 'Yale Trees' in order to go to lower pt (reprocessing through RCF was taking too long). This would be for h-h only, as there is currently insufficient V0 daughter info to do the anti-merging cut for V0-h correlations; this should be about to change though. [I have that 7M out of 20M events were done already by Marek and that a new run to get a set of Yale Trees with all V0 info is ready. I assume these are two separate things.] A fall-back solution of using the 'mirror image' technique is available [does this work for delta eta though?].

Christine: Working to get the efficiency correction for identified associated particles (i.e. V0). Has example code from Ant to extract efficiencies from the flat-pt embedding done for Cu+Cu. Currently that covers eta ±0.5, and eta ±1 is required; will scale the Au+Au embedding by the result to extend to larger eta. There should anyway be new Cu+Cu embedding coming.

Jiaxu: Was unavailable for the meeting (time zone issue!) but sent an update later to say that he is working on using the highest pt bin (6,9) GeV and some simulations.

Ant: Working to finish the feed-down correction and compare with Matt's method (working out the Xi efficiency as a function of the Lambda pt vs scaling to the Xi distribution in embedding [this distinction needs a better explanation sometime]). Also checking the errors on dN/dy from the extrapolation of the pt spectra to pt=0.

Jun: Student working on incorporating the SVT/SSD into the Xi and Omega analysis. Presumably not practical on the timescale of SQM though [production only just getting underway and need to develop the embedding chain to include the SVT/SSD].

Comment from Matt: Main job is to get the embedding re-run, if this is indeed necessary.

Kaons.

Working Documents

A page to keep working documents of the Strangeness Group

1. A Beam Use Request sent to the Spokesman on 29th January 2008 - doc and pdf formats below
2. Discussion of enhancement plot for Cu+Cu and Au+Au 200 GeV - ppt and pdf formats below
3. Highlight slide for Tim Hallman's QM plenary talk - ppt and pdf formats below

4. Strangeness contribution for RHIC Users meeting overview.