ProductionMinBias : second pass

  1. For a summary of the changes in the code, look at these slides : here
  2. Study/check of the cut we apply on the primary vertex Z resolution : here
  3. The new TTree format is the following :

struct D0TREE {
Float_t Vz;
Int_t NTrk;
Int_t gRefMult;
Int_t Candidates;
Float_t PtD0[50000];
Float_t PD0[50000];
Float_t MassD0[50000];
Float_t EtaD0[50000];
Float_t RapD0[50000];
Float_t PtKaon[50000];
Float_t PtPion[50000];
Float_t PKaon[50000];
Float_t PPion[50000];
Int_t ChargeKaon[50000];
Int_t ChargePion[50000];
Int_t SiKaon[50000];
Int_t SiPion[50000];
Float_t dEdxKaon[50000];
Float_t dEdxPion[50000];
Float_t ndEdxKaon[50000];
Float_t ndEdxPion[50000];
Float_t dcaXYKaon[50000];
Float_t dcaXYPion[50000];
Float_t dcaZKaon[50000];
Float_t dcaZPion[50000];
Float_t PhiKaon[50000];
Float_t PhiPion[50000];
Float_t SigmaDcaXYKaon[50000];
Float_t SigmaDcaXYPion[50000];
Float_t SigmaDcaZKaon[50000];
Float_t SigmaDcaZPion[50000];
Float_t DcaTrackTXY[50000];
Float_t DcaTrackTZ[50000];
Float_t slength[50000];
Float_t dslength[50000];
Float_t probability[50000];
Float_t Lxy[50000];
Float_t CosPointing[50000];
Float_t thetaGJ[50000];
Float_t RadiusKaon[50000];
Float_t RadiusPion[50000];
};
D0TREE D0Tree;
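As a reference, such a struct maps onto a flat TTree with variable-length arrays, with Candidates acting as the counter. Below is only a minimal sketch of the booking; the tree pointer name (tree) and the choice of branch names equal to the member names are assumptions, not the exact code used in the production.

TTree *tree = new TTree("D0Tree","D0 candidates");
tree->Branch("Vz",         &D0Tree.Vz,         "Vz/F");
tree->Branch("NTrk",       &D0Tree.NTrk,       "NTrk/I");
tree->Branch("gRefMult",   &D0Tree.gRefMult,   "gRefMult/I");
tree->Branch("Candidates", &D0Tree.Candidates, "Candidates/I");      // counter, booked before the arrays
tree->Branch("PtD0",       D0Tree.PtD0,        "PtD0[Candidates]/F");
tree->Branch("MassD0",     D0Tree.MassD0,      "MassD0[Candidates]/F");
// ... one Branch() call per array member, each with a [Candidates]/F or [Candidates]/I leaf list ...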

  4. We also modified the histograms we booked :

TH1D *NumEv           = new TH1D("NumEv","",5,0,5);
TH2D *priVtxXY        = new TH2D("priVtxXY","Primary vertex x y",100,-2.5,2.5,100,-2.5,2.5);
TH1D *priVtxZA        = new TH1D("priVtxZA","Primary vertex z before pos. and res. cuts",200,-50,50);
TH1D *priVtxZB        = new TH1D("priVtxZB","Primary vertex z after pos. and res. cuts",200,-50,50);
TH1D *priVtxSigZA     = new TH1D("priVtxSigZA","Primary vertex res. z before pos. and res. cuts",200,0,.05);
TH1D *priVtxSigZB     = new TH1D("priVtxSigZB","Primary vertex res. z after pos. and res. cuts",200,0,.05);
TH2D *priVtxXMult     = new TH2D("priVtxXMult","Primary vertex x vs. gRefMult",500,0,1000,100,-2.5,2.5);
TH2D *priVtxYMult     = new TH2D("priVtxYMult","Primary vertex y vs. gRefMult",500,0,1000,100,-2.5,2.5);
TH2D *priVtxSigXMult  = new TH2D("priVtxSigXMult","Primary vertex res.x vs.gRefMult",500,0,1000,50,0,.05);
TH2D *priVtxSigYMult  = new TH2D("priVtxSigYMult","Primary vertex res.y vs.gRefMult",500,0,1000,50,0,.05);
TH2D *priVtxSigZMult  = new TH2D("priVtxSigZMult","Primary vertex res.z vs.gRefMult",500,0,1000,50,0,.05);
TH2D *radiusKAON      = new TH2D("radiusKAON","radius of first hit on global track for kaon candidates",600,-30,30,600,-30,30);
TH2D *radiusPION      = new TH2D("radiusPION","radius of first hit on global track for pion candidates",600,-30,30,600,-30,30);
TH2D *dEdxKAON        = new TH2D("dEdxKAON","dEdx of kaon candidates",500,0,10,500,0,10);
TH2D *dEdxPION        = new TH2D("dEdxPION","dEdx of pion candidates",500,0,10,500,0,10);
TH2D *SiKaonMap0      = new TH2D("SiKaonMap0","#eta vs #phi of kaon candidates with N=0 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiKaonMap1      = new TH2D("SiKaonMap1","#eta vs #phi of kaon candidates with N=1 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiKaonMap2      = new TH2D("SiKaonMap2","#eta vs #phi of kaon candidates with N=2 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiKaonMap3      = new TH2D("SiKaonMap3","#eta vs #phi of kaon candidates with N=3 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiKaonMap4      = new TH2D("SiKaonMap4","#eta vs #phi of kaon candidates with N=4 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiKaonMapNorm   = new TH2D("SiKaonMapNorm","#eta vs #phi of kaon candidates",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMap0      = new TH2D("SiPionMap0","#eta vs #phi of pion candidates with N=0 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMap1      = new TH2D("SiPionMap1","#eta vs #phi of pion candidates with N=1 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMap2      = new TH2D("SiPionMap2","#eta vs #phi of pion candidates with N=2 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMap3      = new TH2D("SiPionMap3","#eta vs #phi of pion candidates with N=3 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMap4      = new TH2D("SiPionMap4","#eta vs #phi of pion candidates with N=4 silicon hits",16,-3.2,3.2,10,-1,1);
TH2D *SiPionMapNorm   = new TH2D("SiPionMapNorm","#eta vs #phi of pion candidates",16,-3.2,3.2,10,-1,1);
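Presumably the per-N maps are meant to be normalized by the all-candidates ("Norm") map; in any case, the fraction of candidates with N silicon hits per (phi, eta) cell can be obtained with standard TH2D methods. A minimal sketch (the output histogram name is arbitrary) :

// Fraction of kaon candidates with exactly 2 silicon hits, per (phi, eta) cell
TH2D *fracKaon2 = (TH2D*)SiKaonMap2->Clone("fracKaon2");
fracKaon2->Divide(SiKaonMapNorm);          // bin-by-bin ratio N=2 / all candidates
fracKaon2->SetTitle("fraction of kaon candidates with N=2 silicon hits");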
 

The following plot shows the track pairs per event, for a given file and given cuts.

These cuts were the same as in the final production, but there I kept only the K(-)Pi(+) combination; since we will now save all combinations, the maximum will increase.

I set the size of the arrays that store the D0 candidates to 50000, so one has to check beforehand the maximum number of D0 candidates per event to prevent a segmentation fault.
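A simple way to protect against this is a guard when filling; a minimal sketch, where kMaxCand is an assumed name and the exact fill location is not the one in the code :

static const Int_t kMaxCand = 50000;                 // must match the array sizes in D0TREE
// inside the candidate loop, before filling the arrays at index 'cand' :
if (cand >= kMaxCand) {
  cout << "Warning: more than " << kMaxCand << " D0 candidates in this event, skipping the rest" << endl;
  break;                                             // stop filling to avoid writing past the arrays
}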

  • 05/20 : update to include the D0 candidate azimuth angle and eventId in the Tree
  1. eventId

I fill the EventId via :

Int_t eveId = EventId[0];

which refers to (from the MuDst tree) :

const Int_t*& EventId  = iter("MuEvent.mEventInfo.mId");
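For context, this is a minimal sketch of how these branches are hooked up and read per event with TTreeIter; the Next()-style event loop is an assumption based on the usual TTreeIter pattern, and the file is just an example.

TTreeIter iter;
iter.AddFile("st_physics_8120081_raw_1030012.MuDst.root");    // any MuDst file
const Int_t*& EventId = iter("MuEvent.mEventInfo.mId");       // event number
const Int_t*& RunId   = iter("MuEvent.mRunInfo.mRunId");      // run number
while (iter.Next()) {                                         // loop over events (assumed API)
  Int_t eveId = EventId[0];
  Int_t run   = RunId[0];
  // fill D0Tree.EventId and D0Tree.RunId here
}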

The following plot shows the eventId distribution for the file /star/data21/reco/2007ProductionMinBias/FullField/P08ic/2007/120/8120081/st_physics_8120081_raw_1030012.MuDst.root

Each bin has 1 entry because each eventId is filled once.

The following plot shows the eventId vs zvertex

 

  • test on more than 1 file

I ran over 5 files :

0001 -  TTreeIter::AddFile /star/data22/reco/2007ProductionMinBias/FullField/P08ic/2007/123/8123074/st_physics_8123074_raw_1030010.MuDst.root
0002 -  TTreeIter::AddFile /star/data22/reco/2007ProductionMinBias/FullField/P08ic/2007/123/8123074/st_physics_8123074_raw_1030011.MuDst.root
0003 -  TTreeIter::AddFile /star/data22/reco/2007ProductionMinBias/FullField/P08ic/2007/123/8123074/st_physics_8123074_raw_1030013.MuDst.root
0004 -  TTreeIter::AddFile /star/data22/reco/2007ProductionMinBias/FullField/P08ic/2007/123/8123074/st_physics_8123074_raw_1030038.MuDst.root
0005 -  TTreeIter::AddFile /star/data22/reco/2007ProductionMinBias/FullField/P08ic/2007/123/8123074/st_physics_8123074_raw_1040018.MuDst.root

The run number is filled via :

Int_t run   = RunId[0];

which refers to :

 const Int_t*&      RunId  = iter("MuEvent.mRunInfo.mRunId");

The plot for the RunId is below :

We see that all entries in the tuple have the same value, 8123074, which is expected since I ran over 5 files from run number 8123074.

The next plot shows the run Id vs Event Id

We see a single entry on the Y-axis (run number) and a few groups that correspond to the 5 files.

If I zoom in on the last group, we see that the eventId values are unique (as expected).

  2. Azimuth angle of D0

This angle is calculated via :

Float_t aziD0  = TMath::ATan2(PP[0].Y(),PP[0].X());

where PP[0].Y() and PP[0].X() are the components of the D0 4-vector (which is the sum of the 4-vectors of the kaon and pion candidates).

The distribution for the real data is here :

 

I checked with one of the pure D0 samples, where the D0 were generated flat in azimuth (so flat in [-2pi ; 2pi]).

For this sample we see that the azimuth angle of the reconstructed D0 is flat (as expected).

 

  • 05/21 : update of the TTree structure

struct D0TREE {
Int_t RunId;
Int_t EventId;

Float_t Vz;
Int_t NTrk;
Int_t gRefMult;
Int_t Candidates;
Float_t PtD0[50000];
Float_t PD0[50000];
Float_t MassD0[50000];
Float_t EtaD0[50000];
Float_t RapD0[50000];
Float_t AziD0[50000];
Float_t PtKaon[50000];
Float_t PtPion[50000];
Float_t PKaon[50000];
Float_t PPion[50000];
Int_t ChargeKaon[50000];
Int_t ChargePion[50000];
Int_t SiKaon[50000];
Int_t SiPion[50000];
Float_t dEdxKaon[50000];
Float_t dEdxPion[50000];
Float_t ndEdxKaon[50000];
Float_t ndEdxPion[50000];
Float_t dcaXYKaon[50000];
Float_t dcaXYPion[50000];
Float_t dcaZKaon[50000];
Float_t dcaZPion[50000];
Float_t PhiKaon[50000];
Float_t PhiPion[50000];
Float_t SigmaDcaXYKaon[50000];
Float_t SigmaDcaXYPion[50000];
Float_t SigmaDcaZKaon[50000];
Float_t SigmaDcaZPion[50000];
Float_t DcaTrackTXY[50000];
Float_t DcaTrackTZ[50000];
Float_t slength[50000];
Float_t dslength[50000];
Float_t probability[50000];
Float_t Lxy[50000];
Float_t CosPointing[50000];
Float_t thetaGJ[50000];
Float_t RadiusKaon[50000];
Float_t RadiusPion[50000];
};
D0TREE D0Tree;

  • explanation of how the phi of the D0 candidate is calculated :

In the code, we have 2 successive loops over primary tracks.

We always use the global tracks associated with these primary tracks and save their momentum components :

p[0] = TVector3(pXGl[kg],pYGl[kg],pZGl[kg]);

p[1] = TVector3(pXGl[ig],pYGl[ig],pZGl[ig]);

Then, when we 'build' the D0 candidate, we first build 2 TLorentzVectors with the momentum of each daughter :

p4[0][0].SetVectMag(p[0],amK);

p4[1][0].SetVectMag(p[1],amPi);            

Here amK and amPi refer to the masses of the kaon and pion candidates.

Finally, the D0 candidate is the sum of these 2 TLorentzVectors :

PP[0]  = p4[0][0];
PP[0] += p4[1][0];

And we get access to the D0 momentum components with :

momentum along X : PP[0].X()

momentum along Y : PP[0].Y()

momentum along Z : PP[0].Z()

The azimuthal angle is then : Float_t aziD0  = TMath::ATan2(PP[0].Y(),PP[0].X());
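Putting these pieces together, here is a minimal self-contained sketch (the momentum values are made up purely for illustration) showing that the azimuth from ATan2 is the same as TLorentzVector::Phi() :

#include "TVector3.h"
#include "TLorentzVector.h"
#include "TMath.h"
#include <iostream>

void d0PhiSketch() {
  const Double_t amK  = 0.493677;               // kaon mass (GeV/c^2)
  const Double_t amPi = 0.139570;               // pion mass (GeV/c^2)
  TVector3 pK (0.4, -0.2, 0.3);                 // kaon global-track momentum (made-up values)
  TVector3 pPi(-0.1, 0.5, -0.2);                // pion global-track momentum (made-up values)
  TLorentzVector p4K, p4Pi;
  p4K.SetVectMag (pK,  amK);                    // 4-vector from 3-momentum + mass
  p4Pi.SetVectMag(pPi, amPi);
  TLorentzVector PP = p4K + p4Pi;               // D0 candidate 4-vector
  Float_t aziD0 = TMath::ATan2(PP.Y(), PP.X());
  std::cout << " azimuth from ATan2 = " << aziD0
            << "  from Phi() = " << PP.Phi() << std::endl;
}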

 

  • The location of the files and code is :

/star/institutions/ksu/bouchet/PROD_ANAMEETING_JUNE10/test

 

  • 05/31 : attempt to apply cuts to speed up the macro

  • the cut I have changed is to require that the first hit belongs to the 1st or 2nd layer of the SVT
  • the cuts I'm working on are :
  1. cut on dca : we apply an upper limit on the DCA transverse to the PV, set at 0.5 cm

The following plot shows the transverse DCA for tracks with N=2, 3, 4 and all silicon hits (cumulated), from which the limits on the transverse DCA can be read.

In some tests I set this cut to |dcaXY|<0.15 cm ; it loses some entries for the case of N=2 silicon hits but is fine as soon as we require 3 silicon hits.

2. change the cut on the number of fitted TPC hits to a cut on the ratio (fit/poss)

The plot shows the correlation between the number of fitted TPC hits and the ratio fitted/poss.

In the current cuts, we require NFitTPC>20 ; however, a cut often used in analyses is the ratio Fit/Poss>0.51, to avoid split tracks in the TPC.

However when I looked at the number of tracks for the different cuts :

a. no cut --> number of entries = 2977635

b. cut on NFitTPC>20 --> number of entries = 2725467

c. cut on NFitTPC/NPossTPC>0.51 --> number of entries = 2955565

So this option is not good, since the number of tracks kept is higher than with the NFitTPC cut, meaning we don't decrease the number of tracks per loop.
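For reference, a minimal sketch of what the two alternatives look like inside the track loop; the branch names NHitsFitTpc and NHitsPossTpc are assumptions, not the actual names in the code :

// Option b : cut on the number of fitted TPC hits
if (NHitsFitTpc[kg] <= TpcCut) continue;                        // TpcCut = 20

// Option c : cut on the ratio of fitted to possible TPC hits
Float_t ratioTpc = (Float_t)NHitsFitTpc[kg] / NHitsPossTpc[kg];
if (ratioTpc <= TpcRatioCut) continue;                          // TpcRatioCut = 0.51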

3. Changes in the code

I changed some lines in the code in order to remove, earlier in the process, the tracks that don't pass the cuts.

  • For example, we were calculating the ndEdx after the 2 track loops ; however, as it is a track-level cut, I moved this calculation into the track loop itself (with the other cuts based on pT, etc...)

With this, if a track in the first loop has to be removed because its ndEdx is too large, it is cut right away and we do not process the iteration over the second track loop (see the sketch below).
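A minimal sketch of this restructuring; the loop bounds (nGlobalTracks), the helper computeNdEdx() and the cut value ndEdxCut are hypothetical names used only for illustration :

// First (outer) track loop : apply all track-level cuts here
for (Int_t kg = 0; kg < nGlobalTracks; kg++) {
  // ... existing track-level cuts : pT, P, eta, silicon hits, ...
  Float_t ndEdxK = computeNdEdx(kg);           // hypothetical helper : ndEdx of track kg
  if (ndEdxK > ndEdxCut) continue;             // too large : reject now, never enter the inner loop
  // Second (inner) track loop : only reached by tracks that passed all track-level cuts
  for (Int_t ig = 0; ig < nGlobalTracks; ig++) {
    // ... pair-level cuts, TCFIT, D0 candidate building, tree filling ...
  }
}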

  • I've replaced the explicit calculation of the D0 candidate variables by direct calls to the TLorentzVector methods.

e.g. instead of doing, for the rapidity and the phi of the D0 :

float rapD0                 = 0.5 * log((PP[0].E() + PP[0].Z())/(PP[0].E() - PP[0].Z()));
Float_t aziD0               = TMath::ATan2(PP[0].Y(),PP[0].X());

we can do directly :

D0Tree.RapD0[cand]          = PP[0].Rapidity();
D0Tree.AziD0[cand]          = PP[0].Phi();

 

CHECK :

code :

Float_t TMomentum           = TMath::Sqrt(PP[0].X()*PP[0].X() + PP[0].Y()*PP[0].Y());
Float_t theta               = TMath::ATan2(TMomentum,PP[0].Z());
Float_t ETA                 = -log(TMath::Tan(theta/2.));
float rapD0                 = 0.5 * log((PP[0].E() + PP[0].Z())/(PP[0].E() - PP[0].Z()));
Float_t aziD0               = TMath::ATan2(PP[0].Y(),PP[0].X());
float ptdo                  = TMath::Sqrt(PP[0].X()*PP[0].X() + PP[0].Y()*PP[0].Y());
float pdo                   = TMath::Sqrt(PP[0].X()*PP[0].X() + PP[0].Y()*PP[0].Y()+PP[0].Z()*PP[0].Z());
cout << " from calculation,   eta = " << ETA << " rapidity = " << rapD0 << " azimuth = " << aziD0 << " pt = " << ptdo << " P = " << pdo << endl;
cout << " from TLorentzVector eta = " << PP[0].PseudoRapidity() << " rapidity = " << PP[0].Rapidity() << " Azimuth = " << PP[0].Phi() << " Pt = " << PP[0].Perp() << " P = " << PP[0].P() << endl;

results :

from calculation,   eta = -1.19778 rapidity = -0.361881 azimuth = 1.97091 pt = 0.412474 P = 0.745468
 from TLorentzVector eta = -1.19778 rapidity = -0.361881 Azimuth = 1.97091 Pt = 0.412474 P = 0.745468
 
 from calculation,   eta = -0.133349 rapidity = -0.0726226 azimuth = -0.736206 pt = 1.07704 P = 1.08663
 from TLorentzVector eta = -0.133349 rapidity = -0.0726226 Azimuth = -0.736206 Pt = 1.07704 P = 1.08663
 
 from calculation,   eta = 0.102206 rapidity = 0.0482051 azimuth = -0.770504 pt = 0.975828 P = 0.980929
 from TLorentzVector eta = 0.102206 rapidity = 0.0482051 Azimuth = -0.770504 Pt = 0.975828 P = 0.980929
 

Conclusion --> it gives the same results.

 

  • 06/01 : update on benchmarking :

latest test :
I've run some jobs with 2 different cuts.
The cuts are the same except that in one I required the radius of the first hit on the track to be < 9 cm --> this keeps only tracks whose first hit is in the first (innermost) layer of the SVT, but there is still the possibility of tracks with 2 hits.

In the second I required the radius of the first hit on the track to be < 13 cm --> this keeps only tracks whose first hit belongs to the first or second layer of the SVT.

With the first cut (radius < 9 cm), a job with a single file takes ~ 30 mn (some take less and some take a bit more time).

With the second cut (radius < 13 cm), jobs take longer because this cut is looser.
The average time is 2 hours (but the time varies from 1h30 to 2h30).

The cut on P>0.8 GeV/c that Jaiby mentioned does not help much, because we already have a cut on pT>0.5 GeV/c and, since in general P > pT, the pT cut already does most of the job.

The cut on radius<9 cm seems efficient, since I got ~ 100k events from the jobs started yesterday evening.

I also submitted jobs not day by day but per run Id, because if I submitted jobs for all the files of a given day, since we run with a single file per job, it would submit 2 to 5k jobs and I don't know if there is a limit per user.

To recall, the cuts are :

static const Double_t pTCut          = 0.5;    // transverse momentum cut
static const Double_t PCut           = 0.7;    //  momentum cut
static const Double_t mKpiMin        = 1.2;  // min mass of (Kpi) association
static const Double_t mKpiMax        = 2.2;  // max mass of (Kpi) association
static const Double_t DcaCut         = 0.5; // single track DCA to PV
static const Int_t    TpcCut         = 20;   // TPC hits fitted
static const Float_t  TpcRatioCut    = 0.51; // TPC hits fitted / TPC hits Poss
static const Double_t TrackLengthCut = 40;   // min value for dEdxTrackLength
static const Int_t    SiCut          = 2;    // (SVT+SSD) hits fitted
static const Float_t  RadiusCut      = 13.0;
static const Float_t  SigmaPionCut   = 3;    // nsigma for pion
static const Float_t  SigmaKaonCut   = 3;    // nsigma for kaon
static const Double_t EtaCut         = 1.0;  // track pseudorapidity
static const Double_t zcut           = 10;    // zvertex cut
static const Double_t PrimZResCut    = 0.02; // zvertex resolution cut
static const Double_t slengthCut     = .1;   // max value of decaylength calculated by TCFIT
static const Double_t dslengthCut    = .1;   // max value of the error associated to the decaylength from TCFIT
static const Double_t ProbCut        = .01;  // min value of probability of FIT
static const Int_t    writeHisto     = 1;    // flag to write histos
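To make the track-level part of these cuts concrete, here is a minimal sketch of how they could be applied inside the track loop; the per-track variables (pt, p, eta, nFitTpc, dcaXY, nSiHits, radiusFirstHit, nSigmaKaon) are assumed names, not the actual ones in the code :

// Sketch of the track-level selection (assumed variable names)
if (pt < pTCut)                            continue;   // pT > 0.5 GeV/c
if (p  < PCut)                             continue;   // P > 0.7 GeV/c
if (TMath::Abs(eta) > EtaCut)              continue;   // |eta| < 1
if (nFitTpc <= TpcCut)                     continue;   // > 20 fitted TPC hits
if (TMath::Abs(dcaXY) > DcaCut)            continue;   // |transverse DCA to PV| < 0.5 cm
if (nSiHits < SiCut)                       continue;   // at least 2 (SVT+SSD) fitted hits
if (radiusFirstHit > RadiusCut)            continue;   // first hit inside 13 cm
if (TMath::Abs(nSigmaKaon) > SigmaKaonCut) continue;   // kaon PID within 3 sigma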
 

  • plot phi vs Z and phi vs eta, for the first hit on track

Taking the global track, we can plot the efficiency (= active ladders) in the silicon layers as a function of phi vs Z of the  first hit, or phi vs eta.

Here are the 2 plots for day 110 :

phi vs Z :

phi vs eta :
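As an illustration, such maps can be filled from the position of the first hit on the global track; a minimal sketch, where the histogram binning, the branch name GlobalTracks_mFirstPoint_mX3 (for the z coordinate) and the track index ig are assumptions :

TH2D *phiVsZFirstHit   = new TH2D("phiVsZFirstHit","#phi vs Z of first hit",200,-30,30,64,-3.2,3.2);
TH2D *phiVsEtaFirstHit = new TH2D("phiVsEtaFirstHit","#phi vs #eta of first hit",100,-1.5,1.5,64,-3.2,3.2);

TVector3 firstHit(GlobalTracks_mFirstPoint_mX1[ig],
                  GlobalTracks_mFirstPoint_mX2[ig],
                  GlobalTracks_mFirstPoint_mX3[ig]);     // mX3 (z) is an assumed branch name
phiVsZFirstHit  ->Fill(firstHit.Z(),   firstHit.Phi());  // phi on the Y axis, Z on the X axis
phiVsEtaFirstHit->Fill(firstHit.Eta(), firstHit.Phi());  // phi on the Y axis, eta on the X axis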

  •  logic for the radius of first hit / Number of silicon hits :

code

Float_t radiusPi   = TMath::Sqrt(GlobalTracks_mFirstPoint_mX1[ig]*GlobalTracks_mFirstPoint_mX1[ig]+GlobalTracks_mFirstPoint_mX2[ig]*GlobalTracks_mFirstPoint_mX2[ig]);   // radius of the first hit on the pion global track (radiusK is computed analogously for the kaon track)
Bool_t conditionP1 = (NoFSvtSsdHits[1]==2 && radiusK<RadiusCutSvt1);   // exactly 2 silicon hits : first hit inside the tighter radius (~9 cm)
Bool_t conditionP2 = (NoFSvtSsdHits[1]>2  && radiusK<RadiusCutSvt2);   // 3 or more silicon hits : first hit inside the looser radius (~13 cm)
Bool_t conditionP3 = conditionP1 || conditionP2;
if (!conditionP3) continue;

results (output of the code)

 NSiliconHits = 2 radiusK = 7.65261
 -->after NSiliconHits = 2 radiusK = 7.65261
ok, because the radius is <9 cm

 NSiliconHits = 2 radiusK = 6.21886
 -->after NSiliconHits = 2 radiusK = 6.21886
idem

 NSiliconHits = 2 radiusK = 15.7567
not kept because radius > 9 cm

 NSiliconHits = 2 radiusK = 10.8635
idem

 NSiliconHits = 2 radiusK = 9.9388
idem, the radius is just above 9 cm so it is not kept

 NSiliconHits = 2 radiusK = 23.3316
interesting : tracks with 2 hits in the SSD = overlap of ladders

 NSiliconHits = 2 radiusK = 7.35059
 -->after NSiliconHits = 2 radiusK = 7.35059
kept

  This is confirmed by the following plot, showing the radius of the first hit vs the number of Si hits :

  1.  tracks with 2 silicon hits (here the sum of SVT and SSD) have a radius < 9 cm.
  2.  tracks with 3 or 4 silicon hits have a radius < 13 cm.