PPV with the use of BTOF

This area contains the study of the Pileup-Proof Vertex (PPV) finder using hits from the Barrel Time-Of-Flight (BTOF) detector.

Coding

Checklist:

This lists the items that need to be implemented or QAed to include BTOF information in the PPV.

Coding: - almost done, needs review by experts

1) BTOF hits (StBTofHitMaker) to be loaded before PPV - done

2) BTOF geometry needs to be initialized for the PPV - done

    As PPV is executed before StBTofMatchMaker, I think that in the future the BTOF geometry will first be initialized by PPV in the chain and kept in memory. StBTofMatchMaker will then load it directly w/o creating its own BTOF geometry.

3) Creation of BtofHitList - done

    The BTOF detector is segmented according to tray/module/cell, but the BTOF modules do not cover equal intervals in eta. The binning is therefore segmented according to module number.

    The definition of match and veto differs from that in other sub-systems, since we now allow one track to project onto multiple modules to account for the Vz spread and the track curvature.

    Define: Match - any of these projected modules has a valid BTOF hit.   Veto - At least one projected module is active and none of the active projected modules has a valid BTOF hit.

4) Update of TrackData and VertexData to include BTOF variables - done

5) Main matching function: matchTrack2BTOF(...) - done

    Currently the match/veto is done at the module level. I set a localZ cut (|z| < 3 cm at the moment, and possibly also remove cells 1 and 6 so the track is required to point to the center of the module), but this can be tuned in the future. Whether we need to match at the cell level can also be discussed. A schematic sketch of this module-level logic is shown after this checklist.

6) Update of StEvent/StPrimaryVertex to add mNumMatchesWithBTOF - done (needs an update in CVS)

7) A switch to decide whether to use BTOF or not - done (but an additional bfc option still needs to be added)
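
To make items 3) and 5) above concrete, here is a minimal, self-contained sketch of the module-level match/veto decision together with the localZ cut. This is an illustration only, with struct and function names of my own choosing; it is not the actual PPV/matchTrack2BTOF code, which lives in the StRoot directory quoted below.

    // Illustrative sketch only -- not the actual PPV code.
    // A track is projected onto one or more BTOF modules (the Vz spread and the
    // track curvature can make the projection span several modules).
    #include <vector>
    #include <cmath>

    struct ProjectedModule {
      bool  active;   // module is instrumented and alive (tray present in y2009)
      bool  hasHit;   // a valid BTOF hit was found in this module
      float localZ;   // track projection point in module-local z [cm]
    };

    enum class BtofDecision { kMatch, kVeto, kNoInfo };

    // Match : any projected module has a valid BTOF hit (here within the localZ cut)
    // Veto  : at least one projected module is active, but none of the active
    //         projected modules has a valid hit
    // NoInfo: none of the projected modules is active (neither match nor veto)
    BtofDecision matchTrack2BtofSketch(const std::vector<ProjectedModule>& mods,
                                       float localZCut = 3.0f /* cm, tunable */) {
      bool anyActive = false;
      for (const ProjectedModule& m : mods) {
        if (!m.active) continue;
        anyActive = true;
        if (m.hasHit && std::fabs(m.localZ) < localZCut) return BtofDecision::kMatch;
      }
      return anyActive ? BtofDecision::kVeto : BtofDecision::kNoInfo;
    }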

The latest version is located in /star/u/dongx/lbl/tof/NewEvent/Run9/PPV/StRoot

QA: - ongoing

 

MC simulation study

Default PPV for PYTHIA minbias events in y2009

The first check is to test the default PPV and verify whether the result is consistent with those from the Vertex-Group experts.

GSTAR setup

geometry y2009

BTOF geometry setup:  btofConfig = 12 (Run 9 with 94 trays)

vsig  0.01  60.0
gkine -1 0 0 100 -6.3 6.3 0 6.29 -100.0 100.0
 

PYTHIA setup

MSEL 1         ! Collision type
MSTP (51)=7
MSTP (82)=4
PARP (82)=2.0
PARP (83)=0.5
PARP (84)=0.4
PARP (85)=0.9
PARP (86)=0.95
PARP (89)=1800
PARP (90)=0.25
PARP (91)=1.0
PARP (67)=4.0
 

BFC chain reconstruction options

trs fss y2009 Idst IAna l0 tpcI fcf ftpc Tree logger ITTF Sti VFPPV NoSvtIt NoSsdIt bbcSim tofsim tags emcY2 EEfs evout -dstout IdTruth geantout big fzin MiniMcMk eemcDb beamLine clearmem
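
For the record, a chain like this is normally run through the standard bfc.C macro with root4star; the invocation looks roughly like the line below (the event count and the input .fzd file name are placeholders here, not the actual ones used):

    root4star -b -q 'bfc.C(1000, "<chain options listed above>", "pythia_minbias.fzd")'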

Just for the record, the PPV cuts:

StGenericVertexMaker:INFO  - PPV::cuts
 MinFitPfrac=nFit/nPos  =0.7
 MaxTrkDcaRxy/cm=3
 MinTrkPt GeV/c =0.2
 MinMatchTr of prim tracks =2
 MaxZrange (cm)for glob tracks =200
 MaxZradius (cm) for prim tracks &Likelihood  =3
 MinAdcBemc for MIP =8
 MinAdcEemc for MIP =5
 bool   isMC =1
 bool useCtb =1
 bool DropPostCrossingTrack =1
 Store # of UnqualifiedVertex =5
 Store=1 oneTrack-vertex if track PT/GeV>10
 dump tracks for beamLine study =0
 

Results

In total, 999 PYTHIA minbias events were processed. Among these, 990 events have at least one reconstructed vertex (frac = 99.1 +/- 0.3 %). The following plot shows the "funnyR" plot of the vertex ranking for all found vertices.

Clearly there are a lot of vertices with negative ranking. If we define the vertices with positive ranking as "good" vertices, the left plot below shows the "good" vertex statistics.

Only 376 events (frac 37.6 +/- 1.5 %) have at least one "good" vertex. The middle plot shows the Vz distributions for the MC input and the first reconstructed "good" vertices. However, the right plot, which shows the Vz difference between the reconstructed vertex and the MC input vertex, indicates that not only are all good vertices well reconstructed, but most of the found vertices, even those with negative ranking, are within 1 cm of the MC vertex.

If we instead define a "good" vertex by |Vz(rec)-Vz(MC)| < 1 cm, as Jan Balewski studied on this page: http://www.star.bnl.gov/protected/spin/balewski/2005-PPV-vertex/effiMC/ , then 962 events (frac 96.3 +/- 0.6 %) have at least one "good" vertex.
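
As a side note of mine (not from the original log): the quoted uncertainties are consistent with simple binomial errors on the fractions,

    \sigma_\epsilon = \sqrt{\frac{\epsilon\,(1-\epsilon)}{N}},
    \qquad \text{e.g. } \epsilon = \tfrac{962}{999} = 96.3\%,\ \ \sigma_\epsilon \approx 0.6\%.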

One note about the bfc log: I noticed the following message:

......

 BTOW status tables questionable,
 PPV results qauestionable,

  F I X    B T O W    S T A T U S     T A B L E S     B E F O R E     U S E  !!
 
 chain will continue taking whatever is loaded in to DB
  Jan Balewski, January 2006
......

The full log file is /star/u/dongx/institutions/tof/simulator/simu_PPV/code/default/test.log

 

Update 9/10/2009

With BTOF included in the PPV, the vertex ranking distributions are shown below. (Note: only 94 trays in y2009.)

The # of events containing at least one vertex with ranking > 0 is 584 (frac. 58.5 +/- 1.6 %). This number is closer to the vertex finding efficiency I have in mind for pp minbias events. So the earlier low efficiency was due to the missing CTB, and BTOF is now playing the role the CTB used to play?

 

Update 9/22/2009

After several rounds of message exchanges with Jan, Rosi, etc., I found several places that can be improved.

1) Usually we use BBC-triggered MB events for such studies, so in the following analysis I also select only BBC-triggered MB events for the vertex efficiency study. To select BBC-triggered events, please refer to the code in $STAR/StRoot/StTriggerUtilities/Bbc for how to implement it (a purely schematic sketch of the selection is shown after this list).

2) Use ideal ped/gain/status for the BEMC in the simulation instead of the parameters for real data. To turn this on, one needs to modify the bfc.C file: add the following lines for the db maker in bfc.C (after line 123)

    dbMk->SetFlavor("sim","bemcPed"); // set all ped=0 <==THIS
    dbMk->SetFlavor("sim","bemcStatus");  // ideal, all=on
    dbMk->SetFlavor("sim","bemcCalib"); // use ideal gains

These two changes significantly improve the final vertex efficiency (I will show this later). The following two are also suggested, although their impact is marginal.

3) Similarly use ideal ped/gain/status for EEMC.

    dbMk->SetFlavor("sim","eemcDbPMTped");
    dbMk->SetFlavor("sim","eemcDbPMTstat");
    dbMk->SetFlavor("sim","eemcDbPMTcal");

4) Use an ideal TPC RDO mask. You can find an example here: /star/u/dongx/institutions/tof/simulator/simu_PPV/test/StarDb/RunLog/onl/tpcRDOMasks.y2009.C
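
Regarding item 1) above, I have not copied the StTriggerUtilities/Bbc code here; the essence of a BBC minbias trigger selection is an East-West coincidence (at least one small BBC tile above threshold on each side). A purely schematic, self-contained sketch, with function and variable names of my own (this is not the StTriggerUtilities API):

    // Schematic only -- see $STAR/StRoot/StTriggerUtilities/Bbc for the real code.
    #include <vector>

    // BBC minbias condition: East-West coincidence of small-tile signals above threshold.
    bool isBbcMinbiasSketch(const std::vector<int>& adcEastSmallTiles,
                            const std::vector<int>& adcWestSmallTiles,
                            int adcThreshold) {
      bool eastFired = false;
      for (int adc : adcEastSmallTiles)
        if (adc > adcThreshold) { eastFired = true; break; }
      bool westFired = false;
      for (int adc : adcWestSmallTiles)
        if (adc > adcThreshold) { westFired = true; break; }
      return eastFired && westFired;  // coincidence of the two sides
    }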

With these updates included, the following plot shows the distribution of the # of good vertices from a test with 500 PYTHIA minbias events.

The vertex efficiency is now raised to ~50%. (OK? low?)

Just as a check on the BBC trigger efficiency: 418 out of the 500 events in total were accepted with BBC triggers. Eff = 83.6 +/- 1.7 %, which is reasonable.

 

MC study on PPV with BTOF in Run 9 geometry

This MC simulation study illustrates the performance of the PPV vertex finder with BTOF added, under different pileup assumptions. All the PPV code changes to include BTOF are posted on this page.

The geometry used here is the y2009 geometry (with 94 BTOF trays). The generator used is PYTHIA with the CDF "Tune A" setting. The details of the gstar setup and reconstruction chain can be found here. The default PPV efficiency for this setup (with BBC trigger selection) is ~45-50%.

The triggered events selected are BBC-triggered minbias events. The simulation includes TPC pileup minbias events for several different pileup conditions. The pileup simulation procedure is well described on Jan Balewski's web page. I chose the following pileup setup:

mode BTOF back 1; mode TPCE back 3761376; gback 470 470 0.15 106. 1.5; rndm 10 1200

A few explanations:

  1. 'mode BTOF back 1' means try pileup for BTOF only in the same bXing.
  2. '3761376' means for the TPC, try pileup for 376 bXings before, in, and after the trigger event. TRS is set up to handle pileup correctly. Note: 376*107 ns ≈ 40 μs, the TPC drift time.
  3. gback decides how pileup events are pulled from your minb.fzd file.
    • '470' is the # of tried bXings back & forward in time.
    • 0.15 is the average # of added events for a given bXing, drawn from a Poisson distribution; multiple interactions in the same bXing may happen if the probability is large. I chose this number to be 0.0001, 0.02, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50 for the different pileup levels.
    • 106. is the time interval, multiplied by the bXing offset and passed to the pileup-enabled slow simulators, so the code knows by how much in time the analog signal needs to be shifted and in which direction.
    • 1.5 is the average # of skipped events in the minb.fzd file. Once the file is exhausted it is reopened. If you skip too few, your pileup events soon start to repeat; if you skip too many, you read the input file excessively.
  4. 'rndm' is presumably the seed for the pileup random-number generator.
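
As a small sanity check of my own on the 0.15 setting quoted above: with an average of 0.15 added events per bXing, Poisson statistics give

    P(\geq 1\ \text{pileup interaction in a bXing}) = 1 - e^{-0.15} \approx 14\%,
    \qquad P(\geq 2) = 1 - e^{-0.15}(1 + 0.15) \approx 1\%,

so multiple interactions in the same bXing are indeed possible but rare at this level.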

Here are the results:

  • Vertex efficiency

Fig. 1: Vertex efficiencies in different pileup levels for cases of w/ BTOF and w/o BTOF.

Here a "good" vertex is defined as a vertex with positive ranking; a "real" vertex is defined as a good vertex with |Vz-Vz_MC| < 1 cm.
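
As a concrete illustration of these definitions (a sketch only, with struct and function names of my own; not the actual analysis macro), the per-event classification behind the efficiencies could look like:

    // Sketch only -- not the actual analysis code.
    #include <vector>
    #include <cmath>

    struct RecoVertex { float ranking; float vz; };  // reconstructed vertex (ranking, z position in cm)

    // "good" vertex: positive ranking
    bool hasGoodVertex(const std::vector<RecoVertex>& vtx) {
      for (const RecoVertex& v : vtx)
        if (v.ranking > 0) return true;
      return false;
    }

    // "real" vertex: good vertex with |Vz - Vz_MC| < 1 cm
    bool hasRealVertex(const std::vector<RecoVertex>& vtx, float vzMC) {
      for (const RecoVertex& v : vtx)
        if (v.ranking > 0 && std::fabs(v.vz - vzMC) < 1.0f) return true;
      return false;
    }

    // The efficiencies in Fig. 1 are then the fractions of triggered events
    // for which these functions return true.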

  • # of BTOF/BEMC Matches

Fig. 2: # of BTOF/BEMC matches for the first good & real vertex in different pileup levels.

  • Ranking distributions

Fig. 3: Vertex ranking distributions in each pileup level for both w/ and w/o BTOF cases.

 

  • Vertex z difference

Fig. 4: Vertex z difference (Vzrec - VzMC) distributions at different pileup levels for both the w/ and w/o BTOF cases. The two plots in each row of each panel show the same distribution, but in two different ranges.

[Fig. 4 image grid: three blocks of panels with columns pileup = 0.0001, 0.02, 0.05; pileup = 0.10, 0.15, 0.20; and pileup = 0.30, 0.40, 0.50, and rows w/o BTOF and w/ BTOF in each block.]

To quantify these distributions, I use the following two variables: the Gaussian width of the main peak around 0, and the RMS of the whole distribution. Figs. 5 and 6 show these two quantities:
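
For reference, a minimal ROOT sketch of how these two quantities can be extracted, assuming the Vz(rec)-Vz(MC) differences are already filled into a histogram (the +-1 cm fit range around the peak is my choice here, not necessarily the one used for Fig. 5):

    // Minimal ROOT sketch: Gaussian width of the main peak and RMS of the full distribution.
    #include <cstdio>
    #include "TH1D.h"
    #include "TF1.h"

    void quantifyDvz(TH1D* hDvz) {
      hDvz->Fit("gaus", "Q", "", -1.0, 1.0);                     // quiet Gaussian fit of the core around 0
      TF1* fPeak = hDvz->GetFunction("gaus");
      double peakWidth = fPeak ? fPeak->GetParameter(2) : -1.;   // sigma of the core Gaussian
      double rmsAll    = hDvz->GetRMS();                         // RMS over the full histogram range
      printf("peak sigma = %.3f cm, full RMS = %.3f cm\n", peakWidth, rmsAll);
    }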

Fig. 5: Peak width of Vzrec-VzMC in Fig. 4 in different pileup levels.

 

Fig. 6: RMS of Vzrec-VzMC distributions in different pileup levels.

  • CPU time

The CPU time for processing the pileup simulation grows rapidly as the pileup level increases. Fig. 7 shows the CPU time needed per event as a function of the pileup level. The time shown here is the average time for 1000 events split into 10 jobs executed on RCF nodes. I see that a couple of these 10 jobs took significantly less time than the others, which I guess is due to performance differences between nodes, but I haven't confirmed this yet.

Fig. 7: CPU time vs. pileup level

 

Update on 12/23/2009

There were some questions raised at the S&C meeting about why the resolution w/ TOF degrades in the low pileup cases. As we know, including BTOF increases the fraction of events in which a good vertex is found. This improvement comes mainly from events with fewer EMC matches, which would not be reconstructed with a good vertex if BTOF were not included (see the attached plot for the comparison between w/ and w/o BTOF at the 0.0001 pileup level). The events entering Fig. 5 are all those with at least one good vertex. In the w/ BTOF case, many events with only 1 or 0 EMC matches can have a reconstructed vertex because BTOF matches are included. Since tracks with small pT can reach the BTOF more easily than the BEMC, one would expect the mean pT of the tracks from these good vertices to be smaller when BTOF is included (I am not sure how big the effect is quantitatively), resulting in a worse projection uncertainty onto the beamline; thus this event sample will have a slightly worse Vz resolution.

I don't have a better way to select the same event sample in the w/ BTOF case as in the w/o BTOF case other than requiring the number of BEMC matches >= 2 for the vertices filled into my new Vz difference plot. Fig. 8 shows the same distribution as in Fig. 5 but with nBEMCMatch >= 2.

 

One can see the change is in the right direction, but it still seems not perfect in this plot for the very low pileup cases. I also went back to compare the reconstructed vertices event by event; here are some output files:
/star/u/dongx/lbl/tof/simulator/simu_PPV/ana/woTOF_0.0001.log and wTOF_0.0001.log
The results are very similar except for a couple of 0.1 cm shifts in some events (which I attribute to the PPV step size). Furthermore, in the new peak width plot shown here, for these very low pileup cases, the difference between the two is much smaller than 0.1 cm, which I expect to be the step size in the PPV.

 

Test with real data

The PPV with BTOF included has then been tested with Run 9 real data. The details of the code changes can be found here.

The PPV is tested for two cases, w/ BTOF and w/o BTOF, and the comparisons are shown below. The data files used in this test are (1862 events in total):

st_physics_10085140_raw_2020001.daq

st_physics_10096026_raw_1020001.daq

(Note that in the 500 GeV runs, many of the triggered events are HT or JP, presumably with higher multiplicity compared to MB triggers.) The chain options used in the production are:

pp2009a ITTF BEmcChkStat QAalltrigs btofDat Corr3 OSpaceZ2 OGridLeak3D beamLine -VFMinuit VFPPVnoCTB -dstout -evout

The production was done using the DEV library on 11/05/2009. Here are the results:

Fig. 1: The 2-D correlation plot of # of good vertices in each event for PPV w/ TOF and w/o TOF.

Fig. 2: The ranking distributions for all vertices from PPV w/ and w/o TOF

Fig. 3: Vz correlation for the good vertex (ranking>0) with the highest ranking in each event for PPV w/ and w/o TOF. Note that if the event doesn't have any good vertex, the Vz is set to -999 in my calculation, which shows up in the underflow of the histogram statistics.

Fig. 4: Vz difference between vertices found in PPV w/ TOF and w/o TOF for the first good vertex in each event if any. The 0.1 cm step seems to come from the PPV.

 

Update on 4/5/2010:

I have also run some tests with the 200 GeV real data. The test was carried out on the following file:

st_physics_10124075_raw_1030001.MuDst.root

All the other settings are the same as described above. Here are the test results:

Fig. 5: The 2-D correlation plot of # of good vertices in each event for PPV w/ TOF and w/o TOF (200 GeV)

Fig. 6: The ranking distributions for all vertices from PPV w/ and w/o TOF (left) and also the correlation between the funnyR values for the first primary vertex in two cases. (200 GeV)

Fig. 7: Vz correlation for the good vertex (ranking>0) with the highest ranking in each event for PPV w/ and w/o TOF (200 GeV).

Fig. 8: Vz difference between vertices found in PPV w/ TOF and w/o TOF for the first good vertex in each event if any (200 GeV)


Conclusion:

The above tests with real data have shown the expected PPV performance with the inclusion of BTOF hits.