Minutes for HF PWG meeting on 2015/09/17

Xin:

K0s and double counting study for Au+Au 200 GeV Run 14 with HFT

With cuts similar to the D0 study, the K0s spectra are rather consistent with the published results (5-10% difference for pT > 1.3 GeV/c).

Why is the difference (compared to published data) smaller when tighter cuts are used? It could be due to the dca12 cut, which is looser compared to the previous study, but this needs more thought.

Double counting study:
Using side bands to estimate the fraction of double-counted entries, which are removed together with the correlated background when the signal is extracted.
Slide 6 - proposed scheme for the correction and syst. uncertainty; no objections.

Syst. uncertainties:
TPC tracking - 3% per track, i.e. 6% for D0. This could be an overestimation; will be studied if time permits.
pT cut variation: please compare with the published spectra on a linear scale -> to understand why the difference is so large when the pT > 0.6 GeV/c cut is used instead of pT > 1.0 GeV/c. The difference comes from the efficiency correction; however, no such effect is observed for K0s. Needs some thought.
- which syst. errors would cancel out in the R_AA? TPC tracking efficiency; the rest is slightly different in Au+Au and p+p, so it will probably be included in the R_AA to be on the safe side.

Nasim - Ds
Ds v2:
A few systematic checks done.
"1 sigma" and "2 sigma" -> signal obtained by bin counting after background subtraction in a +/-1 or 2 sigma window, where sigma is from the Gaussian fit
- is the stat. error from the residual background fit propagated to the error on the signal? How was it done? Nasim varied all the parameters within their +/-1 sigma errors, then calculated the integral of the residual BG.
Is this a reliable procedure? We were not sure, since the correlations are unknown. Suggestion: fit only the side-band background (an "exclusion fit"). This gives the fit parameters and their correlation matrix, which can then be used to get the integral and its stat. uncertainty.
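The exclusion-fit suggestion can be sketched in a toy example (the background shape, mass windows, and numbers below are illustrative placeholders, not the actual Ds analysis): fit only the side bands, then propagate the parameter covariance matrix to the integral of the background over the signal window.

```python
# Toy sketch of the "exclusion fit": fit only the side bands, then use the
# parameter covariance matrix to get the background integral and its stat.
# uncertainty. The quadratic shape and all numbers are assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def poly2(x, a, b, c):
    """Quadratic background model (assumed shape for illustration)."""
    return a + b * x + c * x**2

# Toy invariant-mass points; signal window [1.92, 2.02] GeV is excluded
m = np.linspace(1.80, 2.15, 40)
side = (m < 1.92) | (m > 2.02)
y = poly2(m, 50.0, -20.0, 4.0) + rng.normal(0.0, 1.0, m.size)

# Fit ONLY the side bands; pcov is the parameter covariance matrix
popt, pcov = curve_fit(poly2, m[side], y[side])

# Integral of the background over the signal window is linear in (a, b, c),
# so the uncertainty follows exactly from J @ pcov @ J
lo, hi = 1.92, 2.02
J = np.array([hi - lo,
              (hi**2 - lo**2) / 2.0,
              (hi**3 - lo**3) / 3.0])
integral = J @ popt
sigma_integral = np.sqrt(J @ pcov @ J)
print(f"BG integral = {integral:.2f} +/- {sigma_integral:.2f}")
```

Because the integral of a polynomial is linear in the fit parameters, the covariance propagation here is exact, with no need to vary parameters by hand.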
Syst. error on different signal extraction methods - right now the differences are added in quadrature. Suggestion: take the max. deviation as the syst. error. Remark: if this is going to be combined with other errors, then it should be translated into a standard deviation.
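On the remark above: one common convention (an assumption here, not something decided at the meeting) is to treat the max. deviation as the half-width of a uniform distribution, so the equivalent standard deviation is Delta/sqrt(3). The yields below are toy values.

```python
# Toy sketch: translate the max. deviation among extraction methods into an
# equivalent standard deviation, assuming a uniform error of half-width Delta.
import math

yields = [102.0, 98.5, 95.0, 104.0]   # toy results from different methods
central = yields[0]
delta_max = max(abs(y - central) for y in yields)   # max. deviation
sigma_equiv = delta_max / math.sqrt(3.0)            # uniform -> std. dev.
print(delta_max, round(sigma_equiv, 2))             # → 7.0 4.04
```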
- efficiency correction: syst. error due to centrality dependence: right now a very conservative approach (100% eff. for 40-80%). Can we have a better estimate of this effect? Maybe, but not sure how reliable the efficiency is in the peripheral bin.
Why such a large variation with centrality? - because of different cuts.
Suggestion: try to use the actual efficiency and re-calculate the v2. Then check how large the difference is compared to the uncorrected value; we will then decide if this is sufficient or if more studies are needed. The fallback solution is to use the current corrected v2 as an estimate of the syst. error due to the efficiency correction.
Suggestion: use the corrected v2 as the central value of the reported data.

Ds Spectra:
0-10% most central bin discarded due to large background and no significant signal. What if v2 is also calculated for 10-40%? The systematic error could be lower (no error due to the unknown efficiency in the peripheral bin).

Spectra calculation:
<pT>: from iterative method with Levy fit
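The <pT> extraction can be sketched as the ratio of the first to the zeroth moment of the fitted Levy-Tsallis function (one common parametrization; the iteration between fit and corrected spectrum is omitted, and the parameter values are placeholders, not the Ds fit results):

```python
# Toy sketch: <pT> from a Levy-Tsallis fit to dN/dpT via numerical moments.
# Parametrization convention and parameter values are assumptions.
import numpy as np
from scipy.integrate import quad

m0, n, T = 1.97, 10.0, 0.30   # Ds mass [GeV], Levy exponent, slope [GeV] (toy)

def levy(pT):
    """Levy-Tsallis shape for dN/dpT (one common convention)."""
    mT = np.sqrt(pT**2 + m0**2)
    return pT * (1.0 + (mT - m0) / (n * T)) ** (-n)

num, _ = quad(lambda p: p * levy(p), 0.0, 20.0)   # first moment
den, _ = quad(levy, 0.0, 20.0)                    # zeroth moment (yield)
mean_pT = num / den
print(f"<pT> = {mean_pT:.3f} GeV/c")
```

In the iterative method, this fit would be redone after each efficiency-corrected update of the spectrum until <pT> stabilizes.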

Syst. error study done; most syst. uncertainties come from the signal extraction.

Why is there such a large difference (factor of ~10) compared to last week? Nasim: this is due to an unreliable signal in the previous study (large background fluctuations). Now the signal is reliable, because the mass and width are consistent with the PDG values. Mustafa: hard to believe that there was such a large fluctuation and yet a signal peak in the mass distribution. It is very important to understand the effect and where the difference comes from, because we have to be confident that the signal extraction is reliable.

slide 20:
- use 42 mb for sigma_inelastic in the conversion from cross-section to the minimum-bias yield in p+p.
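The conversion itself is a division by sigma_inelastic; a one-line toy example (the cross-section value is a placeholder, only the 42 mb comes from the meeting):

```python
# Toy sketch: per-minimum-bias-event yield from a p+p cross-section,
# dividing by sigma_inelastic = 42 mb. The cross-section value is made up.
sigma_inel = 42.0          # mb, p+p inelastic cross-section (from the meeting)
d2sigma = 8.4e-3           # mb (toy cross-section in some pT bin)
yield_per_mb_event = d2sigma / sigma_inel
print(f"{yield_per_mb_event:.6f}")   # → 0.000200
```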

slide 22: systematic check suggestion: vary the topological cuts (for the pT > 0.5 GeV/c cut) to check if the signal is reliable.

signal extraction: a chi2 fit could systematically underestimate the background, so the signal could be overestimated. Suggestion: use a side band to estimate the BG and then use bin counting, or do an exclusion fit for the background and then subtract the residual BG from the signal. This is also a good cross-check for the current signal extraction.
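The side-band plus bin-counting cross-check might look like this toy sketch (histogram, windows, and yields are all invented for illustration):

```python
# Toy sketch of the suggested cross-check: estimate the background level
# from side bands and extract the signal by bin counting in the peak window.
import numpy as np

rng = np.random.default_rng(1)
edges = np.linspace(1.80, 2.15, 36)          # invariant-mass bin edges (toy)
centers = 0.5 * (edges[:-1] + edges[1:])

# Toy histogram: flat background + Gaussian peak near the Ds mass
counts = rng.poisson(20.0 + 60.0 * np.exp(-0.5 * ((centers - 1.97) / 0.01) ** 2))

peak = (centers > 1.94) & (centers < 2.00)   # signal window
side = ((centers > 1.86) & (centers < 1.92)) | ((centers > 2.02) & (centers < 2.08))

bg_per_bin = counts[side].mean()             # side-band background level
signal = counts[peak].sum() - bg_per_bin * peak.sum()
print(f"signal = {signal:.1f}")
```

Comparing this bin-counting result against the chi2-fit yield would expose a systematic underestimate of the background if there is one.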

Rongrong:
Syst. uncertainties for J/psi->mu+mu-
Syst. error estimated vs pT for 0-60% data, then applied in different centrality bins. Cross-checks done, and this seems to be a reasonable approach.
For some bins there is a difference in the signal when the bin size is changed; we should understand this effect, since it does not come from statistical fluctuations.

Preliminary plots presented.
Concern: the low-pT points are lower compared to the published J/psi->e+e- data. Lijuan -> this could be due to the trigger efficiency; the study is ongoing, and Rongrong will send some slides on this topic.

Syst. error on the Upsilon(2s+3s)/Upsilon(1s) -> not necessary given the current stat. precision.

For the internal discussion on run 10 vs run 14 consistency - make a plot with ratios to the Tsallis BW fit (right now the R_AA plot has the p+p baseline uncertainty folded in).

Takahito:
Trig. efficiency estimated based on a pure muon sample from the J/psi signal peak.
slide 7 - lines show the trigger window; open points - reconstructed from the data by mirroring the part to the right of the maximum bin.
Comment: we should have more statistics in the future to avoid the fluctuations.
Trigger eff. and the assigned syst. error seem reasonable.

v2 -> checked with 2 and 5 phi-Psi bins; consistent results, no significant improvement when 2 bins are used.

Chensheng

J/psi v2 run 11: a few systematic checks done.
- run 10 and run 11 (with a Crystal Ball fit for the yield extraction) look consistent, although the run 11 v2 is systematically higher compared to the run 10 data.

Game plan: we assume the same systematic error for run 11 as for run 10, with an additional uncertainty due to the yield extraction (Crystal Ball vs Gaussian fit). These two will then be compared, and either the larger or the average will be used. For the stat. error, a standard approach will be used (weighting by 1/variance, etc.).
Hao's suggestion: we could try to combine the raw histograms from run 11 and run 10; this would minimize the fluctuations and thus reduce the syst. errors if we re-evaluate them. Sounds good, but there is no time for this before Quark Matter.
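The "standard approach" for the statistical combination is inverse-variance weighting; a minimal sketch (the v2 values and errors below are placeholders):

```python
# Toy sketch of 1/variance weighting to combine run 10 and run 11 v2 points.
# All numbers are placeholders, not the actual measurements.
import math

v2_r10, err_r10 = 0.060, 0.020   # toy run 10 point
v2_r11, err_r11 = 0.080, 0.030   # toy run 11 point

w10, w11 = 1.0 / err_r10**2, 1.0 / err_r11**2
v2_comb = (w10 * v2_r10 + w11 * v2_r11) / (w10 + w11)
err_comb = math.sqrt(1.0 / (w10 + w11))
print(round(v2_comb, 4), round(err_comb, 4))   # → 0.0662 0.0166
```

The combined error is always smaller than the smaller of the two input errors, which is why combining the runs pays off statistically.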