Possible Causes of Data/Theory Discrepancy

The formulas for the cross section are [from slide 7 of v5 of my DNP talk]

Schematically (reconstructed here from the factors discussed below; see the slide for the exact form), the invariant cross section in p_T bin j is

  E d^3σ/dp^3 ≈ [1 / (2π <p_T> Δy Δp_T)] · [1 / (L · BR)] · (f_j · s_j · N_j) / (ε_reco · ε_trig)

where N_j is the raw count, s_j the signal fraction, f_j the unfolding correction, L the sampled luminosity, BR the π0 → γγ branching ratio, and ε_reco and ε_trig the reconstruction and trigger efficiencies.

One can consider the possible effect of inaccuracies in each factor, and whether any could be of the order of magnitude of the data/theory discrepancy (a factor of 2).
  • Raw number of counts: this is obtained from the M_gg distribution by integrating in the window 0.1 to 0.2 GeV.  This is a simple calculation, and it is hard to see how it could be wrong by a factor of 2.  Effects such as merging that might alter the counts are also included in the MC processing, and are thus accounted for in the efficiencies.  As of now, the data/MC comparison is quite good.  Note that the merging is based on the phi and eta directions of the pi0s, quantities that are modeled fairly well.
  • Signal fraction: on the order of 60 to 90%.  See slide 68 of the Sept 13th Spin PWG talk.  Note: the black points are the relative weight of each template over the whole fitting range in Pythia (the initialization values for the fits), the blue points are the fitted values of the relative weights of each template over the full fit range (i.e. the result of the fit), and the purple points are the resulting relative weights within the peak region (i.e. the purple points on the signal fraction plot are the values used for s_j).  Even if the signal fraction were truly 100%, this could only increase the cross section in the 6-7 GeV bin by a factor of 1/0.7 ≈ 1.43.  That is too small to explain the apparent factor of 2, and we can be fairly certain there is some background.  Thus problems in the background subtraction cannot account for the discrepancy.
  • Unfolding (both the f factor and the S matrix): the overall effect of unfolding on the cross section over the full pT range considered (6-16 GeV) is on the order of a 15% reduction.  We have considered the cross section without unfolding, and the data/theory comparison is no better.  While limited pT resolution may affect the slope, causing it to appear slightly flatter, the integral over a large enough pT range should be affected only in a minor way by the smearing.  For inaccuracies in our smearing matrix to cause the discrepancy, the pT resolution in 6-9 GeV would have to be much larger than currently estimated, while the pT resolution in 4-6 GeV remained close to what is currently modeled.  This is not very likely.  The ratio of unfolded to before-unfolded counts is also at most on the order of a few tens of percent, not on the order of a factor of 2.
  • Luminosity: we switched from using the scaler board counts to actually counting the number of minbias events per run.  We get the prescales from a script from Jamie, and double-checked that the results are consistent with the run log for a number of runs.  We count the number of minbias events with no BBC time bin cut and with a BBC time bin cut, take the average as the number of counts, and take half the difference as a systematic uncertainty.  While I do not have the livetime number handy at present, I believe that among the runs used it was on the order of 60-80%.  In any case, the correct procedures are now in place.  The total sampled luminosity in our runs is of the right order compared with the commonly listed luminosities, and does not allow for the possibility that our luminosity estimate is low by a factor of 2; at most it could be off at the percent level.
  • Branching Ratio: we're using the PDG value.
  • Average p_T: can be off by no more than a few percent, based on the bin widths.
  • Phase space factors: fixed values.
  • Reconstruction efficiency: our current estimate could be too high due to missing material.  An estimate of the size of the effect can be obtained from the difference in the background fraction between MC and data.  Slide 68 of the Sept 13th Spin PWG talk shows the results as of that time.  Note: the signal fraction in data is low by 20-30% in the 6-8 GeV range, where the factor of 2 is seen between data and theory.  The reconstruction efficiency can be seen on slide 7 of my DNP talk, and is around 30% in the 6-8 GeV range.  Assuming each missing signal event produces one background event, a 30% reduction in signal results in a 30% reduction in efficiency.  While missing material may be degrading the reconstruction efficiency, it is extremely unlikely that this affects the cross section by more than 50%.
  • Trigger (and Filter) Efficiency: with the latest correction, it is believed that everything is now computed correctly.  See this blog for many checks and more details.  Given the consistency between the various methods, there is no indication of problems.
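The raw-yield extraction in the first bullet is simple enough to sketch.  This toy (the bin edges, in MeV to keep the arithmetic exact, and the bin contents are hypothetical, not the actual analysis histogram) just sums the M_gg entries in the 0.1-0.2 GeV window:

```python
# Toy sketch of the raw-count extraction: sum M_gg histogram entries in
# the 100-200 MeV (0.1-0.2 GeV) window.  Edges and contents are made up.
def raw_counts(bin_edges, bin_contents, lo=100, hi=200):
    """Sum the contents of all bins fully contained in [lo, hi]."""
    total = 0
    for i, n in enumerate(bin_contents):
        if bin_edges[i] >= lo and bin_edges[i + 1] <= hi:
            total += n
    return total

edges = [50 + 10 * i for i in range(16)]   # 50 to 200 MeV, 10 MeV bins
contents = [3, 4, 6, 10, 25, 60, 90, 70, 40, 20, 12, 8, 6, 5, 4]
print(raw_counts(edges, contents))         # prints 315
```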
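The back-of-the-envelope bound in the signal-fraction bullet is worth making explicit: even a true signal fraction of exactly 1 instead of the fitted value can only scale the cross section by the reciprocal of that fitted value.

```python
# Maximum possible upward scaling of the cross section if the true
# signal fraction were 1.0 instead of the fitted value s.  With s ~ 0.7
# in the 6-7 GeV bin, the bound sits well below the factor of 2.
def max_signal_scale(s_fitted):
    return 1.0 / s_fitted

print(round(max_signal_scale(0.7), 2))  # prints 1.43
```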
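The claim in the unfolding bullet, that smearing changes the integral over a wide pT range only mildly, can be illustrated with a toy: a row-normalized (probability-conserving) smearing matrix redistributes counts among bins but conserves the total, so only the net migration across the edges of the integration range matters.  The spectrum and matrix below are hypothetical.

```python
# Hypothetical steeply falling pT spectrum and a row-normalized smearing
# matrix: individual bins migrate, but the total over the range is kept.
true_counts = [1000, 400, 160, 64]
smear = [
    [0.8, 0.2, 0.0, 0.0],   # each row sums to 1
    [0.1, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.8, 0.1],
    [0.0, 0.0, 0.2, 0.8],
]
measured = [
    sum(true_counts[i] * smear[i][j] for i in range(4)) for j in range(4)
]
print(sum(true_counts), round(sum(measured)))  # both totals are 1624
```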
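The minbias-counting procedure in the luminosity bullet reduces to simple arithmetic.  This sketch (with made-up event counts) shows the central value and the systematic assigned from the BBC time bin cut:

```python
# Central value and systematic from the two minbias counts described in
# the luminosity bullet: average of the counts with and without the BBC
# time bin cut, and half their difference.  Counts here are hypothetical.
def minbias_counts(n_no_cut, n_with_cut):
    central = 0.5 * (n_no_cut + n_with_cut)
    syst = 0.5 * abs(n_no_cut - n_with_cut)
    return central, syst

central, syst = minbias_counts(1_050_000, 1_000_000)
print(central, syst)  # prints 1025000.0 25000.0
```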

Conclusions

The only factor which could plausibly be off by a factor of 2 is the trigger efficiency.  However, all studies thus far indicate that the current code and estimate are correct within uncertainties.  We welcome other suggestions, and will continue to think, but at present every check we (and the PWG) have thought of has been investigated.  No source has been found for the factor of 2.  Steve T. also double-checked our formulas, and found no missing factors.