Run 9 200 GeV Dijet XSec Data / Theory Discrepancy Investigation

Here I look at possible causes of the disagreement seen between data and theory for the dijet cross section ...

My last blog post shows the comparison between data and theory for the dijet cross section. I restricted myself to the barrel (-0.8 <= eta <= 0.8) and to events where at least one jet satisfied the L2JetHigh trigger. There are two major areas of disagreement: the first two bins, which show large data deficits, and the remaining bins, which show a fairly consistent 40-50% data excess compared to theory. For now, I'm mainly focused on the 40-50% data excess, and I want to look at the following things:

  • Correctness of my personal dijet trees and trigger conditions
  • Comparisons of raw yields between Grant and me
  • RFF vs FF cross sections
  • Effect of underlying event and hadronization corrections


The results I show in the previous blog post all came from code which runs on my personal dijet trees. I wanted to make sure that there was no mistake in my tree structure, so I reran the analysis using the standard jet trees as input and confirmed that I get the same yields. I also confirmed that the combination of trigger categories I was using really was the full sample of dijets for which at least one jet satisfied L2JetHigh.

The second thing I wanted to check was my actual dijet yields. It turns out that Grant has 2009 200 GeV jet trees from Matt, so I can compare my yields with his. We picked 20 common runs (10 from RFF and 10 from FF, spaced evenly throughout the run), I matched my dijet cuts to those Grant used, and we compared dijet yields for any trigger condition. There were 4 runs for which Grant's trees had no entries, but all other runs showed Grant's yields sitting between ~2% and 5% higher than mine. I don't think this is the dominant source of the discrepancy I see between my data and theory.
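For reference, the run-by-run comparison is simple bookkeeping. Here is a minimal sketch of the kind of check involved; the run numbers and yields are made-up placeholders, not the real values from either set of trees:

```python
# Sketch of the run-by-run yield comparison. All numbers are hypothetical
# stand-ins; the real inputs come from my dijet trees and Grant's trees.
my_yields    = {10103041: 1520, 10125052: 1698, 10140031: 1433}  # hypothetical
grant_yields = {10103041: 1562, 10125052: 1771, 10140031: 0}     # hypothetical

for run in sorted(my_yields):
    mine, theirs = my_yields[run], grant_yields.get(run, 0)
    if theirs == 0:
        print(f"run {run}: no entries in Grant's trees")
        continue
    # Relative difference of Grant's yield with respect to mine.
    rel_diff = 100.0 * (theirs - mine) / mine
    print(f"run {run}: mine={mine} grant={theirs} diff={rel_diff:+.1f}%")
```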

Figure 1: Comparison between yields from my analysis (Blue) and Grant's dijet trees (Red) for 20 runs.

The next thing I wanted to check was that the cross sections were consistent between the RFF and FF field settings. I divide the data as well as the simulation runs between the two field settings and calculate the correction factors separately.
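Mechanically, this just means tagging every run with its field setting before filling any spectra. A minimal sketch, assuming a hypothetical run-to-field lookup (in practice this comes from the run log and my QA lists) and a hypothetical mass binning:

```python
# Sketch of splitting the dijet sample by field setting. The run-to-field
# mapping, binning, and events are all hypothetical placeholders.
field_of_run = {10103041: "RFF", 10125052: "RFF", 10140031: "FF"}  # hypothetical

n_bins = 11                                    # hypothetical invariant-mass bins
spectra = {"RFF": [0] * n_bins, "FF": [0] * n_bins}

events = [(10103041, 3), (10125052, 5), (10140031, 3)]  # (run, mass bin), fake
for run, mass_bin in events:
    spectra[field_of_run[run]][mass_bin] += 1

print(spectra["RFF"], spectra["FF"])
```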

Figure 2: This figure shows the L2 barrel cross section for the RFF runs.

 

Figure 3: This figure is the same as figure 2 but now for the FF field setting.

 

The above results are disturbing: there should be no difference between the RFF and FF parts of the run. The plots below seem to indicate that the problem may not be with the data but with the fully reconstructed simulation. More work is needed to sort out the issue.

Figure 4: This plot shows the yields for data (upper left), detector level simulation (upper right), and particle level simulation from the unbiased pythia.root files (lower left). The three curves in each panel show the full sample (Black), the RFF only sample (Blue), and the FF only sample (Red). Each data curve is scaled by the luminosity of its sample: the full curve by 17.25 pb^-1, the RFF curve by 6.64 pb^-1, and the FF curve by 10.62 pb^-1. The simu and pythia curves are scaled by the number of runs used: 785 for the full sample, 384 for the RFF sample, and 401 for the FF sample.
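To make the normalization in figure 4 concrete, here is a minimal sketch using the luminosities and run counts quoted in the caption; the raw yields are hypothetical placeholders:

```python
# Sketch of the figure 4 normalizations. Luminosities (pb^-1) and run
# counts are from the text; the raw yields are hypothetical.
lumi   = {"full": 17.25, "RFF": 6.64, "FF": 10.62}   # data luminosities
n_runs = {"full": 785,   "RFF": 384,  "FF": 401}     # simu/pythia run counts

raw_data = {"full": 34500,  "RFF": 13100, "FF": 21400}   # hypothetical
raw_simu = {"full": 158000, "RFF": 77000, "FF": 81000}   # hypothetical

for s in ("full", "RFF", "FF"):
    print(s,
          raw_data[s] / lumi[s],     # data: counts per pb^-1
          raw_simu[s] / n_runs[s])   # simu: counts per run
```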

Figure 5: This figure shows the unscaled particle level unbiased pythia.root spectra (Red) and the detector level spectra (Blue) for the full sample (upper left), the RFF sample (upper right), and the FF sample (lower left). The lower right panel shows the correction factors (truth spectrum divided by detector level simu spectrum) for the full (Black), RFF (Blue), and FF (Red) samples. A significant discrepancy can be seen between the correction factors from the RFF and FF runs.
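Since the correction factor is just the bin-by-bin ratio of the truth spectrum to the detector level simu spectrum, a minimal sketch looks like this (the spectra themselves are hypothetical):

```python
# Sketch of the correction factor from the lower right panel of figure 5:
# truth (pythia.root) spectrum / detector-level simu spectrum, per bin.
truth    = [900, 700, 420, 200, 80, 25]   # particle-level counts per mass bin
detector = [300, 260, 180,  95, 40, 13]   # detector-level counts per mass bin

# Any detector-level yield that goes missing (e.g. lost files) inflates
# the factor, which in turn inflates the corrected cross section.
correction = [t / d if d > 0 else 0.0 for t, d in zip(truth, detector)]
print(correction)
```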

Figure 4 seems to show that the discrepancy lies in the detector level simulation: there is roughly a factor of two difference in the yields between the RFF and FF runs. On the other hand, the data and the particle level pythia.root simulation each show agreement between RFF and FF at the ~25-35% level. This is seen as well in figure 5, where the RFF correction factor sits roughly a factor of 2 above the FF correction factor. That is consistent with the final RFF and FF cross sections, where the RFF data sits roughly a factor of two above theory.

One reason for the discrepancy seen between the RFF and FF yields in the detector level simu could be missing files. The simulation was generated for a number of runs, and each run consists of a number of separate files, at least one for each partonic pt range thrown. If a number of runs on the RFF side are missing files in the full reconstruction simu sample, this could lead to a discrepancy between RFF and FF. To check this, I compared the number of files in the pythia.root sample to the number of files in the full reconstruction sample on a run-by-run basis. For each run, the two numbers should be about the same, modulo one or two files missing from the full reco sample because no events in that file passed the BFC trigger filter. The two files linked below show, for each run, the number of pythia.root files minus the number of fully reconstructed files:

RFF Runs
FF Runs 
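A minimal sketch of how such per-run counts might be produced; the directory layout (one subdirectory per run, named by run number) and the glob patterns are assumptions, not the actual file organization:

```python
# Sketch of the per-run file-count check. Paths and layout are hypothetical.
import glob
import os
from collections import Counter

def count_files_per_run(pattern):
    """Count files per run, assuming the parent directory name is the run number."""
    counts = Counter()
    for path in glob.glob(pattern):
        run = int(os.path.basename(os.path.dirname(path)))  # assumed layout
        counts[run] += 1
    return counts

pythia  = count_files_per_run("pythia/*/pythia*.root")   # hypothetical paths
fullrec = count_files_per_run("fullreco/*/*.root")       # hypothetical paths

for run in sorted(pythia):
    deficit = pythia[run] - fullrec.get(run, 0)
    if deficit > 2:   # beyond the 1-2 expected from the BFC trigger filter
        print(f"run {run}: missing {deficit} full reco files")
```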

As can be seen by looking at the RFF runs, there are a number of runs toward the end of the RFF sample which show large deficits of full reco simu files compared to the pythia.root files. There is no such behavior in the FF runs. A number of missing full reco simu files would be consistent with the effects that have been observed: it would mean fewer events in RFF vs FF and would drive up the correction factors for RFF, which would in turn increase the cross section with respect to theory for the RFF runs. As a test, I removed runs 10144030 and above from the simulation samples and recalculated the cross section. The idea was that, by removing some of the runs with file number mismatches, the cross section should better match the theory.

Figure 6: This figure shows the cross section spectrum and data / theory ratio for the RFF runs. The runs greater than or equal to 10144030 have been removed.

 


 
The above figure, without the mismatched runs from 10144030 on, actually shows slightly worse agreement between data and theory. This means that either the difference is being driven by several runs with very pathological behavior, or there is some systematic difference between the RFF and FF simulation. To test whether the discrepancy is due to a file mismatch or due to a problem with the RFF simulation, I took a sample of 40 runs (20 RFF and 20 FF) which have the same number of files in the pythia.root sample and the fully reconstructed simulation. If file mismatch is the whole story, the ratio of dijets from the pythia.root files to dijets from the full reco simulation should be the same for the RFF and FF parts of the run. The files used can be found here.

Figure 7: Same as figure 5 but now for the 40 run sample which have matching numbers of files between pythia.root and full reco simulation.

 
Figure 8: This figure shows the number of dijets which pass all fiducial-style cuts (Red) and the number of dijets which pass all cuts and also have at least one L2JetHigh jet. I use the same 40 runs (20 RFF and 20 FF) as in figure 7.

 


Figure 9: This figure is an extension of figure 8. I show in Black the total number of events which pass the dijet cuts (delta phi, neutral fraction, asymmetric pt, eta and delta eta, and detector to particle level matching). The dijets which have an L2JetHigh jet (JP2-JP2, JP2-JP1Lo, JP2-UnJP2) are in Red, and the dijets which have a JP1 jet (JP1Lo-JP1Lo, JP1Lo-JP1Hi, JP1Lo-UnJP1Lo, JP1Hi-JP1Hi, JP1Hi-UnJP1Hi) are in Blue. Note: for the first 15 runs, the blue curve overlaps and covers the black curve.
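In case it is useful, here is a minimal sketch of how the trigger pair categories group into the two curves; the category strings come straight from the caption above, while the function itself is just an illustration, not my actual tree code:

```python
# Sketch of the trigger category grouping used in figures 9 and 10.
L2JETHIGH_CATS = {"JP2-JP2", "JP2-JP1Lo", "JP2-UnJP2"}
JP1_CATS = {"JP1Lo-JP1Lo", "JP1Lo-JP1Hi", "JP1Lo-UnJP1Lo",
            "JP1Hi-JP1Hi", "JP1Hi-UnJP1Hi"}

def classify(category):
    """Return which trigger curve a dijet contributes to."""
    if category in L2JETHIGH_CATS:
        return "L2JetHigh"
    if category in JP1_CATS:
        return "JP1"
    return "other"

print(classify("JP2-JP1Lo"))    # -> L2JetHigh
print(classify("JP1Hi-JP1Hi"))  # -> JP1
```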

Figure 10: Same as figure 9 but now I look at the data. We see that the L2JetHigh and JP1 triggers are present for all runs. Note: the runs with no entries are runs which have no dijet tree because they failed my QA.


From figure 9, it appears that there is no real RFF vs FF discrepancy; it just happens that all the runs missing L2JetHigh triggers are in the RFF half of the run, so that part is affected more. The cause of the missing L2JetHigh triggers in the simu will need to be investigated, but in the meantime I can use the remaining good simulation to calculate the correction factors and get the cross section. I have removed all simu runs prior to 10135058, as well as all RFF runs which had a discrepancy greater than 4 between the number of files in the pythia.root record and the number of files in the fully reconstructed sample. The simu file list used can be seen here. Note that I still use all the data runs. Also note that none of the simu runs in FF have been removed.
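The selection itself reduces to a simple predicate on each simu run. A sketch, with hypothetical run numbers, field assignments, and file deficits standing in for the real lists:

```python
# Sketch of the simu run selection: drop runs before 10135058 and drop RFF
# runs whose file-count deficit exceeds 4. Example inputs are hypothetical.
file_deficit = {10135058: 0, 10140001: 2, 10144030: 7}                # hypothetical
field_of_run = {10135058: "RFF", 10140001: "RFF", 10144030: "RFF"}    # hypothetical

def keep(run):
    if run < 10135058:                  # runs missing L2JetHigh triggers
        return False
    if field_of_run[run] == "RFF" and file_deficit[run] > 4:
        return False
    return True

good_runs = [r for r in file_deficit if keep(r)]
print(good_runs)
```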

Figure 11: Same as figures 5 and 7, but now the correction factor is calculated using only the simu runs which have L2JetHigh triggers.

As can be seen, the correction factors for the RFF and FF parts of the run are now practically the same, as they should be.

Figure 12: For reference, this uses the full simulation sample, with the runs missing L2JetHigh triggers included. The figure shows the cross sections for the full data set (Black), the RFF data set (Blue) corrected with the RFF simu, and the FF data set (Green) corrected with the FF simu, all compared to theory (Red). The bottom plot shows the data / theory ratio for each of the three sets.

 

Figure 13: Same as figure 12, but now I use the simulation set containing only runs which have L2JetHigh triggers and which have reasonable agreement in the number of files between pythia.root and full reco simulation.

Both the difference in cross section between the RFF and FF field settings and the 50% offset between the combined data and theory seem to be mostly attributable to the simulation files that were missing L2JetHigh triggers. Pibero has identified several bugs in the simulation and is now rerunning those files.

With that issue addressed, the last thing to look at is the effect of underlying event and hadronization corrections. It may be a bit premature to look at these, seeing as new simulation is on the way, but I figured I would show what I have so people can get a feel for the size of these effects.

Figure 14: This figure shows the ratio of the number of reconstructed dijets which have a matching particle or parton level dijet to the total number of reconstructed dijets. This is the R1 factor, which accounts for false dijets. The dijet yield from data is multiplied by this number before the correction factor is applied. The spectra show the invariant mass before the matching cut (Black), after the matching cut (Blue), and failing the matching cut (Red).
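Putting the pieces together, a minimal sketch of how R1 enters the chain, following the caption above (all numbers hypothetical):

```python
# Sketch of the R1 false-dijet factor: the fraction of reconstructed dijets
# with a matching particle (or parton) level dijet, applied to the data
# yield before the truth/detector correction factor. Numbers are fake.
reco_total   = [500, 380, 210, 90, 30]   # reconstructed dijets per mass bin
reco_matched = [430, 340, 195, 85, 29]   # ... with a particle-level match

r1 = [m / t if t > 0 else 0.0 for m, t in zip(reco_matched, reco_total)]

data_yield = [620, 450, 240, 100, 35]                 # hypothetical data dijets
corrected  = [y * f for y, f in zip(data_yield, r1)]  # before C = truth/detector
print(r1)
print(corrected)
```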

Figure 15: This figure shows the correction factors (pythia / detector level full reco simu) for the particle level pythia and the parton level pythia.

 

Figure 16: This figure shows the data cross section corrected to particle level (Blue) and parton level (Red) and the difference between them.

Figure 17: This figure shows the data cross section (Black) corrected to particle level compared with the raw theory (Red) and the theory corrected for underlying event and hadronization (Blue).