Quick Shower Shape Study
Below are a few quick studies showing the robustness of the Bayesian model comparison approach to shower shape analysis. A review of the algorithm is still available on local MIT space, although at the moment some of the technical details are out of date.
In each study the simulated showers were created with total (integrated) energies between 7 and 15 GeV and HWHM between 0.5 cm and 1 cm (for reference, a strip is around 1.5 cm). These values were chosen in accordance with experience from full detector simulations. Detector reconstruction was simulated by smearing the strip depositions (the energy of the showers integrated across each strip) with a Gaussian distribution, N(E, 0.06 * sqrt(E)).
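The event generation described above can be sketched as follows: a Cauchy shower profile is integrated across 1.5 cm strips and each strip deposition is then smeared with N(E, 0.06*sqrt(E)). This is only a minimal illustration of the setup; the strip count, shower center, and random seed are my own illustrative choices, not taken from the actual study code.

```python
import numpy as np

rng = np.random.default_rng(0)

STRIP_WIDTH = 1.5  # cm, as quoted in the text
N_STRIPS = 10      # illustrative choice

def cauchy_strip_depositions(total_e, mu, hwhm):
    """Energy of a Cauchy (Lorentzian) shower integrated over each strip."""
    edges = (np.arange(N_STRIPS + 1) - N_STRIPS / 2) * STRIP_WIDTH
    cdf = np.arctan((edges - mu) / hwhm) / np.pi + 0.5
    return total_e * np.diff(cdf)

def smear(depositions):
    """Detector response: smear each strip energy with sigma = 0.06*sqrt(E)."""
    sigma = 0.06 * np.sqrt(np.clip(depositions, 0.0, None))
    return depositions + rng.normal(size=depositions.shape) * sigma

# Shower parameters drawn from the ranges used in the studies.
e_tot = rng.uniform(7.0, 15.0)  # GeV
hwhm = rng.uniform(0.5, 1.0)    # cm
deps = smear(cauchy_strip_depositions(e_tot, 0.0, hwhm))
```

Note that the Cauchy tails extend well past the instrumented strips, so the summed depositions fall somewhat short of the total shower energy.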
The first question that was raised was the behavior of the algorithm when neither the single-shower nor the double-shower hypothesis is true. I argued that the algorithm would still perform well because the double shower model can fit the data much better than the single shower model, and the difference between the evidences, which is reported by the algorithm, would be sensible. This is borne out in the test, with the double and triple shower results being statistically equivalent.*
* Except for the region near dLogZ = 0, where one expects single showers. This is just a manifestation of the fact that the probability of two showers overlapping to form an apparent single shower is much larger than the probability that three independent showers all overlap.
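The evidence comparison behind dLogZ can be illustrated with a toy version: brute-force grid integration of the likelihood under a one-shower and a two-shower model, with flat priors over the shower positions and everything else (energies, widths, noise) held fixed for brevity. The grids, noise level, and equal energy split are my own simplifications, not the actual algorithm's priors or sampler.

```python
import numpy as np

STRIPS = np.arange(10) * 1.5 - 6.75  # strip centers, 1.5 cm pitch (illustrative)
SIGMA = 0.3                          # toy per-strip noise in GeV (illustrative)

def strip_model(mu, e_tot, hwhm=0.75):
    """Cauchy shower of total energy e_tot integrated over 1.5 cm strips."""
    lo, hi = STRIPS - 0.75, STRIPS + 0.75
    return e_tot / np.pi * (np.arctan((hi - mu) / hwhm)
                            - np.arctan((lo - mu) / hwhm))

def log_like(data, pred):
    """Gaussian log likelihood (up to a constant shared by both models)."""
    return -0.5 * np.sum(((data - pred) / SIGMA) ** 2)

def log_evidence_1(data, mus, e_tot):
    """log Z_1 = log[(1/N) * sum_mu L(mu)], flat prior over the position grid."""
    lls = np.array([log_like(data, strip_model(m, e_tot)) for m in mus])
    return np.logaddexp.reduce(lls) - np.log(len(mus))

def log_evidence_2(data, mus, e_tot):
    """Two-shower evidence: flat prior over both positions, energy split fixed."""
    lls = np.array([[log_like(data, strip_model(m1, e_tot / 2)
                                    + strip_model(m2, e_tot / 2))
                     for m2 in mus] for m1 in mus])
    return np.logaddexp.reduce(lls.ravel()) - 2 * np.log(len(mus))

mus = np.linspace(-5.0, 5.0, 41)
# A genuine two-shower event (noise-free here, to keep the example deterministic).
data = strip_model(-2.0, 5.0) + strip_model(2.0, 5.0)
dlogz = log_evidence_2(data, mus, 10.0) - log_evidence_1(data, mus, 10.0)
```

For a well-separated two-shower event like this one the likelihood gain of the double-shower model overwhelms its Occam penalty (the extra -log N in the grid average), so dLogZ comes out strongly positive.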
The second study addresses the robustness of the algorithm under the Cauchy shower model hypothesis. We see the results of the algorithm on two data sets, one created with Cauchy shower shapes and the other with Gaussian shower shapes. While the performance of the algorithm worsens for the Gaussian data (as expected, since the double shower model has the opportunity to compensate for deviations between the data and the model, pulling the dLogZ distribution for single showers to the left), the overall performance remains strong.
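The mismatch the second study probes is the tail behavior of the two profiles. A short comparison of the energy fraction contained within a given half-width of the shower center, for a Cauchy and a Gaussian profile with the same HWHM, makes the difference concrete. The HWHM value and the chosen half-widths are illustrative.

```python
import math

HWHM = 0.75  # cm, illustrative; Cauchy scale gamma equals the HWHM
SIGMA = HWHM / math.sqrt(2.0 * math.log(2.0))  # Gaussian sigma with same HWHM

def contained_cauchy(w):
    """Fraction of a Cauchy shower's energy within +/- w cm of its center."""
    return 2.0 / math.pi * math.atan(w / HWHM)

def contained_gauss(w):
    """Fraction of a Gaussian shower's energy within +/- w cm of its center."""
    return math.erf(w / (SIGMA * math.sqrt(2.0)))

for w in (0.75, 1.5, 3.0):  # half-strip, one-strip, two-strip half-widths
    print(f"+/-{w} cm: Cauchy {contained_cauchy(w):.3f}, "
          f"Gaussian {contained_gauss(w):.3f}")
```

By construction the Cauchy profile holds only 50% of its energy within one HWHM while the Gaussian holds about 76%, and the gap persists out to several strips: the Cauchy's heavy tails leak energy into distant strips that a Gaussian fit would never predict, which is the slack the double shower model exploits on Gaussian data.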
Note that the difference between the Cauchy and Gaussian distributions is NOT relevant to data/simulation comparisons. Provided that the simulation performs adequately, the output of the algorithm will be the same for the data and the simulation. Any defect in the simulation will manifest as deviations between the output distributions in data and simulation.