Efficiency loss from PPV cuts in Run 12 pp510

Zilong Chang has demonstrated that the default vertex finder cuts resulted in an efficiency loss in Run 12 pp510 reconstruction relative to other cut sets (see this presentation, or slide 9 of this presentation). Here I attempt to quantify the approximate total efficiency loss of reconstructable events.

(Note: the plots on this page are larger than displayed, so you can open an image in a new browser window/tab to see it at higher resolution.)

Step 1. Determine the relative efficiency loss due to using the wrong cuts:
I divided Zilong's "Good Primary Vertex" efficiency numbers from "default PPV" by "Modified PPV 2":
95.3/96.6 = 0.9865
90.4/91.1 = 0.9923
52.2/77.0 = 0.6779
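These ratios can be reproduced in a few lines; the efficiency numbers are taken directly from Zilong's slides:

```python
# "Good Primary Vertex" efficiencies (%) from Zilong's slides,
# as (default PPV, Modified PPV 2) pairs at three luminosity points.
pairs = [(95.3, 96.6), (90.4, 91.1), (52.2, 77.0)]

# Relative efficiency of the default cuts with respect to Modified PPV 2
ratios = [default / modified for default, modified in pairs]

for r in ratios:
    print(f"{r:.4f}")  # prints 0.9865, 0.9923, 0.6779
```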

Step 2. Model this efficiency loss as a function of luminosity:
I plotted the above numbers vs. ZDC coincidence rate and found a function that describes the loss to first order. I show this in the first plot using the function 1.0-2.2e-6*x*x, where x is the ZDC coincidence rate in kHz:
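As a minimal sketch, the model is just a quadratic falloff in the ZDC coincidence rate (the 380 kHz evaluation point below is an illustrative value I chose, not one of the measured points):

```python
def rel_efficiency(zdcx_khz):
    """First-order model of the relative 'Good Primary Vertex' efficiency
    (default PPV / Modified PPV 2) vs. ZDC coincidence rate in kHz."""
    return 1.0 - 2.2e-6 * zdcx_khz**2

# At zero luminosity the cuts cost nothing; at high rate the loss is large.
# E.g. near 380 kHz the model predicts roughly a 32% relative loss:
print(rel_efficiency(380.0))  # prints 0.68232
```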

Step 3. Plot this efficiency vs. time:
Using an ntuple of ZDC coincidence rates from Run 12 pp510, I selected runs with good status, the TPC in the data stream, and a ZDC coincidence rate greater than 10 Hz. I plotted the ZDC coincidence rate (zdcx) vs. (unix) time, and the efficiency loss function vs. time, in the following two plots:
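The selection and evaluation above can be sketched as follows; the record layout here is a hypothetical stand-in for whatever fields the real ntuple contains:

```python
# Hypothetical run records: (unix_time, zdcx in Hz, good_status, tpc_in_stream).
# Real values come from the Run 12 pp510 ntuple.
runs = [
    (1330000000, 250_000, True, True),
    (1330100000, 5, True, True),         # fails the zdcx > 10 Hz cut
    (1330200000, 120_000, False, True),  # fails the good-status cut
]

# Step 3 cuts: good status, TPC in the data stream, zdcx > 10 Hz
selected = [(t, zdcx) for t, zdcx, good, tpc in runs if good and tpc and zdcx > 10]

# Evaluate the Step 2 loss model (which takes kHz) at each selected point
eff_vs_time = [(t, 1.0 - 2.2e-6 * (zdcx / 1000.0) ** 2) for t, zdcx in selected]
```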

Step 4. Determine the time-weighted efficiency loss:
The next plots show the distribution of the efficiency loss function (i.e. the projection of the efficiency-vs-time plot above); its mean, weighted by time, is about 88%. I made the plots using zdcx cuts of 10 Hz (first plot) and 25 kHz (second plot) to remove the spike at ~1.0 from low-luminosity data samples; raising the cut dropped the mean from 0.8841 to 0.8754. That difference is below other uncertainties in my model, so it isn't worth arguing about.
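Since the curve is sampled at (roughly) uniform time intervals, the time-weighted mean is just the mean of the projected values, and the effect of the low-luminosity spike can be checked by raising the zdcx cut. A sketch with invented placeholder samples (not the real Run 12 values):

```python
# Placeholder (time, zdcx_khz, efficiency) samples at uniform time spacing,
# using the Step 2 model; real inputs come from the ntuple.
samples = [(t, z, 1.0 - 2.2e-6 * z**2)
           for t, z in [(0, 5.0), (1, 300.0), (2, 350.0), (3, 320.0)]]

def time_weighted_mean(samples, zdcx_cut_khz):
    """Mean efficiency over samples passing the zdcx cut
    (equals the time-weighted mean for uniform time spacing)."""
    effs = [e for _, z, e in samples if z > zdcx_cut_khz]
    return sum(effs) / len(effs)

# A loose cut keeps the ~1.0 spike from the low-luminosity point;
# a 25 kHz cut removes it and lowers the mean slightly.
loose = time_weighted_mean(samples, 0.01)
tight = time_weighted_mean(samples, 25.0)
```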

In other words, the default vertex finder cuts cost us roughly 12% of the events that we could have reconstructed. Zilong has already demonstrated that embedding replicates these efficiency losses, so they should not introduce a significant additional uncertainty into physics analyses.