AuAu200 (2007)

AuAu data was taken at 200 GeV during 2007 (Run VII). We have Full Field (FF) and Reversed Full Field (RFF) data. During data processing, we will again use the event-by-event SpaceCharge + GridLeak correction method. As a reminder, this means (a code sketch of the logic follows the list):
  1. A scaler-based correction is used as an initial guess for a prepass to determine more accurately the distortion correction needed for a particular file.
  2. The prepass value is used to start the production pass.
  3. Once enough statistics are built up, the event-by-event method kicks in.
  4. If there is a long gap in time, or a long series of low statistics events, we revert to the prepass value.
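
A minimal sketch of this fallback logic, using hypothetical names and thresholds (an illustration only, not the actual STAR code):

  // Sketch of the event-by-event fallback described in steps 1-4 above.
  struct EbyeState {
    double prepassSC;      // correction determined by the prepass (steps 1-2)
    double ebyeSC;         // running event-by-event measurement (step 3)
    int    nAccumulated;   // statistics accumulated toward the e-by-e value
    double lastEventTime;  // time of the previous event
  };

  const double kMaxGapSeconds = 60.;   // assumed stand-in for "a long gap in time"
  const int    kMinStatistics = 1000;  // assumed stand-in for "enough statistics"

  double correctionForEvent(EbyeState& s, double eventTime, int nTracks) {
    // Step 4: after a long gap in time, revert to the prepass value by
    // resetting the accumulated statistics.
    if (eventTime - s.lastEventTime > kMaxGapSeconds) s.nAccumulated = 0;
    s.lastEventTime = eventTime;
    s.nAccumulated += nTracks;
    // Steps 2-3: start from the prepass value; once enough statistics
    // build up, the event-by-event measurement takes over.
    return (s.nAccumulated < kMinStatistics) ? s.prepassSC : s.ebyeSC;
  }
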
In this scenario, it is not imperative that the scaler-based calibration of the SpaceCharge-to-scaler ratio (we call it SC herein) be perfect. However, it is imperative to get the value correlating GridLeak to SpaceCharge (we call it GL herein) correct. It is also beneficial to the event-by-event method to have SC as close to correct as possible, as that expedites convergence on the best solution.
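
Schematically (my own notation, not the exact code variables), the corrections scale as:

  SC = slope * (scaler rate - SO)    ... SpaceCharge distortion scale
  GL * SC                            ... GridLeak distortion scale

where SO is the scaler offset below which no distortion is applied (the scaler chosen is the ZDC coincidence rate, as discussed below). An error in GL thus mis-scales the GridLeak correction at every luminosity, while an error in the SC calibration can still be recovered as the event-by-event method converges.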

It appears that the SpaceCharge & GridLeak distortions are larger than they have ever been before. At this point (and we have seen this in the past with the highest luminosities in CuCu), we begin to have track splitting due to the GridLeak distortion, so the calibration cannot begin with zero corrections. I therefore took a stab at a few initial starting points based on old AuAu data. It also became clear that the ZDC coincidence rate (ZDCx) has the tightest correlation with the distortions.

In the end, the calibration converged quite well for both FF and RFF data. Of note, the values of SC (corrected for the use of ZDCx instead of the ZDC east+west sum we used in 2004) and GL stayed near what has been measured in the past, which is good confirmation of our understanding of these effects. Additionally, the FF and RFF values match quite well with each other.

The low luminosity data turned out to be important in nailing down the behavior of the ionization distortions. It exposed a small systematic problem in the Calib_SC_GL.C macro (using histogram bins and thereby losing some information), which was introducing some of the offset at zero luminosity; that is now fixed. One conclusion from the good match between the FF and RFF offsets is that there does appear to be a real offset at low luminosity, such that there is no distortion below a certain luminosity (ZDCx of approximately 500 Hz in AuAu).
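
A toy ROOT macro illustrating the kind of binning artifact involved (my illustration, not the actual Calib_SC_GL.C or its fix): with a steeply falling luminosity spectrum, fitting coarse histogram bins at their centers biases the fitted zero-luminosity offset, while fitting the unbinned points does not.

  // Toy: fit SC vs ZDCx from coarse profile bins versus unbinned points.
  #include <cstdio>
  #include "TF1.h"
  #include "TGraph.h"
  #include "TProfile.h"
  #include "TRandom3.h"

  void binningToy() {
    TRandom3 rng(1);
    TProfile prof("prof", "SC vs ZDCx;ZDCx (Hz);SC", 10, 0., 50000.);
    TGraph graph;
    const double slope = 8.22e-7, offset = 505.;  // FF values from this study
    for (int i = 0; i < 10000; ++i) {
      // A steeply falling luminosity spectrum populates bins unevenly.
      double zdcx = 500. + rng.Exp(8000.);
      double sc   = slope * (zdcx - offset) + rng.Gaus(0., 2e-5);
      prof.Fill(zdcx, sc);
      graph.SetPoint(graph.GetN(), zdcx, sc);
    }
    TF1 line("line", "[0]*(x-[1])");
    line.SetParameters(slope, offset);
    prof.Fit(&line, "Q");   // bin centers stand in for the true mean ZDCx
    printf("binned offset:   %g Hz\n", line.GetParameter(1));
    line.SetParameters(slope, offset);
    graph.Fit(&line, "Q");  // unbinned points keep the full information
    printf("unbinned offset: %g Hz\n", line.GetParameter(1));
  }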


Full Field

* Constraint on SC x GL = 7.6e-06
* Guesses on SC = 8.35e-07 , 8.22e-07
* Guesses on GL = 9.11 , 9.26
* Guesses on SO = 503 , 506

*** FINAL CALIBRATION VALUES: ***
SC = 8.22e-07 * (ZDCx - 505) with GL = 9.26


Reversed Full Field

* Constraint on SC x GL = 8.06e-06
* Guesses on SC = 9e-07 , 8.74e-07
* Guesses on GL = 8.96 , 9.23
* Guesses on SO = 632 , 626

*** FINAL CALIBRATION VALUES: ***
SC = 8.74e-07 * (ZDCx - 629) with GL = 9.23
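
As a cross-check, the products of the final slopes and GL values match the constraints above to within rounding: 8.22e-07 * 9.26 = 7.61e-06 (vs 7.6e-06, FF) and 8.74e-07 * 9.23 = 8.07e-06 (vs 8.06e-06, RFF). For concreteness, here is a hypothetical helper (mine, not STAR code) applying these final values; the clamp to zero below the offset follows the low-luminosity observation above, though whether the production code clamps this way is an assumption of the sketch:

  // Hypothetical helper applying the final Run VII calibration values.
  double spaceChargeRun7(double zdcx, bool reversedFullField) {
    const double slope  = reversedFullField ? 8.74e-7 : 8.22e-7;
    const double offset = reversedFullField ? 629.    : 505.;
    const double sc = slope * (zdcx - offset);
    return (sc > 0.) ? sc : 0.;  // assumed: no distortion below the offset
  }

  // The GridLeak correction scales with SC via the GL ratio.
  double gridLeakRun7(double zdcx, bool reversedFullField) {
    const double gl = reversedFullField ? 9.23 : 9.26;
    return gl * spaceChargeRun7(zdcx, reversedFullField);
  }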

Gene Van Buren

Run7 Prepass Evaluation

With all calibrations now done, I wanted to see whether running with the Prepass was still any better than running with the 1-second RICH scalers, which are, as of the Run 7 data, fully available through the DAQ and production software chains. (A follow-up may be necessary to decide whether the 1-second scalers are truly available for the entire run, as there may have been some outages.)

I analyzed two samples:

  • 5067 low luminosity minbias events from run 8120055
  • 1890 btag events from run 8141106

I processed each dataset twice: with a Prepass and without. For the btag data, the Prepass took between 3 and 9 events, and all the files had 200 or more events. For the minbias data, it took between 5 and 15 events, with over 500 events per file. So I will look separately at the performance of the Prepass events (i.e. the first 9 or 15 events of each file) and of the events after that, using the signed DCA distributions at the primary vertex. These are presented here for two fits to the DCA distributions: a single Gaussian over +/-0.4 cm, and a double Gaussian over +/-0.6 cm, which more accurately describes the data because tracks with silicon hits have a much narrower DCA distribution:
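
For reference, a minimal ROOT sketch of the two fits just described (hDca is a hypothetical histogram of the signed DCA in cm, and the starting parameters are guesses for illustration, not the values used here):

  #include "TF1.h"
  #include "TH1.h"

  void fitDca(TH1* hDca) {
    // Single Gaussian over +/-0.4 cm.
    TF1 fSingle("fSingle", "gaus", -0.4, 0.4);
    hDca->Fit(&fSingle, "QR");

    // Double Gaussian over +/-0.6 cm: a narrow core (tracks with silicon
    // hits) plus a broad component (TPC-only tracks).
    TF1 fDouble("fDouble", "gaus(0)+gaus(3)", -0.6, 0.6);
    fDouble.SetParameters(hDca->GetMaximum(), 0., 0.05,        // narrow core
                          0.3 * hDca->GetMaximum(), 0., 0.3);  // broad TPC-only
    hDca->Fit(&fDouble, "QR");
  }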

We learn that the distributions look essentially identical whether or not a Prepass is used, regardless of whether those events were the ones used to determine the Prepass calibration ("Prepass events"). The only point here possibly favoring the use of a Prepass is the mean of the broad (i.e. TPC-only tracks) DCA distribution in the double Gaussian fit for the btag Prepass events. But this argument is countered by the same quantity being slightly better without the Prepass for the low luminosity minbias events.

An additional check of relative performance can be made by simply counting tracks reconstructed:

It should be noted here that the FTPC was in the btag sample, but not the minbias sample, so the btag sample is shown both for all tracks and for those excluding the FTPC (i.e. mid-rapidity; they must have a TPC hit). Note that the btag sample has more primaries and globals per event, most likely both from trigger bias relative to the minbias sample and from pileup/multiple vertices, since the btag was recorded at high luminosity; the slight decrease in primaries/globals at mid-rapidity for the btag is probably due to pileup or backgrounds in the higher luminosity running contributing more to the globals. The important comparison is whether the Prepass makes any impact, and it appears not to do so with any noticeable significance. I will note for the record that in all three of these track samples (minbias, btag, btag mid-eta) there were approximately 0.05+/-0.08% more primary tracks for the production without the Prepass.

Getting closer to the physics level, we can see how the Prepass affects reconstructing Lambdas. Here is the Lambda invariant mass spectrum from the entire btag V0 sample (no further cuts) with no Prepass (left, or first), and with Prepass (right, or second):

Using amplitude times width (the area of a Gaussian is amplitude * width * sqrt(2*pi), so the sqrt(2*pi) cancels in the ratio): (7.61553*1.96626)/(7.69510*1.95244) = 0.996664 as many Lambdas in the Prepass as in the no Prepass sample. So I see a very small amount more in the no Prepass sample (about 0.3%), and a narrower width (about 0.7%), but both statements are within error of the two mass spectra being equivalent.

Conclusion

From these samples, there is no significant benefit to using the Prepass. I recommend not using it in further reconstruction of the Run 7 (2007) AuAu200 dataset.

Gene Van Buren