Correlations Results Area




Welcome! On this page you will find results from the correlations & fluctuations code base. Please feel free to send any questions or comments to wjllope@wayne.edu.




Recent Related Materials:

Ayeh Jowzaee & W.J. Llope (PAs), "Beam-energy dependence of identified two-particle angular correlations in Au+Au collisions at RHIC", Phys. Rev. C 101, 014916 (2020)
Paper Homepage in STAR
Paper Homepage in HEPdata

Amelia Doetsch, CF presentation, June 17, 2020, Slides
Bill Llope, CF presentation, August 26, 2020, Slides
Nandita Raha, CF presentation, September 17, 2020, Slides
Bill Llope, CF presentation, April 28, 2022, Slides
Launa di Carlo, CF presentation, May 12, 2022, Slides
Nandita Raha, CF presentation, May 12, 2022, Slides
W.J. Llope for STAR, BES-Tea, May 20, 2022, Slides
Nandita Raha for STAR, intended for WPCF (but not approved), July 19, 2022, Slides




Denominator Mode

What distinguishes the present code from other correlations codes in the literature is its ability to form the correlation-function denominator in two different ways. In mixing, tracks fill both the numerator and the denominator inside the event loop; in convolution, the denominator is formed after all event processing as the 6-dimensional product of the two single-particle densities. Comparing the correlation functions obtained in these two ways is a highly nontrivial consistency check and a strong test of their accuracy. Note that the crossing correction in convolution is slightly less effective than that in mixing when the crossing is very strong (LS pions at high root-s, or LS protons at the lowest root-s).
PDF 
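The two denominator modes can be sketched in one dimension with made-up uncorrelated tracks (a toy illustration only; the actual analysis works with 6-dimensional pair densities):

```python
import numpy as np

rng = np.random.default_rng(1)
nbins, nev, ntrk = 16, 3000, 25
edges = np.linspace(-1.0, 1.0, nbins + 1)

num = np.zeros((nbins, nbins))      # same-event pair density rho2
den_mix = np.zeros((nbins, nbins))  # mixing: denominator filled inside the event loop
rho1 = np.zeros(nbins)              # single-particle density

h_prev = None
for _ in range(nev):
    x = rng.uniform(-1.0, 1.0, ntrk)   # toy "tracks", one kinematic variable
    h = np.bincount(np.digitize(x, edges) - 1, minlength=nbins).astype(float)
    num += np.outer(h, h) - np.diag(h)  # all distinct same-event pairs
    rho1 += h
    if h_prev is not None:              # mixing: pair tracks with the previous event
        den_mix += np.outer(h, h_prev)
    h_prev = h

# convolution: form the denominator after all event processing
# as the product of the two single-particle densities
den_conv = np.outer(rho1, rho1)

r2_mix = (num / num.sum()) / (den_mix / den_mix.sum())
r2_conv = (num / num.sum()) / (den_conv / den_conv.sum())
# for uncorrelated toy tracks, both R2's are flat near 1 and agree bin-by-bin
```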

"Closure"
When applying efficiency corrections to the densities, the effect on the R2 correlation functions is mathematically zero, since the single-particle efficiencies cancel in the ratio. Nonetheless, we show the comparison of the correlation functions at the "generator level" (i.e. a perfect detector), at the detector level (i.e. including inefficiency within the acceptance), and at the corrected level (i.e. after applying the efficiencies as weights when filling the densities). Several models were used for these tests, but note that only the urqmdLN results include light nuclei at the generator level.
PDF - Mixing (NMIX=40), urqmdLN 5M evts
PDF - Convolution (dPt=100), urqmdLN 5M evts
PDF - Convolution (dPt=100), ampt 1M evts
PDF - Convolution (dPt=100), hijing 1M evts
We also show the comparison of the measured and corrected R2's for STAR data. For these plots, a parameterized inefficiency map in (y,pt) from Evan Sangaline was applied as weights to the densities. This again shows that the R2 CFs are insensitive to single-particle inefficiency within a specific acceptance.
PDF - Mixing (NMIX=40, STAR data, 5M evts)
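The algebra behind this closure can be sketched in a few lines: if each particle is lost with a bin-dependent probability, the pair density picks up the product of the two single-particle efficiencies, which cancels against the same product in the denominator (toy densities and efficiencies, not the actual maps):

```python
import numpy as np

rng = np.random.default_rng(2)
nbins = 12

# toy generator-level densities: rho1, and a mildly correlated rho2
rho1 = rng.uniform(50.0, 150.0, nbins)
rho2 = np.outer(rho1, rho1) * rng.uniform(0.9, 1.1, (nbins, nbins))

# bin-dependent single-particle efficiency; a pair loses both particles independently
eff = rng.uniform(0.4, 0.9, nbins)
rho1_det = rho1 * eff
rho2_det = rho2 * np.outer(eff, eff)

def r2(rho2_, rho1_):
    # R2 as the pair density divided by the product of singles densities
    return rho2_ / np.outer(rho1_, rho1_)

r2_gen = r2(rho2, rho1)
r2_det = r2(rho2_det, rho1_det)  # identical: the efficiencies cancel cell-by-cell
```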

Comparison to 2020 Publication (Ayeh)
The projections of the R2 histograms from the STAR data obtained with the code used in the 2020 publication are compared to those from the modern code base. The input STAR data trees and the centrality definitions are the same in both cases. There may be slight differences in the cuts (e.g. ymax for protons), and the crossing correction has been improved (cf. LS pions at high root-s, or LS protons at the lowest root-s), but the projections from the two codes agree very well.
PDF 7.7, PDF 11.5, PDF 14.5, PDF 19.6, PDF 27, PDF 39, PDF 62.4, PDF 200(10), PDF 200(11)

Q-cuts
With mixing, it is possible to cleanly excise specific regions of the kinematic acceptance from both the numerator and the denominator. One example is invariant-mass cuts to remove phi mesons from R2's and balance functions. In the 2020 publication it was shown that femtoscopic pairs can be cleanly removed with cuts on the modulus of the 4-vector difference, Q. This PDF shows the R2 projections with no Q cut (black) and with Q cuts of 50, 100, 150, and 200 MeV/c. The intermediate- and long-range behavior of the CFs is completely stable.
PDF
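For reference, the pair variable being cut is the modulus of the 4-vector difference, Q = sqrt(-(P1-P2)^2). A minimal sketch (the momenta and the 100 MeV/c cut value below are illustrative):

```python
import math

def qinv(p1, p2, m1, m2):
    """Modulus of the 4-vector difference, Q = sqrt(-(P1 - P2)^2),
    for momenta (px, py, pz) in GeV/c and masses in GeV/c^2."""
    e1 = math.sqrt(m1 ** 2 + sum(c * c for c in p1))
    e2 = math.sqrt(m2 ** 2 + sum(c * c for c in p2))
    q2 = sum((a - b) ** 2 for a, b in zip(p1, p2)) - (e1 - e2) ** 2
    return math.sqrt(max(q2, 0.0))

mpi = 0.13957  # charged-pion mass, GeV/c^2

# a close, femtoscopic-like pair and a well-separated pair
q_close = qinv((0.30, 0.10, 0.50), (0.32, 0.11, 0.52), mpi, mpi)
q_far   = qinv((0.30, 0.10, 0.50), (-0.40, 0.20, 0.90), mpi, mpi)

qcut = 0.100  # GeV/c; pairs with Q < qcut are excised from numerator and denominator
# q_close falls below the cut (pair removed); q_far survives it (pair kept)
```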

Uncertainties
The modern code base allows a so-called "chunk mode", in which the dataset is divided into large subsets. This allows analysis jobs that would otherwise require months to complete in about a day - one trades disk space for time. A subsequent step "collects" the ROOT files from all of the chunks and produces the final histograms. These data allow one to calculate subgroup uncertainties, which do not require the typical (and unmotivated) assumption that the relevant uncertainty components are uncorrelated. The linked PDF shows that propagated uncertainties break down in mixing for pair types and data sets with insufficient counts to calculate the denominator. Subgroup uncertainties are the default for all physics plots. Additional details may be found in the CF PWG presentation from April 28, 2022.
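The subgroup approach can be sketched as follows: compute the observable independently in each chunk, then take the spread of the chunk values as the uncertainty of the mean. Because each chunk value already contains the full (correlated) numerator and denominator fluctuations, no independence assumption is needed. The numbers below are toy values, not analysis output:

```python
import numpy as np

rng = np.random.default_rng(3)
nchunks, nev = 20, 5000

# one R2 bin per chunk; numerator and denominator share event-wise
# multiplicity fluctuations, so they are correlated by construction
r2_chunks = []
for _ in range(nchunks):
    mult = rng.poisson(30.0, nev)
    num = (mult * (mult - 1)).sum()   # toy same-event pair count
    den = mult.sum() ** 2 / nev       # toy convolution-style denominator
    r2_chunks.append(num / den)
r2_chunks = np.array(r2_chunks)

r2 = r2_chunks.mean()
# subgroup uncertainty: the spread of the chunk values, with no
# assumption that numerator and denominator fluctuate independently
err = r2_chunks.std(ddof=1) / np.sqrt(nchunks)
```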

Reconstruction Efficiency within the Analysis Acceptance
The R2 ratios are insensitive to single-particle inefficiencies, and we explicitly correct for the two-particle inefficiency called "crossing". However, related observables, such as the cumulant correlators, C, and the balance functions, B, require explicit correction for the track-reconstruction efficiency within the acceptance. These efficiencies are measured using embedding simulations. The track-reconstruction efficiencies extracted from BES-I embedding for all particle species, centralities, and specific track-quality cut sets are shown in the following PDFs. For brevity, only cut sets 1 and 16 are shown in each PDF - these two sets span the cut space from most lenient (cuts 1) to most strict (cuts 16). On each page, the different frames correspond to rapidity slices spanning our analysis acceptance and show the efficiency versus transverse momentum, pt, in that slice. The black histograms are the embedding results, while the red curves are fits that are used to smooth out the statistical fluctuations in the embedding efficiencies. These embedding efficiency maps are applied as weights to the densities in the "W2" reader code results.
PDF 7.7, PDF 11.5, PDF 14.5, PDF 19.6, PDF 27, PDF 39, PDF 62.4, PDF 200, PDF 200(11), PDF 3.05
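The smoothing step can be sketched as follows. The functional form below, eps(pt) = p0*exp(-(p1/pt)^p2), is a common plateau-times-turn-on parameterization and is an assumption here; the actual fit function used in the analysis may differ. The "embedding" points are toy values, and a crude grid search stands in for a full fitter:

```python
import numpy as np

def eff_model(pt, p0, p1, p2):
    # assumed plateau-times-turn-on shape (illustrative, not the analysis fit)
    return p0 * np.exp(-((p1 / pt) ** p2))

rng = np.random.default_rng(4)
pt = np.linspace(0.2, 2.0, 40)
true = eff_model(pt, 0.85, 0.35, 2.5)
emb = np.clip(true + rng.normal(0.0, 0.02, pt.size), 0.0, 1.0)  # toy "embedding" points

# crude grid-search least squares in place of a full minimizer
best, best_chi2 = None, np.inf
for p0 in np.linspace(0.70, 0.95, 26):
    for p1 in np.linspace(0.20, 0.50, 31):
        for p2 in np.linspace(1.5, 3.5, 21):
            chi2 = ((emb - eff_model(pt, p0, p1, p2)) ** 2).sum()
            if chi2 < best_chi2:
                best, best_chi2 = (p0, p1, p2), chi2

# the smoothed curve replaces the noisy per-bin efficiencies
# before they are applied as weights to the densities
smooth = eff_model(pt, *best)
```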