pAu centrality

Meeting agenda

Meeting ID: 933 8996 8512
Passcode: 524959
One tap mobile
+13462487799,,93389968512# US (Houston)
+12532158782,,93389968512# US (Tacoma)

2022/03/01

Yanfang Liu

a) Presented the latest Jpsi RpAu in different centrality bins, where RpAu in central events is seen to be much higher than in peripheral events. There are two issues in the current results: i) the trigger bias factor for MB events is used for all centrality classes, while it should have a strong dependence on centrality; ii) the equivalent number of MB events for each centrality class is calculated based on Glauber centrality intervals, while it should be calculated based on multiplicity cuts. Yanfang will fix these two issues and update RpAu. 
b) We also discussed the differences in Jpsi RpAu between PHENIX and STAR. PHENIX uses the BBC as the centrality classifier while STAR uses the underlying event multiplicity at mid-rapidity. It could be that these two methods of centrality definition categorize events in very different ways. It was suggested to check the correlation between the BBC signal and the mid-rapidity multiplicity to see how strongly they are correlated.
c) Since we cannot reliably calculate Ncoll for the BBC signal, one can use the normalized Jpsi yield vs. normalized event multiplicity (event activity) to compare the BBC and mid-rapidity multiplicity classifiers. It should be straightforward to calculate this with all the information in hand. 
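The normalized comparison in c) can be sketched in a few lines. All numbers below are invented placeholders, not analysis values; only the normalization logic is the point:

```python
# Sketch (hypothetical numbers) of the self-normalized comparison: dividing both
# the Jpsi yield and the event activity by their MB averages removes the need
# for Ncoll, so a BBC-based and a mid-rapidity-based classifier can be overlaid.
yields = [0.8, 1.5, 2.6, 4.0]       # Jpsi yield per event in each EA bin (made up)
activity = [2.0, 4.5, 8.0, 13.0]    # mean event activity in each EA bin (made up)
fractions = [0.4, 0.3, 0.2, 0.1]    # fraction of MB events in each EA bin (made up)

# MB averages are the event-fraction-weighted means over the EA bins
mb_yield = sum(f * y for f, y in zip(fractions, yields))
mb_activity = sum(f * a for f, a in zip(fractions, activity))

norm_yield = [y / mb_yield for y in yields]          # yield / <yield>_MB
norm_activity = [a / mb_activity for a in activity]  # EA / <EA>_MB
```

Plotting norm_yield vs. norm_activity for each classifier then allows a direct comparison without any Glauber input.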



2021/11/16

Tong Liu:

a) Currently investigating smearing effects in assigning events into different centrality classes, especially for peripheral events where the dynamic range of the event multiplicity could be as small as 1. Such smearing effects could arise from fluctuations in event multiplicity and trigger axis selection, TPC efficiency and its non-uniformity in phi, etc. The current approach is to pick a random trigger axis in an MB event multiple times, and check the possible differences in the resulting spectra on an ensemble basis.
b) It was also suggested to use HIJING or PYTHIA, with the TPC efficiency from single-particle embedding put in by hand, to check the smearing effects. The advantage of using event generators is that the truth is known. 
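The ensemble re-sampling idea in a) can be illustrated with a toy. This is not the analysis code; the wedge definition (pi/2 +- 1 around the trigger axis) and track counts are invented for illustration:

```python
# Toy sketch: re-sample a random "trigger" axis many times for the same event
# and look at the spread of the transverse-region multiplicity.
import math
import random

random.seed(1)

def transverse_mult(track_phis, trig_phi, half_width=1.0):
    """Count tracks with |dphi| within pi/2 +- half_width of the trigger axis."""
    n = 0
    for phi in track_phis:
        # fold the azimuthal difference into [0, pi]
        dphi = abs((phi - trig_phi + math.pi) % (2 * math.pi) - math.pi)
        if abs(dphi - math.pi / 2) < half_width:
            n += 1
    return n

# one toy event with 20 tracks uniform in phi
event = [random.uniform(0, 2 * math.pi) for _ in range(20)]

# ensemble: 500 random trigger axes for the same event
samples = [transverse_mult(event, random.uniform(0, 2 * math.pi)) for _ in range(500)]
mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

The ensemble spread quantifies how much the random-axis choice alone smears the event-activity variable, before any detector effects are added.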



2021/09/28

Tong Liu:

a) Use an iterative bin-by-bin correction to account for the TPC tracking efficiency. Large fluctuations are seen above 10 GeV/c after weighting the embedding sample with a realistic charged particle spectrum, which is due to outliers in the reconstructed track momentum. Cutting on the momentum resolution within 3 sigma can largely reduce such fluctuations. 
b) The current approach used to take into account the luminosity-dependent tracking efficiency is to correct data in different luminosity bins, and combine them after correction. Alternatively, one can get the average efficiency over the entire luminosity range, and apply it to data. This can be done by ensuring that the luminosity profile in embedding is the same as in data. It was also found that the error bars on the tracking efficiency are unexpectedly large, and this needs to be checked. 
c) In terms of what can be shown at the upcoming DNP meeting, it was suggested to show the ratios of charged particle spectra with bTOF matching in different event activity bins, since the TPC tracking and bTOF matching efficiencies are expected to cancel. Here the bTOF matching is required to eliminate pileup tracks. These ratios need to be labeled properly depending on what is used as the reference, and uncertainties on the Ncoll values need to be added.
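The 3-sigma cut mentioned in a) amounts to rejecting tracks with an outlying momentum residual before the spectrum weight is applied. A minimal sketch, with an assumed (not measured) relative resolution:

```python
# A single badly reconstructed track (pT_rc far above pT_mc) picks up a huge
# spectrum weight and produces a spike at high pT; cutting the relative
# residual at 3 sigma removes such entries.
def keep_track(pt_mc, pt_rc, sigma_rel=0.02, nsig=3.0):
    """Keep a track only if |pT_rc - pT_mc| / pT_mc < nsig * sigma_rel.
    sigma_rel is an assumed relative momentum resolution, not the real TPC value."""
    return abs(pt_rc - pt_mc) / pt_mc < nsig * sigma_rel

# (pT_mc, pT_rc) pairs; the last one is an outlier in reconstructed momentum
tracks = [(1.0, 1.01), (5.0, 5.2), (2.0, 14.0)]
kept = [t for t in tracks if keep_track(*t)]
```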



2021/08/31

Tong Liu:

a) Checked the charged particle efficiency in different vz, event activity and luminosity bins. There is little vz dependence. A hint of higher efficiency for larger EA is seen, which could be caused by EA selection bias. It was suggested to fit the efficiencies above 2 GeV/c to quantify the difference between different EAs. One can also check the luminosity distributions for different EAs or apply luminosity calibration before selecting EA. A visible dependence on luminosity is seen, as expected. 


2021/08/17

Tong Liu:

a) Currently looking at the embedding sample to evaluate the tracking efficiency. The statistics become marginal when dividing the embedding sample into 64 ZDC-vz-RandMult bins. It was suggested to check the efficiency as a function of vz and RandMult; if the dependence is very weak, there is no need to divide the embedding sample into small vz or RandMult bins.
b) To obtain the inclusive charged hadron efficiency from pi/K/p embedding, one can use the measured pi/K/p spectra in 200 GeV p+p collisions as weights. The limited pT reach (~6 GeV/c) of these spectra should not be an issue since the tracking efficiencies for pi/K/p converge at high pT. 
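The spectra-weighted combination in b) is just a yield-weighted average per pT bin; a minimal sketch with invented efficiencies and relative yields:

```python
# Combine species efficiencies into an inclusive charged-hadron efficiency,
# weighting by the measured pi/K/p spectra (all values here are made up).
def inclusive_efficiency(effs, yields):
    """Yield-weighted average: eff_h = sum_i(y_i * eff_i) / sum_i(y_i)."""
    total = sum(yields)
    return sum(e * y for e, y in zip(effs, yields)) / total

# one pT bin: (pi, K, p) efficiencies and relative yields (hypothetical)
eff = inclusive_efficiency([0.80, 0.75, 0.78], [0.7, 0.2, 0.1])
```

Above the pT reach of the measured spectra, the weights matter little since the per-species efficiencies converge anyway.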

Yanfang Liu

a) Mainly writing the code to calculate the trigger bias factors in different centrality bins.
b) When calculating the charged particle multiplicity in the transverse region, Yanfang only required TOF match, without any track quality cuts. It might be a good idea to use the same quality cuts as Tong does for consistency, even though the effect should be quite small. 



2021/08/03

Tong Liu:

a) Use a new "MB" distribution, without the mult = 0 bin, as a reference. The ratios of charged particle spectra in different centrality bins to the new MB reference are essentially the same between the LowLuminosity and LowLuminosity+TofMatch scenarios, which indicates that the pileup contribution is small when selecting low-luminosity data. For the case of TofMatch only, the ratios deviate from the two aforementioned scenarios by about 10%. The next step is to correct these spectra with the TPC tracking efficiency, and check the consistency.
b) It was suggested to compare ZDC rate distributions for different centrality bins, and check if they are compatible as expected. 



2021/07/20

Tong Liu:

a) Checked charged particle yield ratios between different centrality classes and MB for several scenarios, i.e. inclusive, low luminosity, TOF matching, low luminosity + TOF matching. Charged particles are taken from MB events in the full phase space excluding the area where the centrality multiplicity is defined, so in the ratios the tracking efficiency is supposed to cancel. It was found that the ratios barely change between low luminosity and low luminosity + TOF matching. On the other hand, the ratios change sizably for the most central and most peripheral collisions in the "TOF matching" case. Several suggestions were made: i) check the luminosity distributions for different centrality classes; ii) exclude the mult = 0 bin in the MB sample; iii) apply the tracking efficiency and compare directly the particle yields for the cases of TOF matching and low luminosity + TOF matching.
b) The TOF matching efficiency seems to be lower than 50% even for ZDCx < 10 kHz. Will check even lower luminosity values, and maybe extrapolate the matching efficiency to zero luminosity. It was pointed out that the TOF matching efficiency also depends on the NHitsFit cut used, since matched tracks need to traverse the whole TPC in order to be projected to TOF.

2021/07/06

Tong Liu:

a) Checked the single track Rcp in pAu collisions for different configurations, i.e. inclusive, low luminosity, TOF matching. It was concluded that the pileup contribution makes Rcp smaller, since it has a larger impact on peripheral events than on central events. 
b) Still need to unfold the jet spectrum to take into account the different jet energy resolution (JER) in peripheral and central events. This is needed to interpret the jet Rcp. 



2021/06/08

Tong Liu:

a) Will continue investigating why the data distribution is systematically higher than Glauber at high multiplicity.
b) Fix the bug of not reading in the last track in the array.
c) The uncorrected full jet Rcp in pAu collisions has a peak around 5-10 GeV/c (Rcp ~ 2), and decreases monotonically between 10-30 GeV/c. Need to unfold the spectra in central and peripheral events separately, and check the ratio again.

Yanfang Liu:

a) Exclude muons from the underlying event calculation, and refit the Jpsi signal peaks in different centrality bins.
b) The efficiency correction should be independent of centrality, so the efficiency for the MB sample can be used for individual centrality bins.
c) Currently working on extracting the trigger bias factor for different centrality bins.



2021/05/25

Tong Liu (SLIDES):

a) Widened the vertex cut from [-10 cm, 20 cm] to [-30 cm, 30 cm] to recover statistics. A cut of ZDCx < 23.5 kHz is applied to avoid the few-percent jump in event multiplicity, which results in a few-percent loss in statistics. The effect on the final results should be very small even if no ZDCx cut is used.
b) Apply 2D centrality calibration as a function of ZDCx and vz, which improves the agreement in event multiplicity distributions in different ZDCx bins after calibration. For multiplicity distributions in different vz bins, larger variations in shape are seen for events with larger vz values. Even after 2D correction, one can still see residual differences in shapes of multiplicity distributions, which can be fixed by doing a shape correction, i.e. use the difference as additional weights.
c) Performed Glauber fits to the multiplicity distributions from Yanfang and Tong, which should only differ in the vz ranges used. For Yanfang's distribution, the fit works quite well above Mult = 5, while a slight underfit is seen at Mult ~ 4. For Tong's distribution, the Glauber distribution is systematically harder than the data at high multiplicity values. It was suggested to check: i) two 1D calibrations vs. the 2D calibration; ii) fitting the multiplicity distributions in different vz ranges separately.
d) Tong also showed that the Glauber fit yields different results when assuming the produced multiplicity is proportional to Ncoll or to Npart. It was not clear why this is the case, and Tong will send out the detailed procedure of the Glauber fit. 
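The Ncoll-vs-Npart ambiguity in d) comes from which ancestor count multiplies the per-source particle production in the fit. A toy Glauber+NBD sketch (all parameters and ancestor counts below are invented, not fit results) shows how the two assumptions give different multiplicity distributions from the same NBD:

```python
# Toy Glauber-style multiplicity model: one negative-binomial draw per ancestor.
import math
import random

random.seed(2)

def sample_poisson(lam):
    # Knuth's algorithm; adequate for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_nbd(mu, k):
    """Negative binomial as a Gamma-mixed Poisson (mean mu, shape k)."""
    lam = random.gammavariate(k, mu / k)
    return sample_poisson(lam)

def event_multiplicity(n_sources, mu=1.2, k=0.8):
    """Multiplicity = sum of one NBD draw per ancestor (Ncoll or Npart)."""
    return sum(sample_nbd(mu, k) for _ in range(n_sources))

# hypothetical per-event ancestor counts for the two scaling assumptions
ncoll_events = [event_multiplicity(random.choice([1, 2, 3, 4, 6])) for _ in range(2000)]
npart_events = [event_multiplicity(random.choice([2, 3, 4, 5, 7])) for _ in range(2000)]
```

Fitting the same data with the two ancestor definitions therefore forces different NBD parameters, which is one plausible origin of the difference Tong observed.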



2021/05/04

Yanfang:

a) After changing the PID cuts, the Jpsi yields per equivalent MB event are consistent with Rongrong's results above 1 GeV/c. The difference below 1 GeV/c could be due to differences in the background subtraction and signal extraction.
b) Will try using DCA < 1 cm, instead of 3 cm, to further increase signal-to-background ratio

Tong:

a) It was found that the increase in the number of tracks matched to TOF around day 149 is likely caused by the recovery of TOF tray 103. This can be corrected for by either excluding tray 103 in days after 149 or applying a correction factor to these days.
b) After vz and luminosity calibration, the distributions of TOF-matched tracks in different luminosity ranges still differ sizably in shape even though the mean values agree. One can correct for this shape difference by applying weights up to multiplicity ~ 10, where the statistical precision is still good. For higher multiplicities, whether weights are applied or not should not affect the selection of the 0-10% centrality class, as the boundary is likely around multiplicity ~ 4-5. The shapes of the multiplicity distributions agree much better in different vz ranges. 
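The notes do not spell out the weight definition for the shape correction; one common choice is the bin-by-bin ratio of the reference histogram to the one being corrected, sketched here with invented counts:

```python
# Hypothetical shape-correction helper: weight events in multiplicity bin m by
# ref[m]/cur[m] up to max_bin (where statistics are good), and by 1.0 beyond.
def shape_weights(ref_counts, cur_counts, max_bin=10):
    weights = []
    for m, (r, c) in enumerate(zip(ref_counts, cur_counts)):
        weights.append(r / c if m <= max_bin and c > 0 else 1.0)
    return weights

ref = [100, 80, 60, 40]  # reference multiplicity histogram (made up)
cur = [90, 85, 55, 40]   # histogram in another luminosity bin (made up)
w = shape_weights(ref, cur, max_bin=2)  # events in bin m get weight w[m]
```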



2021/04/06

Yanfang:

a) Currently working on calculating the equivalent number of MB events for different centrality bins. Will compare with Rongrong's number once done.
b) Luminosity and vz correction factors should be applied to both MB and dimuon events. 

Tong:

a) Will study TofMult vs. RunID using the newly produced mini-tree. This is to check whether the change in the behavior for TofMult vs. ZDCx is related to time dependent detector status, and to identify possible bad runs.
b) Perform Glauber fit for event multiplicity in the UE region used by Tong and Yanfang, as well as event multiplicity in full TPC acceptance used by Shengli. The numerical values of Npart and Ncoll will be provided. In the case of using BBC as a classifier, Tong will check the Ncoll values for internal discussion but we might not provide it to the collaboration.
c) The next step is to evaluate uncertainties for Npart and Ncoll values. If needed, Tong can contact Guannan who has done similar studies for Run14 AuAu data set. 



2021/03/23

Yanfang: (SLIDES)

a) Yanfang presented vz and luminosity dependent corrections for track multiplicity matched to TOF in the UE area. It was suggested to check the track multiplicity distributions after calibration in different vz and luminosity bins.
b) Even after the track multiplicity distribution in data is corrected to have a similar mean value as that in HIJING, the shapes in data and HIJING are still noticeably different. Therefore, it probably makes more sense to determine Ncoll based on the Glauber model. Yanfang will send the corrected multiplicity distribution in data to Tong for this purpose. Nevertheless, determining Ncoll using HIJING could serve as a good cross-check, which yields a factor of 2 difference in Ncoll between 0-12% and 50-100%.
c) Yanfang also showed the Jpsi signal extraction in different multiplicity and pT bins, and the results look quite nice. The next steps are: i) evaluate the efficiency correction; ii) determine the equivalent number of MB events; iii) calculate the trigger bias factor, for different centrality bins.
d) Rongrong is currently working on determining the Jpsi signal shape in embedding using Student's t function. Will send the results to Yanfang once finished. 

Tong: (SLIDES)

a) Tong presented similar vz- and luminosity-dependent corrections for the track multiplicity matched to TOF in the UE area. He restricted the event selection to -10 < vz < 20 cm and 3500 < ZDCx < 23500 Hz, which rejects about 40% of the statistics. It was suggested to try to recover the lost statistics by relaxing the cuts on vz and ZDCx. Tong mentioned that extending the vz range should not create any issues. He will also get in touch with Gene to see if one can understand the jump in mean multiplicity around ZDCx ~ 23500 Hz. 
b) There is no significant correlation between the vz and ZDCx corrections, and therefore performing two 1D corrections instead of a 2D correction is OK. It was also suggested to check the track multiplicity distributions, not just the means, after calibration in different vz and luminosity bins.


2021/02/23

Rongrong: (SLIDES)

a) Rongrong presented the TOF matching efficiency on slide 12 using MB 200 GeV pAu data. The efficiency saturates around 65% above 0.5 GeV/c using low-luminosity data with a BBC coincidence rate between 0-50 kHz. This value is in line with the expectation, and could increase if tracks within |eta| < 0.9 instead of |eta| < 1.0 are used. Rongrong will check this. On the other hand, the extracted TOF matching efficiency decreases with increasing luminosity, due to the increasing contribution of pileup TPC tracks.
b) The lower TOF matching efficiency in HIJING than that in data could be partially due to different track pT distributions, as the efficiency depends on track pT. It was suggested to check this in HIJING.

2021/02/09

Tong:
a) Tong presented latest studies of track matching to the BEMC. There are three functions in StPicoTrack that can be used to obtain BEMC matching information: 
isBemcTrack(): returns true if the track has an associated BemcPidTrait. The matching is done to BEMC clusters whose seed towers exceed 700 MeV. This function was developed by the HF PWG, and is mainly used for HF analyses.
isBemcMatchedTrack(): returns true if a track is matched to a BEMC tower geometrically. If the track hits the gap along the phi direction, a larger search window is used to find a matched tower.
isBemcMatchedExact(): same as isBemcMatchedTrack(), excluding the case when a track hits the gap.
b) When using isBemcMatchedExact() and requiring the matched towers to have energies above 200 MeV, one sees fewer tracks matched to the BEMC than to TOF. In this case, using matching to BEMC or TOF brings few additional tracks compared to matching to TOF only. This could be because the 200 MeV cut rejects the MIP, and therefore throws away too many tracks. Tong will check the BEMC tower energy distributions to see if the energy cut could be lowered to include the MIP.
c) Once the BEMC tower energy threshold is decided, Tong will use the Glauber model to fit the number of tracks matched to TOF and to TofOrBemc, to see how much improvement adding the BEMC could bring in terms of sensitivity to Ncoll.
d) If the improvement is significant, it could be worth requesting a reproduction of PicoDst for the Run15 pAu st_mtd data to include the isBemcMatchedExact() information for Yanfang's analysis.

Rongrong: (SLIDES)
a) Rongrong presented the TOF matching efficiency as a function of BBC rate in HIJING embedded into zero-bias events. The TOF matching efficiency is about 57%, and decreases slightly with increasing luminosity. This value is lower than the expected 65-70%. It was suggested to check the TOF matching efficiency in low-luminosity data. 


2021/01/26

Yanfang: (SLIDES)
a) Yanfang presented comparisons between data and HIJING simulation for the number of TOF hits, the number of primary tracks matched to TOF, and the number of primary tracks in the UE region matched to TOF. There are more TOF hits in HIJING, while the agreement becomes better for the number of primary tracks matched to TOF. It was suggested to compare the vz distributions in data and HIJING to see if that affects the number-of-TOF-hits distribution. 
b) When including the BEMC in the track matching, Yanfang only saw a very small increase in the number of primary tracks matched to fast detectors. This could be because the matching between tracks and BEMC towers was done by requiring a 700 MeV seed tower for eligible clusters when the PicoDst was produced, resulting in a low BEMC matching efficiency. Later on, the JetCorr PWG requested to add BEMC matching information for all tracks without the 700 MeV requirement. This information is available in the dataset used by Tong and David, so they can check the effect of including the BEMC in the matching.


Rongrong: (SLIDES)

a) Rongrong presented studies of pileup contribution for matching to TOF. It was found that the pileup level is negligible for both DCA cuts of 1 cm and 3 cm. This supports using TOF matched tracks for centrality classification. 
b) It was suggested to check the TOF matching efficiency vs. luminosity in the HIJING sample

Tong: (SLIDES)
a) Tong presented feedback on the centrality classifier from the analysis meeting three weeks ago. Embedding HIJING events into zero-bias events to check the BBC performance is probably difficult since the BBC signal simulated in HIJING is very different from that in data. It was also suggested to use the undistorted VPD-E multiplicity; however, the code is not ready from Bassam.
b) Distributions of track multiplicities for various DCA cuts and matching to TOF or TOF/BEMC were shown. Matching to the BEMC needs further investigation as the current result indicates that lots of pileup tracks are included. The next step is to fix the BEMC matching issue, and explore how much one can gain by including the BEMC (a fast detector) to reject pileup tracks. Using TOF alone suffers from a 30-35% track loss. If the improvement is sizable, we can request a reproduction of PicoDst for Yanfang.
c) It was seen that the TOF-matched track multiplicity decreases with luminosity, which is expected due to the decreasing tracking efficiency with increasing luminosity. To confirm this, it was suggested to correct the TOF-matched track multiplicity with the tracking efficiency to see if the corrected distribution is flat.
d) Tong also saw that the track multiplicity has a non-trivial dependence on vz due to the asymmetric collision system. For Tong's analysis based on the VPDMB-30 trigger, one can cut on -10 < vz < 20 cm, for which the track multiplicity is mostly flat. For Yanfang's analysis based on the VPDMB-novtx trigger, it is necessary to correct for the vz dependence.


2021/01/05

Rongrong: (SLIDES)

a) Rongrong presented correlations between NGPT_MC and NTofMth. The latter is calculated using only NGPT_MC, and is therefore a subset of the former. Comparing NGPT_RC and NTofMth distributions for a given NGPT_MC range, the relative spread (RMS/mean) is slightly smaller for NGPT_RC, but the NTofMth distributions are more Gaussian.
b) It was pointed out by Saskia that the spread in the NTofMth distribution seen by Yanfang before is larger than what Rongrong showed. This could be due to different NTofMth definitions used. Rongrong will check the NTofMth distributions using NGPT_RC and/or a DCA < 3 cm cut.



2020/12/22

Rongrong: (SLIDES)

a) Rongrong showed that there are on average about 2 pileup tracks associated with the chosen vertex after applying the DCA < 1 cm cut. The fraction of pileup tracks increases slightly with increasing luminosity, as expected. 
b) An anti-bias is seen in the dVz = Vz_MC - Vz_RC distribution, i.e. |dVz| peaks around 1-2 cm instead of 0, when there are no RC tracks for the chosen vertex matched to MC tracks.
c) Rongrong will check the distribution of number of tracks matched to TOF to see which variable is better suited for centrality classification, NGPT or NTofMth. The metric is the spread of the distribution divided by mean of the distribution, which quantifies the smearing effects and thus the sensitivity of the variable to centrality. 
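The metric in c) can be written down directly. The two samples below are invented; only the RMS/mean comparison logic is the point:

```python
# Classifier-comparison metric: relative spread (RMS/mean) of a multiplicity
# distribution for a fixed true-centrality slice; smaller means less smearing
# and therefore better sensitivity to centrality.
def relative_spread(values):
    mean = sum(values) / len(values)
    rms = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return rms / mean

# hypothetical NGPT and NTofMth samples for the same true-centrality slice
ngpt = [8, 10, 12, 9, 11]
ntofmth = [4, 6, 8, 3, 9]
better = "NGPT" if relative_spread(ngpt) < relative_spread(ntofmth) else "NTofMth"
```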


2020/12/08 (Helen)

Yanfang: (SLIDES)

Figure: NGPTUE vs. Ncoll (using HIJING events; NGPT with DCA < 1 cm)

a) Yanfang is exploring which ranges of NGPTUE are best to use to define the EA bins.
There seems to be some resolution for Ncoll; maybe 3 bins are possible.
There is a cut at NGPTUE > 0 because Rongrong's study showed that the corrections for the trigger, reconstruction efficiencies and pileup don't work at low NGPT.
b) Discussed whether to use the MB result instead of the lowest NGPTUE bin. MB gives Ncoll ~ 5 whereas the lowest NGPTUE bin (1-2 tracks) gives Ncoll ~ 4, so not much lower.
c) Working on switching to TOF matching rather than DCA < 1 cm.

Tong: (SLIDES)
Figure: NGPT vs. inner BBC or all BBC (using real data; NGPT with DCA < 3 cm)

a) BBC signals are scaled to the maximum possible value in both cases.
The correlation is there for both inner-only and all BBC. The all-BBC case (green on the 3rd plot) has a slightly steeper correlation with NGPT and seems a bit more linear.
b) The mean NGPT seems slightly higher than expected: the <NGPT> for minimum bias was thought to be ~8, but Tong's lowest bin is always >10. Maybe this is because the DCA cut is 3 cm rather than the usual 1 cm? Tong will redo it with DCA < 1 cm to check.

Figure: Number of saturated BBC tiles vs. NGPT (DCA < 3 cm) or RefMult

There is some correlation between NGPT and the number of saturated tiles, but it is rather weak.
Tong noted that the correlation breaks down at high NSat, and even seems to drop.
It was suggested to look at these plots for low-luminosity data.


2020/11/24

a) It was reported by Shengli (SLIDES) that there are still non-negligible pileup tracks for DCA less than 1 cm, especially in low event activity events. Therefore, our current cut of (DCA < 1 cm) is likely insufficient to reject all pileup tracks. Requiring TOF matching will reject out-of-time pileup tracks, but also limits the dynamic range of the track multiplicity distribution for centrality classification. The following variables will be checked: 
i) TPC tracks matched to TOF
ii) TPC tracks matched to TOF or BEMC
iii) Number of TOF hits
It will be nice to check their distributions as well as dependences on the BBC coincidence rate. Number of TOF hits could suffer from in-time pileup, which will manifest itself as increasing number of TOF hits at high luminosity. 

b) It was suggested for Rongrong to check the following using MB HIJING embedded into zero-bias events:
i) NGPT vs. true NGPT to check pileup effect
ii) Vertex resolution vs. NGPT

c) Tong presented new studies about the influence of jets on classifying centrality for MB events. It was concluded that using random axis in MB events is the correct approach. More studies will be performed to determine the low pT cut one can apply on jets while still maintaining a small influence on MB centrality. 


2020/10/27
Yanfang's report:
a) Confirmed the expected geometric scaling between <NGPT> and <NGPTUE> in MB events, and showed fits to the Jpsi signal in individual centrality bins with the mean and sigma fixed to those of the inclusive sample.
b) It was suggested to divide the data sample into 3 instead of 5 centrality bins given the limited kinematic range. Yanfang will also work on centrality definition based on HIJING+GEANT simulations.
c) It was also suggested to use the Maximum Likelihood method for fitting in all pT bins above 2 GeV/c. It should yield results comparable to the chi2 method when statistics are good, and better results when statistics are poor.
d) Given that the mean and sigma of the Gaussian function are fixed in individual centrality bins, one can vary the fixed mean and sigma within their respective errors and assign the difference in the extracted signal yields as a source of uncertainty.

Tong's report:
a) Presented a ToyMC to validate the method of using a random axis in MB events for defining the EA and calculating the number of events for normalization.
b) It was suggested to show the Y_hi/Y_low ratio under various scenarios as a function of the fraction of MB events used for injecting jets, in a single figure for better presentation.

2020/10/13
Oral reports from Yanfang:
a) Fixed the issue of a larger-than-expected <NGPT> when running over PicoDst, which was due to a wrongly selected trigger ID.
b) Working on refining the Jpsi signal fitting procedure. Two suggestions were made: i) subtract the like-sign distribution from the unlike-sign one at low pT; ii) use the mean and sigma from the inclusive sample for individual centrality classes.

Oral reports from Tong:
a) Currently verifying the hadronic correction; might switch to the small trees produced by Dave
b) The trigger+vertex efficiency at low event multiplicity is found to be quite low. Will discuss more on this in the near future. 


2020/09/29


Yanfang presented updates on centrality study for Jpsi analysis. 
a) The mean value of the NGPT distribution for MB events is about 10 when analyzing MuDst files, while the mean is about 18 when switching to analyzing PicoDst files. Yanfang will send the relevant codes to Rongrong to try to figure out the cause together.
b) The track multiplicity transverse to the Jpsi direction is named NGPTUE. In Jpsi events, Yanfang found that the ratio of the mean values of NGPTUE and NGPT is about 2 instead of the 2.75 expected from phase space scaling. Tong mentioned that the exact phase space scaling was observed in MB events when taking a random axis. In jet events, the scaling is not exact, but the deviation is not as large as that observed by Yanfang.
c) The issue of mismatch between unlike-sign and like-sign dimuon distributions, reported last time, is fixed now. 
d) To obtain the NGPTUE distribution for Jpsi events, it was suggested to statistically subtract the NGPTUE distribution for like-sign dimuon events from that for unlike-sign dimuon events within the Jpsi mass range.
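The statistical subtraction in d) is a bin-by-bin difference of two histograms; a minimal sketch with invented bin contents:

```python
# Unlike-sign minus like-sign, bin by bin, within the Jpsi mass window:
# the remainder is the NGPTUE distribution for signal (Jpsi) events.
def subtract_hist(unlike, like):
    return [u - l for u, l in zip(unlike, like)]

# hypothetical NGPTUE histograms in the Jpsi mass window
h_ul = [50, 80, 60, 30]   # unlike-sign dimuon events (made up)
h_ls = [20, 30, 25, 10]   # like-sign dimuon events (made up)
h_jpsi = subtract_hist(h_ul, h_ls)  # -> [30, 50, 35, 20]
```

In practice the bin errors would be propagated in quadrature, since the two samples are statistically independent.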

Tong presented updates for his jet analysis
a) The number of events used for normalizing the jet distributions should be MB events corresponding to the track multiplicity distribution used for centrality definition
b) An initial look at the jet Rcp shows values close to unity, consistent with the expectation.
c) Correcting for the underlying event via the rho*A method has a sizable impact on the jet Rcp. The next step is to explore other background subtraction methods, and correct for detector effects and background fluctuations.


2020/09/01
Rongrong showed studies on pAu multiplicity distribution: SLIDES
a) Weighted-average VPD trigger and vertex finding efficiencies are needed to correct the Jpsi yield in pAu collisions. The vertex finding efficiency is found to depend strongly on the event multiplicity, and therefore the true multiplicity distribution for p+Au collisions is needed to calculate the weighted average. On the other hand, HIJING is seen to agree with scaled PHOBOS d+Au measurements of dN/deta, but discrepancies are seen at low multiplicities at detector level when compared to STAR data. However, attempts to obtain the true multiplicity through unfolding have not been successful so far due to unstable results after correcting for the vertex finding efficiency.

b) s7: it was brought up that the HIJING distribution in Yanfang's study decreases less steeply at low multiplicities compared to Rongrong's. A possible reason is that Yanfang did not embed HIJING events into zero-bias background, which results in a higher efficiency at low multiplicities. Rongrong will check this. 

c) s23: the fact that different pT cuts result in different distributions indicates that the track pT distribution in HIJING is different from that in data. This is indeed the case, i.e. HIJING has a softer pT spectrum than data. It was suggested to reweight HIJING events with low/high <pT> to make it look closer to data. 



2020/08/18

Helen

Present: Tong, Yanfang, Dave and Helen

Tong presented his slides on using the transverse region in the TPC as the EA definition (SLIDES). The transverse area is defined as the two regions pi/2 +- 1 from the hardest jet or a random direction. For these slides, the jet is a full jet.

Page 3: We discussed whether the 2.7% of events containing a jet will affect the EA binning when using a random angle to define the axis. It boils down to whether all those ~3% of events end up in the top 10% bin because they contain jet fragments. Tong did some math and "proved" to us that it wouldn't matter as it's a <1% effect, but I confess I didn't follow it all, so he's going to send around his argumentation.

Page 4: Dave noted that while the correlation is strong as expected, the mean multiplicity from the hardest-jet axis is lower than that from the random axis because of two different biases. At low multiplicities, where there is no jet, picking the hardest candidate biases the transverse region to low multiplicity, as the high-multiplicity region made the hardest jet candidate. In events where there is a jet, the hardest-jet axis again has a lower multiplicity, but this time because the random axis often takes in the jet fragments, giving it too high a multiplicity.

Page 5: Although the z scale goes from log to linear, so by-eye comparisons may be misleading, it is interesting that while removing events with a >10 GeV jet does increase the mean transverse multiplicity, the very-high-multiplicity events seem to be selected against.

Page 7: Tong also showed on the fly the sliced projection of the mean transverse multiplicity as a function of jet pT. The trends look very similar to Li's pp UE results (with larger values of course): it rises, has a peak around ~5 GeV, and then drops slightly with jet pT.

Page 8: We had a long discussion about how to normalize these spectra (currently there is no normalization): should it be by all MB events, by the number of events predicted in the MB sample, by the number of events analyzed to produce the spectrum in each case, or by the number of events where a jet > X GeV is seen?

Page 10: Since this looks great so far, of the 3 next steps Tong is interested in pursuing, we suggested he focus on the Glauber calculation for the random axis.

New ideas: Need to make a clear, concise case for why the random axis rather than the hardest-jet axis is used.
Tong and Yanfang will try to check that there is a clear correlation between the transverse NGPT and the NGPT in 0.5 < |eta| < 1, which is what Yanfang has been using and which also seems to work well for the quarkonia measurements. Assuming the Glauber fitting works well, I think we are homing in on our final solution.

Thought that occurred to me after meeting was over:
We probably need to do some efficiency correction for the luminosity and z vertex location before we are totally done.


2020/08/04

a) Yanfang presented results on Jpsi yields in different bins classified based on track multiplicity within 0.5 < |eta| < 1.0 
- In extracting the Jpsi raw yields, two improvements were suggested: i) at low pT, fit the UL-LS distributions since the background shape is not easy to fit; ii) use the Jpsi signal width in each pT bin from the inclusive sample for the different centrality bins.
- Yanfang will also try using track multiplicity transverse to the J/psi direction. In case there are multiple unlike-sign and/or like-sign pairs, the track multiplicity should be calculated in the transverse region of each pair.
- With the new track multiplicity definition, one will also need to re-calculate equivalent number of MB events in each centrality bin.
- For now, using the same trigger bias factor for different centrality bins is OK. Rongrong is still working on finalizing this for inclusive pAu analysis.
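The per-pair transverse-region multiplicity could be computed along these lines; the pi/3 < |dphi| < 2pi/3 definition of the transverse region is an assumption here, not necessarily the analysis definition:

```python
import math

def dphi(phi1, phi2):
    """Wrap phi1 - phi2 into (-pi, pi]."""
    d = (phi1 - phi2) % (2.0 * math.pi)
    return d - 2.0 * math.pi if d > math.pi else d

def transverse_multiplicity(track_phis, axis_phi):
    """Count tracks in the region transverse to a given axis (e.g. the
    phi of one unlike-sign pair): pi/3 < |dphi| < 2pi/3 on either side."""
    return sum(1 for p in track_phis
               if math.pi / 3.0 < abs(dphi(p, axis_phi)) < 2.0 * math.pi / 3.0)

# toy event: track phis in radians, pair axis at phi = 0
n_trans = transverse_multiplicity([0.1, 1.6, -1.5, 3.0, 2.0, -2.1], 0.0)
```

With multiple pairs in an event, one would call this once per pair axis, as suggested in the minutes.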

2020/06/23
a) Yanfang presented results on NGPT distributions in the region of 0.5 < |eta| < 1.0 to avoid auto-correlation with Jpsi
- A good agreement between data and HIJING is seen. Preliminary selection of different centrality classes is carried out on the HIJING distribution.
- Note that for HIJING events that do not have a valid reconstructed vertex, its NGPT is 0. 
- Will probably need to redo the vz-dependent calibration
- The next step is to check the Jpsi yield in different centrality bins based on the new selection
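The centrality-class selection on the NGPT distribution amounts to percentile cuts; a minimal sketch with a toy uniform multiplicity sample (the 20% class width is just for illustration):

```python
def centrality_cuts(mult_values, fractions=(0.2, 0.4, 0.6, 0.8)):
    """Multiplicity cut values that split events into centrality classes.

    By convention the highest-multiplicity events are the most central,
    so mult >= cuts[0] selects the 0-20% class, cuts[1] <= mult < cuts[0]
    the 20-40% class, and so on."""
    s = sorted(mult_values, reverse=True)
    return [s[int(round(f * len(s))) - 1] for f in fractions]

# toy NGPT-like sample: 100 events with multiplicities 0..99
cuts = centrality_cuts(list(range(100)))
```

For peripheral classes, where the dynamic range is only a few units of multiplicity, neighboring cuts can coincide; that is exactly the smearing concern raised elsewhere in these minutes.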

b) Helen brought up a possible concern about using the track multiplicity in the region transverse to the jet axis for centrality classification. The concern is that in MB events, such a procedure could introduce a potential bias on the resulting track multiplicity distribution. A suggestion is to check how large the effect is; if it turns out not to be small, one might need to worry about it and assign some uncertainties if needed.



2020/06/09

a) Tong's updates
- Followed Mike's suggestion to artificially lower the saturation threshold. It was found that the correlation between BBCAdcSumE and mid-rapidity multiplicity seems to get worse, probably due to the reduced dynamic range of the BBCAdcSumE signal. This is consistent with the initial expectation that BBC saturation affects the centrality determination.
- Also checked the correlation between Ncoll and RefMult. It was found that RefMult increases non-linearly as a function of Ncoll. On the other hand, there is significant smearing between BBCAdcSumE and mid-rapidity multiplicity. The combination of the two explains why the Ncoll values for central and peripheral events are quite similar when using the BBCAdcSumE signal with the two-step algorithm, even though a nice correlation between BBCAdcSumE and mid-rapidity multiplicity was seen.
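The correlation/smearing checks above are essentially profile plots; a minimal TProfile-style helper with toy correlated data (slope and smearing widths are made up):

```python
import random

def profile(xs, ys, nbins, xmin, xmax):
    """TProfile-style summary: mean and RMS of y in bins of x, useful for
    quantifying how tightly RefMult tracks BBCAdcSumE."""
    width = (xmax - xmin) / nbins
    s = [0.0] * nbins; s2 = [0.0] * nbins; n = [0] * nbins
    for x, y in zip(xs, ys):
        i = int((x - xmin) / width)
        if 0 <= i < nbins:
            s[i] += y; s2[i] += y * y; n[i] += 1
    means = [si / ni if ni else None for si, ni in zip(s, n)]
    rms = [(s2i / ni - (si / ni) ** 2) ** 0.5 if ni else None
           for si, s2i, ni in zip(s, s2, n)]
    return means, rms

# toy correlated data: slope and Gaussian smearing are placeholder numbers
rng = random.Random(0)
bbc = [rng.uniform(0.0, 100.0) for _ in range(10000)]
ref = [0.2 * b + rng.gauss(0.0, 5.0) for b in bbc]
means, rms = profile(bbc, ref, 10, 0.0, 100.0)
```

The RMS per bin relative to the rise of the mean is what decides whether a nice-looking profile still allows central and peripheral Ncoll values to overlap.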

b) Yanfang's updates
- A nice correlation between BBCAdcSumE and Ncoll is seen in HIJING. However, since the HIJING BBC signal does not reproduce data, this cannot be directly applied to data.
- The Au-going side multiplicity in HIJING is about 30% lower than that in data

c) Next step

- The main idea is to identify a phase space at mid-rapidity different from the signal region to avoid auto-correlation as much as one can. For example, one can count the track multiplicity in the region transverse to the jet axis, or 0.5 < |eta| < 1 for J/psi analysis since J/psi's are found within |eta| < 0.5. 
- Tong and Yanfang will check these ideas and see: i) how much the kinematic range is reduced; ii) can the resulting multiplicity distribution be described by Glauber or HIJING
 


2020/05/05
a) Dave showed a correlation between BBCAdcSum and mid-rapidity multiplicity, whose mean value increases from ~8 for lowest BBCAdcSum to ~18 for highest BBCAdcSum. Dave will check the exact track quality cuts he used when calculating mid-rapidity multiplicity. On the other hand, both Tong and Yanfang have observed that the Ncoll values for peripheral and central collisions based on BBCAdcSum are quite similar, which seems inconsistent with what Dave showed. It was pointed out that the large width of the multiplicity distribution in each BBCAdcSum bin could play a role. To understand better the apparent inconsistency, it was suggested that both Tong and Yanfang will check the correlation between BBCAdcSum and mid-rapidity multiplicity to see if they observe similar behavior as what Dave showed. 

b) In the report, it will be nice to show a comparison of the mean mid-rapidity multiplicity (refMult or NGPT) in different BBCAdcSum bins for the following four scenarios:
- Use all BBC tiles regardless of their saturation status
- Use only BBC tiles that do not saturate in each event
- Use only events in which none of the BBC tiles saturate
- Use BBC tiles 8-16 with channel 12 excluded

c) Helen suggested that Yanfang use different rapidity ranges for Jpsi reconstruction and NGPT calculation. Currently, the Jpsi sample Yanfang uses is mostly within |eta| < 0.5, and therefore Yanfang can calculate NGPT within 0.5 < |eta| < 1. This is to check whether the auto-correlation effects seen in RpA of different centralities change. One can also think of using tracks excluding the muon candidates used for the Jpsi to calculate centrality.

2020/04/10
1) Tong presented a set of slides as a first draft of a report to the collaboration. See here. Please take a look and send your comments.
- The general consensus is that the presentation will be focused on conveying the main message that using the BBC as a centrality classifier in p+Au collisions does not seem to work.
- David and Yanfang will prepare some slides of their work, and send them to Tong to be included in the report.

2) Yanfang presented a first look at the Jpsi yield vs. event activity. She will send out more details.  

3) A new meeting time will be polled due to changes in Rongrong's schedule. 



2020/03/27

1) In maybe 4-6 weeks time, we should aim for preparing a set of slides to be presented to a wider audience about our studies of classifying centralities in pAu collisions.
- Tong will present some updates in two weeks.
- We will discuss more in two weeks and think of what distributions should be included  in our report. 

2) At this point, the centrality classification in pAu collisions does not seem promising. BBC east is heavily saturated. Alternatively, Dave/Tong can look at the centrality dependence of jet quenching in dAu collisions, and only MB events for pAu collisions. Yanfang can look at event activity dependence of Jpsi yield in pAu collisions. 

3) Dave presented his studies of PYTHIA pAu events. A suppression of recoil jets per trigger is seen in 0-30% central collisions compared to that in 70-100% peripheral collisions if the centrality is classified using multiplicities in forward regions.
- This behavior is similar to that observed in real data.
- The underlying cause seems to be momentum conservation. Dave will continue investigating how momentum conservation dictates the observed suppression, given that i) the total available energy is 200 GeV, much larger than the jet energy, and ii) there are on average 5 binary collisions in one MB pAu event while the hard jets are only produced in one such collision.
- It was also suggested to look at event multiplicity vs. Ncoll in PYTHIA to understand better the events we are selecting.


2020/02/28

1) Tong presented updates on continued study of understanding BBC signal in p+Au collisions
a. It was found that the event multiplicity is systematically higher for unsaturated events compared to saturated ones, which is opposite to the expectation if the saturation is caused by high-multiplicity events. Tong will do a detailed comparison of BBCAdcSum and event multiplicity distributions for saturated vs. unsaturated events. 
b. p+Au-like events in d+Au collisions are selected by cutting on the single neutron peak in ZDC. However, the corresponding multiplicity distribution does not seem to change significantly. On the other hand, such a selection should bias towards more peripheral d+Au collisions. Tong will do a detailed comparison of BBCAdcSum distribution for inclusive and tagged d+Au collisions to see if such a selection is effective. 
c. Trying to understand whether the large BBCAdcSum signals in high-luminosity events are due to in-bunch pileup, a simulation was done by mimicking pile-up events through overlapping low-luminosity events, which are assumed to be free of pileup. The simulated signal is inconsistent with data. Several suggestions were made: i) use HIJING events for such a simulation since there is no pileup; ii) check with Akio to see how they select good BBC signals in terms of rejecting out-of-timing signals; iii) one can fix the Poisson distribution in high-luminosity events by calculating the expected number of collisions based on the BBC rate, which is then used to weight events of different levels of pileup in simulation. 
2) The next meeting is canceled as it coincides with the collaboration meeting. Meanwhile, we will continue the discussion via email if there are updates.
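Suggestion iii) in 1c) above (fixing the Poisson distribution via the BBC rate) could be sketched like this; the bunch-crossing rate and the BBC rate below are approximate placeholders:

```python
import math

def pileup_weights(bbc_rate_hz, crossing_rate_hz=9.38e6, kmax=6):
    """Poisson probabilities for k in-bunch collisions per crossing.

    mu is estimated from the measured BBC coincidence rate divided by the
    bunch-crossing rate (~120 buckets x ~78 kHz revolution; approximate).
    Probabilities are conditioned on >= 1 collision, i.e. on a trigger."""
    mu = bbc_rate_hz / crossing_rate_hz
    p = [math.exp(-mu) * mu ** k / math.factorial(k) for k in range(kmax + 1)]
    norm = 1.0 - p[0]                     # P(at least one collision)
    return mu, [pk / norm for pk in p[1:]]

mu, weights = pileup_weights(5.0e5)       # placeholder BBC rate of 500 kHz
```

`weights[k-1]` would then weight simulated events built by overlapping k low-luminosity (pileup-free) events.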

2020/02/14

Slides: TongLiu_centrality_Glauber&BBC_200214.pdf

Minutes for today's meeting.

1) Tong presented studies on BBC signals in both p+Au and d+Au collisions, as well as Glauber model investigation
a. It was found that the average refMult for a given BBC bin is higher for non-saturated events than the inclusive events. This is opposite to the expectation if the saturation is caused by high multiplicity events. Tong will compare the multiplicity distributions between saturated and non-saturated events to check if there is any multiplicity bias.
b. In d+Au collisions, the correlation between BBC and refMult is tighter compared to the p+Au collisions, but not by a lot. Also, one only sees tighter correlation in large activity events.
c. Slide 12: it is suggested to cut on 10 < ZDC < ~50 to select the one-neutron peak in order to identify p+Au-like events in d+Au collisions
d. Glauber model simulation shows that about 3% of events have Ncoll = 0 in 200 GeV Au+Au collisions, while the fraction is about 6% for p+Au. Will get in touch with people who did centrality calibration for Au+Au collisions to see if the Ncoll = 0 bin is used or not. The impact of excluding Ncoll = 0 bin is small on the calculated Ncoll value for MB events, though.
e. Slide 26: the deficit of small BBC signals at low luminosity is caused by the calibration procedure. The raw distribution shown on slide 25 looks normal.
f. On slide 25, the BBC signal increases with luminosity, which is suspected to be due to in-bunch pileup. An early study by Rongrong showed that |vz_TPC - vz_VPD| < 3 cm can effectively reject in-bunch pile-up events. See slides 43-50 of slides. Tong will try different |vz_TPC - vz_VPD| cuts and look at correlation between BBC and luminosity again. 
g. If the increase in BBC signal vs. luminosity is indeed caused by in-bunch pileup, it is not clear how to correct for it. Neither scaling nor shifting seems appropriate. 

 

2020/01/31

Here are the minutes for today's meeting.
1) Tong presented his updates on BBC signals in p+Au, d+Au as well as Glauber model calculation
a. Tong will use Yanfang's cuts on selecting good primary tracks for mid-rapidity multiplicity 
b. Using events without any saturated BBC inner tiles does not seem to significantly improve the correlation between BBCAdcSum and RefMult in p+Au collisions. Tong will do a quantitative comparison. 
c. BBCAdcSum vs. multiplicity is checked in 2016 200 GeV d+Au collisions, and the behavior looks better, i.e. no saturation. Again, Tong will do a quantitative comparison. It is worth noting that the "mid" gain is used in the Au-going side for d+Au collisions, while "high" gain is used for p+Au collisions, which explains less saturation in d+Au collisions. 
- It will be interesting to check BBCAdcSum vs. multiplicity distributions for "p+Au" like events in the d+Au collisions by tagging neutrons in the d-going direction. Will get in touch with Kong to see how he does neutron tagging. 
d. It is found that in the Glauber model calculation, there are cases where the number of quark-level collisions is non-zero but the number of nucleon-level collisions is zero, which does not make much sense physically. The underlying cause is that the quark-level and nucleon-level collisions are evaluated separately in the model. Tong will try to calculate the quark-level and nucleon-level collisions in the same loop to improve the correlation. 
- By excluding Ncoll = 0 bin as Yanfang did, it seems like Tong's result is still slightly different than Yanfang's. Yanfang will send her distributions of Ncoll and b to Tong for a direct comparison, to avoid possible different boundaries of percentiles used by them.
e. There was also a discussion whether one should exclude Ncoll = 0 bin when using Glauber model results. Tong will check the relative contribution of Ncoll = 0 in Au+Au collisions. The consensus is to use the same procedure as done in Au+Au for fair comparison. 
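As a small numerical illustration of the Ncoll = 0 discussion in d) and e), with a hypothetical histogram:

```python
def mean_ncoll(hist, exclude_zero):
    """Mean Ncoll from a histogram {ncoll: n_events}, with or without the
    Ncoll = 0 bin (events with no binary collision)."""
    items = [(n, c) for n, c in hist.items()
             if not (exclude_zero and n == 0)]
    return sum(n * c for n, c in items) / sum(c for _, c in items)

# hypothetical histogram with a ~6% Ncoll = 0 fraction
hist = {0: 60, 3: 500, 6: 300, 10: 140}
with_zero = mean_ncoll(hist, exclude_zero=False)
without_zero = mean_ncoll(hist, exclude_zero=True)
```

With a small Ncoll = 0 fraction the MB mean shifts only modestly, consistent with the observation that the impact on the MB Ncoll value is small.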

2) Yanfang presented her studies of BBCAdcSum vs. NGPT
a. Three scenarios are presented: i) use all events; ii) reject events with saturated inner channels; iii) reject the saturated inner channels in each event. It was suggested to normalize BBCAdcSum by the number of channels used for a fair comparison. 
b. Yanfang will compare correlation of BBCAdmSum vs. NGPT between data and HIJING, to help understand whether the poor correlation is due to physics or detector effects.
c. There was a concern that rejecting events with saturated inner channels could preferentially throw away events with large multiplicities, thus causing a bias. It was suggested to check the NGPT distributions for events with and without saturated channels. 

2020/01/17

Minutes (Helen):
We had slides from Yanfang and Tong on Glauber comparisons and Dave on pile-up and tracking efficiencies as a function of luminosity

The total cross-section agrees between Tong's and Yanfang's Glauber codes.
Tong's error is smaller as Yanfang includes some other uncertainties.
Impact parameter distributions seem to agree (Yanfang has slightly fewer events, probably why her bMax is slightly lower).
Need to investigate why Yanfang has no Nbin = 0 events.
If Tong ignores his N_bin = 0 events, things are still not quite the same.
Yanfang Action Item: Investigate in code why Nbin = 0 is missing

BBC Studies:
Although there are 18 PMTs, there are only 16 readout channels:
7&9 from Yanfang's plot are combined and
13&15 from Yanfang's plot are combined.
It seems the BBC “outer” inner channels saturate less.
Tong sent around image of where the channel numbers are from (email 1/17/2020 10:23 am). Link is here : https://www.star.bnl.gov/public/bbc/fy03/bbc.run3.ps

 Action Item: Look again at
- ignoring events where any BBC channel saturates
- ignoring the signal for any BBC channel that saturates
- looking only at the outer inner BBC channels
- looking only at the outer inner BBC channels, ignoring the 4 that are combined


Recentering has little effect.
Removing N_bin = 0 events gets Tong and Yanfang closer but still not exactly the same.
Tong Action Item: What condition defines an event? It doesn't seem to be N_bin = 0, which is what Yanfang's code seems to use.

Slide 17: 
ZDCx vs BBC has an odd dependence on luminosity. Low-luminosity runs don't have low BBC inner signals. 
Seems like a hard cut somehow
Tong Action Item:  Poke at what can be the possible cause, other correlations

Important cuts on events/tracks are:
  DCA < 3 cm
  |Z_vertex| < 10 cm
No TOF matching required
Efficiencies shown are the weighted efficiency of the 6 particle species given “known” ratios
pt on the x-axis is the embedded truth pt
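The species-weighted efficiency mentioned above could be computed as follows; the efficiencies and particle ratios are placeholders, not the embedding values:

```python
def weighted_efficiency(eff_by_species, ratios):
    """Yield-weighted average of per-species tracking efficiencies, given
    assumed ("known") particle ratios. All numbers here are placeholders."""
    norm = sum(ratios.values())
    return sum(eff_by_species[s] * ratios[s] for s in eff_by_species) / norm

eff = {"pi+": 0.82, "pi-": 0.82, "K+": 0.70, "K-": 0.70, "p": 0.78, "pbar": 0.76}
ratios = {"pi+": 1.0, "pi-": 1.0, "K+": 0.15, "K-": 0.15, "p": 0.10, "pbar": 0.08}
eff_weighted = weighted_efficiency(eff, ratios)
```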

There is a tracking efficiency dependence on luminosity; the max difference is ~8%.
No clear dependence of TPC uncorrected multiplicity on luminosity
After unfolding, the corrected spectra should show some “inverse” luminosity dependence. However, the scatter of the final data points masks it somewhat. Dave suspects this is a resolution-in-unfolding question.

Potential reason: pile up seems to be offsetting efficiency drop.

Dave Action Items:  
Look at results if DCA < 1 applied
How does DCA distribution change in real data for different luminosities
Look when requiring ToF matching 

2020/01/03

Yanfang: HIJING and Glauber settings
Tong: Glauber settings

Minutes
i) To find out the underlying cause of the 10% difference between HIJING and Glauber, we agree to compare the following variables between Yanfang's Glauber, Yanfang's HIJING and Tong's Glauber: a) total pAu cross-section; b) impact parameter distribution; c) Ncoll for each centrality bin (0-10%, 10-20%, ... 90-100%) based on the impact parameter distribution. 

ii) Tong will send out his Glauber settings to be compared with Yanfang's (slide 5)

iii) Helen pointed out that simply removing saturated channels in the BBC for centrality determination could cause some kind of bias since these channels might be saturated due to a large influx of particles associated with more central events. One probably needs to take these saturations into account in some way.

iv) It was suggested that Yanfang could check the following: a) the ADC distribution of each BBC channel at the same rapidity, to check whether the gains of different channels are comparable; b) the saturation probability for each channel at different azimuthal angles but the same rapidity. If the saturation is caused by physics, there should be no azimuthal dependence. If it is caused by the gain setting, one might be able to identify a phase space in which the saturation happens less. 

v) David and Tong will look at the efficiency vs. luminosity in embedding and the charged-particle multiplicity vs. luminosity in data to see if they have similar trends. One can correct for the efficiency and check whether the corrected multiplicity is flat as a function of luminosity. Alternatively, one can check the corrected charged-particle pt distributions in different luminosity ranges. It will be interesting to do similar studies for DCA < 1 cm and DCA < 3 cm, and look at the pileup effects.
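The flatness check in v) might look like this in its simplest form (all numbers hypothetical):

```python
def corrected_mult(raw_mean_mult, tracking_eff):
    """Efficiency-corrected mean multiplicity per luminosity bin.

    If pileup is under control, the corrected values should be flat vs.
    luminosity; a residual rise indicates leftover pileup tracks.
    All numbers below are hypothetical."""
    return {lumi: m / tracking_eff[lumi] for lumi, m in raw_mean_mult.items()}

raw = {"low": 10.0, "mid": 9.9, "high": 9.8}     # raw mult roughly flat
eff = {"low": 0.80, "mid": 0.77, "high": 0.74}   # efficiency drops with luminosity
corr = corrected_mult(raw, eff)
```

A rising `corr` with luminosity, as in this toy, would be the signature of pileup offsetting the efficiency drop.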


vi) Email exchange with Akio on 2019/12/11
"Run15, I think we used same "high gain" setting for both east and west and both during pp and pAu/pAl. At least that what BBC HV setting file archive says. And this is adjusted to a MIP peak (and not mean of ADC specra like we often do with AuAu).
Run14 He3Au run, we used "low" gain on east side, so that it saturate less for Au side.
Run16 dAu run, we used "mid" gain on east side.


We count channel# (pmt#) from 1 to 16, not 0 to 15.
ch1-6 (your ch0-5, I guess) are inner most ring.
ch7-16 (your 6-15) are outer ring, seeing less charge than inner.
ch6 vs 16 (your ch5 and ch15) difference is the inner/outer.
ch7 (your 6) and ch12 (your 11) fibers from see 2 tiles, thus sees more charge than other ones in outer ring.

I have not done this myself, but EPD people told me that adding the long landau tails from many particles would adds up and ADC spectra goes higher (~x1.5 or x2.0?) compare to what you naively expect to see by multiplying MIP peak position * number of particles. "
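Akio's last remark, that many Landau tails add up to more than the MIP peak times the number of particles, can be checked with a toy: the Moyal distribution is a standard closed-form approximation to the Landau, and if Z ~ N(0,1) then -ln(Z^2) is Moyal-distributed. The MIP peak position and width below are made-up ADC numbers, so the resulting ratio need not match the quoted ~x1.5 or x2.0:

```python
import math
import random

def moyal_adc(rng, mip_peak=100.0, scale=15.0):
    """One single-particle ADC sample from a Moyal distribution (a common
    closed-form approximation to the Landau): if Z ~ N(0,1), then -ln(Z^2)
    follows a standard Moyal. mip_peak/scale are hypothetical parameters."""
    z = rng.gauss(0.0, 1.0)
    return mip_peak + scale * (-math.log(z * z))

rng = random.Random(7)
n_particles, n_events = 10, 20000
sums = [sum(moyal_adc(rng) for _ in range(n_particles)) for _ in range(n_events)]
mean_sum = sum(sums) / n_events
naive = n_particles * 100.0        # MIP peak position x number of particles
ratio = mean_sum / naive           # > 1: the long tails pull the sum up
```

The ratio grows with the tail-to-peak width ratio, which is why the EPD numbers can be substantially larger than this toy's.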



2019/12/06

Minutes:
i) BBC ADC as centrality classifier
- The BBCSumAdc distribution could be contaminated by noise or pileup. The latter contribution is less clear as the BBCSumAdc distribution does not seem to change significantly as a function of luminosity. 
- As shown in slides, the BBCSumAdc distribution in HIJING simulation is nowhere near that seen in data. There could be multiple factors contributing: i) the BBC gain setting is not correctly reflected in the simulation; ii) the charged particle multiplicity in HIJING is lower on the Au-going side than in reality (see slide 5 of the slides); iii) too much saturation in the data distribution. By the way, a nice correlation between BBCSumAdc and impact parameter is seen in the 2008 dAu analysis, and the HIJING simulation agrees with data reasonably well.
- The BBC ADC distributions in different channels are shown in slides. It could be seen that different channels have different level of saturations, and possibly different gains. It was suggested to: i) roughly calibrate gains by lining up BBC ADC distributions for channels at the same rapidity; ii) instead of removing channels that saturate very often for all the events, one can try to remove saturated channels on an event-by-event level, and then group events with the same set of channels removed together
- Shengli will check the BBCSumAdc on the Au-going side as a function of luminosity for p+Au, d+Au and 3He+Au to see if the behavior is similar or not
- Saskia will get in touch with Akio to find out the BBC gain settings for 2015 p+Au data
- Link to the collection of Yanfang's presentations: 
link

ii) Mid-rapidity charged particle multiplicity as centrality classifier
- It was agreed to use DCA < 1 cm to suppress pileup tracks.
- One way to check the level of remaining pileup contribution is to look at the corrected charged particle multiplicity as a function of luminosity, which should be flat in the case of no pileup. Deviation from being flat indicates the level of remaining pileup contribution. 
- In the Jpsi -> mu+mu embedding, a strong dependence of the TPC tracking efficiency on luminosity is seen for tracks above 1.3 GeV/c. See slide 12 of the slides, upper right figure. 

iii) The next meeting will be held on 01/03/2020, and Tong and Yanfang will present details of how they extract Ncoll and Npart from Glauber and HIJING. Meanwhile, we will continue communicating via email.


2019/11/22

Minutes:
- The end goal would be producing a pAu centrality framework that can be used by the whole collaboration. One can consider a paper dedicated to this topic, presenting single-particle RpA as a demonstration. 
- Currently, it is puzzling that the BBC does not seem to work in terms of selecting centralities. Saskia suggests it could be a sign of saturation in the BBC. It is suggested to check the BBC distribution in low-luminosity runs, and Saskia will dig out some old studies done by Yanfang.
- David asked whether this could be due to in-bunch pileup. Based on my study, the in-bunch pile-up events can be reduced significantly by applying |VzTpc - VzVpd| < 6 cm cut. See slides 43-49 of https://www.star.bnl.gov/protected/heavy/marr/Analysis/Jpsi/Run15_pAu200/20190926_ppJpsiYield_HFPWG.pdf.
- Furthermore, it might be worthwhile to check the BBC information in the 2016 200 GeV dAu runs, to see whether it looks better or not. For this we will contact Shengli.

2019/04/11

David Stewart - Centrality using BBC
Shengli Huang - Centrality using track multiplicity
Yanfang Liu - Centrality using track multiplicity

Minutes: 
1) Executive summary
- Ntrack as the centrality classifier:
i) We seem to have a good handle on using the track multiplicity at mid-rapidity to define centrality. The HIJING simulation shows a positive correlation between Ncoll and Ntrack. Also, both the Glauber and HIJING simulations agree with data reasonably well at high multiplicities.
ii) However, this centrality classifier suffers from auto-correlation since most analyses are also done using tracking at mid-rapidity. The effect of the auto-correlation depends on the analysis. For example, the effect could be small for a flow measurement while it is unacceptable for a jet measurement. The Jpsi measurement might be somewhere in between.

- BBCAdcSum as the centrality classifier
i) A better choice is to use BBCAdcSum on the east side (Au-going direction) as a centrality classifier, which minimizes the auto-correlation. A positive correlation is also seen between BBCAdcSum and Ntrack, even though the smearing effect is large.
ii) One issue with this variable is that the BBCAdcSum distribution depends strongly on luminosity. The current calibration procedure of aligning the mean values of BBCAdcSum distributions in different luminosity ranges might not be sufficient. More studies are needed to figure out how to correct for this. It might be worth checking the Ncoll values for the same centrality interval when using the different BBCAdcSum distributions in different luminosity ranges. If the variation in Ncoll is small, this dependence of the shape on luminosity will not have a significant impact on the final results.
iii) HIJING simulation with the current BBC simulator could not produce a BBCAdcSum distribution that matches data. This situation stayed basically the same after Akio made some improvements to the BBC simulation code. An earlier study for dAu seems to do a better job in terms of data-simulation agreement (http://inspirehep.net/record/855715/). 

- The final goal of the current effort is to provide a centrality definition based on maybe both Ntrack and BBCAdcSum to the whole collaboration for pAu collisions. It is interesting or even necessary for analyses to try both classifiers in order to understand the different biases. 

2) Technical details
i) Both the Ntrack and BBCAdcSum distributions are very different in different TPC vz ranges when using VPDMB-5 (500003) and VPDMB-10 (500904). The differences seem to be smaller for VPDMB-NoVtx trigger (500004). It is not clear what the exact cause of the trigger bias is. Practically, if one wants to use the VPDMB-5 and VPDMB-10 triggers, some event level weights should be applied to account for the trigger selection bias. 
ii) The Run15 st_ssdmb stream data has been reproduced, both MuDst and PicoDst, recently with SL18h library (https://www.star.bnl.gov/public/comp/prod/prodsum/production_pAu200_2015.P18ih.html)
iii) Need to check how many pileup tracks are left after applying the DCA < 1 cm cut. 
iv) Here are the studies that Takahito did concerning the VPD trigger and vertex finding efficiencies in pAu collisions: 
- Comparison of dN/deta: https://www.star.bnl.gov/protected/lfsupc/tdrk/Analysis/MTD/Dimuon/Run15/2017_0802_dNdeta.pdf. It looks like HIJING is not doing too bad of a job.
- VPD trigger and vertex finding efficiency vs. multiplicity: https://www.star.bnl.gov/protected/lfsupc/tdrk/Analysis/MTD/Dimuon/Run15/2017_1018_TBwLumiMultWeight.pdf (slide 6) The efficiency reaches a plateau at true multiplicity of about 8-10, and drops down a bit again around 25. 
Takahito did embed HIJING pAu events into zero-bias real data. You can find the file here: /star/u/tdrk/mtd/Simulations/2015pp200/mysimdev/dir_hijing_minbias_embed_hft_mcT/
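The event-level weights mentioned under technical detail i) might be implemented as a 1/efficiency lookup in (vz, multiplicity) bins; the binning and efficiencies here are entirely hypothetical:

```python
def trigger_weight(vz, mult, eff_map):
    """Event-level weight 1/efficiency to undo the VPD trigger selection
    bias. eff_map maps (vz_bin, mult_bin) -> trigger efficiency; the
    binning (three 20 cm vz bins in [-30, 30] cm, three coarse
    multiplicity bins) and all efficiencies are hypothetical."""
    vz_bin = max(0, min(int((vz + 30.0) / 20.0), 2))
    mult_bin = min(mult // 10, 2)
    return 1.0 / eff_map[(vz_bin, mult_bin)]

# hypothetical efficiencies: higher at high multiplicity and central vz
eff = {(v, m): 0.5 + 0.1 * m + (0.1 if v == 1 else 0.0)
       for v in range(3) for m in range(3)}
w = trigger_weight(vz=-5.0, mult=12, eff_map=eff)   # central vz bin
```

In practice the efficiency map would come from a reference sample such as VPDMB-NoVtx, as the minutes suggest.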