HF PWG Meeting Minutes by Zhenyu - 2017/4/20
Slide 2:
Q: Have data QA and bad-run rejection been applied? A: Yes.
Q: Are the listed tracking quality cuts the same as those used for the data-simulation comparison? A: Yes.
Comment by Jim: The gDCA distribution depends on the TPC distortion correction, which in turn depends on luminosity. The gDCA < 1 cm cut used to remove pile-up tracks might be too tight, as it may cut into the in-time track distribution.
Suggestion by Rongrong: Plot the gDCA distributions for different luminosities to see how much the gDCA distribution changes versus luminosity, and how much pile-up tracks contribute for different gDCA cuts. One may have to fit the gDCA distribution to separate the in-time and pile-up tracks (see the sketch after this slide's comments).
Comment by Zhenyu: Jim's concern may not be a problem, as long as the fraction of pile-up tracks passing gDCA < 1 cm is negligible.
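A minimal sketch of the kind of fit Rongrong suggests, assuming the gDCA distribution in a given luminosity bin can be modeled as a narrow in-time Gaussian plus a broad pile-up Gaussian; all shapes, widths, and yields below are illustrative toy values, not measured ones:

```python
# Toy fit of a gDCA distribution: narrow "in-time" core + broad "pile-up"
# component, then the pile-up fraction inside a given gDCA cut.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def model(x, a_in, s_in, a_pu, s_pu):
    # Two zero-centered Gaussians: in-time (narrow) + pile-up (broad).
    return a_in * norm.pdf(x, 0.0, s_in) + a_pu * norm.pdf(x, 0.0, s_pu)

# Toy gDCA sample (cm) standing in for tracks from one luminosity bin.
rng = np.random.default_rng(0)
gdca = np.concatenate([rng.normal(0.0, 0.3, 90000),   # "in-time" tracks
                       rng.normal(0.0, 2.0, 10000)])  # "pile-up" tracks

counts, edges = np.histogram(gdca, bins=200, range=(-5.0, 5.0))
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

popt, _ = curve_fit(model, centers, counts / width,
                    p0=[0.9 * gdca.size, 0.3, 0.1 * gdca.size, 2.0])
a_in, s_in, a_pu, s_pu = popt

# Pile-up fraction among tracks passing |gDCA| < 1 cm.
cut = 1.0
yield_in = a_in * (norm.cdf(cut, 0.0, s_in) - norm.cdf(-cut, 0.0, s_in))
yield_pu = a_pu * (norm.cdf(cut, 0.0, s_pu) - norm.cdf(-cut, 0.0, s_pu))
print(f"pile-up fraction within |gDCA| < {cut} cm: "
      f"{yield_pu / (yield_in + yield_pu):.3%}")
```

Repeating such a fit in each luminosity bin would show how the widths and the pile-up fraction inside the cut evolve with luminosity, which addresses both Jim's concern and Zhenyu's criterion.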
Slide 5: non-linear dependence on luminosity in the high-luminosity region
Q: What fraction of the data comes from the high-luminosity region, where the luminosity dependence is not fully corrected by the linear function? A: Will check.
Suggestion by Zhenyu: If the fraction is small, just throw away that part of the data. If the fraction is non-negligible, derive an additional correction factor for the high-luminosity data on top of the linear function. Derive these corrections using all the data combined, instead of only the data within a narrow range in Vz. Then apply the correction to the full data and to the data in narrow Vz bins, to see whether the correction works independently of Vz (see the sketch below).
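A minimal sketch of this two-step correction, under stated assumptions: the corrected quantity is taken to be nGoodPrimTrack versus a luminosity proxy in arbitrary units, and the high-luminosity boundary and residual shape are illustrative choices, not the analysis values:

```python
# Two-step luminosity correction: linear fit from the full sample, plus a
# residual quadratic factor derived only from the high-lumi region.
import numpy as np

LUMI_HI = 60.0  # assumed boundary of the high-lumi region (arbitrary units)

def derive_corrections(lumi, ntrk):
    """Linear correction over the whole sample, plus a residual quadratic
    factor fitted only to events above LUMI_HI."""
    p_lin = np.polyfit(lumi, ntrk, 1)
    lin_corrected = ntrk * np.polyval(p_lin, 0.0) / np.polyval(p_lin, lumi)
    hi = lumi > LUMI_HI
    p_res = np.polyfit(lumi[hi], lin_corrected[hi], 2)
    return p_lin, p_res

def apply_corrections(lumi, ntrk, p_lin, p_res):
    corr = ntrk * np.polyval(p_lin, 0.0) / np.polyval(p_lin, lumi)
    hi = lumi > LUMI_HI
    # Normalize the residual factor to 1 at LUMI_HI so the correction is
    # continuous across the boundary.
    corr[hi] *= np.polyval(p_res, LUMI_HI) / np.polyval(p_res, lumi[hi])
    return corr

# Toy check: a quadratic droop above LUMI_HI is flattened after both steps.
rng = np.random.default_rng(1)
lumi = rng.uniform(0.0, 100.0, 100000)
ntrk = 300.0 * (1.0 - 0.002 * lumi
                - 5e-5 * np.maximum(lumi - LUMI_HI, 0.0) ** 2)
p_lin, p_res = derive_corrections(lumi, ntrk)
flat = apply_corrections(lumi, ntrk, p_lin, p_res)
print(flat[lumi < 10].mean(), flat[lumi > 90].mean())  # should be close
```

The Vz check would then repeat apply_corrections on samples sliced into narrow Vz bins: if the corrected mean is flat versus luminosity in every slice, the correction is Vz-independent as hoped.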
Slide 10: random correction to make the corrected nGoodPrimTrack an integer
Q: How does the random correction work? A: Will try to explain it in the text.
Comment by Zhenyu: A completely random correction is not a good idea, as it will lead to different results for two analyzers even when they look at the same data. One can get rid of the randomness by, e.g., deriving the random seed from a characteristic quantity of the event, such as the event number. But I do not see a real problem to begin with in having a non-smooth nGoodPrimTrack distribution after the correction. Suggest first deriving the centrality definition (cuts on the corrected nGoodPrimTrack), and then checking whether there is any issue with either the fractional correction (non-integer values) or the random correction (integer values). A sketch of an event-seeded correction is given below.
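As an illustration of the event-seeded variant mentioned above, here is a minimal sketch; the function name, correction factor, and event numbers are hypothetical, not the PWG's code. The fractional part of the corrected multiplicity decides, via a generator seeded with the event number, whether to round up, so the result is integer-valued yet reproducible by any analyzer:

```python
# Deterministic probabilistic rounding of a corrected track multiplicity,
# seeded per event so the same event always yields the same integer.
import numpy as np

def corrected_ntrack(n_good_prim_track: int, corr_factor: float,
                     event_number: int) -> int:
    value = corr_factor * n_good_prim_track
    frac, base = np.modf(value)
    # Same event number -> same seed -> same rounding decision.
    rng = np.random.default_rng(event_number)
    return int(base) + (1 if rng.random() < frac else 0)

# The same event always gives the same corrected multiplicity:
print(corrected_ntrack(123, 1.057, event_number=987654))
print(corrected_ntrack(123, 1.057, event_number=987654))  # identical output
```

On average this reproduces the fractional correction (the expectation of the rounded value equals corr_factor * nGoodPrimTrack), while removing the analyzer-to-analyzer irreproducibility of an unseeded random correction.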