ROOT beginners' guide

To get started, you can first learn a little C++ and ROOT:
https://root.cern.ch/

To install it on your own laptop, go to:
https://root.cern.ch/downloading-root
You can find a version that works for your operating system,
and you don't have to install the latest version.

To read the user manual, go to:
https://root.cern/root/htmldoc/guides/users-guide/ROOTUsersGuide.html
You can go through Chapters 1-6, then 11 and 12.

Once you are done, I can give you some further assignments.

1. calculate PI

The first assignment is to numerically calculate the value of PI.
We use the so-called Monte Carlo approach.
We define a square, say 1 by 1, whose area is 1.
Then inscribe a circle inside the square, whose area is PI/4.
Now randomly generate a 2D point inside the square,
which has some chance of also falling inside the circle.
After you generate many such points, count the number of points
that fall into the circle. That count is proportional to the area of the circle,
so the fraction of points inside the circle approaches PI/4, and PI is roughly 4 times that fraction.
It doesn't need to be very accurate. Just make sure the code works.
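
A minimal sketch of this, as a ROOT macro (the macro name, the use of TRandom3, and the number of points are just illustrative choices):

// pi_mc.C -- Monte Carlo estimate of PI (a sketch)
#include "TRandom3.h"
#include <iostream>

void pi_mc(int nPoints = 1000000)
{
   TRandom3 rng(0);
   int nInside = 0;
   for (int i = 0; i < nPoints; ++i) {
      double x = rng.Uniform(0., 1.);   // a random point inside the unit square
      double y = rng.Uniform(0., 1.);
      // circle of radius 0.5 centred at (0.5, 0.5), area PI/4
      if ((x - 0.5)*(x - 0.5) + (y - 0.5)*(y - 0.5) < 0.25) ++nInside;
   }
   std::cout << "PI is about " << 4.0 * nInside / nPoints << std::endl;
}

Run it with "root -l pi_mc.C"; with a million points the estimate should be good to 2-3 decimal places.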


2. calculate e

The next assignment is to calculate the value of e, the base of the natural logarithm.

Equivalently, e is the root of the function y(x) = log(x) - 1.
You start with a trial value of x0 = 1, and find that y is -1.
Then you increase x by a step, e.g., 0.1, and check the value of y again.
If it's still negative, you keep increasing x until y becomes positive.
Once y is positive, you start to decrease x by a smaller step, and stop when y turns negative again.
Then increase x by an even smaller step, and so on.
In the end, y is very close to zero, and x is very close to the value of e.

This is an exercise in iteration.
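
A minimal sketch of this iteration (plain C++, which also runs as a ROOT macro; the step sizes and the stopping threshold are arbitrary choices):

// find_e.C -- walk towards the root of y(x) = log(x) - 1 (a sketch)
#include <cmath>
#include <iostream>

void find_e()
{
   double x = 1.0;      // trial value x0
   double step = 0.1;
   while (step > 1e-8) {
      while (std::log(x) - 1.0 < 0) x += step;   // y negative: keep increasing x
      step /= 10.0;                              // overshot: shrink the step
      while (std::log(x) - 1.0 > 0) x -= step;   // y positive: walk back down
      step /= 10.0;
   }
   std::cout << "e is about " << x << std::endl;  // should print ~2.71828
}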

3. sorting
Here's the 3rd assignment, which contains 3 steps.
You don't need to send me the code; the final figures are fine.

1) Randomly generate 20 numbers and sort them in ascending order.
To do this, we have two approaches. The 1st one loops through every
element to find the smallest number, then loops over the rest for
the 2nd smallest, and so on until the 20th. This approach takes
a fixed number of comparisons between two numbers.
The 2nd approach compares the first and second numbers; if they
are already in ascending order, leave them, otherwise swap them.
Then try the next pair. When you reach the end, come back to the beginning.
Keep doing this until no more swaps are needed.
This is faster and requires fewer comparisons. Please try both approaches
(a sketch covering all three steps follows step 3).

2) For the 2nd approach only: every time you sort a set of 20 random numbers,
count the number of comparisons, Nc. When you sort another set
of 20 random numbers, you get another Nc. I want you to fill a histogram,
i.e. the distribution of Nc, after trying many sets of 20 random numbers.

3) Similar to step 2, but this time we are not limited to 20.
You can try 10, 20, 30, ... up to 100 random numbers, and each time you get an Nc.
I want you to fill a TProfile of Nc vs N, where N is the number of random numbers,
10, 20 and so on. In this way, we can study the relationship between Nc and N.
You can try to fit it with some functional forms. If you are not familiar with TProfile,
read the ROOT manual again.
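
A possible sketch covering all three steps in one macro (the helper names, binnings, and the number of sets are just illustrative; the bubble-sort helper returns the comparison count Nc):

// sorting.C -- the three sorting steps (a sketch)
#include "TRandom3.h"
#include "TH1D.h"
#include "TProfile.h"
#include <vector>
#include <algorithm>
#include <iostream>

// approach 1: repeatedly select the smallest remaining element (fixed number of comparisons)
void selectionSort(std::vector<double>& v)
{
   for (size_t i = 0; i + 1 < v.size(); ++i)
      for (size_t j = i + 1; j < v.size(); ++j)
         if (v[j] < v[i]) std::swap(v[i], v[j]);
}

// approach 2: swap neighbours until no swap is needed; returns the number of comparisons Nc
int bubbleSortNc(std::vector<double>& v)
{
   int nc = 0;
   bool swapped = true;
   while (swapped) {
      swapped = false;
      for (size_t i = 0; i + 1 < v.size(); ++i) {
         ++nc;
         if (v[i] > v[i + 1]) { std::swap(v[i], v[i + 1]); swapped = true; }
      }
   }
   return nc;
}

void sorting(int nSets = 10000)
{
   TRandom3 rng(0);
   TH1D     *hNc = new TH1D("hNc", "N_{c} for 20 numbers;N_{c};sets", 100, 0, 400);
   TProfile *pNc = new TProfile("pNc", "N_{c} vs N;N;<N_{c}>", 10, 5, 105);

   // step 1): one set of 20 numbers, sorted with both approaches
   std::vector<double> a(20), b(20);
   for (int i = 0; i < 20; ++i) a[i] = b[i] = rng.Uniform(0., 1.);
   selectionSort(a);
   bubbleSortNc(b);
   for (double x : a) std::cout << x << " ";
   std::cout << std::endl;

   // steps 2) and 3): many sets; histogram of Nc for N = 20, profile of Nc vs N
   for (int s = 0; s < nSets; ++s) {
      for (int N = 10; N <= 100; N += 10) {
         std::vector<double> v(N);
         for (double& x : v) x = rng.Uniform(0., 1.);
         int nc = bubbleSortNc(v);
         if (N == 20) hNc->Fill(nc);
         pNc->Fill(N, nc);
      }
   }
   hNc->Draw();   // draw pNc on a second canvas; pNc->Fit("pol2") is one functional form to try
}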

4. input/output
Now the next task is about input/output. 2 steps:
 
1) fill a histogram many times with Gaussian random numbers of two kinds:
one with mean 0 and rms 1, and
the other with mean 2 and rms 3.
Fill the first kind twice as often as the second,
so in the end you will have two overlapping Gaussians in a single histogram.
Then write this histogram into a ROOT file (a sketch of both steps follows step 2).

2) open the previous ROOT file, retrieve the saved histogram,
and fit it with a predefined function. The fit function is a
double Gaussian, of course. Try to recover the parameters you just put in.
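
A sketch of both steps (the file name, histogram name, binning, and starting parameters are just assumptions):

// twogaus.C -- write the two-Gaussian histogram, then read it back and fit it (a sketch)
// load with ".L twogaus.C", then call writeHist() and fitHist()
#include "TRandom3.h"
#include "TH1D.h"
#include "TF1.h"
#include "TFile.h"

void writeHist(int nFill = 30000)
{
   TRandom3 rng(0);
   TH1D* h = new TH1D("h2gaus", "two Gaussians;x;entries", 100, -10, 15);
   for (int i = 0; i < nFill; ++i) {
      h->Fill(rng.Gaus(0., 1.));   // mean 0, rms 1 ...
      h->Fill(rng.Gaus(0., 1.));   // ... filled twice as often
      h->Fill(rng.Gaus(2., 3.));   // mean 2, rms 3
   }
   TFile f("twogaus.root", "RECREATE");
   h->Write();
   f.Close();
}

void fitHist()
{
   TFile f("twogaus.root");
   TH1D* h = (TH1D*)f.Get("h2gaus");
   h->SetDirectory(nullptr);   // detach from the file so the histogram survives after the file closes
   TF1* fit = new TF1("fit", "gaus(0) + gaus(3)", -10, 15);   // 6 parameters: two Gaussians
   fit->SetParameters(1000, 0, 1, 500, 2, 3);                 // rough starting values
   h->Fit(fit, "R");                                          // "R": fit only within the function range
   // the fitted means and widths should come back close to (0, 1) and (2, 3)
}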

5. shifting

The next assignment is a more realistic one.

In reality, the reconstructed particle angles are not evenly distributed, and we need to force them
to be flat. You start with the histogram from the last assignment, the 2-Gaussian one.
Fold it into a 2*PI range, like -PI to PI. It will not be flat.
Use this histogram as a random number generator (with the GetRandom() function).
Follow the procedure below to calculate the Fourier coefficients, and shift each raw angle
a little according to the recipe. Then fill a new histogram with the shifted angles.
If you do everything right, the new histogram should look flat.
I've attached the paper for the shifting method, which is described in the appendix.
To learn more about the shifting method, you can read Ch. 6.1.2 of my thesis:
https://drupal.star.bnl.gov/STAR/files/startheses/2005/wang_gang.pdf
You can also read the first 3 chapters to learn more about the physics and our experiment.
You may apply the shifting correction up to the 10th-order harmonic, and the final
distribution should be flat enough.
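
A sketch of one way to do it, assuming the folded 2-Gaussian histogram is available as hraw and using the usual Fourier flattening recipe, phi' = phi + sum_n (2/n) [ <cos(n phi)> sin(n phi) - <sin(n phi)> cos(n phi) ] (check it against the appendix of the attached paper):

// shift_phi.C -- flatten a raw angle distribution with the shifting method (a sketch)
// load with ".L shift_phi.C", then call shift_phi(hraw)
#include "TH1D.h"
#include "TMath.h"
#include <vector>

void shift_phi(TH1D* hraw, int nSample = 200000, int nOrder = 10)
{
   // 1st pass: sample raw angles and accumulate <cos(n phi)> and <sin(n phi)>
   std::vector<double> phi(nSample), cosAvg(nOrder + 1, 0.), sinAvg(nOrder + 1, 0.);
   for (int i = 0; i < nSample; ++i) {
      phi[i] = hraw->GetRandom();
      for (int n = 1; n <= nOrder; ++n) {
         cosAvg[n] += TMath::Cos(n * phi[i]) / nSample;
         sinAvg[n] += TMath::Sin(n * phi[i]) / nSample;
      }
   }
   // 2nd pass: shift each angle and fill the new histogram
   TH1D* hflat = new TH1D("hflat", "shifted angles;#phi;entries",
                          100, -TMath::Pi(), TMath::Pi());
   for (int i = 0; i < nSample; ++i) {
      double dphi = 0.;
      for (int n = 1; n <= nOrder; ++n)
         dphi += (2. / n) * (cosAvg[n] * TMath::Sin(n * phi[i])
                           - sinAvg[n] * TMath::Cos(n * phi[i]));
      double phiNew = phi[i] + dphi;
      phiNew = TMath::ATan2(TMath::Sin(phiNew), TMath::Cos(phiNew));   // fold back into (-PI, PI)
      hflat->Fill(phiNew);
   }
   hflat->Draw();   // should look flat if everything is right
}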

6. flow simulation

You need to first read this paper: https://arxiv.org/abs/nucl-ex/9805001
 
Basically, you simulate some particles, say 200, in each event.
Each particle has an azimuthal angle phi_i, and all these phi_i
will follow the distribution described by Eq. (1) of the paper.
The reaction plane angle Psi_r is random, event by event.
After you have those particles with input v2 = 5% and all other vn = 0,
you can calculate v2 of these particles based on Eq. (8) (and other related equations).
What you obtain should be the same as what you input (5%).
 
You first define an array of 200 elements.
For each event, you will set some angle values in that array, and
from event to event, you just keep refreshing that array.
For each of the 200 particles, you generate two random numbers:
x within a 2*PI range, and y from 0 to 1 + 2*v2_input.
If y is below 1 + 2*v2_input*cos(2*x - 2*Psi_RP), you keep this x.
Psi_RP is an event-by-event random number within 2*PI, constant for each event.
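
A sketch of this accept-reject sampling for one event (the helper name genEvent and its arguments are just illustrative):

// fill phi[] with angles following dN/dphi ~ 1 + 2*v2*cos(2(phi - Psi_RP))  (a sketch)
#include "TRandom3.h"
#include "TMath.h"

void genEvent(double* phi, int nPart, double v2, double psiRP, TRandom3& rng)
{
   for (int i = 0; i < nPart; ++i) {
      while (true) {                                 // accept-reject sampling
         double x = rng.Uniform(0., 2. * TMath::Pi());
         double y = rng.Uniform(0., 1. + 2. * v2);
         if (y < 1. + 2. * v2 * TMath::Cos(2. * (x - psiRP))) { phi[i] = x; break; }
      }
   }
}
// usage: TRandom3 rng(0); double phi[200];
//        double psiRP = rng.Uniform(0., 2. * TMath::Pi());
//        genEvent(phi, 200, 0.05, psiRP, rng);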
 
In each event, you need to reconstruct the flow vector, Q, as described in the paper.
From Q, you get the event plane. In fact, you divide the particles into two sub-events
and reconstruct two sub-event planes. Follow the procedures in the paper
to calculate v2_obs (the observed v2), and correct it with the event plane resolution.
If you do everything right, the corrected v2 is equal to v2_input.
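
One possible way to put it together is sketched below: particles of one sub-event are correlated with the plane of the other sub-event (so there is no autocorrelation), and the correction uses the sub-event resolution sqrt(<cos(2(Psi_a - Psi_b))>). The names and the number of events are illustrative; check the exact prescription against the paper:

// v2sim.C -- event-plane v2 with two sub-events (a sketch)
#include "TRandom3.h"
#include "TMath.h"
#include <vector>
#include <iostream>

void v2sim(int nEvent = 20000, int nPart = 200, double v2in = 0.05)
{
   TRandom3 rng(0);
   std::vector<double> phi(nPart);
   double sumObs = 0., nObs = 0., sumAB = 0.;

   for (int ev = 0; ev < nEvent; ++ev) {
      double psiRP = rng.Uniform(0., 2. * TMath::Pi());
      // generate the particles (accept-reject, as above)
      for (int i = 0; i < nPart; ++i) {
         while (true) {
            double x = rng.Uniform(0., 2. * TMath::Pi());
            double y = rng.Uniform(0., 1. + 2. * v2in);
            if (y < 1. + 2. * v2in * TMath::Cos(2. * (x - psiRP))) { phi[i] = x; break; }
         }
      }
      // flow vectors and event planes of the two sub-events (even / odd indices)
      double qxA = 0., qyA = 0., qxB = 0., qyB = 0.;
      for (int i = 0; i < nPart; ++i) {
         if (i % 2 == 0) { qxA += TMath::Cos(2. * phi[i]); qyA += TMath::Sin(2. * phi[i]); }
         else            { qxB += TMath::Cos(2. * phi[i]); qyB += TMath::Sin(2. * phi[i]); }
      }
      double psiA = 0.5 * TMath::ATan2(qyA, qxA);
      double psiB = 0.5 * TMath::ATan2(qyB, qxB);
      sumAB += TMath::Cos(2. * (psiA - psiB));
      // correlate sub-event A particles with the plane of sub-event B
      for (int i = 0; i < nPart; i += 2) { sumObs += TMath::Cos(2. * (phi[i] - psiB)); ++nObs; }
   }
   double v2obs = sumObs / nObs;
   double res   = TMath::Sqrt(sumAB / nEvent);   // resolution of a sub-event plane
   std::cout << "v2_obs = " << v2obs << "  resolution = " << res
             << "  corrected v2 = " << v2obs / res << std::endl;   // should be close to v2in
}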

7. Recentering and shifting

I've attached my note to show that shifting and recentering are roughly equivalent.
When you do recentering for v2, basically you start with
<cos(2phi - 2Psi)> = <cos(2phi)cos(2Psi)> + <sin(2phi)sin(2Psi)>  (1)
Since the detector is not perfect, you want to modify each term above:
cos(2phi) -> cos(2phi) - <cos(2phi)>
cos(2Psi) -> cos(2Psi) - <cos(2Psi)>
sin(2phi) -> sin(2phi) - <sin(2phi)>
sin(2Psi) -> sin(2Psi) - <sin(2Psi)>


Put them back into Eq. (1), and you have
<[cos(2phi) - <cos(2phi)>][cos(2Psi) - <cos(2Psi)>]> + <[sin(2phi) - <sin(2phi)>][sin(2Psi) - <sin(2Psi)>]>
= <cos(2phi)cos(2Psi)> - <cos(2phi)><cos(2Psi)> + <sin(2phi)sin(2Psi)> - <sin(2phi)><sin(2Psi)>
= <cos(2phi - 2Psi)> - <cos(2phi)><cos(2Psi)> - <sin(2phi)><sin(2Psi)>

So the first term is the raw v2, and the last 2 terms are the corrections from "recentering".
Every term can be obtained in one run.
You can run a simulation to test if these results are correct.
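
As the note says, every average can be accumulated in a single pass. A minimal sketch of the bookkeeping, assuming you already have one (phi, Psi) pair per entry from whatever simulation you run (the struct and its names are just one way to organize it):

// recenter.C -- accumulate all terms of the recentered correlator in one pass (a sketch)
#include <cmath>
#include <iostream>

struct RecenterSums {
   double raw = 0, c1 = 0, c2 = 0, s1 = 0, s2 = 0;
   long   n = 0;

   void fill(double phi, double Psi) {            // call once per (phi, Psi) pair
      raw += std::cos(2*phi - 2*Psi);
      c1  += std::cos(2*phi);   c2 += std::cos(2*Psi);
      s1  += std::sin(2*phi);   s2 += std::sin(2*Psi);
      ++n;
   }
   // <cos(2phi - 2Psi)> - <cos(2phi)><cos(2Psi)> - <sin(2phi)><sin(2Psi)>
   double corrected() const {
      return raw/n - (c1/n)*(c2/n) - (s1/n)*(s2/n);
   }
};
// usage: RecenterSums rs;  in the event loop: rs.fill(phi, Psi);  at the end: rs.corrected()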

8. Q-cumulant method

Here's the reference for the cumulant method (also attached):
https://journals.aps.org/prc/pdf/10.1103/PhysRevC.83.044913

9. Freeze-out parameters

To obtain the Ntrans u+d, you need to retrieve the values of Tch and muB from
Table IX (column SCER) of this paper:

First get the values for each centrality, and then combine the centrality bins into 0-10% and 40-80%.
Just do a simple average, e.g., average 0-5% and 5-10% to get 0-10%.
 
The 14.5 GeV data are missing, so you can first define a TGraph with the remaining energy points,
and then use Eval(14.5) to estimate the value at 14.5 GeV. For example,

// energy[7] and mu_B_temp[7] hold the 7 available beam energies and the corresponding mu_B values
TGraph* mu = new TGraph(7, energy, mu_B_temp);
// Eval() interpolates between the surrounding points to give the 14.5 GeV estimate
cout << "mu_B at 14.5 GeV = " << mu->Eval(14.5) << endl;