-------------------------------------------------------------------------
Report of Referee A:
-------------------------------------------------------------------------
This is really a well-written paper. It was a pleasure to read, and I
have only relatively minor comments.
We thank the reviewer for careful reading of our paper and for providing
useful feedback. We are pleased to know that the reviewer finds the
paper to be well written. We have incorporated all the comments into a
new version of the draft.
Page 3: Although there aren't published pp upsilon cross sections, there
is a published R_AA and an ee mass spectrum shown in E. Atomssa's QM09
proceedings. This should be referenced.
We are aware of the PHENIX results from
E. Atomssa, Nucl.Phys.A830:331C-334C,2009
and three other relevant QM proceedings:
P. Djawotho, J.Phys.G34:S947-950,2007
D. Das, J.Phys.G35:104153,2008
H. Liu, Nucl.Phys.A830:235C-238C,2009
However, it is STAR's policy not to reference our own preliminary data
in a manuscript submitted for publication on a given topic, and by
extension not to reference other preliminary experimental data on the
same topic.
Page 4, end of section A: Quote trigger efficiency.
The end of Section A now reads:
"We find that 25% of the Upsilons produced at
midrapidity have both daughters in the BEMC acceptance and at least one
of them can fire the L0 trigger. The details of the HTTP
trigger efficiency and acceptance are discussed in Sec. IV."
Figure 1: You should either quote L0 threshold in terms of pt, or plot
vs. Et. Caption should say L0 HT Trigger II threshold.
We changed the figure to plot vs. E_T, which is the quantity that is
measured by the calorimeter. For the electrons in the analysis, the
difference between p_T and E_T is negligible, so the histograms in
Figure 1 are essentially unchanged. We changed the caption as suggested.
Figures 3-6 would benefit from inclusion of a scaled minimum bias spectrum
to demonstrate the rejection factor of the trigger.
We agree that it is useful to quote the rejection factor of the trigger.
We prefer to do so in the text. We added to the description of Figure
3 the following sentence: "The rejection factor achieved with Trigger
II, defined as the number of minimum bias events counted by the trigger
scalers divided by the number of events where the upsilon trigger was
issued, was found to be 1.8 x 10^5."
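In equation form (notation ours, for this reply only):

    % rejection factor of the upsilon trigger
    R = \frac{N_{\mathrm{min.bias}}}{N_{\Upsilon\,\mathrm{trig}}} \approx 1.8 \times 10^{5}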
Figure 9: There should be some explanation of the peak at E/p = 2.7.
We investigated this peak, and we traced it to a double counting error.
The problem arose because the figure was generated from a pairwise
Ntuple, i.e. one in which each row represented a pair of electrons
(both like-sign and unlike-sign pairs included), each with a value of
E and p, instead of a single-electron Ntuple. We had plotted
the value of E/p for the electron candidate which matched all possible
high-towers in the event. The majority of events have only one candidate
pair, so there were relatively few cases where there was double
counting. We note that for pairwise quantities such as opening angle and
invariant mass, each entry in the Ntuple is still different. However,
the case that generated the peak at E/p = 2.7 in the figure was traced
to one event that had one candidate positron track, with its
corresponding high-tower, which was paired with several other electron
and positron candidates. Each of these entries has a different invariant
mass, but the same E/p for the first element of the pair. So its entry
in Figure 9, which happened to be at E/p=2.7, was repeated several times
in the histogram. The code to generate the data histogram in Figure 9
has now been corrected to guarantee that the E/p distribution is made
out of unique track-cluster positron candidates. The figure in the paper
has been updated. The new histogram shows about 5 counts in that
region. To gauge the effect of the double counting on the E/p=1 area
of the figure: there were about 130 counts at the E/p=1 peak position
with the double-counting error, and about 120 counts in the peak after
removing it. The fix leads to an improved match between the data
histogram and the Monte Carlo simulations. We therefore leave the
efficiency calculation, which is based on the Monte Carlo Upsilon
events, unchanged. The pairwise invariant mass distribution from which
the main results of the paper are obtained is unaffected by this. We
thank the reviewer for calling our attention to this peak, which allowed
us to find and correct this error.
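Schematically, the fix ensures the E/p histogram is filled once per
unique track-cluster candidate rather than once per pair the candidate
appears in. A minimal sketch (the data layout and names below are
hypothetical, not our actual Ntuple code):

    # Each row of the pairwise Ntuple carries both legs of a dielectron pair:
    # (track_id_1, cluster_id_1, E_1, p_1, track_id_2, cluster_id_2, E_2, p_2)
    pairs = [
        (1, 10, 4.9, 5.0, 2, 11, 4.8, 5.1),
        (1, 10, 4.9, 5.0, 3, 12, 5.2, 5.0),  # same first leg paired again
    ]

    seen = set()
    e_over_p = []
    for t1, c1, e1, p1, t2, c2, e2, p2 in pairs:
        for tid, cid, e, p in ((t1, c1, e1, p1), (t2, c2, e2, p2)):
            if (tid, cid) not in seen:  # keep only unique track-cluster matches
                seen.add((tid, cid))
                e_over_p.append(e / p)

    # Three unique entries instead of four; the repeated leg is filled once.
    print(e_over_p)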
-------------------------------------------------------------------------
Report of Referee B:
-------------------------------------------------------------------------
The paper reports the first measurement of the upsilon (Y) cross-section
in pp collisions at 200 GeV. This is a key piece of information, both
in the context of the RHIC nucleus-nucleus research program and in its
own right. The paper is rather well organized, the figures are well
prepared and explained, and the introduction and conclusion are clearly
written. However, in my opinion the paper is not publishable in its
present form: some issues, which I enumerate below, should be addressed
by the authors before that.
The main problems I found with the paper have to do with the estimate
of the errors. There are two issues:
The first: the main result is obtained by integrating the counts above
the like-sign background between 8 and 11 GeV in figure 10, quoted to
give 75+-20 (bottom part of Table III). This corresponds to the sum Y +
continuum. Now to get the Y yield, one needs to subtract an estimated
contribution from the continuum. Independent of how this has been
estimated, the subtraction can only introduce an additional absolute
error. Starting from the systematic error on the counts above background,
the error on the estimated Y yield should therefore increase, whereas
in the table it goes down from 20 to 18.
Thanks for bringing this issue to our attention. It is true that when
subtracting two independently measured numbers, the statistical
uncertainty of the result can only be larger than the absolute error
of either number: if C = A - B, and error(A) and error(B) are the
corresponding errors, then the statistical error on C would be
sqrt(error(A)^2 + error(B)^2), which is larger than either error(A)
or error(B). However, the extraction of the
Upsilon yield in the analysis needs an estimate of the continuum
contribution, but the key difference is that it is not obtained by an
independent measurement. The two quantities, namely the Upsilon yield
and the continuum yield, are obtained ultimately from the same source:
the unlike sign dielectron distribution, after the subtraction of the
like-sign combinatorial background. This fact causes an
anti-correlation between the two yields: the larger the continuum
yield, the smaller the Upsilon yield. One therefore cannot treat the
continuum yield and the Upsilon yield as independent measurements.
This is why the paper notes that an advantage of using the fit is that
it automatically takes into account the correlation between the
continuum and the Upsilon yield. So the error that is
quoted in Table III for all the "Upsilon counts", i.e. the Fitting
Results, the Bin-by-bin Counting, and the Single-bin Counting, is
obtained by applying the percent error on the Upsilon yield from the
fitting method, which is the best way to take the anti-correlation
between the continuum yield and the Upsilon yield into account. We
have expanded on this in Sec. VI.C to help clarify this point. We
thank the referee for alerting us.
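For reference, the general propagation formula behind this argument
(notation ours; rho is the correlation coefficient between the two
inputs):

    % error on C = A - B when A and B are correlated:
    \sigma_C^2 = \sigma_A^2 + \sigma_B^2 - 2\rho\,\sigma_A\sigma_B

With rho = 0 this reduces to the quadrature sum above; a positive
correlation between the background-subtracted signal and the continuum
estimate (equivalently, the anti-correlation between the Upsilon and
continuum yields) makes sigma_C smaller than the quadrature value,
which is the effect the fit captures.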
The second issue is somewhat related: the error on the counts (18/54, or
33%) is propagated to the cross section (38/114) as statistical error,
and a systematic error obtained as quadratic sum of the systematic
uncertainties listed in Table IV is quoted separately. The uncertainty on
the subtraction of the continuum contribution (not present in Table IV),
has completely disappeared, in spite of being identified in the text as
"the major contribution to the systematic uncertainty" (page 14, 4 lines
from the bottom).
This is particularly puzzling, since the contribution of the continuum
is even evaluated in the paper itself (and with an error). This whole
part needs to be either fixed or, in case I have misunderstood what the
authors did, substantially clarified.
We agree that this can be clarified. The error on the counts (18/54, or
33%) includes two contributions:
1) The (purely statistical) error on the unlike-sign minus like-sign
subtraction, which is 20/75 or 26%, as per Table III.
2) The additional error from the continuum contribution, discussed in
the previous comment. This is not a simple quadrature sum of the 26%
statistical error and the error on the continuum; rather, it must
include the anti-correlation of the continuum yield and the Upsilon
yield. The fit procedure takes this into account, and we arrive at the
combined 33% error.
The question then arises how to quote the statistical and systematic
uncertainties. One difficulty we faced is that the subtraction of the
continuum contribution is not cleanly separated between statistical and
systematic uncertainties. On the one hand, the continuum yield of 22
counts can be varied within the 1-sigma contours to be as low as 14 and
as large as 60 counts (taking the range of the DY variation from Fig.
12). This uncertainty is dominated by the statistical errors of the
dielectron invariant mass distribution from Fig. 11. Therefore, the
dominant uncertainty in the continuum subtraction procedure is
statistical, not systematic. To put it another way, if we had much
larger statistics, the uncertainty in the fit would be much reduced
also. On the other hand, there is certainly a model-dependent component
in the subtraction of the continuum, which is traditionally a systematic
uncertainty. We chose to represent the combined 33% error as a
statistical uncertainty because the systematic variation that the
results would have if we were to choose, say, a different model for
the continuum contribution is small compared to the variation allowed
by the statistical errors in the invariant mass distribution. In
other words,
the reason we included the continuum subtraction uncertainty in the
quote of the statistical error was that its size in the current
analysis ultimately comes from the statistical precision of our
invariant mass spectrum. We agree that this is not clear in the text,
given that we list this uncertainty among all the other systematic
uncertainties, and we have modified the text to clarify this. Perhaps a
more appropriate way to characterize the 33% error is that it includes
the "statistical and fitting error", to highlight the fact that in
addition to the purely statistical errors that can be calculated from
the N++, N-- and N+- counting statistics, this error includes the
continuum subtraction error, which is based on a fit that takes into
account the statistical error on the invariant mass spectrum, and the
important anti-correlation between the continuum yield and the Upsilon
yield. We have added an explanation of these items in the updated draft of
the paper, in Sec VI.C.
There are a few other issues which in my opinion should be dealt with
before the paper is fit for publication:
- in the abstract, it is stated that the Color Singlet Model (CSM)
calculations underestimate the Y cross-section. Given that the discrepancy
is only 2 sigma or so, such a statement is not warranted. "Seems to
disfavour", could perhaps be used, if the authors really insist in making
such a point (which, however, would be rather lame). The statement that
CSM calculations underestimate the cross-section is also made in the
conclusion. There, it is even commented, immediately after, that the
discrepancy is only a 2 sigma effect, resulting in two contradicting
statements back-to-back.
Our aim was mainly to be descriptive. To clarify our intent, we use
"underestimate" in the sense that even if we move our datum point down
by the 1-sigma error of our measurement, its value is still higher
than the top end of the CSM calculation. We quantify this by saying
that the size of the effect is about 2 sigma. We think that the
concise statement "underestimate by 2 sigma" objectively summarizes
the observation, without the need for more subjective statements, and
we have modified the text in the abstract and conclusion accordingly.
- on page 6 it is stated that the Trigger II cuts were calculated offline
for Trigger I data. However, it is not clear if exactly the same trigger
condition was applied offline on the recorded values of the original
trigger input data or the selection was recalculated based on offline
information. This point should be clarified.
Agreed. We have added the sentence: "The exact same trigger condition was
applied offline on the recorded values of the original trigger input data."
- on page 7 it is said that PYTHIA + Y events were embedded in zero-bias
events with a realistic distribution of vertex position. Given that
zero-bias events are triggered on the bunch crossing, and do not
necessarily contain a collision (and even less a reconstructed vertex),
it is not clear what the authors mean.
We are not sure whether the unclear point is how the realistic vertex
distribution was obtained or where the analyzed collision comes from,
so we will try to clarify both. The referee has correctly understood
that the zero-bias events do not necessarily contain a collision.
That is why the PYTHIA simulated event is needed. The zero-bias events
will contain additional effects such as out of time pile-up in the Time
Projection Chamber, etc. In other words, they will contain aspects of
the data-taking environment which are not captured by the PYTHIA events.
That is what is mentioned in the text:
"These zero-bias events do not always have a collision in the given
bunch crossing, but they include all the detec-
tor effects and pileup from out-of-time collisions. When
combined with simulated events, they provide the most
realistic environment to study the detector e±ciency and
acceptance."
The simulated events referred to in this text are the PYTHIA events, and
it is the simulated PYTHIA event, together with the Upsilon, that
provides the collision event to be studied for purposes of acceptance
and efficiency. In order to help clarify our meaning, we have also added
statements to point out that the dominant contribution to the TPC occupancy
is from out-of-time pileup.
Regarding the realistic distribution of vertices, this is obtained
from the Upsilon-triggered events (not from the zero-bias events,
which do not necessarily contain a collision and typically do not have
a reconstructed vertex, as the referee correctly interpreted). We have
added a statement to point this out, which we hope makes the meaning
clear.
- on page 13 the authors state that they have parametrized the
contribution of b-bbar to the continuum based on a PYTHIA simulation.
PYTHIA performs a leading-order + parton-shower calculation, while the
di-electron invariant mass distribution is sensitive to
next-to-leading-order effects via the angular correlation of the two
produced b quarks. Has the magnitude of this been evaluated by
comparing PYTHIA results with those of an NLO calculation?
We did not do so for this paper. This is one source of systematic
uncertainty in the continuum contribution, as discussed in the previous
remarks. For this paper, the statistics in the dielectron invariant
mass distribution are such that the variation in the shape of the b-bbar
continuum between LO and NLO would not contribute a significant
variation to the Upsilon yield. This can be seen in Fig. 12, where the
fit of the continuum allows for a removal of the b-bbar yield entirely,
as long as the Drell-Yan contribution is kept. We expect to make such
comparisons with the increased statistics available in the run 2009
data, and look forward to including NLO results in the next analysis.
- on page 13 the trigger response is emulated using a turn-on function
parametrised from the like-sign data. Has this been cross-checked with a
simulation? If yes, what was the result? If not, why?
We did not cross check the trigger response on the continuum with a
simulation, because a variation of the turn-on function parameters gave
a negligible variation in the extracted yields, so it was not deemed
necessary. We did use a simulation of the trigger response on simulated
Upsilons (see Fig. 6, dashed histogram).
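For illustration, a minimal sketch of the kind of variation performed
(the error-function shape and all parameter values below are
assumptions for this reply, not the actual parameterization used in
the analysis):

    import numpy as np
    from scipy.special import erf

    def turn_on(et, mu, sigma):
        # Illustrative trigger turn-on curve: efficiency vs. tower E_T (GeV).
        return 0.5 * (1.0 + erf((et - mu) / (np.sqrt(2.0) * sigma)))

    et = np.linspace(2.0, 10.0, 200)          # hypothetical E_T grid
    nominal = turn_on(et, mu=4.0, sigma=0.5)  # hypothetical nominal parameters
    varied = turn_on(et, mu=4.2, sigma=0.6)   # a shifted/smeared variation
    # Weighting the continuum by 'nominal' vs. 'varied' changed the
    # extracted yields negligibly in our checks.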
Finally, I would like to draw the attention of the authors to a few less
important points:
- on page 6 the authors repeat twice, practically with the same words,
that the trigger rate is dominated by di-jet events with two back-to-back
pi0 (once at the top and once near the bottom of the right-side column).
We have changed the second occurrence to avoid repetitiveness.
- all the information of Table I is also contained in Table 4; why is
Table I needed?
We agree that all the information in Table I is contained in Table IV
(except for the last row, which shows the combined efficiency for the
1S+2S+3S), so it could be removed. We have included it for convenience
only: Table I helps in the discussion of the acceptance and
efficiencies and gives the combined overall correction factors,
whereas Table IV helps in the discussion of the systematic
uncertainties of each item.
- in table IV, the second column says "%", which is true for the
individual values of various contributions to the systematic uncertainty,
but not for the combined value at the bottom, which instead is given
in picobarn.
Agreed. We have added the pb units for the Combined error at the bottom of the
table.
- in the introduction (first column, 6 lines from the bottom) the authors
write that the observation of suppression of Y would "strongly imply"
deconfinement. This is a funny expression: admitting that such an
observation would imply deconfinement (which some people may not be
prepared to do), what's the use of the adverb "strongly"? Something
either does or does not imply something else, without degrees.
We agree that the use of "imply" does not need degrees, and we also
agree that some people might not be prepared to admit that such an
observation would imply deconfinement. We do think that such an
observation would carry substantial weight, so we have rephrased that
part to "An observation of suppression of Upsilon
production in heavy-ions relative to p+p would be a strong argument
in support of Debye screening and therefore of
deconfinement"
We thank the referee for the care in reading the manuscript and for all
the suggestions.
-------------------------------------------------------------------------
Report of Referee B (second round):
-------------------------------------------------------------------------
> I think the paper is now much improved. However,
> there is still one point (# 2) on which I would like to hear an
> explanation from the authors before approving the paper, and a
> couple of points (# 6 and 7) that I suggest the authors should
> still address.
> Main issues:
> 1) (errors on subtraction of continuum contribution)
> I think the way this is now treated in the paper is adequate
> 2) (where did the subtraction error go?)
> I also agree that the best way to estimate the error is
> to perform the fit, as is now explicitly discussed in the paper.
> Still, I am surprised that the additional error introduced by
> the subtraction of the continuum appears to be negligible
> (the error is still 20). In the first version of the paper there
> was a sentence – now removed – stating that the uncertainty
> on the subtraction of the continuum contribution was one
> of the main sources of systematic uncertainty!
> -> I would at least like to hear an explanation about
> what that sentence
> meant (four lines from the bottom of page 14)
Response:
Regarding the size of the error:
The referee is correct in observing that the error before
and after subtraction is 20, but it is important to note
that the percentage error is different. Using the numbers
from the single bin counting, we get
75.3 +/- 19.7 for the N+- - 2*sqrt(N++ * N--),
i.e. the like-sign subtracted unlike-sign signal. The purely
statistical uncertainty is 19.7/75.3 = 26%. When we perform
the fit, we obtain the component of this signal that is due
to Upsilons and the component that is due to the Drell-Yan and
b-bbar continuum, but as we discussed in our previous response,
the yields have an anti-correlation, and therefore there is no
reason why the error in the Upsilon yield should be larger in
magnitude than the error of the like-sign subtracted unlike-sign
signal. However, one must note that the _percent_ error does,
in fact, increase. The fit result for the upsilon yield alone
is 59.2 +/- 19.8, so the error is indeed the same as for the
like-sign subtracted unlike-sign signal, but the percent error
is now larger: 33%. In other words, the continuum subtraction
increases the percent error in the measurement, as it should.
Note that if one had done the (incorrect) procedure of adding
errors in quadrature, using an error of 14.3 counts for the
continuum yield and an error of 19.7 counts for the
background-subtracted unlike-sign signal, the error on the
Upsilon yield would be 24 counts. This is a relative error of 40%, which
is larger than the 33% we quote. This illustrates the effect
of the anti-correlation.
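As a numeric cross-check using only the numbers quoted above (a
sketch; the percentages are rounded as in the text):

    import math

    signal, err_signal = 75.3, 19.7  # like-sign-subtracted unlike-sign counts
    upsilon, err_fit = 59.2, 19.8    # Upsilon yield and error from the fit
    err_continuum = 14.3             # error on the continuum yield

    # Naive quadrature sum, ignoring the anti-correlation:
    err_quad = math.sqrt(err_signal**2 + err_continuum**2)

    print(round(err_signal / signal, 2))  # 0.26 -> 26% before subtraction
    print(round(err_fit / upsilon, 2))    # 0.33 -> 33% from the fit
    print(round(err_quad / upsilon, 2))   # ~0.41 -> the ~40% quoted above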
Regarding the removal of the sentence about the continuum
subtraction contribution to the systematic uncertainty:
During this discussion of the continuum subtraction and
the estimation of the errors, we decided to remove the
sentence because, as we now state in the paper, the continuum
subtraction uncertainty done via the fit is currently
dominated by the statistical error bars of the data in Fig. 11,
and is therefore not a systematic uncertainty. A systematic
uncertainty in the continuum subtraction would be estimated, for
example, by studying the effect on the Upsilon yield of a change from
the leading-order PYTHIA b-bbar spectrum we use to an NLO b-bbar
spectrum, or to a different Drell-Yan parameterization.
As discussed in the response to point 6), a complete
removal of the b-bbar spectrum, a situation allowed by the fit provided
the Drell-Yan yield is increased, produces a negligible
change in the Upsilon yield. Hence, systematic variations
in the continuum do not currently produce observable changes
in the Upsilon yield. Varying the continuum yield of a given model
within the statistical error bars does, and that uncertainty is
therefore statistical. We thus removed the
sentence stating that the continuum subtraction is one
of the dominant sources of systematic uncertainty because
in the reexamination of that uncertainty triggered by the
referee's comments, we concluded that it is more appropriate
to consider it as statistical, not systematic, in nature.
We replaced that sentence, and in its stead
describe the uncertainty in the cross
section as "stat. + fit", to draw attention to the fact that
this uncertainty includes the continuum subtraction uncertainty
obtained from the fit to the data. The statements in the paper
in this respect read (page 14, left column):
It should be noted that
with the statistics of the present analysis, we find that the
allowed range of variation of the continuum yield in the fit is
still dominated by the statistical error bars of the invariant mass
distribution, and so the size of the 33% uncertainty is mainly
statistical in nature. However, we prefer to denote
the uncertainty as “stat. + fit” to clarify that it includes the estimate of the anticorrelation
between the Upsilon and continuum yields obtained
by the fitting method. A systematic uncertainty due to
the continuum subtraction can be estimated by varying
the model used to produce the continuum contribution
from b-bbar. These variations produce a negligible change in
the extracted yield with the current statistics.
We have added our response to point 6) (b-bbar correlation systematics)
to this part of the paper, as it pertains to this point.
> Other issues:
> 3) (two sigma effect)
> OK
> 4) (Trigger II cuts)
> OK
> 5) (embedding)
> OK
> 6) (b-bbar correlation)
> I suggest adding in the paper a comment along the lines of what
> you say in your reply
> 7) (trigger response simulation)
> I suggest saying so explicitly in the paper
Both responses have been added to the text of the paper.
See page 13, end of col. 1, (point 7) and page 14, second column (point 6).
> Less important points:
> 8) (repetition)
> OK
> 9) (Table I vs Table IV)
> OK…
> 10) (% in last line of Table IV)
> OK
> 11) (“strongly imply”)
> OK
We thank the referee for the care in reading the manuscript, and look forward to
converging on these last items.