HLT Review Page - Common Content

This page can be used for common content for the HLT review, Summer 2010.

 

Received HLT response to 2nd review (PDF)

Directions:

  1. Run 10 performance (Manuel)
  2. Run 11 goals & means: archiving, simulations, integration w/offline (Jan)
  3. Is the response addressing the concerns of the 2nd review? (Thorsten)
  4. Future: manpower & expansion plans (Gang)

A) Questions/Comments on Run 10 Performance

  1. In the overview of the HLT in the 2010 run, there is a comment that the HLT has been used as a real trigger.  This should be qualified: by construction the HLT needs the TPC readout, and therefore this use does not reduce dead time the way a level-0 trigger does.  Rather, the gain here is to save disk space and offline processing time for the HLT-triggered events.  The accounting for cross-section purposes when an event gets recorded only after an HLT trigger is a topic that has been raised as a concern.  It would be useful to exemplify how this has been/is being handled.
  2. Figure 1: We would like to see a more detailed description of the data flow. In particular, adding detector information such as the HFT raises very different design issues depending on whether the data is sent to SL3 (so it can be used for tracking) or to GL3 (so it can only be used for refitting).
  3. Section 3 on trigger efficiency: The comparison of offline counts vs. HLT counts as a way to estimate the efficiency suffers from the fact that the comparison is not done on a track-by-track basis.  Therefore, it is impossible to determine whether there is a significant number of split tracks.  For example, in Table 1, in the 0.5 - 1 GeV/c rigidity bin, there are 615 offline counts and 543 HLT counts.  Without more information, it is not possible to know whether all 543 counts are good, or whether, say, only 300 tracks are good and the other 243 are ghost or split tracks.  A minor point on the same table is that the errors shown are wrong: the efficiency cannot be greater than 100%, so quoting 99 +/- 7% is incorrect.  See the note from Thomas and Zhangbu on the treatment of errors for efficiency calculations (a sketch of such a treatment follows this list).
  4. We also wondered why the J/psi efficiency is so much smaller than the gamma-conversion electron efficiency.  It could be just low statistics, but the difference appears significant.  Any further studies of the J/psi efficiency would be useful.
  5. Section 4 on performance: it would be extremely useful to have figures that show how the performance (processing speed, dead time, etc.) scales with the occupancy, and in particular with the luminosity.  This can then be used to make projections for the future, which is one of the key issues of this review.
  6. Section 5 on calibration: We had several questions, as this is an important topic.
    1. How is it decided what is an acceptable resolution?
    2. How are the calibration constants archived? Are they sent to the STAR offline database so that they can be used in production and analysis? This is a necessary condition to guarantee that any analysis relying on HLT-triggered events uses only those tracks that satisfy the given trigger condition, so these tracks have to be available in the event.  Is the HLT tracking re-run during production and are the HLT tracks stored in the MuDst?
    3. The report says that the necessary calibrations for the TPC took one week to achieve.  After that one week, did it need someone to take care of it again during the run?  This is important to consider vis-a-vis manpower issues.
    4. Regarding the 7.4% resolution achieved for HLT tracks, it would be useful to understand which observables or studies can (or cannot) be done with this kind of resolution.  For the future, are there studies that could rely on PID at the HLT level (e.g. J/psi, D mesons, Lambda_c)? What dE/dx (and possibly secondary-vertex) resolution would they need?
    5. It is not mentioned how much manpower and time the TOF calibrations needed.
    6. Regarding the BEMC calibration: what is meant by the sentence "the gains are from theoretical calculation"?  We think the BEMC gain calibration is one of the procedures that needs to be done before the L0 High-Tower triggers can work, so this calibration should happen regardless of whether the HLT is running.
  7. Section 6 on online monitoring: in the vertex x-y comparison between online and offline, the offline vertex finder has the capability to find many vertices and then decide among them in order to reject out-of-time pile-up.  How does the comparison take this into account?  Is there any pile-up rejection in the HLT vertex finder?  Does the HLT vertex finder also use information from the VPD? If not, should it?
  8. Section 7 on the Monte Carlo embedding to study the efficiency: It is written that "the tracking efficiency is defined as the ratio of the number of successfully found MC tracks by the HLT tracker to that of the embedded MC tracks".  How is a track deemed to be "successfully found"?  How are the numbers of split and ghost tracks studied?  If these are not specifically studied, simply counting tracks is not enough to determine the tracking efficiency correctly (a hit-based matching procedure is sketched after this list).
  9. Section 7.2: we would like to hear about any progress on understanding why a difference exists between the left and right panels of Figure 11.  Is it simply that the embedded tracks have a different eta-pt range and therefore a different nhits distribution?  If so, that is easy to correct.  If the discrepancy persists when selecting tracks of similar pt and eta (and phi, to be complete), then this is a potentially very serious issue.
  10. On the physics results: both the charge -2 and the di-electron results require dE/dx, and the di-electrons require TOF as well.  With the cuts discussed in section 8 and the dE/dx resolution discussed earlier (the TOF resolution was not discussed), what efficiency and purity are expected for these observables?  How is this expected to degrade in future higher-luminosity runs?
  11. For electron ID using the BEMC: in the L2 trigger studies done for the J/psi and Upsilon L2 triggers, we concluded that clustering the BEMC towers improved the mass resolution.  It probably improves E/p as well.  Has this been studied with the HLT algorithms?
  12. Since there are 3 possible ways that an electron can be selected according to the cuts mentioned in section 8.2, it must be made clear to anyone using these triggered events for physics that they have to reproduce the complete trigger conditions, and that this will make the analysis more complicated: each possible selection should be simulated in order to estimate its efficiency, and the more variables and conditions there are, the better the Monte Carlo has to be tuned to reproduce the data. (It is not encouraging that, as mentioned in point 9 above, even reproducing a simple nhits distribution is still an open issue.)
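Regarding the error treatment in point 3: one standard prescription that keeps the efficiency consistent with the constraint 0 <= epsilon <= 1 is the Bayesian binomial treatment with a flat prior (the note from Thomas and Zhangbu should be consulted for the recommended STAR procedure; the formulas below are a sketch of the generic treatment, not a quote from that note). For k HLT-matched tracks out of n offline tracks:

    \langle\epsilon\rangle = \frac{k+1}{n+2}, \qquad
    V(\epsilon) = \frac{(k+1)(k+2)}{(n+2)(n+3)} - \left(\frac{k+1}{n+2}\right)^2

The posterior is bounded to [0, 1], so credible intervals derived from it never extend above 100%, and quotes like 99 +/- 7% cannot occur.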
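Regarding points 3 and 8: below is a minimal sketch of a hit-based, track-by-track matching that makes split and ghost tracks explicitly countable. The Track structure, the 50% shared-hit threshold, and the toy data in main() are illustrative assumptions, not the actual HLT or embedding data structures:

    #include <cstdio>
    #include <set>
    #include <vector>

    // Hypothetical minimal track record: only the IDs of the hits on the track.
    struct Track { std::vector<int> hitIds; };

    // Fraction of the reco track's hits that belong to the given MC track.
    double sharedHitFraction(const Track& reco, const Track& mc) {
        std::set<int> mcHits(mc.hitIds.begin(), mc.hitIds.end());
        int shared = 0;
        for (int id : reco.hitIds)
            if (mcHits.count(id)) ++shared;
        return reco.hitIds.empty() ? 0.0 : double(shared) / reco.hitIds.size();
    }

    // Classify reconstructed tracks against embedded MC tracks:
    //  - a reco track is matched to the MC track that provides the largest
    //    fraction of its hits, if that fraction is >= minFrac;
    //  - an MC track with at least one match is "found"; extra matches to the
    //    same MC track are counted as "split";
    //  - reco tracks matched to no MC track are "ghosts".
    void classify(const std::vector<Track>& reco, const std::vector<Track>& mc,
                  double minFrac = 0.5) {
        std::vector<int> nMatches(mc.size(), 0);
        int nGhost = 0;
        for (const Track& r : reco) {
            int best = -1;
            double bestFrac = 0.0;
            for (size_t i = 0; i < mc.size(); ++i) {
                double f = sharedHitFraction(r, mc[i]);
                if (f > bestFrac) { bestFrac = f; best = int(i); }
            }
            if (best >= 0 && bestFrac >= minFrac) ++nMatches[best];
            else ++nGhost;
        }
        int nFound = 0, nSplit = 0;
        for (int m : nMatches) { if (m > 0) ++nFound; if (m > 1) nSplit += m - 1; }
        std::printf("MC tracks found: %d / %zu, split extras: %d, ghosts: %d\n",
                    nFound, mc.size(), nSplit, nGhost);
    }

    int main() {
        std::vector<Track> mc   = { {{1, 2, 3, 4, 5, 6}} };
        std::vector<Track> reco = { {{1, 2, 3}}, {{4, 5, 6}}, {{7, 8, 9}} };
        classify(reco, mc);   // prints: found 1/1, 1 split extra, 1 ghost
        return 0;
    }

Counting "found" MC tracks this way, rather than comparing inclusive offline vs. HLT yields, is what would distinguish the all-543-good scenario from the 300-good-plus-243-ghost/split scenario in Table 1.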

 

B) Follow-up questions for Run 11 plans:

  1. What physics trigger(s) will use the HLT in run 11?
  2. Show the intended data-flow diagram for the HLT in the context of the trigger and DAQ machines
    1. show details of the GPU configuration
  3. Estimate the HLT dead time for run 11 (pp500 and heavy-ion running); a simple dead-time model is sketched after this list
    1. Show the impact of different luminosity levels (25%, 50%, 75%, 100%, 125% of the projected peak luminosity for run 11)
  4. Describe the procedure for establishing a 'good enough' on-line HLT calibration for each detector below. Estimate the number of days needed and name the people responsible for the on-line HLT calibration of each detector.
    1. TPC (expected 2x higher luminosity than before; ramp-up during the first ~6 weeks)
    2. BTOW
    3. BTOF
    4. MTD
  5. How can an end-user simulate the HLT efficiency for the y2011 Monte Carlo geometry & Pythia events in the off-line framework (root4star on rcas6nnn)?
  6. How will the TPC, BTOW, and MTD calibrations used by the HLT be archived?
  7. How will the HLT code used on-line be archived?
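For point 3, a minimal sketch of how the dead-time projection could be presented, assuming a non-paralyzable dead-time model and an HLT input rate that scales linearly with luminosity (both assumptions would need to be validated against run-10 data):

    f_{dead}(L) = \frac{R(L)\,\tau}{1 + R(L)\,\tau}, \qquad R(L) = \sigma_{trig}\,L

where \tau is the mean HLT processing time per event and \sigma_{trig} is the cross section of the input trigger mix. Evaluating f_{dead} at 25%, 50%, 75%, 100%, and 125% of the projected run-11 peak luminosity then gives the requested scan.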

 

C) Responses to the 2nd HLT review report

  1. The performance of the two physics triggers has been successfully demonstrated. It is assumed that performance is not dealt with in this part.
  2. It is impressive to see a calibrated TPC after only one week. However, some information is missing from the current report to fully assess the calibration quality achieved:
    1. a comparison of the calibration table to the "final" offline table (we assume that Fig. 3 is a comparison of the table to the offline calculation available at that time).
    2. Was only one calibration table used, or was it continuously updated during the run?
    3. Is there an agreed-on workload sharing between the HLT and subsystem teams?
  3. On the section on future developments: The 2nd HLT review recommended a close collaboration with the offline efforts to implement a new tracking/seed finder (if needed). Has there been any common activity? Why was there a decision to rewrite the SL3 code based on the old concept?
  4. On section 11: Will there be sufficient human resources to do any R&D towards HFT integration? Has there been collaboration with the HFT group?
  5. (Section 11.4) It is good to see that common activities with other groups have helped. Something not mentioned is reduced needs in the trigger system, i.e. are there any plans to make L2 obsolete? (At least the last time we discussed this, L2 had no benefits compared to the HLT.)
  6. The point behind the separate readout and tracking PCs was not for the current run, but for any further R&D. With the strong coupling you are very limited; decoupling will offer new possibilities (especially when the HFT comes online). For reference, here is the relevant part of the last report: "Before proceeding with the installation of the SL3 tracking computers, it should be clarified if the coupling of TPC readout and HLT tracking is compatible with the envisioned further development of the HLT and its tracking algorithm or if a separate system would be more beneficial. As a byproduct, this will also result in the recommended clarification of the HLT-DAQ interface."
  7. OpenCL vs. CUDA: Keeping it as an option is not very informative. There should be a decision on what to use - note that there were strong feelings in favor of an OpenCL-based system.
  8. Not discussed: Given the impressive number of proposed physics algorithms, the impact of a large number of fast streams on offline computing (additional storage, CPU, etc.) should be clarified in collaboration with the STAR computing group (a rough storage estimate is sketched after this list).
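On point 8, a back-of-envelope form for the storage part, with purely illustrative numbers (the actual rates, event sizes, and live time must come from the HLT and computing groups):

    S \approx r_{stream} \times s_{event} \times T_{live}

e.g. a 100 Hz fast stream at 1 MB/event over 10^6 live seconds is already about 100 TB of raw storage, before any offline production copies. A similar estimate should be made for the additional CPU.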

 

D) Future development

  1. The inclusion of the HFT is very necessary, since the HFT prototype is planned to be installed in run 12 and completion of the HFT is aimed at run 14, with the Pixel detector available. The study of including the HFT in the tracking should quantitatively evaluate the requirements on the HLT data, such as the pointing resolution, the secondary-vertex resolution, and so on. This study needs to get started soon.
  2. The efficiency study via Monte Carlo seems to focus on the relative efficiency between the HLT and offline data, instead of the full/absolute efficiency. The advantage is that the offline performance is well understood, and much less effort is needed than for a full efficiency study. The caveat is that the relative efficiency has to be safely factorized out of the full efficiency. We suggest that some simple simulation work be carried out to demonstrate the factorization (see the sketch after this list).
  3. Rewriting the SL3 tracking package is now given a lower priority.
  4. It is necessary to add the HLT information to the MuDst, so that the HLT conditions can be reproduced offline.
  5. Manpower: Hao will graduate next year, and Xiangming could be away. They both work on tracking, which means the HLT project could be badly short of manpower in tracking. More attention should be paid to this area.
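On point 2, the factorization assumption can be written explicitly. With the relative efficiency measured on data or embedding as the fraction of offline tracks also found by the HLT, the suggested factorization is

    \epsilon_{HLT}^{abs} = \epsilon_{rel} \times \epsilon_{offline}^{abs}, \qquad
    \epsilon_{rel} = \frac{N_{HLT \wedge offline}}{N_{offline}}

which is exact only if every HLT-found track is also found offline (or if HLT-only tracks are negligible). A simple toy simulation that embeds tracks and counts N_{HLT \wedge offline}, N_{offline}, and N_{HLT} separately would demonstrate whether this holds.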