HLT Review Book Page - Common Content
Updated on Tue, 2010-09-07 16:35. Originally created by calderon on 2010-08-24 16:58.
This page can be used for common content for the HLT review, Summer 2010.
Received HLT response to 2nd review (PDF)
Directions:
- Run 10 performance (Manuel)
- Run 11 goals & means: archiving, simulations, integration w/offline (Jan)
- Does the response address the concerns of the 2nd review? (Thorsten)
- Future: manpower & expansion plans (Gang)
The Executive Summary appears at the end of this page.
A) Questions/Comments on Run 10 Performance
- In the overview of the HLT in Run 2010, there is a comment that the HLT has been used as a real trigger. This should be qualified: by construction the HLT needs the TPC readout, so this use does not reduce deadtime like a level-0 trigger. Rather, the gain is to save disk space and offline processing time for HLT-triggered events. The accounting for cross-section purposes when an event gets recorded only after an HLT trigger has been raised as a concern; it would be useful to show an example of how this has been, or is being, handled.
- Figure 1: We would like to see a more detailed description of the data flow. In particular, the addition of detector information such as the HFT raises very different design issues depending on whether the data is sent to SL3 (so it can be used during tracking) or to GL3 (so it can only be used in a refit).
- Section 3 on trigger efficiency: The comparison of offline counts vs. HLT counts as a way to estimate the efficiency suffers from the fact that the comparison is not done on a track-by-track basis. Therefore, it is impossible to determine whether there is a significant number of split tracks. For example, in Table 1, in the 0.5 - 1 GeV/c rigidity bin there are 615 offline counts and 543 HLT counts. Without more information, it is not possible to know whether all 543 counts are good tracks, or whether, say, only 300 are good and the other 243 are ghost or split tracks. A minor point on the same table is that the errors shown are wrong: the efficiency cannot be greater than 100%, so quoting 99 +/- 7 % is incorrect. See the note from Thomas and Zhangbu on the treatment of errors for efficiency calculations; a sketch of a binomial treatment is also given after this list.
- We also wondered why the J/psi efficiency is so much smaller than the gamma-conversion electron efficiency. It could be just low statistics, but the difference appears significant. Any further studies on the J/psi efficiency would be useful.
- Section 4 on performance: It would be extremely useful to have figures that show how the performance (e.g. speed and deadtime) scales with the occupancy, and in particular with the luminosity. This can then be used to make projections for the future, which is one of the key issues of this review.
- Section 5 on calibration: We had several questions, as this is an important topic.
- How is it decided what is an acceptable resolution?
- How are the calibration constants archived? Are they sent to the STAR offline database so that they can be used in production and analysis? This is necessary to guarantee that any analysis relying on HLT-triggered events only uses tracks that satisfy the given trigger condition, so these tracks have to be available in the event. Is the HLT tracking re-run during production, and are the HLT tracks stored in the MuDst?
- The report says that the necessary calibrations for the TPC took one week to achieve. After that one week, did it need someone to take care of it again during the run? This is important to consider vis-a-vis manpower issues.
- Regarding the 7.4% resolution achieved for HLT tracks, it would be useful to understand which observables or studies can (or cannot) be done with this kind of resolution. For the future, are there studies that could rely on PID at the HLT level (e.g. J/psi, D mesons, Lambda_c)? What dE/dx (and possibly secondary-vertex) resolution would they need?
- It is not mentioned how much manpower and time the TOF calibrations needed.
- Regarding the BEMC calibration: what is meant by the sentence "the gains are from theoretical calculation"? We think the BEMC gain calibration is one of the procedures that needs to be done before the L0 High-Tower triggers can work, so this calibration should happen regardless of whether the HLT is running.
- Section 6 on online monitoring: In the vertex x-y comparison between online and offline, the offline vertex finder has the capability to find many vertices and then decide among them in order to reject out-of-time pile-up. How does the comparison take this into account? Is there any pile-up rejection in the HLT vertex finder? Does the HLT vertex finder also use information from the VPD? If not, should it?
- Section 7 on the Monte Carlo embedding to study the efficiency: It is written that "the tracking efficiency is defined as the ratio of the number of successfully found MC tracks by the HLT tracker to that of the embedded MC tracks". How is a track deemed to be "successfully found"? How are the numbers of split and ghost tracks studied? If these are not specifically studied, simply counting tracks is not enough to correctly determine the tracking efficiency. A sketch of a track-by-track matching approach is given after this list.
- Section 7.2: We would like to hear about any progress on understanding why a difference exists between the left and right panels of Figure 11. Is it simply that the embedded tracks have a different eta-pt range and therefore a different nhits distribution? If so, that is easy to correct. If the discrepancy persists when selecting tracks of similar pt and eta (and phi, to be complete), then this is a potentially very serious issue.
- On the physics results: Both the charge -2 and the di-electron results require dE/dx, and the di-electrons require TOF as well. With the cuts discussed in Section 8 and the resolution discussed earlier for dE/dx (the TOF resolution was not discussed), what efficiency and purity are expected for these observables? How is this expected to degrade in future, higher-luminosity runs?
- For electron ID using the BEMC: in the L2 trigger studies done for the J/psi and Upsilon L2 triggers, we concluded that clustering the BEMC towers improved the mass resolution. It probably improves the E/p resolution as well. Has this been studied with the HLT algorithms?
- Since there are 3 possible ways that an electron can be selected according to the cuts mentioned in Section 8.2, it must be made clear to anyone using these triggered events for physics that they have to reproduce the complete trigger conditions, and that this will make the analysis more complicated: each possible selection should be simulated in order to estimate its efficiency, and the more variables and conditions there are, the better the Monte Carlo has to be tuned to reproduce the data. It is not encouraging that, as mentioned in the Section 7.2 comment above, even reproducing a simple nhits distribution is not currently within our reach.
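Note on the efficiency errors raised in the Section 3 comment above: the sketch below (in Python) shows one standard binomial treatment, a Clopper-Pearson interval, which by construction never extends above 100%. It is only an illustration of the kind of treatment meant; it is not taken from the Thomas/Zhangbu note, and it uses the quoted 543/615 counts as if the HLT counts were a strict subset of the offline counts, which is exactly the assumption questioned above.

    # Sketch of a binomial (Clopper-Pearson) interval for a counting efficiency,
    # so that the quoted interval cannot extend above 100%.  Illustration only.
    from scipy.stats import beta

    def efficiency_interval(k, n, cl=0.683):
        """Efficiency k/n with a central Clopper-Pearson interval at confidence cl."""
        alpha = 1.0 - cl
        eff = float(k) / n
        lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
        return eff, lo, hi

    # Counts quoted above for the 0.5 - 1 GeV/c rigidity bin of Table 1,
    # used here purely as an example.
    eff, lo, hi = efficiency_interval(543, 615)
    print("eff = %.1f%%, 68.3%% CL interval [%.1f%%, %.1f%%]"
          % (100 * eff, 100 * lo, 100 * hi))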
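Similarly, for the embedding comment on Section 7 above, the sketch below illustrates what a track-by-track association could look like: each HLT track is assigned to the embedded MC track with which it shares the largest fraction of hits, and split and ghost tracks are then counted explicitly instead of being inferred from total counts. The 50% hit-sharing threshold and all names are assumptions for illustration, not the HLT tracker's actual bookkeeping.

    # Sketch of track-by-track matching via the shared-hit fraction.
    # All names and the 0.5 threshold are illustrative assumptions.

    def match_tracks(mc_tracks, hlt_tracks, min_shared_frac=0.5):
        """mc_tracks / hlt_tracks: dicts {track_id: set of hit ids}.
        Returns (number of MC tracks found, number of split tracks, number of ghosts)."""
        matches = {mc_id: [] for mc_id in mc_tracks}
        ghosts = []
        for hlt_id, hlt_hits in hlt_tracks.items():
            best_mc, best_frac = None, 0.0
            for mc_id, mc_hits in mc_tracks.items():
                if not hlt_hits:
                    continue
                frac = float(len(hlt_hits & mc_hits)) / len(hlt_hits)
                if frac > best_frac:
                    best_mc, best_frac = mc_id, frac
            if best_mc is not None and best_frac >= min_shared_frac:
                matches[best_mc].append(hlt_id)       # associated to an embedded track
            else:
                ghosts.append(hlt_id)                 # majority of hits not from any MC track
        n_found = sum(1 for cands in matches.values() if cands)                      # efficiency numerator
        n_split = sum(len(cands) - 1 for cands in matches.values() if len(cands) > 1)
        return n_found, n_split, len(ghosts)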
B) Follow-up questions for Run 11 plans:
- What physics trigger(s) will use the HLT in Run 11?
- Show the intended data-flow diagram for the HLT in the context of the trigger and DAQ machines.
- Show details of the GPU configuration.
- Estimate the HLT dead time for Run 11 (pp500 and heavy-ion running).
- Show the impact of different luminosity levels (25%, 50%, 75%, 100%, 125% of the projected peak luminosity for Run 11); a toy projection sketch is given after this list.
- Describe the procedure for establishing a 'good enough' on-line HLT calibration for each of the detectors below; estimate the number of days needed and name the people responsible in each case:
- TPC (expected 2x larger luminosity than previously; ramp-up during the first ~6 weeks)
- BTOW
- BTOF
- MTD
- How can an end-user simulate the HLT efficiency for the y2011 Monte Carlo geometry and Pythia events in the off-line framework (root4star on rcas6nnn)?
- How will the TPC, BTOW, and MTD calibrations used by the HLT be archived?
- How will the HLT code used on-line be archived?
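To illustrate the kind of dead-time-vs-luminosity projection requested above, here is a toy sketch in Python. It assumes the per-event HLT processing time grows linearly with luminosity (through the occupancy) while the number of tracking cores stays fixed; dead time then sets in once the required throughput exceeds the available one. Every number in it is a placeholder, not a measured Run 10 or projected Run 11 value.

    # Toy projection of HLT dead time vs. luminosity.  All inputs are
    # placeholders chosen to illustrate the requested study, not measured values.

    def hlt_dead_time(lumi_frac,
                      input_rate_hz=1000.0,   # L0-accepted event rate at 100% lumi (placeholder)
                      t_event_ms=50.0,        # per-event HLT processing time at 100% lumi (placeholder)
                      occupancy_slope=1.0,    # linear scaling of t_event with luminosity (assumption)
                      n_cores=60):            # total HLT tracking cores (placeholder)
        """Return the fractional dead time at a given fraction of the projected peak luminosity."""
        rate = input_rate_hz * lumi_frac
        t_event = t_event_ms * (1.0 + occupancy_slope * (lumi_frac - 1.0))
        required = rate * t_event / 1000.0    # core-seconds of processing needed per second
        load = required / n_cores
        return 1.0 - 1.0 / load if load > 1.0 else 0.0

    for f in (0.25, 0.50, 0.75, 1.00, 1.25):
        print("lumi %3.0f%%: dead time %.1f%%" % (100 * f, 100 * hlt_dead_time(f)))

In practice the scaling of t_event with luminosity would come from the occupancy-dependence measurements requested in the Run 10 performance section above.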
C) Responses to the 2nd HLT review report
- The performance of two physics triggers has been successfully demonstrated. It is assumed that performance is not dealt with in this part.
- It is impressive to see a calibrated TPC after only one week. However, in the current report some information is missing to fully assess the calibration quality achieved:
- a comparison of the calibration table to the "final" offline table (I assume that Fig. 3 is a comparison of the table to the offline calculation available at the time);
- Was only one calibration table used, or was it continually updated during the run?
- Is there an agreed-on workload sharing between the HLT and subsystem teams?
- On the section for future developments: The 2nd HLT review recommended a close collaboration with offline efforts to implement a new tracking/seed finder (if needed). Has there been a common activity? Why was there a decision to rewrite the SL3 code based on the old concept?
- On Section 11: Will there be sufficient human resources to do any R&D towards HFT integration? Has there been collaboration with the HFT group?
- (Section 11.4) It is good to see that common activities with other groups have helped. Something not mentioned is reduced needs in the trigger system, i.e. are there any plans to phase out L2? (At least the last time we discussed this, L2 had no benefits compared to the HLT.)
- The point behind the separate readout and tracking PCs was not the current run, but any further R&D. With the strong coupling you are very limited; decoupling will offer new possibilities (especially when the HFT comes online). For reference, here is the relevant part of the last report: Before proceeding with the installation of the SL3 tracking computers, it should be clarified if the coupling of TPC readout and HLT tracking is compatible with the envisioned further development of the HLT and its tracking algorithm, or if a separate system would be more beneficial. As a byproduct, this will also result in the recommended clarification of the HLT-DAQ interface.
- OpenCL vs. CUDA: Keeping it as an option is not very informative. There should be a decision on what to use; note that there were strong feelings in favor of an OpenCL-based system.
- Not discussed: Given the impressive number of proposed physics algorithms, the impact of a large number of fast streams on offline computing (additional storage, CPU, etc.) should be clarified in collaboration with the STAR computing group.
D) Future development
- The inclusion of the HFT is necessary but not urgent, since the completion of the HFT, with the Pixel detector available, is aimed at Run 14. The HFT hit information will be passed to the GL3 machines, while the TPC information will be passed to the SL3 machines; the committee is curious about how and where the re-fit of the tracks is carried out.
- The efficiency study via Monte Carlo seems to focus on the relative efficiency between the HLT and offline, instead of the full/absolute efficiency. The advantage is that the offline performance is well understood, and much less effort is needed than for a full efficiency study. The caveat is that the relative efficiency has to factorize safely out of the full efficiency; a minimal form of this factorization is written out after this list.
- What is the timeline for rewriting the SL3 tracking package?
- How necessary is it to add the HLT information to the MuDst? Once the MuDst is produced, all the information that the HLT provides can be obtained with the analysis code. And the MuDst production is usually separated into several physics streams, so some streams can be defined with HLT triggers, and then the HLT information does not need to be written into the MuDst.
- Manpower: Hao will graduate next year, and Xiangming could be away. They both work on tracking, which means the HLT project could be badly short of manpower in tracking. On the other hand, the development of a secondary-vertex finder on the GPU may not be so urgent.
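As a way to state explicitly the factorization assumption in the efficiency comment above (our notation, not taken from the report), the absolute HLT efficiency would be written as

    \varepsilon_{\mathrm{HLT}}^{\mathrm{abs}} \;=\;
    \varepsilon_{\mathrm{offline}} \times \varepsilon_{\mathrm{HLT|offline}} ,

where \varepsilon_{\mathrm{offline}} is the absolute offline reconstruction efficiency and \varepsilon_{\mathrm{HLT|offline}} is the relative efficiency measured against offline (e.g. in the embedding study). The factorization is only safe if the HLT selection does not bias the sample of tracks that offline can reconstruct.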
Executive Summary
- Run 10 Performance
- HLT and DAQ/Trigger interface: During Run 10, the interface between the HLT and the trigger seemed adequate. The important changes in the operation were 1) the inclusion of all BEMC tower information, requiring the HLT to access the EVB information directly instead of via L2, and 2) the inclusion of TOF information, sent directly from the DAQ machines. For the future, a design issue to be solved involves the incorporation of the HFT information: whether it should be sent to GL3 to be used for track refitting, or whether it can/should be sent to the SL3 machines to be used during the track-finding stage. The addition of the MTD is expected to be similar to the current TOF interface and DAQ communication.
- Trigger efficiency: For charge -2 events, an estimate of the trigger efficiency yielded values of 90% or above (here, a "trigger efficiency" of 100% would mean that the HLT found the same events as offline). For J/psi events, the trigger efficiency, based on looking at photonic electrons, was estimated to be 71%. NOTE before finalizing this bullet: a quantitative way to discuss whether these performance numbers are adequate is needed.
- Speed and deadtime: One test was done in which the load on the HLT CPUs was increased, and no noticeable deadtime was seen. However, it is not known at this time at what rate dead time will become apparent.