Computing and Calibration
Producing the requested data for physics will involve significant use of data stores, considerable processing time, and time spent understanding and completing calibrations. When a dataset continues a collision species and energy from a previous year (with STAR's detector configured similarly), first-physics calibrations for the highest-priority dataset typically require on the order of two months after that year's data-taking concludes; each subsequent dataset calibration for that year needs roughly another month. The proposed 500 GeV p+p and 27 GeV (and potentially 62 GeV) Au+Au datasets will be such continuations, adding to the data acquired in 2013 and 2011 (2010), respectively, and repeating the detector environment of no HFT and no iTPC. Understanding the features of new running conditions could extend calibrations of the isobar datasets by a few additional months, and it is important to keep in mind that unforeseen peculiarities of any given dataset can further delay delivery.
Table X presents estimates of the DAQ and data summary ("MuDst") dataset sizes for the proposed colliding species, along with projected single-pass production times using 100% of STAR's 2016 allocation of the RACF computing farm. It is critical to emphasize that these numbers are tied to the proposed event goals and would scale with the events actually acquired. These productions will also need to balance computing resource usage against prior-year datasets as well as ongoing calibrations and run support. STAR may choose, as one imaginable example, to produce the Run 16 200 GeV Au+Au data concurrently with the Run 17 500 GeV p+p data in a 60%-40% apportionment, which would stretch the latter production to a year or more, as illustrated in the sketch below.
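As a rough illustration of this scaling (a minimal sketch: the linear-scaling assumption and function name are ours, and the 5.0-month single-pass figure is taken from Table X below):

    def production_months(base_months, farm_fraction=1.0, event_scale=1.0):
        """Scale a single-pass production estimate.

        base_months   -- single-pass time at 100% of STAR's 2016 RACF allocation (Table X)
        farm_fraction -- fraction of that allocation devoted to this production
        event_scale   -- (actual events acquired) / (proposed event goal)
        """
        return base_months * event_scale / farm_fraction

    # Example from the text: Run 17 500 GeV p+p (5.0 months single-pass)
    # produced on 40% of the farm while Run 16 Au+Au uses the other 60%.
    print(production_months(5.0, farm_fraction=0.4))  # -> 12.5 months, i.e. "a year or more"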
For the 500 GeV p+p dataset, we expect 360 pb^-1 to require recording approximately 3.3 billion events for processing.
Data set        Events   DAQ size [PB]   MuDst size [PB]   Production time [months]
500 GeV p+p     3.3B     3.20            1.75              5.0
62 GeV Au+Au    1.5B     0.81            0.54              1.0
27 GeV Au+Au    0.5B     0.24            0.12              0.5
200 GeV Ru+Ru   1.2B     0.88            0.65              1.0
200 GeV Zr+Zr   1.2B     0.88            0.65              1.0
Totals                   6.01            3.71
Caption: Table X: Computing resource estimates for production of the proposed Run 17 and Run 18 datasets (see text for details).
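As a back-of-the-envelope check of the per-event footprint implied by Table X (a minimal sketch assuming the proposed 3.3-billion-event goal for the 500 GeV p+p sample; the simple division is ours):

    def mb_per_event(size_pb, n_events):
        """Convert a dataset size in PB to MB per event (1 PB = 1e9 MB)."""
        return size_pb * 1e9 / n_events

    events = 3.3e9  # proposed 500 GeV p+p event goal (Table X)
    print(f"DAQ:   {mb_per_event(3.20, events):.2f} MB/event")   # ~0.97 MB/event
    print(f"MuDst: {mb_per_event(1.75, events):.2f} MB/event")   # ~0.53 MB/event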