S&C News 2016/02/02
Greetings everyone,
A few news items from the S&C activities:
- Last week I visited LBNL, where I reviewed and discussed our local support structure and path forward with many people. There was a renewed verbal commitment from the local group to support STAR computing (resource) needs throughout the BES years, though it is fair to say that we will have to work inventively within the reality of funding constraints. The visit was highly productive, and a general plan to restore services was discussed (new compiler, embedding, data production workflow, and the possibility of harvesting new resources and leveraging virtualization).
- The Run 14 MTD stream data production is finishing soon (we need to push new jobs in within a week).
- Our current thinking on the significant sets of jobs to pipe to the farm (bulk production) is as follows:
  - Run-15 p+p (incl. HFT)
    - st_physics, 2.03B events; a rough estimate at 21 sec/event gives ~1.5 months at full farm (see the back-of-the-envelope sketch after this list)
    - st_ssdmb / st_nossdmb, 1.16B events, ~0.9 months at full farm
    - st_mtd, 507M events, ~2 weeks at full farm
    - st_himult, 7.6M events - ...
  - Run-15 p+Au (incl. HFT), together with p+Al
    - total statistics for st_physics: 1.62B events [TBC]
    - st_physics + st_rp + st_himult: 1.86B events [TBC]
  - pA
    - The calibration for p+Al is in its final stage (TPC + TOF + BeamLine).
    - The calibrations for p+Au are fully in place (the BeamLine and/or HFT-to-TPC alignment can be tweaked/improved, but neither of these is deemed critical to starting a production).
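As referenced above, here is a minimal back-of-the-envelope sketch of how an estimate like "~1.5 months at full farm" comes together. Only the event count and sec/event figures are taken from the list; the number of concurrent farm job slots is an assumption chosen for illustration (it roughly reproduces the quoted 1.5 months) and is not an official capacity figure.

```cpp
// Back-of-the-envelope production time estimate (illustrative sketch only).
// Inputs "events" and "secPerEvent" come from the list above; "farmSlots"
// is an ASSUMED number of concurrently available job slots.
#include <cstdio>

int main() {
    const double events      = 2.03e9;   // run-15 p+p st_physics events
    const double secPerEvent = 21.0;     // reconstruction time per event (s)
    const double farmSlots   = 11000.0;  // assumed concurrent job slots
    const double secPerDay   = 86400.0;

    const double cpuSeconds = events * secPerEvent;
    const double wallDays   = cpuSeconds / (farmSlots * secPerDay);

    std::printf("CPU time : %.2e core-seconds\n", cpuSeconds);
    std::printf("Wall time: %.1f days (~%.1f months at %.0f slots)\n",
                wallDays, wallDays / 30.0, farmSlots);
    return 0;
}
```

The same formula (events x sec/event divided by slots x 86400) scales directly to the other datasets listed above.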
- A recent HFT issue (thanks to the postdocs and students, as well as Flemming, for reporting it and handling the communication with S&C) may change the plan above.
  - The issue has a large impact on HFT physics: the track/HFT-hit association efficiency changes by as much as a factor of 2 after a bug fix in PXL-specific code. We feel, however, that the conservative approach, i.e. requiring hits in 3 or more layers for the HFT to participate in track reconstruction, protects the non-HFT physics from bad side effects. Further evaluation could be done of the probability of really having 3 spurious hits; simulation showed the track quality was either better or the same (not destroyed), and the tracking refit would reject hits that destroy the track quality.
  - There have been discussions of not running an HFT-based chain for the p+p data and reserving the ssdmb stream for later; the MTD team is considering reconstruction without the HFT as well.
- The mysteries in the Run-16 HFT-TPC alignment calibrations using cosmic data, reported at the S&C meeting two weeks ago, are now understood and the calibrations are proceeding.
- We have discussed resuming the Sti/StiCA side-by-side evaluation with a focus on the “W” stream and samples. A general plan was drafted, the base infrastructure was put in place, and code was assembled for this evaluation. More news on how to organize this is coming soon.
- The Xrootd issues reported by users (a large number of jobs ending with errors) have, I believe, been identified, and the first fix showed a positive outcome (more tweaking is coming).
- Distributed data storage is, however, reaching a maxed-out state. We have begun discussing with the PWGC what to keep and what not to keep, but we will soon run out of space to save further productions. The current disk occupancy was documented in "Xrootd datasets, 201602 check". There are clearly only two paths forward for this pending space/storage crisis: we either accelerate the use of the picoDST as the mainstream new analysis format (discussion renewed with the PWGC) or acquire more storage. Both are being looked into.
- We have had regular meetings with the TOF folks to review the software status (and embedding use cases). More on this soon, but our general path is to put together a document that summarizes the needs (on the way) and to agree on how to approach the embedding problem.
- The activities related to all aspects of the offline software continue under the careful documentation of Jason. The notes from the latest meeting are available (on an access-protected page) and discussed:
  - Framework evolution to support misalignment of the ideal geometry in simulation and reconstruction continues to move forward. A first working version of the software has been tested, and we are now discussing (a) finalizing the transformation convention and (b) the test cases (involving the HFT and other detectors). A generic illustration of such a misalignment transformation is sketched after this list.
  - The evaluation of vertex-finding methods continues, with an eye toward making sure all evaluations are fair and balanced (i.e. done within the same set of criteria and principles). For example, Dmitri added a quick fitter to PPV so it can be better compared to KFV.
  - We are looking into methodologies for accounting for materials when projecting tracks to points outside of the TPC, e.g. the TOF, GMT, and BSMD (only helical projections have been available until now).
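To make the transformation-convention discussion a bit more concrete, here is a minimal, generic sketch of what a geometry misalignment does: a hit in sensor-local coordinates is mapped to global coordinates through the ideal placement transform composed with a small correction (rotation plus translation). This is not the framework's actual code, and the order of composition shown is exactly the kind of convention choice still being finalized; all names and numbers below are hypothetical.

```cpp
// Generic illustration of a misaligned geometry transform (not STAR framework code).
// global = delta( ideal(local) ), where "delta" is a small rotation + translation.
#include <cstdio>
#include <cmath>

struct Vec3 { double x, y, z; };

struct Transform {
    double R[3][3];  // rotation matrix
    Vec3   T;        // translation

    Vec3 apply(const Vec3& p) const {
        return { R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + T.x,
                 R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + T.y,
                 R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + T.z };
    }
};

// Small rotation about z by angle a (radians) plus a translation t.
Transform smallMisalignment(double a, Vec3 t) {
    return { { { std::cos(a), -std::sin(a), 0.0 },
               { std::sin(a),  std::cos(a), 0.0 },
               { 0.0,          0.0,         1.0 } }, t };
}

int main() {
    // Ideal placement of a hypothetical sensor: identity rotation, 2.8 cm shift in x.
    Transform ideal = { { {1,0,0}, {0,1,0}, {0,0,1} }, {2.8, 0.0, 0.0} };
    // Hypothetical misalignment: 1 mrad rotation about z and a 50 micron shift in y.
    Transform delta = smallMisalignment(1e-3, {0.0, 0.005, 0.0});  // cm

    Vec3 localHit = {0.1, 0.0, 5.0};                     // hit in sensor-local frame (cm)
    Vec3 global   = delta.apply(ideal.apply(localHit));  // ideal placement, then correction

    std::printf("global hit: (%.5f, %.5f, %.5f) cm\n", global.x, global.y, global.z);
    return 0;
}
```

Note that applying the correction on the global side (as above) versus the local side of the ideal transform gives different results for the same parameters, which is why pinning down the convention matters before the test cases can be compared meaningfully.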
- We have revamped the real-time event display idea, with a new interface and tool available at http://online.star.bnl.gov/aggregator/livedisplay/ . It is entirely Web based; thanks to Dmitry for this, as we were asked whether such a facility could be provided for the upcoming National Lab Day (April 20th).
- As noted at the last Collaboration Meeting, there is a vacancy in the position of Embedding deputy: Kefeng graduated, and we are thankful for his past work. If you know of a good candidate, please speak up.
- There are lots of activities online in support of the run. Notably, apart from the daily troubleshooting, the evaluation of file systems with automatic compression behind the scenes is progressing. Provided the impact on resources is reasonable (TBD), this could be a good addition for teams like the Trigger team that need large storage (but have no money to buy any) and have an access pattern of “save once, may read later”. We are also finishing the CEPH evaluation on a secondary CEPH cluster and will hopefully soon expand its usage as a centralized file system for online users.
- Also online, the normalization of accounts and UID/GID (not yet complete) opens the avenue to, and goes hand in hand with, providing centralized and hierarchical services to all teams without breaking the security principles of the online enclave. Much work remains to complete this, but if successful, the harvesting of online resources outside of run periods, coupled with the presence of central storage, would finally be within reach.
Jerome
(with editing review/contributions from Gene & Jason and help from many in the team)