Single-Photon Data-MC Comparison
Updated on Tue, 2007-08-21 07:58. Originally created by ahoffman on 2007-08-21 07:58.
Data
Monte Carlo
L2 status tables
Updated on Thu, 2007-08-09 08:43. Originally created by kocolosk on 2007-08-09 08:23.
Instead of producing lots of 2007EmcMbStatus runs to generate pre-production status tables, I thought we should look into using the compressed tower spectra Jan saved in the l2ped monitoring program. I wrote a script to regenerate histograms from these ASCII lines (sketched at the end of this post) and then asked Matt Cervantes to run the CSMStatusUtils code on them. I also asked him to run the status code on standard histograms produced by analyzing MuDsts for a single test run (8141062). I compiled some stats on the differences between the offline status and the L2 status:
- Case 1: offline == 1, L2 != 1: 76 towers
- Case 2: offline != 1, L2 == 1: 8 towers
- Case 3: both bad, but for different reasons: 40 towers
Some comments:
- L2 status had trouble catching stuck bits (220, 1143, 1612, 2188) as well as recognizing cold towers (187, 4595). These two scenarios accounted for pretty much all of the Case 2 towers, where L2 marked a tower good and offline didn't.
- L2 marked a bunch of towers with high pedestals as "cold", since those towers have zero counts above 60. Most of the Case 1 differences are due to this problem.
- Generally very good agreement -- less than 2% of towers were different if all you care about is status==1.
We'd like to tweak things a little to see if we can catch the few differences we have. In particular, marking all those towers as cold could hurt the vertex-finding efficiency a little bit (that's all we really care about in this pass).
Detailed status codes and histograms are available at the bottom of the post.
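For reference, the regeneration step amounts to re-inflating each saved spectrum into a histogram. Here's a minimal Python/pyROOT sketch; the ASCII format it assumes (one whitespace-separated line per tower, "softId count_adc0 count_adc1 ...") is a guess and would need to be adapted to Jan's actual dump format.

    import sys
    import ROOT  # pyROOT; the output file is readable by CSMStatusUtils

    def regenerate(ascii_path, out_path="l2ped_spectra.root"):
        """Rebuild one TH1F per tower from the compressed ASCII spectra.

        Assumed (hypothetical) format: one line per tower,
            softId count_adc0 count_adc1 ... count_adcN
        """
        out = ROOT.TFile(out_path, "RECREATE")
        with open(ascii_path) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 2:
                    continue
                soft_id = int(fields[0])
                counts = [int(x) for x in fields[1:]]
                h = ROOT.TH1F("tower_%04d" % soft_id,
                              "ADC spectrum, softId %d" % soft_id,
                              len(counts), 0, len(counts))
                for i, c in enumerate(counts):
                    h.SetBinContent(i + 1, c)  # ROOT bin 1 holds ADC 0
                h.Write()
        out.Close()

    if __name__ == "__main__":
        regenerate(sys.argv[1])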
Xgrid jobmanager status report
Updated on Wed, 2008-02-06 17:32. Originally created by kocolosk on 2007-08-09 07:35.
- xgrid.pm can submit and cancel jobs successfully; I haven't tested "poll" since the server is running WS-GRAM.
- Xgrid SEG module monitors jobs successfully. Current version of Xgrid logs directly to /var/log/system.log (only readable by admin group), so there's a permissions issue to resolve there. My understanding is that the SEG module can run with elevated permissions if needed, but at the moment I'm using ACLs to explicitly allow user "globus" to read the system.log. Unfortunately the ACLs get reset when the logs are rotated nightly.
- CVS is up-to-date, but I can't promise that all of the Globus packaging stuff actually works. I ended up installing both the Perl module and the C library into my Globus installation by hand.
- Current test environment uses SimpleCA, but I've applied for a server certificate at pki1.doegrids.org as part of the STAR VO.
Important Outstanding Issues
- Streaming stdout/stderr and staging out files is a little tricky. Xgrid requires an explicit call to "xgrid -job results"; otherwise it just keeps all job info in the controller DB. I haven't yet figured out where to inject this system call in the WS-GRAM job life cycle, so I'm asking for help on gram-dev@globus.org (a sketch of the bare call follows this list).
- Need to decide how to do authentication. Xgrid offers two options at opposite ends of the spectrum: a common password for all users, or K5 tickets. Submitting a job using WS-GRAM involves a roundtrip user account -> container account -> user account via sudo, and I don't know how to forward a TGT for the user account through all of that. I looked around and saw a "pkinit" effort that promised passwordless generation of TGTs from grid certs, but it doesn't seem quite ready for primetime.
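Wherever it ends up in the life cycle, the stage-out step itself boils down to something like the snippet below. This is a placeholder sketch: the job id would have to come from the WS-GRAM job state, and I'm only showing the bare "results" invocation mentioned above, without any stdout/stderr redirection options.

    import subprocess

    def stage_out(xgrid_job_id):
        # Hypothetical hook: whatever WS-GRAM life-cycle stage we settle on
        # would need to run this to pull results out of the controller DB.
        subprocess.check_call(
            ["xgrid", "-job", "results", "-id", str(xgrid_job_id)])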
Hit density in FGT region
Updated on Tue, 2007-08-07 17:03. Originally created by fsimon on 2007-08-07 17:03.
I am using FTPC hits to study the hit density in the forward region. For this I use files from run 7145009, where the BBC coincidence rates were around 500 kHz.
Low Mass Background (round 1)
Updated on Tue, 2007-08-07 13:36. Originally created by ahoffman on 2007-08-03 14:31.
First look at some Simulations
Updated on Wed, 2007-08-08 10:42. Originally created by ahoffman on 2007-08-02 10:53.
Some Luminosity Statistics
Updated on Wed, 2007-08-01 15:45. Originally created by kocolosk on 2007-07-29 22:23.
===========================================================================================
trigId L_int_mb ε_vtx ε_mb vz vz_mb L_samp_mb < ps > L_sampled
-------------------------------------------------------------------------------------------
117001 4.794 pb^-1 0.505 0.505 0.678 0.678 6.191 μb^-1 1.0 6.191 μb^-1
137221 111.932 nb^-1 0.876 0.475 0.622 0.679 360.330 mb^-1 24894.4 8.970 nb^-1
137222 4.682 pb^-1 0.936 0.507 0.616 0.678 5.830 μb^-1 35544.9 207.237 nb^-1
137611 3.606 pb^-1 0.896 0.500 0.641 0.675 2.620 μb^-1 39894.6 104.544 nb^-1
137622 4.682 pb^-1 0.977 0.507 0.637 0.678 5.830 μb^-1 24114.8 140.597 nb^-1
===========================================================================================
where
- L_int_mb is the integrated luminosity seen by the minbias trigger for runs in which the specified trigger was active
- eps_vtx is the vertex finding efficiency for this trigger
- eps_mb is the vertex finding efficiency for mb-triggered events in runs in which the specified trigger was active
- vz is the fraction of events with a reco vertex that had fabs(vz) < 60 cm
- vz_mb is the same quantity for mb triggered events
- L_samp_mb is N_good_vertex_mb / (eps_mb * sigmaBBC)
- < ps > is sum_{runs} (ps_mb * n_mb) / sum_{runs} (ps_trig * n_trig)
- L_sampled is L_samp_mb * < ps >
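The last three definitions combine as in the sketch below. The inputs here are placeholders (the real numbers come from the spin trees run by run), and sigma_bbc is the BBC cross section used for the minbias normalization.

    def sampled_luminosity(n_good_vertex_mb, eps_mb, sigma_bbc, runs):
        """Combine the definitions above.

        runs: one (ps_mb, n_mb, ps_trig, n_trig) tuple per run, where ps_*
        are prescales and n_* are event counts (names are mine).
        """
        l_samp_mb = n_good_vertex_mb / (eps_mb * sigma_bbc)  # L_samp_mb
        ps_avg = (sum(ps_mb * n_mb for ps_mb, n_mb, _, _ in runs) /
                  sum(ps_trig * n_trig for _, _, ps_trig, n_trig in runs))  # <ps>
        return l_samp_mb * ps_avg  # L_sampled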
Data were obtained from the 289 spinTree runs without any additional QA selection. If I restrict to the 188 runs in the latest jet runlist, I get:
===========================================================================================
trigId L_int_mb ε_vtx ε_mb vz vz_mb L_samp_mb < ps > L_sampled
-------------------------------------------------------------------------------------------
117001 003.286 pb^-1 0.513 0.513 0.676 0.676 4.757 μb^-1 1.0 4.757 μb^-1
137221 065.594 nb^-1 0.957 0.518 0.625 0.683 208.913 mb^-1 24838.9 5.189 nb^-1
137222 003.220 pb^-1 0.942 0.512 0.613 0.676 4.548 μb^-1 36362.2 165.371 nb^-1
137611 002.253 pb^-1 0.905 0.510 0.636 0.669 1.649 μb^-1 40234.9 066.358 nb^-1
137622 003.220 pb^-1 0.981 0.512 0.637 0.676 4.548 μb^-1 24376.0 110.859 nb^-1
===========================================================================================
The vertex finding efficiency for minbias triggers climbs a little bit, but it's still only 51%. After doing a little digging it seems this is consistent with Jan's findings in his evaluation of PPV for 2006:
http://www.star.bnl.gov/HyperNews-star/protected/get/starspin/2820.html
Kaon-pion correlations in d+Au: multiplicity and vertex study, unlike vs like-sign in HIJING
Updated on Thu, 2007-07-19 18:03. Originally created by kopytin on 2007-07-19 18:03.
BTOW pedestal comparison -- emcOnline vs. L2ped
Updated on Tue, 2007-07-17 09:20. Originally created by kocolosk on 2007-07-17 08:47.
I wrote a script that compares each Run 7 BTOW pedestal table with a corresponding L2ped printout. The script queries the RunLog_onl database for the runnumber immediately preceding the table timestamp and then looks for a log file corresponding to that runnumber in the L2ped directory on online.star.bnl.gov. If it can't find the precise runnumber, it compares the online pedestals to the next closest L2ped runnumbers both before and after, and then prints the comparison for the better match.
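The fallback matching is just a nearest-neighbor search over the run numbers that have L2ped logs; a minimal sketch (the function and argument names are mine):

    import bisect

    def closest_l2ped_runs(target, available):
        """available: sorted list of run numbers with L2ped log files.

        Returns the exact run if we have it; otherwise the nearest runs
        before and after, and the script keeps whichever comparison
        matches the online table better.
        """
        if target in available:
            return [target]
        i = bisect.bisect_left(available, target)
        return available[max(i - 1, 0):i] + available[i:i + 1]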
Here’s a summary log printing the number of channels per table where
- the difference in pedestal value is greater than 4 ADC counts, or
- the difference in pedestal sigma is greater than 1.5 ADC counts
There are a handful of channels per run with drastic differences. For example, emcOnline will declare a zero pedestal, but l2ped will say there’s a peak at 3500 ADCs. Channels like these are obviously bad. Of greater concern are the runnumbers where several hundred channels do not match. Most of the time it’s because the widths are different, but occasionally there are runs where a few hundred towers will have pedestals that differ by more than 4 ADCs. The full verbose output (listing every tower that fails the cut) can be found here.
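The cut itself is trivial; here's a sketch of the per-table comparison, assuming each source has already been parsed into a {softId: (ped, sigma)} dict (the parsing and the database query are omitted, and the names are mine):

    PED_CUT = 4.0  # ADC counts
    SIG_CUT = 1.5  # ADC counts

    def mismatched_channels(online, l2ped):
        """Return the softIds where the two pedestal sets disagree."""
        return [sid for sid in online
                if sid in l2ped
                and (abs(online[sid][0] - l2ped[sid][0]) > PED_CUT
                     or abs(online[sid][1] - l2ped[sid][1]) > SIG_CUT)]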