
First Look at Electron Jets

I've been working with Priscilla on adding her electron candidates to the spin trees.  spinTrees with electrons are now available, and we made a first set of plots investigating the properties of jets with identified electrons.  We analyzed BJP1 and L2gamma triggers and plotted the p_T spectra, track multiplicity, and neutral energy fraction for all jets and for jets associated with an electron candidate (deltaR < 0.7).  Full-size plots are available by clicking on each image.

Electron Cuts

  • Momentum-dependent global dE/dx cut
  • nFitPoints >= 15
  • nDedxPoints >= 10
  • nHits / nPoss >= 0.52
  • track Chi2 < 4
  • DCAGlobal < 2
  • NEtaStrips > 1 && NPhiStrips > 1
  • Momentum-dependent primary dE/dx cut
  • 0.3 < P/E < 1.5
  • -0.01287 < PhiDist < 0.01345
  • ZDist in [-5.47,1.796] (West) or [-2.706,5.322] (East)
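
To make the selection concrete, here is a minimal C++ sketch of how these cuts could be strung together.  This is not the actual spinTree interface: the ElectronCandidate struct, its field names, and the isWest flag are placeholders, and the momentum-dependent dE/dx bands are left as stubs because their parameterizations are not listed here.

    // Hypothetical flat structure holding the track/tower quantities used by the
    // cuts; the real spinTree branch names will differ.
    struct ElectronCandidate {
        double pGlobal, dEdxGlobal;     // global track momentum and dE/dx
        double pPrimary, dEdxPrimary;   // primary track momentum and dE/dx
        int    nFitPoints, nDedxPoints, nHits, nPoss;
        double chi2, dcaGlobal;
        int    nEtaStrips, nPhiStrips;  // SMD cluster sizes
        double p, towerE;               // momentum and matched tower energy for P/E
        double phiDist, zDist;          // tower/SMD position residuals
        bool   isWest;                  // which half of the barrel the track points to
    };

    // Momentum-dependent dE/dx windows: placeholders only, the real
    // parameterizations are not reproduced in this sketch.
    bool inGlobalDedxBand(double /*p*/, double /*dEdx*/)  { return true; }
    bool inPrimaryDedxBand(double /*p*/, double /*dEdx*/) { return true; }

    bool passesElectronCuts(const ElectronCandidate& t) {
        if (!inGlobalDedxBand(t.pGlobal, t.dEdxGlobal))      return false;
        if (t.nFitPoints  < 15)                              return false;
        if (t.nDedxPoints < 10)                              return false;
        if (double(t.nHits) / t.nPoss < 0.52)                return false;
        if (t.chi2 >= 4.0)                                   return false;
        if (t.dcaGlobal >= 2.0)                              return false;
        if (t.nEtaStrips <= 1 || t.nPhiStrips <= 1)          return false;
        if (!inPrimaryDedxBand(t.pPrimary, t.dEdxPrimary))   return false;
        const double pOverE = t.p / t.towerE;
        if (pOverE <= 0.3 || pOverE >= 1.5)                  return false;
        if (t.phiDist <= -0.01287 || t.phiDist >= 0.01345)   return false;
        if (t.isWest) { if (t.zDist < -5.47  || t.zDist > 1.796) return false; }
        else          { if (t.zDist < -2.706 || t.zDist > 5.322) return false; }
        return true;
    }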

Jet Cuts

  • 0 < R_T < 0.99
  • -0.7 < detEta < 0.9
  • Jet points at fired jet patch (BJP1 only)
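
A similar sketch for the jet side, including the deltaR < 0.7 electron association mentioned above.  The Jet struct and its fields (rt, detEta, firedPatch, ...) are again placeholders rather than the real tree branches.

    #include <cmath>

    // Illustrative jet summary; field names are not the actual tree branches.
    struct Jet {
        double rt;          // neutral energy fraction R_T
        double detEta;      // detector eta
        double eta, phi;    // physics eta/phi used for the deltaR match
        bool   firedPatch;  // true if the jet points at a fired jet patch
    };

    bool passesJetCuts(const Jet& j, bool isBJP1) {
        if (j.rt <= 0.0 || j.rt >= 0.99)         return false;
        if (j.detEta <= -0.7 || j.detEta >= 0.9) return false;
        if (isBJP1 && !j.firedPatch)             return false;  // patch requirement for BJP1 only
        return true;
    }

    // Association used above: a jet counts as an "electron jet" if an identified
    // electron candidate lies within deltaR < 0.7 of the jet axis.
    bool matchesElectron(const Jet& j, double eEta, double ePhi) {
        const double kPi = 3.14159265358979323846;
        double dphi = std::fabs(j.phi - ePhi);
        if (dphi > kPi) dphi = 2.0 * kPi - dphi;   // wrap into [0, pi]
        const double deta = j.eta - eEta;
        return std::sqrt(deta * deta + dphi * dphi) < 0.7;
    }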

[Plots: R_T (neutral energy fraction), track multiplicity, and jet p_T vs. electron p_T for the L2gamma and BJP1 samples]

Conclusions

  • The neutral energy fraction for L2gamma electron jets peaks at the same point as the ridge in the inclusive jet plot.
  • There are a significant number of 1-track jets in the L2gamma sample.
  • In the plot of jet p_T vs. electron p_T, the maximum of the distribution occurs at an electron p_T of ~5.5 GeV and a jet p_T of ~11 GeV.  This is in stark contrast to the BJP1 distribution.

Kaon-pion correlations in d+Au: varying the momentum cut + historical summary

In this update, a set of results with Adam's suggested momentum cut is added to the pool of analysis results for d+Au.


BTOW pedestal comparison -- emcOnline vs. L2ped

I wrote a script that compares each Run 7 BTOW pedestal table with a corresponding L2ped printout. The script queries the RunLog_onl database for the runnumber immediately preceding the table timestamp and then looks for a log file corresponding to that runnumber in the L2ped directory on online.star.bnl.gov. If it can't find the precise runnumber, it compares the online pedestals to the next closest L2ped runnumbers both before and after, and then prints the comparison for the better match.
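
The fallback matching can be summarized roughly as below.  This is only a sketch under assumed interfaces: the l2pedLogs map and the countMismatches function stand in for the script's database query and channel-by-channel comparison, which are not reproduced here.

    #include <map>
    #include <string>

    // Sketch of the fallback logic only: if the exact run number has no L2ped log,
    // compare against the nearest runs before and after and keep whichever
    // comparison disagrees in fewer channels.
    int pickL2pedRun(int run, const std::map<int, std::string>& l2pedLogs,
                     int (*countMismatches)(int onlRun, const std::string& logFile)) {
        if (l2pedLogs.empty()) return -1;

        std::map<int, std::string>::const_iterator exact = l2pedLogs.find(run);
        if (exact != l2pedLogs.end()) return run;                 // precise runnumber exists

        std::map<int, std::string>::const_iterator after = l2pedLogs.lower_bound(run);
        if (after == l2pedLogs.begin()) return after->first;      // nothing earlier available
        if (after == l2pedLogs.end())   return (--after)->first;  // nothing later available

        std::map<int, std::string>::const_iterator before = after;
        --before;
        const int nBefore = countMismatches(run, before->second); // closest run before
        const int nAfter  = countMismatches(run, after->second);  // closest run after
        return (nBefore <= nAfter) ? before->first : after->first; // keep the better match
    }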

Here’s a summary log printing the number of channels per table where

  1. the difference in pedestal value is greater than 4, or
  2. the difference in pedestal sigma is greater than 1.5
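
In code, the per-channel test behind those counts amounts to something like the snippet below; the Ped struct is illustrative, but the thresholds are the ones quoted above.

    #include <cmath>

    // A tower is counted in the summary if either its pedestal value or its
    // pedestal width disagrees beyond the quoted tolerances.
    struct Ped { double value, sigma; };

    bool channelDiffers(const Ped& onl, const Ped& l2) {
        return std::fabs(onl.value - l2.value) > 4.0 ||
               std::fabs(onl.sigma - l2.sigma) > 1.5;
    }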

There are a handful of channels per run with drastic differences. For example, emcOnline will declare a zero pedestal, but l2ped will say there’s a peak at 3500 ADCs. Channels like these are obviously bad. Of greater concern are the runnumbers where several hundred channels do not match. Most of the time it’s because the widths are different, but occasionally there are runs where a few hundred towers will have pedestals that differ by more than 4 ADCs. The full verbose output (listing every tower that fails the cut) can be found here.

Some Luminosity Statistics

================================================================================================
trigId   L_int_mb        ε_vtx   ε_mb    vz      vz_mb   L_samp_mb       < ps >    L_sampled
------------------------------------------------------------------------------------------------
117001   4.794 pb^-1     0.505   0.505   0.678   0.678   6.191 μb^-1     1.0       6.191 μb^-1
137221   111.932 nb^-1   0.876   0.475   0.622   0.679   360.330 mb^-1   24894.4   8.970 nb^-1
137222   4.682 pb^-1     0.936   0.507   0.616   0.678   5.830 μb^-1     35544.9   207.237 nb^-1
137611   3.606 pb^-1     0.896   0.500   0.641   0.675   2.620 μb^-1     39894.6   104.544 nb^-1
137622   4.682 pb^-1     0.977   0.507   0.637   0.678   5.830 μb^-1     24114.8   140.597 nb^-1
================================================================================================

where

  • L_int_mb is the integrated luminosity seen by the minbias trigger for runs in which the specified trigger was active
  • ε_vtx is the vertex finding efficiency for this trigger
  • ε_mb is the vertex finding efficiency for mb-triggered events in runs in which the specified trigger was active
  • vz is the fraction of events with a reconstructed vertex that had fabs(vz) < 60 cm
  • vz_mb is the same quantity for mb-triggered events
  • L_samp_mb is N_good_vertex_mb / (ε_mb * sigmaBBC)
  • < ps > is sum_{runs} (ps_mb * n_mb) / sum_{runs} (ps_trig * n_trig)
  • L_sampled is L_samp_mb * < ps >
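
As a quick sanity check of the bookkeeping, the last column follows from the previous two: for trigger 137222, L_sampled = 5.830 μb^-1 × 35544.9 ≈ 207.2 nb^-1, matching the table.  A trivial C++ snippet doing the same arithmetic:

    #include <cstdio>

    // Numerical cross-check of the 137222 row using the definitions above.
    // L_samp_mb and < ps > are read off the table; everything is in nb^-1.
    int main() {
        const double lSampMb  = 5.830e-3;         // 5.830 ub^-1 = 5.830e-3 nb^-1
        const double avgPs    = 35544.9;          // < ps > for trigger 137222
        const double lSampled = lSampMb * avgPs;  // L_sampled = L_samp_mb * < ps >
        std::printf("L_sampled = %.3f nb^-1\n", lSampled);  // ~207.2 nb^-1, as in the table
        return 0;
    }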

Data were obtained from the 289 spinTree runs without any additional QA selection.  If I restrict to the 188 runs in the latest jet runlist, I get:

================================================================================================
trigId   L_int_mb        ε_vtx   ε_mb    vz      vz_mb   L_samp_mb       < ps >    L_sampled
------------------------------------------------------------------------------------------------
117001   3.286 pb^-1     0.513   0.513   0.676   0.676   4.757 μb^-1     1.0       4.757 μb^-1
137221   65.594 nb^-1    0.957   0.518   0.625   0.683   208.913 mb^-1   24838.9   5.189 nb^-1
137222   3.220 pb^-1     0.942   0.512   0.613   0.676   4.548 μb^-1     36362.2   165.371 nb^-1
137611   2.253 pb^-1     0.905   0.510   0.636   0.669   1.649 μb^-1     40234.9   66.358 nb^-1
137622   3.220 pb^-1     0.981   0.512   0.637   0.676   4.548 μb^-1     24376.0   110.859 nb^-1
================================================================================================

The vertex finding efficiency for minbias triggers climbs a little bit, but it's still only 51%.  After doing a little digging, it seems this is consistent with Jan's findings in his evaluation of PPV for 2006:

http://www.star.bnl.gov/HyperNews-star/protected/get/starspin/2820.html

First look at some Simulations

At the moment my main focus is the offline calibration of the barrel EMC using neutral pions from 2006.  My (very) rough plan of how to do this is as follows:

Low Mass Background (round 1)

While I am waiting for my simulation jobs to finish, I am switching gears for a moment and looking at real data.  I am trying to finalize the mass window I use for my asymmetry measurement. 

Hit density in FGT region

I am using FTPC hits to study the hit density in the forward region. For this I use files from run 7145009, where the BBC coincidence rates were around 500 kHz.

Xgrid jobmanager status report

  • xgrid.pm can submit and cancel jobs successfully; I haven't tested "poll" since the server is running WS-GRAM.
  • The Xgrid SEG module monitors jobs successfully.  The current version of Xgrid logs directly to /var/log/system.log (readable only by the admin group), so there's a permissions issue to resolve there.  My understanding is that the SEG module can run with elevated permissions if needed, but at the moment I'm using ACLs to explicitly allow user "globus" to read system.log.  Unfortunately, the ACLs get reset when the logs are rotated nightly.
  • CVS is up-to-date, but I can't promise that all of the Globus packaging stuff actually works.  I ended up installing both the Perl module and the C library into my Globus installation by hand.
  • Current test environment uses SimpleCA, but I've applied for a server certificate at pki1.doegrids.org as part of the STAR VO.

Important Outstanding Issues

  • Streaming stdout/stderr and staging out files is a little tricky.  Xgrid requires an explicit call to "xgrid -job results"; otherwise it just keeps all job info in the controller DB.  I haven't yet figured out where to inject this system call in the WS-GRAM job life cycle, so I'm asking for help on gram-dev@globus.org.
  • Need to decide how to do authentication.  Xgrid offers two options at the extreme ends of the spectrum: on one hand we can use a common password for all users, and on the other we can use K5 tickets.  Submitting a job using WS-GRAM involves a round trip of user account -> container account -> user account via sudo, and I don't know how to forward a TGT for the user account through all of that.  I looked around and saw a "pkinit" effort that promised passwordless generation of TGTs from grid certs, but it doesn't seem quite ready for prime time.