
BSMD ADC saturation simulation

There's been quite some discussion about the saturation of the BSMD ADCs around 850.  We don't have this feature in the simulator, so here's my attempt to put it in.

Here's a plot of the ADC and energy distributions from the slow simulator in DEV without any reconfiguration.  This is the output of testSimulatorMaker.C using 1000 events from photon MC (photon_25_35_10.geant.root):

Note the sharp ADC peaks at 1023 for the SMDs and the PRS.  Now I configure the simulator with the following options (not yet in CVS):

emcSim->setMaximumAdc(kBarrelSmdEtaStripId, 850.0);
emcSim->setMaximumAdcSpread(kBarrelSmdEtaStripId, 15.0);

emcSim->setMaximumAdc(kBarrelSmdPhiStripId, 850.0);
emcSim->setMaximumAdcSpread(kBarrelSmdPhiStripId, 15.0);
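For reference, here's a rough sketch of the kind of saturation model I have in mind; the class and member names below are illustrative only, not the actual StEmcSimulatorMaker internals.  The configured maximum ADC sets the center of the saturation point and the spread smears it with a Gaussian (drawn per call here for simplicity; the real code may draw it per channel):

// Illustrative sketch only -- not the actual StEmcSimulatorMaker code.
#include "TRandom3.h"
#include <algorithm>

class AdcSaturator {
public:
    AdcSaturator(double maxAdc = 850.0, double spread = 15.0)
        : mMaxAdc(maxAdc), mMaxAdcSpread(spread), mRandom(0) { }

    // Clip a simulated ADC at a Gaussian-smeared saturation point.
    int saturate(double rawAdc) {
        double cutoff = mRandom.Gaus(mMaxAdc, mMaxAdcSpread);
        return static_cast<int>(std::min(rawAdc, cutoff));
    }

private:
    double   mMaxAdc;        // e.g. 850 counts for the BSMD strips
    double   mMaxAdcSpread;  // e.g. 15 counts of smearing
    TRandom3 mRandom;        // ROOT random generator
};

With something like this in place the sharp peak at 1023 should turn into a smeared edge near the configured maximum.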


Update

These changes were checked into CVS on October 8, 2007.

Data-MC comparison: Zgg

Data Cuts (a rough pseudo-code sketch of the selection follows the list):
  • L2Gamma triggered events
  • BBC timebin 6 - 9
  • pT > 5.2
  • No charged track association
  • 1 good SMD strip in each plane
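As a rough illustration of how these cuts combine, here's a pseudo-code selection function; the struct and member names are hypothetical, not the actual analysis code:

// Hypothetical sketch of the candidate selection -- names are illustrative.
struct GammaCandidate {
    bool   isL2GammaTriggered;   // L2Gamma trigger fired
    int    bbcTimeBin;           // BBC timebin
    double pt;                   // transverse momentum
    int    nAssociatedTracks;    // charged tracks associated with the cluster
    int    nGoodSmdEtaStrips;    // good BSMD-eta strips
    int    nGoodSmdPhiStrips;    // good BSMD-phi strips
};

bool passesDataCuts(const GammaCandidate& c) {
    return c.isL2GammaTriggered
        && c.bbcTimeBin >= 6 && c.bbcTimeBin <= 9
        && c.pt > 5.2
        && c.nAssociatedTracks == 0
        && c.nGoodSmdEtaStrips >= 1
        && c.nGoodSmdPhiStrips >= 1;
}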

Embedding Notes, Sept 28, 2007

Requests #1154003721 and #1154003633 (Upsilon and J/psi into Pythia+pp)

Working Towards a Photon Cross Section

A schematic view of a cross section is:
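Roughly, in generic notation (my symbols, not necessarily those in the plot that belongs here):

\frac{d\sigma_{\gamma}}{dp_T} \;\approx\;
  \frac{N_{\gamma}^{\mathrm{raw}} - N_{\gamma}^{\mathrm{bkg}}}
       {\mathcal{L}_{\mathrm{int}} \;\varepsilon_{\mathrm{trig}}\,\varepsilon_{\mathrm{reco}}\;\Delta p_T\,\Delta\eta}

i.e. a background-subtracted photon yield divided by the integrated luminosity, the trigger and reconstruction efficiencies, and the bin widths.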

Combinatorics Within Jets: First Looks

  

Combinatorics With Jets: Second Look

BSMD ADC Responses

These plots were generated from the IUCF MuDsts, both /star/institutions/iucf/hew/2006ppLongRuns/ and /star/institutions/iucf/balewski/prodOfficial06_muDst. 

Database load times

Here's a sample breakdown of the time it takes to initialize each of the databases we use in STAR:
Calibrations_eemc.......Real time 0:00:00.101, CP time 0.050
Calibrations_emc........Real time 0:00:00.045, CP time 0.010
Calibrations_ftpc.......Real time 0:00:00.005, CP time 0.000
Calibrations_l2.........Real time 0:00:00.014, CP time 0.000
Calibrations_l3.........Real time 0:00:00.013, CP time 0.000
Calibrations_pmd........Real time 0:00:00.019, CP time 0.010
Calibrations_rhic.......Real time 0:00:00.004, CP time 0.000
Calibrations_rich.......Real time 0:00:00.004, CP time 0.000
Calibrations_ssd........Real time 0:00:00.012, CP time 0.010
Calibrations_svt........Real time 0:00:59.828, CP time 0.490
Calibrations_tof........Real time 0:00:00.054, CP time 0.020
Calibrations_tpc........Real time 0:00:00.845, CP time 0.030
Calibrations_tracker....Real time 0:00:00.018, CP time 0.000
Calibrations_trg........Real time 0:00:00.004, CP time 0.000
Calibrations_zdc........Real time 0:00:00.005, CP time 0.000

Geometry_emc............Real time 0:00:00.030, CP time 0.010
Geometry_ftpc...........Real time 0:00:00.051, CP time 0.000
Geometry_pmd............Real time 0:00:00.117, CP time 0.010
Geometry_ssd............Real time 0:00:00.026, CP time 0.000
Geometry_svt............Real time 0:00:06.470, CP time 0.090
Geometry_tof............Real time 0:00:00.033, CP time 0.000
Geometry_tpc............Real time 0:00:00.326, CP time 0.010

Conditions_trg..........Real time 0:00:00.021, CP time 0.020

RunLog_l3...............Real time 0:00:00.022, CP time 0.000
RunLog_onl..............Real time 0:00:00.019, CP time 0.000
RunLog_rhic.............Real time 0:00:00.005, CP time 0.000

StarDb..................Real time 0:03:39.359, CP time 0.710
I'm a little confused by the fact that StarDb is not equal to the sum of all the other tables, but the breakdown clearly indicates that the SVT tables take up far more time than all the other listed tables combined.  If we could find a way to turn off loading of those tables it would be a great help to people doing code development (and would likely take some load off the DB servers as well).  I understand Yuri is working on something in this department, but I'm not sure how far along it is.

UPDATE

I repeated this test and found that this time StarDb ~ Calibrations_svt:
Calibrations_svt........Real time 0:05:56.508, CP time 0.460
Geometry_svt............Real time 0:00:16.197, CP time 0.080
StarDb..................Real time 0:06:17.437, CP time 0.630

MY HACK

After a bit of digging I found that one can skip the SVT tables by adding the following line to the beginning of StDbConfigNodeImpl::buildTree:
// Skip SVT tables for performance boost -- APK
if( mdbDomain == dbSvt ) return 1;
I've made this hack available in ~kocolosk/fastDbLib. I can't guarantee it won't have side-effects, but with DB load times reduced to < 1 second I think it's worth a shot.

MY HACK, VERSION 2

I missed some recursion in the first hack; this seems to do a better job:
// Skip SVT tables for performance boost -- APK
if( mdbDomain == dbSvt ) {
    if( mnextNode ) mnextNode->buildTree(opt);
    return 1;
}

2006 TPC Drift Velocity Investigation

Preliminary analyses of the 2006 data have shown an abnormally large DCA for tracks from a 4-day period following a purge of the TPC on the evening of May 18th.  TPC experts have suggested that a recalculation of the drift velocity measurements using the procedure developed for Run 7 may allow for better reconstruction of these tracks.  Here's my first attempt at the recalculation, using Yuri's codes "out-of-the-box".

Procedure

  • Restore st_laser DAQ files from HPSS
  • cons StRoot/StLaserAnalysisMaker
  • Run a simple BFC chain:  root.exe -q -b 'bfc.C(9999,"LanaDV,ITTF","path_to_st_laser_daq_file")'
  • execute LoopOverLaserTrees.C+("./st_laser_*.tags.root") to generate drift velocity measurements
StLaserAnalysisMaker has a README which documents this procedure and describes the other macros in the package.

Results

Here are the drift velocity measurements currently in the Calibrations_tpc database and the ones that I recalculated from the st_laser DAQ files.  I'm only showing measurements from the 10 days around the purge:



I'm not sure how much attention should be paid to the original East laser measurements.  The West laser measurements in the DB track pretty closely with the new ones.  The significant difference is that there are more new measurements covering the period where the D.V. was changing rapidly:



So the question we really care about is: for a given event, how different will the D.V. returned by the database be?  The way to calculate that is to compare each new measurement to the DB measurement with the closest preceding beginTime:
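In pseudo-code that lookup is just "latest DB entry at or before the new beginTime"; the container and function names below are my own, not the actual macro:

// Sketch of the comparison -- not the actual macro code.
// dbDv:  beginTime -> drift velocity, as stored in Calibrations_tpc
// newDv: recalculated (beginTime, drift velocity) measurements
#include <cstdio>
#include <map>
#include <vector>

struct DvMeasurement { int beginTime; double driftVelocity; };

void compareToDb(const std::map<int, double>& dbDv,
                 const std::vector<DvMeasurement>& newDv)
{
    for (const DvMeasurement& m : newDv) {
        // first DB entry *after* m.beginTime, then step back one
        auto it = dbDv.upper_bound(m.beginTime);
        if (it == dbDv.begin()) continue;   // nothing precedes this measurement
        --it;                               // closest preceding beginTime
        printf("t=%d  new/DB = %.5f\n", m.beginTime, m.driftVelocity / it->second);
    }
}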



In the West ratio plot one can clearly see the effect of the additional measurements.  For comparison I've plotted the time period where we see problems with the track DCAs and <nTracks> / jet.  See, for example:

http://deltag5.lns.mit.edu/~kocolosk/protected/drupal/4036/summary/bjp1_sum/bjp1_sum_dcaG.gif

http://drupal.star.bnl.gov/protected/spin/trent/jets/2007/apr06/problem_highlights.gif

http://cyclotron.tamu.edu/star/2006Jets/apr23_2007/driftVelocityProb.list

Next Steps

As I mentioned, I didn't tweak any of the parameters in Yuri's codes to get these numbers, so it may be possible to improve them.  I looked at the sector-by-sector histograms in the file and the values for the drift velocities looked generally stable.  The values for the slopes jumped around a bit more.  Assuming there are no additional laser runs that I missed, we could look into interpolating between drift velocity measurements (a simple linear-interpolation formula is sketched below) to get even more fine-grained records of the period when the gas mixture was still stabilizing.  Here's an example of a fit to the new combined drift velocity measurement in the rapidly-varying region:


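On the interpolation idea, the simplest scheme would be linear interpolation in time between successive measurements (t_i, v_i):

v_{\mathrm{drift}}(t) \;\approx\; v_i + \frac{v_{i+1} - v_i}{t_{i+1} - t_i}\,(t - t_i),
\qquad t_i \le t < t_{i+1}

A fit to a smooth functional form over the rapidly-varying region would be an alternative; which works better probably depends on how noisy the individual laser measurements are.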
References

Discussion on starcalib:  http://www.star.bnl.gov/HyperNews-star/get/starcalib/402.html