
problem with BTOW geometry in Geant

 This is a zoom-in on the eta = 1 gap between BTOW and ETOW.

Tracking Efficiency : details

I : Selection of good wafers

Goal : 

BEMC Towers with Good Status but Bad Gain

The attached lists contain the towers with status == 1, gain == 0, and gain_status == 0: 130 from 2006 and 143 from 2005.

 

Cutting on SMD information in single particle MC

The two attached .pdf files show the results of a preliminary set of endcap cuts including the SMD.

DB Consistency Analysis / Maatkit

Intro

I’ve been struggling to keep our MIT database mirror synchronized with the BNL master, and I wanted to write up some steps we (STAR) might take to do a better job of keeping our slaves synchronized. The problem I’m worried about is the situation where, according to the Heartbeat Page, a slave is up to date with robinson, but in reality the slave has somehow become silently corrupted.

Initial Checksums

It turns out that this problem is actually pretty common. I found what seems to be a slick set of utilities called Maatkit that will calculate checksums of every table in a DB and look for differences between replicated DBs. I ran mk-table-checksum on the following servers:

  • robinson.star.bnl.gov:3306
  • db01.star.bnl.gov:3316
  • db02.star.bnl.gov:3316
  • db03.star.bnl.gov:3316
  • rhig.physics.yale.edu:3316
  • star1.lns.mit.edu:3316

and attached the output below as initial_checksum.txt. None of the slaves in that list are fully in sync with robinson according to those checksums. db02 and db03 come much closer than the others; in db02’s case only Calibrations_tracker.{schema,ssdHitError} are different from robinson. I verified for a few cases that differences actually do exist in the tables when the checksums don’t match.
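
For reference, a run covering the servers above boils down to a single command along these lines. This is only a sketch based on the Maatkit DSN syntax, not a verbatim copy of what was run; user/password options are omitted and the exact flags should be checked against the mk-table-checksum documentation:

    # Checksum every table on the master and on each mirror in one pass;
    # each h=...,P=... pair is one of the servers listed above.
    mk-table-checksum \
        h=robinson.star.bnl.gov,P=3306 \
        h=db01.star.bnl.gov,P=3316 \
        h=db02.star.bnl.gov,P=3316 \
        h=db03.star.bnl.gov,P=3316 \
        h=rhig.physics.yale.edu,P=3316 \
        h=star1.lns.mit.edu,P=3316 \
        > initial_checksum.txt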

Resynchronization and Results

Maatkit also provides a utility, mk-table-sync, that determines and optionally executes the SQL commands needed to re-sync one server against another. I used it to re-sync star1 against robinson (a sketch of the command is given below, after the table list); it takes quite a while. I then ran mk-table-checksum and mk-checksum-filter again, and attached the output as checksum_after_sync_filtered.txt. Unfortunately, robinson and star1 still don’t agree perfectly according to the checksums. I’m not sure what tables like Nodes, NodeRelation, and tableCatalog do, but I noticed that the following “physics” tables still did not have matching checksums:

  • Calibrations_ftpc.ftpcGasOut
  • Calibrations_rich.trigDetSums
  • Calibrations_svt.svtPedestals
  • RunLog_onl.beamInfo
  • RunLog_onl.biFitParams
  • RunLog_onl.starMagOnl
  • RunLog_onl.zdcFitParams

Now comes the weird part: I tried a SELECT * FROM biFitParams on both star1 and robinson, and in that case there was no difference in the output, so I’m not sure how the checksums could still disagree. I also tried diffing the starMagOnl tables; the only difference I found was that one server reported some currents as “-0.0000000000” while the other reported “0.0000000000” (no leading minus sign).
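
One guess (an assumption on my part, not something verified against the Maatkit source) is that the checksums are built from the printed text of each row, so “-0.0000000000” and “0.0000000000” end up with different checksums even though the two values compare equal numerically. The effect is easy to see directly in the mysql client:

    # The two zeros are numerically equal (the comparison returns 1), but the
    # CRCs of their printed forms differ, so a text-based row checksum would
    # flag the table as out of sync.
    mysql -e "SELECT -0.0000000000 = 0.0000000000 AS numerically_equal,
                     CRC32('-0.0000000000')       AS crc_minus_zero,
                     CRC32('0.0000000000')        AS crc_plus_zero;"

That would account for the starMagOnl mismatch, though it still leaves the biFitParams case unexplained.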

Summary

So, I realize the results aren’t 100% conclusive, but I still believe that these Maatkit scripts would be a valuable addition to STAR’s QA toolkit. They definitely helped me correct a variety of real problems with our MIT database mirror.

It’s straightforward to take the output of mk-table-checksum and mk-checksum-filter and programmatically put it on a webpage; in fact, Mike Betancourt wrote up a little sed script to do just that. I think we should try scheduling ~daily checksum calculations and posting any discrepancies to the Heartbeat webpage automatically.
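
Concretely, the scheduled piece could be as small as a single cron entry on one of the mirrors. Everything in the sketch below (the 04:00 schedule, the output path, the choice of hosts) is a placeholder to illustrate the idea, not a working recipe:

    # Hypothetical crontab entry: redo the master/mirror checksums nightly and
    # keep only the tables that disagree, in a file the Heartbeat page can show.
    0 4 * * * mk-table-checksum h=robinson.star.bnl.gov,P=3306 h=star1.lns.mit.edu,P=3316 | mk-checksum-filter > /var/www/heartbeat/checksum_diffs.txt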

Pileup simulation in BFC

M-C pileup simulation 

  1. Prepare 2 .fzd files, using the same geometry:
    1. phys.fzd (the hard-process simulation you want to study).

2008 PPV vertex finder status

Mission statement: PPV finds only the Z-vertex (as of June 2008).


Gluon Workshop Talk

 Talk Draft 

collaboration meeting '08

first version : ssd-bouchet-ver2

second version : ssd-bouchet-ver3

third version : ssd-bouchet-ver4

comparison D0 : version 1

comparison D0 : version 2

Conversion Rates for Gammas Identified in pi0 Decays and Isolated w/in a Single Tower

Abstract: Background conversion probability using the identified pi0 sample, take 2. We have used identified pi0 and eta decays to estimate background conversion rates. It is important to cross-check that the conversion rate for a single photon, isolated in a single calorimeter cell, agrees with the conversion rate expected from the thickness of the radiator. This was shown to work well for single photons isolated from eta decays, and for the pair of photons identified in both eta and pi0 decays. However, when we examine the conversion rate for single photons from a pi0 decay isolated in a tower, it does not agree with the 1-photon hypothesis. In this study, we show that the background beneath the pi0 peak does not explain this discrepancy.