computing

Patch for a bug in PYTHIA6

This patch has been included in PYTHIA since version 6.412, so you do not need to apply it if you are using PYTHIA 6.412 or newer.

Download Patch

pythia-6.4.11-t.diff : patch for PYTHIA-6.4.11
pythia-6205-6205t.diff : patch for PYTHIA-6.205

If you need the patch for a different version of PYTHIA, email sakuma@bnl.gov.
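
A typical way to apply the diff with the standard patch utility is sketched below; the source directory name and the -p level are assumptions and may need adjusting to how your PYTHIA tree is laid out:

   cd pythia-6.4.11                               # top of the PYTHIA 6.4.11 source tree (illustrative path)
   patch -p0 --dry-run < pythia-6.4.11-t.diff     # check that the hunks apply cleanly
   patch -p0 < pythia-6.4.11-t.diff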

Specification for a Grid efficiency framework

The following is an independently developed Grid efficiency framework that will be consolidated with Lidia's …

Jobs catalog

When running on rcas, we typically use a legacy csh script named "alljobs".

Miscellaneous scripts

Production overview

As of spring of 2007, the Monte Carlo production is being run on three different platforms:

Test Plan (Alex S., 14 June)

Charge

From email:

We had a discussion with Arie Shoshani and group pertaining

SRM/DRM Testing June/July 2007

Grid-friendly Starsim production scripts

Since the production activity of STAR is migrating to, and will eventually run mostly in, the Grid environment, this necessitates modification (which often means simplification) of the production …

Merger/filtering script

Typically, a Starsim run produces an output file, or a series of files, with names like gstar.1.fz, gstar.2.fz, etc. Regardless of whether we run locally or on the Grid, there is a small chance that the file(s) will be truncated. To guard against the possibility of feeding incorrect data to the reconstruction stage, and/or to perform a split or merger of a few files, a KUMAC script has been developed. It will, among other things, discard incomplete events and produce serially numbered files with names like rcf1319_01_100evts.fzd, which encode the name of the dataset, the serial number of the file (distinct from the numbering of the input files), and the number of events contained therein, all of which is helpful when setting up or debugging the production. It has recently been simplified (although it is still not easily readable) and wrapped into a utility shell script, which does preparation work as well as cleanup. The resulting script, named "filter.tcsh", takes a single argument, which is assumed to be the name of the dataset (and which is then used to name the output files).
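
A minimal usage sketch based on the description above (the dataset name rcf1319 is only an example):

   ./filter.tcsh rcf1319
   # produces serially numbered output such as rcf1319_01_100evts.fzd, rcf1319_02_100evts.fzd, ...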

Miscellaneous production scripts

This page was created to systematize the various scripts currently used in the Monte Carlo production and testing. The contents will be updated as needed; however, the codes are presumed to be correct and working at any given time.

Monitoring transfers

You can tell if transfers are working from the messages in your terminal window.

You can monitor the transfer rate on the pdsfgrid1 ganglia page on the “bytes_in” plot. However, it’s also good to verify that rrs is entering the files into the file catalog as they are sunk into HPSS. This can be done with get_file_list.pl:
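
For example, a query along the following lines will show what has recently been registered; the keys and conditions are purely illustrative and should be adapted to the dataset being transferred:

   get_file_list.pl -keys 'path,filename' -cond 'storage=hpss' -limit 10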

Running the HRM servers at RCF

I suggest creating your own subdirectory ~/hrm_grid similar to ~hjort/hrm_grid. Then copy the following files from my directory to yours:
  • srm.sh
  • hrm
  • bnl.rc
  • drmServer.linux (create the link)
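
A sketch of that setup as shell commands; the ~hjort paths are those given above, but the target of the drmServer.linux link is an assumption (point it at wherever the binary actually lives):

   mkdir ~/hrm_grid
   cd ~/hrm_grid
   cp ~hjort/hrm_grid/srm.sh ~hjort/hrm_grid/hrm ~hjort/hrm_grid/bnl.rc .
   # "create the link": assumed here to mean a symlink rather than a copy
   ln -s ~hjort/hrm_grid/drmServer.linux drmServer.linux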

Running the HRM servers at PDSF

I suggest creating your own subdirectory ~/hrm_g1 similar to ~hjort/hrm_g1. Then copy the following files from my directory to yours:
  • setup
  • hrm
  • pdsfgrid1.rc
  • hrm_rrs.rc
  • Catalog.xml (coordinate permissions w/me)
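
The analogous sketch for PDSF (again only a sketch; remember to coordinate the Catalog.xml permissions as noted above):

   mkdir ~/hrm_g1
   cd ~/hrm_g1
   cp ~hjort/hrm_g1/setup ~hjort/hrm_g1/hrm ~hjort/hrm_g1/pdsfgrid1.rc ~hjort/hrm_g1/hrm_rrs.rc ~hjort/hrm_g1/Catalog.xml .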

How to run the transfers

The first step is to figure out which files you want to transfer and make some file lists for SRM transfers:

  1. At PDSF, make the subdirectories ~/xfer, ~/hrm_g1 and ~/hrm_g1/lists.
  2. Copy the files diskOrHpss.pl, ConfigModule.pm and Catalog.xml from ~hjort/xfer into your xfer directory.
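
The two preparation steps above, written out as commands (at PDSF):

   mkdir -p ~/xfer ~/hrm_g1 ~/hrm_g1/lists
   cp ~hjort/xfer/diskOrHpss.pl ~hjort/xfer/ConfigModule.pm ~hjort/xfer/Catalog.xml ~/xfer/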

Procedure

STAR-PMD Calibration Procedure

Experimental High Energy Physics Group, Department of Physics, University of Rajasthan, Jaipur-302004

The present STAR PMD calibration methodology is as follows:

  1. The events are cleaned and hot_cells are removed.
  2. Cells with no hits in their immediate neighbours are considered isolated hits and are stored in a TTree.
  3. The data for each cell, whenever it is found as an isolated cell, are collected, and the ADC distribution forms the MIP spectrum for that cell.
  4. The MIP statistics are then used for relative gain normalization.

Steps (1), (2) and (3) have been discussed in detail in the past. This writeup concentrates only on (4), i.e. the gain normalization procedure. It attempts to understand the variations in the factors affecting the gains, and to determine how frequently the gain_factors should be determined.
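
As an illustration of step (4), one common convention (assumed here, not prescribed by the text above) is to take the relative gain factor of a cell as the mean (or fitted peak) of its isolated-cell MIP ADC distribution divided by the average over all cells of the same supermodule, gain_factor(cell) = <ADC_MIP>(cell) / <ADC_MIP>(supermodule), and then to divide each measured ADC by this factor to equalize the cell response.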

Calibration

Embedding focus meeting #6

The following topics should be discussed:
  • Understanding of who is doing what for the reshape of the framework (new macros, new maker, etc.) - comprehensive plan and timing.
  • Review of the embedding work plan; next action items to attack [node:3206]. I would propose to address items 12-15 (one of which is below), 40-42 and 45-47.
  • Review of the embedding procedures
Time    Talk                                                            Presenter
16:00   Embedding description and policy (draft?) ( 00:10 ), 1 file    Jerome Lauret (BNL)

Embedding focus meeting 5

Under:
-00-00
Thursday, 1 January 1970
, at 00:00 (GMT), duration : 00:00

Embedding focus meeting 4
