Detector Sub-systems

Welcome to the STAR sub-system page. The menu displays all major sub-systems in the STAR setup.

These pages keep documentation and information about the sub-systems, as well as operational procedures, drawings, pictures, detector calibration procedures, and results of calibration studies.

Also consult the STAR internal reviews for more information.

 

BEMC

The STAR Barrel Electromagnetic Calorimeter

BEMC Detector Operator Manual

Authored by O. Tsai, 04/13/2006 (updated 02/02/2011)

You will be operating three detectors:

BTOW Barrel Electromagnetic Calorimeter
BSMD Barrel Shower Maximum Detector 
BPSD Barrel Preshower Detector


All three detectors are delicate instruments and require careful and precise operation.

It is critical to consult and follow the “STAR DETECTOR STATES FOR RUN 11”
and “Detector Readiness Checklist”  for instructions.

Rule 1: If you have any concern about what you are going to do with any of these detectors, please don’t hesitate to ask the people around you in the control room or call the experts for help or explanations.

This manual will tell you:

  1. how to turn On/Off  low and high voltages for  all three detectors.
  2. how to prepare BTOW for “Physics”.
  3. how to recover from a PMT HV Trip.
  4. how to deal with common problems.

First, familiarize yourself with the environment of the control room. This is a picture of the four terminal windows which you will be using to operate the BEMC systems.  For Run 11, Terminal 4 is not in use.  Terminal 0 (not shown) is on the left side of Terminal 1.

 

(clicking on a link will take you directly to that section in the manual)

0 - (on beatrice.starp.bnl.gov)

BEMC Main Control Window


1 - (on emc02.starp.bnl.gov)
BEMC PMT Low Voltage Power Supply Slow Control

2 - (on emcsc.starp.bnl.gov)
BTOW HV Control

3 - (on hoosier.starp.bnl.gov)
BSMD HV Control

0 - (on emc01.starp.bnl.gov)
BPSD HV Control
 
To log in on any of these computers, use the emc account with the password given in the control room copy of the manual.

Terminal 0    “BEMC Main Control Window”

Usually this terminal is logged on to beatrice.starp.bnl.gov.
The tasks you will perform from this terminal are:

  1. Preparing the BEMC detectors for Physics.
  2. Turning On/Off low voltages on the FEEs.
  3. Turning On/Off BEMC crates.
  4. Resetting the Radstone boards (HDLC).
  5. Explaining to experts during phone calls what you see on some of the terminals.



The screenshot above shows how the display on emc02.starp.bnl.gov usually looks during the run. There are five windows open all the time. They are:

  1. “Barrel EMC Status”  - green.
  2. “BEMC MAIN CONTROL” – gray.
  3. “BARREL EMC CANBUS” – blue.
  4. Terminal on sc5.starp.bnl.gov  (referred to as the ‘sc5 window’)
  5. Terminal from telnet scserv 9039 (referred to as ‘HDLC window’)

Prepare BEMC detectors for PHYSICS.

In normal operation this is a one click operation.



Click the “Prepare for PHYSICS” button and in about 10 minutes the “Barrel EMC Status” window will turn green and tell you that you are ready to run.  This window may look a little different from year to year depending on the trigger requirements for BTOW. 

However, if this window turns red, you will be requested to follow the suggested procedures which will pop up in this window: simply click on these procedures to perform them. 

During “prepare for physics” you can monitor the messages on the sc5 window. This will tell you what the program is actually doing. For example, when you click  “Prepare for Physics” you will start a multi-step process which includes:

  1. Turning OFF FEEs on all SMD/PSD crates
  2. Programming TDC (Tower Data Collector, Crate 80)
  3. Reprogramming FPGAs on all BTOW crates
  4. Configuring all BTOW crates
  5. Configuring all SMD/PSD FEEs
  6. Loading pedestals on all BTOW crates
  7. Loading LUTs on all BTOW crates  (only for pp running)
  8. Checking BTOW crate configuration
  9. Checking SMD/PSD configuration
  10. Checking that pedestals were loaded correctly (optional)
  11. Checking that LUTs were loaded correctly (only for pp running)

Usually, you will not be asked to use any other buttons shown on this window. 
 

BEMC MAIN CONTROL

You can initiate all steps outlined above manually from the BEMC MAIN CONTROL window shown below, and do much more with the BEMC system. However, during normal operation you will not be asked to do that, except in cases when an expert on the phone might ask you to open some additional window from this panel and read back some parameters to diagnose a problem. 



You might be asked to:

  1. Turn OFF or ON  SMD/PSD FEE
  2. Open SMD/PSD panel and read voltages and currents on different SMD/PSD FEEs
  3. Open East or West panels to read voltages on some BTOW crates. (Click on Voltages)

That will help the experts diagnose the problem you are calling about.

BARREL EMC CANBUS

From this window you can turn the BEMC crates Off and On and read the parameters of the VME crates. This screenshot shows the window during normal operation with all BEMC crates ON.

  1. Crate 80 TDC  (Tower Data Collector and “Radstone boards”)
  2. Thirty crates for BTOW
  3. Eight crates for BSMD 
  4. Four crates for BPSD

The BTOW crates are powered in groups of three or four crates per power supply (PS) unit.  The fragment below explains what you see.




Tower crates 0x10, 0x11, 0x12, and 0x13 are all powered from a single power supply: PS 16.
Thus, by clicking the On and Off buttons you will switch On/Off all four crates and the communication with the PMT boxes associated with them (see details in the Tower HV Control GUI description).

Sc5 and HDLC windows.

Two other terminal windows on “Terminal 0” are the so-called sc5 and HDLC windows.
These need to be open at all times. To open the sc5 terminal you will need to log in as sysuser on sc5.starp.bnl.gov with password (check the control room copy of the manual).

From this sc5 terminal you run two programs. The first program is emc.tcl. If for some reason you need to restart the “BEMC MAIN CONTROL” or “Barrel EMC Status” GUI, you need to start emc.tcl: the alias for this is emc.  To kill this program use the alias kill_emc.

To open “Barrel EMC Canbus” GUI use alias emc_canbus_noscale.
If you need to reboot canbus then:

  1. open sc5 window
  2. telnet scserv 9040
  3. press “Ctrl” and “x” keys
  4. wait while canbus reboots (~5 minutes or so)
  5. press “Ctrl” and “]” keys
  6. quit telnet session
  7. close sc5 window

To open an HDLC window, first open an sc5 window and then telnet scserv 9039.
To close the telnet session press “Ctrl” and “]”, and then quit from telnet.
You may be asked by experts on the phone to reset the Radstone boards. This is why you need this window open. There are two Radstone boards; to reset them type:
radstoneReset 0 and radstoneReset 1

 

Terminal 1. “BEMC PMT Low Voltage Power Supply Slow Control”

The operation procedures for the PMT HV low voltage power supplies changed for Run 7.
There are two low voltage power supply PL512 units which power the PMT bases.
The PL512 with IP address 130.199.60.79 powers the West side and the PL512 with IP address 130.199.60.81 powers the East side of the detector. A single power supply feeds thirty PMT boxes.  The GUIs for both PL512 units should be open at all times on one of the workspaces on Terminal 1.  The screenshot below shows the GUI under normal conditions. Both PL512 units should be ON all the time, except when power cycling of the PMT bases is required.
There are two buttons to turn the power On and Off; as usual, wait 30 sec. after turning a power supply Off before you turn it On again. To start the GUI use the aliases bemc_west and bemc_east on sc5.starp.bnl.gov.

Terminal 2.       “BTOW HV Control”


This is a typical screenshot of the BTOW HV GUI during “Physics” running.

What is shown on this screen?

The top portion of the screen shows the status of the sixty BTOW PMT boxes.  In this color scheme green means OK, yellow means bad, gray means masked out. 

The buttons marked “PS1 ON”, etc. allow you to slowly ramp the HV on groups of boxes (“PS1 ON” will bring up only boxes 32-39).
The “EAST ON” and “WEST ON” buttons allow you to slowly ramp up the entire East or West side.

The fragment below explains what the numbers on this screen mean.



Each green dot represents a single PMT box.  The label 0x10 tells you that signals from the PMTs in boxes 1 and 2 feed into BTOW crate 0x10, boxes 3 and 4 feed crate 0x11, etc. You will need to know this correspondence to quickly recover from HV trips.
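For quick reference during a trip, this correspondence can be written as a simple formula. The sketch below is not part of the BEMC software; it just extrapolates the stated pattern (two PMT boxes per crate, starting at 0x10) over all 30 BTOW crates, and the helper name is invented for illustration.

    #include <cstdio>

    // Hypothetical helper: PMT box number (1-60) -> BTOW crate ID,
    // assuming the pattern above (boxes 1,2 -> 0x10; 3,4 -> 0x11; ...) holds for all 30 crates.
    int btowCrateForPmtBox(int box)
    {
      if (box < 1 || box > 60) return -1;   // out of range
      return 0x10 + (box - 1) / 2;          // two PMT boxes per crate
    }

    int main()
    {
      printf("box 11 -> crate 0x%x\n", btowCrateForPmtBox(11));  // 0x15, as in the Run 10 example later in this section
      printf("box 12 -> crate 0x%x\n", btowCrateForPmtBox(12));  // 0x15
      return 0;
    }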

 

Turning ON HV on BTOW from scratch.
    1. BEMC PMT LV East and West should be ON.
    2. EMC_HighVoltage.vi should be running.
    3. Make a slow ramp on the West side by pressing the “WEST ON” button.
    4. Make a slow ramp on the East side by pressing the “EAST ON” button.
Steps 3 and 4 both take time, and there is no audio or visual alarm to tell operators that the HV has ramped – operators should observe the progress in the window in the “Main Control” subpanel (see below).
 

Subpanel “Main Control”




The HV on the PMTs is usually ON during the Run.  The two buttons on top are for turning the HV On and OFF on all PMT boxes. The most frequently used button on this subpanel is “Re-Apply HV all PMT Boxes”, which is usually used to recover after an HV trip. Sometimes you will need to “Re-Apply HV current PMT box” if the system does not set the HV on all boxes cleanly.
The scale 0-60 shows you a progress report. The small icons below this scale tell you which PMT box and PMT were addressed or read out. 

When recovering from an HV trip, please pay attention to the small boxes labeled “Box” and “PMT”, and to the speed at which the program reads the voltages on the PMTs.  This will tell you which box has “Timeout” problems and which power supply will need to be power cycled.

Subpanel “PMTs In a Selected Box”


This subpanel shows you the status of the PMTs in a given PMT box.

If you want to manually bring a single PMT box to the operational state by clicking on “Re-Apply HV current PMT box” on the Main Control subpanel you will need to specify which Box To Display on the panel first.

Subpanel “Alarm Control”



On the Alarm Control sub-panel the SOUND should be always ON, except for the case when you are recovering from a HV trip and wish to mute this annoying sound.

Shift personnel are asked to report HV trips in the shift log (type of trip, e.g. single box#, with or without timeout, massive trip, etc…)

Please don’t forget to switch this button back to the ON position after recovering from a trip.

The Main Alarm LED will switch color from green to red in case of an alarm.
 

HV trip with Timeout problem.

Typical situation: you hear a sound alarm indicating an HV trip. The auto-recovery program did not bring all PMT boxes to the operational state, e.g. some boxes are shown in yellow.  The first thing to do is check for the presence of a “Timeout” problem.


Look at the upper right corner of the GUI. If the field below “Timeouts” is blank, then try to recover by re-applying HV to all PMT boxes if the number of bad boxes is more than two. If only one or two boxes are yellow, you can try to re-apply HV to the current PMT box.

If only one or two PMT bases timed out and the HV tripped, try to recover using the above procedure. It is possible for one or two PMT bases to time out without causing HV trips; in that case just continue running and make a note in the shift log about the timed-out PMT bases, and the experts will take care of the problem during the day.


There is also a chance that a single timed-out PMT base will trip many other PMTs. In this case the bad PMT should be masked out. The procedure to do this is simple and can be found at the end of this manual. However, this is an expert operation and should only be performed after consulting with a BEMC expert.

However, if the field below “Timeouts” is filled with numbers (these are PMT addresses), then you have a Timeout problem.  The procedure to recover is below:

  1. Notify the shift leader about this problem and tell them that it will take at least 20 min. to bring the BEMC back to Physics running.
  2. Try to identify which PMT box has timeouts (usually it will be the first yellow box counting from 1 to 60). If you are not sure which box has the timeout problem, read all PMT boxes by clicking on the corresponding button on the “Main Control” subpanel and observe which box creates the problem. The box with the timeout problem will respond VERY slowly and will be clearly seen on the “Main Control” subpanel. At the same time, PMT addresses will appear in the “Timeouts” white space to the right of the green control panel. 
  3. As soon as you find the box with the timeout problem, click “Cancel” on the “Main Control” subpanel and then click “OFF” – you will need to turn HV OFF on all PMTs.
  4. Wait till the HV is shut OFF (all PMT Boxes).
  5. From Terminal 1, power cycle the corresponding PL512 (Off, wait 30 sec., On).
  6. Now turn ON the HV on the PMT boxes from the “Main Control” subpanel. It will take about 2 to 3 minutes to first send the desired voltages to all PMTs and then read them back to verify the HV is set correctly.

It is possible that during step (6) one of the BEMC PMT LV supplies on the East or West side will trip. In this case cancel the ramp (press the “Cancel” button on the “Main Control” subpanel), power cycle the tripped BEMC PMT LV supply, and proceed with the slow ramp.


However, if during step (6) you still get a “Timeout” problem then you will need to:

  1. Call experts

-------------------- Changes for Run10 operation procedures ----------------------------
To reduce the number of HV trips and the associated efficiency losses during data taking we changed the functionality of the PMT HV program. Namely, the HV read-back time interval was changed from 15 minutes to 9999 hours, because it was found that most of the HV trips were self-induced during HV read-back. As a result the efficiency of data taking was improved at the price of “conscious operation”. You can’t rely anymore on the absence of the HV Trip alarm as an indicator that the HV on all PMTs is at nominal values. Instead the shift crew should monitor the associated online plots during data taking to make sure that the HV has not tripped. In particular, for Run 10, the shift crew should watch the two plots “ADC Eta Vs Phi” and “Tower ADC” under the “BEMC Shift” tab. An example of a missed HV trip on one of the PMT boxes during recent data taking is shown below.

The white gap in the upper left corner shows the absence of hits in BTOW due to an HV trip in one of the BTOW PMT boxes. It is easy to find out which box tripped by looking at the second plot.

The gap with missing hits on the second subpanel for crate 0x15 tells you that PMT box 11 or 12 tripped (the correspondence between BTOW crate IDs and PMT boxes is shown in the BTOW HV GUI, see the picture at the beginning of this section).
What should you do if the shift crew notifies you of an HV trip?
The fastest way to recover is to identify which box tripped and then try to recover only this PMT box. In some cases this will be impossible, because you will need to power cycle the LV power supply for the PMT HV system (timeout problems).
This is the typical scenario:
    1. Identify which PMT box(es) potentially tripped. (In the example above, PMT box 11 or 12 lost HV.) To do that:
    1.1 From the BTOW HV GUI subpanel “PMTs In a Selected Box”, select the needed box in the “Box To Display” window.
    1.2 From the BTOW HV GUI subpanel “Main Control”, press “Read HV current PMT Box”. (In the example above, the detector operator found that Box 11 was OK after reading the HV, but Box 12 had timeout problems.)
    1.3 Depending on the result of (1.2), you may simply need to Re-Apply HV to the current PMT box (no timeouts during step (1.2)), or you will need to resolve a timeout problem.
There are additional duties for detector operators when STAR is not taking data for a long period of time for any reason. We need to keep the HV On on all PMTs at all times; this assures stable gains on the PMTs. If for some reason the PMTs are Off for a long time (a few hours), it will be difficult to guarantee that the PMT gains will not drift once we turn the HV On again. A typical situation is APEX days, when STAR is not taking data for 12 hours or so. To check that the HV is on, the shift will be asked to take a short run of 1k events using the “bemcHTtest” configuration, TRG+DAQ+BTOW only. Once the data have been taken, use the link from the run log browser to “L0 Trigger” and check page 6.

All trigger patches should have some hits. In case of the HV trips you will see blank spots. 

 

Procedure to mask out a single timed-out PMT base.
Information you will need:
    1. In which PMT box the timed-out PMT base is located.
    2. Which PMT (CW Base) to mask out.
The number displayed in the timeout window is the CW Base ID; this number needs to be translated to a PMT (CW Base).

From this subpanel you can find out which PMT in the affected PMT box needs to be masked out. Scroll through the
PMT (CW Base) window at the top right and, at the same time, read the CW Base ID in the second window from the bottom left. (As shown, PMT (CW Base) 80 corresponds to CW Base ID 1428.)

 

Now, click the “Configuration (expert only)” button on the Main Control panel.
 Another panel, EMC_Configuration.vi, will open.
  From this panel click the “CW Bases Configuration” button at the bottom right.
 Another panel, EMC_CWBasesConf.vi, will open.
  On this panel specify the PMT BOX Number and then click on the desired CW Base to be masked out. The color of the dot will change from bright green to pale green. Then click the OK button.
The panel will close after that.
  Click the OK button on the EMC_Configuration.vi panel; this panel will close after that.


To check that you masked out the right CW Base, Re-Apply HV to the current PMT box. Once the HV has been re-applied you will see the masked CW base in gray, as shown in the picture above (Bases 50, 59, and 65 were masked out in PMT box 4).

 

Terminal 3.       “BSMD HV Control”



Your login name is emc, your password is __________________________

This screenshot shows how the window on Terminal 3 will look when the HV is Off on the BSMD modules. There should be two open windows: one is a LabView GUI and the other is a telnet session to the SY1527 (CAEN HV mainframe).

In normal operation it is a one click procedure to turn the HV On or OFF on the BSMD.

There is a complete description of the BSMD HV program in a separate folder in this document.

Although the operation is very simple, pay attention to the audio alarms.
Do not mute the ALARM. Shift personnel are asked to report all BSMD trips in the shift log.

 

Terminal 0.       “BPSD HV Control”

The BPSD (Barrel PreShower Detector) HV supplies are two LeCroy 1440 HV systems located on the second floor platform, rack 2C5.  Each 1440 is commanded by a 1445A controller which communicates via the telnet server on the second floor of the platform (SCSERV [130.199.60.167]).  The left supply uses port 9041 and the right supply uses port 9043.

The HV for BPRS should be On at all times.


Starting from Run 10 we use new GUIs for BPRS control. They will be open on one of the desktops on Terminal 0 (beatrice.starp.bnl.gov). Usually detector operators do not need to take any action regarding the BPRS HV unless specifically requested by experts. Screenshots of the new GUIs are shown below.


To start the GUIs type bemc_lecroy and bemc_lecroy2.

 

A green LED indicator tells you that the HV is On and at the desired level.
You can open a subpanel for any slot to read the actual HV values. A screenshot is shown below.

Please ignore the empty current charts – this is normal.


Sometimes the BEMC PSD GUI can turn white due to intermittent problems with the LeCroy crate controller. Simply make an entry in the shift log and continue normal operation. It is likely the HV is still On and at the desired level. Experts do not need to be called right away in this case.

 

Easy Troubles:

The BEMC Main Control seems to be frozen, e.g. the program doesn’t respond to the operator’s requests.

Probable reason: the RadStone cards are in a “funny state” and need to be reset.

From Terminal 1 try to:

  1. kill_emc
  2. Try a soft reboot first: press “Ctrl” + “x” in the HDLC terminal window.
  3. Start emc and see if the problem is solved.

If the problem is still there then:

  1. kill_emc
  2. Power cycle Crate 80.
  3. Start emc and see if the problem is solved.

If the problem is still there, call the experts.

PL512 Information (Run 11 configuration)


There are two PL512 power supplies which provide power to the BEMC PMT boxes.  Both are located in rack 2C2 on the second floor.
The top unit (IP address 130.199.60.79) serves the West side of the detector.  The bottom unit (IP address 130.199.60.81) serves the East side of the detector.  The connection scheme is shown below.

PMT Boxes
                     West Side                              East Side

Boxes:        1-8     9-16    17-22   23-30       32-39   40-47   48-53   54-31
U channels:   0,4,8   1,5,9   3,7,11  2,6,10      0,4,8   1,5,9   3,7,11  2,6,10
U 0,1,2,3   = +5 V
U 4,5,6,7   = -5 V
U 8,9,10,11 = +12 V


Note: the BEMC PMT Low Voltage Power Supply Slow Control channels are enumerated from 1 to 12 in the LabView GUI.

Slow control for the PL512 runs on EMC02; log in as emc, alias PMT_LV.
You will need to specify the IP address.
The Configuration (Experts Only) password is ____________.
Log files are created in the directory /home/emc/logs/ each time you start PMT_LV,
for example
/home/emc/logs/0703121259.txt  (March 12, 2007, 12:59)


To restart the PL512 EPICS applications:
  1. Log in to softioc1.starp.bnl.gov (bemc/star_daq_sc).
  2. Look at the procIDs: screen -list
  3. screen -r xxxx.Bemc-west (or Bemc-east), where xxxx is the procID.
  4. At the bemclvps2> prompt: press (Ctrl A) to detach, or type exit to kill.
  5. Type BEMC-West (or East) to restart.


Expert controls for the PL512
If you need to adjust the LV on a PL512 you can do this using expert_bemc_west or expert_bemc_east.
These GUIs have an experts panel. When adjusting an LV setting, DO NOT try to slide the bars.
Instead, click on the bar and simply type the desired value in the popup window.
Make sure you close the expert GUI and return to the normal operational GUI once
you finish the adjustments.


A copy of this manual as a PDF document is available for download below.

Expert Contacts

Steve Trentalange (on site all run) e-mail: trent@physics.ucla.edu
Phone: x1038 (BNL) or (323) 610-4724 (cell)

Oleg Tsai (on site all run) e-mail: tsai@physics.ucla.edu
Phone: x1038 (BNL)

SMD High Voltage Operation

Version 1.10 -- 04/25/06, O.D.Tsai

Overview

The SMD detectors are a set of 120 proportional wire chambers located inside the EMC modules (one per module). The operating gas is Ar/CO2 (90/10). The nominal operating voltage is +1430 V. As with any other gaseous detector, the high voltage should be manipulated with great care.

A detailed description of the system is given in the Appendix.

SMD HV must be turned OFF before magnet ramp !
 
Standard Operation includes three steps.

Turn HV ON
Turn HV OFF
Log Defective Modules

To turn SMD HV ON the procedure is:
  1. On HOOSIER.STARP.BNL.GOV computer double click on the SMDHV icon on the Windows desktop.  The 'SMD HIGH VOLTAGE CONTROL' window will open.
  2. On the SMD HIGH VOLTAGE CONTROL window click 'POWER' button.
    1. 'POWER' button will turn RED
    2. Within 90 + 150 + 300 sec, the 'Current Mode' window will show the message "Physics Mode".
    3. All modules with high voltage on them will be shown in GREEN, LIGHT BROWN or YELLOW.


To turn HV OFF on SMD the procedure is:
  1. Click on the green 'POWER' button.  Result:
    1. 'POWER' button will turn from green to BROWN
    2. after 30 sec. or so a small window will pop up telling you that the voltages on all channels have reached zero.
  2. Click 'OK' on that small window to stop the program.

To Log Defective Modules

  1. Scroll down the window -- you will see three tables
  2. Log the contents of the left table called 'Defect Module List' if any modules are listed there.
  3. Log the contents of the right table called 'Modules tripped during Standby' (for example Run XXX  #8 - 3 trips, Run XXX  #54 - 1 trip).
  4. Close the 'SMD HIGH VOLTAGE CONTROL' window.
-------------------
Detailed description of the SMD HV Program and associated hardware settings can be found in Appendix.
-------------------

Indicators to watch:

1.  Interlock went RED

In case the STAR global interlock goes ON:
  1. the Interlock LED will turn RED,
  2. the SMD HV program will turn OFF the HV on all SMD channels and the program will halt,
  3. the operator should close the SMD HV Control window.
       
Once the STAR global interlock is cleared, follow the usual procedure to power up the SMD.

 2. Server Timeout went RED or SMD HV Control program is frozen.

This is an unusual situation and the SMD Expert Contacts should be alerted.  Loss of communication with the SY1527 should not lead to immediate damage to the SMD chambers.

If communication with the SY1527 is lost for some reason, the 'SERVER TIMEOUT' LED will turn RED.  If the SMD HV Control program is frozen, the 'Current Time' will not be updated.

The procedure to resolve the problem is:
  1. Open a terminal window on EMC01.STARP.BNL.GOV (the monitor is on top of the SMD HV Control PC).
  2. ping 130.199.60.50 -- observe that packets are transmitted and received.
    1. If there is no communication with the SY1527 (packets lost) -- call one of the Expert Contacts!
  3. If communication is OK, then stop ping and type telnet 130.199.60.50 1527  -- you should see the login window for the CAEN SY1527 system.
  4. Press any key.
  5. Log in as 'admin' with password 'admin' -- you will see the Main Menu window for the SY1527.
  6. From 'Main' choose 'Channels' by pressing 'Enter' -- you will see the Channels menu window.
  7. Verify that HV is present on the channels (VMon).

Usually a second terminal window is open on HOOSIER.STARP.BNL.GOV to monitor the SY1527 HV power supply. If this window is not open, use “putty” to open an SY1527 session.

Now you can operate the HV using this window, but if there is no emergency requiring the HV to be turned OFF, you should first try to restart the SMD HV Control program.  

Basic operations from that window are:
Turn HV OFF
Turn HV ON             

Turn HV OFF
  1. press Tab key
  2. scroll to 'Groups' menu
  3. press 'Enter' to choose "Group Mode" -- you will see a highlighted column
  4. scroll to "Pw"
  5. press the space bar -- you will see "Pw" switch from On to Off and VMon start to decrease.
Turn HV ON
  1. press Tab key
  2. select 'Group' mode
  3. scroll to I0Set
  4. type 5.0  (current limit 5 uA)
  5. scroll to Trip
  6. type 0.5 sec
  7. Verify that V0Set is 1430 V
  8. Scroll to Pw
  9. press the space bar -- you will see Pw switch from OFF to ON and Status go to Up.  VMon will start to increase.
Once the HV reaches nominal it is very important to switch I0Set to 1.6 uA and Trip to 0.1 s on all channels.  Some channels might trip after I0Set and Trip are changed; that is 'normal'. For those channels I0Set and Trip should be set back to 5 uA and 0.5 s. To do that:
  1. Press Tab key
  2. reselect 'Group' mode
  3. Change I0Set and Trip for tripped channels
  4. Power them up - scroll to Pw, and press Space bar.
Usually it is safe to keep channels in 'Group' mode to be able to switch them OFF fast in case of emergency.

Appendix

Before you start:
"Be afraid, even paranoid, and that gives you a chance to catch bad effects in the early stages when they still do not matter" 
--J. Va'vra (Wire Aging Conference)

Detailed information regarding the SY1527 mainframe and the A1733 HV distribution boards can be found on the CAEN web page.

The SMD HV is supplied by the CAEN SY1527 HV system.  The mainframe is located in rack 2C5 (second floor, third row, near the center).  The HV cables run from the modules to the SMD crates (15 cables per crate).  At each crate the HV cables are re-grounded on patch panels, assuring the same ground for the HV and the signals to be read. From the SMD crates, the HV lines then run to the SY1527 system. There are 10 HV cards with 12 HV channels each (model A1733) inside the mainframe to supply high voltage to the SMD chambers.  The parameters of the high voltage system are controlled via Ethernet.  The GUI is based on LabView and the CAEN OPC server.

Hardware settings are:

 HV Hardware limit set to +1500 V on each of A1733 cards.
 HV Software limit (SVMax) set to +1450 V for each channel.
 Communications settings for SY1527 are:
IP 130.199.60.50
Net Mask 255.255.254.0
Gateway 0.0.0.0
User name Admin
Password Admin
Position of switches and status of LEDs on the front panel of SY1527
(from top to bottom, HV is Off)
                              LED
Chk Pass                     On
Toggle Switch 'Loc enable'   On
Ch Status     'NIM'          On
Interlock     'Open'
Master                       On
+48 V                        On
+5  V                        On
+12 V                        On
-12 V                        On
Main                         On

Each A1733 card should have a 50 Ohm Lemo 00 terminator to enable HV.


Description of SMD High Voltage Control program.

All SMD HV software is installed on EMCSC.STARP.BNL.GOV in folder C:/SY1527

The SMD HV Control provides one-button operation of the HV system for the SMD. There are three main functions: Power On, Physics Mode, and Power Off. There are two configuration files, Conf.txt and Conf2.txt, which define the ramp-up speed and trip settings for the different modes of operation.

The nominal settings for Power On are (C:/SY1527/Conf.txt):

V0          1430 V
I0Set       5 uA
Trip        0.5 s
Ramp Down   50 V/sec
Ramp Up     20 V/sec

All channels are allowed to ramp up in three consecutive attempts. If the first attempt (90 sec) for a given channel leads to a trip, then the ramp-up speed is set to 10 V/sec and a second attempt (150 sec) is performed. If the second attempt leads to a trip, then the ramp-up speed is set to 7 V/sec and a third attempt (300 sec) is made. If all three attempts led to a trip, the program disconnects that particular channel from HV (the corresponding LED on the main panel turns RED).

Within 90 + 150 + 300 seconds the program changes the I0Set and Trip parameters from 5 uA and 0.5 sec to 1.6 uA and 0.1 sec and switches to the Physics Mode.
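The retry policy can be summarized in pseudo-code. The snippet below is only a schematic C++ rendering of the logic described above (the real program is a LabView application); the function names are illustrative.

    #include <vector>

    // Schematic rendering of the three-attempt ramp-up policy (illustrative only).
    // rampAndWait() stands for "ramp this channel at the given speed and report
    // whether it reached nominal HV without tripping within the time budget".
    struct Attempt { double rampUp_Vps; int budget_s; };

    bool rampChannel(int channel, bool (*rampAndWait)(int ch, double vps, int sec))
    {
      const std::vector<Attempt> attempts = { {20., 90}, {10., 150}, {7., 300} };
      for (const Attempt& a : attempts)
        if (rampAndWait(channel, a.rampUp_Vps, a.budget_s))
          return true;            // channel reached +1430 V and stays connected
      return false;               // three trips: the program disconnects the channel (LED turns RED)
    }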

The nominal settings for Physics Mode are (C:/SY1527/Conf2.txt)
                         
V0          1430 V
I0Set       1.6 uA
Trip        0.1 s
Ramp Down   50 V/sec
Ramp Up     20 V/sec

Channels are allowed to trip no more than 6 times during the Physics Mode. If a channel trips, then I0Set and Trip for that particular channel are set to 5 uA and 0.5 sec and the program tries to bring the channel back up. On the front panel the Alarm Status will turn RED, a corresponding message will pop up in the 'Current Mode' window, and the LED corresponding to that channel will turn YELLOW or RED.  Once the voltage reaches 90% of nominal, all indicators return to normal status.  Usually it takes about 15 seconds to bring a channel back to normal.  At the bottom of the screen the right table, with modules that tripped during the Physics Mode, will be updated.  If some channel trips more than 6 times, that channel will be HV-disabled by the program and the corresponding LED will turn RED.  The left table at the bottom of the screen will be updated.  Such cases should be treated by experts only, later on.

All SMD channels are monitored during the Physics Mode.  Every 3 seconds or so, the values of voltage, current and time are logged in files XX_YY_ZZ_P.txt in C:/SY1527/Log, where XX is the current month, YY the current day, ZZ the current year, and P is V for voltages and I for currents. The current time is saved in seconds.  The macro C:/SY1527/smdhv.C plots the bad-channel history and fills histograms for all other channels.
 
Turn Off is trivial and needs no explanation.

The meanings of the LEDs on the main panel are:
Green – channel is at nominal HV, current < 1 uA
Brown – channel is at nominal HV, 1 uA < current < 5 uA
Yellow – the HV is < 50% of nominal
Red    - channel was disconnected due to trips during ramp-up or Physics Mode.
Black  - channel disabled by user (Conf.txt, Conf2.txt)

It is important to monitor channels with high current (“Brown”), as well as channels that show a high number of trips during operation.  Note: some channels (54 for example) probably have a leakage on the external HV distribution board and are believed to be OK in terms of discharges on the anode wires, as was verified during the summer shutdown. Others might develop sparking during the run or might have intermittent problems (#8 for example). If problems with sparking are detected at an early stage, those channels might be cured by experts during the run, without loss of the entire chamber as happened during the first two years of operation.  

The C:/SY1527/ReadSingleChannel.vi allows you to monitor a single channel. You may overwrite the nominal HV parameters for any channel using that program (it is not desirable to do so). This program can run in parallel with the main SMD HV Control program.  The operation is trivial: specify 'Channel to monitor' and click on 'Update Channel'. If you wish to overwrite some parameters (see above), then fill in the V0Set etc. fields properly and click on Update Write.

Important! Known bug.
Before you start, you had better fill in all parameters; if you do not do that and stop the program later, the parameters will be overwritten with whatever was in those windows, i.e. if V0Set was 0 then you will power Off the channel.  

 In some cases it is easy to monitor channels by looking at the front panel of the SY1527 mainframe. The image of this panel can be
 obtained by opening a telnet session (telnet 130.199.60.50 1527) on EMC01.STARP.BNL.GOV.

 All three programs can run in parallel.

 -----Experts to be called day or night no matter what---

 1. The operation crew lost communication with SY1527
 2. Any accidental cases (large number (>5) of SMD channels
    suddenly disconnected from HV during the run)
 

For experts only!

What to do with bad modules?

1.    If an anode wire was found broken, then disable the channel by changing 1430 to 0 in both configuration files. That should help avoid confusing the detector operators.

2.    If the anode wires are OK and the module trips frequently during the first half hour after power-up, then it is advised to set the HV on that particular channel to a lower value (-100 V, -200 V from nominal, etc…) and observe the behavior of the chamber (ReadSingleChannel.vi). In some cases the module can be brought back to normal operation after a few hours.


3.    If step 2 did not help, then wait until a scheduled access and try to cure the chamber by applying reverse-polarity HV. Important: you may only do this using a good HV power supply (fast trip protection, with a 5 uA current limit), or using something like a ‘Bertran’ with an external microammeter and a balance resistor of no less than 10 MOhm. In any case you need to observe the current while gradually increasing the HV. Under no circumstances may a ‘Bertran’-like power supply be left unattended during the cure procedure. It is not advised to apply more than -1000 V. In some cases the curing procedure might be fast (one hour or so); in others it might take much longer (24 hours or more) to bring the module back to operation. In any case I would request that you talk with me.


4.    The SMDHV is also installed on EMCSC.STARP.BNL.GOV and can be run from there, although be advised that this will certainly affect the BTOW HV.



LOG:
  Version 1.00 was written 11/12/03
  Version 1.10 corrected   04/25/06
 

A copy of this manual as a Word document is available for download below.

Calibrations

Here you'll find links to calibration studies for the BEMC:

BTOW

2006
procedure used to set the HV online
MIP study to check HV values -- note: offline calibrations available at Run 6 BTOW Calibration

2005
final offline

2004
final offline

2003
offline slope calibration
MIPs
electrons

BSMD

2004
Dmitry & Julia - SMD correlation with BTOW for 200 GeV AuAu

BPSD

2005
Rory - CuCu PSD calibration studies

2004
Jaro - first look at PSD MIP calibration for AuAu data

BPRS

This task has been picked up by Rory Clarke from TAMU. His page is here:

http://www4.rcf.bnl.gov/~rfc/

Run 8 BPRS Calibration

Parent for Run 8 BPRS Calibration done mostly by Jan

01 DB peds R9069005, 9067013

 Pedestal residua for 434 zero-bias events from run 9069005.

The same pedestal for all caps was used - as implemented in the offline DB.

 

Fig 1.

Fig 2.  Run 9067013, excluded caps >120.  All 4800 tiles, pedestal residua from 100 st_zeroBias events. Y-axis [-50,+150].

Fig 3.  Run 9067013, excluded caps >120. Pedestal corrected spectra for all 4800 tiles, 10K  st_physics events. Y-axis [-50,+150].

Dead MAPMTs result in 4 patches, each 4 towers wide.

Fig 4.

Run 9067013, excluded caps >120. 

Zoom-in Pedestal corrected spectra, one ped per channel.

Top 10K st_physics events (barrel often triggered)

Bottom pedestal residua 100 st_zeroBias events

Fig 5.

Run 9067013, input =100 events, accept if capID=124 , raw spectra.

There are 4 BPRS  crates, so 1200 channels/crate.  In terms of softIds it's

PSD1W:  1-300 + 1501-2400
PSD19W: 301-1500
PSD1E:  2401-2900 + 4101-4800
PSD20E: 2901-4100
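For orientation, the softID-to-crate assignment above can be coded as a small lookup. This is a hypothetical helper (not taken from the production code) that simply encodes the ranges listed above.

    #include <cstdio>

    // Hypothetical lookup: BPRS softID (1-4800) -> crate name, per the ranges above.
    const char* bprsCrate(int softID)
    {
      if ((softID >=    1 && softID <=  300) || (softID >= 1501 && softID <= 2400)) return "PSD1W";
      if (softID  >=  301 && softID <= 1500) return "PSD19W";
      if ((softID >= 2401 && softID <= 2900) || (softID >= 4101 && softID <= 4800)) return "PSD1E";
      if (softID  >= 2901 && softID <= 4100) return "PSD20E";
      return "bad softID";
    }

    int main()
    {
      printf("softID  350 -> %s\n", bprsCrate(350));   // PSD19W
      printf("softID 3000 -> %s\n", bprsCrate(3000));  // PSD20E
      return 0;
    }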

Why only 2 channels fired in crate PSD20E ?

02 pedestal(capID)

 Run 9067013, 30K st_physics events, spectra accumulated separately for every cap.

Top plot pedestal (channel), bottom plot integral of pedestal peak to total spectrum.

 


Fig 1. CAP=122


Fig 2. CAP=123


Fig 3. CAP=124


Fig 4. CAP=125


Fig 5. CAP=126


Fig 6. CAP=127



Fig 7. Raw spectra for capID=125. Left: typical good pedestal, middle: very wide pedestal, right: stuck lower bit.

For run 9067013 I found: 7 tiles with ADC=0, ~47 tiles with wide ped, ~80 tiles with stuck lower bit.

Total ~130 bad BPRS tiles based on pedestal shape, TABLE w/ bad BPRS tiles




Fig 8. QA of pedestals, R9067013, capID=125. Below are 5 plots, A through E; all have the BPRS soft ID on the X-axis.

A: raw spectra (scatter plot) + pedestal from the fit as a black cross (with error).

B: my status table based on the pedestal spectrum. 0 = good, non-zero = something bad.

C: chi2/DOF from fitting the pedestal; values above 10 were flagged as bad.

D: sigma of the pedestal fit; values above 2.7 were flagged as bad.

E: integral of the found pedestal peak relative to the total # of entries. On average there were ~230 entries per channel.
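The bad-channel flags in panels B-D amount to two simple cuts; the snippet below is just my sketch of them (names invented), using the thresholds quoted above.

    // Sketch of the pedestal QA flags (illustrative, not the analysis code):
    // a channel is marked bad when the pedestal fit has chi2/DOF > 10 or sigma > 2.7.
    struct PedFit { double chi2PerDof; double sigma; };

    int pedStatus(const PedFit& f)           // 0 = good, non-zero = something bad
    {
      int status = 0;
      if (f.chi2PerDof > 10.) status |= 1;   // cut from panel C
      if (f.sigma      > 2.7) status |= 2;   // cut from panel D
      return status;
    }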



Fig 9. BPRS pedestals for 'normal' caps=113,114,115 shown with different colors


Fig 10. BPRS pedestals for caps=100..127 and all softID , white means bad spectrum, typical stats of ~200 events per softID per cap


Fig 11. BPRS: # of bad caps; only capID=100...127 were examined.


Fig 12. BPRS: sig(ped)


Fig 13. BPRS: examples of the ped distribution for selected channels. Assuming for a single capID sig(ped)=1.5, the degradation of the pedestal resolution if the capID is not accounted for would be sqrt(1.5^2 + 0.5^2) = 1.6 - perhaps it is not worth the effort on average. There still can be outliers.

 

03 tagging desynchronized capID

BPRS Polygraph detecting corrupted capIDs.

Goal: tag events with desynchronized CAP id, find correct cap ID

Method: 

  1. build ped(capID, softID)
  2. pick one BPRS crate (19W)
  3. compute chi2/dof for  series capID+/-2
  4. pick 'best'  capID with smallest chi2/dof
  5. use pedestals for best capID for this crate for this event
  6. if best capID differs from nominal capID call this event 'desynchronized & fixed'

Input: 23K st_physics events from run 9067013.

For technical reasons a limited range of nominal capID=[122,126] was used, which reduces the data sample to 4% (5/128=0.04).
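The scan itself is a small chi2 minimization. The sketch below illustrates the method listed above for one crate and one event; it is not the actual analysis code, and the container layout and the assumed pedestal width sig(ped)=1.5 ADC are my choices.

    #include <vector>
    #include <cstddef>

    // Illustrative capID "polygraph" scan for one BPRS crate and one event.
    // ped[cap][softID] holds the pedestal table built in step 1.
    struct ScanResult { int bestCap; double bestChi2PerDof; bool fixed; };

    ScanResult scanCapID(int nominalCap,
                         const std::vector<int>&    softIDs,   // channels of this crate used in the scan
                         const std::vector<double>& adc,       // raw ADC per channel, same order
                         const double ped[128][4801],
                         double sigPed = 1.5)                  // assumed pedestal width (ADC counts)
    {
      ScanResult r = { nominalCap, 1e30, false };
      for (int d = -2; d <= 2; ++d) {                          // try the nominal capID and +/-1, +/-2
        int cap = (nominalCap + d + 128) % 128;                // capID wraps around at 128
        double chi2 = 0.;
        for (std::size_t i = 0; i < softIDs.size(); ++i) {
          double res = adc[i] - ped[cap][softIDs[i]];
          chi2 += res * res / (sigPed * sigPed);
        }
        double chi2PerDof = chi2 / softIDs.size();
        if (chi2PerDof < r.bestChi2PerDof) { r.bestChi2PerDof = chi2PerDof; r.bestCap = cap; }
      }
      r.fixed = (r.bestCap != nominalCap);                     // event tagged 'desynchronized & fixed'
      return r;
    }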

Results:


Fig 1. ADC-ped(capID,softID) vs. softID for crate 1 (i.e. PSD19W) 'as is'. No capID corruption detection.

 


Fig 2. ADC-ped(capID,softID) with capID detection & correction enabled. The same events.

 

Note, all bands are gone - the capID fix works.

Right: ADC-ped spectra: Black: 594 uncorrupted  events, red: 30 corrupted & fixed events.

 

The integral for ADC[10,70] are 2914 and 141 for good and fixed events, respectively.

141/2914=0.048;  30/594=0.051 - close enough to call it the same (within stats).

 

 

 

 


Fig 3. Auxiliary plots. 

 

TOP left: chi2/dof for all events. About 1100 channels are used out of the 1200 served by crate 1. The rejected ones are bad channels and outliers.

TOP right: change of chi2/dof for events with corrupted & fixed capID. 

BOTTOM: frequency of capID for good & fixed events, respectively.

 

 


Conclusions:

  • The BPRS-Polygraph algo efficiently identifies and corrects the BPRS data for corrupted capIDs; it could be adopted for offline use.
  • There is no evidence that the ADC integration window changes for BPRS data with a corrupted capID.   

 

 


Table 1.

shows the capIDs for the 4 BPRS crates for subsequent events. It looks like, for the same event, the cap IDs are strongly correlated, but different.
Conclusion: if we discard, say, capID=125, we will make a hole covering 1/4 of the barrel, different in every event. This holds for BPRS & BSMD.

capID= 83:89:87:90: eve=134 
capID= 1:4:11:3: eve=135 
capID= 74:74:81:72: eve=136 
capID= 108:110:116:110: eve=137 
capID= 68:72:73:75: eve=138 
capID= 58:55:65:64: eve=139 
capID= 104:110:106:101: eve=140 
capID= 9:6:8:15: eve=141 
capID= 43:37:47:46: eve=142 
capID= 120:126:118:122: eve=143 
capID= 34:41:41:40: eve=144 
capID= 3:0:126:2: eve=145 
capID= 28:33:28:30: eve=146 
capID= 72:64:70:62: eve=147 
capID= 2:6:7:5: eve=148 
capID= 22:32:33:24: eve=149 
capID= 8:4:5:124: eve=150 
capID= 23:17:17:19: eve=151 
capID= 62:57:63:61: eve=152 
capID= 54:53:45:47: eve=153 
capID= 68:75:70:67: eve=154 
capID= 73:79:73:72: eve=155 
capID= 104:98:103:103: eve=156 
capID= 12:5:13:10: eve=157 
capID= 5:10:10:2: eve=158 
capID= 32:33:27:22: eve=159 
capID= 96:102:106:97: eve=160 
capID= 79:77:72:77: eve=161 

 

04 BPRS sees beam background?

The pair of plots below demonstrates BPRS pedestal residua are very clean once peds for 128 caps are used and this 5% capID corruption is detected and fixed event by event.

 INPUT: run 9067013, st_phys events, stale BPRS data removed, all 38K events .


Fig 0. capID correction was enabled for the bottom plot. Soft ID is on the X-axis; rawAdc-ped(softID, capID) on the Y-axis.

 


Now you can believe me the BPRS pedestals are reasonable for this run. Look at the width of pedestal vs. softID, shown in Fig 1 below.

There are 2 regions with wider peds, marked by magenta(?) and red circle.

The individual spectra look fine (fig 2b, 2bb).

But the correlation of pedestal width with softID (fig 1a,1b) and phi-location of respective modules (fig 3a, 3b) suggest it could be due to the beam background at

~7 o'clock on the West and at 6-9 o'clock on the East.

 

 

 

05 ---- peds(softID,capID) & status table, ver=2.2, R9067013

INPUT: st_physics events from run=9067013


Fig 1. Top: pedestal(softID & capID), middle: sigma of pedestal, bottom: status table, Y-axis counts how many capID had bad spectra.

Based on pedestal spectra there are 134 bad BPRS tiles

 


Fig 2. Distribution of pedestals for 4 selected softIDs, one per crate.

 


Fig 3. Zoom-in of the ped(soft,cap) spectrum to show that there are more pairs of 2 capIDs which have a high/low pedestal vs. the average, similar to the known pair (124/125).
It looks like such pairs tend to repeat every 21 capIDs - is there a deeper meaning to it?
(I mean, will the World end in 21*pi days?)


Fig 4. Example of MIP spectra (bottom). MIP peak is very close to pedestal, there are worse cases than the one below.

06 MIP algo ver 1.1

 TPC based MIP algo was devised to calibrate BPRS tiles.

Details of the algo are in the 1st PDF,

examples of MIP spectra for 40 tiles with ID [1461,1540] are in the subsequent 5 PDF files, sorted by MAPMT

 

Fig 1 shows the collapsed ADC-ped response for all 4800 BPRS tiles. The average MIP response is only 10 ADC counts above the ped, which has a sigma of 1.5 ADC. The average BPRS gain is very low.

07 BPRS peds vs. time

Fig 1. Change of the BPRS pedestal over ... within the same fill, see softID~1000

Pedestal residua (Y-axis) vs. softID (X-axis); the same reference pedestals from day 67 (so some peds are wrong for both runs) were used for both plots.

Only fmsslow events, no further filtering, capID corruption fixed on the fly.

Top run 9066001, start 0:11 am, fill 9989

Bottom run 9066012, start 2:02 am, fill 9989

 

 

Fig 2. Run list. The system config was changed between the 2 runs marked in blue.

Fig 3. zoom in of run 9066001

Fig 4. another example of BPRS ped jump between runs: 9068022 & 9068032, both in the same fill.

08 BPRS ped calculation using average

Comparison of accuracy of pedestal calculation using Gauss fit & plain average of all data.

The plain average method is our current scheme for ZS for BPRS & BSMD for 2009 data taking.

Fig 2. TOP: RMS of the plain average, using 13K fmsslow-triggered events, which are a reasonable surrogate for minBias data for BPRS.
Middle: sigma of the pedestal fit using a Gauss shape.
Bottom: ratio of the pedestals from these 2 methods. The typical pedestal value is ~170 ADC. I could not make root display the difference, sorry.
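For concreteness, here is a short ROOT sketch of the two estimators being compared. It assumes a per-channel ADC histogram h has already been filled (e.g. from the fmsslow events); the fit window of +/- 5 ADC around the peak is my own choice.

    #include <cstdio>
    #include "TH1F.h"
    #include "TF1.h"

    // Compare the plain-average pedestal (the 2009 ZS scheme) with a Gauss fit.
    void comparePedMethods(TH1F* h)
    {
      double pedAvg = h->GetMean();                      // plain average of all entries
      double rmsAvg = h->GetRMS();

      double peak = h->GetBinCenter(h->GetMaximumBin()); // fit a Gauss around the peak
      TF1 g("g", "gaus", peak - 5., peak + 5.);
      h->Fit(&g, "QR");                                  // quiet fit, restricted range
      double pedFit = g.GetParameter(1);
      double sigFit = g.GetParameter(2);

      printf("ped: average = %.2f (RMS %.2f), gauss = %.2f (sigma %.2f), ratio = %.4f\n",
             pedAvg, rmsAvg, pedFit, sigFit, pedFit > 0. ? pedAvg / pedFit : 0.);
    }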

09 BPRS swaps, IGNORING Rory's finding from 2007, take 1

This page is kept only for the record- information here is obsolete.

This analysis does not account for the BPRS swaps discovered by Rory in 2007; the default a2e maker did not work properly. 

  • INPUT: ~4 days of fmsslow-triggered events, days 65-69
  • DATA CORRECTIONS:
    • private  BPRS peds(cap,softID) for every run,
    • private status table, the same , based on one run from day 67
    • event-by-event capID corruption detection and correction
    • use vertex with min{|Z|}, ignore ranking to compensate for PPV problem
  • TRACKING:
    • select prim tracks with pr>0.4 GeV, dEdX in [1.5,3.7] keV, |eta|<1.2
    • require that the track enters the tower 1 cm from the edge and exits the tower at any distance from the edge
    • tower ADC is NOT used (yet)
  • 2 histograms of rawADC-ped were accumulated: for all events (top plot) and for BPRS tiles pointed by TPC track (middle plot w/ my mapping   & lower plot with default mapping)

There are large sections of 'MIP-less' BPRS tiles if the default mapping is used: 3x80 tiles + 2x40 tiles = 320 tiles, plus there are a lot of small mapping problems. The plots below are divided according to the 4 BPRS crates - I assumed the bulk of the mapping problems would be contained within a single crate. 

 


Fig 1, crate=0, Middle plot is after swap - was cured for soft id [1861-1900]

    if (softID >= 1861 && softID <= 1880) softID += 20;
    else if (softID >= 1881 && softID <= 1900) softID -= 20;

 

 


Fig2, crate=1, Middle plot is after swap - was cured for soft id [661-740]

    if (softID >= 661 && softID <= 700) softID += 40;
    else if (softID >= 701 && softID <= 740) softID -= 40;


Fig3, crate=2, Middle plot is after swap - was cured for soft id [4181-4220].

But swap in [2821-2900] is not that trivial - suggestions are welcome?

    if (softID >= 4181 && softID <= 4220) softID += 40;
    else if (softID >= 4221 && softID <= 4260) softID -= 40;

 

Fig4, crate=3, swap in [3781-3800] is not that trivial - suggestions are welcome?

 

 

10 -------- BPRS swaps take2, _AFTER_ applying Rory's swaps

 2nd Correction of BPRS mapping (after Rory's corrections are applied).

  • INPUT: ~7 days of fmsslow-triggered events, days 64-70, 120 runs
  • DATA CORRECTIONS:
    • private  BPRS peds(cap,softID) for every run,
    • private status table, excluded only 7 strips with ADC=0
    • event-by-event capID corruption detection and correction
    • use vertex with min{|Z|}, ignore PPV ranking, to compensate for PPV problem
    • BPRS swaps detected by Rory in 2007 data have been applied 
  • TRACKING:
    • select prim tracks with pr>0.4 GeV, dEdX in [1.5,3.7] keV, |eta|<1.2, zVertex <50 cm
    • require that the track enters a tower 1 cm from the edge and exits the same tower at any distance from the edge (0 cm)
    • tower ADC is NOT used (yet)
  • 3 2D histograms of were accumulated:
    •  rawADC-ped (softID)  for all events
    •  the same but only  for BPRS tiles pointed by QAed MIP TPC track 
    •  frequency of correlation BPRS tiles with MIP-like ADC=[7,30] with towers pointed by TPC MIP track

 

Based on the correlation plot (shown as attachment 6) I found ~230 mis-mapped BPRS tiles (after Rory's corrections were applied).

Once the additional swaps were added (listed in Table 1, and in a more logical form in attachment 3) the correlation plot is almost diagonal, as shown in attachment 1.

A few examples of the discovered swaps are in Fig 1. The most important are 2 series of 80 strips, each shifted by 1 software ID.

Fig 2 shows that the MIP signal shows up after the shift by 1 softID is implemented. 

The ADC spectra for all 4800 strips are shown in attachment 2. Attachments 5 & 6 list basic QA of the 4800 BPRS tiles for 2 cases: only Rory's swaps, and Rory's + Jan's swaps.

 


Fig 1. Examples of swaps, dotted line marks the diagonal. Vertical axis shows towers pointed by TPC MIP track. X-axis shows BPRS soft ID if given ADC was in the range [7,30] - the expected MIP response. Every BPRS tile was examined for every track, multiple times per event if more than 1 MIP track was found. 

Left: 4 sets of 4 strips need to be rotated.                                              Right: a shift by 1 of 80 strips overlaps with a rotation of 6 strips.

 


Fig 2. Example of recovered 80 tiles for softID~2850-2900. Fix: softID was shifted by 1.


Fig 3. Summary of proposed here corrections to existing BPRS mapping


Table 1. List of all BPRS swaps , ver 1.0,  found after Rory's corrections were applied, based on 2008 pp data from days 64-70.

The same list in human-readable form is here

Identified 233 BPRS swaps. Convention: old_softID --> new_softID 
389 --> 412 , 390 --> 411 , 391 --> 410 , 392 --> 409 , 409 --> 392 ,
 410 --> 391 , 411 --> 390 , 412 --> 389 , 681 --> 682 , 682 --> 681 ,
 685 --> 686 , 686 --> 685 ,1074 -->1094 ,1094 -->1074 ,1200 -->1240 ,
1220 -->1200 ,1240 -->1260 ,1260 -->1220 ,1301 -->1321 ,1303 -->1323 ,
1313 -->1333 ,1321 -->1301 ,1323 -->1303 ,1333 -->1313 ,1878 -->1879 ,
1879 -->1878 ,1898 -->1899 ,1899 -->1898 ,2199 -->2200 ,2200 -->2199 ,
2308 -->2326 ,2326 -->2308 ,2639 -->2640 ,2640 -->2639 ,2773 -->2793 ,
2793 -->2773 ,2821 -->2900 ,2822 -->2821 ,2823 -->2822 ,2824 -->2823 ,
2825 -->2824 ,2826 -->2825 ,2827 -->2826 ,2828 -->2827 ,2829 -->2828 ,
2830 -->2829 ,2831 -->2830 ,2832 -->2831 ,2833 -->2832 ,2834 -->2833 ,
2835 -->2834 ,2836 -->2835 ,2837 -->2836 ,2838 -->2837 ,2839 -->2838 ,
2840 -->2839 ,2841 -->2840 ,2842 -->2841 ,2843 -->2842 ,2844 -->2843 ,
2845 -->2844 ,2846 -->2845 ,2847 -->2846 ,2848 -->2847 ,2849 -->2848 ,
2850 -->2849 ,2851 -->2850 ,2852 -->2851 ,2853 -->2852 ,2854 -->2853 ,
2855 -->2854 ,2856 -->2855 ,2857 -->2856 ,2858 -->2857 ,2859 -->2858 ,
2860 -->2859 ,2861 -->2860 ,2862 -->2861 ,2863 -->2862 ,2864 -->2863 ,
2865 -->2864 ,2866 -->2865 ,2867 -->2866 ,2868 -->2867 ,2869 -->2868 ,
2870 -->2869 ,2871 -->2870 ,2872 -->2871 ,2873 -->2872 ,2874 -->2873 ,
2875 -->2874 ,2876 -->2875 ,2877 -->2876 ,2878 -->2877 ,2879 -->2878 ,
2880 -->2879 ,2881 -->2880 ,2882 -->2881 ,2883 -->2882 ,2884 -->2883 ,
2885 -->2884 ,2886 -->2885 ,2887 -->2886 ,2888 -->2887 ,2889 -->2888 ,
2890 -->2889 ,2891 -->2890 ,2892 -->2891 ,2893 -->2892 ,2894 -->2893 ,
2895 -->2894 ,2896 -->2895 ,2897 -->2896 ,2898 -->2897 ,2899 -->2898 ,
2900 -->2899 ,3121 -->3141 ,3141 -->3121 ,3309 -->3310 ,3310 -->3309 ,
3717 -->3777 ,3718 -->3778 ,3719 -->3779 ,3720 -->3780 ,3737 -->3757 ,
3738 -->3758 ,3739 -->3759 ,3740 -->3760 ,3757 -->3717 ,3758 -->3718 ,
3759 -->3719 ,3760 -->3720 ,3777 -->3737 ,3778 -->3738 ,3779 -->3739 ,
3780 -->3740 ,3781 -->3861 ,3782 -->3781 ,3783 -->3782 ,3784 -->3783 ,
3785 -->3784 ,3786 -->3785 ,3787 -->3786 ,3788 -->3787 ,3789 -->3788 ,
3790 -->3789 ,3791 -->3790 ,3792 -->3791 ,3793 -->3792 ,3794 -->3793 ,
3795 -->3794 ,3796 -->3835 ,3797 -->3836 ,3798 -->3797 ,3799 -->3798 ,
3800 -->3799 ,3801 -->3840 ,3802 -->3801 ,3803 -->3802 ,3804 -->3803 ,
3805 -->3804 ,3806 -->3805 ,3807 -->3806 ,3808 -->3807 ,3809 -->3808 ,
3810 -->3809 ,3811 -->3810 ,3812 -->3811 ,3813 -->3812 ,3814 -->3813 ,
3815 -->3814 ,3816 -->3855 ,3817 -->3856 ,3818 -->3817 ,3819 -->3818 ,
3820 -->3819 ,3821 -->3860 ,3822 -->3821 ,3823 -->3822 ,3824 -->3823 ,
3825 -->3824 ,3826 -->3825 ,3827 -->3826 ,3828 -->3827 ,3829 -->3828 ,
3830 -->3829 ,3831 -->3830 ,3832 -->3831 ,3833 -->3832 ,3834 -->3833 ,
3835 -->3834 ,3836 -->3795 ,3837 -->3796 ,3838 -->3837 ,3839 -->3838 ,
3840 -->3839 ,3841 -->3800 ,3842 -->3841 ,3843 -->3842 ,3844 -->3843 ,
3845 -->3844 ,3846 -->3845 ,3847 -->3846 ,3848 -->3847 ,3849 -->3848 ,
3850 -->3849 ,3851 -->3850 ,3852 -->3851 ,3853 -->3852 ,3854 -->3853 ,
3855 -->3854 ,3856 -->3815 ,3857 -->3816 ,3858 -->3857 ,3859 -->3858 ,
3860 -->3859 ,3861 -->3820 ,4015 -->4055 ,4016 -->4056 ,4017 -->4057 ,
4018 -->4058 ,4055 -->4015 ,4056 -->4016 ,4057 -->4017 ,4058 -->4018 ,
4545 -->4565 ,4546 -->4566 ,4549 -->4569 ,4550 -->4570 ,4565 -->4545 ,
4566 -->4546 ,4569 -->4549 ,4570 -->4550 ,
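For the record, a remap like this can be applied at analysis time with a simple lookup. The snippet below is only an illustration (the helper name is mine) and enters just the first few pairs from Table 1 as an example.

    #include <cstdio>
    #include <map>

    // Apply an old_softID -> new_softID remap; unlisted softIDs map to themselves.
    int remapBprsSoftId(int softID, const std::map<int,int>& swaps)
    {
      std::map<int,int>::const_iterator it = swaps.find(softID);
      return (it != swaps.end()) ? it->second : softID;
    }

    int main()
    {
      std::map<int,int> swaps;                 // only the first pairs from Table 1, as an example
      swaps[389] = 412;  swaps[390] = 411;  swaps[391] = 410;  swaps[392] = 409;
      swaps[409] = 392;  swaps[410] = 391;  swaps[411] = 390;  swaps[412] = 389;

      printf("389 -> %d\n", remapBprsSoftId(389, swaps));   // swapped
      printf("400 -> %d\n", remapBprsSoftId(400, swaps));   // unchanged
      return 0;
    }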

  For reference:

 Below I list BPRS tiles which look suspicious. However, this is not a detailed study; there are more problems in the histos I have accumulated.
The BPRS status tables need more work; in particular, channels with close to 100% cross talk can't be found using single spectra, because they will look fine (they just belong to another channel).  

 

Identified BPRS hardware problems:

Tiles w/ ADC=0 for all events:

* 3301-4, 3322-4 - all belong to PMB44-pmt1, dead FEE in 2007
   belonging to the same pmt:
     3321 has pedestal only
     3305-8 and 3325-8 have nice MIP peaks, work well

* 2821, 3781, both at the end of an 80-channel shift in the mapping
     neighbours of 2821: 2822,... have a nice MIP 
     similarly for the neighbours of 3781

* 4525, 4526 FOUND! should be readout from cr=2 position 487 & 507, respectively 
 
I suspect that in all those cases we are reading the wrong 'position' from the DAQ file.



Pairs of consecutive tiles with close to 100% cross talk, see Fig 4.
35+6, 555+6, 655+6, 759+60, 1131+2, 1375+6, 1557+8,
2063+4, 2163+4, 2749+50, 3657+8,   
3739 & 3740 copy value of 3738 - similar case but with 3 channels.
4174+5


Hot pixels, fire at random
1514, 1533, 1557,
block: 3621-32, 3635,3641-52 have broken fee, possible mapping problem 
block: 3941-3976 have broken fee


Almost copy-cat: a total of 21 strips in sections of 12+8+1
3021..32, 3041..48, 3052
All have a very broad pedestal. 3052 may show a MIP peak if its gain is low.

 

Fig 4. Example of pairs of correlated channels.

11 BPRS absolute gains from MIP, ver1.0 ( example of towers)

 BPRS absolute gains from MIP, ver 1.0 

  • INPUT: ~9 days of fmsslow-triggered events, days 62-70, 200 runs, 6M events
  • DATA CORRECTIONS:
    • private  BPRS peds(cap,softID) for every run,
    • private status table, excluded only 7 strips with ADC=0
    • event-by-event capID corruption detection and correction
    • use vertex with min{|Z|}, ignore PPV ranking, to compensate for PPV problem
    • BPRS swaps detected by Rory in 2007 data have been applied 
    • BTOW swaps detected & applied
  • TRACKING:
    • select prim tracks with pr>0.4 GeV, dEdX in [1.5,3.3] keV, |eta|<1.3, zVertex <50 cm
    • require track enters & exits a tower 1cm from the edge
  • triple MIP coincidence, requires the following (restrictive) cuts:
    •  to see BPRS  MIP ADC :  TPC MIP track and in the same BTOW tower  ADC in [10,25] 
    •  to see BTOW MIP ADC :  TPC MIP track and in the same BPRS tile  ADC in [7,30] 
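In code, the triple coincidence reduces to two gates; the sketch below just restates the version 1.0 cuts listed above (structure and names are illustrative, not the analysis code).

    // Sketch of the ver 1.0 triple-MIP-coincidence gates (illustrative only).
    struct MipCandidate { bool tpcMipTrack; double btowAdc; double bprsAdc; };

    // Fill the BPRS MIP spectrum only if the matching BTOW tower ADC is in [10,25].
    bool acceptBprsMip(const MipCandidate& c)
    {
      return c.tpcMipTrack && c.btowAdc >= 10. && c.btowAdc <= 25.;
    }

    // Fill the BTOW MIP spectrum only if the matching BPRS tile ADC is in [7,30].
    bool acceptBtowMip(const MipCandidate& c)
    {
      return c.tpcMipTrack && c.bprsAdc >= 7. && c.bprsAdc <= 30.;
    }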

 


Fig 1 Typical MIP signal seen by BPRS(left) & BTOW (right) for soft ID=??, BPM16.2.x  (see attachment 1 for more)

Magenta line is at MIP MPV-1*sigma -> 15% false positives

 


Fig 2 Typical MIP signal seen by BPRS, pmt=BPM16.2

The average gain of this PMT is on the top left plot; the MIP is seen at ADC=4.9, sig=2.6


Fig 3 Most desired MIP signal (ADC=16) seen by BPRS(left) & BTOW (right) for soft ID=1388, BPM12.1.8

Magenta line is at MIP MPV-1*sigma -> 15% false positives,  (see attachment 2 for more)

 


Fig 4 Reasonable BPRS, pmt=BPM11.3, pixel-to-pixel gain variations are small

 


Fig 5 High MIP signal (ADC=28) seen by BPRS(left) & BTOW (right) , BPM11.5.14

Magenta line is at MIP MPV-1*sigma -> 15% false positives,  (see attachment 3 for more)

 


Fig 6 High gain BPRS, pmt=BPM11.5

 

12 MIP gains ver1.0 (all tiles, also BTOW)

 This is still work in progress, same algo as in the previous post.

Now I run on 12M events (was 6M) and do a rudimentary QA on the MIP spectra, which results in ~10% channel loss (the bottom figure). However, the average MIP response per tile is close to the final value.

Conclusion: only green & light yellow tiles have a reasonable MIP response of ~15 ADC. For blue we need to raise the HV, for red we can lower it (to reduce after-pulsing). White areas are masked/dead pixels.


Fig 1: BPRS MIP gains= gauss fit to MIP peak.

A) gains vs. eta & phi to see BPM pattern. B) gains with errors vs. softID, C) sigma MIP shape, D) tiles killed in QA 

 


Fig 2: BTOW MIP gains= gauss fit to MIP peak. Content of this plot may change in next iteration.

A) gains vs. eta & phi. "ideal" MIP is at ADC=18, all towers. Yellow & red have significantly too high HV, light blue & blue have too low HV.

B) gains with errors vs. softID, C) sigma MIP shape, D) towers killed in QA 

 

13 MIP algo, ver=1.1 (example of towers)

This illustrates the improvement in MIP-finding efficiency when the ADC gates on BPRS & BTOW are set at places matched to the actual gains instead of at a fixed 'ideal' location.


Fig 1 is from previous iteration (item 11) with fixed location  MIP gates. Note low MIP yield in BPRS (red histo) due to mismatched BTOW ADC gate (blue bar below green histo).


 


Fig 2 New iteration with adjusted MIP gates. (marked by blue dashed lines).

The MIP ADC gate is defined (based on iteration 1) by the mean value of the Gauss fit +/- 1 sigma of the Gauss width, but not lower than ADC=3.5 and not higher than 2 * the mean ADC.
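
For concreteness, a minimal sketch of this gate computation (variable names are illustrative, not taken from the actual code):

  #include <algorithm>

  // MIP ADC gate from the iteration-1 Gauss fit: mean +/- 1 sigma,
  // clamped so it never goes below ADC=3.5 nor above 2*mean.
  void mipGate(float mean, float sigma, float &adcLo, float &adcHi) {
    adcLo = std::max(mean - sigma, 3.5f);
    adcHi = std::min(mean + sigma, 2.0f * mean);
  }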

Note the similar MIP yield in BPRS & BTOW. Also the new MIP peak position from the Gauss fit did not change, meaning the algo is robust. The 'ideal' MIP ADC range for BTOW, marked by the magenta bar (bottom right), is visibly too low.


 


The attached PDF shows similar plots for 16 towers. Have a look at pages 7 & 14.

14 ---- MIP gains ver=1.6 , 90% of 4800 tiles ----

 BPRS absolute gains using TPC MIPs & BTOW MIP cut, ver 1.6 , 2008 pp data

  • INPUT:  fmsslow-triggered events, days 43-70, 525 runs, 16M events (see attachment 1)
    • BTOW peds & ped-status  from offline DB
  • DATA CORRECTIONS (not available using official STAR software)
    • discard stale events
    • private  BPRS peds(cap,softID) for every run,
    • private status table
    • event-by-event capID corruption detection and correction
    • use vertex with min{|Z|}, ignore PPV ranking, to compensate for PPV problem
    • BPRS swaps detected by Rory in 2007 data have been applied 
    • additional ~240 BPRS swaps detected & applied
    • BTOW ~50 swaps detected & applied
    • BTOW MIP position determined independently (offline DB gains not used)
  • TRACKING:
    • select prim tracks with pr>0.35 GeV, dEdX in [1.5,3.3] keV, |eta|<1.3, zVertex <60 cm, last point on track Rxy>180cm
    • require track enters & exits a tower 1cm from the edge, except for etaBin=20 - only 0.5cm is required (did not help)
  • triple MIP coincidence, requires the following (restrictive) cuts:
    •  to see BPRS  MIP ADC :  TPC MIP track and in the same BTOW tower  ADC in  MIP peak +/- 1 sigma , but above 5 ADC 
    •  to see BTOW MIP ADC :  TPC MIP track and in the same BPRS tile   ADC in  MIP peak +/- 1 sigma , but above 3.5 ADC

 


Fig 1.   Example of a typical BPRS & BTOW MIP peak determined in this analysis. 

The MIP ADC gate (blue vertical lines) is defined (based on iteration 1.0) by the mean value of the Gauss fit +/- 1 sigma of the Gauss width, but not lower than ADC=3.5 (BPRS) or 5 (BTOW) and not higher than 2 * the mean ADC.

 FYI, the  nominal MIP ADC range for BTOW (ADC=4096 @ ET=60 GeV/c) is marked by magenta bar (bottom right).

  • Attachment 6 contains 4800 plots  of BPRS & BTOW like this one below ( large 53MB !!)
  • Numerical values of MIP peak position,width, yield  for all 4800 BPRS tiles are in attachment 4.

 


Fig 2.  ADC of MIP peak for 4800 BPRS tiles.

Top plot: mean. X-axis follows the eta bin, first West then East. Y-axis follows the |eta| bin; 20 is at physical eta=+/-1.0. Large white areas are due to bad BPRS MAPMTs (4x4 or 2x8 channels), single white rectangles are due to a bad BPRS tile or a bad BTOW tower.

Middle plot: mean +/- error of the mean, X-axis = softID. One would wish the mean MIP value to be above 15 ADC, to place the MIP cut comfortably above the pedestal (sig=1.5-1.8 ADC counts).

Bottom plot: width of the MIP distribution. It shows the width of the MIP shape is comparable to the mean, and we want to put the MIP cut well below the mean to not lose half of the discrimination power.

Note, the large number (452) of uncalibrated BPRS tiles does not mean that many are broken. There are 14 known bad PMTs and 2 'halves', total = 15*16 = 240 pixels (see attachment 2). The rest are due to broken towers (required by the MIP coincidence) and isolated broken fibers or FEE channels.



Fig 3. Example of PMT with fully working 16 channels.
Top left plot shows average MIP ADC from 16 pixels. Top middle: correlation between MIP peak ADC and raw slope - can be used for relative gain change in 2009. Top right shows BTOW average MIP response.
Middle: MIP spectra for 16 pixels.
Bottom: raw spectra for the same 16 pixels.

300 plots like this are in attachment 3.

 


Fig 4.  Top plot: MIP ADC averaged over 16 pixels, for 286 BPRS PMTs.  X-axis = PMB# [1-30] + pmt# [1-5]/10. Error bars represent the RMS of the distribution (not the error of the mean).

Middle plot: IDs of the 14 uncalibrated PMTs. For the detailed location of broken PMTs see attachment 2; the red computer-generated ovals on top of Will's 2007 scribbles mark broken PMTs found in this 2008 analysis (blue ovals are repaired PMTs).

Bottom plot shows # of pixels in given PMT  with reasonable MIP signal (used in the top figure).

  • Numerical values of MIP peak per PMT are in attachment 5.


 

 

 


Fig 5.  ADC of the MIP peak for 4800 BTOW towers. Top plot: mean, middle plot: mean +/- error of the mean, bottom plot: width of the MIP 

  •  Numerical values of MIP peak position,width, yield  for all 4800  BTOW towers are in attachment 4.

Note, probably half of the uncalibrated BTOW towers are broken; the other half is due to bad BPRS tiles, which this particular algo requires to work.

The end !!!

 

15 Broken BPRS channels ver=1.6, based on data from March of 2008

 The following BPRS pmts/tiles have been found broken or partially not functioning, based on reco MIP response from pp 2008 data.

 

 Based on PMT-sorted spectra available HERE  (300 pages , 3.6MB)

 

Table 1. Simply dead PMTs. Raw spectra contain 16 nice pedestals, no energy above; see Fig 2.

PMB,pmt, PDF page # , 16 mapped softIDs
2,3     8, 2185 2186 2187 2188 2205 2206 2207 2208 2225 2226 2227 2228 2245 2246 2247 2248 
2,4,    9, 2189 2190 2191 2192 2209 2210 2211 2212 2229 2230 2231 2232 2249 2250 2251 2252 
4,5,   20, 2033 2034 2035 2036 2053 2054 2055 2056 2073 2074 2075 2076 2093 2094 2095 2096 
5,1,   21, 1957 1958 1959 1960 1977 1978 1979 1980 1997 1998 1999 2000 2017 2018 2019 2020 
12,2,  57, 1421 1422 1423 1424 1425 1426 1427 1428 1441 1442 1443 1444 1445 1446 1447 1448 
14,1,  66, 1221 1222 1223 1224 1225 1226 1227 1228 1241 1242 1243 1244 1245 1246 1247 1248 
24,4, 119, 433 434 435 436 453 454 455 456 473 474 475 476 493 494 495 496 
25,4, 124, 353 354 355 356 373 374 375 376 393 394 395 396 413 414 415 416 
26,4, 129, 269 270 271 272 289 290 291 292 309 310 311 312 329 330 331 332 
32,3, 158, 2409 2410 2411 2412 4749 4750 4751 4752 4769 4770 4771 4772 4789 4790 4791 4792 
44,5, 220, 3317 3318 3319 3320 3337 3338 3339 3340 3357 3358 3359 3360 3377 3378 3379 3380 
39,2, 192, 2905 2906 2907 2908 2925 2926 2927 2928 2945 2946 2947 2948 2965 2966 2967 2968    

 

Table 2.  FEE is broken (or 8-connector has a black tape), disabling 1/2 of PMT, see Fig 3.

PMB,pmt,  nUsedPix,  avrMIP (adc), rmsMIP (adc),PDF page # , all mapped softIDs
7,1,   7,   19.14, 4.36,  31, 1797 1798 1799 1800 1817 1818 1819 1820 1837 1838 1839 1840 1857 1858 1859 1860   
40,1,  8,    5.16, 0.53, 196, 2981 2982 2983 2984 3001 3002 3003 3004 3021 3022 3023 3024 3041 3042 3043 3044  
40,2,  6,    5.44, 0.80, 197, 2985 2986 2987 2988 3005 3006 3007 3008 3025 3026 3027 3028 3045 3046 3047 3048  
40,3, 12,    8.50, 1.05, 198, 2989 2990 2991 2992 3009 3010 3011 3012 3029 3030 3031 3032 3049 3050 3051 3052    
44,1, 10,   13.72, 8.24, 216, 3301 3302 3303 3304 3305 3306 3307 3308 3321 3322 3323 3324 3325 3326 3327 3328  
45,2,  8,    5.04, 1.55, 222, 3421 3422 3423 3424 3425 3426 3427 3428 3441 3442 3443 3444 3445 3446 3447 3448   
51,5,  7,   11.37, 2.15, 255, 3877 3878 3879 3880 3897 3898 3899 3900 3917 3918 3919 3920 3937 3938 3939 3940  
52,5,  8,   15.84, 4.80, 260, 3957 3958 3959 3960 3977 3978 3979 3980 3997 3998 3999 4000 4017 4018 4019 4020  
60,1,  8,   15.12, 3.41, 296, 4581 4582 4583 4584 4601 4602 4603 4604 4621 4622 4623 4624 4641 4642 4643 4644

 

Table 3. Very low yield of MIPs (1/5 of typical), may be due to a badly seated optical connector, see Fig 4

PMB,pmt, QAflag, nUsedPix,  avrMIP (adc), rmsMIP (adc),PDF page # , all mapped softIDs
10,5,  14,     0,  0.00, 0.00,  50, 1553 1554 1555 1556 1573 1574 1575 1576 1593 1594 1595 1596 1613 1614 1615 1616    
31,5,  14,     0,  0.00, 0.00, 155, 4677 4678 4679 4680 4697 4698 4699 4700 4717 4718 4719 4720 4737 4738 4739 4740  
37,2,   0,    10, 12.90, 7.41, 182, 2745 2746 2747 2748 2765 2766 2767 2768 2785 2786 2787 2788 2805 2806 2807 2808    
49,1,   0,    16,  9.36, 3.55, 241, 3701 3702 3703 3704 3705 3706 3707 3708 3721 3722 3723 3724 3725 3726 3727 3728   

 

Table 4. Stuck LSB in FEE, we can live with this. (do NOT mask those tiles)

PMB,pmt, QAflag, nUsedPix,  avrMIP (adc), rmsMIP (adc),PDF page # , all mapped softIDs
51,1,0, 15,   8.37, 1.30, 251, 3861 3862 3863 3864 3881 3882 3883 3884 3901 3902 3903 3904 3921 3922 3923 3924  
51,2,0, 16,   8.66, 1.06, 252, 3865 3866 3867 3868 3885 3886 3887 3888 3905 3906 3907 3908 3925 3926 3927 3928  
51,3,0, 16,  11.08, 1.28, 253, 3869 3870 3871 3872 3889 3890 3891 3892 3909 3910 3911 3912 3929 3930 3931 3932  
51,4,0, 16,  17.16, 2.70, 254, 3873 3874 3875 3876 3893 3894 3895 3896 3913 3914 3915 3916 3933 3934 3935 3936   

Table 5. Other problems:

PMB,pmt, QAflag, nUsedPix,  avrMIP (adc), rmsMIP (adc),PDF page # , all mapped softIDs
31,2,0, 16,  12.32, 3.73, 152,4665 4666 4667 4668 4685 4686 4687 4688 4705 4706 4707 4708 4725 4726 4727 4728   

 


Fig 1. Example of a fully functioning PMT (BPM=4, pmt=5). The 16 softIDs are listed at the bottom of the X-axis.
Top plot: ADC spectra after MIP condition is imposed based on TPC track & BTOW response.
Bottom plot: raw ADC spectra for the same channels.

 


Fig 2. Example of dead PMT with functioning FEE.  


Fig 3. Example of a half-dead PMT; they come in packs, most likely a broken FEE.  


Fig 4. Example of weak raw ADC, perhaps optical connector got loose.  


Fig 5. Example of a stuck LSB. We can live with this, but the hardware gain must be ~10% higher (ADC --> 18)  

16 correlation of MIP ADC vs. raw slopes

 Scott asked for a crate-based comparison of MIP peak position vs. raw slopes.

I selected 100 consecutive BPRS tiles, in 2 groups, from each of the 4 crates. The crate with systematically lower gain is the 4th (PSD-20E). 

The same spectra from pp 2008 fmsslow events are used as for all items in the Drupal page.

 


Fig 1. BPRS crate=PSD-1W

 


Fig 2. BPRS crate=PSD-19W

 


Fig 3. BPRS crate=PSD-1E

 


Fig 4. BPRS crate=PSD-20E This one has a lower MIP peak

 

Run 9 BPRS Calibration

Parent for Run 9 BPRS Calibration

01 BPRS live channels on day 82, pp 500 data

Status of BPRS live channels on March 23, 2009, pp 500 data.

Input: 32K events accepted by the L2W algo, from 31 runs taken on days 81 & 82.

Top fig shows high energy region for 4800 BPRS tiles

Middle fig shows the pedestal region; note we have ZS & ped-subtracted data in DAQ - the plot is consistent. White areas are non-functioning tiles.

Bottom fig: projection of all tiles. Bad channels are included. The peak at ADC~190 is from corrupted channels, softID~3720. The peak at the end comes most likely from saturation of the BPRS when a large energy is deposited. This is OK - the main use of the BPRS is as a MIP counter.

Attached PDF contains more detailed spectra so one can see every tile.

 

 

BSMD

Collected here is information about BSMD Calibrations.

There are 36,000 BSMD channels, divided into 18,000 strips in eta and 18,000 strips in phi. They are located 5.6 radiation lengths deep in the Barrel Electromagnetic Calorimeter (BEMC).

1) DATA: 2008 BSMD Calibration

Information about the 2008 BSMD Calibration effort will be posted below as sub-pages.


Fig 1. BSMD-E 2D mapping of soft ID. (plot for reverse mapping is attached)

01) raw spectra

 Goals:

  1. verify pedestals loaded to DB are reasonable for 2008 pp data
  2. estimate stats needed to find slopes of individual strips for minB events

 Method:

  1. look at pedestal residua for individual strips, exclude caps 1 & 2, use only status==1
  2. fit gauss & compare with histo mean
  3. find integrals of individual strips, sum over 20 ADC channels starting from ped+5*sig (a minimal sketch of steps 2-3 follows below) 
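
Here is that sketch, using ROOT (illustrative only, not the actual macro):

  #include "TH1F.h"
  #include "TF1.h"
  #include <cstdio>

  void checkStrip(TH1F *hResid) {                // hResid: pedestal residuum of one strip
    hResid->Fit("gaus", "Q");                    // step 2: gauss fit ...
    TF1 *g = hResid->GetFunction("gaus");
    double ped = g->GetParameter(1);             // fitted residuum position
    double sig = g->GetParameter(2);             // fitted width
    printf("gauss mean=%.2f  histo mean=%.2f\n", ped, hResid->GetMean()); // ... compare with histo mean
    int b1 = hResid->FindBin(ped + 5.*sig);      // step 3: integral over 20 ADC channels
    int b2 = hResid->FindBin(ped + 5.*sig + 20.);
    printf("yield above ped+5*sig = %.0f\n", hResid->Integral(b1, b2));
  }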

To obtain a muDst w/o zero suppression I ran privately the following production chain:

chain="DbV20080703 B2008a ITTF IAna ppOpt VFPPV l3onl emcDY2 fpd ftpc trgd ZDCvtx NosvtIT NossdIT analysis Corr4 OSpaceZ2 OGridLeak3D beamLine BEmcDebug"


Fig 1

Examples of single strip pedestal residua, based on ~80K minB events from days 47-65, 30 runs. (1223 is # of bins, ignore it).

Left is a typical good spectrum, see Fig 2.3. Middle is also reasonable, but the pedestal is 8 channels wide vs. the typical 4 channels.

The strip shown on the right plot is probably broken. 


Fig 2

Detailed view on first 500 strips. X=stripID for all plots.

  1. Y=mean of the gauss fit to pedestal residuum, in ADC channels, error=sigma of the gauss.
  2. Y=integral over 20 ADC channels starting from ped+5*sig. 
  3. Raw spectra, Y=ADC-ped, exclude caps 1 & 2, use only status==1

 


Fig 3

Broader view of ... problems in the BSMD-E plane. Note, the status flag was taken into account.

The top plot is a sum of 30 runs from days 47-65, 80K events. The bottom plot is just 1 run, 3K events. You can't distinguish individual channels, but the scatter plot works like a sum of channels, so it is clear the slopes are there; we just need more data.

 


Conclusions:

  1. DB peds for BSMDE look good on average.
  2. with 1M events we will start to see relative gains for individual strips with ~20% error. Production will finish tomorrow.
  3. there are portions of SMDE masked out (empty area in fig 3.2, id=1000) - do we know what broke? Will it be fixed in the 2009 run?
  4. there are portions of SMDE not masked but dead (solid line in fig 3.2, id=1400) - worth going after those
  5. there are portions of SMDE not masked but with unstable (or wrong) pedestal (fig 3.1, id=15000)
  6. for most channels there are one or more caps with a different ped not accounted for in the DB (thin line below the pedestal in fig 2.3)
  7. One gets a taste of the gain variation from fig 2.2
  8. Question: what width of pedestal is tolerable? Fig 2.1 shows the width as error bars. Should I kill channel ID=152?

 

02) relative BSMD-E gains from 1M dAu events

 Glimpse of relative calibration of BSMDE from 2008 d+Au data

 

Input: 1M dAu minb events from runs: 8335112+8336019

Method: fit slopes to individual strips, as discussed in 01) raw spectra

 


Fig 1

Examples of raw pedestal-corrected spectra for the first 9 strips, 1M dAu events


Fig 2

Detailed view on first 500 strips. X=stripID for all plots.

  1. Y=mean of the gauss fit to pedestal residuum, in ADC channels, error=sigma of the gauss.
  2. Y=integral over 20 ADC channels of the raw spectra, starting from ped+5*sig.
  3. Y=gain defined as "-slope" from the exponential fit over ADC range  20-40  channels, errors from expo fit. Blue line is constant fit to gains.


Fig 3

BSMDE strips cover the whole barrel and eta-phi representation is better suited to present 18K strips in one plot.

  1. Mapping of BSMDE softID (Z-axis) into eta-phi space. Eta bin 0 is eta=-1.0, eta bin 299 is eta=+1.0. Phi bin 0 starts at 12:30 and advances with the phi angle. 
  2. gains for majority of 18K BSMDE strips. White means no data or discarded by rudimentary QA of peds, yields or slope.



Fig 4

For reference, spectra from 1M pp events from ~12 EmcCheck runs from days 47-51. It proves I did it, and it was naive on my side to expect that 1M pp events would be enough.


Fig 5

More pp event spectra - a lot of problems with the DB content. 

 

03) more details , answering Will

 This page provides more details addressing some of Will's questions.

 



   2) fig 2: well, 500 chns is not a very "natural" unit, but I wonder
      what corresponds to 50 chns (e.g., the region of fluctuation
      250-300) ... I need to check my electronics readout diagrams
      again, or maybe folks more expert will comment

Fig 1.

Zoom-in of the good-to-bad region of BSMDE


Fig 2. 

'Good' strips belong to barrel module 2, crate 2, sitting at ~1 o'clock on the WEST


 

Fig 3. 

'BAD' strips also belong to barrel module 2, crate 2, sitting at ~1 o'clock on the WEST

 

 

04) bad CAP 123

 Study of pedestal correlation for BSMDE

Goal: identify source of the band below main pedestals. 

Figs 1,2 show pedestals 'breathe' in a correlated way for channels in the same crate, but this mode is decoupled between crates. It may be enough to use individual peds for all CAPS to reduce this correlation.

Fig3 shows CAP=123 has bi-modal pedestals. FYI, CAPS 124,125 were excluded because they also are different.

 

Based on Figs 1 & 3 one could write an algo identifying, event by event, in which mode CAP=123 settled, but for now I'll discard CAP123 as well.

 All plots are made based on 500K d-Au events from the run 8336052.

 


Fig 0
Example of pedestal residua for BSMDE strips 1-500, after CAPS 124 and 125 were excluded.


Fig 1
Correlation between pedestal residua for neighbor strips. Strip 100 is used on all plots on the X-axis


Fig 2
Correlation between pedestal residua for strips in different crates. Strip 100 is used on all plots on the X-axis


Fig 3
Squared pedestal residua for strips [1,150] were summed for every event and plotted as function of CAP ID (Y-axis).

Those strips belong to the same module #1. The X-axis shows SQRT(sum) for convenience. CAP=123 has a double pedestal.

 

05) BSMDE saturation, dAu, 500K minB eve

 Input: 500K d-Au events from run 8336052,

Method : drop CAPS 123,124,125, subtract single ped for all other CAPS.

 


Fig 1 full resolution, only 6 modules , every module contains 150 strips.


Fig 2 All 18K strips (120 modules); every module contains only 6 bins, every bin is a sum of 25 strips. 

 h->RebinX(25), h->SetMinimum(2), h->SetMaximum(1e5)

06) QAed relative gains BSMDE, 3M d-Au events , ver1.0

Version 1 of relative gains for BSMDE, d-AU 2008.

 

INPUT: 3M d-AU events from day ~336 of 2007.

Method: fit slopes over the ADC range [ped+30, ped+100].
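
A minimal ROOT sketch of such a slope fit for one strip (illustrative only; the real macro may differ):

  #include "TH1F.h"
  #include "TF1.h"

  // Fit an exponential to the ADC spectrum of one strip in [ped+30, ped+100];
  // the relative gain is taken as "-slope" (its error from fexp.GetParError(1)).
  double fitRelGain(TH1F *hAdc, double ped) {
    TF1 fexp("fexp", "expo", ped + 30., ped + 100.);  // expo = exp(p0 + p1*x)
    hAdc->Fit(&fexp, "RQ");                           // R: restrict to the range, Q: quiet
    return -fexp.GetParameter(1);
  }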

The spectra,  fits of pedestal residuum, and slopes were QAed.

Results: slopes were found for 16,577 of 18,000 strips of BSMDE.

 


Fig1   Good spectrum for strip ID=1. X-axis: ADC-ped; CAPs 123,124,125 excluded.


Fig2 TOP:  slope distribution (Y-axis) vs. stripID within a given module (X-axis). Physical eta=0.0 is at X=0, eta=1.0 is at X=150.

BOTTOM: status table marking the eta-phi locations of the 1423 BSMDE strips excluded by the multi-stage QA of the spectra. Different colors denote various failed tests.

 

Fig3 Mapping of the known BSMDE topology onto our chosen eta-phi 2D localization. The official stripID is shown in color.

 

 

07) QA method for SMD-E, slopes , ver1.1

Automatic QA of BSMDE minB spectra. 

Content

  1. example of good spectra (fig 0)
  2. QA cuts definition (table 1) + spectra (figs 1-7)
  3. Result:
    1. # of bad strips per module. BSMDE modules 10,31,68  are damaged above  50%+. (modules 16-30 served by crate 4 were not QAed). 
    2. eta-phi distributions of: slopes and slope error (fig 9), pedestal and pedestal width (fig 10), _after_ QA
    3. sample of good and bad plots from every module, including modules 16-30 (PDF at the end)

The automatic  procedure doing QA of  spectra was set up in order to preserve only good looking spectra as shown in the fig 0 below. 


Fig 0   Good spectra for random strips in module=2. X-axis shows pedestal residua. It is shown to set a scale for the bad strips shown below. 

 

INPUT: 3M d-AU events from day ~336 of 2007.

All spectra were pedestal-subtracted, using one value per strip; CAPS 123,124,125 were excluded. Below I'll use the term 'ped' instead of the more accurate 'pedestal residuum'.

Method: fit slopes over ADC = [ped+40, ped+90], or starting from 5*sig(ped) if that is too low.

The spectra,  fits of pedestal residuum, and slopes were QAed.

The QA method was set up as a sequential series of cuts; upon failure, later cuts were not checked (a sketch of this bit-coded QA follows after Table 1).

Note, BSMD crate 4 had old resistors on day 366 of 2007 and was excluded from this analysis.
This reduces the # of strips from 18,000 to 15,750. 

 

Table 1 Definition of QA cuts
cut#  code  description                                # of discarded strips  figure
1     1     at least 10,000 entries in the MPV bin     4                      -
2     2     MPV position within +/- 5 ADC channels     57                     Fig 1
3     4     sig(ped) of gauss fit in [1.6,8] ADC ch    335                    Fig 2
4     8     position of gauss mean within +/- 4 ADC    11                     Fig 3
5     16    yield from [ped+40,ped+90] out of range    441                    Fig 4
6     32    chi2/dof of slope fit in [0.6,2.5]         62                     Fig 5
7     64    slopeError/slope > 16%                     4                      Fig 6
8     128   slope within [-0.015, -0.05]               23                     Fig 7
-     sum   out of 15,750 processed strips, discarded  937 ==> 5.9%
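
A minimal sketch of this bit-coded, first-failure QA (the StripFit container and the exact pass/fail sense of the yield cut are my assumptions; the thresholds are copied from Table 1):

  #include <cmath>

  // Hypothetical container for the per-strip fit results used by the QA.
  struct StripFit {
    double mpvEntries;       // entries in the MPV bin
    double mpvPos;           // MPV position (ADC channels)
    double pedMean, pedSig;  // gauss fit to the pedestal residuum
    double slope, slopeErr;  // slope fit in [ped+40, ped+90]
    double chi2dof;          // chi2/dof of the slope fit
    bool   yieldOutOfRange;  // yield in [ped+40, ped+90] outside its allowed window
  };

  // Returns 0 if the strip passes, otherwise the bit code (Table 1) of the FIRST failed cut;
  // later cuts are not checked.
  int qaStrip(const StripFit &s) {
    if (s.mpvEntries < 10000)                 return 1;   // cut 1
    if (fabs(s.mpvPos) > 5.)                  return 2;   // cut 2
    if (s.pedSig < 1.6 || s.pedSig > 8.)      return 4;   // cut 3
    if (fabs(s.pedMean) > 4.)                 return 8;   // cut 4
    if (s.yieldOutOfRange)                    return 16;  // cut 5
    if (s.chi2dof < 0.6 || s.chi2dof > 2.5)   return 32;  // cut 6
    if (s.slopeErr / fabs(s.slope) > 0.16)    return 64;  // cut 7
    if (s.slope > -0.015 || s.slope < -0.05)  return 128; // cut 8
    return 0;
  }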

 

 


Fig 1 Example of strips failing QA cut #2, MPV position out of range , random strip selection

 



Fig 2a Distribution of width of pedestal vs. eta-bin


Fig 2b Example of strips failing QA cut #3, width of pedestal out of range , random strip selection



Fig 3a Distribution of pedestal position vs. eta-bin


Fig 3b Example of strips failing QA cut #4, pedestal position out of range , random strip selection



Fig 4a Distribution of yield from the slope fit range vs. eta-bin


Fig 4b Example of strips failing QA cut #5, yield from the slope fit range out of range , random strip selection



Fig 5a Distribution of chi2/DOF from the slope fit vs. eta-bin


Fig 5b Example of strips failing QA cut #6, chi2/DOF from the slope fit out of range , random strip selection



Fig 6a Distribution of err/slope vs. eta-bin


Fig 6b Example of strips failing QA cut #7, err/slope out of range , random strip selection



Fig 7a Distribution of slope vs. eta-bin


Fig 7b Example of strips failing QA cut #8, slope out of range , random strip selection



Results

Fig 8a Distribution of # of bad strips per module. 

BSMDE modules 10, 31, 68 are damaged above 50%. Ymax was set to 150, i.e. the # of eta strips per module. Modules 16-30, served by crate 4, were not QAed. 


Fig 8b 2D Distribution of # of bad strips indexed by eta & phi strip location. Z-scale denotes error code from the 2nd column from table 1.


Fig 9 2D Distribution of slope indexed by eta & phi strip location.
TOP: slopes. There is room for gain improvement in the offline analysis. At fixed eta (vertical line) there should be no color variation.
BOTTOM error of slope/slope.


Fig 10 2D Distribution of pedestal and pedestal width indexed by eta & phi strip location.
TOP: pedestal
BOTTOM: pedestal width.

08) SMD-E gain equalization , ver 1.1

Goal : predict BSMD-E relative gain corrections for every eta bin 

Method : find average slope per eta slice, fit gauss, determine average slope : avrSlope(iEta)

Gain correction formula is used only for extreme deviations (a sketch follows after this list):  

  • gainCorr_i = slope_i/avrSlope(iEta),  i = 1,..,18,000
  • if( fabs(1-gainCorr_i)<0.15 ) gainCorr_i = 1.0  // do not correct
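
A minimal sketch of this correction (names are illustrative):

  #include <cmath>

  // gainCorr_i = slope_i / avrSlope(iEta); deviations below 15% are left uncorrected.
  double gainCorr(double slope, double avrSlopeEta) {
    double c = slope / avrSlopeEta;
    if (fabs(1. - c) < 0.15) c = 1.0;
    return c;
  }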

Fig 1 Example of 2 eta slices

 


Fig 2 TOP: Slope distribution vs. eta-bin, average marked by crosses

BOTTOM: predicted gain correction. Correction=1 for strips w/ undetermined gains. 


Fig 3 Predicted gain correction. Correction=1 for ~14K of 18K strips.


Fig 4 The same predicted gain correction vs. stripID.

09) QA of SMD-P slopes, ver1.1

Automatic QA of BSMD-P minB spectra. 

Content

  1. example of good spectra (fig 0)
  2. QA cuts definition (table 1) + spectra (figs 1-7)
  3. Result:
    1. # of bad strips per module. BSMD-P modules 1,4,59,75,85  are damaged above  50%+. (modules 16-30 served by crate 4 were not QAed). 
    2. eta-phi distributions of: slopes and slope error (fig 9), pedestal and pedestal width (fig 10), _after_ QA
    3. sample of good and bad plots from every module, including modules 16-30 (PDF at the end)

The automatic  procedure doing QA of  spectra was set up in order to preserve only good looking spectra as shown in the fig 0 below. 


Fig 0   Good spectra for random strips in module=2. X-axis shows pedestal residua. It is shown to set a scale for the bad strips shown below. 

 

INPUT: 3M d-AU events from day ~336 of 2007.

All spectra were pedestal-subtracted, using one value per strip; CAPS 123,124,125 were excluded. Below I'll use the term 'ped' instead of the more accurate 'pedestal residuum'.

Method: fit slopes over ADC = [ped+40, ped+90], or starting from 5*sig(ped) if that is too low.

The spectra,  fits of pedestal residuum, and slopes were QAed.

The QA method was set up as a sequential series of cuts; upon failure, later cuts were not checked.

Note, BSMD crate 4 had old resistors on day 366 of 2007 and was excluded from this analysis.
This reduces the # of strips from 18,000 to 15,750. 

 

Table 1 Definition of QA cuts
cut#  code  description                                # of discarded strips  figure
1     1     at least 10,000 entries in the MPV bin     2                      -
2     2     MPV position within +/- 5 ADC channels     10                     Fig 1
3     4     sig(ped) of gauss fit in [0.75,8] ADC ch   32                     Fig 2
4     8     position of gauss mean within +/- 4 ADC    0                      Fig 3
5     16    yield from [ped+40,ped+90] out of range    758                    Fig 4
6     32    chi2/dof of slope fit in [0.55,2.5]        23                     Fig 5
7     64    slopeError/slope > 10%                     1                      Fig 6
8     128   slope within [-0.025, -0.055]              6                      Fig 7
-     sum   out of 15,750 processed strips, discarded  831 ==> 5.2%

 

 


Fig 1 Example of strips failing QA cut #2, MPV position out of range , random strip selection

 



Fig 2a Distribution of the width of the pedestal vs. strip # inside the module. For the East side I count strips as -1,-2, ...,-150.


Fig 2b Example of strips failing QA cut #3, width of pedestal out of range , random strip selection



Fig 3 Distribution of pedestal position vs. strip # inside the module 


Fig 4a Distribution of yield from the slope fit range vs. eta-bin


Fig 4b Example of strips failing QA cut #5, yield from the slope fit range out of range , random strip selection



Fig 5a Distribution of chi2/DOF from the slope fit vs. eta-bin


Fig 5b Example of strips failing QA cut #6, chi2/DOF from the slope fit out of range , random strip selection



Fig 6 Distribution of err/slope vs. eta-bin



Fig 7a Distribution of slope vs. eta-bin


Fig 7b Example of strips failing QA cut #8, slope out of range , random strip selection



Results

Fig 8a Distribution of # of bad strips per module. 

BSMD-P modules 1, 4, 59, 75, 85 are damaged above 50%. Ymax was set to 150, i.e. the # of eta strips per module. Modules 16-30, served by crate 4, were not QAed. 


Fig 8b 2D Distribution of # of bad strips indexed by eta & phi strip location. Z-scale denotes error code from the 2nd column from table 1.


Fig 9 2D Distribution of slope indexed by eta & phi strip location.
TOP: slopes. At fixed eta (horizontal line) there should be no color variation; red = dead strips.
BOTTOM: error of slope/slope; white = dead strips.


Fig 10 2D Distribution of pedestal and pedestal width indexed by eta & phi strip location.
TOP: pedestal; dead strips have 0 residuum.
BOTTOM: pedestal width; white marks dead strips.

10) SMD-P gain equalization , ver 1.1

Goal : predict BSMD-P relative gain corrections for every eta bin 

Method : find average slope per eta slice, fit gauss, determine average slope : avrSlope(iEta)

Gain correction formula is used only for extreme deviations:  

  • gainCorr_i = slope_i/avrSlope(iEta),  i =1,..,18,000
  • if( fabs(1-gainCorr_i)<0.10 ) gainCorr_i = 1.0  // do not correct

Fig 1 Example of 2 eta slices

 


Fig 2 LEFT: Slope distribution vs. eta-bin, average marked by crosses

RIGHT: predicted gain correction. Correction=1 for strips w/ undetermined gains. 


Fig 3 Predicted gain correction. Correction=1 for ~14K of 18K strips.


Fig 4 The same predicted gain correction vs. stripID.

12) investigating status of P-strips

 
The plot below merges what we found w/ Matt in BSMDP West with what Oleg told us so far about known broken anode wires.
 
The plot shows the 2D location of all 9K strips. It is one half of the plot shown before.
In color are the various failure modes detected by our software processing minB spectra.
One narrow bar denotes one strip.
 
Module # is printed on the right.
Anode wire range is printed on the top.
 
Black labels A1, A3, etc. are from Oleg's table of known broken anode wires from 2007.
 
Conclusions for day 336 of 2007:
a) we do not see good signal from broken anode wires on BSMD-P West.
 
b) we see more broken anode wires/problems in BSMD-P West
  module36: Anode12
 m54: A12
 m24: completely dead
 m30: mostly dead
 m29: odd strips dead
 m23: most of the odd strips dead
 
c) BSMD-P East is terra incognita - no  external info about any problems there
 
Problems listed in b), c) seem to be new for day 336 of 2007
 
 

Fig 1. West BSMD-P

 

Fig 2. East BSMD-P


 

 

13) ver 1.2 : SMD-E, -P, status & relative gains, no Crate4

Automatic BSMD-E, -P   relative calibration and status tables. 

The second pass through both SMDE, SMDP was performed, learning from previous mistakes.

Main changes:

  • relax most of the QA cuts
  • smooth raw spectra to reduce the significance of 1-2 channel wide spikes and dips in the raw spectra
  • extend the slope fit range to ped+[30,100]
  • still do not use crate 4 - it has too many problems
  • reduce margin for gain correction - now 10%+ deviation is corrected.  

INPUT: 3M d-AU events from day ~336 of 2007.

All spectra were pedestal-subtracted, using one value per strip; CAPS 123,124,125 were excluded. Below I'll use the term 'ped' instead of the more accurate 'pedestal residuum'.

Method: fit slopes over ADC = [ped+30, ped+100], or starting from 5*sig(ped) if that is too low.

The spectra,  fits of pedestal residuum, and slopes were QAed.

Note, BSMD crate 4 had old resistors on day 366 of 2007 and was excluded from this analysis.
This reduces the # of strips from 18,000 to 15,750. 

 

Table 1 Definition of QA cuts, all plots (PDF1) 
cut#  code  description                                # discarded E strips  # discarded P strips  figure in PDF1
1     1     at least 10,000 entries in the MPV bin     4                     ?                     -
2     2     sig(ped) of gauss fit < ~13 ADC ch         13                    11                    Fig 1
3     4     position of gauss mean within +/- 4 ADC    10                    8                     Fig 2
4     8     yield from [ped+30,ped+100] out of range   513                   766                   Fig 3
5     16    chi2/dof < 2.3 from slope fit              6                     1                     Fig 4
6     32    slopeError/slope > 10%                     5                     0                     Fig 5
7     64    slope in range                             19                    6                     Fig 6
-     sum   out of 15,750 processed strips, discarded  635 ==> 4.0%          789 ==> 5.0%

 

 

Relative gain corrections for every eta bin 

Method : find average slope per eta slice, fit gauss, determine average slope : avrSlope(iEta)

Gain correction formula is used only for extreme deviations:  

  • gainCorr_i = slope_i/avrSlope(iEta),  i =1,..,18,000
  • if( fabs(1-gainCorr_i)<0.10 ) gainCorr_i = 1.0  // do not correct

PDF2 plots  shows: 

  • Fig 1  SMDE status table, distribution per module
  • Fig 2  SMDP status table, distribution per module
  • Fig 3  SMDE  gain corrections,  changed 3408 strips=22%
  • Fig 4  SMDP  gain corrections,  changed 2067 strips=13%

Summary of BSMDE,P status tables and gains, ver 1.2 

14 Eval of BSMDE status tables for pp 2008, day 49,50

Method:

 From a _private_ production w/o BSMD zero suppression we look at pedestal residua of raw spectra for minB events.

chain="DbV20080703 B2008a ITTF IAna ppOpt VFPPV l3onl emcDY2 fpd ftpc  
trgd ZDCvtx NosvtIT NossdIT analysis Corr4 OSpaceZ2 OGridLeak3D  
beamLine BEmcDebug"

 

The only QA was to require that the MPV of the spectrum is below 100; one run contains ~80K events.


Fig 1

Good spectra look like this:

 


Fig 2. Run 9049053, 80K events,

9,292 strips out of 15,750 tested were discarded by condition MPV>100 eve

TOP: MPV value from all strips. White means 0 (zero) counts. Crate 4 was not evaluated.
BOTTOM: status table: red=bad, white means MPV>100 events


Fig 3. Run 9050022, 80K events,

9,335 strips out of 15,750 tested were discarded by condition MPV>100 eve

TOP: MPV value from all strips. White means 0 (zero) counts
BOTTOM: status table: red=bad, white means MPV>100 events


Fig 4. Run 9050088, 80K events,

9,169 strips out of 15,750 tested were discarded by condition MPV>100 eve

TOP: MPV value from all strips. White means 0 (zero) counts
BOTTOM: status table: red=bad, white means MPV>100 events

15 stability of BSMD peds, day 47 is good

 Peds from run minB 17 were used as reference

Fig1 . pedestal residua for runs 17,29,31

Fig2 . pedestal residua for run 31, full P-plane

Fig3 . pedestal residua for run 31, full E-plane

Fig4 . pedestal residua for run 31, zoom in E-plane

Fig5 . pedestal residua for run 29, West, E-plane red, P-plane black, error=ped error

 

15a ped stability day 47, take 2

 On August 8, BSMD peds in the offline DB were corrected for day 47.

Runs minb 34 & 74 were used to determine and upload DB peds.

Below I evaluated pedestal residua for 2 runs : 37 & 70, both belonging to the same RHIC fill.

I have used 500 zero-bias events from runs 37 & 70, from the official production w/o zero suppression.

All strips for which  mTables->getStatus(iEP, id, statPed,"pedestal"); returns !=1 and all events using CAP123,124,125 were dropped.

Figs 1 & 2 show the big picture: all 36,000 strips.

Fig 3 is a zoom-in on some small & big problems.

Figs 4 & 5 illustrate the improvement if run-by-run pedestals are used. 


Fig 1, run=9047037

 


Fig 2, run=9047070

 


Fig 3, run=9047070, zoom in

 




Private peds were determined for 16 runs of day 47 and used appropriately. Below is the sum of ped residua from all 16 runs, from zero-bias events.

Fig 4, run=9047001,...,83 zoom in

 

Fig 5, run=9047001,...,83 full range

 

16) Time stability by fill of BSMD pedestals

I calculated the pedestals for every PP fill for 2008. This plot shows the pedestal per stripID and fill index. The Z-axis is the value of the pedestal. Only module 13 is shown here, but the full 2D histogram (and others) are in the attached root files.

 

17) Absolute gains , take1

 Goal: reco isolated gammas from bht0,1,2 -triggered events 

Method: identify isolated EM shower and match BSMD cluster energy to tower energy, as exercised earlier on 4) demonstration of absolute calib algo on single particle M-C 

INPUT events: 7,574 events triggered by barrel HT0,1,2 (id 220500 or 220510 or 220520) from run 9047029.

Cluster finder algo (sliding window, 1+3+1 strips),  smd cluster threshold set at 5 keV,  use only barrel West.

Tower cluster is defined as 3x3 patch centered on the tower pointed by the SMD peak.

Assumed BSMD calibration:  

  • ene(GeV)= (adc-ped)*1e-7, one constant for all barrel
  • pedestals, status tables hand tuned, some modules are disabled, but crate 4 is on

Results for ~3.8K barrel-triggered events (half of 7.6K was not used)


Fig 1, Any  Eta-cluster

TOP: a) Cluster (Geant) energy;

b) Cluster RMS, peak at 0.5 is from low energy pair of isolated strips with almost equal energy

c) # of cluster per event, 

BOTTOM: X-axis is eta location, 20 bins span eta [-1,+1]. d) cluster ene vs. eta, e) cluster RMS vs. eta,

f) cluster yield vs. eta & phi, white bands are masked modules.


Fig 2, Any  Phi-cluster

see Fig 1 for details

 


Fig 3, Isolated EM shower

TOP: a) cluster loss on subsequent cuts, b) # of accepted EM cluster vs. eta location,

c) ADC distribution of 3x3 tower cluster centered at SMD cluster. In principle you should see there 3 edges from bht0, bht1, and bht2 trigger.

BOTTOM: X-axis is eta location, 20 bins span eta [-1,+1].d) Eta-cluster , e) phi-cluster energy, f) hit tower ADC .

 


Fig 4a,b, Calibration plots

TOP: BSMD Eta vs. Phi  as function of pseudorapidity.
BOTTOM: BSMD vs. BTOW as function of pseudorapidity.

2 eta locations, 0.1 and 0.5, of the reco EM cluster are shown in 3 panels (2x2)

1D plots are ratios of the respective 2D plots.

The mean values of the 1D fits are the relative gains of BSMDP/BSMDE and BSMD/BTOW.

 


Fig 4c, Same as above, eta=0.9

 

18 Absolute gains, take 2

 Goal: reco isolated gammas from bht0,1,2 -triggered events 

Method: identify isolated EM shower and match BSMD cluster energy to tower energy, as exercised earlier on 4) demonstration of absolute calib algo on single particle M-C 

INPUT events: 100K events triggered by barrel HT0,1,2 (id 220500 or 220510 or 220520) from day 47, runs 1..83

Cluster finder algo (sliding window, 1+4+1 strips),  smd cluster threshold set at 10 keV,  use only barrel West, BSMD CR=4 masked out.

Tower cluster is defined as 3x3 patch centered on the tower pointed by the SMD peak, must contain 90% of energy from 5x5 cluster.

Default pedestals from offline DB used.

Assumed BSMD calibration: see table 1 column J+K  

Results for ~25K barrel triggered events (7/8 of 100K was not used)

 Fig 1 is above


Fig 2, Eta strips, any cluster


Fig 3 Phi strips, any cluster


Fig 4  Isolated clusters (a different sort). Plot c has a huge peak at 0 - the X-axis is chopped. A similar but smaller peak is in fig d. Magenta are events with the bht0 and bht2 triggers.


Fig 5  isolated cluster :

Left: eta & phi plane coincidence--> works,

Right: eta & phi & tower 3x3>150 fails for modules 30-60 - do I have a mapping problem?? 


Fig 6  Example of Eta vs. Phi  and SMD vs. Tower calibrations for eta bins 0.15, 0.5, and 0.85.

19) Absolute BSMD Calibration, table ver2.0, Isolated Gamma Algo description

 BSMD calibration algo has been developed based on M-C response of BSMD & towers to single gammas.

Executive summary:

The purpose of the BSMD absolute calibration summarized on this drupal page is to reconstruct the integrated energy deposit (dE) in BSMD based on the measured ADC.
By integrated dE in BSMD I mean the sum over the few strips forming an EM cluster, no matter what the cluster shape is.
This calibration method accounts for the varying absorber in front of BSMD and between the eta & phi planes.
This calibration will NOT help in reconstructing:  
- the full energy of an EM particle absorbed in the BEMC (shower development after the BSMD layer does not matter for this calibration)
- the partial energy of hadrons passing through or showering in the BEMC
- corrections for the incident angle of the particle passing through the detector
- saturation of the BSMD readout. I only state up to which energy deposit (dE) the formula used in reconstruction,
      dE/GeV = (rawAdc-ped) * C0 * (1 +/- C1_etaBin),
  remains valid
- determination of the BSMD sampling fraction (SF) with high accuracy 

Below you will find a brief description of the algo, a side-by-side comparison of selected plots for M-C and real data, and finally a PDF with many more plots.

Proposed absolute calibration coefficients are shown in table 2.

 

Part 1

Description of algorithm finding isolated gammas in the Barrel.

Input events

  • M-C :  single gamma per event, 6 GeV ET, flat in eta, phi covers 3 barrel modules 12,13,14, geometry=y2006
  • DATA: BHT0,1,2 triggered pp 2008 events from day 47, total 100K, individual triggers: 44K, 33K, 40K, respectively
    events were privately produced w/ zero suppression.

Raw data processing based on muDst

  • M-C : BSMD - take geant energy deposit  ~100 keV range, towers - take ADC*0.493 to have nominal calibration of 4070 ADC=60 GeV ET.
  • Data : BSMD - use private  pedestals & status tables for day 47, use custom calibration
    use BSMD calibration dE/GeV= (rawAdc-ped) * C0 * (1 +/- C1etaBin), where '+' is for Phi-plane and '-' for eta plane, see table below
    skip strips less than 4 sigma above ped or with energy below 1 keV; strip-to-strip relative gains NOT used; data from CAPs 123,124,125 were not used
     towers- take ADC as is, no offline gains correction.

Cluster finder algo (seed is sliding fixed window), tuned on M-C gamma events

  • work with 150 Eta-strips per module or 900 Phi-strips at fixed eta
  • all strips are marked as 'unused'
  1. sum  dE in fixed window of 4 unused strips, snap at location which maximizes the energy
  2. if sum below 10 keV STOP searching for clusters in this band
  3. add energy from one strip on each side, mark all 1+4+1 strips as 'used'
  4. compute energy weighted  cluster position and RMS
  5. goto 1

This cluster finder processes the full Barrel West; more details about clustering are in 'one cluster topology, definition of barrel cell' 
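
A minimal sketch of this sliding-window finder for one band of strips (energies in GeV; the Cluster struct and function names are illustrative, not the production code):

  #include <vector>
  #include <cmath>
  #include <algorithm>

  struct Cluster { float pos, rms, ene; };   // strip-index position, RMS, energy (GeV)

  std::vector<Cluster> findClusters(const std::vector<float> &eStrip) {
    const int   W      = 4;                  // fixed window of 4 strips
    const float cluThr = 10.0e-6;            // 10 keV cluster threshold
    std::vector<bool> used(eStrip.size(), false);
    std::vector<Cluster> out;
    while (true) {
      // 1. slide the 4-strip window over unused strips, snap at the max-energy location
      int best = -1; float bestSum = 0;
      for (int i = 0; i + W <= (int)eStrip.size(); i++) {
        float sum = 0; bool allFree = true;
        for (int k = 0; k < W; k++) { if (used[i+k]) { allFree = false; break; } sum += eStrip[i+k]; }
        if (allFree && sum > bestSum) { bestSum = sum; best = i; }
      }
      if (best < 0 || bestSum < cluThr) break;           // 2. stop if below 10 keV
      int lo = std::max(best - 1, 0);                     // 3. add one strip on each side ...
      int hi = std::min(best + W, (int)eStrip.size() - 1);
      float sum = 0, sumX = 0, sumXX = 0;
      for (int j = lo; j <= hi; j++) {
        used[j] = true;                                   // ... and mark all 1+4+1 strips as used
        sum += eStrip[j]; sumX += eStrip[j]*j; sumXX += eStrip[j]*j*j;
      }
      Cluster c;                                          // 4. energy-weighted position and RMS
      c.ene = sum;
      c.pos = sumX / sum;
      c.rms = std::sqrt(std::max(0.f, sumXX/sum - c.pos*c.pos));
      out.push_back(c);                                   // 5. goto 1
    }
    return out;
  }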

Isolated EM shower has been selected as follows, tuned on gamma events,

  • select isolated eta-cluster in every segment of 15 eta strips.
  • require cluster center is at least 3 strips away from edges  of this segment (defined by eta values of 0.0, 0.1, 0.2,....0.9, 1.0)
  • require there is only one phi-cluster in the same 0.1x0.1 eta.phi cell
    • require phi-cluster center is at least 3 strips from the edges
  • find hit tower matching to the cross of eta & phi cluster
  • sum tower energy from 3x3 patch centered on hit tower
    • require 3x3 tower  ADC sum >150 ADC (equivalent to 2.2 GeV ET, EM)
  • sum tower energy from 5x5 patch centered on hit tower
    • require 3x3 sum/ 5x5 sum >0.9  
  • require RMS of Phi & Eta-cluster  is above 0.2 strips

 Below is listing of all cuts used by this algo:

  useDbPed=1; // 0= use my private peds
  par_skipSpecCAP=1; // 0 means use all BSMD caps
  par_strWinLen=4; (3)  // length of integration window, total  1+4+1, in strips
  par_strEneThr=1.e-6;  (0.5e-6) // GeV, energy threshold for strip to start cluster search
  par_cluEneThr=10.0e-6; (2.0e-6) // GeV, energy threshold for cluster in window
  par_kSigPed=4.; (3) // ADC threshold
  par_isoRms=0.2;  (0.11) // minimal smd 1D cluster RMS 
  par_isoMinT3x3adc=150; //cut off for low tower response
par_isoTowerEneR=0.9; //  ratio of 3x3/5x5 cluster
(in red are adjusted values for MIP or ET=1GeV cluster selection)  

 Table 1 Tower cluster cut defines energy of isolated gammas.

3x3 tower ET (GeV)                       MIP            1.0            3.4          4.7          5.5          7
trigger used                             BHT0,1,2       BHT0,1,2       BHT0         BHT1         BHT2         BHT2
3x3 tower ADC sum range                  15-30 ADC      50-75 ADC      170-250 ADC  250-300 ADC  300-380 ADC  400-500 ADC
3x3 energy & RMS (GeV) @ eta=[0.1,0.2]   0.34 +/- 0.06  0.92 +/- 0.11  3.1 +/- 0.3  4.1 +/- 0.2  5.1 +/- 0.3  6.6 +/- 0.4
3x3 energy & RMS (GeV) @ eta=[0.4,0.5]   0.37 +/- 0.07  1.0 +/- 0.11   3.4 +/- 0.4  4.6 +/- 0.3  5.6 +/- 0.4  7.3 +/- 0.5
3x3 energy & RMS (GeV) @ eta=[0.8,0.9]   0.47 +/- 0.09  1.3 +/- 0.16   4.3 +/- 0.4  5.7 +/- 0.3  7.1 +/- 0.5  9.3 +/- 0.6

  

Table 2 shows assumed calibration.   

It contains the relative calibration of the eta vs. phi plane (different for M-C vs. data),
and a single absolute DATA normalization of the ratio of the SMD (Eta+Phi) cluster energy vs. the 3x3 tower cluster at eta=0.5.

Table 3 shows what comes from data & M-C analysis using calibration from table 2.

 


Part 2

Side by side comparison of M-C and real data. 

Fig 2.1 BSMD "Any cluster" properties

TOP : RMS vs. energy, only Eta-plane shown, Phi-plane looks similar
BOTTOM: eta -phi distribution of found clusters. Left is M-C - only 3 modules were 'populated'. Right is data, white bands are masked modules or whole BSMD crate 4

 

 


Fig 2.2  Crucial  cuts after coincidence & isolation was required for a pair BSMD Eta & Phi clusters 

TOP :  3x3 tower energy (black), hit-tower energy (green) , if 3x3 energy below 150 ADC cluster is discarded
BOTTOM: eta dependence of 3x3 cluster energy. M-C has 'funny' calibration - there is no reason for U-shape, Y-value at eta=0.5 is correct by construction.


Fig 2.3  Non-essential cuts, left in by inertia 

TOP :  ratio of 3x3 tower energy to 5x5 tower energy ,  rejected if below 0.9 
BOTTOM:  RMS of Eta & Phi cluster must be above 0.2, to exclude single strip clusters


Part 3

Examples of relative response of BSMD Eta vs. Phi AFTER  calibration above is applied. 

I'm showing examples for 3 eta slices of 0.15, 0.55, 0.85 -  plots for all eta bins  are available as PDF, posted in Table 2 at the end.

The red vertical line marks the target calibration; the first 2 columns are aligned by definition, the 3rd column is an independent measurement confirming that the calibration for data holds at ~40% lower gamma energy.

Fig 3.1 Phi-cluster vs. Eta cluster for eta range [0.1,0.2]. M-C on the left, data in the middle, right.


Fig 3.2 Phi-cluster vs. Eta cluster for eta range [0.4,0.5]. M-C on the left, data in the middle and right.


Fig 3.3 Phi-cluster vs. Eta cluster for eta range [0.8,0.9]. M-C on the left, data in the middle and right.


Fig 3.4 Phi-cluster vs. Eta cluster for eta range [0.9,1.0]. M-C on the left, data in the remaining columns.

 

Part 4

Absolute response of BSMD (Eta + Phi) vs. 3x3 tower cluster,  AFTER  calibration above is applied. 

I'm showing the eta slice [0.4,0.5] used to set the absolute scale. The red vertical line marks the target calibration; the first 2 columns are aligned by definition, the 3rd column is an independent measurement for gammas with ~40% lower energy --> the BSMD response is NOT proportional to the gamma energy.

Fig 4.1 Phi-cluster vs. Eta cluster for eta range [0.4,0.5]. Only data are shown.


Fig 4.2 Absolute BSMD calibration for eta range [0.0,0.1] (top) and eta range [0.1,0.2] (bottom) . Only data are shown.
Left: Y-axis is the BSMD(E+P) cluster energy, y-error is the error of the mean; X-axis is the 3x3 tower cluster energy, x-error is the RMS of the distribution. The fit (thick magenta) uses only the 4 middle points - I trust them more. The MIP point is too high due to the necessary SMD cluster threshold, the 7 GeV point has very low stat. There is no artificial point at 0,0. The dashed line is an extrapolation of the fit.
Right: only slope param (P1) from the left is used to compute full BSMD Phi & Eta-plane calibration using formulas:

slope P1_Eta=P1/2./(1-C1[xCell])/C0
slope P1_Phi=P1/2./(1+C1[xCell])/C0
Using C1[xCell],C0 from table 2.


Fig 4.3 Absolute BSMD calibration for eta range [0.2,0.3] (top) and eta range [0.3,0.4] (bottom) . Only data are shown, description as above.


Fig 4.4 Absolute BSMD calibration for eta range [0.4,0.5] (top) and eta range [0.5,0.6] (bottom) . Only data are shown, description as above.


Fig 4.5 Absolute BSMD calibration for eta range [0.6,0.7] (top) and eta range [0.7,0.8] (bottom) . Only data are shown, description as above.


Fig 4.6 Absolute BSMD calibration for eta range [0.8,0.9] (top) and eta range [0.9,0.95] (bottom) . Only data are shown, description as above.

I'm showing the last eta bin because it is completely different - I do not understand it at all. It was different on all plots above - just reporting here.


Fig 4.7 Expected BSMD gain dependence on HV, from Oleg's document. The 2008 working HV=1430 V (same for eta & phi planes) is in the middle of the measured gain curve.

 Part 5

Possible extensions of this algo.

  1. cover also East barrel (for the cross check)
  2. include vertex correction in projecting SMD cluster to tower (perhaps)
  3. study energy resolution of eta & phi plane - now I just compensated relative gains but the total BSMD energy is simply sum of both planes
  4. last eta bin [0.9,1.0] is completely different, e.g. there is no MIP peak in the 2D fig 2.2 - the BTOW gain (HV) is a factor 2 or more too high in these 2 bins 
    Justification: 
    Inspect right plot on figures 4.2,...,4.6, in particular note at what gamma energy the blue line reaches ADC of 1000 counts. Look at this pattern vs. eta bin. On the last plot it should happen at gamma energy of ~5 GeV but in reality it is at ~10 GeV.    
  5. crate 4 (unmodified) would have different gains - excluded in this analysis
  6. Speculation: those multiple peaks in raw BSMD spectra (seen by others) could be correlated with the BHT0,1,2 triggers
  7. Scott's suggestion: a more detailed study of BSMD saturation could use the BSMD cluster location for a fiducial cut forcing the gamma to be in the tower center, and use just the hit tower. This needs more stats. This analysis uses 1 day of data and ends up with just ~100 entries per energy point.
  8. non-linear BSMD response does not mean we can reco cluster position with accuracy better than 1 strip.

 

 


Fig 5.1 BSMD cluster energy vs. eta of the cluster.

 

 


Fig 5.2 hit tower to 3x3 cluster energy for accepted clusters. DATA, trigger BHT2, gamma ET~5.5 GeV.

 

 


Fig 5.3 hit tower to 3x3 cluster energy for accepted clusters. M-C, single gamma ET=6 GeV, flat in eta .

 

20 BSMD saturation

 The isolated BSMD cluster algo allows selecting different ranges of tower cluster energy, as shown in Fig. 1.

Fig 1. Tower energy spectrum, marked range [1.2,1.8] GeV.

In the analysis 5 energy tower slices were selected: MIP, 1.5 GeV, and around BHT0,1,2 thresholds.

The plots below show an example of calibrated BSMD (eta+phi) cluster energy vs. tower cluster energy. (I added a point at zero, with an error as for the next point, to constrain the fit.) 

Fig 2. BSMD vs. tower energy for eta of 0.15, 0.55, and 0.85.

I'm concerned we are beyond the middle of the BSMD dynamic range for ~6 GeV (energy) gammas at eta 0.5. Also one may argue we already see saturation.

If we want BSMD to work up to 40 GeV ET we need to think a lot how to accomplish that.

Below is a dump of one event contributing to the last dot on the middle plot. It always helps me to think if I see a real raw event.

BSMDE
  i=526, smdE-id=6085 rawADC=87.0 ped=71.4  adc=15.6 ene/keV=0.9
  i=527, smdE-id=6086 rawADC=427.0 ped=65.0  adc=362.0 ene/keV=20.0
  i=528, smdE-id=6087 rawADC=814.0 ped=71.8 adc=742.2 ene/keV=41.0
  i=529, smdE-id=6088 rawADC=92.0 ped=66.4  adc=25.6 ene/keV=1.4


BSMDP
  i=422, smdP-id=6086 rawADC=204.0 ped=99.3  adc=104.7 ene/keV=7.8
  i=423, smdP-id=6096 rawADC=375.0 ped=98.5  adc=276.5 ene/keV=20.3
  i=424, smdP-id=6106 rawADC=692.0 ped=100.1  adc=591.9 ene/keV=47.3


2D cluster
bsmdE CL  meanId=6086 rms=0.80 ene/keV=66.80 inTw 1632.or.1612 
bsmdP CL  meanId=6106 rms=0.68 ene/keV=75.45 inTw 1631.or.1632 

BTOW
  id=1631 rawADC=43.0 ene=0.2 ped=30.0,  adc=13.0
  id=1632 rawADC=401.0 ene=6.9 ped=32.4  adc=368.6
  id=1633 rawADC=43.0 ene=0.1 ped=35.5   adc=7.5

 gotTwId=1632
 gotTwAdc=368.6
tow3x3 sum=405.7 ADC
3x3Tene=7.3GeV 

 

1) M-C : response of BSMD , single particles (Jan)

 Below are studies of the BSMD response and of the calibration algo, based on the BSMD response to single-particle M-C

1) BSMD-E clusters, sliding max, fixed width

 Goal: study SMDE energy resolution and cluster shape for single particles M-C

Input: single particle per event, fixed ET=6 GeV, flat eta [-0.1,1.1], flat  |phi| <5 deg, 500 eve per sample, Geant geometry y2006

Cluster finder algo (sliding fixed window), tuned on electron events

  • work with 150 Eta-strips per module
  • sum geant dE in fixed window of 4 strips
  • maximize the sum, compute energy weighted  cluster position and RMS inside the window
  • algo quits after first cluster found in given module 

Example of BSMDE response for an electron:

McEve BSMD-E dump, dE is Geant energy sum from given strip, in GeV

  dE=1.61e-06  m=104 e=12 s=1 stripID=15462  

  dE=2.87e-05  m=104 e=13 s=1 stripID=15463

  dE=8.35e-06  m=104 e=14 s=1 stripID=15464

  dE=1.4e-06  m=104 e=15 s=1 stripID=15465

 ALL plots below have energy in keV, not MeV - I'll not change the plots.

Results:

Fig 1. gamma - later, job crashed.


Fig 2. electron

 


Fig 3. pi0

 


Fig 4. eta

 


Fig 5. pi minus

 


Fig 6. mu minus

2) BSMDE , 1+3+1 sliding cluster finder

 Goal: test SMDE cluster finder  on single particles M-C

Input: single particle per event, fixed ET=6 GeV, flat eta [-0.1,1.1], flat  |phi| <5 deg, 5k eve per sample, Geant geometry y2006

Cluster finder algo (seed is sliding fixed window), tuned on pi0 events

  • work with 150 Eta-strips per module
  • all strips are marked as 'unused'
  • use only module 13, covering ~1/3 of probed phase space
  1. sum geant dE in fixed window of 3 unused strips, snap at location which maximizes the energy
  2. if sum below 5 keV STOP searching for clusters in this module
  3. add energy from one strip on each side, mark all 1+3+1 strips as 'used'
  4. compute energy weighted  cluster position and RMS
  5. goto 1

Example of BSMDE response for pi0:

...
strID=1932 u=0 ene/keV=0
strID=1933 u=1 ene/keV=0    +
strID=1934 u=1 ene/keV=2.0  *
strID=1935 u=1 ene/keV=48.2 *X
strID=1936 u=1 ene/keV=3.9  *
strID=1937 u=1 ene/keV=0.8  +
strID=1937 u=0 ene/keV=0
strID=1938 u=0 ene/keV=0
strID=1939 u=2 ene/keV=1.5  +
strID=1940 u=2 ene/keV=8.2  *
strID=1941 u=2 ene/keV=28.1 *X
strID=1942 u=2 ene/keV=13.8 *
strID=1943 u=2 ene/keV=4.0  +
strID=1944 u=0 ene/keV=5.6  
strID=1945 u=0 ene/keV=0.5
strID=1946 u=0 ene/keV=0
strID=1947 u=0 ene/keV=0
...

 

 

particle   any cluster found in the module, all events   only events with exactly 2 found clusters
           Fig 1a                                         Fig 1b
e-         Fig 2a                                         Fig 2b
pi0        Fig 3a                                         Fig 3b
eta

 

3) same algo applied to full East BSMDE,P

 

Smd cluster finder with sliding window of 3 strips + one strip on each end (total 5 strips) applied to all 9000 BSMDE,P strips, one gamma per event

 

 

 

 

4) demonstration of absolute calib algo on single particle M-C

 Goal: determine absolute calibration of BSMDE,P planes

Method: identify isolated EM shower and match BSMD cluster energy to tower energy

INPUT events: single particle per event, fixed ET=6 GeV, flat eta [-0.1,1.1], flat  |phi| <5 deg, 5k eve per sample, Geant geometry y2006

Cluster finder algo (seed is sliding fixed window), tuned on pi0 events

  • work with 150 Eta-strips per module or 900 Phi-strips at fixed eta
  • all strips are marked as 'unused'
  • use only module 13, covering ~1/3 of probed phase space
  1. sum geant dE in fixed window of 3 unused strips, snap at location which maximizes the energy
  2. if sum below 5 keV STOP searching for clusters in this module
  3. add energy from one strip on each side, mark all 1+3+1 strips as 'used'
  4. compute energy weighted  cluster position and RMS
  5. goto 1

This cluster finder processes the full Barrel West; more details about clustering are in 'one cluster topology, definition of barrel cell' 

Isolated EM shower has been selected as follows, tuned on gamma events,

  • select isolated eta-cluster in every segment of 15 eta strips.
  • require cluster center is at least 3 strips away from edges  of this segment (defined by eta values of 0.0, 0.1, 0.2,....0.9, 1.0)
  • require there is only one phi-cluster in the same 0.1x0.1 eta.phi cell
  • require phi-cluster center is at least 3 strips from the edges
  • find tower matching to the cross of eta & phi cluster
  • require this tower has ADC>100

Example of EM cluster passing all those criteria is below:

smdE: ene/keV= 40.6   inTw 451.or.471
 cell(15,12),   jStr=7 in xCell=15
...
id=1731  ene/keV=4.9  *
id=1732  ene/keV=34.3  X *
id=1733  ene/keV=1.5  *
...
---- end of SMDE dump


smdP: ene/keV= 28.5  inTw 471.or.472
  cell(15,12),   jStr=7 in xCell=15
...
id=1746  ene/keV=2.7  *
id=1756  ene/keV=22.0  X *
id=1766  ene/keV=3.7  *
...
---- end of SMDP dump

muDst BTOW
  id=451, m=12 rawADC=12.0 
* id=471, m=12 rawADC=643.0
  id=472, m=12 rawADC=90.0 
  id=473, m=12 rawADC=10.0 

 Results for gamma events

will be shown with more details. The following PDF files contain the full set of plots for all other particles.

 

particle   # of eve   plots
gamma      25K        PDF
e-         50K        PDF
pi0        50K        PDF
eta        50K        PDF
pi-        50K        PDF

 

 


Fig 1, Any  Eta-cluster, single gamma,  25K events

TOP: a) Cluster (Geant) energy; b) Cluster RMS, c) # of cluster per event, 

BOTTOM: X-axis is eta location, 20 bins span eta [-1,+1]. d) cluster ene vs. eta, e) cluster RMS vs. eta, f) cluster yield vs. eta & phi.


Fig 2, Any  Phi-cluster, single gamma,  25K events

see Fig 1 for details

 


Fig 3, Isolated EM shower, single gamma,  90K events

TOP: a) cluster loss at subsequent cuts, b) # of accepted EM clusters vs. eta location, c) ADC distribution of the hit tower (some weird gains are in the default M-C); tower ADC is in ET

BOTTOM: X-axis is eta location, 20 bins span eta [-1,+1]. d) Eta-cluster , e) phi-cluster energy, f) hit tower ADC .

 


Fig 4, Calibration plots, single gamma,  90K events.

TOP: BSMD Eta vs. Phi  as function of pseudorapidity.
BOTTOM: BSMD vs. BTOW as function of pseudorapidity.

3 eta locations of 0.1, 0.5, 0.9 of the reco EM cluster are shown in 3 panels (2x2)

1D plots are ratios of the respective 2D plots.

The mean values of the 1D fits are the relative gains of BSMDP/BSMDE and BSMD/BTOW, determined for 10 slices in pseudorapidity. Game over :).

 


Fig 5, Same as above, eta=0.1, single pi0, 50K events.

 


Fig 6, Same as above, eta=0.1, single pi minus, 50K events.

 


Below are PDF plots for all particles:

Correction - label on the X-axis for 1D plots is not correct. I did not apply log10() - a regular ratio is shown, sorry.

5) Evaluation of BSMD dynamic range needed for the W program at STAR, ver 1.0

M-C  study of BSMD response to high energy electrons

 


Attachment 1:

Figs 1 & 2 recall the actual (pp data based) calibration for 2 eta locations of 0.1 and 0.8, presented earlier.

Table 1 shows the M-C simulation of the average cluster energy (deposit in a BSMD plane), its spread, and its width as a function of electron ET, separately for the eta- & phi-planes of BSMD.
As expected, the BSMD sampling fraction (SF, red column) is not constant but drops with the energy of the electron.
The BSMD SF(ET) deviates from constant by less than 20% - it is a small effect.

Figs 3,4,5 show the expected BSMD response to M-C electrons with ET of 6, 20, and 40 GeV. Only for the lowest energy does the majority of EM showers fit into the dynamic range of BSMD, which ends at an energy deposit of about 60 keV per plane.
I was trying to be generous and drew the red line at dE~90 keV.
The RMS of a BSMD cluster is about 0.5 strips, so the majority of the energy is measured by just 2 strips (amplifiers). Such a narrow cluster lowers the saturation threshold.

Fig 6 shows the BSMD cluster energy for PYTHIA W-events.
Fig 7 shows the similar response to PYTHIA QCD events.

Compare the areas marked with the red oval - there is a strong correlation between BSMD energy and electron energy, and it would not be wise to forgo it in the e/h algorithm.

Conclusion:

The attached slides show that the 2008 HV setting of the BSMD would lead to full saturation of the BSMD response for electrons from W decay with ET as low as 20 GeV, i.e. it would reduce the BSMD dynamic range to 1 bit.

 


For the reference:

* absolute BSMD calibration based on 2008 pp data.
http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/bsmd/2008-bsmd-calibration/19-isolated-gamma-algo-description-set-2-0

* the current BSMD HV is set very high, leading to saturation of the BSMD at a gamma energy of 7-10 GeV, depending on eta and plane. (Let's ignore the difference between E & ET for this discussion, for now.)
http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/bsmd/2008-bsmd-calibration/20-bsmd-saturation

*  STAR priorities for 2009 pp run presented at Apex: http://www.c-ad.bnl.gov/APEX/APEXWorkshop2008/talks/Dunlop_Star_Apex_2008.pdf

* Attachment 2,3 show BSMD-E, -P response for electrons with  ET: 4,6,8,10,20,30,40,50 GeV and to Pythia W, QCD events (in this order)

Definition of absolute BSMD calibration

NOT FINISHED

Definitions of quantities used for empirical calibration of BSMD.

Revised January, 26, 2009

A) Model of the physics process (defines quantities: E, eta, smdE, smdEp, smdEe, C0, C1)

  1. A gamma particle with fixed energy E enters the EMC projectively at a fixed pseudo-rapidity eta. Eta is defined in the detector reference frame.
  2. The EM shower develops and the BSMD (consisting of 2 planes) captures smdEtot of the shower energy. A single plane captures smdE=0.5*smdEtot. The SMD cluster energy from a single plane is denoted as smdE.
  3. The SMD consists of 2 planes: the eta-plane closer to the IP and the outer phi-plane. Each plane captures a non-equal fraction of the energy deposited in the BSMD: smdEe and smdEp, respectively.  

    The following relation holds:
                 smdEp(E,eta) =smdE(E) * [1-C1(eta)] 
                 smdEe(E,eta) =smdE(E) * [1+C1(eta)] 

    where the eta-dependent coefficient C1(eta) accounts for all physical processes that differentiate the fraction of the captured shower energy between the eta- and phi-planes along the Z-direction. 
    This allows reconstruction of the full BSMD energy deposit, independent of the gamma angle theta, provided the cluster energy is measured in both planes. 
  4. The BSMD cluster energy is measured in each plane by a few consecutive strips, for which:
    1. the response is linear, and
    2. the strip-to-strip local hardware gain variation is negligible (the "long-wave" variation is accounted for in C1(eta)).
      Note, there are 4 low-gain strips (id=50,100) in the eta-plane, seen in fig 2 of 08) SMD-E gain equalization, ver 1.1, which require the ADC to be rescaled appropriately.

  5. The overall conversion constant C0=6.5e-8 (GeV/ADC chan) allows reconstruction of the BSMD cluster energy in a given plane based on the sum of ADCs from all strips participating in the cluster:
             smdEp(E,eta) = C0 * sum{ ADC_i - ped_i }, summed over the cluster of a few strips; a similar formula holds for smdEe(E,eta).
    The value of C0 was determined based on 19) Absolute BSMD Calibration, table ver2.0, Isolated Gamma Algo description, table 2. Gammas with ET=6 GeV were thrown at EMC and resulting SMD cluster ADC sum was matched to the average value seen for 2008 pp data.
  6. To summarize:
    The cluster energy reconstructed in each plane, using C0 & C1 to account for the eta dependence, is

                 smdE(E) =C0* sum{ ADC_i - ped_i}/[1-C1(eta)] for phi-plane cluster

                 smdE(E) =C0* sum{ ADC_j - ped_j}/[1+C1(eta)]  for eta-plane cluster
                  

    These two quantities are well suited for placing cuts. 

B) Determination of C1(eta) was based on 19) Absolute BSMD Calibration, table ver2.0, Isolated Gamma Algo description, from crates 1,2,and 4. 

Data analysis was done for 10 pseudo-rapidity  ranges [0,0.1], [0.1,0.2] ,..., as shown in table 2, row labeled 'DATA'. 

For practical applications an analytical approximation is provided,

    C1(eta) = C1_0 + C1_1*|eta| + C1_2*eta*eta

which is symmetric in positive/negative pseudo-rapidity.  

The numerical values of expansion coefficients are: C1_0=0.014, C1_1=0.015, C1_2=0.333
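A numerical sketch of the calibration formulas from A.5, A.6 and B; only C0 and the C1 expansion coefficients are taken from the text, and the strip lists in the example are made up:

# Sketch of the empirical BSMD calibration formulas above.
C0 = 6.5e-8                      # GeV per ADC channel
C1_COEF = (0.014, 0.015, 0.333)  # C1_0, C1_1, C1_2

def c1(eta):
    """Analytical approximation C1(eta), symmetric in +/- eta."""
    a0, a1, a2 = C1_COEF
    return a0 + a1 * abs(eta) + a2 * eta * eta

def smd_cluster_energy(adc, ped, eta, plane):
    """Reconstructed single-plane cluster energy smdE in GeV.
    adc, ped: per-strip ADC values and pedestals for the cluster strips."""
    adc_sum = sum(a - p for a, p in zip(adc, ped))
    if plane == 'phi':                     # phi-plane captures smdE*[1 - C1(eta)]
        return C0 * adc_sum / (1.0 - c1(eta))
    return C0 * adc_sum / (1.0 + c1(eta))  # eta-plane captures smdE*[1 + C1(eta)]

# example: a 3-strip phi-plane cluster at eta = 0.5 (illustrative numbers)
print(smd_cluster_energy([140., 420., 160.], [100., 100., 100.], 0.5, 'phi'))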

 

C) Modeling of BSMD response in STAR M-C

  1. find geantDE, the GEANT energy deposit, for a given BSMD strip
  2. undo the +/-7% difference between the eta/phi planes simulated by GEANT (see 19) Absolute BSMD Calibration, table ver2.0, Isolated Gamma Algo description, row 'M-C'): 
           geantDEp=geantDE*0.93
           geantDEe=geantDE*1.07
  3. compute ADC for every i-th strip & plane
        ADCp_i= geantDEp/[1-C1(eta)]/C0 
        ADCe_i= geantDEe/[1+C1(eta)]/C0
    1. If NO saturation is assumed, that is all - use the ADC values in reconstruction.
    2. To simulate full ADC saturation at 1024, assume the pedestal is at ADC=100 and saturate the values of ADCp_i, ADCe_i at 924, then proceed to reconstruction (a sketch follows this list).  
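A minimal sketch of this forward model, combining the C1(eta) approximation from section B with the saturation option from step 3.2:

# Sketch of the M-C ADC model in C): strip GEANT deposit -> ADC, with optional
# full saturation at 1024 counts assuming the pedestal sits at ADC=100 (924 above it).
C0 = 6.5e-8                      # GeV per ADC channel
PED, ADC_MAX = 100, 1024

def strip_adc(geant_de, eta, plane, simulate_saturation=False):
    """ADC (above pedestal) for one strip given its GEANT energy deposit in GeV."""
    a0, a1, a2 = 0.014, 0.015, 0.333           # C1(eta) expansion coefficients
    c1 = a0 + a1 * abs(eta) + a2 * eta * eta
    if plane == 'phi':
        de = geant_de * 0.93                   # undo GEANT's +/-7% eta/phi difference
        adc = de / (1.0 - c1) / C0
    else:
        de = geant_de * 1.07
        adc = de / (1.0 + c1) / C0
    if simulate_saturation:
        adc = min(adc, ADC_MAX - PED)          # saturate at 924 counts above pedestal
    return adc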

 

Mapping, strip to tower distance

For every strip we find the closest tower and determine the distance between tower center and strip center.

Both plots show the strip ID on the X-axis and the tower ID on the Y-axis. The error bar in Y is the distance between the centers in cm.

It is not exactly true that 15 eta strips cover every 2 towers; this is only approximate, since the strip pitch is constant (bimodal) while the tower width changes continuously.

There is still a small problem: the strip Z is calculated at R_smd while the tower Z is calculated at the entrance of the tower, leading to a clear parallax error - we are working on this. 


Fig 1.

The top plot is for SMDE for module 1. Note that an E-strip always spans 2 towers, and we used tower IDs within one module.

The bottom plot shows the phi strips for module 1. 

 

 


Fig 2.

The top plot is for SMDE for modules 1-4. 

The bottom plot shows the phi strips for modules 1-4.

 

 


 

Run 10 BSMD Calibrations

Parent page for BSMD Run 10 Calibration

BSMD Status Table in run10 AuAu200GeV runs

==========

We started this task by looking at BSMD in Run 10 with some MuDst files, and found the pedestals probably need to be QA-ed as well.

http://www.star.bnl.gov/HyperNews-star/get/emc2/3500.html

and

http://www.star.bnl.gov/HyperNews-star/get/emc2/3515.html

==========

Then we started to look for non-zero-suppressed (NZS) BSMD data, and found that we have to run through the daq files to produce the NZS data files. The daq files are stored on HPSS, and we had to transfer them onto RCF. The transferred daq files were then made into root files with Willie's BSMD online monitoring code.

 

http://www.star.bnl.gov/HyperNews-star/get/emc2/3533.html

==========

We decided to use the same criteria and status codes as Willie used for the run09 pp500GeV data.

http://drupal.star.bnl.gov/STAR/blog-entry/wleight/2009/may/13/bsmd-status-cuts-and-parameters

==========

We discovered that a modification is needed to the criterion for code bit 3 (the ratio of the integral over a window to the integral over all, i.e. the pedestal integral ratio). We found that about half of the BSMD strips fail the criterion of this ratio > 0.95, but most of them satisfy ratio > 0.90, so the criterion was loosened to 0.90. We think this is a reasonable difference between run09 pp and run10 AuAu collisions.

http://www.star.bnl.gov/HyperNews-star/get/emc2/3546/1/1.html

==========

The daq files are huge, on the order of a TB for one day. In order not to disturb the run10 data production, we copied only 1/10~1/20 of the daq files.

http://www.star.bnl.gov/HyperNews-star/get/emc2/3589/2.html

==========

After a long period of transferring and root-file making, almost all the days between Jan/02/2010 and Mar/17/2010 were done.

http://www.star.bnl.gov/HyperNews-star/get/emc2/3601.html

We found that another criterion has to be modified, because we use NZS data for the QA of the tail of the ADC spectrum. The definitions of the tail ranges and the limits are adjusted. Three ranges of the tail part of the ADC spectrum are defined:

Range 1: peak+6*rms to peak+6*rms+50 channels
Range 2: peak+6*rms+50 channels to peak+6*rms+150 channels
Range 3: peak+6*rms+150 channels to peak+6*rms+350 channels

The entries (hits) in these three ranges are counted, and the ratios to all the entries in the whole spectrum are calculated. The limits for good ratios are selected based on the distribution of the ratios throughout all the days. They are:


Range 1: 3.35~80 x0.001

Range 2: 0.95~40 x0.001

Range 3: 0.25 ~ 20 x0.001

See the attached newlimits.docx for more details and plots.

Also, bit 0 of the status code is supposed to indicate whether a channel is bad or not. Not every problem is fatal, i.e. causes the channel to be regarded as bad.

Originally, a fatal condition was met only if all 3 ratios were beyond the limits; we adjusted this so that a fatal condition is met if more than one, i.e. >=2, of the 3 ratios are beyond the limits (see the sketch below). One can also treat any channel with a code not equal to 1 as bad, regardless of what bit 0 is.
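A minimal sketch of this tail-ratio QA, using the quoted limits (in units of 0.001 of the total entries); the plain-list histogram interface is illustrative, and ranges 2 and 3 are assumed to continue from peak+6*rms as in the range definitions above:

# Run-10 style tail QA for one BSMD strip's ADC spectrum (sketch).
LIMITS = [(0.00335, 0.080),   # range 1: peak+6*rms .. +50 channels
          (0.00095, 0.040),   # range 2: the next 100 channels
          (0.00025, 0.020)]   # range 3: the next 200 channels

def tail_ratios(contents, peak, rms):
    """Fractions of all entries in the three tail ranges; contents[i] is the
    bin content at ADC channel i."""
    total = float(sum(contents))
    if total == 0:
        return [0.0, 0.0, 0.0]
    start = int(peak + 6 * rms)
    edges = [(start, start + 50), (start + 50, start + 150), (start + 150, start + 350)]
    return [sum(contents[lo:hi]) / total for lo, hi in edges]

def tail_is_fatal(contents, peak, rms):
    """Fatal (bit 0 cleared) if two or more of the three ratios are out of limits."""
    ratios = tail_ratios(contents, peak, rms)
    n_bad = sum(1 for r, (lo, hi) in zip(ratios, LIMITS) if not (lo <= r <= hi))
    return n_bad >= 2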

 

==========

A final report with sample codes in one day was presented to and approved by EMC2 group.

http://www.star.bnl.gov/HyperNews-star/get/emc2/3637/1.html

==========

Note: The map file used in BSMD monitoring in run10 had some inconsistencies with the actual hardware. An afterburner map correction was done by the EMC software coordinator, Justin.

Note: Up to now, no corrections have been made to the pedestals. The bad strips caused by bad pedestals are roughly 1/3~1/4 of all the bad strips.

Wenqin Xu

16Feb2011

Run 9 BSMD Calibration

Parent page for BSMD Run 9 Calibration

01 Status of BSMDE,P at the end of pp 500 GeV run, April of 2009

Summary of BSMD performance on April 6. Input : 200K events tagged by L2W clust ET>13 GeV, days 85-94, ~all events, only ZS data are shown.


The attached PDFs show zoomed-in spectra for individual modules. The 1st page is a summary; after that I show 3 modules per row, 5 rows per page. Even pages show a zoom-in for low ADC<100, odd pages show the full ADC scale. A common maxZ=1000 is used for all plots, except page 1. 

02 offline QA of BSMD pp 500, ver1 (Willie+Jan)

 BSMD QA algorithm and results for pp 500, tune optimized for high-energy response

  1. QA method, details are given in Willie's blog
    Fig 1. Typical good/bad strips from the E-plane and with wide pedestals. 
    • Input: all available events from fills 10399, 10402, 10403, 10404, 10407, 10412, 10415 added together
    • evaluate shape of pedestal residua for  NZS data captured on-line by Willie's daq reader  (Blue filled histo)
    • evaluate yield in high energy range (ADC ~300,500,800) using ZS data from L2W triggered events (Magenta line-only)
    • ignored: satellite spikes around  pedestal at ADC ~32, those come from correlated noise and (most likely) such events will be discarded.
    • Example of such spectra for few strips is shown in fig 1.
    • Encoding of BSMD status bits extends existing convention to use LSB=1 for good strips and LSB=0 for bad strips. We used bits 1,2,3 to tag pedestal problems and bits 5,6,7 to tag yield problems. 
    • More plots with individual strips are in attachments A, B, C, D.
  2. A bad strip is defined as having: a bad pedestal, or a good pedestal but no yield above it.

    Fig 2. Distribution of bad strips from both planes, details about each plane separately is in attachment E.
  3. The remaining issues:
    1. Tagging of 'spiked' events (or modules?) needs to be investigated.
    2. study time dependence
  4. For the reference this directory contains PDF files with plots from all 120 BSMD modules.

 

Fig 3. # of bad strips per module.

 

03 correlated, small ADC spikes in BSMD (Jan)

Study of small ADC spikes in BSMD 

Input:

  •  all available events from fills 10399, 10402, 10403, 10404, 10407, 10412, 10415 added together
  • pedestal residua for  NZS data captured on-line by Willie's daq reader  (Blue filled histo)
  •  ZS data from L2W triggered events (Magenta line-only)

The following plots support these observations:

  1. spikes are symmetric on both sides of the pedestal peak, separated by 2^N ADC counts, and narrower than the pedestal peak (Fig 1)
  2. spikes are correlated among even-ID (or odd-ID) strips of the same plane; the correlation is local, Fig 2 
  3. spikes are correlated between P-plane & E-plane strips, Fig 3
  4. energy deposit in BSMD increases probability of spikes, see peak/valley for blue vs. magenta in fig 1b.
  5. bands are visible at larger ADC, as shown by Oleg, not sure what data and how many events, fig 4 
  6. perhaps fig 1c shows yet another pathology, because it does not obey odd/even rule in fig 1a & 1b.

 


Fig 1a. Example of spikes delADC=16, in the vicinity of strip 1525-P, all strips from module 11 are shown in attachment A.


Fig 1b. Example of spikes delADC=32, in the vicinity of strip 1977-P, all strips from module 14 are shown in attachment B, module 22 looks similar.


Fig 1c. Example of spikes delADC=128, in the vicinity of strip 4562-P, all strips from module 31 are shown in attachment C, modules 51,52,57  look similar


Fig 2. Phi-Phi plane correlation of P-strip 1979 with (odd)  P-strips: 1977..1994. Attachment D contains correlation of P-strips [1977-80] with  24  strips in proximity. 


Fig 3. Phi-Eta plane correlation of P-strip 1979 with (odd)  E-strips: 1977..1994. Attachment E contains correlation of P-strips [1977-80] with  24  strips in proximity. 

 


Fig 4. Oleg observed these stripes in the raw BSMD ADC spectrum; not sure what data.

2009 BSMD Relative Gains Information

The pdf posted here has a good overview of the computation of the slope for each strip, discussing the method and the various ways in which strips were marked as bad.  This page discusses the computation of the actual relative gains and statuses that went into the database.

The code used to compute the relative gains is archived at /star/institutions/mit/wleight/archive/bsmdRelGains2009/.

DELETE - Run 9 BSMD Status Update 3 (4/24)

After looking more closely at the crate 1 channels I was forced to make serious revisions to status bit 2 from the previous update.  The new status bit 2 test is as follows:

First, I scan through the strip ADC distribution looking for peaks.  A peak is defined as a channel that is greater than or equal to the four or two channels to either side (if the sigma of the fit to the strip ADC distribution is greater than or less than six, respectively), has a content that is greater than 5% of the maximum of the strip ADC distribution, and has a depth greater than 5% of the maximum of the strip ADC distribution.  The depth is calculated by first calculating the difference between the peak content and the channel content for each of the four or two channels on either side of the peak.  The maximum of these differences is obtained for the left and right sides separately, and the depth is then equal to the lesser of these two maxima.

If the strip has more than one peak and the maximum of the depths is greater than 20% of the maximum of the strip ADC distribution, then the strip is given bad status 2.  If the strip has only one peak (which is then necessarily the maximum of the entire distribution) but the distance between that peak and the peak obtained from the gaussian fit is greater than 75% of the sigma from the gaussian fit, the strip is given bad status 2 as well.  Attached is a pdf that has only the pedestal plots for all channels from crate 1.

Edit: I forgot that the BSMD crates don't increase with module number: what is labeled as crate 1 is actually crate 2, as that is the crate that has the first 15 modules, and the attachment labeled as crate 2 is the 2nd 15 modules and so actually crate 1.

2nd edit: This is now out of date, please see the new update.

DELETE - Run 9 BSMD Status Update 4 (4/27)

After further investigations -- specifically looking at strips that had a significant secondary peak, entirely separated from the main peak, with a max of ~40, which were not being caught by my cuts -- I have again revised my criteria for status bit 2.  Again, I begin by looking for peaks.  If a peak candidate is less than three sigma from the peak of the strip ADC distribution (sigma and peak both taken from the gaussian fit), the same cuts are imposed: the candidate must be greater than the four (if sigma>6) or two (if sigma<6) channels on either side of it, its content must be greater than 5% of the maximum of the strip ADC distribution, and the depth must be at least 5% of the maximum of the strip ADC distribution.  If the strip has two such peaks with the maximum of the depths greater than 20% of the maximum of the strip ADC distribution, or has only one peak but that peak is at least one sigma away from the gaussian fit peak, it is given bad status 2.  Note the only change here is that previously a strip with only one peak could be marked bad if it was 75% of sigma away from the gaussian fit peak.

Most of the changes have to do with candidates that are at least three sigma from the gaussian fit peak.  In this case the cuts are relaxed: the bin content need only be .5% of the max, not 5%, though it still must be at least 10, and the peak depth is required to only be at least 5% of the peak itself, not of the max.  A more than three-sigma peak has the same requirements for the number of channels it must be greater than: however, none of those channels can have value 0.  Any strip with a candidate that passes these criteria is automatically given bad status 2. 

Pdfs for crates 1 and 2 are attached (but note that the crate 1 and crate 2 pdfs contain the first and second 15 modules, respectively, and therefore crate 1 should actually be labeled crate 2 and crate 2 is really crate 4).

DELETE - Run 9 BSMD Status Update 5 (4/30)

Edited on 5/1 to reflect the new status bit assignments for bits 3 and 4.

The current BSMD status bits are as follows:

Bit 2: Bad pedestal peak/multiple pedestal peaks.  This is described in more detail here.  Examples can be found in crate2_ped.pdf pp 207 and 314 and crate4_ped.pdf p 133.

Bit 3: Pedestal peak has bad sigma, sigma<1 or sigma >15

Bit 4: Chi squared value from gaussian fit is greater than 1000 (i.e., pedestal has a funny shape)

Bit 5: Strip is exactly identical to the previous strip

Bit 6: The ratio of the integral of channels 300-500 to the total integral does not fall between .0001 and .02

Bit 7: The ratio of the integral of channels 500-800 to the total integral does not fall between .00004 and .02

Bit 8: The ratio of the integral of channels greater than 800 to the total integral does not fall between .00005 and .02

Note that this means that dead channels have status 111xxxx0->448 (or greater).
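A small illustration of how these bits combine into a status word, under my reading that bit n contributes 2**n and that a good strip has only the LSB set (status = 1):

# Sketch of assembling a BSMD strip status word from the failed-test bits above.
BAD_PED_PEAK, BAD_SIGMA, BAD_CHI2, COPYCAT = 2, 3, 4, 5
BAD_RATIO_300_500, BAD_RATIO_500_800, BAD_RATIO_800_UP = 6, 7, 8

def make_status(failed_bits):
    """Combine failed test bits into one status word; 1 means 'good'."""
    if not failed_bits:
        return 1                       # LSB=1 flags a good strip
    status = 0
    for bit in failed_bits:
        status |= 1 << bit
    return status

# a dead strip typically fails all three tail-ratio tests:
print(make_status([BAD_RATIO_300_500, BAD_RATIO_500_800, BAD_RATIO_800_UP]))  # 448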

The attached pdfs crate2 and crate4.pdf have the pedestal distributions, taken from NZS data, and the overall distributions, taken from ZS L2W data, overlaid; crate2_ped and crate4_ped.pdf have only the pedestal distributions.  The NZS data used was taken from my monitoring for fills 10415-10489.  The L2W data came from fills 10383-10507.  Additionally, at the beginning of each module is a summary page that plots the distributions of the ratios used to determine bad status bits 6, 7, and 8, and the overall distribution of status vs. strip for eta and phi.

Finally, there are a couple of possible new problems.  Page 18 in crate4_ped.pdf has several examples of pedestal distributions that have shoulders.  Page 20 has a few examples of pedestal distributions with a small, skinny peak perched on top of a large, broad distribution.  At the moment I have no bad status bit for either of these, and any peak with either of these features would almost certainly not be marked bad (even though I did manage to catch one of the ones on page 20).

Edit: Scott suggested during the phone meeting today that perhaps the problem of a small peak on a broad distribution was due to time variation of the pedestal width, and in the plot below you can see that he was correct: the strip initially has an extremely wide pedestal which then shrinks down suddenly.  Furthermore, looking at one of the strips that had a sort of shoulder to it, you can see that this is just a less-pronounced version of the double peak problem seen before: the pedestal goes up by 10 for a much shorter time frame, thus producing a shoulder rather than a second peak.  This suggests that, as Scott said, these channels should still be usable, and that once we begin breaking status down by time these funny shapes should be less of a problem.

DELETE - Run 9 BSMD Status Update 6 (5/4)

As Matt says that the maximum status is 255, I have dropped the old status bit 5 (it was unused).  Also, I have loosened the dead strip cuts based on looking at module 55 (see pages 203 or 205 in the attached crate1.pdf, for instance).  The status bits are now as follows:

Bit 2: Bad pedestal peak/multiple pedestal peaks.

Bit 3: Pedestal peak has bad sigma, sigma<1 or sigma >15

Bit 4: Chi squared value from gaussian fit is greater than 1000 (this applies only for strips that do not have bad status 2 already)

Bit 5: The ratio of the integral of channels 300-500 to the total integral does not fall between .0005 and .02

Bit 6: The ratio of the integral of channels 500-800 to the total integral does not fall between .0002 and .02

Bit 7: The ratio of the integral of channels greater than 800 to the total integral does not fall between .0002 and .02

Below is a plot of status vs. eta and phi for BSMDE and BSMDP strips.  Note that strips with all three of bits 5, 6, 7 bad (generally, dead strips) are given the value 8 in this plot to distinguish them from strips that may have just one of those bits bad.  As some strips may have more than one bad status bit, for clarity I ranked the potential bad statuses in the order 2, 8, 7, 6, 5, 4, 3 (i.e., approximately in order of importance) and plotted for each strip only the highest-ranked status.

 

Additionally, I found a problem I had not seen before.  On page 207 of the attached crate1.pdf you can see that in the L2W data some strips have a large peak out in the tail of the ADC distribution.  However, as all these strips are caught by my code it's not a serious problem.

Final Run 9 200 GeV BSMD Status

In essence, the 200 GeV status tables were calculated the same way as the 500 GeV tables were.  Please see here for details.

Final Run 9 500 GeV BSMD Status - Willie Leight

BSMD Pedestals and Status for Run 9 pp 500 Data (June 2009, uploaded to offline DB)

The BSMD status analysis for the 500 GeV data proceeds as follows:

  1. Each strip is assigned a status for the whole run from an analysis of fills 10399, 10402, 10403, 10404, 10407, 10412, and 10415.  Pedestals are analyzed using NZS data taken by the BSMD online monitoring, which reads NZS data from evp and subtracts off pedestals which are updated each time a new BSMD pedestal run is taken.  Because NZS data is essentially minbias, high energy tails are analyzed using L2W-triggered data.  Status bits are described in detail here
  2. Once each strip has an assigned status, those strips that are not marked as bad move on to the second step.  Here the strips are examined fill-by-fill: for each fill the strip pedestal is QAed by re-applying the pedestal cuts (but not the tail cuts due to lack of statistics), and a new status for that fill is determined.
  3. Next, a pedestal correction is calculated.  The pedestal correction is just the MPV of the pedestal residua if the MPV is greater than the RMS of the pedestal residua (a sketch of this rule follows the list). 
  4. Finally, we upload a number of tables to the database: for each BSMD plane there is one that contains a universal status for every strip, one for each fill containing a status for every strip, and one for each fill containing the RMS and pedestal correction for every strip. 
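A minimal sketch of the pedestal-correction rule from step 3, assuming (this is not stated explicitly above) that the correction defaults to zero when the MPV does not exceed the RMS:

def pedestal_correction(residua_mpv, residua_rms):
    """Per-strip, per-fill pedestal correction from the pedestal residua (sketch)."""
    if residua_mpv > residua_rms:
        return residua_mpv
    return 0.0          # assumed default when the MPV is not above the RMS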

Attached is a pdf that presents the results of this study, including examples.

All code and root files are archived at /star/institutions/mit/wleight/archive/2009-pp500-bsmdStatus/.

Table 1: Pedestal correction, RMS, and status vs. fill for each module (Crates 1-4 are the West Barrel)

Cr 1 Mod 46 Mod 47 Mod 48 Mod 49 Mod 50 Mod 51 Mod 52 Mod 53 Mod 54 Mod 55 Mod 56 Mod 57 Mod 58 Mod 59 Mod 60
Cr 2 Mod 1 Mod 2 Mod 3 Mod 4 Mod 5 Mod 6 Mod 7 Mod 8 Mod 9 Mod 10 Mod 11 Mod 12 Mod 13 Mod 14 Mod 15
Cr 3 Mod 31 Mod 32 Mod 33 Mod 34 Mod 35 Mod 36 Mod 37 Mod 38 Mod 39 Mod 40 Mod 41 Mod 42 Mod 43 Mod 44 Mod 45
Cr 4 Mod 16 Mod 17 Mod 18 Mod 19 Mod 20 Mod 21 Mod 22 Mod 23 Mod 24 Mod 25 Mod 26 Mod 27 Mod 28 Mod 29 Mod 30
Cr 5 Mod 61 Mod 62 Mod 63 Mod 64 Mod 65 Mod 66 Mod 67 Mod 68 Mod 69 Mod 70 Mod 71 Mod 72 Mod 73 Mod 74 Mod 75
Cr 6 Mod 76 Mod 77 Mod 78 Mod 79 Mod 80 Mod 81 Mod 82 Mod 83 Mod 84 Mod 85 Mod 86 Mod 87 Mod 88 Mod 89 Mod 90
Cr 7 Mod 91 Mod 92 Mod 93 Mod 94 Mod 95 Mod 96 Mod 97 Mod 98 Mod 99 Mod 100 Mod 101 Mod 102 Mod 103 Mod 104 Mod 105
Cr 8 Mod 106 Mod 107 Mod 108 Mod 109 Mod 110 Mod 111 Mod 112 Mod 113 Mod 114 Mod 115 Mod 116 Mod 117 Mod 118 Mod 119 Mod 120

 

Table 2: BSMD spectra for 150 eta and 150 phi strips used for status determination for each module (for fills listed above).  Bad strips are identified with the status (in hex): strips with red status are marked bad, strips with green failed a cut but are not necessarily bad.  Note that these spectra are shifted up by 100 on the X-axis so that the pedestal is centered around 100 rather than 0.

Cr 1 Mod 46 Mod 47 Mod 48 Mod 49 Mod 50 Mod 51 Mod 52 Mod 53 Mod 54 Mod 55 Mod 56 Mod 57 Mod 58 Mod 59 Mod 60
Cr 2 Mod 1 Mod 2 Mod 3 Mod 4 Mod 5 Mod 6 Mod 7 Mod 8 Mod 9 Mod 10 Mod 11 Mod 12 Mod 13 Mod 14 Mod 15
Cr 3 Mod 31 Mod 32 Mod 33 Mod 34 Mod 35 Mod 36 Mod 37 Mod 38 Mod 39 Mod 40 Mod 41 Mod 42 Mod 43 Mod 44 Mod 45
Cr 4 Mod 16 Mod 17 Mod 18 Mod 19 Mod 20 Mod 21 Mod 22 Mod 23 Mod 24 Mod 25 Mod 26 Mod 27 Mod 28 Mod 29 Mod 30
Cr 5 Mod 61 Mod 62 Mod 63 Mod 64 Mod 65 Mod 66 Mod 67 Mod 68 Mod 69 Mod 70 Mod 71 Mod 72 Mod 73 Mod 74 Mod 75
Cr 6 Mod 76 Mod 77 Mod 78 Mod 79 Mod 80 Mod 81 Mod 82 Mod 83 Mod 84 Mod 85 Mod 86 Mod 87 Mod 88 Mod 89 Mod 90
Cr 7 Mod 91 Mod 92 Mod 93 Mod 94 Mod 95 Mod 96 Mod 97 Mod 98 Mod 99 Mod 100 Mod 101 Mod 102 Mod 103 Mod 104 Mod 105
Cr 8 Mod 106 Mod 107 Mod 108 Mod 109 Mod 110 Mod 111 Mod 112 Mod 113 Mod 114 Mod 115 Mod 116 Mod 117 Mod 118 Mod 119 Mod 120

 

Table 3: Fills used in this study.

#      Fill        Date          Begin run      End run   LT/pb
1 F10383 2009-03-18 R10076134 R10076161 0.00
2 F10398 2009-03-20 R10078076 R10079017 0.08
3 F10399 2009-03-20 R10079027 R10079086 0.22
4 F10402 2009-03-21 R10079129 R10079139 0.04
5 F10403 2009-03-21 R10080019 R10080022 0.01
6 F10404 2009-03-22 R10080039 R10080081 0.09
7 F10407 2009-03-22 R10081007 R10081056 0.05
8 F10412 2009-03-23 R10081096 R10082095 0.23
9 F10415 2009-03-24 R10083013 R10083058 0.24
10 F10426 2009-03-25 R10084005 R10084024 0.11
11 F10434 2009-03-26 R10085016 R10085039 0.18
12 F10439 2009-03-27 R10085096 R10086046 0.26
13 F10448 2009-03-28 R10087001 R10087041 0.29
14 F10449 2009-03-28 R10087051 R10087097 0.32
15 F10450 2009-03-29 R10087110 R10088036 0.29*
16 F10454 2009-03-29 R10088058 R10088085 0.15*
17 F10455 2009-03-30 R10088096 R10089023 0.29*
18 F10463 2009-03-31 R10089079 R10090027 0.20*
19 F10464 2009-03-31 R10090037 R10090047 0.08*
20 F10465 2009-03-31 R10090071 R10090112 0.13*
21 F10471 2009-04-02 R10091089 R10092050 0.30
22 F10476 2009-04-03 R10092084 R10093036 0.28
23 F10478 2009-04-03 R10093057 R10093085 0.08
24 F10482 2009-04-04 R10093110 R10094024 0.55
25 F10486 2009-04-05 R10094063 R10094099 0.52
26 F10490 2009-04-05 R10095019 R10095057 0.40
27 F10494 2009-04-06 R10095120 R10096027 0.61
28 F10505 2009-04-07 R10096139 R10097045 0.39
29 F10507 2009-04-08 R10097086 R10097153 0.29
30 F10508 2009-04-08 R10098029 R10098046 0.17
31 F10517 2009-04-09 R10099020 R10099078 0.32**
32 F10525 2009-04-10 R10099185 R10100032 0.68
33 F10526 2009-04-10 R10100049 R10100098 0.37
34 F10527 2009-04-11 R10100164 R10101020 0.82
35 F10528 2009-04-11 R10101028 R10101040 0.31
36 F10531 2009-04-12 R10101059 R10102003 0.86
37 F10532 2009-04-12 R10102031 R10102070 0.76
38 F10535 2009-04-13 R10102094 R10103018 0.86
39 F10536 2009-04-13 R10103027 R10103046 0.43

* Crate 2 was off for this fill

** This fill had no BSMD data

Final Run 9 BSMD Absolute Calibration

The Run 9 BSMD absolute calibration was made using few-GeV TPC-identified electrons from pp500 running, and has two pieces.

The first is a new CALIBRATION table in the database, which will be used in the EMC slow simulator to improve the agreement of MC ADC with data.  This table starts by combining the previously-determined strip-by-strip relative gains with the existing values in the table.  This is then multiplied by the ratio of the slope of a linear fit to the mean cluster ADC distribution from few-GeV isolated data electrons to the same slope in simulated electrons, where the slope is calculated in four different eta bins.

The second piece is a new GAIN table in the database, which allows ADC values to be converted to energy deposited in the BSMD.  This table was determined by combining the strip-by-strip relative gains with a similar ratio as above, but using the mean cluster energy deposited in the BSMD instead of reconstructed ADC values (the electron samples used for data and MC were the same), and it is calculated in ten eta bins instead of four.

Both tables are currently in the database with flavor "Wbose2": it is hoped that eventually the CALIBRATION table will migrate to flavor "ofl", but the GAIN table will have to remain "Wbose2" because it is currently used (with values all equal to 1) in some codes to determine the change in the BSMD calibration over time.  While producing two tables which are in some ways overlapping, one of which can never be flavor "ofl", is not an ideal solution, it allows us to avoid making any modifications to currently existing code (in particular the StEmcSimulator), and allows people who prefer to think of the energy reconstructed from BSMD ADCs as the full particle energy, instead of the energy deposited in the BSMD, to continue as they were with no change.  For more details, please see:

1. Overview of the method

2. Final cut list and data-MC comparison

3. Overall Summary

Additionally, a link to the 2009 BSMD Calibration note will be added here once it is completed.

See also a brief presentation on why we chose not to include the BSMD gas pressure in our analysis.

Run 9 BSMD Status Update 1 (4/19)

I use two datasets to QA BSMD channels: zero-suppressed data from L2W events (fills 10383-10507) and non-zero-suppressed data from online monitoring (fills 10436-10507) (note that at the moment I am not examining the time dependence of BSMD status).  NZS data is used to QA the pedestal peak of a channel, while high-energy ZS data is used to QA the tail.

Next, the ZS data are compared to the ZS data from the previous channel to check for copycat channels.  Then three quantities are calculated: the ratios of the integrals from 300-500, 500-800, and 800-the end of the spectrum to the total integral of the channel.  Each of these quantities must then fall within the following cuts: .0001-.02, .00004-.02, and .00005-.02 respectively.  Here is a sample distribution:

Also, the spectra for the strips in module 3, with status, are attached.  I have not had a chance to look closely at any other modules yet.

details about known hardware problems

The attached file 'SMD_07.xls' contains my notes from run 7; the only thing that might be useful for you is in red: permanently disconnected anode wires and affected strips (again, these are not SoftIds). I think that I cut out one more wire before Run 8 started, but for that I need to check the logbook.
 Oleg

-----------------------

 

BSMD Wire Support Effects on GAIN

Here is a note from Oleg Tsai (and attached file "wiresup.pdf" below) concerning source
measurements of the BSMD gain behavior near the nylon wire supports:

On Fri, 18 Jul 2008, tsai@physics.ucla.edu wrote:

>    Attached plot will help you to understand what you see close to
>  strips 58 and 105. There are two nylon wire supports in the chamber
>  at distances 34.882" and 69.213" from the (eta=0 end of the chamber,
>  not from the real eta 0). Gain drops near these supports. You can
>  see this in your plots also. The attached plot shows counting rate vs
>  strip id for a typical chamber. Don't pay attention to channels
>  near 0 and 150 - these effects are due to particular way co60 source
>  was collimated (counting profile was close to 0.1/0.2/0.4/0.2/0.1)
>  0.4 in central strip. From that I estimated that eta strips
>  56-60 and 104-107 should have calib. coefficients
>  (.95,.813,.875,.971,.984) (.99,.89,.84,.970.), I don't remember
>  if I was using counting rate vs HV to derive these numbers...
>  (this is my third and final attempt :-))
>

 

details of SMD simulator, simu shower zoom-in

Fig 1. Geant simu of EM shower of one electron with ET=10 GeV at eta=0.

Note,

  • one phi-strip is parallel to cavities, extends over 1/10 of the cavity length, and integrates over 2 consecutive cavities.
  • one eta-strip is perpendicular to cavities, extends over 1/150 of the cavity length, and integrates over all 30 cavities in the module.

 Hi Jan,

the only documentation I know of is the code itself --  
hopefully you'll consider it human-readable.  Look at  
StEmcSimpleSimulator::makeRawHit() in StEmcSimulatorMaker.  We use the  
kSimpleMode case.  The GEANT energy deposition is multiplied by a  
sampling fraction that's a second-order polynomial in pseudorapidity,  
and then we take pedestals, calibration jitter, etc. into account.   
The exact parameters of the sampling fraction are defined in the  
constructor for StEmcSimpleSimulator.  I don't remember how they were  
determined.

 also meant to add that the width broadening is OFF by  
default.  To turn it on one needs to do

emcSim->setMaxCrossTalkPercentage(kBarrelEtaStripId, aNumber);

 The "width broadening" only occurs  

for eta strips and was implemented by Mike Betancourt in  
StEmcSimulatorMaker::makeCrossTalk().  He wrote a blog post about it:

http://drupal.star.bnl.gov/STAR/blog-entry/betan/2007/nov/19/cross-talk-bsmd

 

Adam 


 

Maybe I should note that the cross talk I implemented was to account  
for the capacitive cross talk between the cables carrying the eta  
strip signals to the readout, and not for any effects related to the  
energy deposition.

-Mike


Hi Jan,
Well, Oleg should probably make the definitive reply, but I think it is like this:
The amplification happens only at the wire, it is independent of the positions of the primary ionization. Of course, there is a little effect from a small amount of recombination or capture of the charge on impurities, and there must be a (hopefully small) effect from the dependence of the mean pulse shape on the position of the ionization and the dependence of the effective gain of the electronics on the mean pulse shape. But these things can't amount to much, I would think. (Of course, I don't want to discourage you from looking in the data to confirm it!)

Gerard 



 These are all quite true, small effects which will be difficult 
 to see. The bigger effect is reading out one time bucket.
 I have made some estimates before test run 98 (?) or so, see this PS

 if you look at numbers still very small effect which is unpractical
 to measure.
 Oleg  


Hi, Yes indeed, I don't know why I neglected to think of the simple effect of drift time, but it is certainly going to be a much bigger effect (~10% if I read your fig.3 correctly?) than the other two. (Perhaps still too small to see in the data, I don't know...). Anyway, given the data volume and already limited readout speed of the BSMD I am pretty sure there is no prospect to ever read more than the one fixed time sample from BSMD; this is probably something to live with. [But it is not impossible to have 2 or 3 point readout, and if we want to seriously consider it it should be brought up ~now, well that is in case we are given the green light to work on BSMD readout "mini-upgrade". If not, well it will just wait until then. But keep in mind, more points readout would offer slightly better gain accuracy but will complicate offline and calibrations too, probably you really don't want it anyway!]

 Gerard

p.s. Jan I don't know if it adds anything to the wider discussion on BSMD gain/cal but if you feel so you may surely post to hn.

p.p.s. Oleg, an important question - in your note you don't specify exactly how you obtain the drift time... I mean, yes you show a drift velocity curve, but really of course you must mean there was a calculation such as with garfield to get a drift time out of this... _So_, was that calculation done with the magnetic field on? [The wires are parallel to the magnetic field, right? So it will make possibly a very big difference in drift times.] Jan, do you/others realize the BSMD gain will probably have some systematic dependence on the magnetic field, including the sign thereof? So if you care about gain calibration it should be separated out according to the state of STAR magnet, fortunately there are only a few running states, right?

Gerard

one cluster topology , definition of 'barrel cell'

 The one-dimensional BSMD cluster finder is insensitive to single dead strips and, for the phi strips, to module boundaries.

However, a certain class of phi-strip clusters (marked in red) is artificially split and reconstructed as 2 phi-clusters.

One could use the eta-strip cluster to recover from such a phi-plane split if the occupancy is low and the relative scale of the eta & phi cluster energies is known.

 The plots below also introduce the concept of a 'barrel cell', which has an approximate size of 0.1 in eta & phi physical units and is aligned with the barrel modules.  The 'barrel cell' is the smallest common unit in eta-phi space for the SMD and BTOW tower topologies. 

A 'barrel cell' has 2 coordinates: x in [0,19] and y in [0,59]. If such an object is already defined for the Barrel, uniform for East & West, let me know and I'll adjust the numbering scheme. 

Back to the recovery of the split phi cluster, shown below as a red oval.  Green clusters are shown so you can 'calibrate' yourself to this type of representation.  


Fig 1.


Fig 2.

 

 


 

 

BTOW - Calibration Procedure

Here you will find the calibration procedure for the BTOW (as of 2013). 

Additional documentation about BTOW gain calibrations in previous years can be found here: 
Report from the STAR EMC Calibrations Workshop (2008)
2006 BEMC Tower Calibration Report
2009 BEMC Tower Calibration Report
As well as in BEMC > Calibrations section on Drupal here

1) Generating Pedestal/Status Tables

Getting Started

The files which are used for this are from L2 and they're stored on the online "starp" network.  You will need to request an account on those machines.  Following the instructions here, you should request an account on the "onlldap.starp.bnl.gov" host with the "onlmon" username (you should also request an account with your username if you can).  After your request is approved you can log in with the commands shown here:

ssh-agent > agent.sh
source agent.sh
ssh-add
ssh -X -A aohlson@rssh.rhic.bnl.gov % using your own username, of course
ssh -X -A stargw.starp.bnl.gov
ssh -X -A onlmon@onl05.starp.bnl.gov

It is also necessary to set up the proper directory structure and files. 

% after logging in you should be in /ldaphome/onlmon/
mkdir emcstatus2013 % make a directory for the appropriate year
cd emcstatus2013
mkdir pp500 % make a directory for the species/energy
cd pp500
cp ../../emcstatus2012/pp500/l2status.py ./ % copy files from a previous year
cp ../../emcstatus2012/pp500/indexWrite.py ./
cp ../../emcstatus2012/pp500/l2status2012.sqlite3 ./l2status2013.sqlite3
cp ../../emcstatus2012/pp500/empty.sqlite3 ./
cp ../../emcstatus2012/pp500/mapping.sqlite3 ./
cp ../../emcstatus2012/pp500/star_env ./
mkdir db
mkdir db/bemc
mkdir db/eemc
mkdir histoFiles
mkdir /onlineweb/www/emcStatus2013 % make the directories where the online files will be written by indexWrite.py
mkdir /onlineweb/www/emcStatus2013/pp500
mkdir /onlineweb/www/emcStatus2013/pp500/pdf
mkdir /onlineweb/www/emcStatus2013/pp500/details
cd /onlineweb/www
ln -s emcStatus2013 emcStatus % make a softlink from emcStatus to the current year's directory
cd -

Now you should go through l2status.py and indexWrite.py and change each instance of the year and species/energy to the current ones.  There are also some lines in l2status.py which refer to run numbers; these should be changed as well.  I also like to start fresh with the QA every year, and comment out the lines which hard-code bad channels.  The variable "minimumMedianCounts" should be changed to a value appropriate for the species/energy that is being run. 

Generating Status/Pedestal Tables with L2

In each run, as part of the L2 algorithm, a 2D histogram is filled with (channel# + 160*crate#) vs. (ADC-l2ped+20).  This histogram is named "h22" and is contained within the root files located here: /ldaphome/onlmon/L2algo2013/l2ped/output/   The python macro named "l2status.py" takes this histogram as an input, and generates individual 1D histograms of the ADC spectra for every tower.  These histograms are analyzed to determine the tower status, the ADC value of the pedestal peak, and the sigma of the pedestal peak.  See below for the status code definitions and examples. 
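A minimal PyROOT sketch of what this projection amounts to; the input file name and the axis orientation of h22 are assumptions, not taken from l2status.py:

# Project the per-run 2D histogram h22 into one ADC spectrum per tower (sketch).
import ROOT

f = ROOT.TFile("run13108069.l2ped.root")      # hypothetical input file name
h22 = f.Get("h22")

def tower_spectrum(crate, channel):
    """1D ADC spectrum (pedestal shifted to +20) for one tower,
    assuming x = channel# + 160*crate# and y = ADC - l2ped + 20."""
    xbin = h22.GetXaxis().FindBin(channel + 160 * crate)
    return h22.ProjectionY("adc_cr%d_ch%d" % (crate, channel), xbin, xbin)

spec = tower_spectrum(crate=1, channel=37)
print(spec.GetEntries(), spec.GetMean())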

To execute the code do:

source star_env
python l2status.py

You can monitor the progress of the script by looking at l2status.log (for example, by opening another terminal and doing 'tail l2status.log -n50' periodically).  As the script progresses you will see the summary pdf files posted to the webpage http://online.star.bnl.gov/emcStatus/pp500/ (for the relevant species/energy), the status and pedestal tables will be written as text files into db/bemc and db/eemc, the actual root histograms will be written into the histoFiles directory, and the results will be written into the database file l2status2013.sqlite3. The statuses and pedestals should be generated once for every good fill.

In a perfect world you would run the code over the entire dataset once and you would have the status tables, which are then uploaded to the STAR database.  However, it's usually not that simple.  There are often problems with a handful of channels that aren't caught by the status-checking code, or some that are flagged as bad but shouldn't be.  My suggestion would be to run the code over all the files first, and then use the information in the pdfs and in l2status2013.sqlite3 to isolate problem channels.  You may need to tweak some of the parameters in the code to improve the status tables, and some channels you may have to hard-code as having a bad status for a period of time.

If you want to clear everything and start fresh, you can do this: clean out the folders db/bemc/ and db/eemc/, remove the log file, remove runList.list, and do 'cp empty.sqlite3 l2status2013.sqlite3'

So as a first step, let l2status.py run for a while (it will take time, I often run it in screen so that I don't have to keep a terminal open).  You should kill it manually (ctrl+c) when it reaches the end of the runs that you want to look at.  When you start it running again it should just pick up (approximately) where it left off. 

The code only computes status/pedestal tables if there are enough hits to get good-quality calibrations.  The median number of hits above the pedestal must surpass some threshold (minimumMedianHits) in a given fill; this parameter should be set to an appropriate value in l2status.py (not so high that we miss a lot of fills, and not so low that we don't get good calibrations -- there are some examples of appropriate values given in the code).  When the threshold is reached you will see some messages in the log file like

2012-06-07 00:26:36 PID: 21394 INFO medianHits = 635
2012-06-07 00:26:36 PID: 21394 INFO begin status computation
2012-06-07 00:28:05 PID: 21394 INFO end status computation -- found 122 bad channels
2012-06-07 00:28:05 PID: 21394 INFO begin endcap status computation
2012-06-07 00:28:05 PID: 21394 INFO 04TB07 status=136 nonzerohists=22
2012-06-07 00:28:05 PID: 21394 INFO 06TA07 status=136 nonzerohists=18
2012-06-07 00:28:05 PID: 21394 INFO 08TB07 status=136 nonzerohists=21
2012-06-07 00:28:06 PID: 21394 INFO 11TA12 status=0 nonzerohists=60
2012-06-07 00:28:06 PID: 21394 INFO end status computation -- found 11 bad endcap channels
2012-06-07 00:28:10 PID: 21394 INFO current state has been saved to disk
2012-06-07 00:28:10 PID: 21394 INFO creating PostScript file
2012-06-07 00:29:09 PID: 21394 INFO calling pstopdf
2012-06-07 00:29:42 PID: 21394 INFO removing ps file
2012-06-07 00:29:43 PID: 21394 INFO creating endcap PostScript file
2012-06-07 00:29:52 PID: 21394 INFO calling pstopdf
2012-06-07 00:29:57 PID: 21394 INFO removing ps file
2012-06-07 00:29:58 PID: 21394 INFO Finished writing details webpage for F16732_R13108069_R13108080
2012-06-07 00:30:00 PID: 21394 INFO goodnight -- going to sleep now

To evaluate the status of each BEMC tower, its ADC spectrum is tested for various features.  If a test fails, then a bit (or multiple bits) is flipped to indicate the nature of the problem with the tower.  It is possible for a tower to fail multiple tests and therefore have a status code which indicates multiple problems.  I show examples of towers which fail each of the basic tests and are therefore assigned specific status codes.  

Status Code Definitions

000 == channel does not exist or is masked in L2ped
001 == channel is good
002 == channel is either hot or cold (see bit 16)
004 == channel has a weird pedestal (see bit 32)
008 == channel has a stuck bit (see bits 64 and 128)
016 == if off, hot (10x as many hits); if on, cold tower (10x fewer hits)
032 == if off, pedestal mean is out of bounds; if on, pedestal width is too large/small
064 == bit stuck on
128 == bit stuck off
254 == identical channel
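A small sketch of decoding a tower status word into the problems listed above; the bit meanings come from the table, the helper itself is illustrative:

# Decode a BEMC tower status code into human-readable problems (sketch).
FLAGS = [(2,   "hot or cold (see bit 16)"),
         (4,   "weird pedestal (see bit 32)"),
         (8,   "stuck bit (see bits 64 and 128)"),
         (16,  "modifier: cold rather than hot"),
         (32,  "modifier: pedestal width rather than mean"),
         (64,  "bit stuck on"),
         (128, "bit stuck off")]

def decode_status(status):
    if status == 0:
        return ["channel does not exist or is masked in L2ped"]
    if status == 1:
        return ["good"]
    if status == 254:
        return ["identical channel"]
    return [text for bit, text in FLAGS if status & bit]

print(decode_status(18))   # cold channel: 2 + 16
print(decode_status(136))  # stuck bit, stuck off: 8 + 128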

These codes can be seen by going to the EMC Status webpage and clicking on any of the Details pages. 

I show examples from a heavy ion (U+U) run where the numbers of counts in the histograms are higher than in p+p, for clarity.  All these plots come from the pdf here

status = 1 (normal ADC spectrum)

status = 0  (channel is masked out)
Note: MOST of the channels marked with a zero status are (or were) hot channels that were caught and masked out. 

status = 2 (hot channel)
Hot channels look like the above plot, and most of them are caught in realtime and masked out, and thus assigned a status of zero.  It is unusual to actually catch a really hot channel after the fact. 

status = 18 = 2+16 (cold channel)

status = 4 or 36 (bad pedestal)
This status catches a range of problems, from weird-looking spectra (like shown here), to wide pedestals, etc.

status = 72 = 8+64 (stuck bit - on)

status = 136 = 8+128 (stuck bit - off)

status = 254 (identical channels)

2) QAing the Pedestal/Status Tables

Once the ped/stat tables have been generated, they must be QAed.  I do the QA in two parts:

1) I spot check the pdf files by eye.  I pick about 5 or 6 fills evenly spaced throughout the run and go through each page of the pdf files looking for any strange-looking towers (for example, towers with stuck bits are pretty easy to see, and they don't always get caught by the algorithm).  Yes, this takes a while, but we don't have to look at the pdfs for every fill!
For examples of the bad channels you are looking for, have a look at Suvarna's nice QA of the tables last year:
https://drupal.star.bnl.gov/STAR/blog/sra233/2012/dec/05/bemc-statuscuau200hard-coded-towers
https://drupal.star.bnl.gov/STAR/blog/sra233/2012/aug/15/bemc-statuspp200towers-with-status-marked-oscillating

2) The status and pedestal information is stored in the sqlite database file, and we can spin over this quickly in order to look at the data over many runs/fills.  I have attached a script which is used to analyze the database file l2status2013.sqlite3.  The first step is to download l2status2013.sqlite3 somewhere *locally* where you can look at it, and save statusCheckfill2.py in the same place. In statusCheckfill2.py you should change line 27 to the appropriate run range that you are analyzing, and change 2012-->2013 if necessary.  Then all you need to do to run the code is something like:
setenv PYTHONPATH $ROOTSYS/lib
python statusCheckfill2.py

This python code allows you to look at the statuses and pedestals from run to run.  At the moment I make lists of the statuses and pedestals for each tower (in the variables u and y -- sorry for my horrible naming scheme!), and then I can print these lists to the screen or graph them.  At the moment I have the code so that it prints the statuses for any tower that has status 1 some, but not all, of the time (line 138).  This script can be used to find channels which change status frequently over the course of the run... for example, sometimes there are channels which are right on the edge of satisfying the criteria for being marked as "cold" and therefore their status alternates 1 18 1 1 18 18 1 1 18 1 18 etc. Then we can look at the pdf files to see if the tower always looks cold, or if its behavior really changes frequently.  If it is truly cold, then at that point we can either adjust the criteria for being marked as cold, or hard-code the channel as status 18 (I typically just hard-code it).
Also I can look at the histograms histoPedRatio and histoPedRatioGood (which are saved out in histogramBEMCfill.root), which are plots of the ratio of the pedestal for a given run to the pedestal of the first run as a function of tower id.  histoPedRatio is filled for every tower for every run (unless the pedestal for that tower in the first run is zero), and histoPedRatioGood is only filled if the tower's status is 1.  I haven't needed to cut out any towers based on these plots, but I think they would be a good way to find any towers whose pedestals are fluctuating wildly over time (or maybe you might want to plot the difference, not the ratio).
So you can take a look at the code and play around with it so that it allows you to do the QA that you think is best.
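For illustration, a minimal version of the "status 1 some but not all of the time" check described above; the per-tower status lists are a made-up data layout, not the one actually used in statusCheckfill2.py:

# Flag towers whose status flips between good (1) and something else over the fills.
def intermittent_towers(status_by_tower):
    """status_by_tower: {softId: [status in fill 1, status in fill 2, ...]}"""
    flagged = {}
    for soft_id, statuses in status_by_tower.items():
        n_good = statuses.count(1)
        if 0 < n_good < len(statuses):
            flagged[soft_id] = statuses
    return flagged

# example: a tower oscillating between good (1) and cold (18)
print(intermittent_towers({410: [1, 18, 1, 1, 18, 18, 1], 500: [1, 1, 1]}))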

If you look at l2status.py, you can see where I've hard-coded a bunch of channels I thought were bad (the lines which hard-code bad channels are commented out right now because I prefer to start from scratch each year).  I've pasted the code here too:

            ## hard code few bad/hot channels
            #if int(tower.softId)==939 : ##hard code hot channel
            #    tower.status |= 2
            #if (int(tower.softId)==3481 or int(tower.softId)==3737) : ##hard code wide ped
            #    tower.status |= 36
            #if (int(tower.softId)==220 or int(tower.softId)==2415 or int(tower.softId)==1612 or int(tower.softId)==4059): ##hard code stuck bit
            #    tower.status |= 72
            if (int(tower.softId)==671 or int(tower.softId)==1612 or int(tower.softId)==2415 or int(tower.softId)==4059) : ##hard code stuck bit (not sure which bit, or stuck on/off)
                tower.status |= 8
            if (int(self.currentFill) > 16664 and (int(tower.softId)==1957 or int(tower.softId)==1958 or int(tower.softId)==1977 or int(tower.softId)==1978 or int(tower.softId)==1979 or int(tower.softId)==1980 or int(tower.softId)==1997 or int(tower.softId)==1998 or int(tower.softId)==1999 or int(tower.softId)==2000 or int(tower.softId)==2017 or int(tower.softId)==2018 or int(tower.softId)==2019 or int(tower.softId)==2020)) : ##hard code stuck bit
                tower.status |= 8
            if (int(tower.softId)==410 or int(tower.softId)==504 or int(tower.softId)==939 or int(tower.softId)==1221 or int(tower.softId)==1409 or int(tower.softId)==1567 or int(tower.softId)==2092) : ##hard code cold channel
                tower.status |= 18
            if (int(tower.softId)==875 or int(tower.softId)==2305 or int(tower.softId)==2822 or int(tower.softId)==3668 or int(tower.softId)==629 or int(tower.softId)==2969 or int(tower.softId)==4006) : ##either cold or otherwise looks weird
                tower.status |= 18

Some of these channels are persistently problematic, so I expect that your list of bad channels will look similar to mine from previous years.

It may take a few iterations of generating the tables, finding bad towers, tweaking the code or hard-coding the bad channels, regenerating the tables, etc before you are satisfied with the quality of the tables.  Once you have run l2status.py one last time and are satisfied with the quality of the tables you have generated, they are ready to be uploaded to the database by the software coordinator. 

3) Uploading Pedestal/Status Tables to the Database

Once the ped/stat tables have been QAed satisfactorily, the values need to be uploaded to the database. 

First, the files in /db/bemc/ need to be moved to RCF, where the upload will take place.  This can be done with the following commands:

ssh-agent > agent.sh
source agent.sh
ssh-add
ssh -X -A aohlson@rssh.rhic.bnl.gov % using your own username
rterm -i
mkdir bemcUpload2013 % make a directory to work in
mkdir bemcUpload2013/tables
cd bemcUpload2013/tables/
scp onlmon@onl05.starp.bnl.gov:'/ldaphome/onlmon/emcstatus2013/pp500/db/bemc/*.txt' ./
cd ../


The scripts bemcPedTxtUpload.C and bemcStatTxtUpload.C are used to perform the upload.  They take as input the name of a file which contains a list of the files to be uploaded.  To create the file lists:


% in bemcUpload2013/
mkdir lists
cd tables
ls bemcStatus*.txt > ../lists/bemcStatus.list
ls bemcPed*.txt > ../lists/bemcPed.list
cd ../

Once the file lists have been created, the scripts can be run with:

stardev
setenv DB_ACCESS_MODE write
root4star bemcStatTxtUpload.C
root4star bemcPedTxtUpload.C

 

Important!
1) The upload scripts (bemc*TxtUpload.C) contain return statements to prevent accidental uploads.  Make sure everything is working properly by running the scripts with the return statements included (nothing will be uploaded to the db) first.  When you are sure that you're ready to upload, then comment out the return statements.  After uploading, don't forget to uncomment the return statements! 
2) Try uploading one table first, and then check that it is uploaded correctly (see below), before uploading all the tables for the whole run.  If a table is uploaded wrongly, it can be deactivated by the software team, but this is not something we want to do often.  (In the case of a single table, it may be more efficient to just upload the correct table with a timestamp one second after the incorrect table.)
3) Reminder: this is not a task many people should need to do; it should be limited to one or two "experts", who will be given special DB writing privileges by Dmitry Arkhipkin (arkhipkin@bnl.gov).  


Uploaded tables can be viewed with the online BEMC DB browser.  I find it helpful to spot-check some tables to make sure they have been uploaded correctly.  I find a table with the browser, copy it into an text file, and compare it to the text file I uploaded (in tables/).  For the status tables, this can be done with a simple diff command.  For the pedestal tables, the script checkPeds.C can be used. 

At the beginning of the run (after physics has been declared, L2 is running, etc), it is good to generate a set of ped/stat tables and upload them to the database one second after the initialized values (for example, at 20131220.000001).  (Reminder: The procedure for initializing the timeline with ideal values can be found here.)  This can be done with the bemc*TxtUpload.C scripts; you will see a commented-out line that shows how to set the timestamp manually.  It is good to do this periodically throughout the run, especially if something changes with the detector or beam configuration, so that FastOffline reconstruction can pick up decent DB values.  Each time, the tables should be uploaded one second after the previous tables, so that when we upload the fully-QAed tables at the end of the run, the temporary ones will no longer be picked up. 

4) Relative Gain Calibration with MIPs

Getting Started

1) The code needed to perform the gain calibrations can be checked out from CVS and compiled. In your working directory, do:

cvs co StRoot/StEmcPool/StEmcOfflineCalibrationMaker/
cons

You can move the required files to your working directory, or make soft links. You need the following files, which are in the macros/ directory:
-- bemcCalibMacro.C
-- btow_mapping.20030401.0.txt
-- CalibrationHelperFunctions.cxx
-- CalibrationHelperFunctions.h
-- electron_drawfits.C
-- electron_histogram_maker.C
-- electron_master.C
-- electron_master_alt.C
-- electron_tree_maker.C
-- geant_fits.root
-- mip_histogram_fitter.C
-- mip_histogram_maker.C
-- SubmitCalibJobs.pl
-- runMIPjobs.sh
-- runElecJobs.sh
-- runFinalElecJobs.sh

Some of the files have lists of triggers which need to be hard-coded for each year.  In particular, in bemcCalibMacro.C you should ensure that the correct trigger list is present, and the trigger IDs for the HT triggers need to be written in StEmcOfflineCalibrationMaker.cxx.  (In most cases the Run 11 values have already been typed in, but they are commented out while the Run 9 values are active.)  If you want to use the TOF information (available in Run 11 and beyond), there are some lines of code in StEmcOfflineCalibrationMaker.cxx that need to be uncommented. 

Generating Trees

2) Generate the list of runs with the following command (for Run 9)
get_file_list.pl -keys "runnumber" -distinct -cond "production=P11id,filetype=daq_reco_MuDst,sanity=1,trgsetupname=production2009_200GeV_Single,filename~st_physics,tpx=1,emc=1,storage!=HPSS" -limit 0 | sort -u > runlist.txt

or similar for other years.

3) Run SubmitCalibJobs.pl, which will execute the bemcCalibMacro.C macro.  Make sure that the correct catalog query lines are commented in/out in the submit script.  Also ensure that the appropriate directories exist ($workingDir, $schedDir, $outDir, $logDir, $scriptDir) as specified at the top of the submit script.

This macro creates trees of primary tracks which will be analyzed further for the calibration.  For each primary track we write out the track information from the TPC, the EMC information for the 3x3 tower cluster around the track, the TOF information (in Run 11 and beyond), and the trigger information. 

The trees created by this step are stored on HPSS here:
Run 9: /home/aohlson/bemcCalib2009_x.tar where x=0,...,9
Run 11: /home/aohlson/bemcCalib2011_05_x.tar where x=0,...,14 and /home/aohlson/bemcCalib2011_07_x.tar where x=0,...,18

MIP calibration

The relative gain calibration is obtained by finding the MIP peak in each of the 4800 BEMC towers. 

The MIP energy deposit has the following functional form, which was determined from test beam data and simulations:
MIP = (264 MeV) × (1 + 0.056 η²) / sin(θ)

From this expression we can calculate a calibration constant
C = 0.264 × (1 + 0.056 η²) / (ADC_MIP × sin(θ))
where ADC_MIP is the location of the MIP peak in the ADC spectrum.  This allows us to combine towers at the same η and thus find the absolute gain calibration in each crate-slice using electrons (see next section).
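As an illustration, here is a minimal sketch of this calculation in C++ (the function name and the derivation of θ from the tower η are mine, not part of the calibration macros):

    #include <cmath>

    // Calibration constant (GeV per ADC count) for one tower, following
    // C = 0.264*(1 + 0.056*eta^2) / (ADC_MIP * sin(theta)).
    double mipCalibConstant(double adcMip, double eta) {
      double theta = 2.0 * std::atan(std::exp(-eta));   // polar angle from pseudorapidity
      return 0.264 * (1.0 + 0.056 * eta * eta) / (adcMip * std::sin(theta));
    }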

The procedure for obtaining the MIP calibration is as follows...
4) Make the MIP histograms with runMIPjobs.sh, which executes mip_histogram_maker.C.  Ensure that the correct output filenames are specified in the submit script.

Events with |vz| < 30 cm are selected, and any towers that have multiple tracks associated with them are excluded.  We select tracks with p > 1 GeV/c, and require that they enter and exit the same tower.  We require that the towers surrounding the central tower do not contain a large energy deposition.  We require that ADC-ped > 1.5*pedRMS.  After these track quality cuts we fill histograms with the ADC-ped values for each tower. 
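For reference, a minimal sketch of this selection (the variable names are illustrative, not the actual branch names in the calibration trees):

    #include <cmath>

    // MIP candidate selection as described above; all thresholds follow the text.
    bool isMipCandidate(double vz, double p, int towerEnterId, int towerExitId,
                        int nTracksToTower, double adcMinusPed, double pedRms,
                        bool neighborsQuiet) {
      if (std::fabs(vz) >= 30.0) return false;         // vertex cut (cm)
      if (nTracksToTower != 1)   return false;         // only one track per tower
      if (p <= 1.0)              return false;         // momentum cut (GeV/c)
      if (towerEnterId != towerExitId) return false;   // track enters and exits the same tower
      if (!neighborsQuiet)       return false;         // no large energy in surrounding towers
      if (adcMinusPed <= 1.5 * pedRms) return false;   // signal above pedestal
      return true;
    }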

5) Make a list of the output files from step (4) called mips.list.  Run mip_histogram_fitter.C

We fit each histogram with a gaussian on a pedestal; the histograms and fits are shown in mip.pdf. If the fit values fail basic quality cuts (such as if the mean is < 5), then the tower is assigned a bad status (!=1).  These fits are marked as red in mip.pdf.  For each tower we record the mean and sigma of the gaussian fit, and the status.
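A minimal ROOT sketch of such a fit (the concrete functional form and fit range here are assumptions for illustration; mip_histogram_fitter.C is the authoritative implementation):

    #include "TH1.h"
    #include "TF1.h"

    // Fit one tower's pedestal-subtracted ADC spectrum with a Gaussian on a
    // flat pedestal term and return a simple good (1) / bad (0) status.
    int fitMipPeak(TH1* h, double& mean, double& sigma) {
      TF1 f("mipFit", "gaus(0) + pol0(3)", 5.0, 100.0);   // assumed fit range
      f.SetParameters(h->GetMaximum(), 20.0, 10.0, 1.0);  // rough initial guesses
      h->Fit(&f, "RQ");                                   // R: use range, Q: quiet
      mean  = f.GetParameter(1);
      sigma = f.GetParameter(2);
      return (mean > 5.0) ? 1 : 0;   // status != 1 marks the tower as bad
    }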

6) Check mip.pdf by eye to look for any other towers which were obviously bad.  Write a function like isBadTower2009(int id) (see examples in the code) which identifies these bad towers so that they can be assigned a bad status.  You can either put this function in mip_histogram_fitter and re-run it, or you can put it in the electron codes.  Note that most of the towers with bad MIP peaks were marked as bad (cold/hot/stuck bit) towers when the status/pedestal tables were originally computed. 

5) Absolute Gain Calibration with Electrons

Electron calibration

The absolute gain calibration is done by identifying electrons and finding the E/p peak for small groupings of towers.  It is desirable to find the E/p peak for groupings that are as small as possible.  In 2006 the calibration was done in rings in η, while in 2009 it was done for "crate-slices", which are groups of 8 towers in the same crate and the same η ring. 

The procedure is as follows...

7) Make the electron trees with runElecJobs.sh, which will execute electron_tree_maker.C.  Ensure that the correct output filenames are specified in the submit script.

The macro electron_tree_maker.C makes slimmer trees of electron candidates which satisfy the following criteria:
-- event vertex |vz| < 60
-- track must come from reconstructed vertex (ranking >= 0)
-- 1.5 < p < 20 GeV/c
-- nhits >= 10
-- matched tower status = 1
-- dE/dx > 3e-6
-- ADC-ped > 1.5*pedRMS
In this macro, if the electron track points towards an HT trigger tower then it is assigned htTrig = 2.

8)
In the old calibration: Make a list of the output files from step (7) called electrons.list.  Run electron_master.C.
---- OR ----
In the new calibration: Run runFinalElecJobs.sh, which executes electron_master_alt.C (make sure that the input/output filenames and directories are correct).  hadd the resulting output files, and use the merged file as the input to electron_drawfits.C.

In this macro even more stringent cuts are placed on the electron candidates:
-- ADC-ped > 2.5*pedRMS
-- track must enter and exit the same tower
-- p < 6 GeV/c
-- track does not point towards a tower which fired the HT trigger (htTrig != 2)
-- dR < 0.025 (distance from the center of the tower)
-- dE/dx > 3.4e-6
-- the maximum Et in the 3x3 cluster of towers must be in the central tower
-- there are no other tracks pointing to the central tower

The resulting histograms of E/p in each crate-slice are drawn and fit with a Gaussian plus a first-order polynomial.  If the calibration is already correct, then the E/p peak should be at 1.  The deviation from unity establishes the absolute gain calibration which, combined with the relative gain calibration from the MIP procedure, defines the overall BEMC gain calibration.
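A minimal sketch of such a fit and of how the resulting scale factor would be applied (the fit range and starting parameters are assumptions; electron_drawfits.C and electron_master_alt.C contain the actual implementation):

    #include "TH1.h"
    #include "TF1.h"

    // Fit the E/p distribution of one crate-slice with a Gaussian plus a
    // first-order polynomial and return the factor to apply to the MIP-based
    // calibration constants of that slice.
    double electronScaleFactor(TH1* hEoverP) {
      TF1 f("epFit", "gaus(0) + pol1(3)", 0.3, 1.7);
      f.SetParameters(hEoverP->GetMaximum(), 1.0, 0.15, 0.0, 0.0);
      hEoverP->Fit(&f, "RQ");
      double peak = f.GetParameter(1);   // E/p peak position
      return 1.0 / peak;                 // E/p < 1 means the gains are low: scale the constants up
    }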

Run 12 BTOW Calibration

 This is the parent page that will hold all information about the Run 12 BEMC gain calibration

200 GeV Calibration

 This calibration was done with the 200 GeV proton-proton data collected in 2012. The calibration trees are backed up on HPSS and can be found here:

/home/jkadkins/bemcCalibrations/2012/pp200/bemcCaibTrees2012-0XX.tar

where XX = 0, 1, 2, .... , 34

Run 3 BTOW Calibration

I plan on "Drupalizing" these pages soon, but for now here are links to Marco's slope calibration and Alex's MIP and Electron calibrations for the 2003 run:

Marco's tower slope calibration

Alex's MIP calibration

Alex's electron normalization

Run 4 BTOW Calibration

Introduction:

The recalibration of the BEMC towers for Run 4 includes the following improvements:

  • recovery of 158 swapped towers
  • identification and removal of 38 towers with possible light leakage / electronics problems
  • identification and removal of 24 towers with bad p/E distributions
  • MIP calibration restricted to low-multiplicity events from minimum-bias data
  • isolation requirement imposed on MIP candidates

Notes:

  • p0 * 4066 = 28.5 GeV full-scale at zero rapidity (assuming pedestal~30).
  • 2191/2400 = 91.3% of the towers have nonzero gains.
  • Despite the differences in the cuts used, the final gains are similar to the ones currently found in the DB; a histogram of (newgain-dbgain)/newgain for the towers present in both calibrations yields a mean of 0.019 and an rms of 0.03896.

Procedure:

The offline calibration of the BEMC towers for Run 4 is accomplished in three steps. In the first step, MIPs are collected for each tower and their pedestal-subtracted ADC spectra are plotted. The MPV of the distribution for each tower is translated into a gain using an equation originally established by test-beam data (SN 0433). In the second step, electrons are collected for each eta-ring and the ratio of their momentum and energy (with the energy calculated using the MIP gains from step 1) is plotted as a function of the distance between the track and the center of the tower. The calculated curve is fit to a GEANT simulation curve, allowing extraction of scale factors for the MIP gains in each eta ring. Finally, all electrons in all eta-rings are grouped together and the ratio of their energy and momentum (E/p) is plotted, with the energy calculated from the rescaled gains in the second step. The distribution is fit with a Gaussian and a scale factor is applied so that this Gaussian is centered exactly on 1.000.

Catalog query:

<input URL="catalog:star.bnl.gov?production=P05ia||P05ib||P05ic,sanity=1,tpc=1,emc=1, trgsetupname=ProductionMinBias||productionLow||productionMid||productionHigh,filename~physics, filetype=daq_reco_mudst,magscale~FullField,storage!~HPSS,runnumber>=5028057" nFiles="all" />

Working directories:

/star/u/kocolosk/emc/offline_tower_calibration/2004/jan25_2004mip/

/star/u/kocolosk/emc/offline_tower_calibration/2004/jan30_2004electron/

MIP Cuts:

  • track momentum > 1
  • track enters, exits same tower
  • 1 track / tower
  • (ADC - ped) > 2*pedRMS
  • trigger == 15007 (mb data)
  • abs(z-vertex) < 30
  • reference multiplicity < 57 (60-100% centrality)
  • isolation cut (all neighboring towers satisfy (ADC-ped) < 2*pedRMS)

Electron Cuts:

  • 1.5 < track momentum < 20
  • track enters, exits same tower
  • 1 track / tower
  • # track points > 25
  • 3.5 < dEdx < 4.5 keV/cm
  • (ADC - ped) > 2*pedRMS
  • trigger != 15203 (excludes most ht-triggered electrons; should have been trigger == 15007 to get mb-only data)
Adam Kocoloski, 14 Feb 2006

 

Run 5 BTOW Calibration

Introduction

The final BTOW calibration for Run 5 offers the following improvements over previous database calibrations:

  • recovery of 194 swapped towers
  • identification and exclusion of 51 towers with correlated firing problems
  • identification and exclusion of 33 towers with p/E~0.6
  • exclusion of 58 towers with PMTs replaced by Stephen and Oleg during the shutdown
  • isolation cut removes background from MIP spectra, identifies correlated towers
  • 30 cm vertex cut introduced to reduce path length differences among MIPs
  • uniform scale factor (1.04613) introduced after eta ring electron normalization to set E/p==1 when integrated over all towers

p0 = 0.00696912 for the fit parameter implies an <ET> = 28.3 GeV on the west side, but the gains do not seem to be well described by the 1/sin(theta) fit

3997/4240 = 94.3% of commissioned towers have nonzero gains

Procedure

The offline calibration of the BEMC towers for Run 5 is accomplished in three steps. In the first step, MIPs are collected for each tower and their pedestal-subtracted ADC spectra are plotted. The MPV of the distribution for each tower is translated into a gain using an equation originally established by test-beam data (SN 0433). In the second step, electrons are collected for each eta-ring and the ratio of their momentum and energy (with the energy calculated using the MIP gains from step 1) is plotted as a function of the distance between the track and the center of the tower. The calculated curve is fit to a GEANT simulation curve, allowing extraction of scale factors for the MIP gains in each eta ring. Finally, electrons in all eta-rings that pass through the center of a tower are grouped together and the ratio of their energy and momentum (E/p) is plotted, using the energy calculated from the rescaled gains in the second step. The distribution is fit with a Gaussian and a scale factor is applied so that this Gaussian is centered exactly on 1.000.

Catalog query:

<input URL="catalog:star.bnl.gov?production=P05if,tpc=1,emc=1, trgsetupname=ppProduction||ppTransProduction||ppProductionMinBias, filename~st_physics,filetype=daq_reco_mudst,storage!~HPSS" nFiles="all" />

Working directories:

/star/u/kocolosk/emc/offline_tower_calibration/2005/feb08_2005/
/star/u/kocolosk/emc/offline_tower_calibration/2005/feb08_earlyfiles/

The 2005 directory contains jobs run using Dave Relyea's offline pedestals (the bulk of the data), while the earlyfiles directory uses online pedestals for runs before April 26th that are not included in the offline pedestal calculations.

MIP Cuts:

  • track momentum > 1
  • track enters, exits same tower
  • 1 track / tower
  • (ADC - ped) > 2*pedRMS
  • abs(z-vertex) < 30
  • isolation cut (all neighboring towers satisfy (ADC-ped) < 2*pedRMS)
  • no trigger selection (previous statement of mb-only triggers was in error)

Electron Cuts:

  • 1.5 < track momentum < 20
  • track enters, exits same tower
  • 1 track / tower
  • # track points > 25
  • 3.5 < dEdx < 4.5 keV/cm
  • (ADC - ped) > 2*pedRMS
  • no trigger selection

For More Information:

Detailed information on this and other BTOW calibrations, including tower-by-tower MIP and p/E spectra and a summary of outstanding issues, is available at

http://www.star.bnl.gov/protected/spin/kocolosk/barrel_calibration/saved_tables/

The calibration summarized here has the timestamp 20050101.000001.

First Calibration using CuCu data

Runs used: 6013134 (13 Jan) - 6081062 (22 Mar)

The procedure for performing the relative calibration can be divided into the following steps:

  1. Create a histogram of the pedestal-subtracted ADC values for minimum-ionizing particles in each tower
  2. Identify the working towers (those with a clearly identifiable MIP-peak)
  3. Use the peak of each histogram together with the location of the tower in eta to calculate a new gain
  4. Create new gain tables and rerun the data, this time looking for electrons
  5. Use the electrons to establish an absolute energy scale for each eta-ring

Using Mike's code from the calibration of the 2004 data, an executable was created to run over the 62GeV CuCu data from Run 5, produce the 2400 histograms, and calculate a new gain for each tower. It was necessary to check these histograms by hand to identify the working towers. The output of the executable is available as a 200 page PDF file:

2005_mip_spec.pdf (2400 towers - 21.6 MB)

Towers marked red or yellow are excluded from the calibration. Condensed PDFs of the excluded towers are available at the bottom of the page ("bad" and "weird"). In all, we have included 2212 / 2400 = 92% of the towers in the calibration.

Systematic Behavior of the Gains

The first plot in the top left shows the excluded towers (white blocks) in eta-phi space.

In the second plot we collect the towers into 20 eta-rings (delta-eta = 0.05) and look at the change in gain as we move out in eta. This plot shows the expected increase in gain as we move into the forward region (except for eta-rings 19 and 20).

Finally we look at each individual eta-ring for systematic variations in azimuth. There appears to be some structure around phi=0.2 and phi=3.

Electron Calibration Status

We have made an attempt to establish the absolute energy scales using electrons. Unfortunately, there does not appear to be a sufficient number of electrons in the processed 62GeV data to do this for each eta-ring. We will revisit this later when more statistics are available, but in the meantime we have established the following workaround:

  1. Fit the average gains of the first 17 eta-rings with a function that goes as 1/sin(theta)
  2. Calculate the expected gains for eta-rings 18, 19, and 20 from this function
  3. Scale the forward eta-rings accordingly

The results of this procedure are seen in an updated plot of gain vs. eta:

Here are new bemcCalib and bemcStatus .root files created using these rescaled gains:

bemcCalib.20050113.053235.root

bemcStatus.20050113.053235.root

mip.20050113.000000.txt

The text file is a list of tower-gain pairs. A gain of zero indicates a tower that has been masked out. Finally, we have a plot of p/E summed over all eta:

Update: Addition of the East BEMC Towers

We have calculated new gains for the 1200 towers in the east half of the barrel that were turned on by March 22. Only data from March 22 were used in this calibration. The results can be found in 2005_mip_spec_newtowers.pdf

bemcCalib.20050322.000000.root

bemcStatus.20050322.000000.root

mip.20050322.000000.txt

The gains for towers 1-2400 are the same as before. These files include those entries, as well as new gains for towers 2401-4800. No attempt has been made to get an absolute energy scale for the east towers, so we still see the drop in gain for large eta:



Implementation of Tower Isolation Cut

Originally posted 11 October 2005

Previously we had calibrated the BEMC using data from all triggers. We now have enough data to restrict ourselves to minimum-bias triggers. Additionally, we have implemented a cut that requires the pedestal-subtracted ADC value of neighboring towers to be less than twice the width of the pedestal. This cut does an excellent job of removing background, especially in the high-eta region:

The cut is used in the bottom plot. Unfortunately there is also a small subset of towers where the isolation cut breaks the calibration:

Again the cut is used in the bottom plot. We have decided to use the isolation cut when possible, and use the full minimum-bias data in the case of the (55) towers for which the cut breaks the calibration. This new calibration is able to recover ~30 towers that were previously too noisy to calibrate. We are in the process of reviewing the remaining bad towers found in 2005_MinBiasBadTowers.pdf. Towers 671,1612, and 4672 appear to have problems with stuck bits in the electronics, and there are a few other strange towers, but the majority of these towers just seem to have gains that are far too high. We plan on reviewing the HV settings for these towers. An ASCII file of the bad towers is attached below.

Recalibration Using pp data

Originally posted 26 September 2005

Catalog query:
<input URL="catalog:star.bnl.gov?production=P05if,tpc=1,emc=1,trgsetupname=ppProduction,filename~st_physics,filetype=daq_reco_mudst,storage=NFS" nFiles="all" />

note: 3915/4240 = 92.3% of the available towers were able to be used in the calibration. Lists of bad/weird towers in the pp run are available by tower id:
bad_towers_20050914.txt
weirdtowers_20050914.txt

The relative calibration procedure for pp data is identical to the procedure we used to calibrate the barrel with CuCu data (described below). The difference between pp and CuCu lies in the electron calibration. In the pp data we were able to collect enough electrons on the west side to get an absolute calibration. On the east side we used the procedure that we had done for the west side in CuCu. We used a function that goes like 1/sin(theta) to fit the first 17 eta rings, extrapolated this function to the last three eta rings, and calculated the scale factors that we should get in that region. Here is a summary plot comparing the cucu and pp calibrations for the 2005 run:

(2005_comparison.pdf)

Comparing the first two plots, one notices immediately that many of the pp gains on the east (eta < 0) side are quite high. A glance at the last plot on the summary canvas reveals a collection of towers with phi < -pi/2 that have unusually high gains. These towers were added during the pp run and hence their gains are based on nominal HV values without any iteration (thanks Mike, Stephen). If we mask out those towers in the summary plot, the plots look much more balanced:

(old_towers_only.pdf)

The bottom plot clearly shows that the final pp gains are ~10% higher than the gains established from the mip calibration using the cucu data. Moreover, the cucu data closely follows a 1/sin(theta) curve, while the curve through the pp data is flatter. This is revealed in the shape of the bottom plot. Note that we never scaled the gains near eta=-1 for the cucu data, which is why there is still a significant drop-off in the ratio of the mean gains in that region.

New calibration and status tables (including the gains for the new towers that were turned on during pp running) are available at

bemcCalib.20050322.000001.root

bemcStatus.20050322.000001.root

final.20050926.000000.txt

It should be noted that the scale factors we used to get the final gains near eta=-1 on the east side were calculated without the new towers since they were throwing off the fit function. However, those new towers still had their final gains scaled before we produced the calibration and status tables.

Systematic Uncertainty Studies

In the 2003+2004 jet cross section and A_LL paper we quoted a 5% systematic uncertainty on the absolute BTOW calibration.  For the 2005 jet A_LL paper there is some interest in reducing the size of this systematic.

I went back to the electron ntuple used to set the absolute gains and started making some additional plots.  Here's an investigation of E_{tower} / p_{track} versus track momentum.  I only included tracks passing directly through the center of the tower (R<0.003) where the correction from shower leakage is effectively zero.

Full set of electron cuts (overall momentum acceptance 1.5 < p < 20.):

dedx>3.5 && dedx<4.5 && status==1 && np>25 && adc>2*rms && r<0.003 && id<2401

I forgot to impose a vertex constraint on these posted plots, but when I did require |vz| < 30 the central values didn't really move at all.




Here are the individual slices in track momentum used to obtain the points on that plot:







Electrons with momentum up to 20 GeV were accepted in the original sample, but there are only ~300 of them above 6 GeV and the distribution is actually rather ugly.  Integrating over the full momentum range yields an E/p measurement of 0.9978 +- 0.0023, but as you can see the contributions from individual momentum slices scatter around 1.0 by as much as 4.5%.

Next Steps?  -- I'm thinking of slicing versus eta and maybe R (distance from center of tower).

Run 6 BTOW Calibration

Introduction

This is not the final calibration for the 2006 data, but it's a big improvement over what's currently in the DB.  It uses MIPs to set relative gains for the towers in an eta ring, and then the absolute scale is set by electron E/p.

Isolation Cut Failures

This is a problem that we encountered in 2005, where several pairs of towers had good-looking spectra until the isolation cut was applied, and then quickly lost all their counts.  Well, all of the towers that I had tagged with this problem in 2005 still have it in 2006, with one exception.  Towers 1897 and 1898 seem to have miraculously recovered, and now towers 1877 and 1878 appear to have isolation failures.  Perhaps this is a clue to the source of the isolation failures?

Dataset:

everything I could find on nov07:

<input URL="catalog:star.bnl.gov?production=P06ie,sanity=1,tpc=1,emc=1,trgsetupname= ppProduction||ppProductionTrans||ppProductionLong||pp2006MinBias||ppProductionJPsi||ppProduction62||ppProductionMB62, filename~st_physics,filetype=daq_reco_mudst" nFiles="all" />

Cuts:

Same as 2005.  All MIP fits are basic Gaussians over the range 5..250 ADC-PED.  Electron fits are Gaussian + linear in a very crude attempt to estimate hadronic background.

Working Directory:

/star/u/kocolosk/emc/offline_tower_calibration/2006/nov07/

Plots:

The gains look pretty balanced on the east side and west side.  Note that I didn't multiply by sin(theta), so we expect an eta-dependence here.  The widths plot is interesting because it picks out one very badly behaved crate on the east side at phi=0. I believe it is 0x0C, EC24E (towers 4020-4180).  The tower fits are attached below if you're interested.  Bad towers are marked in red.


Electron Normalization:

E/p plots for all 40 eta-rings (first 20 on west side, 21-40 on east side) are attached below.  In general, the electrons indicate a 9-10% increase in the MIP gains is appropriate.  In the last two eta rings on each side, that number jumps to 20% and 40%.  This is more or less consistent with the scale factors found in the 2003 calibration (the last time we used a full-scale energy of 60 GeV).  If I scale the MIP gains and plot the full-scale E_T I get the plot on the left.  Fitting the eta dependence with a pol0 over the middle 36 eta rings results in a ~62 GeV scale and a nasty chi2.


So after scaling with the electrons it looks like we are actually a couple of GeV high on the absolute scale.  I'll see if this holds up once I've made the background treatment a little more sophisticated there.  I also have to figure out what went wrong with the electrons out at the edges.  I didn't expect E/p would be that sensitive to the extra material out there, but for some reason the normalization factors out there are far too large.  Next step will be to compare this calibration to one using electrons exclusively.

 

Calibration Uncertainty:

Here are some links that address different parts of the calibration uncertainty that are not linked from this page:

drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/run6/btow/calibration-uncertainty-calculation

drupal.star.bnl.gov/STAR/blog-entry/mattheww/2009/apr/09/2006-calibration-uncertainty

drupal.star.bnl.gov/STAR/blog-entry/mattheww/2009/apr/27/crate-systematic-2006

eta dependence

Online Work

Steve's calibrations page contains most of the details:

http://www.star.bnl.gov/protected/spin/trent/calibration/calibration.html

Some files of slopes, etc. that I produced are currently stored at

http://web.mit.edu/kocolosk/www/slopes/

MIP check on 2006 Slope Calibration

2006_mip-.pdf

2006_mip.txt


~300k events processed using fastOffline:
Run 7079034 ~189K
Run 7079035 ~115K


Number of events with nonzero primary tracks = 109k / 309k = 35%

Still a few problems with pedestal widths in the database, although now they appear to be restricted to id > 4200. If I don't cut on adc>2*rms, the software id distribution of MIPs looks pretty isotropic:

The distribution of primary tracks also looks a lot better:

I was able to calculate MIP gains for each of the 40 eta-rings. The plot at the top fits a line to the full-scale transverse energies extracted from the gains (excluding the outer two eta-rings on each side). For the error calculations I used the error on the extraction of the mean of the MIP ADC peak and propagated this through the calculation. This is not exactly correct, but it's a pretty close approximation. In a couple of cases the error was exceedingly small (10^-5 ADC counts), so I forced it to an error of 1 GeV (the fit failed if I didn't).

As you can see in the text file, an error of 1 ADC in the MIP peak leads to an error of 3 GeV in the full-scale transverse energy. Therefore I would say that pedestal fluctuations (1-2 ADCs) give an additional error of 5 GeV to my calculations, which means that the majority of these eta-rings are consistent with a 60 GeV full-scale.

Conclusions:

  • A straight-line fit yields 56 GeV as the average full-scale transverse energy of the towers
  • We have enough stats; errors are dominated by pedestal fluctuations leading to a 5 GeV uncertainty
  • Tower response is flat in E_t across eta-rings (excluding last two rings where MIPs fail)
  • Good job Stephen and Oleg!

The First Attempt

Note: The first attempt at this analysis was plagued by poor TPC tracking and also problems with corrupted BTOW pedestal widths in the database. I'm including the content of the original page here just to document those problems.

250k events processed using fastOffline:

Run 7069023 100K
Run 7069024 100K
Run 7069025 50K

Number of events with nGlobalTracks > 0 = 30166 (12%)

On the left you see the software id distribution of slope candidates (adc-ped>3*rms, no tracking cuts). There's a sharp cutoff at id==1800. But before you go blaming the BEMC for the missing MIPs, the plot on the right shows eta-phi distribution of global tracks without any EMC coincidence. The hot region in red corresponds to 0<id<1800:

As it turns out, the missing slope candidates are likely due to wide pedestals. The pedestal values look fine, but if I plot pedrms for id<1800 and id>1800 using the slope candidates that did survive:

Is it possible that the TPC track distribution is connected to this problem?

2006 Gain uniformity

Using the gains we calculated for 2006 tower by tower from the MIPs and then corrected with the electron eta rings, I calculated how they differed from the ideal gain assuming containment of an EM shower with 60 GeV ET. After removing bad towers, we can fit the distribution of this ratio to a gaussian and we find there is approximately a 6% variation in the gains.
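For reference, a minimal sketch of the "ideal gain" used in this comparison, assuming it corresponds to E_T = 60 GeV at the top of the ADC scale (~4095 - pedestal), as stated elsewhere on these pages; the function name and details are mine:

    #include <cmath>

    // Ideal calibration constant (GeV per ADC count) for a tower at pseudorapidity eta,
    // chosen so that a full-scale ADC corresponds to E_T = 60 GeV.
    double idealGain(double eta, double pedestal) {
      double theta = 2.0 * std::atan(std::exp(-eta));
      double fullScaleEnergy = 60.0 / std::sin(theta);   // E = E_T / sin(theta)
      return fullScaleEnergy / (4095.0 - pedestal);
    }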

 

Calibration Uncertainty Calculation

Summary:

The uncertainty on the 2006 BTOW Calibration is 1.6%. This value is the combination of a 1.3% overall uncertainty and a 0.9% uncertainty caused by variations in the different crates. This uncertainty should be treated as a measure of the bias in the 2006 Calibration.

Plan:

Attached is a document describing how the calibration uncertainty for 2006 will be calculated:

  • Eliminate dependence on simulations through tighter fiducial volume cuts
  • Reduce trigger bias by explicitly avoiding electrons matched to trigger HTs or TPs
  • Validate modeling of E/p backgrounds. Confirm that the fit is unbiased by checking
    consistency of low-background and high-background samples.
  • Confirm that crate timing does not systematically bias the energy reconstruction.

The uncertainty on the calibration will be assigned as the maximum between |E/p −1.0| and the uncertainty on the peak position.
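As a minimal sketch of this prescription (the function and variable names are mine, for illustration only):

    #include <algorithm>
    #include <cmath>

    // Uncertainty assigned to the calibration: the larger of the deviation of the
    // E/p peak from unity and the fit error on the peak position.
    double calibrationUncertainty(double epPeak, double epPeakError) {
      return std::max(std::fabs(epPeak - 1.0), epPeakError);
    }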

Method:

We did some initial studies to determine the magnitude of each of these effects, and then we generated calibration trees covering the entire 200 GeV pp run from 2006. The code used to generate these trees is available in StRoot/StEmcPool/StEmcOfflineCalibrationMaker.

We made the following cuts on the tracks to select good electrons and an unbiased sample.

List 1:

  • 3.5 < dEdx < 4.5
  • abs(vertexZ) < 30
  • 1.5 < p < 15 GeV
  • dR (from tower center to track projection) < 0.004 (in units of deta,dphi)
  • tower_id == tower_id_exit of projection
  • Energy of highest neighbor < 0.5 * track energy

After making these cuts, we fit the remaining sample to a gaussian plus a first order polynomial, based on a study of how to fit the background best.

Figure 1 uses an isolation cut to find a background rich sample to fit:

Figure 2 shows the stability of the E/p location (on the y-axis) between our fit and just a gaussian for different windows in dEdx (x-axis)

Figure 3 shows the E/p location (y-axis) for different annuli in dR (x-axis/1000), which motivated our dR cut to stay in a flat region:

After making all of these cuts, we fit E/p to the entire sample of all our electrons. We then add different cuts based on the trigger information to see how that might affect the bias. We looked at four scenarios:

List 2:

  1. All electrons after stringent cuts
  2. Electrons from events that were NOT HT/HTTP triggered
  3. Electrons from events that were only HT/HTTP triggered
  4. Electrons from HT/HTTP events with trigger turnon tracks (4.5 < p < 6.5 GeV) removed

From these scenarios we chose the largest deviation from E/p = 1.0 as the overall uncertainty on the calibration. This happens to be scenario 3, working out to 1.3%.

Figure 4: E/p for different scenarios

 We also observed a possible crate systematic by fitting E/p for each crate separately.

Figure 5 E/p for each crate:

 According to the chi^2, there is a non-statistical fluctuation. To figure out how large it is, we compared the RMS of these points to the RMS obtained when the data are randomly put into 30 partitions. It turns out that all of it is due to that one outlier, crate 12. Since crate 12 contributes 1/15 to each eta ring that it touches, the deviation of this point from the fit causes an uncertainty of 0.9%. This additional uncertainty increases the total uncertainty to 1.6%.

Side Note - Linearity:

After removing HT/HTTP events, we took a look at this plot of p (y-axis) vs E/p (x-axis). By eye, it looks pretty flat, which we verified by splitting into p bins.

Figure 6 p vs E/p

Figure 7 E/p vs p

 Eta Dependence:

There was some question about whether there was eta dependence. This was investigated and found to be inconsequential: http://drupal.star.bnl.gov/STAR/node/14147/

Figure 8: Divided the sample into 3 separate time periods. Period 1 is before run 7110000. Period 2 is between runs 7110000 and 7130000. Period 3 is after run 7130000. The deviations are below 1.6%.

Figure 9: Agreement between east and west barrel:

 Figure 10: ZDC Rate vs. energy/p

Figure 11: E/p fits for three different regions in ZDC rate: 0 - 8000, 8000-10000, 10000-20000 Hz.

Comparison of Electron and MIP Calibrations

This page compares the new tower calibration performed using only electron E/p to the calibration using last year's algorithm.  The two calibrations are found to be consistent within 120 MeV in E_T.



I've also attached the tower-by-tower plots of electron E/p so you can see the results for yourself.  I'll write up a more complete description of the calibration algorithm shortly.

Comparison of Online and Offline Calibrations

Introduction:

This page compares the online and first offline calibrations.  The online calibration table was generated during data-taking using a single long run processed through fastOffline production and uploaded on March 30th.  It uses slopes to set the relative gains in an eta-ring and then normalizes the eta-rings using MIPs.  The first offline calibration uses a significant fraction of the produced transverse and late longitudinal runs.  It sets the relative gains using MIP peaks and then uses electron E/p to set the absolute scale.  It was uploaded on December 7th.

Body Counts:

138 additional towers are masked in this first offline calibration, leaving 4517 good towers.  1 tower (2916) was masked before but is now listed as OK.  To be honest, I have no idea why it was masked in the online calib; its slope looks fine to me.

Plots:

The electron E/p scaling in the offline calibration increased the gains by an approximately uniform 10 percent (more at the edges).  This effect is seen in the following plot of offline E_T - online E_T, integrated over all towers that were good in both calibrations:


Now, the interesting thing is the relative changes of offline-online for the east side and the west side.  If I only plot the location of towers whose gains increased by more than 20% I get




There were only 12 towers whose gains decreased by 20%; all of them were on the west side. Finally here's a plot of the E_T change of the remaining towers:



I think the message here is clear:  the gains on the east side have increased more than the gains on the west side!  It's possible that the use of the online calibration in previous Run 6 jet studies is at least partially responsible for the observed east-west jet asymmetry.

To get quantitative about this effect we have to go to 1D.  I've attached a PDF of eta-ring by eta-ring histograms like the first one on this page.  The first two pages are the east side; the next two are the west side.  I've found it easiest to analyze if you set your Reader to view two pages at a time; then you'll be comparing towers with the same absolute value of pseudorapidity when you flip.  The conclusion is pretty clear: at midrapidity the difference in offline - online E_T floats around 5 - 8 GeV on the east side, but it's only about 2 - 5 GeV on the west side.

Gain Stability Check

I've updated my codes to do a more systematic investigation of the stability of the gains.  Instead of trying to get sufficient tower-by-tower statistics for different time periods, I'm looking at MIP peaks for single runs integrated over all towers.  Here's the plot:


Features of note include the bump covering the first couple of days after the shutdown, the bunch of runs on day 123/4 with very low average peaks, and the general decreasing slope (consistent with towers losing high voltage).  I also ran this plot for west and east separately:



I'm running jobs now to do electron selection instead of MIPs.  I think that I can probably still do this as a function of run, but certainly I'll have sufficient stats to  plot vs. fill if necessary.

Goal:  Compare the tower slopes and MIP peaks from the following three periods to check for stability.
  1. Day < 104 (97-99)
  2. 104 < Day < 134 (114-119)
  3. Day > 134 (134-139)
I just chose those periods arbitrarily.  I still have to generate the slope histograms for the longitudinal running, but of course I can extend the study to do that if it proves interesting.  Now, I filled these histograms without doing any trigger selection or status table cuts, so I wouldn't expect them to be the cleanest things in the world.  Consider it a "first look", if you like.  Anyway, here are gaussian fits to histograms of (slopeA-slopeB)/slopeA for all 4800 towers for each of the three combinations.  We see a 2.7% shift towards flatter slopes between days 97-99 and 114-119.  Note that this period includes the 10 day accident shutdown (104-114).  The slopes spread out quite a bit between periods 2 and 3, and there's also a small 1% shift towards flatter slopes.



The next set of plots compare gains extracted from MIP peak positions where the MIP peaks are generated using subsets of the Run.  Comparing before and after the shutdown yields a mean difference of 110 MeV with a width of 3 GeV.  This difference is significantly less than 1 percent.  The comparison between middle and late (essentially a comparison between transverse and late longitudinal running) indicates a 1.5 percent drop in the gains with a 2.3 GeV sigma.

Run 7 BTOW Calibration

Online Work

Steve T's page:

http://orion.star.bnl.gov/protected/spin/trent/calibration/calibration.html

First Iteration

March 27, 2007

Steve and Oleg took 600k fast events in runs 8086057, 8086058, and 8086060 to calculate the tower slopes.  Here's a summary plot of tower slopes vs. eta:


I've attached two lists of slopes for each tower at the bottom of the page.  The columns are

id -- flag -- slope -- error -- chi2 -- ndf

where the flag is determined by

if(ndf < 30) x                        //empty, stuck bit
else if(chi2 > 200.) ?               //worth a closer look
else if(slope outside 4*RMS) *   //probably needs HV adjustment
else blank                             //channel OK at 4*RMS level
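A minimal sketch of this flag assignment (slopeMean and slopeRMS stand for the mean and RMS of the slope distribution; the names are mine):

    #include <cmath>

    // Assign a quality flag to one tower's slope fit, following the scheme above.
    char slopeFlag(double slope, double chi2, int ndf,
                   double slopeMean, double slopeRMS) {
      if (ndf < 30)                                      return 'x';  // empty channel or stuck bit
      if (chi2 > 200.0)                                  return '?';  // worth a closer look
      if (std::fabs(slope - slopeMean) > 4.0 * slopeRMS) return '*';  // probably needs HV adjustment
      return ' ';                                                     // channel OK at the 4*RMS level
    }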

The file slopes_noflags.txt has the same data as the first file, but it omits all the flag information so that it can be read into a macro easily.  Flagged channels are also listed in red in the mega-PDF

Second Iteration

Same procedure as  First Iteration.  Took 600k events in runs 8089017, 8089019, 8089021.  This time the processing went smoothly and I was able to analyze ~all the events that were taken.  Here's the summary of slopes vs. eta:




I've attached at the bottom of the page lists of slopes for individual towers.  The format is the same as before, although I've adjusted the chi2 cut to 300 because of the additional statistics and I've also adjusted the 4*RMS slope cut to take into account the updated mean and RMS values from the plot above.

Third Iteration

Data gathered from runs 8090021, 8090022, 8090023.  The only change I made to the code was to tweak the parameters of the pedestal fit a little bit, since I noticed a couple of towers where it failed.

Fourth Iteration

Data gathered from Runs 8091003, 8091005, 8091007.  Same analysis codes as in Third Iteration


Fifth Iteration

Runs:  8092080, 8092081, 8092083


Seventh Iteration

Run 8095073, 600k events.

This summary plot highlights swapped towers in red.  As Steve pointed out, we weren't adjusting the HV of these tubes correctly in most of the previous iterations.

Run 8 BTOW Calibration (2008)

 2008 BTOW calibrations 

  1. The BTOW HV settings used in 2008 are in the file 2008_i2a.csv; this file contains a few towers
    (<50) which might have been set to zero at a later date

 

01 statistical analysis of 2008 HV

 Goal: study eta dependence of 2008 BTOW HV


Fig 1 Eta-phi distribution of HV

Formulas used to map tower ID into eta-phi location:
    int id0=id-1;
    int iwe=0; // West/East switch
    if(id>2400) iwe=1;
    id0=id0%2400;
    int iphi=id0/20;
    int ieta=id0%20;
    if(iwe==0) ieta=ieta+20;
    else ieta=19-ieta;

Fig 2 Batch 1, Eta-phi distribution of HV

eta=0.05  mean HV=821 +/- 5  sigHV=60
eta=0.15  mean HV=820 +/- 5  sigHV=60
eta=0.25  mean HV=815 +/- 5  sigHV=62
eta=0.35  mean HV=811 +/- 5  sigHV=66
eta=0.45  mean HV=817 +/- 5  sigHV=66
eta=0.55  mean HV=820 +/- 5  sigHV=57
eta=0.65  mean HV=820 +/- 5  sigHV=59
eta=0.75  mean HV=806 +/- 5  sigHV=62
eta=0.85  mean HV=807 +/- 5  sigHV=58
eta=0.95  mean HV=829 +/- 5  sigHV=65

 


Fig 3 Batch 2, Eta-phi distribution of HV

eta=0.05  mean HV=759 +/- 6  sigHV=56
eta=0.15  mean HV=760 +/- 7  sigHV=62
eta=0.25  mean HV=756 +/- 6  sigHV=53
eta=0.35  mean HV=763 +/- 6  sigHV=55
eta=0.45  mean HV=756 +/- 7  sigHV=58
eta=0.55  mean HV=760 +/- 6  sigHV=54
eta=0.65  mean HV=743 +/- 6  sigHV=52
eta=0.75  mean HV=754 +/- 6  sigHV=54
eta=0.85  mean HV=754 +/- 7  sigHV=64
eta=0.95  mean HV=769 +/- 7  sigHV=59

 


Fig 4 Batch 3, Eta-phi distribution of HV

eta=-0.95  mean HV=730 +/- 4  sigHV=57
eta=-0.85  mean HV=723 +/- 4  sigHV=57
eta=-0.75  mean HV=727 +/- 4  sigHV=59
eta=-0.65  mean HV=718 +/- 4  sigHV=58
eta=-0.55  mean HV=723 +/- 4  sigHV=58
eta=-0.45  mean HV=720 +/- 4  sigHV=56
eta=-0.35  mean HV=727 +/- 4  sigHV=54
eta=-0.25  mean HV=724 +/- 4  sigHV=60
eta=-0.15  mean HV=735 +/- 4  sigHV=59
eta=-0.05  mean HV=729 +/- 4  sigHV=60

 

02 BTOW swaps ver=1.3

Example of MIP peak for BTOW towers pointed to by a TPC MIP track: spectra for all 4800 towers (raw & MIP) are in attachment 1.

 


Method of finding MIP signal in towers:

 Table 1

BTOW swaps, very likely, ver 1.3, based on 2008 pp data, fmsslow-production
 266 --> 286 , 267 --> 287 , 286 --> 266 , 287 --> 267 , 389 --> 412 ,
 390 --> 411 , 391 --> 410 , 392 --> 409 , 409 --> 392 , 410 --> 391 ,
 411 --> 390 , 412 --> 389 , 633 --> 653 , 653 --> 633 , 837 --> 857 ,
 857 --> 837 ,1026 -->1046 ,1028 -->1048 ,1046 -->1026 ,1048 -->1028 ,
1080 -->1100 ,1100 -->1080 ,1141 -->1153 ,1142 -->1154 ,1143 -->1155 ,
1144 -->1156 ,1153 -->1141 ,1154 -->1142 ,1155 -->1143 ,1156 -->1144 ,
1161 -->1173 ,1162 -->1174 ,1163 -->1175 ,1164 -->1176 ,1173 -->1161 ,
1174 -->1162 ,1175 -->1163 ,1176 -->1164 ,1753 -->1773 ,1773 -->1753 ,
2077 -->2097 ,2097 -->2077 ,3678 -->3679 ,3679 -->3678 ,3745 -->3746 ,
3746 -->3745 ,4014 -->4054 ,4015 -->4055 ,4016 -->4056 ,4017 -->4057 ,
4054 -->4014 ,4055 -->4015 ,4056 -->4016 ,4057 -->4017 ,4549 -->4569 ,
4569 -->4549 ,

 Details for all swaps are shown in attachment 2.

 


Other problems found during analysis, not corrected for, FYI

 

03 MIP peak analysis (TPC tracks, 2008 pp data)

I examined runs from 2008 pp (list attached below) to create calibration trees (using StRoot/StEmcPool/StEmcOfflineCalibrationMaker). The code loops over the primary tracks in the event and selects the global track associated with each one, if available. If the track can be associated to a tower and has outer momentum > 1 GeV, it is saved.

I then loop over those tracks and choose the ones that satisfy the following criteria:

  • p > 1
  • adc - ped > 1.5 * ped_rms
  • tower_entrance_id = tower_exit_id
  • only a single track points to the tower
  • neighboring towers have small amounts of energy

We can sum over all runs to produce MIP spectra in 4445 towers. Of those, 4350 pass current QA requirements. An example MIP spectrum with fit is shown below. All MIP peaks can be seen in the attached PDF. It's important to note that the uncertainty on the MIP peak location with these statistics is on average 5%. This number is a lower limit on the calibration coefficient uncertainty that can only be improved with statistics.

 

Fig 1. Typical MIP peak. Plots like this for all 4800 towers are in attachment 1.

The following plot shows MIP peak location in ADCs (in eta, phi space) of all of towers where such a peak could be found.

Fig 2. MIP Peak (Z-axis) for all towers

This plot shows the status codes of the towers. White are the towers that had 0 entries in their histograms. Red are the good towers (pushed off scale to see the rest of the entries). Towers in the outermost eta bin received different treatment: the fit range was [6,100] for those towers (vs [6,50] for the rest), and the cuts applied for QA were slightly loosened. The rest of the codes follow this scheme (a sketch of how the status word is assembled is given after the list):

  • bit 1: entries in histogram > 0
  • bit 2: sigma of fit below threshold (15,20)
  • bit 4: mean of fit above 5 ADCs
  • bit 8: difference of fit mean and MPV in fit range below threshold (5,10)
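A minimal sketch of packing these checks into a status word (the assignment of the looser thresholds, 20 and 10, to the outermost eta bin is my assumption based on the text above):

    // Build the tower status word from the fit results, following the bit scheme above.
    int towerStatus(int entries, double fitSigma, double fitMean, double mpvDiff,
                    bool outerEtaBin) {
      double sigmaMax = outerEtaBin ? 20.0 : 15.0;   // threshold on the fit sigma
      double diffMax  = outerEtaBin ? 10.0 :  5.0;   // threshold on |fit mean - MPV|
      int status = 0;
      if (entries > 0)         status |= 1;   // bit 1: histogram not empty
      if (fitSigma < sigmaMax) status |= 2;   // bit 2: sigma of fit below threshold
      if (fitMean > 5.0)       status |= 4;   // bit 4: mean of fit above 5 ADCs
      if (mpvDiff < diffMax)   status |= 8;   // bit 8: fit mean close to MPV in the fit range
      return status;
    }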

Fig 3. status of towers

 Spectra of towers rejected by the rudimentary automatic QA were manually inspected; 91 were found to contain a reasonable MIP peak and have been re-qualified as good. Plots for those towers are in attachment 2.  

04 relative tower gains based on MIPs (gain correction 1)

 Determination of relative BTOW gains based on the MIP peak, for the purpose of balancing the HV for the 2009 run

Procedure:

  • use fitted 03 MIP peak analysis (TPC tracks, 2008 pp data)
  • find average MIP value for 40 eta bins
  • West barrel: preserve the absolute average MIP peak position; compute a gain correction to equalize all towers at a given eta to the average for this eta
  • East barrel: enforce the absolute average MIP peak to agree with the West barrel; correct relative gains in a similar way.

In this stage of the calibration process 4430 towers with a well visible MIP peak were used. The remaining 370 towers will be called 'blank towers'. There are many reasons a MIP peak does not show up, e.g. dead hardware. ~60 of the 'blank towers' are swaps identified earlier in 02 BTOW swaps ver=1.3 and not corrected in this analysis.  

 

Fig 1. Left: MIP peak (Z-axis) was found for the 4430 towers shown in color. White means no peak was found - those are "blank towers".  Right: eta-phi distribution of blank towers.  On both plots the East (West) barrel is shown on eta bins [1-20] ([21-40]).

The (iEta, iPhi ) coordinates were computed based on softID as follows:

   int jeta= 1+(id-1)%20; // range [1-20]
   int jphi= 1+(id-1)/20; // range [1-240]
   int keta, kphi;
   if(jphi<=120) { // West barrel
      keta=jeta+20; kphi=jphi;
   } else { // East barrel
      keta=-jeta+21; kphi=jphi-120;
   }

 

Fig 2. Average MIP position as a function of eta bin. West-barrel gains are higher on average.

 

Gain corrections (GC1) were computed as
     GC1(iEta,iPhi)= MIP(iEta,iPhi) / avrMip(iEta)

For the East barrel we used values of avrMip(iEta) from the symmetric eta-bins of the West barrel.

If the computed correction was between [0.95,1.05], or if the tower was "blank", GC1=1.00 was used.
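A minimal sketch of this computation (function and variable names are mine):

    // Gain correction GC1 for one tower, following GC1(iEta,iPhi) = MIP(iEta,iPhi) / avrMip(iEta).
    double gainCorrection(double mipPeak, double avrMipEta, bool blankTower) {
      if (blankTower) return 1.00;                 // no MIP peak found for this tower
      double gc1 = mipPeak / avrMipEta;
      if (gc1 > 0.95 && gc1 < 1.05) return 1.00;   // within 5% of the eta-ring average: leave as is
      return gc1;
    }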

Fig 3. Left: distribution of gain corrections GC1(iEta).  Right: value of GC1(iEta,iPhi).

 

The attached spreadsheet contains the computed GC1(softID) in column 'D' together with the MIP peak parameters (columns H-P) for all 4800 towers. Below are just the first 14 towers.

The .C macro used is attached as well.

05 Absolute Calibration from Electrons

We use the MIP ADC location calculated in part 03 to get a preliminary measure of the energy scale for each tower.

The following formula from SN436 provides the calibration coefficient:

C =

where E = C * (ADC - ped).

For the electron sample I select tracks with:

  • 1.5 < p < 6.0 (GeV)
  • dR < 0.0125 (from center of face of tower in eta/phi space)
  • adc - ped > 2.5 * ped_rms in tower
  • nHits > 10
  • nSigmaElectron > -1

Here is one of the E/p distributions. Note the error on the mean is about 2%, which is about average.

 

For each electron we calculate E/p and sum over eta rings. Here is the peak of E/p for each eta bin. The error bars show the error on the mean value, and the shaded bar is the sigma. I fit over the range [-0.9,0.0] and [0.0,0.9] separately. The fit for the east reports the mean is 0.9299 +/- 0.0037. For the west, I get 0.9417 +/- 0.0035.

 

The location of the E/p peak tells us the correction that needs to be applied to the calibration coefficients of towers in each eta ring to get the correct absolute energy.

 

Reference plot (electron showers):

06 Calculating 2009 HV from Electron Calibrations

The ideal gain for each tower satisfies the following relation, where E_T = 60 GeV when ADC is max scale (~4095 - ped):

(1)

We want to calculate the total energy deposited in a tower, which is the following in the ideal case:

(2)

 In reality, we calculate the calibration according to the following formula:

(3)

where there is an electron correction for each eta ring, and the MIP location is determined for all towers.

Thus, to have ideal gains, we want the following relation to hold

(4)

To make this relation true, we need to adjust the high voltage for each tower according to the following relation:

(5)

We can check the changes using slopes.

07 BTOW absolute gains 'ab initio'

 Summary of intended BTOW HV change, February 24, 2009

Conclusion:

We will NOT change the average gain of the West BTOW for eta bins [1,18]. For all other eta bins we will aim at the average West BTOW value, marked as the red dashed line.

 Full presentation is in attachment 1.

Detailed E/P spectra for 40 eta bins are in attachment 2 & 3.

08 Completed Calibration and Uncertainty

The 2008 Calibration has been uploaded into the database. I was able to find calibration coefficients for 4420 towers. Of the remaining towers, 353 were MIPless and 27 had spectra that I could not recover by hand.

This year, unlike in previous years, known sources of bias were removed. If an event had a non-HT trigger, the tracks from that event were used for the calibration (even if it also had an HT trigger). Most of the triggers were FMS-slow-triggered events. Each eta ring was calibrated separately, and then a correction for each crate was applied. After these corrections, I once again made the cuts on the electrons more stringent. No deviation from E/p = 1 was found. With the biases eliminated, we now quote a systematic variance instead of a systematic bias as the uncertainty on the calibration.

The source of this uncertainty is the uncertainties on the various fits used for each calibration, namely the MIP peak for each tower, the absolute calibration for each eta ring, and the correction for each crate. These uncertainties are highly correlated. I have completed this calibration by recalculating the calibration table 3,000 times, varying the MIP peaks, the eta ring corrections and the crate corrections each time. Using this method, I was able to determine the correlated uncertainty for each calibrated tower. This uncertainty is, on average, 5%.

This uncertainty is different than the systematic bias quoted for 2005 and 2006. It is a true variance on the calibration. To use this uncertainty, analyzers should recalculate their analysis using test tables generated for this purpose. The variance in the results of these multiple analyses will give a direct measure of the uncertainty due to the calibration scale uncertainty. These tables will be uploaded to the database with the names "sysNN". They should be used before clustering, jet finding, etc.

Run 9 BTOW Calibration

Parent page for BTOW 2009 Calibrations

01 BTOW HV Mapping

We started at the beginning of the run to verify the mapping between the HV cell Id and the softId it corresponds to. To begin, Stephen took 7 runs with all towers at 300V below nominal voltage except for softId % 7 == i, where i was different for each run. Here are the runs that were used:

softId%7   Run Number   Events
1          10062047      31k
2          10062048      50k
3          10062049      50k
4          10062050      50k
5          10062051      50k
6          10062060     100k
0          10062061     100k

 

We used the attached file 2009_i0.txt (converted to csv) as the HV file. This file uses the voltages from 2008 and has swaps from 2007 applied. Swaps identified were not applied and will be used to verify the map check.

File 2009_i0c.txt contains some swaps identified by the analysis of these runs.

File 2009_i1c.txt contains adjusted HV settings based on the electron/mip analysis with same mapping as 2009_i0c.txt.

Final 2009 BTOW HV set on March 14=day=73,  Run 10073039 , HV file: 2009_i2d.csv  

02 Comparing 2007, 2008 and 2009 BTOW Slopes

We have taken two sets of 1M pp events at sqrt(s)=500 GeV. For future reference, the statistical error on the slopes is typically in the range of 3.7%.  I give a plot of the relative errors in the 5th attachment.

 

  • Run 10066160 HVFile = 2009_i1c.csv (2009 HV)
  • Run 10066163 HVFile = 2009_i0c.csv (2008 HV)

    plus we have

  • 2007 Data HVFile = 2007_i8.csv (2007 HV)

    Comparing the overall uniformity of the gain settings, I give the distribution of slopes divided by the average over the region |ETA|<0.8 for 2007, 2008 and 2009 (First three attachments)

    I find that there is little difference in the overall uniformity of the slopes distribution over the detector...about +/-5%. I also give a plot showing the average slopes in each eta ring for the three different voltage settings in each year: 2007, 2008 and 2009. (4th attachment) As you can see, the outer eta rings were overcorrected in 2008. They are now back to nearly the same positions as in 2007.

    _________________________________________________________________

    Here is my measurement of 'kappa' from 2007 data.  I get from 7-8, depending on how stringent the cuts are.  I think this method of determining kappa is dependent on the ADC fitting range.  In theory, we should approach kappa~10.6 as we converge to smaller voltage shifts.

     

     

     

     

    So which ones should we adjust?  Here is a fit of the slope distribution to a gaussian.

     

     As we can see, the distribution has an excess of towers with slopes >+/-20% from the average.   I checked that these problem towers are not overwhelmingly swapped towers (15/114 outliers are swapped towers; 191/4712 are swapped towers).  Here is an eta:phi distribution (phi is SoftId/20%120, NOT angle!).  The graph on the left gives ALL outliers, the graph on the right gives outliers with high slopes only (low voltage).  There are slightly more than expected in the eta~1 ring, indicating that the voltages for this ring are set to accept fewer particles than the center of the barrel.

     

    ______________________________________________________________________

    So Oleg Tsai and I propose to change only those outliers which satisfy the criteria:

    |slope/ave_slope - 1|>0.2  && |eta|<0.8.  (Total of 84 tubes)

    We then looked one-by-one through the spectra for all these tubes (Run 10066010) and found 12 tubes (not 13 as I stated in my email) with strange spectra:

    Problem Spectra

    Bad ADC Channels: Mask HV to Zero

    We then calculated the proposed changes to the 2009_i1c.csv HV using the exponent kappa=5.  Direct inspection of the spectra made us feel as though the predicted voltage changes were too large, so we recomputed them using kappa=10.48.  We give a file containing a summary of these changes:

    Proposed HV Changes (SoftId SwappedID Slope(i1c) Ave Slope Voltage(i1c) New Voltage)

     

    (We have identified 3 extra tubes with |eta|>0.8 which were masked out of the HT trigger to make it cleaner....we propose to add them to the list to make a total of 87-7 = 80 HV changes.)

     

    PS.  We would also like to add SoftId-3017 HV 800 -> 650.  This makes 81 HV changes.

     

     

     _____________________________________________________________________

    A word about stability of the PMT High Voltage and these measurements.  I compared several pairs of measurements in 2007 taken days apart.  Here is an intercomparison of 3 measurements (4, 5 and 7) taken in 2007.  The slopes have a statistical accuracy of 1.4%, so the distribution of the ratio of two measurements should have a width of sqrt(2)*1.4% = 2%.  The comparison of the two measurements is just what we expect from the statistical accuracy of the slope measurements.

     

     

     

     

     Comparing the two recent measurements in 2009, we have a set of about 600 tubes with a very small voltage change.  The slopes were measured with an accuracy of 3.7%, so the width of the slope ratio distribution should be sqrt(2)*3.7% =5.2%.  Again, this is exactly what we see.  I do not find any evidence that (a large percentage of) the voltages are unstable!

     

     

    ___________________________________________________________________

    03 study of 2009 slopes (jan)

     The purpose of this study is to evaluate how successful our first attempt to compute the new 2009 HV for BTOW was.

    Short answer: we undershoot by a factor of 2 in HV power; see Fig. 4, left.

    Input runs: 10066160 (new HV) and 10066163 (old HV) 

    Fig 1. Pedestal distribution and difference of peds between runs - perfect. Peds are stable, we can use the same slope fit range (ped+20,ped+60) blindly for old & new HV.

     

    Fig 2. Chosen HV change and resulting ratio of slopes - we got the sign of HV change correctly!

    Fig 3. Stability test. Plots as in Fig 2, but for a subset of towers whose HV changed by almost nothing (below 2 V) while the yield was large. One would hope the slopes stay put. They don't.  This means either the slopes are not as reliable as we think or the HV is not as stable as we think.

    Fig 4.  Computed 'kappa':  sl2/sl1 = g1/g2 = (V1/V2)^kappa, for towers with good stats and an HV change of at least 10 Volts, i.e. a relative HV change of more than 1%.  The right plot shows kappa as a function of eta - no trend, but the distribution is getting wider - no clue why.

    Fig 5.  Computed 'kappa' as in Fig 4, but now negative, non-physical values of kappa are allowed.
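    For reference, a minimal sketch of the kappa extraction used for Fig. 4, following the relation quoted above (variable names are illustrative, not the actual analysis code):

    #include <cmath>

    // sl2/sl1 = g1/g2 = (V1/V2)^kappa  =>  kappa = ln(sl2/sl1) / ln(V1/V2).
    // Only meaningful when the relative HV change is at least ~1%; otherwise the
    // logarithm amplifies the statistical noise on the slopes.
    double extractKappa(double sl1, double V1, double sl2, double V2)
    {
        return std::log(sl2 / sl1) / std::log(V1 / V2);
    }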

     

    04 Spectra from Problem PMT Channels

    We set the HV = 800 Volts for all channels which had been masked out, repaired, or otherwise had problems in 2008.  I have examined the spectra for these channels and give PDFs with each of these spectra:

    drupal.star.bnl.gov/STAR/system/files/2009_800_1_12.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_13_24.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_25_36.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_37_48.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_49_60.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_61_72.pdf

    drupal.star.bnl.gov/STAR/system/files/2009_800_72_78.pdf

     

    I identify 37 tubes which are dead, 39 tubes which can be adjusted, and 1 with a stuck-bit problem.  We should pay careful attention to these tubes in the next iteration.

     

    05 Summary of HV Adjustment Procedure

    Summary of the process used to change the HV for 2009 in BTOW. HV files: the version indicates iteration/mapping (numeral/letter).

    1) 2009_i0.csv      HV/Mapping as set in 2008.  Calibration determined from 2008 data electrons/MIPS.
    http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/run-8-btow-calibration-2008/05-absolute-calibration-electrons

    2) 2009_i0d.csv      Map of SoftId/CellId determined by taking 6 data sets with all voltages set to 300 V, except for SoftId%6 = 0,1,2,3,4,5, successively. (link to Joe's pages)
    http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/run-8-btow-calibration-2008/08-btow-hv-mapping

    drupal.star.bnl.gov/STAR/blog-entry/seelej/2009/mar/10/run9-btow-hv-mapping-analysis-summary
    drupal.star.bnl.gov/STAR/blog-entry/seelej/2009/mar/05/run9-btow-hv-mapping-analysis
    drupal.star.bnl.gov/STAR/blog-entry/seelej/2009/mar/08/run9-btow-hv-mapping-analysis-part-2

    3) 2009_i1d.csv       HV change determined from 2008 data electrons/MIPS (g1/g2)=(V2/V1)**k, with k=10.6 (determined from LED data).
    http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/run-8-btow-calibration-2008/06-calculating-2009-hv-electron-calibrations

    4) 2009_i2d.csv        Slopes measured for all channels.  Outliers defined as |slope/<slope> - 1| > 0.2 (deviation of the channel slope from the average slope over the barrel > 20%): approx. 114 channels. Outliers corrected according to (s1/s2)=(V1/V2)**k as above.  Hot tower HV reduced by hand (approx. 10 towers).
    http://drupal.star.bnl.gov/STAR/subsys/bemc/calibrations/run-9-btow-calibration-page/02-comparing-2007-2008-and-2009-btow-slopes

    06 comparison of BTOW status bits L2ped vs. L2W , pp 500 (jan)

     Comparison of BTOW status tables generated based on minB spectra collected by L2ped (the conventional method) vs. analysis of BHT3-triggered, inclusive spectra.

    1. Details of the method are given at BTOW status table algo, pp 500 run; in short, the status is decided based on the ADC integral [20,80] above pedestal.
    2. The only adjusted parameter is the 'threshold' on Int[20,80]/Neve*1000. The two values used are 1.0 and 0.2.
    3. For comparison I selected 2 fills: F10434 (day 85, ~all worked) and F10525 (day 99, 2 small 'holes' in BTOW); both have sufficient stats in the minB spectra to produce a conventional status table.
    4. Matt suggested the following assignment of BTOW status bits (values are given as the bits 2^N, or sums of them; a small sketch follows this list):
      • good -->stat=1
      • bad, below thres=0.2 --> stat=2+16 (similar to minB  cold tower)
      • bad, below thres=1.0 --> stat=512 (new bit, stringent cut for dead towers for W analysis) 
      • stuck low bits --> stat=8 (not fatal for high-energy response expected for Ws)
      • broken FEE (the big hole) in some fills, soft ID [581-660] --> stat=0
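    A rough sketch of the assignment above (not the production status code; treating the categories as exclusive in the order listed is my guess):

    int btowStatusForW(bool brokenFee, bool belowThres02, bool belowThres10, bool stuckLowBits)
    {
        if (brokenFee)    return 0;        // the big FEE hole, soft ID 581-660
        if (belowThres02) return 2 + 16;   // similar to a minB cold tower
        if (belowThres10) return 512;      // new bit: stringent cut for dead towers in the W analysis
        if (stuckLowBits) return 8;        // not fatal for the high-energy response expected for Ws
        return 1;                          // good
    }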

     

    Fig 1. Fill F10343 thres=1.0 (red line)

    There are 108 towers below the red line, tagged as bad. For comparison, the minB-based tower QA tagged 100 towers. There are 4 combinations for tagging bad towers with the 2 different algos; Table 1 shows the breakdown, checking every tower. Attachments a, b show bad & good spectra.

    Table 1, Fill F10434

    QA thres=1.0
                 minB=ok   minB=bad
      BHT3=ok       4692          0
      BHT3=bad         8        100

    QA thres=0.2
                 minB=ok   minB=bad
      BHT3=ok       4696          0
      BHT3=bad         4        100

     

    Conclusion 1: some of the additional 8 towers tagged as bad based on the BHT3 spectra are either very low HV towers, or their optical fiber is partially broken. If those towers are kept for the W analysis, any ADC recorded by them would yield huge energy. I'd like to exclude them from the W analysis anyhow.

    To preserve similarity to the minB-based BTOW status code, Matt agreed we tag as bad-for-everybody all towers rejected by the BHT3 status code using the lower threshold of 0.2. Towers between the BHT3 QA thresholds [0.2,1.0] will be tagged with the new bit in the offline DB; I'll reject them from the W analysis when looking for the W signal, but not necessarily when calculating the away-side ET for vetoing the away-side jet.

      


     

    Table 2, Fill F10525, 2 'holes' in BTOW

    QA thres=1.0
                 minB=ok   minB=bad
      BHT3=ok       4653          1
      BHT3=bad        19        127

    QA thres=0.2
                 minB=ok   minB=bad
      BHT3=ok       4662          2
      BHT3=bad        10        126

     

    Spectra are in attachment c). The majority of towers for which these 2 methods do not agree have softID ~500, where the 2 holes reside (see the 3rd page in the PDF).

    Tower 220 has stuck lower bits, needs special treatment - I'll add detection (and Rebin()) for such cases.

     


    Automated generation of BTOW status tables for  fills listed in Table 3  has been done, attachment d) shows summary of all towers and examples of bad towers for all those fills.

     Table 3
         1  # F10398w  nBadTw=112 totEve=12194 
         2  # F10399w  nBadTw=111 totEve=22226 
         3  # F10403w  nBadTw=136 totEve=3380 
         4  # F10404w  nBadTw=115 totEve=9762 
         5  # F10407w  nBadTw=116 totEve=7353 
         6  # F10412w  nBadTw=112 totEve=27518 
         7  # F10415w  nBadTw=185 totEve=19581 
         8  # F10434w  nBadTw=108 totEve=15854 
         9  # F10439w  nBadTw=188 totEve=21358 
        10  # F10448w  nBadTw=190 totEve=18809 
        11  # F10449w  nBadTw=192 totEve=18048 
        12  # F10450w  nBadTw=115 totEve=14129 
        13  # F10454w  nBadTw=121 totEve=6804 
        14  # F10455w  nBadTw=113 totEve=16971 
        15  # F10463w  nBadTw=114 totEve=12214 
        16  # F10465w  nBadTw=112 totEve=8825 
        17  # F10471w  nBadTw=193 totEve=21003 
        18  # F10476w  nBadTw=194 totEve=9067 
        19  # F10482w  nBadTw=114 totEve=39315 
        20  # F10486w  nBadTw=191 totEve=37155 
        21  # F10490w  nBadTw=154 totEve=31083 
        22  # F10494w  nBadTw=149 totEve=40130 
        23  # F10505w  nBadTw=146 totEve=37358 
        24  # F10507w  nBadTw=147 totEve=15814 
        25  # F10508w  nBadTw=150 totEve=16049 
        26  # F10525w  nBadTw=147 totEve=50666 
        27  # F10526w  nBadTw=147 totEve=32340 
        28  # F10527w  nBadTw=149 totEve=27351 
        29  # F10528w  nBadTw=147 totEve=22466 
        30  # F10531w  nBadTw=145 totEve=9210 
        31  # F10532w  nBadTw=150 totEve=11961 
        32  # F10535w  nBadTw=176 totEve=8605 
        33  # F10536w  nBadTw=177 totEve=10434 
    

     

    07 BTOW status tables ver 1, uploaded to DB, pp 500

     BTOW status tables for 39 RHIC fills have been determined (see previous entry) and uploaded to DB.

    To verify that the major features are masked I processed the first 5K and last 5K events for every fill; now all is correct. The plots below show examples of pedestal residua for non-masked BTOW towers, using 5K L2W events from the end of the fill. Attachment a) contains 39 such plots (it is large and may crash your machine).


    Fig 1, Fill 10398, the first one; most of the towers were working

     

     


    Fig 2, Fill 10478, in the middle, the worst one

    Fig 3, Fill 10536, the last one; typical for the last ~4 days, ~1/3 of the acquired LT

     

     

     

    08 End of run status

    Attached are slopes plotted for run 10171078 which was towards the end of Run 9.

    09 MIP peaks calculated using L2W stream

    The MIP peaks plotted in the attachment come from the L2W data. 4564 towers had a good MIP peak, 157 towers did not have enough counts in the spectra to fit, and 79 towers had fitting failures. 52 were recovered by hand for inclusion into calibration.

    Also attached is a list of the 236 towers with bad or missing peaks.

    I compared the 157 empty spectra with towers that did not have good slopes for relative gains calculated by Joe. Of the 157 towers, 52 had good slopes from his calculation. Those are in an attached list.

     Fig 1 MIP peak position:

    10 Electron E/p from pp500 L2W events

    I ran the usual calibration code over the L2W data produced for the W measurement.

    To find an enriched electron sample, I applied the following cuts to the tracks, the tower that the track projects to, the 3x3 tower cluster, and the 11 BSMD strips in both planes under the track:

    central tower adc - pedestal > 2.5*rms

    enter tower = exit tower

    track p < 6 and track p > 1.5

    dR between track and center of tower < 0.025

    track dEdx > 3.4 keV/cm

    bsmde or bsmdp adc total > 50 ADC

    no other tracks in the 3x3 cluster

    highest energy in 3x3 is the central tower

     

    The energies were calculated using ideal gains and relative gains calculated by Joe Seele from tower slopes.

    The corrections were calculated for every 2 eta rings and each crate. The corrections for each 2 rings were calculated first and then applied. The analysis was rerun, then the E/p was calculated for each crate.

    The calibration constants will be uploaded to a different flavor to be used with the preliminary W analysis.

    Fig. 1 E/p spectrum for all electrons

    Fig. 2 p vs E/p spectrum for all electrons

    Fig. 3 BSMDP vs BSMDE for all electrons

     

    Fig. 4 corrections by eta ring

    Fig. 5 Crate corrections

    Fig. 6 difference between positrons (blue) and electrons (red):

     

    Update: I reran the code but allowed the width of the Gaussian to only be in the range 0.17 - 0.175. This region agrees with almost all of the previously found widths within the uncertainties. The goal was to fix a couple of fits that misbehaved. The updated corrections as a function of eta are shown below.

    Fig. U1

     

    Update 09/30/2009: Added 2 more plots.

    Fig. V1 East vs. West (no difference observed)

    Fig. V2 slices in momentum (not much difference)

    Attached are the histograms for each ring and crate.

    11 BTOW crate gains based on L2W-ET triggered ADCs

    12 Correcting Relative gains from 500 GeV L2W

    After examining the Z invariant mass peak calculated using the L2W data stream with the offline calibration applied, it seemed like there was a problem. The simplest explanation was that the relative gains were reversed, so that hypothesis was tested by examining the Z peak with the data reversed.

    The slopes were also recalculated comparing the original histogram to histograms corrected with the relative gains and the inverse of the relative gains.

    In the following two figures, black is the original value of the slopes, red has the relative gain applied, and blue has the inverse of the relative gain applied.

    Fig 1 Means of slope by eta ring with RMS

    Fig 2 RMS of slope by eta ring

    The E/p calculation improves after making the change to the inverse of the relative slopes because the effect of outliers is reduced instead of amplified.

    Fig 3 E/p by eta ring with corrected relative gains

     

    Update:

    After last week's discussion at the EMC meeting, I recalculated all of the slopes and relative gains.

    Fig 4 Update Slope RMS calculation

    I then used these relative gains to recalculate the absolute calibration. Jan used the calibration to rerun the Z analysis.

    Fig 5 Updated Z analysis

    The updated calibration constants will now supersede the current calibration constants.

    13 Updating Calibration using the latest L2W production

    I recalculated the Barrel calibration using the latest L2W production, which relies on the latest TPC calibration. It is suggested that this calibration be used to update the current calibration.

    Fit Details:

    Negative Peak location: 0.941

    Negative Sigma: 0.14

    Positive Peak location: 0.933

    Positive Sigma: 0.17

     

    <p> = 3.24 GeV

    Fig 1. E/p for all electrons (black), positively charged (blue), negatively charged (red)

     

    14 200 GeV Calibration

    I selected 634 runs for calibration from the Run 9 production, processing over 300M events. The runs are listed in this list, with their field designation.

    The MIP peak for each tower was calculated. 4663 towers had MIP peaks found. 38 were marked as bad. 99 were marked as MIPless. The MIP peak fits are here.

    The electrons were selected using the following cuts (a rough sketch of applying them is given after the list):

    • |vertex Z | < 60 cm
    • vertex ranking > 0
    • track projection enters and exits same tower
    • tower status = 1
    • 1.5 < track p < 10.0 GeV/c
    • tower adc - pedestal > 2.5 * pedestal RMS
    • Scaled dR from center < 0.02
    • 3.5 < dE/dx < 5.0
    • No other tracks in 3x3 cluster
    • No energy in cluster > 0.5 central energy
    • Track can only point to HT trigger tower if a non-HT trigger fired in the event
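    A rough sketch of applying the selection above; the ElectronCandidate struct and all of its fields are placeholders introduced for illustration, not actual StRoot classes:

    #include <cmath>

    struct ElectronCandidate {
        double vertexZ, vertexRanking;
        bool   entersAndExitsSameTower;
        int    towerStatus;
        double trackP;                            // GeV/c
        double towerAdcMinusPed, pedRMS;
        double scaledDR;
        double dEdx;                              // keV/cm
        bool   otherTracksInCluster;
        double maxNeighborEnergy, centralEnergy;
        bool   pointsToHTTriggerTower, nonHTTriggerFired;
    };

    bool passesCalibrationCuts(const ElectronCandidate& c)
    {
        if (std::fabs(c.vertexZ) >= 60.0)                      return false;
        if (c.vertexRanking <= 0)                              return false;
        if (!c.entersAndExitsSameTower)                        return false;
        if (c.towerStatus != 1)                                return false;
        if (c.trackP <= 1.5 || c.trackP >= 10.0)               return false;
        if (c.towerAdcMinusPed <= 2.5 * c.pedRMS)              return false;
        if (c.scaledDR >= 0.02)                                return false;
        if (c.dEdx <= 3.5 || c.dEdx >= 5.0)                    return false;
        if (c.otherTracksInCluster)                            return false;
        if (c.maxNeighborEnergy > 0.5 * c.centralEnergy)       return false;
        if (c.pointsToHTTriggerTower && !c.nonHTTriggerFired)  return false;  // only allowed if a non-HT trigger fired
        return true;
    }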

    Fig. 1: Here is a comparison of all electrons from RFF (blue) and FF (red):

    The RFF fit mean comes to 0.965 +/- 0.001. The FF fit mean comes to 0.957 +/- 0.001. The total fit is 0.957 +/- 0.0004.

    Fig. 2 Comparison of electron (red) positron (blue):

    Positron fit results: 0.951 +/- 0.001. Electron fit results: 0.971 +/- 0.001

    Calibration was calculated using MIPs for relative calibration and absolute calibration done for eta slices by crate (30 crates, 20 eta slices per crate).

    The outer ring on each side was calibrated using the entire ring.

    2 towers were marked bad: 2439 and 2459, due to a peculiar E/p compared to the others in their crate slice. It is suggested this is due to bad bases.

    Fig 3 Crate Slice E/p correction to MIPs (eta on x axis, phi on y axis):

    New GEANT correction

    A new geant correction was calculated using new simulation studies done by Mike Betancourt. The energy and pseudorapidity dependence of the correction was studied, and the energy dependence is small over the range of the calibration electron energies.

    A PDF of the new corrections is here.

    Is it statistical?

    From this plot, it can be seen that most of the rings have a nonstatistical distribution of E/p values in the slices. The actual E/p values for each ring (for arbitrary slice value) can be seen here.

    Comparison to previous years

    Fig 4 Eta/phi of (data calibration)/(ideal calibration)

    Fig 5 Eta ring average of (data calibration)/(ideal calibration)

    Issues:

    • FF vs RFF (partially examined)
    • positive vs negative (partially examined)
    • eta/phi dependence of geant correction, direction in eta/phi
    • dR dependence of calibration
    • comparison to previous year

     

     

    Database

    These pages describe how to use the BEMC database.  There is a browser-based tool that you can use to view any and all BEMC tables available at:

    http://www.star.bnl.gov/Browser/BEMC/

    Frequently Asked Questions

    How do I use the database as it looked at a particular time?

    You might be interested in this tip if e.g. you want to repeat an analysis performed before additional tables were added to the BEMC database.  Add the following lines of code to your macro after St_db_Maker is instantiated, and change myDate and myTime as appropriate:
     

    Int_t myDate = 20051231;
    Int_t myTime = 235959;
    dbMaker->SetMaxEntryTime(myDate,myTime);

    How do I force St_db_Maker to use the event time I specify?

    If you're running over simulation files, where the event timestamp is not a meaningful quantity (at least, it's not meaningful for the BEMC database), you need to choose a particular event timestamp that best represents the state of the BEMC during the data-taking period to which you're comparing the simulations.  A list of timestamps is being compiled at Simulation Timestamps.  Add the following lines of code to your macro, and change myDate and myTime as appropriate:
    Int_t myDate = 20051231;
    Int_t myTime = 235959;
    dbMaker->SetDateTime(myDate,myTime);

    Calibrations Database

    All calibration information is stored in the STAR database.  We have the following tables for BEMC calibrations:
    • For the BTOW and BPRS detectors:
      • St_emcCalib - this table contains absolute gain information for each channel
      • St_emcPed - this table contains pedestal values for each channel
      • St_emcGain - this table contains a gain correction factor vs. time for each channel (not currently used)
      • St_emcStatus - this table contains the final status for each channel
    • For the BSMDe and BSMDp detectors:
      • St_smdCalib - this table contains absolute gain information for each channel
      • St_smdPed - this table contains pedestal values for each channel
      • St_smdGain - this table contains a gain correction factor vs. time for each channel (not currently used)
      • St_smdStatus - this table contains the final status for each channel
    The tables are stored in the STAR database under the directory /Calibrations/emc/y3[DETNAME] and are called bemcCalib, bemcPed, bemcGain, and bemcStatus in the case of the BTOW detector.

    To get a pointer for those tables in an analysis maker do:
    TDataSet *DB = GetInputDB("Calibrations/emc/y3bemc"); // for towers

    St_emcCalib *table = (St_emcCalib*) DB->Find("bemcCalib");
    emcCalib_st *calib = table->GetTable();
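    Once you have the emcCalib_st pointer, individual channels can be read off the arrays defined in the IDL further down; indexing by softId-1 is an assumption about the channel ordering:

    int   softId = 100;                              // example tower
    float adcToE = calib[0].AdcToE[softId - 1][0];   // first ADC-to-energy coefficient
    int   status = calib[0].Status[softId - 1];      // 0 = problem, 1 = ok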

    Important Information About Pedestal Tables

    In order to save space and make the download faster, PEDESTALS and RMS are saved as SHORT.  So, the real pedestal value is PED/100.  Similarly, in order to save tables in the database you have to multiply the real pedestal by 100.  The same goes for the RMS.
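    In code, the conversion could look like the following sketch (emcPed_st as the generated struct name for the emcPed table shown below, and the softId-1 indexing, are assumptions):

    // peds: one row of the emcPed table (see the IDL below); softId in 1-4800.
    float realPedestal(const emcPed_st& peds, int softId)
    {
        return peds.AdcPedestal[softId - 1] / 100.0f;   // stored as short = real pedestal x 100
    }

    short storedPedestal(float realPed)
    {
        return (short)(realPed * 100.0f + 0.5f);        // multiply by 100 before uploading
    }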

    SMD has different pedestals for different capacitors.  Only 3 pedestal values are saved:
    • Pedestal 0 is the average of 126 capacitors
    • Pedestal 1 is the pedestal value for capacitor 124
    • Pedestal 2 is the pedestal value for capacitor 125
    Capacitor numbers for the BSMD can be retrieved from an StEmcRawHit by using the calibrationType() method:
    int cap = rawHit->calibrationType();
    if (cap > 127) cap -= 128;   // fold values above 127 back into the 0-127 capacitor range

    Status Information

    The St_emcStatus and St_smdStatus tables contain final status codes for each tower.  The final status is a combination of installation/run status, pedestal status and calibration status.  The final status has a bit pattern as follows:
    • 0 - not installed
    • 1 - installed / running
    • 2 - calibration problem
    • 4 - pedestal problem
    • 8 - other problem (channel removed, dead channel, etc.)
    So, status==1 means the channel is installed and running OK.  Status==7 means that the channel is installed but that we have a calibration problem and a pedestal problem.

    To check individual bits of the final status, do the following (a small helper wrapping these checks is sketched after the list):
    • (status&1) == 1 means tower is installed
    • (status&2) == 2 means a calibration problem
    • (status&4) == 4 means a pedestal problem
    • (status&8) == 8 means another problem
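    A small helper wrapping these checks could look like this (the function name is mine):

    // A channel is fully usable only if it is installed and no problem bits are set, i.e. status == 1.
    bool bemcChannelUsable(int status)
    {
        if ((status & 1) != 1) return false;   // not installed / not running
        if ((status & 2) == 2) return false;   // calibration problem
        if ((status & 4) == 4) return false;   // pedestal problem
        if ((status & 8) == 8) return false;   // other problem
        return true;
    }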

    Tables Structure

    /* emcCalib.idl
     *
     * Table: emcCalib
     *
     * description: Table which contains all calibration information
     */
    struct emcCalib {
        octet Status[4800];           /* status of the tower/wire (0=problem, 1=ok) */
        float AdcToE[4800][5];        /* ADC to Energy */
    };

    /* emcPed.idl
     *
     * Table: emcPed
     *
     * description: Table which contains pedestal information for emc tower ADCs
     */
    struct emcPed {
        octet Status[4800];           /* status of the emc tower (0=problem, 1=ok) */
        short AdcPedestal[4800];      /* ADC pedestal of emc tower x 100 */
        short AdcPedestalRMS[4800];   /* ADC pedestal RMS of emc tower x 100 */
        float ChiSquare[4800];        /* chi square of pedestal fit */
    };

    /* emcGain.idl
     *
     * Table: emcGain
     *
     * description: Table which contains gain correction information
     */
    struct emcGain {
        octet Status[4800];           /* status of the tower/wire (0=problem, 1=ok) */
        float Gain[4800];             /* gain variation */
    };

    /* emcStatus.idl
     *
     * Table: emcStatus
     *
     * description: which emc towers are up and running
     */
    struct emcStatus {
        octet Status[4800];
    };

    /* smdCalib.idl
     *
     * Table: smdCalib
     *
     * description: Table which contains all calibration information
     */
    struct smdCalib {
        octet Status[18000];             /* status of the tower/wire (0=problem, 1=ok) */
        float AdcToE[18000][5];          /* ADC to Energy */
    };

    /* smdPed.idl
     *
     * Table: smdPed
     *
     * description: Table which contains pedestal information for shower max ADCs
     */
    struct smdPed {
        octet Status[18000];             /* status of the smd strip (0=problem, 1=ok) */
        short AdcPedestal[18000][3];     /* ADC pedestals of smd strip x 100 */
        short AdcPedestalRMS[18000][3];  /* ADC pedestal RMS of smd strip x 100 */
    };

    /* smdGain.idl
     *
     * Table: smdGain
     *
     * description: Table which contains gain information
     */
    struct smdGain {
        octet Status[18000];             /* status of the tower/wire (0=problem, 1=ok) */
        float Gain[18000];               /* gain variation */
    };

    /* smdStatus.idl
     *
     * Table: smdStatus
     *
     * description: which smds are up and running
     */
    struct smdStatus {
        octet Status[18000];
    };


    Control Room

    We use a crontab on emc01.starp.bnl.gov to update the trigger database tables as well as the "offline" pedestals throughout the run.  Here's the relevant portion of /etc/crontab:

    00 4 * * * emc /home/emc/online/emc/pedestal/job

    00 * * * * emc /home/emc/online/emc/trigger/job
    10 * * * * emc /home/emc/online/emc/trigger/job
    20 * * * * emc /home/emc/online/emc/trigger/job
    30 * * * * emc /home/emc/online/emc/trigger/job
    40 * * * * emc /home/emc/online/emc/trigger/job
    50 * * * * emc /home/emc/online/emc/trigger/job


    Pedestal Job

    The job runs every day at 4:00 AM and executes the script

    $EMCONLINE/pedestal/updateOnlinePed 

    which in turn calls the root4star macro

    $EMCONLINE/pedestal/makeOnlinePed.C

    This macro calculates pedestals for all 4 subdetectors and uploads the tables to the STAR database.  A local backup copy of each table is stored in

    $EMCONLINE/pedestal/tables/tables/


    Trigger Job

    The job runs $EMCONLINE/trigger/updateTriggerDB every ten minutes.  If the file $EMCONLINE/trigger/RUNMODE contains STOP, the job will do nothing.  $EMCONLINE/trigger/startTriggerDB and $EMCONLINE/trigger/stopTriggerDB can be used to change the content of RUNMODE.  The updateTriggerDB shell script contains some decent documentation which I've reproduced here:

    # this script checks if ANY of the BEMC trigger
    # configuration had changed. If so, it updates the
    # database with the new trigger configuration
    #
    # it runs as a cronjob every 5-10 minutes in the star01
    # machine
    #
    # this script follows the steps below
    #
    # 1. check the file RUNMODE. If content is STOP, exit the
    # program. This is done if, for some reason, we
    # want to stop the script from updating the DB
    #
    # 2. SCP the config_crate* and pedestal_crate* files
    # from sc3.starp.bnl.gov machine
    #
    # 3. SCP the trigger masks from the startrg2.starp.bnl.gov machine
    #
    # 4. Copy these files to the sc3 and startrg2 directories
    #
    # 5. Compare these files to the files saved in the sc3.saved
    # and startrg2.saved directories
    #
    # 6. If there is no difference, clear the sc3 and startrg2
    # directories and exit
    #
    # 7. If ANY difference was found, copy the contents of the
    # sc3 and startrg2 directories to sc3.saved and startrg2.saved
    # Also saves the directory with timestamped names in the
    # backup directory
    #
    # 8. runs the root4star macro that creates the tables from
    # the files in those directories and saves them to the DB.
    # It also creates a plain text file bemcStatus.txt with the same information
    # for the trigger people and the online p-plots
    #
    # 9. clear the sc3 and startrg2 directories and exit
    #
    # you can also run it by hand with the command
    #
    # updateTriggerDB TIMESTAMP FORCE
    #
    # where TIMESTAMP is in the format
    #
    # YYYYMMDD.hhmmss
    #
    # if FORCE = yes we force saving the DB
    #
    # this procedure overwrites the RUNMODE variable
    #
    # AAPSUAIDE, 12/2004
    #

    Basically the job is always checking for changes to the trigger pedestals, status tables, and LUTs, and uploads a new table if any changes are found.

    Mapping DB Proposal

    Proposal

    We propose to add a new set of tables to the Calibrations_emc database that will track the electronics mapping for the BEMC, BSMD, and BPRS and allow for an alternative implementation of StEmcDecoder.

    Motivation

    The existing BEMC electronics mapping code (StDaqLib/EMC/StEmcDecoder) has become difficult to maintain. Each time we discover something about the BEMC that requires an update to our lookup tables we have to decipher the algorithms that generate these lookup tables, and more often than not our first guess about how to add the new information is wrong.

    StEmcDecoder is also inefficient because it doesn’t track the validity range of the current lookup tables and so it rebuilds the tables every event. Analysis jobs spend a non-negligible amount of CPU time rebuilding these decoder tables.

    The information in the decoder is critical for BEMC experts, but the interface to that information is less than ideal. StEmcDecoder does not even have CINT bindings. An SQL interface would allow for much easier debugging.

    For the End User

    We are preserving the StEmcDecoder interface and reimplementing it to use the DB tables. Offline users should see a seamless transition. StEmcDecoder also plays an important role in the online p-plots. We’ll need to find a solution that allows access to the DB tables in that framework.

    For Experts

    The new mapping tables will contain a row for each detector element, so we expect that querying the tables using SQL will prove to be a valuable debugging tool. A simplified query might look like:

    SELECT elementID,m,e,s FROM bemcMapping WHERE triggerPatch=5 and beginTime='2007-11-01 00:00:00';

    which would yield

    +-----------+------+------+------+
    | elementID | m | e | s |
    +-----------+------+------+------+
    | 1709 | 43 | 9 | 2 |
    | 1710 | 43 | 10 | 2 |
    | 1711 | 43 | 11 | 2 |
    | 1712 | 43 | 12 | 2 |
    | 1729 | 44 | 9 | 1 |
    | 1730 | 44 | 10 | 1 |
    | 1731 | 44 | 11 | 1 |
    | 1732 | 44 | 12 | 1 |
    | 1749 | 44 | 9 | 2 |
    | 1750 | 44 | 10 | 2 |
    | 1751 | 44 | 11 | 2 |
    | 1752 | 44 | 12 | 2 |
    | 1769 | 45 | 9 | 1 |
    | 1770 | 45 | 10 | 1 |
    | 1771 | 45 | 11 | 1 |
    | 1772 | 45 | 12 | 1 |
    +-----------+------+------+------+
    16 rows in set (0.12 sec)

    Previously, we needed to write one-off compiled programs to export this kind of information out of the decoder.

    Draft IDLs

    struct emcMapping {
        octet m;                     /* module 1-120 */
        octet e;                     /* eta index 1-20 */
        octet s;                     /* sub index 1-2 */
        unsigned short daqID;        /* ordering of elements in DAQ file 0-4799 */
        octet crate;                 /* electronics crate 1-30 */
        octet crateChannel;          /* index within a crate 0-159 */
        octet TDC;                   /* index in crate 80, 0-29 */
        unsigned short triggerPatch; /* tower belongs to this TP 0-299 */
        octet jetPatch;              /* tower belongs to this JP 0-11 */
        unsigned short DSM;          /* just integer div TP/10, 0-29 */
        float eta;                   /* physical pseudorapidity of tower center */
        float phi;                   /* physical azimuth of tower center */
        char comment[255];
    };

    struct smdMapping {
        octet m;                     /* module 1-120 */
        octet e;                     /* eta index 1-150 (eta), 1-10 (phi) */
        octet s;                     /* sub index 1 (eta), 1-15 (phi) */
        octet rdo;                   /* readout crate 0-7 */
        unsigned short rdoChannel;   /* index in crate 0-4799 */
        octet wire;                  /* wire number 2-80 */
        octet feeA;                  /* A value for FEE 1-4 */
        float eta;                   /* physical pseudorapidity of strip center */
        float phi;                   /* physical azimuth of strip center */
        char comment[255];
    };

    struct prsMapping {
        octet m;                     /* module 1-120 */
        octet e;                     /* eta index 1-20 */
        octet s;                     /* sub index 1-2 */
        octet PMTbox;                /* PMT box 1-30 (West), 31-60 (East) */
        octet MAPMT;                 /* MAPMT # for this element in PMTbox 1-5 */
        octet pixel;                 /* index inside MAPMT 1-16 */
        octet rdo;                   /* readout crate 0-3 */
        unsigned short rdoChannel;   /* index in readout crate 0-4799 */
        octet wire;                  /* wire number 1-40 */
        octet feeA;                  /* A value for FEE 1-2 */
        octet SCA;                   /* switched capacitor array 1-2 */
        octet SCAChannel;            /* index inside SCA 0-15 */
        octet powerSupply;           /* 1-2 */
        octet powerSupplyModule;     /* 1-15 */
        octet powerSupplyChannel;    /* 0-14 */
        float eta;                   /* physical pseudorapidity of tower center */
        float phi;                   /* physical azimuth of tower center */
        char comment[255];
    };

    I also proposed MySQL schemata on my blog, but I guess in STAR these IDLs define the schema.

    Performance Estimates

    I’ve temporarily installed tables on our MIT mirror and filled them with data describing the 2008 BEMC electronics mapping. Here are the stats:

    *************************** 1. row ***************************
    Name: bemcMapping
    Engine: MyISAM
    Version: 10
    Row_format: Dynamic
    Rows: 4800
    Avg_row_length: 55
    Data_length: 268732
    Max_data_length: 281474976710655
    Index_length: 163840
    Data_free: 0
    Auto_increment: 4801
    Create_time: 2008-11-14 16:03:51
    Update_time: 2008-11-14 16:05:45
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:
    *************************** 2. row ***************************
    Name: bprsMapping
    Engine: MyISAM
    Version: 10
    Row_format: Dynamic
    Rows: 4800
    Avg_row_length: 70
    Data_length: 336036
    Max_data_length: 281474976710655
    Index_length: 165888
    Data_free: 0
    Auto_increment: 4801
    Create_time: 2008-11-14 17:52:53
    Update_time: 2008-11-14 17:59:40
    Check_time: 2008-11-14 17:52:53
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:
    *************************** 3. row ***************************
    Name: bsmdeMapping
    Engine: MyISAM
    Version: 10
    Row_format: Dynamic
    Rows: 18000
    Avg_row_length: 52
    Data_length: 936000
    Max_data_length: 281474976710655
    Index_length: 604160
    Data_free: 0
    Auto_increment: 18001
    Create_time: 2008-11-14 16:03:51
    Update_time: 2008-11-18 01:48:36
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:
    *************************** 4. row ***************************
    Name: bsmdpMapping
    Engine: MyISAM
    Version: 10
    Row_format: Dynamic
    Rows: 18000
    Avg_row_length: 52
    Data_length: 936000
    Max_data_length: 281474976710655
    Index_length: 604160
    Data_free: 0
    Auto_increment: 18001
    Create_time: 2008-11-14 16:03:51
    Update_time: 2008-11-18 02:13:36
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options:
    Comment:
    4 rows in set (0.11 sec)

    This information is supposed to be static, even from year-to-year. In reality, we discover some details about the mapping each year which will require updates to some of these rows. There should certainly be no intra-run changes, so StEmcDecoder will need to retrieve 4800+4800+18000+18000 rows from the DB for each BFC and user job.

    The equivalent C++ array sizes (excluding the comment field, as I’m not sure how it's handled) will be 101 KB (4800*21) for BEMC, 115 KB (4800*24) for BPRS and 288 KB (18000*16) for each SMD plane.

    Pedestals / Status Tables

    Code for the calculation of the BTOW & BSMD status tables has been made publicly accessible. BTOW status code is in StRoot/StEmcPool/CSMStatusUtils. The following studies of pedestals and status tables have been performed:

    BTOW

    2006
    Dave S. - status

    2004
    Thorsten - 62 GeV AuAu status tables
    Oleksandr - 200 GeV AuAu pedestals
    Oleksandr - 62 GeV AuAu pedestals

    2003
    Oleksandr - dAu status tables
    Oleksandr - pp status tables


    BSMD & BPSD

    2006
    Priscilla - SMD pedestals

    2005
    Frank - SMD status for pp (bug found in code http://rhig.physics.yale.edu/~knospe/cucu200/bsmd_status_bug.htm)
    Frank's code used to generate status tables from MuDsts

    2004
    200 GeV AuAu SMD status - taken by Martijn
    Marcia - 200 GeV AuAu SMD pedestals
    Subhasis - 62 GeV AuAu SMD & PSD pedestals

    2003
    Martijn - dAu SMD status

    CSMStatusUtils

    This is CSMStatusUtils, which outputs the status of either calorimeter to text and root status files.  Documentation is by Dave Relyea.  The code can be found at

    StRoot/StEmcPool/CSMStatusUtils

    History:
    I was given code from both Joanna and Thorsten to figure out the status of the calorimeter towers over all pp production runs.  I merged the two sets of code and created a package called CSMStatusUtils.

    Method:
    The first piece of code in CSMSU takes every run and fills a 2-d histogram of ALL channels vs hit number (in ADC counts, from 0 to 150).  A second routine then combines these histograms from run subsets into single histograms for each run.  From the second routine and on down, the EEMC and BEMC are done entirely separately; the code needs to be run twice, once for each detector.

    The code has an algorithm which then takes runs in each fill and combines them until the average number of hits above pedestal for all channels is greater than 100.  If runs are left over at the end of a fill, their statistics are added to prior runs in the fill.

    For each combined set of runs, the code puts each channel through a series of tests.  It finds the pedestal (and writes it out to a file, btw) and determines if the pedestal is abnormally wide, or whether it falls outside acceptable limits (ADC channels 0 to 3, or 147 to 150).  It compares all towers' mean number of hits (ten sigma) above pedestal, and then flags towers which have 10x as many hits as the average, or 40x fewer.  Finally, it looks for stuck bits (either on or off) in the 1, 2, 4, or 8 position, and flags channels with stuck bits.
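    As an illustration of the stuck-bit test described above (a sketch only, not the actual CSMStatusUtils implementation), one can check whether any of the low bits never toggles across the occupied ADC channels:

    #include <vector>

    // counts[adc] = number of hits observed at each ADC value for one channel.
    // A bit in position 1, 2, 4 or 8 is flagged as stuck if it is never set
    // (stuck off) or never clear (stuck on) among the ADC values that have hits.
    bool hasStuckLowBit(const std::vector<long>& counts)
    {
        for (unsigned bit = 1; bit <= 8; bit <<= 1) {
            bool seenSet = false, seenClear = false;
            for (unsigned adc = 0; adc < counts.size(); ++adc) {
                if (counts[adc] == 0) continue;
                if (adc & bit) seenSet   = true;
                else           seenClear = true;
            }
            if (!seenSet || !seenClear) return true;
        }
        return false;
    }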

    The code writes out a table (in text format) for each set of runs with the status of each channel clearly marked.  This table is also written in ROOT format, to be read by existing BEMC algorithms.  Also written is a hot tower plot, so the hot tower results can be eyeballed.  The code can also write out gif files of the spectra of every channel that failed a status test, so long as the number of channels in a given run set that failed the test is less than 25 (**for 2004 pp, gif files were not written out**).  Finally, the code creates a nice html file containing links to html subfiles detailing the channel status for each run set, which in turn link to the gif files.

    As a final step, the code takes the text files and creates a new series of text status files with the results in differential format, meaning channels whose status didn't change from run set to run set are omitted.  However, since some channels fall very near the thresholds of certain status tests (for instance, channels whose pedestals sit at 2.9 ADC counts), I require that a channel's status must not have changed more than ten percent of the time over all run sets, excluding runs in which all channels were bad (for nominal production running, this needs to be done, of course!).  If it has, it is marked bad once at the beginning, and then does not appear in any of the differential files.


    ************************************************************************
    Running:
    To run CSMSU, the first step is to use the FileCatalog to create a list of all the files you wish to analyze.  The command I use is typically something like:

    get_file_list.pl -keys 'path,filename' -cond 'production=P04ik,tpc=1,emc=1, trgsetupname=productionPP||productionPPnoEndcap||pp||PP,filename~st_physics,filetype=daq_reco_mudst' -onefile -limit 100000 -distinct > allthephysicsfiles

    Note that the output format I use is just 'path,filename', and I keep the :: delimiter that the FileCatalog uses.  My next step is to call

    CSMSU/scripts/analysis0 allthephysicsfiles

    (YOU NEED TO CHANGE THE HEADER IN THIS FILE TO YOUR OWN OUTPUT DIRECTORY)

    which takes the "allthephysicsfiles" file from FileCatalog, splits it up into groups of 20 miniruns, and submits the entire processing job to batch.  Note - if I knew how to use the XML submission scripts, I would, but the online documentation for them doesn't mention how to code up your macro (.C) file such that the XML header file will work.  No matter.

    PLEASE NOTE: Each minirun will generate about a 200k file.  This adds up to ENORMOUS disk space for large runs.  The 2004 pp run takes up about 1.4 Gig.  The 2004 Au-Au run would be even larger.  Thus, I really need to learn how to
    use the XML submission scripts.

    (FROM HERE ON, SCRIPT FILES CALL MACROS IN THE MACROS DIRECTORY.  YOU NEED TO CHANGE THE DEFAULT ARGUMENTS OF THE MACROS TO YOUR WORKING DIRECTORY.  YOU CAN DO THIS IN THE SCRIPT, OR JUST MAKE YOUR OWN SCRIPT, SINCE THEY'RE ALL TRIVIAL ONE LINE SCRIPTS ANYWAY.  SORRY ABOUT THE CAPS.)

    After all miniruns have been processed, the next step is to combine them into runs.  The script "analysis1" does this.  

    Next, you want to run the actual status code on the files.  The script "analysis2" does this.  PLEASE NOTE:  this macro requires an x window, as root needs to be able to Draw certain things.  I don't know how to do this in batch,
    so I always run this interactively.  It's not a good solution, but for now, it's a solution.

    Finally, you want to generate the ".root" status files and the concatenated status files (to alert you to changes in calorimeter tower status).  The script "analysis3" does this.

    StBSMDStatusMaker

    This page gives a brief overview of the code Frank Simon developed to create SMD Status Tables from MuDSTs.

    Some features and limitations of the code:

    • Create one status table per fill for each SMD plane (if enough statistics are available)
    • all triggers are used to maximize statistics (this can be a potential problem if there are hot trigger towers)
    • pedestals are taken from the data base (MuDST SMD data are zero-suppressed)
    • db-readable status tables can be created. Currently there are only two status flags implemented: good and bad. Adding more "variety" is straightforward
    • can be used with the STAR scheduler, no specific ordering of the input files required (although some structure in the job submission is recommended, see below)

    Basic ideas behind the code:

    • based on Dave's tower status code, but there are some very important differences
    • One job runs over several MuDST files; when a new fill number is encountered, a new output file is opened. That way, the jobs can be run with the STAR scheduler and they can use files on distributed storage, since no particular ordering of the input files is needed
    • for each fill, a file with 18000-channel TH2Fs storing amplitude information for each SMD channel is created (this method of dealing with random file order is a bit disk-space hungry, so make sure enough space is available, ~10 GB for 2005 pp; these files can be deleted after the next step); other information such as time stamps and pedestals is stored in text files
    • As a next step, the large number of files created by jobs on MuDSTs is consolidated into one file per fill
    • Status tables are created from each of those files (one per fill, if statistics are sufficient, otherwise no status table for that fill is created)
    • db readable files are produced from these status tables

    Running the code

    • Copy and compile the code in StRoot/StBSMDStatusMaker
    • Copy the macro that runs the code: RunStatus.C
    • Create scheduler scripts to submit your jobs. For pp2005, 50 files per job seems to give jobs with useful runtimes. In order to not get a totally randomized fill distribution in your jobs, submit them by day. A macro that creates a .csh that you can use to submit jobs by day is CreateSubmitScript.C; a template job description (you have to modify the paths to suit your needs) is pp2005Template.xml
    • Submit your jobs
    • Once all jobs are done, create a list of all output files via ls * > FileList.list in the directory where your output ended up
    • Consolidate data using the file list: run macro DoAdding.C after compiling AddHistograms.C (via .L AddHistograms.C++), this macro takes the directory where the files are located and the file list as arguments. This creates three files per fill: Fill*.root containing the histograms, Fill*.ped containing pedestal db information and Fill*.time containing the (approximate) start time of the fill
    • Create a list of root files via ls Fill*.root > FileList.list and convert it into a list of fill numbers using the macro GetListOfFills.C. This macro takes the file list and a file name for the fill number list as arguments
    • Perform the status table creation. For that, compile the shared library StatusTools.C (via .L StatusTools.C++), then run the macro ProcessList.C, with arguments RunList (created previously) and the directory where the Fill* files are located. The output of this is a number of files per fill (root file, flag file and time stamp file). The flag and the time stamp file are needed to create the db readable status table, the root file contains histograms created during the analysis process.
    • Create db readable status tables. This is done by running the macro WriteStatusFiles.C. This takes the directory where the flag and timestamp files are located as an argument. Two important notes:
      • This macro needs the full STAR environment (all other steps above except the running of jobs can be done on standalone machines)
      • The order in which the flag and time files are written (created) is crucial, since gSystem->GetDirEntry() is used to loop over all files. So care has to be taken if the files are copied from somewhere else
    • Copy the db readable files to the database location in StarDb, and test them! Use TestStatusFiles.C for example.

    For questions, please don't hesitate to contact me at fsimon@mit.edu!

    Simulation Timestamps

    All 4800 perfect:
    dbMaker->SetDateTime(20070101,000001);

    Run 5 pp, selected by Spin PWG:
    dbMaker->SetDateTime(20050506,214129);

    some more info from Frank on detailed SMD status:
    if (dbTime == 1) db1->SetDateTime(20050423,42518); //2005 stat1 (04:25:18)
    else if (dbTime == 2) db1->SetDateTime(20050521,100745); //2005 stat2
    else if (dbTime == 3) db1->SetDateTime(20050529,210408); //2005stat3
    else db1->SetDateTime(20050610,120313); //2005, Jumbo
    Run 6 pp:
    dbMaker->SetDateTime(20060522,112810);

    Table Insertion Timeline



    This page keeps a log book of all the BEMC database modifications. Please use this information to make sure which version of the tables you are grabbing from the database. The table is sorted by EntryTime.

    If you'd like to run an analysis using the database as it looked at some particular time, use the method
    St_db_Maker *dbMaker = new St_db_Maker("StarDb", "MySQL:StarDb");
    Int_t myDate = 20051231;
    Int_t myTime = 235959;
    dbMaker->SetMaxEntryTime(myDate,myTime);



    You can use the BEMC DB Browser to look at all the tables in the database

    The entries below are listed as: row number. EntryTime: Tables. Note.

    1. 2005-11-03: pp2005 bemcCalib table with timestamp = 2005-03-22 00:00:01 was uploaded to the database. Note: Adam's calibration table for pp2005. Click here for details.
    2. 2005-12-07: pp2005 offline bemcStatus tables with timestamps between 2005-04-19 11:36:11 and 2005-06-24 08:58:25 were uploaded to the database. Note: Dave's status tables for the pp2005 run.
    3. 2006-02-08: pp2005 online bemcPed tables with timestamps between 2005-04-19 05:37:10 and 2005-06-10 23:38:20 were deactivated. Note: Corruption problems reported by Dave.
    4. 2006-02-09: pp2005 offline (Dave's) bemcPed tables with timestamps between 2005-04-19 05:37:10 and 2005-06-10 23:38:20 were uploaded to the database. Note: Replacement for the pp2005 bemcPed tables.
    5. 2006-02-09: pp2004 online bemcPed tables with timestamps between 2004-05-05 01:41:40 and 2004-05-14 23:21:19 were deactivated. Note: Bad tables reported by Joanna, with large RMS values and missing channels. Click here for details.
    6. 2006-02-09: pp2004 offline (Dave's) bemcPed tables with timestamps between 2004-05-05 01:41:40 and 2004-05-14 23:21:19 were uploaded to the database. Note: Replacement for the pp2004 bemcPed tables.
    7. 2006-02-22: AuAu and pp 2004 bemcCalib table with timestamp 2004-01-01 00:04:00 was uploaded to the database. Note: Improvements in calibration by Adam Kocoloski. Click here for details.
    8. 2006-02-22: CuCu 2005 bemcCalib table with timestamp 2005-02-01 00:00:01 was uploaded to the database. Note: Improvements in calibration by Adam Kocoloski. Click here for details.
    9. 2006-02-22: pp 2005 bemcCalib table with timestamp 2005-03-22 00:00:02 was uploaded to the database. Note: Improvements in calibration by Adam Kocoloski. Click here for details. This is a copy of the table saved in row 8; the copy is necessary because there were already calibration tables saved for the pp2005 run.
    10. 2006-03-07: Saved perfect status tables (bemc, bsmde, bsmdp and bprs) for Run 6 with timestamp 2006-01-01 00:00:00. Note: First-order status tables necessary for fast production and the pp2006 vertex finder.
    11. 2006-03-30: Saved the initial BTOW calibration for pp2006 with timestamp 2006-03-11 08:27:00. Note: First calibration for pp2006 (online), based on eta-slice MIP peaks and slope equalization. Click here for details.
    12. 2006-04-19: A set of perfect status tables (bemc, bsmde, bsmdp and bprs) was saved in the DB for CuCu2005 with timestamp 2005-01-01 00:00:00. Note: This makes sure that the 2004 status tables are not picked up for any analysis/production done with the 2005 CuCu data while no detailed status tables are available.
    13. 2006-04-20: A perfect status table for BTOW, including only the west side of the EMC, was saved in the DB for CuCu2005 with timestamp 2005-01-01 00:00:01. Note: Added to replace the previous perfect status table that included the full detector, because the east side was being commissioned.
    14. 2006-06-16: Offline BSMD status tables for 2005 pp running; event timestamps are between 2005-04-16 06:48:09 and 2005-06-23 19:38:42. Note: Tables produced by Frank Simon. Click here for details.
    15. 2006-06-21: Offline BTOW status tables for 2005 pp running; event timestamps are between 2005-04-19 11:36:11 and 2005-05-14 09:17:59. Note: These tables should have been / were uploaded back in row 2; it's not clear what happened to them.
    16. 2006-06-21: Online BTOW pedestals for 2006 pp running; event timestamps are between 2005-03-02 08:40:15 and 2005-06-19 04:41:18. Note: BTOW pedestals were calculated and saved to the DB automatically during the run. Unfortunately the tables were corrupted during the upload, so we need to upload these tables again with +1 second timestamps.
    17. 2006-08-15: BSMD pedestals for Run 6. Note: Details.
    18. 2006-08-16: BTOW status for Run 6, ~1 table/fill. Note: Should be good enough for vertex finding during production, but not necessarily the final set of tables. Details.
    19. 2006-10-17: Perfect BPRS status table for the 2006 run. Note: Begin time 2006-01-01.
    20. 2006-11-10: BTOW status for Run 5 CuCu. Note: Link needs to be updated with a summary page. Details.
    21. 2006-11-21: Fixed timestamps for Run 5 pp status. Note: See starsoft post.
    22. 2006-11-30: Fixed timestamps for Run 5 pp peds. Note: See starsoft post.
    23. 2006-12-07: Offline BTOW calibration for Run 6. Note: Run 6 BTOW Calibration.
    24. 2007-01-17: Final BTOW status tables for trans, long2. Note: Details.
    25. 2007-02-13: Corrected 3 Run 6 tower peds in a few runs. Note: Hypernews discussion.
    26. 2007-02-26: Final BSMD Run 6 status tables. Note: Details.

    Trigger Database

    This database stores all BEMC trigger information such as trigger status, masks and pedestals used to obtain the high tower and patch sum information.  The database is updated online while taking data.  We have the following table formats:

    • St_emcTriggerStatus - this table contains status/mask information for the trigger
    • St_emcTriggerPed - this table contains the pedestal and bit conversion scheme used in the trigger
    • St_emcTriggerLUT - this table contains the lookup table information.  Because the LUT is very large it is encoded in simple formulae.  The FormulaTag entry specifies the formula used for each patch.
    The tables are stored in the STAR database under the directory /Calibrations/emc/trigger and are called bemcTriggerStatus and bemcTriggerPed.  To access those tables from an analysis maker do:

    TDataSet *DB = GetInputDB("Calibrations/emc/trigger");

    St_emcTriggerStatus *table = (St_emcTriggerStatus*) DB->Find("bemcTriggerStatus");
    emcTriggerStatus_st *trgStatus = table->GetTable();

    Important Information About Pedestal Tables

    In order to save space and make the download fast, PEDESTALS and RMS are saved as SHORT.  So, the real pedestal value is PED/100.  Similarly, in order to save tables in the database you have to multiply the real pedestal by 100. The same goes for the RMS.

    The pedestal table also includes the 6-bit conversion scheme used to generate the high tower and patch sum information.

    Status Information

    The St_emcTriggerStatus table contains the status information for each single tower, high tower and patch sum (the patch sum is the sum in the 4x4 patches.  It is *not* the jet patch).  The status is a simple 0/1 that reflects the masks that are being applied to the electronics, where

    • 0 - masked out
    • 1 - included in trigger

    Tables Structure

    /*
     * Table: emcTriggerStatus
     *
     * description: Table which contains the trigger masks
     */
    struct emcTriggerStatus
    {
        octet PatchStatus[300];      // Patch sum masks; the index is the patch number
        octet HighTowerStatus[300];  // High tower masks; the index is the patch number
        octet TowerStatus[30][160];  // Single tower masks; the indices are the crate number and position in crate
    };

    /*
     * Table: emcTriggerPed
     *
     * description: Table which contains pedestal information and the 6-bit conversion used in the trigger
     */
    struct emcTriggerPed
    {
        unsigned long PedShift;                  // pedestal shift
        unsigned long BitConversionMode[30][10]; // 6-bit conversion mode; the indices are the crate number and position in crate
        unsigned long Ped[30][160];              // pedestal value; the indices are the crate number and position in crate
    };

    /*
     * Table: emcTriggerLUT
     *
     * description: Table which contains each patch's LUT information
     */
    struct emcTriggerLUT
    {
        unsigned long FormulaTag[30][10];        // formula tag for each [crate][patch]
        unsigned long FormulaParameter0[30][10]; // Parameter 0 for the LUT formula in [crate][patch]
        unsigned long FormulaParameter1[30][10]; // Parameter 1 for the LUT formula in [crate][patch]
        unsigned long FormulaParameter2[30][10]; // Parameter 2 for the LUT formula in [crate][patch]
        unsigned long FormulaParameter3[30][10]; // Parameter 3 for the LUT formula in [crate][patch]
        unsigned long FormulaParameter4[30][10]; // Parameter 4 for the LUT formula in [crate][patch]
        unsigned long FormulaParameter5[30][10]; // Parameter 5 for the LUT formula in [crate][patch]
    };

    BEMC Online Trigger Monitoring

    During data taking the BEMC trigger information is monitored, and changes in the configuration (new pedestals, masks, etc.) are recorded.  The code relevant to this online trigger monitoring was developed by Oleksandr, and is checked into CVS here (usage instructions in the README file).  The scripts execute via a cronjob on the online machines on the starp network.  In particular, there exist directories with results from previous years at /ldaphome/onlmon/bemctrgdb20XX/ on onl08.starp.bnl.gov.

    The final location for this information is in the offline DB, and the definitions for the tables are given here.  These DB tables are used by the StBemcTriggerSimu in the StTriggerUtilities package to replicate the BEMC trigger conditions for a particular timestamp. 

    The DB tables can be uploaded while taking data or stored in ROOT files to be uploaded after data taking is complete.  To upload all the tables stored in ROOT files during data taking, only a simple script is needed, employing StBemcTablesWriter to read in a list of files and upload their information to the DB.  This script (uploadToDB.C) is checked into the macros directory in CVS here.


    Yearly Timestamp Initialization

    Yearly Timestamp Initialization

    This page will document the yearly timestamp initialization requested by the S&C group (Run 12 for example).  The purpose is to set initial DB tables for "sim" and "ofl" flavor in sync with the geometry timeline for each year.  The geometry timeline is documented here.

     

    The timestamps chosen for BEMC initialization are in the table below

      Simulation Real Data
    Run 10 2009-12-12 2009-12-25
    Run 11 2010-12-10 2010-12-20
    Run 12 2011-12-10 2011-12-20

     

    The "sim" tables used for initialization are ideal gains, pedestals and status tables.

    The "ofl" tables used for initialization are the best known gains from previous years, and a reasonable se of pedestals and status tables from a previous year.  Obviously they will be updated once the run begins and better known values are available.

     

    To simplify the initialization process from year to year, a macro (attached below) was written which copies DB tables from previous years to the current initialization timestamp. 

     

    Hardware

    BEMC_FEE_Repairs (BSMD)

    Here (below) are the BSMD FEE repair record spreadsheets from Phil Kuczewski.

    The one labeled "BEMC_FEE_Repairs-2010.xls" is the most recent, including repairs in 2010

    (although on 9/8/10 Oleg says that 1 more SAS was already replaced and there likely will be 2 more)

    Mapping

    Information about how the BEMC detectors are indexed (Pre run14)

    Oleksandr's Spreadsheet of Towers' Layout (Excel)
    Oleksandr's Spreadsheet of Towers' Layout (PDF)


    For run 14 there were signal cable swaps for PMT boxes 13->14, 15->16, and 45->46. The updated tower maps are here:

    Run14 Tower Layout (Excel)
    Run14 Tower Layout (PDF)

    In these new spreadsheets, the yellow, light blue, and light gray boxes are where the swaps are. On the outside, the PMT boxes are labeled (PMB) and you can clearly see the swaps. As an example, here's how it would work:

    Soft Id's 3521, 3522, 3523, 3524, ...., 3461, 3462, 3463, 3464 were swapped with
                 3441, 3442, 3443, 3444, ...., 3381, 3382, 3383, 3384

    BSMDE 2010 mapping problem and solution

    All the details of the mapping problem can be found in this ticket.  This page is a summary of the problem, solution, and the implementation.

    During Run 10 a problem with the BSMD mapping was discovered (details here).  It was decided to continue taking data with the 2 fibers swapped for future running, and simply correct the mapping in the DB to reflect the hardware configuration before production.  The mapping for the BSMD phi plane (BSMDP) was corrected (the 2 fibers were completely swapped in the DB) before production to match the Run 10 hardware configuration, so there were no problems with the BSMDP mapping in production.

    Problem:

    The correction to the BSMD eta plane (BSMDE) mapping, however, was incomplete and did not swap the 2 fibers in the DB completely.  Ahmed found this mapping problem in "Phase I" of the 200GeV QM production (production series P10ij).  All Run 10 data produced in the P10ih and P10ij production series have this BSMDE mapping issue. 

    Solution:

    a)  For "Phase II"  of the Run 10 data production in the P10ik production series an updated DBV was used to include the correct mapping  for both the BSMDE and BSMDP planes.  This data should be analyzed as usual, with no need for a patch.

    b)  In an effort to recover the data produced with the BSMDE mapping problem (P10ih and P10ij) a patch was included in the SL10k and future libraries to correctly swap the BSMDE channels as an afterburner using StEmcDecoder. 

    Implementation:

    The implementation of part b) of the solution above is similar to the patch for previous tower mapping problems.  The software patch includes 3 libraries StEmcDecoder, StEmcADCtoEMaker, StEmcRawMaker.

    1. StEmcDecoder will use the non-corrected map by default, although it is possible to know the correction that should be applied for each channel using the method

      StEmcDecoder::GetSmdBugCorrectionShift(int id_old, int& shift)

      where id_old is the uncorrected software id and shift is the shift that should be applied to the id. In this case:

       id_corrected = id_old + shift
       
    2. StBemcRaw (in StEmcRawMaker) was updated with a method

      StBemcRaw::smdMapBug(bool)

      that enables (true) or disables (false) the on-the-fly correction. The default options are:

      for StEmcRawMaker -> false (map correction IS NOT applied for P10ih or P10ij PRODUCTION)
      for StEmcADCtoEMaker ->true (map correction IS applied for USER ANALYSIS of P10ih or P10ij produced MuDsts)

      IMPORTANT: If you run your analysis with StEmcADCtoEMaker, the StEmcRawHits in the StEmcCollection will automatically have the map FIXED. This is not the case if you use StEmcRawMaker or StMuEmcCollection for analysis.
    The consequences (only for P10ih and P10ij data):

    1. muDST data is saved with the non-corrected software id

      If you read muDST data directly, without running StEmcADCtoEMaker, you need to correct the ids by hand. The following code is an example of how to do that correction; you need to use StEmcDecoder to get the correct id.

      //////////////////////////////////////////////////////////////////////////////////////////////////////////
      StEmcDecoder* decoder = new StEmcDecoder(date, time); // date and time correspond to the event timestamp
      StMuEmcCollection* muEmc = muMk->muDst()->muEmcCollection(); // get the MuEmcCollection from muDST

      //...................  BSMDE  ....................

      int det = BSMDE;
      int nh = muEmc->getNSmdHits(det);
      for(Int_t j = 0; j < nh; j++)
      {
          StMuEmcHit* hit = muEmc->getSmdHit(j, det);
          int ID  = hit->getId();     // uncorrected software id, as saved in the muDST
          int ADC = hit->getAdc();
          int CAP = hit->getCalType();
          int shift = 0;
          decoder->GetSmdBugCorrectionShift(ID, shift);
          int newID = ID + shift;     // newID is the correct softID for this hit
          if(newID < 0) continue;     // mask lost channels

          // user code starts here
      }
      //////////////////////////////////////////////////////////////////////////////////////////////////////////
      
      

    2. StEmcCollection (StEvent format) after you run StEmcADCtoEMaker at analysis level is created with correct software id.
    3. Lost Channels  :  There are 498 strips which are masked by the patch (correct softID returned by decoder is -2e4) because the correct hits for that channel are not saved in the muDst at all, due to the mapping in the DB used during production.
    4. Database :  Entries are not yet uploaded for Run 10 BSMD data, but the intention is that they will be uploaded with the correct softId mapping so no patch will be needed to get the DB.

     

     

    BTOW map problem and solution

    If you are not familiar with the map problem, follow this discussion in the emc-list. Basically, because of swapped fiber optics or swapped signal cables, some of the towers are not in the software_id position they are supposed to be in. There are roughly 100 swaps (most on the west side), corresponding to about 200 towers.

    Some of the swapped towers could be fixed because they originated from swapped signal cables. Where the swap happened at the fiber-optics level, the towers were left as they were because the fibers are difficult to access and fragile. In these cases, and for previous runs, a software patch was made in order to recover the swapped towers.

    The list of swapped towers can be found here:

    The software patch is implemented in 3 libraries: StDaqLib (StEmcDecoder), StEmcRawMaker and StEmcADCtoEMaker. The idea is the following:
    1. For 2006 (and future) data, StEmcDecoder has the correct map, and all database tables and productions are done correctly. In this case the patch is invisible to the user.
       
    2. For 2004/2005 data, because of the large number of tables in the database and the many productions already done, the patch in StEmcDecoder is turned OFF by default; changing the database and old productions would be too much trouble at this point. In this case, the patch works in the following way:
       
      1. StEmcDecoder will use the non-corrected map by default, although it is possible to know the correction that should be applied for each tower using the method

        StEmcDecoder::GetTowerBugCorrectionShift(int id_old, int& shift)

        where id_old is the uncorrected software id and shift is the shift that should be applied to the id. In this case:

         id_corrected = id_old + shift
         
      2. StBemcRaw (in StEmcRawMaker) was updated with a method

        StBemcRaw::towerMapBug(bool)

        that enables (true) or disables (false) the on-the-fly correction. The default options are:

        for StEmcRawMaker -> false (map correction IS NOT applied for PRODUCTION)
        for StEmcADCtoEMaker ->true (map correction IS applied for USER ANALYSIS)

        IMPORTANT: If you run your analysis with StEmcADCtoEMaker, the StEmcRawHits in the StEmcCollection will automatically have the map FIXED. This is not the case if you use StEmcRawMaker or StMuEmcCollection for analysis.

    The consequences (only for 2004/2005 data):

    1. muDST data is saved with the non-corrected software id

      If you read muDST data directly, without running StEmcADCtoEMaker, you need to correct the ids by hand. The following code is an example of how to do that correction; you need to use StEmcDecoder to get the correct id.

      //////////////////////////////////////////////////////////////////////////////////////////////////////////
      StEmcDecoder* decoder = new StEmcDecoder(date, time); // date and time correspond to the event timestamp
      StMuEmcCollection* emc = muMk->muDst()->muEmcCollection(); // get the MuEmcCollection from muDST

      //...................  B T O W   ....................
      for (int idOld = 1; idOld <= 4800; idOld++)
      {
          int rawAdc = emc->getTowerADC(idOld); // ADC stored under the uncorrected id
          int shift = 0;
          decoder->GetTowerBugCorrectionShift(idOld, shift);
          int idNew = idOld + shift;            // idNew is the correct softID for this tower

          // user code starts here
      }
      //////////////////////////////////////////////////////////////////////////////////////////////////////////

    2. Database is saved with non-corrected software id
    3. StEmcCollection (StEvent format) after you run StEmcADCtoEMaker at analysis level is created with correct software id.
    4. StBemcTables. To enable the tower map bug correction, use the flag kTRUE in the StBemcTables constructor. See the examples below:
      // This method returns values for NON-CORRECTED IDs (old ids, as the tables are saved in DB)
      tables = new StBemcTables();
      float pedestal, rms;
      tables->getPedestal(BTOW, idOld, 0, pedestal, rms);

      // This method returns values for CORRECTED IDs
      tables = new StBemcTables(kTRUE); // Use kTRUE to enable the BTOW map correction in StBemcTables. Default is kFALSE
      float pedestal, rms;
      tables->getPedestal(BTOW, idNew, 0, pedestal, rms);

    Marcia's BPRS mapping page

    This page was written by Marcia Maria de Moura in January 2005 and ported into Drupal in October 2007

    We are interested in determining the sequence in which the measurements of the pre-shower "cells" come into the DAQ system.

    The path from the detector to the DAQ system is not so trivial. There are many connections and they are all tagged, but sometimes, in order to allow easier assembly of the system, the final sequence in which the data from the cells are sent to DAQ is not the obvious one.

    To better illustrate the mapping for the pre-shower, we show the correlation of some connections and also the correlation with the towers.

    In figure 1 we show the part of the barrel EMC corresponding to three modules. The modules are presented from η=0 to η=1. The coloured boxes are the towers. The four rows correspond to a photomultiplier box (PMB). One PMB corresponds to one entire module plus two halves of the neighboring modules on each side of the central one. On the left side of the figure is the legend for the series of connectors ST1, ST2, ST3 and ST4 for the towers. The figure shows how these connectors are related to the towers.


    Figure 1 - Correlation of PMT connectors and towers' positions. For a larger view, click on figure.

    In figure 2 we show an example of the numbering of towers for PMB 11W (W stands for west). This numbering is the one used for analysis and we call it the software id. By analogy, we apply the same numbering to the pre-shower cells. For the complete numbering of towers and pre-shower cells, see EMC distribution.


    Figure 2 - Example of tower numbering for PMB 11W

    In the case of the pre-shower cells, the light is not sent to the same photomultiplier tubes as for the towers, but to sets of Multi-Anode Photomultipliers (MPMT). There are five sets of MPMTs in each PMB and the corresponding connectors are identified by MP1, MP2, MP3, MP4 and MP5. In figure 3 we show the correlation of the MPX connections to the pre-shower cells, also for PMB 11W. The distribution looks similar, but the rows are actually different from figure 2. On the right side of the figure the correspondence in rows with figure 2 is indicated. From Vladimir Petrov we obtained how the MPX connectors are related to some electronics ids (FEE number, SCA number and SCA channel), which are correlated with the muxwire number. The muxwire number is the one that determines the sequence of data to DAQ. In figure 3 the muxwire number is shown just below the software id.

    From all the electronics parameters above, the algorithm to associate the software id with the data from DAQ was determined, and it is in the StEmcDecoder. In previous attempts, some information about the electronics ids was not up to date, which led to a wrong mapping in the first place. One parameter of the algorithm, an offset (PsdOffset_tmp[40]), was wrongly determined. The electronics id information has been updated and the algorithm has been corrected. In tables 1 and 2 we show the values of some parameters for PMB 11W. They are analogous for the other PMBs; the only thing that changes is the software id, but the distribution in position is the same. Among the many parameters in tables 1 and 2, the updated offset values used in StEmcDecoder are displayed.

    Of course there are additional parameters in the algorithm, but there is no point in explaining all of them here. For more details, see StEmcDecoder in the STAR Computing software guide.


    The following image has the power supply number, module, and channel for PMT box 11 on both east and west.

    Service Tasks



    HIGH PRIORITY

     

    BSMD Calibration Studies

    Description:  The calibration coefficients for the BSMD were derived from old test beam studies.  They are set so that the energy of an SMD cluster should be equal to the energy of the tower above it.  One way to check this is to select low-energy electrons and plot SMD energy vs. tower energy.  Initial studies could be accomplished on a month timescale.

    Assigned to: 
    Willie Leight (MIT)

    Status:
      In Progress.

    BPRS Calibration Studies

    Description:  We have no offline calibration coefficients for the preshower.  While the absolute energy scale is not overly important, we need to verify that the channels are well-calibrated relative to each other.

    Assigned to:  Rory Clarke (TAMU)

    Status:  In Progress -- see link

    Neutral Pion Tower Calibration Algorithm

    Description:  Current BTOW calibrations rely on the TPC for a combined MIP/electron-based analysis.  A calibration using neutral pions would be complementary and possibly more precise.

    Assigned to:  Alan Hoffman (MIT)

    Status:  In Progress.

    Trigger Simulator

    Description:  The goal here is to extend the existing BEMC trigger simulator to support "exact" trigger simulation using the logic that we actually use online, to include L2 triggers, and to integrate this simulator with ones provided by other subdetectors.

    Assigned to:  Jan Balewski (IUCF -- Endcap and L2), Renee Fatemi (UK -- BEMC), and Adam Kocoloski (MIT -- BEMC)

    Status:  In Progress

    Control Room Software Maintenance

    Description:  We'll hopefully be replacing most of the BEMC Control Room computing hardware before Run 8, and we'll need someone to check that everything still works as before.  Additionally, we need to fix the code that auto-uploads DB tables (should be an easy fix), and we should work on uploading BSMD pedestal tables that include separate values for CAP == 124 and 125.

    Assigned to:  ???

    Online BTOW Status Table Generation Using L2

    Description:  L2 can accumulate sufficient statistics in real time to allow us to record BTOW status tables without waiting for fastOffline or full production.  "Real-time" status tables in the DB will allow for better fastOffline QA and could decrease the amount of time needed to analyze the data.

    Assigned to:  ???
     

    NORMAL PRIORITY

     

    STAR DB Browser Maintenance

    Description:  Not really BEMC-specific, but Dmitri Arkhipkin is looking for a maintainer for the STAR DB Browser, which is a very useful tool for people working on the BEMC.

    Assigned to:  ???

    Database-Driven BEMC Mapping

    Description:  We use the StEmcDecoder class to generate time-dependent mapping tables on-the-fly.  I'd like to investigate using a DB to store this information.  First step would be to decide on a schema (i.e. is each tower a single row, or do we do the old trick of storing an entire mapping as a blob?).  Next we would populate the DB with tables that cover all the possible StEmcDecoder configurations (~20 or so).  It may be that we never use this DB for more than browsing the mapping; even so, it could still prove to be a useful tool.  On the other hand, we may decide to use it instead of the decoder for some uses.  Minimal MySQL knowledge would be helpful, but not required.

    Assigned to:  ???

    Run 7 BTOW Calibration

    Description:  Begin with MIP + electron calibration procedure used in previous years and investigate refinements.  Typically a couple months of work.

    Assigned to:  Matthew Walker (MIT)

    Status:  Waiting on Production.

    Run 7 BTOW Status

    Description:  Track tower status and upload DB tables for offline analysis.  Timescale of ~1 month plus occasional updates, corrections.  A set of tables are currently in the DB, but these ignore hot towers.

    Assigned to:  ???

    Run 7 BSMD Status

    Description:  Track SMD status and upload DB tables for offline analysis.  Timescale of ~1 month plus occasional updates, corrections.

    Assigned to:  ???

    Run 7 BPRS Status

    Description:  Adapt the existing BTOW status code for the preshower and use it to generate DB tables for offline analysis.

    Assigned to:  Matt Cervantes (TAMU)

    Status:  In Progress

     

    COMPLETE

     

    BPRS Mapping Update

    Description:  The BPRS mapping needs to be fixed to take into account the tower swaps we discovered in Run 5, plus any other problems that might turn up with the increased acceptance and statistics we have now.

    Assigned to:  Rory Clarke (TAMU)

    Status:  Complete -- mapping checked into CVS to fix Run 6 and Run 7 data at analysis level (StEmcADCtoEMaker).

    Single-Pass Embedding with Calorimeters

    Description:  BEMC embedding is typically done as an afterburner on the regular embedding.  This has the advantage of allowing multiple configurations to be run without needing to redo the full TPC embedding, but we should also support the standard embedding mode of a single BFC chain.

    Assigned to: 
    Wei-Ming Zhang (KSU)

    Status:  Complete

    Run 6 BTOW Calibration

    Description:  Perform MIP + electron calibration using Run 6 data.  Investigate electron-only and pi0-based calibrations.

    Assigned to:  Adam Kocoloski (MIT)

    Status:  Complete -- see Run 6 BTOW Calibration

    Run 6 BTOW Status

    Description:  Track tower status and upload DB tables for offline analysis.  Timescale of ~1 month plus occasional updates, corrections.

    Assigned to:  David Staszak (UCLA)

    Status:  Complete -- (link goes here)

    Run 6 BSMD Status

    Description:  Develop standard code to track SMD status and upload DB tables for offline analysis.  Investigate possibility of SMD status codes for individual capacitors

    Assigned to:  Priscilla Kurnadi (UCLA)

    Status:  Complete -- (link goes here)


    Of course, many others have contributed to the calibrations and physics program of the BEMC over the years.  You know who you are.  Thanks!

    --Adam Kocoloski

    Software

    The main EMC offline reconstruction chain consists of:  
    • StEmcADCtoEMaker - This maker gets the ADC values for all EMC sub detectors and applies the calibration to get the proper energy value.
    • StPreEclMaker - This maker does clustering in all EMC sub detectors.
    • StEpcMaker - This maker matches the clusters in EMC sub detectors to get the final EMC point.
    To have this chain working properly, some other programs are necessary. The full offline reconstruction chain is shown in the scheme below:

    Aside from Drupal, one can also find some very old BEMC documentation at http://rhic.physics.wayne.edu/emc/

    Clustering

    The second most important step in EMC data reduction is finding clusters in the EMC data.

    The main code performing EMC clustering is located in StRoot/StPreEclMaker and StRoot/StEpcMaker.

    The main idea behind the BEMC cluster finder is to allow multiple clustering algorithms. With that in mind, any user can develop his/her own finder and plug it into the main cluster finder.

    In order to develop a new finder the user should follow the guidelines described in this page.

    Display Macro

    A small cluster finder viewer was written for the BEMC detector. In order to run it, use the macro:

    $CVSROOT/StRoot/StPreEclMaker/macros/clusterDisplay.C

    This macro loops over events in the muDST files, runs the BEMC cluster finder and creates an event by event display of the hits and clusters in the detector. This is an important tool for cluster QA because you can test the cluster parameters and check the results online. It also has a method to do statistical cluster QA over many events.

    The commands available are:

    setHitThreshold(Int_t det, Float_t th)

    This method defines the energy threshold for displaying a detector HIT in the display. The default value in the code is 0.2 GeV. Values should be entered in GeV.  The parameters are:

    • Int_t det = detector name. (BTOW = 1, BPRS = 2, BSMDE = 3 and BSMDP = 4)
    • Float_t th = hit energy threshold in GeV. 

    next()

    This method displays the next event in the queue.

    qa(Int_t nevents)

    This method loops over many events in order to fill some cluster QA histograms. The user needs to open a TBrowser in order to access the histograms.  The parameters are:

    • Int_t nevents = number of events to be processed in QA 

    help()

    Displays a short help message on the screen.
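
    As an illustration, a possible interactive session is sketched below. How the macro is invoked and what argument it takes (a muDST file is assumed here) should be checked against the header of clusterDisplay.C itself; the commands are the ones documented above.

    // Minimal sketch of an interactive session with the display macro.
    // ASSUMPTION: the macro argument (a muDST file) is illustrative only.
    root [0] .x clusterDisplay.C("st_physics_file.MuDst.root")
    root [1] setHitThreshold(3, 0.1)   // show BSMDE hits above 0.1 GeV
    root [2] next()                    // display the next event
    root [3] qa(500)                   // fill QA histograms over 500 events
    root [4] help()                    // list the available commands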

    How to write a new Cluster Finder

    To create a new algorithm it is important to understand how the cluster finder works.

    EmcClusterAlgorithm.h

    This file defines a simple enumerator for the many possible cluster algorithms. In order to add a new algorithm, the user should add a new entry to this enumerator.

    StPreEclMaker.h and .cxx

    The cluster finder itself (StPreEclMaker) is a very simple maker. This maker is responsible only for creating the finder (in the Init() method) and calling some basic functions in the Make() method.

    StPreEclMaker::Init()

    This method just instantiates the finder. At the very beginning of the Init() method, the code checks which algorithm is being requested and creates the proper finder.

    StPreEclMaker::Make()

    The Make() method grabs the event (StEvent only) and exits if no event is found. If the event is found it calls the following methods in the finder:

    • mFinder->clear(); // to clear the previous event local cluster
    • mFinder->clear(StEvent*); // to clean any old cluster or pointer in StEvent
    • mFinder->findClusters(StEvent*); // to find the clusters in this event
    • mFinder->fillStEvent(StEvent*); // to fill StEvent with the new clusters
    • mFinder->fillHistograms(StEvent*); // to fill QA histograms

    The modifications the user should do in StPreEclMaker.cxx are only to add the possibility of instantiating his/her finder in the StPreEclMaker::Init() method.

    Creating a new cluster algorithm

    Before creating a new cluster algorithm it is important to know the basic idea behind the code.  The basic classes are:

    StEmcPreCluster

    There is an internal data format for the clusters in the code. The clusters are StEmcPreCluster objects, which are derived from the plain ROOT TObject. StEmcPreCluster is more complete than the regular StEvent cluster object (StEmcCluster) because it has methods to add and remove hits, split and merge clusters, as well as to set a matching id between different detectors.

    StEmcPreClusterCollection

    This is a container for the StEmcPreCluster objects that are created. This object derives from the regular ROOT TList object. It has methods to create, add, remove and delete StEmcPreClusters. StEmcPreCluster objects that are created in or added to a collection are owned by the collection, so be careful.

    StEmcVirtualFinder

    This is the basic finder class. Any cluster algorithm should inherit from this class. It already creates the necessary collections for the clusters in each detector.

    To create a new finder algorithm you should define a finder class that inherits from StEmcVirtualFinder and override the method findClusters(StEvent*). Let's suppose you want to create a StEmcMyFinder algorithm. You should create a class with, at least, the following:
    StEmcMyFinder.h

    #ifndef STAR_StEmcMyFinder
    #define STAR_StEmcMyFinder

    #include "StEmcVirtualFinder.h"

    class StEvent;

    class StEmcMyFinder : public StEmcVirtualFinder
    {
    private:

    protected:
    public:
    StEmcMyFinder();
    virtual ~StEmcMyFinder();
    virtual Bool_t findClusters(StEvent*);

    ClassDef(StEmcMyFinder,1)
    };

    #endif

    StEmcMyFinder.cxx

    #include "StEmcMyFinder.h"
    #include "StEvent.h"
    #include "StEventTypes.h"

    ClassImp(StEmcMyFinder)

    StEmcMyFinder::StEmcMyFinder():StEmcVirtualFinder()
    {
    // initialize your stuff in here
    }
    StEmcMyFinder::~StEmcMyFinder()
    {
    }
    Bool_t StEmcMyFinder::findClusters(StEvent* event)
    {
    // check if there is an emc collection

    StEmcCollection *emc = event->emcCollection();
    if(!emc) return kFALSE;

    // find your clusters

    return kTRUE;
    }

    The method findClusters(StEvent*) is the method that StPreEclMaker will call in order to find clusters in the event. All the StEmcVirtualFinder methods are available for the user.

    The user has 4 pre cluster collections available. They are named

    mColl[det-1]

    where det =1, 2, 3 and 4 (btow, bprs, bsmde and bsmdp)

    How to deal with clusters and collections

    Let's suppose you have identified a cluster in a given detector. How do you work with StEmcPreCluster objects and the collections? Please look at the code itself to become more familiar with the interface. The next lines give basic instructions for the most common tools:

    To create and add a cluster to the collection for detector 'det':
    StEmcPreCluster *cl = mColl[det-1]->newCluster();

    This line creates a cluster in the collection and returns its pointer.
     

    To remove and delete a cluster from a collection
    mColl[det-1]->removeCluster(cl); // cl is the pointer to the cluster

    or

    mColl[det-1]->removeCluster(i); // i is the index of the cluster in the collection

    This will remove AND delete the cluster.
     

    To get the number of clusters in a collection
    mColl[det-1]->getNClusters();
     

    To add hits to a StEmcPreCluster (the pointer to the cluster is called 'cl'):
    cl->addHit(hit);

    where hit is a pointer to a StEmcRawHit object.
     

    To add the content of a pre-cluster 'cl1' to a cluster 'cl':
    cl->addCluster(cl1);

    The added cluster 'cl1' is not deleted. This is very useful if you have identified a split cluster and would like to merge the pieces.
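
    As a concrete illustration, the sketch below puts these building blocks together in a possible findClusters() body. The StEvent access pattern (emcCollection -> detector -> module -> hits) is the standard one; the 0.5 GeV seed cut and the one-hit "clusters" are placeholders for whatever logic your algorithm actually needs.

    // Minimal sketch of a findClusters() body for the hypothetical StEmcMyFinder.
    // The seed cut and the one-hit clusters are placeholders, not a real algorithm.
    Bool_t StEmcMyFinder::findClusters(StEvent* event)
    {
        StEmcCollection* emc = event->emcCollection();
        if(!emc) return kFALSE;

        StEmcDetector* btow = emc->detector(kBarrelEmcTowerId); // towers (det = 1)
        if(!btow) return kFALSE;

        for(UInt_t m = 1; m <= btow->numberOfModules(); m++)
        {
            StSPtrVecEmcRawHit& hits = btow->module(m)->hits();
            for(UInt_t i = 0; i < hits.size(); i++)
            {
                StEmcRawHit* hit = hits[i];
                if(hit->energy() < 0.5) continue;             // placeholder seed cut
                StEmcPreCluster* cl = mColl[0]->newCluster(); // BTOW collection (det = 1)
                cl->addHit(hit);                              // one-hit cluster
            }
        }
        return kTRUE;
    }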
     

    How to do matching in the cluster finder?
    Depending on the cluster finder algorithm, one can build clusters in one detector using the information from another detector as a seed. In this sense, it is important to have some kind of matching available in the cluster finder. In the original software scheme, StEpcMaker is the maker responsible for matching the clusters in the BEMC sub detectors. This maker is still in the chain.

    Because of StEpcMaker, we CANNOT create StEmcPoints in the cluster finder. This should be done *ONLY* by StEpcMaker. To allow for this, the StEmcPreCluster object has a member that flags the cluster with matching information. This is done by setting a matching id in the cluster. Use the methods matchingId() and setMatchingId(Int_t).

    The matching id is an integer number. If matching id = 0 (default value), no matching was done and the matching will be decided in StEpcMaker. If matching id is not equal to 0, StEpcMaker will create points with clusters with the same matching Id.

    Using this procedure, we can develop advanced matching methods in the cluster finder algorithm.
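
    A minimal sketch of how a finder might use this, based only on the pre-cluster methods described above (the pair id value itself is arbitrary):

    // Flag an SMD-eta / SMD-phi cluster pair so that StEpcMaker will later combine
    // them into the same point.  Any non-zero matching id groups clusters together.
    StEmcPreCluster* clEta = mColl[2]->newCluster(); // BSMDE collection (det = 3)
    StEmcPreCluster* clPhi = mColl[3]->newCluster(); // BSMDP collection (det = 4)
    // ... add hits to each cluster here ...
    int pairId = 1;                // user-defined, non-zero
    clEta->setMatchingId(pairId);
    clPhi->setMatchingId(pairId);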

    How to plug your finder into the cluster finder

    In order to plug your algorithm into the cluster finder you need to change two files in the main finder:

    EmcClusterAlgorithm.h
    This file defines an enumerator with the cluster finders. To add your algorithm, add an entry to the enumerator definition, for example:


    enum EmcClusterAlgorithm
    { kEmcClNoFinder = 0, kEmcClDefault = 1, kEmcClOld = 2, kEmcMyFinder = 3};

    StPreEclMaker.cxx
    You should change the Init() method in StPreEclMaker in order to instantiate your finder. To instantiate your StEmcMyFinder object, add, in the Init() method of StPreEclMaker:

    if(!mFinder)
    {
    if(mAlg == kEmcClOld) mFinder = new StEmcOldFinder();
    if(mAlg == kEmcMyFinder) mFinder = new StEmcMyFinder();
    }

     

    Original cluster algorithm (kEmcClOld)

    This page describes the original cluster finder algorithm. In order to use this algorithm you should set the algorithm to kEmcClOld.

    Salient features of the method implemented in the program are:

    • Clustering is performed for each sub detector separately.
    • Currently clusters are found for each module in the sub detectors. There are some specific reasons for adopting this approach especially for Shower Max Detectors (SMD's). For towers, we are still discussing how it should be implemented properly. We have tried to give some evaluation results for this cluster finder.
    • There are some parameters used in the code with their default values. These default values were obtained after a preliminary evaluation, but for very specific studies it might be necessary to change these parameters. 
    • The output is written in StEvent format.

    Cluster algorithm

    • Performs clustering module by module
    • Loops over hits for each sub detector module
    • Looks for local maxima

    Cluster parameters

    • mEnergySeed – minimum hit energy to start looking for a cluster
    • mEnergyAdd -- minimum hit energy to consider the hit part of a cluster
    • mSizeMax – maximum size of a cluster
    • mEnergyThresholdAll – minimum hit energy a cluster should have in order to be saved

    Neighborhood criterion

    Because of the differences in dimension and readout pattern in the different sub detectors, we need to adopt different criteria for obtaining the members of the clusters.

    • BEMC: A tower is added to the existing cluster if it is adjacent to the seed. The maximum number of towers in a cluster is governed by the parameter mSizeMax. It should be noted that the BEMC, which uses the tower as its unit, is a 2-dimensional detector, and "adjacent" includes the immediate neighbors in both eta and phi.
    • BSMD: As the SMDs are basically one-dimensional detectors, the neighborhood criterion is that, for a strip to be a neighbor, it has to be adjacent to any of the existing members of the cluster. Here also the maximum number of strips in a cluster is governed by the mSizeMax parameter.

    Cluster Object

    After obtaining the clusters, the following properties are computed for each cluster and stored as members of the cluster object (a short accessor sketch follows the list).

    • Cluster Energy (total energy of the cluster members).
    • Eta cluster (mean eta position of the cluster).
    • Phi cluster (mean phi position of the cluster).
    • Sigma eta, Sigma phi (widths of the cluster in eta and in phi).
    • Hits (members of the cluster).
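
    Once the clusters are filled into StEvent, these quantities are read back through the usual StEmcCluster accessors. A small sketch (the accessor names below are the standard StEmcCluster ones; verify against StEmcCluster.h in your library version):

    // Print the main properties of a reconstructed StEmcCluster.
    void printCluster(StEmcCluster* c)
    {
        if(!c) return;
        cout << " E = "        << c->energy()    // total energy of the cluster members
             << " eta = "      << c->eta()       // mean eta position
             << " phi = "      << c->phi()       // mean phi position
             << " sigmaEta = " << c->sigmaEta()  // cluster width in eta
             << " sigmaPhi = " << c->sigmaPhi()  // cluster width in phi
             << " nHits = "    << c->nHits()     // number of hits in the cluster
             << endl;
    }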

    Some Plots

    BSMDE clusters for single photon events

    Performance Evaluation

    Note:  This is a rather old study that I found on BEMC public AFS space and ported into Drupal.  I don't know who conducted it or when.  -- A. Kocoloski

    To evaluate the reconstruction performed by the cluster finder and the matching, we determine the efficiency, purity, and energy and position resolution of the reconstructed particles with respect to the particles originally generated in the simulation.

    The cluster finder is currently being evaluated using single-particle events, which favors the evaluation of efficiency and purity. In this case, we can define efficiency and purity for single-particle events as:

    • efficiency - ratio between the number of events with more than 1 cluster and the total number of events.
    • purity - ratio between the number of events with only 1 cluster and the number of events with at least one cluster.

    There are other quantities that could be used for evaluation. They are:

    • energy ratio - ratio between the cluster energy and the geant energy.
    • position resolution - difference between cluster position and particle position.

    All these quantities can be studied as a function of the cluster finder parameters,  mSizeMax, mEnergySeed and mEnergyThresholdAll. The results are summarized below.

    BTOW Clusters

    mEnergyThresholdAll evaluation

    Nothing was done to evaluate this parameter.

    mEnergySeed evaluation

    The purity improves and the efficiency goes down as mEnergySeed increases.  Some figures (the other parameters were kept as small as possible) are shown for different values of mSizeMax for single photons in the middle of a tower with pt = 1, 4, 7 and 10 GeV/c.

    The following plots were generated using energy seeds of 0.1 GeV (left), 0.5 GeV (middle), and 1.5 GeV (right).  Full-size plots are available by clicking on each image:

    Eta difference

    Phi difference


    Number of clusters



    Energy ratio

    mSizeMax evaluation

    Nothing was done to evaluate this parameter.

    BSMD Clusters

    mEnergyThresholdAll evaluation

    Nothing was done to evaluate this parameter.

    mEnergySeed evaluation

    The purity improves and the efficiency goes down as mEnergySeed increases.  Some figures (the other parameters were kept as small as possible) are shown for different values of mSizeMax for single photons in the middle of a tower with pt = 1, 4, 7 and 10 GeV/c.

    The following plots were generated using energy seeds of 0.05 GeV (left), 0.2 GeV (middle), and 0.8 GeV (right).  Full-size plots are available by clicking on each image:

    Eta Difference (BSMDE only)


    Eta difference RMS as a function of photon energy for different seed energies:


    Phi Difference (BSMDP only)

    Phi difference RMS as a function of photon energy for different seed energies:

    Number of clusters for BSMDE:

    Number of clusters for BSMDP:

    BSMDE efficiency and purity as a function of seed energy

    BSMDP efficiency and purity as a function of photon energy for different seed energies

    mSizeMax evaluation

    The efficiency and purity as a function of mSizeMax improve as this parameter increases, but for values greater than 4 there is no difference. Some figures (the other parameters were kept as small as possible) are shown for different values of mSizeMax for single photons in the middle of a tower with pt = 5 GeV/c.

    Point Maker

    After obtaining the clusters for the 4 subdetectors, we need to obtain the information about the incident shower by properly matching the clusters from the different subdetectors.

    It should be mentioned that, because of its better energy resolution and greater depth, the BEMC is used for obtaining the energy of the shower. The SMDs, on the other hand, are used for obtaining the position of the shower because of their better position resolution.

    Currently the preshower detector (PSD) is not included in the matching scheme, so we discuss here the details of the method adopted for matching the other 3 sub detectors.

    The following steps are adopted to obtain proper matching:

    • The clusters obtained from the different sub detectors are sorted to obtain the clusters in a single subsection of SMD phi.
      • Each module of SMD phi consists of 10 subsections. 
        • Each subsection consists of 15 strips along eta, giving phi positions of the clusters for an eta-integrated region of 0.1. 
      • In SMD eta the same subsection region consists of 15 strips along phi, and the same region corresponds to 2 x 2 towers. 
    • In the present scheme of matching, we match the clusters obtained in the 3 sub detectors within each of these SMD phi subsections.
    • There are two levels of matching:
      • SMD eta - SMD phi match
        • The first matching aims to obtain the position of the shower particle in (eta, phi). The signals are integrated over the SMD phi subsection region (i.e. each SMD eta strip adds the signal over 0.1 rad of phi and each SMD phi strip adds the signal over a delta eta = 0.1 region), so we have decided to do the position matching within each SMD phi subsection. For cases with 1 cluster in SMD eta and 1 cluster in SMD phi in the subsection, the (eta, phi) matching is essentially unique; but when the SMD eta/SMD phi subsection contains more than 1 cluster (see figure) the matching is no longer unique. The task is then handled as follows: the number of matched (eta, phi) pairs is taken as the minimum of (number of clusters in SMD eta, number of clusters in SMD phi), and the (eta, phi) assignment is made by choosing the pairs with the minimum of (E1-E2)/(E1+E2), where E1 and E2 are the cluster energies of the SMD eta and SMD phi clusters.
      • Position - energy match
        • Position/energy matching is done only when there is a cluster in the BEMC for the subsection under study. Because of the large dimension of the towers, the clusters obtained in the BEMC are often made up of more than one incident particle. In AuAu data, where the particle density is very large, the definition of tower clusters becomes ambiguous, especially for low-energy particles. In the present scheme, as discussed earlier, we follow the method of peak search for finding clusters, where the maximum cluster size is 4 towers. 4 towers cover the region of one SMD phi subsection, so the energy for the matched (SMD eta, SMD phi) pairs is obtained from the energy of the BEMC cluster in the same subsection. For the cases with 1 cluster in the BEMC, 1 cluster in SMD eta and 1 cluster in SMD phi, we assign the BEMC cluster energy to the (eta, phi) pair. But when the number of matched pairs is more than one, we need to split the BEMC cluster energy. Presently this is done according to the ratio of the (eta, phi) pair energies. This method of splitting has the drawback of relying on SMD energies, whose resolution is worse than the BEMC energy resolution.

    The code for the point maker is found in StRoot/StEpcMaker.

    Usage

    The cluster finder can run in many different chains. The basic modes of running it are:

    1. bfc chain: This is the usual STAR reconstruction chain for real data and simulation.
    2. standard chain: this is the most common way of running the cluster finder. The cluster finder runs with real data and simulated data.
    3. embedding chain

    There are some rules a user needs to follow in order to run the cluster finder properly:

    1. Include the file $STAR/StRoot/StPreEclMaker/EmcClusterAlgorithm.h in order to define the cluster algorithm enumerator
    2. Need to load the shared libraries
    3. StPreEclMaker should run *AFTER* the BEMC hits are created (StADCtoEMaker or StEmcSimulatorMaker)
    4. The cluster algorithm should be defined *BEFORE* Init() method is called
    5. Any change in the cluster parameters should be done *AFTER* Init() is called.

    The following is an example of how to run the cluster finder in a standard chain:

    // include the definitions of the algorithms
    #include "StRoot/StPreEclMaker/EmcClusterAlgorithm.h"

    class StChain;
    StChain *chain=0;

    void DoMicroDst(char* list = "./file.lis",
    int nFiles = 10, int nevents = 2000)
    {
    gROOT->LoadMacro("$STAR/StRoot/StMuDSTMaker/COMMON/macros/loadSharedLibraries.C");
    loadSharedLibraries();
    gSystem->Load("StDbUtilities");
    gSystem->Load("StDbLib");
    gSystem->Load("StDbBroker");
    gSystem->Load("St_db_Maker");
    gSystem->Load("libgeometry_Tables");
    gSystem->Load("StDaqLib");
    gSystem->Load("StEmcRawMaker");
    gSystem->Load("StEmcADCtoEMaker");

    // create chain
    chain = new StChain("StChain");

    // Now we add Makers to the chain...
    StMuDstMaker* maker = new StMuDstMaker(0,0,"",list,"",nFiles);
    StMuDbReader* db = StMuDbReader::instance();
    StMuDst2StEventMaker *m = new StMuDst2StEventMaker();
    St_db_Maker *dbMk = new St_db_Maker("StarDb","MySQL:StarDb");

    StEmcADCtoEMaker *adc=new StEmcADCtoEMaker();
    adc->setPrint(kFALSE);

    StPreEclMaker *ecl=new StPreEclMaker(); // instantiate the maker
    ecl->setAlgorithm(kEmcClDefault); // set the algorithm
    ecl->setPrint(kFALSE); // disables printing

    chain->Init();

    StEmcOldFinder* finder = (StEmcOldFinder*)ecl->finder(); // gets pointer to the finder
    finder->setEnergySeed(1,0.8); // change some setting in the finder

    int n=0;
    int stat=0;
    int count = 1;
    TMemStat memory;

    while ( (stat==0 || stat==1) && n<nevents)
    {
    chain->Clear();
    stat = chain->Make();
    n++;
    }
    chain->Finish();
    }

    Data Format

    BEMC data is always available using the standard StEvent collections.  This is true regardless of whether one is analyzing simulations or real data, or whether the actual input file is in geant.root, event.root, or mudst.root format.  The appropriate BEMC maker (StEmcADCtoEMaker for real data, StEmcSimulatorMaker for simulations) will create an StEvent in memory if necessary and fill the BEMC collections with the correct data.

    Three different types of calorimeter objects are available:

    StEmcRawHit -- a hit for a single detector element (tower, smd strip, or preshower channel)

    StEmcCluster -- a cluster is formed from a collection of hits for a single BEMC subdetector (tower / smd-eta / smd-phi / preshower)

    StEmcPoint -- a point combines StEmcClusters from the different subdetectors.  Typically the SMD clusters are used to determine the position of points, and in the case of e.g. pi0 decay photons they also determine the fraction of the tower cluster energy assigned to each photon.  The absolute energy scale is set by the tower cluster energy.  Current point-making algorithms do not use the preshower information.

    For more information see the StEvent manual.
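
    A short sketch of how these objects are typically reached from StEvent is given below. The calls shown (StEmcCollection::detector(), StEmcDetector::cluster(), StEmcCollection::barrelPoints()) are the standard StEvent ones, but verify them against StEmcCollection.h and StEmcDetector.h for your library version; the detector enumerations for the other subdetectors are kBarrelSmdEtaStripId, kBarrelSmdPhiStripId and kBarrelEmcPreShowerId.

    // Reaching the three BEMC object types from an StEvent (sketch).
    StEmcCollection* emc = event->emcCollection();
    if(emc)
    {
        // raw hits, e.g. for the towers
        StEmcDetector* btow = emc->detector(kBarrelEmcTowerId);

        // clusters for one subdetector, e.g. SMD-eta
        StEmcDetector* smde = emc->detector(kBarrelSmdEtaStripId);
        StEmcClusterCollection* smdeClusters = smde ? smde->cluster() : 0;

        // points combining the subdetectors
        StSPtrVecEmcPoint& points = emc->barrelPoints();
    }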

    Embedding

    Update:  New B/EEMC embedding framework from Wei-Ming Zhang currently in peer review


    The present BEMC embedding works as an afterburner that must be run after the TPC embedding.  Wei-Ming Zhang is working on a version that works in parallel with the TPC embedding.  Here's the current workflow:

    During TPC embedding

    In this step, only real EMC data is processed with all of the TPC embedding.  In the end, there are two files:
    1. .event.root - This file contains the TPC reconstructed tracks (real data + simulation) and BEMC reco information (real data only)
    2. .geant.root - This file contains the simulated tracks (TPC + EMC simulated data)
    Once again, no BEMC embedding is done at this level.  The chain that processes the BEMC real data is the same as the one used for production (StEmcRawMaker, StPreEclMaker and StEpcMaker).

    After TPC embedding

    This is where the BEMC embedding happens.  The idea is to get the output from the TPC embedding (.geant.root and .event.root files) and mix the BEMC simulated data with the real BEMC data.  The mixing is performed by StEmcMixerMaker and the main idea is to add the simulated ADC values to the real event ADC values.  In order to do that we need to follow some simple rules:
    1. We have to simulate the ADC values with the same calibration table used for the real data
    2. We cannot add pedestals or noise to the simulated data.  The real data already include the pedestals and noise.  If we simulate them we will end up overestimating both quantities.
    3. We should embed simulated hits only for the towers with status==1.
    The macro that runs the embedding can be found at StRoot/StEmcMixerMaker/macros/doEmcEmbedEvent.C.  The basic chain that runs in this macro is listed below (a bare-bones macro skeleton follows the list):
    1. StIOMaker - to read the .event.root and .geant.root files
    2. St_db_Maker - to make the interface with the STAR database and get the BEMC tables
    3. StEmcADCtoEMaker - reprocesses the real BEMC data and fills an StEvent object with the calibrated real data.
    4. StEmcPreMixerMaker - this is a simple maker whose only function is to make sure the chain timestamp is set using the real data event time.  This is very important to make sure the tables are correctly retrieved from the database.
    5. StMcEventMaker - builds an StMcEvent object in memory.  This is necessary because the simulator maker needs to get the BEMC simulated data in this format.
    6. StEmcSimulatorMaker - this maker gets the StMcEvent in memory that contains the BEMC simulated hits and does a slow simulation of the detector, generating the ADC values with the correct calibration tables.  At this point we have two StEmcCollections in memory: the one with the real BEMC data from step 3, and the one with the simulated data from the simulator maker.
    7. StEmcMixerMaker - this maker gets the two StEmcCollections and adds the simulated one to the real data.  This is a simple ADC_total = ADC1 + ADC2 for each channel in each detector.  Only simulated hits belonging to channels that were good in the real data are added.
    8. StEmcADCtoEMaker - reconstructs the event once more.  This step gets the new ADC values and recalibrates them.
    9. StPreEclMaker - reconstructs the clusters with the embedded hits.
    10. StEpcMaker - reconstructs the points
    11. StAssociationMaker - does the TPC association
    12. StEmcAssociationMaker - does the BEMC association
    13. Add your own analysis maker
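
    For orientation, the maker ordering above can be written as a bare-bones macro skeleton. Constructor arguments and the StIOMaker file setup are omitted or assumed here (in particular, the distinct maker name given to the second StEmcADCtoEMaker instance is an assumption); doEmcEmbedEvent.C in CVS is the authoritative version.

    // Skeleton of the afterburner chain, in the order listed above (sketch only).
    StChain* chain = new StChain("bemcEmbed");

    StIOMaker*             io       = new StIOMaker();                           // 1. reads the .event.root and .geant.root files (file setup omitted)
    St_db_Maker*           dbMk     = new St_db_Maker("StarDb", "MySQL:StarDb"); // 2. STAR database interface
    StEmcADCtoEMaker*      adcReal  = new StEmcADCtoEMaker();                    // 3. reprocess the real BEMC data
    StEmcPreMixerMaker*    preMixer = new StEmcPreMixerMaker();                  // 4. set the chain timestamp from the real event
    StMcEventMaker*        mcEvent  = new StMcEventMaker();                      // 5. build StMcEvent in memory
    StEmcSimulatorMaker*   emcSim   = new StEmcSimulatorMaker();                 // 6. slow simulation of the BEMC response
    StEmcMixerMaker*       mixer    = new StEmcMixerMaker();                     // 7. ADC_total = ADC_real + ADC_sim
    StEmcADCtoEMaker*      adcMixed = new StEmcADCtoEMaker("EreadMixed");        // 8. recalibrate the mixed ADCs (maker name assumed)
    StPreEclMaker*         preEcl   = new StPreEclMaker();                       // 9. clustering on the embedded hits
    StEpcMaker*            epc      = new StEpcMaker();                          // 10. point reconstruction
    StAssociationMaker*    tpcAssoc = new StAssociationMaker();                  // 11. TPC association
    StEmcAssociationMaker* emcAssoc = new StEmcAssociationMaker();               // 12. BEMC association
    // 13. add your own analysis maker here

    chain->Init();
    // event loop: chain->Clear(); chain->Make(); ...
    chain->Finish();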
    The embedding flow is illustrated in the following diagram:



    StEmcAssociationMaker

    Introduction

    To treat many-particle events (pp and AuAu), we need to know which particle generated which cluster/point before evaluating the reconstructed clusters and points.  StEmcAssociationMaker accomplishes this by matching the many clusters and points in a given event with the respective simulated particles.

    Association Matrix

    In order to associate the clusters/points with the corresponding particles, the Association software creates matrices.  The matrix columns correspond to the reconstructed clusters/points and the rows correspond to the particles.  We have three kinds of matrices:
    • Simple Matrix - matrix elements are 0 or 1.
    • Particle Fraction Matrix - matrix elements are the fraction of the total particle energy in a cluster.
    • Cluster Fraction matrix - matrix elements are the fraction of energy of a cluster that comes from a given particle.

    Figure: example Simple Matrix, Particle Fraction Matrix and Cluster Fraction Matrix.


    A simple example using association matrices is presented below.  In the case of double photon events, we plot the purity of the measured clusters in SMD eta as a function of the photon distance.  The purity is obtained from the "Cluster Fraction" Association Matrix.

    How to Use

    In order to follow the same scheme used in StAssociationMaker, we save the association information in multimaps.  Multimaps make it very easy for the user to get the association information for a given StMcTrack, cluster or point.  Detailed documentation for StAssociationMaker is available at http://www.star.bnl.gov/STAR/comp/pkg/dev/StRoot/StAssociationMaker/doc/.  There are essentially four multimaps defined for the BEMC:
    1. multiEmcTrackCluster - correlates the StMcTrack with StEmcClusters
    2. multiEmcClusterTrack - correlates the StEmcCluster with StMcTracks
    3. multiEmcTrackPoint - correlates the StMcTrack with StEmcPoints
    4. multiEmcPointTrack - correlates the StEmcPoint with StMcTracks
    The following sample code can be used to access one of these multimaps provided StEmcAssociationMaker is available in the chain:
    StEmcAssociationMaker* emcAssoc = (StEmcAssociationMaker*)GetMaker("EmcAssoc");
    if(!emcAssoc) return;
    multiEmcTrackCluster* map = emcAssoc->getTrackClusterMap(1);
    if(!map) return;
    for(multiEmcTrackClusterIter j = map->begin(); j != map->end(); j++)
    {
        StMcTrack* track = (StMcTrack*)(*j).first;
        StEmcClusterAssociation* value = (StEmcClusterAssociation*)(*j).second;
        if(track && value)
        {
            StEmcCluster* c = (StEmcCluster*)value->getCluster();
            if(c)
            {
                cout << " McTrack = " << track << " GeantId = " << track->geantId()
                     << " pt = " << track->pt() << " TReta = " << track->pseudoRapidity()
                     << " Cl = " << c << " E = " << c->energy() << " eta = " << c->eta()
                     << " phi = " << c->phi()
                     << " FrTr = " << value->getFractionTrack()
                     << " FrCl = " << value->getFractionCluster() << endl;
            }
        }
    }
    For a detailed example of how to use the BEMC multimaps please have a look at the StEmcAssociationMaker::printMaps() method.

    Makers

    The BEMC makers available in STAR include:

    StEmcADCtoEMaker - This maker converts plain ADC values into energy. It subtracts pedestals, applies calibrations and creates the StEmcRawHits in StEvent. This maker also checks for corrupted headers. The input data format for this maker can be ANY of the formats below:

    • DAQ format (from DAQ files)
    • StEmcRawData format (StEvent)
    • StEmcCollection (StEvent)
    • StMuEmcCollection (muDST)

    A light version of this maker (StEmcRawMaker) runs during production. StEmcRawMaker uses only DAQ or StEmcRawData format as input.
     
    StPreEclMaker - This maker implements clustering of the BEMC detectors.
     
    StEpcMaker - This maker matches the clusters in the BEMC detectors making what we call an StEmcPoint. It also matches tracks with points, if the input format is StEvent.
     
    StEmcTriggerMaker - This maker simulates the BEMC level 0 trigger response using the plain ADC information from the towers. It works both with real and simulated data and it applies exactly the same trigger algorithm as in the BEMC hardware.
     
    StEmcSimulatorMaker - This is the slow BEMC simulator.  It takes an StMcEvent as input and fills the StEmcRawHit collections of StEvent with simulated ADC responses for all subdetectors.
     
    StEmcCalibrationMaker - This is the maker used for BEMC calibration. It has methods to calculate pedestals for all the BEMC detectors as well as detector equalization, based on spectra shape and MIP analysis. This maker runs online as our online pedestal calculator.
     
    StEmcMixerMaker - This maker is the basic BEMC embedding maker. It mixes hits from two StEvent objects in the memory. The hits from the second StEvent are mixed in the first one.  It can run in a standard BFC embedding chain or as an afterburner (afterburner mode requires an event.root file from non-BEMC embedding plus a geant.root file containing BEMC information for simulated particles to be embedded).

    StEmcTriggerMaker

    Documentation provided by Renee Fatemi

    MOTIVATION:
    1. Apply online trigger algorithm to simulated data
    2. Apply "software trigger" to triggered data
    3. Ensure that the same code is used for case 1. and 2.

    How the Code Works:
    StEmcTriggerMaker is the class that provides the user with access functions to the trigger decisions. The workhorse is the StBemcTrigger class. StEmcTriggerMaker takes an StEvent pointer from either StEmcADCtoEMaker (real data) or StEmcSimulatorMaker (simulated data). The code accesses all offline status/pedestal/calibration values from StBemcTables, which are set in the macro used to run the code (ideal status/gain/ped can also be set). The code uses the StEmcCollection to access the ADC values for all WEST BEMC channels that are not masked out by the status code and performs the FPGA+L0 trigger on the 12-bit ADC and 12-bit pedestal.

    Access Function Examples:
    // returns 1 if the event fulfills the trigger, 0 if it does not, -1 if there were problems
    int is2005HT1() {return mIs2005HT1;}
    int is2005JP1() {return mIs2005JP1;}

    // The ID of the candidate HT (JP)
    int get2005HT1_ID() {return HT1_ID_2005;}
    int get2005JP1_ID() {return JP1_ID_2005;}

    // The DSM ADC of the candidate HT (JP)
    int get2005HT1_ADC() {return HT1_DSM_2005;}
    int get2005JP1_ADC() {return JP1_DSM_2005;}

    // The number of towers(patches) and the id’s which fulfill the trigger
    void get2005HT1_TOWS(int, int*);//array of tow ids passing HT1_2005 trig
    int get2005HT1_NTOWS() {return numHT1_2005;}//# tows passing HT1_2005 trig
    void get2005JP1_PATCHES(int, int*);//array of patches passing JP1_2005
    int get2005JP1_NPATCHES() {return numJP1_2005;}//# patches passing JP1_2005

    These access functions exist for 2003 HT1, 2004 HT1+JP1, 2005 HT1+HT2+JP1+JP2

    CODE EXAMPLE:
    //get trigger info
    trgMaker=(StEmcTriggerMaker*)GetMaker("StEmcTriggerMaker");
    HT1_2005_evt=-1;
    HT1_2005_id=-1;
    HT1_2005_dsm=-1;
    numHT1_2005=0;
    HT1_2005_evt=trgMaker->is2005HT1();
    HT1_2005_id=trgMaker->get2005HT1_ID();
    HT1_2005_dsm=trgMaker->get2005HT1_ADC();
    numHT1_2005=trgMaker->get2005HT1_NTOWS();
    for (int i=0;i<numHT1_2005;i++){
    int towerid=-1;
    trgMaker->get2005HT1_TOWS(i,&towerid);
    HT1_2005_array[i]=towerid;
    }

    COMMENTs for DISCUSSION:
    Conceptually this code was designed from a software/analysis user point of view. Consequently I do not explicitly calculate and store all levels of DSM logic (it is unnecessary). From my understanding this is precisely what the StEmcTriggerDetector class was written to do -- return all DSM levels. It propagates all of the trigger information to the MuDst. BUT the problem is that this class is not propagated to the simulation (as far as I can tell). Even if it were propagated we probably wouldn’t want to take this option because it limits the usefulness of the simulation, whereas StEmcTriggerMaker allows flexibility in applying the trigger algorithm to any simulation set. It is certainly possible to code up the 2006 part of StEmcTriggerMaker to return all BEMC and EEMC trigger patch information at all levels of DSM, but it doesn’t exist in the current code.

    Utilities

    These are not Makers but pieces of code that prove useful for BEMC analyses.  The headings indicate where the code can be found in the STAR source code repository.

    StRoot/StDaqLib

    EMC/StEmcDecoder

    This utility converts the daq indexes for any BEMC detector (crate number, position in the crate, etc.) into software ids (softId, module, eta, sub).

    It also converts the Level-0 BEMC trigger patch ids into the corresponding tower crate number and index in the crate. It is quite useful for determining the tower that fired the trigger.
     
    StRoot/StEmcUtil

    geometry/StEmcGeom

    This class is used to convert a softId into the real eta and phi position in the detector and vice-versa. It also converts a softId into the module, eta and sub indexes that are used in StEvent.

    It has methods to get geometrical information of the BEMC, such as number of channels, radius, etc. This is a very important utility in the BEMC software.
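
    A small sketch of typical usage is below. The instance() and getEtaPhi() calls are the usual StEmcGeom ones, but check StEmcGeom.h for the exact signatures before relying on them.

    // Convert a tower softId into its (eta, phi) position (sketch).
    StEmcGeom* geom = StEmcGeom::instance("bemc"); // tower geometry
    Float_t eta, phi;
    geom->getEtaPhi(100, eta, phi);                // position of softId = 100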
     

    projection/StEmcPosition

    Utility to project tracks in the BEMC detector

    database/StBemcTables

    Utility to load the database tables and get any information from them. It works only in makers running in the chain and has methods to return pedestal, calibration, gain and status values for all the BEMC detectors. To use it, do the following in your maker:

    // in the constructor. Supposing you have a private
    // member StBemcTables *mTables;
    mTables = new StBemcTables();

    // in the InitRun() or Make() method
    // this method loads all the BEMC tables
    // it is necessary to have St_db_Maker running
    mTables->loadTables((StMaker*)this);

    // Getting information
    // for example, getting pedestal and RMS for the
    // BTOW Detector softId = 100
    float ped = mTables->pedestal(BTOW, 100);
    float rms = mTables->pedestalRMS(BTOW, 100);

    database/StBemcTablesWriter

    This class inherits from StBemcTables and is useful for performing all kinds of more advanced BEMC DB operations, including uploading tables if you've got the privileges for it.

    I'm not sure if anyone is actually using the rest of these, nor am I confident that they actually work in DEV.  Feel free to leave some feedback if you have some experience with them. -- Adam

    hadBackground/StEmcEnergy

    Utility to subtract hadronic energy from EMC towers. It uses StEmcHadDE to subtract the hadronic energy in the calorimeter on an event-by-event basis.
     

    hadBackground/StEmcHadDE

    Utility to calculate hadronic energy profile in the BEMC. It calculates the amount of energy deposited by a single hadron as a function of its momentum, position in the detector and distance to the center of the tower.
     

    voltageCalib/VoltCalibrator

    This utility calculates high voltages for the towers for a given absolute gain.
     

    neuralNet/StEmcNeuralNet

    Basic EMC neural network interface.

    filters/StEmcFilter

    Event filter for BEMC (StEvent and StMcEvent only)

    This utility has a collection of tools to do event filtering in the BEMC. Some of the filters available are:

    • basic event filter: vertex, multiplicity, etc
    • tower filters: tower isolation filter, number of tracks in tower, energy cuts, tower energy and associated pt cuts, etc.

    Useful Documents

     A full list of attached documents is available below, but I wanted to highlight a few here:

    BEMC Technical Design Report (PDF)

     

    Run 9 BSMD Online Monitoring Documentation

    For run 9, BSMD performance was monitored by looking at non-zero-suppressed events.  The code is archived at /star/institutions/mit/wleight/archive/bsmdMonitoring2009/.  This blog page will focus on the details of how to run the monitoring and the importance of various pieces of code involved in the monitoring.  The actual monitoring plots produced are discussed here.

    A few general notes to begin:

    Since the code always sat in the same directory, a number of directory paths have been hard-coded.  Please make sure to change these if necessary.

    The code for actually reading in events is directly dependent on the RTS daq-reader structure: therefore it sits in StRoot/RTS/src/RTS_EXAMPLE/.

    All compilation is done using makefiles.

    Brief Description of the Codes:

    This section lists all the pieces of code used and adds a one- or two-sentence description.  Below is a description of the program flow which shows how everything fits together.  Any other code files present are obsolete.

    In the main folder:

    runOnlineBsmdPSQA.py: The central script that runs everything.

    onlBsmdPlotter.cxx and .h: The code that generates the actual monitoring plots.  To add new plots, create a new method and call it from the doQA method.

    makeOnlBsmdPlots.C: Runs the onlBsmdPlotter code.

    GetNewPedestalRun.C: Finds the newest BSMD pedestal run by querying the database.

    onlBsmdMonConstants.h: Contains some useful constants.

    cleanDir.py: Deletes surplus files (postscript files that have been combined into pdfs, for instance).  This script is NOT run by runOnlineBsmdPSQA.py and must be run separately.

    In the folder StRoot/RTS/src/RTS_EXAMPLE/

    bsmdPedMonitor.C: Reads the pedestal file from evp and fills and saves to file the pedestal histograms, as well as generating ps and txt files describing the pedestals.

    makeMapping.C: Creates a histogram that contains the mapping from fiber and rdo to softId and saves it to mapping.root.  Unless the mapping changes or the file mapping.root is lost there is no need to run this macro.

    onlBsmdMonitor.C: Reads BSMD non-zero-suppressed events as they arrive and creates a readBarrelNT object to fill histograms of pedestal-subtracted ADC vs. softId and capId, as well as rates and the number of zero-suppressed events per module.  When the run ends, it saves the histograms to file and quits.

    readBarrelNT.C and .h: This class does the actual work of filling the histograms: used by onlBsmdMonitor.C.

    testBsmdStatus.C: Checks the quality of the most recent pedestal run and generates QA ps files.

    rts_example.C: This is not used but serves as the template for all the daq-reading code.

    Program Flow:

    The central script, runOnlineBsmdPSQA.py, has a number of options when it is started:

    Script Options

    -p: print.  This is a debugging option: it causes the code to print the ADC value for every channel.

    -n: number of events (per run).  Occasionally it is useful to limit the number of (non-zero-suppressed) events the monitoring program looks at for tests.  During actual monitoring, we wished to catch all possible events so the script was always initialized with -n 1000000: the actual number of non-zero-suppressed events in a run is a few hundred at most.

    -t: this option is not used in the current version of the code.

    -r: raw data.  This option was added during testing when online reading of pedestal files had not been implemented: if it is turned on then the code will not attempt to subtract pedestals from the data it reads.

    -v: mpv cut.  If a channel has MPV greater than this value, it is considered to have a bad MPV.  The default is 5.

    -m: mountpoint.  For monitoring, the mountpoint is always set to /evp/.  During tests, it may be useful not to have a mountpoint and read from a file instead, in which case no mountpoint will be given.

    The standard initialization of the script for monitoring is then:

    python runOnlineBsmdPSQA.py -n 1000000 -m /evp/

    Usually the output is then piped to a log file.  For testing purposes, as noted above, the -m option can be left off and the mountpoint replaced with a file name.  (Side note: if neither a mountpoint nor a file name is given, the code will look for the newest file in the directory /ldaphome/onlmon/bsmd2009/daqData/, checking to be sure that the file does not have the same run number as that of the last processed run.  This last option was implemented to allow for continuous processing of daq files but never actually used.)

    The main body of the monitoring script is an infinite loop.  This loop checks first to see what the current BSMD pedestal run is: if it is newer than the one currently used, the script processes it to produce the pedestal histograms that will be used by the data processing code.  This process happens in several steps:

    Pedestal Processing

    Step 1: The script calls the C macro GetNewPedestalRun.C, passing it the time in Unix time format.  The macro then queries the database for all pedestal runs over the last week.  Starting with the most recent, it checks to see if the BSMD was present in the run: if so, the number of that run is written to the file newestPedRunNumber.txt and the macro quits.

    Step 2: The script opens newestPedRunNumber.txt and reads the run number there.  It then checks to see if the pedestals are up-to-date.  If not, it moves to step 3.

    Step 3: The script moves to StRoot/RTS/src/RTS_EXAMPLE/ and calls bsmdPedMonitor, passing it the evp location of the newest pedestal run, /evp/a/runnumber/ (bsmdPedMonitor does take a couple of options but those are never used).  The main function of bsmdPedMonitor is to produce a root file containing three histograms (BSMDE, BSMDP, BPRS) each of which has a pedestal value for each softId-cap combination.  The pedestal monitoring code starts by reading in the BSMD and BPRS status table (StatusTable.txt) and the mapping histogram (from the file mapping.root) which gives the mapping from fiber and rdo to softId.  Then it goes into an infinite loop through the events in the file.  In a pedestal file, the pedestals are stored in the event with token 0: therefore the code rejects events with non-zero token.  Once it has the token-0 event, it loops through the fibers, grabs the pedestal and rms databanks for each fiber, and fills its histograms.  Finally the code generates a text file that lists every pedestal and several ps files containing the pedestals plotted vs softId and cap.

    Step 4: The script calls testPedStatus.C, passing it the root file just generated.  This macro checks for each good channel to make sure that the pedestals and rms just obtained are within certain (hardcoded) limits.  If the number of channels that have pedestal or rms outside of these limits is above a (hardcoded) threshold for a given crate, that crate is marked as bad.  A ps file is generated containing a table in which crates are marked as good or bad with green or red and several more with softId vs cap histograms in which bad pedestals are marked in red.

    Step 5: The postscript files are combined into pdfs (one for the pedestals and one for the statuses), which are, along with the text file, copied to the monitoring webpage.  The index.html file for the monitoring webpage is updated to reflect that there are new pedestals and to link to the files generated from the new pedestal file.  Finally, the variable containing the number of the old pedestal run is changed to that of the current pedestal run.

    The next step is to call onlBsmdMonitor, which reads the events as they arrive.  This code has several options:

    Daq Reading Options:

    -d: allows you to change the built-in rts log level.

    -D: turns on or off printing: corresponds to -p from the script.

    -m:  mountpoint, the same as -m from the script.

    -n: nevents, the same as -n from the script.

    -l: last run, which tells the code what the last run was so that it knows when a new run is starting.

    -r: raw data, same as -r from the script.

    The script thus picks what options to give to onlBsmdMonitor from the options it was given, as well as from the value it has recorded for the last run number (1 if the script is starting up).  Output from onlBsmdMonitor is piped to a log file named currentrun.log.  onlBsmdMonitor also consists of an infinite loop.  As the loop starts, it gets the current event, checks whether we are in a new run, and if so whether the new run includes the BSMD.  If so, it initializes a new instance of the class readBarrelNT, which does the actual processing, and then processes the event.  If instead this is an already-occurring run, it checks to make sure that we have a new event, and if so processes it.  If we are in between runs or the event is not new, the loop simply restarts, in some cases after a slight delay.  The main task of onlBsmdMonitor is to obtain histograms of pedestal-subtracted ADC vs softId and capId for the BPRS, BSMDE, and BSMDP and save them to a root file.  It also histograms the rate at which events are arriving and the number of zero-suppressed events per module.  The root file also contains a status histogram which is obtained from the file StatusTable.txt.  When the run is over, the histograms are written to file and onlBsmdMonitor quits, writing out a line to its log file to indicate its status at the point at which it ends.

    The script now looks at the last line of the log file.  If this line indicates that the run had no BSMD, a new entry in the monitoring webpage's table is created to reflect this fact.  If not, the script first checks to see if we are in a new day: if so, a new line for this day is added to the webpage and a new table is started.  The script then extracts the number of non-zero-suppressed BSMD events, the total number of events in the run, the number of ZS events, and the duration of the run from the last line of the log file.  If the run is shorter than a (hardcoded) length cutoff, has no NZS BSMD events, or has fewer NZS BSMD events than a (hardcoded) threshold, the appropriate new line in the table is created, with each of the values obtained from the log file entered into the table, a link to the runlog added, and the problem with the run noted.  If the run passes all quality checks, makeOnlBsmdPlots is called.  This code generates the actual monitoring plots: the script then takes the resulting ps files, combines them into a pdf, and creates a new entry in the table with all the run information, a link to the pdf, a link to the run log, and a link to the pedestal QA pdf.  The plotting code must be given the mpv cut value and the elapsed time, as well as the name of the root file containing the histograms for the run.  Any new monitoring plots that are desired can be added by adding a method to create them to onlBsmdPlotter.h and onlBsmdPlotter.cxx and calling it in the method doQA.  Having thus completed a run, the loop restarts.

    Some BSMD hardware documents

    Some BSMD hardware documents are attached below.

    BTOF

     Barrel Time-of-Flight Detector



    BTOF Operations (incl. slow controls)

     Barrel Time-of-Flight Operations

    1. TOF Gas: Bottle Switchover Procedure
    2. Slow Controls
    3. On-call Experts
    4. TOF Error Handling




    On-call expert

    Last updated: March 24th 2010 

    How To Clear TOF Errors


     TOF on-call expert for this week:  Yi Zhou (cell: 631-965-6132)

                                                            Frank Geurts (Apartment: 631-344-1042)


     

    Important Notes: 

    The Anaconda window should always be running at the TOF Station!  Do Not Close this window.

    Shift crews must check the Warning and Error sections of the Anaconda window after every new run is begun. Warnings will automatically be cleared when runs are begun. If an Error is not cleared automatically when a new run is begun, see sections 4 and 1 below.

    For all TOF alarms or errors, leave a note in the shift log.  Be detailed so the experts will understand what happened and what you actually did.

    Never click the red 3 button in the Anaconda window or power cycle a tray during a run, only when the run is stopped.

     

    Additional actions: 

    1. Electronics errors and/or bunchid errors
      Stop the run and determine which tray has the issue (there likely will be a yellow LV current alarm). Clear all (other) errors in Anaconda by pressing the red (3) button. The affected tray will most likely turn gray in Anaconda. In Anaconda, turn off "Auto-Refresh" (in "Command" menu). Go to the main voltage control GUI, turn the LV for that tray off, wait 10 seconds, and then turn the LV for that tray on again. Wait 60 seconds. Then click the red (3) button in the Anaconda window. The tray status should turn green. Turn on "Auto-Refresh" again. Start a new run. Make a note in the shift log. If the problem isn't fixed, then call the expert. 

       

    2. Yellow alarm at LV power supply associated with a tray.
      First, check TOF QA plots and look for electronic errors or bunchid with electronic errors. If there are errors, then follow step 1. Otherwise don't call an expert, just make a note in the shift log.

       

    3. Missing tray or half-tray in TOF online plots (specifically Multiplicity plots), follow procedure 1.

       

    4. In the TOF_MON GUI (Anaconda II):
      1. If you see red errors and/or the online plots indicate problems, stop and restart run. If error doesn't get cleared by this, make sure that all of the small LV circles in the main voltage GUI are green. If not, follow procedure 1 above. Otherwise, call the expert.
      2. If you see yellow alarms, no need to call experts, these clear automatically at the beginning of the next run.
      3. If it is a grey alarm, click the "Refresh" button (left of the "1" button) to see if this clears it; otherwise follow procedure 1.

       

    5. TOF readout 100% DAQ errors and one of the RDO 1-4 lights on the TOF DAQ receiver is not responding (LED is black instead of blue or purple). The cure is to stop the run, mark it bad, and start a new run. Nothing else is required. This has worked 100% of the time so far. Make a note in the shift log.

       

    6. Gas alarm on FM2, which is for iso-butane gas.
      When it's cold, the pressure on FM2 will decrease and may cause alarms. In this case, no need to call expert. 

       

    7. If you see the LV off, i.e. the dots are blue (in the normal situation the dots are green, indicating the LVs are on), please call the expert.

       

    8. If TOF HV slow control has lost communication, please leave a note in the shift log and follow the procedure in the troubleshooting section of the HV manual to fix it. Call the expert for help in case you cannot fix the problem.

     

    Slow Controls

    Slow Controls Detector Operator Manuals

     by Bertrand J.H. Biritz (linked version: Jan. 5, 2010)
     

     EPICS interface

     

    HV manual

     

    LV manual


     

     

    BTOF-MTD-ETOF operation instruction documents (Run21 and After)

     

    • The instructions for the control room can be found at STAR operation page:
      • STAR Operation (BTOF/MTD/ETOF) both .doc and .pdf file are attached 
    • Expert Knowledge Manual (Only for Oncall Expert)
    • Note that Anaconda has not been in use since Run 2021; all procedures related to Anaconda can be ignored at this moment unless Anaconda is brought back into operations.

    ===============================================================

    Run 15 TOF+MTD Operations Documents

    Run 15 TOF + MTD Operations Documents

     

    This page is dedicated to hosting documents to aid in the operation of the TOF, MTD, and VPD subsystems.  Below is a list of the documents attached to this page with a brief description of each.  The .docx files are provided so the documents can be updated as needed.

    Canbus_Restart_Procedure.pdf
    The first part of the document describes how to restart the canbus interface and the processes dependent on it (recoveryd and anaconda).
    The second part adds a powercycle of the interface to the procedure.

    Expert_Knowledge_Base.pdf (document formerly known as Etc.pdf)
    A collection of tidbits for TOF+MTD folk.  Not quite intended for det. operators.

    MTD_HVIOC_Restart.pdf
    This guides the detector operator on how to restart an HVIOC for the MTD.
      If you want to get super fancy with restarting IOCs, you might want to check their saved states in the ~/HV/HVCAEN*/iocBoot/save_restore_module/*.sav* and see what the IOC is initializing with.  The same thing applies for the LV (MTD/LV/LVWiener*/iocBoot/save_restore_module/) and TOF.

    MTD_Power_Cycle_Procedure.pdf
    Instructions for the detector operator on performing a LV powercycle.

    Recoveryd_Checking_Restarting.pdf
     This document describes how to check and restart recoveryd.

    Responding_To_DAQ_Powercycle_Requests_Monitoring_And_Clearing_Errors.pdf (Formerly known as General_Instructions)
      This document is used as a guide to respond to DAQ powercycle requests and monitoring and clearing errors given by DAQ.

    TOF_Freon_Bottle_Switchover_Procedure.pdf
     The document describes how to monitor the freon and what to do when it runs out.

    TOF_HVIOC_Restart.pdf 
    To guide the detector operator through a TOF HVIOC restart.

    TOF_Power_Cycle_Procedure.pdf
    Instructions for the detector operator on performing a LV powercycle.

     

    Run 16 TOF+MTD Operations Documents

    This page holds TOF and MTD operations documents used during Run-16. Previous versions and possibly current versions may be found on Run-15's equivalent page. [ie: If you do not see the document attached to the bottom of this page, it has not been updated since last run and may be found on the Run-15 page.]

    Run 16 TOF + MTD Operations Documents

     

    This page is dedicated to hosting documents to aid in the operation of the TOF, MTD, and VPD subsystems.  Below is a list of the documents attached to this page with a brief description of each.  The .docx files are provided so the documents can be updated as needed.

    Canbus_Restart_Procedure.pdf
    The first part of the document describes how to restart the canbus interface and the processes dependent on it (recoveryd and anaconda). (This document has been updated since Run-15)
    The second part adds a powercycle of the interface to the procedure.

    Expert_Knowledge_Base.pdf (This document has been updated since Run-15)
    A collection of tidbits for TOF+MTD folk.  Not quite intended for det. operators.

    MTD_HVIOC_Restart.pdf
    This guides the detector operator on how to restart an HVIOC for the MTD.
      If you want to get super fancy with restarting IOCs, you might want to check their saved states in the ~/HV/HVCAEN*/iocBoot/save_restore_module/*.sav* and see what the IOC is initializing with.  The same thing applies for the LV (MTD/LV/LVWiener*/iocBoot/save_restore_module/) and TOF.

    MTD_Power_Cycle_Procedure.pdf
    Instructions for the detector operator on performing a LV powercycle.

    MTD_LVIOC_Restart.pdf  (This document is new since Run-15)
    To guide the detector operator through a MTD LVIOC restart.

    Recoveryd_Checking_Restarting.pdf
     This document describes how to check and restart recoveryd.

    General_Instructions.pdf (This document has been updated since Run-15) (The document formerly known as "The document Formerly known as General_Instructions" or Responding_To_DAQ_Powercycle_Requests_Monitoring_And_Clearing)
      This document is used as a guide to respond to DAQ powercycle requests and monitoring and clearing errors given by DAQ.

    TOF_Freon_Bottle_Switchover_Procedure.pdf
     The document describes how to monitor the freon and what to do when it runs out.

    TOF_HVIOC_Restart.pdf  (This document has been updated since Run-15)
    To guide the detector operator through a TOF HVIOC restart.

    TOF_LVIOC_Restart.pdf  (This document is new since Run-15)
    To guide the detector operator through a TOF LVIOC restart.

    TOF_Power_Cycle_Procedure.pdf
    Instructions for the detector operator on performing a LV powercycle.

     

    Run19 TOF and MTD operation instruction documents

     This page summarizes all the documents for the BTOF and MTD operation.

    Run20 TOF and MTD operation instruction documents

    This page summarizes all the documents for the BTOF and MTD operation during Run 20.

    Attached files:

    • version 1/6/20 by Zaochen Ye

    Known issues:

    TOF Error Handling

    Last updated: March 23rd 2010

    TOF Error Handling


     TOF on-call expert for this week: Bill Llope (Apartment: 631-344-1042, Cell: 713-256-4671)


     

    The Anaconda window should always be running on the TOF work station. It keeps a log of important operating conditions and automatically clears most types of electronics errors at the beginning of each run.

    The most important indicators of the TOF system's "health" are the online plots. They should be checked after the beginning of each run.

    Shift crews should check the Warning and Error sections of the Anaconda window to see that Warnings and Errors are being cleared automatically at the beginning of each run.

    Please make a note of all TOF alarms or any error not automatically cleared by Anaconda in the shift log. Please include enough information so the experts can understand what happened and what you actually did.

    Never click the red 3 button in the Anaconda window or power cycle a tray during a run, only when the run is stopped.

     

    Additional actions: 

    1. Electronics errors and/or bunchid errors
      Stop the run and determine which tray has the issue (there likely will be a yellow LV current alarm). Clear all (other) errors in Anaconda by pressing the red (3) button. The affected tray will most likely turn gray in Anaconda. In Anaconda, turn off "Auto-Refresh" (in "Command" menu). Go to the main voltage control GUI, turn the LV for that tray off, wait 10 seconds, and then turn the LV for that tray on again. Wait 60 seconds. Then click the red (3) button in the Anaconda window. The tray status should turn green. Turn on "Auto-Refresh" again. Start a new run. Make a note in the shift log. If the problem isn't fixed, then call the expert. 

       

    2. Yellow alarm at LV power supply associated with a tray.
      First, check TOF QA plots and look for electronic errors or bunchid with electronic errors. If there are errors, then follow step 1. Otherwise don't call an expert, just make a note in the shift log.

       

    3. Missing tray or half-tray in TOF online plots (specifically Multiplicity plots), follow procedure 1.

       

    4. In the TOF_MON GUI (Anaconda II):
      1. If you see red errors and/or the online plots indicate problems, stop and restart run. If error doesn't get cleared by this, make sure that all of the small LV circles in the main voltage GUI are green. If not, follow procedure 1 above. Otherwise, call the expert.
      2. If you see yellow alarms, no need to call experts, these clear automatically at the beginning of the next run.
      3. If it is a grey alarm, click the "Refresh" button (left of the "1" button) to see if this clears it; otherwise follow procedure 1.

       

    5. TOF readout 100% DAQ errors and one of the RDO 1-4 lights on the TOF DAQ receiver is not responding (LED is black instead of blue or purple). The cure is to stop the run, mark it bad, and start a new run. Nothing else is required. This has worked 100% of the time so far. Make a note in the shift log.

       

    6. Gas alarm on FM2, which is for iso-butane gas.
      When it's cold, the pressure on FM2 will decrease and may cause alarms. In this case, no need to call expert. 

       

    7. If you see the LV off, i.e. the dots are blue (in the normal situation the dots are green, indicating the LVs are on), please call the expert.

       

    8. If TOF HV slow control has lost communication, please leave a note in the shift log and follow the procedure in the troubleshooting section of the HV manual to fix it. Call the expert for help in case you cannot fix the problem.

     

    TOF On-call expert

    Last updated: February 8th 2010

     

    TOF on-call expert for this week: Lijuan Ruan, Yi Zhou, Geary Eppley and Jo Schambach

     

    Expert information is also below:

    Today through Thursday (Feb 09-Feb 11th):
    Feb 12th 0:30-7:30 shift is included.
    Lijuan Ruan: 510-637-8691 (cell)
    I will leave for APS meeting on Friday morning.

    Friday-Saturday (Feb 12th - 13th):
    Feb 14th 0:30-7:30 shift is included.
    Yi Zhou: 631-344-1014 (apt #), 631-344-7635 (office).

    Sunday-Monday (Feb 14th-15th):
    Feb 16th 0:30-7:30 shift is included.
    Geary Eppley: 713-628-2738 (cell)

    Jo will arrive on the night of Feb 15 th, so for Feb 16th shift, please call
    Jo Schambach: 631-344-1042 (Apt #) for help.

     

    For all TOF alarms, leave a note in the shift log. Additional actions:

     

    1. Gas alarm on FM2, which is for iso-butane gas.
      When it's cold, the pressure on FM2 changes; this leads to the isobutane flow dropping and thus gives us alarms. In this case, there is no need to call an expert. We expect the isobutane to run out sometime in February.

       

    2. Temperature alarms on the LV powersupply, no need to call expert.

       

    3. Electronic errors or bunchid errors with electronic errors
      Stop the run and determine which tray has the issue (there might be a yellow LV current alarm). Turn the LV off, wait 10 seconds and then turn the LV on again. After waiting 10 seconds click the red 3 button in the Anaconda window, the tray status should turn green. Start a new run - if the problem isn't fixed then call the expert.

       

    4. Yellow alarm at LV power supply associated with a tray.
      First, check TOF QA plots and look for electronic errors or bunchid with electronic errors. If there are, then follow step 3; otherwise don't call, just make a note in the shift log.

       

    5. Missing tray on online plots, call expert.

       

    6. If you see the LV off, i.e. the dots are blue (in the normal situation the dots are green, indicating the LVs are on), please call the expert.

       

    7. In the TOF_MON GUI:
      1. If you see red errors, stop and restart run. If error doesn't get cleared by this, call expert.
      2. If you see yellow alarms, no need to call experts.
      3. If it is grey alarm, call the expert.

       

    8. If TOF HV slow control has lost communication, please leave a note in the shift log and follow the procedure in the troubleshooting section of the HV manual to fix it. Call the expert for help in case you cannot fix the problem.

    TOF noise rate pattern

    BTOF and MTD noise patterns are determined from dedicated TOF noise runs. Such runs are typically executed once per day (time permitting) during each RHIC run, at times when there is no beam stored in the rings. With a dedicated trigger, 4M events are collected with both the BTOF and MTD systems included and their HV switched on.

    The results are archived at this location: https://www4.rcf.bnl.gov/~geurts/noise/

    Input data:

    • TRG Setup: pedAsPhys_tcd_only
    • Number of events: 4M
    • Detectors: tof {mtd, etof} and trg
    • RHIC status: no beam
    • Detector status: HV fully ramped up

    recent example in STAR Runlog: Run 24151017

    Ideally, the shift crew notes in the Shift Log that these runs were successfully executed TOF noise runs. Make sure to check the Shift Log and/or Slow Controls to verify.

    Potential error modes:

    • HV was not ramped up (check slow controls status)
    • Beam was still in RHIC or beam injection happened (check shift log or RHIC beam current info in slow controls)
    • Not all relevant detectors have been included (check list of detectors for this run in Run Log)




    BTOF Hardware

     BTOF Hardware & Electronics

    • Bad Tray History (Aug. 20, 2021)
    • Tray 102 has a bad POS HV cable. It is behind the TPC support and cannot be repaired.

      Run 10 start  58

            end    10
      Run 11 start  25
            end    25
      Run 12 start  95
            end    95
      Run 13 start  46 106
            end    46 106
      Run 14 start  20  22  35 103
            end    20  22  35  78 103
      Run 15 start  38  41
            end    38  41
      Run 16 start 105
            end   105
      Run 17 start  36  45  80
            end    36  45  80  91 116
      Run 18 start   8  43 116
            end     2   8  40  43 116
      Run 19 start  30  34             (water leak on THUB NW trays 21-50)
            end    22  25  27  30  34  67  97
      Run 20 start  22  25  67
            end     8  22  25  48  64  67  88 101 118 119
      Run 21 start   8  22  48  64  67  88 101 118 119
            end     8  22  48  49  64  67  88 101 118 119

      pp500/510 history:

      Run  9  10 pb-1
         11  37 pb-1
         12  82 pb-1
         13 300 pb-1
         17 320 pb-1
         22 400 pb-1 (BUR)

      Current bad trays:

       8 - 8:0 bunch id error, 8:1 missing
      22 - no canbus
      48 - 48:0,1 missing
      49 - no canbus
      64 - TDIG 59 no response
      67 - may be ok now
      88 - 88:0 missing
      101 - 101:0 missing, 101:1 bunch id error
      118 - bad LV connection, seems ok now
      119 - 119:0 HPTDC 0x6 error

    • BTOF Electronics
    • Low Voltage Power Supplies:
      • Wiener PL512 Technical Manual
      • default SNMP community names for all TOF and MTD power supplies: {tof-pub, tof-priv, admin, guru}:

        [geurts@tofcontrol ~]$ snmpwalk -v 2c -m +WIENER-CRATE-MIB -c guru tof-lvps1 snmpCommunityName
        WIENER-CRATE-MIB::snmpCommunityName.public = STRING: "tof-pub"
        WIENER-CRATE-MIB::snmpCommunityName.private = STRING: "tof-priv"
        WIENER-CRATE-MIB::snmpCommunityName.admin = STRING: "admin"
        WIENER-CRATE-MIB::snmpCommunityName.guru = STRING: "guru"

      • power supply hostnames: see Tray Mapping documents
    • Network Power Switches:


    TOF HV Hardware

    This page is intended to host information about TOF's high voltage equipment.

    A1534s (HV Supply Boards)

    The A1534 is the HV supply board that is inserted into our SY1527 (MTD: SY4527).

    Currently used boards (03/01/2016) for TOF + MTD: serial number and firmware version

    TOF(counting from 0):
    slot 1(-): 54, 3.01
    slot 3(+): 56, 2.02
    slot 5(-): 55, 2.02
    slot 7(+): 57, 2.02

    MTD(counting from 0):
    slot 1(-): 66, 04.
    slot 3(+): 69, 3.01
    slot 5(-): 71, 04.
    slot 7(+): 61, 03.
    slot 9(-): 59, 3.01

    Spare(-): serial number 58

    Bad boards:
    #59(Neg) - channel 3 bad (ch. 0-5) [occurred ~2/5/16 in SY4527 while in slot 1, was replaced with #66 on 2/17/16, then reinstalled in slot 9]

    #66(Neg) - channel 3 bad (ch. 0-5) [occurred ~6/16/15 in SY1527LC spare while in slot 5, then installed and bl30,1 no data but reading back values when in slot 1 in 02/17/16]
    -Email thread MTD BL 15-17 HV on MTDops 6/22-6/26/15.
    -Not sure if this board was repaired between run15+16 and then failed again.

    Spare:
    #58(Neg) - possible bad channel 3 (ch. 0-5). I think this was swapped in for #66 ~6/16/15 (spare SY1527LC) and out for #71 ~2/02/16 (SY4527). Would have been in slot 5.
    Unaware of the positive spares available.

    BTW: reoccurring issues on MTD BL15-17 from at least 1/17/2015 [slot 5 channel 3] and 5/11/2014, 1/31/2014...

    Also some board repair history:

    Forum: TOF sub-system forum
    Re: HV boards in need for repair (Bertrand H.J. Biritz)
    Date: 2010, Jul 27
    From: W.J. Llope

    Hi Bertrand, since there are so many bad boards now - Geary and I agree that we
    should probably send them all in together a.s.a.p...

    thanks, cheers,
    bill

    On Jul 26, 2010, at 7:34 PM, Bertrand H.J. Biritz wrote:

    > Hi All,
    >
    > Just wanted to open the discussion on how to proceed with the HV board repairs.
    >
    > In total there are three boards in need of repair, with board 54 already having been sent in:
    >
    > Board 54: Channel 0 is arching [In use and fixed for TOF. Would explain the updated firmware.-joey ]
    >
    > Board 58: Channel 0 has ~130V offset and channel 3 no longer works [Could be a reoccurring issue with channel 3...?or explain why it failed, if it was the one swapped above-joey]
    >
    > Board 56: Channel 5 no longer works [In use on TOF--repaired.-joey]
    >
    > Board 58 is powering the East negative side of TOF (and MTD) while board 56 is for the East positive side of TOF (and MTD). We no longer have enough spares to make up for the lose of these boards.
    >
    > During last weeks TOF phone conference Geary said board 58 should be sent in for repair once the last noise rate data run was taken.
    > Does the fact that an additional board is in need of repair change this and should board 56 be sent in at the same time as well?
    >
    > CAEN will be entering their three week summer holiday beginning of August. My unfounded guesstimation for getting the boards back is end of September at the earliest.
    >
    > Bertrand
    >
    >
    > -------------------------------------------------------------
    > Visit this STAR HyperNews message (to reply or unsubscribe) at:
    > http://www.star.bnl.gov/HyperNews-star/get/startof/2362.html
    >

    TOF NPS Maps

    This page hosts the mapping of TOF NPSs.


    MTD NPSs page can be found here: https://drupal.star.bnl.gov/STAR/subsys/mtd/mtd-operation/mtd-nps-maps

    Version control:
    • list updated: February 4, 2021
    • includes MTD and eTOF NPS mappings

    Format:
    NPS [login protocol] (comment)
    plug: name (comment) 

    tofrnps[ssh] 
    sample instruction:
    apc>olStatus all
    E000: Success
    
    1: MTD THUB FANS
    2: TOCK
    3: THUB_LVPS PL512 LV supply, also powers VPD
    4: ZDC_TCIM
    5: unused
    6: unused
    7: unused
    8: unused

    tofpnps[ssh]
    A1: CAEN_HV
    A2: UPS (tofcontrol & systec?)
    A3: Start-box fans (VPD)
    A4: TOF_THUB_fans
    B1: tofcontrol_pc
    B2: can_if_1-8
    B3: can_if_9-16
    B4: unused 

    tofnps1[telnet] (LV for west trays)
    1: trays_1-12_W1
    2: trays_14-24_W2
    3: trays_25-36_W3
    4: trays_37-48_W4
    5: trays_49-60_W5
    6: empty
    7: empty
    8: empty 

    tofnps2[telnet] (LV for east trays)
    1: trays_61-72_E1
    2: trays_73-84_E2
    3: trays_85-96_E3
    4: trays_97-108_E4
    5: trays_109-120_E5
    6: empty
    7: empty
    8: stp_conc_1 (cf. trigger group)
    

    mtdnps[telnet]
    1: MTD LV 1: 25-6
    2: MTD LV 2: BL 7-13, 24
    3: MTD LV 3: BL 14-22
    4: MTD-HV (CAEN)
    

    etof-nps [telnet]
    Plug | Name             | Password    | Status | Boot/Seq. Delay | Default |
    -----+------------------+-------------+--------+-----------------+---------+
     1   | _12V_LVPS-A      | (defined)   |   ON   |     0.5 Secs    |   ON    |
     2   | _12V_LVPS-B      | (defined)   |   ON   |     0.5 Secs    |   ON    |
     3   | mTCA-P13-Backup  | (defined)   |   ON   |     0.5 Secs    |   ON    |
     4   | mTCA-P11-Primary | (defined)   |   ON   |     0.5 Secs    |   ON    |
     5   | mTCA-P12-Primary | (defined)   |   ON   |     0.5 Secs    |   ON    |
     6   | mTCA-P14-Backup  | (defined)   |   ON   |     0.5 Secs    |   ON    |
     7   | _12V_LV-RKP-Chas | (defined)   |   ON   |     15 Secs     |   ON    |
     8   | RaspPi-LV-Cntrl  | (defined)   |   ON   |     15 Secs     |   ON    |
    -----+------------------+-------------+--------+-----------------+---------+
    

    BTOF Calibrations

    Barrel Time-of-Flight Detector Calibrations Overview

     

    TOF Calibration Requirements


     

    Links to several older calibration details with historical data


    Run 8 - 10 time resolutions
    Run 8 - 10 BTOF time resolutions


    TOF MRPC Prototype resolutions

     TOF MRPC prototype time resolutions

    Run 10 - 200 GeV Calibrations

    Run 10 Calibrations

     

    Run 10 200 GeV FF


    [This is a test page for now.  Soon to be edited.]

     

    The calibration began with ntuples generated from a preproduction:

      /star/data57/reco/AuAu200_production/FullField/P10ih_calib1/2010/*/*
    
        log files on /star/rcf/prodlog/P10ih_calib1/log/daq
    

    The ntuples can be found:

    28 files (61810 events):
    /star/data05/scratch/rderradi/run10/TofCalib/ntuples/200GeV_1/out/
    
    316 files (2398210 events):
    /star/data05/scratch/rderradi/run10/TofCalib/ntuples/200GeV_2/out/
    
    77 files (497944 events) - not properly closed:
    /star/data05/scratch/rderradi/run10/TofCalib/ntuples/200GeV_3/out/

    The 77 files not properly closed were merged with hadd to properly close them.

    Then the ntuples were used for the start side(vpd) calibration.  The code package used for this is attached and can be found here:

    http://drupal.star.bnl.gov/STAR/system/files/startside.tar_.gz

    To run the start side calibration: ./doCalib filelist traynumber

    The tray number for the start side can be any tray number; I recommend 0.   The start side calibration used a 20% outlier cut.  This can be adjusted in doCalib.cpp by changing the value pCut = 0.20.

    The outlier cut removes the slowest 20% of hits, i.e. those with the highest leading-edge times.
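    For illustration, a minimal stand-alone sketch of such a cut (the function and container names are illustrative, not the actual doCalib.cpp code):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Illustrative only: keep the earliest (1 - pCut) fraction of hits,
    // dropping the pCut fraction with the highest leading-edge times.
    std::vector<double> applyOutlierCut(std::vector<double> leTimes, double pCut = 0.20)
    {
      std::sort(leTimes.begin(), leTimes.end());                 // earliest first
      std::size_t nKeep = static_cast<std::size_t>(leTimes.size() * (1.0 - pCut));
      leTimes.resize(nKeep);                                     // drop the slowest 20%
      return leTimes;
    }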

     Produced from the start side calibration are pvpdCali.dat and pvpdCali.root.  These files are used to shift the vpdvz offset between the east and west VPD and, ultimately, to produce the VPD calibration parameters that are loaded into the database.  To determine the offset, perform a Gaussian fit to VzCorr2->ProjectionY() and take the mean value.  This is the VzOffset.   Apply it in the convertVpd.C macro (http://drupal.star.bnl.gov/STAR/system/files/convertVpd.tar_.gz) on the line: 2454.21696153-VzOffset.  This macro generates pvpdCali_4DB.dat.  These are the parameters loaded into the database.
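    A sketch of the offset extraction described above, assuming VzCorr2 is a 2-D histogram stored in pvpdCali.root (check the actual file for the histogram type and axis convention):

    // ROOT macro sketch: extract VzOffset from the start-side calibration output.
    void getVzOffset(const char* fname = "pvpdCali.root")
    {
      TFile* f    = TFile::Open(fname);
      TH2*   h2   = (TH2*)f->Get("VzCorr2");
      TH1D*  proj = h2->ProjectionY("vzProj");

      proj->Fit("gaus");                                              // Gaussian fit
      double vzOffset = proj->GetFunction("gaus")->GetParameter(1);   // fit mean

      printf("VzOffset = %f\n", vzOffset);
      // This value is then used in convertVpd.C (2454.21696153 - VzOffset).
    }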

    Now apply the shift to pvpdCali.root so that T0's are properly corrected for the stop side(TOF) calibration.   This is done with the code in this package:

    http://drupal.star.bnl.gov/STAR/system/files/shiftedT0s.tar_.gz

    To run the code: use ./doCalib filelist traynumber .  The traynumber does not matter, use 0 again.  Be sure this is the same file list as before and that the generated pvpdCali.root and pvpdCali_4DB.dat files are in the same directory.  This way the VzOffset is applied.  An updated pvpdCali.root is produced.

     Typically, the pvpdCali.root is then used in the stop side calibration.  But because we are doing a cell-based calibration, we wanted to increase statistics.  This increased the IO from disk and delayed the calibration, since data for all trays is cycled over regardless of which tray is being calibrated.  To get around this, we used a splitting macro that reads in the ntuples and pvpdCali.root and splits the ntuples into TDIG-based files with the start side information stored with them.  The splitting macro is attached to this page here:

    http://drupal.star.bnl.gov/STAR/system/files/Splitter.tar_.gz
     

    The board based ntuples are then used to calibrate on a cell level for each board.  This is done with this code package:  

    http://drupal.star.bnl.gov/STAR/system/files/stopside.tar__0.gz

     

    To run it: ./doCalib filelist traynumber boardnumber.  Here tray number is 0-119 and board number is 0-959.  It is important that the proper board is selected from the given tray number.  For example tray 0 has boards 0-7, tray 1 has boards 8-15 and so on.
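    The tray-to-board correspondence is simple arithmetic (trivial helpers, assuming 8 TDIG boards per tray as described above):

    // Assuming 8 boards per tray: tray 0 -> boards 0-7, tray 1 -> boards 8-15, ...
    int firstBoardOfTray(int tray)            { return tray * 8; }
    int boardNumber(int tray, int localBoard) { return tray * 8 + localBoard; }  // localBoard = 0..7
    int trayOfBoard(int board)                { return board / 8; }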

    Produced among other files are tray_$tray#_board_$board#_$iteration#.dat files.  These files are added together for all trays to create the parameter files used for the database.  To add them together, the macros addZtdig.C, addT0Ttdig.C, and addt0tig.C were used (http://drupal.star.bnl.gov/STAR/system/files/addstopsidemacros.tar_.gz), generating zCali_4DB.dat, totCali_4DB.dat, and t0_4DB.dat respectively.

    To check the produced calibration parameters, use a check macro that reads in matchmaker-produced ntuples and applies the parameters (http://drupal.star.bnl.gov/STAR/system/files/simpleQAmacro.tar_.gz).  The result is a .root file that can be used for QA checking.  The macro needs to be edited to find the proper parameters and the proper name for an output file. It works with './checknew filelist_location'.    In addition, there are other methods for QA checking.

    The calibration was performed over:

    Run #        events
    11004071     623835
    11009017     497717
    11015019     499030
    11020023     500001
    11026034     500001
    11035026     699977

    total: 3320561 events

    ---Issues that came up during the calibration procedure---

    First off is the outlier rejection.  Not inserting the outlier rejection in all stages of the start side calibration caused issues.  This was seen when shifting the T0s for the stop side calibration and in comparing different QA checking macros, and it can lead to 40 ps differences in the time.  Also, there is a slight difference in the way the outlier is handled in the calibration procedure versus the vpdcalibmaker.  This difference in T0s averages out over events to be on the order of 2-3 ps. We kept this difference in place for 200 GeV, because this method was used in the 39 GeV production earlier in the year.

       Other differences include the event selection criteria.  We needed to replace dcaZ with vertexZ, since in heavy-ion data the primary vertex is filled and more reliable (dcaZ was used in pp, where the primary vertex was not always filled); likewise dcaX and dcaY were replaced with vertexX and vertexY respectively.  We also dropped the if(sqrt(pow(tofr->vertexX,2)+pow(tofr->vertexY,2))>rCut) cut, because once a primary vertex is selected via vertexZ this requirement has effectively already been applied.

     

    Another minor issue was in the QA macro calculating the vpdMean.  It turned out to be handled incorrectly, but it is okay now.  Originally it looked like this (bad structure): Old vpdMean

    And with the fix, it became:

    fixed vpdMean




     

    Run 10 200GeV RFF


    Place holder.  Two samples used.

    Run 11 - 19 GeV Calibration

    Place holder.  Board+cell based calibration.

    Run 11 - 62 GeV Calibration

    Place holder.  40ns.

    Run 17 calibrations

     Run 17 Calibrations

    Run 18 Calibrations

    Isobar Calibrations


    27 GeV Calibrations

    First Half (days up to 141)

    Second Half (days 142-168)

    • VPD calibration QA
    • BTOF alignment
    • BTOF T0
    • Comments:
      • BTOF slewing and Local-Z re-used from past full calibration (Run16)
      • calalgo=1 (startless mode, VPD is not used for the BTOF start time)

    Fixed Target Calibrations 




    Run 19 Calibrations

     Run 19 Calibrations

    19.6 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA
    • Comments:
      • BTOF slewing and Local-Z re-used from past full calibration (Run16)
      • calalgo=1 (startless mode, VPD is not used for the BTOF start time)

    Fixed Target Calibrations 

    200 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA

    Run 20 Calibrations

     Run 20 Calibrations

    The BTOF time resolution in startless mode is around 0.06 ns; in VPD-start mode it is 0.14 ns (11.5 GeV) and 0.16 ns (9.2 GeV).
    11.5 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA

    9.2 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA

    Run 21 Calibrations

     

    Run21 Calibrations

    AuAu 7.7 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA

    OO 200GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA
    • charged particle mass splitting for FF data

    AuAu 17.3 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA

    dAu 200 GeV Calibrations

    • VPD Calibration QA
    • BTOF alignment 
      • xqa
      • yqa
    • BTOF T0
      • SummaryQA
    • asymmetry in VpdVz-TPCVz plot

    AuAu FXT Calibrations (3.85GeV, 26.5GeV, 44.5GeV, 70GeV, 100GeV)

    • Using Run21 17.3GeV parameter tables
    • BTOF T0
      • SummaryQA


    TOF/VPD Calibration How-To's

     

    Trigger Window Determination

    Here is documentation on how to determine the trigger windows for TOF's 120 trays and the 2 start detectors.

    I determined the trigger windows on run 13037090.  To do this, I used StRoot.tar.gz and extracted it to my work directory.  Then changed directory to StRoot/RTS/src/RTS_EXAMPLE and ran Rts_Example.sh.  This creates the executable rts_example from rts_example.C.  Both Rts_Example.sh and rts_example.C have been customized from the typical StRoot version to translate information given in a .daq file into a useful .root file.  To run rts_example, I use run_test.csh.  This is essentially:

    starver SL11e

    rts_example -o tof_13037090.root -D tof inputfilepath/st_physics_13037090_raw_1010001.daq

    where tof_run#.root is your output file and .daq is your input file.

    Then the trigger windows are determined from the .root file by running over it with plot_trgwindow.C.

    To do this:

    root

    .x plot_trgwindow.C("run#");

    The run number is important because this is how the script knows to read in tof_run#.root.

    Produced is a .dat file that lists the high and low limits for each tray's trigger window, and a postscript that shows the new limits in blue for each tray and, if defined inside plot_trgwindow.C, red lines for the old trigger window limits (testrun11.dat in this example).

    Attached to this page is the StRoot.tar.gz package, run_test.csh, plot_trgwindow.C, old trigger window limits, new trigger window limits, and a postscript displaying the limits on top of the data.  (I put them in zip containers due to the file attachment restrictions.)

    VPD Slewing corrections for bbq & mxq

     

    1. Acquiring VPD gain setting data

    At the sc5.starp station, there are two GUIs on the upVPD monitor:

    "large GUI" shows all the channel values, and where one powers them on/off...
    "small GUI" selects between different gain sets A, B, C, & default...

    start with the VPD HV **off**

    once there is a decent non-VPD-based trigger and stable beam:

    1. on small GUI, click "! upVPD HV A" button
    2. on large GUI, click in lower left corner to power on upVPD HV
    3. wait until all channels are "green" (~20 sec).
    4. make sure the values (upper left corner of large GUI,
         leftmost column) are as expected
    5. wait ~1min for PMTs to settle (or take some other test run)
    6. start run: TRG+DAQ+TOF only, 300k events.
         use string "upVPD HV A" in the shiftlog entry
    7. power off HV (large GUI lower left corner)
    8. wait until channels reach 0 (~20 sec)

    9. on small GUI, click "! upVPD HV B" button
    10. on large GUI, click in lower left corner to power on upVPD HV
    11. wait until all channels are "green" (~20 sec).
    12. make sure the values (upper left corner of large GUI,
         leftmost column) are as expected
    13. wait ~1min for PMTs to settle (or take some other test run)
    14. start run: TRG+DAQ+TOF only, 300k events.
         use string "upVPD HV B" in the shiftlog entry
    15. power off HV (large GUI lower left corner)

    16. on small GUI, click "! upVPD HV C" button
    17. on large GUI, click in lower left corner to power on upVPD HV
    18. wait until all channels are "green" (~20 sec).
    19. make sure the values (upper left corner of large GUI,
         leftmost column) are as expected
    20. wait ~1min for PMTs to settle (or take some other test run)
    21. start run: TRG+DAQ+TOF only, 300k events.
         use string "upVPD HV C" in the shiftlog entry
    22. power off HV (large GUI lower left corner)

    23. on small GUI, click "! DEFAULT" button.
    24. power on the VPD HV. 

    Do not use the small GUI anymore. In fact, feel free to close it!

    At this point, I will get the data from HPSS, calculate the new gains, and then upload them to sc5. 

    make sure the shift log says which gain set (A, B, or C) was used for a given run 

    2. Calculate VPD Gains


    1. Get DAQ files containing gain fit runs
    Copied Run 160390{19, 22, 23} over from hpss :
    hpss_user.pl /home/starsink/raw/daq/2015/039/16039019/st_physics_16039019_raw_0000001.daq   /star/data03/daq/jdb/pp200/vpdGainFit/
     
     
    2. Running DaqDoer:
    ./daqdoer /star/data03/daq/jdb/pp200/vpdGainFit/st_physics_16039019_raw_0000001.daq
    Then:
    What kind of Data?
    0=beam,  save trigger detector info, no trees...
    1=noise, save TOF/MTD hits trees, no coarse counter cut...
    2=beam,  save **MTD** hits trees, w/ coarse counter cut...
    3=beam,  no coarse counter cut for trigger time plots (TOF&MTD)...

    0->online->gainfit, 1->noise, 2->mtdview (_mtd), 3->thub (_tof)
    0 >> enter
    Then:
    Enter run string for output file name
    (16039019) Run# >> enter
     
    let it work and it will produce a file daqdoer_***.root
    - Rename it to be daqdoer_run#.root if not already
     
    3. Run the Online Plot Maker
    - cd into online working dir
    - move daq files from daqdoer into dd/
    - ensure that daq files have name daqdoer_run#.root
    - run make to ensure online is fully built
    - Run online util with "-r run#" :
    ./online -r 16039019
    and output will be something like
    ..... Main ... krun = 16039019
    ..... online::online kRunUse = 16039019
    Error in <TTree::SetBranchAddress>: unknown branch -> p2p_sin
    ..... online::loop Opening online_16039019.root
    0x866d270
    ..... online::loop Nentries = 300000
    1 Processing 0  run 16039019 16039019
    ...
    and it will produce 3 files 
    1. online_run#.root
    2. online_run#.ps (example attached)
     
    3. online_run#.pdf
     
    Repeat this on the files from all 3 gain runs { A, B, C}
     
    4. Running gainfit.C Macro
    1. Copy from sc5.starp.bnl.gov:/home/sysuser/llope/VPD/set{A, B, C}.save to working dir
    2. Copy the online_run#*.root to working dir
    3. Open gainfit.C and change the run# in each filename to match the 3 you are using
    4. Run the GainFit.C macro: root -l gainfit.C
    - Will produce gainfit.root and gainfit.ps (example attached )
     
    5. Export the newvpdhv to a text file with correct format
    6. Fill in the 6 TOF-only trays using the correlation plot on the left of page 4 in gainfit.ps - in the past this has been roughly equivalent to adding ~160V to the HV calculated with <ADC> only. (Right now I am doing this step by hand but it would be better to write a script - TODO)
    7. The gains should undergo a sanity check - none should be too low or too high (above ~2100 V )

     

    5. Upload the gains to the auto-loaded file on sc5
    - Make sure that the VPD HV is **OFF** before uploading - otherwise the HV values will not be set properly.
    - autoload location : sc5.starp.bnl.gov:/home/sysuser/epics3.13.10/Application/BBCApp/config/upVPD/upVPD.save
     
    6. Have someone ( lijuan’s team ) take new runs with the “good” gain settings to set the TAC and MXQ offsets.
     
    7. VPD is commissioned 

    3. Collect data for VPD Slewing corrections

    1. Data should be taken using a VPD based trigger
    2. For AuAu collisions ~ 100K minimum events are needed
    3. For pp200 ~200K events were needed
    4. As soon as the data is acquired (or even before) make arrangements with Lidia for the fast offline production to be started. It needs to start immediately

    4. Perform VPD Slewing corrections for bbq & mxq crates

    1. Plot the TAC{East} - TAC{West} + 4096 vs. TPC vz to determine the TAC to ps conversion factor. Bbq and Mxq are not necessarily the same - so check both. (A sketch of one way to extract the factor is shown after this list.)

    2. Create calibration tuples from MuDsts - these contain just the VPD data, TPC z vertex etc. that we need.
    Current tuple maker is at : /star/institutions/rice/jdb/calib/ntupler/

    3. Setup a config file for slewing corrections - set data uri, bbq or mxq datasource and output filenames for qa and parameters

    4. Run the slewing jobs 
    5. Check the QA and if there is time before uploading have Shuai check the resolution.
    6. Upload parameter files to startrg.starp.bnl.gov in the ~staruser/ location with names like (m)vpd_slew_corr.<date>.txt for bbq (mxq)
    7. Check the md5sums of the two files you just made and compare them to the original parameter files - sometimes the line-endings get scrambled
    8. Make a soft link from 
     (m)vpd_slew_corr.<date>.txt to (m)vpd_slew_corr.txt for bbq (mxq).
    9. Have Jack run the script to write slewing corrections to crates.
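    As referenced in step 1 above, one way to extract the TAC to ps factor, assuming the TAC response is linear in time and vz = c*dt/2 (tacDiffVsVz is a hypothetical TH2 filled from the calibration ntuple; check the sign convention for your data):

    // ROOT sketch: the slope of (TAC_E - TAC_W + 4096) vs TPC vz gives TAC counts per cm;
    // with dt = 2*vz/c this converts to picoseconds per TAC count.
    void tacToPs(TH2* tacDiffVsVz)
    {
      TProfile* prof = tacDiffVsVz->ProfileX("tacProf");
      prof->Fit("pol1");
      double slope = prof->GetFunction("pol1")->GetParameter(1);  // counts per cm

      const double c = 0.02998;                                   // speed of light in cm/ps
      double psPerCount = 2.0 / (c * TMath::Abs(slope));
      printf("TAC to ps conversion: %.2f ps/count\n", psPerCount);
    }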


    5. Check VPD plots - if everything looks good then you are done


    QA from run15 pA200 is attached for the full statistics runs







    BTOF Offline Reconstruction

    BTOF Offline Reconstruction



    Known/solved issues:

    • P18ih (Run 9 pp500): TpcRefSys issue keeps BTOF geometry from getting correctly set up
      • this affects all runs before 2013 (see StBTofGeometry.cxx)
      • fixed in SL19d library
      • quick fix: run BTOF afterburner on MuDSTs to recreate BTOF matching and calibrations
    • SL17h/SL17i: Optimized libraries missing west BTOF geometry.

     

    PPV with the use of BTOF

    This area contains the study on the Pileup-Proof Vertex (PPV) finder with the use of hits from Barrel Time-Of-Flight (BTOF).

    Coding

    Checklist:

    This lists the items that need to be implemented or QA'ed to include BTOF information in the PPV.

    Coding: - almost done, need review by experts

    1) BTOF hits (StBTofHitMaker) to be loaded before PPV - done

    2) BTOF geometry need to be initialized for the PPV - done

        As PPV is executed before the StBTofMatchMaker, I think in the future the BTOF geometry will first be initialized in PPV in the chain and added to memory. StBTofMatchMaker will then load this directly without creating its own BTOF geometry.

    3) Creation of BtofHitList - done

        The BTOF part is rather segmented according to tray/module/cell, but BTOF modules don't have the same eta coverage. The binning is segmented according to module numbers.

        The criteria for match and veto are different from other sub-systems, as we now allow one track to project onto multiple modules considering the Vz spread and track curvature.

        Define: Match - any of these projected modules has a valid BTOF hit.   Veto - at least one projected module is active and none of the active projected modules has a valid BTOF hit.  (A schematic of this logic is sketched after this checklist.)

    4) Update on TrackData VertexData to include BTOF variables - done

    5) Main matching function: matchTrack2BTOF(...) - done

        Currently, the match/veto is done at the module level. I set a localZ cut (|z|<3 cm currently, and possibly also remove cells 1 and 6 to require the track to point to the center of the module), but this can be tuned in the future. Whether we need to match at the cell level can also be discussed.

    6) Update on StEvent/StPrimaryVertex, add mNumMatchesWithBTOF - done (need update in CVS)

    7) A switch to decide whether to use BTOF or not. - done (but need to add an additional bfc option)

    The latest version is located in /star/u/dongx/lbl/tof/NewEvent/Run9/PPV/StRoot
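
    As a reference for items 3) and 5), the following sketch illustrates the module-level match/veto logic described above. It is not the actual PPV/StGenericVertexMaker code; the helper arguments (projectedModules, moduleIsActive, moduleHasHit) are hypothetical stand-ins for the track projection and BtofHitList queries:

    // Sketch of the BTOF module-level match/veto classification used in PPV.
    #include <vector>

    enum class BtofVote { Match, Veto, NoVote };

    // A track may project onto several modules because of the Vz spread and
    // track curvature, so all projected modules are examined.
    BtofVote classifyTrack(const std::vector<int>& projectedModules,
                           bool (*moduleIsActive)(int),
                           bool (*moduleHasHit)(int))
    {
      bool anyActive = false;
      for (int m : projectedModules) {
        if (moduleHasHit(m)) return BtofVote::Match;  // Match: any projected module has a valid hit
        if (moduleIsActive(m)) anyActive = true;
      }
      // Veto: at least one projected module is active and none of them has a valid hit
      return anyActive ? BtofVote::Veto : BtofVote::NoVote;
    }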

    QA: - ongoing

     

    MC simulation study

    Default PPV for PYTHIA minibias events in y2009

    The first check is to test the default PPV and check whether the result is consistent with those from the Vertex-Group experts.

    GSTAR setup

    geometry y2009

    BTOF geometry setup:  btofConfig = 12 (Run 9 with 94 trays)

    vsig  0.01  60.0
    gkine -1 0 0 100 -6.3 6.3 0 6.29 -100.0 100.0
     

    PYTHIA setup

    MSEL 1         ! Collision type
    MSTP (51)=7
    MSTP (82)=4
    PARP (82)=2.0
    PARP (83)=0.5
    PARP (84)=0.4
    PARP (85)=0.9
    PARP (86)=0.95
    PARP (89)=1800
    PARP (90)=0.25
    PARP (91)=1.0
    PARP (67)=4.0
     

    BFC chain reconstruction options

    trs fss y2009 Idst IAna l0 tpcI fcf ftpc Tree logger ITTF Sti VFPPV NoSvtIt NoSsdIt bbcSim tofsim tags emcY2 EEfs evout -dstout IdTruth geantout big fzin MiniMcMk eemcDb beamLine clearmem

    Just for record about the PPV cuts:

    StGenericVertexMaker:INFO  - PPV::cuts
     MinFitPfrac=nFit/nPos  =0.7
     MaxTrkDcaRxy/cm=3
     MinTrkPt GeV/c =0.2
     MinMatchTr of prim tracks =2
     MaxZrange (cm)for glob tracks =200
     MaxZradius (cm) for prim tracks &Likelihood  =3
     MinAdcBemc for MIP =8
     MinAdcEemc for MIP =5
     bool   isMC =1
     bool useCtb =1
     bool DropPostCrossingTrack =1
     Store # of UnqualifiedVertex =5
     Store=1 oneTrack-vertex if track PT/GeV>10
     dump tracks for beamLine study =0
     

    Results

    In total, 999 PYTHIA mb events were processed. Among these, 990 events have at least one reconstructed vertex (frac = 99.1 +/- 0.3 %). The following plot shows the "funnyR" plot of the vertex ranking for all found vertices.

    Clearly there are many vertices with negative ranking. If we define vertices with positive ranking as "good" vertices, the left plot below shows the "good" vertex statistics.

    Only 376 events (frac 37.6 +/- 1.5 %) have at least one "good" vertex. The middle plot shows the Vz distributions for the MC input and the reconstructed first "good" vertices. However, the right plot, which shows the Vz difference between the reconstructed vertex and the MC input vertex, indicates that not only the good vertices but also most of the found vertices, even those with negative ranking, are within a 1 cm difference.

    If we instead define a "good" vertex by |Vz(rec)-Vz(MC)| < 1 cm, as Jan Balewski studied in this page: http://www.star.bnl.gov/protected/spin/balewski/2005-PPV-vertex/effiMC/ then 962 events (frac 96.3 +/- 0.6 %) have at least one "good" vertex.

    One note about the bfc log: I notice the following message:

    ......

     BTOW status tables questionable,
     PPV results qauestionable,

      F I X    B T O W    S T A T U S     T A B L E S     B E F O R E     U S E  !!
     
     chain will continue taking whatever is loaded in to DB
      Jan Balewski, January 2006
    ......

    The full log file is /star/u/dongx/institutions/tof/simulator/simu_PPV/code/default/test.log

     

    Update 9/10/2009

    With BTOF included in the PPV, the vertex ranking distributions are shown below. (Note: only 94 trays in y2009.)

    The # of events containing at least one vertex with ranking>0 is 584 (frac. 58.5 +/- 1.6 %). This number is closer to the vertex finding efficiency I have in mind for pp minimum-bias events. So the earlier low efficiency was due to the missing CTB, with BTOF now acting like the CTB(?).

     

    Update 9/22/2009

    After several rounds of message exchange with Jan, Rosi etc, I found several places that can be improved.

    1) Usually we use the BBC triggered MB events for study. So in the following analysis, I also only select the BBC triggered MB events for the vertex efficiency study. To select BBC triggered events, please refer to the code $STAR/StRoot/StTriggerUtilities/Bbc on how to implement it.

    2) Use ideal ped/gain/status for BEMC in the simulation instead of the parameters for real data. To turn this on, one needs to modify the bfc.C file: add the following lines for the db maker in bfc.C (after line 123)

        dbMk->SetFlavor("sim","bemcPed"); // set all ped=0 <==THIS
        dbMk->SetFlavor("sim","bemcStatus");  // ideal, all=on
        dbMk->SetFlavor("sim","bemcCalib"); // use ideal gains

    These two changes significantly improve the final vertex efficiency (shown later). The following two are also suggested, although the impact is marginal.

    3) Similarly use ideal ped/gain/status for EEMC.

        dbMk->SetFlavor("sim","eemcDbPMTped");
        dbMk->SetFlavor("sim","eemcDbPMTstat");
        dbMk->SetFlavor("sim","eemcDbPMTcal");

    4) Use ideal TPC RDO mask. You can find an example here: /star/u/dongx/institutions/tof/simulator/simu_PPV/test/StarDb/RunLog/onl/tpcRDOMasks.y2009.C

    With these updates included, the following plot shows the # of good vertex distribution from 500 PYTHIA mb events test.

    The vertex efficiency is now raised to ~50%. (OK? low?)

    Just as a check on the BBC efficiency, here I accepted 418 events with BBC triggers out of 500 events in total. Eff = 83.6 +/- 1.7 %, which is reasonable.

     

    MC study on PPV with BTOF in Run 9 geometry

    This MC simulation study is to illustrate the performance of PPV vertex finder with BTOF added under different pileup assumptions. All the PPV coding parts with BTOF included are updated in this page.

    The geometry used here is y2009 geometry (with 94 BTOF trays). Generator used is PYTHIA with CDF "TuneA" setting. The details of gstar setup and reconstruction chain can be found here. The default PPV efficiency for this setup (with BBC trigger selection) is ~45-50%.

    The triggered events selected are BBC-triggered minimum-bias events. The simulation includes TPC pileup minimum-bias events for several different pileup conditions. The pileup simulation procedure is well described in Jan Balewski's web page. I chose the following pileup setup:

    mode BTOF back 1; mode TPCE back 3761376; gback 470 470 0.15 106. 1.5; rndm 10 1200

    A few explanations:

    1. 'mode BTOF back 1' means try pileup for BTOF only in the same bXing.
    2. '3761376' means for TPC try pileup for 376 bXings before, in, and after the trigger event. TRS is set up to handle pileup correctly. Note, 376*107 ns ~ 40 µs - the TPC drift time.
    3. gback decides how pileup events will be pulled from your minb.fzd file.
      • '470' is the # of tried bXings back & forward in time.
      • 0.15 is the average # of added events for a given bXing, drawn from a Poisson distribution - multiple interactions for the same bXing may happen if the probability is large. I choose this number to be 0.0001, 0.02, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50 for the different pileup levels.
      • 106. is the time interval, multiplied by the bXing offset and presented to the pileup-enabled slow simulators, so the code knows how much in time the analog signal needs to be shifted and in which direction.
      • 1.5 is the average # of skipped events in the minb.fzd file. Once the file is exhausted it is reopened. If you skip too few, your pileup events soon start to repeat; if you skip too many, you read the input file excessively.
    4. 'rndm' is (presumably) the seed for the pileup random-number generator.

    The results are shown below:

    • Vertex efficiency

    Fig. 1: Vertex efficiencies in different pileup levels for cases of w/ BTOF and w/o BTOF.

    Here a good vertex is defined as a vertex with positive ranking; a real vertex is defined as a good vertex with |Vz-Vz_MC| < 1 cm.

    • # of BTOF/BEMC Matches

    Fig. 2: # of BTOF/BEMC matches for the first good & real vertex in different pileup levels.

    • Ranking distributions

    Fig. 3: Vertex ranking distributions in each pileup level for both w/ and w/o BTOF cases.

     

    • Vertex z difference

    Fig. 4: Vertex z difference (Vzrec - VzMC) distributions in different pileup levels for both w/ and w/o BTOF cases. Two plots in each row in each plot are the same distribution, but shown in two different ranges.

    [Fig. 4 grid of plots: rows = w/o BTOF and w/ BTOF; columns = pileup levels 0.0001, 0.02, 0.05 (first set of panels), 0.10, 0.15, 0.20 (second set), 0.30, 0.40, 0.50 (third set).]

     To quantify these distributions, I use the following two variables: the Gaussian width of the main peak around 0 and the RMS of the whole distribution. Figs. 5 and 6 show these two quantities:

    Fig. 5: Peak width of Vzrec-VzMC in Fig. 4 in different pileup levels.

     

    Fig. 6: RMS of Vzrec-VzMC distributions in different pileup levels.

    • CPU time

    The CPU time needed to process the pileup simulation grows rapidly as the pileup level increases. Fig. 7 shows the CPU time needed per event as a function of pileup level. The time shown here is the average for 1000 events split into 10 jobs executed on RCF nodes. A couple of these 10 jobs took significantly less time than the others, which I attribute to performance differences between nodes, but I haven't confirmed this yet.

    Fig. 7: CPU time vs pileup level

     

    Update on 12/23/2009

    There were some questions raised at the S&C meeting about why the resolution w/ TOF degrades in the low pileup cases. As we know, including BTOF increases the fraction of events in which a good vertex is found. This improvement is mainly for events with few EMC matches, which would not be reconstructed with a good vertex if BTOF were not included (see the attached plot comparing w/ and w/o BTOF at the 0.0001 pileup level). Events entering Fig. 5 are all events with at least one good vertex. With BTOF, many events with only 0 or 1 EMC matches can have a reconstructed vertex because of the BTOF matches. Since low-pT tracks reach BTOF more easily than BEMC, one would expect the mean pT of the tracks from these good vertices to be smaller when BTOF is included (how much smaller is not clear quantitatively), resulting in a worse projection uncertainty to the beamline; thus this event sample will have a slightly worse Vz resolution.

    The best way I have to select the same event sample in the w/ BTOF case as in the w/o BTOF case is to require the number of BEMC matches >= 2 for the vertices filled into the new Vz difference plot. Fig. 8 shows the same distribution as in Fig. 5 but with nBEMCMatch >= 2.

     

    One can see the change is in the right direction, but it still seems not perfect in this plot for the very low pileup cases. I also went back to compare the reconstructed vertices event-by-event; here are some output files:
    /star/u/dongx/lbl/tof/simulator/simu_PPV/ana/woTOF_0.0001.log and wTOF_0.0001.log
    The results are very similar except for a few 0.1 cm movements in some events (which I attribute to the PPV step size). Furthermore, in the new peak-width plot shown here, for these very low pileup cases the difference between the two is even smaller than 0.1 cm, which I expect to be the step size in the PPV.

     

    Test with real data

    The PPV with BTOF included is then tested with Run 9 real data. The detail of the coding explanation can be found here.

    The PPV is tested for two cases, w/ BTOF and w/o BTOF, with comparisons made afterwards. The data files used in this test are (1862 events in total):

    st_physics_10085140_raw_2020001.daq

    st_physics_10096026_raw_1020001.daq

    (Note that in 500 GeV runs, many of the triggered events are HT or JP, presumably with higher multiplicity compared with MB triggers.) And the chain options used in the production are:

    pp2009a ITTF BEmcChkStat QAalltrigs btofDat Corr3 OSpaceZ2 OGridLeak3D beamLine -VFMinuit VFPPVnoCTB -dstout -evout

    Production was done using the DEV lib on 11/05/2009. The results are shown below:

    Fig. 1: The 2-D correlation plot of # of good vertices in each event for PPV w/ TOF and w/o TOF.

    Fig. 2: The ranking distributions for all vertices from PPV w/ and w/o TOF

    Fig. 3: Vz correlation for the good vertex (ranking>0) with the highest ranking in each event for PPV w/ and w/o TOF. Note that if the event doesn't have any good vertex, the Vz is set to -999 in my calculation, which appears in the underflow of the histogram statistics.

    Fig. 4: Vz difference between vertices found in PPV w/ TOF and w/o TOF for the first good vertex in each event if any. The 0.1 cm step seems to come from the PPV.

     

    Update on 4/5/2010:

    I have also run some tests with the 200 GeV real data. The test was carried out on the following file:

    st_physics_10124075_raw_1030001.MuDst.root

    All other settings are the same as described above. Here are the test results:

    Fig. 5: The 2-D correlation plot of # of good vertices in each event for PPV w/ TOF and w/o TOF (200 GeV)

    Fig. 6: The ranking distributions for all vertices from PPV w/ and w/o TOF (left) and also the correlation between the funnyR values for the first primary vertex in two cases. (200 GeV)

    Fig. 7: Vz correlation for the good vertex (ranking>0) with the highest ranking in each event for PPV w/ and w/o TOF (200 GeV).

    Fig. 8: Vz difference between vertices found in PPV w/ TOF and w/o TOF for the first good vertex in each event if any (200 GeV)


    Conclusion:

    The above tests with real data have shown the expected PPV performance with inclusion of BTOF hits.

     

    BTOF Operations

    Note that the only authoritative location of operations information for BTOF (and other subsystems) is at https://drupal.star.bnl.gov/STAR/public/operations

    This page only serves as a short cut to (recent) BTOF-related operations manuals. Always check the date and consult with the BTOF experts in case of any doubt.

    BTOF Simulations

     

    Barrel TOF/CTB Geometry Configurations

    The definitions of Barrel TOF/CTB geometry configurations are shown here:

    BtofConfigure = 1;      /// All CTB trays

    BtofConfigure = 2;      /// All TOFp trays

    BtofConfigure = 3;      /// big TOFp path (trays from 46 to 60), rest are CTB trays

    BtofConfigure = 4;      /// Run-2: one TOFp tray (id=92), rest CTB

    BtofConfigure = 5;      /// Run-3: one TOFp (id=92) tray and one TOFr (id=83) tray, rest CTB

    BtofConfigure = 6;     /// Full barrel MRPC-TOF

    BtofConfigure = 7;      /// Run-4: one TOFp (id=93) tray and one TOFr (id=83) tray, rest CTB

    BtofConfigure = 8;      /// Run-5: one TOFr5 (id=83) tray, rest CTB

    BtofConfigure = 9;      /// Run-6: same as Run-5

    BtofConfigure = 10;    /// Run-7: same as Run-5

    BtofConfigure = 11;    /// Run-8: Five TOFr8 trays (id=76-80), rest CTB

    BtofConfigure = 12;    /// Run-9: 94 installed trays, rest slots are empty

     

    TOF selections in the different geometry tags should be as follows:

    year 2002:     Itof=2;    BtofConfigure=4;

    year 2003:     Itof=2;    BtofConfigure=5;

    year 2004:     Itof=2;    BtofConfigure=7;

    year 2005:     Itof=4;    BtofConfigure=8;

    year 2006:     Itof=5;    BtofConfigure=9;

    year 2007:     Itof=5;    BtofConfigure=10;

    year 2008:     Itof=6;    BtofConfigure=11;

    year 2009:     Itof=6;    BtofConfigure=12;

    All geometry in UPGRxx should use BtofConfigure=6 (full TOF configuration).

    TOF Simulation Resolution Database

     Variables:
        unsigned short resolution[23040]; // The BTOF time resolution for a given cell used in simulation.
        octet algoFlag[120]; // information about granularity of parameters 0-cell by cell, 1-module by module, 2-TDIG, 3-tray
     
    Frequency:
    This table will be updated whenever the Calibrations_tof::* tables are updated, generally once per RHIC Run
     
     
    Index:
    *table is not indexed
     
     
    Size:
    The size of one row is 2*23040 + 1*120 = 46200 bytes (also verified by compiling and checking).
     
    Write Access:
    'jdb' - Daniel Brandenburg (Rice University)
    'geurts' - Frank Geurts (Rice University)
     
    See below the full .idl file
     
    TofSimResParams.idl:
     
    /* TofSimResParams.idl:
    *
    * Table: tofSimResParams
    *
    * description: Parameters used to set the BTOF time resolution in simulation
    *
    * author: Daniel Brandenburg (Rice University)
    *
    */
     
    struct tofSimResParams{
    unsigned short resolution[23040];         /* Cell Res in picoseconds*/
    octet algoFlag[120];             /* granularity of parameters*/
    };
     
    /* End tofSimResParams.idl */
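     
    For orientation, the sketch below shows one way the flat resolution array could be addressed. The indexing convention assumed here (23040 = 120 trays x 32 modules x 6 cells, zero-based flat index) is only an illustration; the actual ordering is whatever the table writer uses:

    // Sketch: addressing tofSimResParams.resolution[23040].
    // ASSUMED convention: index = (tray-1)*192 + (module-1)*6 + (cell-1).
    #include <cassert>

    inline int btofCellIndex(int tray, int module, int cell)  // tray 1..120, module 1..32, cell 1..6
    {
      assert(tray >= 1 && tray <= 120 && module >= 1 && module <= 32 && cell >= 1 && cell <= 6);
      return (tray - 1) * 192 + (module - 1) * 6 + (cell - 1);  // 0 .. 23039
    }

    // Usage (sketch): double sigma_ps = params.resolution[btofCellIndex(tray, module, cell)];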

    Database

    These pages describe how to use the Barrel TOF (including VPD) database.

    One can use the following browser to view the TOF tables: http://www.star.bnl.gov/Browser/STAR/browse-Calibrations-tof.html

    More information on how to access the various (TOF) databases can be found in the Offline DB Structure Explorer at the following link: http://online.star.bnl.gov/dbExplorer/

     

     

    Add a flag in vpdTotCorr.idl for noVPD start calibration switch

    In low energy runs, the VPD acceptance and efficiency become a potential issue. We are planning to develop a barrel TOF self-calibration algorithm based only on the hits in the barrel trays (we call this non-vpd-start calibration). The calibration constant structures should be similar to those of the conventional calibration that uses the VPD for the start time, but the algorithm applying these constants will certainly be different. We thus need to introduce an additional flag in the calibration table to tell the offline maker to load the corresponding algorithm. The proposed change is to the vpdTotCorr.idl table. The current structure is:

    struct vpdTotCorr {
      short tubeId;   /* tubeId (1:38), West (1:19), East (20:38) */
      float tot[128]; /* edge of tot intervals for corr */
      float corr[128];   /* absolute corr value */
    };
     

    We would like to add another short variable, "corralgo"; the modified idl file would then look like this:

    struct vpdTotCorr {
      short tubeId;   /* tubeId (1:38), West (1:19), East (20:38) */
      float tot[128]; /* edge of tot intervals for corr */
      float corr[128];   /* absolute corr value */
      short corralgo;     /* 0 - default vpd-start calibration algorithm */
                                 /* 1 - non-vpd-start calibration algorithm */
    };
     

    We will make the corresponding modifications to the StVpdCalibMaker and StBTofCalibMaker to implement the new calibration algorithm.
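
    A minimal sketch of the intended use of the new flag is given below. Only the flag semantics (0 = vpd-start, 1 = non-vpd-start) come from the proposal above; the surrounding code is hypothetical, not the actual StVpdCalibMaker/StBTofCalibMaker implementation:

    // Sketch: selecting the start-time algorithm from vpdTotCorr.corralgo.
    struct vpdTotCorr_st {
      short tubeId;
      float tot[128];
      float corr[128];
      short corralgo;   // 0 - default vpd-start, 1 - non-vpd-start
    };

    enum class StartAlgo { VpdStart, NonVpdStart };

    StartAlgo pickStartAlgo(const vpdTotCorr_st& row)
    {
      // A single flag value is expected per table load; the maker would switch
      // its calibration path accordingly.
      return (row.corralgo == 1) ? StartAlgo::NonVpdStart : StartAlgo::VpdStart;
    }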

     

    Proposal of new TOF tables for Run9++ (Jan. 2009)

    We are proposing to create several new TOF tables for future full barrel TOF runs.

    Draft IDLs:

    /* tofINLSCorr
     *
     * Tables: tofINLSCorr
     *
     * description: // INL correction tables information for TOF TDIGs (in short)
     */

    struct tofINLSCorr {
      short tdigId;         /*  TDIG board serial number id  */
      short tdcChanId;      /*  tdcId(0:2)*8+chan(0:7)       */
      short INLCorr[1024];  /*  INL correction values        */
    };

    INDEX: trayCell [ table: trayCellIDs ], range: [1 .. 30000]

    Note: This table is an updated version of the previous tofINLCorr. Considering the considerable I/O load of this table in the future full system, we changed the precision of the correction values from float to short (the value will be stored as (int)(val*100)). From the initial test, we won't lose any electronic resolution.
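
    The float-to-short packing mentioned in the note can be sketched as follows (helper names are illustrative only; the real conversion lives in the table writer and reader):

    // Sketch of the tofINLSCorr precision scheme: corrections stored as
    // short = (int)(val*100) and unpacked back to float on read.
    inline short packINL(float corr)
    {
      return static_cast<short>(corr * 100.0f);   // as in the note: (int)(val*100)
    }

    inline float unpackINL(short stored)
    {
      return stored / 100.0f;                     // back to the original units
    }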

    /* tofGeomAlign
     *
     * Tables: tofGeomAlign
     *
     * description: // tof alignment parameters for trays, trayId will be
     *              // indicated as the elementID (+1) in data base
     */

    struct tofGeomAlign {
      float z0;     /* offset in z */
      float x0;     /* offset in radius */
      float phi0;   /* offset in phi (local y direction) */
      float angle0; /* tilt angle in xy plane relative to ideal case */
    };

    INDEX: trgwin [ table : trgwinIDs ], range : [1..122]

    Note: This table contains the necessary geometry alignment parameters for real detector position shifting from the ideal position in GEANT. The trayId will be indicated as the elementID in the database.

    /* tofStatus
     *
     * Tables: tofStatus
     *
     * description: // tof status table
     */

    struct tofStatus {
      unsigned short status[24000];    /*  status code */
    };

    INDEX: None

    Note: TOF status table. The definition of the status code is being finalized.

    /* tofTrgWindow
     *
     * Tables: tofTrgWindow
     *
     * description: // tof trigger timestamp window cuts to select physical hits
     */

    struct tofTrgWindow {
      unsigned short trgWindow_Min;    /*  trigger time window cuts */       
      unsigned short trgWindow_Max;    /*  trigger time window cuts */
    };

    INDEX: trgwin [ table: trgwinIDs ], range: [ 1 .. 122 ]

    Note: TOF trigger time window cuts to select physical hits. There will be 120 (trays) + 2 (E/W VPDs) elements in total.

    Load Estimate

    tofINLSCorr:  There will be one row for each channel; in total for the full barrel system there will be ~24000 rows. We also have some spare boards, so to keep everything in one place the maximum # of rows is better set to 30000. The total load size for DB I/O will be ~30000*1024*2 bytes ~ 60 MB.

    tofGeomAlign:  There will be 120 rows in each fill. The load size is negligible.

    tofStatus:    All channels are combined in a single array.

    tofTrgWindow:  120+2 elements.

     

    DAQ

    Shift Crew Documentation:
         * Run Control Documentation
         * Online JEVP Histogram Documentation


    RTS READER Documentation

    RTS_READER on Google Docs

    Detector Upgrades

    This Drupal section is reserved for R&D detectors. Each detector would
    be moved to the root area whenever ready.

    Event Plane Detector

     

    Event Plane Detector group:

    • Daniel Cebra (Davis)
    • Xin Dong (LBNL)
    • Geary Eppley (Rice)
    • Frank Geurts (Rice)
    • Mike Lisa (OSU)
    • Bill Llope (WSU)
    • Grazyna Odyniec (LBNL)
    • Robert Pak (BNL)
    • Alex Schmah (LBNL)
    • Prashanth Shanmuganathan (Kent)
    • Subhash Singha (Kent)
    • Mikhail Stepanov (Purdue)
    • Xu Sun (LBNL)
    • Aihong Tang (BNL)
    • Jim Thomas (LBNL)
    • Isaac Upsal (OSU)
    • Fuqiang Wang (Purdue)
    • Wei Xie (Purdue)
    • Rosi Reed (Lehigh)

    Available R&D funds: 30k

    R&D proposal: http://www.star.bnl.gov/protected/heavy/aschmah/EPD/STAR_R_and_D_proposal_EPD.pdf

    Task | Responsible person | Timeline | Resources needed
    GEANT4 simulation

    • Detector response function (input of different particles, energies, angles, scintillator materials, scintillator geometries, etc.)
    • Do we need wave length shifting fibres?
    • Light guides.
    Isaac Upsal (OSU), Alex Schmah (LBNL) 08/14 - 09/14 none
    Tile geometry optimization

    • Minimizing the amount of different tiles.
    • Event plane resolution for new geometry.
    • Centrality resolution for new geometry.
    Subhash Singha (Kent)
    Mikhail Stepanov (Purdue)
    09/14 - 10/14 none
    Radiation tests of SiPMs

    • Estimation of expected radiation for BES II.
    • Comparison to BNL radiation tests (Akio).
    • Setup of radiation test for prototype detector.
    Daniel Cebra (UCD)
    Specifications of scintillators

    • Compare scintillator specs. 
    • Survey of possible vendors.
         
    Specifications of SiPMs

    • Compare SiPM specs.
    • Survey of possible vendors.
    • Survey of what other experiments have used.
         
    Setup of readout system

    • Simple readout system for basic tests.
    • Advanced readout system for prototype.
    • QT boards, PXL readout,...
         
    Setup of basic test system

    • Cosmic tests.
    • Test of readout system.
    • Comparison to GEANT4 simulation.
     Mikhail Stepanov (Purdue)    
    Development of wrapping technique

    • Optimized for  about 2000 tiles.
         
    Development of techniques to install wave length shifting fibres

    • + connection to light guides and/or SiPMs
         
    Mechanical construction for prototype

    • Two sector prototype.
         
    Integration into STAR trigger system

    • Survey of requirements.
    • Contact STAR trigger group.
         
    Trigger requirements (simulation/calculation)

    • For fixed target collisions.
    • Background suppression.
    Daniel Cebra (UCD)    

    Observable/Trigger Detector specification
    v1  
    v2  
    v3  
    HBT  
    Fixed target  
    Centrality  

    EPD Conference Files

    EPD presentations and posters for various conferences

    EPD meeting page

    This is the page for the EPD meetings. I suppose we could add a subpage for each individual meeting, that way only files relevant to that meeting would appear there.

    Isaac, work faster!

    EPD meeting April 13, 2016

    Agenda copied from Alex's email:

    Hi All,
     
    We have a brief EPD meeting today to discuss:
     
    - EPD review: To do (so far)
     
    - Fiber to SiPM coupling (status)
     
    - aob
     
    Best,
     
    Alex

    Attached is Isaac's talk on the SiPM board design.

    EPD meeting April 20, 2016

    Hi All,
     
    We have our EPD meeting today:
     
    - Review report: Summary and to do
     
    - How to proceed with the proposal (to be submitted by May 5th)
     
    Best,
     
    Alex
     
     
     
    Title:          STAR Event Plane Detector Meeting
    Description:
    Community:      STAR
     Meeting type: Open Meeting (Round Table)

     Meeting Access Information:
            SeeVoghRN Application
    http://research.seevogh.com/joinSRN?meeting=MsMiMI2n2vDlD9989IDt9s
            Mobile App :  Meeting ID: 101 3768   or  Link:
    http://research.seevogh.com/join?meeting=MsMiMI2n2vDlD9989IDt9s

    EPD meeting August 10, 2016

     

    EPD meeting July 20, 2016

     

    EPD meeting June 22, 2016

     

    EPD meeting October 12, 2016

     

    FCAL

    This is a test page ...

    GMT

    Preview of the 2006/12 review findings

    The panel recommends simulation and optimization of the following possible configurations, one of which should be chosen: either

    • Two layers of active pixel sensors followed
      radially outward by a layer of ALICE
      style pixels (the HPD) located at a slightly larger radius than its present
      location, followed by the existing STAR SSD.
    • Two layers of active pixel sensors followed
      radially outward by the IST1 and IST2 layers both at somewhat smaller radii
      than presently proposed, followed by the existing STAR SSD.

    In the event the existing SSD can
    not be counted on as part of the future mid-rapidity tracking system, the panel
    recommends the following configuration be optimized:

    • One IST layer (IST2) should remain approximately
      at its present location to recoup the functionality of the SSD. Inside this
      tracking layer, two layers of active pixel sensors should be followed radially
      outward by either a layer of ALICE
      style pixels at the present location of the HPD or a second IST layer (IST1) moved somewhat further in.

    Tracking Upgrade Review Material



    The TUP Review is scheduled for Dec. 7th, 2006, and will consist of material from the HFT, IST and HPD groups. The draft charge for the review will be similar to the HFT review charge, archived here: Tracking Review Committee Charge.

    The datasets for the upgrade reviews are archived here.

    UPGR05
    Hit Occupancy
    Pion Efficiency (pt, &eta)
    Ghosting (pt, &eta, centrality, pileup)
    Pion Acceptance
    Hits X,Y
    Hit Occupancy
    Inter-hit distance (r,centrality)
    Cluster Finding Eff. (centrality)
    Residuals (r,eta,pt,z,centrality)
    DCA (pt,centrality,signal)
    UPGR06
    Hit Occupancy
    Pion Efficiency (pt, &eta)
    Ghosting (pt, &eta, centrality, pileup)
    Pion Acceptance
    Hits X,Y
    Hit Efficiency
    Inter-hit distance (r,centrality)
    Cluster Finding Eff. (centrality)
    Residuals (r,eta,pt,z,centrality)
    DCA (pt,centrality,signal)


    Study

    Geometry

    IST(1),HPD,HFT IST(1),HFT UPGR06 UPGR09 UPGR10 UPGR11
    Hit Occupancy
    Pion Efficiency (pt, eta)
    Pion Acceptance (pt, eta)
    Ghosting (pt, eta, centrality, HFT pileup)
    Hits X,Y * * * *
    Hit Efficiencies X,Y * * * *
    Inter-hit distance (r,centrality)
    Cluster Finding Eff. (centrality)
    Residuals (r,eta,pt,z,centrality)
    DCA (pt,centrality,signal)
    Secondary vertex Resolution
    (D trajectory verctor, phi, centrality)

    IST presentation

    Residual and Pull plots (HowTo)

    Sti has a nice utility for providing Pull and residual plots for all detectors. The chain option is "StiPulls". The resulting ntuple is written to the .tag.root file.

    Among other things, the ntuple stores the (position of the hit - position of the track) in the variables ending in "Pull". It should be noted that the variable contains this difference (or residual), and NOT the difference scaled by the error. This is left for the user. There are several branches of the pulls tree; one for global tracks, one for primary tracks, and one filled only during the outside-in pass of tracking.

    The outside-in pass is stored in the branch mHitsR, as described in Victor's post. The information stored here is the residual between hit and track positions before the hit is added to the track. This information is useful for evaluating the progression of the track error as the tracker steps in toward the vertex. It's also essential for those of us trying to evaluate potential detector configurations.

    So, to evaluate the track residual, one can simply use the root command prompt:

    root> StiPulls->Draw("mHitsR.lYPul>>residual","mHitsR.lXHit<5. && mHitsR.lXHit>2.2")
    

    This will give you a histogram "residual" which contains residuals in phi for hits from the inner HFT only (2.2 cm < inner HFT < 5 cm). One can also use the detector id, which is also stored in the tree.

    For residuals as a function of Pt, one can try:
    root> StiPulls->Draw("mHitsR.lYPul:mHitsR.mPt>>residualPt","mHitsR.lXHit<5. && mHitsR.lXHit>2.2")
    root>residualPt->FitSlicesY()
    root>residualPt_2->Draw()
    
    This gives you a plot comparable to the pointing resolutions derived in Jim's hand calculations.
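
    Since the stored quantity is the unscaled residual, forming a true pull means dividing by the combined hit and track errors. The error branch names used below (mHitsR.lYHitErr, mHitsR.lYFitErr) are hypothetical placeholders; check the actual StiPullHit members before using them:

    root> StiPulls->Draw("mHitsR.lYPul/sqrt(mHitsR.lYHitErr*mHitsR.lYHitErr + mHitsR.lYFitErr*mHitsR.lYFitErr)>>pull(100,-5,5)","mHitsR.lXHit<5. && mHitsR.lXHit>2.2")
    

    A correctly normalized pull distribution should then be close to a unit Gaussian.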


    This page is a compilation of posts to the ittf hypernews list (Victor's original post, Mike's requested changes, and Victor's response), as well as documents provided by the STAR S&C group (StiPullEvent, StiPullHit, and StiStEventFiller).

    Tracking Review Committee Charge

    The charge to the Review Committee for evaluation of the HFT proposal is archived here. The online document can be found under http://hepwww.physics.yale.edu/star/upgrades/Draft-Charge.pdf

    The Review Committee is asked to review the proposed tracking upgrades to STAR and to comment on the following:

    1. Scientific Merit: Will the proposed detectors significantly extend the physics reach of STAR? Is the science that will be possible with the addition of this upgrade sufficiently compelling to justify the proposed scope of the project?

    2. Technology Choice and Technical Feasibility: Are the proposed technologies appropriate, viable, and robust; are there outstanding R&D or technical issues which must be resolved before proceeding to a fully detailed construction plan covering technical, cost, and schedule issues?

    3. Technical specifications: Are the physics-driven requirements for this detector sufficiently understood, and will the proposed mechanical and electronics implementations meet those requirements? Is the proposed design reasonably optimized? Is the proposed scope of the upgrade justified by the physics driven requirements?

    4. Detector Integration: Is the impact of integrating this detector into STAR understood and manageable: are there potential "show-stoppers" with regard to mechanical support, utilities, cabling, integration into trigger, DAQ, etc.?

    5. Resources, Cost, and Schedule: Is the costing of the detector realistic; is the basis of estimate sound; has the full scope been included in the estimate; is the level of contingency realistic? Does there appear to be sufficient manpower to carry the project out successfully – including manpower for developing calibration and analysis software? Is the technically driven schedule achievable?

    eTOF Proposal

     A proposal to install CBM TOF detectors on the east pole tip for BES-II



    iTPC

    an Upgrade to Inner Sectors of STAR Time Projection Chamber


    proposal draft (with link to bookpage)

    SDU iTPC blog (Qinghu Xu)

    Chinese iTPC project (part of Project 973 for RHIC physics)

    STAR TPC

    STAR TPC 2003 NIMA paper

    mailing list: (itpc-l@lists.bnl.gov)

    September 2016 NP iTPC review

    September 2017 DOE Progress Review

      A talk on the iTPC was given to the instrumentation group on December 03 by FV. The talk is attached to this page.

     

     

     

    An upgrade to Inner Sectors of Time Projection Chamber

    The iTPC was developed into a proposal. The technical design report is available as a STAR note SN0644.

    Historical remarks:

    We propose to upgrade the inner sectors of the STAR TPC to increase the segmentation on the inner pad plane and to renew the inner sector wires which are showing signs of aging. The upgrade will provide better momentum resolution, better dE/dx resolution, and most importantly it will provide improved acceptance at high rapidity to |eta|<1.7 compared to the current TPC configuration of |eta|<~1.0. In this proposal, we demonstrate that acceptance at high rapidity is a crucial part of STAR’s future as we contemplate forward physics topics such as p-A, e-A and the proposed phase II of the Beam Energy Scan program (BES II). Unlike the outer TPC sectors, the current inner TPC pad row geometry does not provide hermetic coverage at all radii. The inner pads are 11.5 mm tall yet the spacing between rows is variable but always greater than 5 cm, resulting in "missing rows". Approximately, only 20% of the path length of the charged particle traversing the TPC inner sector has been sampled by the electronics readout.

    https://drupal.star.bnl.gov/STAR/event/2014/02/10/star-rd-2014-and-itpc-review  

    internal review: 
    https://drupal.star.bnl.gov/STAR/event/2015/02/05/itpc-internal-review 

    Electronics

    New Electronics

    Mechanic design of strongback

    Optimize the strongback for more electronics readout channels and for reduced material.

    Drawings from the original TPC design

    prototype iTPC strongback machining at UT Austin:
    machining strongback 10/15/2013

    TPC insertion tool:

    Multiple Wire Proportional Chambers

    Fabrication of wire chambers

    Pad size vs anode wire distance to pad plane
    STAR Note #0263

    Design of a prototype mini-drift TPC at SDU:
    https://drupal.star.bnl.gov/STAR/system/files/iTPCmtg_0912.pdf

    tools for measuring wire tension:
    https://drupal.star.bnl.gov/STAR/system/files/wire%20tension%20measurement_1.pdf

    wire tension parameters:

    Physics motivations

    Searching for the possible tri-critical point in the QCD phase diagram is one of the major scientific tasks in heavy-ion physics.

    Elliptic flow of identified particles has been used to study the properties of the strongly interacting Quark-Gluon Plasma.

    Directed flow (v1) excitation functions have been proposed as promising observables for uncovering evidence of crossing a first-order phase transition, based on hydrodynamic calculations.

    In addition to the above highlights of physics impact of the iTPC upgrade, the upgrade improves the tracking efficiency at low momentum.

    The upgrade also significantly enhances STAR’s physics capability at RHIC top energy. The improved dE/dx resolution allows better separation of charged kaons and protons at high momentum.

    BES II

    Project Schedule, Cost and Management

    This page has been repurposed as the main management page for the iTPC, which was approved as a BNL Capital
    project (< $5M). The subpages contain the main points of iTPC management. The old pages have been deleted at this point.

    Cost and Schedule

    Following the DOE review the project files are being updated.

    November 20, 2016
    Draft milestone table from the current WBS excel file.
    The greyed-out lines are proposed not to be in the Project Management Plan.

    Older files:
    Management Forms from BNL; Project and cost files
    The cost spreadsheet is from March 21, 2016. The numbers are used in the project management plan

    Management plan and other documents

    Project Management Plan: 
    Version 12: updated version for the September review.
    Version 20: updated with KPP and clarifications; changes since December marked in red.
    Version 21: updated org chart (Feb 2018)

    Reports

     Reports quarterly.




    ES&H reviews

     The material here is from several ES&H reviews and their follow ups.

    compiled by Robert Pak 8/15/17

    Attached is a folder of documents regarding safety reviews with C-AD you requested for distribution to DOE.  There were the following meetings (additional internal meetings and discussions with vendors occurred that are not included here):
    i) ASSRC meeting on March 8th (Robert presented for Rahul).
    ii) Engineering review of the installation platform on March 29th (Rahul presented remotely).
    iii) ESRC meeting on May 8th (Flemming and Tonko presented).
    iv) ASSRC meeting for enclosure with fire safety engineer on August 4th (Robert presented).
    v) Installation platform inspection by C-AD on August 15th (no formal presentation).

    Upcoming meetings include:
    i) Meeting on tests in the clean area.
    ii) ESRC meeting once power requirements for the new detector are finalized.
    iii) ESRC walk through before turn on.

    October 2018 DOE NP review

     Review was held at BNL

    Here are the final reports

    September 2016 iTPC NP review

    The review was held on September 13 & 14 in Washington DC
    All the talks and background material is available at the BNL indico page 

    The closeout report will be posted once finalized.
    I already extracted the recommendations and the comments that need some action. Note this is preliminary since we do
    not have the final report, but it should still be useful. See attachment. (9/20/16)

    The final close-out report was received on 12/14/2016 and is attached to this page. See the list below.

    • cover letter
    • excerpts from reviewers (personal comments)
    • final report

    The talks from the Jan 2016 Directors review are on the BNL indico page

    Response to recommendations

    1. Update on KPP -- we asked for more time on this which was granted
      We are suggesting a path forward for adding a KPP that reflects that the UPP dE/dx is achievable. At the review it was suggested, for example, to use the width of the signal from an 55Fe source.


      The connection between the observed resolution for an 55Fe source and the final dE/dx is not trivial. The resolution of the source depends, e.g., on how the signal is read out, via the pads or the wires.

      We are pursuing this by simulating the MWPC response to 5.9 keV electrons, by reviewing historical records (since part of the original acceptance criteria was a scan of all sectors with sources), and by investigating the just-started tests with the 55Fe source and X-ray gun at SDU, all to understand what the ideal response to stand-alone measurements would be.

      We hope you will agree to such a path forward; the plan is to aim for having a quantified proposal for this KPP by January 30, 2017. Enclosed are the suggested table and text.

    2. Workforce for installation etc  main document
      1. workforce spreadsheet
    3. Lessons learned from previous construction
      1. Document (word); the double click does not work on macOS
    4. Updated Project Management Plan with new milestones and resource loaded WBS
      1. The pdf of the updated WBS is here
      2. My comments to request and recommendation
    5. iTPC testing and commissioning plan, September 2017

    Background Material for Review

     The requested material will be collected here. For now it is the list below, which can be updated.

    • TDR (November 2015)  SN0644 
    • Review report for Directors review (Jan 2016)
    • Response to review (Feb 15 2016)
    • Risk assessment Plan (November 2015)
    • Q&A Plan and procedures
    • Project Management Plan (pdf)

    iTPC review responses

     The first response, due by Oct 15, was to provide an updated KPP. The response that was sent in is attached here.

    By November 1 we have to provide a workforce plan.

    The iTPC group should work with RHIC management to anticipate and identify workforce needs for construction, installation, and commissioning and develop a plan to mitigate any schedule risk due to a lack of technical and mechanical support personnel.  Submit the workforce plan to DOE by November 1, 2016.

    I have worked with a few members of iTPC and STSG to come up with the first estimates. These are contained in two documents,
    one describing the activities and a second with a summary of resources. It is not yet complete.
    See the attached documents.

    The second item is:

     Generate a Lessons Learned document from the construction and commissioning of the original STAR TPC and submit to DOE by November 1, 2016.

    This has been discussed, and Jim proposes to generate a document from the many presentations that we have assembled and to write an introductory
    document. This may actually serve us well, assembling all this material in a coherent fashion.
    The document was submitted on time.

    The testing plan was submitted to DOE in mid September 2017 ahead of the  yearly review.

    September 2017 DOE NP yearly progress review

     The meeting will be at BNL in room 2-160.

    The call for review is the content of an e-mail sent to me by Cassie Dukes of the NP office.

    The talks for the review are on the BNL indico pages.

    The final report was received in December 2017, and is included here

    iTPC Risk assessment

    Nov 30, 2015

    A draft version of the risk assessment has been assembled and is available for comments: draft

    This page will also have some of the background of the iTPC risk assessment.

    1. Letter from Berndt Mueller  (word file)
    2. Draft Charge from Zhangbu (pdf)
    3. Risk Analysis Plan - Draft template from the Late Ralph Brown (word)



    Background material

    iTPC Directors Review Jan 2016

    January 2016 Presentations on BNL Indico site

     

    This page will be used to keep track of note, documentation need for preparation of the review.

    1. Jim Thomas's comments on the schedule as of Dec 11.
    2. Further comments from Jim, and action from Flemming (word file)
    3. Comments from Blair on cleanness, HV and water systems.
    4. The most recent pdf print of the project file is here
    5. Unfortunately I cannot attach the MS project file- drupal does not allow that

    QA page and documents

     This page contains the sector assembly QA schedule

    1. Pdf version of project file
    2. Project file (cannot be uploaded to drupal!)
    3. Sector assembly and QA steps from Qinghua (word file)

    iTPC QA

     
    This page and its child pages contain information on the QA of sectors, organized by sector number.
    SN1001 is the prototype; SN00xx are the production sectors.

    7/19/18: Added the daily test activity summary file. The daily updates are written by Qian, and the file has all entries with the most recent first.

    Latest version  8/26/2018

    Hanseul made a nice webpage that shares all updates on the iTPC testing.

    https://docs.google.com/spreadsheets/d/11mFkoX1Mu64uL4oYq-vshhRE0bZa8CAqgsL1sToJ354/edit?usp=sharing


    The traveller that is used for pre installation checks is attached to the page.

    Summary of sector status  9/20/2018. A summary of problem sectors that should be considered as spare,
    and not installed. Powerpoint File.

    10/15/2018 screenshot of testing status at BNL




    The material in the child pages consists of copies of the LBNL google pages, plus any additional analysis results that may have been performed.

    The LBL assembly instructions, including check points, are attached.
    The work at LBNL was completed as of 5/16/2018 and the last sectors sent to SDU.

    6/6/2018 : A pdf version of the final filled out smartsheet is saved here

    6/6/2018: Jim  and Howard have analyzed all the survey measurements. The spreadsheet that summarizes the results is attached here.

    The Chinese travellers are uploaded to Qinghua's blog page.

    The travellers for testing at BNL are posted on    https://www.star.bnl.gov/protected/heavy/tc88qy/iTPC/sector/

    Travellers from the pre-installation check saved under each SNXXXX folder.
    iTPC status 10/15/2018. All sectors at BNL checked.





    Note:
    The two failed sectors SN0027 and SN0020 had separated at wide corners. Look for repairs at LBL.
    SN0024 failed the very sensitive He-leak test at LBL; the vacuum was 10-4 above the limits. It can be shipped to SDU
    to see if it passes the Ar sniffer test.
    SN0028 had failed completely along a long edge. It may be difficult to repair and should be set aside.

    Articles 31-36 are the additional strongbacks to be bonded. Pad planes on, sidemount done. Waiting for final machining, CMM and cleaning.
    It is believed that SN0012, SN0027 and SN0021 are repairable, planned to be done during Feb-March until the strongbacks arrive from IMT.


    Washing procedure changes: double-check vacuum, refill with glyptal.

    The pad plane connectivity and checkout QA is in the attached summary, which provides a list of all pad planes (34) inspected and the summary from the QA sheets used during inspection.


    Reports on various issues, found and resolved. This is work in progress.

    January 3, 2018
      Report on shipping temperature for SN 1001

    December 15, 2017
      Report on 2 FEE slots in SN0025 blocked by epoxy    

    February 2, 2018

    De-bonding of PCB to Al  (GG wire) SN0015    

    March 21, 2018
     Report on pad plane drilling survey at LBL for last 7

    March 4, 2018
     Report of shifted GG board on SN0012

    April 3, 2018
    Report of grounded pins on ABDB board on SN0008



    Shipping of sectors

    August 2018  SN0014,1715 and 10
    The temperature from the USB file.
    The box was opened on August 2.
    The pdf of the graph is here.

    September 18 SN0001, 17, 31
    Temperature from sensor for shipment




    Article 7 SN0020

     

    Article 16 SN0012

     This sector had a serious oversight during bonding, resulting in a grounding wire caught below the pad plane.
    As the wire is 600 microns thick, there is no way the flatness and the epoxy bonding, which is only ~100 microns, can be good.
    The project has rejected this sector.

    Article 17 SN0019

     

    Article 18 SN0025

     

    Article 19 SN0015

     

    Article 20 SN0014

     

    Article 20 SN0014

     Docs from LBL assembly

    Article 21 SN0010

     

    Article 21 SN0010

     

    Article 22 SN0017

     

    SN0018 (Article 24)

     

    SN0031 article 31

     SN0031

    article 6 SN0026

     QA from SDU

    SN0026_traveller_scan.pdf 


    article 1 prototype SN1001

     QA files for article 1.

    article 10 SN0028

     

    article 11 SN0024

     

    article 12 SN0029

     
    This sector was found at SDU to have the right GG side mount out of spec, just at the limit where the
    wire would touch the side mount surface.
    The figure shows the measurements from SDU and the LBL CMM for the right-hand GG sidemount.
    The difference (30-50 microns) is understood to be due to the way the measurements are done.

    A copy of the SDU traveler is at this location 

    article 13 SN0030

     

    article 14 SN0021

     

    article 15 SN0011

     

    article 2 SN0009

     QA info from LBL and analysis

    QA from SDU

    LBL spreadsheets etc as attachments
     
     
     
     

    article 3 SN0006

     
    Traveller from SDU production link to Qinghua's blog

    article 4 SN0022

     LBL survey info
    SDU scanned production traveller (uploaded 10/8/2017)

    article 5 SN0027

     
    The sector was rejected due to a separated plane 8/16/2017 -- returned to LBL

    The QA travellers from SDU


    article 8 SN0023

     QA from SDU

    SN0023_traveller_scan.pdf 

    Sn0023_test results.pdf

    article 9 SN0016

     QA from SDU

    SN0016_Traveller_scanned.pdf


    1. Right side of anode wire mount separated from the strongback by about 5 cm -- repaired at SDU
    2. Two tapered pins extruded ~2 mm beyond the side wire mount
    3. 5 pins of LOAB missing/broken -- repaired at SDU
    4. Leakage found on 8 feed-through boards -- re-epoxied
    5. Two fat wires used (75 um BeCu wire used instead of 125 um by mistake); no effect according to simulation by Irakli

    iTPC Quarterly Reports

    On this page I will also keep the slides for the monthly phone conferences with DOE, as well as brief minutes.

     It is organized according to WBS. The lead people for each section are given here.

    • Project Management  --  Flemming
    • Padplane -- Flemming 
    • Mechanics-Strongback -- Flemming
    • Mechanics -MWPC -- Qinghua
    • Integrations & installation R-- Robert
    • Electronics -- Tonko
    • Software - Irakli
    • Other activities -- Flemming

    Flemming is responsible for the reports as such.

    In general the report should be completed by mid-July, mid-October, etc.


    Quarterly reports

    Monthly phone conferences

    iTPC brief reports and presentations

     Page to keep track of various brief notes, documents for iTPC

    2017

    May 24, 2017   Gain measurements on prototype MWPC at SDU - pdf
    May 18, 2017   Notes on assembly issues for article 2, 3 side mounts - word document
    April 28, 2017  Notes on broken pins - article 2  word document

    Aug 22, 2017 Brief update on status at SDU (FV) presentation


    iTPC closeout review May 2, 2019

    The final report and excerpts from the reviewers were received on August 1, 2019.
    The three documents have been attached 
    cover letter
    Review report  
    Excerpts from reviewers

    ---

    The material for the closeout review including talks are all on
    https://indico.bnl.gov/event/5980/

    On this page I added the 3 final documents: close-out report, lessons learned, and the transition to ops.
     --
     The close-out review for STAR has been scheduled for May 2. The notice from DOE is enclosed here.
    This page will be used for the preliminary material; final talks and material will be put on a BNL indico page
    as we did for the previous reviews.

    As the iTPC project is neither an MIE nor a project under the CD process, I believe the requested closeout and transition-to-ops documents
    can be fairly brief.

    -- notes from DOE NP

    As you know, the dates for the STAR iTPC Project Closeout/Transition to Operations Review has been confirmed for May 2, 2019. Attached, please find a list of the reviewer panel and anticipated DOE participants. For your information, the web-conference info is included below for distribution to the panel via the review website.

     

    Join from PC, Mac, Linux, iOS or Android: https://science.zoom.us/j/737668005

     

    And/or join by phone:

     

        +1 646 876 9923 (US Toll) or +1 669 900 6833 (US Toll)

        Meeting ID: 737 668 005

     

    Note: with the ZOOM web-conference, if the equipment you are using is not equipped with a microphone, you will need to use the link to log into the meeting to share/view presentations, as well as call-in via phone for voice participation.

     

    Please draft an agenda and a list of proposed documents to be sent to the review panel prior to the review  (for example, previous review report, response to DOE Review Recommendations, etc.). Please note: background documents should also include a draft Project Closeout Report, draft Transition to Operations Report, as well as a draft lessons learned document.  Once the draft agenda and the list of proposed documents have been prepared please send this input to me for comments – I will collect comments from all applicable individuals within the NP office and will iterate with you on the agendas to ensure all topics are covered. Once the materials have been finalized, please make the background materials, as well as presentations, available to review participants in electronic form as soon as possible but no later than two weeks prior to the review (presentations can be made available at a later date, however, we request no later than 5 days prior to the review).

    The panel members are in the attached document

    -- as previous e-mail the charge for review content is

    Please hold May 2, 2019 for the Project Closeout/Transition to Operations Review of the STAR iTPC. We are happy to schedule the meeting as early as possible, however, this is likely to be one-day starting no earlier than 9:00 am ET with an executive session, 9:30 am ET for presentations to start. For your information, suggested topics for talks and the schedule for the one day close out meeting would be as follows:

     

    -              Project status and deliverables

    -              Project commissioning results 

    -              Cost and Schedule

    -              Management and Safety issues

    -              Transition to operations

    -              Working lunch and executive session to write a few page report

    -              Close out   (early afternoon, between 2-3 pm)

    --
    I should point out that all the charge bullets are what any project from $5M and up is subject to.

    iTPC meetings

    A child page maintains a list of all TPC meetings and of the technical meetings held
    for integration and installation. Some presentations are listed on that page.

    The meeting summary up to October 2015
    was kept in Qinghua's blog page here.


    For meeting related to safety see https://drupal.star.bnl.gov/STAR/subsys/upgr/itpc/esh-reviews

    Electronics Production Readiness Review

     An iTPC production readiness review was held on January 22, 2018

    The agenda was:
    The review will take place tomorrow at 1 pm EDT at 1-224 in the physics department.
    Agenda:
    Introduction - 10 min - F. Videbaek
    Electronics - 40 min - Tonko Ljubicic
    Discussion; questions and answers - 20 min
    Committee discussion - 40 min

    The committee will write a report to be sent to the iTPC group following the meeting.
    The report was received on January 23, 2018.

    iTPC minutes

     The following regular meetings are being organized for the project:
    • Weekly iTPC meeting for all interested, Wednesdays at 10:30 am (9:30 during standard time)
      • The agenda is usually management updates and reports on electronics, SDU, and mechanics
      • Integration and installation are usually discussed primarily at the Monday meetings
    • Meeting info updated on May 15; we are having continuous problems with eZuce and are switching to BlueJeans
    • Phone Dial-in
      +1.408.740.7256 (United States)
      +1.888.240.2560 (US Toll Free)
      +1.408.317.9253 (Alternate number)
      (Global Numbers)

      Meeting ID: 832 810 289
      Moderator Passcode: 6253 

      Room System
      199.48.152.152 or bjn.vc

      Meeting ID: 263 878 370
      Moderator Passcode: 6253

      Description:
      weekly iTPC for status updates
       
       




    • The mechanical meeting series is complete with the final installation of all sectors in October 2018
    • Weekly (or biweekly) meetings of the internal iTPC working group to define and follow up on mechanics and installation. Currently on Mondays at 1:00 pm; meet in room 1006 using the STAR ops standing eZuce reservation. Minutes from earlier meetings are attached (latest update 7/7/2016); the individual meetings are listed below. The meeting is led by Robert Pak, with engineers, technicians, and physicists from BNL and LBL, and minutes are written by him.
      • August 30, 2018 minutes
      • April 19, 2018 minutes
      • April 12, 2018 minutes
      • March 15, 2018 minutes
      • March 8, 2018 minutes
      • February 22, 2018 minutes
      • February 12, 2018 minutes
      • December 18,2017  minutes
      • December 11, 2017  minutes
      • December   6, 2017 minutes
      • November 20, 2017 minutes
      • November 13, 2017 minutes
      • November 6, 2017 minutes
      • October 30, 2017 minutes
      • October 23, 2017 minutes
      • September 25, 2017 minutes 
      • September 11, 2017 minutes
      • August 28, 2017 minutes
      • August 21, 2017 minutes
      • August 14, 2017 minutes
      • August 7, 2017 minutes
      • July 31, 2017 minutes 
      • July 24, 2017 minutes (installation plan, clean room, spreader bar)
      • July 17, 2017, minutes (insertion tool, platform, clean enclosure, clean room)
      • July 10, 2017, minutes (Clean enclosure, AOB)
      • June 26, 2017 minutes
      • June 19, 2017 minutes
      • June 5, 2017 minutes
      • May 22, 2017 minutes
      • May  8, 2017 minutes
      • April 24, 2017 minutes
      • April 10, 2017 minutes
      • March 27, 2017 minutes (sideboards,LBNL activity, Mark update, Rahul installation)
      • March 13, 2017 meeting (LBL progress, items to ship,  testchambers)
      • February 27, 2017 meeting (QA pad planes,anode wire mounts, Update from Mark,Canary Chamber, Shipping containers)
      • February 13, 2017 meeting 
      • January 23, 2017 meeting (QA padplanes, wiremount cleaning, update LBNL)
      • Janunary 9, 2017 meeting (QA padplanes, wiremount cleaning, tooling LBNL, canary tests)
      • December 12, 2016 meeting (padplanes sidemounts canary tests insertion tooling)
      • November 28, 2016 meeting
      • November 14, 2016 meeting
      • October 31, 2016 meeting
      • October 17, 2016 meeting
      • October 3, 2016 meeting (padplane, combs, wiremounts, inventory, shipping containers canary chamber, installation platform kickoff)
      • September 19, 2016 meeting
      • August 29, 2016 meeting (padplane, report central shop, insertion tooling)
      • August 1, 2016 meeting (strongback inspection, combs, assembly inventory)
      • July 18, 2016 meeting    (strongback production, wire mounts, wire combs, insertion tooling)
      • July  5, 2016   meeting  (strongback QA, canary chamber, insertion tool, padplane)

    iTPC run18 progress

    This page contains material presented at the iTPC software meetings or circulated
    within the group, listed in reverse chronological order.
    The meetings are generally held on Mondays and Thursdays at 11 am.
    The BlueJeans information is:

    Phone Dial-in
    +1.408.740.7256 (United States)
    +1.888.240.2560 (US Toll Free)
    +1.408.317.9253 (Alternate number)
    (Global Numbers)

    Meeting ID: 634 285 245
    Moderator Passcode: 6253 

    Room System
    199.48.152.152 or bjn.vc

    Meeting ID: 634 285 245

    April 23, 2018 A few pictures comparing iTPC and TPC by Yuri

    April 18, 2018 Update on Gain on iTPC  (Tonko)

    April 16, 2018 Further analysis of clusters and distribution (Flemming)
    File on Monday meeting https://drupal.star.bnl.gov/STAR/event/2018/04/16/itpc-software-monday

    March 29, 2018
    Analysis of 86040. Charge distribution plots

    Entry April 2, 2018 / updated 4/8/18

    ADC vs. pad for different rows (FV)

    Yuri's update after the gating grid leak fix

    Entry March 29, 2018

    Charge distribution per row for the inner sector. The profile has been fitted with a Landau distribution.

    Similar plot for the outer sector

    March 2018: Tonko's analysis of the iTPC pulser run

    iTPC NIM paper page

    This page is meant to collect reference links, suggested plots, tables, etc.

    - From Zhangbu

    Last week, Robert brought to my attention that the iTPC is one of the greatest accomplishments STAR has made in the last decade, and
    now that the BESII data have been taken and analyses are ongoing,
    we should write a NIMA paper to document:
    mechanical structure and pad/electronics layout,

    operation and performance,

    online/offline calibration and physics technical performance.

     

    It is also important to document this for future physics paper references and
    to serve as a historical record before everyone moves on and forgets all the details.

    From Robert -
    outline based on DNP talk  NIM_outline

    Reference documents
    The technical design report is available as a STAR note SN0644.

    The iTPC closeout report with KPP and performance plots

    The Shandong group wrote two NIM performance papers
    1) F. Shen et al., MWPC prototyping and performance test for the STAR inner TPC upgrade, Nucl. Instrum. Meth. A 896 (2018) 90.

    2) X. Wang et al., Design and implementation of wire tension measurement system for MWPCs used in the STAR iTPC upgrade, Nucl. Instrum. Meth. A 859 (2017) 90–94.


    EEMC

     

    Archived Endcap web pages for years 1998 -2008 at MIT 'locker'

     

    New place for Endcap 'unprotected' analysis

    2007 run, hardware changes

     

     

     April 4, 5 pm, no beam: 

    We would like the ETOW TCD phase set from 12 -> 19 and ESMD from 55 -> 65; Jeff implemented it.

     Will's new table (to be implemented)

            Towers
                     TCD      box dec     box hex
               1.     19        18          0x12
               2.     19         8          0x8
               3.     19        86          0x56
               4.     19        80          0x50
               5.     19        43          0x2B
               6.     19        31          0x1f

    For EEMC MAPMT -- rather than change all 48 box configs we could
    start out by just adding 10 ns to the phase: 55 -> 65.

    New tower config files are in /home/online/mini/tcl/; you'll have to change the symbolic links:
    
           tower-1-current_beam_config.dat ->
                       03.01.07/tower-1-current_beam_config.dat
    Jim: runs after 8094053 should have the new TCD phase and box configuration
    ----
    The night shift took a 300k-event calibration run with ETOW and ESMD: run 8095061
    ----
    There is a shift log entry with the run numbers and conditions.
    I shifted only the TCD phase and used the new FEE configs installed
    yesterday. This should show a fall-off of ~10-20% on each side in the towers
    and be flat in the MAPMTs.

    run      ETOW tcd  ESMD tcd  events  notes
    8095096  19        65        30k     std config; for 1st daq file ETOW head=off
    8095097  1         55        30k
    8095098  7         60        30k
    8095099  13        60        30k
    8095100  25        70        30k
    8095101  31        75        30k
    8095102  19        65        100k    std config
    8095103  19        65        0k      std config, died
    8095104  19        65        28k     std config

    ------ April 12----
    I've made these changes in 2007Production2, 2006ProductionMinBias &
    2007EMC_background

    a. UPC ps --> 3
    b. bht1 thresh --> 16
    c. raised all triggers to production
    d. changed all triggers based on zdc to different production ids

    -Jeff

    2008 run preparation

    Timing curves for ETOW & ESMD

    Timing Scan 11/23/2007


    The EEMC timing scan was taken on 11/23/2007.

    To generate the curves, log into your favorite rcas node and

    $ cvs co -D 11/27/2007 StRoot/StEEmcPool/StEEmcTimingMaker

    and follow the instructions in the HOWTO

    $ cat StRoot/StEEmcPool/StEEmcTimingMaker/HOWTO.l2ped

    (tried to post code, but drupal wouldn't accept it...)

    Will's notes on crate configuration:

    FYI here are some (old) notes on how I set up config files
    for EEMC tower timing scans that were taken a few days back (BTW,
    these configs are still loaded).

    1) Old running settings -- config files are on eemc-sc in directory
    /home/online/mini/tcl with symbolic links:
    tower-"n"-current_beam_config.dat ->
    dir/tower-"n"-current_beam_config.dat
    We had (I believe) been using directory 04.04.07 for the run with
    ETOW TCD phase of 19; crate fifo (RHIC tic) 0x13. Those crate delay
    settings are:
    1 -> 0x12; 2 -> 0x8; 3 -> 0x56; 4 -> 0x50; 5 -> 0x2b; 6 -> 0x1f

    2) New scan settings: for timing scans using only the TCD phase
    we usually create special config files with appropriate box delay and
    TCD values.

    crate   nominal (ns), run 7   scan setting (eff ns)   fifo (RHIC tic)
    1       0x12 (18)             0x5a (-17)              0x14 (e.g., 107-90)
    2       0x8  ( 8)             0x50 (-27)              0x14 (e.g., 107-80)
    3       0x56 (86)             0x42 ( 66)              0x13
    4       0x50 (80)             0x3c ( 60)              0x13
    5       0x2b (43)             0x17 ( 23)              0x13
    6       0x1f (31)             0xb  ( 11)              0x13

    Here we have shifted crates "earlier" in time: 1 & 2 by 35 ns and the
    rest by 20 ns. Starting the TCD scan at 5 should then start things
    ~34 (49) ns earlier than nominal, with ~70 ns for the scan to run
    (suggested TCD: 5, 10, 15, 25, 35, 45, 55, 65, 75, 70, 60, 50, 40, 30, 20, 10).

    Run      ETOW tcd phase delay  N events
    8327013  5                     136k
    8327014  15                    20k
    8327015  25                    20k
    8327016  35                    20k
    8327017  45                    20k
    8327018  55                    20k
    8327019  65                    20k
    8327020  75                    20k
    8327021  70                    20k
    8327022  60                    20k
    8327023  50                    20k
    8327024  40                    20k
    8327025  30                    20k
    8327026  20                    20k
    8327027  10                    20k

    Figure 1 -- Integral from ped+25 to ped+75, normalized to the total number of events, versus TCD phase delay.

    Channel-by-channel plots are attached below.

    Timing Scan 11/27/2007 (MuDst)

     

    Posted results are from MuDst data of 17 runs with 50K events in each run.

    1. ShiftLog Entry

    2. Run Summary

       Run      ETOW & ESMD phase (ns)   # of Events (K)
       8331091             5                   200
          1092            15                   200
          1094            25                   200
          1097            35                   200
          1098            45                   200
          1102            65                   200
          1103            75                   200
          1104            70                   200
          1105            60                   200
          1106            50                   200
          1107            40                   200
          1108            35                   200
          1109            30                   200
          1110            25                   200
          1111            20                   200
          2001            10                   109

    3. Crate Timing Curves

     

    Fig. 1 Tower Crate. It agrees with results from L2 Data

    Fig. 2 Crate 64-67

    Fig. 3 Crate 68-71

    Fig. 4 Crate 72-75

    Fig. 5 Crate 76-79

    Fig. 6 Crate 80-83

    Fig. 7 Crate 84-87

    Fig. 8 Crate 88-91

    Fig. 9 Crate 92-95

    Fig. 10 Crate 96-99

    Fig. 11 Crate 100-103

    Fig. 12 Crate 104-107

    Fig. 13 Crate 108-111

    4. PDF Files of Tower and MAPMT Channel Timing Curves

       tower-crate-1.pdf  tower-crate-2.pdf  tower-crate-3.pdf  tower-crate-4.pdf  tower-crate-5.pdf  tower-crate-6.pdf

       mapmt-crate-64.pdf  mapmt-crate-65.pdf  mapmt-crate-66.pdf  mapmt-crate-67.pdf  mapmt-crate-68.pdf  mapmt-crate-69.pdf
       mapmt-crate-70.pdf  mapmt-crate-71.pdf  mapmt-crate-72.pdf  mapmt-crate-73.pdf  mapmt-crate-74.pdf  mapmt-crate-75.pdf
       mapmt-crate-76.pdf  mapmt-crate-77.pdf  mapmt-crate-78.pdf  mapmt-crate-79.pdf  mapmt-crate-80.pdf  mapmt-crate-81.pdf
       mapmt-crate-82.pdf  mapmt-crate-83.pdf  mapmt-crate-84.pdf  mapmt-crate-85.pdf  mapmt-crate-86.pdf  mapmt-crate-87.pdf
       mapmt-crate-88.pdf  mapmt-crate-89.pdf  mapmt-crate-90.pdf  mapmt-crate-91.pdf  mapmt-crate-92.pdf  mapmt-crate-93.pdf
       mapmt-crate-94.pdf  mapmt-crate-95.pdf  mapmt-crate-96.pdf  mapmt-crate-97.pdf  mapmt-crate-98.pdf  mapmt-crate-99.pdf
       mapmt-crate-100.pdf  mapmt-crate-101.pdf  mapmt-crate-102.pdf  mapmt-crate-103.pdf  mapmt-crate-104.pdf  mapmt-crate-105.pdf
       mapmt-crate-106.pdf  mapmt-crate-107.pdf  mapmt-crate-108.pdf  mapmt-crate-109.pdf  mapmt-crate-110.pdf  mapmt-crate-111.pdf
       

     

    Timing Scan 11/29/2007


    posted 11/30/2007 Shiftlog entry
    run      btow  etow
    8333113   12     5
        115   17    15
        116   22    25
        117   27    35
        118   32    45
        120   37    55
        121   42    65
        123   52    70
        124   57    60
        125   62    50
        126   36    40
        127   36    30
        128   36    20
        129   36    10

    Figure 1 -- Tower crates


    Email from Will setting tower timing for this year
         Below in the forwarded message you will see a link to the 
    analysis (thanks Jason) of the more recent (much better statistics) 
    EEMC tower timing scan. Based on these I set the timing for this year
    as follows (from my shift log entry)
    
                              *************
    
    > 11/30/07
    > 19:49
    > 
    > General
    > new timing configuration files loaded for EEMC towers
    > crate 1 delay 0x12 (no change from last year nominal)
    > crate 2 delay 0x8 (no change)
    > crate 3 delay 0x57 ( + 1 ns)
    > crate 4 delay 0x52 (+ 2 ns)
    > crate 5 delay 0x2b (no change)
    > crate 6 delay 0x1f (no change)
    > change ETOW phase to 21 (saved in all configs)
    > above phase represents overall shift of + 2 ns
    

    makeL2TimingFiles.pl

    plotEEmcL2Timing.C


    TFile *file = 0;
    TChain *chain = 0;

    #include <vector>

    #include "/afs/rhic.bnl.gov/star/packages/DEV/StRoot/StEEmcUtil/EEfeeRaw/EEdims.h"

    // summary TTree branches created by StEEmcTimingMaker
    Int_t mRunNumber;
    Float_t mTowerDelay;
    Float_t mMapmtDelay;
    Int_t mTotalYield;
    Int_t mTowerCrateYield[MaxTwCrates];
    Int_t mMapmtCrateYield[MaxMapmtCrates];
    Int_t mTowerChanYield[MaxTwCrates][MaxTwCrateCh];
    Int_t mMapmtChanYield[MaxMapmtCrates][MaxMapmtCrateCh];
    Int_t mTowerMin;
    Int_t mTowerMax;
    Int_t mMapmtMin;
    Int_t mMapmtMax;

    // vectors to hold TGraphs for tower and mapmt crates
    Int_t npoints = 0;

    TGraphErrors *towerCrateCurves[MaxTwCrates];
    TGraphErrors *mapmtCrateCurves[MaxMapmtCrates];

    TGraphErrors *towerChanCurves[MaxTwCrates][MaxTwCrateCh];
    TGraphErrors *mapmtChanCurves[MaxMapmtCrates][MaxMapmtCrateCh];

    // enables printing of output files (gif) for documentation
    // figures will appear in the same subdirectory as the input files,
    // specified below.
    //Bool_t doprint = true;
    Bool_t doprint = false;

    void plotEEmcL2Timing( const Char_t *input_dir="timing_files/")
    {

    // chain files
    chainFiles(input_dir);

    // setup branches
    setBranches(input_dir);

    // get total number of points
    Long64_t nruns = chain -> GetEntries();
    npoints=(Int_t)nruns;

    // setup the graphs for each crate and each channel
    initGraphs();

    // loop over all runs
    for ( Long64_t i = 0; i < nruns; i++ )
    {
    chain->GetEntry(i);

    fillCrates((int)i);
    fillChannels((int)i);

    }

    // draw timing scan curves for tower crates
    drawCrates();
    // for ( Int_t ii=0;ii<MaxTwCrates;ii++ ) towerChannels(ii);
    // for ( Int_t ii=0;ii<MaxMapmtCrates;ii++ ) mapmtChannels(ii);

    std::cout << "--------------------------------------------------------" << std::endl;
    std::cout << "to view timing curves for any crate" << std::endl;
    std::cout << std::endl;
    std::cout << "towerChannels(icrate) -- icrate = 0-5 for tower crates 1-6"<<std::endl;
    // std::cout << "mapmtChannels(icrate) -- icrate = 0-47 for mapmt crates 1-48"<<std::endl;
    std::cout << "print() -- make gif/pdf files for all crates and channels"<<std::endl;

    }
    // ----------------------------------------------------------------------------
    void print()
    {
    doprint=true;
    drawCrates();
    for ( Int_t ii=0;ii<MaxTwCrates;ii++ ) towerChannels(ii);
    //for ( Int_t ii=0;ii<MaxMapmtCrates;ii++ ) drawMapmt(ii);
    }

    // ----------------------------------------------------------------------------
    void drawCrates()
    {

    // tower crates first
    TCanvas *towers=new TCanvas("towers","towers",500,400);
    const Char_t *opt[]={"ALP","LP","LP","LP","LP","LP"};

    TLegend *legend=new TLegend(0.125,0.6,0.325,0.85);

    Float_t ymax=0.;
    for ( Int_t icr=0;icr<MaxTwCrates;icr++ )
    {
    towerCrateCurves[icr]->Sort();
    towerCrateCurves[icr]->Draw(opt[icr]);
    TString crname="tw crate ";crname+=icr+1;
    legend->AddEntry( towerCrateCurves[icr], crname, "lp" );
    if ( towerCrateCurves[icr]->GetYaxis()->GetXmax() > ymax )
    ymax=towerCrateCurves[icr]->GetYaxis()->GetXmax();
    }
    towerCrateCurves[0]->SetTitle("EEMC Tower Crate Timing Curves");
    towerCrateCurves[0]->GetXaxis()->SetTitle("TCD phase[ns]");
    TString ytitle=Form("Integral [ped+%i,ped+%i] / N_{events}",mTowerMin,mTowerMax);
    towerCrateCurves[0]->GetYaxis()->SetTitle(ytitle);
    towerCrateCurves[0]->GetYaxis()->SetRangeUser(0.,ymax);
    legend->Draw();

    if(doprint)towers->Print("tower_crates.gif");

    }
    // ----------------------------------------------------------------------------
    void mapmtChannels( Int_t crate )
    {

    static const Int_t stride=16;
    Int_t crateid = MinMapmtCrateID+crate;

    TString fname="mapmt-crate-";fname+=crate+MinMapmtCrateID;fname+=".ps";
    TCanvas *canvas=new TCanvas("canvas","canvas",850/2,1100/2);
    canvas->Divide(1,2);
    Int_t icanvas=0;

    for ( Int_t ich=0;ich<192;ich+=stride )
    {

    canvas->cd(1+icanvas%2);
    icanvas++;

    TString pname="crate";
    pname+=crateid;
    pname+="_ch";
    pname+=ich;
    pname+="-";
    pname+=ich+stride-1;

    const Char_t *opts[]={"ALP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP"};

    // normalize
    Float_t ymax=0.0;
    Double_t sum[stride];for ( Int_t jj=0;jj<stride;jj++ )sum[jj]=0.;
    Double_t max=0.;
    for ( Int_t jch=0;jch<stride;jch++ ) // loop over channels in this one graph
    {
    Int_t index=ich+jch;
    Double_t *Y=mapmtChanCurves[crate][index]->GetY();
    for ( Int_t ipoint=0;ipoint<npoints;ipoint++ ) {
    if ( Y[ipoint]>ymax ) ymax=Y[ipoint];
    sum[jch]+=Y[ipoint];
    }
    if ( sum[jch]>max ) max=sum[jch];
    }
    if ( max <= 0. ) continue; // meh?

    TLegend *legend=new TLegend(0.55,0.11,0.85,0.525);
    for ( Int_t jch=0;jch<stride;jch++ )
    {
    Int_t index=ich+jch;
    mapmtChanCurves[crate][index]->SetMarkerSize(0.75);
    // offset X axis of each of these
    Double_t *X=mapmtChanCurves[crate][index]->GetX();
    Double_t *Y=mapmtChanCurves[crate][index]->GetY();
    Double_t *EY=mapmtChanCurves[crate][index]->GetEY();
    if ( sum[jch]<= 0. ) continue;
    // std::cout<<"before"<<std::endl;
    for ( Int_t ip=0;ip<npoints;ip++ ){
    Float_t shift = 0.5+ ((float)jch) - ((float)stride)/2.0;
    Double_t yy=Y[ip];
    X[ip]-= 0.1*shift;
    Y[ip]*=max/sum[jch];
    EY[ip]*=max/sum[jch];
    // std::cout << "ip="<<ip<<" y="<<yy<<" y'="<<Y[ip]<<std::endl;
    }
    mapmtChanCurves[crate][index]->Sort();
    if ( !jch )
    mapmtChanCurves[crate][index]->GetXaxis()->SetRangeUser(0.,ymax*1.05);
    mapmtChanCurves[crate][index]->SetMarkerColor(38+jch);
    mapmtChanCurves[crate][index]->SetLineColor(38+jch);
    mapmtChanCurves[crate][index]->SetMinimum(0.);
    mapmtChanCurves[crate][index]->Draw(opts[jch]);

    TString label="crate ";label+=crate+1;label+=" chan ";label+=index;
    legend->AddEntry(mapmtChanCurves[crate][index],label,"lp");

    }
    legend->Draw();
    canvas->Update();

    // if(doprint)c->Print(pname+".gif");
    if ( !(icanvas%2) ){
    canvas->Print(fname+"(");
    canvas->Clear();
    canvas->Divide(1,2);
    }
    // if(doprint)c->Print(pname+".gif");

    }
    canvas->Print(fname+")");
    gSystem->Exec(TString("ps2pdf ")+fname);

    }
    // ----------------------------------------------------------------------------
    void towerChannels( Int_t crate )
    {

    static const Int_t stride=12;

    TString fname="tower-crate-";fname+=crate+1;fname+=".ps";
    TCanvas *canvas=0;
    canvas = new TCanvas("canvas","canvas",850/2,1100/2);

    canvas->Divide(1,2);
    Int_t icanvas=0;

    for ( Int_t ich=0;ich<120;ich+=stride )
    {

    canvas->cd(1+icanvas%2);
    icanvas++;

    // TString aname="crate";aname+=crate;aname+=" channels ";aname+=ich;aname+=" to ";aname+=ich+stride-1;
    // TCanvas *c = new TCanvas(aname,aname,400,300);

    TString pname="crate";
    pname+=crate+1;
    pname+="_ch";
    pname+=ich;
    pname+="-";
    pname+=ich+stride-1;

    const Char_t *opts[]={"ALP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP","LP"};

    // normalize
    Double_t sum[stride];for ( Int_t jj=0;jj<stride;jj++ )sum[jj]=0.;
    Double_t max=0.;
    for ( Int_t jch=0;jch<stride;jch++ ) // loop over channels in this one graph
    {

    Int_t index=ich+jch;
    Double_t *Y=towerChanCurves[crate][index]->GetY();
    for ( Int_t ipoint=0;ipoint<npoints;ipoint++ ) sum[jch]+=Y[ipoint];
    if ( sum[jch]>max ) max=sum[jch];
    }
    if ( max <= 0. ) continue; // meh?

    TLegend *legend=new TLegend(0.125,0.15,0.325,0.45);
    for ( Int_t jch=0;jch<stride;jch++ )
    {

    Int_t index=ich+jch;
    towerChanCurves[crate][index]->SetMarkerSize(0.75);
    // offset X axis of each of these
    Double_t *X=towerChanCurves[crate][index]->GetX();
    Double_t *Y=towerChanCurves[crate][index]->GetY();
    Double_t *EY=towerChanCurves[crate][index]->GetEY();
    if ( sum[jch]<= 0. ) continue;
    // std::cout<<"before"<<std::endl;
    for ( Int_t ip=0;ip<npoints;ip++ ){
    Float_t shift = 0.5+ ((float)jch) - ((float)stride)/2.0;
    Double_t yy=Y[ip];
    X[ip]-= 0.1*shift;
    Y[ip]*=max/sum[jch];
    EY[ip]*=max/sum[jch];
    // std::cout << "ip="<<ip<<" y="<<yy<<" y'="<<Y[ip]<<std::endl;
    }

    towerChanCurves[crate][index]->Sort();
    towerChanCurves[crate][index]->SetMinimum(0.);
    towerChanCurves[crate][index]->SetMarkerColor(38+jch);
    towerChanCurves[crate][index]->SetLineColor(38+jch);
    towerChanCurves[crate][index]->Draw(opts[jch]);

    TString label="crate ";label+=crate+1;label+=" chan ";label+=index;
    legend->AddEntry(towerChanCurves[crate][index],label,"lp");

    }
    legend->Draw();
    canvas->Update();

    // if(doprint)c->Print(pname+".gif");
    if ( !(icanvas%2) ){
    if ( doprint ) canvas->Print(fname+"(");
    canvas->Clear();
    canvas->Divide(1,2);
    }

    }
    if ( doprint ) {
    canvas->Print(fname+")");
    gSystem->Exec(TString("ps2pdf ")+fname);
    }

    }

    // ----------------------------------------------------------------------------
    void fillCrates(Int_t ipoint)
    {
    #if 1
    // loop over tower crates
    for ( Int_t icr=0;icr<MaxTwCrates;icr++ )
    {
    Float_t yield = (Float_t)mTowerCrateYield[icr];
    Float_t total = (Float_t)mTotalYield;
    if ( total > 10.0 ) {
    Float_t eyield = TMath::Sqrt(yield);
    Float_t etotal = TMath::Sqrt(total);
    Float_t r = yield / total;
    Float_t e1 = (yield>0)? eyield/yield : 0.0;
    Float_t e2 = etotal/total;
    Float_t er = r * TMath::Sqrt( e1*e1 + e2*e2 );
    towerCrateCurves[icr]->SetPoint(ipoint, mTowerDelay, r );
    towerCrateCurves[icr]->SetPointError( ipoint, 0., er );
    }
    else {
    towerCrateCurves[icr]->SetPoint(ipoint, mTowerDelay, -1.0 );
    towerCrateCurves[icr]->SetPointError( ipoint, 0., 0. );
    }
    }
    // loop over mapmt crates
    for ( Int_t icr=0;icr<MaxMapmtCrates;icr++ )
    {
    Float_t yield = (Float_t)mMapmtCrateYield[icr];
    Float_t total = (Float_t)mTotalYield;
    if ( total > 10.0 ) {
    Float_t eyield = TMath::Sqrt(yield);
    Float_t etotal = TMath::Sqrt(total);
    Float_t r = yield / total;
    Float_t e1 = (yield>0)? eyield/yield : 0.0;
    Float_t e2 = etotal/total;
    Float_t er = r * TMath::Sqrt( e1*e1 + e2*e2 );
    mapmtCrateCurves[icr]->SetPoint(ipoint, mMapmtDelay, r );
    mapmtCrateCurves[icr]->SetPointError( ipoint, 0., er );
    }
    else {
    mapmtCrateCurves[icr]->SetPoint(ipoint, mMapmtDelay, -1. );
    mapmtCrateCurves[icr]->SetPointError( ipoint, 0., 0. );
    }
    }
    #endif
    }
    // ----------------------------------------------------------------------------
    void fillChannels(Int_t ipoint)
    {

    #if 1
    // loop over tower crates
    for ( Int_t icr=0;icr<MaxTwCrates;icr++ )
    {
    for ( Int_t ich=0;ich<MaxTwCrateCh;ich++ )
    {

    Float_t yield = (Float_t)mTowerChanYield[icr][ich];
    Float_t total = (Float_t)mTotalYield;
    if ( total > 10.0 ) {
    Float_t eyield = TMath::Sqrt(yield);
    Float_t etotal = TMath::Sqrt(total);
    Float_t r = yield / total;
    Float_t e1 = (yield>0)? eyield/yield : 0.0;
    Float_t e2 = etotal/total;
    Float_t er = r * TMath::Sqrt( e1*e1 + e2*e2 );
    towerChanCurves[icr][ich]->SetPoint(ipoint, mTowerDelay, r );
    towerChanCurves[icr][ich]->SetPointError( ipoint, 0., er );
    }
    else {
    towerChanCurves[icr][ich]->SetPoint(ipoint, mTowerDelay, -1.0 );
    towerChanCurves[icr][ich]->SetPointError( ipoint, 0., 0. );
    }
    }
    }
    #endif
    #if 1
    // loop over mapmt crates
    for ( Int_t icr=0;icr<MaxMapmtCrates;icr++ )
    {
    for ( Int_t ich=0;ich<MaxMapmtCrateCh;ich++ )
    {

    Float_t yield = (Float_t)mMapmtChanYield[icr][ich];
    Float_t total = (Float_t)mTotalYield;
    if ( total > 10.0 ) {
    Float_t eyield = TMath::Sqrt(yield);
    Float_t etotal = TMath::Sqrt(total);
    Float_t r = yield / total;
    Float_t e1 = (yield>0)? eyield/yield : 0.0;
    Float_t e2 = etotal/total;
    Float_t er = r * TMath::Sqrt( e1*e1 + e2*e2 );
    mapmtChanCurves[icr][ich]->SetPoint(ipoint, mMapmtDelay, r );
    mapmtChanCurves[icr][ich]->SetPointError( ipoint, 0., er );
    }
    else {
    mapmtChanCurves[icr][ich]->SetPoint(ipoint, mMapmtDelay, -1.0 );
    mapmtChanCurves[icr][ich]->SetPointError( ipoint, 0., 0. );
    }
    }
    }
    #endif
    }
    // ----------------------------------------------------------------------------
    void initGraphs()
    {

    for ( Int_t i=0;i<MaxTwCrates;i++ ){
    towerCrateCurves[i] = new TGraphErrors(npoints);
    towerCrateCurves[i]->SetMarkerStyle(20+i);
    towerCrateCurves[i]->SetMarkerColor(i+1);
    towerCrateCurves[i]->SetLineColor(i+1);

    for ( Int_t j=0;j<MaxTwCrateCh;j++ )
    towerChanCurves[i][j]=(TGraphErrors*)towerCrateCurves[i]->Clone();

    }
    for ( Int_t i=0;i<MaxMapmtCrates;i++ ){
    mapmtCrateCurves[i]= new TGraphErrors(npoints);
    mapmtCrateCurves[i]->SetMarkerStyle(20+i%4);
    mapmtCrateCurves[i]->SetMarkerColor(1+i%4);
    mapmtCrateCurves[i]->SetLineColor(1+i%4);

    for ( Int_t j=0;j<MaxMapmtCrateCh;j++ )
    mapmtChanCurves[i][j]=(TGraphErrors*)mapmtCrateCurves[i]->Clone();

    }

    }
    // ----------------------------------------------------------------------------
    void chainFiles(const Char_t *path)
    {
    chain=new TChain("timing","Timing summary");

    TFile *tfile = 0;
    std::cout << "chaining files in " << path << std::endl;
    TSystemDirectory *dir = new TSystemDirectory("dir",path);

    TIter next( dir->GetListOfFiles() );
    TObject *file = 0;
    while ( (file = (TObject*)next()) )
    {
    TString name=file->GetName();

    // sum the event counter histogram
    if ( name.Contains(".root") ) {
    // open the TFile and
    std::cout << " + " << name << std::endl;
    // tfile = TFile::Open(name);
    chain->Add(name);
    }
    }

    }
    // ----------------------------------------------------------------------------
    void setBranches(const Char_t *dir)
    {

    chain->SetBranchAddress("mRunNumber", &mRunNumber );
    chain->SetBranchAddress("mTowerDelay", &mTowerDelay );
    chain->SetBranchAddress("mMapmtDelay", &mMapmtDelay );

    chain->SetBranchAddress("mTotalYield", &mTotalYield );
    chain->SetBranchAddress("mTowerCrateYield", &mTowerCrateYield );
    chain->SetBranchAddress("mMapmtCrateYield", &mMapmtCrateYield );
    chain->SetBranchAddress("mTowerChanYield", &mTowerChanYield );
    chain->SetBranchAddress("mMapmtChanYield", &mMapmtChanYield );

    chain->SetBranchAddress("mTowerMin",&mTowerMin);
    chain->SetBranchAddress("mTowerMax",&mTowerMax);
    chain->SetBranchAddress("mMapmtMin",&mMapmtMin);
    chain->SetBranchAddress("mMapmtMax",&mMapmtMax);

    }

    runEEmcL2Timing.C

    Calibrations

    New EEMC Calibrations Page

    2007 EEMC Tower Gains

    Run 7 EEMC Tower Gains - Using Slopes


    Goals: Use "inclusive slopes" from fast-detector-only, min-bias runs to determine relative (eta-dependent) gains for all EEMC towers for the 2007 run.  More specifically, analyze ~30k events from run 8095104 (thanks to Jan [production] and Jason [fitting]!) and fit slopes to all ungated tower spectra, in order to:

    1. check if new / replaced PMT's (only 2 for this year) need significant HV adjustment
    2. make sure tubes with new / replaced bases are working properly
    3. search for towers with unusual spectra, anomalous count rates, or slopes that are far from the norm for that eta bin
    4. compare individual slopes to 2006 absolute gains (from mips) for each eta bin, to test robustness and stability of gain determinations
    5. for tower gains far from ideal (as determined with slopes and/or mips) consider adjusting HV
    6. look for anything else that seems out of whack!

    Definitions:

    For the gain calibration of towers, we will use

    • x = channel number = ADC - ped
    • E = full e.m. energy (GeV) = (deposited energy / sampling fraction) for e.m. particles
    • G = absolute gain (channels / GeV) including sampling fraction

    So:   E = x / G

    For slopes, raw spectra (y vs. x) are fit to:   y = A e^(-bx)

    Thus, one expects that for a given eta bin:   G  ~  1 / b
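
    As an illustration, here is a minimal ROOT sketch (not part of the original analysis; the histogram pointer and fit window are placeholders) of extracting the slope b from a pedestal-subtracted tower spectrum and using 1/b as a relative gain:

    #include "TH1F.h"
    #include "TF1.h"

    // Fit y = A*exp(-b*x) to a pedestal-subtracted ADC spectrum and return 1/b,
    // which is proportional to the gain within a given eta bin (G ~ 1/b).
    // "hAdc" and the fit window [25,75] are placeholders, not values from this study.
    double relativeGain(TH1F* hAdc, double xmin = 25., double xmax = 75.)
    {
        TF1 expoFit("expoFit", "[0]*exp(-[1]*x)", xmin, xmax);
        expoFit.SetParameters(hAdc->GetMaximum(), 0.05);  // rough starting values
        hAdc->Fit(&expoFit, "RQ0");                       // R = use range, Q = quiet, 0 = no draw
        double b = expoFit.GetParameter(1);               // fitted slope parameter b
        return (b > 0.) ? 1.0 / b : 0.;                   // relative gain ~ 1/b
    }

    In the actual analysis the fitted slopes are then compared within each eta bin, as described in the Results below.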


    Results:

    1.  Two new tubes are fine!   Slopes of the recently replaced PMTs 04TB12 and 12TE06 are very close to those of neighboring towers at the same eta, or those of the same subsector in a neighboring sector:

    towerID integral slope error
    03TB12 2003 -0.04144 0.00181
    04TA12 2081 -0.04527 0.00177
    04TB12 2022 -0.04400 0.00173
    04TC12 2195 -0.03825 0.00170
    05TB12 2056 -0.04465 0.00177

    towerID integral slope error
    11TE06 2595 -0.04157 0.00162
    12TD06 1965 -0.05977 0.00185
    12TE06 2535 -0.04516 0.00165
    01TA06 2124 -0.05230 0.00179
    01TE06 2070 -0.05342 0.00190

    More global comparisons to all the tower slopes in the same eta bin are given below.   For both tubes, the gain is 5-10% lower than average, but well within useful range.


    2.  Change of base (same PMT) has little effect on tower gains.  This has been confirmed for the six bases that were changed (03TA09, 06TB04, 10TE01, 12TA01, 12TC11, 12TE06), using the same comparisons to neighboring towers used in step 1 above.


    3.  For all 720 towers, comparison of 2007 slopes to 2006 mip-based absolute gains indicates about 6 problem towers (most "well known")

    • 06TA03 - no useful mip results, fitted slope was positive!  Spectra never make much sense.
    • 08TC05 - didn't work last year, still not working!  Spectrum shows only a pedestal.
    • 07TC05 - no gain determined in 2005 or 2006.  Has the largest slope of all towers, probably useless.
    • 06TD11 - each year, everything gets replaced; each year it continues to be 'flaky,' sometimes working, sometimes not.
    • 12TD01 - seemed okay last year, now has a very small slope.  Maybe the PMT is dying fast?
    • 10TA11 - worked fine last year, recently died.  HV off, only a pedestal.

    In addition, 09TE01 seems to be working now, though it failed the mip gain analysis last year, and hasn't been 'fixed.'

    All of these cases are easily seen in the following correlation plot:


    4.  We see clear correlations, within each eta bin, between the new (2007) slope analysis and last year's mip analysis -> gains are stable, methods are robust!   On the vertical scale, the solid magenta line = ideal gain for that bin, dashed = +/- 15%.

    eta bin   correlation plot   comments
    1         .gif               one high gain tube (10TA01), reasonable correlation, no obvious problems
    2         .gif               looks okay, all within +/- 20% of ideal gains
    3         .gif               pretty ratty - several towers ~15% off 'correlation' curve
    4         .gif               one very low gain tube (01TA04), one with very small slope (02TD04), otherwise all okay
    5         .gif               a couple of high-gain towers, correlation is very good
    6         .gif               one low gain, a few high-gain, but good correlation.  New PMT 12TE06 looks reasonable
    7         .gif               overall gains a bit high compared to ideal, no real problems
    8         .gif               no problems
    9         .gif               no problems
    10        .gif               strong correlations, tight clustering in both gain sets
    11        .gif               odd shape, but okay.  Only problem (lower left corner) is 06TD11
    12        .gif               everything a bit noisier, gains ~7% high overall.  New PMT 04TB12 fits right in!


    5.  The number of 'gain outliers' is quite small, and the deviation of the average from ideal is always < 10%.  Because the endcap towers are not used for trigger decisions, there is no obvious advantage in making HV adjustments to a large number of towers.


    Conclusion:  Endcap towers are in good shape!  A very small number (~6 / 720) are not working well, but for these few, HV adjustment would not solve the problem.  No strong argument for changing HV on any particular tube at this point.


    N.B.   For each eta bin, one can calculate the ratio   R = G / (1/b) = G·b   as a 'conversion' of slope data to absolute gains.   Using the 2006 mip calibration and the 2007 slopes, one gets a fairly smooth curve, though something seems to be happening around eta bin 8.

    Calculating EEMC pedestal and status tables

     
    These instructions are for generating EEMC pedestals and status tables from raw ADC distributions extracted from raw DAQ files.  A useful overview of the "philosophy" behind the code described below can be found here.  The instructions for producing histograms from the raw DAQ files are given here.

    The instructions below are based entirely on code which exists in CVS and not any private directories.
     
    1) Check out the relevant code from the cvs repository to your working directory:
     
    % cvs co StRoot/StEEmcPool/StEEmcDAQ2Ped
    
    % cvs co StRoot/StEEmcPool/StEEmcPedFit
    
    % cvs co StRoot/StEEmcPool/StEEmcStatus
     
    and compile
     
    % cons
     
    2) Copy the necessary macros and scripts to your working directory:
     
    % cp StRoot/StEEmcPool/StEEmcDAQ2Ped/macros/plDAQ2Ped.C ./
    
    % cp StRoot/StEEmcPool/StEEmcPedFit/macros/fitAdc4Ped.C ./
    
    % cp StRoot/StEEmcPool/StEEmcPedFit/macros/procDAQ2Ped.sh ./
    
    % cp StRoot/StEEmcPool/StEEmcStatus/macros/pedStat.C ./
    
    % cp StRoot/StEEmcPool/StEEmcStatus/macros/procPedStat.sh ./
     
    3) Produce 1D ADC histograms for every EEMC channel from the output of the daq reader (described elsewhere) using StEEmcDAQ2Ped and fit these distributions to produce pedestal tables using StEEmcPedFit.  The script will run over histogram files for many runs stored in a single directory:
     
    % ./procDAQ2Ped.sh
     
    4) Analyze the ADC distributions for every EEMC channel and determine its status and fail bits (i.e., status table info) using StEEmcStatus and write the results to the StatFiles/ directory.  The script will loop over all runs given in an ascii file (runList) with content like
     
    R13071063
    R13071064
     
    Then just execute
     
    % ./procPedStat.sh
     
    5) Produce status tables for DB upload (one for each sector) :
     
    % cd StatFiles/
    
    % ./procErrs.sh
     
    Pedestal and status table files for each sector should now be located in each run's directory in StatFiles/.  These are what will be uploaded to the DB.

    Calculating EEMC tower ideal gains and expected MIP response

    This is a quick summary of how one goes from a measured MIP peak to a final tower gain.

    In the attached spreadsheet, I use the EEMC tower boundaries in eta (lower table) to determine the average eta per bin and the ideal gain in each bin, assuming our goal ("ideal") is to have an e.m. shower of transverse energy ET = 60 GeV saturate the 12-bit ADC, that is, land in channel 4096.  This calculation is independent of assumed sampling fraction.  The result appears in column H in the lower table, and is highlighted in yellow.

    For a calibration based on MIP's, we also need to know the actual energy deposited by the MIP as it traverses all of the scintillator layers, so we need to know the total thickness of scintillator (for normal incidence) and the dE/dx of a MIP.  These values appear in cells L5 and M5, respectively.  All calculations are keyed to these cells, so changing these cells will propagate to all other columns.

    Finally, to connect ideal gains with MIP energy depositions (so we can arrive at the quantity of direct interest for a MIP-based calculation: in which ADC channel (above pedestal) should the MIP peak appear?), we also need to know the calorimeter sampling fraction.  I have used 5%, which is in cell G5.  Again, changing this one cell value will fill the rest of the tables accordingly. 

    With these assumed values (60 GeV, 4096 channels, 99 mm, 0.20 MeV/mm, 5% - all in row 5) one can now determine the ADC channel (above pedestal) in which the MIP peak will appear, if the gain is "ideal".  These are given in column N of the upper table and highlighted.  For each tower, the ratio of the actual (measured / fit) MIP peak channel to this ideal channel is the factor by which the ideal gain needs to be multiplied to arrive at the "true gain" per tower, which is what is loaded into the database.
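
    As an illustration, a minimal numerical sketch of this calculation (not the spreadsheet itself), using the row-5 constants quoted above; the eta value is an illustrative placeholder, and any path-length/angle corrections the spreadsheet may include are omitted:

    #include <cmath>
    #include <cstdio>

    // Sketch of the ideal-gain and expected-MIP-channel calculation described above,
    // using the assumed constants: ET = 60 GeV saturates channel 4096, 99 mm of
    // scintillator, dE/dx = 0.20 MeV/mm, sampling fraction = 5%.
    // The example eta is a placeholder, not a real tower-bin average, and
    // path-length/angle corrections are omitted.
    int main()
    {
        const double maxET    = 60.0;    // transverse e.m. energy (GeV) at ADC saturation
        const double maxADC   = 4096.0;  // 12-bit ADC
        const double thick    = 99.0;    // mm of scintillator at normal incidence
        const double dEdx     = 0.20;    // MeV/mm for a MIP
        const double sampFrac = 0.05;    // calorimeter sampling fraction

        double eta       = 1.5;                     // illustrative eta-bin average
        double maxE      = maxET * std::cosh(eta);  // full e.m. energy at saturation
        double idealGain = maxADC / maxE;           // channels per GeV of full energy

        double mipDep    = thick * dEdx / 1000.0;   // GeV deposited in scintillator by a MIP
        double mipEquivE = mipDep / sampFrac;       // equivalent full e.m. energy
        double mipChan   = idealGain * mipEquivE;   // expected MIP peak channel above pedestal

        std::printf("ideal gain = %.2f ch/GeV, expected MIP peak ~ channel %.1f\n",
                    idealGain, mipChan);
        return 0;
    }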

    N.B.  I just cut and pasted these two tables together, so there is some overlap between them.  Several columns relate to estimating number of photo-electrons (pe) and high voltage (HV) and can be ignored.

    EEMC Calibration Docs

    This page is meant for centralizing all EEMC calibration documents.

    Calibrations through 2007(MIT locker):  web.mit.edu/rhic-spin/public/eemc-bnl/calibration/

    Alice B.'s calibration code blog:  https://drupal.star.bnl.gov/STAR/blog/aliceb/2010/oct/08/updated-eemc-calibration-code

    Ting Lin's updated MIP calibration instructions:  eemc_code_instruction_by_Ting_2.pdf

    EEMC Tower Swaps

    During the EEMC MIP calibration, we find that some of the EEMC cables are swapped.

    Swaps in database: (These have been implemented during production; you do not need to worry about them.)
    1. mapping=S1 rot, P1 as in sect=5, swap TA4-5, QA11-B2, JB;
    2. mapping=swap V209:216, V216-280, V265:272, Bob
    3. mapping=swap 10TD04 with 10TD06, Ting and Mike(Not valid before 2015-08-13)

    Swap towers found From MIP calibration: 
     2009 pp200: 
     10TD06 <=> 10TD04 
     11TE10 <=> 11TE12

     2012 & 2013:
     10TD04  <=> 10TD06  
     11TC05  <=> 11TC07
     11TB04  <=> 11TB06
     04TB01  <=> 04TB12
     04TB02  <=> 04TB11
     04TB03  <=> 04TB10
     04TB04  <=> 04TB09
     04TB05  <=> 04TB08
     04TB06  <=> 04TB07

    Suggestion:
    You can remap the towers in your own analysis.
    For example, in Jet analysis, add a small piece of code before you record the tower Id in StRoot/StJetMaker/mudst/StjEEMCMuDst.cxx at line 84.
    /////////////////////////////////////////// Sample Code ////////////////////////////////////////////////////////

    void tileReMap(int &sec , int &sub , int &etabin){
      if(sec==11) {
        if(sub==5 && etabin==10 ) {  // 11TE10<==>11TE12;By Ting 10-27-2014
          etabin=12;
        } else if(sub==5 && etabin==12 ) {
          etabin=10;
        }
      }
      return;
    }
    //////////////////////////////////////////////////////////////////////////////////////////////////////////////////
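
    For illustration only, a minimal sketch of how the remaining 2012 & 2013 swaps listed above might be handled in the same style, assuming the same (sec, sub, etabin) convention as the sample code (sub = 1-5 for subsectors A-E); this is not code from the official analysis:

    // Sketch (not official code) covering the 2012 & 2013 swaps listed above.
    // Note that the 10TD04 <=> 10TD06 swap is already in the database for data
    // taken after 2015-08-13 (see the database swap list above), so check your
    // data set before applying it here.
    void tileReMap2012(int &sec, int &sub, int &etabin){
      if(sec==10 && sub==4){                 // 10TD04 <=> 10TD06
        if(etabin==4) etabin=6;
        else if(etabin==6) etabin=4;
      }
      if(sec==11 && sub==3){                 // 11TC05 <=> 11TC07
        if(etabin==5) etabin=7;
        else if(etabin==7) etabin=5;
      }
      if(sec==11 && sub==2){                 // 11TB04 <=> 11TB06
        if(etabin==4) etabin=6;
        else if(etabin==6) etabin=4;
      }
      if(sec==4 && sub==2){                  // 04TB01-06 <=> 04TB12-07 (mirrored)
        etabin = 13 - etabin;
      }
      return;
    }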

    Overview for generating EEMC pedestals and status tables

    This is an informal overview of the 'philosophy' used to generate pedestals and status tables for the STAR endcap EMC, and in particular the ESMD.

    For the EEMC offline database, the default protocol is to store one set of pedestals and status tables per fill for the pp runs. In general, this information is updated less frequently for heavy-ion running, depending on user demand.  For the endcap, other database information, such as detector mapping and gains, will apply for much longer timescales (like years, or at least a whole run), and so there may be a single database entry for an entire running period.

    Focusing on the smd strips, our guiding assumptions are pretty simplistic: First, strips that have constant problems (i.e., those that stay bad after their crate is reconfigured or power-cycled, for example) are easy to catch, and will be flagged as bad no matter what data are used for QA. These aren't the concern. The messier case is when channels suddenly go bad, but can be recovered later (by power-cycling, etc.).  We assume that these sorts of problems, which tend to affect an entire FEE card and not individual channels, can occur at any time while running, but especially at the very start of a fill during tuning and collimation of the beams. These problems don't often fix themselves, and so will remain until action is taken - which usually occurs only between fills.

    For the smd strip pedestals and status tables, we analyze one min-bias run per fill, preferably one taken near the end of the fill. Guided by the above ideas, we assume that by this time we have 'accumulated' all the problems that will arise during the fill, and we will mark them as bad for the entire fill. This means that any strip that died or developed some problem _during_ the fill is marked as bad even for the runs early in the fill when it was still working. So it is a conservative approach: If a strip was malfunctioning near the end of the fill, we assume it was bad for the entire fill.  It is clear that if such problems are only cured in-between fills, this all makes sense.  If a problem is truly intermittent, however, and comes and goes from run to run, then in this approach we might catch it, or we could easily miss it.

    Updating the status information in the database on shorter timescales, or basing each entry on more than one run per fill (e.g., take an OR of problems found at the start and at the end of each fill), is certainly possible - it just requires more time and effort.  At this point, we don't plan to change our protocol unless new and unexpected time dependences are observed for problems that don't fit into our model of channels breaking and being 'fixed.'   I don't think our current assumptions are totally screwy.  Just from watching the P-plots, one can see that the endcap smd spectra start to get 'ratty' after a few fills, as more and more groups of four strips (which are consecutive in DAQ and P-plots, but not in the physical detector) start to drift around in their pedestal, or go south in some other manner.  After a thorough re-cycling, most of these problems can be fixed, so things look pretty good again at the start of the next fill.

    Nevertheless, at the end of the day, the most relevant question for the endcap status tables is "are we doing well enough?" and the best feedback comes from doing real analysis.  So the more that people stare at spectra and find new problems, especially those with unexpected time dependences, the better we can design our QA algorithms and protocol to identify and keep track of them.  No matter what sorts of problems we have found in the past or might have anticipated, nature always finds new ones, and we rely on users to point these out.

    Producing ADC distributions for the EMCs from raw DAQ files


    For calibrations of the EMCs (e.g., pedestals, status, relative gains) it is often useful to look at raw ADC distributions from a sample of minimum bias events.  This can be done simply by analyzing the MuDsts once production has occurred, but if one wants to proceed with calibrations before production occurs it is often faster to use the raw DAQ files to produce these simple ADC distributions summed over many events.  The instructions below describe an adaptation of the general daq reader (provided by Tonko et al.) used specifically to produce these distributions.

    The general daq reader for all STAR subsystems can be found in StRoot/RTS/src/RTS_EXAMPLE/rts_example.C; however, for EMC purposes not all of the functionality provided there is necessary.  The macro used here, emchist.C, uses the relevant pieces of the general daq reader to retrieve raw ADC information at the crate/channel level from the daq files and store it in ROOT histograms.  These histograms are useful for many QA and other tasks, including timing scans, pedestals, status tables, relative gains (from slopes), etc.
     
    If one is interested in only events from a specific trigger to increment the raw ADC histograms, then this can be specified in the emchist.C macro by using
     
    int trigID[2]={A,B};
     
    where A is the "DAQ trigger ID" for events to be incremented in the histograms and B is another trigger which is only implemented as a counter.  These trigger IDs can be found in the RunLog browser, but then they need to be converted to get the numbers A and B in trigID[].  The mapping pattern (decimal value and its hexadecimal form) is
    decimal:  1 2 4 8 16 32 64 128 256 512 1024 2048...
    hex:      1 2 4 8 10 20 40 80  100 200 400  800...
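
    As a quick check, a minimal sketch (not part of the EEMC code) that reproduces this decimal-to-hex correspondence:

    #include <cstdio>

    // Print the decimal trigger mask values and their hexadecimal form,
    // reproducing the mapping pattern listed above (e.g., 1024 -> 400).
    int main()
    {
        const unsigned int masks[] = {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048};
        for (unsigned int m : masks)
            std::printf("%5u -> %x\n", m, m);
        return 0;
    }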

    So for example, for the emc-check runs in Run 13,

    the line should be trigID[2]={1024,0};


    Instructions:

    1) Check out the macro from cvs:
    % cvs co StRoot/StEEmcPool/macros/DaqReader

    2) Copy the relevant macros and scripts to your working directory:

    % cp StRoot/StEEmcPool/macros/DaqReader/compile.sh ./
    
    % cp StRoot/StEEmcPool/macros/DaqReader/emchist.C  ./
    
    % cp StRoot/StEEmcPool/macros/DaqReader/submitAll.sh  ./
    
    % cp StRoot/StEEmcPool/macros/DaqReader/addFiles.sh  ./

    3) Compile the macro by using the compile.sh script:
     
    % ./compile.sh emchist.C
     
    which produces an executable emchist* which reads the daq file by executing
     
    ./emchist /star/data03/daq/2012/100/13100049w_js/st_W_13100049_raw_1340001.daq
     
    for the daq file of interest.
     
    To process all the daq files for a given run that are stored on some NFS disk space, there is a script provided (submitAll.sh) which loops over a given runList.  Finally, you can add all the histograms for a given run using the addFiles.sh script.

    Run 10 EEMC Calibrations

    Run 11 EEMC Calibrations

    This is the main page for all EEMC calibration information for Run 11

    EEMC HV adjustments for "outliers"


    Using the sums method (similar to the Run 9 analysis), Scott identified some outliers whose gains were either too high or too low compared to other towers in the same eta ring.  The list of towers is below in two different groups: 1) known bad channels and 2) channels to adjust HV.

    1) Known bad channels:

    06TA03, 02TC06, 07TC05 -> all reported to still be bad and masked at L0

    04TB05 -> spectra shows this channel is dead at startup

    2) Channels to adjust HV

        gain too high:  11TB08, 03TA06, 08TC03, 07TC07, 02TE02, 01TA04

        gain too low:  12TD01, 10TA09

     

    Procedure:

    Minbias runs (12030069-73) were used to determine the slope of each channel using "Scott's method".  Below is a summary of each channel's slope and the median slope for towers in that eta bin, as well as the current HV (HVset_ix) for each tower, all of which is used to calculate the new HV (HVset_xi) needed to match the other towers in that eta bin.

    Table 1:

    The equation used to determine the new HV values is HV_1 = HV_2 * (slope_2 / slope_1) ^ (1/kappa), where the value of kappa is taken to be 8.8 from previous HV adjustments.  This equation is different from the similar equation used by the barrel because in the endcap gain ~ 1/slope, while in the barrel gain ~ slope.
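
    As an illustration, a minimal numerical sketch of this formula, assuming HV_2 and slope_2 are the current values and slope_1 is the target (median) slope of the eta bin; the example numbers are invented, not taken from Table 1:

    #include <cmath>
    #include <cstdio>

    // Endcap HV adjustment sketch: HV_new = HV_old * (slope_old / slope_target)^(1/kappa),
    // with kappa = 8.8 as quoted above.  Example values are purely illustrative.
    double newHV(double hvOld, double slopeOld, double slopeTarget, double kappa = 8.8)
    {
        return hvOld * std::pow(slopeOld / slopeTarget, 1.0 / kappa);
    }

    int main()
    {
        // hypothetical tower: current HV 800 V, slope 0.035, eta-bin median slope 0.045
        // (a smaller slope means a higher gain, so the HV comes down slightly)
        std::printf("suggested new HV: %.1f V\n", newHV(800.0, 0.035, 0.045));
        return 0;
    }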

    Notes:

    1) 11TB08 appears to be unstable (HV status is 5 or 7 instead of 4="good") running at 517 V, so it was decided to set this tower to its original value of 699.9 V where it runs stably.  Thus, this tower will continue to be masked out of L0 and L2 trigger and will need to be calibrated carefully to be used in offline analysis.

    2) 12TD01 is a bit of a special case as it was "hot" in Run 8 and had its voltage lowered (946.2 -> 830.8) before Run 9, but then the gain was way too low.  So it was decided to put its voltage at 880.0 to try and increase the gain without getting hot.

     

    Runs taken to test new HV

    Once new HV values were determined, 2 emc-check runs were taken to check the new files.  12037043 was taken with HVset_ix and 12037046 was taken with HVset_xi.  Below is a summary of the slopes for the 2 runs for the channels of interest.  Unfortunately, some channels didn't get a new HV loaded because of communication problems with HVsys branch "C" (these are shown in blue).

    Table 2:

    For the towers where the new HV was loaded correctly (red) the slopes now match much better to the median slope in their eta bin, so the new HV values look reasonable and will be used for the remainder of Run 11.  HVset_xi is used for all runs after R12038072.

    Note:  10TA09 appeared to be hot after its HV was raised to 827.0 V, so it was set back to its HVset_ix voltage value.  There was probably beam background when the original slopes were measured, causing this channel to be incorrectly labeled an outlier.

    5P1 adjustments

    During the same test runs (12037043 and 12037046) a test was done increasing tube #2 of 5P1 from 750 -> 840 V, as the spectra for channels 176-191 were way down in some early runs (e.g., 12034091).  This increased the gain by a factor of ~2.6, but comparisons of the slopes for channels from tube #2 to other preshower channels in 5P1 showed that the gain should be increased by another factor of ~2 (see the txt file with slopes for the tubes of interest and the median slope of other channels in run 12037046).

    So the final HV used for 5P1 will be 913 V, which was determined with a similar formula as for the towers, but with kappa_mapmt = 8.3.

    Run 12 EEMC Calibrations

    pp500 EEMC Gain Calibration with MIPs

    Note four sets of tower gains were uploaded to the database to account for gain decrease over the course of the run.

    See attachments for calibration summary and masks lists.

    Plots can be found here:
    http://www.star.bnl.gov/protected/spin/jhkwas/calibrations/run12plots/

    Run 13 EEMC Calibrations

    EEMC Gain Calibration with MIPs

    Note four sets of tower gains were uploaded to the database to account for gain decrease over the course of the run.

    See attachments for calibration summary and masks lists.

    Plots can be found here:
    http://www.star.bnl.gov/protected/spin/jhkwas/calibrations/run13plots/

    EEMC Status and Pedestal Tables

    Run List

    General Procedure:

    1)  Produce ADC distributions from raw daq files as outlined here.

    2)  Make ADC distributions for each EEMC channel and fit the histograms to get pedestal and rms values as shown here.

    3)  Use StEEmcStatus to get status table information.  See the last two steps here.

    Typical ADC Distribution of one EEMC channel:
    http://www.star.bnl.gov/protected/spin/skoby/eemc/temp/typical-ADC-dist_14067073.png

    Status codes appearing in Run 13:
    ONLPED    0x0001 // only pedestal visible
    STKBT        0x0002 // sticky lower bits
    HIGPED      0x0010 // high chan, seems to work
    WIDPED      0x0080 // wide ped sigma
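
    For reference, a minimal sketch (not from the production code; variable names are placeholders) of testing these bit masks against a channel's status word:

    #include <cstdio>

    // Status-bit definitions quoted above (Run 13 EEMC status codes).
    enum EEmcStatusBits {
        ONLPED = 0x0001,  // only pedestal visible
        STKBT  = 0x0002,  // sticky lower bits
        HIGPED = 0x0010,  // high chan, seems to work
        WIDPED = 0x0080   // wide ped sigma
    };

    int main()
    {
        unsigned int status = STKBT | WIDPED;  // hypothetical status word for one channel
        if (status & ONLPED) std::printf("only pedestal visible\n");
        if (status & STKBT)  std::printf("sticky lower bits\n");
        if (status & HIGPED) std::printf("high pedestal channel\n");
        if (status & WIDPED) std::printf("wide pedestal sigma\n");
        return 0;
    }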

    "failed" status: 
    fit fails, all entries in 1 channel, too many ADC = 0 bins, dead channel, stuck in another mode, HV was off, signal fiber broken, etc.

    Lower bit(s) failed ("sticky bits"):

    Entire run 13: 

    • a04TB07, a06TA07, a08TB07

    Most of run 13:

    • a05TA12

    One to several runs:

    • a01TD12, a04TD05, a05TA10, a06TB11, a06TE10, a08TE08, a09TA09, a11TA07

    High pedestal:  a05TB01 for 6 runs

    Wide ADC distribution:

    http://www.star.bnl.gov/protected/spin/skoby/eemc/temp/WIPED-example_14069129.png

    Most of run 13
    • a03TB09
    One to several runs
    • a12TC09, a03TB01, a03TB02, a03TB10, a06TD04, a09TD09, a11TA12, a12TD09, a02TD04, a02TD05, a03TB11, a03TD09

    Status tables inserted where large time-gaps occur between emc-check runs:
    Fill 17237  -->  use table from fill 17250
    Fill 17263  -->  use table from fill 17268
    Fill 17311  -->  use table from fill 17312
    Fill 17328  -->  use table from fill 17329
    Fill 17399  -->  use table from fill 17402
    Fill 17426  -->  use table from fill 17427

    Status tables inserted mid-fill due to major problems and fixes:
    Fill 17333, Run 14096099  -->  use table from fill 17335
    Fill 17484, Run 14130003  -->  use table from fill 17486
    Fill 17573, Run 14152029  -->  use table from fill 17579
    Fill 17586, Run 14152029  -->  use table from fill 17587

    See this for MAPMT status info.
    See this for MAPMT "diagnostic" details.

    Run 15 EEMC Calibrations

    Status and Pedestal Tables

    We used one good emc-check run with 100k events for each fill.  Some fills did not have a good emc-check run.  Below are the runs we used.
    Run List

    Produced ADC distributions from raw DAQ files following the instructions here.

    Determined the status and pedestals for each tower and mapmt channel with the instructions here.

    Summary of tower issues:
    On average, about 2.5% of all towers are masked-out per fill.
    Tower Status by Fill

    02TA01:  dead entire run
    02TC04:  bad entire run
    02TC06:  fail for part of the run
    04TB07:  stuck bit entire run
    04TC01:  fail for part of the run
    05TA12:  stuck bit entire run
    06TA03:  failed entire run
    06TA07:  stuck bit entire run
    06TC08:  dead for one run
    06TD04:  failed for one run
    07TC05:  failed entire run
    08TB07:  stuck bit entire run
    10TA09:  dead for two runs
    10TC02:  failed entire run
    10TC03:  failed entire run
    10TC09:  failed entire run
    10TC11:  good for only two runs
    11TA08:  dead entire run
    11TA12:  marked as stuck bit entire run
    11TB08:  failed entire run
    11TC04:  dead for part of the run
    12TB02:  dead part of the run
    12TC05:  bad entire run
    12TD01:  bad for part of the run

    **See attached file for MAPMT pedestal widths**

    Run 17 EEMC Calibrations

     

    Status and Pedestal Tables

    RunList Selection:
    Use one good emc-check run with at least 50k events for each fill from pp510. Some fills did not have a good emc-check run.

    For pp510, the total emc_check run list has 227 runs; some problematic runs:
    18065063, only has 1 event; 
    18066002, second emc_check run in same fill, Shift Leader comment: 18065001 rate too high?
    18087034, only has 1 event;
    18092096, emc_check run pp energy < 500GeV
    18108002, second emc_check run in same fill.
    18108043, emc_check run pp energy < 500GeV
    18109051, emc_check run pp energy < 500GeV
    18111048, emc_check run pp energy < 500GeV
    18111052, emc_check run pp energy < 500GeV
    18112046, emc_check run pp energy < 500GeV
    18113047, emc_check run pp energy < 500GeV
    18115037, emc_check run pp energy < 500GeV
    18115043, emc_check run pp energy < 500GeV
    18115047, emc_check run pp energy < 500GeV
    18115051, emc_check run pp energy, Blue 253.798GeV, Yellow 253.797GeV, smaller than normal value 254.867GeV.
    18115055, emc_check run pp energy < 500GeV
    18119001, Shift Leader Marked as Junk
    18143047, second emc_check run in same fill.
     
    The final list has 210 runs and is available: runList_2017_emc_check_final

    Summary of tower issues:
    tower status by fill
    a01TA05: masked out on April 26th by Will
    a01TC05: failed for part of the run
    a02TA01: dead entire run
    a02TC04: bad entire run
    a02TC06: failed for part of the run
    a03TE03: dead for one run
    a03TE11: failed for two runs
    a04TB07: stuck bit entire run
    a04TC01: dead entire run
    a05TA12: stuck bit entire run
    a05TC12: dead entire run
    a06TA03: failed entire run
    a06TA07: stuck bit entire run
    a07TB12: failed for one run
    a07TC05: failed entire run
    a08TB07: stuck bit entire run
    a09TC04: dead for part of the run
    a11TA08: dead entire run
    a11TB07: failed for part of the run
     
    a11TD01 - a11TD12: dead for part of the run

    a12TC05: bad entire run
    a12TD01: failed for part of the run
    a12TD09: failed for part of the run

    Summary of MAPMT:
    MAPMT status by fill

    MAPMT Pedestal Width

    Run 8 EEMC calibrations

    This is the main page for all EEMC calibration information for Run 8

    Comparison of old (Run 7) to new (Run 8) EEMC tower HV's

    At the end of run 7, we compared slopes for all EEMC towers with two sets of HV values in a series of consecutive runs.  The goal was to see if the new HV set would change the slopes in the 'expected' way, to bring the tower hardware gains into closer agreement with their ideal values.

    Here is a summary of what we planned to do:

     www.star.bnl.gov/HyperNews-star/get/starops/2520/1.html  

    What was actually done was sent in a private email to a few people.  The email is attached below, along with two plots.  "2006TowerGains" shows the absolute gain determined for each tower, compared to its ideal value.  "2006Gains_x_SlopeRatio" is the same data, but with each point scaled by the ratio of the slopes determined for the two data sets (old HV vs new HV).  Note that the correction is not just a calculation, but is based only on the slope measured for each tower before and after the new HV set was loaded.

     

    EEMC Pedestals and Status Tables for Run 8

    Generating Pedestal and Status Table Information

    Requesting Emc Check Runs

    In order to produce pedestal and status table information for the EEMC, Min Bias runs must be analyzed to determine the pedestal and status for each channel (tower, preshower, or SMD strip).  Thus during running, usually once per fill, an EmcCheck run is taken with approximately 200,000 of these Min Bias events.  This run must include EEMC and ESMD, and it is preferable (but not absolutely necessary) for it to be the first run in the fill.  A list of the runs we wish to have produced via Fast-Offline is sent to Lidia Didenko through the starprod hypernews list.  These runs are usually produced to /star/data09/reco/EmcCheck/ReversedFullField/dev/ or sometimes data10.  For each run there might be ~20 MuDst files, which are deleted after a week or so, so they must be processed to hist.root files quickly.  A list of the runs that were requested and produced for 2008 is given here: www.star.bnl.gov/protected/spin/stevens4/ped2008/runList.html

    Creating hist.root files and fit.hist.root files

    In order to analyze these runs, the raw MuDst needs to be transferred to hist.root format.  This is done with the macro rdMu2Ped.C, which is executed by procPeds (all of the following code can be found in my directory at /star/u/stevens4/ped08 at rcf).  procPeds requires an input file Rnum.lis, which lists the locations of all the files for a given run; this can be created via the executable filefind (Rnum is a string of the form Ryxxxnnn, where y is the year, xxx is the day of the run, and nnn is the number of the run on that day).  Once rdMu2Ped.C has written the .hist.root file for the given run, procPeds then executes fitAdc4Ped.C, which fits each channel's histogram using the code in pedFitCode/.  The macro fitAdc4Ped.C also creates ped.sectorXX files, which contain the pedestal information for each sector to be loaded into the database, as well as other log files.  Finally, procPeds moves all of these files to a directory for that particular run.

    Status Tables

    The code used to create the status and fail words for each channel is in the directory /star/u/stevens4/ped08/pedStatus.  The source code for the program is in oflPedStat.cxx, and it is executed using oflPed.  It requires an input file, runList, which contains the list of runs to be analyzed, and it requires the hist.root and fit.hist.root files to be stored in /star/u/stevens4/ped08/dayxxx/outPedRyxxxnnn.  This code outputs two files: Ryxxxnnn.errs, which contains the status and fail words in hex format for each problematic channel, and Ryxxxnnn.log, which explains (i.e., which test failed) why each channel was problematic.  Both of these output files are written to /star/u/stevens4/ped08/pedStatus/StatFiles/.  Once these files are written, the script procErrs can be executed in the same directory.  procErrs reads in the .errs files and writes (with DistrStat2Sectors.C) the stat-XX files containing the status and fail words that will be uploaded to the database, and also copies each stat-XX and ped.sectorXX file to the run-number directory in /StatFiles/.

    The current set of status bits for EEMC (usage:  #define EEMCSTAT_* )
    ONLPED    0x0001 // only pedestal visible
    STKBT        0x0002 // sticky lower bits
    HOTHT       0x0004 // masked for HT trigger
    HOTJP        0x0008 // masked for JP trigger
    HIGPED      0x0010 // high chan, seems to work
    HOTSTR     0x0020 // hot esmd strip
    JUMPED     0x0040 // jumpy ped over few chan
    WIDPED      0x0080 // wide ped sigma
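
    As an illustration only (not part of oflPedStat.cxx), here is a minimal sketch of how a status word built from these bits can be tested; the function and variable names are invented for the example:

    // Minimal sketch (illustrative, not the actual oflPedStat.cxx code):
    // test an EEMC status word against the bits defined above.
    #include <cstdio>

    #define EEMCSTAT_ONLPED 0x0001 // only pedestal visible
    #define EEMCSTAT_STKBT  0x0002 // sticky lower bits
    #define EEMCSTAT_HIGPED 0x0010 // high chan, seems to work
    #define EEMCSTAT_WIDPED 0x0080 // wide ped sigma

    void checkStat(unsigned int stat) {
      // choose which conditions are fatal for your analysis
      unsigned int killStat = EEMCSTAT_ONLPED | EEMCSTAT_STKBT;
      if (stat & killStat)  printf("channel rejected, stat=0x%04x\n", stat);
      else if (stat)        printf("channel flagged but usable, stat=0x%04x\n", stat);
      else                  printf("channel good\n");
    }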

    More information about the database usage is given by Jan at: http://drupal.star.bnl.gov/STAR/subsys/eemc/endcap-calorimeter/db-usage

     

    Uploading Tables to Database

    General information for uploading tables to the database is given by Jan at: drupal.star.bnl.gov/STAR/subsys/eemc/endcap-calorimeter/offline-db-upload-write

    More specific information on uploading pedestals and status tables to the database refer to /star/u/stevens4/database/uploadinfo.txt

     

     

     

    Run 9 EEMC calibrations

    This is the main page for all EEMC calibration information for Run 9

    EEMC Gain Calibration Using MIP Run9

    Dead channels:
              SMD strips:        
                       03V112, 03V258, 05U212, 05U234, 05V057, 06U069, 06U091, 08U124, 09V029, 09V047, 09V113.
                       03U, 04V, 09U, 10V strips numbered 284-287. All the strips labeled 288.
              Pre&Post:
                       02PA11
              Tower:
                       02TC04, 02TC06, 06TA03, 06TE04, 07TC05, 11TD05

    Bad channels:
           
              Pre&Post:
                       05PA01, 05PA02, 05PA03, 05PB10, 05PB11, 05PB12, 05PC10, 05PC11, 05PC12, 05PD10, 05PD11, 05PD12, 05PE01,     
                       05PE02, 05PE03

                       11PE10, 11PE12, 11QE10, 11QE12, 11RE10, 11RE12
              Tower:
                       11TE12, 11TE10, 10TD04, 10TD06

    Note: Fifteen 05Pre1 channels have PMT problem; Swaps 11TE10<->11TE12; 10TD04<->10TD06

    The procedures and results can be found here:
    https://drupal.star.bnl.gov/STAR/system/files/EEMC-cal-run6%2526run9forpresentation11172014_1.pdf

    All the plots are in this directory:
    http://www.star.bnl.gov/protected/spin/tinglin/summber2014/run9final/

    Conclusion:
              Pre1, Pre2 and tower's gain decreased by 10% compared to run6;
              Post shower's gain remain stable.

    EEMC Gains - corrected for mis-set TCD phase

    EEMC TCD Phase and Effective Gains

    During Run 9 the TCD phase was set incorrectly for ETOW (run < 10114054) and ESMD (run < 10140030).  A study (shown here) found the slope ratio for the 2 TCD phases, which was used to calculate new gain tables for ETOW and ESMD.  These "mis-set" TCD phase timing settings are not optimal for the vast majority of channels, so the data taken during these periods are more susceptible to issues such as timing jitter, vertex position dependencies, etc.  To account for this, we have set gain=-1 for these time periods in the standard "ofl" flavor of the DB.  The new gains (calculated with the slope ratios) have been uploaded to the DB for the same timestamps but with the flavor "missetTCD", so that this data is permanently flagged as having issues that data taken with optimal timing (hopefully) do not.

    As a reminder, here are a few lines of code for how to read these "missetTCD" flavor tables from the DB instead of the default "ofl" tables:

    stDb = new St_db_Maker("StarDb", "MySQL:StarDb");
    stDb->SetFlavor("missetTCD","eemcPMTcal");  //sets flavor for ETOW gains
    stDb->SetFlavor("missetTCD","eemcPIXcal");  //sets flavor for ESMD (mampt) gains

    Note: the "missetTCD" flavor is valid only for Run 9 during the time periods given above, so it will return gain<0 for any other times.

    EEMC Pedestals and Status Tables for Run 9

    Run 9 EEMC Pedestals and Status Tables

    Abstract:  To produce pedestals and status tables for ETOW, ESMD, and EPRS based on zdc_polarimeter (EmcCheck) runs taken at the beginning of each fill with calorimeters only.  This year the raw adc spectra were retrieved from the .daq files on HPSS using the DAQ_READER instead of waiting for these runs to be produced.

    Runlist:  200 GeV

    Procedure:

    1) Retrieve .daq files from HPSS and use Matt Walker's version of the DAQ_READER to make 2D spectra of all EEMC components (code located at /star/u/stevens4/daqReader/ ).  Output is hist.root file with 6 ETOW histograms and 48 ESMD/EPRS histograms, one for each crate.

    2) Using the mapping from the DB, create 1D histograms for each EEMC channel softID from the 2D histograms generated by the DAQ_READER (macro: /star/u/stevens4/ped09/fromDAQ/plDAQ2Ped.C).  Output is a hist.root file with 720 ETOW and 9072 ESMD/EPRS histograms.

    3) Fit 1D histograms produced by plDAQ2Ped.C to get pedestal value for each channel (macro: /star/u/stevens4/ped09/offline/fitAdc4Ped.C) .  Output is fit.hist.root file with fitted 1D histograms for every channel and a ped.sectorXX with pedestal values for each channel in that sector which can be uploaded to the DB.
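
    For orientation only, a minimal ROOT sketch of the kind of Gaussian pedestal fit described in step 3 (this is not the actual fitAdc4Ped.C; the file and histogram names are assumed):

    // Minimal sketch of a single-channel pedestal fit (illustrative only).
    #include <cstdio>
    #include "TFile.h"
    #include "TH1.h"
    #include "TF1.h"

    void fitOnePed(const char* fname = "hist.root",     // assumed file name
                   const char* hname = "a05TB01") {     // assumed histogram name
      TFile f(fname);
      TH1* h = (TH1*)f.Get(hname);
      if (!h) { printf("missing %s\n", hname); return; }
      double peak = h->GetBinCenter(h->GetMaximumBin()); // pedestal dominates the spectrum
      TF1 g("g", "gaus", peak - 10, peak + 10);          // Gaussian around the peak
      h->Fit(&g, "RQ");                                  // R = use range, Q = quiet
      printf("%s  ped=%.2f  sigPed=%.2f\n", hname, g.GetParameter(1), g.GetParameter(2));
    }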

    4) Compute status for each channel and generate status table for each sector (macro: /star/u/stevens4/ped09/offline/pedStat.C).

    The current set of status bits for EEMC (usage:  #define EEMCSTAT_* )
    ONLPED    0x0001 // only pedestal visible
    STKBT        0x0002 // sticky lower bits
    HOTHT       0x0004 // masked for HT trigger
    HOTJP        0x0008 // masked for JP trigger
    HIGPED      0x0010 // high chan, seems to work
    HOTSTR     0x0020 // hot esmd strip
    JUMPED     0x0040 // jumpy ped over few chan
    WIDPED      0x0080 // wide ped sigma

     

    Known problems put in status tables "by hand":

    Problem                              Fills affected (based on zdc run at beginning of fill)
    Crate 6 problem configuring          10157048-10169028 (not all of crate 6 for all fills)
    SMD Sectors 12 and 1 bad spectra     10139100, 10142078-10146010

     Note:  We also had problems with counts below pedestal in the ESMD/EPRS due to "extra accepts" in the triggering.  These problems are not included in the status tables because it is thought that these problems won't show up in the produced data, but we will have to wait and see.

    ESMD (MAPMT FEE) Timing for Run 9

    The timing scans for the ESMD/MAPMT FEE for Run 9 were taken by
    raising the box delay settings as far as feasible, staying
    within the present RHIC tic "c" delay setting, and then varying
    the TCD phase delay in order to see as much as possible of the
    right-hand-side timing cutoff in the scans.  (Note: due to the present
    nature of the various delays there is a timing "hole" which
    cannot be accessed ... we will try to fix this in future years, e.g.,
    by extending the delay range of the new TCD.)

    The new MAPMT configuration files for the scans were put on the
    EEMC slow controls machine (eemc-sc) in directory:
    /home/online/mapmtConfig/03-01-09_scan. The initial conditions
    are outlined in cols 1-7 of the spreadsheet "MAPMT_DELAYS_v7"
    (among the last attachments below). The nominal TCD phase setting
    from previous years is 65. Within allowable additions to
    the box delay, we chose to divide things into 4 classes:

    a) add 22 ns box delay as per cols 8-12 of spreadsheet:
    (should clearly see edge as in previous years):
    12S1-12P1, 2S3, 4S1-4S3, 7S1-7P1, 8S2-10P1

    b) add 17 ns box delay as per cols 15-19 of spreadsheet:
    (will see less but better than previous):
    1S1, 1S3, 2S1, 4P1, 8S1, 11S1, 11S3

    c) add 7 ns box delay as per cols 22-26 of spreadsheet:
    (will see even less, etc.):
    1P1, 2S2, 2P1-3S3, 5S1, 5S3, 6S1, 8S2, 6P1, 11S2, 11P1

    d) delay w/ truncation @ 400 HEX as per cols 22-26 of
    spreadsheet
    (mistake: see notes below ... max is really 3FF)
    1S2 (439), 3P1 (44E), 5P1 (40C), 6S3 (423)

    Data for the scans was taken on Friday 6 March 2009:

    Run        ETOW TCD Phase   ESMD TCD Phase
    10065031 80 80
    10065032 10 10
    10065033 20 20
    10065034 30 30
    10065035 40 40
    10065036 50 50
    10065037 60 60
    10065038 70 70
    10065039 75 75
    10065040 65 65
    10065041 55 55
    10065042 45 45
    10065043 35 35
    10065044 25 25
    10065045 15 15
    10065046 5 5
    10065047 0 0
    (in last entry at "0" peds seem to indicate this didn't work
    for ETOW ... too many counts in spectra)

    These data were analyzed by Alice Bridgeman at link:

    http://drupal.star.bnl.gov/STAR/blog-entry/aliceb/2009/mar/11/preliminary-eemc-timing-curves-crate

    Attached below are the same plots from her analysis, but
    annotated to show the run 8 effective timing setting (long
    vertical line at 43, 48, 58 depending on box ("crate"), for
    added delays of 22, 17 and 7, respectively) as well as
    indication of the location of the "right edge" of the timing
    scan and length of flat top region. (The files for the good
    timing scans are appended first and in order below for crates
    64 [e.g., mapmt-crate-64_set.pdf], 66-68, 70-78, 80-84, 86,
    88-89, 91-94, and 96-111.) The associated values from this
    somewhat subjective procedure (regions selected by eye and
    computer drawing "straight edge"), are given in cols 29-33
    (again on spreadsheet "MAPMT_DELAYS_v7) for the distance to the
    edge, range of distances, flattop range and time difference to
    near edge, respectively.

    In the past we have tried to set the delays so that we
    sit (operating point) about 12 ns into the plateau region from
    the right side timing edge ... this allows for several ns of
    "time jitter" as well as possible error in determining the edge
    while still maintaining a safe (estimate > 5 ns) distance of
    the effective operation point from the fall off edge. The
    projected adjustments are given in col 36 of "MAPMT_DELAYS_v7"
    and converted to final box delay in HEX in col 39.

    These scans are more definitive than those of previous years
    (due to mixture of issues) and hence the new values bounce
    around a bit, but in general the shifts are only ~ few ns with
    a few outliers.

    There are several special cases!

    For boxes 65 (12S2), 69 (1S2), 79 (3P1), 85 (5S2), 87 (5P1),
    90 (6S3), the box delay was set to "400" instead of the max
    allowed of "3FF" (a mistake as noted above) which effectively
    zeroed out the box delay causing just the left hand timing edge
    to be visible in the scans (for the box 95 7P1 plots something
    else is wrong and very little is plotted).

    For these special cases one can estimate the timing by looking at the
    LHS edge and applying the 50-55ns flat top to guess where the leading
    edge is (e.g., see the plots). But in general for these cases the timing
    was set by looking at neighboring boxes in the clock chain and deducing
    a value to use (see spreadsheet).

    The final Hex delay values are indicated in one of the last columns of
    the spreadsheet (MAPMT_DELAYS_v7)

     

    ETOW Gains - using "sums" to identify outliers

    Run 9 EEMC Tower Gains - Using "sums" to check for outliers


    Goal:  Use Alice B's analysis of endcap tower spectra, calculating sums of counts over a fixed range in (adc - ped), to search for "outliers," and in particular to check that any tower that was 'touched' during the shutdown is still giving results consistent with the average from similar towers.


    Method:

    1. Sums were extracted over the adc range 20-100 (above pedestal) for all 720 endcap towers, for a series of runs used in timing scans.  Details of the runs selected, and the timing delays used for each, can be found at Alice's blog
    2. Based on the above analysis, Will decided on the optimal TCD delay settings.  A summary of these can be found in Will's blog
    3. To test for problematic channels, I used Alice's results for the scan with 60 ns delay (run 10065037) to examine all towers in crates 1 and 2, and used the scan with 40 ns delay (run 10065035) for all towers in crates 3-6.  An ascii file giving the crate #, channel #, adc sum, and sum error for each tower, for each of these two delay settings, is available here
    4. Because these times are not exactly at their optimal values (60 is ~4 ns too late, while 40 is ~2-3 ns too early), the "averages" and "sigmas" I compare to are over all towers in the same eta ring, but only those in crates from the same timing scan.  For example, for a tower like 02TC10, which is in crate 2, I would compare its adc sum to the average of the 20 towers found in crates 1 and 2 at etabin = 10 (a minimal sketch of this comparison is given after this list).
    5. Anything listed as "not analyzed" means that Alice found too few counts in the adc range to learn anything useful - usually indicates a dead or low gain channel.
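
    A minimal sketch of the comparison in items 3-4, using the numbers quoted for 07TC05 in the results below (the function names are invented for the example):

    // Sketch of the outlier test (illustrative only): flag a tower whose adc sum
    // deviates from its eta-ring / crate-group average by more than nSig sigma.
    #include <cmath>
    #include <cstdio>

    bool isOutlier(double sum, double avg, double sig, double nSig = 3.0) {
      return std::fabs(sum - avg) > nSig * sig;
    }

    void example() {
      // 07TC05 from the results below: sum = 26, peer average = 155, sigma = 30
      if (isOutlier(26., 155., 30.)) printf("07TC05: still bad\n");
    }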

    Results:  These fall into four main categories

      Known bad channels
      1. 06TA03 (cr 4 ch 18):   reported to be still bad and masked in fee cr4 bd1 JP4
        =>  not analyzed
      2. 07TC05 (cr 4 ch 117):   reported to be still bad and masked in fee cr4 bd4 JP3
        =>  still bad:  sum = 26 ± 6    avg = 155, sig = 30
      3. 12TC05 (cr 1 ch 97):   reported to be still bad and masked in fee cr1 bd4 JP1
        => known to have very high ped, not analyzed

      Needed to be checked
      1. 02TC06 (cr 2 ch 98):   seems ok right now, but still marked in Pplots as bad
        =>  looks fine:  sum = 187 ± 16    avg = 192, sig = 42
      2. 04TD10 (cr 3 ch 45):   seems ok now, not masked
        =>  looks fine:  sum = 223 ± 18    avg = 187, sig = 35

      Fixed before run
      1. 09TA05 (cr 5 ch 105):   base replaced, looks ok
        =>  looks fine now:  sum = 177 ± 16    avg = 155, sig = 30
      2. 11TA01 (cr 6 ch 56):   base replaced, looks ok
        =>  looks fine now:  sum = 128 ± 13    avg = 120, sig = 26
      3. 12TD01 (cr 1 ch 40):   HV recently lowered by 125 V, current gain now looks too low
        => PROBLEM!    gain very low now:  not analyzed, average of peers = 140
      4. 03TC09 (cr 2 ch 76):   cable replaced, now looks ok
        => PROBLEM!    gain too low now:  sum = 42 ± 8    avg = 173, sig = 40
      5. 05TA03 (cr 3 ch 58):   cable replaced, now looks ok
        =>  looks fine:  sum = 144 ± 14    avg = 161, sig = 40
      6. 08TC05 (cr 5 ch 101):   cable replaced, now looks ok
        =>  looks fine:  sum = 150 ± 14    avg = 155, sig = 30

      Extreme outliers
      • 07TD10 (cr 5 ch 5) and 11TB08 (cr 6 ch 67) had sums that were very high
        =>  need to monitor these to see if they are flagged as "hot".

     

    Endcap Tower Timing Settings Run 9

    For now see link to blog page:

    http://drupal.star.bnl.gov/STAR/blog-entry/wwjacobs/2009/mar/06/run-9-calibr-qa

    and go to Section IV

    Uploading EEMC Pedestal, Status and Gain Calibrations to the DB

     
    This page is intended to document the process of uploading EEMC calibrations to the DB using the "eemcDb" utility script originally written by Jan Balewski.  Some old notes from Jan are here.  Before we begin, a quick reminder that this is not a task many people should need to do; it should be limited to one or two "experts", who will be given special DB write privileges by Dmitry Arkhipkin (arkhipkin@bnl.gov).  The code is all in CVS, though, for documentation purposes.


    Building the eemcDb utility:


    All the uploads to the EEMC DB are handled by the eemcDb utility, which Jan called the "swiss army knife" of DB uploads.  The source code to build the eemcDb executable can be found in CVS.  To compile it, follow these quick instructions:

    mkdir yourDirectory
    cd yourDirectory
    
    cvs co StRoot/StDbLib/
    cd StRoot/StDbLib/
    make
    
    cd ../../
    cvs co StRoot/StEEmcUtil/database/macros/upload/src/
    cd StRoot/StEEmcUtil/database/macros/upload/src/
    make

    Then the eemcDb executable is ready to use.  It can be tested with a simple read command:

    eemcDb -r -p Ver2004d/ -T

    Uploading tables to the DB:


    There are several scripts for uploading pedestal, status and gain tables to the DB, located in CVS at StRoot/StEEmcUtil/database/macros/upload/.  The names are fairly obvious, but just in case, here is the breakdown:

    writeIdealGains.C :  writes ideal gain tables in ascii files to be uploaded
    writeIdealPed.C : writes ideal ped tables (pedestal = 0) in ascii files to be uploaded
    writeIdealStatus.C : writes ideal status tables (status = 0) in ascii files to be uploaded

    In the following scripts you need to specify the location of the eemcDb executable (compile instructions above), as it is currently listed as "yourDirectory."  Also, there are various exit lines in the scripts which you'll need to comment out once you're ready to upload (they kept me from accidental uploads in the past, so I kept them in place).

    writeMapmtGains.sh :
    • script which executes eemcDb to write MAPMT gain files to the DB
    • requires user input of timestamp, table "flavor", gain file location, and comment to go in DB
    writeTowerGains.sh :
    • script which executes eemcDb to write tower gain files to the DB
    • requires user input of timestamp, table "flavor", gain file location, and comment to go in DB
    loadPed+Status.tcl :
    • script which executes writePed+Status.sh (below) for many runs from the input file which specifies the fillNumber, runNumber, and unix timestamp
    • requires user input of list of runs and comments to go in the DB
    writePed+Status.sh :
    • script which executes eemcDb to write pedestal and status files (for both towers and MAPMT) to the DB
    • loadPed+Status.tcl should provide user input of timestamp, table "flavor", gain file location, and comment to go in DB

    One last note: in order to upload to the DB you need write privileges from Dmitry, and you must execute the following command, which allows you to write to the DB.  Once you're done with the upload, remove this setting to prevent "accidental" uploads.
    setenv DB_ACCESS_MODE write

    EEMC Detector Operator Manual

    Link to current EEMC Detector Operations manual

    This should also come up as the homepage on eemc-spin in the STAR control room.

     

    EEMC Maintenance/Operations Documents

    Detailed operations manual

    Over the years I have maintained a text-based operations manual for the EEMC.  It has a lot of expert information and detail in it.

    The version in the control room has all the passwords filled in.  Of course the passwords should not appear on the web, so they are blank here.  Consult the shift leader for any password needs.

    The run 10 version is attached below.

    Jim

    ESMD (Shower Max and Pre- & Post-Shower Detectors)

    ETOW (Towers)

    Endcap Geometry

    Geometry definition

    1. Detailed description of the geometry (ver 5.1) as implemented by
      Oleg Rogachevski is given in the depth.txt file
    2. Distribution of material as a function of eta and phi
    3. Mapping of SMD strips to towers (11/20/03 jwebb)

    Geant pictures of calorimeter are generated with plot_geom.kumac

    1. View of EndCap calorimeter (variant C) in STAR detector

    2. cross section of calorimeter towers variant C

    3. cross section of lower half of calorimeter plane ZY at X=0

    4. cross section of calorimeter (var.C) plane ZY at X=30cm

    5. cross section at eta=2.0, front part with SMD

      the line eta = 2.0 intersects the megatile (blue) at its center
      and the radiators (black) at their forward edge.
      The hub is seen at the upper edge. Each megatile extends from eta=2 to the hub
      as non-active plastic, and each radiator extends as stainless steel.

    6. cross section at eta=1.086 front part with SMD

      the line eta = 1.086 intersects the megatile at its center
      and the radiators at the back edge of the radiator.
      A projective (20x25 mm) bar is seen at the lower edge of the calorimeter.
      XXX page 9 - megatile cell structure in local coordinates; particles go along the Y axis on this plot.

    7. regular SMD sector, V plane, blue line depicts +-15 deg. between sectors

    8. edge SMD sector with cut strips, V plane

    9. cross section of 1st SMD plane

    10. cross section of 2nd SMD plane

    11. cross section of 3rd SMD plane

    12. cross section of 1st SMD plane labeled with "SUV" ordering

    13. cross section of the gap between SMD sectors

    14. cross section of the gap between tower at the sector boundary

    15. cross section of the backplate


    Three variants of EEMC geometry are available:
    A --- lower half with only 5-8 sectors filled with scintillators
    B --- fully filled lower half
    C --- both halves filled with scintillators

    Endcap Geometry update (2009)

    EEMC geometry v6.0

    Proposed name for the geometry file: pams/geometry/ecalgeo/ecalgeo1.g

    List of changes

    • CAir bug fixed
    • Increased size of mother volume containing SMD strips
    • Added material to front and back of SMD planes (material wrapping SMD planes)
    • Added SMD spacer layers
    • Introduced sector overlaps, closer to the "as-built" geometry
    • Birks' law constants for SMD strips corrected
    • Some dimension parameters tuned to those of the real geometry
    • Code reorganized and commented

    Note:
    Lead alloy mixtures are not implemented
    due to a problem with mixtures in GSTAR/Geant (see below)

    Resulting improvements in the simulated EEMC response

    • Geometry configuration is closer to the "as-built" geometry
    • Correct simulation of the transverse shower shape profile
    • Realistic (close to 5%) sampling fraction
    • More realistic simulation of the tower energy profile (with LOW_EM option)

    Supporting materials

    EEMC sampling fraction nonlinearities and CAir bug

    EEMC geometry reshape

    Issue with mixture in GSTAR/Geant

    Jason's tests and studies

    1. Validation of EEMC MC Geometry
    2. SMD Problems at the sector boundary
    3. EEmc MC Geometry version 5.21
    4. Rough cut of EEMC SMD spacer geometry
    5. Check of material in corrected EEMC geometry
    6. Linearity Check in fixed ecalgeo.g, take II
    7. Linearity Check in fixed ecalgeo.g
    8. List of small, almost trivial, problems with the Monte Carlo
    9. EEMC simulation study: mockup of CDF testbeam experiment
    10. Verify that the fast simulator sees all of the energy deposited in geant
    11. Energy dependence of the sampling fraction in the EEMC
    12. EEMC simulation studies (spin-pwg-simulations-report-07-30-2009.pdf)

    Ilya's tests and studies

    1. new EEMC geometry: Pure lead and new SMD layers
    2. Jason EEMC geometry: Effect of ELED block change
    3. Jason EEMC geometry: results with and without LOW_EM options
    4. Jason EEMC geometry: Jason with ELED block from CVS file
    5. Jason EEMC geometry: comparison without LOW_EM option
    6. Jason EEMC geometry: effect of removing new SMD layers
    7. Jason vs. CVS EEMC: removed SMD layers
    8. Sampling fraction problem: full STAR vs. EEMC stand alone geometry
    9. Jason geometry file: Full STAR simulations (sampling fraction, shower shapes)
    10. Jason geometry tests: SMD energy, number of strip vs. thrown photon position
    11. Effect of added layers in Jason geometry file
    12. Volume id fix in Jason geometry file
    13. Jason vs. CVS EEMC geometry: sampling fraction and shower shapes
    14. Test of corrected EEMC geometry: LOW_EM cuts
    15. EEMC geometry tests: on-off SVT detector and EEMC slow-simulator
    16. Single particle MC with corrected geometry vs. eta-meson from data
    17. Corrected EEMC geometry: shower shapes
    18. Corrected EEMC geometry (bug 1618)

    Alice's tests and studies

    1. Summary of Lead Problems
    2. Further tests with lead
    3. Testing changes to cvs geometry file
    4. Some geometry tests

    Log of tower base and fee issues

    Attached is a text file of tower base and other issues.  I went back through the electronic shift log and recorded all the bases that had been replaced, along with other fixes and problems.  The status as of the beginning of run 10 is also stated.

    Jim

    eemc as-built info and proto tests

    Starting 8/09, this is a collection of miscellaneous information on the as-built EEMC from construction documentation and prototype testing of various sorts.

    ... to be detailed

    emc2-hn minutes (by Jan)

    • xx xx, 200x
      • next,
    • xx xx, 200x
      • next,
    • xx xx, 200x
      • next, I forgot to edit this

     

    how-to by Jan

    Instructions

    offline DB upload (write)

    Loose notes to help Justin with uploading the offline DB tables for the Endcap


     

    1) Monitoring current content of DB,

     

    use web interface: http://orion.star.bnl.gov/Browser/EEMC/ 

    The following query selects pedestal tables for sector 1 (element ID) for year 2007+ 

    The query returns just 2 tables:

    • the official (just one) table for 2007 run (flavor 'ofl') valid since April 5, 4:40 pm  GMT   (note it is not EST)
    • test table ( flavor=online  ) for 2008 run, valid since February 22

    If you click on 'Control' you will see the content of this table, but I rarely do that

     

    2) Upload  new tables to DB

    • need special DB write privileges; only Jan and Justin have them.
    •  setenv DB_ACCESS_MODE write
    • execute a _working_ version of eemcDb with the proper params
    • you no longer need the dbServer.xml file pointing to robinson.db, so do NOT have it in your main directory

    Note, the last successfully compiled and working eemcDb is located at:

    /star/u/balewski/dbaseIO-2008-sc.starp/eemcDb - use it as long as it works.

    The source code is at sc.starp , user=sysuser, directory: junk2/

     

    Good luck,

    Jan

     

    offline DB usage (read)

    DB usage : ped, gains, fail/stat flags for EEMC towers/pre/post/SMD

    Wed Oct 27 13:09:10 EDT 2004
    Key features:

     

     

     

  • EEMC DB consists of 4 basic components: a map of hardware channels to logical elements, peds, gains, and fail/stat bits. The content of all DB tables for all time stamps is described in eLog entry 605 .

     

  • An independent set of tables with the 'sim' flavor is stored with a beginTime of Jan 1 1999. If selected, it allows one to run ~exactly the same code on M-C or real events.
    Content of 'ideal' simulation tables:
    * channel map as of October 2004
    * tower gains : 4096 ADC=60 GeV transverse electromagnetic energy in the tower
    * pre/post/smd gains : 23,000 ADC = 1 GeV of energy deposit by a MIP in scint
    * all pedestals set at 0 ADC, but sigPed of 1.0 ADC for Towers and of 0.7 ADC for MAPMT
    * masks: fail=stat=0 , meaning all elements are good

    The above content of the sim-tables is consistent with the actual fast EEMC simulator code and the hits stored in StEvent & muDst.
    Note, in the M-C a sampling fraction of 5% is assumed while converting the GEANT energy deposit in the tower scint layers to tower ADC. In order to get the pi0 or gamma energy ~right, a fudge factor of ~4/5 is still needed.

     

  • Example of geometrical manipulations:http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/StRoot/StEEmcPool/muDst/StMuEEDemoMaker.cxx
  • The definition of all possible stat/fail bits values is at $STAR/StRoot/StEEmcDbMaker/cstructs/eemcConstDB.hh.
    For 2004 data the following 'stat' bits are in use:
    #define EEMCSTAT_ONLPED   0x0001 // only pedestal visible
    #define EEMCSTAT_STKBT    0x0002 // sticky lower bits
    #define EEMCSTAT_HOTHT    0x0004 // masked for HT trigger
    #define EEMCSTAT_HOTJP    0x0008 // masked for JP trigger
    #define EEMCSTAT_HIGPED   0x0010 // ped is very high but channel seems to work
    #define EEMCSTAT_HOTSTR   0x0020 // hot esmd strip
    #define EEMCSTAT_JUMPED   0x0040 // jumpy  ped over several chan over days
    #define EEMCSTAT_WIDPED   0x0080 // wide ped over:2.5 ch  towers, 1.5 ch MAPMT's
    
    It is up to the user which 'stat' conditions are fatal for his/her analysis.
    Potentially good elements may have the 'STKBT' bit set, meaning the lowest bit(s) did not work and only the energy resolution is worse.
    The 'JUMPED' bit means the ped jumped by 20-40 ADC counts during one run. It is fatal for calibration with MIPs but ~OK for reco of 20 GeV gammas.

    For the record the following 'fatal' bits are set and all users should reject all elements marked that way.

    #define EEMCFAIL_GARBG  0x0001  // exclude from any analysis
    #define EEMCFAIL_HVOFF  0x0002  // HV was off
    #define EEMCFAIL_NOFIB  0x0004  // signal fiber is broken
    #define EEMCFAIL_CPYCT  0x0008  // stuck in copyCat mode
    
  • To see all stat/fail tables for sector 5 type:
    ~/ezGames/dbase/src/eemcDb -p Ver2004d/sector05/eemcPMTstat -H -t 2001
    
    To see content of stat/fail table for a given time stamp type:
    ~/ezGames/dbase/src/eemcDb -p Ver2004d/sector05/eemcPMTstat -g -t 1080832032
    

     


    Use constants defined in
    $STAR/StRoot/StEEmcUtil/EEfeeRaw/EEdims.h

     


    Access to EEMC DB-maker within the chain
    In .h add
    ......
    class  StEEmcDbMaker;
    class StYourAnalysistMaker : public StMaker {
     private:
      StEEmcDbMaker *eeDb;
      ....
    }
    
    In .cxx add
     
    #include "StEEmcDbMaker/StEEmcDbMaker.h"
    #include "StEEmcDbMaker/EEmcDbItem.h"
    #include "StEEmcDbMaker/cstructs/eemcConstDB.hh"
    #include "StEEmcUtil/EEfeeRaw/EEname2Index.h"
    .....
    StYourAnalysistMaker::Init() {
     // connect to eemcDB
      eeDb = (StEEmcDbMaker*)GetMaker("eemcDb"); // or "eeDb" in BFC
      assert(eeDb); // eemcDB must be in the chain, fix it
    } 
    

     


    Details of DB usage for muDst events , loop over hist for one event:
    .....
     // choose which 'stat' bits are fatal for you, e.g.
     uint killStat=EEMCSTAT_ONLPED  | ......... ;
    .....
       StMuEmcCollection* emc = mMuDstMaker->muDst()->muEmcCollection();
    .....
      //.........................  T O W E R S .....................
      for (i=0; i < emc->getNEndcapTowerADC(); i++) {
        int sec,eta,sub,rawAdc; //muDst  ranges:sec:1-12, sub:1-5, eta:1-12
        emc->getEndcapTowerADC(i,rawAdc,sec,sub,eta);
        assert(sec>0 && sec<=MaxSectors);// total corruption of muDst
        //Db ranges: sec=1-12,sub=A-E,eta=1-12,type=T,P-R ; slow method
        const EEmcDbItem *x=eeDb->getTile(sec,'A'+sub-1,eta,'T');
        ...... this is the same also for pre/post/smd (except ene fudge factor!)................
        assert(x); // it should never happen for muDst
        if(x->fail ) continue;  // drop broken channels
        if(x->stat &  killStat) continue; // drop not working channels
        if(x->gain<=0) continue; // drop it, unless you work with ADC spectra
        if(rawAdc < x->thr) continue; // drop raw ADC < ped+N*sigPed
        float adc=rawAdc-x->ped; // ped subtracted ADC
        float ene=adc/x->gain;   // energy in GeV
        if(MCflag) ene/=0.8; //fudge factor for TOWER sampling fraction,  to get pi0, gamma energy right
        .... do your stuff ..........
        ........................
     } // end of towers
    
     //.........................  P R E - P O S T .....................  
      int pNh= emc->getNEndcapPrsHits();
      for (i=0; i < pNh; i++) {
        int pre, sec,eta,sub;
        //muDst  ranges: sec:1-12, sub:1-5, eta:1-12 ,pre:1-3==>pre1/pre2/post
        StMuEmcHit *hit=emc->getEndcapPrsHit(i,sec,sub,eta,pre);
        float rawAdc=hit->getAdc();
        //Db ranges: sec=1-12,sub=A-E,eta=1-12,type=T,P-R ; slow method
        const EEmcDbItem *x=eeDb->getTile(sec,sub-1+'A', eta, pre-1+'P');
        if(x==0) continue;
        ..... etc, as for towers ....
        ..............
       }
    
       //.......................  S M D ................................
      char uv='U';
      for(uv='U'; uv<='V'; uv++) {
        int sec,strip;
        int nh= emc->getNEndcapSmdHits(uv);
         for (i=0; i < nh; i++) {
          StMuEmcHit *hit=emc->getEndcapSmdHit(uv,i,sec,strip);
          float rawAdc=hit->getAdc();
          const EEmcDbItem *x=eeDb->getByStrip(sec,uv,strip);
          assert(x); // it should never happen for muDst
          ... etc, as for towers ....
          ..............
      }
    }
    

     


    Details of DB usage for StEvent , loop over hist for one event - posted for the record.
    Remember, NEVER EVER access EEMC data from StEvent. It is slow and clumsy, and it is asking for trouble. All EEMC analysis SHOULD work on muDst ONLY - Jan
     T O W E R S 
    .....
     // choose which 'stat' bits are fatal for you, e.g.
     uint killStat=EEMCSTAT_ONLPED  | .......;
    .....
     StEvent*  mEvent = (StEvent*) StMaker::GetChain()->GetInputDS("StEvent");  assert(mEvent);
     StEmcCollection* emcC =(StEmcCollection*)mEvent->emcCollection(); assert(emcC);
     StEmcDetector* etow = emcC->detector(kEndcapEmcTowerId);  assert(etow);
    
     for(uint mod=1;mod<=etow->numberOfModules();mod++) {
        StEmcModule*     module=etow->module(mod);
        StSPtrVecEmcRawHit&     hit=  module->hits();
        int  sec=mod ; // range 1-12
        for(uint ih=0;ih < hit.size();ih++){
          StEmcRawHit *h=hit[ih];
          char sub='A'+h->sub()-1; // range 'A' - 'E'
          int  eta=h->eta(); // range 1-12
          int  rawAdc=h->adc(); // raw ADC
          //Db ranges: sec=1-12,sub=A-E,eta=1-12,type=T,P-R; slow method
          const EEmcDbItem *x=eeDb->getTile(sec,sub,eta,'T');
          if(x==0) continue;
          ....  now follow muDst example for towers ....
        } // end of sector
      }
    
    

    PRE1, PRE2, and POST are all mixed together; preL='P','Q','R' for pre1, pre2, post, respectively.

    StEmcDetector* det=emcC->detector(kEndcapEmcPreShowerId); // ==(14)
    for(int imod=1;imod<=det->numberOfModules();imod++) {
      StEmcModule* module=det->module(imod);
      printf("EPRE sect=%d nHit=%d\n",imod, module->numberOfHits());
      StSPtrVecEmcRawHit& hit= module->hits();
      int ih;
      for(ih=0;ih<hit.size();ih++){
        StEmcRawHit *x=hit[ih];
        int sec=x->module();
        int ss=x->sub()-1;
        char sub='A'+ss%5;
        char preL='P'+ss/5;
        int eta=x->eta();
        int adc=x->adc();
        printf("ih=%d %02d%c%c%02d ss=%d -->adc=%d ener=%f ss=%d\n",ih,sec,preL,sub,eta,ss,adc, x->energy(),ss);
      }
    }

    SMD U & V are stored in SEPARATE collections.

    StEmcDetector* det=emcC->detector(kEndcapSmdUStripId); // U=15, V=16
    for(int imod=1;imod<=det->numberOfModules();imod++) {
      StEmcModule* module=det->module(imod);
      printf("ESMD sector=%d nHit=%d\n",imod, module->numberOfHits());
      StSPtrVecEmcRawHit& hit= module->hits();
      int ih;
      for(ih=0;ih<hit.size();ih++){
        StEmcRawHit *x=hit[ih];
        int sec=x->module();
        int strip=x->eta();
        int adc=x->adc();
        printf("ih=%d %02dU%03d -->adc=%d ener=%f\n",ih,sec,strip,adc, x->energy());
      }
    }


    Details of the DB- Maker(s) setup in the muSort.C script:

     

  • By default ADC threshold (x->thr) is set at 3*sigPed. You may change 3.0 to any other (float) factor :
    myDb=new StEEmcDbMaker("eemcDb");
    ....
    myDb->setThreshold(2.5);
    ....
    chain->Init();
    ....
    
  • To switch to (ideal) simulation DB tables you need to change DB flavor for the St_db_Maker (different from StEEmcDbMaker ):
      St_db_Maker *dbMk=new St_db_Maker("db", "MySQL:StarDb", "$STAR/StarDb");
         dbMk->SetFlavor("sim","eemcPMTcal");
         dbMk->SetFlavor("sim","eemcPIXcal");
         dbMk->SetFlavor("sim","eemcPMTped");
         dbMk->SetFlavor("sim","eemcPMTstat");
         dbMk->SetFlavor("sim","eemcADCconf");
         dbMk->SetFlavor("sim","eemcPMTname");
        // dbMk->SetDateTime(20031120,0);  // you may need to specify the DB time stamp
    
    To verify the ideal DB tables were loaded you should find in the log-file the following message for every sector:
    .....
      EEDB  conf ADC map for sector=6
    StInfo:       EEDB chanMap=Ideal EEMC mapping, (October 2004), RF
    StInfo:       EEDB calTw=Ideal EEMC tower gains, E_T 60GeV=4096ch, RF
    StInfo:       EEDB tubeTw=Ideal EEMC P-names, (October 2004), RF
    StInfo:       EEDB calMAPMT=Ideal EEMC P,Q,R,U,V gains, 23000ch/GeV, RF
    StInfo:       EEDB ped=Ideal EEMC peds/ADC at 0.0, sig: Tow=1.0, Mapmt=0.7; RF
    StInfo:       EEDB stat=Ideal EEMC stat=fail=0 (all good), RF
    ....
    
    To use Barrel ideal DB do
      starDb->SetFlavor("sim", "bemcPed");
      starDb->SetFlavor("sim", "bemcStatus");
      starDb->SetFlavor("sim", "bemcCalib");
      starDb->SetFlavor("sim", "bemcGain");
    

     

     

    slow controls archive viewer

    There is a viewer for slow controls quantities documented at

    http://drupal.star.bnl.gov/STAR/subsys/ctl/archive-viewer

    There is an interface through a web browser or a java script.  Both must run from the starp internal network.  I used the java script run from sc.  Two plots of voltage tracking are attached.  These were made via screen capture of the display.

     

     

    trash

    Trash pages for testing

    chain tricks

     

    > do I remember it correct the current implementation of StMaker() does
    > not prevent execution of subsequent makers in the chain even if the
    > earlier one returns:
    > return kStErr;
    It depends. If the maker is "privileged" then the loop is ended.
    Normal user makers are not privileged, to avoid skipping of other users'
    makers.
    The privileged ones are all I/O makers and StGeantMaker (which is also an I/O
    maker).
    A user could define his maker as privileged:
    instead of:
    bfc(999,...)
    user should:
    bfc(0,...);
    chain->SetAttr(".Privilege",1,"UserMakerName" );
    chain->EventLoop(1,999);

    Victor

     

    software

    Endcap Software 2006++

     

    test entry

    test child page control

    timing_WMZ

    Fig. 1 Tower Crate

    Fig. 2 Crate 64-67

    Fig. 3 Crate 68-71

    Fig. 4 Crate 72_75

    Fig. 5 Crate 76-79

    Fig. 6 Crate 80_83

    Fig. 7 Crate 84-87

    Fig. 8 Crate 88-91

    Fig. 9 Crate 92-95

    Fig. 10 Crate 96-99

    Fig. 11 Crate 100-103

    Fig. 12 Crate 104-107

    Fig. 13 Crate 108-111

       Plots of channels  
       mapmt-crate-64.pdf
       

    EPD


    Welcome to the area for the STAR Event Plane Detector (EPD)




    All you want to know about the EPD can be found in the...


    Placeholder for the Event Plane Detector information.

    2017 cosmic ray tests at OSU and BNL

    Scans of the logbook for supersector tests may be downloaded here:

    Excel file with data: drupal.star.bnl.gov/STAR/system/files/EPDCosmicData.xlsx

    Test configuration 1

    SS05

    SS06

    SS07

    SS08

    SS09

    SS10

    Cosmic Tests at BNL - Diagonal issue

    While running at BNL, it was noticed that there was a diagonal line when looking at the correlation between two tiles.  Essentially, both tiles would fire, with ADCs that are strongly correlated.


    Figure 1: Example correlation between two tiles on the left, example correlation between a tile and the empty tile at tile "zero".

    In Figure 1, we can see an example of this.  As expected, one can see the pedestal and MIP peak for each tile, completely uncorrelated with the signal in the other, except for the points along the diagonal.  In fact, this was even seen in the first channel when running on evens (which we have called tile 0).  Since there is no tile or fiber there, there is no way for there to be a real signal.

    The entire set of correlations can be seen at: drupal.star.bnl.gov/STAR/system/files/Histos12142017_EPD_Diagonal.pdf
    It should be noted here that the last 4 tiles of the bottom SS were moved from the ADC in slot 7 to the empty ADC (given the labels blank0, etc.).  In fact, if we select events in which tile 0 in the top supersector had a significantly higher than pedestal value (adc > 250), we see that we can pull this diagonal correlation from all channels, other than those from the "empty" ADC: drupal.star.bnl.gov/STAR/system/files/Histos12142017_EPD_Diagonal_SelectedonTile0.pdf

    What was noticed is that the last ADC, which had been empty, did not show this characteristic diagonal correlation.  This is true even with the data being added to it.  (We noted that the empty channels in the previously full ADC do show it, but none of the channels on the empty one showed it either before or after putting the data into the system.)  One difference was the timing: the first 4 ADCs were 28 ns behind.  Another was the logic feeding these ADCs: it looked as if the 4-fold coincidence was really a 3-fold (at least it was firing more often than the other, and we could not put the level any higher in coincidence), so we moved these cables into a more stable unit and verified that both sets now fired precisely the same.  We also removed the 28 ns of extra cable, so everything was at the same timing.

    After doing this, the diagonal correlation seems to go away:

    Figure 2: Two correlations after our fix which do not seem to show the diagonal line.

    In Figure 2, we do not see any evidence of this correlation.  On the left is the correlation between tile 14 on the top and tile 14 in the middle, so the correlation seen there is from true cosmic rays.  On the right is the correlation between tile 14 on the top and tile 6 in the middle.  One can see the pedestal and MIP peaks for both, which are mostly uncorrelated as expected.  (The few points in the middle could be from diagonal cosmics.)

    The full range of correlations from these channels can be seen at: drupal.star.bnl.gov/STAR/system/files/Histos12152017_EPD_NoDiagonalProblem.pdf 

    First Autumn 2017 cosmic tests at STAR

    After Thanksgiving, we placed a stack of three supersectors on the top platform ("roof") of STAR.  Above and below the stack are position-sensitive scintillator strips provided by Les Bland; these generated the trigger.  The clear fiber bundles were dangled down to the new EPD FEE box on the SE side, which has six FEE cards (the same ones used in the 2017 run).  These are powered by a new power supply, and the signals are brought to the EPD rack C1 on the first level of the south side platform.

    Phase 1 - Nov-Dec 2017

    For "phase 1" of these tests, we are reading out with a CAMAC-based DAQ system which was developed largely by Wanbing He and Xinyue Ju.
    Here are some photos of the first day of "phase 1".


    The set-up on the SE side of the upper platform of STAR.  Three SS are sandwiched between the trigger scintillator strips.


    The SS stack is seen in the upper left of this photo. The fiber bundles (black, so difficult to distinguish) are dangled down to the shiny FEE box which sits by itself in the otherwise-brown region.


    The populated FEE box with 6 FEEs.  We managed to make it look relatively neat, but assembling a full box will require a couple of hours of very careful work.  We have put advice in the log book for the future.


    Not a great photo of our set-up in the 1C1 and 1C2 racks on the platform.  Left rack (1C1) from top down: simple trigger electronics in NIM bin; TUFF box; DAQ (mac) computer; DAQ PS; CAMAC crate carrying Rx cards.  (The CAMAC crate with the Rx cards has voltages set to 6.5 V rather than 6 V.)  Right rack (1C2) has a CAMAC crate (set to 6 V) with 5 LRS ADCs, a LRS TDC and the crate controller.


    The new TUFF PS is much nicer on the back, and on the front panel, voltage and current for the positive and negative supplies are indicated.


    First MIP peaks from one hour of running.  Top, middle and bottom Supersector signals are shown, for odd-numbered tiles.  Small tiles in top SS don't show good peaks, probably due to trigger timing issues which we are looking into.



    Update 2 Dec 2017

    First production data using the trigger PMTs has been taken and is being analyzed by Te-Chuan and Joey.  Their first study is inspired by Prashanth's analysis of the pre-2017 run tests done by Prashanth and Les.  See his study at www.star.bnl.gov/protected/bulkcorr/prashanth/EPD/EPDcosmic_01302017.pdf

    Joey/Te-Chuan, please update with your stuff here. 
    First results of ADCs from vertical cosmic rays:
     - Landau fits: https://drupal.star.bnl.gov/STAR/system/files/ADC_SS252723_odd.pdf
     - presentation by Te-Chuan: https://drupal.star.bnl.gov/STAR/system/files/TeChuan_EPD_cosmic_20171204.pdf


    Update 5 Dec 2017

    It was noted by Te-Chuan that the ADC distribution for TOP Tile 15 goes all the way to 1024 (hence saturates at the high end).  The pedestal was close to 500 counts, which is quite high.  (One sees this also in the photos of the spectra above.)  It turns out that our high pedestals are due to the overly wide gate I had set.  As seen in the photo below, we sit very nicely in the gate, but the gate itself is about 275 ns (compare to the ~80 ns that we use in STAR).  The pedestal one expects is

    pedestal = (baseline voltage) * (gate width) / [(input impedance) * (ADC conversion factor)]
             = (baseline voltage) * (275 ns) / [(50 ohm) * (0.25 pC/count)]
             = (baseline voltage) * (22 counts/mV)

    This should be added to the "typical residual pedestal" of the 2249A (see here), quoted as
    1 + [(0.03 pC/ns) / (0.25 pC/count)] * (gate width in ns) = 34 counts for our gate

    The baseline voltages coming out of the Rx cards are of order 10 mV, so we expect pedestals of about 250 counts.  I know these "typical residual pedestals" can vary significantly, so 350 counts is not crazy.  500 counts is still a bit high, but I am not shocked, since baseline voltage will vary, too.
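
    As a cross-check of the arithmetic above, a small sketch that evaluates the same expression with the numbers quoted in the text:

    // Sketch of the expected-pedestal estimate above (numbers from the text).
    #include <cstdio>

    void pedEstimate() {
      double baseline_mV = 10.;   // baseline offset out of the Rx cards
      double gate_ns     = 275.;  // measured gate width
      double R_ohm       = 50.;   // input impedance
      double pC_per_ct   = 0.25;  // ADC conversion factor
      // baseline term: (V/R)*t divided by the charge per count -> ~22 counts per mV
      double baseTerm = baseline_mV*1e-3 * gate_ns*1e-9 / (R_ohm * pC_per_ct*1e-12);
      // 2249A "typical residual pedestal": 1 + (0.03 pC/ns)/(0.25 pC/count) * gate
      double residual = 1. + (0.03/pC_per_ct) * gate_ns;
      printf("baseline term = %.0f cts, residual = %.0f cts, total ~ %.0f cts\n",
             baseTerm, residual, baseTerm + residual);  // ~220 + 34 ~ 254 counts
    }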


    Output of the Rx cards for the same tile number from the three stacked supersectors falls well within the gate.  They have baseline offsets of about 10 mV.  The gate is overly wide, about 275 ns.






    Calibrations

     This will be a page that aggregates information for calibrating the EPD.

    Welcome to the EPD calibration page. From here, you can:

    • Find a calibration status for any year and run
    • Learn how to calibrate the EPD
    • Get current software to calibrate the EPD and display EPD issues

    Sections generally provide links to more detailed information.



    ---------------------------------------------------------------------------------------------

    Calibration Status

    Done:

    The general workflow for each dataset is:
    1. Generate picos
    2. Fit all days
    3. Troubleshoot problem tiles
    4. Enter values into database

    production_3p85GeV_fixedTarget_2019 3 2019 FXT 5  https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Run-19-385GeV-FXT-dataset
    production_4p59GeV_fixedTarget_2019 3.2 2019 FXT 5
    production_7.3GeV_fixedTarget_2019 3.9 2019 FXT 5  https://drupal.star.bnl.gov/STAR/blog/cracz/run-19-epd-calibrations-73-gev-fxt
    production_31GeV_fixedTarget_2019 7.7 2019 FXT 5  https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Run-19-31GeV-FXT-dataset
    production_7p7GeV_2019 7.7 2019 COL 5
    production_14p5GeV_2019 14.5 2019 COL 5
    production_19GeV_2019 19 2019 COL 5
    production_AuAu200_2019 200 2019 COL 1
    production_5p75GeV_fixedTarget_2020 3.5 2020 FXT 5
    production_7p3GeV_fixedTarget_2020 3.9 2020 FXT 5
    production_9p8GeV_fixedTarget_2020 4.5 2020 FXT 5
    production_13p5GeV_fixedTarget_2020 5.2 2020 FXT 5  https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Run-20-13p5GeV-FXT-dataset
    production_19p5GeV_fixedTarget_2020 6.2 2020 FXT 5  https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Run-20-19p5GeV-FXT-dataset
    production_26p5GeV_fixedTarget_2020 7.2 2020 FXT 5  https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Run-20-26p5GeV-FXT-dataset
    production_31p2GeV_fixedTarget_2020 7.7 2020 FXT 5
    production_11p5GeV_2020 11.5 2020 COL 5  https://drupal.star.bnl.gov/STAR/blog/dchen/EPD-Calibration-Run20-11p5-GeV-AuAu
    2021 "live calibration" 2021 4
    2022 "calibration" 2022 2
    production_26p5GeV_fixedTarget_2020 7.2 2020 1


    This is the current and past calibration status for the various runs in STAR. All runs will also show whether they are complete, in progress, or not yet started.

    Run 22 Calibration Status

    Run 21 Calibration Status

    • Status: Done ("Live Calibration")

    Run 20 calibrations:

    • Status: in progress
    • 7.7 GeV FXT and 9.2 GeV COL complete
    • 3.5 (5.75) GeV FXT, 3.9 (7.3) GeV FXT, 4.5 (9.8) GeV FXT, 11.5 GeV COL calibration complete.

    Run 19 calibrations:

    Run 18 calibrations:

    ---------------------------------------------------------------------------------------------
    Calibration Process

    The basics of calibration can be found here:
    https://drupal.star.bnl.gov/STAR/blog/skk317/epd-calibration-isobar-final

    Erik's slides on the process of calibration:
    https://drupal.star.bnl.gov/STAR/blog/eloyd/EPD-Calibration-Procedure

    There will be some updates to this based on more recent code, but this will get you started. Up to date code can be found here:
    https://github.com/cdxing/EpdCalibration
    https://colab.research.google.com/drive/1a_GEeXxQRDjgFs1E1CUSNY-vjRJYTAKM?usp=sharing

    EPD Meetings



    EPD Meetings
    Thursdays 08:30 (BNL time)

    Meeting Zoom

     https://lehigh.zoom.us/j/93334661391?pwd=b1hpY3lPK3hPM0tCM2d0YUZOYk96UT09




    Meeting 27 October 2021 (830 am EST)

    1. Run 22 EPD install
    2. Operations Run 22
    3. Calibration progress
    4. 19.6 GeV production issues?
    5. AOB
       
    Meeting 12 August 2021 (830 am EST)
    1. EPD Small Systems - Jordan Cory - drupal.star.bnl.gov/STAR/system/files/Cory%20EPD%20Meeting%208-12.pdf
    2. Calibrations
    3. AOB


    Meeting 1 July 2021 (830 am EST)

    1. EPD Removal Plan
    2. Calibrations
    3. AOB

    Meeting 3 June 2021 (830 am EST)

    1. EPD Current Status
    2. Calibrations
    3. AOB


    Meeting 13 May 2021 (830 am EST)

    1. EPD EQ3 Issues and OO Running - drupal.star.bnl.gov/STAR/system/files/RReedEPD05132021.pdf
    2. EPD EP Resolution
    3. Calibrations
    4. AOB

    Meeting 6 May 2021 (830 am EST)
    1. dN/dphi - Xiaoyu - drupal.star.bnl.gov/STAR/system/files/EPD05062021.pdf
    2. Run 21 Calibrations
    3. Run 21 OO Running
    4. AOB


    Meeting 11 March 2021 (830 am EST)

    1. Beamline Offset - Xiaoyu
    2. Run 21 -
    3. AOB



    Meeting 19 June 2017

    Rosi: drupal.star.bnl.gov/STAR/system/files/RReedEPD06192017.pdf
    More complete analysis, including PDFs of all 93 channels and their associated ADC, TDC and TAC values can be found at: drupal.star.bnl.gov/STAR/blog/rjreed/auau-54-gev-triggers-and-eval-part-1
    For those who may be interested in vertexing, multiplicity in the 54 GeV data, look at (not to be discussed): drupal.star.bnl.gov/STAR/blog/rjreed/auau-53-gev-evaluating-performance

    Mike: In Au+Au collisions at 54 GeV (with multiplicities much higher than those that drove the design of the EPD), we expect a very significant number of multi-hit events on the tiles.  In fact, this analysis of central Au+Au collisions at 62.4 GeV published by PHOBOS indicated that the average hit multiplicity on a tile would be greater than 1.

    And indeed, this is what we see, as shown on the following page:
    drupal.star.bnl.gov/STAR/blog/lisa/multi-mip-events-2017-epd-auau-54-gev

    Most particles are not coming from single-hit events.  So, we will need to be careful not to show "real quick dN/deta distributions," as they will be way off.

    Justin:
    Fiber polishing and bundle creation at Lehigh
    drupal.star.bnl.gov/STAR/system/files/Fibers%2C%20EPD%20Meeting%206-19-17.pdf
    drupal.star.bnl.gov/STAR/system/files/SS%25231_Fibers%281%29.pdf

    Sam:
    Simulation update
    drupal.star.bnl.gov/STAR/system/files/sim_EPD_update06192017.pdf


    Meeting 5 June 2017
    1) Mike: TDCvsTAC
    It turns out that all QT channels have a TDC in addition to the ADC they put out.  Supposedly it is a 5-bit TDC, but Hank reports that he's only seen 4 bits fire (returned value 0..15).  I have verified this in the EPD QTB boards, and find that the TDC is linearly proportional to the TAC, in the one QTB board that has a TAC.  You may find the picture here:
    drupal.star.bnl.gov/STAR/system/files/TDCvsTAC.png
    This could be useful in data analysis for ALL channels (even the QT32Bs that will not have a TAC, which is 75% of the EPD channels in 2018+).  I expect (though it has not been tested) that we can require TDC not equal to zero and thereby select in-time hits.
    The TDC information is now included in the StEpdTile data object.
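
    A minimal sketch of that selection idea, using hypothetical plain (adc, tdc) values rather than the actual StEpdTile accessors (which are not reproduced here):

        #include <cstdio>
        #include <vector>

        // Hypothetical per-channel readout values; in a real analysis these would come
        // from StEpdTile (or the trigger data), not from hard-coded numbers.
        struct EpdHit { int adc; int tdc; };

        int main() {
          std::vector<EpdHit> hits = { {35, 7}, {12, 0}, {110, 9}, {8, 0} };
          for (const EpdHit& h : hits) {
            // Require a non-zero TDC to select (presumably) in-time hits,
            // instead of cutting on the ADC value itself.
            bool inTime = (h.tdc != 0);
            std::printf("ADC=%3d TDC=%2d -> %s\n", h.adc, h.tdc,
                        inTime ? "keep (in-time candidate)" : "reject (out-of-time/noise)");
          }
          return 0;
        }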

    The following plots are ADC distributions, where the blue curves correspond to TDC!=0 and the red curves are for TDC=0.
    drupal.star.bnl.gov/STAR/system/files/ValidTDCstudy.pdf
    The TDC would seem to be an excellent way to discriminate noise/out-of-time signals from real particles, and much more convenient than an ADC threshold.  The exception is a couple of bins where the behaviour is precisely the opposite of what is expected; this needs to be investigated.

    More detail: drupal.star.bnl.gov/STAR/system/files/TDCstudy.pdf

    2) Polishing at Lehigh:
    drupal.star.bnl.gov/STAR/system/files/FiberPolishing06052017.pdf
    drupal.star.bnl.gov/STAR/system/files/FiberPolishing06052017_pics.pdf

    Meeting on 5/22/2017

    1) Prashanth: Short update/summary of the situation with the QTs and new FEEs
    www.star.bnl.gov/protected/bulkcorr/prashanth/EPD/weeklymeetings/PrashanthS_05222017.pdf



    Meeting on 5/16/2017 - at BNL during STAR Collaboration Meeting
    Tuesday evening at 6:30 with pizza
    Current run
    Construction Software Ahead
    • all - plans for remainder of 2017 run
    • toward a "standard set of procedures" for operations 2018+
    • AOB

    Meeting on 5/08/2017

    1) Rosi - BBC-EPD correlations: drupal.star.bnl.gov/STAR/system/files/RReedEPDBBCCorr05092017.pdf
    BBC - EPD timing studies: drupal.star.bnl.gov/STAR/blog/rjreed/epd-timing-correletion-slew-correction-vs-bbc-ii
    Trigger, etc.

    2) Justin - Fiber polishing and testing at Lehigh:
    On the hardware page under "Useful EPD Documents":   drupal.star.bnl.gov/STAR/system/files/Justin%20Polishing%20Testing.pdf
    Also at: http://www.star.bnl.gov/protected/bulkcorr/prashanth/EPD/Polishing_Testing_Justin_05082017.pdf


    Meeting on 5/01/2017


    1) Rosi - EPD-BBC timing studies & QT32C spikes - drupal.star.bnl.gov/STAR/blog/rjreed/epd-timing-correletion-slew-correction-vs-bbc

    2) Prashanth - status at STAR
    Also, discriminator study: drupal.star.bnl.gov/STAR/subsys/epd/operations-2017/discriminator-threshold-scan

    3) Rosi - status of trigger coordination, including trigger bit test, etc.

    4) Gerard - status/plans for FEE upgrade drupal.star.bnl.gov/STAR/system/files/gain_change_update.pdf

    5) Mike - status/plans of EPD construction at OSU

    6) Mike - ADC spectra with TAC cut - drupal.star.bnl.gov/STAR/system/files/ADCfitsTACcut_Lisa28apr2017.pdf

    7) All - availability for in-person EPD session at STAR Collaboration Meeting

    8) All - Plans for the coming week




    Meeting on 4/24/2017

    1) Rosi's blog:
    drupal.star.bnl.gov/STAR/blog/rjreed/epd-analysis-32b-vs-32c-timing-correlations-etc

    One important point:  best START for QT32Bs is -8 and best START for QT32Cs is -20

    2) Joey BBC-EPD
    www.dropbox.com/s/rbgzbbuhqfev1kr/BBCEPDCorr.pdf

    3) Mike timing
    www.dropbox.com/s/bn5wpxxj8gmm2qv/Timing.pdf


    EPD To-Dos:
    1) Confirm 32C settings - Change Tier 1 file!
    2) Threshold Scan (32B first, then 32C once the above is accomplished)
    3) DSM Bit checking
    4) Software in CVS?


    Meeting on 4/17/2017

    1) Prashanth (software)
    Mike's StEpd analysis software: 
    /star/u/lisa/StEpd/

    Database related
    https://drupal.star.bnl.gov/STAR/blog/sprastar/epd-offline-db-table
    https://online.star.bnl.gov/dbExplorer/


    Meeting on 4/10/2017

    1) Status of QT32Cs - Prashanth, Rosi, Gerard, Hank(?)

    2) Bias scan - Mike
    drupal.star.bnl.gov/STAR/subsys/epd/operations-2017/bias-scan


    Meeting on 4/03/2017

    0) Status - Prashanth, all

    1) Prepost 32c vs 32b

    2) Gate scan + conclusions
    drupal.star.bnl.gov/STAR/system/files/GateScan_1.pdf
    Summary:
    • QT32B likes START=-8  (as we concluded before)
    • QT32C likes START=-20
    • we see the "left edge" of the drop-off reasonably well
    • Gate scan concluded
    3) Update on mounting structure - Robert

    4) Bias Scan - Rosi has produced files for analysis; Mike has not yet analyzed them.

    5) Gerard, all: interpretation of QT32C messages

    6) Dark current:

    7) Rosi - IV scan histograms: drupal.star.bnl.gov/STAR/system/files/IvscanHistos_03_26_2017.pdf

    7.5) Sam - First steps with GEANT4 EPD: drupal.star.bnl.gov/STAR/system/files/EPD_geantSTARSIM_0.pdf

    8) (If time) - Rosi: First steps with UrQMD: drupal.star.bnl.gov/STAR/system/files/RReedEPD03272017.pdf


    8.5) Rosi: Starting to look at BES data (7.7 GeV): drupal.star.bnl.gov/STAR/system/files/RReedEPD04032017.pdf

    9) Tasks for the coming week.
    • From last week: 1,2 are done; 3 is canceled; 4 is done but not analyzed.  5....?
    • New tasks

    Meeting on 3/27/2017

    1) Prashanth, all:
    • Current problems - CAMAC PS?
    • Plans for access
    • do not swap QT32C inputs with QT32B for now (see "steps for this week" below)

    2) Mike - Gate scan study
    3) All: Next steps for this week
    1. get trigger guys to put us in PrePost=0
    2. continue gate scan to "more negative" values to find QT32C plateau and final gate delay value
    3. swap inputs of QT32C with a QT32B to check gain (why this is not already known from the lab...??)
    4. Bias scan
      • step size, range
      • what is figure of merit? "MIP position matching"?  Separation from dark current?
      • operationally - take-a-run, burn-a-run while switching, take-a-run...?
    5. TAC adjustment - will need Jack's help (Gerard will be at BNL?)

    4) Dark current:

    5) Rosi - IV scan histograms: drupal.star.bnl.gov/STAR/system/files/IvscanHistos_03_26_2017.pdf

    6) Joey - BBC-EPD correlations - drupal.star.bnl.gov/STAR/system/files/EPD%20Presentation.pdf

    7) (If time) - Rosi: First steps with UrQMD: drupal.star.bnl.gov/STAR/system/files/RReedEPD03272017.pdf




    Meeting on 3/20/2017
    1) Mike: PrePost study of runs 18066043, 18076064 and 18076065:
    drupal.star.bnl.gov/STAR/system/files/Study18066043and18076064and18076065all.pdf
    (This study under the Operations 2017 child page)

    2) Mike: Draft (v1) of trigger documents shared with trigger group and to be discussed 21march2017:
    drupal.star.bnl.gov/STAR/system/files/EPD%20Trigger%20Requirements%20v1.pdf

    3) Mike: First Gate Scan study.  This will need to be revised, as discussed on first page of this document.
    drupal.star.bnl.gov/STAR/system/files/GateScan_PrePostIndex2.pdf
    (This study under the Operations 2017 child page)

    Meeting on 3/13/2017
    1) Mike: TAC-selected spectra from run 18067001: drupal.star.bnl.gov/STAR/system/files/Run18067001_spectra.pdf
    Also, drupal.star.bnl.gov/STAR/system/files/FitValidTACadcSpectrum.pdf


    Meeting on 3/6/2017
    1) Mike: Spectra Fits from vbias 56.5V data: drupal.star.bnl.gov/STAR/system/files/MLisaEPDFits03062017.pdf

    2) Rosi: Vbias comparison: drupal.star.bnl.gov/STAR/system/files/h_CompareVolt03052017_Vbias56_5_v3.pdf
    How shall we set these values?  (Above needs to be repeated with the better vped values and with Jack's new Tier1 file.)


    Meeting on 2/27/2017
    1) Rosi - Tested the detector by increasing the Vbias in each channel by 5 V.  The dark current increased, one can see it at:
    drupal.star.bnl.gov/STAR/system/files/h_DarkCurrentHistogram_Nhours149.pdf
    The current increase is at the end.
    Plot of the pedestal per channel:
    https://drupal.star.bnl.gov/STAR/system/files/h_Ped18052002_v2.pdf
    Only one channel is "fat", and it should be determined whether the cause is the FEE, the QT, or the Receiver.
    Dark Current as of this morning:
    drupal.star.bnl.gov/STAR/system/files/h_DarkCurrent_Nhours288.pdf
    Gerard asked what happens if we decrease ymax to 0.3nA, see:
    drupal.star.bnl.gov/STAR/system/files/h_DarkCurrent_Nhours288_ymax03.pdf
    Lehigh schedule for WSL fibers:
    drupal.star.bnl.gov/STAR/system/files/epd_schedule_Lehigh_WSL.pdf

    Also note, Prashanth has added it to the online monitoring!

    2) Prashanth - Timing in, inclusion to StRoot

    3) Sam - Adding EPD to the STAR GEANT:
    drupal.star.bnl.gov/STAR/system/files/Epd_Geant.pdf

    Meeting on 2/20/2017
    1) Rosi -
    VPed Setting: Code is at
    /gpfs01/star/subsysg/EPD/VpedDet/
    Summary: drupal.star.bnl.gov/STAR/system/files/RReed_EPD_02192017_Vped.pdf
    1 set of fits: drupal.star.bnl.gov/STAR/system/files/h_Vped.pdf

    Vbias: Code is at
    /gpfs01/star/subsysg/EPD/IVScan
    Summary: drupal.star.bnl.gov/STAR/system/files/RReed_EPD_IVscan_02192017.pdf

    Dark Current: Code is at
    /gpfs01/star/subsysg/EPD/DarkCurrentMonitor
    Summary: drupal.star.bnl.gov/STAR/system/files/RReed_EPD_IVscan_02192017.pdf
    Scan as of 10:30 am: drupal.star.bnl.gov/STAR/system/files/h_DarkCurrentHistogram_EPD02202017.pdf
    Note: I did an IV scan, which we can see the evidence of.  Also, I am unsure why one of the boards is not showing any movement.  I will check on this during the week.

    Open questions: When can we be timed in? Shift crew instructions? Monitoring at: https://online.star.bnl.gov/epd/ ?  MuDst?

    2) Prashanth -
    online monitoring page here: https://online.star.bnl.gov/epd/

    Notes: Prashanth is discussing with Akio the best way to incorporate our detector into the MuDsts.  The question is where the mapping gets picked up; it is currently being picked up from the trigger data.
    Action items for Rosi:
    *Test the detector by setting Vbias = Vbias + 5 V to see that the dark current has increased.
    *Plot the pedestal distributions per channel.
    *Touch base with Akio/Tonko about the functionality of the standby mode and whether it is needed for pedestal runs.
    *Fix Vped for 3 channels not connected to an EPD tile
    *Touch base with Prashanth to see what software tasks can be distributed so he can concentrate on incorporating the EPD into the MuDst
    *Ask to have the IV Scan + Pedestals taken via the shift crew (pending the answer from Akio/Tonko).
    We also discussed the need for documentation; at the moment things are a bit chaotic.  Better documentation will improve our ability to distribute tasks.

    Old BlueJeans connection info.
     
    Meeting ID
    164817274
     
    Want to dial in from a phone?
    Dial one of the following numbers:
    • +1.408.740.7256
      (United States)
    • +1.888.240.2560
      (US Toll Free)
    • +1.408.317.9253
      (Alternate number)
    Enter the meeting ID followed by #
    Connecting from a room system?
    Dial: 199.48.152.152 or bjn.vc and enter your meeting ID

    Tuesday night EPD Management meetings

    Weekly meeting of Upgrades Coordinator (Elke Aschenauer, formerly Flemming Videbaek) and the EPD management team

    Tuesdays at 22:00 BNL time

    To join the meeting on a computer or mobile phone: https://bluejeans.com/659632750?src=calendarLink
    -----------------------------------
    Connecting directly from a room system?
    1) Dial: 199.48.152.152 or bjn.vc
    2) Enter Meeting ID: 659632750

    Just want to dial in on your phone?
    1) +1.408.740.7256 (US)
       +1.888.240.2560 (US Toll Free)
       +1.408.317.9253 (Alternate number)
        (http://bluejeans.com/numbers)
    2) Enter Meeting ID: 659632750
    3) Press #
    -----------------------------------


    Rahul's official shut-down schedule 2017/18:
    https://drupal.star.bnl.gov/STAR/blog/rsharma/star-shutdown-schedule



    5 Dec 2017 meeting
    12 Dec 2017 meeting:
    9 Jan 2018 meeting:

    EPD Slow-Control Manual


    Last Edited: 5/1/19 by Joey Adams


    Slow Controls:        https://dashboard1.star.bnl.gov/daq/EPD_UI/?EPD

    TUFF Controls:      130.199.60.221

    This Document:      https://drupal.star.bnl.gov/STAR/subsys/epd/epd-run-control-manual-0



    2019 Experts
     
              Joey Adams                   (614) 636-5773    adams.1940@osu.edu
              Annika Ewigleben             (202) 207-6243    jre315@lehigh.edu
              Prashanth Shanmuganathan     (330) 906-2019    sprashan@kent.edu    prs416@lehigh.edu
              EPD Apartment                (631) 344-1018

    Others to call in the event that the above are not reachable:
              Mike Lisa                    (614) 449-6005    lisa@physics.osu.edu
              Rosi Reed                    (408) 507-7802    rosijreed@lehigh.edu

    Page Layout
    1. Connecting to the GUI
    2. Setting permissions
    3. Turning on/off the SiPM high voltage
    4. Performing the daily IV scan
    5. Other GUI information
    6. TUFF Controls
    7. What to do if EQ crate won't configure
    8. What to do in the event that some tiles show a low <ADC>
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    !!!!!Preliminary warning!!!!!
    • If beam starts during the IV scan, then immediately go back to "PHYSICS" mode. Enter a note in the logbook if this happens.
    • You should never switch run states within ~30 seconds of already having done so. When you switch run states, a green circle starts loading in a small window entitled "SC State" on the left side of the screen. Wait until this is a full circle before switching to another state. 




    1. Connecting to the GUI
    The GUI is very memory-intensive and may run very slowly. If at any point it stops working and you need to restart the browser, the GUI is the default home page. (The System Monitor can be left open to check the memory usage of Firefox; if it is above 1.5 GB, the browser should be restarted.) If the GUI page does not show up after restarting, the URL is located at the top of this document, labeled "Slow Controls", and there is also a bookmark on the bookmark bar titled "EPD GUI". The first thing you should note any time you are using the GUI is whether or not it is connected.



    It will attempt to auto-connect on its own. If after ~20 seconds it still has not connected, try refreshing the page. If you have just restarted the browser, refresh immediately; you will need to re-enter the username and password before it will connect. These should be auto-filled. If not, the username is "protected" (without quotes). If you do not know the password, ask the Shift Leader.

    If, after restarting the browser, refreshing the page, and waiting a sufficient amount of time, you are still not connected, refer to the end of this document (TUFF Controls) for troubleshooting. This may happen if STAR loses power: by default the power supply is not set to turn back on automatically.



    2. Setting Permissions

    If the GUI has been reset, it may start in ReadOnly mode.




    To change this, click on ReadOnly in the upper-right. This will bring up a drop down menu. Mouse over "Permissions" to see another list.




    In general, you should use the DetectorOperator setting. This will enable you to operate the EPD under normal running conditions. In principle, it is possible for anyone to select Expert, but only do this if you need to: if you need to change a setting and the GUI says you must be in Expert mode, that is okay, but inform the Shift Leader and make a note in the Shift Log. When Expert is selected, permission is only granted for a limited time before reverting to DetectorOperator mode.

    To make the drop-down menu go away, click again on the current setting in the upper-right (DetectorOperator, Expert, or ReadOnly). The menu will not go away by clicking anywhere else.


    3. Turning on/off the SiPM high voltage

    To turn on the high voltage, first click the "PHYSICS" button on the top bar, and then click the "PHYSICS" button in the upper-left corner.



    In the event that the high voltage across the SiPMs needs to be turned off, first click the "OFF" button on the top bar, and then click the "OFF" button in the upper-left corner. This should only be done if specifically asked by an expert. The EPD will generally be left in Physics mode without ever being turned off.



    4. Performing the daily IV scan

    An IV scan should be performed once per day, during other detectors' scans, when there is no beam. It may take up to 30 minutes. Whenever an IV scan is done, a note should be made in the shift log. Click the "IVSCAN" button on the top bar, followed by the "IVSCAN" button in the upper-left corner. The current alarm may trip during an IVSCAN; this is normal and can be ignored. The final voltage for the IVSCAN is 65.4 V.



    IMPORTANT: After the IVSCAN is done, it should automatically switch back into PHYSICS mode. If it doesn't, or if the SC State box is stuck with a half yellow circle trying to switch into PHYSICS, manually move the detector back into PHYSICS mode as normal. Make a note in the "IV Scan Log" if this happens.



    If need be, the EPD may be switched back to "PHYSICS" mode during an IV scan without harm (just be sure to wait for the EPD to be fully in IV scan mode by checking that the green circle in the "SC State" box on the left side is complete). Enter a note in the logbook if this happens.


    5. Other GUI information

    Color-scale bounds can be set by moving the slider or entering values.



    Measured values that cross an alarm threshold lead to an audible alarm and flashing tile(s). Hovering over a tile shows the associated warning.



    Alarms can be silenced (for 15 minutes) or unsilenced by double clicking the tile; muted tiles show black “X”s over them.



    Entire supersectors can be silenced or unsilenced by double clicking on the names (e.g. “E12”) and entire wheels can be silenced or unsilenced by double clicking the middle.



    A short log is shown at the bottom, and a longer list can be brought up from the bottom-right corner.

     

    Alarm thresholds can be set in expert mode (experts should be consulted first).



    The threshold for current alarms should be 2 microamps (subject to change as the run continues). The threshold for temperature should be 45 degrees Celsius. The thresholds for Rmon should be 99 and 0.0 for the maximum and minimum, respectively.

    6. TUFF Controls

    The TUFF should only need to be accessed if power to STAR has been lost or if the power has been turned off for any reason. There should be a bookmark on the browser bookmark bar called "CyberPowerManag...", otherwise it can be accessed by entering 130.199.60.221 into the browser. The username and password are "cyber" and "p4*EPD!!" .




    Under Status on the main page, you can check which outlets are on and off. If any of #1, 2, 3 or 4 are off, they will need to be turned on. Nothing is plugged in to outlets #5-8, so these can be ignored.

    To turn on/off/reboot the outlets, click Outlet Control on the left side of the screen. Select which outlets need to be turned on or rebooted (only #1-4 are hooked up, but all can be selected). Either Sequenced or Immediate can be selected for the delay. Once you click Apply, a new page will pop up and will auto-refresh once the outlets have changed to the new setting.





    !!!!!IMPORTANT!!!!!

    Any time the TUFF power has been reset, a "1WIRE_SCAN" must be performed before any of the other controls will work. This is done in the same way as the IVSCAN or going into PHYSICS mode. Click "1WIRE_SCAN" on the top bar, and then click "1WIRE_SCAN" on the box in the upper-left corner. If you click in the empty space below the "SC State" box, another box will appear/disappear. After performing a 1WIRE_SCAN, it should list the "FEEs Found" at 48. If it finds any number of FEEs other than 48, contact an expert.






    7. What to do if EQ crate won't configure

    If the shift crew is trying to start a run but it is stuck in configuration because of eq1, eq2, or eq3 (which would show a red "W" next to it), then:
         1. Stop the run
         2. Power cycle that crate
              (eq1 <--> VME 99,
               eq2 <--> VME 100,
               eq3 <--> VME 64),
         3. Try starting another run




    8. What to do if some tiles show a low <ADC>
    Sometimes the QA plots will suddenly start showing one or more tiles with a low <ADC>.

    If this is seen in one or a few tiles, take a pedestal run. If two attempts at fixing it with a pedestal run do not work, call an EPD expert.


    Printing This Document

    Without following this printing procedure, the images will come out pixelated with text barely readable. I am using Chrome as the browser and Adobe Acrobat as my PDF viewer.
    1. In Chrome, highlight everything in the document
    2. Print to a PDF (go to "Destination" and select "Save as PDF", change the Layout to "Landscape", then go to "More Settings", "Options", and then select only "Selection Only")
    3. In Adobe Acrobat, select the printer icon, go to "Advanced", select "Print as Image" and select 600 dpi
    4. You can connect to the printer in the control room via the USB




    Final Note

    For any other issues not covered in this document, please contact the Experts listed above. In general, first try normal solutions, i.e., rebooting/power-cycling crates, etc. The EPD should be hands-off: it only requires an IVSCAN once per day and needs to be turned back on after a power failure. Otherwise it is left in PHYSICS mode at all times and does not need to be powered on or off. Alarms for current will trip whenever there is a beam dump, when the beam sweeps into the EPD, during APEX, etc.; this is normal. The current alarm is only a concern if it stays on for an extended period and nothing odd is happening with the beam.

    ________________________________________________________________________________________________________________________________________________________________









    EPD-GSTAR

    1st iteration by Sam:

    2nd iteration by Prashanth:
    Added tile structure:
    Dropbox link for log book:
    https://www.dropbox.com/s/73ddkbhnmvpnpc1/vsp_gstar_071717.pdf?dl=0

    3rd Iteration by Prashanth
    Changes:
    1. Tile coordinates are from Solid Works.
    2. Implemented thinner part of the tiles.
    3. More realistic tile 1

    Blue: regular 1.2 cm thick tiles.
    Green: thinner parts of tiles.
    Red: Tile 1 lower side (trapezoidal).
    Magenta: Tile 1 triangular part.

    Here are the detailed calculations:

    https://drupal.star.bnl.gov/STAR/system/files/SuperSectorCorners.xlsx

    IV Scan

    Individual's pages

    This area provides a place where individuals can post their EPD-related studies.  See the "child pages" below.


    Cosmic tests at BNL

    Cosmic tests at BNL - strange results (Dec. 7, 2017)


        All of the relevant plots, only a few of which are shown in this blog as examples, can be found here: https://drupal.star.bnl.gov/STAR/blog/adams92/te-chuan-huangs-plots-cosmic-tests-bnl

    So far, here at BNL, we have taken 7 full cosmic runs. The cosmic runs use 4 large (~0.1 m wide by ~1 m long) paddles and test 3 supersectors. One pair of paddles sits above the supersectors and another pair sits below. They can only cover half of a supersector, so we do one run with the odd tiles covered and a separate run with the even tiles covered. The paddles can be used to determine the position and incident angle of each cosmic ray; however, we have been using the supersectors themselves as offline triggers to isolate vertical cosmic rays (as was done at OSU). Some more detail is given in my presentation from this week's EPD meeting: 
    https://drupal.star.bnl.gov/STAR/system/files/Cosmic%20Ray%20Testing%20at%20BNL.pdf 


    We realized earlier today that two of the 7 cosmic runs have strange results. These two runs were with (in one configuration) supersectors 21, 29, and 30 and (in another configuration) supersectors 23, 25, and 27. The strange results are:
         
    1.1) The "zero" channels, which are non-existent supersector tiles, show what seem to be MIP peaks. There should of course be nothing.
    [plot: SS21_TT00]

                 For all 6 supersectors in the aforementioned "problem" runs, the tile 0 ADC distributions look like this.

    1.2) For the configuration with supersectors 02, 03, and 04, the tile 0 ADCs also show what seems to be a very small MIP peak.
    [plot: SS02_TT00]
            

                 For all 3 supersectors, this is seen in tile 0. In fact, this configuration (even tiles of supersectors 02, 03, and 04) was taken twice, and each 24-hour run shows this behavior in all 3 supersectors.

         2.1) The ADC distributions do not represent standard MIP peaks. There is a bump to the right of the peak.
    [plot: SS21_TT16]

                 You guessed it: this is seen for all even tiles of supersectors 21, 23, 25, 27, 29, and 30. Supersectors 02, 03, and 04, which had the very small tile 0 signals, do not show this bump.

    2.2) The bump to the right of the peak persists (is not reduced in height) after a vertical cosmic ray cut for the small tiles, but it is significantly reduced for the larger tiles.
                  This is simply attributed to the fact that larger tiles have more statistics, so the shoulder is drowned out.
    [plot: SS21_TT02_vertical]

    [plot: SS21_TT30_vertical]

                 At this point, I feel like I don't have to say that this behavior is true of supersectors 21, 23, 25, 27, 29, and 30.



    Some notes:
          1) The same fiber bundles are being used for all runs.

          2) The same ADC channels, receiver cards, and trigger paddles are being used for the even and the odds.

          3) Vertical cosmics are determined offline from the supersectors by requiring hits above some threshold in the same tile number of each supersector.

          4) At OSU, we saw some random spikes and drops in the ADCs that were generally short lived but significant in magnitude. Here, they have been pretty stable.
    [plot: SS21_TT02_strip]

    [plot: SS25_TT02_strip]

          5) At OSU, when performing the heat map tests at high bias voltage (~66 V), we saw the dark current walk in each tile independently.
    [plot: DarkWalk_TT01]

    [plot: DarkWalk_TT03]

                   A profile histogram showing all odd tile dark currents walking on average:
    [plot: DarkWalk_FEE5]



          6) (Dec. 8th) Tim Camarda and I looked inside the FEE box to make sure everything was connected properly. We found nothing apparently wrong with the setup and proceeded to switch the FSCs in FEEs 5 and 6 (as well as their corresponding RX cables in the back of the corresponding RX card) to read the even tiles out of an "odd" FEE (i.e. a FEE previously used to measure only odd tiles). We didn't take enough data to see something wrong with the ADC distributions, but we DID see a non-pedestal signal in tile 0. I can't imagine there is anything physically creating a signal (let's just take that probability to be 0), so this implies that whatever problem we are seeing exists in ALL FEE cards and only happened to show up during the even runs.

    7) The transmissions of the fiber bundles we are using here at BNL are:
                   FB 10:  -62%
                   FB 11:  -62%
                   FB 17:  -63%

             The transmissions of the fiber bundles used at OSU are:
                   FB 02:  -55%
                   FB 03:  -54%
                   FB 04:  -63%
                   FB 05:  -62%


          8) (Dec. 11th) It was realized in this week's EPD meeting that events with more than a few tiles firing were not thrown out; this cut was being applied for the cosmic testing at OSU. Te-Chuan Huang plotted the ADC_{Tile 0} vs the ADC_{Tile X}, and we see three main features: a pedestal (which is all that we should get in theory), a diagonal line (which likely is the shoulder we see in the "big" problem even tiles), and some fuzz in-between. 
    [plot: TOP_SS212930]

    [plot: MID_SS212930]






    [plot: BOT_SS212930]






                   This is plotted against ADC_{Tile 2}, but the same behavior exists for all tiles. The slopes are not the same in every case, but the slope associated with RxMID is consistently lower than the slopes of RxTOP and RxBOT. The diagonal is likely associated with all channels firing at once.

                   For the "small" problem runs, we see the same features: a pedestal, a diagonal, and some fuzz in-between. The difference is that there are fewer "diagonal" or "fuzz in-between" events in these runs.
    [plot: TOP_SS020304]


                   It seems likely that the "small" and "big" problems are really the same, time-varying problem; that this problem just so happened not to be present during the odd runs, to be more dominant during the first two even runs, and not to be as dominant during the following even runs.

    9) (Dec. 12th) In order to rule out cross talk at the FSC, I produced a cross-talk heat map including tile 0. The "Excess Current With Source [underneath]" obviously doesn't make sense for tile 0, so I took it to be 0.1 uA (other tiles typically have an excess of 0.5 uA) to make the colors more discernible. 
    [plot: FSC_xTalk]

    We see from the bottom row (the only relevant part of the plot) that there is just noise. If FSC cross-talk were present, we would see it between tile 0 and tiles 1, 2, 8, 9, and 10; we do not see that here.

    10) (Dec. 12th) In order to see whether the problem is isolated to a given FEE/Rx card or is global, ADC_{tile 0, RxTOP} was plotted against ADC_{tile X, RxMID} and ADC_{tile X, RxBOT}.
    [plot: TOP0vsMIDandBOT]

    There are no noticeable differences between these plots and those within the same Rx card (see point 8), indicating that this is a global problem.

    11) (Dec. 12th) In an attempt to remove the diagonal and "fuzz" addressed in point 8, a cut was applied to throw away events with more than 10 tiles in a given supersector reading hits. This removed most of the diagonal and fuzz. 
    [plot: ADC_10hitCut]


                   There is a soft cutoff in ADC_{tile 0} for each plot. 

    12) (Dec. 12th) Just to make sure that this problem is not isolated to tile 0, ADC_{tile 10} was plotted against ADC_{tile X} with the cut mentioned in point 11.

    [plot: ADC_tile10vsTileX]

                   Unsurprisingly, we see that this problem is not isolated to tile 0.




    ___________________________________________________________________________________________________________________________________________________________________________________________


    Isaac Upsal

    Generally stuff is on my blog (https://drupal.star.bnl.gov/STAR/blog/iupsal), so I'll provide links to EPD stuff here.

    Maybe useful references:

    pEPD darkcurrent measurements showing radiation damage:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/sipm-dark-current-2015-pepd

    EPD cosmic tests (from Joey):
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-supersector-cosmic-tests

    EPD v2 at 54GeV:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/first-look-epd-v2

    EPD centrality NN study for 54GeV data:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/epd-centrality-nn-study

    BBC tile size crosscheck (the BBC page is wrong)
    https://drupal.star.bnl.gov/STAR/blog/iupsal/bbc-tile-size

    Probably not interesting in the future:

    GEANT stuff:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/epd-geant-simulation-update

    pEPD cosmic testing etc.:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-testbench-bes-meeting-feb23-2015
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-testbench-page-numbers
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-testbench-epd-meeting-feb25-2015
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-testbench-bes-meeting-march9-2015
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-epd-testbench-epd-meeting-march11-2015

    pEPD FEE box geometry and cabling:
    https://drupal.star.bnl.gov/STAR/blog/iupsal/pepd-2016-fee-box-geometry-and-cabling

    OSU fiber polishing (Joey and Keith):
    https://drupal.star.bnl.gov/STAR/blog/iupsal/osu-fiber-polishing-epd

    EPD FEE box basics design and schematic (design in these documents is not final):
    https://drupal.star.bnl.gov/STAR/blog/iupsal/epd-box-design
    https://drupal.star.bnl.gov/STAR/blog/iupsal/epd-fee-box-schematic

    Presentations given outside of EPD meetings:

    October 2014 Upgrade workshop (this is all Alex, not me. I just thought it should be uploaded somewhere since I don't see it on the meeting page):
    (attached)

    November 2014 collaboration meeting:
    https://drupal.star.bnl.gov/STAR/meetings/star-collaboration-meeting-november-3-7/plenary-session-i/epd

    June 2015 Collaboration meeting (shared with Michael Lomnitz):
    https://drupal.star.bnl.gov/STAR/content/epd-status

    November 2017 Analysis meeting, Yang and I talk about 54GeV flow using the EPD during the BulkCorr meeting:
    https://drupal.star.bnl.gov/STAR/meetings/star-fall-2017-analysis-meeting/bulk-correlations-parallel-2/epd-54-gev

    February 2018 Collaboration meeting, I use a bit of my BulkCorr spot to talk about centrality (this is less detailed than the EPD meeting presentation):
    https://drupal.star.bnl.gov/STAR/meetings/star-collaboration-meeting/bulkcorr/mag-field-r17

    Justin Ewigleben

    Justin Ewigleben's studies

    Prashanth

    Prashanth pages

    Robert Pak

    Robert Pak's studies

    Rosi Reed

    Saehanseul Oh


    Sam Heppelmann

    Sam Heppelmann pages

    Te-Chuan Huang

    Te-Chuan Huang studies

    First results of autumn 2017 cosmic tests at STAR

    The first vertical cosmics results:
     - Landau fits: STAR/system/files/userfiles/3886/ADC_SS252723_odd.pdf
     
    - presentation: drupal.star.bnl.gov/STAR/system/files/TeChuan_EPD_cosmic_20171204_0.pdf

    Vertical cosmic rays of EPD with SS08181007 configurations

    ReadMe.txt

    -from top to bottom
     
    SS08 = FEE05
    SS18 = FEE03
    SS10 = FEE02
    SS07 = FEE04
     
     even/odd  config#  #triggers  date     Wlodek_under   Beavis_under    pp logbook  notes
     --------  -------  ---------  ----     ------------   ------------    ----------  -----
     odd       1        618k       8/22-24  1,3,5,7        15              60-62       1
     odd       2        ?          8/24-25  27,29,31       11,13           62-63       2
     odd       3        322k       8/28-29  21,23,25       17,19           80-81
     odd       4        245k       9/1-2    13,15,17       29,31           87-88
     
     even      1        201k       9/2      22,24,26       28,30           88          3
     even      2        198k       9/2-3    12,14,16       18,20           88-89
     even      3        360k       9/3-4    2,4,6          8,10            89,92
     
    Notes:
     
    1] In order to look at tile #1, we have to re-cable (since we have only 60 ADC channels
    but 64 odd-numbered tiles).  Therefore, for this configuration:
    FEE02TT01 ---> ADC in slot 5 channel 2   (where FEE02TT31 normally goes)
    FEE03TT01 ---> ADC in slot 9 channel 5   (where FEE03TT31 normally goes)
    FEE04TT01 ---> ADC in slot 15 channel 8  (where FEE04TT31 normally goes)
    FEE05TT01 ---> ADC in slot 22 channel 2  (where FEE03TT17 normally goes) <--- ATTENTION! Different!
     
    2] CAMAC tripped *and* Rosi's mac froze.  Nevertheless, I sent the latest autosave
    file to Joey (and Rosi I think).
     
    3] For the first time, we are using a "virgin SiPM" (one not irradiated in run 2017)
    on the even-tiles of a SS.

    - FEE strips:
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007odd_Configuration1_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007odd_Configuration2_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007odd_Configuration3_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007odd_Configuration4_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007even_Configuration1_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007even_Configuration2_Strips_FEE.pdf
    https://drupal.star.bnl.gov/STAR/system/files/SS08181007even_Configuration3_Strips_FEE.pdf

    - presentation:
    https://drupal.star.bnl.gov/STAR/system/files/TeChuan_EPD_cosmic_20170918_3.pdf

    - ADC fits
    https://drupal.star.bnl.gov/STAR/system/files/ADC_2.pdf


    Vertical cosmic rays of EPD with SS14081516 and SS17181920 configurations

     SS14081516 configuration:

    This is the fourth set of cosmic tests done.
     
    SS14 = FEE01  (Note that the hand-written label on the card says "7", but that is irrelevant here.)
    SS08 = FEE02
    SS15 = FEE03
    SS16 = FEE04
     
    >->->->->->->->  This set is the first to use this new FEE mapping.  See page 111 in log book.
     
    even/odd  config#  #triggers  date     Wlodek_under   Beavis_under    pp logbook  notes
    --------  -------  ---------  ----     ------------   ------------    ----------  -----
    odd       1        305k       9/13-14  23,25,27       29,31           111-114
    odd       2        475k       9/14-15  1,3,5,7        9,11            114-115     1
    odd       3        520k       9/15-16  13,15,17       19,21           115-116
     
    even      1        500k       9/16-17  2,4,6,8        10,12           116,119
    even      2        183k       9/17     22,24,26       28,30           119
    even      3        238k       9/17-18  14,16,18       20,22           119,120
     
    Notes:
     
    1] In order to look at tile #1, we have to re-cable (since we have only 60 ADC channels
    but 64 odd-numbered tiles).
    FEE01TT01 ---> ADC in slot 5 channel 2   (where FEE01TT31 normally goes)
    FEE02TT01 ---> ADC in slot 9 channel 5   (where FEE02TT31 normally goes)
    FEE03TT01 ---> ADC in slot 15 channel 8  (where FEE03TT31 normally goes)
    FEE04TT01 ---> ADC in slot 22 channel 11 (where FEE04TT31 normally goes)
     
    Note that, unlike previous cosmic set-ups, "tile 1 goes where tile 31 normally goes"

    SS17181920 configuration:

    This is the fifth set of cosmic tests done.
     
    SS17 = FEE01  (Note that the hand-written label on the card says "7", but that is irrelevant here.)
    SS08 = FEE02
    SS19 = FEE03
    SS20 = FEE04
     
    >->  This mapping is described on page 111 in log book.
     
    even/odd  config#  #triggers  date     Wlodek_under   Beavis_under    pp logbook  notes
    --------  -------  ---------  ----     ------------   ------------    ----------  -----
    even      1        306k       9/19-20  22,24,26       28,30           123
    even      2        475k       9/20-21  12,14,16       18,20           123,127
    even      3        543k       9/21-22  2,4,6          8,10            127-128
     
    odd       1        485k       9/22-23  1,3,5,7        9,11            128-129     1
    odd       2        584k       9/23-24  27,29,31       23,25           129,131-132 2
     
     
    Notes:
     
    1] In order to look at tile #1, we have to re-cable (since we have only 60 ADC channels
    but 64 odd-numbered tiles).
    FEE01TT01 ---> ADC in slot 5 channel 2   (where FEE01TT31 normally goes)
    FEE02TT01 ---> ADC in slot 9 channel 5   (where FEE02TT31 normally goes)
    FEE03TT01 ---> ADC in slot 15 channel 8  (where FEE03TT31 normally goes)
    FEE04TT01 ---> ADC in slot 22 channel 11 (where FEE04TT31 normally goes)
     
    2] Te-Chuan has been seeing essentially zero vertical cosmics for TT30,31.
    This could be an index problem in his code (i.e. bug), but in principle
    it could be caused by us not putting the trigger detectors all the way
    under the largest tiles. (They are at the edge of the shelf, so we might
    be "shy" to put the trigger detectors there.)  For odd_Configuration2,

    presentation:
    https://drupal.star.bnl.gov/STAR/system/files/TeChuan_EPD_cosmic_20170925_0.pdf



    dummy

     

    Xinyue Ju

    Xinyue Ju from USTC

    Known issues and problems

    [On this page, red, bold text indicates action needed.]

    Since the construction of the EPD, there have been problems here and there. Usually, we resolve them in real time, then move on with our lives thinking we'll remember what we did. This page will hopefully solve those "What did we decide to do about those QA-plot holes?" and "Wasn't there a mapping change or something?" moments. This shouldn't be a place for discussion (that is reserved for star-epd-l@lists.bnl.gov), but rather for descriptions of problems encountered, along with the solutions and the rationale behind them. If you see something that should be here, please add it here and put a reminder on the EPD Google Calendar. This will make all our lives so much easier!


    Description and link | Decision/Result | EPD Calendar reminder date (if action required)
    Dark current oscillating in multiple channels | Revisit after run19 | 2019-07-13
    "Y joints" on fiber bundles breaking | Reinforcement design needed | 2019-06-17
    Scattered cold tiles in QA plots | Replace QT32B daughter board |
    4-tile hole in <ADC> and <TAC> QA plots | Daughter card (EQ3, 0x1C, DC2) replaced |
    Channel measuring cosmics out of Rx board, but blue in QA plots and not showing any MIP peaks | Bad QT32B connector (EQ1-0x10-C) replaced |
    EPD cable weight between Rx boards and QT boards | Regularly check cable tension between Rx and QT boards | 2019-06-17
    Totally dead tile | Switched the connector from EQ1-0x1D-B channel 3 to EQ3-0x1D-D channel 1 |
    Totally dead tile | Fixed a bent ground pin |
    ... | ... |

     

    Mapping


    Here we keep pages (they are "child pages" to this one) relating to mapping.  This includes hardware maps of detector components to electronics and software maps of tiles to physical locations.


    The current mapping is: https://drupal.star.bnl.gov/STAR/system/files/EPD_mapping_4crate_05312023.xlsx






    Old files below.

    Geometry information for use in codes



    Important!

    The information on this page can be very helpful, but if you are running root (not necessarily root4star), please use the StEpdGeom class.  It has all the geometrical information you will ever need.
    It is part of the StEpdUtil package (on RCAF, 'cvs co StRoot/StEpdUtil'), which also includes
    StBbcGeom (the geometry for the BBC, obviously), and StEpdEpFinder, which finds the Event Planes for you.





    One often wants the phi and eta for a given tile. 

    In the attached file TestGeometry.txt there are two Double_t arrays that you can cut and paste into your code.  It is a root macro. (drupal does not allow .C extension, so it is uploaded as .txt)

    Double_t EpdTilePhi[2][12][31]
    • first index is EW=0/1  - 0 means East wheel and 1 means West wheel
    • second index is PP (position number)
    • third index is TT (tile number)
    Double_t EpdTileEta[2][31]
    • first index is EW
    • second index is TT (tile number)
    The macro also produces some pictures so you can check the validity of the geometry.  Pictures are attached below.  When you are looking to the west, be careful because -x points to the right!
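
    A minimal usage sketch of those arrays (the zero-filled placeholders below just stand in for the values pasted from TestGeometry.txt, and the PP-1/TT-1 offsets assume the second and third indices start at 0 for PP=1 and TT=1; check the macro itself for the exact convention):

        #include <cstdio>

        // In real use, paste the two arrays from TestGeometry.txt here (Double_t in the macro).
        double EpdTilePhi[2][12][31] = {};  // [EW][PP-1][TT-1], values from TestGeometry.txt
        double EpdTileEta[2][31]     = {};  // [EW][TT-1],       values from TestGeometry.txt

        int main() {
          int ew = 0;   // 0 = East wheel, 1 = West wheel
          int pp = 4;   // position number, 1..12
          int tt = 7;   // tile number,     1..31
          // PP and TT are numbered from 1 in the documentation, so subtract 1 for the array index.
          double phi = EpdTilePhi[ew][pp - 1][tt - 1];
          double eta = EpdTileEta[ew][tt - 1];
          std::printf("EW=%d PP=%d TT=%d : phi = %f, eta = %f\n", ew, pp, tt, phi, eta);
          return 0;
        }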




    The numbering of the supersector positions (PP) is such that it follows a clock face when seen by particles flying into the detector, i.e. when viewed from (0,0,0) in the TPC.  If you look at an EPD wheel from the outside, the numbering goes "counter-clockwise."  See the description in the StEpd software documentation for more discussion of this.

    Keep in mind the official STAR Coordinate system:
    • +x points South
    • +y points up (thank goodness)
    • +z points West
    Some very useful sketches of STAR detectors with coordinate system may be found here.  Probably the most useful image is reproduced below:






    EpdTileCenterRadii_v4.ods is the spreadsheet which was used to calculate the geometric center of the various tiles. This center location is not trivial because the supersector design makes all but the first of the tiles asymmetric, with one side higher than the other. An image is embedded in the spreadsheet where the basic trig calculations are done. The goal was to get these values for the pseudorapidity, so, for simplicity, the supersector was assumed to be centered on the y coordinate, which is not true for any actual supersector. Thus, if one wants the proper phi coordinate, it is necessary to rotate the coordinates by (15 degrees)/2 for the first sector of the first supersector.

    Additionally, this spreadsheet was used to calculate the eta of the edges of each tile, as will appear in the EPD NIM paper. A version of such a table was in the EPD construction proposal, but there appears to be some mistake in that calculation.

    The scintillator of the EPD is assumed to start 4.6cm from the beamline, as designed. For reasons unclear to me the construction proposal has 4.5cm for this distance, but the difference is negligible. In reality there is a 1.65mm-thick epoxy gap between the tiles. This is not part of the calculation. One tile is assumed to end along the center of the gap and the next tile begins along the same line.
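
    For orientation, here is a minimal sketch of the pseudorapidity calculation behind the spreadsheet. The wheel distance and outer radius used below are only nominal placeholders (they are not given on this page; take the exact values from the drawings or the NIM paper), while 4.6 cm is the inner scintillator radius quoted above:

        #include <cmath>
        #include <cstdio>

        int main() {
          // Nominal numbers, for illustration only:
          double zEPD = 375.0;   // assumed distance of the wheel from the nominal IP, in cm (placeholder)
          double rIn  = 4.6;     // inner edge of the scintillator, in cm (as designed)
          double rOut = 90.0;    // outer radius, in cm (placeholder, not from this page)

          // eta = -ln(tan(theta/2)) with theta = atan(r/z)
          double etaIn  = -std::log(std::tan(0.5 * std::atan2(rIn,  zEPD)));
          double etaOut = -std::log(std::tan(0.5 * std::atan2(rOut, zEPD)));
          std::printf("eta at r = %.1f cm : %.2f\n", rIn,  etaIn);
          std::printf("eta at r = %.1f cm : %.2f\n", rOut, etaOut);
          return 0;
        }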

    Hardware mapping for Run 2017

    Hardware map for run 2017

    It is important to maintain a record of the correspondence between (physical) supersector number (SS) and position on the detector (PP) in any given run.

    Also needed is the correspondence between tile number (TT) on a supersector and the QT board associated with it.

    Prashanth Shanmuganathan (sprashan@kent.edu) maintains this information for the 1/8 Installation in the 2017 run.  (updated 5 Dec 2017 by Mike to show fiber bundle # also)

    position on the East side (PP) | physical SS # | Fiber bundle #
    4 o'clock | 3 | 3
    5 o'clock | 4 | 4
    6 o'clock | 2 | 2



    FEE cards and associated tiles
    FEE # | tiles | one-wire ID
    1 | PP 4 odd-numbered tiles | 0xFF00000007F6043A
    2 | PP 4 even-numbered tiles | 0xFE0000003263413A
    3 | PP 5 odd-numbered tiles | 0x0800000032C3333A
    4 | PP 5 even-numbered tiles | 0x1100000032C0493A
    5 | PP 6 odd-numbered tiles | 0x4800000032C04A3A
    6 | PP 6 even-numbered tiles | 0xA700000032C3423A








    Tile FEE# = Rx# FEE channel
    Tiles in supersector at 4 o'clock
    1 1 0
    3 1 1
    5 1 2
    7 1 3
    9 1 4
    11 1 5
    13 1 6
    15 1 7
    17 1 8
    19 1 9
    21 1 10
    23 1 11
    25 1 12
    27 1 13
    29 1 14
    31 1 15
    2 2 1
    4 2 2
    6 2 3
    8 2 4
    10 2 5
    12 2 6
    14 2 7
    16 2 8
    18 2 9
    20 2 10
    22 2 11
    24 2 12
    26 2 13
    28 2 14
    30 2 15


    Tile FEE# = Rx# FEE channel
    Tiles in supersector at 5 o'clock
    1 3 0
    3 3 1
    5 3 2
    7 3 3
    9 3 4
    11 3 5
    13 3 6
    15 3 7
    17 3 8
    19 3 9
    21 3 10
    23 3 11
    25 3 12
    27 3 13
    29 3 14
    31 3 15
    2 4 1
    4 4 2
    6 4 3
    8 4 4
    10 4 5
    12 4 6
    14 4 7
    16 4 8
    18 4 9
    20 4 10
    22 4 11
    24 4 12
    26 4 13
    28 4 14
    30 4 15


    Tile FEE# = Rx# FEE channel
    Tiles in supersector at 6 o'clock
    1 5 0
    3 5 1
    5 5 2
    7 5 3
    9 5 4
    11 5 5
    13 5 6
    15 5 7
    17 5 8
    19 5 9
    21 5 10
    23 5 11
    25 5 12
    27 5 13
    29 5 14
    31 5 15
    2 6 1
    4 6 2
    6 6 3
    8 6 4
    10 6 5
    12 6 6
    14 6 7
    16 6 8
    18 6 9
    20 6 10
    22 6 11
    24 6 12
    26 6 13
    28 6 14
    30 6 15
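
    The three tables above follow a simple pattern; a minimal sketch of that pattern (read off the tables for the Run 2017 1/8 installation, not taken from the official mapping files) is:

        #include <cstdio>

        // Pattern read off the three tables above:
        // odd-numbered tiles sit on the odd FEE of a pair, even-numbered tiles on the even FEE.
        // Odd tile TT  -> FEE channel (TT-1)/2  (TT=1 -> 0, TT=3 -> 1, ..., TT=31 -> 15)
        // Even tile TT -> FEE channel  TT/2     (TT=2 -> 1, TT=4 -> 2, ..., TT=30 -> 15)
        int feeChannel(int tt) {
          return (tt % 2 == 1) ? (tt - 1) / 2 : tt / 2;
        }

        int main() {
          for (int tt = 1; tt <= 31; ++tt)
            std::printf("TT %2d -> %s FEE, channel %2d\n",
                        tt, (tt % 2 ? "odd" : "even"), feeChannel(tt));
          return 0;
        }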




    Please see attached mapping for more details.
    Three Spreadsheets are included:
    • Mapping Tiles
    • Mapping FEEs
    • Definitions
    A text version of the mapping is also attached, and epdMapReader.txt is the macro to read the map. Please change the extension to '.C'; Drupal does not allow uploading files with a .C extension.


    Inputs to 0x16 (QTC) and 0x18 (QTB) were switched during the access on 04/05/2017. The new mapping is attached (EPD_mapping_04052017.xlsx).

    Mapping as of May 25 2017

    There have been frequent mapping changes, cable swaps, etc., and things can get confusing.  This is the mapping as of 25 May 2017, and we (tentatively) plan to keep it this way at least through the Au+Au 53 GeV running.  (Edit: nope, we remapped one more time.  See drupal.star.bnl.gov/STAR/subsys/epd/mapping/mapping-53-gev-auau-run-30-may-2017)

    • FEE cards:
      • Five FEE cards are the original versions.  One-wire codes 0xA700000032C3423A, 0x4800000032C04A3A, 0x0800000032C3333A, 0x1100000032C0493A, 0xFE00000032C3413A.
      • And we have one new FEE card (one-wire code 0xE700000032C03B3A) with 2.5x higher gain.  This will be the version we use going forward.
    • We have four QT boards
      • Two are QT32B boards with no TAC (hence up to 32 tiles served, with ADC only).  These are addresses 0x10, 0x12.
      • One is a QT32B board with TAC (hence up to 16 tiles served, with ADC and TAC).  This is address 0x18.
      • One is a QT32C board with TAC (hence up to 16 tiles served, with ADC and TAC).  This is address 0x16.  Here, things get even a little more complicated as regards the TACs:
        • Eight of the TAC channels (4, 5, 6, 7, 28, 29, 30, 31, numbering starting at 0) are the original versions, which showed problems (spikes, etc) with the low-gain FEEs
        • Four of the TAC channels have improvements implemented by Steve in April/May.  These are channels 12, 15, 20, 23, numbering starting at 0.
        • Four of the TAC channels are disabled (for technical reasons related to fixing the channels described above).  These are 13, 14, 21, 22.

    And, just for kicks, an additional complication: the signal from PP4TT15 is split between a QT32B [with TAC] (0x18, ADC/TAC = 8/12) and a QT32C (0x16, ADC/TAC = 19/23).

    Steve Valentino made a very nice "cheat sheet" showing QTs 0x18 and 0x16, shown here:

    I have made two maps showing this mapping from the "detector point of view."  The first one shows simply the "Universal ID," which is 100*PP+TT (both numbered starting at 1), and the second one also shows the QT address and the QT channels (starting from zero) of the ADC and TAC.  A TAC address of -1000 means there is no TAC.  They are below.  If you need to use these maps, you may want to download the file (see bottom of page) in PDF or PNG and blow it up.
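
    A minimal sketch of encoding and decoding that "Universal ID" convention (100*PP + TT, both starting at 1):

        #include <cstdio>

        // Universal ID = 100*PP + TT, with PP (1..12) and TT (1..31) both starting at 1.
        int universalId(int pp, int tt) { return 100 * pp + tt; }
        int ppFromId(int uid)           { return uid / 100; }
        int ttFromId(int uid)           { return uid % 100; }

        int main() {
          int uid = universalId(4, 15);   // PP4, TT15 -> 415
          std::printf("PP4 TT15 -> Universal ID %d\n", uid);
          std::printf("Universal ID %d -> PP%d TT%d\n", uid, ppFromId(uid), ttFromId(uid));
          return 0;
        }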


      

    Mapping for the 53 GeV Au+Au run - 30 May 2017

    For the final push of the 2017 run, we measure Au+Au with a mapping slightly different than the 25 May mapping.  Here it is:

    PSSST!  Wanna know how this nice graphic was generated?  The macro is attached at the bottom of this page, or you can just click here



    Offline DataBase

    Per request from database admin Dmitry Arkhipkin <arkhipkin@bnl.gov>
    https://drupal.star.bnl.gov/STAR/comp/db/how-to-user/new-table/
    Here is the updated blog with the latest fields:
    https://drupal.star.bnl.gov/STAR/blog/sprastar/epd-offline-db-table-updated

    The following page has been created:
    https://drupal.star.bnl.gov/STAR/blog/sprastar/epd-offline-db-table

    Dmitry has created the database for us; the database explorer is here:
    https://online.star.bnl.gov/dbExplorer/

    Entry History in tables:
    The following entries were erased after modification of the tables:
     * Geometry/epd/epdQTMap     2016-12-10 00:00:00 => sim initialization for year 2017
     * Geometry/epd/epdQTMap     2016-12-20 00:00:00 => ofl initialization for year 2017
     * Geometry/epd/epdQTMap     2017-02-13 12:00:00 => 1st cable mapping completed
     * Geometry/epd/epdQTMap     2017-04-05 12:00:00 => QTB 0x18 and QTC 0x16 are swapped
     * Geometry/epd/epdQTMap     2017-05-03 12:00:00 => QTC 0x16 are swapped; PP5 Tile 10 and PP5 Tile 14 are swapped
     * Geometry/epd/epdQTMap     2017-05-17 12:00:00 => Many changes: drupal.star.bnl.gov/STAR/system/files/CableSwap_05172017.pdf
     * Calibrations/epd/status   2017-05-30 12:00:00 => Updated the mapping, which is used for the 54.4 GeV run (inserted on 08/23/17)
     * Calibrations/epd/status   2016-12-10 00:00:00 => sim initialization for year 2017 (inserted on 08/23/17)
     * Calibrations/epd/status   2016-12-20 00:00:00 => ofl initialization for year 2017 (inserted on 08/23/17)
     * Geometry/epd/epdFEEMap    2016-12-20 00:00:00 => sim initialization for year 2017 (inserted on 08/23/17), 1-wire ID has the 54.4 GeV values
     * Geometry/epd/epdFEEMap    2016-12-20 00:00:00 => ofl initialization for year 2017 (inserted on 08/23/17), 1-wire ID has the 54.4 GeV values
     * Calibrations/epd/epdGain  2016-12-20 00:00:00 => sim initialization for year 2017 (inserted on 08/24/17)
     * Calibrations/epd/epdGain  2016-12-20 00:00:00 => ofl initialization for year 2017 (inserted on 08/24/17)

    Current entries:
     * Geometry/epd/epdQTMap     2016-12-10 00:00:00 => sim initialization for year 2017
     * Geometry/epd/epdQTMap     2016-12-20 00:00:00 => ofl initialization for year 2017
     * Geometry/epd/epdFeeMap    2016-12-10 00:00:00 => ofl initialization for year 2017
     * Geometry/epd/epdFeeMap    2016-12-10 00:00:00 => sim initialization for year 2017
     * Geometry/epd/epdStatus    2016-12-10 00:00:00 => ofl initialization for year 2017
     * Geometry/epd/epdStatus    2016-12-10 00:00:00 => sim initialization for year 2017
     * Geometry/epd/epdGain      2016-12-10 00:00:00 => sim initialization for year 2017
     * Geometry/epd/epdGain      2016-12-10 00:00:00 => ofl initialization for year 2017
     * Geometry/epd/epdStatus    2017-02-08 00:00:00 => Completed 1st cabling
     * Geometry/epd/epdFeeMap    2017-02-08 00:00:00 => Completed 1st cabling

    Operations 2017

    53 GeV run

    Here is information related to the 53 GeV run in 2017, expected to begin 31 May:



    Some expectations of multi-hit:

    (This study has been corrected since its original posting.  Originally, I had assumed that all rings had 24 phi segments, but in reality, ring 1 (TT1) only has 12)

    Update 15 June 2017: A first analysis of the ADC spectra bears out the expectations of the analysis below quite well!  See drupal.star.bnl.gov/STAR/blog/lisa/multi-mip-events-2017-epd-auau-54-gev

    As seen on the attached spreadsheet at drupal.star.bnl.gov/STAR/system/files/53GeVexpectations_1.xlsx, based on PHOBOS measurements at 62.4 GeV (see inspirehep.net/record/876609), we can expect the EPD to light up!  This beam energy is higher than what the EPD was designed for, but we will be fine.

    (That spreadsheet has been updated to include calculations for 200 GeV and for 19.6 GeV.)

    The average number of hits expected in each tile is of order unity, and the multi-hit probabilities can be estimated using Poisson statistics.  The spreadsheet attached uses a scan of the PHOBOS data for 62.4 GeV collisions (figure 16) at about 6% centrality.  The average and multi-hit probabilities depend on the collision vertex position.  Some examples are shown in the screenshots below.
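    A minimal numerical sketch of that Poisson estimate follows (illustrative only; the assumed mean of 1 hit per tile is just an example, and the real numbers come from the spreadsheet):

    // Estimate multi-hit probabilities per tile from Poisson statistics.
    #include <cmath>
    #include <cstdio>

    double poissonP(int n, double mu) {                  // P(N = n) for mean mu
      return std::exp(-mu) * std::pow(mu, n) / std::tgamma(n + 1.0);
    }

    int main() {
      const double mu = 1.0;                             // assumed mean number of MIPs per tile
      double pAtLeast6 = 1.0;
      for (int n = 0; n < 6; ++n) pAtLeast6 -= poissonP(n, mu);
      std::printf("P(>=6 MIPs in a tile) = %.5f\n", pAtLeast6);   // ~6e-4 for mu = 1, i.e. sub-percent
      return 0;
    }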

    Here, we see that for collisions not far from the center of the TPC (|Vz|<75 cm), we will be just fine: 6-MIP hits are at the sub-percent level:
    Here is for collisions at the center of the TPC:


    Here is for collisions at Vz=-75 cm:


    But if we come a lot closer to the detector (say Vz = -2.75 m, which is outside the TPC!), then we get blasted:





    Mapping used in the Au+Au 53 GeV run
    (Right-click and do "view image" to see it blown up, if you need details on QT address and channel number)




    Bias Scan


    Bias Scan runs taken by Rosi.

    No  Run No     Voltages: Vset / Vbias (*)   Prashanth's initial results                                     Mike's results
    1   18089047   56.5 / 58.3                  Bias Scan: drupal.star.bnl.gov/STAR/system/files/BiasScan.pdf   See below for analysis
    2   18089055   55.5 / 57.3
    3   18089063   57.5 / 59.3
    4   18089065   54.5 / 56.3
    5   18090003   58.5 / 60.3

    Root files for above runs are here: rcasxx:/gpfs01/star/subsysg/EPD/sprastar/EPDHists

    (*) Bias voltage (Vbias) is 1.8 V above the Vset value one sets with the TUFF box.  The relevant clip of Gerard's mail:
    "The actual bias on the SiPM is higher than the setpoint by about 0.9 V for the
    VCOMP (temperature compensation) plus about 0.9 V for the preamp input voltage
    (the regulator regulates the anode voltage to -(VSET+VCOMP)*(1 +/-1% error) )"

    Note that Hamamatsu specs Vbias at about 57.7 V, for this initial batch of SiPMs:
    drupal.star.bnl.gov/STAR/system/files/S13360-1325PE%20specs%20initial%20150.pdf

    Mike's analysis of the bias scan - 9 April 2017
    • We don't want to wander too far away from Hamamatsu's recommendation of Vbias=57.7 V (Vset=55.9 V), since the signal increases linearly with bias voltage (see plots), and the dark current increases exponentially with bias voltage.
    • Looking carefully at the 6 FEE groupings (one FEE handles odd or even tiles from a supersector):
      • 4 groupings (PP4 even, PP5 odd, PP6 odd, PP6 even) give MPV ~ 45 ADC counts for Vset ~ 56.5 V
      • PP4 odd gives MPV ~ 60 ADC counts for Vset ~ 56.5 V: this is because it is read through a QT32c, which has higher gain
      • PP5 even gives MPV ~ 24 ADC counts for Vset ~ 56.5 V: this may be due to one of the following reasons:
        1. This FEE has a lower gain than the others.
        2. This QT board has lower gain than the others.
          • This may be checked by swapping inputs with another QT board
        3. The fiber-to-SiPM connection (FSC) is worse than the others.
          • In principle, this could lead to a higher WID/MPV ratio due to reduced photon number and higher Poisson fluctuations, which is not observed.
          • Nevertheless, I kind of suspect #3
    • The "gain" is rather linear with Vset, so we can "peak-match" the various channels by adjusting Vset such that the MIP peak is always at the same location.  However, this should only be done within limits.  It makes no sense to lower Vbias a lot, just to "compensate" for the high gain of the QT32c or to raise it a lot to "compensate" for the low gain of PP5 even.
    • Therefore, I suggest to proceed as follows:  Set Vset such that
      • MPV = 45 ADC counts for PP4 even, PP5 odd, PP6 odd, PP6 even
      • MPV = 60 ADC counts for PP4 odd
      • MPV = 25 ADC counts for PP5 even
    • The Vset values, using these criteria, may be found at drupal.star.bnl.gov/STAR/system/files/VsetValues.txt
      • I set Vset by hand to 56.5 for 3 of the 93 fits that failed.
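    Since the gain is roughly linear in Vset, the peak-matching above amounts to a simple linear solve. A minimal numerical sketch, with illustrative values only (this is not the actual fitting macro):

    // Solve for the Vset that puts the MIP MPV at a target value, given two bias-scan points.
    #include <cstdio>

    double vsetForTarget(double v1, double mpv1, double v2, double mpv2, double targetMPV) {
      const double slope = (mpv2 - mpv1) / (v2 - v1);   // ADC counts per volt (assumed linear)
      return v1 + (targetMPV - mpv1) / slope;
    }

    int main() {
      // Example numbers only: MPV = 40 ADC at Vset = 55.5 V, MPV = 50 ADC at Vset = 57.5 V,
      // and we want the MIP peak at 45 ADC counts (the target for most FEE groups).
      std::printf("Vset = %.2f V\n", vsetForTarget(55.5, 40.0, 57.5, 50.0, 45.0));   // 56.50 V
      return 0;
    }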

    BitMap Checking

    Eleanor has given the following information to start with:

    QT @ 0x10 is using algorithm v6.4 ( http://www.star.bnl.gov/public/trg/TSL/Software/qt_v6_4_doc.pdf ) and connects to ch 2 & 3 of BB102
    QT @ 0x12 is using algorithm v6.4 ( http://www.star.bnl.gov/public/trg/TSL/Software/qt_v6_4_doc.pdf ) and connects to ch 4 & 5
    QT @ 0x16 is using algorithm v6.d ( http://www.star.bnl.gov/public/trg/TSL/Software/qt_v6_d_doc.pdf ) and connects to ch 6
    QT @ 0x18 is using algorithm v5.2 ( http://www.star.bnl.gov/public/trg/TSL/Software/qt_v5_2_doc.pdf ) and connects to ch 7
    NOTE: the QT @ 0x18 may actually be using algorithm v5.a, which I believe is the same as v5.2 except that a 2nd copy of the output bits is driven on the previously unused output cable.

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    The pedasphys configuration is copied to epdbitcheck_pedasphys.
    The Tier1 file is changed to trg_170414_EPQ_AlgoX_OLX.bin.

    EPQ_Algorithm_Latch is changed from 1 to 7 in steps of 1.
    For each of these steps, the following output latch delays are changed:
    EPD_QT_Output_Latch_Delay from 0 to 120 in steps of 20
    EPD_QTc_Output_Latch_Delay from 0 to 120 in steps of 20

     

    The table columns, reading each data row left to right, are:

    No
    Run Number (TAC min = 50)
    Run Number (TAC min = 80)
    Run Number (QTB TAC min = 80, QTC TAC min = 40)
    Run Number (QTB TAC min = 80, QTC TAC min = 47)
    Run Number (QTB TAC min = 82, QTB ADC TH = 80, QTC TAC min = 40)
    EPQ_Algorithm_Latch (for QTB & QTC)
    EPD_QT_Output_Latch_Delay (for QTB & QTC)
    % Mismatch, 0x18 Algo v5.2
    % Mismatch, 0x16 Algo v6.d
    % Mismatch, 0x12 Algo v6.4
    % Mismatch, 0x10 Algo v6.4
    (the four mismatch columns are left blank in the rows below)
    1  18129066 18130041 18131026 18150036 18158037 1   0        
    2  18129067 18130042 18131027 18150037 18158038 1  20        
    3  18129068 18130043 18131028 18150038 18158039 1  40        
    4  18129069 18130044 18131029 18150039 18158040 1  60        
    5  18129070 18130045 18131030 18150040 18158041 1  80        
    6  18129071 18130046 18131031 18150041 18158042  1  100        
    7  18129072 18130047 18131032 18150042 18158043  1  120        
                           
     8  18129073 18130048 18131059 18150043 18158044  2  0        
     9  18129074 18130049 18131060 18150044 18158045  2  20        
     10  18129075 18130050 18131061 18150045 18158046  2  40        
     11  18129076 18130051 18131062 18150046 18158047  2  60        
     12  18129077 18130052 18131063 18150047 18158048  2  80        
     13  18129078 18130053 18131064 18150048 18158049  2  100        
     14  18129079 18130054 18131065 18150049 18158050  2  120        
                           
     15  18129080 18130055 18131066 18150050 18158051  3  0        
     16  18129081 18130056 18131067 18150051 18158052  3  20        
     17  18129082 18130057 18131068 18150052 18158053  3  40        
     18  18129083 18130058 18131069 18150053 18158054  3  60        
     19  18129084 18130059 18131070 18150054 18158055  3  80        
     20  18129085 18130060 18131071 18150055 18158056  3  100        
     21  18129086 18130061 18131072 18150056 18158057  3  120        
                           
     22  18129061 18130062 18131033 18150057 18158058  4  0        
     23  18129062 18130063 18131034 18150058 18158059  4  20        
     24  18129063 18130064 18131035 18150059 18158060  4  40        
     25  18129064 18130065 18131036 18150060 18158061  4  60        
     26  18129065 18130066 18131037 18150061 18158062  4  80        
     27  18129059 18130067 18131038 18150062 18158063  4  100        
     28  18129060 18130068 18131039 18150063 18158064  4  120        
                           
     29  18129087 18130069 18131073 18150064 18158072  5  0        
     30  18129088 18130070 18131074 18150065 18158073  5  20        
     31  18129089 18130071 18131075 18150066 18158074  5  40        
     32  18129090 18130072 18131076 18150067 18158075  5  60        
     33  18129091 18130073 18131077 18150068 18158076 5  80        
     34  18129092 18130074 18131078 18150069 18158077  5  100        
     35  18129093 18130075 18131079 18150070 18158078  5  120        
                           
     36  18129094 18130076 18131040 18150071 18158079  6  0        
     37  18129095 18130077 18131041 18150072 18158080  6  20        
     38  18129096 18130078 18131042 18150073 18158081  6  40        
    39 18129097 18130079 18131043 18150074 18158082  6  60        
    40  18129098 18130080 18131044 18150075 18158083  6  80        
     41  18129099 18130081 18131045 18150076 18158084  6  100        
     42  18129100 18130082 18131046 18150077 18158085  6  120        
                           
     43   18130083 18131080 18150078 18158086  7  0        
    44   18130084 18131081 18150079 18158087 7 20        
    45   18130085 18131082 18150080 18158088 7 40        
    46   18130086 18131083 18150081 18158089 7 60        
    47   18130087 18131084 18150082 18158090 7 80        
    48   18130088 18131085 18150083 18158091 7 100        
    49   18130089 18131086 18150084 18158092 7 120        
     

    Discriminator Threshold Scan

    The following was worked out by Stephen Valentino <svalentino@bnl.gov>.

    The following runs were taken.
    The gate values are set to their best values in these runs.

      No  Run Number   QTB Discriminator Threshold                  QTC Discriminator Threshold
      1   18116044     16 (0x00F) = 8 mV                            64 (0x03F)  = 11 mV
      2   18116047     32 (0x01F) = 14 mV                           128 (0x07F) = 28 mV
      3   18118021     24 (0x017) = 11 mV (linear interpolation)    96 (0x05F)  = 19.5 mV (linear interpolation)

      Phase II
      4


    Here is the first analysis:
    drupal.star.bnl.gov/STAR/system/files/EPD_ThresholdScan_Prashanth_05012017.pdf

     

    Timing In


    Timing scan after installing QT32c board on 03/17/2017

    The EPD trigger is present in all of the following runs (i.e. +BBC_TAC +BBC_E +BBC_W).
    Timing Scan
    -----------
    
    
    
    
    Pre-Post        Run#      QTb Start  QTb End  QTc Start    QTc End      TAC Stop
    Pre=2, Post=2   18066043  8          24       not present  not present  48
    Pre=2, Post=0   18076064  0          16       0            16           56
    Pre=0, Post=2   18076065  0          16       0            16           56
    Root files for the above runs are here: rcasxx:/gpfs01/star/subsysg/EPD/sprastar/EPDHists
    I have attached the results for the above runs, including run 18066043.
    Run 18066043 was taken earlier, with QTb Start Delay = 8 and QTb End Delay = 24 ns.

    Then we did the gate scan in 2 ns steps. During this step we recorded only the pre, post = 0,0 crossing.
    From this gate scan we found that QTb Start Delay = 0 and QTb End Delay = 16 ns is best.
    From runs 18076064 and 18076065 it seems we are in the pre = 1 crossing for the QT32 'b' boards (i.e. PP4 even, PP5 odd, PP6 even & odd),
    while the QT32 'c' board is in the trigger crossing.
    The timing seems completely messed up.
    Let me think about it more in the morning.






    Gate Scan
    --------
    The following runs were taken on March 19th.
    In these runs the EPD trigger has been updated, so we now also have JP2 (i.e. +BBC_TAC +BBC_E +BBC_W +JP2).

    No  Run Number  Gate Start QT32b  Gate End QT32b  Gate Start QT32c  Gate End QT32c  TAC Stop  Data Start Address  Initial Study by Prashanth  Mike's PrePost study and comments
    1   18078043    0        16        0        16        56  c=8, b=8  EPD_plots_18078043.pdf  PrePost=-1; 1st run of the fill
    2   18078044    4        20        4        20        56  c=8, b=8  EPD_plots_18078044.pdf  PrePost=-1
    3   18078045    8        24        8        24        56  c=8, b=8  EPD_plots_18078045.pdf  PrePost=0
    4   18078046    12       28        12       28        56  c=8, b=8  EPD_plots_18078046.pdf  PrePost=0
    5   18078047    16       32        16       32        56  c=8, b=8  EPD_plots_18078047.pdf  This run was stopped for beam squeeze
    6   18078048    16       32        16       32        56  c=8, b=8  EPD_plots_18078048.pdf  PrePost=0
    7   18078049    20       36        20       36        56  c=8, b=8  EPD_plots_18078049.pdf  stopped for polarization measurements
    8   18078050    20       36        20       36        56  c=8, b=8  EPD_plots_18078050.pdf  PrePost=0
    9   18078051    24       40        24       40        56  c=8, b=8  EPD_plots_18078051.pdf  PrePost=0
    10  18078052    28       44        28       44        56  c=8, b=8  EPD_plots_18078052.pdf  PrePost=0; taken by Yang Wu
    11  18078053    32       48        32       48        56  c=8, b=8  EPD_plots_18078053.pdf  PrePost=0
    12  18078054    36       52        36       52        56  c=8, b=8  EPD_plots_18078054.pdf  PrePost=0
    13  18078056    40       56        40       56        56  c=8, b=8  EPD_plots_18078056.pdf  PrePost probably 0 (bad stats)
    14  18078057    44       60        44       60        56  c=8, b=8  EPD_plots_18078057.pdf  PrePost probably 0 (out of gate)
    15  18078058    48       64        48       64        56  c=8, b=8  EPD_plots_18078058.pdf  PrePost=0? or 1?

    Negative Scan
    16  18082042    0        16        0        16        56  c=8, b=8  EPD_plots_18082042.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    17  18082043    107(0)   123(16)   107(0)   123(16)   56  c=8, b=8  EPD_plots_18082043.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    18  18082044    103(-4)  119(12)   103(-4)  119(12)   56  c=8, b=8  EPD_plots_18082044.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    19  18082045    99(-8)   115(8)    99(-8)   115(8)    56  c=8, b=8  EPD_plots_18082045.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    20  18082046    95(-12)  111(4)    95(-12)  111(4)    56  c=8, b=8  EPD_plots_18082046.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    21  18082047    91(-16)  107(0)    91(-16)  107(0)    56  c=8, b=8  EPD_plots_18082047.pdf  QT32b in PrePost=-1, QT32c in PrePost=0
    22  18089004    87(-20)  103(-4)   87(-20)  103(-4)   56  c=8, b=8  EPD_plots_18089004.pdf
    23  18089007    83(-24)  99(-8)    83(-24)  99(-8)    56  c=8, b=8  EPD_plots_18089007.pdf
    24  18089008    79(-28)  95(-12)   79(-28)  95(-12)   56  c=8, b=8  EPD_plots_18089008.pdf
    25  18089009    75(-32)  91(-16)   75(-32)  91(-16)   56  c=8, b=8  EPD_plots_18089009.pdf
    26  18089010    99(-8)   115(8)    99(-8)   115(8)    56  c=8, b=7  EPD_plots_18089010.pdf
    27  18089013    99(-8)   115(8)    99(-8)   115(8)    56  c=8, b=9  EPD_plots_18089013.pdf
    Root files for above runs are here: rcasxx:/gpfs01/star/subsysg/EPD/sprastar/EPDHists

    An overlay plot of runs 18078043, 18078044, 18078045 and 18078046 is available here: https://online.star.bnl.gov/epd/TimingScan/OverlayPlot.pdf
    In this plot I compare

    QT32c, prepost = 0:
    Gate (Start, Stop) = (0, 16)
    Gate (Start, Stop) = (4, 20)

    vs.

    QT32b, prepost = -1:
    Gate (Start, Stop) = (0, 16)
    Gate (Start, Stop) = (4, 20)
    Gate (Start, Stop) = (8, 24)
    Gate (Start, Stop) = (12, 28)


    Run 18090046 has new Tier1 file = trg_170329
    Here is the analysis: https://online.star.bnl.gov/epd/TimingScan/EPD_plots_18090046.pdf


    e-scope traces

    E-Scope connections

    03/24/2017
    CH#1 => PP4_tile_1 (no-split, 50 ohm)
    CH#3 => QT32b board 0x18 gate
    Please see the attached image PP_6_tile_1_nosplit.pdf, which shows a cosmic ray pulse and the QT gate.

    04/03/2017
    The following scope traces were acquired during beam in RHIC.
    Threshold = 8 mV
    CH#1 of scope => PP6 Tile 1 (no split, 50 ohm termination)
    CH#3 => QT32b board 0x10 gate
    Please see

    https://drupal.star.bnl.gov/STAR/system/files/scopetracesPP6T1_nosplit.pdf

    04/04/2017
    The following scope traces were acquired during beam in RHIC.
    Threshold = 4 mV
    CH#1 of scope => PP6 Tile 1 (no split, 50 ohm termination)
    CH#3 => QT32b board 0x10 gate
    Please see https://drupal.star.bnl.gov/STAR/system/files/scopetracesPP6T1_nosplit_4mV.pdf

    04/04/2017
    The following scope traces were acquired during beam in RHIC.
    Threshold = 4 mV, with persistent display

    CH#1 of scope => PP6 Tile 1 (no split, 50 ohm termination)
    CH#3 => QT32b board 0x10 gate
    Please see https://drupal.star.bnl.gov/STAR/system/files/scopetracesPP6T1_nosplit_4mV_persistent.pdf

    Operations 2018

    EPD Run by Run QA 2018

    Isobar run by run QA: drupal.star.bnl.gov/STAR/blog/jewigleb/epd-isobar-2018-run-list

    EPD racks on the platform

    The EPD uses the 1C2 and 1C3 racks on the south platform.
    Here is the drawing of the racks:

    https://drupal.star.bnl.gov/STAR/system/files/EPD%20rack%20dwg%20rev6.pdf

    Mapping


    For the current mapping file, look for "Current" in the Attached Files at the bottom of this page.

    I removed some of the older versions of excel sheets, which are irrelevant for now.
     
    Mapping mistakes, errors and fixes:
     
    02/21/2018: East PP8 TT27: no data from the QTC board. Found a bent pin in the positronix cable.
     
    02/21/2018: East PP9 TT1 output is missing. My mistake.
                         This SS is an odd one: tile 1 in this SS is on the even side,
                         so we have to swap the cable at the RxB. I mistakenly swapped it in the database too.
                         I have updated the database.
     
    02/21/2018: East PP10 TT22, high current.
                          The odd and even sides of the fiber bundle were swapped, and the bad channel swapped along with them, so SiPM board 28 was replaced with board 32.
     
    02/22/2018: East PP08 TT08 and PP08 TT09 were swapped in the RxB-to-QT cabling. This has been fixed.

    02/23/2018: East vs. West PP9 TT01 issues.
                         The West TUFF map is wrong.
                         The correct TUFF mapping for east and west should be:
                         ew    PP    TT    TUFF    TUFF Branch    TUFF Channel
                         0     9     1     2       0              5
                         1     9     1     1       1              4

                         The East side still has issues, unsolved.
                         I have checked the east tile with a scope and saw cosmic pulses.

    Pre- and post-beam commissioning

    Post-installation / pre-beam commissioning
    1. Pre-post checking, to make sure we are in the same bin as the BBC.  DONE
    2. Verify proper mapping from TUFF to StEvent (tests databases, cabling...)  ("Seven Patterns" test) DONE
    3. Vped adjustment   DONE
    4.  I-V scan has been done about once/day   DONE
    5. Confirm data goes from QT32C to TCU 
      • this is important for TRIGGER readiness, but not for PHYSICS readiness
      • Prashanth or Rosi need to write up the purpose and instructions here.


    Post-beam commissioning to be ready for PHYSICS
    1. (Re-)confirm pre-post timing  (it's easy and it's important)
      • Same instructions as item #1 above.
    2. Timing scan    DONE
    3. Bias scan   DONE


    SW FEE box Removal

    Trigger Commissioning

    The EPD trigger is very similar to the BBC trigger, except that the EPD uses hit count instead of ADC sum for triggering.
    Trigger layers:

    1. 14 QTC boards with TAC; 108 TAC tiles each for east and west.
    2. Layer zero: EP001(east) & EP002(west) DSMs.
    3. Layer one: EP101 DSM.
    4. Layer 2: Vertex DSM
    5. TCU bits EPD-E, EPD-W, EPD-TAC, corresponding to the number of good hits above the hit-count threshold and within the TAC window.

    Useful links:

    1. Trigger Algorithm document (Explains the algorithms in QTs and DSMs for triggering)
    2. Cheat sheet of QTs and DSMs
    3. EPD QT mapping (please see the excel sheet at the end)
    4. EPD online monitoring (Has summary plots, ADC & TAC distributions, bit map checking)

    The EPD trigger shows some mismatch (a few percent up to 20%) from the QTC to the L0 DSM.
    The issue is most likely in the algorithm and/or output latches and cannot be a hardware issue (many thanks to Steve and Jack for fixing the hardware issues).

    The current Tier1 file is set to provide EPD-E, EPD-W and EPD-TAC.
    To test the trigger capabilities of the EPD, the following runs were taken with only these triggers: EPD-E + EPD-W + EPD-TAC
    19075034 (short run)
    19075035 (long run)
    19075053 (during the vernier scan, very long run)

    The following thresholds were used in the above runs:

    1. 300 < TAC < 4000
    2. 16 < ADC
    3. All the QTC boards are enabled to pass.
    4. A TAC difference cut between 3895 and 4295 is applied (corresponding to ~30 cm).
    5. The hit count is required to be greater than 0 for both east and west.

    Following are some QA plots from run 19075035:

    EPD TAC difference, East vs. West: (a) earliest TAC East, (b) earliest TAC West, (c) TAC difference. The blue histogram is calculated from the QTs and the red one is from EP101,
    so the difference between red and blue tells us the bit mismatch!
     
    https://online.star.bnl.gov/epd/Signal/19075/19075035.tac_evsw.png https://online.star.bnl.gov/epd/Signal/19075/19075035.tac_diff.png
     Hit count from EP101. Note that we have 108 TAC channels. The top-left panel is for BBC only; the rest compare BBC with EPD.
    https://online.star.bnl.gov/epd/Signal/19075/19075035.hit_count.png https://online.star.bnl.gov/bbc/19075/19075035.tacew.png

    The applied cut is doing its job of selecting events within the ~30 cm window.

    Now the issues:
    EP001 and EP002 show an earliest TAC of 0x20, 0x800 or 0x820 for east and west about 5-20% of the time, for all the QTC boards.
    Please see the following bit-map check plot, which compares the mismatch between the QTC algorithm output and the bits received by EP001/2. Bit mismatches at 0x20, 0x800 and 0x820 are clearly visible.
    For 19075034
    https://online.star.bnl.gov/epd/BitMapCheck/19075/19075034.QTC.png

    This gives a peak in the TAC difference centered around 4096 (0x800-0x800, 0x820-0x820 or 0x20-0x20, plus 4096),
    and peaks 0x20 above and below 4096 for the 0x800/0x820 combinations.
    The following plot is from the low-statistics run 19075034:
    https://online.star.bnl.gov/epd/Signal/19075/19075034.tac_diff.png

    To test for stuck bits, we took a run with the same configuration but without beam (run number 19075042).
    This run didn't record any events, which tells us there are no stuck-high bits.

    We confirmed that when there is no good hit in QT, no output is generated from QT algorithm or in EP001/2.

    The current proposal is to include the EPD trigger in production running with limited bandwidth, and to play with the QT algorithm latches and output latches.
    Any thoughts, comments or suggestions?

    What supersector, fiberbundle, FEE, and SiPM is where?

    Prashanth maintains a page of the complete EPD mapping.  Here, the focus is to record which supersector and fiber bundle is at which position in the 2018 run.  These are unlikely to change during the run!
    • You can click on the Supersector number to see ADC spectra from test stand cosmic runs.  More complete information on SS health can be found on this page.
    • Click on the Fiberbundle number to see results of transmission tests.  More complete information on FB health can be found on this page.
    • On 22 June 2018, Mike has filled in the SiPM# information based on BNL logbook page 102, which was filled between 20 Jan and 4 Feb 2018.

    Position OSU SS # Lehigh FB# FEE# (odd) FEE# (even) SiPM# (odd) SiPM# (even) Comments/issues
    East PP01  21  13  04  47  13  59    
    East PP02  23  23  05  52  43  53  
    East PP03  20  20  11  46  47  57  
    East PP04  22  15  10  41  51  52  
    East PP05  14  05  14  21  19  06  
    East PP06  29  18  12  20  50  49  
    East PP07  25  08  44  42  02  11  TT25 dead.  all good
    East PP08  24  24  49  60  56  12  
    East PP09  01  27  58  48  46  55  
    East PP10  05  14  51  55  26  28 32 SiPM#28 was bad; replaced
    East PP11  06  21  01  53  ???????  58  odd SiPM# not recorded
    East PP12  31  01  50  59  27  29  
    West PP01  15  04  22  15  15  14  
    West PP02  19  03  17  19  35  25  
    West PP03  12  11  28  24  24  34  
    West PP04  11  02  07  29  37  36  
    West PP05  30  16  25  16  18  21  
    West PP06  27  26  38  02  10  60  
    West PP07  07  10  23  57  22  08  
    West PP08  09  07  26  43  16  09  
    West PP09  02  17  32  56  20  38  TT25 dead all good
    West PP10  04  09  27  30  33  05  
    West PP11  10  06  31  40  04  07  
    West PP12  03  19  45  37  23  03  


    Not listed in the table above (i.e. not used in run 2018):

    • 12 of the 60 FEE cards produced: 03, 06, 08, 09, 13, 18, 33, 34, 35, 36, 39, 54
      • #18 had been installed originally but didn't communicate.  Tim has since repaired it and returned it to the EPD group.
      • End-of-run inventory June 2018:  12 "spare" FEEs are in cabinet, but one has no label on it.  All numbers except for #35 were found on the spare labels, so presumably the un-labeled spare is #35.
      • So, all 60 FEEs are accounted for, at end of run 2018.
      • Mike is taking four FEEs (#33,34,36,39) to Ohio for summer 2018
    • 13 SiPM cards of the 60 SiPM cards produced:  01, 17, 28(bad), 30, 31, 39, 40, 41, 42, 44, 45, 48, 54
      • note that one card listed in the table had no number, so presumably one of the 13 listed above is actually in the experiment.
      • End-of-run inventory June 2018: 12 "spare" SiPMs accounted for, in total
        •  all numbers on the list above are identified, except 01, 17, 44.  However, we have one in the experiment un-numbered, and two from Tim simply labeled "1" and "2" on the bag.
      • Bottom line: all 60 SiPM cards are accounted for at the end of run 2018, except that we have three un-numbered cards which are certainly 01, 17, 44, but we don't know exactly which is which.  No big deal.
      • Mike is taking four SiPM cards (#40, 41, 45, 48) to Ohio for summer 2018
    • 7 of the 31 SS produced: 08, 13, 16, 17, 18, 26, 28

     



    Here is a handy map to help reproduce the fiber placement in run 2019.   A high-resolution PDF file is here: drupal.star.bnl.gov/STAR/system/files/Setup2018.pdf

    Operations 2019

    Escope traces

    Here are some scope traces taken on Feb 26, 2019, with beam:

    Ch1: EPD East Position 1, Tile 31 (last/outer tile)
    Ch2: QTB clock
    Ch3: QTC clock




     

    Post-installation / pre-beam commissioning 2019

    1. Verify proper mapping from TUFF to StEvent (tests databases, cabling...)  ("Seven Patterns" test) - DONE
    We followed the instructions listed at: drupal.star.bnl.gov/STAR/subsys/epd/operations-2018/pre-and-post-beam-commissioning

    2. Vped adjustment - DONE
    The instructions written for this year can be found at: drupal.star.bnl.gov/STAR/blog/rjreed/epd-vped-scans-2019

    3. Verifying fiber mapping/pre-post
    In 2018 we verified the pre-post timing by taking special runs.  It was noted this year that if we are in the right bunch crossing, we should see the EPD-BBC correlation; if not, we will not.

    In progress:  *.daq files have been requested from HPSS and restored to NFS.  Then the bfc.C has been run, creating pico/MuDsts.  These need to be analyzed.

    EPD Timing Scan 2019

    Nominal Values for 2018 were
    B Gate End = 48
    B Gate Start = 32
    C Gate End = 39
    C Gate Start = 23
    TAC Stop = 64

    Operations 2024


    Pre-beam commissioning

    With-beam commissioning
    • Timing scan- done by Maria and Mike 27 April 2024 - blog by Mike
    • Bias scan- done by Maria and Mike 27-28 April 2024 - blog by Mike
    • TAC offset adjustment - done by Mike 28-29 April 2024 - blog by Mike

    Transition to Au+Au running - Oct 2024:
    • Quick-and-dirty timing scan - done by Maria and Mike 6 Oct 2024 - blog by Mike
    • TAC offset adjustment - done by Mike 7 Oct 2024 - blog by Mike


    Issues for this run

    Software


    EPD software
    • Analysis software (StEpdMaker): See the "child page" (listed at the bottom of this page) or click here
    • For database documentation, go to this page.
    • For simulations software (EPD in GSTAR) go to this page

    Offline analysis package - StEpd and StEpdMaker

    StEpd package

    A compact software framework has been written to support offline analysis of EPD data.  It includes an interface to online raw data on an online node (mostly useful for experts during the run), MuDst-based analysis at RCF/PDSF, and picoDst-based analysis on your own laptop.

    The code is CVS-managed in STAR.  At the moment, you may obtain it via
    cvs co offline/users/lisa/StRoot/StEpdMaker

    A complete Users Guide and Reference Manual (Rev 1.2) has been written - get it here.  It describes the detector and data; tells you how to obtain, build and use the library; and provides examples.



    Start-of-run EPD tasks and instructions

     At the beginning of every run, there are a number of things that need to be done for the EPD.

    After the EPD is installed, and before beam, you need to:
    After beam comes, you need to:
    • Re-confirm pre-post (same as above)
    • Do timing scan (this needs to be done for physics to be declared!)
    • Do bias voltage scan (this can be done after physics is declared)

    Useful EPD documents

    This page collects the most important "free-standing" documents for the Event Plane Detector.

    Hardware documents

    This page is to organize hardware-related documents and pages.



    Key logbooks
    Logbook links
    OSU Supersector Construction Logbook (2017) online google doc (do not edit!) / pdf file (printout of google doc) / .docx file
    EPD BNL Logbook (2016-2018)  01-40, 42-49, 51-96, 97-123
    OSU Supersector Testing Logbook (2017)  01-49, 50-99, 100-149, 150-159, 160, 161-174




    At the bottom of this page are "child pages" related to

    Always useful: a very explicit set of pictures showing the mapping between WLS fiber ends and the tile number is at drupal.star.bnl.gov/STAR/system/files/TileFiberMapping_0.pdf


    Here is the logbook used for Supersector testing at Ohio State 30 June - 16 Oct 2017
    Here is the supersector construction logbook from OSU in summer 2017 - PLEASE BE CAREFUL!  I don't seem to be able to make this read-only.  You should put the setting (upper right) to "viewing" to protect against unintended changes.

    Clear Fiber Bundle Health/Status

    A summary of all of the testing results of the Clear Fiber bundles since the start.

    Criteria for passing (somewhat arbitrary, but based on results from the 3 bundles used in Run 17 that were deemed "good"; a short sketch of applying these criteria follows the list):
    • Average transmission of above -65%
    • No single fiber in a bundle has transmission below -75%
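    A minimal sketch of applying these criteria to a bundle's per-fiber transmission values (illustrative only, not the actual testing spreadsheet; transmissions are in percent and quoted as negative numbers, as elsewhere on this page):

    #include <cstdio>
    #include <numeric>
    #include <vector>

    bool bundlePasses(const std::vector<double>& transmission) {
      const double avg = std::accumulate(transmission.begin(), transmission.end(), 0.0)
                         / transmission.size();
      bool anyBelowLimit = false;
      for (double t : transmission)
        if (t < -75.0) anyBelowLimit = true;      // no single fiber may be below -75%
      return avg > -65.0 && !anyBelowLimit;       // and the average must be above -65%
    }

    int main() {
      const std::vector<double> fb = { -62.0, -64.0, -61.0, -66.0 };   // made-up example values
      std::printf("bundle passes: %s\n", bundlePasses(fb) ? "yes" : "no");
      return 0;
    }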


    Bundles to use in Run 18:

    FB# Status Comments Average Max Full Sheet
    1 Healthy  First bundle; repolished twice  -63% -68%   FB01
    2 Healthy  Results from 2nd test; avg unchanged from first test  -55%  -62%  FB02
    3 Healthy    -54% -67%   FB03
    4 Healthy    -63% -73%   FB04
    5 Healthy    -62% -71%   FB05
    6 Healthy  Repolished once  -62% -68%   FB06
    7 Healthy    -61% -73%   FB07
    8 Healthy  Repolished 3 times  -63% -72%   FB08
    9 Healthy    -64% -71%   FB09
    10 Healthy    -62% -68%   FB10
    11 Healthy    -62% -73%   FB11
    13 Healthy Repolished 3 times  -64.8%  -73%  FB13
    14 Healthy    -62% -73%   FB14
    15 Healthy    -62% -73%   FB15
    16 Healthy    -62% -73%   FB16
    17 Healthy    -63% -70%   FB17
    18 Healthy    -61% -69%   FB18
    19 Healthy    -63% -68%   FB19
    20 Healthy  Repolished twice  -64%  -69%  FB20
    21 Healthy    -64% -72%   FB21
    23 Healthy  Repolished once  -63% -72%   FB23
    24 Healthy    -63% -74.6%  FB24
    26 Healthy    -62% -66%   FB26
    27 Healthy    -57% -68%  FB27



    Failing Bundles that can still be used in Run 18 in an emergency:

    FB# Status Comments Average Max Full Sheet
    22 Injured Failing upon retest. Not sure what's going on  -61% -77%   FB22
    12 Injured Single fiber glows on both ends, possibly cracked N/A N/A N/A
    28 Injured Failing final test. Also even/odd difference of 6% -61% -78%  FB28


    Bundle Under Construction: FB25 (Broken splitter; construction not completed, but can be fixed and polished for the future if needed)




    In summary, at the moment we have:
    • 24 Passing Bundles (FB1-FB11, FB13-FB21, FB23-FB24, FB26-FB27)
    • 3 Failing Bundles (FB28 has a fiber at -78%; FB12 has a fiber glowing on both ends, so it was set aside; it has never been fully tested, but the glowing fiber was actually passing; FB22 failed upon retest, with 2 fibers below -75%)
    • 1 Incomplete bundle (FB25 had a tube splitter that broke. It can be fixed, polished and used, but of course should only be used as a last resort)




    Google Spreadsheets for posterity:

    Fiber by Fiber testing of every bundle: https://docs.google.com/spreadsheets/d/1rs-XOkZjJQz_nzMzTgH1vVVI4Ck12OCSJ_Dc_WHTaK8/edit#gid=0

    Fiber Bundle Status Summary: https://docs.google.com/spreadsheets/d/1u5dWDUeguvjiPTBiH32b--NCzr-gS7l-ocCTeo5hBF0/edit#gid=0

    FEE, SiPM, RX board Health Status

    USTC has done a fantastic job of collecting and systematically posting their QA and characterization tests here.  They cover

    • FEE boards
    • SiPM cards
    • Receiver cards

    Supersector and Tile Health/Status

    This page records the health status of the 31 Supersectors.  (Well, SS28 was dropped, so there are only 30, but we keep the numbering.)

    Results of cosmic tests at OSU in Aug-Sept 2017 (4 SS in a stack), and BNL in Nov 2017 - Jan 2018 (3 SS in a stack).

    The health sheets may be found here.  (Also see "how to read a health sheet".)


    Health status before run 2018 based on cosmic tests and cross-talk scan
    SS status comments ADC spectra
    1  healthy  hey, our much-maligned "first pancake" actually looks great!  160131
    2  healthy    020304
    3  healthy    020304
    4  healthy  a bit less light than some others, but it worked well in run 2017  020304
    5  healthy  (taking another look at TT26 jan2018, to be sure, but it's fine)  08050607
    6  healthy  (taking another look at TT26 jan2018, to be sure, but it's fine)  08050607
    7  healthy    08181007
    8  injured  TT05 very low gain, probably unusable  14081516
    9  healthy  TT03 has low gain  06091417
    10  healthy    08181007
    11  healthy    09111213
    12  healthy  ADC spectra in test looked a little strange  09111213
    13  injured  MAY be usable.  X-talk b/t TT01,03; glued WLS fibers  09111213
    14  healthy    06091417
    15  healthy  ADC spectrum for TT20 a little strange; see notes  14081516
    16  injured TT02 has very low gain.  Had suspected paint on fiber end, but problem persists after cleaning  161731
    17  prob. healthy  Joey should comment on X-talk issue; see health sheet  161731
    18  injured  TT03 broke during construction.  low gain.  PERHAPS viable  08181007
    19  healthy    17181920
    20  healthy    17181920
    21  healthy  TT21 ADC distribution a bit abnormal  212930
    22  healthy    21222426
    23  healthy    252723
    24  healthy    21222426
    25  healthy    252723
    26  healthy enough  TT03 has rather low gain, but probably usable. I'm comfortable with this in the experiment.  21222426
    27  healthy  see notes for comments on TT24,25,27-31  252723
    28  dead  dropped and broken in pieces :-(  
    29  healthy  TT27 has low gain  212930
    30  healthy    212930
    31  healthy    161731

    According to this table we have
    • 24 healthy (1-7, 9-12, 14, 15, 19-25, 27, 29-31)  We'll use these in Run 2018.
    • 2 healthy "enough" to use in experiment (17, 26)
    • 4 injured; if we need to swap one in, take them in the following order:
      • SS16 has low-gain TT02, but maybe usable.  Take this first if needed.
      • SS08 and SS18 each have one tile with low gain that are likely unusable.
      • SS13 has cross-talk in small tiles due to glued WLS fibers.  May be usable
    • 1 dead (SS28).  Don't use this one.
    For the experiment, we need 24 SS, so we are in good shape: 100% of our 744 tiles are good!

    Useful Links


    Semi-Expert Page:
    https://dashboard1.star.bnl.gov/daq/DCS_UI/

    Wanbing's New Control/Monitoring page:
    dashboard1.star.bnl.gov/daq/EPD_UI/

    EPD Online Monitor: (ADC histograms, etc):
    https://online.star.bnl.gov/epd/

    Very useful visual map of the tile-to-electronics mapping:
    https://drupal.star.bnl.gov/STAR/blog/lisa/How-check-and-visualize-EPD-mapping-tile-QT-channel

    Rosi's blog on "Running the EPD remotely" (how to bring up VME crate interface etc):
    https://drupal.star.bnl.gov/STAR/blog/rjreed/Running-EPD-Remotely

    HyperNews Forum:
    http://www.star.bnl.gov/HyperNews-star//protected/get/epd.html

    Weekly meeting agenda and documents:
    drupal.star.bnl.gov/STAR/subsys/epd/epd-meetings-0

    STAR Operations homepage (including meeting agendas and bluejeans link)
    drupal.star.bnl.gov/STAR/public/operations



    Not-anymore-useful links (but kept for historical reasons):

    Rahul's official shut-down schedule 2017/18
    drupal.star.bnl.gov/STAR/blog/rsharma/star-shutdown-schedule

    OLD Shift Crew Page: (old GUI)
    https://dashboard1.star.bnl.gov/daq/dcs_control/next.html?EPD

    Materials list for EPD construction (not updated)
    drupal.star.bnl.gov/STAR/blog/lisa/epd-materials

    EPD Time In

    Attached (ScopeTraces_beforeTimeIn.pdf) are scope traces from before timing in, taken during beam in RHIC.

    ETOF

    Endcap Time Of Flight Detector


    • Mailing list: star-etof-l@lists.bnl.gov
    • Weekly STAR/CBM eTOF meeting: Thursdays 10.00 AM (EDT) via bluejeans


    Calibration Status

     Status March 25th, 2024

    HV scan

    Attached are some plots from the HV scan on day 090.
    The first set of plots shows the distribution of the number of digis per module per event recorded in the 10% most central events (selected by a cut on the TOF L0 multiplicity). Printed in red is the mean of the distribution in each module.

    The second set of plots shows the distribution of the number of reconstructed hits per module in the 10% most central events (selected by a cut on the number of reconstructed bTOF hits). Printed in red is the mean of the distribution in each module.

    I also looked at the fraction of events that have at least 1 eTOF hit reconstructed. This increases from 88.7% for the lowest HV setting (run 20090030) to 95.4% for the highest HV setting (run 20090034).

    The average number of matched primary tracks per event that enter the PID plots is

    2.12  in run 027
    1.71  in run 028
    1.31  in run 029
    0.92  in run 030

    2.90  in run 031

    3.36  in run 032
    3.57  in run 033
    4.25  in run 034

    Open Tasks

    Open EToF-Tasks:

    Directory with Makers and example-scripts:

    /star/data06/ETOF/ScriptsAndMakers/

    Directory with Task related plots, *.root-files etc. :

    /star/data06/ETOF/Tasks/*

    Background investigations

    The eTOF PID plots show an unexpectedly high background:

    which is not sufficiently described by a high-order polynomial:

    Reflections

    Signal reflections, likely from the edge of the read-out strips, can be "absorbed" by the original signal, resulting in an artificially prolonged time-over-threshold value for the corresponding digi. This might have an impact on the ToT and walk calibration and on the position and timing of reconstructed hits.


    Sketch of the Signal/Reflection pathways on a single strip

    ToT - Correlation: left vs. right Side of Strip (USTC Counter)

    ToT vs. Y-Position  (USTC Counter)

      

    delta Y (Intersections - Hits): Data vs. Simulation

    Data (Red) : System Resolution < 80 ps


    Simulation (Blue) : Counter Resolution 50ps, Electronics Resolution 25ps, no Noise


    Simulation (Blue) : Counter Resolution 60ps, Electronics Resolution 60ps, no Noise


    Simulation (Blue) : Counter Resolution 60ps, Electronics Resolution 60ps, 1 to 5 Noise Hits per Counter & Event

    local Y-Position shift

    The local Y position of hits reconstructed from data is shifted towards the center of the counter (Y=0).

    No shift is observed in simulation:

     
     -> This might be related to the calibration?

    Pre-Amp damage 2020

    Information slides regarding the pre-amp damage on eTOF during the 2020 run

    ProtonAnalysis

     

    Talk Analysis Meeting 06/2022: eTOF data in Analysis

    eTOF Data Format

    Data Format Used in the 2020 run (commissioning & production)

    1) CbmTofStarSubevent2019

    One subevent sent by the full eTOF wheel for each STAR trigger.
    The packed version sent to the STAR DAQ systems is made of a 256b header (4 long unsigned integers) followed by a buffer of gdpb::FullMessage (128b each).
    The data from each sector are inserted in a continuous block, starting from sector 13.
    The maximal size of the eTOF subevent is 131072 bytes, corresponding to 8190 FullMessages. Subevents with a bigger size will be truncated and the corresponding flag set in the subevent header.
    If the insertion of status messages was enabled in the Event Builder, the first 8 FullMessages of each sector block will be the latest update of the status mask before the trigger time.
    In the status mask, a 1 indicates an ASIC currently having problems (disabled, off-sync in time, or currently undergoing recovery).
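    As a rough sketch of that packed layout, and of where the 8190-message limit comes from (an illustration under the stated assumptions, not the actual CbmTofStarSubevent2019 class):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    struct PackedEtofSubeventHeader {
      uint64_t word[4];     // 256-bit subevent header (trigger information, flags, ...)
      // followed in the byte stream by N gdpbv100::FullMessage entries, 128 bits each
    };

    int main() {
      const std::size_t maxSubeventBytes = 131072;                               // hard limit quoted above
      const std::size_t headerBytes      = sizeof(PackedEtofSubeventHeader);     // 32 bytes
      const std::size_t fullMessageBytes = 16;                                   // 128 bits
      std::printf("max FullMessages per subevent = %zu\n",
                  (maxSubeventBytes - headerBytes) / fullMessageBytes);          // 8190
      return 0;
    }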

    The class definition with accessor and setter methods can be found at:
    redmine.cbm.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2019/eventbuilder/CbmTofStarData2019.h
    redmine.cbm.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2019/eventbuilder/CbmTofStarData2019.cxx

    WARNING: the 'n' FullMessages in the graphical representation include, in the first case, the status messages of sectors 14 to 24!

    2) gdpbv100::FullMessage and gdpbv100::Message data format

    gdpbv100::Message = 64 bit message as received from the eTOF gDPB boards
    gdpbv100::FullMessage = 128 bit message, obtained by combining a 64b extended epoch (bits 127-64) with a gdpbv100::Message (bits 63-0)

    Compared to the format used during the 2019 run, the only change is the addition of a 4th type of pattern messages, which is only generated by the eTOF event builder (not present in raw data on the "CBM" side).
    This new pattern is generated by a bitwise OR of the 3 original patterns (sync mismatch, disable and reconfig).

    The class definition with accessor and setter methods can be found at:
    https://lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/mcbm2018/dataformat/gDpbMessv100.h
    https://lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/mcbm2018/dataformat/gDpbMessv100.cxx

    Data Format Used in the 2019 run (commissioning & production)

    1) CbmTofStarSubevent2018

    One subevent sent by the full eTOF wheel for each STAR trigger.
    The packed version sent to the STAR DAQ systems is made of a 256b header (4 long unsigned integers) followed by a buffer of gdpb::FullMessage (128b each).
    The data from each sector are inserted in a continuous block, starting from sector 13.
    The maximal size of the eTOF subevent is 131072 bytes, corresponding to 8190 FullMessages. Subevents with bigger size will be truncated and the corresponding flag set in the Subevent header.

    The class definition with accessor and setter methods can be found at:
    redmine.cbm.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/CbmTofStarData2018.h
    redmine.cbm.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/CbmTofStarData2018.cxx

    2) gdpbv100::FullMessage and gdpbv100::Message data format

    gdpbv100::Message = 64 bit message as received from the eTOF gDPB boards
    gdpbv100::FullMessage = 128 bit message, obtained by combining a 64b extended epoch (bits 127-64) with a gdpbv100::Message (bits 63-0)

    The class definition with accessor and setter methods can be found at:
    https://lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/mcbm2018/dataformat/gDpbMessv100.h
    https://lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/mcbm2018/dataformat/gDpbMessv100.cxx

    gdpbv100::Message data format for all defined types, as in memory:

    Only the 32b Hits are used in the eTOF system in normal operation. 24b Hits are a debug option which is not planned at the moment to be used with this system.

    The time of a hit is calculated as:

    ClockCycleNs = 6.25
    EpochNs = 25600
    FtBins = 112

    HitTime = (Extended)Epoch * EpochNs + ClockCycleNs * FullTs / FtBins

    For monitoring and eventual calibration, one can obtain the FineTime by

    FineTime = FullTs % FtBins
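    A minimal sketch of this arithmetic in code (illustrative only, not the actual unpacker):

    #include <cstdint>
    #include <cstdio>

    constexpr double   kClockCycleNs = 6.25;
    constexpr double   kEpochNs      = 25600.0;
    constexpr uint32_t kFtBins       = 112;

    double hitTimeNs(uint64_t extendedEpoch, uint32_t fullTs) {
      return extendedEpoch * kEpochNs + kClockCycleNs * fullTs / kFtBins;
    }

    uint32_t fineTime(uint32_t fullTs) { return fullTs % kFtBins; }   // for monitoring / calibration

    int main() {
      // Illustrative values only.
      std::printf("HitTime = %.3f ns, FineTime = %u\n", hitTimeNs(1000, 250), fineTime(250));
      return 0;
    }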

    For more information on the GET4 v2.00 used in eTOF, its performances and its timestamps, please refer to the GET4 manual which can be found at:
    To be added when publicly released

    Data Format Used in the 2018 run (prototyping/commissioning)

    1) CbmTofStarSubevent2018

    One subevent sent by the eTOF sector 16 for each STAR trigger.
    The packed version sent to the STAR DAQ systems is made of a 256b header (4 long unsigned integers) followed by a buffer of gdpb::FullMessage (128b each).

    The event size field is filled only for data sets started after 24/04/2018 at 11:10 EDT. For older runs the field is filled with 0.

    The event status flags are (in bit order):
    0x0001 => Bad Event
    0x0002 => Overlap Event = Event with trigger window overlapping the previous/next event, not possible in 2018 run
    0x0004 => Empty Event
    0x0008 => Start Border Event = Event with a trigger window overlapping the Start border of the timeslice (may have data in previous timeslice), data from previous timeslice are not added to the event in 2018 run!
    0x0010 => End Border Event = Event with a trigger window overlapping the End border of the timeslice (may have data in next timeslice), in run 2018 only possible if the trigger message for at least one gDPB was in the current timeslice
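    A minimal sketch of how these flag bits might be tested in user code (the enum names are made up for illustration; only the bit values come from the list above):

    #include <cstdint>
    #include <cstdio>

    enum EtofEventStatus2018 : uint16_t {
      kBadEvent         = 0x0001,
      kOverlapEvent     = 0x0002,
      kEmptyEvent       = 0x0004,
      kStartBorderEvent = 0x0008,
      kEndBorderEvent   = 0x0010
    };

    int main() {
      const uint16_t flags = kEmptyEvent | kEndBorderEvent;          // illustrative value
      std::printf("empty=%d endBorder=%d\n",
                  (flags & kEmptyEvent) != 0, (flags & kEndBorderEvent) != 0);
      return 0;
    }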

    The class definition with accessor and setter methods can be found at:
    lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/CbmTofStarData2018.h
    lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/CbmTofStarData2018.cxx

    CbmTofStarSubevent2018 data format as in memory:

    2) gdpb::FullMessage and gdpb::Message data format

    gdpb::Message = 64 bit message as received from the eTOF gDPB boards
    gdpb::FullMessage = 128 bit message, obtained by combining a 64b extended epoch (bits 127-64) with a gdpb::Message (bits 63-0)

    The class definition with accessor and setter methods can be found at:
    lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/rocMess_wGet4v2.h
    lxcbmredmine01.gsi.de/projects/cbmroot/repository/entry/trunk/fles/star2018/unpacker/rocMess_wGet4v2.cxx

    gdpb::Message data format for all defined types, as in memory:

    Only the 32b Hits are used in the eTOF system in normal operation. 24b Hits are a debug option which is not planned at the moment to be used with this system.

    The time of a hit is calculated as:

    ClockCycleNs = 6.25
    EpochNs = 25600
    FtBins = 112

    HitTime = (Extended)Epoch * EpochNs + ClockCycleNs * FullTs / FtBins

    For monitoring and eventual calibration, one can obtain the FineTime by

    FineTime = FullTs % FtBins

    For more information on the GET4 v2.00 used in eTOF, its performances and its timestamps, please refer to the GET4 manual which can be found at:
    To be added when publicly released

    eTOF database tables

    collection of database tables for the eTOF

    CalibMaker

    • electronics map
    • status map
    • calib parameters
    • timing window
    • digi time correction
    • digi tot correction
    • slewing correction
    • reset time offset
    • pulser tot peak
    • pulser time difference in Gbtx

    HitMaker

    • hit parameters
    • signal velocity

    MatchMaker

    • match parameters
    • geometry alignment
    • geometry alignment on counter level
    • detector resolution

    SimMaker

    • sim efficiency

    calib param

    parameters used in the eTOF CalibMaker

    variables:

    float                   get4TotBinWidthNs       // conversion factor of Get4 ToT from bin width to nanoseconds
    octet                  minDigisInSlewBin       // minimum number of digis in each bin for the slewing corrections
    unsigned short  referencePulserIndex   // index of the pulser channel used as reference

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    8 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofCalibParam.idl:

    /* etofCalibParam.idl
    *
    * table: etofCalibParam
    *
    * description: calibration parameters used in the
    *              etof calibmaker
    *
    * author: Florian Seck ( TU Darmstadt )
    *
    */

    struct etofCalibParam{

    float           get4TotBinWidthNs;     /* tot bin width to ns conversion */
    octet           minDigisInSlewBin;     /* min number of digis per slewing bin */
    unsigned short  referencePulserIndex;  /* index of the pulser channel used as reference */

    };

    /* end etofCalibParam.idl */

     

    detector resolution

    detector resolution (local X, localY, time) of the eTOF counters

    variables:

    float   detectorResX[ 108 ]     // resolution in localX (cm) in each RPC counter (12 sectors * 3 modules * 3 counters)
    float   detectorResY[ 108 ]     // resolution in localY (cm) in each RPC counter
    float   detectorResT[ 108 ]     // resolution in time (ns) in each RPC counter
                                               // ( sector - 13 ) * 9  +  ( zPlane - 1 ) * 3  +  ( counter - 1 )
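    A minimal sketch of this counter indexing (an illustrative helper, not STAR database code):

    #include <cstdio>

    // sector: 13-24, zPlane: 1-3, counter: 1-3  ->  index 0-107
    int etofCounterIndex(int sector, int zPlane, int counter) {
      return (sector - 13) * 9 + (zPlane - 1) * 3 + (counter - 1);
    }

    int main() {
      std::printf("%d %d\n", etofCounterIndex(13, 1, 1), etofCounterIndex(24, 3, 3));   // 0 107
      return 0;
    }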

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    1296 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofDetResolution.idl:

    /* etofDetResolution.idl
    *
    * table: etofDetResolution
    *
    * description: detector resolution (localX, localY, time)
    *              of each etof counter
    *
    * author: Florian Seck ( TU Darmstadt )
    *
    */

    struct etofDetResolution{

    float  detectorResX[108];  /* detector resolution in local X (cm)  */
    float  detectorResY[108];  /* detector resolution in local Y (cm)  */
    float  detectorResT[108];  /* detector resolution in time (ns)     */

    };

    /* end etofDetResolution.idl */

    digi time correction

    calibration parameter applied to the time of the eTOF digis in each channel

    variables:

    float   timeCorr[ 6912 ]     // correction parameter to be applied to the digi time in each channel of eTOF
                                          // combines hit position offsets and offsets related to electronics, cables, etc.
                                          // ( sector - 13 ) * 576  +  ( zPlane - 1 ) * 192  + ( counter - 1 ) * 64 + ( strip - 1 ) * 2 + ( side - 1 )
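    A minimal sketch of this channel indexing (an illustrative helper, not STAR database code):

    #include <cstdio>

    // sector: 13-24, zPlane: 1-3, counter: 1-3, strip: 1-32, side: 1-2  ->  index 0-6911
    int etofChannelIndex(int sector, int zPlane, int counter, int strip, int side) {
      return (sector - 13) * 576 + (zPlane - 1) * 192 + (counter - 1) * 64
           + (strip - 1) * 2 + (side - 1);
    }

    int main() {
      std::printf("%d %d\n", etofChannelIndex(13, 1, 1, 1, 1), etofChannelIndex(24, 3, 3, 32, 2));   // 0 6911
      return 0;
    }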


    frequency:

    in general once per dataset

    index name:

    *table is not indexed

    size:

    27648 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofDigiTimeCorr.idl:

    /* etofDigiTimeCorr.idl
    *
    * table: etofDigiTimeCorr
    *
    * description: correction parameter to be applied to each channel
    *              on the digi level
    *
    * author: Florian Seck ( TU Darmstadt )
    *
    */

    struct etofDigiTimeCorr{

    float  timeCorr[6912];  /* time offset correction for etof digis */

    };

    /* end etofDigiTimeCorr.idl */

    digi tot correction

    calibration parameter applied to the ToT of the eTOF digis in each channel

    variables:

    float   totCorr[ 6912 ]     // correction factor to be applied to the digi TOT in each channel of eTOF
                                       // ( sector - 13 ) * 576  +  ( zPlane - 1 ) * 192  + ( counter - 1 ) * 64 + ( strip - 1 ) * 2 + ( side - 1 )


    frequency:

    in general once per dataset

    index name:

    *table is not indexed

    size:

    27648 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofDigiTotCorr.idl:

    /* etofDigiTotCorr.idl
    *
    * table: etofDigiTotCorr
    *
    * description: correction factor to be applied to each channel
    *              on the digi level
    *
    * author: Florian Seck ( TU Darmstadt )
    *
    */

    struct etofDigiTotCorr{

    float  totCorr[6912];  /* tot correction factor for etof digis */

    };

    /* end etofDigiTotCorr.idl */

    electronics map

    map electronic addresses ( AFCK address, chip id, channel id in each Get4 chip ) to geometry: sector, z-plane, counter, strip, side

    variables:
    octet                  nAfcks                          // number AFCK boards
    unsigned short    nChannels                     // number of channels per AFCK board

    unsigned short    afckAddress[12]            // MAC address of AFCK board
    octet                  sector[12]                     // eTOF sector linked to the AFCK

    unsigned short    channelNumber[ 576 ]    // channel number ( up to 9 RPCs with 64 channels each are connected to one AFCK ) = chipId * 10 + channelId
    unsigned short    geometryId[ 576 ]          // eTOF plane, counter, strip and side corresponding to each channel
                                                                // geometry id = zPlane * 10000 + counter * 1000 + strip * 10 + side
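    A minimal sketch of packing and unpacking this geometry id (an illustrative helper, not STAR database code):

    #include <cstdio>

    // geometryId = zPlane * 10000 + counter * 1000 + strip * 10 + side
    int geometryId(int zPlane, int counter, int strip, int side) {
      return zPlane * 10000 + counter * 1000 + strip * 10 + side;
    }

    void decodeGeometryId(int id, int& zPlane, int& counter, int& strip, int& side) {
      zPlane  =  id / 10000;
      counter = (id / 1000) % 10;
      strip   = (id / 10)   % 100;
      side    =  id % 10;
    }

    int main() {
      int zp, co, st, si;
      decodeGeometryId(geometryId(2, 3, 17, 1), zp, co, st, si);
      std::printf("zPlane=%d counter=%d strip=%d side=%d\n", zp, co, st, si);   // 2 3 17 1
      return 0;
    }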

    frequency:

    after the initial upload, this table will only be updated when electronic boards need to be exchanged
    or an error in the cable connections is found ( potentially once per RHIC run or less )

    index name:

    *table is not indexed

    size:

    the size per entry is 2344 bytes

    write access:

    fseck          --    Florian Seck          ( TU Darmstadt )
    weidenkaff  --    Philipp Weidenkaff  ( Heidelberg University )

    etofElectronicsMap.idl:

    /* etofElectronicsMap.idl
    *
    * table: etofElectronicsMap
    *
    * description: parameters for conversion of electronic addresses to
    *              etof geometry identifiers
    *
    * author: Florian Seck ( TU Darmstadt )
    *
    */

    struct etofElectronicsMap{

    octet           nAfcks;              /* number of AFCK boards in the system */
    unsigned short  nChannels;           /* number of channels connected to the AFCKs */

    unsigned short  afckAddress[12];     /* MAC address of AFCK board */
    octet           sector[12];          /* eTOF sector linked to the AFCK */

    unsigned short  channelNumber[576];  /* channel number */
    unsigned short  geometryId[576];     /* geometry id->plane,counter,strip,side */

    };

    /* end etofElectronicsMap.idl */

    etof get4state

     state of the Get4s with regard to clock jumps


    variables:

     unsigned long etofGet4State[1000000];    // state of get4s, changes & corresponding event id's 


    frequency:

    each entry covers a run

    index name:

    *table is not indexed

    size:

     about 4MB per run

    write access:

    ysoehngen  --  Yannick Söhngen   ( Heidelberg University )

    etofGet4State.idl:

    /* etofGet4State.idl
    *
    * table: etofGet4State
    *
    * description: Get4 states and state changes dealing with "Clock Jumps":
    *              0 - good, 1 - too early by 6.25 ns, 2 - too late by 6.25 ns, 3 - bad
    *
    * author: Yannick Söhngen ( PI Heidelberg )
    *
    */

    struct etofGet4State {

      unsigned long etofGet4State[1000000]; /* state of get4s, changes & event id */

    };

     

    geometry alignment

    geometry alignment parameters (local X, localY, localZ, rotation angles ) of the eTOF counters

    variables:

    float   offsetX[ 36 ]      // offset in local X (cm) for each eTOF module (12 sectors * 3 modules)
    float   offsetY[ 36 ]      // offset in local Y (cm) for each eTOF module
    float   offsetZ[ 36 ]      // offset in local Z (cm) for each eTOF module
    float   angleXY[ 36 ]      // rotation angle in local XY plane for each eTOF module
    float   angleXZ[ 36 ]      // rotation angle in local XZ plane for each eTOF module
    float   angleYZ[ 36 ]      // rotation angle in local YZ plane for each eTOF module
                               // module index = ( sector - 13 ) * 3  +  ( zPlane - 1 )

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    864 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofGeomAlign.idl:

    /* etofGeomAlign.idl

    *

    * table: etofGeomAlign

    *

    * description: geometry alignment parameters ( offset in localX, localY, localZ,

    *              rotations in XY, XZ, YZ planes) of each etof module

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofGeomAlign{

    float  offsetX[36];  /* offset in local X (cm) */
    float  offsetY[36];  /* offset in local Y (cm) */
    float  offsetZ[36];  /* offset in local Z (cm) */
    float  angleXY[36];  /* rotation in local XY plane */
    float  angleXZ[36];  /* rotation in local XZ plane */
    float  angleYZ[36];  /* rotation in local YZ plane */

    };

    /* end etofGeomAlign.idl */

    geometry alignment on counter level

    geometry alignment parameters ( local X, local Y, local Z ) of the individual eTOF counters

    variables:

       float  detectorAlignX[108];  // detector alignment in local X (cm)   (12 sectors * 3 modules * 3 counters)
       float  detectorAlignY[108];  // detector alignment in local Y (cm) 
       float  detectorAlignZ[108];  // detector alignment in local Z (cm)                  

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

     1296 bytes per entry

    write access:

    ysoehngen --  Yannick Soehngen   ( Heidelberg University )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofAlign.idl:

    /* etofAlign.idl
    *
    * table: etofAlign
    *
    * description: detector alignment parameters (local X,local Y,local Z)

    *                    of each etof counter

    *
    * author: Yannick Söhngen (Universität Heidelberg)
    *
    */

    struct etofAlign {
        float  detectorAlignX[108];  /* detector alignment in local X (cm)  */
        float  detectorAlignY[108];  /* detector alignment in local Y (cm)  */
        float  detectorAlignZ[108];  /* detector alignment in local Z (cm)  */
    };
    /* end etofAlign.idl */

    hit modification on counter level

    Flag for modification of hits on counter level (flip local x & y)

    variables:

       octet  detectorModFlag[108];  // flag for hit modification on counter level (1: flip local x & y, 2: flip local x, 3: flip local y, 4&5 : rotate by +- 90°)
                 

    frequency:

    once per hardware replacement (about once per year)

    index name:

    *table is not indexed

    size:

      108 bytes per entry

    write access:

    ysoehngen --  Yannick Soehngen   ( Heidelberg University )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofModCounter.idl:

    /* etofModCounter.idl
    *
    * table: etofModCounter
    *
    * description: flag to modify Hits on Counter

    *                    0: no modification, 1: flip local X & Y position            
    *                    2: flip local X, 3: flip local Y, 4&5: rotate by +-90°

    *
    * author: Yannick Söhngen (Universität Heidelberg)
    *
    */

    struct etofModCounter {
        octet detectorModFlag[108];  /* flag for hit-modifications on counter level  */
    };
    /* end etofModCounter.idl */
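
    A minimal C++ sketch of how the flag values listed above could be applied to a hit's local coordinates; reading "flip" as a sign inversion and the sign convention of the +-90° rotations are assumptions here, not taken from the table.

    // Minimal sketch: apply the etofModCounter flag to a hit's local coordinates.
    // Flag meanings follow the table above; the "flip" = sign inversion reading
    // and the rotation sign convention (flags 4 and 5) are assumptions.
    #include <utility>

    std::pair<float, float> applyModFlag(float x, float y, unsigned char flag)
    {
        switch (flag) {
            case 1:  return { -x, -y };   // flip local X & Y
            case 2:  return { -x,  y };   // flip local X
            case 3:  return {  x, -y };   // flip local Y
            case 4:  return { -y,  x };   // rotate by +90° (assumed convention)
            case 5:  return {  y, -x };   // rotate by -90° (assumed convention)
            default: return {  x,  y };   // 0: no modification
        }
    }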

    hit param

    parameters used in the eTOF HitMaker

    variables:

    float  maxLocalY               // maximum absolute local Y for the matching of digis from the two sides of a strip
    float  clusterMergeRadius  // maximum distance for hits on adjacent strips to be clustered together

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    8 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofHitParam.idl:

    /* etofHitParam.idl

    *

    * table: etofHitParam

    *

    * description: parameters used in the etof hitmaker for clustering

    *

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofHitParam{

    float  maxLocalY;           /* maximum absolute local Y for matching of digis */
    float  clusterMergeRadius;  /* maximum distance for clustering of hits */

    };

    /* end etofHitParam.idl */
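
    A minimal C++ sketch of the two cuts described above, assuming maxLocalY is compared against the reconstructed |local Y| of a side-matched digi pair and clusterMergeRadius against the spatial distance of hits on adjacent strips (the actual hit maker may also use timing information); struct and function names are illustrative.

    // Minimal sketch: the two cuts from etofHitParam as described above.
    #include <cmath>

    struct EtofHit { float localX; float localY; float time; };

    // Accept a side-matched digi pair only if the reconstructed local Y
    // lies inside the counter, i.e. |localY| < maxLocalY.
    bool passesLocalYCut(float localY, float maxLocalY)
    {
        return std::fabs(localY) < maxLocalY;
    }

    // Merge two hits on adjacent strips if their distance in local
    // coordinates is below clusterMergeRadius.
    bool shouldMerge(const EtofHit& a, const EtofHit& b, float clusterMergeRadius)
    {
        const float dx = a.localX - b.localX;
        const float dy = a.localY - b.localY;
        return std::sqrt(dx * dx + dy * dy) < clusterMergeRadius;
    }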

    match param

    parameters used in the eTOF MatchMaker

    variables:

    float   matchRadius              // maximum distance between eTOF hit and track intersection
    octet  trackCutNHitsFit         // cut for tracks to be used in the matching: nHitsFit in TPC
    float   trackCutNHitsRatio     // cut for tracks to be used in the matching: nHitsFit to nHitsPoss ratio in TPC
    float   trackCutLowPt            // cut for tracks to be used in the matching: low pt

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    16 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )


    etofMatchParam.idl:

    /* etofMatchParam.idl

    *

    * table: etofMatchParam

    *

    * description: parameters used in the etof matchmaker

    *

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofMatchParam{

    float  matchRadius;         /* maximum distance etof hit to track intersection */
    octet  trackCutNHitsFit;    /* track cut nHitsFit in TPC */
    float  trackCutNHitsRatio;  /* track cut nHitsFit to nHitsPoss ratio in TPC */
    float  trackCutLowPt;       /* track cut low pt */

    };

    /* end etofMatchParam.idl */
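
    A minimal C++ sketch of the track selection and matching distance described above; the track structure and names are illustrative placeholders, not the STAR match-maker API.

    // Minimal sketch: cuts from etofMatchParam as described above.
    #include <cmath>

    struct TrackCand { int nHitsFit; int nHitsPoss; float pt; };

    bool passesTrackCuts(const TrackCand& t,
                         int trackCutNHitsFit,
                         float trackCutNHitsRatio,
                         float trackCutLowPt)
    {
        if (t.nHitsFit < trackCutNHitsFit) return false;
        if (t.nHitsPoss <= 0) return false;
        if (static_cast<float>(t.nHitsFit) / t.nHitsPoss < trackCutNHitsRatio) return false;
        return t.pt > trackCutLowPt;
    }

    // An eTOF hit and a track intersection are considered matched if their
    // distance on the counter is below matchRadius.
    bool isMatched(float dX, float dY, float matchRadius)
    {
        return std::sqrt(dX * dX + dY * dY) < matchRadius;
    }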

    pulser time difference in Gbtx

    time difference of pulser digis within the same Gbtx (e.g. due to cables) compared to counter 1

    variables:

    float   pulserTimeDiffGbtx[ 144 ]     // time difference of one pulser to the pulser in counter 1 on the same Gbtx
                                                           // ( sector - 13 ) * 12  +  ( zPlane - 1 ) * 4  + ( side - 1 ) * 2 +  x
                                                           //  x = 0: counter 1 - counter 2,   x= 1: counter 1 - counter 3

    frequency:

    in general once per dataset

    index name:

    *table is not indexed

    size:

    576 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofPulserTimeDiffGbtx.idl:

    /* etofPulserTimeDiffGbtx.idl

    *

    * table: etofPulserTimeDiffGbtx

    *

    * description: time difference of pulsers within one Gbtx

    *                    (used to correct for missing pulsers)

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofPulserTimeDiffGbtx{

    float  pulserTimeDiffGbtx[144];  /* time difference between counters within one Gbtx */

    };

    /* end etofPulserTimeDiffGbtx.idl */

    pulser tot peak

    parameters for ToT used to find the pulsers in the events

    variables:

    octet  pulserTot[ 216 ]    // ToT peak position (bin between 0 and 255) of pulsers for each side of the RPC counters

    frequency:

    once (or up to a few times) per year of RHIC running

    index name:

    *table is not indexed

    size:

    the size per entry is 216 bytes

    write access:

    fseck          --    Florian Seck          ( TU Darmstadt )
    weidenkaff  --    Philipp Weidenkaff  ( Heidelberg University )

    etofPulserTotPeak.idl:
    /* etofPulserTotPeak.idl

    *

    * table: etofPulserTotPeak

    *

    * description: parameters for ToT used to find the pulsers in the events

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofPulserTotPeak{
    octet pulserTot[216]; /*ToT peak position (bin: 0-255) of pulsers per side*/
    };

    /* end etofPulserTotPeak.idl */

    reset time offset

    parameter for common T0 offset correction to sync the eTOF with the bTOF clock ( could be on run-by-run basis )

    variables:

    float    resetTimeOffset    // common T0 offset correction (bTOF clock reset)

    frequency:

    can change from run to run (~4000 runs in 2018: FXT 3.0 GeV, isobar, and first half of the 27 GeV datasets); it should be more stable in the future (--> entries will become much less frequent)

    index name:

    *table is not indexed

    size:

    the size per entry is 4 bytes -->  ~8 000 bytes per year maximum

    write access:

    fseck          --    Florian Seck          ( TU Darmstadt )
    weidenkaff  --    Philipp Weidenkaff  ( Heidelberg University )

    etofResetTimeCorr.idl:

    /* etofResetTimeCorr.idl

    *

    * table: etofResetTimeCorr

    *

    * description: parameter for a common T0 offset correction to sync

    *              the whole eTOF with the bTOF clock

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofResetTimeCorr{

    float  resetTimeOffset;   /* common t0 offset correction to sync eTOF with bTOF clock */

    };

    /* end etofResetTimeCorr.idl */

    signal velocity

    velocity of electronic signals travelling across the RPC strips

    variables:

    float   signalVelocity[ 108 ]     // signal velocity in each RPC counter (12 sectors * 3 modules * 3 counters)
                                               // ( sector - 13 ) * 9  +  ( zPlane - 1 ) * 3  +  ( counter - 1 )


    frequency:

    once per RHIC run

    index name:

    *table is not indexed

    size:

    432 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )


    etofSignalVelocity.idl:

    /* etofSignalVelocity.idl

    *

    * table: etofSignalVelocity

    *

    * description: calibration parameters of signal velocity

    *              of etof counters

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofSignalVelocity{

    float  signalVelocity[108];  /* signal velocity of each counter  */

    };
    /* end etofSignalVelocity.idl */
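
    A minimal C++ sketch of the usual way such a per-counter signal velocity enters the hit reconstruction: the position along the strip follows from the arrival-time difference of the two readout sides. The factor 1/2 and the sign convention are assumptions here, not taken from the table.

    // Minimal sketch: reconstruct the local position along a strip from the
    // two readout sides using the per-counter signal velocity.
    float localYFromSides(float timeSide1, float timeSide2,
                          float signalVelocity /* cm per ns, assumed units */)
    {
        // half the arrival-time difference times the velocity gives the
        // position along the strip (sign convention assumed)
        return 0.5f * (timeSide1 - timeSide2) * signalVelocity;
    }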

    sim efficiency

    efficiency of hit creation in eTOF counters used for simulation

    variables:

    float   efficiency[ 108 ]      // hit creation efficiency in each RPC counter (12 sectors * 3 modules * 3 counters)
                                          // ( sector - 13 ) * 9  +  ( zPlane - 1 ) * 3  +  ( counter - 1 )

    frequency:

    once per RHIC run or less

    index name:

    *table is not indexed

    size:

    432 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofSimEfficiency.idl:

    /* etofSimEfficiency.idl

    *

    * table: etofSimEfficiency

    *

    * description: efficiency for hit creation in etof counters

    *

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofSimEfficiency{

    float  efficiency[108];  /* hit creation efficiency in each counter */

    };

    /* end etofSimEfficiency.idl */
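
    A minimal C++ sketch of how a per-counter efficiency is typically applied in simulation: each generated hit is kept with probability efficiency[counter], using the counter index formula quoted above.

    // Minimal sketch: keep a simulated hit with the per-counter probability
    // from etofSimEfficiency. Index = (sector-13)*9 + (zPlane-1)*3 + (counter-1).
    #include <random>

    bool keepSimHit(int sector, int zPlane, int counter,
                    const float efficiency[108], std::mt19937& rng)
    {
        const int idx = (sector - 13) * 9 + (zPlane - 1) * 3 + (counter - 1);
        std::uniform_real_distribution<float> flat(0.0f, 1.0f);
        return flat(rng) < efficiency[idx];
    }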

    slewing correction

    parameters for the slewing corrections applied to the eTOF digis in each channel and ToT bin


    variables:

    unsigned short   channelId[ 207360 ]      // ( sector - 13 ) * 576  +  ( zPlane - 1 ) * 192  + ( counter - 1 ) * 64 + ( strip - 1 ) * 2 + ( side - 1 )  --> 6912 channels
    unsigned short   upperTotEdge[ 207360 ]   // upper edge of tot interval ( * 100 ) for corr --> 30 bins per channel
    short            corr[ 207360 ]           // correction parameter ( * 100 ) to be applied to the digi time

    frequency:

    in general once per dataset

    index name:

    *table is not indexed

    size:

    the size per entry is 1244160 bytes

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofDigiSlewCorr.idl:

    /* etofDigiSlewCorr.idl

    *

    * table: etofSlewTimeCorr

    *

    * description: correction parameter to be applied to each channel (6912)

    *              and tot bin (30) on the digi level --> 207360 values

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofDigiSlewCorr{

    unsigned short channelId[207360];    /* channel id */
    unsigned short upperTotEdge[207360]; /* edge of tot intervals for corr */

    short          corr[207360]; /* correction parameter to be applied to the digi time */

    };

    /* end etofDigiSlewCorr.idl */
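
    A minimal C++ sketch of the look-up implied by the layout above (30 ToT bins per channel, values stored multiplied by 100); the channel-major ordering of the 207360 entries and the time units are assumptions here.

    // Minimal sketch: look up the slewing correction for one digi.
    // Assumptions: entries are grouped channel by channel (30 consecutive ToT
    // bins per channel); upperTotEdge and corr are stored times 100.
    float slewingCorrection(int channelIndex, float tot,
                            const unsigned short channelId[207360],
                            const unsigned short upperTotEdge[207360],
                            const short corr[207360])
    {
        const int nBins = 30;
        for (int bin = 0; bin < nBins; ++bin) {
            const int i = channelIndex * nBins + bin;
            if (channelId[i] != channelIndex) continue;   // sanity check on the stored channel id
            if (tot <= upperTotEdge[i] / 100.0f || bin == nBins - 1)
                return corr[i] / 100.0f;                  // correction for the digi time (units assumed)
        }
        return 0.0f;   // no entry found for this channel
    }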
     

    status map

    status map of eTOF channels

    variables:

    octet   status[ 6912 ]     // status of each channel ( 0 - off / not existing,  1 - on )
                                       // ( sector - 13 ) * 576  +  ( zPlane - 1 ) * 192  + ( counter - 1 ) * 64 + ( strip - 1 ) * 2 + ( side - 1 )


    frequency:

    whenever some part of the detector is taken out for an extended period of time; in general, however, only once per RHIC run

    index name:

    *table is not indexed

    size:

    6912 bytes per entry

    write access:

    fseck          --  Florian Seck           ( TU Darmstadt )
    weidenkaff  --  Philipp Weidenkaff   ( Heidelberg University )

    etofStatusMap.idl:

    /* etofStatusMap.idl

    *

    * table: etofStatusMap

    *

    * description: status map of all etof channels

    *              0 - off / not existing,  1 - on

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofStatusMap{

    octet  status[6912];  /* status of each eTOF channel */

    };

    /* end etofStatusMap.idl */
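
    A minimal C++ sketch of the flat channel index quoted above together with a status look-up; the function names are illustrative only.

    // Minimal sketch: flat eTOF channel index as quoted above,
    // (sector-13)*576 + (zPlane-1)*192 + (counter-1)*64 + (strip-1)*2 + (side-1),
    // and a status check against etofStatusMap.
    int etofChannelIndex(int sector, int zPlane, int counter, int strip, int side)
    {
        return (sector - 13) * 576 + (zPlane - 1) * 192
             + (counter - 1) * 64  + (strip - 1) * 2 + (side - 1);
    }

    bool channelIsOn(int sector, int zPlane, int counter, int strip, int side,
                     const unsigned char status[6912])
    {
        const int idx = etofChannelIndex(sector, zPlane, counter, strip, side);
        return idx >= 0 && idx < 6912 && status[idx] == 1;   // 1 = on, 0 = off / not existing
    }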

    timing window

    parameters for the timing window of eTOF digis on each AFCK board

    variables:

    unsigned short    afckAddress[12]   // MAC address of AFCK board

    float                   timingMin[12]       // lower edge of the timing window
    float                   timingMax[12]      // upper edge of the timing window
    float                   timingPeak[12]     // peak position in the timing window
    float                   pulserMin[12]       // lower edge of the 'pulser-after-token' window
    float                   pulserMax[12]      // upper edge of the 'pulser-after-token' window
    float                   pulserPeak[12]     // peak position in the 'pulser-after-token' window

    frequency:

    once per dataset

    index name:

    *table is not indexed

    size:

    312 bytes per entry

    write access:

    fseck          --    Florian Seck          ( TU Darmstadt )
    weidenkaff  --    Philipp Weidenkaff  ( Heidelberg University )

    etofTimingWindow.idl:

    /* etofTimingWindow.idl

    *

    * table: etofTimingWindow

    *

    * description: parameters for selection of digis inside the

    *              timing window and pulser digis for clock synchronization

    *

    * author: Florian Seck ( TU Darmstadt )

    *

    */

    struct etofTimingWindow{
    unsigned short  afckAddress[12];   /* MAC address of AFCK board */

    float  timingMin[12];  /* lower edge of the timing window */
    float  timingMax[12];  /* upper edge of the timing window */
    float  timingPeak[12]; /* peak position in the timing window */
    float  pulserMin[12];  /* lower edge of the pulser-after-token window */
    float  pulserMax[12];  /* upper edge of the pulser-after-token window */
    float  pulserPeak[12]; /* peak position in the pulser-after-token window */
    };

    /* end etofTimingWindow.idl */
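
    A minimal C++ sketch of classifying a digi on one AFCK board with the window edges above; looking the board up by its MAC address and the enum are illustrative, not the official unpacking code.

    // Minimal sketch: classify a digi time on one AFCK board using the
    // etofTimingWindow parameters.
    enum class DigiClass { Outside, InTimingWindow, PulserAfterToken };

    DigiClass classifyDigi(unsigned short afck, float time,
                           const unsigned short afckAddress[12],
                           const float timingMin[12], const float timingMax[12],
                           const float pulserMin[12], const float pulserMax[12])
    {
        for (int i = 0; i < 12; ++i) {
            if (afckAddress[i] != afck) continue;   // find the board by its MAC address
            if (time >= timingMin[i] && time <= timingMax[i]) return DigiClass::InTimingWindow;
            if (time >= pulserMin[i] && time <= pulserMax[i]) return DigiClass::PulserAfterToken;
            break;
        }
        return DigiClass::Outside;
    }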
     

    eTOF hit positions

    eTOF noise rates

    noise rates 2020

    noise rates 2019

    noise rates 2018

    eTOF noise rates in Run18

     - noise rate calculations for several pedAsPhys_tcd_only runs taken between March 24th (day 083) and June 21st (day 172)  [scroll down to get to the newest rates]
     
    - hit maps for each counter are attached below

     *** hit rate calculation in run 19083046 ***
    nEvents: 3999774   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:    5064  --> hit rate:   316.5 Hz   -->  0.37 Hz/cm^2
    nDigis in module 1  counter 2:    9625  --> hit rate:   601.6 Hz   -->  0.70 Hz/cm^2
    nDigis in module 1  counter 3:    2904  --> hit rate:   181.5 Hz   -->  0.21 Hz/cm^2
    nDigis in module 2  counter 1:    9650  --> hit rate:   603.2 Hz   -->  0.70 Hz/cm^2
    nDigis in module 2  counter 2:    5579  --> hit rate:   348.7 Hz   -->  0.40 Hz/cm^2
    nDigis in module 2  counter 3:    2876  --> hit rate:   179.8 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 1:    2548  --> hit rate:   159.3 Hz   -->  0.18 Hz/cm^2
    nDigis in module 3  counter 2:    4147  --> hit rate:   259.2 Hz   -->  0.30 Hz/cm^2
    nDigis in module 3  counter 3:    1742  --> hit rate:   108.9 Hz   -->  0.13 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 332
    maximum # of digis per channel on module 1, counter 2: 416
    maximum # of digis per channel on module 1, counter 3: 228
    maximum # of digis per channel on module 2, counter 1: 810
    maximum # of digis per channel on module 2, counter 2: 1281
    maximum # of digis per channel on module 2, counter 3: 397
    maximum # of digis per channel on module 3, counter 1: 138
    maximum # of digis per channel on module 3, counter 2: 622
    maximum # of digis per channel on module 3, counter 3: 192
     *** -------------------- ***
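
    For reference, the arithmetic behind these numbers as a minimal C++ sketch. Note that reproducing the quoted rates from nDigis and nEvents requires an effective sampling time of 4 us per event (i.e. twice the quoted 2 us window, e.g. +-2 us around the trigger); this is inferred from the numbers above, not stated in the original text.

    // Minimal sketch of the hit rate arithmetic in the listings above.
    #include <cstdio>

    int main()
    {
        const double nEvents      = 3999774;    // run 19083046
        const double windowPerEvt = 4.0e-6;     // effective sampling time per event [s] (inferred)
        const double activeArea   = 864.0;      // counter active area [cm^2]
        const double nDigis       = 5064;       // module 1, counter 1

        const double rate     = nDigis / (nEvents * windowPerEvt);   // ~316.5 Hz
        const double areaRate = rate / activeArea;                   // ~0.37 Hz/cm^2

        std::printf("hit rate: %.1f Hz  -->  %.2f Hz/cm^2\n", rate, areaRate);
        return 0;
    }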

     *** hit rate calculation in run 19091032 ***
    nEvents: 3999773   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:    9299  --> hit rate:   581.2 Hz   -->  0.67 Hz/cm^2
    nDigis in module 1  counter 2:   10075  --> hit rate:   629.7 Hz   -->  0.73 Hz/cm^2
    nDigis in module 1  counter 3:    3278  --> hit rate:   204.9 Hz   -->  0.24 Hz/cm^2
    nDigis in module 2  counter 1:   13865  --> hit rate:   866.6 Hz   -->  1.00 Hz/cm^2
    nDigis in module 2  counter 2:    6088  --> hit rate:   380.5 Hz   -->  0.44 Hz/cm^2
    nDigis in module 2  counter 3:    3194  --> hit rate:   199.6 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 1:    2694  --> hit rate:   168.4 Hz   -->  0.19 Hz/cm^2
    nDigis in module 3  counter 2:    4876  --> hit rate:   304.8 Hz   -->  0.35 Hz/cm^2
    nDigis in module 3  counter 3:    2177  --> hit rate:   136.1 Hz   -->  0.16 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 367
    maximum # of digis per channel on module 1, counter 2: 365
    maximum # of digis per channel on module 1, counter 3: 246
    maximum # of digis per channel on module 2, counter 1: 999
    maximum # of digis per channel on module 2, counter 2: 1529
    maximum # of digis per channel on module 2, counter 3: 320
    maximum # of digis per channel on module 3, counter 1: 269
    maximum # of digis per channel on module 3, counter 2: 983
    maximum # of digis per channel on module 3, counter 3: 238
     *** -------------------- ***

     *** hit rate calculation in run 19093021 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:   15491  --> hit rate:   968.2 Hz   -->  1.12 Hz/cm^2
    nDigis in module 1  counter 2:   10526  --> hit rate:   657.9 Hz   -->  0.76 Hz/cm^2
    nDigis in module 1  counter 3:    3292  --> hit rate:   205.8 Hz   -->  0.24 Hz/cm^2
    nDigis in module 2  counter 1:   11631  --> hit rate:   726.9 Hz   -->  0.84 Hz/cm^2
    nDigis in module 2  counter 2:    6916  --> hit rate:   432.2 Hz   -->  0.50 Hz/cm^2
    nDigis in module 2  counter 3:    3174  --> hit rate:   198.4 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 1:    1948  --> hit rate:   121.8 Hz   -->  0.14 Hz/cm^2
    nDigis in module 3  counter 2:    3219  --> hit rate:   201.2 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 3:    1597  --> hit rate:    99.8 Hz   -->  0.12 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 502
    maximum # of digis per channel on module 1, counter 2: 361
    maximum # of digis per channel on module 1, counter 3: 272
    maximum # of digis per channel on module 2, counter 1: 1001
    maximum # of digis per channel on module 2, counter 2: 1306
    maximum # of digis per channel on module 2, counter 3: 260
    maximum # of digis per channel on module 3, counter 1: 175
    maximum # of digis per channel on module 3, counter 2: 700
    maximum # of digis per channel on module 3, counter 3: 191
     *** -------------------- ***

     *** hit rate calculation in run 19095012 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:   39034  --> hit rate:  2439.6 Hz   -->  2.82 Hz/cm^2
    nDigis in module 1  counter 2:   10894  --> hit rate:   680.9 Hz   -->  0.79 Hz/cm^2
    nDigis in module 1  counter 3:    4076  --> hit rate:   254.8 Hz   -->  0.29 Hz/cm^2
    nDigis in module 2  counter 1:   13837  --> hit rate:   864.8 Hz   -->  1.00 Hz/cm^2
    nDigis in module 2  counter 2:    7281  --> hit rate:   455.1 Hz   -->  0.53 Hz/cm^2
    nDigis in module 2  counter 3:    3196  --> hit rate:   199.8 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 1:    2491  --> hit rate:   155.7 Hz   -->  0.18 Hz/cm^2
    nDigis in module 3  counter 2:    5172  --> hit rate:   323.2 Hz   -->  0.37 Hz/cm^2
    nDigis in module 3  counter 3:    2868  --> hit rate:   179.2 Hz   -->  0.21 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 1284
    maximum # of digis per channel on module 1, counter 2: 396
    maximum # of digis per channel on module 1, counter 3: 269
    maximum # of digis per channel on module 2, counter 1: 1032
    maximum # of digis per channel on module 2, counter 2: 1347
    maximum # of digis per channel on module 2, counter 3: 365
    maximum # of digis per channel on module 3, counter 1: 209
    maximum # of digis per channel on module 3, counter 2: 1047
    maximum # of digis per channel on module 3, counter 3: 276
     *** -------------------- ***

     *** hit rate calculation in run 19098029 ***
    nEvents: 3999774   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:   83290  --> hit rate:  5205.9 Hz   -->  6.03 Hz/cm^2
    nDigis in module 1  counter 2:   12568  --> hit rate:   785.5 Hz   -->  0.91 Hz/cm^2
    nDigis in module 1  counter 3:    4131  --> hit rate:   258.2 Hz   -->  0.30 Hz/cm^2
    nDigis in module 2  counter 1:   18665  --> hit rate:  1166.6 Hz   -->  1.35 Hz/cm^2
    nDigis in module 2  counter 2:    9105  --> hit rate:   569.1 Hz   -->  0.66 Hz/cm^2
    nDigis in module 2  counter 3:    4673  --> hit rate:   292.1 Hz   -->  0.34 Hz/cm^2
    nDigis in module 3  counter 1:    3334  --> hit rate:   208.4 Hz   -->  0.24 Hz/cm^2
    nDigis in module 3  counter 2:    8267  --> hit rate:   516.7 Hz   -->  0.60 Hz/cm^2
    nDigis in module 3  counter 3:    3983  --> hit rate:   249.0 Hz   -->  0.29 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 2592
    maximum # of digis per channel on module 1, counter 2: 532
    maximum # of digis per channel on module 1, counter 3: 281
    maximum # of digis per channel on module 2, counter 1: 1485
    maximum # of digis per channel on module 2, counter 2: 1543
    maximum # of digis per channel on module 2, counter 3: 377
    maximum # of digis per channel on module 3, counter 1: 383
    maximum # of digis per channel on module 3, counter 2: 1908
    maximum # of digis per channel on module 3, counter 3: 342
     *** -------------------- ***

     *** hit rate calculation in run 19100029 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  120556  --> hit rate:  7534.8 Hz   -->  8.72 Hz/cm^2
    nDigis in module 1  counter 2:   12522  --> hit rate:   782.6 Hz   -->  0.91 Hz/cm^2
    nDigis in module 1  counter 3:    3733  --> hit rate:   233.3 Hz   -->  0.27 Hz/cm^2
    nDigis in module 2  counter 1:   10079  --> hit rate:   629.9 Hz   -->  0.73 Hz/cm^2
    nDigis in module 2  counter 2:    6254  --> hit rate:   390.9 Hz   -->  0.45 Hz/cm^2
    nDigis in module 2  counter 3:    3587  --> hit rate:   224.2 Hz   -->  0.26 Hz/cm^2
    nDigis in module 3  counter 1:    1743  --> hit rate:   108.9 Hz   -->  0.13 Hz/cm^2
    nDigis in module 3  counter 2:    4384  --> hit rate:   274.0 Hz   -->  0.32 Hz/cm^2
    nDigis in module 3  counter 3:    1742  --> hit rate:   108.9 Hz   -->  0.13 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 3499
    maximum # of digis per channel on module 1, counter 2: 466
    maximum # of digis per channel on module 1, counter 3: 264
    maximum # of digis per channel on module 2, counter 1: 939
    maximum # of digis per channel on module 2, counter 2: 1142
    maximum # of digis per channel on module 2, counter 3: 245
    maximum # of digis per channel on module 3, counter 1: 226
    maximum # of digis per channel on module 3, counter 2: 1157
    maximum # of digis per channel on module 3, counter 3: 204
     *** -------------------- ***

     *** hit rate calculation in run 19104039 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  189036  --> hit rate: 11814.8 Hz   --> 13.67 Hz/cm^2
    nDigis in module 1  counter 2:   12274  --> hit rate:   767.1 Hz   -->  0.89 Hz/cm^2
    nDigis in module 1  counter 3:    3086  --> hit rate:   192.9 Hz   -->  0.22 Hz/cm^2
    nDigis in module 2  counter 1:    9103  --> hit rate:   568.9 Hz   -->  0.66 Hz/cm^2
    nDigis in module 2  counter 2:    4701  --> hit rate:   293.8 Hz   -->  0.34 Hz/cm^2
    nDigis in module 2  counter 3:    2461  --> hit rate:   153.8 Hz   -->  0.18 Hz/cm^2
    nDigis in module 3  counter 1:    1844  --> hit rate:   115.2 Hz   -->  0.13 Hz/cm^2
    nDigis in module 3  counter 2:    5142  --> hit rate:   321.4 Hz   -->  0.37 Hz/cm^2
    nDigis in module 3  counter 3:    1655  --> hit rate:   103.4 Hz   -->  0.12 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 4980
    maximum # of digis per channel on module 1, counter 2: 346
    maximum # of digis per channel on module 1, counter 3: 224
    maximum # of digis per channel on module 2, counter 1: 804
    maximum # of digis per channel on module 2, counter 2: 1062
    maximum # of digis per channel on module 2, counter 3: 202
    maximum # of digis per channel on module 3, counter 1: 303
    maximum # of digis per channel on module 3, counter 2: 1263
    maximum # of digis per channel on module 3, counter 3: 169
     *** -------------------- ***

     *** hit rate calculation in run 19108021 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  265009  --> hit rate: 16563.1 Hz   --> 19.17 Hz/cm^2
    nDigis in module 1  counter 2:   19512  --> hit rate:  1219.5 Hz   -->  1.41 Hz/cm^2
    nDigis in module 1  counter 3:    3450  --> hit rate:   215.6 Hz   -->  0.25 Hz/cm^2
    nDigis in module 2  counter 1:   16140  --> hit rate:  1008.8 Hz   -->  1.17 Hz/cm^2
    nDigis in module 2  counter 2:    6149  --> hit rate:   384.3 Hz   -->  0.44 Hz/cm^2
    nDigis in module 2  counter 3:    2820  --> hit rate:   176.2 Hz   -->  0.20 Hz/cm^2
    nDigis in module 3  counter 1:    2603  --> hit rate:   162.7 Hz   -->  0.19 Hz/cm^2
    nDigis in module 3  counter 2:    5183  --> hit rate:   323.9 Hz   -->  0.37 Hz/cm^2
    nDigis in module 3  counter 3:    2362  --> hit rate:   147.6 Hz   -->  0.17 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 6583
    maximum # of digis per channel on module 1, counter 2: 653
    maximum # of digis per channel on module 1, counter 3: 234
    maximum # of digis per channel on module 2, counter 1: 1188
    maximum # of digis per channel on module 2, counter 2: 1521
    maximum # of digis per channel on module 2, counter 3: 220
    maximum # of digis per channel on module 3, counter 1: 232
    maximum # of digis per channel on module 3, counter 2: 1303
    maximum # of digis per channel on module 3, counter 3: 267
     *** -------------------- ***

     *** hit rate calculation in run 19114034 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  452621  --> hit rate: 28288.8 Hz   --> 32.74 Hz/cm^2
    nDigis in module 1  counter 2:   55584  --> hit rate:  3474.0 Hz   -->  4.02 Hz/cm^2
    nDigis in module 1  counter 3:    3544  --> hit rate:   221.5 Hz   -->  0.26 Hz/cm^2
    nDigis in module 2  counter 1:   12213  --> hit rate:   763.3 Hz   -->  0.88 Hz/cm^2
    nDigis in module 2  counter 2:    2721  --> hit rate:   170.1 Hz   -->  0.20 Hz/cm^2
    nDigis in module 2  counter 3:    1857  --> hit rate:   116.1 Hz   -->  0.13 Hz/cm^2
    nDigis in module 3  counter 1:    2012  --> hit rate:   125.8 Hz   -->  0.15 Hz/cm^2
    nDigis in module 3  counter 2:    3282  --> hit rate:   205.1 Hz   -->  0.24 Hz/cm^2
    nDigis in module 3  counter 3:    1666  --> hit rate:   104.1 Hz   -->  0.12 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 10111
    maximum # of digis per channel on module 1, counter 2: 1827
    maximum # of digis per channel on module 1, counter 3: 242
    maximum # of digis per channel on module 2, counter 1: 1019
    maximum # of digis per channel on module 2, counter 2: 414
    maximum # of digis per channel on module 2, counter 3: 165
    maximum # of digis per channel on module 3, counter 1: 117
    maximum # of digis per channel on module 3, counter 2: 628
    maximum # of digis per channel on module 3, counter 3: 113
     *** -------------------- ***

     *** hit rate calculation in run 19115023 ***
    nEvents: 3999779   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  488806  --> hit rate: 30552.1 Hz   --> 35.36 Hz/cm^2
    nDigis in module 1  counter 2:   67284  --> hit rate:  4205.5 Hz   -->  4.87 Hz/cm^2
    nDigis in module 1  counter 3:    3660  --> hit rate:   228.8 Hz   -->  0.26 Hz/cm^2
    nDigis in module 2  counter 1:   12860  --> hit rate:   803.8 Hz   -->  0.93 Hz/cm^2
    nDigis in module 2  counter 2:    3097  --> hit rate:   193.6 Hz   -->  0.22 Hz/cm^2
    nDigis in module 2  counter 3:    2075  --> hit rate:   129.7 Hz   -->  0.15 Hz/cm^2
    nDigis in module 3  counter 1:    1908  --> hit rate:   119.3 Hz   -->  0.14 Hz/cm^2
    nDigis in module 3  counter 2:    3872  --> hit rate:   242.0 Hz   -->  0.28 Hz/cm^2
    nDigis in module 3  counter 3:    1548  --> hit rate:    96.8 Hz   -->  0.11 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 10780
    maximum # of digis per channel on module 1, counter 2: 2163
    maximum # of digis per channel on module 1, counter 3: 264
    maximum # of digis per channel on module 2, counter 1: 1136
    maximum # of digis per channel on module 2, counter 2: 463
    maximum # of digis per channel on module 2, counter 3: 176
    maximum # of digis per channel on module 3, counter 1: 141
    maximum # of digis per channel on module 3, counter 2: 1004
    maximum # of digis per channel on module 3, counter 3: 139
     *** -------------------- ***

     *** hit rate calculation in run 19117020 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  531598  --> hit rate: 33224.9 Hz   --> 38.45 Hz/cm^2
    nDigis in module 1  counter 2:   83451  --> hit rate:  5215.7 Hz   -->  6.04 Hz/cm^2
    nDigis in module 1  counter 3:    3842  --> hit rate:   240.1 Hz   -->  0.28 Hz/cm^2
    nDigis in module 2  counter 1:   15883  --> hit rate:   992.7 Hz   -->  1.15 Hz/cm^2
    nDigis in module 2  counter 2:    4416  --> hit rate:   276.0 Hz   -->  0.32 Hz/cm^2
    nDigis in module 2  counter 3:    3101  --> hit rate:   193.8 Hz   -->  0.22 Hz/cm^2
    nDigis in module 3  counter 1:    2212  --> hit rate:   138.2 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 2:    3561  --> hit rate:   222.6 Hz   -->  0.26 Hz/cm^2
    nDigis in module 3  counter 3:    2102  --> hit rate:   131.4 Hz   -->  0.15 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 11326
    maximum # of digis per channel on module 1, counter 2: 2559
    maximum # of digis per channel on module 1, counter 3: 199
    maximum # of digis per channel on module 2, counter 1: 1285
    maximum # of digis per channel on module 2, counter 2: 575
    maximum # of digis per channel on module 2, counter 3: 290
    maximum # of digis per channel on module 3, counter 1: 123
    maximum # of digis per channel on module 3, counter 2: 852
    maximum # of digis per channel on module 3, counter 3: 194
     *** -------------------- ***

     *** hit rate calculation in run 19118039 ***
    nEvents: 3999559   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  555879  --> hit rate: 34746.3 Hz   --> 40.22 Hz/cm^2
    nDigis in module 1  counter 2:   94864  --> hit rate:  5929.7 Hz   -->  6.86 Hz/cm^2
    nDigis in module 1  counter 3:    4049  --> hit rate:   253.1 Hz   -->  0.29 Hz/cm^2
    nDigis in module 2  counter 1:   16824  --> hit rate:  1051.6 Hz   -->  1.22 Hz/cm^2
    nDigis in module 2  counter 2:    4131  --> hit rate:   258.2 Hz   -->  0.30 Hz/cm^2
    nDigis in module 2  counter 3:    2256  --> hit rate:   141.0 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:    2304  --> hit rate:   144.0 Hz   -->  0.17 Hz/cm^2
    nDigis in module 3  counter 2:    3215  --> hit rate:   201.0 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 3:    1759  --> hit rate:   109.9 Hz   -->  0.13 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 11723
    maximum # of digis per channel on module 1, counter 2: 2883
    maximum # of digis per channel on module 1, counter 3: 226
    maximum # of digis per channel on module 2, counter 1: 1414
    maximum # of digis per channel on module 2, counter 2: 691
    maximum # of digis per channel on module 2, counter 3: 202
    maximum # of digis per channel on module 3, counter 1: 228
    maximum # of digis per channel on module 3, counter 2: 890
    maximum # of digis per channel on module 3, counter 3: 199
     *** -------------------- ***

     *** hit rate calculation in run 19119029 ***
    nEvents: 3642556   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  514400  --> hit rate: 35304.9 Hz   --> 40.86 Hz/cm^2
    nDigis in module 1  counter 2:   93811  --> hit rate:  6438.5 Hz   -->  7.45 Hz/cm^2
    nDigis in module 1  counter 3:    3874  --> hit rate:   265.9 Hz   -->  0.31 Hz/cm^2
    nDigis in module 2  counter 1:   16487  --> hit rate:  1131.6 Hz   -->  1.31 Hz/cm^2
    nDigis in module 2  counter 2:    4109  --> hit rate:   282.0 Hz   -->  0.33 Hz/cm^2
    nDigis in module 2  counter 3:    2042  --> hit rate:   140.1 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:    2046  --> hit rate:   140.4 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 2:    3205  --> hit rate:   220.0 Hz   -->  0.25 Hz/cm^2
    nDigis in module 3  counter 3:    1783  --> hit rate:   122.4 Hz   -->  0.14 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 10799
    maximum # of digis per channel on module 1, counter 2: 2783
    maximum # of digis per channel on module 1, counter 3: 222
    maximum # of digis per channel on module 2, counter 1: 1378
    maximum # of digis per channel on module 2, counter 2: 555
    maximum # of digis per channel on module 2, counter 3: 184
    maximum # of digis per channel on module 3, counter 1: 201
    maximum # of digis per channel on module 3, counter 2: 998
    maximum # of digis per channel on module 3, counter 3: 232
     *** -------------------- ***

     *** hit rate calculation in run 19120034 ***
    nEvents: 3999779   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  585449  --> hit rate: 36592.6 Hz   --> 42.35 Hz/cm^2
    nDigis in module 1  counter 2:  115427  --> hit rate:  7214.6 Hz   -->  8.35 Hz/cm^2
    nDigis in module 1  counter 3:    4177  --> hit rate:   261.1 Hz   -->  0.30 Hz/cm^2
    nDigis in module 2  counter 1:   19746  --> hit rate:  1234.2 Hz   -->  1.43 Hz/cm^2
    nDigis in module 2  counter 2:    3536  --> hit rate:   221.0 Hz   -->  0.26 Hz/cm^2
    nDigis in module 2  counter 3:    2082  --> hit rate:   130.1 Hz   -->  0.15 Hz/cm^2
    nDigis in module 3  counter 1:    3219  --> hit rate:   201.2 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 2:    3627  --> hit rate:   226.7 Hz   -->  0.26 Hz/cm^2
    nDigis in module 3  counter 3:    1603  --> hit rate:   100.2 Hz   -->  0.12 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 12126
    maximum # of digis per channel on module 1, counter 2: 3324
    maximum # of digis per channel on module 1, counter 3: 224
    maximum # of digis per channel on module 2, counter 1: 1703
    maximum # of digis per channel on module 2, counter 2: 579
    maximum # of digis per channel on module 2, counter 3: 207
    maximum # of digis per channel on module 3, counter 1: 200
    maximum # of digis per channel on module 3, counter 2: 1026
    maximum # of digis per channel on module 3, counter 3: 185
     *** -------------------- ***

    *** hit rate calculation in run 19121022 ***
    nEvents: 3999771   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   22323  --> hit rate:  1395.3 Hz   -->  1.61 Hz/cm^2
    nDigis in module 2  counter 2:    4063  --> hit rate:   254.0 Hz   -->  0.29 Hz/cm^2
    nDigis in module 2  counter 3:    2871  --> hit rate:   179.4 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 1:    2680  --> hit rate:   167.5 Hz   -->  0.19 Hz/cm^2
    nDigis in module 3  counter 2:    3384  --> hit rate:   211.5 Hz   -->  0.24 Hz/cm^2
    nDigis in module 3  counter 3:    1696  --> hit rate:   106.0 Hz   -->  0.12 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 1549
    maximum # of digis per channel on module 2, counter 2: 587
    maximum # of digis per channel on module 2, counter 3: 224
    maximum # of digis per channel on module 3, counter 1: 132
    maximum # of digis per channel on module 3, counter 2: 769
    maximum # of digis per channel on module 3, counter 3: 173
     *** -------------------- ***

     

     

    *** hit rate calculation in run 19122022 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   25857  --> hit rate:  1616.1 Hz   -->  1.87 Hz/cm^2
    nDigis in module 2  counter 2:    4476  --> hit rate:   279.8 Hz   -->  0.32 Hz/cm^2
    nDigis in module 2  counter 3:    2838  --> hit rate:   177.4 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 1:    4399  --> hit rate:   274.9 Hz   -->  0.32 Hz/cm^2
    nDigis in module 3  counter 2:    4980  --> hit rate:   311.2 Hz   -->  0.36 Hz/cm^2
    nDigis in module 3  counter 3:    2319  --> hit rate:   144.9 Hz   -->  0.17 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 1722
    maximum # of digis per channel on module 2, counter 2: 589
    maximum # of digis per channel on module 2, counter 3: 234
    maximum # of digis per channel on module 3, counter 1: 239
    maximum # of digis per channel on module 3, counter 2: 1259
    maximum # of digis per channel on module 3, counter 3: 197
     *** -------------------- ***

     *** hit rate calculation in run 19123019 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   26110  --> hit rate:  1631.9 Hz   -->  1.89 Hz/cm^2
    nDigis in module 2  counter 2:    4395  --> hit rate:   274.7 Hz   -->  0.32 Hz/cm^2
    nDigis in module 2  counter 3:    2553  --> hit rate:   159.6 Hz   -->  0.18 Hz/cm^2
    nDigis in module 3  counter 1:    2767  --> hit rate:   172.9 Hz   -->  0.20 Hz/cm^2
    nDigis in module 3  counter 2:    3346  --> hit rate:   209.1 Hz   -->  0.24 Hz/cm^2
    nDigis in module 3  counter 3:    1912  --> hit rate:   119.5 Hz   -->  0.14 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 1695
    maximum # of digis per channel on module 2, counter 2: 564
    maximum # of digis per channel on module 2, counter 3: 254
    maximum # of digis per channel on module 3, counter 1: 190
    maximum # of digis per channel on module 3, counter 2: 895
    maximum # of digis per channel on module 3, counter 3: 240
     *** -------------------- ***

      *** hit rate calculation in run 19125025 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   30556  --> hit rate:  1909.8 Hz   -->  2.21 Hz/cm^2
    nDigis in module 2  counter 2:    3918  --> hit rate:   244.9 Hz   -->  0.28 Hz/cm^2
    nDigis in module 2  counter 3:    3171  --> hit rate:   198.2 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 1:    3522  --> hit rate:   220.1 Hz   -->  0.25 Hz/cm^2
    nDigis in module 3  counter 2:    3153  --> hit rate:   197.1 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 3:    1414  --> hit rate:    88.4 Hz   -->  0.10 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 2136
    maximum # of digis per channel on module 2, counter 2: 644
    maximum # of digis per channel on module 2, counter 3: 298
    maximum # of digis per channel on module 3, counter 1: 211
    maximum # of digis per channel on module 3, counter 2: 885
    maximum # of digis per channel on module 3, counter 3: 170
     *** -------------------- ***

    *** hit rate calculation in run 19126022 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   33622  --> hit rate:  2101.4 Hz   -->  2.43 Hz/cm^2
    nDigis in module 2  counter 2:    3798  --> hit rate:   237.4 Hz   -->  0.27 Hz/cm^2
    nDigis in module 2  counter 3:    2353  --> hit rate:   147.1 Hz   -->  0.17 Hz/cm^2
    nDigis in module 3  counter 1:    4056  --> hit rate:   253.5 Hz   -->  0.29 Hz/cm^2
    nDigis in module 3  counter 2:    4298  --> hit rate:   268.6 Hz   -->  0.31 Hz/cm^2
    nDigis in module 3  counter 3:    1879  --> hit rate:   117.4 Hz   -->  0.14 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 2078
    maximum # of digis per channel on module 2, counter 2: 586
    maximum # of digis per channel on module 2, counter 3: 248
    maximum # of digis per channel on module 3, counter 1: 291
    maximum # of digis per channel on module 3, counter 2: 1136
    maximum # of digis per channel on module 3, counter 3: 200
     *** -------------------- ***

     *** hit rate calculation in run 19127023 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   38835  --> hit rate:  2427.2 Hz   -->  2.81 Hz/cm^2
    nDigis in module 2  counter 2:    4135  --> hit rate:   258.4 Hz   -->  0.30 Hz/cm^2
    nDigis in module 2  counter 3:    2321  --> hit rate:   145.1 Hz   -->  0.17 Hz/cm^2
    nDigis in module 3  counter 1:    4467  --> hit rate:   279.2 Hz   -->  0.32 Hz/cm^2
    nDigis in module 3  counter 2:    4093  --> hit rate:   255.8 Hz   -->  0.30 Hz/cm^2
    nDigis in module 3  counter 3:    1828  --> hit rate:   114.2 Hz   -->  0.13 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 2176
    maximum # of digis per channel on module 2, counter 2: 638
    maximum # of digis per channel on module 2, counter 3: 186
    maximum # of digis per channel on module 3, counter 1: 265
    maximum # of digis per channel on module 3, counter 2: 1187
    maximum # of digis per channel on module 3, counter 3: 237
     *** -------------------- ***

     *** hit rate calculation in run 19128020 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   39448  --> hit rate:  2465.5 Hz   -->  2.85 Hz/cm^2
    nDigis in module 2  counter 2:    3395  --> hit rate:   212.2 Hz   -->  0.25 Hz/cm^2
    nDigis in module 2  counter 3:    2063  --> hit rate:   128.9 Hz   -->  0.15 Hz/cm^2
    nDigis in module 3  counter 1:    5034  --> hit rate:   314.6 Hz   -->  0.36 Hz/cm^2
    nDigis in module 3  counter 2:    2781  --> hit rate:   173.8 Hz   -->  0.20 Hz/cm^2
    nDigis in module 3  counter 3:    1218  --> hit rate:    76.1 Hz   -->  0.09 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 1934
    maximum # of digis per channel on module 2, counter 2: 534
    maximum # of digis per channel on module 2, counter 3: 176
    maximum # of digis per channel on module 3, counter 1: 270
    maximum # of digis per channel on module 3, counter 2: 635
    maximum # of digis per channel on module 3, counter 3: 144
     *** -------------------- ***

     *** hit rate calculation in run 19129017 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 3:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 1:   43510  --> hit rate:  2719.4 Hz   -->  3.15 Hz/cm^2
    nDigis in module 2  counter 2:    3534  --> hit rate:   220.9 Hz   -->  0.26 Hz/cm^2
    nDigis in module 2  counter 3:    2226  --> hit rate:   139.1 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:    5857  --> hit rate:   366.1 Hz   -->  0.42 Hz/cm^2
    nDigis in module 3  counter 2:    3546  --> hit rate:   221.6 Hz   -->  0.26 Hz/cm^2
    nDigis in module 3  counter 3:    1516  --> hit rate:    94.8 Hz   -->  0.11 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 0
    maximum # of digis per channel on module 1, counter 3: 0
    maximum # of digis per channel on module 2, counter 1: 1841
    maximum # of digis per channel on module 2, counter 2: 554
    maximum # of digis per channel on module 2, counter 3: 150
    maximum # of digis per channel on module 3, counter 1: 318
    maximum # of digis per channel on module 3, counter 2: 905
    maximum # of digis per channel on module 3, counter 3: 184
     *** -------------------- ***

     

    *** hit rate calculation in run 19134031 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  574601  --> hit rate: 35912.6 Hz   --> 41.57 Hz/cm^2
    nDigis in module 1  counter 2:  136583  --> hit rate:  8536.4 Hz   -->  9.88 Hz/cm^2
    nDigis in module 1  counter 3:    5141  --> hit rate:   321.3 Hz   -->  0.37 Hz/cm^2
    nDigis in module 2  counter 1:   48664  --> hit rate:  3041.5 Hz   -->  3.52 Hz/cm^2
    nDigis in module 2  counter 2:    4353  --> hit rate:   272.1 Hz   -->  0.31 Hz/cm^2
    nDigis in module 2  counter 3:    2272  --> hit rate:   142.0 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:   10416  --> hit rate:   651.0 Hz   -->  0.75 Hz/cm^2
    nDigis in module 3  counter 2:    3250  --> hit rate:   203.1 Hz   -->  0.24 Hz/cm^2
    nDigis in module 3  counter 3:    1483  --> hit rate:    92.7 Hz   -->  0.11 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 11874
    maximum # of digis per channel on module 1, counter 2: 3518
    maximum # of digis per channel on module 1, counter 3: 232
    maximum # of digis per channel on module 2, counter 1: 1771
    maximum # of digis per channel on module 2, counter 2: 635
    maximum # of digis per channel on module 2, counter 3: 212
    maximum # of digis per channel on module 3, counter 1: 524
    maximum # of digis per channel on module 3, counter 2: 510
    maximum # of digis per channel on module 3, counter 3: 151
     *** -------------------- ***

     *** hit rate calculation in run 19136037 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  560834  --> hit rate: 35052.1 Hz   --> 40.57 Hz/cm^2
    nDigis in module 1  counter 2:  138879  --> hit rate:  8679.9 Hz   --> 10.05 Hz/cm^2
    nDigis in module 1  counter 3:    4949  --> hit rate:   309.3 Hz   -->  0.36 Hz/cm^2
    nDigis in module 2  counter 1:   41385  --> hit rate:  2586.6 Hz   -->  2.99 Hz/cm^2
    nDigis in module 2  counter 2:    3046  --> hit rate:   190.4 Hz   -->  0.22 Hz/cm^2
    nDigis in module 2  counter 3:    1485  --> hit rate:    92.8 Hz   -->  0.11 Hz/cm^2
    nDigis in module 3  counter 1:   10506  --> hit rate:   656.6 Hz   -->  0.76 Hz/cm^2
    nDigis in module 3  counter 2:    2950  --> hit rate:   184.4 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 3:    1952  --> hit rate:   122.0 Hz   -->  0.14 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 11905
    maximum # of digis per channel on module 1, counter 2: 3510
    maximum # of digis per channel on module 1, counter 3: 236
    maximum # of digis per channel on module 2, counter 1: 1620
    maximum # of digis per channel on module 2, counter 2: 586
    maximum # of digis per channel on module 2, counter 3: 160
    maximum # of digis per channel on module 3, counter 1: 480
    maximum # of digis per channel on module 3, counter 2: 272
    maximum # of digis per channel on module 3, counter 3: 171
     *** -------------------- ***

     *** hit rate calculation in run 19142042 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  643641  --> hit rate: 40227.6 Hz   --> 46.56 Hz/cm^2
    nDigis in module 1  counter 2:  169841  --> hit rate: 10615.1 Hz   --> 12.29 Hz/cm^2
    nDigis in module 1  counter 3:    6325  --> hit rate:   395.3 Hz   -->  0.46 Hz/cm^2
    nDigis in module 2  counter 1:   49718  --> hit rate:  3107.4 Hz   -->  3.60 Hz/cm^2
    nDigis in module 2  counter 2:    4041  --> hit rate:   252.6 Hz   -->  0.29 Hz/cm^2
    nDigis in module 2  counter 3:    1831  --> hit rate:   114.4 Hz   -->  0.13 Hz/cm^2
    nDigis in module 3  counter 1:   12986  --> hit rate:   811.6 Hz   -->  0.94 Hz/cm^2
    nDigis in module 3  counter 2:    3068  --> hit rate:   191.8 Hz   -->  0.22 Hz/cm^2
    nDigis in module 3  counter 3:    1836  --> hit rate:   114.8 Hz   -->  0.13 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 13127
    maximum # of digis per channel on module 1, counter 2: 4237
    maximum # of digis per channel on module 1, counter 3: 264
    maximum # of digis per channel on module 2, counter 1: 1333
    maximum # of digis per channel on module 2, counter 2: 530
    maximum # of digis per channel on module 2, counter 3: 166
    maximum # of digis per channel on module 3, counter 1: 583
    maximum # of digis per channel on module 3, counter 2: 391
    maximum # of digis per channel on module 3, counter 3: 209
     *** -------------------- ***

    *** hit rate calculation in run 19145045 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  747899  --> hit rate: 46743.7 Hz   --> 54.10 Hz/cm^2
    nDigis in module 1  counter 2:  209294  --> hit rate: 13080.9 Hz   --> 15.14 Hz/cm^2
    nDigis in module 1  counter 3:    7250  --> hit rate:   453.1 Hz   -->  0.52 Hz/cm^2
    nDigis in module 2  counter 1:   59897  --> hit rate:  3743.6 Hz   -->  4.33 Hz/cm^2
    nDigis in module 2  counter 2:    5773  --> hit rate:   360.8 Hz   -->  0.42 Hz/cm^2
    nDigis in module 2  counter 3:    2430  --> hit rate:   151.9 Hz   -->  0.18 Hz/cm^2
    nDigis in module 3  counter 1:   16710  --> hit rate:  1044.4 Hz   -->  1.21 Hz/cm^2
    nDigis in module 3  counter 2:    2872  --> hit rate:   179.5 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 3:    1523  --> hit rate:    95.2 Hz   -->  0.11 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 14937
    maximum # of digis per channel on module 1, counter 2: 5028
    maximum # of digis per channel on module 1, counter 3: 223
    maximum # of digis per channel on module 2, counter 1: 1622
    maximum # of digis per channel on module 2, counter 2: 634
    maximum # of digis per channel on module 2, counter 3: 252
    maximum # of digis per channel on module 3, counter 1: 647
    maximum # of digis per channel on module 3, counter 2: 274
    maximum # of digis per channel on module 3, counter 3: 154
     *** -------------------- ***

     *** hit rate calculation in run 19146031 ***  (taken ~ 1h after magnet ramp & turning on the FEEs)
    nEvents: 3371908   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  417419  --> hit rate: 30948.3 Hz   --> 35.82 Hz/cm^2
    nDigis in module 1  counter 2:  124179  --> hit rate:  9206.9 Hz   --> 10.66 Hz/cm^2
    nDigis in module 1  counter 3:    5074  --> hit rate:   376.2 Hz   -->  0.44 Hz/cm^2
    nDigis in module 2  counter 1:   31790  --> hit rate:  2357.0 Hz   -->  2.73 Hz/cm^2
    nDigis in module 2  counter 2:    2815  --> hit rate:   208.7 Hz   -->  0.24 Hz/cm^2
    nDigis in module 2  counter 3:    1577  --> hit rate:   116.9 Hz   -->  0.14 Hz/cm^2
    nDigis in module 3  counter 1:    9053  --> hit rate:   671.2 Hz   -->  0.78 Hz/cm^2
    nDigis in module 3  counter 2:    1427  --> hit rate:   105.8 Hz   -->  0.12 Hz/cm^2
    nDigis in module 3  counter 3:     965  --> hit rate:    71.5 Hz   -->  0.08 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 8201
    maximum # of digis per channel on module 1, counter 2: 2966
    maximum # of digis per channel on module 1, counter 3: 142
    maximum # of digis per channel on module 2, counter 1: 893
    maximum # of digis per channel on module 2, counter 2: 457
    maximum # of digis per channel on module 2, counter 3: 137
    maximum # of digis per channel on module 3, counter 1: 339
    maximum # of digis per channel on module 3, counter 2: 124
    maximum # of digis per channel on module 3, counter 3: 96
     *** -------------------- ***

     *** hit rate calculation in run 19148025 ***
    nEvents: 3999563   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  787452  --> hit rate: 49221.1 Hz   --> 56.97 Hz/cm^2
    nDigis in module 1  counter 2:  227901  --> hit rate: 14245.4 Hz   --> 16.49 Hz/cm^2
    nDigis in module 1  counter 3:    7244  --> hit rate:   452.8 Hz   -->  0.52 Hz/cm^2
    nDigis in module 2  counter 1:   70339  --> hit rate:  4396.7 Hz   -->  5.09 Hz/cm^2
    nDigis in module 2  counter 2:    4964  --> hit rate:   310.3 Hz   -->  0.36 Hz/cm^2
    nDigis in module 2  counter 3:    2232  --> hit rate:   139.5 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:   18329  --> hit rate:  1145.7 Hz   -->  1.33 Hz/cm^2
    nDigis in module 3  counter 2:    2796  --> hit rate:   174.8 Hz   -->  0.20 Hz/cm^2
    nDigis in module 3  counter 3:    1436  --> hit rate:    89.8 Hz   -->  0.10 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 15761
    maximum # of digis per channel on module 1, counter 2: 5445
    maximum # of digis per channel on module 1, counter 3: 225
    maximum # of digis per channel on module 2, counter 1: 1942
    maximum # of digis per channel on module 2, counter 2: 729
    maximum # of digis per channel on module 2, counter 3: 178
    maximum # of digis per channel on module 3, counter 1: 719
    maximum # of digis per channel on module 3, counter 2: 333
    maximum # of digis per channel on module 3, counter 3: 158
     *** -------------------- ***

    *** hit rate calculation in run 19149027 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  816337  --> hit rate: 51021.1 Hz   --> 59.05 Hz/cm^2
    nDigis in module 1  counter 2:  242103  --> hit rate: 15131.4 Hz   --> 17.51 Hz/cm^2
    nDigis in module 1  counter 3:    8201  --> hit rate:   512.6 Hz   -->  0.59 Hz/cm^2
    nDigis in module 2  counter 1:   75377  --> hit rate:  4711.1 Hz   -->  5.45 Hz/cm^2
    nDigis in module 2  counter 2:    6241  --> hit rate:   390.1 Hz   -->  0.45 Hz/cm^2
    nDigis in module 2  counter 3:    2566  --> hit rate:   160.4 Hz   -->  0.19 Hz/cm^2
    nDigis in module 3  counter 1:   20347  --> hit rate:  1271.7 Hz   -->  1.47 Hz/cm^2
    nDigis in module 3  counter 2:    3215  --> hit rate:   200.9 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 3:    2205  --> hit rate:   137.8 Hz   -->  0.16 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 16386
    maximum # of digis per channel on module 1, counter 2: 5734
    maximum # of digis per channel on module 1, counter 3: 279
    maximum # of digis per channel on module 2, counter 1: 2021
    maximum # of digis per channel on module 2, counter 2: 744
    maximum # of digis per channel on module 2, counter 3: 244
    maximum # of digis per channel on module 3, counter 1: 814
    maximum # of digis per channel on module 3, counter 2: 323
    maximum # of digis per channel on module 3, counter 3: 202
     *** -------------------- ***

     *** hit rate calculation in run 19150051 *** (taken shortly after magnet ramping and turning on the FEEs)
    nEvents: 3999491   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  334273  --> hit rate: 20894.7 Hz   --> 24.18 Hz/cm^2
    nDigis in module 1  counter 2:  119997  --> hit rate:  7500.8 Hz   -->  8.68 Hz/cm^2
    nDigis in module 1  counter 3:    3839  --> hit rate:   240.0 Hz   -->  0.28 Hz/cm^2
    nDigis in module 2  counter 1:   28227  --> hit rate:  1764.4 Hz   -->  2.04 Hz/cm^2
    nDigis in module 2  counter 2:    3364  --> hit rate:   210.3 Hz   -->  0.24 Hz/cm^2
    nDigis in module 2  counter 3:    1367  --> hit rate:    85.4 Hz   -->  0.10 Hz/cm^2
    nDigis in module 3  counter 1:    7078  --> hit rate:   442.4 Hz   -->  0.51 Hz/cm^2
    nDigis in module 3  counter 2:    1369  --> hit rate:    85.6 Hz   -->  0.10 Hz/cm^2
    nDigis in module 3  counter 3:    1039  --> hit rate:    64.9 Hz   -->  0.08 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 6742
    maximum # of digis per channel on module 1, counter 2: 2804
    maximum # of digis per channel on module 1, counter 3: 137
    maximum # of digis per channel on module 2, counter 1: 765
    maximum # of digis per channel on module 2, counter 2: 532
    maximum # of digis per channel on module 2, counter 3: 107
    maximum # of digis per channel on module 3, counter 1: 267
    maximum # of digis per channel on module 3, counter 2: 94
    maximum # of digis per channel on module 3, counter 3: 89
     *** -------------------- ***

     *** hit rate calculation in run 19150054 *** (taken ~1h after magnet ramping and turning on the FEEs)
    nEvents: 3998959   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 1  counter 2:  129200  --> hit rate:  8077.1 Hz   -->  9.35 Hz/cm^2
    nDigis in module 1  counter 3:    5651  --> hit rate:   353.3 Hz   -->  0.41 Hz/cm^2
    nDigis in module 2  counter 1:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in module 2  counter 2:    4286  --> hit rate:   267.9 Hz   -->  0.31 Hz/cm^2
    nDigis in module 2  counter 3:    1575  --> hit rate:    98.5 Hz   -->  0.11 Hz/cm^2
    nDigis in module 3  counter 1:    9731  --> hit rate:   608.3 Hz   -->  0.70 Hz/cm^2
    nDigis in module 3  counter 2:    1527  --> hit rate:    95.5 Hz   -->  0.11 Hz/cm^2
    nDigis in module 3  counter 3:     987  --> hit rate:    61.7 Hz   -->  0.07 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 0
    maximum # of digis per channel on module 1, counter 2: 3039
    maximum # of digis per channel on module 1, counter 3: 187
    maximum # of digis per channel on module 2, counter 1: 0
    maximum # of digis per channel on module 2, counter 2: 736
    maximum # of digis per channel on module 2, counter 3: 159
    maximum # of digis per channel on module 3, counter 1: 334
    maximum # of digis per channel on module 3, counter 2: 95
    maximum # of digis per channel on module 3, counter 3: 80
     *** -------------------- ***

     *** hit rate calculation in run 19150061 ***
    nEvents: 3999780   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  725270  --> hit rate: 45331.9 Hz   --> 52.47 Hz/cm^2
    nDigis in module 1  counter 2:  212230  --> hit rate: 13265.1 Hz   --> 15.35 Hz/cm^2
    nDigis in module 1  counter 3:    7863  --> hit rate:   491.5 Hz   -->  0.57 Hz/cm^2
    nDigis in module 2  counter 1:   64288  --> hit rate:  4018.2 Hz   -->  4.65 Hz/cm^2
    nDigis in module 2  counter 2:    6724  --> hit rate:   420.3 Hz   -->  0.49 Hz/cm^2
    nDigis in module 2  counter 3:    2214  --> hit rate:   138.4 Hz   -->  0.16 Hz/cm^2
    nDigis in module 3  counter 1:   15576  --> hit rate:   973.6 Hz   -->  1.13 Hz/cm^2
    nDigis in module 3  counter 2:    2875  --> hit rate:   179.7 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 3:    2310  --> hit rate:   144.4 Hz   -->  0.17 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 14851
    maximum # of digis per channel on module 1, counter 2: 5085
    maximum # of digis per channel on module 1, counter 3: 245
    maximum # of digis per channel on module 2, counter 1: 2199
    maximum # of digis per channel on module 2, counter 2: 1242
    maximum # of digis per channel on module 2, counter 3: 190
    maximum # of digis per channel on module 3, counter 1: 607
    maximum # of digis per channel on module 3, counter 2: 201
    maximum # of digis per channel on module 3, counter 3: 222
     *** -------------------- ***

     *** hit rate calculation in run 19151075 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  789939  --> hit rate: 49371.2 Hz   --> 57.14 Hz/cm^2
    nDigis in module 1  counter 2:  237670  --> hit rate: 14854.4 Hz   --> 17.19 Hz/cm^2
    nDigis in module 1  counter 3:    8949  --> hit rate:   559.3 Hz   -->  0.65 Hz/cm^2
    nDigis in module 2  counter 1:   71924  --> hit rate:  4495.3 Hz   -->  5.20 Hz/cm^2
    nDigis in module 2  counter 2:    8468  --> hit rate:   529.3 Hz   -->  0.61 Hz/cm^2
    nDigis in module 2  counter 3:    2128  --> hit rate:   133.0 Hz   -->  0.15 Hz/cm^2
    nDigis in module 3  counter 1:   18881  --> hit rate:  1180.1 Hz   -->  1.37 Hz/cm^2
    nDigis in module 3  counter 2:    3529  --> hit rate:   220.6 Hz   -->  0.26 Hz/cm^2
    nDigis in module 3  counter 3:    2418  --> hit rate:   151.1 Hz   -->  0.17 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 15855
    maximum # of digis per channel on module 1, counter 2: 5721
    maximum # of digis per channel on module 1, counter 3: 271
    maximum # of digis per channel on module 2, counter 1: 1942
    maximum # of digis per channel on module 2, counter 2: 1564
    maximum # of digis per channel on module 2, counter 3: 180
    maximum # of digis per channel on module 3, counter 1: 672
    maximum # of digis per channel on module 3, counter 2: 410
    maximum # of digis per channel on module 3, counter 3: 290
     *** -------------------- ***

    *** hit rate calculation in run 19152061 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  867648  --> hit rate: 54228.0 Hz   --> 62.76 Hz/cm^2
    nDigis in module 1  counter 2:  275183  --> hit rate: 17198.9 Hz   --> 19.91 Hz/cm^2
    nDigis in module 1  counter 3:   10370  --> hit rate:   648.1 Hz   -->  0.75 Hz/cm^2
    nDigis in module 2  counter 1:   84426  --> hit rate:  5276.6 Hz   -->  6.11 Hz/cm^2
    nDigis in module 2  counter 2:    7858  --> hit rate:   491.1 Hz   -->  0.57 Hz/cm^2
    nDigis in module 2  counter 3:    2931  --> hit rate:   183.2 Hz   -->  0.21 Hz/cm^2
    nDigis in module 3  counter 1:   21156  --> hit rate:  1322.3 Hz   -->  1.53 Hz/cm^2
    nDigis in module 3  counter 2:    2739  --> hit rate:   171.2 Hz   -->  0.20 Hz/cm^2
    nDigis in module 3  counter 3:    2056  --> hit rate:   128.5 Hz   -->  0.15 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 17364
    maximum # of digis per channel on module 1, counter 2: 6462
    maximum # of digis per channel on module 1, counter 3: 317
    maximum # of digis per channel on module 2, counter 1: 2235
    maximum # of digis per channel on module 2, counter 2: 919
    maximum # of digis per channel on module 2, counter 3: 216
    maximum # of digis per channel on module 3, counter 1: 777
    maximum # of digis per channel on module 3, counter 2: 469
    maximum # of digis per channel on module 3, counter 3: 242
      *** -------------------- ***

     *** hit rate calculation in run 19152070 ***  (after setting new HV of 5.1kV to modules 2 & 3)
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  860777  --> hit rate: 53798.6 Hz   --> 62.27 Hz/cm^2
    nDigis in module 1  counter 2:  282089  --> hit rate: 17630.6 Hz   --> 20.41 Hz/cm^2
    nDigis in module 1  counter 3:   10870  --> hit rate:   679.4 Hz   -->  0.79 Hz/cm^2
    nDigis in module 2  counter 1: 1480743  --> hit rate: 92546.4 Hz   --> 107.11 Hz/cm^2
    nDigis in module 2  counter 2: 1365412  --> hit rate: 85338.3 Hz   --> 98.77 Hz/cm^2
    nDigis in module 2  counter 3: 1258211  --> hit rate: 78638.2 Hz   --> 91.02 Hz/cm^2
    nDigis in module 3  counter 1:   33697  --> hit rate:  2106.1 Hz   -->  2.44 Hz/cm^2
    nDigis in module 3  counter 2:    6542  --> hit rate:   408.9 Hz   -->  0.47 Hz/cm^2
    nDigis in module 3  counter 3:    4769  --> hit rate:   298.1 Hz   -->  0.34 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 16968
    maximum # of digis per channel on module 1, counter 2: 6517
    maximum # of digis per channel on module 1, counter 3: 340
    maximum # of digis per channel on module 2, counter 1: 474999
    maximum # of digis per channel on module 2, counter 2: 206512
    maximum # of digis per channel on module 2, counter 3: 121543
    maximum # of digis per channel on module 3, counter 1: 1129
    maximum # of digis per channel on module 3, counter 2: 974
    maximum # of digis per channel on module 3, counter 3: 405
     *** -------------------- ***

     *** hit rate calculation in run 19153041 ***
    nEvents: 3604560   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  796802  --> hit rate: 55263.5 Hz   --> 63.96 Hz/cm^2
    nDigis in module 1  counter 2:  253097  --> hit rate: 17553.9 Hz   --> 20.32 Hz/cm^2
    nDigis in module 1  counter 3:    9460  --> hit rate:   656.1 Hz   -->  0.76 Hz/cm^2
    nDigis in module 2  counter 1:  301395  --> hit rate: 20903.7 Hz   --> 24.19 Hz/cm^2
    nDigis in module 2  counter 2:  146922  --> hit rate: 10190.0 Hz   --> 11.79 Hz/cm^2
    nDigis in module 2  counter 3:  155909  --> hit rate: 10813.3 Hz   --> 12.52 Hz/cm^2
    nDigis in module 3  counter 1:   32321  --> hit rate:  2241.7 Hz   -->  2.59 Hz/cm^2
    nDigis in module 3  counter 2:   17015  --> hit rate:  1180.1 Hz   -->  1.37 Hz/cm^2
    nDigis in module 3  counter 3:    6695  --> hit rate:   464.3 Hz   -->  0.54 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 15815
    maximum # of digis per channel on module 1, counter 2: 5994
    maximum # of digis per channel on module 1, counter 3: 288
    maximum # of digis per channel on module 2, counter 1: 24139
    maximum # of digis per channel on module 2, counter 2: 18076
    maximum # of digis per channel on module 2, counter 3: 14176
    maximum # of digis per channel on module 3, counter 1: 1362
    maximum # of digis per channel on module 3, counter 2: 4019
    maximum # of digis per channel on module 3, counter 3: 599
     *** -------------------- ***

     *** hit rate calculation in run 19154035 ***
    nEvents: 3330069   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  732110  --> hit rate: 54962.1 Hz   --> 63.61 Hz/cm^2
    nDigis in module 1  counter 2:  236956  --> hit rate: 17789.1 Hz   --> 20.59 Hz/cm^2
    nDigis in module 1  counter 3:    8787  --> hit rate:   659.7 Hz   -->  0.76 Hz/cm^2
    nDigis in module 2  counter 1:  235908  --> hit rate: 17710.4 Hz   --> 20.50 Hz/cm^2
    nDigis in module 2  counter 2:   59541  --> hit rate:  4470.0 Hz   -->  5.17 Hz/cm^2
    nDigis in module 2  counter 3:   34131  --> hit rate:  2562.3 Hz   -->  2.97 Hz/cm^2
    nDigis in module 3  counter 1:   34169  --> hit rate:  2565.2 Hz   -->  2.97 Hz/cm^2
    nDigis in module 3  counter 2:   16362  --> hit rate:  1228.4 Hz   -->  1.42 Hz/cm^2
    nDigis in module 3  counter 3:    7313  --> hit rate:   549.0 Hz   -->  0.64 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 14616
    maximum # of digis per channel on module 1, counter 2: 5554
    maximum # of digis per channel on module 1, counter 3: 291
    maximum # of digis per channel on module 2, counter 1: 7315
    maximum # of digis per channel on module 2, counter 2: 5080
    maximum # of digis per channel on module 2, counter 3: 3296
    maximum # of digis per channel on module 3, counter 1: 1244
    maximum # of digis per channel on module 3, counter 2: 3339
    maximum # of digis per channel on module 3, counter 3: 535
     *** -------------------- ***

     *** hit rate calculation in run 19156022 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  925327  --> hit rate: 57832.9 Hz   --> 66.94 Hz/cm^2
    nDigis in module 1  counter 2:  323901  --> hit rate: 20243.8 Hz   --> 23.43 Hz/cm^2
    nDigis in module 1  counter 3:   12864  --> hit rate:   804.0 Hz   -->  0.93 Hz/cm^2
    nDigis in module 2  counter 1:  357601  --> hit rate: 22350.1 Hz   --> 25.87 Hz/cm^2
    nDigis in module 2  counter 2:   87331  --> hit rate:  5458.2 Hz   -->  6.32 Hz/cm^2
    nDigis in module 2  counter 3:   74170  --> hit rate:  4635.6 Hz   -->  5.37 Hz/cm^2
    nDigis in module 3  counter 1:   51446  --> hit rate:  3215.4 Hz   -->  3.72 Hz/cm^2
    nDigis in module 3  counter 2:   24373  --> hit rate:  1523.3 Hz   -->  1.76 Hz/cm^2
    nDigis in module 3  counter 3:    9713  --> hit rate:   607.1 Hz   -->  0.70 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 18020
    maximum # of digis per channel on module 1, counter 2: 7515
    maximum # of digis per channel on module 1, counter 3: 416
    maximum # of digis per channel on module 2, counter 1: 10603
    maximum # of digis per channel on module 2, counter 2: 7926
    maximum # of digis per channel on module 2, counter 3: 5280
    maximum # of digis per channel on module 3, counter 1: 1594
    maximum # of digis per channel on module 3, counter 2: 4580
    maximum # of digis per channel on module 3, counter 3: 787
     *** -------------------- ***

     *** hit rate calculation in run 19159029 ***
    nEvents: 3971975   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  866277  --> hit rate: 54524.3 Hz   --> 63.11 Hz/cm^2
    nDigis in module 1  counter 2:  333635  --> hit rate: 20999.3 Hz   --> 24.30 Hz/cm^2
    nDigis in module 1  counter 3:   13363  --> hit rate:   841.1 Hz   -->  0.97 Hz/cm^2
    nDigis in module 2  counter 1:  280877  --> hit rate: 17678.7 Hz   --> 20.46 Hz/cm^2
    nDigis in module 2  counter 2:   52378  --> hit rate:  3296.7 Hz   -->  3.82 Hz/cm^2
    nDigis in module 2  counter 3:   12494  --> hit rate:   786.4 Hz   -->  0.91 Hz/cm^2
    nDigis in module 3  counter 1:   55054  --> hit rate:  3465.2 Hz   -->  4.01 Hz/cm^2
    nDigis in module 3  counter 2:   15670  --> hit rate:   986.3 Hz   -->  1.14 Hz/cm^2
    nDigis in module 3  counter 3:    7360  --> hit rate:   463.2 Hz   -->  0.54 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 16599
    maximum # of digis per channel on module 1, counter 2: 7400
    maximum # of digis per channel on module 1, counter 3: 419
    maximum # of digis per channel on module 2, counter 1: 7420
    maximum # of digis per channel on module 2, counter 2: 5751
    maximum # of digis per channel on module 2, counter 3: 1007
    maximum # of digis per channel on module 3, counter 1: 1675
    maximum # of digis per channel on module 3, counter 2: 3025
    maximum # of digis per channel on module 3, counter 3: 675
     *** -------------------- ***

     *** hit rate calculation in run 19162026 ***
    nEvents: 3999779   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  923000  --> hit rate: 57690.7 Hz   --> 66.77 Hz/cm^2
    nDigis in module 1  counter 2:  375978  --> hit rate: 23499.9 Hz   --> 27.20 Hz/cm^2
    nDigis in module 1  counter 3:   15820  --> hit rate:   988.8 Hz   -->  1.14 Hz/cm^2
    nDigis in module 2  counter 1:  400993  --> hit rate: 25063.4 Hz   --> 29.01 Hz/cm^2
    nDigis in module 2  counter 2:   73455  --> hit rate:  4591.2 Hz   -->  5.31 Hz/cm^2
    nDigis in module 2  counter 3:   18955  --> hit rate:  1184.8 Hz   -->  1.37 Hz/cm^2
    nDigis in module 3  counter 1:   67079  --> hit rate:  4192.7 Hz   -->  4.85 Hz/cm^2
    nDigis in module 3  counter 2:   24095  --> hit rate:  1506.0 Hz   -->  1.74 Hz/cm^2
    nDigis in module 3  counter 3:    9884  --> hit rate:   617.8 Hz   -->  0.72 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 17735
    maximum # of digis per channel on module 1, counter 2: 8384
    maximum # of digis per channel on module 1, counter 3: 474
    maximum # of digis per channel on module 2, counter 1: 10778
    maximum # of digis per channel on module 2, counter 2: 7531
    maximum # of digis per channel on module 2, counter 3: 1563
    maximum # of digis per channel on module 3, counter 1: 1992
    maximum # of digis per channel on module 3, counter 2: 4207
    maximum # of digis per channel on module 3, counter 3: 812
     *** -------------------- ***

     *** hit rate calculation in run 19163037 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  884710  --> hit rate: 55294.4 Hz   --> 64.00 Hz/cm^2
    nDigis in module 1  counter 2:  360476  --> hit rate: 22529.8 Hz   --> 26.08 Hz/cm^2
    nDigis in module 1  counter 3:   15635  --> hit rate:   977.2 Hz   -->  1.13 Hz/cm^2
    nDigis in module 2  counter 1:  408279  --> hit rate: 25517.4 Hz   --> 29.53 Hz/cm^2
    nDigis in module 2  counter 2:   66338  --> hit rate:  4146.1 Hz   -->  4.80 Hz/cm^2
    nDigis in module 2  counter 3:   15662  --> hit rate:   978.9 Hz   -->  1.13 Hz/cm^2
    nDigis in module 3  counter 1:   73938  --> hit rate:  4621.1 Hz   -->  5.35 Hz/cm^2
    nDigis in module 3  counter 2:   30530  --> hit rate:  1908.1 Hz   -->  2.21 Hz/cm^2
    nDigis in module 3  counter 3:   11318  --> hit rate:   707.4 Hz   -->  0.82 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 16978
    maximum # of digis per channel on module 1, counter 2: 7826
    maximum # of digis per channel on module 1, counter 3: 496
    maximum # of digis per channel on module 2, counter 1: 10786
    maximum # of digis per channel on module 2, counter 2: 6538
    maximum # of digis per channel on module 2, counter 3: 1429
    maximum # of digis per channel on module 3, counter 1: 3399
    maximum # of digis per channel on module 3, counter 2: 5044
    maximum # of digis per channel on module 3, counter 3: 1351
     *** -------------------- ***


     *** hit rate calculation in run 19165025 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  971727  --> hit rate: 60732.9 Hz   --> 70.29 Hz/cm^2
    nDigis in module 1  counter 2:  423557  --> hit rate: 26472.3 Hz   --> 30.64 Hz/cm^2
    nDigis in module 1  counter 3:   18996  --> hit rate:  1187.3 Hz   -->  1.37 Hz/cm^2
    nDigis in module 2  counter 1:  504755  --> hit rate: 31547.2 Hz   --> 36.51 Hz/cm^2
    nDigis in module 2  counter 2:  101728  --> hit rate:  6358.0 Hz   -->  7.36 Hz/cm^2
    nDigis in module 2  counter 3:   60859  --> hit rate:  3803.7 Hz   -->  4.40 Hz/cm^2
    nDigis in module 3  counter 1:   91107  --> hit rate:  5694.2 Hz   -->  6.59 Hz/cm^2
    nDigis in module 3  counter 2:   31024  --> hit rate:  1939.0 Hz   -->  2.24 Hz/cm^2
    nDigis in module 3  counter 3:   12809  --> hit rate:   800.6 Hz   -->  0.93 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 18445
    maximum # of digis per channel on module 1, counter 2: 9257
    maximum # of digis per channel on module 1, counter 3: 603
    maximum # of digis per channel on module 2, counter 1: 13316
    maximum # of digis per channel on module 2, counter 2: 9384
    maximum # of digis per channel on module 2, counter 3: 3192
    maximum # of digis per channel on module 3, counter 1: 4461
    maximum # of digis per channel on module 3, counter 2: 4909
    maximum # of digis per channel on module 3, counter 3: 1371
     *** -------------------- ***

     *** hit rate calculation in run 19166018 ***
    nEvents: 3991810   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  949085  --> hit rate: 59439.5 Hz   --> 68.80 Hz/cm^2
    nDigis in module 1  counter 2:  420307  --> hit rate: 26323.1 Hz   --> 30.47 Hz/cm^2
    nDigis in module 1  counter 3:   19167  --> hit rate:  1200.4 Hz   -->  1.39 Hz/cm^2
    nDigis in module 2  counter 1:  534986  --> hit rate: 33505.2 Hz   --> 38.78 Hz/cm^2
    nDigis in module 2  counter 2:   88890  --> hit rate:  5567.0 Hz   -->  6.44 Hz/cm^2
    nDigis in module 2  counter 3:   33328  --> hit rate:  2087.3 Hz   -->  2.42 Hz/cm^2
    nDigis in module 3  counter 1:   89684  --> hit rate:  5616.8 Hz   -->  6.50 Hz/cm^2
    nDigis in module 3  counter 2:   34712  --> hit rate:  2174.0 Hz   -->  2.52 Hz/cm^2
    nDigis in module 3  counter 3:   13517  --> hit rate:   846.5 Hz   -->  0.98 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 18442
    maximum # of digis per channel on module 1, counter 2: 9208
    maximum # of digis per channel on module 1, counter 3: 612
    maximum # of digis per channel on module 2, counter 1: 14136
    maximum # of digis per channel on module 2, counter 2: 8651
    maximum # of digis per channel on module 2, counter 3: 2461
    maximum # of digis per channel on module 3, counter 1: 3109
    maximum # of digis per channel on module 3, counter 2: 4994
    maximum # of digis per channel on module 3, counter 3: 1214
     *** -------------------- ***

     *** hit rate calculation in run 19168023 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  933488  --> hit rate: 58343.0 Hz   --> 67.53 Hz/cm^2
    nDigis in module 1  counter 2:  423383  --> hit rate: 26461.4 Hz   --> 30.63 Hz/cm^2
    nDigis in module 1  counter 3:   19931  --> hit rate:  1245.7 Hz   -->  1.44 Hz/cm^2
    nDigis in module 2  counter 1:  543484  --> hit rate: 33967.8 Hz   --> 39.31 Hz/cm^2
    nDigis in module 2  counter 2:   72364  --> hit rate:  4522.8 Hz   -->  5.23 Hz/cm^2
    nDigis in module 2  counter 3:   18647  --> hit rate:  1165.4 Hz   -->  1.35 Hz/cm^2
    nDigis in module 3  counter 1:   96981  --> hit rate:  6061.3 Hz   -->  7.02 Hz/cm^2
    nDigis in module 3  counter 2:   32942  --> hit rate:  2058.9 Hz   -->  2.38 Hz/cm^2
    nDigis in module 3  counter 3:   10749  --> hit rate:   671.8 Hz   -->  0.78 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 18084
    maximum # of digis per channel on module 1, counter 2: 9377
    maximum # of digis per channel on module 1, counter 3: 644
    maximum # of digis per channel on module 2, counter 1: 14269
    maximum # of digis per channel on module 2, counter 2: 6646
    maximum # of digis per channel on module 2, counter 3: 1525
    maximum # of digis per channel on module 3, counter 1: 3423
    maximum # of digis per channel on module 3, counter 2: 4431
    maximum # of digis per channel on module 3, counter 3: 954
     *** -------------------- ***

    *** hit rate calculation in run 19169020 ***
    nEvents: 3999904   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  954757  --> hit rate: 59673.7 Hz   --> 69.07 Hz/cm^2
    nDigis in module 1  counter 2:  446455  --> hit rate: 27904.1 Hz   --> 32.30 Hz/cm^2
    nDigis in module 1  counter 3:   21184  --> hit rate:  1324.0 Hz   -->  1.53 Hz/cm^2
    nDigis in module 2  counter 1:  577881  --> hit rate: 36118.4 Hz   --> 41.80 Hz/cm^2
    nDigis in module 2  counter 2:   76375  --> hit rate:  4773.6 Hz   -->  5.52 Hz/cm^2
    nDigis in module 2  counter 3:   18359  --> hit rate:  1147.5 Hz   -->  1.33 Hz/cm^2
    nDigis in module 3  counter 1:  102948  --> hit rate:  6434.4 Hz   -->  7.45 Hz/cm^2
    nDigis in module 3  counter 2:   27968  --> hit rate:  1748.0 Hz   -->  2.02 Hz/cm^2
    nDigis in module 3  counter 3:   10791  --> hit rate:   674.5 Hz   -->  0.78 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 18662
    maximum # of digis per channel on module 1, counter 2: 9733
    maximum # of digis per channel on module 1, counter 3: 648
    maximum # of digis per channel on module 2, counter 1: 14905
    maximum # of digis per channel on module 2, counter 2: 6597
    maximum # of digis per channel on module 2, counter 3: 1247
    maximum # of digis per channel on module 3, counter 1: 2780
    maximum # of digis per channel on module 3, counter 2: 4189
    maximum # of digis per channel on module 3, counter 3: 879
     *** -------------------- ***

     *** hit rate calculation in run 19169022 ***
    nEvents: 23898430   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1: 5623384  --> hit rate: 58825.9 Hz   --> 68.09 Hz/cm^2
    nDigis in module 1  counter 2: 2561519  --> hit rate: 26795.9 Hz   --> 31.01 Hz/cm^2
    nDigis in module 1  counter 3:  127607  --> hit rate:  1334.9 Hz   -->  1.55 Hz/cm^2
    nDigis in module 2  counter 1: 3447915  --> hit rate: 36068.4 Hz   --> 41.75 Hz/cm^2
    nDigis in module 2  counter 2:  435872  --> hit rate:  4559.6 Hz   -->  5.28 Hz/cm^2
    nDigis in module 2  counter 3:  114852  --> hit rate:  1201.5 Hz   -->  1.39 Hz/cm^2
    nDigis in module 3  counter 1:  604338  --> hit rate:  6321.9 Hz   -->  7.32 Hz/cm^2
    nDigis in module 3  counter 2:  182248  --> hit rate:  1906.5 Hz   -->  2.21 Hz/cm^2
    nDigis in module 3  counter 3:   73737  --> hit rate:   771.4 Hz   -->  0.89 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 109603
    maximum # of digis per channel on module 1, counter 2: 55916
    maximum # of digis per channel on module 1, counter 3: 3728
    maximum # of digis per channel on module 2, counter 1: 89759
    maximum # of digis per channel on module 2, counter 2: 37439
    maximum # of digis per channel on module 2, counter 3: 8007
    maximum # of digis per channel on module 3, counter 1: 17131
    maximum # of digis per channel on module 3, counter 2: 26774
    maximum # of digis per channel on module 3, counter 3: 5604
     *** -------------------- ***

     *** hit rate calculation in run 19169032 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  749234  --> hit rate: 46827.1 Hz   --> 54.20 Hz/cm^2
    nDigis in module 1  counter 2:  342422  --> hit rate: 21401.4 Hz   --> 24.77 Hz/cm^2
    nDigis in module 1  counter 3:   17273  --> hit rate:  1079.6 Hz   -->  1.25 Hz/cm^2
    nDigis in module 2  counter 1:  444714  --> hit rate: 27794.6 Hz   --> 32.17 Hz/cm^2
    nDigis in module 2  counter 2:   53963  --> hit rate:  3372.7 Hz   -->  3.90 Hz/cm^2
    nDigis in module 2  counter 3:   13381  --> hit rate:   836.3 Hz   -->  0.97 Hz/cm^2
    nDigis in module 3  counter 1:   79993  --> hit rate:  4999.6 Hz   -->  5.79 Hz/cm^2
    nDigis in module 3  counter 2:   18732  --> hit rate:  1170.8 Hz   -->  1.36 Hz/cm^2
    nDigis in module 3  counter 3:    7531  --> hit rate:   470.7 Hz   -->  0.54 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 14616
    maximum # of digis per channel on module 1, counter 2: 7383
    maximum # of digis per channel on module 1, counter 3: 507
    maximum # of digis per channel on module 2, counter 1: 11669
    maximum # of digis per channel on module 2, counter 2: 4675
    maximum # of digis per channel on module 2, counter 3: 906
    maximum # of digis per channel on module 3, counter 1: 2132
    maximum # of digis per channel on module 3, counter 2: 2523
    maximum # of digis per channel on module 3, counter 3: 733
     *** -------------------- ***

     *** hit rate calculation in run 19170004 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  357942  --> hit rate: 22371.4 Hz   --> 25.89 Hz/cm^2
    nDigis in module 1  counter 2:  171601  --> hit rate: 10725.1 Hz   --> 12.41 Hz/cm^2
    nDigis in module 1  counter 3:   10832  --> hit rate:   677.0 Hz   -->  0.78 Hz/cm^2
    nDigis in module 2  counter 1:  155956  --> hit rate:  9747.3 Hz   --> 11.28 Hz/cm^2
    nDigis in module 2  counter 2:   21958  --> hit rate:  1372.4 Hz   -->  1.59 Hz/cm^2
    nDigis in module 2  counter 3:    4901  --> hit rate:   306.3 Hz   -->  0.35 Hz/cm^2
    nDigis in module 3  counter 1:   33705  --> hit rate:  2106.6 Hz   -->  2.44 Hz/cm^2
    nDigis in module 3  counter 2:    7215  --> hit rate:   450.9 Hz   -->  0.52 Hz/cm^2
    nDigis in module 3  counter 3:    5050  --> hit rate:   315.6 Hz   -->  0.37 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 6771
    maximum # of digis per channel on module 1, counter 2: 3583
    maximum # of digis per channel on module 1, counter 3: 316
    maximum # of digis per channel on module 2, counter 1: 3892
    maximum # of digis per channel on module 2, counter 2: 1682
    maximum # of digis per channel on module 2, counter 3: 249
    maximum # of digis per channel on module 3, counter 1: 928
    maximum # of digis per channel on module 3, counter 2: 754
    maximum # of digis per channel on module 3, counter 3: 267
     *** -------------------- ***

     *** hit rate calculation in run 19170007 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  649015  --> hit rate: 40563.4 Hz   --> 46.95 Hz/cm^2
    nDigis in module 1  counter 2:  303350  --> hit rate: 18959.4 Hz   --> 21.94 Hz/cm^2
    nDigis in module 1  counter 3:   16317  --> hit rate:  1019.8 Hz   -->  1.18 Hz/cm^2
    nDigis in module 2  counter 1:  360439  --> hit rate: 22527.4 Hz   --> 26.07 Hz/cm^2
    nDigis in module 2  counter 2:   47612  --> hit rate:  2975.8 Hz   -->  3.44 Hz/cm^2
    nDigis in module 2  counter 3:    9325  --> hit rate:   582.8 Hz   -->  0.67 Hz/cm^2
    nDigis in module 3  counter 1:   65261  --> hit rate:  4078.8 Hz   -->  4.72 Hz/cm^2
    nDigis in module 3  counter 2:   14378  --> hit rate:   898.6 Hz   -->  1.04 Hz/cm^2
    nDigis in module 3  counter 3:    7590  --> hit rate:   474.4 Hz   -->  0.55 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 12537
    maximum # of digis per channel on module 1, counter 2: 6467
    maximum # of digis per channel on module 1, counter 3: 488
    maximum # of digis per channel on module 2, counter 1: 9196
    maximum # of digis per channel on module 2, counter 2: 3894
    maximum # of digis per channel on module 2, counter 3: 604
    maximum # of digis per channel on module 3, counter 1: 1748
    maximum # of digis per channel on module 3, counter 2: 1956
    maximum # of digis per channel on module 3, counter 3: 661
     *** -------------------- ***


     *** hit rate calculation in run 19171002 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  320316  --> hit rate: 20019.8 Hz   --> 23.17 Hz/cm^2
    nDigis in module 1  counter 2:  152904  --> hit rate:  9556.5 Hz   --> 11.06 Hz/cm^2
    nDigis in module 1  counter 3:    8950  --> hit rate:   559.4 Hz   -->  0.65 Hz/cm^2
    nDigis in module 2  counter 1:  129445  --> hit rate:  8090.3 Hz   -->  9.36 Hz/cm^2
    nDigis in module 2  counter 2:   16920  --> hit rate:  1057.5 Hz   -->  1.22 Hz/cm^2
    nDigis in module 2  counter 3:    3515  --> hit rate:   219.7 Hz   -->  0.25 Hz/cm^2
    nDigis in module 3  counter 1:   29019  --> hit rate:  1813.7 Hz   -->  2.10 Hz/cm^2
    nDigis in module 3  counter 2:    5429  --> hit rate:   339.3 Hz   -->  0.39 Hz/cm^2
    nDigis in module 3  counter 3:    2703  --> hit rate:   168.9 Hz   -->  0.20 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 6270
    maximum # of digis per channel on module 1, counter 2: 3248
    maximum # of digis per channel on module 1, counter 3: 287
    maximum # of digis per channel on module 2, counter 1: 3129
    maximum # of digis per channel on module 2, counter 2: 1305
    maximum # of digis per channel on module 2, counter 3: 190
    maximum # of digis per channel on module 3, counter 1: 788
    maximum # of digis per channel on module 3, counter 2: 506
    maximum # of digis per channel on module 3, counter 3: 257
     *** -------------------- ***

    *** hit rate calculation in run 19171021 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  600873  --> hit rate: 37554.6 Hz   --> 43.47 Hz/cm^2
    nDigis in module 1  counter 2:  278792  --> hit rate: 17424.5 Hz   --> 20.17 Hz/cm^2
    nDigis in module 1  counter 3:   15522  --> hit rate:   970.1 Hz   -->  1.12 Hz/cm^2
    nDigis in module 2  counter 1:  313582  --> hit rate: 19598.9 Hz   --> 22.68 Hz/cm^2
    nDigis in module 2  counter 2:   39039  --> hit rate:  2439.9 Hz   -->  2.82 Hz/cm^2
    nDigis in module 2  counter 3:    6856  --> hit rate:   428.5 Hz   -->  0.50 Hz/cm^2
    nDigis in module 3  counter 1:   59445  --> hit rate:  3715.3 Hz   -->  4.30 Hz/cm^2
    nDigis in module 3  counter 2:   12329  --> hit rate:   770.6 Hz   -->  0.89 Hz/cm^2
    nDigis in module 3  counter 3:    6355  --> hit rate:   397.2 Hz   -->  0.46 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 11551
    maximum # of digis per channel on module 1, counter 2: 6000
    maximum # of digis per channel on module 1, counter 3: 451
    maximum # of digis per channel on module 2, counter 1: 7861
    maximum # of digis per channel on module 2, counter 2: 3209
    maximum # of digis per channel on module 2, counter 3: 411
    maximum # of digis per channel on module 3, counter 1: 1609
    maximum # of digis per channel on module 3, counter 2: 1391
    maximum # of digis per channel on module 3, counter 3: 553
     *** -------------------- ***

    *** hit rate calculation in run 19172004 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  311278  --> hit rate: 19454.9 Hz   --> 22.52 Hz/cm^2
    nDigis in module 1  counter 2:  154570  --> hit rate:  9660.6 Hz   --> 11.18 Hz/cm^2
    nDigis in module 1  counter 3:    9206  --> hit rate:   575.4 Hz   -->  0.67 Hz/cm^2
    nDigis in module 2  counter 1:  123192  --> hit rate:  7699.5 Hz   -->  8.91 Hz/cm^2
    nDigis in module 2  counter 2:   15881  --> hit rate:   992.6 Hz   -->  1.15 Hz/cm^2
    nDigis in module 2  counter 3:    3169  --> hit rate:   198.1 Hz   -->  0.23 Hz/cm^2
    nDigis in module 3  counter 1:   29786  --> hit rate:  1861.6 Hz   -->  2.15 Hz/cm^2
    nDigis in module 3  counter 2:    5049  --> hit rate:   315.6 Hz   -->  0.37 Hz/cm^2
    nDigis in module 3  counter 3:    2583  --> hit rate:   161.4 Hz   -->  0.19 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 6015
    maximum # of digis per channel on module 1, counter 2: 3280
    maximum # of digis per channel on module 1, counter 3: 282
    maximum # of digis per channel on module 2, counter 1: 2946
    maximum # of digis per channel on module 2, counter 2: 1268
    maximum # of digis per channel on module 2, counter 3: 169
    maximum # of digis per channel on module 3, counter 1: 799
    maximum # of digis per channel on module 3, counter 2: 429
    maximum # of digis per channel on module 3, counter 3: 255
     *** -------------------- ***

     *** hit rate calculation in run 19172006 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  558100  --> hit rate: 34881.3 Hz   --> 40.37 Hz/cm^2
    nDigis in module 1  counter 2:  263941  --> hit rate: 16496.3 Hz   --> 19.09 Hz/cm^2
    nDigis in module 1  counter 3:   14469  --> hit rate:   904.3 Hz   -->  1.05 Hz/cm^2
    nDigis in module 2  counter 1:  264108  --> hit rate: 16506.8 Hz   --> 19.11 Hz/cm^2
    nDigis in module 2  counter 2:   31451  --> hit rate:  1965.7 Hz   -->  2.28 Hz/cm^2
    nDigis in module 2  counter 3:    5487  --> hit rate:   342.9 Hz   -->  0.40 Hz/cm^2
    nDigis in module 3  counter 1:   53348  --> hit rate:  3334.3 Hz   -->  3.86 Hz/cm^2
    nDigis in module 3  counter 2:    8552  --> hit rate:   534.5 Hz   -->  0.62 Hz/cm^2
    nDigis in module 3  counter 3:    5933  --> hit rate:   370.8 Hz   -->  0.43 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 10771
    maximum # of digis per channel on module 1, counter 2: 5693
    maximum # of digis per channel on module 1, counter 3: 426
    maximum # of digis per channel on module 2, counter 1: 6602
    maximum # of digis per channel on module 2, counter 2: 2372
    maximum # of digis per channel on module 2, counter 3: 332
    maximum # of digis per channel on module 3, counter 1: 1470
    maximum # of digis per channel on module 3, counter 2: 977
    maximum # of digis per channel on module 3, counter 3: 535
     *** -------------------- ***

     *** hit rate calculation in run 19172007 ***
    nEvents: 4000000   time window: 2us     active area: 864cm^2
    nDigis in module 1  counter 1:  561456  --> hit rate: 35091.0 Hz   --> 40.61 Hz/cm^2
    nDigis in module 1  counter 2:  264413  --> hit rate: 16525.8 Hz   --> 19.13 Hz/cm^2
    nDigis in module 1  counter 3:   14211  --> hit rate:   888.2 Hz   -->  1.03 Hz/cm^2
    nDigis in module 2  counter 1:  265857  --> hit rate: 16616.1 Hz   --> 19.23 Hz/cm^2
    nDigis in module 2  counter 2:   31188  --> hit rate:  1949.3 Hz   -->  2.26 Hz/cm^2
    nDigis in module 2  counter 3:    5387  --> hit rate:   336.7 Hz   -->  0.39 Hz/cm^2
    nDigis in module 3  counter 1:   53760  --> hit rate:  3360.0 Hz   -->  3.89 Hz/cm^2
    nDigis in module 3  counter 2:    8776  --> hit rate:   548.5 Hz   -->  0.63 Hz/cm^2
    nDigis in module 3  counter 3:    5834  --> hit rate:   364.6 Hz   -->  0.42 Hz/cm^2
     *** -------------------- ***
    maximum # of digis per channel on module 1, counter 1: 10808
    maximum # of digis per channel on module 1, counter 2: 5616
    maximum # of digis per channel on module 1, counter 3: 420
    maximum # of digis per channel on module 2, counter 1: 6544
    maximum # of digis per channel on module 2, counter 2: 2421
    maximum # of digis per channel on module 2, counter 3: 364
    maximum # of digis per channel on module 3, counter 1: 1521
    maximum # of digis per channel on module 3, counter 2: 894
    maximum # of digis per channel on module 3, counter 3: 542
     *** -------------------- ***

    eTOF noise rates in Run19

    At present, noise rates per sector (excluding the channels with pulser input) are available for noise runs (pedAsPhys or pedAsPhys_tcd_only) taken from January 12th (day 012) through May 23rd (day 143). The sketch below shows how the quoted rates follow from the printed digi counts:
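
    A minimal sketch (Python, not part of the original analysis code) of how the quoted rates can be reproduced from the digi counts. The factor of two is an assumption, namely that each hit contributes a leading- and a trailing-edge digi; with it, the formula matches the values listed in these tables.

    # Sketch: reproduce the printed noise/hit rates from the digi counts.
    # Assumption: two digis per hit (leading + trailing edge), hence the factor of 2.
    def noise_rate(n_digis, n_events, time_window_s, active_area_cm2):
        """Return (rate in Hz, rate density in Hz/cm^2)."""
        live_time_s = n_events * time_window_s      # total sampled time
        rate_hz = n_digis / (2.0 * live_time_s)     # 2 digis per hit (assumed)
        return rate_hz, rate_hz / active_area_cm2

    # Example: run 20012031, sector 13 -> ~4382.8 Hz, ~0.58 Hz/cm^2
    rate, density = noise_rate(n_digis=52470, n_events=1995298,
                               time_window_s=3e-6, active_area_cm2=7533.0)
    print(f"hit rate: {rate:.1f} Hz  --> {density:.2f} Hz/cm^2")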

     *** noise rate calculation in run 20012031 ***

    nEvents: 1995298   time window: 3us     active area (sector): 7533cm^2
    nDigis in sector 13:   52470  --> hit rate:  4382.8 Hz   -->  0.58 Hz/cm^2
    nDigis in sector 14:   33714  --> hit rate:  2816.1 Hz   -->  0.37 Hz/cm^2
    nDigis in sector 15:  218203  --> hit rate: 18226.4 Hz   -->  2.42 Hz/cm^2
    nDigis in sector 16:   66805  --> hit rate:  5580.2 Hz   -->  0.74 Hz/cm^2
    nDigis in sector 17:   60485  --> hit rate:  5052.3 Hz   -->  0.67 Hz/cm^2
    nDigis in sector 18:   47784  --> hit rate:  3991.4 Hz   -->  0.53 Hz/cm^2
    nDigis in sector 19:   47838  --> hit rate:  3995.9 Hz   -->  0.53 Hz/cm^2
    nDigis in sector 20:   35534  --> hit rate:  2968.1 Hz   -->  0.39 Hz/cm^2
    nDigis in sector 21:  135134  --> hit rate: 11287.7 Hz   -->  1.50 Hz/cm^2
    nDigis in sector 22:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in sector 23:   79938  --> hit rate:  6677.2 Hz   -->  0.89 Hz/cm^2
    nDigis in sector 24:   88353  --> hit rate:  7380.1 Hz   -->  0.98 Hz/cm^2
     *** -------------------- ***


     *** noise rate calculation in run 20012032 ***

    nEvents: 3999998   time window: 3us     active area (sector): 7533cm^2
    nDigis in sector 13:  106538  --> hit rate:  4439.1 Hz   -->  0.59 Hz/cm^2
    nDigis in sector 14:   67668  --> hit rate:  2819.5 Hz   -->  0.37 Hz/cm^2
    nDigis in sector 15:  442016  --> hit rate: 18417.3 Hz   -->  2.44 Hz/cm^2
    nDigis in sector 16:  132107  --> hit rate:  5504.5 Hz   -->  0.73 Hz/cm^2
    nDigis in sector 17:  119783  --> hit rate:  4991.0 Hz   -->  0.66 Hz/cm^2
    nDigis in sector 18:   96125  --> hit rate:  4005.2 Hz   -->  0.53 Hz/cm^2
    nDigis in sector 19:   95652  --> hit rate:  3985.5 Hz   -->  0.53 Hz/cm^2
    nDigis in sector 20:   70297  --> hit rate:  2929.0 Hz   -->  0.39 Hz/cm^2
    nDigis in sector 21:  267425  --> hit rate: 11142.7 Hz   -->  1.48 Hz/cm^2
    nDigis in sector 22:       0  --> hit rate:     0.0 Hz   -->  0.00 Hz/cm^2
    nDigis in sector 23:  159088  --> hit rate:  6628.7 Hz   -->  0.88 Hz/cm^2
    nDigis in sector 24:  173432  --> hit rate:  7226.3 Hz   -->  0.96 Hz/cm^2
     *** -------------------- ***

     *** noise rate calculation in run 20013043 ***
    nEvents: 3655403   time window: 3us     active area (sector): 7533cm^2
    nDigis in sector 13:   94210  --> hit rate:  4295.5 Hz   -->  0.57 Hz/cm^2
    nDigis in sector 14:   58535  --> hit rate:  2668.9 Hz   -->  0.35 Hz/cm^2
    nDigis in sector 15:  397960  --> hit rate: 18144.8 Hz   -->  2.41 Hz/cm^2
    nDigis in sector 16:  130759  --> hit rate:  5961.9 Hz   -->  0.79 Hz/cm^2
    nDigis in sector 17:  106755  --> hit rate:  4867.5 Hz   -->  0.65 Hz/cm^2
    nDigis in sector 18:   97815  --> hit rate:  4459.8 Hz   -->  0.59 Hz/cm^2
    nDigis in sector 19:   81002  --> hit rate:  3693.3 Hz   -->  0.49 Hz/cm^2
    nDigis in sector 20:   83005  --> hit rate:  3784.6 Hz   -->  0.50 Hz/cm^2
    nDigis in sector 21:  162815  --> hit rate:  7423.5 Hz   -->  0.99 Hz/cm^2
    nDigis in sector 22:  318136  --> hit rate: 14505.3 Hz   -->  1.93 Hz/cm^2
    nDigis in sector 23:  135613  --> hit rate:  6183.2 Hz   -->  0.82 Hz/cm^2
    nDigis in sector 24:  156598  --> hit rate:  7140.0 Hz   -->  0.95 Hz/cm^2
     *** -------------------- ***


     *** noise rate calculation in run 20017008 ***     (CosmicLocalClock/physics run)
    # events: 337993   time window: 3 microseconds
    # digis in sector 13:    7252  --> hit rate:  3576.0 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 14:    5816  --> hit rate:  2867.9 Hz, active area: 7479.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 15:   41430  --> hit rate: 20429.4 Hz, active area: 7533.0 cm^2   -->  2.71 Hz/cm^2
    # digis in sector 16:    9734  --> hit rate:  4799.9 Hz, active area: 7371.0 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 17:    9625  --> hit rate:  4746.2 Hz, active area: 7533.0 cm^2   -->  0.63 Hz/cm^2
    # digis in sector 18:    9636  --> hit rate:  4751.6 Hz, active area: 7533.0 cm^2   -->  0.63 Hz/cm^2
    # digis in sector 19:    7848  --> hit rate:  3869.9 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 20:    5362  --> hit rate:  2644.0 Hz, active area: 7506.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 21:    7636  --> hit rate:  3765.4 Hz, active area: 7384.5 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 22:    2222  --> hit rate:  1095.7 Hz, active area: 2470.5 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 23:    6765  --> hit rate:  3335.9 Hz, active area: 7452.0 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 24:    8915  --> hit rate:  4396.0 Hz, active area: 7533.0 cm^2   -->  0.58 Hz/cm^2
     *** -------------------- ***


     *** noise rate calculation in run 20021052 ***
    # events: 3602540   time window: 3 microseconds
    # digis in sector 13:   74687  --> hit rate:  3455.3 Hz, active area: 7533.0 cm^2   -->  0.46 Hz/cm^2
    # digis in sector 14:   57666  --> hit rate:  2667.8 Hz, active area: 7479.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 15:  550240  --> hit rate: 25456.1 Hz, active area: 7533.0 cm^2   -->  3.38 Hz/cm^2
    # digis in sector 16:  130037  --> hit rate:  6016.0 Hz, active area: 7533.0 cm^2   -->  0.80 Hz/cm^2
    # digis in sector 17:   97749  --> hit rate:  4522.2 Hz, active area: 7533.0 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:   94488  --> hit rate:  4371.4 Hz, active area: 7533.0 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 19:   74864  --> hit rate:  3463.5 Hz, active area: 7479.0 cm^2   -->  0.46 Hz/cm^2
    # digis in sector 20:  221113  --> hit rate: 10229.5 Hz, active area: 7533.0 cm^2   -->  1.36 Hz/cm^2
    # digis in sector 21:  112178  --> hit rate:  5189.8 Hz, active area: 7533.0 cm^2   -->  0.69 Hz/cm^2
    # digis in sector 22:  249187  --> hit rate: 11528.3 Hz, active area: 7533.0 cm^2   -->  1.53 Hz/cm^2
    # digis in sector 23:   71223  --> hit rate:  3295.0 Hz, active area: 7425.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 24:   95773  --> hit rate:  4430.8 Hz, active area: 7533.0 cm^2   -->  0.59 Hz/cm^2

     *** noise rate calculation in run 20022023 ***
    # events: 3865515   time window: 3 microseconds
    # digis in sector 13:   67599  --> hit rate:  2914.6 Hz, active area: 7533.0 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 14:   50238  --> hit rate:  2166.1 Hz, active area: 7479.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 15:  522992  --> hit rate: 22549.5 Hz, active area: 7533.0 cm^2   -->  2.99 Hz/cm^2
    # digis in sector 16:  117246  --> hit rate:  5055.2 Hz, active area: 7533.0 cm^2   -->  0.67 Hz/cm^2
    # digis in sector 17:   91879  --> hit rate:  3961.5 Hz, active area: 7533.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 18:   89902  --> hit rate:  3876.2 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 19:   72531  --> hit rate:  3127.3 Hz, active area: 7533.0 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 20:   50446  --> hit rate:  2175.0 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 21:   84903  --> hit rate:  3660.7 Hz, active area: 7533.0 cm^2   -->  0.49 Hz/cm^2
    # digis in sector 22:  180176  --> hit rate:  7768.5 Hz, active area: 7533.0 cm^2   -->  1.03 Hz/cm^2
    # digis in sector 23:   65197  --> hit rate:  2811.1 Hz, active area: 7425.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 24:   88901  --> hit rate:  3833.1 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2

     *** noise rate calculation in run 20023052 ***
    # events: 3081608   time window: 3 microseconds
    # digis in sector 13:   54846  --> hit rate:  2966.3 Hz, active area: 7533.0 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 14:   43162  --> hit rate:  2334.4 Hz, active area: 7479.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 15:  414145  --> hit rate: 22398.7 Hz, active area: 7533.0 cm^2   -->  2.97 Hz/cm^2
    # digis in sector 16:  100708  --> hit rate:  5446.7 Hz, active area: 7533.0 cm^2   -->  0.72 Hz/cm^2
    # digis in sector 17:   71016  --> hit rate:  3840.9 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 18:   61749  --> hit rate:  3339.7 Hz, active area: 6277.5 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 19:   58516  --> hit rate:  3164.8 Hz, active area: 7533.0 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 20:   42032  --> hit rate:  2273.3 Hz, active area: 7533.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 21:   72722  --> hit rate:  3933.1 Hz, active area: 7533.0 cm^2   -->  0.52 Hz/cm^2
    # digis in sector 22:  171482  --> hit rate:  9274.5 Hz, active area: 7533.0 cm^2   -->  1.23 Hz/cm^2
    # digis in sector 23:   47536  --> hit rate:  2571.0 Hz, active area: 7425.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 24:   68154  --> hit rate:  3686.1 Hz, active area: 7533.0 cm^2   -->  0.49 Hz/cm^2

     *** noise rate calculation in run 20027043 ***
    # events: 4000000   time window: 3 microseconds
    # digis in sector 13:   73070  --> hit rate:  3044.6 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 14:   53411  --> hit rate:  2225.5 Hz, active area: 7479.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 15:  545023  --> hit rate: 22709.3 Hz, active area: 7533.0 cm^2   -->  3.01 Hz/cm^2
    # digis in sector 16:  152626  --> hit rate:  6359.4 Hz, active area: 7533.0 cm^2   -->  0.84 Hz/cm^2
    # digis in sector 17:   87279  --> hit rate:  3636.6 Hz, active area: 7533.0 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 18:   87997  --> hit rate:  3666.5 Hz, active area: 7533.0 cm^2   -->  0.49 Hz/cm^2
    # digis in sector 19:   72289  --> hit rate:  3012.0 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 20:   52627  --> hit rate:  2192.8 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 21:  124652  --> hit rate:  5193.8 Hz, active area: 7533.0 cm^2   -->  0.69 Hz/cm^2
    # digis in sector 22:  248901  --> hit rate: 10370.9 Hz, active area: 7533.0 cm^2   -->  1.38 Hz/cm^2
    # digis in sector 23:   67214  --> hit rate:  2800.6 Hz, active area: 7425.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 24:   93113  --> hit rate:  3879.7 Hz, active area: 7533.0 cm^2   -->  0.52 Hz/cm^2

     *** noise rate calculation in run 20030048 ***
    # events: 4000000   time window: 3 microseconds
    # digis in sector 13:   84559  --> hit rate:  3523.3 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 14:   64333  --> hit rate:  2680.5 Hz, active area: 7479.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 15:  828956  --> hit rate: 34539.8 Hz, active area: 7533.0 cm^2   -->  4.59 Hz/cm^2
    # digis in sector 16:  707667  --> hit rate: 29486.1 Hz, active area: 7533.0 cm^2   -->  3.91 Hz/cm^2
    # digis in sector 17:   91706  --> hit rate:  3821.1 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 18:   92144  --> hit rate:  3839.3 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 19:   79645  --> hit rate:  3318.5 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 20:   56082  --> hit rate:  2336.7 Hz, active area: 7533.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 21:  360070  --> hit rate: 15002.9 Hz, active area: 7533.0 cm^2   -->  1.99 Hz/cm^2
    # digis in sector 22:  655828  --> hit rate: 27326.2 Hz, active area: 7533.0 cm^2   -->  3.63 Hz/cm^2
    # digis in sector 23:   71182  --> hit rate:  2965.9 Hz, active area: 7425.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 24:   97501  --> hit rate:  4062.5 Hz, active area: 7533.0 cm^2   -->  0.54 Hz/cm^2

     *** noise rate calculation in run 20036038 ***
    # events: 3811057   time window: 3 microseconds
    # digis in sector 13:   72845  --> hit rate:  3185.7 Hz, active area: 7533.0 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 14:   59429  --> hit rate:  2599.0 Hz, active area: 7479.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 15:  942326  --> hit rate: 41210.2 Hz, active area: 7533.0 cm^2   -->  5.47 Hz/cm^2
    # digis in sector 16:  486065  --> hit rate: 21256.8 Hz, active area: 7533.0 cm^2   -->  2.82 Hz/cm^2
    # digis in sector 17:   70676  --> hit rate:  3090.8 Hz, active area: 6277.5 cm^2   -->  0.49 Hz/cm^2
    # digis in sector 18:   81622  --> hit rate:  3569.5 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:   57893  --> hit rate:  2531.8 Hz, active area: 6277.5 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 20:   46968  --> hit rate:  2054.0 Hz, active area: 7533.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 21:  348748  --> hit rate: 15251.6 Hz, active area: 7533.0 cm^2   -->  2.02 Hz/cm^2
    # digis in sector 22:  193457  --> hit rate:  8460.3 Hz, active area: 5022.0 cm^2   -->  1.68 Hz/cm^2
    # digis in sector 23:   74638  --> hit rate:  3264.1 Hz, active area: 7060.5 cm^2   -->  0.46 Hz/cm^2
    # digis in sector 24:   95041  --> hit rate:  4156.4 Hz, active area: 7533.0 cm^2   -->  0.55 Hz/cm^2

     *** noise rate calculation in run 20037044 ***
    # events: 4000000   time window: 3 microseconds
    # digis in sector 13:   66433  --> hit rate:  2768.0 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 14:   53120  --> hit rate:  2213.3 Hz, active area: 7479.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 15:  784399  --> hit rate: 32683.3 Hz, active area: 7533.0 cm^2   -->  4.34 Hz/cm^2
    # digis in sector 16:  543280  --> hit rate: 22636.7 Hz, active area: 7533.0 cm^2   -->  3.01 Hz/cm^2
    # digis in sector 17:   79533  --> hit rate:  3313.9 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 18:   80281  --> hit rate:  3345.0 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 19:   74362  --> hit rate:  3098.4 Hz, active area: 7533.0 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 20:   53651  --> hit rate:  2235.5 Hz, active area: 7533.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 21:  266452  --> hit rate: 11102.2 Hz, active area: 7533.0 cm^2   -->  1.47 Hz/cm^2
    # digis in sector 22:  163246  --> hit rate:  6801.9 Hz, active area: 5022.0 cm^2   -->  1.35 Hz/cm^2
    # digis in sector 23:   63731  --> hit rate:  2655.5 Hz, active area: 7060.5 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 24:   91908  --> hit rate:  3829.5 Hz, active area: 7533.0 cm^2   -->  0.51 Hz/cm^2


     *** noise rate calculation in run 20039024 ***
    # events: 3798998   time window: 3 microseconds
    # digis in sector 13:   66356  --> hit rate:  2911.1 Hz, active area: 6277.5 cm^2   -->  0.46 Hz/cm^2
    # digis in sector 14:   57269  --> hit rate:  2512.5 Hz, active area: 7479.0 cm^2   -->  0.34 Hz/cm^2
    # digis in sector 15:  802133  --> hit rate: 35190.5 Hz, active area: 7533.0 cm^2   -->  4.67 Hz/cm^2
    # digis in sector 16:  760143  --> hit rate: 33348.4 Hz, active area: 7533.0 cm^2   -->  4.43 Hz/cm^2
    # digis in sector 17:   68281  --> hit rate:  2995.6 Hz, active area: 6277.5 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 18:   80235  --> hit rate:  3520.0 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:   38631  --> hit rate:  1694.8 Hz, active area: 5022.0 cm^2   -->  0.34 Hz/cm^2
    # digis in sector 20:   62715  --> hit rate:  2751.4 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 21:  358261  --> hit rate: 15717.3 Hz, active area: 7452.0 cm^2   -->  2.11 Hz/cm^2
    # digis in sector 22:  241978  --> hit rate: 10615.9 Hz, active area: 5022.0 cm^2   -->  2.11 Hz/cm^2
    # digis in sector 23:   87118  --> hit rate:  3822.0 Hz, active area: 7060.5 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 24:  108247  --> hit rate:  4748.9 Hz, active area: 7533.0 cm^2   -->  0.63 Hz/cm^2


     *** noise rate calculation in run 20041025 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   59094  --> hit rate:  2462.2 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 14:   43821  --> hit rate:  1825.9 Hz, active area: 7479.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 15:  716808  --> hit rate: 29867.0 Hz, active area: 7533.0 cm^2   -->  3.96 Hz/cm^2
    # digis in sector 16:   33556  --> hit rate:  1398.2 Hz, active area: 3766.5 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 17:   69227  --> hit rate:  2884.5 Hz, active area: 7533.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 18:   68895  --> hit rate:  2870.6 Hz, active area: 7533.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 19:   60156  --> hit rate:  2506.5 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 20:   50962  --> hit rate:  2123.4 Hz, active area: 7533.0 cm^2   -->  0.28 Hz/cm^2
    # digis in sector 21:  183170  --> hit rate:  7632.1 Hz, active area: 7533.0 cm^2   -->  1.01 Hz/cm^2
    # digis in sector 22:  157609  --> hit rate:  6567.0 Hz, active area: 5022.0 cm^2   -->  1.31 Hz/cm^2
    # digis in sector 23:   71489  --> hit rate:  2978.7 Hz, active area: 7060.5 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 24:   89943  --> hit rate:  3747.6 Hz, active area: 7533.0 cm^2   -->  0.50 Hz/cm^2

     *** noise rate calculation in run 20042039 ***
    # events: 3843464, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   62951  --> hit rate:  2729.8 Hz, active area: 7533.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 14:   47833  --> hit rate:  2074.2 Hz, active area: 7438.5 cm^2   -->  0.28 Hz/cm^2
    # digis in sector 15:  678716  --> hit rate: 29431.6 Hz, active area: 7533.0 cm^2   -->  3.91 Hz/cm^2
    # digis in sector 16:   38354  --> hit rate:  1663.2 Hz, active area: 3766.5 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 17:   59963  --> hit rate:  2600.2 Hz, active area: 6277.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 18:   68878  --> hit rate:  2986.8 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 19:   40949  --> hit rate:  1775.7 Hz, active area: 5022.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 20:   45779  --> hit rate:  1985.1 Hz, active area: 6277.5 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 21:  220033  --> hit rate:  9541.4 Hz, active area: 7533.0 cm^2   -->  1.27 Hz/cm^2
    # digis in sector 22:  218223  --> hit rate:  9462.9 Hz, active area: 5022.0 cm^2   -->  1.88 Hz/cm^2
    # digis in sector 23:   80710  --> hit rate:  3499.9 Hz, active area: 7060.5 cm^2   -->  0.50 Hz/cm^2
    # digis in sector 24:   94726  --> hit rate:  4107.7 Hz, active area: 7533.0 cm^2   -->  0.55 Hz/cm^2

     *** noise rate calculation in run 20044017 ***
    # events: 4000000   time window: 4 microseconds
    # digis in sector 13:   86570  --> hit rate:  3607.1 Hz, active area: 7533.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 14:   64475  --> hit rate:  2686.5 Hz, active area: 7479.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 15:  803500  --> hit rate: 33479.2 Hz, active area: 7533.0 cm^2   -->  3.33 Hz/cm^2
    # digis in sector 16:  875268  --> hit rate: 36469.5 Hz, active area: 7533.0 cm^2   -->  3.63 Hz/cm^2
    # digis in sector 17:   84583  --> hit rate:  3524.3 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 18:   83733  --> hit rate:  3488.9 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 19:   75410  --> hit rate:  3142.1 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 20:   69041  --> hit rate:  2876.7 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 21:  379157  --> hit rate: 15798.2 Hz, active area: 7533.0 cm^2   -->  1.58 Hz/cm^2
    # digis in sector 22:  266380  --> hit rate: 11099.2 Hz, active area: 5022.0 cm^2   -->  1.66 Hz/cm^2
    # digis in sector 23:   92924  --> hit rate:  3871.8 Hz, active area: 7060.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 24:  113041  --> hit rate:  4710.0 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2

     *** noise rate calculation in run 20046026 ***
    # events: 3999999   time window: 4 microseconds
    # digis in sector 13:  100147  --> hit rate:  3129.6 Hz, active area: 7533.0 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 14:   72483  --> hit rate:  2265.1 Hz, active area: 7479.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 15:  928257  --> hit rate: 29008.0 Hz, active area: 7533.0 cm^2   -->  3.85 Hz/cm^2
    # digis in sector 16: 1058237  --> hit rate: 33069.9 Hz, active area: 7533.0 cm^2   -->  4.39 Hz/cm^2
    # digis in sector 17:   98655  --> hit rate:  3083.0 Hz, active area: 7533.0 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 18:  104391  --> hit rate:  3262.2 Hz, active area: 7533.0 cm^2   -->  0.43 Hz/cm^2
    # digis in sector 19:   91496  --> hit rate:  2859.3 Hz, active area: 7533.0 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 20:   78588  --> hit rate:  2455.9 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 21:  336126  --> hit rate: 10503.9 Hz, active area: 7533.0 cm^2   -->  1.39 Hz/cm^2
    # digis in sector 22: 1257531  --> hit rate: 39297.9 Hz, active area: 7533.0 cm^2   -->  5.22 Hz/cm^2
    # digis in sector 23:  110199  --> hit rate:  3443.7 Hz, active area: 7060.5 cm^2   -->  0.49 Hz/cm^2
    # digis in sector 24:  131097  --> hit rate:  4096.8 Hz, active area: 7533.0 cm^2   -->  0.54 Hz/cm^2
     *** -------------------- ***

     *** noise rate calculation in run 20047059 ***
    # events: 3821205   time window: 4 microseconds
    # digis in sector 13: 4334227  --> hit rate: 141782.1 Hz, active area: 6696.0 cm^2   --> 21.17 Hz/cm^2
    # digis in sector 14:   65361  --> hit rate:  2138.1 Hz, active area: 7479.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 15:  878499  --> hit rate: 28737.6 Hz, active area: 7533.0 cm^2   -->  3.81 Hz/cm^2
    # digis in sector 16:  575832  --> hit rate: 18836.7 Hz, active area: 6277.5 cm^2   -->  3.00 Hz/cm^2
    # digis in sector 17:   91663  --> hit rate:  2998.5 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 18:   95745  --> hit rate:  3132.0 Hz, active area: 7533.0 cm^2   -->  0.42 Hz/cm^2
    # digis in sector 19:   80065  --> hit rate:  2619.1 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 20:   72867  --> hit rate:  2383.6 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 21:  289911  --> hit rate:  9483.6 Hz, active area: 7533.0 cm^2   -->  1.26 Hz/cm^2
    # digis in sector 22: 1185530  --> hit rate: 38781.3 Hz, active area: 7533.0 cm^2   -->  5.15 Hz/cm^2
    # digis in sector 23:   82648  --> hit rate:  2703.6 Hz, active area: 5805.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 24:  123111  --> hit rate:  4027.2 Hz, active area: 7533.0 cm^2   -->  0.53 Hz/cm^2
     *** -------------------- ***

     *** noise rate calculation in run 20050011 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   29841  --> hit rate:   932.5 Hz, active area: 4522.5 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 14:   36660  --> hit rate:  1145.6 Hz, active area: 7479.0 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 15:  467148  --> hit rate: 14598.4 Hz, active area: 7533.0 cm^2   -->  1.94 Hz/cm^2
    # digis in sector 16:  605427  --> hit rate: 18919.6 Hz, active area: 7533.0 cm^2   -->  2.51 Hz/cm^2
    # digis in sector 17:   57981  --> hit rate:  1811.9 Hz, active area: 7533.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 18:   60181  --> hit rate:  1880.7 Hz, active area: 7533.0 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 19:   49593  --> hit rate:  1549.8 Hz, active area: 7533.0 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 20:   42326  --> hit rate:  1322.7 Hz, active area: 7533.0 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 21:  142077  --> hit rate:  4439.9 Hz, active area: 7533.0 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 22:  574469  --> hit rate: 17952.2 Hz, active area: 7533.0 cm^2   -->  2.38 Hz/cm^2
    # digis in sector 23:   70382  --> hit rate:  2199.4 Hz, active area: 7060.5 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 24:   76421  --> hit rate:  2388.2 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2

     *** noise rate calculation in run 20051018 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   34144  --> hit rate:  1067.0 Hz, active area: 5926.5 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 14:   34869  --> hit rate:  1089.7 Hz, active area: 7479.0 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 15:  469002  --> hit rate: 14656.3 Hz, active area: 7533.0 cm^2   -->  1.95 Hz/cm^2
    # digis in sector 16:  590280  --> hit rate: 18446.2 Hz, active area: 7533.0 cm^2   -->  2.45 Hz/cm^2
    # digis in sector 17:   46843  --> hit rate:  1463.8 Hz, active area: 6277.5 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 18:   49768  --> hit rate:  1555.2 Hz, active area: 6277.5 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 19:   47669  --> hit rate:  1489.7 Hz, active area: 7533.0 cm^2   -->  0.20 Hz/cm^2
    # digis in sector 20:   39545  --> hit rate:  1235.8 Hz, active area: 7114.5 cm^2   -->  0.17 Hz/cm^2
    # digis in sector 21:  128354  --> hit rate:  4011.1 Hz, active area: 7533.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 22:  565095  --> hit rate: 17659.2 Hz, active area: 7533.0 cm^2   -->  2.34 Hz/cm^2
    # digis in sector 23:   68554  --> hit rate:  2142.3 Hz, active area: 7060.5 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 24:   70018  --> hit rate:  2188.1 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2

     *** noise rate calculation in run 20052001 ***
    # events: 4028288, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 14:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 15:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 16:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 17:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 18:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 19:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 20:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 21:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 22:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 23:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 24:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2

     *** noise rate calculation in run 20056028 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   42617  --> hit rate:  1331.8 Hz, active area: 3226.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 14:   26658  --> hit rate:   833.1 Hz, active area: 2740.5 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 15:  194441  --> hit rate:  6076.3 Hz, active area: 3348.0 cm^2   -->  1.81 Hz/cm^2
    # digis in sector 16:   25663  --> hit rate:   802.0 Hz, active area: 2416.5 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 17:   40148  --> hit rate:  1254.6 Hz, active area: 2943.0 cm^2   -->  0.43 Hz/cm^2
    # digis in sector 18:   54746  --> hit rate:  1710.8 Hz, active area: 3469.5 cm^2   -->  0.49 Hz/cm^2
    # digis in sector 19:   29315  --> hit rate:   916.1 Hz, active area: 2349.0 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 20:   42131  --> hit rate:  1316.6 Hz, active area: 3766.5 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 21:   25759  --> hit rate:   805.0 Hz, active area: 756.0 cm^2   -->  1.06 Hz/cm^2
    # digis in sector 22:   13786  --> hit rate:   430.8 Hz, active area: 378.0 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 23:   60549  --> hit rate:  1892.2 Hz, active area: 2929.5 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 24:   46478  --> hit rate:  1452.4 Hz, active area: 2160.0 cm^2   -->  0.67 Hz/cm^2

     *** noise rate calculation in run 20058022 ***
    # events: 3816995, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   55251  --> hit rate:  1809.4 Hz, active area: 7533.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 14:   38178  --> hit rate:  1250.3 Hz, active area: 7479.0 cm^2   -->  0.17 Hz/cm^2
    # digis in sector 15:  565818  --> hit rate: 18529.6 Hz, active area: 7533.0 cm^2   -->  2.46 Hz/cm^2
    # digis in sector 16:  845701  --> hit rate: 27695.2 Hz, active area: 7533.0 cm^2   -->  3.68 Hz/cm^2
    # digis in sector 17:   70629  --> hit rate:  2313.0 Hz, active area: 7533.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 18:   79672  --> hit rate:  2609.1 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 19:   62022  --> hit rate:  2031.1 Hz, active area: 7533.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 20:   46885  --> hit rate:  1535.4 Hz, active area: 7114.5 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 21:  125332  --> hit rate:  4104.4 Hz, active area: 7533.0 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 22:  805393  --> hit rate: 26375.2 Hz, active area: 7533.0 cm^2   -->  3.50 Hz/cm^2
    # digis in sector 23:   82417  --> hit rate:  2699.0 Hz, active area: 7060.5 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 24:   85607  --> hit rate:  2803.5 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2

     *** noise rate calculation in run 20059053 ***
    # events: 3804819, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   65614  --> hit rate:  2155.6 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 14:   44050  --> hit rate:  1447.2 Hz, active area: 7479.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 15:  670123  --> hit rate: 22015.6 Hz, active area: 7533.0 cm^2   -->  2.92 Hz/cm^2
    # digis in sector 16:  843282  --> hit rate: 27704.4 Hz, active area: 7533.0 cm^2   -->  3.68 Hz/cm^2
    # digis in sector 17:   74638  --> hit rate:  2452.1 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 18:   85027  --> hit rate:  2793.4 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 19:   67816  --> hit rate:  2228.0 Hz, active area: 7533.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 20:   54732  --> hit rate:  1798.1 Hz, active area: 7114.5 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 21:  211831  --> hit rate:  6959.3 Hz, active area: 7533.0 cm^2   -->  0.92 Hz/cm^2
    # digis in sector 22: 1143035  --> hit rate: 37552.2 Hz, active area: 5940.0 cm^2   -->  6.32 Hz/cm^2
    # digis in sector 23:   87742  --> hit rate:  2882.6 Hz, active area: 7060.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 24:  100501  --> hit rate:  3301.8 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2

     *** noise rate calculation in run 20062032 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   36924  --> hit rate:  1153.9 Hz, active area: 6277.5 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 14:   30166  --> hit rate:   942.7 Hz, active area: 7479.0 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 15:  551184  --> hit rate: 17224.5 Hz, active area: 7533.0 cm^2   -->  2.29 Hz/cm^2
    # digis in sector 16:  776015  --> hit rate: 24250.5 Hz, active area: 7533.0 cm^2   -->  3.22 Hz/cm^2
    # digis in sector 17:   56473  --> hit rate:  1764.8 Hz, active area: 7533.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 18:   57289  --> hit rate:  1790.3 Hz, active area: 7533.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 19:   46916  --> hit rate:  1466.1 Hz, active area: 7533.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 20:   33575  --> hit rate:  1049.2 Hz, active area: 7114.5 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 21:  145137  --> hit rate:  4535.5 Hz, active area: 7533.0 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 22: 1284175  --> hit rate: 40130.5 Hz, active area: 7533.0 cm^2   -->  5.33 Hz/cm^2
    # digis in sector 23:   86303  --> hit rate:  2697.0 Hz, active area: 7060.5 cm^2   -->  0.38 Hz/cm^2
    # digis in sector 24:   76730  --> hit rate:  2397.8 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2

     *** noise rate calculation in run 20063026 ***
    # events: 1746373, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   30326  --> hit rate:  2170.6 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 14:   20013  --> hit rate:  1432.5 Hz, active area: 7479.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 15:  300791  --> hit rate: 21529.7 Hz, active area: 7533.0 cm^2   -->  2.86 Hz/cm^2
    # digis in sector 16:  445283  --> hit rate: 31872.0 Hz, active area: 7533.0 cm^2   -->  4.23 Hz/cm^2
    # digis in sector 17:   32936  --> hit rate:  2357.5 Hz, active area: 7533.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 18:   34600  --> hit rate:  2476.6 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 19:   29288  --> hit rate:  2096.3 Hz, active area: 7533.0 cm^2   -->  0.28 Hz/cm^2
    # digis in sector 20:   20797  --> hit rate:  1488.6 Hz, active area: 7114.5 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 21:  407559  --> hit rate: 29171.8 Hz, active area: 7533.0 cm^2   -->  3.87 Hz/cm^2
    # digis in sector 22: 1080903  --> hit rate: 77367.7 Hz, active area: 7533.0 cm^2   --> 10.27 Hz/cm^2
    # digis in sector 23:   57129  --> hit rate:  4089.1 Hz, active area: 7060.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 24:   47827  --> hit rate:  3423.3 Hz, active area: 7533.0 cm^2   -->  0.45 Hz/cm^2

     *** noise rate calculation in run 20065042 ***
    # events: 3815448, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   50932  --> hit rate:  1668.6 Hz, active area: 7533.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 14:   34918  --> hit rate:  1144.0 Hz, active area: 7479.0 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 15:  480258  --> hit rate: 15734.0 Hz, active area: 7533.0 cm^2   -->  2.09 Hz/cm^2
    # digis in sector 16:  686105  --> hit rate: 22477.9 Hz, active area: 7533.0 cm^2   -->  2.98 Hz/cm^2
    # digis in sector 17:   59199  --> hit rate:  1939.5 Hz, active area: 7533.0 cm^2   -->  0.26 Hz/cm^2
    # digis in sector 18:   66510  --> hit rate:  2179.0 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 19:   52207  --> hit rate:  1710.4 Hz, active area: 7533.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 20:   42402  --> hit rate:  1389.2 Hz, active area: 7114.5 cm^2   -->  0.20 Hz/cm^2
    # digis in sector 21:  104190  --> hit rate:  3413.4 Hz, active area: 5022.0 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 22:   56352  --> hit rate:  1846.2 Hz, active area: 2551.5 cm^2   -->  0.72 Hz/cm^2
    # digis in sector 23:   75815  --> hit rate:  2483.8 Hz, active area: 7060.5 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 24:   84127  --> hit rate:  2756.1 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2

     *** noise rate calculation in run 20066035 ***
    # events: 3834212, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   12392  --> hit rate:   404.0 Hz, active area: 7533.0 cm^2   -->  0.05 Hz/cm^2
    # digis in sector 14:    7924  --> hit rate:   258.3 Hz, active area: 7452.0 cm^2   -->  0.03 Hz/cm^2
    # digis in sector 15:  133509  --> hit rate:  4352.6 Hz, active area: 7533.0 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 16:  243933  --> hit rate:  7952.5 Hz, active area: 7533.0 cm^2   -->  1.06 Hz/cm^2
    # digis in sector 17:   20114  --> hit rate:   655.7 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 18:   20972  --> hit rate:   683.7 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 19:   16090  --> hit rate:   524.6 Hz, active area: 7533.0 cm^2   -->  0.07 Hz/cm^2
    # digis in sector 20:   11769  --> hit rate:   383.7 Hz, active area: 7087.5 cm^2   -->  0.05 Hz/cm^2
    # digis in sector 21:   17437  --> hit rate:   568.5 Hz, active area: 5022.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 22:  330621  --> hit rate: 10778.6 Hz, active area: 6102.0 cm^2   -->  1.77 Hz/cm^2
    # digis in sector 23:   22461  --> hit rate:   732.3 Hz, active area: 7060.5 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 24:   20939  --> hit rate:   682.6 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2

     *** noise rate calculation in run 20068038 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   24602  --> hit rate:   768.8 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 14:   13786  --> hit rate:   430.8 Hz, active area: 7479.0 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 15:  267530  --> hit rate:  8360.3 Hz, active area: 7533.0 cm^2   -->  1.11 Hz/cm^2
    # digis in sector 16:  451854  --> hit rate: 14120.4 Hz, active area: 7533.0 cm^2   -->  1.87 Hz/cm^2
    # digis in sector 17:   30687  --> hit rate:   959.0 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 18:   27037  --> hit rate:   844.9 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 19:   23005  --> hit rate:   718.9 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 20:   17663  --> hit rate:   552.0 Hz, active area: 7087.5 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:   55952  --> hit rate:  1748.5 Hz, active area: 7533.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 22:   19728  --> hit rate:   616.5 Hz, active area: 2511.0 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 23:   32880  --> hit rate:  1027.5 Hz, active area: 7060.5 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 24:   36568  --> hit rate:  1142.8 Hz, active area: 7533.0 cm^2   -->  0.15 Hz/cm^2

     *** noise rate calculation in run 20069039 ***
    # events: 3836763, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   25114  --> hit rate:   818.2 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 14:   15695  --> hit rate:   511.3 Hz, active area: 7479.0 cm^2   -->  0.07 Hz/cm^2
    # digis in sector 15:  272174  --> hit rate:  8867.3 Hz, active area: 7533.0 cm^2   -->  1.18 Hz/cm^2
    # digis in sector 16:  436977  --> hit rate: 14236.5 Hz, active area: 7533.0 cm^2   -->  1.89 Hz/cm^2
    # digis in sector 17:   32044  --> hit rate:  1044.0 Hz, active area: 7533.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 18:   29814  --> hit rate:   971.3 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 19:   25389  --> hit rate:   827.2 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 20:   19992  --> hit rate:   651.3 Hz, active area: 7087.5 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 21:   57763  --> hit rate:  1881.9 Hz, active area: 7533.0 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 22:  619873  --> hit rate: 20195.2 Hz, active area: 7182.0 cm^2   -->  2.81 Hz/cm^2
    # digis in sector 23:   36493  --> hit rate:  1188.9 Hz, active area: 7060.5 cm^2   -->  0.17 Hz/cm^2
    # digis in sector 24:   35993  --> hit rate:  1172.6 Hz, active area: 7533.0 cm^2   -->  0.16 Hz/cm^2

     *** noise rate calculation in run 20071033 ***
    # events: 3126618, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   19386  --> hit rate:   775.0 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 14:   12760  --> hit rate:   510.1 Hz, active area: 7479.0 cm^2   -->  0.07 Hz/cm^2
    # digis in sector 15:  207550  --> hit rate:  8297.7 Hz, active area: 7533.0 cm^2   -->  1.10 Hz/cm^2
    # digis in sector 16:  339374  --> hit rate: 13567.9 Hz, active area: 7533.0 cm^2   -->  1.80 Hz/cm^2
    # digis in sector 17:   24294  --> hit rate:   971.3 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 18:   15341  --> hit rate:   613.3 Hz, active area: 5022.0 cm^2   -->  0.12 Hz/cm^2
    # digis in sector 19:   19750  --> hit rate:   789.6 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 20:   17258  --> hit rate:   690.0 Hz, active area: 7087.5 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 21:   54180  --> hit rate:  2166.1 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 22: 1883829  --> hit rate: 75314.2 Hz, active area: 7128.0 cm^2   --> 10.57 Hz/cm^2
    # digis in sector 23:   27838  --> hit rate:  1112.9 Hz, active area: 7060.5 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 24:   27927  --> hit rate:  1116.5 Hz, active area: 7533.0 cm^2   -->  0.15 Hz/cm^2

     *** noise rate calculation in run 20072021 ***
    # events: 3842991, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   23820  --> hit rate:   774.8 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 14:   14889  --> hit rate:   484.3 Hz, active area: 7465.5 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 15:  264279  --> hit rate:  8596.1 Hz, active area: 7533.0 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 16:  414829  --> hit rate: 13493.0 Hz, active area: 7519.5 cm^2   -->  1.79 Hz/cm^2
    # digis in sector 17:   28011  --> hit rate:   911.1 Hz, active area: 7533.0 cm^2   -->  0.12 Hz/cm^2
    # digis in sector 18:   28679  --> hit rate:   932.8 Hz, active area: 7533.0 cm^2   -->  0.12 Hz/cm^2
    # digis in sector 19:   23959  --> hit rate:   779.3 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 20:   27202  --> hit rate:   884.8 Hz, active area: 5805.0 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 21:   59612  --> hit rate:  1939.0 Hz, active area: 7533.0 cm^2   -->  0.26 Hz/cm^2
    # digis in sector 22:  331327  --> hit rate: 10777.0 Hz, active area: 3780.0 cm^2   -->  2.85 Hz/cm^2
    # digis in sector 23:   33852  --> hit rate:  1101.1 Hz, active area: 7060.5 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 24:   33879  --> hit rate:  1102.0 Hz, active area: 7533.0 cm^2   -->  0.15 Hz/cm^2

     *** noise rate calculation in run 20077024 ***
    # events: 6222318, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   33957  --> hit rate:   682.2 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 14:   20675  --> hit rate:   415.3 Hz, active area: 7479.0 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 15:  400509  --> hit rate:  8045.8 Hz, active area: 7519.5 cm^2   -->  1.07 Hz/cm^2
    # digis in sector 16:  649066  --> hit rate: 13039.1 Hz, active area: 7533.0 cm^2   -->  1.73 Hz/cm^2
    # digis in sector 17:   43926  --> hit rate:   882.4 Hz, active area: 7533.0 cm^2   -->  0.12 Hz/cm^2
    # digis in sector 18:   42601  --> hit rate:   855.8 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 19:   35124  --> hit rate:   705.6 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 20:   30692  --> hit rate:   616.6 Hz, active area: 7114.5 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 21:   67574  --> hit rate:  1357.5 Hz, active area: 5737.5 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 22:   43016  --> hit rate:   864.1 Hz, active area: 3766.5 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 23:   49683  --> hit rate:   998.1 Hz, active area: 7060.5 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 24:   50124  --> hit rate:  1006.9 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2

     *** noise rate calculation in run 20079031 ***
    # events: 3916210, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   19987  --> hit rate:   638.0 Hz, active area: 7533.0 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 14:   12249  --> hit rate:   391.0 Hz, active area: 7479.0 cm^2   -->  0.05 Hz/cm^2
    # digis in sector 15:  235279  --> hit rate:  7509.8 Hz, active area: 7533.0 cm^2   -->  1.00 Hz/cm^2
    # digis in sector 16:  381011  --> hit rate: 12161.3 Hz, active area: 7533.0 cm^2   -->  1.61 Hz/cm^2
    # digis in sector 17:   26277  --> hit rate:   838.7 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 18:   26054  --> hit rate:   831.6 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 19:    8446  --> hit rate:   269.6 Hz, active area: 3766.5 cm^2   -->  0.07 Hz/cm^2
    # digis in sector 20:   17275  --> hit rate:   551.4 Hz, active area: 7087.5 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:   48725  --> hit rate:  1555.2 Hz, active area: 7533.0 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 22:   14502  --> hit rate:   462.9 Hz, active area: 2808.0 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 23:   29629  --> hit rate:   945.7 Hz, active area: 7060.5 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 24:   30363  --> hit rate:   969.1 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2

     *** noise rate calculation in run 20081028 ***
    # events: 3815230, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   33005  --> hit rate:  1081.4 Hz, active area: 7533.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 14:   20986  --> hit rate:   687.6 Hz, active area: 7479.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 15:  247561  --> hit rate:  8110.9 Hz, active area: 7533.0 cm^2   -->  1.08 Hz/cm^2
    # digis in sector 16:   44972  --> hit rate:  1473.4 Hz, active area: 5022.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 17:   36439  --> hit rate:  1193.9 Hz, active area: 7533.0 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 18:   21877  --> hit rate:   716.8 Hz, active area: 5022.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 19:   12936  --> hit rate:   423.8 Hz, active area: 3807.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 20:   24114  --> hit rate:   790.1 Hz, active area: 7114.5 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 21:  120801  --> hit rate:  3957.9 Hz, active area: 7533.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 22: 1083655  --> hit rate: 35504.2 Hz, active area: 5022.0 cm^2   -->  7.07 Hz/cm^2
    # digis in sector 23:   39945  --> hit rate:  1308.7 Hz, active area: 7060.5 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 24:   43181  --> hit rate:  1414.8 Hz, active area: 7533.0 cm^2   -->  0.19 Hz/cm^2

     *** noise rate calculation in run 20083038 ***
    # events: 3893736, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   23415  --> hit rate:   751.7 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 14:   13388  --> hit rate:   429.8 Hz, active area: 7479.0 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 15:  214336  --> hit rate:  6880.8 Hz, active area: 7533.0 cm^2   -->  0.91 Hz/cm^2
    # digis in sector 16:  353486  --> hit rate: 11347.9 Hz, active area: 7533.0 cm^2   -->  1.51 Hz/cm^2
    # digis in sector 17:   25874  --> hit rate:   830.6 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 18:   17518  --> hit rate:   562.4 Hz, active area: 5022.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 19:   21393  --> hit rate:   686.8 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 20:   11998  --> hit rate:   385.2 Hz, active area: 5022.0 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:   53777  --> hit rate:  1726.4 Hz, active area: 7533.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 22:   14515  --> hit rate:   466.0 Hz, active area: 2511.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 23:   31882  --> hit rate:  1023.5 Hz, active area: 7060.5 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 24:   31652  --> hit rate:  1016.1 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2

     *** noise rate calculation in run 20084021 ***
    # events: 3910030, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   23514  --> hit rate:   751.7 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 14:   13887  --> hit rate:   444.0 Hz, active area: 7479.0 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 15:  225636  --> hit rate:  7213.4 Hz, active area: 7533.0 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 16:  377472  --> hit rate: 12067.4 Hz, active area: 7533.0 cm^2   -->  1.60 Hz/cm^2
    # digis in sector 17:   28092  --> hit rate:   898.1 Hz, active area: 7533.0 cm^2   -->  0.12 Hz/cm^2
    # digis in sector 18:   17793  --> hit rate:   568.8 Hz, active area: 5022.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 19:   23165  --> hit rate:   740.6 Hz, active area: 7533.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 20:   18217  --> hit rate:   582.4 Hz, active area: 7114.5 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:   62790  --> hit rate:  2007.3 Hz, active area: 7533.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 22:   21034  --> hit rate:   672.4 Hz, active area: 3753.0 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 23:   31718  --> hit rate:  1014.0 Hz, active area: 7060.5 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 24:   32147  --> hit rate:  1027.7 Hz, active area: 7533.0 cm^2   -->  0.14 Hz/cm^2

     *** noise rate calculation in run 20085021 ***
    # events: 3931160, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   21108  --> hit rate:   671.2 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 14:   12501  --> hit rate:   397.5 Hz, active area: 7479.0 cm^2   -->  0.05 Hz/cm^2
    # digis in sector 15:  221491  --> hit rate:  7042.8 Hz, active area: 7533.0 cm^2   -->  0.93 Hz/cm^2
    # digis in sector 16:  366374  --> hit rate: 11649.7 Hz, active area: 7519.5 cm^2   -->  1.55 Hz/cm^2
    # digis in sector 17:   27220  --> hit rate:   865.5 Hz, active area: 7533.0 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 18:   16238  --> hit rate:   516.3 Hz, active area: 5022.0 cm^2   -->  0.10 Hz/cm^2
    # digis in sector 19:   20848  --> hit rate:   662.9 Hz, active area: 7533.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 20:   17076  --> hit rate:   543.0 Hz, active area: 7087.5 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:   53753  --> hit rate:  1709.2 Hz, active area: 7533.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 22:   15822  --> hit rate:   503.1 Hz, active area: 2835.0 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 23:   28874  --> hit rate:   918.1 Hz, active area: 7060.5 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 24:   31127  --> hit rate:   989.8 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2

     *** noise rate calculation in run 20086021 ***
    # events: 3931771, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   30846  --> hit rate:   980.7 Hz, active area: 7533.0 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 14:   20416  --> hit rate:   649.1 Hz, active area: 7479.0 cm^2   -->  0.09 Hz/cm^2
    # digis in sector 15:  343468  --> hit rate: 10919.6 Hz, active area: 7533.0 cm^2   -->  1.45 Hz/cm^2
    # digis in sector 16:  466636  --> hit rate: 14835.4 Hz, active area: 7533.0 cm^2   -->  1.97 Hz/cm^2
    # digis in sector 17:   40329  --> hit rate:  1282.2 Hz, active area: 7533.0 cm^2   -->  0.17 Hz/cm^2
    # digis in sector 18:   30600  --> hit rate:   972.8 Hz, active area: 5022.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 19:   33836  --> hit rate:  1075.7 Hz, active area: 7533.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 20:  199241  --> hit rate:  6334.3 Hz, active area: 7114.5 cm^2   -->  0.89 Hz/cm^2
    # digis in sector 21:   53065  --> hit rate:  1687.1 Hz, active area: 5022.0 cm^2   -->  0.34 Hz/cm^2
    # digis in sector 22:   26969  --> hit rate:   857.4 Hz, active area: 2511.0 cm^2   -->  0.34 Hz/cm^2
    # digis in sector 23:   50215  --> hit rate:  1596.4 Hz, active area: 7060.5 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 24:   52747  --> hit rate:  1676.9 Hz, active area: 7533.0 cm^2   -->  0.22 Hz/cm^2

     *** noise rate calculation in run 20087027 ***
    # events: 3452020, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   60027  --> hit rate:  2173.6 Hz, active area: 7533.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 14:   42999  --> hit rate:  1557.0 Hz, active area: 7479.0 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 15:  182480  --> hit rate:  6607.7 Hz, active area: 7533.0 cm^2   -->  0.88 Hz/cm^2
    # digis in sector 16:  300815  --> hit rate: 10892.7 Hz, active area: 7506.0 cm^2   -->  1.45 Hz/cm^2
    # digis in sector 17:   62261  --> hit rate:  2254.5 Hz, active area: 7533.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 18:   38095  --> hit rate:  1379.4 Hz, active area: 5022.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 19:   55776  --> hit rate:  2019.7 Hz, active area: 7533.0 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 20:   46798  --> hit rate:  1694.6 Hz, active area: 7114.5 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 21:   39240  --> hit rate:  1420.9 Hz, active area: 7533.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 22:   12020  --> hit rate:   435.3 Hz, active area: 2524.5 cm^2   -->  0.17 Hz/cm^2
    # digis in sector 23:   67851  --> hit rate:  2456.9 Hz, active area: 7060.5 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 24:   72859  --> hit rate:  2638.3 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2

     *** noise rate calculation in run 20088019 ***
    # events: 3898510, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   75863  --> hit rate:  2432.4 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 14:   57684  --> hit rate:  1849.6 Hz, active area: 7479.0 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 15:  202339  --> hit rate:  6487.7 Hz, active area: 7533.0 cm^2   -->  0.86 Hz/cm^2
    # digis in sector 16:  337914  --> hit rate: 10834.7 Hz, active area: 7533.0 cm^2   -->  1.44 Hz/cm^2
    # digis in sector 17:   81738  --> hit rate:  2620.8 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 18:   45849  --> hit rate:  1470.1 Hz, active area: 5022.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 19:   31502  --> hit rate:  1010.1 Hz, active area: 3766.5 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 20:   60227  --> hit rate:  1931.1 Hz, active area: 7114.5 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 21:   50704  --> hit rate:  1625.7 Hz, active area: 7533.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 22:   14743  --> hit rate:   472.7 Hz, active area: 2511.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 23:   87169  --> hit rate:  2794.9 Hz, active area: 7060.5 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 24:   93097  --> hit rate:  2985.0 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2

     *** noise rate calculation in run 20089023 ***
    # events: 3844835, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   74317  --> hit rate:  2416.1 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 14:   55117  --> hit rate:  1791.9 Hz, active area: 7479.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 15:  197409  --> hit rate:  6418.0 Hz, active area: 7533.0 cm^2   -->  0.85 Hz/cm^2
    # digis in sector 16:  330331  --> hit rate: 10739.4 Hz, active area: 7533.0 cm^2   -->  1.43 Hz/cm^2
    # digis in sector 17:   79994  --> hit rate:  2600.7 Hz, active area: 7533.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 18:   45364  --> hit rate:  1474.8 Hz, active area: 5022.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 19:   71511  --> hit rate:  2324.9 Hz, active area: 7533.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 20:   57725  --> hit rate:  1876.7 Hz, active area: 7114.5 cm^2   -->  0.26 Hz/cm^2
    # digis in sector 21:   34465  --> hit rate:  1120.5 Hz, active area: 5022.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 22:   14805  --> hit rate:   481.3 Hz, active area: 2538.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 23:   84354  --> hit rate:  2742.4 Hz, active area: 7060.5 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 24:   89831  --> hit rate:  2920.5 Hz, active area: 7533.0 cm^2   -->  0.39 Hz/cm^2

     *** noise rate calculation in run 20090038 ***
    # events: 3938681, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   87048  --> hit rate:  2762.6 Hz, active area: 7533.0 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 14:   66440  --> hit rate:  2108.6 Hz, active area: 7479.0 cm^2   -->  0.28 Hz/cm^2
    # digis in sector 15:  255037  --> hit rate:  8094.0 Hz, active area: 7533.0 cm^2   -->  1.07 Hz/cm^2
    # digis in sector 16:  377060  --> hit rate: 11966.6 Hz, active area: 7533.0 cm^2   -->  1.59 Hz/cm^2
    # digis in sector 17:   94342  --> hit rate:  2994.1 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 18:   55726  --> hit rate:  1768.5 Hz, active area: 5022.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 19:   85705  --> hit rate:  2720.0 Hz, active area: 7533.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 20:   65670  --> hit rate:  2084.1 Hz, active area: 6277.5 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 21:   37276  --> hit rate:  1183.0 Hz, active area: 2511.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 22:   20587  --> hit rate:   653.4 Hz, active area: 2632.5 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 23:   97378  --> hit rate:  3090.4 Hz, active area: 7060.5 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 24:  104768  --> hit rate:  3325.0 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2

     *** noise rate calculation in run 20092048 ***
    # events: 3911676, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   77682  --> hit rate:  2482.4 Hz, active area: 7533.0 cm^2   -->  0.33 Hz/cm^2
    # digis in sector 14:   59159  --> hit rate:  1890.5 Hz, active area: 7479.0 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 15:  181295  --> hit rate:  5793.4 Hz, active area: 7533.0 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 16:  296702  --> hit rate:  9481.3 Hz, active area: 7506.0 cm^2   -->  1.26 Hz/cm^2
    # digis in sector 17:   84829  --> hit rate:  2710.8 Hz, active area: 7533.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 18:   74074  --> hit rate:  2367.1 Hz, active area: 7533.0 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 19:   75518  --> hit rate:  2413.2 Hz, active area: 7533.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 20:   65478  --> hit rate:  2092.4 Hz, active area: 7114.5 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 21:   44704  --> hit rate:  1428.5 Hz, active area: 7533.0 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 22: 4918299  --> hit rate: 157167.2 Hz, active area: 6345.0 cm^2   --> 24.77 Hz/cm^2
    # digis in sector 23:   87552  --> hit rate:  2797.8 Hz, active area: 7060.5 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 24:   91993  --> hit rate:  2939.7 Hz, active area: 7533.0 cm^2   -->  0.39 Hz/cm^2

     *** noise rate calculation in run 20095024 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   97102  --> hit rate:  3034.4 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 14:   75811  --> hit rate:  2369.1 Hz, active area: 7479.0 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 15:  487822  --> hit rate: 15244.4 Hz, active area: 2511.0 cm^2   -->  6.07 Hz/cm^2
    # digis in sector 16: 2939633  --> hit rate: 91863.5 Hz, active area: 7533.0 cm^2   --> 12.19 Hz/cm^2
    # digis in sector 17:  112502  --> hit rate:  3515.7 Hz, active area: 7519.5 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 18:   93709  --> hit rate:  2928.4 Hz, active area: 7533.0 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 19:  114142  --> hit rate:  3566.9 Hz, active area: 7533.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 20:   90453  --> hit rate:  2826.7 Hz, active area: 7114.5 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 21:  105922  --> hit rate:  3310.1 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 22: 1308456  --> hit rate: 40889.2 Hz, active area: 6412.5 cm^2   -->  6.38 Hz/cm^2
    # digis in sector 23:  114177  --> hit rate:  3568.0 Hz, active area: 7060.5 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 24:  118272  --> hit rate:  3696.0 Hz, active area: 7533.0 cm^2   -->  0.49 Hz/cm^2

     *** noise rate calculation in run 20097029 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  109603  --> hit rate:  3425.1 Hz, active area: 7533.0 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 14:   85594  --> hit rate:  2674.8 Hz, active area: 7479.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 15: 1004823  --> hit rate: 31400.7 Hz, active area: 6277.5 cm^2   -->  5.00 Hz/cm^2
    # digis in sector 16: 3102580  --> hit rate: 96955.6 Hz, active area: 7533.0 cm^2   --> 12.87 Hz/cm^2
    # digis in sector 17:  125097  --> hit rate:  3909.3 Hz, active area: 7533.0 cm^2   -->  0.52 Hz/cm^2
    # digis in sector 18:  105498  --> hit rate:  3296.8 Hz, active area: 7533.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 19:  134232  --> hit rate:  4194.8 Hz, active area: 7533.0 cm^2   -->  0.56 Hz/cm^2
    # digis in sector 20:  102620  --> hit rate:  3206.9 Hz, active area: 7114.5 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 21:  135462  --> hit rate:  4233.2 Hz, active area: 7533.0 cm^2   -->  0.56 Hz/cm^2
    # digis in sector 22:   28684  --> hit rate:   896.4 Hz, active area: 3469.5 cm^2   -->  0.26 Hz/cm^2
    # digis in sector 23:  124454  --> hit rate:  3889.2 Hz, active area: 7060.5 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 24:  134817  --> hit rate:  4213.0 Hz, active area: 7533.0 cm^2   -->  0.56 Hz/cm^2

     *** noise rate calculation in run 20098034 ***
    # events: 3747444, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  216014  --> hit rate:  7205.4 Hz, active area: 7533.0 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 14:  173923  --> hit rate:  5801.4 Hz, active area: 7479.0 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 15: 2209445  --> hit rate: 73698.4 Hz, active area: 5022.0 cm^2   --> 14.68 Hz/cm^2
    # digis in sector 16: 5166266  --> hit rate: 172326.3 Hz, active area: 7533.0 cm^2   --> 22.88 Hz/cm^2
    # digis in sector 17:  251961  --> hit rate:  8404.4 Hz, active area: 7533.0 cm^2   -->  1.12 Hz/cm^2
    # digis in sector 18:  155989  --> hit rate:  5203.2 Hz, active area: 5022.0 cm^2   -->  1.04 Hz/cm^2
    # digis in sector 19:  309828  --> hit rate: 10334.6 Hz, active area: 7533.0 cm^2   -->  1.37 Hz/cm^2
    # digis in sector 20:  226815  --> hit rate:  7565.7 Hz, active area: 7114.5 cm^2   -->  1.06 Hz/cm^2
    # digis in sector 21:  452835  --> hit rate: 15104.8 Hz, active area: 5022.0 cm^2   -->  3.01 Hz/cm^2
    # digis in sector 22:  270415  --> hit rate:  9020.0 Hz, active area: 3766.5 cm^2   -->  2.39 Hz/cm^2
    # digis in sector 23:  251961  --> hit rate:  8404.4 Hz, active area: 7060.5 cm^2   -->  1.19 Hz/cm^2
    # digis in sector 24:  275865  --> hit rate:  9201.8 Hz, active area: 7533.0 cm^2   -->  1.22 Hz/cm^2

     *** noise rate calculation in run 20100042 ***
    # events: 3934158, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  330290  --> hit rate: 10494.3 Hz, active area: 7411.5 cm^2   -->  1.42 Hz/cm^2
    # digis in sector 14: 19286137  --> hit rate: 612778.4 Hz, active area: 7303.5 cm^2   --> 83.90 Hz/cm^2
    # digis in sector 15: 1290534  --> hit rate: 41004.1 Hz, active area: 7533.0 cm^2   -->  5.44 Hz/cm^2
    # digis in sector 16: 3188186  --> hit rate: 101298.2 Hz, active area: 7533.0 cm^2   --> 13.45 Hz/cm^2
    # digis in sector 17:  140207  --> hit rate:  4454.8 Hz, active area: 7465.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:  111718  --> hit rate:  3549.6 Hz, active area: 7452.0 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 19:  147642  --> hit rate:  4691.0 Hz, active area: 7425.0 cm^2   -->  0.63 Hz/cm^2
    # digis in sector 20:  169822  --> hit rate:  5395.8 Hz, active area: 7060.5 cm^2   -->  0.76 Hz/cm^2
    # digis in sector 21:  128477  --> hit rate:  4082.1 Hz, active area: 5022.0 cm^2   -->  0.81 Hz/cm^2
    # digis in sector 22:   43007  --> hit rate:  1366.5 Hz, active area: 2821.5 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 23: 5499283  --> hit rate: 174728.7 Hz, active area: 7033.5 cm^2   --> 24.84 Hz/cm^2
    # digis in sector 24:  136217  --> hit rate:  4328.0 Hz, active area: 7506.0 cm^2   -->  0.58 Hz/cm^2

     *** noise rate calculation in run 20101027 ***
    # events: 2468702, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  105287  --> hit rate:  5331.1 Hz, active area: 7398.0 cm^2   -->  0.72 Hz/cm^2
    # digis in sector 14: 12216280  --> hit rate: 618557.9 Hz, active area: 7290.0 cm^2   --> 84.85 Hz/cm^2
    # digis in sector 15:  694874  --> hit rate: 35184.2 Hz, active area: 7533.0 cm^2   -->  4.67 Hz/cm^2
    # digis in sector 16: 1979180  --> hit rate: 100213.6 Hz, active area: 7533.0 cm^2   --> 13.30 Hz/cm^2
    # digis in sector 17:   85961  --> hit rate:  4352.5 Hz, active area: 7465.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 18:   69745  --> hit rate:  3531.5 Hz, active area: 7452.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:   87832  --> hit rate:  4447.3 Hz, active area: 7425.0 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 20:   69409  --> hit rate:  3514.4 Hz, active area: 7060.5 cm^2   -->  0.50 Hz/cm^2
    # digis in sector 21:   92597  --> hit rate:  4688.5 Hz, active area: 7533.0 cm^2   -->  0.62 Hz/cm^2
    # digis in sector 22:   24885  --> hit rate:  1260.0 Hz, active area: 3685.5 cm^2   -->  0.34 Hz/cm^2
    # digis in sector 23: 3012916  --> hit rate: 152555.7 Hz, active area: 7020.0 cm^2   --> 21.73 Hz/cm^2
    # digis in sector 24: 1624865  --> hit rate: 82273.2 Hz, active area: 7519.5 cm^2   --> 10.94 Hz/cm^2

     *** noise rate calculation in run 20104033 ***
    # events: 3861860, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  123340  --> hit rate:  3992.2 Hz, active area: 4968.0 cm^2   -->  0.80 Hz/cm^2
    # digis in sector 14:  137663  --> hit rate:  4455.9 Hz, active area: 6939.0 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 15: 17741170  --> hit rate: 574243.1 Hz, active area: 7492.5 cm^2   --> 76.64 Hz/cm^2
    # digis in sector 16: 9162022  --> hit rate: 296554.7 Hz, active area: 7519.5 cm^2   --> 39.44 Hz/cm^2
    # digis in sector 17:  211537  --> hit rate:  6847.0 Hz, active area: 7182.0 cm^2   -->  0.95 Hz/cm^2
    # digis in sector 18:  184665  --> hit rate:  5977.2 Hz, active area: 7249.5 cm^2   -->  0.82 Hz/cm^2
    # digis in sector 19:   87891  --> hit rate:  2844.8 Hz, active area: 3618.0 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 20:  209079  --> hit rate:  6767.4 Hz, active area: 6939.0 cm^2   -->  0.98 Hz/cm^2
    # digis in sector 21: 7274862  --> hit rate: 235471.4 Hz, active area: 7533.0 cm^2   --> 31.26 Hz/cm^2
    # digis in sector 22: 2474158  --> hit rate: 80083.1 Hz, active area: 2619.0 cm^2   --> 30.58 Hz/cm^2
    # digis in sector 23:  288045  --> hit rate:  9323.4 Hz, active area: 6709.5 cm^2   -->  1.39 Hz/cm^2
    # digis in sector 24:  229860  --> hit rate:  7440.1 Hz, active area: 7074.0 cm^2   -->  1.05 Hz/cm^2

     *** noise rate calculation in run 20106023 ***
    # events: 1264767, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:    4918  --> hit rate:   486.1 Hz, active area: 1633.5 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 14:     111  --> hit rate:    11.0 Hz, active area: 1120.5 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 15:    8958  --> hit rate:   885.3 Hz, active area: 823.5 cm^2   -->  1.08 Hz/cm^2
    # digis in sector 16:      53  --> hit rate:     5.2 Hz, active area: 391.5 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 17:     454  --> hit rate:    44.9 Hz, active area: 2146.5 cm^2   -->  0.02 Hz/cm^2
    # digis in sector 18:     756  --> hit rate:    74.7 Hz, active area: 2295.0 cm^2   -->  0.03 Hz/cm^2
    # digis in sector 19:    1155  --> hit rate:   114.2 Hz, active area: 2065.5 cm^2   -->  0.06 Hz/cm^2
    # digis in sector 20:    1360  --> hit rate:   134.4 Hz, active area: 2767.5 cm^2   -->  0.05 Hz/cm^2
    # digis in sector 21:       7  --> hit rate:     0.7 Hz, active area:  94.5 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 22:       2  --> hit rate:     0.2 Hz, active area:  13.5 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 23:    1108  --> hit rate:   109.5 Hz, active area: 2619.0 cm^2   -->  0.04 Hz/cm^2
    # digis in sector 24:    1037  --> hit rate:   102.5 Hz, active area: 2929.5 cm^2   -->  0.03 Hz/cm^2

     *** noise rate calculation in run 20106024 ***
    # events: 1155507, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:    2962  --> hit rate:   320.4 Hz, active area: 1984.5 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 14:      95  --> hit rate:    10.3 Hz, active area: 891.0 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 15:   13202  --> hit rate:  1428.2 Hz, active area: 796.5 cm^2   -->  1.79 Hz/cm^2
    # digis in sector 16:      39  --> hit rate:     4.2 Hz, active area: 405.0 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 17:     376  --> hit rate:    40.7 Hz, active area: 2254.5 cm^2   -->  0.02 Hz/cm^2
    # digis in sector 18:     693  --> hit rate:    75.0 Hz, active area: 2956.5 cm^2   -->  0.03 Hz/cm^2
    # digis in sector 19:    1146  --> hit rate:   124.0 Hz, active area: 1512.0 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 20:    1938  --> hit rate:   209.6 Hz, active area: 2578.5 cm^2   -->  0.08 Hz/cm^2
    # digis in sector 21:    9183  --> hit rate:   993.4 Hz, active area: 297.0 cm^2   -->  3.34 Hz/cm^2
    # digis in sector 22:    3561  --> hit rate:   385.2 Hz, active area: 108.0 cm^2   -->  3.57 Hz/cm^2
    # digis in sector 23:    1033  --> hit rate:   111.7 Hz, active area: 2686.5 cm^2   -->  0.04 Hz/cm^2
    # digis in sector 24:     899  --> hit rate:    97.3 Hz, active area: 2808.0 cm^2   -->  0.03 Hz/cm^2

     *** noise rate calculation in run 20116024 ***
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  218373  --> hit rate:  6824.2 Hz, active area: 6885.0 cm^2   -->  0.99 Hz/cm^2
    # digis in sector 14: 1047229  --> hit rate: 32725.9 Hz, active area: 6331.5 cm^2   -->  5.17 Hz/cm^2
    # digis in sector 15: 12456581  --> hit rate: 389268.2 Hz, active area: 7249.5 cm^2   --> 53.70 Hz/cm^2
    # digis in sector 16: 6346179  --> hit rate: 198318.1 Hz, active area: 7438.5 cm^2   --> 26.66 Hz/cm^2
    # digis in sector 17:  132108  --> hit rate:  4128.4 Hz, active area: 4576.5 cm^2   -->  0.90 Hz/cm^2
    # digis in sector 18:  221368  --> hit rate:  6917.8 Hz, active area: 6925.5 cm^2   -->  1.00 Hz/cm^2
    # digis in sector 19:  563080  --> hit rate: 17596.2 Hz, active area: 6736.5 cm^2   -->  2.61 Hz/cm^2
    # digis in sector 20:  414069  --> hit rate: 12939.7 Hz, active area: 6547.5 cm^2   -->  1.98 Hz/cm^2
    # digis in sector 21: 4448681  --> hit rate: 139021.3 Hz, active area: 5602.5 cm^2   --> 24.81 Hz/cm^2
    # digis in sector 22: 10870065  --> hit rate: 339689.5 Hz, active area: 4941.0 cm^2   --> 68.75 Hz/cm^2
    # digis in sector 23:  256965  --> hit rate:  8030.2 Hz, active area: 6412.5 cm^2   -->  1.25 Hz/cm^2
    # digis in sector 24: 2794429  --> hit rate: 87325.9 Hz, active area: 5791.5 cm^2   --> 15.08 Hz/cm^2

     *** noise rate calculation in run 20117044 ***
    # events: 3925851, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  220357  --> hit rate:  7016.2 Hz, active area: 7101.0 cm^2   -->  0.99 Hz/cm^2
    # digis in sector 14:  272896  --> hit rate:  8689.1 Hz, active area: 6318.0 cm^2   -->  1.38 Hz/cm^2
    # digis in sector 15: 10163165  --> hit rate: 323597.5 Hz, active area: 7249.5 cm^2   --> 44.64 Hz/cm^2
    # digis in sector 16: 5714119  --> hit rate: 181938.9 Hz, active area: 7438.5 cm^2   --> 24.46 Hz/cm^2
    # digis in sector 17:  249966  --> hit rate:  7959.0 Hz, active area: 6628.5 cm^2   -->  1.20 Hz/cm^2
    # digis in sector 18:  180928  --> hit rate:  5760.8 Hz, active area: 7006.5 cm^2   -->  0.82 Hz/cm^2
    # digis in sector 19:   86040  --> hit rate:  2739.5 Hz, active area: 3334.5 cm^2   -->  0.82 Hz/cm^2
    # digis in sector 20:  161091  --> hit rate:  5129.2 Hz, active area: 6547.5 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 21: 2727862  --> hit rate: 86855.8 Hz, active area: 3361.5 cm^2   --> 25.84 Hz/cm^2
    # digis in sector 22: 8795133  --> hit rate: 280039.1 Hz, active area: 5022.0 cm^2   --> 55.76 Hz/cm^2
    # digis in sector 23:  218920  --> hit rate:  6970.5 Hz, active area: 6412.5 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 24:  505388  --> hit rate: 16091.7 Hz, active area: 6790.5 cm^2   -->  2.37 Hz/cm^2

     *** noise rate calculation in run 20119029 ***
    # events: 12182528, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  312225  --> hit rate:  3203.6 Hz, active area: 4806.0 cm^2   -->  0.67 Hz/cm^2
    # digis in sector 14:  421065  --> hit rate:  4320.4 Hz, active area: 6345.0 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 15: 22276007  --> hit rate: 228565.1 Hz, active area: 7249.5 cm^2   --> 31.53 Hz/cm^2
    # digis in sector 16: 13907415  --> hit rate: 142698.4 Hz, active area: 7438.5 cm^2   --> 19.18 Hz/cm^2
    # digis in sector 17:  640509  --> hit rate:  6572.0 Hz, active area: 6871.5 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 18:  499382  --> hit rate:  5124.0 Hz, active area: 7047.0 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 19: 1081434  --> hit rate: 11096.2 Hz, active area: 6736.5 cm^2   -->  1.65 Hz/cm^2
    # digis in sector 20:  901025  --> hit rate:  9245.1 Hz, active area: 6493.5 cm^2   -->  1.42 Hz/cm^2
    # digis in sector 21: 4262591  --> hit rate: 43736.7 Hz, active area: 6817.5 cm^2   -->  6.42 Hz/cm^2
    # digis in sector 22: 8493433  --> hit rate: 87147.7 Hz, active area: 4927.5 cm^2   --> 17.69 Hz/cm^2
    # digis in sector 23:  565853  --> hit rate:  5806.0 Hz, active area: 6345.0 cm^2   -->  0.92 Hz/cm^2
    # digis in sector 24:  973206  --> hit rate:  9985.7 Hz, active area: 6858.0 cm^2   -->  1.46 Hz/cm^2

     *** noise rate calculation in run 20120023 ***
    # events: 3738581, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  128427  --> hit rate:  4294.0 Hz, active area: 4806.0 cm^2   -->  0.89 Hz/cm^2
    # digis in sector 14:  176057  --> hit rate:  5886.5 Hz, active area: 6264.0 cm^2   -->  0.94 Hz/cm^2
    # digis in sector 15: 7892103  --> hit rate: 263873.6 Hz, active area: 7249.5 cm^2   --> 36.40 Hz/cm^2
    # digis in sector 16: 4618371  --> hit rate: 154415.9 Hz, active area: 7317.0 cm^2   --> 21.10 Hz/cm^2
    # digis in sector 17:  198219  --> hit rate:  6627.5 Hz, active area: 6763.5 cm^2   -->  0.98 Hz/cm^2
    # digis in sector 18:  169257  --> hit rate:  5659.1 Hz, active area: 6966.0 cm^2   -->  0.81 Hz/cm^2
    # digis in sector 19:  178852  --> hit rate:  5979.9 Hz, active area: 6561.0 cm^2   -->  0.91 Hz/cm^2
    # digis in sector 20:  177054  --> hit rate:  5919.8 Hz, active area: 6466.5 cm^2   -->  0.92 Hz/cm^2
    # digis in sector 21: 1077930  --> hit rate: 36040.7 Hz, active area: 6858.0 cm^2   -->  5.26 Hz/cm^2
    # digis in sector 22: 2185695  --> hit rate: 73079.0 Hz, active area: 4927.5 cm^2   --> 14.83 Hz/cm^2
    # digis in sector 23:  187750  --> hit rate:  6277.4 Hz, active area: 6345.0 cm^2   -->  0.99 Hz/cm^2
    # digis in sector 24:  321152  --> hit rate: 10737.8 Hz, active area: 6844.5 cm^2   -->  1.57 Hz/cm^2

     *** noise rate calculation in run 20120024 ***
    # events: 10202648, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  328935  --> hit rate:  4030.0 Hz, active area: 4792.5 cm^2   -->  0.84 Hz/cm^2
    # digis in sector 14:  884412  --> hit rate: 10835.6 Hz, active area: 6291.0 cm^2   -->  1.72 Hz/cm^2
    # digis in sector 15: 22100786  --> hit rate: 270772.7 Hz, active area: 7263.0 cm^2   --> 37.28 Hz/cm^2
    # digis in sector 16: 12957240  --> hit rate: 158748.5 Hz, active area: 7330.5 cm^2   --> 21.66 Hz/cm^2
    # digis in sector 17:  557780  --> hit rate:  6833.8 Hz, active area: 6804.0 cm^2   -->  1.00 Hz/cm^2
    # digis in sector 18:  480994  --> hit rate:  5893.0 Hz, active area: 7006.5 cm^2   -->  0.84 Hz/cm^2
    # digis in sector 19:  511254  --> hit rate:  6263.7 Hz, active area: 6601.5 cm^2   -->  0.95 Hz/cm^2
    # digis in sector 20:  501474  --> hit rate:  6143.9 Hz, active area: 6520.5 cm^2   -->  0.94 Hz/cm^2
    # digis in sector 21: 3124549  --> hit rate: 38281.1 Hz, active area: 6858.0 cm^2   -->  5.58 Hz/cm^2
    # digis in sector 22: 6109208  --> hit rate: 74848.3 Hz, active area: 4927.5 cm^2   --> 15.19 Hz/cm^2
    # digis in sector 23:  533336  --> hit rate:  6534.3 Hz, active area: 6358.5 cm^2   -->  1.03 Hz/cm^2
    # digis in sector 24:  916262  --> hit rate: 11225.8 Hz, active area: 6871.5 cm^2   -->  1.63 Hz/cm^2

     *** noise rate calculation in run 20122029 ***
    # events: 7836709, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  161948  --> hit rate:  2583.2 Hz, active area: 4225.5 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 14:  156788  --> hit rate:  2500.9 Hz, active area: 3753.0 cm^2   -->  0.67 Hz/cm^2
    # digis in sector 15: 8757307  --> hit rate: 139684.1 Hz, active area: 5656.5 cm^2   --> 24.69 Hz/cm^2
    # digis in sector 16: 5885666  --> hit rate: 93879.8 Hz, active area: 6520.5 cm^2   --> 14.40 Hz/cm^2
    # digis in sector 17: 1224937  --> hit rate: 19538.4 Hz, active area: 4995.0 cm^2   -->  3.91 Hz/cm^2
    # digis in sector 18: 3192053  --> hit rate: 50915.1 Hz, active area: 5265.0 cm^2   -->  9.67 Hz/cm^2
    # digis in sector 19:   65094  --> hit rate:  1038.3 Hz, active area: 2524.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 20:  510034  --> hit rate:  8135.3 Hz, active area: 5076.0 cm^2   -->  1.60 Hz/cm^2
    # digis in sector 21: 4028331  --> hit rate: 64254.2 Hz, active area: 5791.5 cm^2   --> 11.09 Hz/cm^2
    # digis in sector 22: 7513563  --> hit rate: 119845.6 Hz, active area: 4387.5 cm^2   --> 27.32 Hz/cm^2
    # digis in sector 23:  167412  --> hit rate:  2670.3 Hz, active area: 3402.0 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 24:  204359  --> hit rate:  3259.6 Hz, active area: 3591.0 cm^2   -->  0.91 Hz/cm^2


     *** noise rate calculation in run 20123021 ***
    # events: 3324743, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   60435  --> hit rate:  2272.2 Hz, active area: 3847.5 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 14:   53571  --> hit rate:  2014.1 Hz, active area: 3240.0 cm^2   -->  0.62 Hz/cm^2
    # digis in sector 15: 3505885  --> hit rate: 131810.4 Hz, active area: 5697.0 cm^2   --> 23.14 Hz/cm^2
    # digis in sector 16: 2264735  --> hit rate: 85147.0 Hz, active area: 6520.5 cm^2   --> 13.06 Hz/cm^2
    # digis in sector 17:  287522  --> hit rate: 10809.9 Hz, active area: 4630.5 cm^2   -->  2.33 Hz/cm^2
    # digis in sector 18:   87653  --> hit rate:  3295.5 Hz, active area: 4806.0 cm^2   -->  0.69 Hz/cm^2
    # digis in sector 19:   81148  --> hit rate:  3050.9 Hz, active area: 4671.0 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 20:  991155  --> hit rate: 37264.3 Hz, active area: 4860.0 cm^2   -->  7.67 Hz/cm^2
    # digis in sector 21: 1133125  --> hit rate: 42602.0 Hz, active area: 5778.0 cm^2   -->  7.37 Hz/cm^2
    # digis in sector 22: 1727401  --> hit rate: 64944.9 Hz, active area: 4320.0 cm^2   --> 15.03 Hz/cm^2
    # digis in sector 23:  571145  --> hit rate: 21473.3 Hz, active area: 3928.5 cm^2   -->  5.47 Hz/cm^2
    # digis in sector 24: 4148993  --> hit rate: 155989.2 Hz, active area: 4900.5 cm^2   --> 31.83 Hz/cm^2


     *** noise rate calculation in run 20126033 ***
    # events: 3408273, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   43430  --> hit rate:  1592.8 Hz, active area: 3928.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 14:   49880  --> hit rate:  1829.4 Hz, active area: 3334.5 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 15: 3418710  --> hit rate: 125382.8 Hz, active area: 5724.0 cm^2   --> 21.90 Hz/cm^2
    # digis in sector 16: 1764264  --> hit rate: 64705.2 Hz, active area: 6372.0 cm^2   --> 10.15 Hz/cm^2
    # digis in sector 17:   61074  --> hit rate:  2239.9 Hz, active area: 4374.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 18:   51958  --> hit rate:  1905.6 Hz, active area: 4630.5 cm^2   -->  0.41 Hz/cm^2
    # digis in sector 19:  389626  --> hit rate: 14289.7 Hz, active area: 4509.0 cm^2   -->  3.17 Hz/cm^2
    # digis in sector 20:  136420  --> hit rate:  5003.3 Hz, active area: 4603.5 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 21:  770217  --> hit rate: 28248.1 Hz, active area: 5764.5 cm^2   -->  4.90 Hz/cm^2
    # digis in sector 22: 1088926  --> hit rate: 39936.9 Hz, active area: 4320.0 cm^2   -->  9.24 Hz/cm^2
    # digis in sector 23:   51134  --> hit rate:  1875.4 Hz, active area: 3955.5 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 24:  101904  --> hit rate:  3737.4 Hz, active area: 4684.5 cm^2   -->  0.80 Hz/cm^2

     *** noise rate calculation in run 20133026 ***
    # events: 3549227, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   29435  --> hit rate:  1036.7 Hz, active area: 3496.5 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 14:  196872  --> hit rate:  6933.6 Hz, active area: 2443.5 cm^2   -->  2.84 Hz/cm^2
    # digis in sector 15:  821912  --> hit rate: 28946.9 Hz, active area: 5521.5 cm^2   -->  5.24 Hz/cm^2
    # digis in sector 16:  374629  --> hit rate: 13194.0 Hz, active area: 6358.5 cm^2   -->  2.08 Hz/cm^2
    # digis in sector 17:  832439  --> hit rate: 29317.6 Hz, active area: 4914.0 cm^2   -->  5.97 Hz/cm^2
    # digis in sector 18:   44065  --> hit rate:  1551.9 Hz, active area: 4144.5 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 19: 1190335  --> hit rate: 41922.3 Hz, active area: 4185.0 cm^2   --> 10.02 Hz/cm^2
    # digis in sector 20:  234215  --> hit rate:  8248.8 Hz, active area: 4414.5 cm^2   -->  1.87 Hz/cm^2
    # digis in sector 21:  729672  --> hit rate: 25698.3 Hz, active area: 5494.5 cm^2   -->  4.68 Hz/cm^2
    # digis in sector 22:  712069  --> hit rate: 25078.3 Hz, active area: 4185.0 cm^2   -->  5.99 Hz/cm^2
    # digis in sector 23:  130036  --> hit rate:  4579.7 Hz, active area: 3645.0 cm^2   -->  1.26 Hz/cm^2
    # digis in sector 24:   85024  --> hit rate:  2994.5 Hz, active area: 2673.0 cm^2   -->  1.12 Hz/cm^2

     *** noise rate calculation in run 20135016 ***
    # events: 3827723, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   24897  --> hit rate:   813.0 Hz, active area: 3456.0 cm^2   -->  0.24 Hz/cm^2
    # digis in sector 14:   34756  --> hit rate:  1135.0 Hz, active area: 2497.5 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 15:  853370  --> hit rate: 27868.1 Hz, active area: 5332.5 cm^2   -->  5.23 Hz/cm^2
    # digis in sector 16:  342639  --> hit rate: 11189.4 Hz, active area: 6318.0 cm^2   -->  1.77 Hz/cm^2
    # digis in sector 17:   29836  --> hit rate:   974.3 Hz, active area: 3064.5 cm^2   -->  0.32 Hz/cm^2
    # digis in sector 18:   20649  --> hit rate:   674.3 Hz, active area: 3793.5 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 19:   18548  --> hit rate:   605.7 Hz, active area: 4144.5 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 20:   62012  --> hit rate:  2025.1 Hz, active area: 4266.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 21:  225310  --> hit rate:  7357.8 Hz, active area: 5359.5 cm^2   -->  1.37 Hz/cm^2
    # digis in sector 22:  435681  --> hit rate: 14227.8 Hz, active area: 4023.0 cm^2   -->  3.54 Hz/cm^2
    # digis in sector 23:   22685  --> hit rate:   740.8 Hz, active area: 3537.0 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 24:   32509  --> hit rate:  1061.6 Hz, active area: 3834.0 cm^2   -->  0.28 Hz/cm^2

     *** noise rate calculation in run 20136015 ***
    # events: 3821385, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   28672  --> hit rate:   937.9 Hz, active area: 3469.5 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 14:  170043  --> hit rate:  5562.2 Hz, active area: 2416.5 cm^2   -->  2.30 Hz/cm^2
    # digis in sector 15:  982173  --> hit rate: 32127.5 Hz, active area: 5643.0 cm^2   -->  5.69 Hz/cm^2
    # digis in sector 16:  305359  --> hit rate:  9988.5 Hz, active area: 6264.0 cm^2   -->  1.59 Hz/cm^2
    # digis in sector 17:   49417  --> hit rate:  1616.5 Hz, active area: 4063.5 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 18:   37685  --> hit rate:  1232.7 Hz, active area: 4131.0 cm^2   -->  0.30 Hz/cm^2
    # digis in sector 19:   18196  --> hit rate:   595.2 Hz, active area: 4050.0 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 20:  328256  --> hit rate: 10737.5 Hz, active area: 3145.5 cm^2   -->  3.41 Hz/cm^2
    # digis in sector 21:  207268  --> hit rate:  6779.9 Hz, active area: 3429.0 cm^2   -->  1.98 Hz/cm^2
    # digis in sector 22:  309792  --> hit rate: 10133.5 Hz, active area: 3996.0 cm^2   -->  2.54 Hz/cm^2
    # digis in sector 23:   23166  --> hit rate:   757.8 Hz, active area: 3658.5 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 24:   51351  --> hit rate:  1679.7 Hz, active area: 4009.5 cm^2   -->  0.42 Hz/cm^2

     *** noise rate calculation in run 20137020 ***
    # events: 3920751, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   26125  --> hit rate:   832.9 Hz, active area: 3388.5 cm^2   -->  0.25 Hz/cm^2
    # digis in sector 14:    8872  --> hit rate:   282.9 Hz, active area: 2470.5 cm^2   -->  0.11 Hz/cm^2
    # digis in sector 15:  780791  --> hit rate: 24892.9 Hz, active area: 5521.5 cm^2   -->  4.51 Hz/cm^2
    # digis in sector 16:  419463  --> hit rate: 13373.2 Hz, active area: 6345.0 cm^2   -->  2.11 Hz/cm^2
    # digis in sector 17: 4291199  --> hit rate: 136810.5 Hz, active area: 4792.5 cm^2   --> 28.55 Hz/cm^2
    # digis in sector 18:  656746  --> hit rate: 20938.1 Hz, active area: 4131.0 cm^2   -->  5.07 Hz/cm^2
    # digis in sector 19:  140083  --> hit rate:  4466.1 Hz, active area: 3307.5 cm^2   -->  1.35 Hz/cm^2
    # digis in sector 20: 1713699  --> hit rate: 54635.5 Hz, active area: 4158.0 cm^2   --> 13.14 Hz/cm^2
    # digis in sector 21:  864417  --> hit rate: 27559.0 Hz, active area: 4441.5 cm^2   -->  6.20 Hz/cm^2
    # digis in sector 22:  567785  --> hit rate: 18101.9 Hz, active area: 4171.5 cm^2   -->  4.34 Hz/cm^2
    # digis in sector 23:   66612  --> hit rate:  2123.7 Hz, active area: 3699.0 cm^2   -->  0.57 Hz/cm^2
    # digis in sector 24:   53761  --> hit rate:  1714.0 Hz, active area: 4117.5 cm^2   -->  0.42 Hz/cm^2

     *** noise rate calculation in run 20140010 ***
    # events: 2720567, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   19508  --> hit rate:   896.3 Hz, active area: 3388.5 cm^2   -->  0.26 Hz/cm^2
    # digis in sector 14:   38217  --> hit rate:  1755.9 Hz, active area: 2416.5 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 15:  523312  --> hit rate: 24044.3 Hz, active area: 5575.5 cm^2   -->  4.31 Hz/cm^2
    # digis in sector 16:  218685  --> hit rate: 10047.8 Hz, active area: 6372.0 cm^2   -->  1.58 Hz/cm^2
    # digis in sector 17:   22748  --> hit rate:  1045.2 Hz, active area: 3820.5 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 18:   70064  --> hit rate:  3219.2 Hz, active area: 4252.5 cm^2   -->  0.76 Hz/cm^2
    # digis in sector 19:   95187  --> hit rate:  4373.5 Hz, active area: 4104.0 cm^2   -->  1.07 Hz/cm^2
    # digis in sector 20:  159429  --> hit rate:  7325.2 Hz, active area: 4212.0 cm^2   -->  1.74 Hz/cm^2
    # digis in sector 21:  239867  --> hit rate: 11021.0 Hz, active area: 5481.0 cm^2   -->  2.01 Hz/cm^2
    # digis in sector 22:  557375  --> hit rate: 25609.3 Hz, active area: 4131.0 cm^2   -->  6.20 Hz/cm^2
    # digis in sector 23:   95706  --> hit rate:  4397.3 Hz, active area: 3685.5 cm^2   -->  1.19 Hz/cm^2
    # digis in sector 24:   68374  --> hit rate:  3141.5 Hz, active area: 3712.5 cm^2   -->  0.85 Hz/cm^2

     *** noise rate calculation in run 20142011 ***
    # events: 3999998, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   39163  --> hit rate:  1223.8 Hz, active area: 3402.0 cm^2   -->  0.36 Hz/cm^2
    # digis in sector 14:   18616  --> hit rate:   581.8 Hz, active area: 2484.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 15:  701635  --> hit rate: 21926.1 Hz, active area: 5562.0 cm^2   -->  3.94 Hz/cm^2
    # digis in sector 16:  289306  --> hit rate:  9040.8 Hz, active area: 6331.5 cm^2   -->  1.43 Hz/cm^2
    # digis in sector 17:   28855  --> hit rate:   901.7 Hz, active area: 3969.0 cm^2   -->  0.23 Hz/cm^2
    # digis in sector 18:   29063  --> hit rate:   908.2 Hz, active area: 4239.0 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 19:   18114  --> hit rate:   566.1 Hz, active area: 4050.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 20:  221661  --> hit rate:  6926.9 Hz, active area: 4293.0 cm^2   -->  1.61 Hz/cm^2
    # digis in sector 21:  231708  --> hit rate:  7240.9 Hz, active area: 5454.0 cm^2   -->  1.33 Hz/cm^2
    # digis in sector 22:  441861  --> hit rate: 13808.2 Hz, active area: 4090.5 cm^2   -->  3.38 Hz/cm^2
    # digis in sector 23:   65936  --> hit rate:  2060.5 Hz, active area: 3550.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 24:   51107  --> hit rate:  1597.1 Hz, active area: 4077.0 cm^2   -->  0.39 Hz/cm^2

     *** noise rate calculation in run 20142013 *** ( magnet off, nominal voltage )
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   19363  --> hit rate:   605.1 Hz, active area: 3402.0 cm^2   -->  0.18 Hz/cm^2
    # digis in sector 14:   57835  --> hit rate:  1807.3 Hz, active area: 2430.0 cm^2   -->  0.74 Hz/cm^2
    # digis in sector 15:  537527  --> hit rate: 16797.7 Hz, active area: 5562.0 cm^2   -->  3.02 Hz/cm^2
    # digis in sector 16:  234931  --> hit rate:  7341.6 Hz, active area: 6399.0 cm^2   -->  1.15 Hz/cm^2
    # digis in sector 17:   26151  --> hit rate:   817.2 Hz, active area: 3780.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 18:   16885  --> hit rate:   527.7 Hz, active area: 3469.5 cm^2   -->  0.15 Hz/cm^2
    # digis in sector 19:   16022  --> hit rate:   500.7 Hz, active area: 3982.5 cm^2   -->  0.13 Hz/cm^2
    # digis in sector 20:   46704  --> hit rate:  1459.5 Hz, active area: 3145.5 cm^2   -->  0.46 Hz/cm^2
    # digis in sector 21:  196067  --> hit rate:  6127.1 Hz, active area: 5251.5 cm^2   -->  1.17 Hz/cm^2
    # digis in sector 22:  160217  --> hit rate:  5006.8 Hz, active area: 3159.0 cm^2   -->  1.58 Hz/cm^2
    # digis in sector 23:   84492  --> hit rate:  2640.4 Hz, active area: 3631.5 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 24:  175938  --> hit rate:  5498.1 Hz, active area: 3780.0 cm^2   -->  1.45 Hz/cm^2

     *** noise rate calculation in run 20142014 *** ( HV scan: +200V )
    # events: 4000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   24449  --> hit rate:   764.0 Hz, active area: 3510.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 14:   51836  --> hit rate:  1619.9 Hz, active area: 2497.5 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 15:  729080  --> hit rate: 22783.8 Hz, active area: 5535.0 cm^2   -->  4.12 Hz/cm^2
    # digis in sector 16:  344763  --> hit rate: 10773.8 Hz, active area: 6399.0 cm^2   -->  1.68 Hz/cm^2
    # digis in sector 17:   34523  --> hit rate:  1078.8 Hz, active area: 3901.5 cm^2   -->  0.28 Hz/cm^2
    # digis in sector 18:   76427  --> hit rate:  2388.3 Hz, active area: 3631.5 cm^2   -->  0.66 Hz/cm^2
    # digis in sector 19:   21857  --> hit rate:   683.0 Hz, active area: 4252.5 cm^2   -->  0.16 Hz/cm^2
    # digis in sector 20:  104891  --> hit rate:  3277.8 Hz, active area: 3145.5 cm^2   -->  1.04 Hz/cm^2
    # digis in sector 21:  264603  --> hit rate:  8268.8 Hz, active area: 5265.0 cm^2   -->  1.57 Hz/cm^2
    # digis in sector 22:  316388  --> hit rate:  9887.1 Hz, active area: 3132.0 cm^2   -->  3.16 Hz/cm^2
    # digis in sector 23:  101160  --> hit rate:  3161.2 Hz, active area: 3834.0 cm^2   -->  0.82 Hz/cm^2
    # digis in sector 24:  148098  --> hit rate:  4628.1 Hz, active area: 4009.5 cm^2   -->  1.15 Hz/cm^2

     *** noise rate calculation in run 20142015 *** ( HV scan: +400V )
    # events: 3999999, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   30604  --> hit rate:   956.4 Hz, active area: 3496.5 cm^2   -->  0.27 Hz/cm^2
    # digis in sector 14:   85711  --> hit rate:  2678.5 Hz, active area: 2592.0 cm^2   -->  1.03 Hz/cm^2
    # digis in sector 15: 1021594  --> hit rate: 31924.8 Hz, active area: 5481.0 cm^2   -->  5.82 Hz/cm^2
    # digis in sector 16:  507894  --> hit rate: 15871.7 Hz, active area: 6399.0 cm^2   -->  2.48 Hz/cm^2
    # digis in sector 17:   41153  --> hit rate:  1286.0 Hz, active area: 4090.5 cm^2   -->  0.31 Hz/cm^2
    # digis in sector 18:   24779  --> hit rate:   774.3 Hz, active area: 3712.5 cm^2   -->  0.21 Hz/cm^2
    # digis in sector 19:   29366  --> hit rate:   917.7 Hz, active area: 4212.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 20:  113784  --> hit rate:  3555.8 Hz, active area: 3105.0 cm^2   -->  1.15 Hz/cm^2
    # digis in sector 21:  376517  --> hit rate: 11766.2 Hz, active area: 5278.5 cm^2   -->  2.23 Hz/cm^2
    # digis in sector 22:  315247  --> hit rate:  9851.5 Hz, active area: 3118.5 cm^2   -->  3.16 Hz/cm^2
    # digis in sector 23:  104600  --> hit rate:  3268.8 Hz, active area: 3874.5 cm^2   -->  0.84 Hz/cm^2
    # digis in sector 24:   83616  --> hit rate:  2613.0 Hz, active area: 4158.0 cm^2   -->  0.63 Hz/cm^2

     *** noise rate calculation in run 20142017 *** ( HV scan: +600V )
    # events: 2383791, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   36882  --> hit rate:  1934.0 Hz, active area: 3456.0 cm^2   -->  0.56 Hz/cm^2
    # digis in sector 14:   51927  --> hit rate:  2722.9 Hz, active area: 2727.0 cm^2   -->  1.00 Hz/cm^2
    # digis in sector 15: 1002211  --> hit rate: 52553.4 Hz, active area: 5562.0 cm^2   -->  9.45 Hz/cm^2
    # digis in sector 16:  520807  --> hit rate: 27309.8 Hz, active area: 6453.0 cm^2   -->  4.23 Hz/cm^2
    # digis in sector 17: 1009343  --> hit rate: 52927.4 Hz, active area: 4887.0 cm^2   --> 10.83 Hz/cm^2
    # digis in sector 18:   30988  --> hit rate:  1624.9 Hz, active area: 3685.5 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 19:  359681  --> hit rate: 18860.8 Hz, active area: 4414.5 cm^2   -->  4.27 Hz/cm^2
    # digis in sector 20:  443965  --> hit rate: 23280.4 Hz, active area: 3982.5 cm^2   -->  5.85 Hz/cm^2
    # digis in sector 21:  623577  --> hit rate: 32698.8 Hz, active area: 4725.0 cm^2   -->  6.92 Hz/cm^2
    # digis in sector 22:  870289  --> hit rate: 45635.8 Hz, active area: 3199.5 cm^2   --> 14.26 Hz/cm^2
    # digis in sector 23:   78821  --> hit rate:  4133.2 Hz, active area: 2713.5 cm^2   -->  1.52 Hz/cm^2
    # digis in sector 24: 7317784  --> hit rate: 383726.2 Hz, active area: 4941.0 cm^2   --> 77.66 Hz/cm^2

     *** noise rate calculation in run 20142020 *** ( HV scan: +800V )
    # events: 893158, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:    9628  --> hit rate:  1347.5 Hz, active area: 3415.5 cm^2   -->  0.39 Hz/cm^2
    # digis in sector 14:   20158  --> hit rate:  2821.2 Hz, active area: 2578.5 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 15:  692364  --> hit rate: 96898.3 Hz, active area: 5602.5 cm^2   --> 17.30 Hz/cm^2
    # digis in sector 16:  307241  --> hit rate: 42999.2 Hz, active area: 6426.0 cm^2   -->  6.69 Hz/cm^2
    # digis in sector 17:   35726  --> hit rate:  5000.0 Hz, active area: 3942.0 cm^2   -->  1.27 Hz/cm^2
    # digis in sector 18: 1333532  --> hit rate: 186631.6 Hz, active area: 4360.5 cm^2   --> 42.80 Hz/cm^2
    # digis in sector 19:   52295  --> hit rate:  7318.8 Hz, active area: 4293.0 cm^2   -->  1.70 Hz/cm^2
    # digis in sector 20:   19567  --> hit rate:  2738.5 Hz, active area: 4239.0 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 21:  232545  --> hit rate: 32545.3 Hz, active area: 4536.0 cm^2   -->  7.17 Hz/cm^2
    # digis in sector 22:  377441  --> hit rate: 52823.9 Hz, active area: 3159.0 cm^2   --> 16.72 Hz/cm^2
    # digis in sector 23:   23756  --> hit rate:  3324.7 Hz, active area: 2700.0 cm^2   -->  1.23 Hz/cm^2
    # digis in sector 24:   24201  --> hit rate:  3387.0 Hz, active area: 4158.0 cm^2   -->  0.81 Hz/cm^2

     *** noise rate calculation in run 20142022 *** ( HV scan: -200V )
    # events: 1214451, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:    6570  --> hit rate:   676.2 Hz, active area: 3024.0 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 14:    5646  --> hit rate:   581.1 Hz, active area: 1579.5 cm^2   -->  0.37 Hz/cm^2
    # digis in sector 15:   84094  --> hit rate:  8655.6 Hz, active area: 5413.5 cm^2   -->  1.60 Hz/cm^2
    # digis in sector 16:  104339  --> hit rate: 10739.3 Hz, active area: 5440.5 cm^2   -->  1.97 Hz/cm^2
    # digis in sector 17:  364056  --> hit rate: 37471.3 Hz, active area: 2808.0 cm^2   --> 13.34 Hz/cm^2
    # digis in sector 18:    3626  --> hit rate:   373.2 Hz, active area: 2727.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 19: 1980613  --> hit rate: 203858.9 Hz, active area: 3294.0 cm^2   --> 61.89 Hz/cm^2
    # digis in sector 20:  429041  --> hit rate: 44160.0 Hz, active area: 2511.0 cm^2   --> 17.59 Hz/cm^2
    # digis in sector 21:  711880  --> hit rate: 73271.8 Hz, active area: 3658.5 cm^2   --> 20.03 Hz/cm^2
    # digis in sector 22:   97754  --> hit rate: 10061.5 Hz, active area: 1971.0 cm^2   -->  5.10 Hz/cm^2
    # digis in sector 23:  115858  --> hit rate: 11924.9 Hz, active area: 2970.0 cm^2   -->  4.02 Hz/cm^2
    # digis in sector 24:    6282  --> hit rate:   646.6 Hz, active area: 2146.5 cm^2   -->  0.30 Hz/cm^2

     *** noise rate calculation in run 20143014 ***
    # events: 3756220, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   21989  --> hit rate:   731.8 Hz, active area: 3388.5 cm^2   -->  0.22 Hz/cm^2
    # digis in sector 14:   20709  --> hit rate:   689.2 Hz, active area: 2403.0 cm^2   -->  0.29 Hz/cm^2
    # digis in sector 15:  589011  --> hit rate: 19601.2 Hz, active area: 5305.5 cm^2   -->  3.69 Hz/cm^2
    # digis in sector 16:  222124  --> hit rate:  7391.9 Hz, active area: 6061.5 cm^2   -->  1.22 Hz/cm^2
    # digis in sector 17:   54555  --> hit rate:  1815.5 Hz, active area: 3888.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 18:   23206  --> hit rate:   772.3 Hz, active area: 4144.5 cm^2   -->  0.19 Hz/cm^2
    # digis in sector 19:   16642  --> hit rate:   553.8 Hz, active area: 3915.0 cm^2   -->  0.14 Hz/cm^2
    # digis in sector 20:  121879  --> hit rate:  4055.9 Hz, active area: 3901.5 cm^2   -->  1.04 Hz/cm^2
    # digis in sector 21:  159585  --> hit rate:  5310.7 Hz, active area: 3510.0 cm^2   -->  1.51 Hz/cm^2
    # digis in sector 22:   48245  --> hit rate:  1605.5 Hz, active area: 2092.5 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 23:   95376  --> hit rate:  3173.9 Hz, active area: 3645.0 cm^2   -->  0.87 Hz/cm^2
    # digis in sector 24:   34049  --> hit rate:  1133.1 Hz, active area: 3699.0 cm^2   -->  0.31 Hz/cm^2
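
    For orientation, the numbers in these listings follow from simple arithmetic: the hit rate is the digi count divided by twice the sampled live time (events x time window; the factor of two appears to reflect two digis per hit, consistent with the half-strip area accounting), and the rate density is that hit rate divided by the active area (number of firing half strips x 13.5 cm^2). Below is a minimal sketch of this arithmetic with illustrative names; it is not the actual analysis script.

    // Minimal sketch of the noise-rate arithmetic implied by the listings above.
    // Assumptions (not taken from the actual script): two digis correspond to one
    // hit, and the active area is the number of half strips that fired times the
    // 13.5 cm^2 half-strip area.
    #include <cstdio>

    int main() {
        const double nEvents           = 3861860;  // run 20104033
        const double timeWindow        = 4e-6;     // seconds sampled per event
        const double halfStripArea     = 13.5;     // cm^2
        const double nDigisSector13    = 123340;
        const int    nActiveHalfStrips = 368;      // 368 * 13.5 cm^2 = 4968 cm^2

        const double liveTime    = nEvents * timeWindow;              // s
        const double hitRate     = nDigisSector13 / (2.0 * liveTime); // Hz
        const double activeArea  = nActiveHalfStrips * halfStripArea; // cm^2
        const double rateDensity = hitRate / activeArea;              // Hz/cm^2

        // Reproduces the sector 13 line of run 20104033 above:
        // hit rate: 3992.2 Hz, active area: 4968.0 cm^2 --> 0.80 Hz/cm^2
        std::printf("hit rate: %.1f Hz, active area: %.1f cm^2 --> %.2f Hz/cm^2\n",
                    hitRate, activeArea, rateDensity);
        return 0;
    }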

    eTOF noise rates in Run20

     *** noise rate calculation in run 20336043 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   71614  --> hit rate:  2983.9 Hz, active area: 7533.0 cm^2   -->  0.40 Hz/cm^2
    # digis in sector 14:   62632  --> hit rate:  2609.7 Hz, active area: 7479.0 cm^2   -->  0.35 Hz/cm^2
    # digis in sector 15:  573104  --> hit rate: 23879.3 Hz, active area: 7425.0 cm^2   -->  3.22 Hz/cm^2
    # digis in sector 16:  271952  --> hit rate: 11331.3 Hz, active area: 7479.0 cm^2   -->  1.52 Hz/cm^2
    # digis in sector 17:  215913  --> hit rate:  8996.4 Hz, active area: 7533.0 cm^2   -->  1.19 Hz/cm^2
    # digis in sector 18:   87839  --> hit rate:  3660.0 Hz, active area: 7263.0 cm^2   -->  0.50 Hz/cm^2
    # digis in sector 19:  116175  --> hit rate:  4840.6 Hz, active area: 7533.0 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 20:  164158  --> hit rate:  6839.9 Hz, active area: 7276.5 cm^2   -->  0.94 Hz/cm^2
    # digis in sector 21:  150115  --> hit rate:  6254.8 Hz, active area: 7533.0 cm^2   -->  0.83 Hz/cm^2
    # digis in sector 22:  695547  --> hit rate: 28981.1 Hz, active area: 7438.5 cm^2   -->  3.90 Hz/cm^2
    # digis in sector 23:  115138  --> hit rate:  4797.4 Hz, active area: 7425.0 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 24:  101760  --> hit rate:  4240.0 Hz, active area: 6966.0 cm^2   -->  0.61 Hz/cm^2

     *** noise rate calculation in run 20337008 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   98103  --> hit rate:  4087.6 Hz, active area: 7533.0 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 14:   80595  --> hit rate:  3358.1 Hz, active area: 7533.0 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 15:  839648  --> hit rate: 34985.3 Hz, active area: 7533.0 cm^2   -->  4.64 Hz/cm^2
    # digis in sector 16:  368782  --> hit rate: 15365.9 Hz, active area: 7533.0 cm^2   -->  2.04 Hz/cm^2
    # digis in sector 17:  291384  --> hit rate: 12141.0 Hz, active area: 7533.0 cm^2   -->  1.61 Hz/cm^2
    # digis in sector 18:  110253  --> hit rate:  4593.9 Hz, active area: 7479.0 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 19:  157351  --> hit rate:  6556.3 Hz, active area: 7533.0 cm^2   -->  0.87 Hz/cm^2
    # digis in sector 20:  276382  --> hit rate: 11515.9 Hz, active area: 7479.0 cm^2   -->  1.54 Hz/cm^2
    # digis in sector 21:  218920  --> hit rate:  9121.7 Hz, active area: 7533.0 cm^2   -->  1.21 Hz/cm^2
    # digis in sector 22: 1068392  --> hit rate: 44516.3 Hz, active area: 7533.0 cm^2   -->  5.91 Hz/cm^2
    # digis in sector 23:  146947  --> hit rate:  6122.8 Hz, active area: 7533.0 cm^2   -->  0.81 Hz/cm^2
    # digis in sector 24:  151068  --> hit rate:  6294.5 Hz, active area: 7479.0 cm^2   -->  0.84 Hz/cm^2

     *** noise rate calculation in run 20346009 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  238491  --> hit rate:  9937.1 Hz, active area: 7533.0 cm^2   -->  1.32 Hz/cm^2
    # digis in sector 14:  199550  --> hit rate:  8314.6 Hz, active area: 7533.0 cm^2   -->  1.10 Hz/cm^2
    # digis in sector 15: 1096137  --> hit rate: 45672.4 Hz, active area: 7533.0 cm^2   -->  6.06 Hz/cm^2
    # digis in sector 16:  496046  --> hit rate: 20668.6 Hz, active area: 7533.0 cm^2   -->  2.74 Hz/cm^2
    # digis in sector 17:  683301  --> hit rate: 28470.9 Hz, active area: 7533.0 cm^2   -->  3.78 Hz/cm^2
    # digis in sector 18:  239368  --> hit rate:  9973.7 Hz, active area: 7533.0 cm^2   -->  1.32 Hz/cm^2
    # digis in sector 19:  406501  --> hit rate: 16937.5 Hz, active area: 7533.0 cm^2   -->  2.25 Hz/cm^2
    # digis in sector 20:  646412  --> hit rate: 26933.8 Hz, active area: 7519.5 cm^2   -->  3.58 Hz/cm^2
    # digis in sector 21:  271801  --> hit rate: 11325.0 Hz, active area: 7533.0 cm^2   -->  1.50 Hz/cm^2
    # digis in sector 22: 1160200  --> hit rate: 48341.7 Hz, active area: 7533.0 cm^2   -->  6.42 Hz/cm^2
    # digis in sector 23:  513077  --> hit rate: 21378.2 Hz, active area: 7533.0 cm^2   -->  2.84 Hz/cm^2
    # digis in sector 24:  431086  --> hit rate: 17961.9 Hz, active area: 7533.0 cm^2   -->  2.38 Hz/cm^2

     *** noise rate calculation in run 20346011 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  208174  --> hit rate:  8673.9 Hz, active area: 7533.0 cm^2   -->  1.15 Hz/cm^2
    # digis in sector 14:  160674  --> hit rate:  6694.8 Hz, active area: 7533.0 cm^2   -->  0.89 Hz/cm^2
    # digis in sector 15:  957714  --> hit rate: 39904.8 Hz, active area: 7533.0 cm^2   -->  5.30 Hz/cm^2
    # digis in sector 16:  450579  --> hit rate: 18774.1 Hz, active area: 7533.0 cm^2   -->  2.49 Hz/cm^2
    # digis in sector 17:  606711  --> hit rate: 25279.6 Hz, active area: 7533.0 cm^2   -->  3.36 Hz/cm^2
    # digis in sector 18:  203627  --> hit rate:  8484.5 Hz, active area: 7533.0 cm^2   -->  1.13 Hz/cm^2
    # digis in sector 19:  349016  --> hit rate: 14542.3 Hz, active area: 7479.0 cm^2   -->  1.94 Hz/cm^2
    # digis in sector 20:  577610  --> hit rate: 24067.1 Hz, active area: 7533.0 cm^2   -->  3.19 Hz/cm^2
    # digis in sector 21:  234783  --> hit rate:  9782.6 Hz, active area: 7533.0 cm^2   -->  1.30 Hz/cm^2
    # digis in sector 22: 1082163  --> hit rate: 45090.1 Hz, active area: 7533.0 cm^2   -->  5.99 Hz/cm^2
    # digis in sector 23:  455147  --> hit rate: 18964.5 Hz, active area: 7533.0 cm^2   -->  2.52 Hz/cm^2
    # digis in sector 24:  373332  --> hit rate: 15555.5 Hz, active area: 7533.0 cm^2   -->  2.06 Hz/cm^2

     *** noise rate calculation in run 20350023 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  452229  --> hit rate: 18842.9 Hz, active area: 6277.5 cm^2   -->  3.00 Hz/cm^2
    # digis in sector 14:  381191  --> hit rate: 15883.0 Hz, active area: 7533.0 cm^2   -->  2.11 Hz/cm^2
    # digis in sector 15: 1451083  --> hit rate: 60461.8 Hz, active area: 7533.0 cm^2   -->  8.03 Hz/cm^2
    # digis in sector 16:  844374  --> hit rate: 35182.2 Hz, active area: 7533.0 cm^2   -->  4.67 Hz/cm^2
    # digis in sector 17: 1534829  --> hit rate: 63951.2 Hz, active area: 7533.0 cm^2   -->  8.49 Hz/cm^2
    # digis in sector 18:  523592  --> hit rate: 21816.3 Hz, active area: 7533.0 cm^2   -->  2.90 Hz/cm^2
    # digis in sector 19:  963727  --> hit rate: 40155.3 Hz, active area: 7479.0 cm^2   -->  5.37 Hz/cm^2
    # digis in sector 20: 1386287  --> hit rate: 57762.0 Hz, active area: 7533.0 cm^2   -->  7.67 Hz/cm^2
    # digis in sector 21:  375397  --> hit rate: 15641.5 Hz, active area: 7533.0 cm^2   -->  2.08 Hz/cm^2
    # digis in sector 22: 1896169  --> hit rate: 79007.0 Hz, active area: 7533.0 cm^2   --> 10.49 Hz/cm^2
    # digis in sector 23: 1574695  --> hit rate: 65612.3 Hz, active area: 7533.0 cm^2   -->  8.71 Hz/cm^2
    # digis in sector 24: 1050046  --> hit rate: 43751.9 Hz, active area: 7533.0 cm^2   -->  5.81 Hz/cm^2

    Starting from run 20352046, I include the channels that receive the pulser in the noise rates (as the rejection of pulsers seems to be reliable),
    but I exclude the channels that have many repeated digis (stuck firmware) on a run-by-run basis.
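
    A minimal sketch of what such a run-by-run exclusion could look like; the container layout, the repeated-digi counting and the 50% cut value are illustrative assumptions, not the selection actually used:

    #include <cstddef>
    #include <map>
    #include <set>

    // Sketch: flag channels whose digi stream is dominated by identical, repeated
    // entries (a signature of stuck firmware) so they can be masked for this run.
    // The 0.5 threshold is a hypothetical choice for illustration only.
    std::set<int> channelsToExclude(const std::map<int, std::size_t>& nDigisPerChannel,
                                    const std::map<int, std::size_t>& nRepeatedDigisPerChannel,
                                    double maxRepeatedFraction = 0.5) {
        std::set<int> excluded;
        for (const auto& entry : nDigisPerChannel) {
            const int channelId      = entry.first;
            const std::size_t nDigis = entry.second;
            if (nDigis == 0) continue;
            const auto it = nRepeatedDigisPerChannel.find(channelId);
            const std::size_t nRepeated = (it != nRepeatedDigisPerChannel.end()) ? it->second : 0;
            if (static_cast<double>(nRepeated) / nDigis > maxRepeatedFraction) {
                excluded.insert(channelId);  // mask this channel in the rate calculation
            }
        }
        return excluded;
    }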

     *** noise rate calculation in run 20352046 ***
    # events: 4000000, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  627168  --> hit rate: 26132.0 Hz, active area: 7776.0 cm^2   -->  3.36 Hz/cm^2
    # digis in sector 14:  399429  --> hit rate: 16642.9 Hz, active area: 6480.0 cm^2   -->  2.57 Hz/cm^2
    # digis in sector 15: 1616765  --> hit rate: 67365.2 Hz, active area: 7776.0 cm^2   -->  8.66 Hz/cm^2
    # digis in sector 16:  943318  --> hit rate: 39304.9 Hz, active area: 7776.0 cm^2   -->  5.05 Hz/cm^2
    # digis in sector 17: 1798051  --> hit rate: 74918.8 Hz, active area: 7776.0 cm^2   -->  9.63 Hz/cm^2
    # digis in sector 18:  642743  --> hit rate: 26781.0 Hz, active area: 7776.0 cm^2   -->  3.44 Hz/cm^2
    # digis in sector 19: 1194852  --> hit rate: 49785.5 Hz, active area: 7722.0 cm^2   -->  6.45 Hz/cm^2
    # digis in sector 20: 1720462  --> hit rate: 71685.9 Hz, active area: 7776.0 cm^2   -->  9.22 Hz/cm^2
    # digis in sector 21:  426565  --> hit rate: 17773.5 Hz, active area: 7776.0 cm^2   -->  2.29 Hz/cm^2
    # digis in sector 22: 1930982  --> hit rate: 80457.6 Hz, active area: 7776.0 cm^2   --> 10.35 Hz/cm^2
    # digis in sector 23: 2236931  --> hit rate: 93205.5 Hz, active area: 7776.0 cm^2   --> 11.99 Hz/cm^2
    # digis in sector 24: 1379194  --> hit rate: 57466.4 Hz, active area: 7776.0 cm^2   -->  7.39 Hz/cm^2

     *** noise rate calculation in run 20353020 ***
    # events: 3999999, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  582019  --> hit rate: 24250.8 Hz, active area: 7776.0 cm^2   -->  3.12 Hz/cm^2
    # digis in sector 14:  444103  --> hit rate: 18504.3 Hz, active area: 7776.0 cm^2   -->  2.38 Hz/cm^2
    # digis in sector 15: 1547383  --> hit rate: 64474.3 Hz, active area: 7776.0 cm^2   -->  8.29 Hz/cm^2
    # digis in sector 16:  938901  --> hit rate: 39120.9 Hz, active area: 7722.0 cm^2   -->  5.07 Hz/cm^2
    # digis in sector 17: 1752882  --> hit rate: 73036.8 Hz, active area: 7776.0 cm^2   -->  9.39 Hz/cm^2
    # digis in sector 18:  611426  --> hit rate: 25476.1 Hz, active area: 7776.0 cm^2   -->  3.28 Hz/cm^2
    # digis in sector 19:  655959  --> hit rate: 27331.6 Hz, active area: 7708.5 cm^2   -->  3.55 Hz/cm^2
    # digis in sector 20: 1686331  --> hit rate: 70263.8 Hz, active area: 7776.0 cm^2   -->  9.04 Hz/cm^2
    # digis in sector 21:  415946  --> hit rate: 17331.1 Hz, active area: 7776.0 cm^2   -->  2.23 Hz/cm^2
    # digis in sector 22: 1887452  --> hit rate: 78643.9 Hz, active area: 7722.0 cm^2   --> 10.18 Hz/cm^2
    # digis in sector 23: 2209764  --> hit rate: 92073.5 Hz, active area: 7776.0 cm^2   --> 11.84 Hz/cm^2
    # digis in sector 24: 1294404  --> hit rate: 53933.5 Hz, active area: 7776.0 cm^2   -->  6.94 Hz/cm^2

     *** noise rate calculation in run 20354009 ***
    # events: 3999999, time window: 3 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  572816  --> hit rate: 23867.3 Hz, active area: 7722.0 cm^2   -->  3.09 Hz/cm^2
    # digis in sector 14:  426302  --> hit rate: 17762.6 Hz, active area: 7776.0 cm^2   -->  2.28 Hz/cm^2
    # digis in sector 15: 1372069  --> hit rate: 57169.6 Hz, active area: 7776.0 cm^2   -->  7.35 Hz/cm^2
    # digis in sector 16:  803083  --> hit rate: 33461.8 Hz, active area: 7722.0 cm^2   -->  4.33 Hz/cm^2
    # digis in sector 17: 1653853  --> hit rate: 68910.6 Hz, active area: 7776.0 cm^2   -->  8.86 Hz/cm^2
    # digis in sector 18:  559805  --> hit rate: 23325.2 Hz, active area: 7776.0 cm^2   -->  3.00 Hz/cm^2
    # digis in sector 19:  625738  --> hit rate: 26072.4 Hz, active area: 7762.5 cm^2   -->  3.36 Hz/cm^2
    # digis in sector 20: 1615787  --> hit rate: 67324.5 Hz, active area: 7776.0 cm^2   -->  8.66 Hz/cm^2
    # digis in sector 21:  353990  --> hit rate: 14749.6 Hz, active area: 7776.0 cm^2   -->  1.90 Hz/cm^2
    # digis in sector 22: 1770948  --> hit rate: 73789.5 Hz, active area: 7776.0 cm^2   -->  9.49 Hz/cm^2
    # digis in sector 23: 2228278  --> hit rate: 92844.9 Hz, active area: 7722.0 cm^2   --> 12.02 Hz/cm^2
    # digis in sector 24: 1294373  --> hit rate: 53932.2 Hz, active area: 7776.0 cm^2   -->  6.94 Hz/cm^2

    eTOF planning meeting April 9, files

     

    eTOF run-by-run QA

    Run-by-run QA for Run20

    In addition to the PWG QA by Ben and Takafumi, here is some QA of specific observables for the eTOF:

    The calibration was done using runs 20345015 to 20346003 and has been stable since then, apart from the occasional global time shift when eTOF misses the reset signal from bTOF or, in some cases, after an eTOF DAQ restart.
    These shifts have not been taken into account for some runs in this QA, but they can be corrected for the final production, e.g. via a hybrid start time approach between eTOF and bTOF.
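
    As a rough illustration of what such a correction could look like: the per-run global shift can be estimated from the residual between the measured time of flight (eTOF stop time minus bTOF start time) and the time of flight expected from the matched track, and then subtracted. This is a minimal sketch under that assumption; the names and the choice of a median are illustrative, not the production procedure.

    #include <algorithm>
    #include <vector>

    // Sketch: estimate and remove a run-wide global time shift.
    // residualsNs = measured TOF minus TOF expected from the matched track (ns).
    double estimateGlobalTimeShift(std::vector<double> residualsNs) {
        if (residualsNs.empty()) return 0.;
        std::nth_element(residualsNs.begin(),
                         residualsNs.begin() + residualsNs.size() / 2,
                         residualsNs.end());
        return residualsNs[residualsNs.size() / 2];  // median residual in ns
    }

    double correctedTimeOfFlight(double etofStopTimeNs, double btofStartTimeNs,
                                 double globalShiftNs) {
        // subtract the per-run shift so all runs share the same time reference
        return (etofStopTimeNs - btofStartTimeNs) - globalShiftNs;
    }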

    There are some issues with clock jumps on the Get4 level that need further investigation and special treatment.

    global hit position

    inverse beta vs momentum

    matching ratio

    The matching ratios are averaged over all runs in one day. Before day 353 eTOF was not included in some of the runs, so the ratio is lower. I'll redo those plots with the runs without eTOF excluded.
    From run 20352037 on, there is a special triggerId for all events with eTOF included, and only those events are sampled in the ratio plots.
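
    A minimal sketch of how such a gated ratio could be accumulated; the trigger id value and the counter interface are illustrative placeholders, not the actual Run20 configuration:

    #include <vector>

    // Sketch: accumulate the eTOF matching ratio only for events that carry the
    // dedicated eTOF trigger id mentioned above. The id 600001 is a placeholder.
    struct MatchingRatioCounter {
        long nTracksIntersectingEtof = 0;  // denominator: tracks pointing to eTOF
        long nTracksMatchedToEtofHit = 0;  // numerator: tracks with a matched eTOF hit

        void addEvent(const std::vector<unsigned int>& eventTriggerIds,
                      long nIntersecting, long nMatched,
                      unsigned int etofTriggerId = 600001) {
            for (unsigned int id : eventTriggerIds) {
                if (id == etofTriggerId) {  // sample only events with eTOF included
                    nTracksIntersectingEtof += nIntersecting;
                    nTracksMatchedToEtofHit += nMatched;
                    return;
                }
            }
        }

        double ratio() const {
            return nTracksIntersectingEtof > 0
                       ? static_cast<double>(nTracksMatchedToEtofHit) / nTracksIntersectingEtof
                       : 0.;
        }
    };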

    time resolution

    eTOF run19 performance

    Below are some performance plots of the eTOF produced with the fast offline data from runs 20066013 and 20066020. I only applied some manual position offset corrections and a very crude T0 offset correction. No slewing corrections have been applied so far.

    The time of flight is obtained from the eTOF hit stop time and the start time from the bTOF header.

    Many things can (and should ;) ) still be optimized.
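
    For reference, the quantity behind the 1/beta vs. momentum plots is simply beta = (path length) / (c x time of flight), with the path length and momentum taken from the matched track. A minimal sketch with illustrative names (not the actual maker interface):

    #include <cmath>

    // 1/beta from the measured time of flight and the track's path length to eTOF;
    // for a particle of mass m the expectation is 1/beta = sqrt(1 + m^2/p^2).
    constexpr double kSpeedOfLightCmPerNs = 29.9792458;  // cm/ns

    double inverseBeta(double timeOfFlightNs, double pathLengthCm) {
        return kSpeedOfLightCmPerNs * timeOfFlightNs / pathLengthCm;
    }

    double expectedInverseBeta(double momentumGeV, double massGeV) {
        return std::sqrt(1.0 + (massGeV * massGeV) / (momentumGeV * momentumGeV));
    }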

    eTOF software status

    This page tries to summarize the status of the eTOF software.

    Done:

    • integration of data storage classes into StEvent (StETofDigi, StETofHit, StETofHeader, StETofCollection)
    • StETofDigiMaker reads in .daq files and stores the digi information into StEvent

    Under review:

    • StETofCalibMaker, StETofHitMaker, StETofMatchMaker
    • StEtofPidTraits
    • data storage classes in MuDsts
    • simple version of StETofSimMaker
    • data storage classes in PicoDsts

    Under development:

    • StETofPidMaker (?) that decides which start time is used for the time-of-flight measurement and also calculates the eTOF start time from matched tracks in 2019/20
    • database tables for calibrations & electronic id to geometry mapping  +  changes in the makers to fetch the relevant values from the database

    Needs update for next year:

    • event builder update on CBM side to scale from one sector to the full wheel of eTOF modules
    • online monitoring code needs to be adapted to cover all 12 sectors
    • offline monitoring via StETofQAMaker (?)

    Some milestones:

    February 22nd-24th 2018: start of software development during eTOF workshop at Rice University

    March 24th 2018: first isobar data with eTOF information in the daq files (trigger window set)

    April 25th 2018: online monitoring Jevp plots finished for data taking in 2018; mapping from electronic channels to geometry understood

    April 28th 2018: noise rate monitoring via etofDaqDoer available

    May 12th 2018: StETofDigiMaker and StEvent data classes ready for software peer review

    June 5th 2018: firmware fix to properly save the reset time of the bTOF clock into the eTOF data

    July 25th 2018: first patch of eTOF reconstruction software was included in the STAR CVS repository and is part of software library version SL18f

    August 1st 2018: eTOF software included in production of calibration sample for Au+Au data at sqrt(s_NN) = 27 GeV

    August 23rd 2018: first 1/beta vs. momentum plot for simulation with full eTOF wheel

         eTOF PID in simulation (2 electrons, 2 pions, 2 kaons, 2 protons per event)          global XY position of simulated eTOF hits

    September 6th 2018: first 1/beta vs. momentum plot for data

         first 1/beta vs. momentum plot from data  (w/o slewing corrections)

    September 29th 2018: eTOF PID plots with improved slewing corrections; overlapping strip analysis shows 100 ps time resolution in all modules; some more details of the calibration procedure

         1/beta vs. momentum plot with better (not yet finished) slewing corrections

    December 6th 2018: first look at data taken with the full eTOF wheel during commissioning on November 23rd

    ----------------------------------------------
    status presentations:

    July 20th 2018, collaboration meeting at Lehigh University: slides

    September 6th 2018, eTOF meeting: slides

    October 17th 2018, S&C meeting: slides

    November 21st 2018, S&C meeting: slides

    December 11th/12th 2018, winter analysis meeting at BNL: calibration status, general software status

    eTOF threshold scan

    // -------------------------------
    data from second threshold scan

     *** noise rate calculation in run 20103026 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  766977  --> hit rate: 95872.1 Hz, active area: 7155.0 cm^2   --> 13.40 Hz/cm^2
    # digis in sector 14:   40663  --> hit rate:  5082.9 Hz, active area: 6885.0 cm^2   -->  0.74 Hz/cm^2
    # digis in sector 15: 5353336  --> hit rate: 669167.0 Hz, active area: 7492.5 cm^2   --> 89.31 Hz/cm^2
    # digis in sector 16: 1461350  --> hit rate: 182668.8 Hz, active area: 7519.5 cm^2   --> 24.29 Hz/cm^2
    # digis in sector 17:  151010  --> hit rate: 18876.2 Hz, active area: 7155.0 cm^2   -->  2.64 Hz/cm^2
    # digis in sector 18:   67198  --> hit rate:  8399.8 Hz, active area: 7249.5 cm^2   -->  1.16 Hz/cm^2
    # digis in sector 19:  495925  --> hit rate: 61990.6 Hz, active area: 7087.5 cm^2   -->  8.75 Hz/cm^2
    # digis in sector 20:   66509  --> hit rate:  8313.6 Hz, active area: 6898.5 cm^2   -->  1.21 Hz/cm^2
    # digis in sector 21: 1505986  --> hit rate: 188248.2 Hz, active area: 5022.0 cm^2   --> 37.48 Hz/cm^2
    # digis in sector 22:  656400  --> hit rate: 82050.0 Hz, active area: 2511.0 cm^2   --> 32.68 Hz/cm^2
    # digis in sector 23:   70528  --> hit rate:  8816.0 Hz, active area: 6709.5 cm^2   -->  1.31 Hz/cm^2
    # digis in sector 24:   71653  --> hit rate:  8956.6 Hz, active area: 7060.5 cm^2   -->  1.27 Hz/cm^2

     *** noise rate calculation in run 20103027 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  134150  --> hit rate: 16768.8 Hz, active area: 7141.5 cm^2   -->  2.35 Hz/cm^2
    # digis in sector 14:   37454  --> hit rate:  4681.8 Hz, active area: 6912.0 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 15: 4433720  --> hit rate: 554215.0 Hz, active area: 6237.0 cm^2   --> 88.86 Hz/cm^2
    # digis in sector 16: 1586883  --> hit rate: 198360.4 Hz, active area: 7519.5 cm^2   --> 26.38 Hz/cm^2
    # digis in sector 17:   61994  --> hit rate:  7749.2 Hz, active area: 7155.0 cm^2   -->  1.08 Hz/cm^2
    # digis in sector 18:   36729  --> hit rate:  4591.1 Hz, active area: 4792.5 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 19:   87519  --> hit rate: 10939.9 Hz, active area: 7074.0 cm^2   -->  1.55 Hz/cm^2
    # digis in sector 20:   56715  --> hit rate:  7089.4 Hz, active area: 6898.5 cm^2   -->  1.03 Hz/cm^2
    # digis in sector 21: 1517721  --> hit rate: 189715.1 Hz, active area: 5022.0 cm^2   --> 37.78 Hz/cm^2
    # digis in sector 22:       1  --> hit rate:     0.1 Hz, active area:  13.5 cm^2   -->  0.01 Hz/cm^2
    # digis in sector 23:   61547  --> hit rate:  7693.4 Hz, active area: 6709.5 cm^2   -->  1.15 Hz/cm^2
    # digis in sector 24:   64595  --> hit rate:  8074.4 Hz, active area: 7060.5 cm^2   -->  1.14 Hz/cm^2

     *** noise rate calculation in run 20103028 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   45543  --> hit rate:  5692.9 Hz, active area: 7141.5 cm^2   -->  0.80 Hz/cm^2
    # digis in sector 14:   34158  --> hit rate:  4269.8 Hz, active area: 6912.0 cm^2   -->  0.62 Hz/cm^2
    # digis in sector 15: 3966590  --> hit rate: 495823.8 Hz, active area: 7492.5 cm^2   --> 66.18 Hz/cm^2
    # digis in sector 16: 1156657  --> hit rate: 144582.1 Hz, active area: 7519.5 cm^2   --> 19.23 Hz/cm^2
    # digis in sector 17:   53479  --> hit rate:  6684.9 Hz, active area: 7168.5 cm^2   -->  0.93 Hz/cm^2
    # digis in sector 18:   41976  --> hit rate:  5247.0 Hz, active area: 7222.5 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 19:   66140  --> hit rate:  8267.5 Hz, active area: 7047.0 cm^2   -->  1.17 Hz/cm^2
    # digis in sector 20:   47308  --> hit rate:  5913.5 Hz, active area: 6885.0 cm^2   -->  0.86 Hz/cm^2
    # digis in sector 21: 1138101  --> hit rate: 142262.6 Hz, active area: 7533.0 cm^2   --> 18.89 Hz/cm^2
    # digis in sector 22:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 23:   53293  --> hit rate:  6661.6 Hz, active area: 6696.0 cm^2   -->  0.99 Hz/cm^2
    # digis in sector 24:   57435  --> hit rate:  7179.4 Hz, active area: 7060.5 cm^2   -->  1.02 Hz/cm^2

     *** noise rate calculation in run 20103030 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   39536  --> hit rate:  4942.0 Hz, active area: 7141.5 cm^2   -->  0.69 Hz/cm^2
    # digis in sector 14:   29480  --> hit rate:  3685.0 Hz, active area: 6844.5 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 15: 2795241  --> hit rate: 349405.1 Hz, active area: 7492.5 cm^2   --> 46.63 Hz/cm^2
    # digis in sector 16: 1175754  --> hit rate: 146969.2 Hz, active area: 7519.5 cm^2   --> 19.55 Hz/cm^2
    # digis in sector 17:  110045  --> hit rate: 13755.6 Hz, active area: 7141.5 cm^2   -->  1.93 Hz/cm^2
    # digis in sector 18:   39663  --> hit rate:  4957.9 Hz, active area: 7236.0 cm^2   -->  0.69 Hz/cm^2
    # digis in sector 19:   77559  --> hit rate:  9694.9 Hz, active area: 7087.5 cm^2   -->  1.37 Hz/cm^2
    # digis in sector 20:  417687  --> hit rate: 52210.9 Hz, active area: 6952.5 cm^2   -->  7.51 Hz/cm^2
    # digis in sector 21:  889255  --> hit rate: 111156.9 Hz, active area: 7533.0 cm^2   --> 14.76 Hz/cm^2
    # digis in sector 22:       0  --> hit rate:     0.0 Hz, active area:   0.0 cm^2   --> 0.00 Hz/cm^2
    # digis in sector 23:   44818  --> hit rate:  5602.2 Hz, active area: 6709.5 cm^2   -->  0.83 Hz/cm^2
    # digis in sector 24:   49044  --> hit rate:  6130.5 Hz, active area: 7060.5 cm^2   -->  0.87 Hz/cm^2

     *** noise rate calculation in run 20103031 ***
    # events: 2873005, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  147151  --> hit rate:  6402.3 Hz, active area: 7168.5 cm^2   -->  0.89 Hz/cm^2
    # digis in sector 14:  101234  --> hit rate:  4404.5 Hz, active area: 6912.0 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 15: 13510661  --> hit rate: 587827.9 Hz, active area: 7492.5 cm^2   --> 78.46 Hz/cm^2
    # digis in sector 16: 4975350  --> hit rate: 216469.8 Hz, active area: 7519.5 cm^2   --> 28.79 Hz/cm^2
    # digis in sector 17:  157683  --> hit rate:  6860.5 Hz, active area: 7182.0 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 18:  130330  --> hit rate:  5670.5 Hz, active area: 7263.0 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 19:  204252  --> hit rate:  8886.7 Hz, active area: 7074.0 cm^2   -->  1.26 Hz/cm^2
    # digis in sector 20:  140967  --> hit rate:  6133.3 Hz, active area: 6925.5 cm^2   -->  0.89 Hz/cm^2
    # digis in sector 21: 3910036  --> hit rate: 170119.6 Hz, active area: 7533.0 cm^2   --> 22.58 Hz/cm^2
    # digis in sector 22:  293439  --> hit rate: 12767.1 Hz, active area: 2565.0 cm^2   -->  4.98 Hz/cm^2
    # digis in sector 23:  213913  --> hit rate:  9307.0 Hz, active area: 6709.5 cm^2   -->  1.39 Hz/cm^2
    # digis in sector 24:  172270  --> hit rate:  7495.2 Hz, active area: 7060.5 cm^2   -->  1.06 Hz/cm^2

    // -------------------------------
    old data taken with a script that was not yet functioning properly

     *** noise rate calculation in run 20100027 ***
    # events: 3733030, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  118305  --> hit rate:  3961.4 Hz, active area: 7411.5 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 14: 20638473  --> hit rate: 691076.4 Hz, active area: 7303.5 cm^2   --> 94.62 Hz/cm^2
    # digis in sector 15: 1297181  --> hit rate: 43435.9 Hz, active area: 7533.0 cm^2   -->  5.77 Hz/cm^2
    # digis in sector 16: 3114105  --> hit rate: 104275.4 Hz, active area: 7533.0 cm^2   --> 13.84 Hz/cm^2
    # digis in sector 17:  133229  --> hit rate:  4461.2 Hz, active area: 7465.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:  104530  --> hit rate:  3500.2 Hz, active area: 7452.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:  143004  --> hit rate:  4788.5 Hz, active area: 7425.0 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 20:  125232  --> hit rate:  4193.4 Hz, active area: 7060.5 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 21:  134453  --> hit rate:  4502.1 Hz, active area: 5022.0 cm^2   -->  0.90 Hz/cm^2
    # digis in sector 22: 2192004  --> hit rate: 73399.0 Hz, active area: 5022.0 cm^2   --> 14.62 Hz/cm^2
    # digis in sector 23: 3675345  --> hit rate: 123068.4 Hz, active area: 7033.5 cm^2   --> 17.50 Hz/cm^2
    # digis in sector 24:  157796  --> hit rate:  5283.8 Hz, active area: 7506.0 cm^2   -->  0.70 Hz/cm^2

     *** noise rate calculation in run 20100028 ***
    # events: 2266314, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  106478  --> hit rate:  5872.9 Hz, active area: 7411.5 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 14: 11642363  --> hit rate: 642142.0 Hz, active area: 7290.0 cm^2   --> 88.09 Hz/cm^2
    # digis in sector 15:  889534  --> hit rate: 49062.8 Hz, active area: 7533.0 cm^2   -->  6.51 Hz/cm^2
    # digis in sector 16: 2164228  --> hit rate: 119369.4 Hz, active area: 7533.0 cm^2   --> 15.85 Hz/cm^2
    # digis in sector 17:  107441  --> hit rate:  5926.0 Hz, active area: 7465.5 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 18:   71176  --> hit rate:  3925.8 Hz, active area: 7452.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 19:   97150  --> hit rate:  5358.4 Hz, active area: 7425.0 cm^2   -->  0.72 Hz/cm^2
    # digis in sector 20:   78378  --> hit rate:  4323.0 Hz, active area: 5818.5 cm^2   -->  0.74 Hz/cm^2
    # digis in sector 21:  138570  --> hit rate:  7642.9 Hz, active area: 7533.0 cm^2   -->  1.01 Hz/cm^2
    # digis in sector 22: 1561490  --> hit rate: 86125.0 Hz, active area: 6196.5 cm^2   --> 13.90 Hz/cm^2
    # digis in sector 23: 2034990  --> hit rate: 112241.2 Hz, active area: 7020.0 cm^2   --> 15.99 Hz/cm^2
    # digis in sector 24:  111788  --> hit rate:  6165.7 Hz, active area: 7506.0 cm^2   -->  0.82 Hz/cm^2

     *** noise rate calculation in run 20100031 ***
    # events: 999999, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   71315  --> hit rate:  8914.4 Hz, active area: 7411.5 cm^2   -->  1.20 Hz/cm^2
    # digis in sector 14: 4647723  --> hit rate: 580965.9 Hz, active area: 7290.0 cm^2   --> 79.69 Hz/cm^2
    # digis in sector 15:  406157  --> hit rate: 50769.7 Hz, active area: 7533.0 cm^2   -->  6.74 Hz/cm^2
    # digis in sector 16:  983439  --> hit rate: 122930.0 Hz, active area: 7533.0 cm^2   --> 16.32 Hz/cm^2
    # digis in sector 17:   39605  --> hit rate:  4950.6 Hz, active area: 7465.5 cm^2   -->  0.66 Hz/cm^2
    # digis in sector 18:   31899  --> hit rate:  3987.4 Hz, active area: 7452.0 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 19:   44589  --> hit rate:  5573.6 Hz, active area: 7425.0 cm^2   -->  0.75 Hz/cm^2
    # digis in sector 20:   27124  --> hit rate:  3390.5 Hz, active area: 5818.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 21:   63972  --> hit rate:  7996.5 Hz, active area: 7533.0 cm^2   -->  1.06 Hz/cm^2
    # digis in sector 22:  707713  --> hit rate: 88464.2 Hz, active area: 5022.0 cm^2   --> 17.62 Hz/cm^2
    # digis in sector 23:  921464  --> hit rate: 115183.1 Hz, active area: 7020.0 cm^2   --> 16.41 Hz/cm^2
    # digis in sector 24:  115086  --> hit rate: 14385.8 Hz, active area: 7506.0 cm^2   -->  1.92 Hz/cm^2

     *** noise rate calculation in run 20100032 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   61629  --> hit rate:  7703.6 Hz, active area: 7411.5 cm^2   -->  1.04 Hz/cm^2
    # digis in sector 14: 1958226  --> hit rate: 244778.2 Hz, active area: 7317.0 cm^2   --> 33.45 Hz/cm^2
    # digis in sector 15:  926726  --> hit rate: 115840.8 Hz, active area: 7533.0 cm^2   --> 15.38 Hz/cm^2
    # digis in sector 16: 1931218  --> hit rate: 241402.2 Hz, active area: 7533.0 cm^2   --> 32.05 Hz/cm^2
    # digis in sector 17:   67903  --> hit rate:  8487.9 Hz, active area: 7465.5 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 18:   65234  --> hit rate:  8154.2 Hz, active area: 7465.5 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 19:   94376  --> hit rate: 11797.0 Hz, active area: 7425.0 cm^2   -->  1.59 Hz/cm^2
    # digis in sector 20:   51686  --> hit rate:  6460.8 Hz, active area: 7074.0 cm^2   -->  0.91 Hz/cm^2
    # digis in sector 21:  258684  --> hit rate: 32335.5 Hz, active area: 7533.0 cm^2   -->  4.29 Hz/cm^2
    # digis in sector 22: 1253888  --> hit rate: 156736.0 Hz, active area: 5076.0 cm^2   --> 30.88 Hz/cm^2
    # digis in sector 23:  391713  --> hit rate: 48964.1 Hz, active area: 7033.5 cm^2   -->  6.96 Hz/cm^2
    # digis in sector 24: 1726723  --> hit rate: 215840.4 Hz, active area: 7519.5 cm^2   --> 28.70 Hz/cm^2

     *** noise rate calculation in run 20100033 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  103222  --> hit rate: 12902.8 Hz, active area: 7384.5 cm^2   -->  1.75 Hz/cm^2
    # digis in sector 14: 5182142  --> hit rate: 647767.8 Hz, active area: 7276.5 cm^2   --> 89.02 Hz/cm^2
    # digis in sector 15:  414951  --> hit rate: 51868.9 Hz, active area: 7533.0 cm^2   -->  6.89 Hz/cm^2
    # digis in sector 16:  997696  --> hit rate: 124712.0 Hz, active area: 7533.0 cm^2   --> 16.56 Hz/cm^2
    # digis in sector 17:   38535  --> hit rate:  4816.9 Hz, active area: 7465.5 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 18:   31555  --> hit rate:  3944.4 Hz, active area: 7452.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 19:   43480  --> hit rate:  5435.0 Hz, active area: 7425.0 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 20:   36141  --> hit rate:  4517.6 Hz, active area: 7060.5 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 21:   65736  --> hit rate:  8217.0 Hz, active area: 7533.0 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 22:  719292  --> hit rate: 89911.5 Hz, active area: 5022.0 cm^2   --> 17.90 Hz/cm^2
    # digis in sector 23:  894494  --> hit rate: 111811.8 Hz, active area: 7020.0 cm^2   --> 15.93 Hz/cm^2
    # digis in sector 24:   85582  --> hit rate: 10697.8 Hz, active area: 7506.0 cm^2   -->  1.43 Hz/cm^2

     *** noise rate calculation in run 20100034 ***
    # events: 997699, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  233657  --> hit rate: 29274.5 Hz, active area: 7398.0 cm^2   -->  3.96 Hz/cm^2
    # digis in sector 14: 5259678  --> hit rate: 658976.1 Hz, active area: 7290.0 cm^2   --> 90.39 Hz/cm^2
    # digis in sector 15:  423676  --> hit rate: 53081.6 Hz, active area: 7533.0 cm^2   -->  7.05 Hz/cm^2
    # digis in sector 16: 1013263  --> hit rate: 126950.0 Hz, active area: 7533.0 cm^2   --> 16.85 Hz/cm^2
    # digis in sector 17:   39876  --> hit rate:  4996.0 Hz, active area: 7465.5 cm^2   -->  0.67 Hz/cm^2
    # digis in sector 18:   32861  --> hit rate:  4117.1 Hz, active area: 7452.0 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 19:   45815  --> hit rate:  5740.1 Hz, active area: 7411.5 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 20:   39889  --> hit rate:  4997.6 Hz, active area: 7060.5 cm^2   -->  0.71 Hz/cm^2
    # digis in sector 21:   67030  --> hit rate:  8398.1 Hz, active area: 7533.0 cm^2   -->  1.11 Hz/cm^2
    # digis in sector 22:  738368  --> hit rate: 92508.9 Hz, active area: 5022.0 cm^2   --> 18.42 Hz/cm^2
    # digis in sector 23:  974990  --> hit rate: 122154.8 Hz, active area: 7020.0 cm^2   --> 17.40 Hz/cm^2
    # digis in sector 24:  271049  --> hit rate: 33959.3 Hz, active area: 7519.5 cm^2   -->  4.52 Hz/cm^2

     *** noise rate calculation in run 20100035 ***
    # events: 948229, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  306463  --> hit rate: 40399.4 Hz, active area: 7398.0 cm^2   -->  5.46 Hz/cm^2
    # digis in sector 14: 4145483  --> hit rate: 546477.1 Hz, active area: 7303.5 cm^2   --> 74.82 Hz/cm^2
    # digis in sector 15:  411724  --> hit rate: 54275.4 Hz, active area: 7533.0 cm^2   -->  7.21 Hz/cm^2
    # digis in sector 16:  973137  --> hit rate: 128283.5 Hz, active area: 7533.0 cm^2   --> 17.03 Hz/cm^2
    # digis in sector 17:   38389  --> hit rate:  5060.6 Hz, active area: 7465.5 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 18:   32428  --> hit rate:  4274.8 Hz, active area: 7452.0 cm^2   -->  0.57 Hz/cm^2
    # digis in sector 19:   43909  --> hit rate:  5788.3 Hz, active area: 7425.0 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 20:   32342  --> hit rate:  4263.5 Hz, active area: 7060.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 21:   65064  --> hit rate:  8577.0 Hz, active area: 7533.0 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 22:  711961  --> hit rate: 93854.0 Hz, active area: 5022.0 cm^2   --> 18.69 Hz/cm^2
    # digis in sector 23: 1304276  --> hit rate: 171935.8 Hz, active area: 7020.0 cm^2   --> 24.49 Hz/cm^2
    # digis in sector 24:  349383  --> hit rate: 46057.3 Hz, active area: 7519.5 cm^2   -->  6.13 Hz/cm^2

     *** noise rate calculation in run 20100039 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   79151  --> hit rate:  9893.9 Hz, active area: 7411.5 cm^2   -->  1.33 Hz/cm^2
    # digis in sector 14: 4682096  --> hit rate: 585262.0 Hz, active area: 7263.0 cm^2   --> 80.58 Hz/cm^2
    # digis in sector 15:  309045  --> hit rate: 38630.6 Hz, active area: 7533.0 cm^2   -->  5.13 Hz/cm^2
    # digis in sector 16:  763528  --> hit rate: 95441.0 Hz, active area: 7533.0 cm^2   --> 12.67 Hz/cm^2
    # digis in sector 17:   33034  --> hit rate:  4129.2 Hz, active area: 7465.5 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 18:   26069  --> hit rate:  3258.6 Hz, active area: 7452.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 19:   34169  --> hit rate:  4271.1 Hz, active area: 7411.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 20:   25195  --> hit rate:  3149.4 Hz, active area: 7060.5 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 21:   31559  --> hit rate:  3944.9 Hz, active area: 5022.0 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 22:    9970  --> hit rate:  1246.2 Hz, active area: 2619.0 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 23: 1338897  --> hit rate: 167362.1 Hz, active area: 7006.5 cm^2   --> 23.89 Hz/cm^2
    # digis in sector 24:   36306  --> hit rate:  4538.2 Hz, active area: 7492.5 cm^2   -->  0.61 Hz/cm^2

     *** noise rate calculation in run 20100040 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  102508  --> hit rate: 12813.5 Hz, active area: 7398.0 cm^2   -->  1.73 Hz/cm^2
    # digis in sector 14: 4735306  --> hit rate: 591913.2 Hz, active area: 7276.5 cm^2   --> 81.35 Hz/cm^2
    # digis in sector 15:  315507  --> hit rate: 39438.4 Hz, active area: 7533.0 cm^2   -->  5.24 Hz/cm^2
    # digis in sector 16:  782325  --> hit rate: 97790.6 Hz, active area: 7533.0 cm^2   --> 12.98 Hz/cm^2
    # digis in sector 17:   34683  --> hit rate:  4335.4 Hz, active area: 7465.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 18:   27102  --> hit rate:  3387.8 Hz, active area: 7452.0 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 19:   35919  --> hit rate:  4489.9 Hz, active area: 7411.5 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 20:   32781  --> hit rate:  4097.6 Hz, active area: 7060.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 21:   31134  --> hit rate:  3891.8 Hz, active area: 5022.0 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 22:   10336  --> hit rate:  1292.0 Hz, active area: 2511.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 23: 1307919  --> hit rate: 163489.9 Hz, active area: 7006.5 cm^2   --> 23.33 Hz/cm^2
    # digis in sector 24:   35091  --> hit rate:  4386.4 Hz, active area: 7492.5 cm^2   -->  0.59 Hz/cm^2

     *** noise rate calculation in run 20100041 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  112274  --> hit rate: 14034.2 Hz, active area: 7398.0 cm^2   -->  1.90 Hz/cm^2
    # digis in sector 14: 4939974  --> hit rate: 617496.8 Hz, active area: 7263.0 cm^2   --> 85.02 Hz/cm^2
    # digis in sector 15:  321142  --> hit rate: 40142.8 Hz, active area: 7533.0 cm^2   -->  5.33 Hz/cm^2
    # digis in sector 16:  797079  --> hit rate: 99634.9 Hz, active area: 7533.0 cm^2   --> 13.23 Hz/cm^2
    # digis in sector 17:   36123  --> hit rate:  4515.4 Hz, active area: 7465.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:   27731  --> hit rate:  3466.4 Hz, active area: 7452.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:   36027  --> hit rate:  4503.4 Hz, active area: 7411.5 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 20:   39924  --> hit rate:  4990.5 Hz, active area: 7060.5 cm^2   -->  0.71 Hz/cm^2
    # digis in sector 21:   31635  --> hit rate:  3954.4 Hz, active area: 5022.0 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 22:   10688  --> hit rate:  1336.0 Hz, active area: 2511.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 23: 1264020  --> hit rate: 158002.5 Hz, active area: 7020.0 cm^2   --> 22.51 Hz/cm^2
    # digis in sector 24:   44138  --> hit rate:  5517.2 Hz, active area: 7506.0 cm^2   -->  0.74 Hz/cm^2

    //////////////////////////////////////////////////////////////   
    after removal of hot channels
    sector == 14  module == 1  strip ==  8   side == 2
    sector == 14  module == 1  strip == 20  side == 1
    sector == 14  module == 2  strip == 21  side == 2
    sector == 23  module == 1  strip == 14  side == 2

     *** noise rate calculation in run 20100027 ***
    # events: 3733030, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  118305  --> hit rate:  3961.4 Hz, active area: 7411.5 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 14:   93134  --> hit rate:  3118.6 Hz, active area: 7195.5 cm^2   -->  0.43 Hz/cm^2
    # digis in sector 15: 1297181  --> hit rate: 43435.9 Hz, active area: 7533.0 cm^2   -->  5.77 Hz/cm^2
    # digis in sector 16: 3114105  --> hit rate: 104275.4 Hz, active area: 7533.0 cm^2   --> 13.84 Hz/cm^2
    # digis in sector 17:  133229  --> hit rate:  4461.2 Hz, active area: 7465.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:  104530  --> hit rate:  3500.2 Hz, active area: 7452.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:  143004  --> hit rate:  4788.5 Hz, active area: 7425.0 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 20:  125232  --> hit rate:  4193.4 Hz, active area: 7060.5 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 21:  134453  --> hit rate:  4502.1 Hz, active area: 5022.0 cm^2   -->  0.90 Hz/cm^2
    # digis in sector 22: 2192004  --> hit rate: 73399.0 Hz, active area: 5022.0 cm^2   --> 14.62 Hz/cm^2
    # digis in sector 23:  142215  --> hit rate:  4762.0 Hz, active area: 6993.0 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 24:  157796  --> hit rate:  5283.8 Hz, active area: 7506.0 cm^2   -->  0.70 Hz/cm^2

     *** noise rate calculation in run 20100028 ***
    # events: 2266314, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  106478  --> hit rate:  5872.9 Hz, active area: 7411.5 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 14:   92462  --> hit rate:  5099.8 Hz, active area: 7182.0 cm^2   -->  0.71 Hz/cm^2
    # digis in sector 15:  889534  --> hit rate: 49062.8 Hz, active area: 7533.0 cm^2   -->  6.51 Hz/cm^2
    # digis in sector 16: 2164228  --> hit rate: 119369.4 Hz, active area: 7533.0 cm^2   --> 15.85 Hz/cm^2
    # digis in sector 17:  107441  --> hit rate:  5926.0 Hz, active area: 7465.5 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 18:   71176  --> hit rate:  3925.8 Hz, active area: 7452.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 19:   97150  --> hit rate:  5358.4 Hz, active area: 7425.0 cm^2   -->  0.72 Hz/cm^2
    # digis in sector 20:   78378  --> hit rate:  4323.0 Hz, active area: 5818.5 cm^2   -->  0.74 Hz/cm^2
    # digis in sector 21:  138570  --> hit rate:  7642.9 Hz, active area: 7533.0 cm^2   -->  1.01 Hz/cm^2
    # digis in sector 22: 1561490  --> hit rate: 86125.0 Hz, active area: 6196.5 cm^2   --> 13.90 Hz/cm^2
    # digis in sector 23:   98355  --> hit rate:  5424.8 Hz, active area: 6979.5 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 24:  111788  --> hit rate:  6165.7 Hz, active area: 7506.0 cm^2   -->  0.82 Hz/cm^2

     *** noise rate calculation in run 20100031 ***
    # events: 999999, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   71315  --> hit rate:  8914.4 Hz, active area: 7411.5 cm^2   -->  1.20 Hz/cm^2
    # digis in sector 14:   33827  --> hit rate:  4228.4 Hz, active area: 7182.0 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 15:  406157  --> hit rate: 50769.7 Hz, active area: 7533.0 cm^2   -->  6.74 Hz/cm^2
    # digis in sector 16:  983439  --> hit rate: 122930.0 Hz, active area: 7533.0 cm^2   --> 16.32 Hz/cm^2
    # digis in sector 17:   39605  --> hit rate:  4950.6 Hz, active area: 7465.5 cm^2   -->  0.66 Hz/cm^2
    # digis in sector 18:   31899  --> hit rate:  3987.4 Hz, active area: 7452.0 cm^2   -->  0.54 Hz/cm^2
    # digis in sector 19:   44589  --> hit rate:  5573.6 Hz, active area: 7425.0 cm^2   -->  0.75 Hz/cm^2
    # digis in sector 20:   27124  --> hit rate:  3390.5 Hz, active area: 5818.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 21:   63972  --> hit rate:  7996.5 Hz, active area: 7533.0 cm^2   -->  1.06 Hz/cm^2
    # digis in sector 22:  707713  --> hit rate: 88464.2 Hz, active area: 5022.0 cm^2   --> 17.62 Hz/cm^2
    # digis in sector 23:   46908  --> hit rate:  5863.5 Hz, active area: 6979.5 cm^2   -->  0.84 Hz/cm^2
    # digis in sector 24:  115086  --> hit rate: 14385.8 Hz, active area: 7506.0 cm^2   -->  1.92 Hz/cm^2

     *** noise rate calculation in run 20100032 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   61629  --> hit rate:  7703.6 Hz, active area: 7411.5 cm^2   -->  1.04 Hz/cm^2
    # digis in sector 14:   57949  --> hit rate:  7243.6 Hz, active area: 7209.0 cm^2   -->  1.00 Hz/cm^2
    # digis in sector 15:  926726  --> hit rate: 115840.8 Hz, active area: 7533.0 cm^2   --> 15.38 Hz/cm^2
    # digis in sector 16: 1931218  --> hit rate: 241402.2 Hz, active area: 7533.0 cm^2   --> 32.05 Hz/cm^2
    # digis in sector 17:   67903  --> hit rate:  8487.9 Hz, active area: 7465.5 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 18:   65234  --> hit rate:  8154.2 Hz, active area: 7465.5 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 19:   94376  --> hit rate: 11797.0 Hz, active area: 7425.0 cm^2   -->  1.59 Hz/cm^2
    # digis in sector 20:   51686  --> hit rate:  6460.8 Hz, active area: 7074.0 cm^2   -->  0.91 Hz/cm^2
    # digis in sector 21:  258684  --> hit rate: 32335.5 Hz, active area: 7533.0 cm^2   -->  4.29 Hz/cm^2
    # digis in sector 22: 1253888  --> hit rate: 156736.0 Hz, active area: 5076.0 cm^2   --> 30.88 Hz/cm^2
    # digis in sector 23:   75649  --> hit rate:  9456.1 Hz, active area: 6993.0 cm^2   -->  1.35 Hz/cm^2
    # digis in sector 24: 1726723  --> hit rate: 215840.4 Hz, active area: 7519.5 cm^2   --> 28.70 Hz/cm^2

     *** noise rate calculation in run 20100033 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  103222  --> hit rate: 12902.8 Hz, active area: 7384.5 cm^2   -->  1.75 Hz/cm^2
    # digis in sector 14:   33762  --> hit rate:  4220.2 Hz, active area: 7168.5 cm^2   -->  0.59 Hz/cm^2
    # digis in sector 15:  414951  --> hit rate: 51868.9 Hz, active area: 7533.0 cm^2   -->  6.89 Hz/cm^2
    # digis in sector 16:  997696  --> hit rate: 124712.0 Hz, active area: 7533.0 cm^2   --> 16.56 Hz/cm^2
    # digis in sector 17:   38535  --> hit rate:  4816.9 Hz, active area: 7465.5 cm^2   -->  0.65 Hz/cm^2
    # digis in sector 18:   31555  --> hit rate:  3944.4 Hz, active area: 7452.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 19:   43480  --> hit rate:  5435.0 Hz, active area: 7425.0 cm^2   -->  0.73 Hz/cm^2
    # digis in sector 20:   36141  --> hit rate:  4517.6 Hz, active area: 7060.5 cm^2   -->  0.64 Hz/cm^2
    # digis in sector 21:   65736  --> hit rate:  8217.0 Hz, active area: 7533.0 cm^2   -->  1.09 Hz/cm^2
    # digis in sector 22:  719292  --> hit rate: 89911.5 Hz, active area: 5022.0 cm^2   --> 17.90 Hz/cm^2
    # digis in sector 23:   52366  --> hit rate:  6545.8 Hz, active area: 6979.5 cm^2   -->  0.94 Hz/cm^2
    # digis in sector 24:   85582  --> hit rate: 10697.8 Hz, active area: 7506.0 cm^2   -->  1.43 Hz/cm^2

     *** noise rate calculation in run 20100034 ***
    # events: 997699, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  233657  --> hit rate: 29274.5 Hz, active area: 7398.0 cm^2   -->  3.96 Hz/cm^2
    # digis in sector 14:  217536  --> hit rate: 27254.7 Hz, active area: 7182.0 cm^2   -->  3.79 Hz/cm^2
    # digis in sector 15:  423676  --> hit rate: 53081.6 Hz, active area: 7533.0 cm^2   -->  7.05 Hz/cm^2
    # digis in sector 16: 1013263  --> hit rate: 126950.0 Hz, active area: 7533.0 cm^2   --> 16.85 Hz/cm^2
    # digis in sector 17:   39876  --> hit rate:  4996.0 Hz, active area: 7465.5 cm^2   -->  0.67 Hz/cm^2
    # digis in sector 18:   32861  --> hit rate:  4117.1 Hz, active area: 7452.0 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 19:   45815  --> hit rate:  5740.1 Hz, active area: 7411.5 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 20:   39889  --> hit rate:  4997.6 Hz, active area: 7060.5 cm^2   -->  0.71 Hz/cm^2
    # digis in sector 21:   67030  --> hit rate:  8398.1 Hz, active area: 7533.0 cm^2   -->  1.11 Hz/cm^2
    # digis in sector 22:  738368  --> hit rate: 92508.9 Hz, active area: 5022.0 cm^2   --> 18.42 Hz/cm^2
    # digis in sector 23:  233194  --> hit rate: 29216.5 Hz, active area: 6979.5 cm^2   -->  4.19 Hz/cm^2
    # digis in sector 24:  271049  --> hit rate: 33959.3 Hz, active area: 7519.5 cm^2   -->  4.52 Hz/cm^2

     *** noise rate calculation in run 20100035 ***
    # events: 948229, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  306463  --> hit rate: 40399.4 Hz, active area: 7398.0 cm^2   -->  5.46 Hz/cm^2
    # digis in sector 14:  172820  --> hit rate: 22781.9 Hz, active area: 7195.5 cm^2   -->  3.17 Hz/cm^2
    # digis in sector 15:  411724  --> hit rate: 54275.4 Hz, active area: 7533.0 cm^2   -->  7.21 Hz/cm^2
    # digis in sector 16:  973137  --> hit rate: 128283.5 Hz, active area: 7533.0 cm^2   --> 17.03 Hz/cm^2
    # digis in sector 17:   38389  --> hit rate:  5060.6 Hz, active area: 7465.5 cm^2   -->  0.68 Hz/cm^2
    # digis in sector 18:   32428  --> hit rate:  4274.8 Hz, active area: 7452.0 cm^2   -->  0.57 Hz/cm^2
    # digis in sector 19:   43909  --> hit rate:  5788.3 Hz, active area: 7425.0 cm^2   -->  0.78 Hz/cm^2
    # digis in sector 20:   32342  --> hit rate:  4263.5 Hz, active area: 7060.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 21:   65064  --> hit rate:  8577.0 Hz, active area: 7533.0 cm^2   -->  1.14 Hz/cm^2
    # digis in sector 22:  711961  --> hit rate: 93854.0 Hz, active area: 5022.0 cm^2   --> 18.69 Hz/cm^2
    # digis in sector 23:  202681  --> hit rate: 26718.4 Hz, active area: 6979.5 cm^2   -->  3.83 Hz/cm^2
    # digis in sector 24:  349383  --> hit rate: 46057.3 Hz, active area: 7519.5 cm^2   -->  6.13 Hz/cm^2

     *** noise rate calculation in run 20100039 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:   79151  --> hit rate:  9893.9 Hz, active area: 7411.5 cm^2   -->  1.33 Hz/cm^2
    # digis in sector 14:   33428  --> hit rate:  4178.5 Hz, active area: 7168.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 15:  309045  --> hit rate: 38630.6 Hz, active area: 7533.0 cm^2   -->  5.13 Hz/cm^2
    # digis in sector 16:  763528  --> hit rate: 95441.0 Hz, active area: 7533.0 cm^2   --> 12.67 Hz/cm^2
    # digis in sector 17:   33034  --> hit rate:  4129.2 Hz, active area: 7465.5 cm^2   -->  0.55 Hz/cm^2
    # digis in sector 18:   26069  --> hit rate:  3258.6 Hz, active area: 7452.0 cm^2   -->  0.44 Hz/cm^2
    # digis in sector 19:   34169  --> hit rate:  4271.1 Hz, active area: 7411.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 20:   25195  --> hit rate:  3149.4 Hz, active area: 7060.5 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 21:   31559  --> hit rate:  3944.9 Hz, active area: 5022.0 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 22:    9970  --> hit rate:  1246.2 Hz, active area: 2619.0 cm^2   -->  0.48 Hz/cm^2
    # digis in sector 23:  140984  --> hit rate: 17623.0 Hz, active area: 6966.0 cm^2   -->  2.53 Hz/cm^2
    # digis in sector 24:   36306  --> hit rate:  4538.2 Hz, active area: 7492.5 cm^2   -->  0.61 Hz/cm^2

     *** noise rate calculation in run 20100040 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  102508  --> hit rate: 12813.5 Hz, active area: 7398.0 cm^2   -->  1.73 Hz/cm^2
    # digis in sector 14:   55242  --> hit rate:  6905.2 Hz, active area: 7182.0 cm^2   -->  0.96 Hz/cm^2
    # digis in sector 15:  315507  --> hit rate: 39438.4 Hz, active area: 7533.0 cm^2   -->  5.24 Hz/cm^2
    # digis in sector 16:  782325  --> hit rate: 97790.6 Hz, active area: 7533.0 cm^2   --> 12.98 Hz/cm^2
    # digis in sector 17:   34683  --> hit rate:  4335.4 Hz, active area: 7465.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 18:   27102  --> hit rate:  3387.8 Hz, active area: 7452.0 cm^2   -->  0.45 Hz/cm^2
    # digis in sector 19:   35919  --> hit rate:  4489.9 Hz, active area: 7411.5 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 20:   32781  --> hit rate:  4097.6 Hz, active area: 7060.5 cm^2   -->  0.58 Hz/cm^2
    # digis in sector 21:   31134  --> hit rate:  3891.8 Hz, active area: 5022.0 cm^2   -->  0.77 Hz/cm^2
    # digis in sector 22:   10336  --> hit rate:  1292.0 Hz, active area: 2511.0 cm^2   -->  0.51 Hz/cm^2
    # digis in sector 23:  177015  --> hit rate: 22126.9 Hz, active area: 6966.0 cm^2   -->  3.18 Hz/cm^2
    # digis in sector 24:   35091  --> hit rate:  4386.4 Hz, active area: 7492.5 cm^2   -->  0.59 Hz/cm^2

     *** noise rate calculation in run 20100041 ***
    # events: 1000000, time window: 4 microseconds, area of half strip: 13.5 cm^2
    # digis in sector 13:  112274  --> hit rate: 14034.2 Hz, active area: 7398.0 cm^2   -->  1.90 Hz/cm^2
    # digis in sector 14:  115095  --> hit rate: 14386.9 Hz, active area: 7168.5 cm^2   -->  2.01 Hz/cm^2
    # digis in sector 15:  321142  --> hit rate: 40142.8 Hz, active area: 7533.0 cm^2   -->  5.33 Hz/cm^2
    # digis in sector 16:  797079  --> hit rate: 99634.9 Hz, active area: 7533.0 cm^2   --> 13.23 Hz/cm^2
    # digis in sector 17:   36123  --> hit rate:  4515.4 Hz, active area: 7465.5 cm^2   -->  0.60 Hz/cm^2
    # digis in sector 18:   27731  --> hit rate:  3466.4 Hz, active area: 7452.0 cm^2   -->  0.47 Hz/cm^2
    # digis in sector 19:   36027  --> hit rate:  4503.4 Hz, active area: 7411.5 cm^2   -->  0.61 Hz/cm^2
    # digis in sector 20:   39924  --> hit rate:  4990.5 Hz, active area: 7060.5 cm^2   -->  0.71 Hz/cm^2
    # digis in sector 21:   31635  --> hit rate:  3954.4 Hz, active area: 5022.0 cm^2   -->  0.79 Hz/cm^2
    # digis in sector 22:   10688  --> hit rate:  1336.0 Hz, active area: 2511.0 cm^2   -->  0.53 Hz/cm^2
    # digis in sector 23:   48717  --> hit rate:  6089.6 Hz, active area: 6979.5 cm^2   -->  0.87 Hz/cm^2
    # digis in sector 24:   44138  --> hit rate:  5517.2 Hz, active area: 7506.0 cm^2   -->  0.74 Hz/cm^2
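
    For orientation, the per-area figures in these listings are simply the quoted hit rate divided by the active area of the sector, with the active area counted in units of the 13.5 cm^2 half strip. A minimal Python sketch of this bookkeeping follows (illustrative only; it is not the counting script that produced the listings, and the digi-to-rate normalization itself is defined by that script):

        # Sketch of the per-sector rate bookkeeping used in the listings above.
        # Illustrative only -- not the original counting script.

        HALF_STRIP_AREA = 13.5   # cm^2, as quoted in each run header

        def rate_density(hit_rate_hz, active_area_cm2):
            """Noise rate per unit area for one sector [Hz/cm^2]."""
            return hit_rate_hz / active_area_cm2

        # Example: run 20100027, sector 13 -> 3961.4 Hz over 7411.5 cm^2
        n_half_strips = 7411.5 / HALF_STRIP_AREA             # 549 live half strips
        print(n_half_strips, rate_density(3961.4, 7411.5))   # 549.0, ~0.53 Hz/cm^2 as listed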

    eToF Runlist 9.2 GeV

    https://docs.google.com/spreadsheets/d/1wycWpzHmOrwEJRk5_BpDO9fy1IvJH1hoxVB0elaC1tM/edit#gid=0

    FCS

    Forward Calorimeter System

    FCS DB

    Offline Database, Geometry for FCS

    • fcsDetectorPosition.idl (1 row) : Detector Position for each [det=0~3]
      struct fcsDetectorPosition {
      float xoff[4]; /* x offset [cm] from beam to front of near beam corner */
      float yoff[4]; /* y offset [cm] from beam to detector center */
      float zoff[4]; /* z offset [cm] from IR to front of near beam corner */
      };

    Offline Database, Calibration for FCS

    • fcsEcalGain.idl (1 row) : Ecal Gain [GeV/ch]
      struct fcsEcalGain {
      float gain[1496]; /* gain 2*748 */
      };
    • fcsEcalGainCorr.idl (1 row) : Ecal Gain Correction [unitless]
      struct fcsEcalGainCorr {
      float gaincorr[1496]; /* gain correction 2*748 */
      };
    • fcsHcalGain.idl (1 row) : Hcal Gain [GeV/ch]
      struct fcsHcalGain {
      float gain[520]; /* gain 2*260 */
      };
    • fcsHcalGainCorr.idl (1 row) : Hcal Gain Correction [unitless]
      struct fcsHcalGainCorr {
      float gaincorr[520]; /* gain correction 2*260 */
      };
    • fcsPresGain.idl (1 row) : Pres Gain [MIP/ch]
      struct fcsPresGain {
      float gain[384]; /* gain 2*192 */
      };
    • fcsPresValley.idl (1 row) : Pres Valley Position [MIP]
      struct fcsPresValley {
      float valley[384]; /* valley 2*192 */
      };
    • fcsEcalGainOnline.idl (1 row) : Ecal Gain Online [unitless]
      struct fcsEcalGainOnline {
      float gainOnline[1496]; /* online gain 2*748 */
      };
    • fcsHcalGainOnline.idl (1 row) : Hcal Gain Online [unitless]
      struct fcsHcalGainOnline {
      float gainOnline[520]; /* online gain 2*260 */
      };
    • fcsPresThreshold.idl (1 row) : Pres Threshold Position [MIP]
      struct fcsPresThreshold {
      float threshold[384]; /* online threshold 2*192 */
      };
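
    As an illustration of how the calibration tables above would typically be combined (a sketch under assumptions, not code from the official FCS offline chain; the flat channel index and the purely multiplicative calibration are assumptions made here):

        # Hedged sketch: combining fcsEcalGain [GeV/ch] and fcsEcalGainCorr [unitless]
        # to turn an ADC count into an energy. The channel indexing 0..1495 simply
        # follows the array dimensions (2*748) of the tables above.

        N_ECAL_CH = 1496   # matches float gain[1496] / float gaincorr[1496]

        def ecal_energy(adc, ch, gain, gaincorr):
            """Convert an Ecal ADC count to energy in GeV for channel ch (assumed formula)."""
            assert 0 <= ch < N_ECAL_CH
            return adc * gain[ch] * gaincorr[ch]

        # usage with hypothetical numbers: ecal_energy(adc=120, ch=37, gain=g, gaincorr=c)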

    FGT


    Graphics courtesy of J. Kelsey, MIT

     

    FGT. . . . . . . . . . P H Y S I C S  
    goals, publications, presentations

    (PowerPoint, PDF, KeyNote, etc. presentations given by members at meetings, conferences, etc.)


    FGT . . . . . . . . . . M E E T I N G S   

    minutes, calendar
    doc-upload: FGT-HN Upload contributions

    (list of people to contact for specific areas, general list of people in FGT group)

     

    FGT. . . . . . . . . . S I M U L A T I O N S 
    specific information on Monte Carlo
    geometry, detector response, event samples

     

    Hardware 
    FGT. . . . . . . . . H A R D W A R E -- D E T E C T O R 
    Triple GEM Foils 
    Integration - specific information about the integration of FGT into STAR, west support cylinder, etc.

    Thermo-model 16 cm
    FGT. . . . . . . . . . H A R D W A R E -- E L E C T R O N I C S  electronics, mother board, etc.

    FGT . . . . . . . . . . DAQ  readout, DAQ, etc.

    FGT. . . . . . . . . . H A R D W A R E -- S L O W C O N T R O L S  

     

    FGT. . . . . . . . . . S O F T W A R E 
    muDst container, geant hits, DB tables

     

     

    FGT . . . . . . . . . . C A L I B R A T I O N
    algo1, y2010
    FGT . . . . . . . . . . D O C U M E N T S  
    proposal, reviews, design notes, tables, responsibilities, etc.
    (generally text documents related to the FGT)
    FGT . . . . . . . . . . A N A L Y S I S
    e/h algo Endcap, high-pT tracking
    FGT. . . . . . . . . . P H O T O - - G A L L E R Y + PR figures
    (useful plots, drawings, pictures, etc. in original format that people can use in presentations)
    FGT. . . . . . . . . . I N S T R U C T I O N S  
    edit web-page, HPSS I/O
    FGT. . . . . . . . . . V A R I A

    FGT sub-system web pages use Drupal nodes: 7217, 10145-156.


     

    FGT. . . . . . . . . . I N S T R U C T I O N S

    1. FGT OPS manual
    2. HPSS I/O  private/production files
    3. Navigation through FGT web pages
    4. Guidelines For MIT Tier2 Job Requests (Mike)

     

    FGT OPS manual

     This is the cover page for the future FGT OPS manual for year 2012


    A)  HV control - Instructions:

    1) Login to gateway: ssh rssh.rhic.bnl.gov -l [username] -X

    2) Login to rcf2: ssh rcf2.rhic.bnl.gov -l [username] -X

    3) Login to fgt-ops:    ssh fgt-ops.starp.bnl.gov -l fgttester -X    (needs password)

    4) cd snmp

    5) run_display


    B)  Reprogramming of FGT ARC (electronics)

    1) from rcf2:   ssh operator@daqman.starp.bnl.gov   (needs password)

    2) cd /RTS/GERARD/ARC_support

    3) execute  REPROGRAM


    C)  Dumping FGT ARC register values to file

    1) from rcf2:   ssh operator@daqman.starp.bnl.gov   (needs password)

    2) cd corliss/FgtArc_20120215

    3) execute  read_both >> filename (where 'filename' is the log file of choice.  This is very fast.)

    3a) repeat the above with different filenames before and after stopping the run, as requested

    4) transfer back to rcf or wherever you like with scp


    D)  FGT Gas Supply

    1) All adjustments on the gas flow control panel on the STAR detector are done by experts only.

    2) The manual for monitoring the FGT Gas Supply by the Detector Operators is found here.


    E)  Changing latency of APVS  (expert only)

     edit /RTS/con/fgt/fgt_rdo_tune.txt in daqman as operator 

     start new run - nothing needs to be uploaded

     

     


    F)  FGT QA Monitoring / Shift crew instructions (defined by Bernd)

    FGT group schedule Run 12 (500GeV) 

    1) Location of FGT QA (XLS-file / Numbers-file / pdf-file)

    2) Procedure:

    a) Download above file (XLS or Numbers) and add your QA information

    b) Consult shift summary and only use 'good runs' for QA

    c) Check the shift log for entries indicating FGT problems and note them in the spreadsheet

    Step 1) J-PLOTS QA: Look at JPlots for runs where FGT events are present (use the shift summary or FGT status)

              Reference 1 (Until 03/19/2012)

              Reference 2 (03/19/2012 - NOW)

    Use 0 / 1 to mark 'no change' or 'change' compared to the reference plot. In case of a change, add a note using the spreadsheet function and send an email to the FGT list.

    Step 2) Slow control QA: Check database plots

               then click on -> Conditions_fgt

              2a)  First  ->  FGT Gas System  ->  DISPLAY SELECTED CHANNELS  ->  Click Here  ->  adjust X, Y axis parameters,

               then  ->  DISPLAY!    Also can adjust size of plot with [+] or [-] zoom buttons.

               Reference plot 

    2b)  Next  ->  FGT Board Voltages  ->  FGT Board Voltages - measured, uX or u10X or u20X

               ->  DISPLAY SELECTED CHANNELS  -> Click Here  ->  etc. as above. 

               Reference plot

              2c)  Then  ->  FGT Board Currents  ->  FGT Board Currents - measured, uX or u10X or u20x

              ->  DISPLAY SELECTED CHANNELS  ->  Click Here  ->  etc. as above.

              Probably use  "Y Axis LOW" = .0002 and "Y Axis HIGH" = .0004 to view the currents.

              Reference plot

              Use 0 / 1 to mark 'no change' or 'change' compared to the reference plot. In case of a change, add a note using the spreadsheet function and send an email to the FGT list.

     


     

    Not implemented yet!

    4)  Check pedestals  (http://www.star.bnl.gov/protected/spin/rfatemi/FGT/fgt_status_time.gif)

          *** need example plot showing good / bad condition ***

    5)  Record problems found  *** how / where ***

          *** need names of people to contact if there are problems found during the QA ***

          When complete for the day, record your name and date *** where ***

    JANs sub-page for FGT ops

     JAN: this page was created after Jerome opened write privileges

    RENEE : I am writing on Jan's child page

    Renee's Child Page

    Is it this easy to have a child?

    At the beginning, yes; with time it becomes a more serious responsibility, but parents mature in sync (or not). Jan

    HPSS I/O

     

     


    HTAR :  save/restore  private directory (large set of small files) in HPSS

    To save all files in directory 'aaa' on disk to your HPSS target bbb/ccc.tar, keeping a log file in ~/0x/ddd.log1 (needed if you want to kill/quit the window), do:
     
    cd aaa
    du -hs .
    htar -cf bbb/ccc.tar file1 file2 fileX* ... >& ~/0x/ddd.log1 &
    (wait until the job is done; for 50 GB it may take several hours)
    hsi (goes directly to HPSS, be careful!)
    ?ls -l  bbb/
    ?quit (exit hsi)
     
    You should see two files: ccc.tar and ccc.tar.idx; the size of ccc.tar should be close to the size of your original directory.
    The .tar file shows up immediately, but the transfer is NOT finished until you see the .idx file.
    The log file should contain the string 'HTAR: HTAR SUCCESSFUL' and no warnings.
    However, the most reliable way to verify that the storage did not fail is to retrieve the files back.
     
    To retrieve files from HPSS into a disk directory eee, do:
     
    mkdir eee
    cd eee
    htar -xf bbb/ccc.tar  >& ~/0x/ddd.log2 &
     
    Other options of the HTAR command are described here: 
           http://drupal.star.bnl.gov/STAR/comp/sofi/hpss/htar

    Known problems:

    WARNING: htar_PreallocateSpace: HPSS OUT-OF-SPACE error preallocating
    **** bytes for file=[/home/salur/myfile.tar]
    Q: This is a directory with 350 GB. Do I have to divide my files into smaller pieces?
    A: Yes - there is a 60 GB maximum file size limit when creating tarballs. Another
    limit to be aware of, however, is an 8 GB limit on individual member files.

     

     


     Mass restoring of ~100 GB of files from HPSS using the Data Carousel

    Prepare an input file xxx.hpss with a list of source + destination paths for all files you need, one line per file.

    Execute the command (at any location at rcas6nnn)

     hpss_user.pl -f xxx.hpss

    Wait 10-1000 minutes for the files to show up at your destination (if all paths are correct)

     

     

     

    Instructions for new users uploading FGT documents

    Instructions for remote uploading of documents to the common FGT Drupal area, to complement missing features of fgt-hn

     


    First time setup

    1. You need to log in to Drupal as yourself. The password is different from your regular RCF password. If you do not remember your Drupal password, select 'forgot password' and a new one will be sent to you by e-mail.
    2. Go to the mother web page:
      http://drupal.star.bnl.gov/STAR/subsys/upgr/fgt/hn-upload-contributions
      You can also navigate: SubSystems --> Detector Upgrades --> FGT --> FGT-HN Upload
    3. Go to the bottom and select 'Add child page'.
    4. Set the following fields:
      * Title: yourName-FGT-HN-Uploads (edit it)
      * Parent: ------FGT-HN-Upload contributions (selectable from menu)
      * STAR: --FGT (selectable)
      * Body: Your Name : list of attachments sent to FGT-HN (edit it, just 1 line)
      Upload a test JPG file using the menu at the very bottom of the page. To attach a new file, first 'Choose File', then 'Attach'.
    5. Finally at the very bottom 'Submit' the whole page to Drupal.
    6. go to the 'mother-page' and you should see:
      * in the upper part your JPG
      * in the lower part (below bold text) your sub-page among other people who already joined, alphabetical order.
      If you can't see your contribution at this point, contact me (Jan). Do not try again - it will only add clutter to Drupal. If you add more than one page, only the first one will be considered.

     


    Uploading subsequent documents

    1. Login to drupal and go to your sub-page with this test JPEG
    2. Click Edit
    3. Scroll down to the upload section and add as many documents as needed, one by one
    4. You may change the name of an uploaded file - just edit it
    5. Click 'Submit'
    6. Go to the 'mother-page' and you should see your new documents listed in chronological order.
    7. Now you can send an e-mail to FGT-HN referring to your newly uploaded documents, e.g.:
      .... new docs are uploaded to:
      http://drupal.star.bnl.gov/STAR/subsys/upgr/fgt/hn-upload-contributions 
      under myName, files xxx-yyy-jpg and zzz-uuu-mmm.pdf
      

    Report to me if there are any problems,
    Jan

    FGT. . . . . . . . . . P H Y S I C S

    1. Physics motivation
    2. Publications 
    3. Talks
      date person format title occasion
      April 2006 Frank pdf ppt Development of Tracking Detectors with Industrially Produced GEM foils TPC workshop, Berkeley
      October 2006 Frank pdf ppt Development of Tracking Detectors with Industrially Produced GEM Foils IEEE NSS 2006, San Diego, CA, USA
      January 2007 Bernd pdf FGT-Technical implementation, Cost, R&D plan, Schedule  BNL Detector Advisory Committee Meeting
      - - - - -
    4. NIM paper on the Fermilab test beam is now published in NIM. 
      F. Simon et al., Nuclear Instruments and Methods in Physics Research A 598 (2009) 432–438

     


     

    FGT. . . . . . . . . . S I M U L A T I O N S

    Simulation of physics background for development of e/h algo. ( Physics Background Simulation)  

    Selected topics:

     

    Physics Background Simulation

    Ongoing effort to prepare physics-background samples for developing e/h discrimination algorithms for W reconstruction

    Existing Framework For FGT Simulations

    With the completion of the FGT hardware review, focus must now turn to the creation and implementation of efficient analysis algorithms. To this end a framework for creating simulation files must be established. Here I document the existing framework I created this past summer for a small FGT project.
        The creation of simulation files within STAR is handled in two steps.

    • starsim is called with a PYTHIA kumac, creating a fzd file.
    • The fzd file is converted to a full muDst with the bfc.

    The kumac sets all parameters of the underlying physics events, while the bfc handles the fine tuning of the reconstruction of the STAR detector.
        In the following analysis the simulation was split by the underlying physics processes: those producing W bosons and those from QCD 2->2 interactions. Both are produced with the geometry UPGR13 and sqrt(s) = 500 GeV, in accordance with the FGT upgrade. Moreover, each kumac accepts a random number seed to ensure that different jobs are independent of each other. Specific details are included below.

    PYTHIA Settings (W sample / Hadron sample)
      Geometry: UPGR13 / UPGR13
      sqrt(s): 500 GeV / 500 GeV
      Vertex (mean, variance): 0 cm, 60 cm / 0 cm, 60 cm
      Subprocesses: 2, 15, 20, 23, 25, 31 / 11, 12, 13, 28, 53, 68, 96
      CKIN3: 10 GeV / 10 GeV
      CKIN4: None / None
      Additional cuts: require an electron in the endcap with pT > 15 GeV / None
      Additional notes: W bosons made explicitly unstable / None
      Events per job: 4000 / 2000
      Cross section per job: O(10^3 pb) / O(10^3 pb)
      PYTHIA + GSTAR + BFC running time per job: O(20 hours) / O(16 hours)
      PYTHIA + GSTAR running time per job: O(12 hours) / O(6 hours)
      Memory used per job: O(1000 MB) / O(1500 MB)
      Size of output files per job: O(2.5 GB) / O(2.5 GB)
    Once starsim has completed, the files are processed through the bfc with the command

    bfc.C\(1,nEvents,"trs ssd upgr13 pixFastSim Idst IAna l0 tpcI fcf Tree logger ITTF Sti StiRnd PixelIT IstIT StiPulls genvtx NoSvtIt SsdIt MakeEvent McEvent geant evoutgeantout IdTruth tags bbcSim emcY2 EEfs big -dstout fzin MiniMcMk McEvOut clearmem","fileName.fzd")

    Detailed log files can be found at

    /star/institutions/mit/betan/FGT/Logs/

    while the existing files are at

    /star/institutions/mit/betan/ROOT_Files/FGT/

    Attached are the relevant scripts (the _.txt has been added to get around DRUPAL attachment restrictions), although the hardcoded paths are no longer valid.  The master script is submit_simulations.csh, which submits all the others to the RCAS queue star_cas_big.  In particular, W_job.csh and hadron_job.csh are each called with the command line arguments

    *_job.csh nEvents fileName randomNumberSeed

    The *_job.csh scripts then call starsim and the bfc with the necessary arguments passed as appropriate.

    Filtered Simulations For The Development of Electron/Hadron Discrimination Algorithms

    Critical to the upcoming flavor physics at STAR is efficient electron identification in the endcap, particularly amongst the dominant charged hadron/meson background.  The development of such discrimination algorithms, however, requires extensive simulations.  To reduce the computational burden of these simulations to a practical level we must modify starsim, reconstructing only those events of interest.

    In this case we require electrons and charged hadrons/mesons in the endcap, and by adding a filter to PYTHIA we can ensure that only events matching these criteria are reconstructed. Below is a small study showing the significant reduction in computing time gained with this filtering.

    The filtered simulations required a charged hadron or meson in the endcap with pT > 15 GeV before GSTAR was allowed to reconstruct the event. Candidate refers to an event meeting the above requirements in the PYTHIA record, and projected refers to an extrapolation estimate for 80 pb^-1.

    Hadrons/Mesons                   Filtered   Filtered (Projected)   Unfiltered   Unfiltered (Projected)
    Candidates                       10         18,604                 10           18,604
    Reconstructed GEANT events       10         18,604                 2863         5,326,325
    Integrated luminosity (pb^-1)    0.043      80                     0.043        80
    Starsim running time (hours)     0.274      509                    7.42         13,817
    BFC running time (hours)         0.845      1572                   24.2         45,023
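
    The 'Projected' columns are just a luminosity rescaling of the 0.043 pb^-1 test sample to the 80 pb^-1 target; a quick check of this arithmetic (illustrative only):

        # Scale a measured quantity from the test-sample luminosity to the 80 pb^-1 target.
        def project(value, sample_lumi_invpb, target_lumi_invpb=80.0):
            return value * target_lumi_invpb / sample_lumi_invpb

        # e.g. filtered starsim time: 0.274 h at 0.043 pb^-1 -> ~510 h at 80 pb^-1
        print(project(0.274, 0.043))   # ~509.8, matching the table above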

    Looking at the tracks in the GEANT record we see that the filtering worked as described.  Here every charged hadron/meson in every reconstructed event is shown, and in the Filtered sample we see a sharp cutoff at 15 GeV exactly as expected. 

    The same analysis can be done for the W boson simulations, here instead requiring an electron(positron) from a W decay in the endcap with pT > 15 GeV.

    Ws                               Filtered   Filtered (Projected)   Unfiltered   Unfiltered (Projected)
    Candidates                       10         672                    10           672
    Reconstructed GEANT events       10         10                     357          24,020
    Integrated luminosity (pb^-1)    1.189      80                     1.189        80
    Starsim running time (hours)     0.0361     2.42                   0.822        55.3
    BFC running time (hours)         0.0882     5.93                   2.807        188.8

    Again, comparing the two files shows the desired effects

    In both cases the computational demands seem impractical, but one has to remember that this extrapolation is based on the use of a single processor. Real jobs will be run in parallel, significantly reducing the projected times to more reasonable levels. Despite the reduction, however, it might still be necessary to reduce the desired integrated luminosity slightly.

    Filtering PYTHIA Events In Starsim

    Within the STAR framework, simulation files are created with the command starsim, which runs both PYTHIA and GEANT. Unfortunately, this means that there is no straightforward way to filter PYTHIA events before the GEANT reconstruction, and producing simulation files for rare events can be very time consuming, as most of the CPU time is wasted on the GEANT reconstruction of undesired events.

    The trick around this is to modify the PYTHIA libraries themselves.  In particular we want to modify the PYEVNT subroutine which is run during the generation of each PYTHIA event.  Begin by

    •     Check out a copy of the pythia libraries from CVS
    •     Create a backup of pyevnt.F in case anything goes wrong
    •     Open pyevnt.F in your favorite text editor
    •     Rename SUBROUTINE PYEVNT to SUBROUTINE PYEVNT_ORG
    •     Create your own subroutine, SUBROUTINE PYEVNT
    •     Copy the variable declarations and common blocks from the original PYEVNT

    The body of this new subroutine will in general go as follows:

    while (conditions have not been met)
        Call the original PYEVNT
        Call any necessary auxiliary subroutines
        Loop over particles
            if (not desired characteristic a) continue
            if (not desired characteristic b) continue
            ...
            if (not desired characteristic i) continue
            Calculate relevant kinematic variables
            Check conditions
    Call PYLIST

    Note that, to avoid too many nested if blocks, we move on to the next particle as soon as a test fails.

    For example, consider an analysis requiring a high energy electron in the endcap. The usual PYTHIA settings allow one to require a high energy electron, but there is no way to restrict its location in the detector. So the above pseudocode becomes

    while (no high pT electron in the endcap)
        Call the original PYEVNT
        Loop over particles
            if (not electron) continue
            Calculate pT
            Calculate eta
            if (pT < 15) continue
            if (eta < 1) continue
            if (eta > 2) continue
            return
    Call PYLIST
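
    The same logic, written out as a runnable Python sketch purely for illustration (the real modification lives in Fortran inside pyevnt.F; the event and particle accessors used here are hypothetical stand-ins for reads of the PYJETS common block):

        import random

        def filtered_event(generate_event, particles,
                           pt_min=15.0, eta_min=1.0, eta_max=2.0):
            """Regenerate events until one has an electron with
            pT > pt_min inside eta_min < eta < eta_max (the 'endcap')."""
            while True:
                event = generate_event()             # stands in for CALL PYEVNT_ORG
                for pid, pt, eta in particles(event):
                    if abs(pid) != 11:                # not an electron/positron
                        continue
                    if pt < pt_min:
                        continue
                    if not (eta_min < eta < eta_max):
                        continue
                    return event                      # accept: hand the event to GSTAR
                # no qualifying electron: discard and generate another event

        # toy usage with a dummy event generator (illustration only)
        toy_generate = lambda: [(random.choice([11, 211]),
                                 random.uniform(0.0, 30.0),
                                 random.uniform(0.0, 3.0)) for _ in range(20)]
        evt = filtered_event(toy_generate, lambda e: e)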

    The main background to the above is charged hadrons/mesons. In order to filter these we require that the particle has a charge of +/- 1, that it has the PDG ID of a hadron or a meson, that it has not decayed in the PYTHIA record, and that it fulfills the kinematic requirements.

    while (no high pT jet in the endcap)
        Call the original PYEVNT
        Loop over the jets at the end of the PYTHIA record
            if (not stable) continue
            if (charge not equal +/- 1) continue
            if (not hadron or meson) continue
            Calculate pT
            Calculate eta
            if (pT < 15) continue
            if (eta < 1) continue
            if (eta > 2) continue
            return
    Call PYLIST

    Once pyevnt.F has been successfully modified it must be compiled with cons, and the path to the compiled library itself must be explicitly set in the kumac. For example,

    gexec $STAR_LIB/libpythia_6410.so

    must be replaced by

    gexec /star/u/username/path_to_directory/.slxx_gccxxxx/lib/name_of_new_pythia_library.so

    Examples of modified pyevnt.F files for both the electron example and the jet example are attached, as is a kumac for use with the jet example.

    List of available M-C event samples (Jan)

    .

    M-C Event Samples Available   

    1. General  setup, assumptions
      1. use UPGR13 geometry
      2. run in SL07e
      3. use PPV vertex finder in reco
      4. displace & smear vertex in GEANT in X-Y plane: x0=+1mm, sig=200um, y0=-2mm, sig=200um (except setA)
      5. beam energy sqrt(s)=500 GeV
      6. detectors passed by tracks thrown at various eta at 3 Z-vertex locations of -60, -30 and 0 cm.


    2. setA (produced by Mike @ MIT)
    3. BFC w/ generic vertex finder has ~50% efficiency,

      Vertex distribution: Gauss(Z=0, sigZ=60cm, Z=Y=0, sigX=sigY=0)

       Type of events   total events time per event
       Pythia, minB  400K, 100 jobs  GSTAR: 8.4 sec, BFC ???

      Files location: 80K eve at IUCF disk: /star/institutions/iucf/kocolosk/2008-02-15-fgt-hadron-background

      full sample at MIT ...



       

    4. setB (pilot sample by Jan @ RCF)
    5. Filtering of Pythia events: seed cell =10GeV ET, pair of cells>20 GeV ET,

      particle eta range [0.8,2.2], grid cell size: 0.14 in eta, 9 deg in phi

      BFC w/ PPV vertex finder has ~95% efficiency, SSD, STI not used in tracking

      Vertex distribution: Gauss(Z=-60, sigZ=5cm, Z,Y offset)


      Pythia event generator, 

      BFC chain: "DbV20080310 trs -ssd upgr13 Idst IAna l0 tpcI fcf -ftpc Tree logger ITTF Sti StiRnd -IstIT -SvtIt -NoSvtIt SvtCL,svtDb -SsdIt MakeEvent McEvent geant evout geantout IdTruth bbcSim emcY2 EEfs bigbig -dstout fzin -MiniMcMk McEvOut clearmem -ctbMatchVtx VFPPV eemcDb beamLine"

       Type of events | Pythia filter | total events | time per event | job name | file size, MB
       W-events (kumac) | 1/6.5 | 1K, 1 job | GSTAR 10 sec, BFC 5.3 sec | wElec4 | fzd=149, StEvent=132, geant=154, McEvent=132, muDst=14
       W-events, filter OFF | 1:1 | 5K, 1 job | - | wElec5 | -
       QCD-events (kumac), pt=20-30 GeV | 1/40 | 1K, 1 job | GSTAR 11 sec, BFC 6.7 sec | qcd2 | fzd=203, StEvent=157, geant=212, McEvent=172, muDst=17
       QCD-events, filter OFF | 1:1 | 5K, 1 job | - | qcd3 | -
       QCD-events, scan pt 10...90 GeV | varies | 100 eve, 1 job | - | qcd_pt_xx_yy | files at ...balewski/2008-FGT-simu/setB-pt-scan/

      Files location:  /star/institutions/iucf/balewski/2008-FGT-simu/setB-pilot/


      Custom code : Pythia, Generic Vertex finder, same location

       

    6.  setC - Pythia macros inspected by Jim: ppQCDprod.kumac & ppWprod.kumac
      Filtering of Pythia events: seed cell =10GeV ET, pair of cells>20 GeV ET,
      particle eta range [0.8,2.2], grid cell size: 0.14 in eta, 9 deg in phi

       
      * setC1: /star/institutions/iucf/balewski/2008-FGT-simu/setC2-pt-0.2inv_pb

      This is LT balanced for 0.2/pb, using your ppQCDprod.kumac

      pt1 pt2 neve
      10    ,15    ,373
      15    ,20    ,1252
      20    ,25    ,2516
      25    ,30    ,3367
      30    ,35    ,2330
      35    ,40    ,1015
      40    ,45    ,705
      45    ,50    ,292
      50    ,55    ,114
      55    ,60    ,67
      60    ,65    ,28
      65    ,70    ,13
      70    ,75    ,8
      75    ,80    ,4
      80    ,85    ,2
      85    ,90    ,1
      90    ,95    ,1
      

       


      * setC2 (only filtered events of various types)

       

       


      * setC3 (filtered vs. not filtered events)

       

       

      * setC4 ( filtered QCD events w/ various partonic PT )

    7. setC5 : QCD events , 3-stage filtering, LT~100/pb

     

    evaluation of Pythia Filter for QCD & W events (setC3)

     

    Based on setC3 the number of events with reco EM cluster ET>20 GeV is similar for the filtered and unfiltered pythia samples after LT correction - compare yellow & green areas: 

     

    ------------ From Brian -------

    Hi Jan,

    I processed the events in setC3 and I put the plots into the
    fgt-hn-contributions drupal page. (All plots show unfiltered events on
    the left and filtered events on the right).

    Transverse energy spectrum for the QCD events:
    http://drupal.star.bnl.gov/STAR/system/files/ETspectrumQCD.png

    Number of events passing ET>20GeV and ET>20GeV + Eta < 1.7 for QCD
    events: http://drupal.star.bnl.gov/STAR/system/files/OPcountsQCD.png

    Transverse energy spectrum for the W events:
    http://drupal.star.bnl.gov/STAR/system/files/ETspectrumW.png

    Number of events passing ET>20GeV and ET>20GeV + Eta < 1.7 for W events:
    http://drupal.star.bnl.gov/STAR/system/files/OPcountsW.png

    Brian

     

    PT scan of Pythia-filter rejection for QCD events ( Jan)

    Pythia Filter: cell > 10 GeV ET, cluster > 20 GeV ET, grid covers Endcap Pythia events

    Note, 'live' spreadsheet is attached at the bottom

     

    Examples of event filtering for some choices of partonic PT ranges:

    pt range event counter and pt spectra eta-phi distributions, all pythia tracks and seed distribution

    partonic PT 15-20 GeV

    Filter=1/760

     

     

    partonic PT 40-45 GeV

    Filter=1/3.6

     

     

     

    partonic PT 75-80 GeV

    Filter=1/4.6

     

     

     

     

    Practical Pythia Event Filter (Jan)

    Filtering of Pythia events preserving those which may fire HT trigger in the Endcap.

    Motivation
    In order to develop an efficient e/h discrimination algorithm for reconstruction of electrons from Ws, it is necessary to generate a sizable sample of QCD physics background events, with 1e5 or more events triggering the Endcap HT > 10 GeV. The brute-force approach (run PYTHIA+GSTAR+BFC long enough) has been investigated but may not succeed due to the low yield of events of interest.

    Therefore, we are working on on-the-fly filtering of Pythia events. It must be more complex than accepting events with a single Pythia track above a certain threshold, because multiple tracks from a jet (also hadrons) may deposit in a single tower a cumulative energy exceeding the HT threshold even if every individual track is below that threshold.

    The tricky part is to decide whether the EEMC response may be large before GSTAR performs the very time-consuming simulation of EM and hadronic showers for the whole event.


    Proposed Method (2-stage Pythia filtering)

    • Define: eta range [0.9,2.1], partonPTthres of 5 GeV, jetPTthres of 5 GeV (to be tuned later)
    • Inspect lines 7 & 8 of the Pythia record (counting from 1) and drop the event if neither parton is within the eta range and above the parton PT threshold.
      For surviving events:
    • Define a 2D eta-phi grid covering eta [0.9,2.1] and 2pi in phi. Divide it into cells of size 0.1x0.1. Clear the grid for every event.
    • Loop over Pythia record entries 9..max and drop: partons, gluons, unstable particles, neutrinos, muons.
      Retain all stable hadrons, e+, e- within the eta range and project each particle's ET onto the eta-phi grid.
    • Find seeds: all cells with ET above 1/2 of jetThreshold.
    • For every seed, form 8 pairs using each of its neighbors. If any pair's ET sum is above jetThreshold the event is accepted and passed to GSTAR and eventually to the BFC (a sketch of this grid/seed step follows the comments below).

    Additional comments:
    * for Barrel eta range should be [-0.2,1.2] and cell size 0.05x0.05 in eta x phi.
    * the code should be written in F77.
    * at any step priority should be given to processing speed vs. program size. E.g. use lookup tables when reasonable.
    * provide a few QA histos generated with Pythia, using HBOOK + a kumac to display them
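
    To make the grid/seed step above concrete, here is a short Python sketch of the second filtering stage (for illustration only; per the comments above the production code is to be written in F77, and the pre-selected particle list passed in here is a hypothetical input):

        import math

        ETA_MIN, ETA_MAX = 0.9, 2.1
        CELL = 0.1                                       # cell size in eta and (approx.) phi
        N_ETA = int(round((ETA_MAX - ETA_MIN) / CELL))   # 12 eta cells
        N_PHI = int(math.ceil(2 * math.pi / CELL))       # phi cells covering 2*pi

        def accept_event(particles, jet_et_thres=5.0):
            """particles: iterable of (et, eta, phi) for the retained stable
            hadrons/e+/e- from the Pythia record.  Returns True if a seed cell
            (ET > jet_et_thres/2) plus any one of its 8 neighbours exceeds jet_et_thres."""
            grid = [[0.0] * N_PHI for _ in range(N_ETA)]

            # project each particle's ET onto the eta-phi grid
            for et, eta, phi in particles:
                if not (ETA_MIN <= eta < ETA_MAX):
                    continue
                ie = int((eta - ETA_MIN) / CELL)
                ip = int((phi % (2 * math.pi)) / CELL) % N_PHI
                grid[ie][ip] += et

            # seeds: cells above half of the jet threshold; pair each with its 8 neighbours
            for ie in range(N_ETA):
                for ip in range(N_PHI):
                    if grid[ie][ip] < 0.5 * jet_et_thres:
                        continue
                    for de in (-1, 0, 1):
                        for dp in (-1, 0, 1):
                            if de == 0 and dp == 0:
                                continue
                            je, jp = ie + de, (ip + dp) % N_PHI   # phi wraps around
                            if 0 <= je < N_ETA and grid[ie][ip] + grid[je][jp] > jet_et_thres:
                                return True   # event accepted; pass to GSTAR / BFC
            return False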


    Content of Pythia record

    1 proton 1
    2 proton 2
    3 parton from proton1
    4 parton from proton2
    5 parton from proton 1 after initial state radiation
    6 parton from proton 2 after initial state radiation
    7 parton 1 after scatter
    8 parton 2 after scatter
    9 ... intermediate and final partons/particles
    ....
    

     

    evaluation of Bates minB sample (Jan)

    Evaluation of the minB event sample produced by Mike at Bates in January of 2008.

    Events characteristic: Pythia MinB events, sqrt(s)=500 GeV, vertex Gauss(0,60cm)

    Below you see the detectors passed by the e+/e- surrogate:

      1st pi-,pi+ with pT>2.0 GeV and eta>0.7

    FGT disks are located at Z of 70,80,...,110 cm - our best guess location.

    3 event samples were chosen based on the GEANT vertex location of -60, -30, and 0 cm, with a margin of +/-10 cm, which leads to smeared FGT disks.

     

    Optimization of FGT disk location


     Several versions of FGT disk geometries have been studied in 2007

    as documented  here (disk-based HTML documentation)


     

    Z disk location accuracy

    Propagation of the Z hit location inaccuracy onto the error of the predicted Rxy location of the Endcap EM shower.

     
    Slide 1.  Every FGT disk is mounted only at  only 2 points to the Alu rail r=6mm, disks have R=38 cm, so the leverage arm is 1:63.
    The tension from individual (stiff) cables located also at 90 deg vs. Alu rods  could cause a tilt of disks ( during cables assembly or sliding the 6-disk unit to its final position at STAR).
     
    Slide 2.  An example of FGT-only reconstructed track. The inset shows there is an intrinsic fluctuation of primary ionization over 3mm in Z, we can't compensate off-line. This sets the scale for accuracy Z-alignment of the disks.
      
    Slide 3 shows the systematic displacement of the predicted EM shower radial distance at the Endcap due to an extreme ionization spread in 2 FGT disks. In reconstruction the mean Z location of the FGT disks is used. The predicted radial distance of the reco tracks at Z=SMD can be off by up to 1.2 cm.
    This is not that small, but since we will impose an isolation cut in the EEMC, there should be no other significant energy deposit in the calorimeter at much larger radii. Note, the EM shower radius is ~3 cm, but we claim its center will be reconstructed with 1 mm accuracy.
     



     

     

    geometry in MC

    Documented evolution of the implementation of the FGT geometry in GSTAR, started in May 2008

    Our studies are listed below as child pages.

    Here are links to other studies

     

    1 content of UPGR15, May 2008

     Selected cross sections of UPGR15 geometry, as of May 2008

    Fig 1

    Fig 2

    Fig 3

    Fig 4

    Fig 5


    Fig 5 Realistic geometry from Jim K. model, May 2008

    2 UPGR15+FGT as of May 2008

     UPGR15 geometry was modified to match best guess of FGT geometry as of May 2008.

    The active FGT area is : Z1=70,..., Z6=120cm, DZ=10 cm, Rin=11.5cm, Rout=37.5 cm 
    There is 1cm of additional dead material at Rin & Rout

     


    Fig 1. Zvertex=0 cm, green = eta [1.06,2.0], red = eta [2.5,4.0]

     

     


    Fig 2. Zvertex=+30 cm, green = eta [1.06,2.0], red = eta [2.5,4.0]

     

     


    Fig 3. Zvertex=-30 cm, green = eta [1.06,2.0], red = eta [2.5,4.0]

     


    Fig 4. Zoom in on 1 FGT disk. The detected particle enters from the left, the 'active' gas volume has a depth of 3 mm (between the magenta and blue lines), and the FGT strips collect charge on the 1st green line.
    (units in cm)
    Zstart = 70.0 ! starting position along Z axis
    Z = { 0.0, 10.0, 20.0, 30.0, 40.0, 50.0} ! Z positions for GEM front face
    FThk = { 0.05, 0.05, 0.05, 0.05 } ! foil thicknesses inside GEM
    SThk = { 0.3, 0.2, 0.2, 0.2 } ! support/spacing thicknesses
    SR = 1.0 ! radial size for support

    USED material:

     * use aluminized mylar mixture instead of kapton
          Component C5  A=12    Z=6  W=5
          Component H4  A=1     Z=1  W=4
          Component O2  A=16    Z=8  W=2
          Component Al  A=27    Z=13 W=0.2302
          Mixture  ALKAP  Dens=1.432
     
    *     G10 is about 60% SiO2 and 40% epoxy
          Component Si  A=28.08  Z=14   W=0.6*1*28./60.
          Component O   A=16     Z=8    W=0.6*2*16./60.
          Component C   A=12     Z=6    W=0.4*8*12./174.
          Component H   A=1      Z=1    W=0.4*14*1./174.
          Component O   A=16     Z=8    W=0.4*4*16./174.
          Mixture   G10    Dens=1.7
     
    Block FGDO is the mother volume of the individual GEM disks
          Component Ar A=39.95   Z=18.   W=0.9
          Component C  A=12.01   Z=6.    W=0.1*1*12.01/44.01
          Component O  A=16.     Z=8.    W=0.1*2*16./44.01
          Mixture   Ar_mix  Dens=0.0018015
     
    Block FGFO describes the GEM foils
          Material ALKAP
    Block FGIS describes the inner support or spacer
          Material G10

    Fig 4b FGT disk front view in Geant


     

    Fig 5. 1st FGT disk by Jim K. as of April, 2008

     


    Fig 6. 3 FGT disks by Jim K. as of April, 2008  


    Fig 7 Realistic geometry from Jim K. as of May, 2008


    Fig 8 Disk material budget, from Doug, as of May, 2008


    Fig 9 APV location , from Doug, as of May, 2008


     

     

    3 new FGT geometry

    Modified FGT geometry, June 2008

     Detailed description (PDF) , ver 1

    4 compare geom 2007 vs. UPGR16

     

    Green dashed lines at eta = 1.0, 1.06, 2.0

    red lines at eta=2.5, 4.0


     geom=2007

     


    UPGR16, geom=2010

     


     

    Speculative FGT++

    another 6 disks are added at the following Z:

     Zstart  =   62.98  ! starting position along Z axis

     Z       = { 5.4, 15.4, 25.4, 35.4, 45.4, 55.4, 75., 90., 105., 120., 135., 150.}  ! Z positions of GEM front face

    Green dashed lines at eta = 1.0, 1.09, 2.0

    red lines at eta=2.5, 4.0

     

    5 FGT cables in Geant

     Notes,

     



    If I want to know the total area for one FGT disk what is the proper multiplier : 4, 28, or 24*28 ?

    Within the cable, multipliers for the individual "subcomponents" are in column L.

    Then, there are overall 4 cables per FGT disk - column B tells you the number of cables and columns I-J tell you where they go.

    In other words, for instance, the overall total copper area in the FGT power cables is
    24*(7*5.176E-3 + 1*3.255E-3) = 0.948 cm^2.

    I know you asked to break it out with one "line" per cable route - we can do this later, but for now there are already a lot of lines... I'll leave them grouped like this and you should look at columns I-J to decide the lengths and where they go.

    By the way, I imagine the "patch" between the cone/FGT cables and the "external" cables lying on the TPC endwheel, through the services gap, to the crates, occurs somewhere just outside the cone, within the first foot or so.


    6 radiation length study for UPGR16 + SSD

    Study of the dead material in front and behind FGT.

    3 versions of GEANT geometries were investigated:

    • UPGR16 + current SSD w/ current cables
    • UPGR16 w/ 'light' SSD (Alu support structure replaced by carbon, Cu cables replaced by Alu)
    • UPGR16 without the SSD or SSD cables

    The plot below is just an example of the material budget using the current SSD.

     Many more plots are in attached PDF, in particular figs 2a-c, 3a-c, 4a-c.

    7 PR track plots with UPGR16 & fixed barrel

    Geometry= UPGR16, 6 FGT disks , fixed barrel geometry.

    Single electrons, 20 GeV ET, thrown at eta=0, 0.4, 0.8, 1.2, 1.6, 2.0


    Fig 1, Z vertex=0

     


    Fig 2, Z vertex=+30 cm

     


    Fig 3, Z vertex=-30 cm

     

     

    FGT hits fired by backward tracks (Wei-Ming Zhang)

     

    1. MC tracks (eta < -1.3) are thrown backwards and FGT hits are found to be fired.

    Fig. 1, An event with FGT hits fired by backward tracks (UPGR16)

    2. Investigation with tracks (1.3 < |eta| < 3.1) from MC events of RQMD Au-Au 10 GeV

    Fig. 2

    There are two rows in Fig. 2. Each row has three plots: the left is the track multiplicity of events, the middle the pt of tracks, and the right the dE of FGT hits in keV. The top row is plotted for backward tracks which fire the FGT and for the fired FGT hits (backward hits). The first two plots of the bottom row are for all backward tracks. The right plot of the bottom row is a dE spectrum of FGT hits fired by FORWARD tracks (forward hits). From Fig. 2 we see
      1. a low level of backward hits (mult_0/mult_1 = 200, shown in the top left plot)
      2. a relatively large energy loss of backward hits, which is 2-3 times larger than that of forward hits.
         This suggests backward tracks which fire the FGT have very low speed and deposit more energy than forward MIPs.

    Based on the above, we believe that backward hits in the FGT come from multiple scattering.
     

    Fig. 3, Split spectra of the top row of Fig. 2 for individual FGT disks: disk1 (top), disk2 (middle), and disk3 (bottom)
     

     

    Fig. 4, Split spectra of the top row of Fig. 2 for individual FGT disks: disk4 (top), disk5 (middle), and disk6 (bottom)
     

     

     

     

     

     

     

     

    FGT. . . . . . . . . H A R D W A R E -- D E T E C T O R

    1. FGT foil
    2. FGT disk
    3. FGT strip design, gerber screen dumps, 2010-12-20
    4. 6-disk assembly
    5. integration w/ STAR
      date         | revision | format | component
      March 2008 ? | revXXX?  | PDF    | west support cylinder
      March 2008 ? | 4.0 ?    | PDF    | mounting of FGT foil quadrant

     

    Electrostatic Calculations for West Support Cone

     Hi all;
     I've put some results from the latest electrostatic
    calculations for the WSC at:
    http://hepwww.physics.yale.edu/star/upgrades/WSC-D.ppt
    http://hepwww.physics.yale.edu/star/upgrades/WSC-D.pdf
    The first page shows the key elements of the model and
    the changes from previous versions:
    - All edges of the WSC are radius'ed with 2 cm
    - The resistor chain is now from Jan's "good" drawing
    - The resistor chain is not rotated 16 degrees from
       top - this makes it easier to navigate the "region
       of interest" for studying the high field region.
    This page also shows the orientation of the coordinate
    system.
    The next six pages are pairs of field plots.  Each pair
    shows the magnitude of the electric field in a slice
    normal to the Z (X, Y) axis.  The slice is taken at
    the location of the maximum field.  The second of each
    pair of plots is just a zoom in on the hot spot.
    The last plot shows that maximum field in an X-slice
    vs. X.  If people have questions or suggestions, we
    can discuss this in tomorrow's meeting.
     Best regards,
      Dick Majka

     

    Electrostatic Calculations for West Support Cone

    Calculations from 24-July-2008 with "corrected" resistor chain geometry.

    Results compared using a 1 mm mesh and a 2 mm mesh

    FGT Thermo-model D=16cm

    Photo documentation of Thermo-model A of FGT, 2009-08-10


    Consistent with measurements posted by James on 2009-07-28.

    Gas system assembly drawings

     

    Please note the following attachments: 

    1. Gas system assembly drawings as of 5 February 2011.
    2. Gas system assembly drawings for cosmic ray tests in clean room at BNL: Safety review on 29 April 2011.
    3. Photo: front of gas system control panel for cosmic ray test - as of 10 May 2011.
    4. Photo: rear of gas system control panel for cosmic ray test - as of 10 May 2011 

    Donald Koetke

    donald.koetke@valpo.edu

    drawings for the FGT full-size prototype frames, HV layer, and GEM foils, August 11, 2008

      These are the current drawings for the FGT full-size prototype frames, HV layer, and GEM foils.  
                                                       Cheers,
                                                               Douglas

     

    general routing for the top and bottom GEM, August 16,

    These Gerber files show the general routing for the top and bottom GEM foil layers plus a representative sample of the GEM foil holes showing the spacing from frame edges and segment boundaries.

     

    FGT . . . . . . . . . . A N A L Y S I S

    see below

     

    Estimation of error of A_L including detector response, May 2008 (Jan)

     This set of analysis was done in preparation to PAC presentation at BNL, May 2008, ( Jan)

    study 1 of S/B, A_L, LT=800/pb (Jan)


    A_L on this page has the WRONG sign; I did not know RHICBOS uses a different sign convention


    Section A



    Definitions:

    S1, S2 - "signal" yields for the 2 spin states (helicities) "1", "2"; S1+S2=S

    B1, B2 - "background" yields for the 2 spin states "1", "2"; B1+B2=B

    N1=S1+B1;  N2=S2+B2;  N1+N2=N=S+B - "raw" yields measured in the real experiment

     

    Assumptions:

    • background is unpolarized, so B1=B2=B/2
    • there are 2 independent experiments:
    1. measure the spin-independent yields for signal counts 's' and background counts 'b', yielding the fraction w=b/s, with V(w)=w^2*(1/b+1/s) (e.g. M-C Pythia simulation using W and QCD events)
    2. measure the helicity-dependent raw yields N1=S1+B1 and N2=S2+B2 (e.g. theory calculation with a specific assumption of AL(W+), fixed eta & pT ranges, assumed LT & W reco efficiency), yielding the raw spin asymmetry:

    ALraw = (N1-N2)/(N1+N2) = (S1-S2)/(S+B);   V(ALraw) = 4*N1*N2/N^3

    I used capital & small letters to distinguish these two experiments.

     

    Problem: find statistical error of 'signal' asymmetry:

    ALsig= (S1-S2)/S

     

    Solution:

    1. ALsig = (1+w) * ALraw
    2. V(ALsig) = (1+w)^2 * V(ALraw) + V(w) * (ALraw)^2, where V(x) denotes the variance of x and V(N)=N.
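
    A compact numerical sketch of this propagation (plain C++; function and variable names are illustrative). It applies V(ALraw) = 4*N1*N2/N^3 from above together with the two formulas of the Solution; the numbers in main() are made up for illustration only:

        #include <cmath>
        #include <cstdio>

        // ALsig = (1+w)*ALraw with ALraw = (N1-N2)/(N1+N2),
        // V(ALraw) = 4*N1*N2/N^3, and V(w) supplied by the b/s measurement.
        void signalAsymmetry(double N1, double N2, double w, double Vw,
                             double& ALsig, double& errALsig)
        {
            double N      = N1 + N2;
            double ALraw  = (N1 - N2) / N;
            double VALraw = 4.0 * N1 * N2 / (N * N * N);
            ALsig = (1.0 + w) * ALraw;
            double VALsig = (1.0 + w) * (1.0 + w) * VALraw + Vw * ALraw * ALraw;
            errALsig = std::sqrt(VALsig);
        }

        int main()
        {
            // made-up numbers: w = b/s = 0.8 known to 10%, so V(w) = (0.08)^2
            double ALsig = 0.0, err = 0.0;
            signalAsymmetry(1200.0, 1000.0, 0.8, 0.08 * 0.08, ALsig, err);
            std::printf("ALsig = %.3f +/- %.3f\n", ALsig, err);
            return 0;
        }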

     



      Section B - theory



    Model predictions of A_L for W+, W- , used files:

     

    rb800.w+pola_grsv00_2.root,  rb800.w-pola_grsv00_2.root, rb800.w+unp_ct5m.root       rb800.w-unp_ct5m.root

    LT=800/pb, 

    Brown oval shows approximate coverage of IST+FGT

    Red diamond shows region with max A_L and ... little yield. 

     



    Section C - W reco efficiency, fixed ET using fact=1.25



     From Brian, e/h algo ver 2.4, LT=800/pb.

    This version uses only tower seeds in bins 6-11; this is the main reason the efficiency is ~40%.

    In further calculations I'll assume the W-reco efficiency is 70%, flat in lepton PT>20 GeV.

    Left: W-yield black=input, green - after cut 15.

    Right: ratio. h->Smooth() was used for some histos. 

    PT=20.6  sum1=  1223  
    PT=25.6  sum1=  1251  sum2= 473 att=1/2.6
    PT=30.6  sum1=   987  sum2= 406 att=1/2.4
    PT=35.6  sum1=   771  sum2= 343 att=1/2.2
    PT=40.6  sum1=   372  sum2= 166 att=1/2.2
    PT=45.6  sum1=    74  sum2=  16 att=1/4.6
    


    Section D - QCD reco efficiency,fixed ET using fact=1.25



     From Brian, e/h algo ver 2.4, LT=800/pb. h->Smooth() was used for some histos. 

    This version uses only tower seed in bin 6-11 

    Left: QCD-yield black=input, green - after cut 15.

    Right: ratio = QCD attenuation; note the algo gets ~3x 'weaker' at PT=[34-37] GeV, exactly where we need it the most

     

    PT averaged attenuation of QCD events

    PT=20.6  sum1=2122517   
    PT=25.6  sum1=528917  sum2=3992 att=1/133
    PT=30.6  sum1=135252  sum2= 736 att=1/184
    PT=35.6  sum1= 38226  sum2= 320 att=1/120
    PT=40.6  sum1= 11292  sum2= 127 att=1/89 
    PT=45.6  sum1=  3153  sum2=  41 att=1/77 
    

     



    Section E -  QCD/W  ratio after e/h cuts, algo ver 2.4,fixed ET using fact=1.25



    From Brian. h->Smooth() was used for some histos. 

    Left: final yield of QCD events (blue) and W-events (green) after e/h algo.

    Right: ratio. 

    I'll assume w=b/s is better than the red line, a continuous ET dependence:

    w(pt=20)=10
    w(pt=25)=1.
    w(pt=40)=0.5 

     

    PT=25.6  sum1=  3992  sum2= 473 att=1/8.4
    PT=30.6  sum1=   736  sum2= 406 att=1/1.8
    PT=35.6  sum1=   320  sum2= 343 att=1/0.9
    PT=40.6  sum1=   127  sum2= 166 att=1/0.8
    PT=45.6  sum1=    41  sum2=  16 att=1/2.5
     

     

    study 2 , err(AL) , eta=[1,2], LT=800/pb (Jan)

    A_L on this page has the WRONG sign; I did not know RHICBOS uses a different sign convention. JAN

     


    Section A) Theoretical calculations:

             Assumed LT=800/pb

    fpol=new TFile("rb800.w+pola_grsv00_2.root");   <--GRSV-VAL (maximal W polarization)
    funp=new TFile("rb800.w+unp_ct5m.root"); 

    histo 215

    The total W+ yield for lepton ET [20,45] GeV and eta [1,2] is 7101 for the unpolarized cross section and -2556 for the helicity-dependent part.

    Assuming 70% beam polarization measured spin dependent asymmetry:

    eps_L= P* del/sum= -0.25 +/-0.012

    (assuming err(eps)=1/sqrt(sum) )

     Fig 1 , W+ : top row - unpol & pol cross section GRSV-VAL (maximal W polarization),

    bottom left: integrated over eta, black=unpol, red=pol

    bottom right: asy=P *pol/unpol vs. lepton PT, green=fit


    The total W- yield for lepton ET [20,45] GeV and eta [1,2] is 5574 for the unpolarized cross section and +2588 for the helicity-dependent part.

    fpol=new TFile("rb800.w-pola_grsv00_2.root"); 
    funp=new TFile("rb800.w-unp_ct5m.root"); 

    histo 215

    Assuming 70% beam polarization measured spin dependent asymmetry:

    eps_L= P* del/sum= +0.325 +/-0.013

    Fig 2.  W- GRSV-VAL (maximal W polarization)

     



    Section  B) Folding in e+,e- reconstruction and QCD background

    Assumptions:

    1. LT=800/pb
    2. beam pol P=0.7
    3. e+,e- reco efficiency is 70%, no PT dependence
    4. QCD background to W contamination, after the e/h algo, no spin dependence
    5. lepton PT range w=backg/signal
      20-25 GeV 5.0 +/- 10%
      25-30 GeV 1.0  +/- 10%
      30-35 GeV 0.8 +/- 10%
      35-40 GeV 0.7 +/- 10%
      40-45 GeV 0.6 +/- 10%

     

    Formulas:

    • Theory yields: unpol = sig0(PT) & pol = del(PT); S1=(sig0+del)/2, S2=(sig0-del)/2
    • Measured yields N1, N2, for the 2 helicity states (a numerical sketch follows below):
      • N1(PT) = eff*[sig0 + del + sig0*w]/2
      • N2(PT) = eff*[sig0 - del + sig0*w]/2
      • ALraw(PT) = P*(N1-N2)/(N1+N2)
      • V(ALraw(PT)) = 1/(N1+N2)  <-- variance
      • ALsig(PT) = (1+w(PT))*ALraw(PT)
      • V(ALsig) = (1+w)^2*V(ALraw) + V(w)*(ALraw)^2
      • dil = 1+w
      • QA = |ALsig/err(ALsig)| - must be above 3 for a meaningful result
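
    A numerical sketch of the per-PT-bin recipe above (plain C++; names are illustrative, not the actual macro). Plugging in the first ET bin of Fig 3 for W+ (unpol=1826.5, pol=-327.3, eff=0.7, P=0.7, B/S=5.0 +/- 10%) should reproduce, up to rounding, the quoted N1=3721, N2=3950, ALraw=-0.021 +/- 0.011 and ALsig=-0.125 +/- 0.069:

        #include <cmath>
        #include <cstdio>

        // Per-PT-bin recipe: theory yields sig0 (unpol) and del (pol), W reco
        // efficiency eff, beam polarization P, background fraction w = B/S with
        // relative uncertainty relErrW.
        void recoBin(double sig0, double del, double eff, double P,
                     double w, double relErrW)
        {
            double N1 = eff * (sig0 + del + sig0 * w) / 2.0;   // raw yield, helicity 1
            double N2 = eff * (sig0 - del + sig0 * w) / 2.0;   // raw yield, helicity 2
            double ALraw    = P * (N1 - N2) / (N1 + N2);
            double errALraw = std::sqrt(1.0 / (N1 + N2));
            double dil      = 1.0 + w;
            double Vw       = (relErrW * w) * (relErrW * w);
            double ALsig    = dil * ALraw;
            double errALsig = std::sqrt(dil * dil * errALraw * errALraw
                                        + Vw * ALraw * ALraw);
            std::printf("N1=%.0f N2=%.0f ALraw=%.3f +/- %.3f dil=%.2f "
                        "ALsig=%.3f +/- %.3f QA=%.1f\n",
                        N1, N2, ALraw, errALraw, dil, ALsig, errALsig,
                        std::fabs(ALsig / errALsig));
        }

        int main()
        {
            // first ET bin of Fig 3 (W+, GRSV-VAL): ipt=0, y-bins=[41,50]
            recoBin(1826.5, -327.3, 0.7, 0.7, 5.0, 0.10);
            return 0;
        }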

    Fig 3,  Results for W + GRSV-VAL (maximal W polarization)

     Left : N1(PT)=red, N2(PT) blue. Right: reconstructed signal AL

    ipt=0  y-bins=[41,50] unpol=1826.5 pol=-327.3  AL=-0.125 +/- 0.0234   QA=1.8
       B2S=5.0 , N1=3721 N2=3950 ALraw=-0.021 +/- 0.011, dil=6.00  ALsig=-0.125 +/- 0.069
    
    ipt=1  y-bins=[51,60] unpol=1403.0 pol=-265.8  AL=-0.133 +/- 0.0267   QA=2.9
       B2S=1.0 , N1=889 N2=1075 ALraw=-0.066 +/- 0.023, dil=2.00  ALsig=-0.133 +/- 0.046
    
    ipt=2  y-bins=[61,70] unpol=1233.7 pol=-384.7  AL=-0.218 +/- 0.0285   QA=4.7
       B2S=0.8 , N1=643 N2=912 ALraw=-0.121 +/- 0.025, dil=1.80  ALsig=-0.218 +/- 0.047
    
    ipt=3  y-bins=[71,80] unpol=1811.9 pol=-1041.9  AL=-0.403 +/- 0.0235   QA=10.0
       B2S=0.7 , N1=713 N2=1443 ALraw=-0.237 +/- 0.022, dil=1.70  ALsig=-0.403 +/- 0.040
    
    ipt=4  y-bins=[81,90] unpol=808.8 pol=-525.2  AL=-0.455 +/- 0.0352   QA=8.1
       B2S=0.6 , N1=269 N2=637 ALraw=-0.284 +/- 0.033, dil=1.60  ALsig=-0.455 +/- 0.056
    
    sum2=7084.000000 sum3=-2544.850098 asy=-0.251
    

    Fig 4,  Results for W - GRSV-VAL (maximal W polarization)

     

    ipt=0  y-bins=[41,50] unpol=1239.1 pol=490.8  AL=0.277 +/- 0.0284   QA=3.2
       B2S=5.0 , N1=2774 N2=2430 ALraw=0.046 +/- 0.014, dil=6.00  ALsig=0.277 +/- 0.086
    
    ipt=1  y-bins=[51,60] unpol=1452.7 pol=641.2  AL=0.309 +/- 0.0262   QA=6.6
       B2S=1.0 , N1=1241 N2=792 ALraw=0.154 +/- 0.022, dil=2.00  ALsig=0.309 +/- 0.047
    
    ipt=2  y-bins=[61,70] unpol=1426.5 pol=689.9  AL=0.339 +/- 0.0265   QA=7.5
       B2S=0.8 , N1=1140 N2=657 ALraw=0.188 +/- 0.024, dil=1.80  ALsig=0.339 +/- 0.045
    
    ipt=3  y-bins=[71,80] unpol=1135.3 pol=596.1  AL=0.368 +/- 0.0297   QA=7.6
       B2S=0.7 , N1=884 N2=467 ALraw=0.216 +/- 0.027, dil=1.70  ALsig=0.368 +/- 0.049
    
    ipt=4  y-bins=[81,90] unpol=313.8 pol=166.4  AL=0.371 +/- 0.0565   QA=4.3
       B2S=0.6 , N1=234 N2=117 ALraw=0.232 +/- 0.053, dil=1.60  ALsig=0.371 +/- 0.086
    
    sum2=5567.280273 sum3=2584.398926 asy=0.325
    

     

     

     





    Another set of results for W+, W- with 2 x worse B/S (the same PT dependence).

    Fig 5.  W+ GRSV-VAL (maximal W polarization)

     

    Fig 6.  W- GRSV-VAL (maximal W polarization)

     

    study 3 , cross check w/ FGT proposal, LT=800/pb (Jan)

     Cross check of my code vs. A_L from FGT proposal

    Input from RHICBOS , GRSV-VAL model:

        if(Wsign==1) {
            fpol=new TFile("rb800.w+pola_grsv00_2.root");
            funp=new TFile("rb800.w+unp_ct5m.root");
            WPM="W+ ";
        } else {
            fpol=new TFile("rb800.w-pola_grsv00_2.root");
            funp=new TFile("rb800.w-unp_ct5m.root");
            WPM="W- ";
        }

    The sign of the pol cross section from RHICBOS has the reversed convention; I have changed it to the Madison convention.

       hpol->Scale(-1.);

     

    Fig 1 W-

    from my macro, compare bottom right to blue from fig 2a
     


    Fig 2 W-, W+ from FGT proposal


    Fig 3 W+,

    from my macro, compare bottom right to blue from fig 2b

    study 4 , LT=300/pb, eta=+/-[1,2] , W+/- (Jan)

     Calculation of error of A_L for LT=300/pb,   for W± , eta=±[1,2],  PT>20 GeV/c

     

    • Input from RHICBOS, GRSV-STD model:
    • The sign of the pol cross section from RHICBOS has the reversed convention; I have changed it to the Madison convention.
      hpol->Scale(-1.);
    • beam Pol=70%
    • e+,e- reco efficiency is 70%, no PT dependence
    • QCD background to W signal (B/S) contamination, after the e/h algo, no spin dependence:
        ET range (EEMC 3x3 cluster) | assumed w = backg/signal | QCD event suppression needed for (B), W+ or W- *)
        20-25 GeV | 5.0 +/- 20% | -
        25-30 GeV | 1.0 +/- 20% | 1/539 or 1/520
        30-35 GeV | 0.8 +/- 20% | 1/196 or 1/169
        35-40 GeV | 0.7 +/- 20% | 1/43 or 1/69
        40-45 GeV | 0.6 +/- 20% | 1/33 or 1/86
        45-50 GeV | 0.5 +/- 20% | 1/119 or 1/289
        *) based on full Pythia+GSTAR+BFC simulations of QCD events
        (study 1 of S/B, A_L, LT=800/pb (Jan), section E), after a 3x3 EEMC cluster is found
    • eta of the lepton [1,2] (the polarized beam is heading toward the Endcap)

     

    Formulas:

    • Theory yields: unpol = sig0(PT) & pol = delL(PT); S1=(sig0+delL)/2, S2=(sig0-delL)/2
    • Measured yields N1, N2, for the 2 helicity states:
      • N1(PT) = eff*[sig0 + P*delL + sig0*w]/2
      • N2(PT) = eff*[sig0 - P*delL + sig0*w]/2
      • ALraw(PT) = (1/P)*(N1-N2)/(N1+N2)
      • V(ALraw(PT)) = (1/P^2)*1/(N1+N2)  <-- variance
      • ALsig(PT) = (1+w(PT))*ALraw(PT)
      • V(ALsig) = (1+w)^2*V(ALraw) + V(w)*(ALraw)^2
      • dil = 1+w
      • QA = |ALsig/err(ALsig)| - must be above 3 for a meaningful result

    Fig 1, W+, eta=[1,2] , ideal detector 


    Fig 2, W+, eta=[1,2] , 70% W effi+QCD backg 

     Table 1,  W+ , eta=[1,2] , LT=300/pb

    reco EMC ET (GeV) | reco W+ yield, helicity S1, S2 | reco W+ unpol yield | QCD Pythia accepted yield | assumed B/S | reco signal AL +/- err | reco AL/err | AL dilution 1+B/S | QCD yield w/ EMC cluster | needed QCD suppression *)
    20-25 | 242, 236 | 479 | 2397 | 5.0 | 0.019 +/- 0.160 | 0.1 | 6.00 | -      | -
    25-30 | 192, 176 | 368 | 368  | 1.0 | 0.062 +/- 0.105 | 0.6 | 2.00 | 198343 | 1/539
    30-35 | 190, 133 | 323 | 259  | 0.8 | 0.249 +/- 0.109 | 2.3 | 1.80 | 50719  | 1/196
    35-40 | 333, 141 | 475 | 332  | 0.7 | 0.576 +/- 0.098 | 5.9 | 1.70 | 14334  | 1/43
    40-45 | 151, 61  | 212 | 127  | 0.6 | 0.606 +/- 0.132 | 4.6 | 1.60 | 4234   | 1/33
    45-50 | 13, 6    | 19  | 9    | 0.5 | 0.457 +/- 0.393 | 1.2 | 1.50 | 1182   | 1/119

      *) after 3x3 EEMC cluster is found, to obtain B/S from column 5
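
    My reading of the bookkeeping (up to rounding): column 4 = assumed B/S (column 5) times the reco W unpol yield (column 3), and the needed suppression in column 10 = column 9 / column 4. A tiny check for the 30-35 GeV row:

        #include <cstdio>

        // Bookkeeping check for one table row (my reading, up to rounding):
        // col4 = B/S (col5) * reco W unpol yield (col3); col10 = col9 / col4.
        int main()
        {
            double wUnpol = 323.0, bOverS = 0.8, qcdWithCluster = 50719.0; // 30-35 GeV
            double qcdAccepted = bOverS * wUnpol;                          // ~259
            double suppression = qcdWithCluster / qcdAccepted;             // ~196
            std::printf("QCD accepted ~ %.0f, needed suppression ~ 1/%.0f\n",
                        qcdAccepted, suppression);
            return 0;
        }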



     

    Fig 3, W-, eta=[1,2] , ideal detector 


    Fig 4, W-, eta=[1,2] , 70% W effi+QCD backg 

    Table 2,  W- , eta=[1,2] , LT=300/pb

    reco EMC ET (GeV) | reco W- yield, helicity S1, S2 | reco W- unpol yield | QCD Pythia accepted yield | assumed B/S | reco signal AL +/- err | reco AL/err | AL dilution 1+B/S | QCD yield w/ EMC cluster | needed QCD suppression *)
    20-25 | 116, 208 | 325 | 1626 | 5.0 | -0.403 +/- 0.205 | 2.0 | 6.00 | -      | -
    25-30 | 133, 248 | 381 | 381  | 1.0 | -0.431 +/- 0.112 | 3.8 | 2.00 | 198343 | 520
    30-35 | 126, 247 | 374 | 299  | 0.8 | -0.461 +/- 0.107 | 4.3 | 1.80 | 50719  | 169
    35-40 | 97, 200  | 298 | 208  | 0.7 | -0.495 +/- 0.115 | 4.3 | 1.70 | 14334  | 69
    40-45 | 26, 55   | 82  | 49   | 0.6 | -0.506 +/- 0.203 | 2.5 | 1.60 | 4234   | 86
    45-50 | 2, 5     | 8   | 4    | 0.5 | -0.521 +/- 0.617 | 0.8 | 1.50 | 1182   | 293

      *) after 3x3 EEMC cluster is found, to obtain B/S from column 5

     



     

    Fig 5, W+, eta=[-2,-1] , ideal detector 


    Fig 6, W+, eta=[-2,-1] , 70% W effi+QCD backg 

    Table 3,  W+ , eta=[-2,-1] , LT=300/pb

    reco EMC ET (GeV) | reco W+ yield, helicity S1, S2 | reco W+ unpol yield | QCD Pythia accepted yield | assumed B/S | reco signal AL +/- err | reco AL/err | AL dilution 1+B/S | QCD yield w/ EMC cluster | needed QCD suppression *)
    20-25 | 316, 168 | 484 | 2424 | 5.0 | 0.436 +/- 0.175 | 2.5 | 6.00 | -      | -
    25-30 | 235, 137 | 373 | 373  | 1.0 | 0.375 +/- 0.111 | 3.4 | 2.00 | 198343 | 531
    30-35 | 189, 132 | 322 | 258  | 0.8 | 0.252 +/- 0.109 | 2.3 | 1.80 | 50719  | 196
    35-40 | 255, 223 | 479 | 335  | 0.7 | 0.094 +/- 0.085 | 1.1 | 1.70 | 14334  | 43
    40-45 | 109, 102 | 212 | 127  | 0.6 | 0.048 +/- 0.124 | 0.4 | 1.60 | 4234   | 33
    45-50 | 10, 9    | 19  | 9    | 0.5 | 0.027 +/- 0.393 | 0.1 | 1.50 | 1182   | 119

      *) after 3x3 EEMC cluster is found, to obtain B/S from column 5

     

     



     

    Fig 7, W-, eta=[-2,-1] , ideal detector 


    Fig 8, W-, eta=[-2,-1] , 70% W effi+QCD backg 

    Table 4,  W- , eta=[-2,-1] , LT=300/pb

    reco EMC ET (GeV) | reco W- yield, helicity S1, S2 | reco W- unpol yield | QCD Pythia accepted yield | assumed B/S | reco signal AL +/- err | reco AL/err | AL dilution 1+B/S | QCD yield w/ EMC cluster | needed QCD suppression *)
    20-25 | 157, 151 | 308 | 1544 | 5.0 | 0.024 +/- 0.199 | 0.1 | 6.00 | 795943 | 515
    25-30 | 187, 180 | 367 | 367  | 1.0 | 0.029 +/- 0.105 | 0.3 | 2.00 | 198343 | 540
    30-35 | 187, 179 | 367 | 293  | 0.8 | 0.033 +/- 0.100 | 0.3 | 1.80 | 50719  | 173
    35-40 | 151, 144 | 295 | 206  | 0.7 | 0.033 +/- 0.108 | 0.3 | 1.70 | 14334  | 69
    40-45 | 41, 40   | 81  | 49   | 0.6 | 0.026 +/- 0.200 | 0.1 | 1.60 | 4234   | 86
    45-50 | 3, 3     | 7   | 3    | 0.5 | 0.012 +/- 0.624 | 0.0 | 1.50 | 1182   | 301

      *) after 3x3 EEMC cluster is found, to obtain B/S from column 5

     

    study 5 charge sign discrimination


    Fig 1 charge reco misidentification (details), M-C simulations

    The 6 identical FGT disks have an active area Rin=11.5 cm, Rout=37.6 cm;
    Z locations: 70, 80, 90, 100, 110, 120 cm with respect to the STAR ref frame.

     


    Fig 2


     Unpolarized yield for W+, RHICBOS

     


    Fig 3


     Unpolarized yield for W-, RHICBOS

     


    Fig 4


     Unpolarized & pol yield for W+, RHICBOS, ideal detector

     

    Fig 5


     Unpolarized & pol yield for W-, RHICBOS, ideal detector

     

    study 6 final plots for PAC, May 2008

    A_L on this page uses the RHICBOS sign convention to match the PHENIX choice, opposite to the FGT proposal convention.

     

    Estimated statistical uncertainty of AL for charged leptons from W decay reconstructed in the Endcap



    Fig 1 - Ideal detector



    Fig 2 - Realistic detector efficiency & hadronic background, ideal charge reco

     

     

    Statistical significance of measured AL vs. zero , integrated over PT, for DSSV2008.
    kinematics LT=300/pb LT=100/pb
    W+, forward  8.6  5.3
    W-, forward   6.7  3.9
    W+, backward   5.1  3.0
    W-, backward   0.3  0.2

    Statistical significance of measured AL vs. zero, integrated over PT, for GRSV-VAL.
    kinematics LT=300/pb LT=100/pb
    W+, forward  8.6  5.3
    W-, forward   8.2  4.7
    W+, backward   5.9  3.4
    W-, backward   3.9  2.3



    Fig 3 - Realistic detector efficiency, hadronic background, and charge reco

     missing

    study 7 revised for White Paper, AL(eta), AL(ET) , (Jan)

     Plots show AL for W+, W- as function of ET (fig1) and eta (fig2,3) 

    I assumed beam pol=70%, electron/positron reco efficiency of 70%, QCD background included, no vertex cut (as for all earlier analyses).

    For AL(ET) I integrated over eta [-2,-1] or [1,2] and assumed the following B/S(ET): 5.0 for ET>20 GeV, 1.0 for ET>25, 0.9 for ET>30, ...

    For AL(Eta)  I integrated over ET>25 GeV and assumed a constant in eta & ET B/S=0.8. 


    Fig 1. AL(ET). Only Endcap coverage is shown. ( EPS.zip )



    Fig 2. AL(Eta) . Only Endcap coverage is shown. ( EPS.zip )



    Fig 3. AL(Eta) has continuous eta-axis, binning is exactly the same as in Fig 2.  It includes Endcap & Barrel coverage

    (PS.gz) generated by take2/do5.C, doAll21()



     

    study 8 no QCD backg AL(eta), AL(ET), also rapidity (Jan)

     Plots show AL for W+, W- as function of ET (fig1,2) and eta (fig3) 

    I assumed beam pol=70%, electron/positron reco efficiency of 70%, no vertex cut (as for all earlier analyses).

    NO QCD background dilution.

    For AL(ET)  I integrated over eta [-2,-1] , [-1,+1], or [1,2] and assumed no background

    For AL(Eta)  I integrated over ET>25 GeV and assumed no background

     

     


    Fig 1 ( PS.zip )


    Fig 2 ( PS.zip )

    Accounted for 2 beams at mid rapidity.


    Fig 3

    study 9 sensitivity of STAR at forward & mid rapidity

    The goal is to provide overall STAR sensitivity for LT=100/pb & 300/pb.


    Common assumptions:

    • beam pol=70%
    • W reco efficiency 70%
    • no losses due to the vertex cut 
    • integrated over ET>20 GeV

     This page is tricky, different assumptions/definitions are used for different eta ranges.

    A) Forward rapidity: eta  range [+1,+2],  (shown on fig 1a+b in study 7 revised for White Paper, AL(eta), AL(ET) , (Jan))

    determine the degree to which we can measure an asymmetry different from zero.

    • QCD background added, B/S changes with ET.
    • sensitivity  is defined as 3 x stat_error of reco AL 
    • method: fit constant to 'data', use 3x error of the fit
    3 sigma(measured AL)
      LT=100/pb LT=300/pb
    W+, forward 0.27 0.15
    W-, forward 0.30 0.18

    I fit a constant to the black points, which is equivalent to taking the weighted average.

    The value of the average is zero, but the std dev of the average tells us sigma(measured AL).
    In the table I'm reporting 3 x this sigma (see the sketch below).

    E.g. for W+ forward we could distinguish at the 3-sigma level between 2 models of AL if the values of AL differ by at least 0.27, given LT=100/pb.
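
    A small sketch of that prescription (plain C++; the per-bin errors in main() are illustrative placeholders, not the values behind the table): the error of the weighted average is 1/sqrt(sum of 1/sigma_i^2), and the quoted sensitivity is three times that.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Fit of a constant = error-weighted average; its uncertainty is
        // 1/sqrt(sum 1/sigma_i^2), and the quoted sensitivity is 3x that.
        double sensitivity3Sigma(const std::vector<double>& errAL)
        {
            double sumInvVar = 0.0;
            for (double e : errAL) sumInvVar += 1.0 / (e * e);
            return 3.0 / std::sqrt(sumInvVar);
        }

        int main()
        {
            // illustrative per-ET-bin errors only, not the values behind the table
            std::vector<double> errAL = { 0.16, 0.11, 0.11, 0.10, 0.13 };
            std::printf("3 sigma sensitivity = %.2f\n", sensitivity3Sigma(errAL));
            return 0;
        }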
     

    B) Mid-rapidity: eta  range [-1,+1],  (shown on fig 2a+b in study 8 no QCD backg AL(eta), AL(ET), also rapidity (Jan) )

    determine the ratio of  difference of DNS-MIN and DNS-MAX to sigma(measured AL)

    • account for 2x larger yield due to 2 polarized beams
    • QCD background NOT added
    • sensitivity  is defined as ratio 
    • avr difference of DNS-MIN and DNS-MAX is used
    ratio  
       avr(ALMIN-ALMAX) LT=100/pb LT=300/pb
    W+, mid 0.15 13 21
    W-, mid 0.34 13 22

     

     C) Backward rapidity: eta  range [-2,-1],  (shown on fig 1c+d in study 7 revised for White Paper, AL(eta), AL(ET) , (Jan))

    determine the ratio of  difference of DNS-MIN and DNS-MAX to sigma(measured AL)

    • QCD background added, B/S changes with ET.
    • sensitivity  is defined as ratio 
    • avr difference of DNS-MIN and DNS-MAX is used
    ratio  
       avr(ALMIN-ALMAX) LT=100/pb LT=300/pb
    W+, backward 0.10 1 2
    W-, backward 0.5 5 9

     

     

    D) Mid-rapidity: eta  range [-1,+1], QCD background added (fig shown below, PS.zip )

    determine the ratio of  difference of DNS-MIN and DNS-MAX to sigma(measured AL)

    • account for 2x larger yield due to 2 polarized beams
    • QCD background added, using B/S(ET) from the M-C study at forward rapidity - it is the best we can do today
    • sensitivity  is defined as ratio 
    • avr difference of DNS-MIN and DNS-MAX is used
    ratio  
       avr(ALMIN-ALMAX) LT=100/pb LT=300/pb
    W+, mid 0.15 7 11
    W-, mid 0.34 10 15

     

     

    study 9a sensitivity mid rapidity LT=10/pb

    Projection of STAR sensitivity for AL for W+,W-  at mid rapidity for LT=10/pb and pol=60% 

     

    Common assumptions:


    Fig 1,  no QCD background ( PS.zip )

      Dashed area denotes pt-averaged statistical error of STAR measurement.


    Fig 2,   includes QCD background ( PS.zip )

      Dashed area denotes pt-averaged statistical error of STAR measurement.

     

    study 9b sensitivity at mid rapidity LT=10/pb Pol=50% or 60%

     Projection of STAR sensitivity for AL for W+,W-  at mid rapidity for LT=10/pb and pol=60% or 50%

    Common assumptions:

    • LT=10 pb -1
    • beam pol=60% or 50% 
    • W reco efficiency 70%
    • no losses due to the vertex cut 
    • integrated over eta [-1,+1] 
    • see remaining details in  study 9 sensitivity of STAR at forward & mid rapidity 
      Estimated significance of STAR measurement

                   W+ (avr AL_THEORY=0.35)            W- (avr AL_THEORY=0.15)
      beam pol | sig AL_STAR | STAR significance | sig AL_STAR | STAR significance
      50%      | 0.092       | 3.8 sigma         | 0.18        | 0.8 sigma
      60%      | 0.077       | 4.6 sigma         | 0.15        | 1 sigma

    Fig 1,   Pol=60% ( PS.zip )

      Dashed area denotes pt-averaged statistical error of STAR measurement.

     


    Fig 2,   Pol=50% ( PS.zip )

     

    e/h algo for Barrel

     not developed, add your analysis as child pages to this page

    e/h discrimination in the Endcap

    Attach different analyses as child pages; make them self-contained.

    a) e/h isolation based on Geant Record (Mike, Mar 31)

     For the new QCD background,

    partonic pT 15 - 20 GeV : 50,000 events  LT=8.0/pb
    partonic pT 20 - 30 GeV : 20,000 events  LT=0.3/b
    partonic pT 30 - 50 GeV : 10,000 events  LT=0.4/pb
    partonic pT 50+       GeV :   1,000 events  LT=0.8/pb

    The high pT tail is still weak.  I could run more high pT events (within the time frame of one more day) or double the bin widths to smooth out the tail and allow for an efficiency plot up to ~50 GeV (albeit with relatively large error bars).  Let me know your choice.

    The W files are mit0009 and the QCD files are mit0012.
    I use r = 0.26 for the isolation cut, as per the original IUCF proposal, and use the jet finder for the away side veto.

    -Mike

     

     

    b) e/h isolation based on Geant Record (Mike, April 3)

    This is geant-track based analysis

    Mike:

    Bottom plots are S/B ratios.  

    These are extremely loose cuts and I don't think you can base the entire FGT analysis on them.
    First of all, the e/p cut isn't quite right.  I use the actual energy of the track whereas most hadrons would just drop MIPs into the calorimeter making the e/p cut very helpful.  I did try to do this in my code but the e/p cut became TOO good and I didn't want to have to deal with trying to convince people of that.
    Second of all, no neutral energy isolation cut is made to veto against neutral energy depositions.
    Third of all, no shower shape (transverse and longitudinal) information is used at all.
    I think it's fair to say that this is a worst case scenario analysis.

    Jan: 

    I am trying to better understand these last two S/B plots.
    The message from the lower figure is:
    * the isolation cut alone reduces the background by a factor of 2 to 3 for W+ and a factor of 4 to 10 for W-, with the reverse PT dependence for W+ & W-
    * the away-side jet veto helps almost nothing in e/h discrimination.
    * if only the isolation & away-side jet cuts are applied, background dominates over signal by a factor of ~300 at PT of 20 GeV and improves to a factor of ~2 at PT=40 GeV.

    Taken at face value, the Endcap information must add a discrimination power of ~1000 at PT=20 GeV, changing toward a factor of 6 at PT of 40 GeV/c (assuming we want S/B=3).

    My comment:
    The value of 6 is in reach, but the value of 1000 may be non-trivial.

    e/p cut - I agree with Mike.
    It looks like the e/p cut applied to h- reduces it by up to a factor of 2 for pT of 40 GeV.
    This cut needs to be dropped for real data - we will not measure PT in the FGT with reasonable accuracy at large track PT. This reduces the overall e/h discrimination power for W- at PT>30 GeV, so a more realistic estimate of the discrimination power of this (Geant based) algo for W- at PT of 40 GeV is rather S/B=0.35, and we need an additional factor of 8 from the Endcap.

     

    BFC: Filtered Vs. Unfiltered Comparison

    Comparison of BFC with and without filtering

     

    See pdf file at bottom of page:

    The left-hand plots are the unfiltered events and the right-hand plots are the filtered

     

    1. Page 1 shows how many events survive each individual cut. Bins 0 and 1 do not match because there are different numbers of events in each file and because in my code bin 1 is a cut on 15 GeV, not 17 GeV where the filtered cutoff was set. All subsequent bins match.
    2. Page 2 shows how many events pass the cuts sequentially. Again, all bins from 2 onward are the same.
    3. Page 3-8 show the trigger patch ET spectrum after sequential cuts were applied, the spectra look the same.
    4. Page 9 shows the ET weighted average eta position of the trigger patch vs the PT weighted average eta position of electrons going into the endcap from the geant record.
    5. Page 10 shows the same information for the phi position.
    6. Page 11 shows the ET of the trigger patch vs the PT of the thrown electrons.

     

    Geant Spectra for Filtered and Unfiltered W events

    Non-Filtered

    Electrons:

    Fig1: No eta cuts

     

    Fig 2: Eta between -1 and 1

     

    Fig 3: Eta between 1.2 and 2.4

     

    Positrons:

    Fig 4: No eta cuts

     

    Fig 5: Eta between -1 and 1

     

    Fig 6: Eta between 1.2 and 2.4

     

    Filtered on the pythia level: 20GeV in eta [0.8,2.2]

    Electrons:

    Fig 7: No eta cuts

     

    Fig 8: Eta between -1 and 1

     

    Fig 9: Eta between 1.2 and 2.4

     

    Positrons:

    Fig 10: No eta cuts

     

    Fig 11: Eta between -1 and 1

     

    Fig 12: Eta between 1.2 and 2.4

    Pythia analysis version 1.0

    Preliminary Analysis of runs in setC2


    This analysis was performed on the events in setC2. See here for details. I ran 48,000 QCD background events and 16,000 W events using version 1.0 of my analysis program.

     

    Fig 1: Z position of the primary vertex. The W events are on the left and the qcd background events are on the right.


    Fig 2: The number of primary tracks found for each event. The W events are on the left and the qcd background events are on the right.


    The first cut an event is required to pass is that the transverse energy in a 3X3 patch of towers, centered on the tower with the highest energy, must be greater than 15GeV. All subsequent cuts are made only on events which pass this trigger.


    Fig 3: The transverse energy deposited in the trigger patch for each event. The W events are on the left and the qcd background events are on the right.


    For this analysis, I used the cuts I developed from looking at single particle events. For the 1-D histograms the W events are in black and the qcd background events are in red. For the 2-D histograms the W events are on the left and the qcd background events are on the right.

    • cutOne: number of hit towers < 11
    • cutTwo: number of hit U strips > 24 and < 42
    • cutThree: not included
    • cutFour: second pre-shower energy > 0.013
    • cutFive: post-shower energy < 0.035
    • cutSix: post-shower over full patch ratio < 0.0007
    • cutSeven: three U strip over all U strip energy > 0.5 and < 0.8
    • cutEight: seven U strip over all U strip energy > 0.65 and < 0.92
    • cutNine: total U plane energy > 0.2
    • cutTen: total U plane energy > 0.3
    • cutEleven: post energy below line running through (0,0) and (0.3,0.05)

     


    Fig 4. These plots show how many events passed a particular cut. W events are on the left and qcd background events are on the right. For reference, a total of 5997 W events passed the trigger and 6153 qcd background events passed the trigger.


    ver 1.2 : e/h isolation based on TPC tracks (Brian)

    Hi everyone,

    I have some results for my first iteration of isolation cut and I would
    appreciate some feedback as to whether they look reasonable or not.

     These plots were made after requiring that the 3x3 trigger patch ET be greater than 20GeV.


    First I find the eta and phi coordinates of the highest tower in the endcap. I then set a value for the radius. I then loop over all tracks and towers and, if their endcap crossing point lies within my radius, I add their pT or ET to the total (a sketch follows below).
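
    A schematic C++ version of that loop (generic structs stand in for the tracks and towers; these are placeholders, not the actual StEEmc classes):

        #include <cmath>
        #include <vector>

        // Placeholder object: (eta, phi) of the endcap crossing point plus its
        // ET (towers) or pT (tracks). Not the real STAR classes.
        struct EndcapObject { double eta, phi, et; };

        double deltaR(double eta1, double phi1, double eta2, double phi2)
        {
            const double pi = std::acos(-1.0);
            double dEta = eta1 - eta2;
            double dPhi = std::fabs(phi1 - phi2);
            if (dPhi > pi) dPhi = 2.0 * pi - dPhi;      // wrap the phi difference
            return std::sqrt(dEta * dEta + dPhi * dPhi);
        }

        // Sum track pT and tower ET inside a cone of the given radius around the
        // highest tower (etaHi, phiHi); the ratio trigger-patch ET / sum is then
        // histogrammed as in the plots below.
        double isolationSum(double etaHi, double phiHi, double radius,
                            const std::vector<EndcapObject>& tracks,
                            const std::vector<EndcapObject>& towers)
        {
            double sum = 0.0;
            for (const EndcapObject& t : tracks)
                if (deltaR(etaHi, phiHi, t.eta, t.phi) < radius) sum += t.et;
            for (const EndcapObject& t : towers)
                if (deltaR(etaHi, phiHi, t.eta, t.phi) < radius) sum += t.et;
            return sum;
        }

        int main()
        {
            std::vector<EndcapObject> tracks = { {1.5, 0.1, 12.0} };
            std::vector<EndcapObject> towers = { {1.5, 0.1, 22.0}, {2.0, 2.0, 5.0} };
            return isolationSum(1.5, 0.1, 0.26, tracks, towers) > 0.0 ? 0 : 1;
        }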

     

    By highest tower you mean:
    - a reco vertex is found
    - ADC is converted to ET in the event reference frame (you are using event eta, not detector eta)
    - you find the highest ET tower

    I use energy to find the highest tower, not transverse energy: StEEmcTower highTow = mEEanalysis->hightower(0). And for any event to be processed further, it must have a reco primary vertex found.


     

    StEEmcTower StEEmcA2EMaker::hightower(   Int_t     layer = 0    )
    Returns the tower with the largest ADC response

    3x3 trigger patch is built around this tower by construction. I ask for the high tower, then I ask for all of its neighbors. The high tower and its neighbors (usually 8, but could be less for high towers on sector boundries) make up the trigger patch.

    I then set a value for the radius. I then loop over all tracks. - Do you mean primary tracks with flag()>0 and nFit/nPos>0.51?
    Yes, I use these QA conditions.

     

     

    These plots show the ratio of the transverse energy in the 3x3 trigger
    patch to the total transverse energy in the isolation radius. I ran W
    events from files setC2_Weve_N (black curve) and QCD events from files
    setC2_qcd_N (red curve).


    Trigger ET over Et in iso radius r=0.1:


    Trigger ET over ET in iso radius r=0.26:


    Trigger ET over ET in iso radius r=0.7:

     

    Pythia Analysis ver 2.3: Effects of cuts

    Effect of cuts on Trigger Patch ET Spectrum


    Here I take a first look at how various cuts affect the trigger patch ET spectrum. For this first run, I look at 13 cuts. The first four reduce the energy range and the areas of the endcap that I look in. All of these cuts are applied before any additional cuts are made. The final nine cuts are various isolation and endcap cuts. For this analysis I used sample ppWprod from setC2 for the W sample and pt30-50 from setC4 for the QCD sample. Details here. These figures are not scaled to 800 inv_pb, so all W events must be multiplied by (800/1014) and all QCD events must be multiplied by (800/12.1).


    Here are the cuts:

     

    • Cut One: This cut requires that a reconstructed vertex is found, that the vertex lies in the range z=[-70,-50] and that the trigger patch ET be at least 15GeV.
    • Cut Two: This cut requires that the trigger patch ET be at least 20GeV.
    • Cut Three: This cut requires that the ET weighted average eta value of the trigger patch be less than 1.7
    • Cut Four: This cut requires that the highest energy tower be in etabin 6, 7, 8, 9, 10, or 11.
    • Cut Five: This is a cut on the ratio of the trigger patch ET to the transverse energy of all towers within a radius of 0.45 from the hightower. The cut was > 0.96 to pass.
    • Cut Six: This is a cut on the transverse distance between the point where a track crosses the endcap and the position of the high tower. The cut was set at < 0.7 to pass but this was a typo, the correct cut would have been closer to 0.07.
    • Cut Seven: This is a cut on the number of tracks above 1GeV which cross the endcap within a radius of 0.70 of the hightower. There must be 0 or 1 tracks to pass.
    • Cut Eight: This is a cut on the ratio of the energy of the seven highest U strips to the energy of all U strips under the trigger patch. The cut was > 0.7 to pass.
    • Cut Nine: This is a cut on the ratio of the energy of the two highest towers in the trigger patch to the energy of all the towers in the trigger patch. The cut was > 0.9 to pass.
    • Cut Ten: This is a cut on the number of towers with energy greater than 800MeV in the same sector as the trigger patch. The cut was < 6 to pass.
    • Cut Eleven: This is a cut on the number of hit strips in the U plane in the sector containing the trigger patch. The cut was < 48 to pass.
    • Cut Twelve: This is a cut on the ratio of the post-shower energy in the trigger patch to the full energy in the trigger patch. The cut was < 0.0005 to pass.
    • Cut Thirteen: This is a cut on the post-shower energy in the trigger patch. The cut was < 0.04 to pass.


    The following plots show the effects the above cuts had on the trigger patch ET spectrum when applied in the order given.

    Fig 1: This plot shows the number of events that passed a set of cuts. Bin 1 shows the raw number of events processed and each subsequent bin shows all events that passed a given cut and all cuts before it. So bin eight will show all events that passed cuts 1-7. W events are on the left and QCD events are on the right.


    Fig 2: This plot shows the effects of the four phase space cuts on the trigger patch ET spectrum. Again W events are on the left and QCD events are on the right.


    Fig 3: This plot shows the effects of the remaining 9 cuts on the trigger patch ET spectrum. Here the black line, instead of representing the raw spectrum, represents the ET spectrum after the four phase space cuts have been applied. Again the cuts are sequential. W events are on the left and QCD events are on the right.


    Fig 4: This plot shows the effect of all thirteen cuts in an easier to see form. The black line is the raw spectrum, the red line is the spectrum after the four phase space cuts, and the blue line is the spectrum after all remaining cuts. W events on the left and QCD on the right. For the W sample: in the region 20-60, the raw sample contains 4319 events, the sample after the phase space cuts contains 2400 events, and the sample after all cuts contains 1924 events. For the QCD sample: in the region 20-60, the raw sample contains 7022 events, the sample after the phase space cuts contains 4581 events, and the sample after all cuts contains 160 events.

     

     

    Pythia Analysis ver 2.4: Cuts on all setC4 jobs

    Effect of Cuts on Trigger Patch ET

    In this analysis I look at the effect that the cuts I used in Ver 2.3, as well as two new cuts, have on the transverse energy spectrum of the 3x3 trigger patch. For this analysis I used all files in setC4 for the QCD background and setC2_Wprod_N for the W samples. Details here. I weighted all events to 800 inverse pb. As before, the first four cuts restrict the energy range and the area of the endcap we look at, and the final eleven cuts are endcap and isolation cuts. The black curves are the QCD events and the red curves are the W events. All ETs and etas are event quantities.

     

    Here are the cuts: (Note, ordering different from ver 2.3)

     

    • Cut One: This cut requires that a reconstructed vertex is found, that the vertex lies in the range z=[-70,-50] and that the trigger patch ET be at least 15GeV.
    • Cut Two: This cut requires that the Trigger patch ET be greater than 20GeV.
    • Cut Three: This cut requires that the ET weighted average eta value of the trigger patch be less than 1.7.
    • CutFour: This cut requires that the highest energy tower be in etabin 6, 7, 8, 9, 10, or 11.
    • Cut Five: This is a cut on the ratio of the trigger patch ET to the total ET of all towers within a radius of 0.45 of the high tower. The cut was > 0.96 to pass.
    • Cut Six: This is a cut on the transverse distance between where a track crossed the endcap and the high tower. Counts over all tracks above 1GeV within a radius of 0.70 of the high tower. The Cut was < 0.07 to pass (fixed from Ver 2.3)
    • Cut Seven: (new) This is a cut on the transverse energy found in all towers (barrel and endcap) which lie in a region +/- 0.7 radians from Phi(hightower) + Pi. The cut was < 6.0 to pass.
    • Cut Eight: (new) This is a cut on the transverse distance between where a track crossed the endcap and the high tower. Counts over all tracks above 0.5 GeV within a radius of 0.70 of the high tower. The cut was < 0.07 to pass.
    • Cut Nine: This is a cut on the number of tracks above 1GeV which cross the endcap within a radius of 0.70 of the hightower. There must be 0 or 1 tracks to pass.
    • Cut Ten: This is a cut on the ratio of the energy in the seven highest SMD strips in the U plane under the patch to the energy in all the U strips under the patch. The cut was > 0.7 to pass.
    • Cut Eleven: This is a cut on the ratio of the energy in the two highest towers in the trigger patch to the energy in all the towers of the trigger patch. The cut was > 0.9 to pass.
    • Cut Twelve: This is a cut on the number of towers above 800 MeV in the sector (or sectors) containing the trigger patch. The cut was < 6 to pass.
    • Cut Thirteen: This is a cut on the number of hit strips in the U plane in the sector containing the high tower. The cut was < 48 to pass.
    • Cut Fourteen: This is a cut on the ratio of the post-shower energy in the patch to the full energy in the patch. The cut was < 0.0005 to pass.
    • Cut Fifteen: This is a cut on the post-shower energy in the patch. The cut was < 0.04 to pass.

     

    Fig 1: This plot show the effects of all 15 cuts. The black line is the spectrum before any cuts are applied, the red line is the spectrum after the phase space cuts (1-4) are made, and the green line is the spectrum after all 15 cuts have been applied. QCD events are on the left and W events are on the right. For the QCD sample: The raw spectrum contains 718100 events in the region 20-60, the spectrum after the phase space cuts contains 452000 events, and the spectrum after all cuts contains 5215 events. For the W sample: The raw spectrum contains 3476 events in the region 20-60, the spectrum after the phase space cuts contains 1933 events, and the spectrum after all cuts contains 1407 events.

     

     

    Fig 2: This plot shows the QCD spectrum (black) after cuts 1-15 and the W spectrum (red) after cuts 1-15 on the same figure.

    Analysis for Each Pt Range Separately (Work in progress)

    Results for Each Pt Bin

     

    The first plot shows the effects that the first four cuts (the phase space cuts) have on the trigger patch transverse energy spectrum.  The black line is the raw spectrum and the subsequent lines are the cuts, detailed on the parent page.

    The second plot shows the effects the remaining cuts have on the ET spectrum. The black line is the spectrum after the phase space cuts, the red line is the spectrum after the isolation cut, the green line is the spectrum after the awayside energy cut and the blue line is the spectrum after all cuts have been applied.

    The QCD events are on the left and the W events are on the right. Event ET is used. This is detected ET, not thrown ET. Everything is scaled to 800 inv_pb

    Pt 50-inf:

     

    Pt 30-50:

     

    Pt 20-30:

     

    Pt 15-20:

     

    Pt 10-15:

    Pythia analysis ver 2.3: isolation cuts and EEmc cuts

    First Set of Proposed Cuts


    The following plots show the cuts I intend to use in my first iteration of the e/h discrimination code. These plots were generated using ppWprod from setC2 (row 11) for W events and mit0015>10 from setC4 (row 13) for QCD events. Details are here. All events in the plots have a reco vertex in the range z=[-70,-50] and a trigger patch ET > 20 GeV. Furthermore, the isolation cut plots have the added condition that the high tower does not fall in etabins 1-5 or 12. The W sample is in black and the QCD sample is in red.


    Fig 1: Plot of the ratio of transverse energy in the 3x3 trigger patch to transverse energy of towers located inside a radius of 0.45. Cut: > 0.95.


    Fig 2: Plot of the displacement between a track and the center of the high tower when there is only one track above 1GeV in a radius of 0.70. Cut: < 0.6.


    Fig 3: Plot of the number of hit towers with energy above .8GeV that lie in the same sector as the trigger patch. Cut: < 6.


    Fig 4: Plot of the number of hit strips in the U plane in the sector containing the trigger patch. Cut: < 48.


    Fig 5: Plot of the energy in the post-shower layers of the 3x3 trigger patch. Cut < 0.04.


    Fig 6: Plot of the ratio of energy in the post-shower layers of the 3x3 trigger patch to the total energy in the trigger patch. Cut: < 0.0005.


    Fig 7: Plot of the ratio of energy in the seven highest U strips under the 3x3 trigger patch to the energy in all the U strips under the trigger patch. Cut: > 0.7.


    Fig 8: Plot of the ratio of the energy in the two highest towers in the 3x3 trigger patch to the energy in all towers of the trigger patch.

    BFC: 0, 1, and 2 step Filtering Comparison

    Comparison Between 0, 1, and 2 Step BFC Filtering

     

    For this analysis, I used code V2.5 which is the most recent version to use reconstructed tracks and vertices. I used a scaling factor of 1.25. Event ET used.

     

    QCD events

    Fig 1: Plot showing how many times each cut is passed individually. Values should coincide after bin 2

    Fig 2: Plot showing how many events pass each cut in sequence. Values should coincide after bin 2

    Fig 3: Plot showing spectrum after cuts 1-4 have been applied

    Fig 4: Plot showing spectrum after cuts 1-5 have been applied

    Fig 5: Plot showing spectrum after cuts 1-6 have been applied

    Fig 6: Plot showing spectrum after cuts 1-7 have been applied

    Fig 7: Plot showing spectrum after cuts 1-8 have been applied

     

    W events

    Fig 8: Plot showing how many times each cut is passed individually. Values should coincide after bin 2

    Fig 9: Plot showing how many events pass each cut in sequence. Values should coincide after bin 2

    Fig 10: Plot showing spectrum after cuts 1-4 have been applied

    Fig 11: Plot showing spectrum after cuts 1-5 have been applied

    Fig 12: Plot showing spectrum after cuts 1-6 have been applied

    Fig 13: Plot showing spectrum after cuts 1-7 have been applied

    Fig 14: Plot showing spectrum after cuts 1-8 have been applied

     

    Investigation of ET scaling factor

    ET scaling factor

     

    In order to investigate the discrepancy between the thrown pT of a lepton and the ET that the endcap detects, I have run single-energy electron events through the big full chain Jan has used for all the simulations and plotted the detected ET. I ran three energies: 20 GeV, 30 GeV, and 40 GeV. I ran 5000 events at each energy and each event has only one electron going into the endcap.

     

    Fig. 1: This plot shows the detected ET of the electrons which were thrown at 40GeV.

     

    Fig. 2: This plot shows the detected ET of the electrons which were thrown at 30GeV.

     

    Fig. 3: This plot shows the detected ET of the electrons which were thrown at 20GeV.

     

    We see that multiplying the detected ET by a scale factor of 1.23 recovers the thrown Pt of the electron.

     

    Plots for PAC (brian)

    Transverse Energy correction factor of 1.23 included

    Code Version 2.5

     

     

    Fig 1: Isolation cut

    Fig 2: Awayside isolation cut

    Fig 3: Seven strip cut

    Fig 4: Post patch over full patch cut

    Fig 5: All cuts spectrum with linear scale for W signal

    Fig 6: All cuts spectrum with log scale for W signal

    Fig 7: All cuts spectrum showing only W+ linear scale

    Fig 8: All cuts spectrum showing only W+ log scale

    Fig 9: All cuts spectrum showing only W- linear scale

    Fig 10: All cuts spectrum showing only W- log scale

     

    Brief explanation of cuts found here.

     

     

    Below are plots showing the contributions that other W decay channels will have to the spectrum. All spectra are weighted to 800 inv_pb. Details can be found in rows 14-18 of the plot detailing events in setC2 here.

     

    Fig 11. Spectrum of W decay events.

    Fig 12. Spectrum of Z production events.

    Fig 13. Spectrum of W jet events.

    Fig 14. Spectrum of Z jet events.

    Fig 15 Spectrum of W Z events.

     

    Fig 16: Table of integrated yields for W, Z processes scaled to 800 inv_pb after all cuts have been applied.

     

      PT>20 PT>25
    W_Prod 1827 1359
    W_Dec 64 30
    Z_Prod 113 67
    W_Jet 212 144
    Z_Jet 18 8
    WZ 4 2

     Fig 17a,b: Table of Pythia cross section and branching ratio 

     

    Fig 18: Table of integrated QCD yields for all pT bins scaled to 800 inv_pb after all cuts have been applied.

     

      pT>20 pT>25
    All pT 16460 4892
    pT50-inf 67 54
    pT30-50 2247 1430
    pT20-30 9179 2611
    pT15-20 3628 558
    pT10-15 1032 86
    pT05-10 308 154

     

    Pythia Analysis Summary

    Analysis Summary

     

    Introduction

    A future goal of the STAR collaboration is the measurement of flavor-separated polarized anti-quark distribution functions. To make these measurements we will be looking at charged leptons - electrons and positrons - arising from the decay of W bosons created in quark anti-quark collisions. One difficulty in making this measurement is the large flux of background hadronic particles, giving a signal-to-background on the order of 1/1000. The following details my efforts at developing an algorithm which can reject the hadronic background while retaining the signal leptons, to achieve a S/B of greater than one-to-one over a significant portion of the observed lepton transverse energy spectrum, which runs from roughly 20-50 GeV.

     

    Basic Philosophy

    Discrimination between leptons and hadrons is possible because of the different processes giving rise to signal and background events as well as the different showering properties of leptons and hadrons in the EEMC. Hadronic events tend to be dijets, meaning these events will deposit two blobs of energy located 180 degrees away in azimuth from each other. Leptonic events, on the other hand, arise from the decay of W bosons which produce a charged lepton and a neutrino. The neutrino is not detected so only one blob of energy is deposited in the detector. This difference allows for the application of an away-side cut which will veto events having significant energy 180 degrees away from the candidate lepton. Hadrons and leptons also behave differently inside of the EEMC, with hadrons tending to produce larger and wider showers as compared to leptons due to collisions with nuclei. This difference in shower behavior means isolation cuts on the energy around a candidate lepton can be effective. Finally, the EEMC is roughly 21 electron radiation lengths deep and only one hadron interaction length deep, meaning that much more hadronic energy will escape the back of the detector, allowing for cuts based on the amount of energy leaving the detector. Below are pictures from starsim showing the evolution of hadronic and leptonic showers in the EEMC:

    Fig 1: Picture of a 30GeV charged pion going into the EEMC generated using starsim.

     

    Fig 2: Picture of a 30GeV electron going into the EEMC generated using starsim.

     

     

    As described above, the algorithm discriminates between leptons and hadrons in three basic ways: looking at the number of tracks and energy around candidate leptons, vetoing events with too many tracks or too much energy 180 degrees away in azimuth from the candidate lepton, and by comparing shower evolution in the EEMC. Another important aspect of the discrimination algorithm is the trigger patch. The trigger patch is always taken to consist of the tower with the highest energy - which is also taken as the position of the candidate electron - and all adjacent towers, usually yielding a 3x3 patch. This trigger patch is the area of the EEMC where shower evolution is investigated. The investigation of shower evolution is possible in the EEMC due to the five separate readout layers: the two pre-shower layers, the two shower maximum detectors (SMDs) and the post-shower layer. The pre-shower layers are located at the front of the detector where few leptons will have started to shower, the SMDs are located five out of 24 layers deep and are positioned at the depth where there is maximum shower energy deposition, and the post-shower layer is at the back of the detector where most lepton showers will have died out. By looking at the energy deposited in the separate layers, one can get an idea of the longitudinal shower development inside the EEMC. One can also see the transverse shower development by looking at the number of hit strips in the SMD layers. Many combinations of EEMC quantities were tried over the course of developing the discrimination algorithm, but the best discrimination by far was achieved by looking at the ratio of the energies in the seven highest adjacent SMD strips under the trigger patch to the energy of all SMD strips under the trigger patch. Less effective but still useful EEMC cuts include the ratio of the energy in the two trigger patch towers with the highest energy to the energy in the full trigger patch and the ratio of the energy in the post-shower layer to the energy in the full trigger patch. The away-side and isolation cuts provide powerful discrimination as well and are largely independent of the EEMC cuts listed above.
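
    As a concrete illustration of the seven-strip cut described above, the sketch below computes the ratio of the seven highest adjacent SMD strip energies to the total SMD energy under the trigger patch. This is only a minimal sketch; the container of strip energies and the function name are assumptions for illustration, not the actual analysis code.

        // Sketch: ratio of the 7 highest adjacent SMD strips under the trigger
        // patch to all strips under the patch.  'stripE' is a hypothetical array
        // of the energies of the strips lying under the patch, in strip order.
        #include <algorithm>
        #include <numeric>
        #include <vector>

        double sevenStripRatio(const std::vector<double>& stripE)
        {
            const std::size_t w = 7;                       // width of the sliding window
            if (stripE.size() < w) return 0.0;
            const double total = std::accumulate(stripE.begin(), stripE.end(), 0.0);
            if (total <= 0.0) return 0.0;

            double best = 0.0;                             // largest 7-strip sum found so far
            for (std::size_t i = 0; i + w <= stripE.size(); ++i)
                best = std::max(best,
                                std::accumulate(stripE.begin() + i, stripE.begin() + i + w, 0.0));
            return best / total;                           // leptons peak near 1, hadrons spread lower
        }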

     

    Code Versions

    The discrimination code has gone through many versions as new cuts were tested, tracking methods were improved, and other improvements were made. Below is a brief description of each version as well as the source code for future reference.

     

    V1.0:

    Version 1.0 is the first version of the discrimination code designed to work with the Pythia simulations run at MIT. This version borrowed heavily from earlier code designed to work on single thrown lepton and hadron events. The code used to access the EEMC information was taken directly from this earlier work. In addition to the EEMC analysis, this version includes code to access tracking information from the MuDst files as well as a function to determine if a track passed close to the tower with the highest energy. Also included was a primitive function which looked at the away-side energy only in the endcap. This code looked at approximately eleven different cuts and displayed what the trigger patch transverse energy spectrum would look like after each one was applied, as well as what the spectrum would look like when several different combinations of cuts were applied.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here

     

    V1.1:

    Version 1.1 follows the same basic pattern as version 1.0 detailed above. The major difference is in the trigger criteria used to determine whether an event should be analysed. In version 1.0, all events needed to have a trigger patch ET greater than 15GeV in order to be processed further. In addition to the 15GeV condition, version 1.1 requires that all events have a found vertex and that that vertex be within ten centimeters of z=-60. (The simulations were generated with the interaction point at -60 cm so that tracks with large etas would pass through more TPC volume). The next significant change was the inclusion of a function which calculated the transverse energy deposited in a tower taking into account the z position of the vertex, since the ET value for a particle originating at z=-60 can be significantly different from the value obtained assuming the particle originated at z=0. Minor changes include the addition of a function which allows the setting of the energy threshold needed to consider a tower hit, a function which gives the ET weighted position of the trigger patch, and the investigation of several new cut quantities.

    • Original source code can be viewed here
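
    The vertex-corrected transverse energy described for version 1.1 can be sketched in a few lines: recompute the tower's polar angle with respect to the actual vertex z instead of z=0 and take ET = E sin(theta). The tower coordinates and the function name below are illustrative assumptions only, not the actual analysis code.

        // Sketch: transverse energy of a tower measured from a vertex at
        // (0, 0, zVertex) rather than from z = 0.  (xTower, yTower, zTower) is an
        // assumed tower position; version 2.0 later places it at the SMD depth.
        #include <cmath>

        double towerEt(double eTower,
                       double xTower, double yTower, double zTower,
                       double zVertex)
        {
            const double r     = std::sqrt(xTower * xTower + yTower * yTower);
            const double theta = std::atan2(r, zTower - zVertex);   // polar angle seen from the vertex
            return eTower * std::sin(theta);
        }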

     

    V2.0:

    Version 2.0 differs from the 1.x versions in that it implements functions allowing isolation cuts, both same side and away side, utilizing tower energies, track momenta, and barrel information. Simulations done by Les Bland showed that these kinds of cuts had the potential to provide nearly two orders of magnitude in background reduction. These functions add up the transverse energy/momentum of all towers/tracks which fall within a user-set region. For the same side isolation cut, this region is a circle with user-set radius centered on the high tower. For the away side isolation cut, the region is a slice in phi with a user-set width around the line which is 180 degrees away in azimuth from the high tower. In addition to the isolation cut functions, this version adopts the new cuts from version 1.1 and adds histograms exploring various radii for the isolation cuts. The code to calculate the transverse energy was also changed again, this time to set the 'center' plane of the tower to the position of the SMD plane as opposed to the actual physical center, because we expect the most shower activity at the SMD depth.

    • Original source code can be viewed here
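
    The same-side and away-side isolation sums of version 2.0 can be sketched as below. The minimal Tower record and the default radii are assumptions for illustration; in the real code the same sums are also made over track momenta and barrel towers.

        // Sketch: same-side (cone) and away-side (phi slice) isolation sums.
        #include <cmath>
        #include <vector>

        struct Tower { double eta, phi, et; };     // hypothetical minimal tower record

        const double kPi = 3.14159265358979;

        // opening angle in phi, always folded into [0, pi]
        // (the wrap-safe acos(cos()) form later adopted in version 2.5)
        double dPhi(double phi1, double phi2)
        {
            return std::acos(std::cos(phi1 - phi2));
        }

        // ET summed in an eta-phi cone of radius r centred on the high tower
        double sameSideEt(const std::vector<Tower>& towers,
                          double etaHT, double phiHT, double r = 0.45)
        {
            double sum = 0.0;
            for (const Tower& t : towers) {
                const double de = t.eta - etaHT;
                const double dp = dPhi(t.phi, phiHT);
                if (std::sqrt(de * de + dp * dp) < r) sum += t.et;
            }
            return sum;
        }

        // ET summed in a phi slice of half-width w centred 180 degrees from the high tower
        double awaySideEt(const std::vector<Tower>& towers,
                          double phiHT, double w = 0.7)
        {
            double sum = 0.0;
            for (const Tower& t : towers)
                if (dPhi(t.phi, phiHT + kPi) < w) sum += t.et;
            return sum;
        }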

     

    V2.1:

    Version 2.1 is very similar to version 2.0 and contains only minor changes. One concern with previous versions was that events occurring at high eta did not have reconstructed tracks because of poor TPC tracking in this region, and thus it was hard to get a good idea of how well cuts using tracking were doing. To investigate this problem, several isolation cuts and histograms were made with the condition that the high tower not be located in eta bins 1-5 - which excluded high eta events - or eta bins 11 and 12 (because of edge effects around the transition from endcap to barrel). In addition to this bin restriction investigation, new histograms were made to study the effects of moving the trigger patch ET threshold from 15 to 20GeV on the cut quantities from version 2.0. Other small changes included the addition of a crude function to determine approximately how many electrons/positrons were going into the endcap and a modification to the away side isolation function to read out the track, barrel, and endcap energies separately.

    • Original source code can be viewed here

     

    V2.2:

    Version 2.2 adds two functions which provide a different way to carry out the same side isolation cuts. The first function counts the number of tracks above a certain threshold which cross the endcap within some radius of the high tower and also calculates the transverse displacement between the track and the high tower if there is one and only one track in the radius. This one and only one track scheme is flawed and is modified in later versions. The second function calculates the transverse energy deposited in the barrel and endcap towers within a certain radius. The idea behind the new isolation cuts was that QCD events should have many tracks around the high tower while W decay events should have few, maybe only one. It was also thought that events with one and only one track in the isolation radius may only be displaced some slight amount from the high tower and that this could give good lepton hadron discrimination. There are many new histograms exploring these ideas.

    • Original source code can be viewed here

     

    V2.3:

    Version 2.3 makes many changes to how the cuts are organized. The trigger patch ET spectrum is shown after each individual cut as well as after each cut in succession. The cuts in this version were chosen from the cuts in the previous version which showed the best discrimination power, all other cuts were removed. In addition to re-organizing the cuts, new trigger conditions were included. Along with the trigger conditions from previous versions, events now must have a trigger patch ET greater than 20GeV, have the high tower be in etabins 6-11, and have the ET weighted trigger patch eta position be greater than 1.7 to be processed further.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here

     

    V2.4:

    Version 2.4 is very similar to version 2.3 in terms of cuts. Cuts on the away side energy and the displacement between tracks and the high tower have been added. This version of the code has also been cleaned up significantly, with many extraneous histograms and function calls being removed. This version also allowed for overall weighting of the event samples being processed, which allows for the scaling of all pythia samples to the same integrated luminosity. This method of weighting the samples was cumbersome because the code must be recompiled for each separate simulation batch, and if a mistake were made the whole pythia sample would need to be rerun. In later versions, this method is dropped in favor of a script which weights the batches while merging the separate event files.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here

     

    V2.5:

    Version 2.5 is the last version of the code to use cuts based on reconstructed tracks from the TPC. It is also the version used to produce plots for several presentations and proposals. As such, it has many histograms which were specifically requested for those presentations, chief among them histograms showing the effects of cuts on the trigger patch ET spectra for electrons and positrons separately. In anticipation of using GEANT tracking in future versions, a function was created which would count tracks going into an isolation region using the GEANT record. This function looped over all vertices within three centimeters of the primary vertex and counted all the tracks above a certain threshold which made it into the isolation region. This method often double-counts tracks and is modified in later versions. In addition to the new functions and histograms, two glitches in the code have been fixed in this version. The first is the addition of an ET scaling factor which single particle simulations had shown was needed to make the thrown electron pt and the detector response agree; the value used is 1.23. The second correction fixed a problem in calculating the phi displacement between two objects in the endcap. Because the endcap coordinate system splits the detector into two halves and maps one 0 -> 180 and the other 0 -> -180, just taking the difference between phi coordinates will occasionally lead to the wrong actual displacement. To rectify this, the difference in phi is now calculated as acos(cos(phi1 - phi2)) instead of just (phi1 - phi2). In this version the cuts remain the same as in version 2.4.

    • Original source code can be viewed here
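
    The effect of the phi-displacement fix is easy to demonstrate numerically; the two example angles below are arbitrary values chosen only to straddle the +/-180 degree seam.

        // Sketch of the v2.5 phi-displacement fix: the naive difference can be off
        // by a full turn when the two objects straddle the +/-pi seam, while
        // acos(cos(dphi)) always returns the true opening angle in [0, pi].
        #include <cmath>
        #include <cstdio>

        int main()
        {
            const double phi1 =  3.05;   // just below +pi
            const double phi2 = -3.05;   // just above -pi

            const double naive     = phi1 - phi2;                      // 6.10 rad - wrong
            const double corrected = std::acos(std::cos(phi1 - phi2)); // ~0.18 rad - true opening angle

            std::printf("naive = %.2f rad, corrected = %.2f rad\n", naive, corrected);
            return 0;
        }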

     

    V3.0:

    Version 3.0 uses GEANT for tracking and vertex finding information. Looking at the results from previous code, it was thought that tracking efficiency and vertex finding were still too poor. It was decided to use GEANT tracking in anticipation of the improved tracking which would be available with the FGT upgrade. With the perfect tracking from GEANT, it was possible to explore the upper limit of how much discrimination track cuts could provide. Because of the perfect tracking, the volume of the detector looked at would not need to be restricted as much. Thus, the trigger conditions were changed from version 2.5 to eliminate the restriction that the ET weighted eta position of the trigger patch must be less than 1.7. The restriction on the eta bin of the high tower was also relaxed so that it could be located in bins 2-11 (bins 1 and 12 were still excluded to reduce edge effects). The isolation cuts are now handled by four functions: two calculate the energy deposited in the calorimeters for the same side and away side cuts, and two count the number of tracks above a certain pt threshold going into the same side and away side regions. The track counting functions still have the double counting problem in this version. The last major change to this code is the way in which the cuts are organized and displayed. In this version the cuts are divided into three groups: the initial trigger cuts that all events must pass, the isolation cuts, and the EEMC cuts. Each cut quantity is displayed after the trigger cuts as usual, but now each cut is also displayed after the trigger cut and the isolation cuts have been applied, and each cut is displayed again after all other cuts have been applied, so that in the end each cut is displayed three separate times. This was done to see which cuts were independent of each other.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here

     

    V3.1:

    Version 3.1 of the code solves the double counting problem that the previous track counting functions had. In this version the track counting is done recursively. The GEANT track and vertex information is set up as a network of connected nodes. Each node is a vertex and represents a particle decay, and the lines coming out of each node represent the resultant particles of this decay. These lines will then connect to other nodes if that particle decays. The track counting function loops through all the particle lines coming from the primary vertex and asks where the next vertex connected to that line is; if there is no other vertex, or the vertex is located more than three centimeters away from the primary vertex, the particle is considered stable and is counted (if it is heading into the proper region). If the next vertex is less than three centimeters from the primary vertex, however, the function is called again and loops over the tracks coming from this secondary vertex, and the process is repeated until all tracks have been looped over. In this way, the code avoids counting the track of a particle which decays along with the tracks of its daughter particles. In addition to fixing the double counting problem, the values of the cuts were changed in this version. Version 3.0 threw away too many signal events, so version 3.1 used the same set of cuts but loosened the cut values so they did not throw away as many events.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here
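
    The recursive counting can be sketched as below. The Vertex and Track node structures are hypothetical stand-ins for the GEANT record, not the actual classes used; the point is only the recursion that replaces a decaying particle by its daughters.

        // Sketch of the v3.1 recursive track counting.  A track is counted as
        // stable if it ends more than 3 cm from the primary vertex (or not at
        // all); otherwise its decay vertex is descended into, so a decaying
        // particle and its daughters are never counted together.
        #include <cmath>
        #include <vector>

        struct Vertex;
        struct Track  { double pt; bool inRegion; const Vertex* stopVertex; };
        struct Vertex { double x, y, z; std::vector<Track> daughters; };

        double dist(const Vertex& a, const Vertex& b)
        {
            return std::sqrt((a.x - b.x) * (a.x - b.x) +
                             (a.y - b.y) * (a.y - b.y) +
                             (a.z - b.z) * (a.z - b.z));
        }

        int countTracks(const Vertex& v, const Vertex& primary, double ptMin)
        {
            int n = 0;
            for (const Track& t : v.daughters) {
                const bool decaysNearby = t.stopVertex && dist(*t.stopVertex, primary) < 3.0;
                if (decaysNearby)
                    n += countTracks(*t.stopVertex, primary, ptMin);   // follow the daughters instead
                else if (t.pt > ptMin && t.inRegion)
                    ++n;                                               // stable track entering the region
            }
            return n;
        }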

     

    V3.2:

    Version 3.2 is very similar to version 3.1, the major differences are the values used for the cuts. The quantities cut on are the same as in version 3.0 and 3.1, but now the values have been tightened slightly compared with version 3.1, so now each cut throws away around 2 or 3% of the signal. In addition to this tightening, the away side energy and track quantities were plotted against the trigger patch ET in a 2-D plot. This allowed for cuts which rejected much more background at low pt than was possible before with the same 1-D cuts.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here

     

    V3.21:

    Version 3.21 is almost exactly the same as version 3.2; the major difference is that this version introduces an inefficiency into two of the track cuts. The cuts on the number of charged tracks above 0.5 and 5GeV going into the same side isolation circle both have large numbers of events with zero tracks in the QCD background sample and very few events with zero tracks in the signal sample. It is likely that many of these zero-track events are Pi0's. With GEANT tracking, these Pi0's can be identified with 100% efficiency, but for real data, material in front of the detector will cause many of the Pi0's to convert, giving rise to charged tracks. This conversion process will make the zero-charged-track cuts less efficient. To explore the effects of these conversions, a random number generator was used to pass 30% of events with zero charged tracks in both cuts. The only other change from version 3.2 was to move the 2-D histograms showing the trigger patch ET vs. the cut quantities so that they are filled after all other cuts but their own have been applied, ensuring they are incremented properly.

    • Original source code can be viewed here
    • Page detailing analysis using this code can be viewed here
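
    The 30% pass of zero-track events can be sketched as below for the cut-5 style track isolation cut. The real analysis presumably used ROOT's random generator; the standard-library generator and the threshold arguments here are assumptions made only to keep the example self-contained.

        // Sketch of the v3.21 conversion inefficiency: events with zero charged
        // tracks in the isolation region, which the v3.2 cut rejects outright,
        // are allowed through 30% of the time.
        #include <random>

        bool passTrackIsoCut(int nCharged, int maxTracks, std::mt19937& rng)
        {
            if (nCharged >= maxTracks) return false;   // too much charged activity near the high tower
            if (nCharged > 0)          return true;    // at least one charged track: accepted as before
            std::uniform_real_distribution<double> flat(0.0, 1.0);
            return flat(rng) < 0.30;                   // zero tracks: pass 30% to mimic pi0 conversions
        }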

     

    Results

    Detailed results from the latest version of the discrimination code can be found on the pages linked to in sections 3.2 and 3.21 but the major points as well as some summary plots will be shown here.

     

    Fig 3: This plot shows the final trigger patch ET spectra after all cuts have been applied. The QCD background is in black and the W signal is in red. The plot on the left is from v3.2 and the plot on the right is from v3.21.

     

    Fig 4: This plot shows the signal-to-background ratio for versions 3.2 and 3.21 after all cuts have been applied.

     

    Fig 5: This table shows the effectiveness of each cut for version 3.2.

    Cut                   QCD Events Cut       Signal Events Cut
    Iso Cut               235 of 754: 31%      18 of 1151: 2%
    Small Iso Track       82 of 536: 15%       11 of 1131: 1%
    Away ET Cut           1300 of 1754: 74%    9 of 1135: 1%
    Away Track Cut        924 of 1378: 67%     45 of 1165: 4%
    Big Iso Track         3542 of 3997: 89%    4 of 1124: <1%
    2 Highest Towers      63 of 521: 12%       21 of 1141: 2%
    7 U Strips            527 of 1195: 44%     31 of 1175: 3%
    7 V Strips            579 of 1195: 48%     33 of 1175: 3%
    Hit U Strips          7 of 468: 1%         10 of 1138: <1%
    Hit V Strips          8 of 468: 2%         10 of 1138: <1%
    PS patch/Full patch   151 of 606: 25%      10 of 1130: <1%

     

    Fig 6: This table shows the effectiveness of each cut for version 3.21.

    Cut                   QCD Events Cut       Signal Events Cut
    Iso Cut               739 of 2852: 26%     17 of 1107: 2%
    Small Iso Track       1042 of 2931: 36%    11 of 1089: 1%
    Away ET Cut           3983 of 5897: 68%    15 of 1093: 1%
    Away Track Cut        3461 of 4897: 71%    63 of 1121: 6%
    Big Iso Track         3310 of 5171: 64%    4 of 1080: <1%
    2 Highest Towers      110 of 2024: 5%      19 of 1097: 2%
    7 U Strips            1194 of 3620: 33%    30 of 1131: 3%
    7 V Strips            1323 of 3620: 37%    32 of 1131: 3%
    Hit U Strips          27 of 1966: 1%       10 of 1096: 1%
    Hit V Strips          44 of 1966: 2%       10 of 1096: 1%
    PS patch/Full patch   189 of 2095: 9%      10 of 1088: 1%

     

    Figures 5 and 6 show the effect of each cut after all other cuts have been applied. (The exception is for cuts on U and V plane quantities, in which case all other cuts but the similar cut on the other plane are applied.) For each cut, the number of events which survived the other cuts and the number of those events which the cut removes are shown. So for example, looking at the iso cut in Figure 5, 754 background events survive the thirteen other cuts and the iso cut removes 235 of these surviving events. This can be taken as a way to measure how independent a cut is from all the others. The tables show that with perfect tracking, the cut on the number of charged tracks above 5GeV is by far the best cut, but with the 30% inefficiency added in, the effectiveness of this cut is reduced so that it is around parity with the away side cuts.

     

    Conclusions

    As the figures above show, the discrimination code achieves a signal-to-background ratio of greater than one-to-one over a significant portion of the lepton transverse energy range, indicating that background should not be an insurmountable barrier to performing the W analysis at STAR. These results were obtained using GEANT, and any analysis of real data will need to monitor vertex finding and tracking efficiency to ensure they do not degrade the effectiveness of the cuts too much. The addition of the FGT should help with these issues, and it is likely that the code will need to be optimized to take advantage of the improved tracking of the FGT. Finally, this analysis is based on a powerful discrimination code which can be refined and modified to suit future analyses.

     

     

     

     

    Pythia Analysis ver 3.0: Analysis of set C4 events using geant record

    Analysis of all setC4 events using geant tracks and vertices

     

    In this analysis, I looked at all QCD background events from setC4 as well as W events from setC2 Wprod detailed here. I used version 3.0 of my analysis code which uses geant vertices and tracks instead of reconstructed vertices and tracks as version 2.5 did. I have also made changes to the quantities I make cuts on, see below. All transverse energy is event ET and an energy scaling factor of 1.23 is used.

     

    Cuts:

    I make 14 cuts described below. Plots of the cut spectra can be found here. Pages 1-11 show the cut spectra after 3 preliminary phase space cuts have been made (spectra of the first 3 cuts are not shown). Every event must pass the first three cuts to be considered. Pages 12-18 show cut spectra 8-14 after cuts 1-7 have been applied. Pages 19-29 show cut spectra after all cuts but their own have been applied. Pages 30-32 show a possible alternative cut. (A minimal sketch of the three preliminary phase-space cuts is given after the list below.)

    1. Cut 1: Require that the geant vertex be in the region [-70,-50]
    2. Cut 2: Require that the trigger patch ET > 20GeV
    3. Cut 3: Require that the high tower not be in eta bin 1 or 12
    4. Cut 4: Isolation cut - Ratio of the ET in the trigger patch to the ET in an iso cone with r=0.45. Require > 0.96 to pass
    5. Cut 5: Track iso cut - Number of geant charged tracks with pT>0.5GeV in an iso cone with r=0.7. Require < 3 to pass
    6. Cut 6: Awayside isolation cut - ET in a region Pi +/- 0.7 radians from the high tower in phi. Require < 6.0 to pass
    7. Cut 7: Awayside track cut - Number of geant charged tracks with pT>0.5GeV in same region as cut 6. Require < 7 to pass
    8. Cut 8: Track near high tower - Number of geant charged tracks with pT>5.0GeV in an iso cone with r=0.7. Require < 2 to pass
    9. Cut 9: Ratio of energy in two highest towers in trigger patch to energy in all towers in trigger patch. Require > 0.9 to pass
    10. Cut 10: Ratio of energy in 7 highest adjacent U strips under the patch to the energy in all U strips under the patch. Require > 0.7 to pass
    11. Cut 11: Ratio of energy in 7 highest adjacent V strips under the patch to the energy in all V strips under the patch. Require > 0.7 to pass
    12. Cut 12: Number of hit U strips in sector containing the trigger patch. Require < 48 to pass
    13. Cut 13: Number of hit V strips in sector containing the trigger patch. Require < 48 to pass
    14. Cut 14: Ratio of energy in patch post-shower layer to full energy in patch. Require < 0.0005 to pass
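
    The sketch below, referred to above, spells out the three preliminary phase-space cuts as straightforward boolean logic; the variable names are placeholders rather than the actual analysis code.

        // Sketch of cuts 1-3, which every event must pass before the
        // discrimination cuts are evaluated.
        bool passPhaseSpaceCuts(double zVertexGeant, double patchEt, int highTowerEtaBin)
        {
            if (zVertexGeant < -70.0 || zVertexGeant > -50.0)  return false;  // cut 1: vertex in [-70,-50]
            if (patchEt <= 20.0)                               return false;  // cut 2: trigger patch ET > 20GeV
            if (highTowerEtaBin == 1 || highTowerEtaBin == 12) return false;  // cut 3: avoid edge eta bins
            return true;
        }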

     

    I have also made several 2-D plots of other quantities which may provide good electron/hadron discrimination. These plots can be found here.

    1. Page 1 is a plot of the trigger patch energy Vs. the post shower energy
    2. Page 2 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all post shower layers in the trigger patch
    3. Page 3 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all 2nd pre-shower layers in the trigger patch
    4. Page 4 is a plot of the trigger patch energy Vs. the energy in all 2nd pre-shower layers in the trigger patch

     

    Spectra

    The plots showing the effect of the various cuts on the trigger patch ET spectrum can be found here. Pages 1-14 show the effect that the first 3 cuts plus the individual cut has on the trigger patch ET spectrum. So, for example, page 7 shows the spectrum after cuts 1, 2, 3, and 7. Pages 15-26 show the effects that a number of cuts applied sequentially has on the spectrum. In all plots the black curve is the detected ET spectrum before any cuts have been made and the red curve is the ET spectrum after the cuts in question have been applied.

     

    Fig 1: This plot shows the effects of all the cuts applied in sequence on the trigger patch ET spectrum

     

    Fig 2: This plot shows the final trigger patch ET spectra after all cuts have been applied. The QCD background is in black and the W signal is in red.

     

    Pythia Analysis ver 3.1: Analysis of set C5 events using geant record

    Analysis of all setC5 events using geant tracks and vertices

     

    In this analysis I looked at all QCD background events from setC5, which has an integrated luminosity on the order of what we expect to get in data. I also looked at W events from setC2 Wprod. The details of both event samples can be found here. This analysis was done using version 3.1 of my code which uses geant vertices and tracks, meaning we have perfect tracking for the cuts. All transverse energy is event ET and an energy scaling factor of 1.23 is used. All samples are scaled to LT=800 inverse pb.

     

    Cuts:

    I make 14 cuts which are described below. These cuts have been loosened compared to those in version 3.0; now each cut throws away no more than 1% of the signal. I have also added conditions to cuts 5 and 8 to reject events where no charged track was found. Plots of the cut spectra can be found here. Pages 1-11 show the cut spectra after the 3 preliminary phase space cuts have been made (spectra of the first 3 cuts are not shown). All events must pass the first three cuts to be analysed further. Pages 12-18 show cut spectra 8-14 after cuts 1-7 have been applied to them. Pages 19-29 show all the cut spectra after all cuts but their own have been applied.

    1. Cut 1: Require that the geant vertex be in the region [-70,-50]
    2. Cut 2: Require that the trigger patch ET>20GeV
    3. Cut 3: Require that the high tower not be in eta bin 1 or 12
    4. Cut 4: Isolation cut - Ratio of the ET in the trigger patch to the ET in an iso cone with r=0.45. Require > 0.94 to pass
    5. Cut 5: Track iso cut - Number of geant charged tracks with pT>0.5GeV in an iso cone with r=0.7. Require < 5 and > 0 to pass
    6. Cut 6: Awayside isolation cut - ET in a region Pi +/- 0.7 radians from the high tower in phi. Require < 9.0 to pass
    7. Cut 7: Awayside track cut - Number of geant charged tracks with pT>0.5GeV in same region as cut 6. Require < 10.0 to pass
    8. Cut 8: Track near high tower - Number of geant charged tracks with pT>5.0GeV in an iso cone with r=0.7. Require there be 1 and only 1 track to pass
    9. Cut 9: Ratio of energy in two highest towers in trigger patch to energy in all towers in trigger patch. Require > 0.75 to pass
    10. Cut 10: Ratio of energy in 7 highest adjacent U strips under the patch to the energy in all U strips under the patch. Require > 0.45 to pass
    11. Cut 11: Ratio of energy in 7 highest adjacent V strips under the patch to the energy in all V strips under the patch. Require > 0.45 to pass
    12. Cut 12: Number of hit U strips in sector containing the trigger patch. Require < 52 to pass
    13. Cut 13: Number of hit V strips in sector containing the trigger patch. Require < 54 to pass
    14. Cut 14: Ratio of energy in patch post-shower layer to full energy in patch. Require < 0.002 to pass

     I have also made several 2-D plots of other quantities which may provide good electron/hadron discrimination. These plots can be found here.

    • Page 1 is a plot of the iso radius (r=0.7) ET Vs. the trigger patch ET
    • Page 2 is a plot of the away side ET Vs. the trigger patch ET
    • Page 3 is a plot of the trigger patch ET Vs. the number of charged tracks above 0.5GeV in the away side region
    • Page 4-6 show the same plots as pages 1-3 after cuts 1-7 have been applied
    • Page 7-9 show the same plots as pages 1-3 after all cuts have been applied
    • Page 10 is a plot of the trigger patch energy Vs. the post shower energy
    • Page 11 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all post shower layers under the trigger patch
    • Page 12 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all 2nd pre-shower layers in the trigger patch
    • Page 13 is a plot of the trigger patch energy Vs. the energy in all 2nd pre-shower layers in the trigger patch
    • Page 14-17 show the same plots as pages 10-13 after cuts 1-7 have been applied
    • Page 18-21 show the same plots as pages 10-13 after all cuts have been applied

     

    Spectra

    The plots showing the effect of the various cuts on the trigger patch ET spectrum can be found here. Pages 1-14 show the effect that the first 3 cuts plus the individual cut has on the trigger patch ET spectrum. So, for example, page 7 shows the spectrum after cuts 1, 2, 3, and 7. Pages 15-26 show the effects that a number of cuts applied sequentially has on the spectrum. In all plots the black curve is the detected ET spectrum before any cuts have been made and the red curve is the ET spectrum after the cuts in question have been applied.

     

    Fig 1: This plot shows the effects of all the cuts applied in sequence on the trigger patch ET spectrum. For the QCD sample: The raw spectrum contains 2.6E6 events in the region 20-70, the spectrum after the phase space cuts (1-3) contains 2.2E6 events, and the spectrum after all cuts contains 1.5E4 events. For the W sample: The raw spectrum contains 4645 events in the region 20-70, the spectrum after the phase space cuts contains 3747 events and the spectrum after all cuts contains 3378 events.

     

    Fig 2: This plot shows the final trigger patch ET spectra after all cuts have been applied. The QCD background is in black and the W signal is in red.

     

     

    Pythia Analysis ver 3.21: Analysis of set C5 events using geant record

    Analysis of all setC5 events using Geant tracks and vertices

     

    In this analysis I attempt to get a feeling for the effects that imperfect tracking will have on the trigger patch ET spectrum while still using geant tracking information. My main focus was on the inefficiencies which will arise in trying to identify converting Pi0's, so I focused on cuts 5 and 8. Both of these cuts have large numbers of events with no charged tracks, and it is likely that many of these events are Pi0's. In v3.2 of the analysis code cuts 5 and 8 both throw away events with no charged tracks, so to simulate the tracking inefficiency I use a random number generator to allow 30% of events with zero charged tracks to pass each cut. The code I used for this analysis is v3.21, which uses the same cuts as v3.2 except for the code introducing the 30% inefficiency. Descriptions of the cuts used as well as explanations of the plots in the pdf files can be found on the page describing v3.2 of my code found here. I have also changed the 2-D plots of the trigger patch ET Vs. the cut quantities found on pages 23-33 of the 2-D plot pdf. They now show the plots after all cuts but themselves have been applied, as opposed to v3.2 where they were shown after all cuts were applied.

     

    Fig 1: These plots show the effects of all the cuts applied in sequence on the trigger patch ET spectrum. The top plot is from v3.2 and the bottom plot is from v3.21. The number of events cut for v3.2 can be found on the main page for that analysis. The numbers for v3.21 are given here. For the QCD sample: The raw spectrum contains 9.9E5 events in the region 20-70, the spectrum after the phase space cuts (1-3) contains 8.2E5 events, and the spectrum after all cuts contains 1906 events. For the W sample: The raw spectrum contains 1671 events in the region 20-70, the spectrum after the phase space cuts contains 1347 events and the spectrum after all cuts contains 1078 events.

     

    Fig 2: This plot shows the final trigger patch ET spectra after all cuts have been applied. The QCD background is in black and the W signal is in red. The plot on the left is from v3.2 and the plot on the right is from v3.21.

     

     

     

     

    Pythia Analysis ver 3.2: Analysis of set C5 events using geant record

    Analysis of all setC5 events using Geant tracks and vertices

     

    In this analysis I looked at all QCD background events from setC5, which has an integrated luminosity on the order of what we expect to see in the data. I also looked at W events from setC2 Wprod. The details of both event samples can be found here. This analysis was done using version 3.2 of my code which uses geant vertices and tracks, meaning we have perfect tracking for the cuts. All transverse energy is event ET and an energy scaling factor of 1.23 is used. All samples are scaled to LT=300 inverse pb (note that the samples in all previous versions were scaled to 800 inverse pb).

     

    Cuts:

    I make 14 cuts which are described below. These cuts have been tightened slightly compared to those in version 3.1; now each cut throws away between 2-3% of the signal, except for cuts 6 and 7. For cuts 6 and 7, I have used a 2-D plot of the patch ET Vs. the cut quantity to cut harder on the low pT background. I have also included plots of quantities which we thought had potential to be used as cuts.
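
    A 2-D cut of the kind used for cuts 6 and 7 can be pictured as a threshold on the cut quantity that depends on the trigger patch ET. The linear boundary and its parameters in the sketch below are purely illustrative assumptions; the actual boundary was read off the 2-D plots.

        // Illustrative sketch of an ET-dependent away-side cut: the allowed
        // away-side ET is smaller at low trigger patch ET, where the QCD
        // background dominates.  The boundary here is a made-up example.
        bool passAwayEt2D(double patchEt, double awayEt)
        {
            const double maxAwayEt = 2.0 + 0.2 * (patchEt - 20.0);   // hypothetical boundary
            return awayEt < maxAwayEt;
        }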

     

    Plots of the 1-D spectra can be found here. Pages 1-11 show the cut spectra after the 3 preliminary phase space cuts have been made (spectra of the first 3 cuts are not shown). All events must pass the first three cuts to be analysed further. Pages 12-18 show cut spectra 8-14 after cuts 1-7 have been applied to them. Pages 19-29 show all the cut spectra after all cuts but their own have been applied. Pages 30 and 31 contain plots showing the ratio of U(V) strips above 0.5MeV under the trigger patch to the total number of U(V) strips under the patch. Pages 32 and 33 contain plots showing the number of U(V) strips above 0.5MeV in the sector(s) containing the trigger patch. Pages 34-37 contain the last four plots with cuts 1-7 applied and pages 38-41 contain the last four plots with all cuts applied.

     

    Plots of the 2-D spectra can be found here.

    • Pages 1-33 show the trigger patch ET Vs. the cut quantities so we can fine tune the cuts to eliminate background in particular ET ranges. Pages 1-11 show the cut spectra after the 3 phase space cuts have been made, pages 12-22 show them after cuts 1-7 have been applied, and pages 23-33 show them after all cuts have been applied.
    • Pages 34 and 35 show the trigger patch energy Vs. the ratio of energy in the 7 highest adjacent U(V) strips under the trigger patch to all U(V) strips under the trigger patch.
    • Page 36 shows the trigger patch energy Vs. the ratio of pre-shower energy in the trigger patch to all energy in the trigger patch.
    • Pages 37-39 show the last three quantities with cuts 1-7 applied and pages 40-42 show the same quantities with all cuts applied.
    • Pages 43 and 44 show the trigger patch ET Vs. the ratio of U(V) strips above 0.5MeV under the trigger patch to the total number of U(V) strips under the patch.
    • Pages 45 and 46 show the trigger patch ET Vs. the number of U(V) strips above 0.5MeV in the sector(s) containing the trigger patch.
    • Pages 47-50 show the last four plots with cuts 1-7 applied and pages 51-54 show the plots with all cuts applied.
    • Page 55 is a plot of the trigger patch energy Vs. the post shower energy.
    • Page 56 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all post shower layers under the trigger patch.
    • Page 57 is a plot of the energy in both the U and V strips under the trigger patch Vs. the energy in all 2nd pre-shower layers in the trigger patch.
    • Page 58 is a plot of the trigger patch energy Vs. the energy in all 2nd pre-shower layers in the trigger patch.
    • Pages 59-62 show the last four plots after cuts 1-7 have been applied and pages 63-66 show the same plots after all cuts have been applied.

     

    1. Cut 1: Require that the geant vertex be in the range [-70,-50]
    2. Cut 2: Require that the trigger patch ET>20GeV
    3. Cut 3: Require that the high tower not be in eta bin 1 or 12
    4. Cut 4: Isolation cut - Ratio of the ET in the trigger patch to the ET in an iso cone with r=0.45. Require > 0.96 to pass (1-D cut)
    5. Cut 5: Track iso cut - Number of geant charged tracks with pT>0.5GeV in an iso cone with r=0.7. Require < 4 and > 0 to pass (1-D cut)
    6. Cut 6: Awayside isolation cut - ET in a region Pi +/- 0.7 radians from the high tower in phi. (2-D cut)
    7. Cut 7: Awayside track cut - Number of geant charged tracks with pT>0.5GeV in same region as cut 6. (2-D cut)
    8. Cut 8: Track near high tower - Number of geant charged tracks with pT>5.0GeV in an iso cone with r=0.7. Require there be 1 and only 1 track to pass (1-D cut)
    9. Cut 9: Ratio of energy in two highest towers in trigger patch to energy in all towers in trigger patch. Require > 0.83 to pass (1-D cut)
    10. Cut 10: Ratio of energy in 7 highest adjacent U strips under the patch to the energy in all U strips under the patch. Require > 0.6 to pass (1-D cut)
    11. Cut 11: Ratio of energy in 7 highest adjacent V strips under the patch to the energy in all V strips under the patch. Require > 0.6 to pass (1-D cut)
    12. Cut 12: Number of hit U strips in sector containing the trigger patch. Require < 52 to pass (1-D cut)
    13. Cut 13: Number of hit V strips in sector containing the trigger patch. Require < 52 to pass (1-D cut)
    14. Cut 14: Ratio of energy in patch post-shower layer to full energy in patch. Require < 0.001 to pass (1-D cut)

     

    Spectra

    The plots showing the effect of the various cuts on the trigger patch ET spectrum can be found here. Pages 1-14 show the effect that the first 3 cuts plus the individual cut has on the trigger patch ET spectrum. So, for example, page 7 shows the spectrum after cuts 1, 2, 3, and 7. Pages 15-26 show the effects that a number of cuts applied sequentially has on the spectrum. In all plots the black curve is the detected ET spectrum before any cuts have been made and the red curve is the ET spectrum after the cuts in question have been applied.

     

    Fig 1: This plot shows the effects of all the cuts applied in sequence on the trigger patch ET spectrum. For the QCD sample: The raw spectrum contains 9.9E5 events in the region 20-70, the spectrum after the phase space cuts (1-3) contains 8.2E5 events, and the spectrum after all cuts contains 455 events. For the W sample: The raw spectrum contains 1742 events in the region 20-70, the spectrum after the phase space cuts contains 1405 events and the spectrum after all cuts contains 1120 events.

     

     

    Fig 2: This plot shows the final trigger patch ET spectra after all cuts have been applied. The QCD background is in black and the W signal is in red.

     

    FGT . . . . . . . . . . C A L I B R A T I O N


    1. Algorithm for calibration
    2. status tables for year 2010

     


     

    Beam Test at DESY, May 2011

    Planning the Beamtest at DESY

     

     

     

     

    Important Dates:

     Scheduled from May 16-30

     

    Contacts:

     

    general:

    Anselm Vossen (avossen@indiana.edu)

     

    DESY testbeam page:

    http://adweb.desy.de/~testbeam/

    People that are going

     

     

    Equipment that has to be shipped

    Equipment/People/Shipping
     

    Equipment

    Item / Qty / Size / Weight / ? / Contact Person/Institution / Shipping Details (from where to where) / When? / Comments (e.g. shipping details, safety aspects)

       FGT Quadrants          MIT      
       Det Fixture          MIT      
       positioning/alignment          MIT      
       gas-setup          Valpo/MIT      might be available at DESY - see below
       bottled gas (premixed?)          DESY      
       FEE board 6        MIT      
       terminator board          MIT      
       interconnector board          MIT      
       cable and patch          MIT      
      signal cables         IU/UKY      
      power cables         IU/UKY      
      RG-59 HV cables 3       DESY?      
      readout crate 1 2.5' x 4' x 3.5' 100 lbs   ANL      
      ARC 1 1'x 1'x 0.3' 4 lb   ANL      
      ARM 3 1'x1'x0.3' 4 lb   IU      
      HVPS 1 1'x1'x0.3' 8 lb   IU      
      computer incl. DRORC 1 1'x3'x4' 73 lb   DESY?      
      Nim logic         DESY      
      data fiber 1       IU      
      hand tools and scope         DESY      
      diff probe 1       IU      
      Sys clock source                already provided by ARC
      Misc computer equipment (PC etc)                

     

    Gas at DESY

     

    The DESY gas group can supply premixed gas. We have to ask well in advance.

    Flowmeters etc. should also be available. We have to bring tools.

     

     Test Beam Description

     

    can be found here:

     

    adweb.desy.de/home/testbeam/WWW/Description.html

     

     

    Safety at DESY

     

    Inspection List for equipment

     

    General:

    adweb.desy.de/~testbeam/documents/AllgemeineSicherheitsunterweisungD5englisch.pdf

    Radiation protection

    adweb.desy.de/~testbeam/documents/RadiationProtectionInstructionsTestBeam-NM-1-2006.pdf

    Testbeam safety briefing:

    adweb.desy.de/~testbeam/documents/SafetyBriefingTestBeam.pdf

    We have to assign one person responsible for radiation safety

    That person is:

     

     

     

     Travel Infos for Hamburg/Germany

    Fly into Hamburg

    Today the price from JFK is ~$950 roundtrip

     

    Within Germany, rail is a good idea if you book in advance

    www.bahn.de/i/view/USA/en/index.shtml

    Ask me if you want to look for special fares (I saw that the site is in German)

     

    HOUSING

    DESY hostel:

    guest-services.desy.de/hostel_in_hamburg/hostels_info/

     

    I haven't been at DESY, so if somebody else has a better idea, please tell me

     

    CHECKLIST

     

     OPEN ISSUES

    Gas

     

    Planned Measurements

    Planned Measurements

     

    For each of these measurements we should have a plan/protocol for what we want to do (conditions, time, software...)

     

    • Reproduction of test beam results at FNAL:  drupal.star.bnl.gov/STAR/blog/surrow/2011/feb/02/fgt-testbeam-fnal-prototype-triple-gem-detectors

    • Cluster reconstruction, Cluster size,
    • R-Phi Correlations
    • Efficiency
    • Cluster amplitude
    • HV scans
    • Residuals
    • Dependence on inclination angle for P/Phi reconstruction

    • Noise (i.e. electronics noise of the readout). RMS of pedestals calibrated to MIP. Optimum parameters for APV in FGT. (Parameters will affect gain as well as noise, of course.) Necessity (or hopefully not) of tracking pedestals by capid.
    •  Number of timebins to use in readout, and algorithm to get pulse amplitude/time from the timebin data. Uniformity of signal shape over the detector.
    • Crosstalk

    • Rate dependence (trigger rate and signal rate)

    • Radiation effects on the frontend electronics

    • EMI issues. (For the most part, if there are any, they have to be addressed first and independently of the rest of the testing, test beam or otherwise.)

    • Optimum gain setting for the FGT in consideration of the limited dynamic range of the APV. What tail of large events can we afford to cut off and still make the position resolution? Conversely, how much noise is affordable while still making the position resolution? Presumably this leads to a plot of resolution as a function of HV setting.

    • Survival of FEE to breakdown/discharges in detector

    FGT . . . . . . . . . . D O C U M E N T S

    1. Proposal (December 2007)
    2. Project documentation on deltag5  at MIT server
    3. Tree-structure of FGT web page (graph)
    4. Conclusions of Reviews
      date place format occasion
      January 2008 MIT pdf STAR Forward GEM Tracker Review Committee
    5. NIM paper from the 2007 FermiLab beam test of GEM disks is available on ArXiv http://arxiv.org/abs/0808.3477

    OTHER DOCUMENTS

    1. RHIC long range plan as of December 2006 (out link)
    2. RHIC PAC, May 2008, STAR BUR
    3. STAR Detector West Side Pictures: Click Here (deltag5 @MIT)
    4. Frank's Thesis 

     

    Documents/Minutes of preliminary safety review, Sept. 09

    Files from the preliminary safety review at BNL, Sept. 09

    FGT . . . . . . . . . . DAQ

     DAQ related information for FGT

     

    APV Readout System Long Cable Test Setup

    The APV front-end ASIC which forms the core of the Forward GEM Tracker readout system combines a sensitive preamplifier, switched-capacitor analog memory array, and low-voltage differential analog output buffer.

    Operating such an ASIC directly over a long cable, with the analog output digitized at the far end of the cable, potentially presents some challenges. Of course, there are also opportunities: to minimize the power dissipation inside the inner field cage region, minimize dead material, and to maximize reliability by placing most of the electronics in an easily serviceable location on the STAR electronics platform.

    At IUCF we are constructing a pair of test boards, one which models the APV readout module "ARM", the other which models the APV cable connector board.

    Besides testing the performance of the MIT APV motherboards with a long cable interface, this set of test boards is important to:

    • Develop the interface definitions between the IUCF "ARM" and the MIT "APV Motherboard"
    • Demonstrate a low-dropout low-voltage regulator for the APV power, and characterize its performance
    • Demonstrate the control of APV chips and an I2C temperature sensor via the UART - I2C bridge chip
    • Evaluate the effects of various thermal and grounding options for the FGT
    • Run the APV's for detector testing (either with an external ADC and clock source, or with the connector board here and the full ARM to be developed

    Here are the design files:

    The connector board should be compatible with a pair of FGT APV Motherboards, a total of 10 APV chips. The mock readout board provides only two channels of analog line receiver, testing two different concepts for this. One is a DC coupled line receiver using Analog Devices # AD8129, the other is a transformer-coupled line receiver using a TI # OPA684 opamp. The transformer solution should have superior noise and common-mode rejection and lower power, but it is of course not DC coupled as we would wish. It could be used if DC restoration is applied digitally in the FPGA (on the real ARM).

    First results:

    Here is the APV readout sequence, looking good:

     

    The "noise" which is apparent here is merely crosstalk from the clock signal, and as such will be easily removed by the filter in front of the ADC. Neither that filter nor the cable frequency equalization filter is in place for the measurement above.

    Clock frequency was 40 MHz; APV was triggered at 1 kHz from a pulse generator. No inputs connected, no shield box. Cable is Belden 1424A, 110 feet, coiled up on workbench.

    ARM - FEE Final Interface Prototype

    Details of final prototype of FEE interface circuitry for FGT APV Readout Module "ARM". This includes the isolated remote-regulated power supplies, the isolated I2C line interface, the isolated LVDS clock & trigger line drivers, and the differential analog receiver. Cable connectors and pinouts here are proposed for final application. (In actual ARM, cables interface through a rear transition board in the crate through the 96-pin DIN connector to ARM module. This is not implemented here - cables connect directly using same connector type and pinouts on the cables.)

     

    Here are the details:

    Brief overview of FGT FEE/Readout electronics chain

    This is a short overview of the front end and readout electronics of the FGT. The emphasis is on providing some documentation on all the components. At present most of the items exist only as prototypes, I will try to keep this page current as things get finalized.

    First, the front end board, which has 5 APV chips. Two front end boards are used per FGT quadrant (24 quadrants total). One sits at each end of the quadrant. As four quadrants fit together to form an FGT disk, the pairs of front end boards from adjacent quadrants physically sit close to each other. They are serviced by a common cable, interconnect board, and terminator board (to be described shortly).

    Below is a picture of the first FGT quadrant (actually just a mechanical test assembly, the pad plane is an old design and is defective). Apologies this is a shiny object photographed on a shiny table in the Bates cleanroom. What you see here is mainly the aluminized mylar gas window which is also the ground plane, sits a couple of mm above the pad plane. On each end of the quadrant is a row of five Samtec MEC6 card-edge connectors (0.635 mm contact pitch): four 140 pin connectors and one 80 pin in the center of the other four. This connects the 640 = 5 APV * (128 ch/APV) signals from the pad plane to the FEE board. The ground is carried separately (see description below). The constraints on FGT inner and outer radius do not permit assigning any of these connector contacts to grounds.

     

    Below is a picture of a front end board installed on an FGT quadrant. This is a view from "outside" i.e. this is the side of the front end board that does not have the APV chips. Actually in this picture is a mechanical mockup board, but is mechanically identical to the final design. The white RTV which can be seen in this picture is covering the edge of this quadrant assembly.

     

    And here is the actual front end board. In this and the picture above you can see the two ground contact points which will be fastened with screws and washers to wide contact strips extending from the ground foil. This provides the connection of APV signal ground. The ground foil is also similarly connected to the bypass capacitor feeding bias voltage to the bottom of the last GEM foil (connections not detailed here).

    Note that it is the back side of this board that would be seen in the quadrant picture above. The two front end boards serviced as a pair have the APV chips mounted on the outward facing sides. Similarly the ground foil connections are made on this outward facing side. From the quadrant's perspective, the chip side of the APV board faces in toward the center of the quadrant.

    The front-end board is serviced by the "interconnect board" on one end and a "terminator board" on the other. All interface lines run through from end to end so that the design is symmetrical. Half of the frontend boards have the terminator on the right in this picture, half have the interconnect board on the right. There is only a single flavor of front end board.

    The schematic of the front end board is (here). It consists only of five APV chips and a minimal set of support components.

    The APV chip is a 128-channel preamp / shaper / SCA / readout mux chip developed for CMS tracker silicon, and also subsequently deployed for GEM readout in COMPASS. Most of the operational details are described in the user guide and in presentations available on the CMS tracker web pages. The APV chip incorporates an analog FIR filter (what the user guide refers to as deconvolution mode) for tail cancellation / pileup reduction at the expense of higher noise. We don't intend to make use of it. It should be noted that the appropriateness of the (fixed) filter coefficients is of course dependent on the sample clock rate. Which is another reason we don't intend to use it.

    The APV chip sample clock can be run at least as fast as 40 MHz (as in CMS). For RHIC, we prefer to lock to a multiple of our collision frequency so that the signals can be synchronously sampled avoiding any additional complications or errors from asynchronous sampling. We will use 4x RHIC strobe, 37.532 MHz, a reasonable match to the capability of the APV chip.

    The APV chip readout clock can be the sample clock or 1/2 the sample clock rate. With 1/2 the sample clock a single-point readout requires 280/37.532 MHz = 7.46 us. This is amply fast in comparison with other STAR detectors even if multi-point readout is employed. So we will use only the 1/2 rate readout because it significantly eases the signal transport problem.

    [Insert here description of the interconnect board / terminator board. (voltage reg, clk/trig receiver, temp sensors, POR circuit, terminations)]

    The above comprises the front end electronics subsystem. The interfaces from the front end electronics to the readout electronics are, in total:

    1. Power supplied at +/-1.8 V with remote sense of +1.8, ground, and -1.8 at the terminals of the front end electronics (interconnect board). The power supply is in the readout electronics, and is isolated from ground.
    2. SCL/VSS pair and SDA/VSS pair that connect to the APV chips and other devices, e.g. temperature sensors, on the front end electronics. This I2C interface runs with 2.5 V logic and complies with the I2C standard except in regard to line capacitance limitations where special considerations are taken. The pull-up current source is in the readout electronics. The I2C master in the readout electronics is isolated from ground.
    3. CLK+/- low voltage differential logic signal (continuously running). Source is isolated from ground (transformer coupled) in the readout electronics. The line is terminated at the front end electronics and at the readout board (double-termination).
    4. TRIG+/- low voltage differential logic signal. Source is isolated from ground (transformer coupled) in the readout electronics. The line is double terminated.
    5. 10x (in case of FGT) or up to 12x (in case of IST) analog signal output lines each from one APV chip. All are double terminated and are received at the readout board with a high common-mode impedance line receiver.

    (more description/documentation of the readout system to come...)

    Event size estimation

     The estimated FGT event size for pp events @ 500 GeV is 20 KB/event after ZS; a physics event will use 80% of this volume.

    1. Empty event: 3.7 KB. Conclusion: for 3-sample empty events we need 300*(2+2+2+2) + 30000*(2+2+2+2)/200 + 100 = 3700 bytes (see the sketch after this list).
      • total # of channels: 128 ch X 10 APV x4 quad x 6 disks = 30K channels.
      • zero suppression (ZS) is  aiming to keep 1/100 of ADC channels, set to about 3 sigma above ped. Needs testing with data & real electronics, time stability, beam halo, AC-noise . Perhaps we can ZS more.
      • with ZS we need 4 bytes per hit:
        • we need 2 bytes (log2(30k)=14.9) for channel ID
        • ADC value needs up to 12 bits, assume we write it as 16 with 4 bits unused, as with ESMD
      • we may want to keep ADC for 3 time slots per event (using 25 ns integration window) increasing hit size to 8=4+2+2 bytes.
      • it is enough to keep 1/1000 events w/o ZS for monitoring of pedestals
      • event header needs 100 bytes

    2. pp data taking
      • assume every track fires 5 strips (will be more at Rin and less at Rout due to varied width of phi strips)
      • trigger event is always embedded in 6 minBias pileup events 
        • at top RHIC luminosity we have 1.5 minB interactions per bXing
        • 300 nsec of analog pulse --> need to account for 4 pre-trigger bXings --> 4x1.5=6 minB eve/trig eve 
        • based on OLD Pythia simulations (obsolete disk size & location) @ 500 GeV one expects 1 track per quadrant per minB event.
      • conclusion: 1440 ADCs will fire due to the underlying pileup events (4 tracks * 6 disks * 2 planes * 6 events * 5 strips = 1440)

         

      • W-trigger will fire mostly on jets; ASSUME the HT consumes a lot of the jet energy and there are 10 charged (low pT) tracks in such a jet: 10 tracks * 6 disks * 2 planes * 5 strips = 600 ADCs
      • total physics event content will be ~2000 ADC channels. This requires 16 KB (2000 * 8 bytes)

    3. Heavy-ion events: no study has been performed so far.
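
    The arithmetic above can be reproduced with a short standalone calculation. This is only a sketch of the estimate as written in the bullets; the channel count, ZS fraction, per-hit size, and hit counts are the assumptions listed above, not measurements.

        // Sketch of the FGT event-size estimate above (standalone C++, not STAR code).
        #include <cstdio>

        int main() {
           const int nChannels   = 30000;         // 128 ch x 10 APV x 4 quad x 6 disks ~ 30k
           const int bytesPerHit = 2 + 2 + 2 + 2; // channel ID + 3 time samples, 2 bytes each
           const int header      = 100;           // event header, bytes

           // empty event: ZS keeps ~1/100 of channels (300 hits), plus the average cost
           // of the occasional non-ZS pedestal events (the /200 term quoted above)
           const int emptySize = 300 * bytesPerHit + nChannels * bytesPerHit / 200 + header;

           // pp physics event: ~1440 pileup ADCs + ~600 W-trigger jet ADCs ~ 2000 hits
           const int physSize = 2000 * bytesPerHit;

           printf("empty event ~ %d bytes, physics event ~ %d bytes\n", emptySize, physSize);
           return 0;
        }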

     

    FGT DAQ drawings and hardware documentation

    These are the official drawings and documentation used for fabrication of the FGT "DAQ" hardware. We will try to maintain this page up-to-date with any revisions.

    APV Readout Controller (ARC)

    • Pictures
    • Schematic (pdf)
    • Schematic (format=?)
    • BOM
    • Layout (format=?)
    • Board fabrication data
    • Assembly data

    APV Readout Module (ARM)

    ARM Back-of-Crate Board (ABC)

    • Pictures
    • Schematic (pdf)
    • Schematic (Altium)
    • BOM
    • Layout (Altium)
    • Fabrication data files

    FGT FEE Patch Panel

    • Pictures
    • Schematic (pdf)
    • Schematic (..)
    • BOM
    • Layout (..)
    • Fabrication data files

    Other documentation

    FGT . . . . . . . . . . M E E T I N G S

     Note: meetings are listed going backwards in time (most recent first).

    2009-05-21 , RHIC user meeting @ BNL

    2009-05-12 , quarterly review presentation of the FGT project

    2009-03-23, RCS presentation, Bernd 

    2009-08-08, 3rd quarterly review at BNL, Bernd

     

    FGT-HN Upload site  (StarTube) : FGT-HN Upload contributions  ,   Instruction for new users uploading FGT documents  


    Weekly FGT phone meetings (minutes by Doug Hasell or Gerard)

    2009

    June 19,  26

    2008

    February: 1, 8, 15, 22, 29

    March: 7, 14, 21, 28

    April 11 : Brian e/h-algo , Dave: X/X0 in Endcap

    May 2 ,


     IEEE in Dresden, 2008, Frank's talk about FGT


    September 2008 GT meeting at BNL


    September 2008, Local seminar at MIT by Jan

    Title: The STAR Forward GEM Tracker, Talk (PDF),  abstract


    2nd STAR IFC integration meeting, BNL, July 23, 2008 



    Forward Tracking Extension Meeting at MIT, June, 2008

    Agenda


    STAR Collaboration Meeting, Davis, June, 2008


    STAR IFC integration meeting, BNL, May 16, 2008

    • see slides from  Jim K. ,  Doug, Jan

    BNL PAC, May 8, 2008, 


     IEEE NSS 2008 in Dresden, Germany, May 2008,

    Frank's abstract


     FGT meeting at IUCF on April 24-26, 2008


    RSC meeting on April 21, 2008 , all talks ,  DG, W, ppTrans


    DIS2008, London April 2008, web-page, some talks: ppTrans RHIC program (PPT) Jan


    FGT review at MIT, January 7-8, 2008 ( detailed agenda )

    •  Monday, January 7, 2008: MIT-LNS, 13:00-18:00 , Talks (tmp link to MIT)
    •  Tuesday, January 8, 2008: MIT-Bates, 09:00-17:00 Talks (tmp link to MIT)

     

    Cosmic Stand Safety Review April 29th 2011 at BNL

    Date: Friday April 29, 2011
    Time: 1:30 pm
    Place: C-AD LCR

    overview cosmic teststand: Anselm

    FGT disks (internal HV distribution etc) : Ben/Jason
    electronics/cables/etc. : Gerard
    Gas system (short) : Don

    FGT-HN Upload contributions

    avossen
    1. FGT_safety_elec_20090902.pptx           (1.46 MB, April 20 2011 12:16:01)
    2. FGT_safety_elec_20090902.pdf            (1.48 MB, April 20 2011 12:16:29)
    3. FGT_daq.pdf                             (1.48 MB, April 20 2011 12:16:42)
    4. FGT_mechanics.pdf                       (2.8 MB, April 20 2011 12:17:23)
    5. FGT_overview.pdf                        (7.79 MB, April 20 2011 12:17:36)
    6. Minutes STAR FGT Review_0902091.doc     (60.5 KB, April 20 2011 12:17:46)
    7. doc-package-for-FGT-safety-review.zip   (1.78 MB, April 20 2011 12:17:57)
    8. FGT_AssRoom_Reqs.pdf                    (98.51 KB, April 24 2011 19:52:08)

    balewski
    1. Gas Supply for Detector Operators v2.doc.pdf   (2.5 MB, February 20 2012 17:40:05)

    bbuck

    dkoetke
    1. FGT Gas System plan v3c.pdf                 (478.7 KB, February 05 2011 07:10:17)
    2. FGT Gas System plan CosmicTest_v2.ppt.pdf   (242.58 KB, May 11 2011 19:10:24)
    3. Library - 1873.jpg                          (823.4 KB, May 11 2011 19:11:07)
    4. Library - 1876.jpg                          (802.17 KB, May 11 2011 19:11:40)

    gvisser

    rfatemi
    1. fgt-db.pdf   (106.38 KB, January 10 2012 17:25:22)

    sgliske
    1. FGT_HowTo.pdf   (43.36 KB, December 24 2013 13:02:32)


    Child pages below contain FGT-HN uploads from individual users.
    New users: please follow these instructions.
    Warning: never remove uploaded attachments - doing so would mess up the automatic numbering on the summary page for any subsequent uploads. Instead, upload a revised attachment.

     

    AdamKocoloski-FGT-HN-Uploads

    BrianPage-FGT-HN-Uploads

    Brian Page: Drupal Test

    JanBalewski-FGT-HN-Uploads

    Jan Balewski: list of attachments sent to FGT-HN

    ReneeFatemi-FGT-HN-Uploads

    bad dog!

    WeimingZhang-FGT-HN-Uploads

    Wei-Ming Zhang:Sunset.jpg

     

     

        

    WillJacobs-FGT-HN-Uploads

    • hmmm, says that "body Field is required"
    • ok, here is a two-liner (BTW I clicked also the "FGT in STAR" box)

     


    PICTURE


    test table:

    name_1    name_2     name_3
    bin_1     1.229      9898
    bin_2     1.344      98989
    bin_3     2.8998     09989
    Bin_4     4.22908    90090

    gvisser-FGT-HN-Uploads

    The following links are "attachments" for FGT hypernews archive
    ----------------------------------------------------------------------------------------

    miro-FGT-HN-uploads

    Miro: List of Attachments Sent to FGT-HN

    FGT. . . . . . . . . . H A R D W A R E -- E L E C T R O N I C S

     

     

    01 2D_GEM Sensor Board for test at FANL / Miro

    In this document one can find schematic topology and architecture of 2D_GEM Sensor Board which was used for test at FANL.

    02 2D_GEMCU for Test in FANL / Miro

    In this document one can find complete schematic topology and block diagrams for 2D_GEM Readout Control Unit which was used in test run at FANL.

    03 APV Module for Test 2D_GEM at FANL / Miro

    In this document one can find complete schematic topology and hardware architecture for APV Module which was used for test of 2D_GEM at FANL.

    04 APV Motherboard Design / Miro

    Document "APV Motherboard Design" for FGT April's meeting at IUCF on April 24, 2008.

    05 APV_MODULE for STAR/FGT READOUT

    APV_MODULE is a PCB assembly in which the SIG_BOARD is bonded with epoxy glue on top of the APV_BOARD.

    06 BNL 26 Sep 2008 Meeting

    FEE Design

    07 Document for Charge Sharing 2D_Readout module / Miro

    In this document one can find the GERBER files for the Charge Sharing 2D_Readout module. This module will be used to test and study the charge sharing effect in GEM detectors. All other components for assembling this charge sharing setup will be reused from the old GEM detectors which we used for test purposes at FANL last year.

    NOTE: The files in "TSTBOARD.zip" can be opened with the P-CAD 2004 software tool, and the file "tstboard.pdf" is a PDF in which one can see the architecture of the mentioned unit.

    07 Document for GEM/FANL prototype / Miro

    This document was requested by Dave Underwood and Gerard Visser for their purposes, to compare their design with the GEM/FANL prototype built at the MIT/Bates Linear Accelerator.

    08 Module for FGT Mechanical Test / Miro

    The attached document describes a multipurpose module with which one can test all mechanical and connectivity features.

    09 VHDL Programs and State Machines for GEM/FANL / Miro

    In this document one can find VHDL Programs and State Machines for 2D_GEM Control Unit which was engaged in readout process from GEM detector at FANL.

    10 WSC and the GEM quarter section frames (Jason)

     Drawings showing the final dimensions of the WSC and the GEM quarter section frames.

    8-19-2009, Jason Bessuille

    Front End Electronics Drawings and Hardware Documentation

    These are the official drawings and documentation used for fabrication of the FGT Front End Electronics hardware. We will try to maintain this page up-to-date with any revisions.

    To view design files, download the Altium Viewer (no cost).

    FGT Readout Module (FRM)

    Terminator Board

    Interconnect Board

    High Voltage Board

    FGT. . . . . . . . . . H A R D W A R E -- S L O W C O N T R O L S

     Documentation of FGT Slow Controls Subsystem

    (add child pages with specific documents below)

     

    The manufacturer's calibration for these flow meters is shown here. It is nonlinear, as you can see. So, going from ~40 mm to >65 mm more than doubles the flow rate: it is well over 100 cc/min. A simple interpolation sketch follows below the table.

    Scale reading (mm)    Flow rate (cc/min)
    ------------------    ------------------
            65                  104.0
            60                   91.5
            55                   79.5
            50                   69.0
            45                   59.2
            40                   49.5
            35                   41.7
            30                   34.2
            25                   27.7
            20                   22.0
            15                   17.5
            10                   13.4
             5                   10.0

    Regards,
    Don
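
    For convenience, a reading between the tabulated points can be converted with a simple piecewise-linear interpolation of the calibration above. The sketch below is purely illustrative (the function name is ours); readings outside the 5-65 mm range are clamped to the table ends.

        // Piecewise-linear interpolation of the flow-meter calibration table above.
        #include <cstdio>

        double flowFromScale(double mm) {
           const double scale[13] = {   5,   10,   15,   20,   25,   30,   35,
                                       40,   45,   50,   55,   60,   65};
           const double flow[13]  = {10.0, 13.4, 17.5, 22.0, 27.7, 34.2, 41.7,
                                     49.5, 59.2, 69.0, 79.5, 91.5, 104.0};
           if (mm <= scale[0])  return flow[0];     // clamp below 5 mm
           if (mm >= scale[12]) return flow[12];    // clamp above 65 mm
           for (int i = 0; i < 12; ++i) {
              if (mm <= scale[i + 1]) {
                 const double f = (mm - scale[i]) / (scale[i + 1] - scale[i]);
                 return flow[i] + f * (flow[i + 1] - flow[i]);
              }
           }
           return flow[12];  // not reached
        }

        int main() {
           printf("42 mm -> %.1f cc/min\n", flowFromScale(42.));  // ~53.4 cc/min
           return 0;
        }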

    FGT. . . . . . . . . . P H O T O - - G A L L E R Y


    Fig 1 HOW GEM foil works


    Fig 2 Field lines through GEM foil


    Fig 3 triple GEM HV


    Fig 4 FOM for AL vs. lepton eta & PT, RHICBOS, GSRV-STD


    Fig 5. Graphics courtesy of Tai Sakuma, MIT

     


    Fig 6. Graphics courtesy of Jim Kelsey, MIT

    More plots like this are here: PPT, PDF, presented at BNL, May 16, 2008.


     Fig 7. Assembled setup for mechanical and electrical test for Quadrant STAR/FQT/FEE

     


    See other (large) photos below

     

     


     Fig 8. Y2008, full STAR


     Fig 9. UPGR16, full STAR


     Fig 10. UPGR16,inner trackers , side view




     Fig 11. Full size GEM foil , December 2008

     Fig 12. APV bonding, January 2009

    Fig 13a-d. Photos of the APV-on-a-cable test setup for FGT (full size in attachments at the bottom), May 2009.

    FGT. . . . . . . . . . S O F T W A R E

    Current Software task list. Manpower and results

     docs.google.com/spreadsheet/ccc

     

     

     

     

    Database Access

    How to access database:

    • Instantiate StFgtDbMaker ->  StFgtDbMaker *myStFgtDbMaker=new StFgtDbMaker();
    • Then get tables ->  StFgtDb * fgtTables = myStFgtDbMaker->getDbTables();
    • Now you can get whatever geometry information you need from StFgtDb (see the sketch after this list).  For example:
      • fgtTables->getPhysicalCoordinateFromGeoId(geoId, &disc, &quad, &layer, &ordinate, &lower, &upper);
      • fgtTables->getGeoIdfromElecCoord(rdo, arm, apv, ch);
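
    Put together, a minimal macro fragment might look like the sketch below. It assumes the usual root4star environment (STAR libraries loaded, a chain providing a valid timestamp to the DB maker); only the calls quoted above are taken from the interface, and the electronics coordinates are illustrative values.

        // Sketch only - not a tested macro.  See the StFgtDbMaker/StFgtDb headers
        // for the exact argument types of the coordinate accessors.
        void fgtDbExample() {
           StFgtDbMaker* myStFgtDbMaker = new StFgtDbMaker();   // instantiate the DB maker
           StFgtDb* fgtTables = myStFgtDbMaker->getDbTables();  // get the table interface

           // query the mapping (illustrative electronics coordinates)
           Int_t rdo = 1, arm = 0, apv = 0, ch = 0;
           Int_t geoId = fgtTables->getGeoIdfromElecCoord(rdo, arm, apv, ch);

           // physical coordinates for this geoId, argument list as quoted above:
           // fgtTables->getPhysicalCoordinateFromGeoId(geoId, &disc, &quad, &layer,
           //                                           &ordinate, &lower, &upper);
        }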

    Electronic ID Formula:

        if ( apv > 11 ) apv = apv - 2;

        ElectId = channel + 128 * ( apv + 20 * (arm + 6 * ( rdo - 1) ) );
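
    The same formula written as a small helper function (a sketch; the function name is ours, only the arithmetic comes from the lines above):

        // Electronic ID from (rdo, arm, apv, channel), per the formula above.
        Int_t fgtElectId(Int_t rdo, Int_t arm, Int_t apv, Int_t channel) {
           if (apv > 11) apv = apv - 2;        // APVs above 11 are shifted down by 2
           return channel + 128 * (apv + 20 * (arm + 6 * (rdo - 1)));
        }
        // e.g. rdo=1, arm=0, apv=0, channel=5  ->  ElectId = 5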

     

    General Database Info (from Dmitry)

    1. For real .daq file processing, the timestamp is taken from the event - it cannot be set by the user. For example, if the event timestamp is XYZ, then the db maker will get db entries with time validity spanning from [XYZ - some_time_1] to [XYZ + some_time_2], where some_time_1 is the beginTime of the db entry received, and some_time_2 is the beginTime of the next db entry with beginTime > XYZ.

    2. The DBV option (in the chain) only freezes the validity range, which means "do not consider calibrations uploaded later than DBVXXYYZZ". This allows one to reproduce any past conditions. So, if you set DBV to today's date, you will get the latest calibration dataset. If you set it to some past date (e.g. 2010-01-01), then you will get only those datasets which were uploaded before 2010-01-01. So, tables are read using both beginTime and entryTime.

    FGT Pedestal Maker, Reader, and Plotter

    See Renee's instructions (attached) on how to create and upload pedestals/status:
    -----------------

    I had hoped to write this tutorial once all code was in CVS and I had automated all of the loading and writing scripts. Unfortunately these conditions are only partially fulfilled at this time, so this procedure describes how to make the pedestal and status files needed to load to the database:

    • mkdir FgtPed
    • cd FgtPed
    • stardev
    • cvs co StRoot/StFgtPool/StFgtRawDaqReader
    • cvs co StRoot/StFgtPool/StFgtCosmicTestStandGeom
    • cvs co StRoot/StFgtPool/StFgtPedMaker
    • cvs co StRoot/StFgtPool/StFgtStatusMaker
    • cvs co StRoot/StFgtPool/StFgtQaMakers
    • cons
    • cvs co StRoot/StFgtPool/StFgtQaMakers/macro/makeFgtPedAndStat.C

    You will need to open makeFgtPedAndStat.C and set the database time manually (next step is to automate this).  This timestamp is used to get the correct mapping:
    • dbMkr->SetDateTime(20121108,000000);
    You also need to set the file and output file name fields at the top, but this is easily done dynamically if you wish.  Next simply run the macro :  root4star -b -q makeFgtPedAndStat.C.  Eight files will be part of the output:
    • Filename.FGT-ped-DB.dat
    • Filename.FGT-ped-stat-info.txt
    • Filename.FGT-ped-stat.pdf
    • Filename.FGT-ped-stat.ps
    • Filename.FGT-ped-stat.root
    • Filename.FGT-ped-stat.txt
    • Filename.FGT-stat-DB.dat
    • fgtMapDump.csv
    The .dat files need to be loaded to the database using the macros
    • cvs co StRoot/StFgtUtil/database/macros/write_fgt_pedestal.C
    • cvs co StRoot/StFgtUtil/database/macros/write_fgt_status.C
    • cvs co StRoot/StFgtUtil/database/macros/fgtPedestal.h
    • cvs co StRoot/StFgtUtil/database/macros/fgtStatus.h
    Now you need to open each of these macros and set the database time and file input correctly before running via the standard root4star -b -q write_fgt_pedestal.C.



    Software for computing (making), reading (from file or DB) and plotting pedestals has been written.  The DB functionality is not fully implemented as of Jan 10, 2012.

     


    Files

    The current code for reading and writing pedestals is contained in the following files

    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPedMaker/StFgtPedMaker.h
    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPedMaker/StFgtPedMaker.cxx
    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPedMaker/StFgtPedReader.h
    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPedMaker/StFgtPedReader.cxx
    

    An example of using the pedestal maker is in the file

    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPedMaker/macro/makeCosmicPeds.C

    An example of using the pedestal reader is in the file

    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPool/StFgtPedPlotter/macro/plotPedsFromFile.C

    An auxiliary class to make a nice plot of pedestals is found in the files

    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPool/StFgtPedPlotter/StFgtPedPlotter.h
    $CVSROOT/offline/StFgtDevel/StRoot/StFgtPool/StFgtPedPlotter/StFgtPedPlotter.cxx
    

    After the software review, these files are expected to move to StRoot instead of offline/StFgtDevel/StRoot.


    StFgtPedMaker

    The StFgtPedMaker is designed to use the FGT online containers in StEvent.  The pedestals are the mean ADC value over all events processed by the chain, while the "RMS" is actually the standard deviation.  Running sums are computed in the StFgtPedMaker::Make function, and the final values are computed in the ::Finish member function.  The values can then be written to a file, which contains four columns: (1) geoId of the strip, (2) timebin, (3) pedestal, i.e. mean ADC, (4) RMS, i.e. st. dev.

    The pedestal maker has the following user functions to modify the options:

       void setToSaveToFile( const Char_t* filename );
       void setToSaveToDb( Bool_t doIt = 1 );
       void setTimeBinMask( Short_t mask = 0xFF );
    

    To save to file, one uses the "setToSaveToFile" function and passes the filename to which the information should be saved.  The "setToSaveToDb" function is not yet implemented.  It was decided not to allow this functionality in this class, but rather have a dedicated macro to upload a text file generated by this class into the DB.  The time bin mask is set via the "setTimeBinMask" function.  All time bins which are flagged "false" in the mask will be ignored.  Note: time bin 0x01 is the 0th time bin, 0x10 is the 4th time bin, etc.

    The status of the strips (e.g. dead, broken, and/or hot strips) is not considered in making the pedestals.  All pedestals are computed for the time bins specified in the time bin mask.  As the status of the strips is given by the StFgtStatusMaker/Reader, it is expected that code querying the StFgtPedReader for a pedestal will also query the StFgtStatusReader for the status of the strip, and then choose to act accordingly.  In this manner, the status does not have to be computed before computing the pedestals, but instead should be computed before using the pedestal information.
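
    As a concrete illustration of these options, a pedestal maker could be configured as in the fragment below. This is only a sketch: constructor arguments and the surrounding chain setup are omitted (see makeFgtPedAndStat.C above for a working macro), and the file name is a placeholder; only the three setters come from the list above.

        // Sketch: configuring StFgtPedMaker (chain construction omitted).
        StFgtPedMaker* pedMkr = new StFgtPedMaker();          // ctor arguments, if any, omitted
        pedMkr->setToSaveToFile("run13123456.FGT-ped.txt");   // write pedestals to this text file
        pedMkr->setTimeBinMask(0x10);                         // keep only time bin 4 (0x01 = time bin 0)
        // pedMkr->setToSaveToDb();  // not implemented - use the dedicated upload macro instead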


    StFgtPedReader

    It was anticipated that all calls for pedestals, in all software, would use the StFgtPedReader.  The StFgtPedReader initially loads the pedestals either from file or from the database, and holds them in an associative array, allowing both sparse data and fast look-ups.  In future versions, one could change the implementation to a static array, if one desired faster processing but a larger memory footprint.  The code has the following accessor function, to read the pedestal for a given geoId and time bin:

       // accessor: input is geoId and timebin, output is ped and
       // st. dev. (err).  Returns error if pedestal not found for given geoId and timebin
       Int_t getPed( Int_t geoId, Int_t timebin, Float_t& ped, Float_t& err ) const;

    One can also set a time bin mask via

       void setTimeBinMask( Short_t mask = 0xFF );
    

    Time bins with bits set to false will be ignored.  The fewer time bins loaded, the faster the initial load and the faster the look up time for each individual geoId afterwards.
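
    A typical use of the reader is sketched below. The construction and initialization here are schematic (a hypothetical file-based constructor and an assumed Init() step - see plotPedsFromFile.C above for a working example); only getPed and setTimeBinMask come from the interface quoted above.

        // Sketch: pedestal lookup with StFgtPedReader (construction/Init are schematic).
        StFgtPedReader pedReader("run13123456.FGT-ped.txt");  // hypothetical file-based ctor
        pedReader.setTimeBinMask(0x10);                       // load only time bin 4 for faster look-ups
        pedReader.Init();                                     // assumed initialization step

        Float_t ped = 0, err = 0;
        Int_t geoId = 12345, timebin = 4;                     // illustrative values
        Int_t ierr = pedReader.getPed(geoId, timebin, ped, err);
        if (ierr == 0) {                                      // assuming 0 means "found"
           // use ped (mean ADC) and err (st. dev.); also check the strip status
           // via the StFgtStatusReader before trusting the value
        }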

    The DB interface still needs to be programmed as of Jan 10, 2012.


    StFgtPedPlotter

    This produces a nice plot of the pedestals for a given quadrant (10 APVs).  The procedure is straightforward.  See the macro and the code for an example of how this is done.  Note: the ped. plotter gives an example of how to use the ped. reader.


    Comments

    Usually 2,000 events are used to compute pedestals.  The amount of time taken by the StFgtPedReader/Maker is significantly less than the amount of time used by the RawMaker to create the FGT containers in the StEvent and read the DAQ file from disk, and therefore can be considered negligible for the present.

    An example pedestal file is attached.  This file is from the cosmic test stand, when quadrants 008, 018, 007 were in the top, middle, and bottom positions, respectively.  Plots of typical pedestal RMS can be found on page 1 of the QA plots produced during the cosmic test stand.  The base directory is http://www.star.bnl.gov/protected/spin/sgliske/fgtCosmicQA/, from which you can then select a quadrant, and then select a .pdf file.  The files are named via the quadrant and the time the data was taken.

    FGT Simulation

    Random notes from email exchanges:

     

    A bfc that produces muDsts from simulation files looks like this
    root4star -b bfc.C'(10,"MakeEvent,ITTF,NoSsdIt,NoSvtIt,Idst,VFPPVnoCTB,logger,-EventQA,-dstout,tags,Tree,EvOut,analysis,dEdxY2,IdTruth,useInTracker,-hitfilt,tpcDB,TpcHitMover,TpxClu,McAna,fzin,y2012,tpcrs,geant,geantout,beamLine,eemcDb,McEvOut,bigbig,emcY2,EEfs,bbcSim,ctf,-CMuDST,sdt20120501.060500","pp200_QCDprodMBc.fzd")' -q > & Log1

    There is  an fzd file in avossen/tmp/4jason/

     

    The code is available at
    StRoot/StFgtSimulator/
    and should also be available as

    The bfc to run it is in StRoot/StFgtSimulator/macros/bfc.C
    You can run it from the StFgtDevel dir with the following
    command line:

    %root4star -b
    StRoot/StFgtSimulator/macros/bfc.C'(10,"MakeEvent,ITTF,NoSsdIt,NoSvtIt,Idst,VFPPVnoCTB,\
      logger,-EventQA,-dstout,tags,Tree,EvOut,analysis,dEdxY2,\
      IdTruth,useInTracker,-hitfilt,tpcDB,TpcHitMover,TpxClu,\
      McAna,fzin,y2012,tpcrs,geant,geantout,beamLine,eemcDb,\
      McEvOut,bigbig,emcY2,EEfs,bbcSim,ctf,-CMuDST","pp200_QCDprodMBc.fzd")'

     

     

    Legend for Status in DB

    Strip Status

    Status bits are failure states, i.e. status of 0 is good, anything else is bad. Strip status bits are defined as

    • bit 1: pedestal out of range (current range is 100-1200 ADC)
    • bit 2: RMS out of range (current range is 10-80 ADC)
    • bit 3: Fraction of integral near pedestal value (i.e +/- 1 RMS of the pedestal) out of range (current range is 0.6 to 0.95)
    • bit 4: not used
    • bit 5: APV chip bad (threshold is currently 64 dead strips)
    • bit 6: strip not connected

    Note: for bit 5, all strips on an APV have this bit set if more than the threshold number of strips on that APV failed the tests corresponding to bits 1-3.
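
    For illustration, the bits above could be tested as in the fragment below (a sketch for a ROOT macro; it assumes "bit 1" means the least significant bit of the status word, which should be verified against the status maker code):

        // Sketch: decoding FGT strip status bits (assumes bit 1 = least significant bit).
        Bool_t isGoodStrip(UShort_t status)       { return status == 0; }
        Bool_t pedOutOfRange(UShort_t status)     { return (status & 0x01) != 0; }  // bit 1
        Bool_t rmsOutOfRange(UShort_t status)     { return (status & 0x02) != 0; }  // bit 2
        Bool_t fracNearPedBad(UShort_t status)    { return (status & 0x04) != 0; }  // bit 3
        Bool_t apvChipBad(UShort_t status)        { return (status & 0x10) != 0; }  // bit 5
        Bool_t stripNotConnected(UShort_t status) { return (status & 0x20) != 0; }  // bit 6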

    Offline Software

    Getting started with plots for the FGT

    (some old documentation on how to read MuDSTs: drupal.star.bnl.gov/STAR/blog/avossen/2012/apr/26/how-read-mudsts)
    Prepare your libraries
    If you're brand new or starting a fresh directory for FGT, you'll need to check out some files... This is a list of all the offline FGT software you're likely to need for whatever it is you're working on...
    > mkdir mydir
    > cd mydir
    > cvs co StRoot/RTS
    > cvs co StRoot/St_base
    > cvs co StRoot/StEvent
    > cvs co StRoot/StFgtA2CMaker
    > cvs co StRoot/StFgtClusterMaker
    > cvs co StRoot/StFgtDbMaker
    > cvs co StRoot/StFgtPool
    > cvs co StRoot/StFgtRawMaker
    > cvs co offline/StFgtDevel/StRoot/StMuDSTMaker

    > ln -s offline/StFgtDevel/StRoot/StFgtMuDSTMaker/ StRoot/StFgtMuDSTMaker

    The last step is necessary since StFgtMuDSTMaker is not yet in devel.
    Make sure you are in the proper STAR version (you want "development") and compile...
    > starver dev
    > cons

    Pick your data
    After your libraries are installed correctly, pick a daq file for your FGT studies... (If you're using MuDST files this will be different..)
    > ls /star/data03/daq/2012/xxx/13xxxyyy
    where 13xxxyyy is the run number you want. (If the run number you want isn't there, restore the daq files yourself or ask somebody to do it.)
    Before running over data, make sure to run klog to ensure you get a token to communicate with the database..
    > klog

    For MuDST files, the corresponding #define directive in StFgtPool/StFgtClusterTools/StFgtGeneralBase.h has to be set.

    Run over the data
    To fill all the plots you want, you'll need to run this command (when sitting in mydir)...
    > cd mydir
    > root4star -b -q StRoot/StFgtPool/StFgtClusterTools/macros/agvEffs.C'("/star/data03/daq/2012/131/13173068p_rf/st_physics_13173068_raw_202001.root",10,10000,2)' > & output.txt
    The above command will attempt to run over 10,000 events from the example daq file, using disc 2 as the disc that is removed for efficiency calculations. Piping the output to a file is necessary in order to cut down on running time. On average, 10,000 events take ~45 minutes as long as you use an output file.

    Look at the plots
    The agvEffs.C macro will output some .root files...
    -clusterEff.root
    -clusterPics.root contains visual "pictures" of the first 1000 clusters found in the daq file.
    -pulses.root counts pulses in the electronics etc
    -signalShapes.root contains a whole heck of a lot of plots... To see exactly all that is put into signalShapes.root, take a look at StFgtClusterTools/StFgtGenAVEMaker.cxx
    There are some friendly macros to help you pull out the plots you want in an organized way. Run these guys to output a whole ton of .png files. A few examples are...
    > root4star -b -q StRoot/StFgtPool/StFgtClusterTools/macros/saveSignalChar.C'("signalShapes.root")' //this will output histograms per quadrant
    > root4star -b -q StRoot/StFgtPool/StFgtClusterTools/macros/saveSignalCharAPV.C'("signalShapes.root")' //this will output histograms per APV
    > root4star -b -q StRoot/StFgtPool/StFgtClusterTools/macros/saveClusterSizes.C'("signalShapes.root")' //this will output cluster size histograms per quadrant
    For now it's been most convenient to just dump the .png files into your protected directory but hopefully something a little more elegant is on its way very soon...

    Streamlining the process
    Right now we have a shell script that pulls daq file names from a list and then runs over them one after the other, dumping the output .root files into a directory. Look around for things like "runChain.sh" and "l13173068.list" if you want to have a go at that.
    > ./runChain.sh > & output.txt
    Coming soon: pre-written xml job for STAR scheduler.

    Available data sets
    Most recent runs are located in http://www.star.bnl.gov/protected/spin/ezarndt/fgt/
    Coming soon: plots better organized into a scroll-friendly format.

    Online Software

    The online software uses the 'JPlot' framework:

     

    > cvs co OnlTools/Jevp

    > cvs co OnlTools/PDFUtil

    > cvs co StRoot/RTS

    > cvs co StRoot/StDaqLib

    > cvs co StRoot/StEvent

    The framework currently only compiles correctly with the 'pro' library version:

    > starver pro

    > cons

     

    To run do:

    > OnlTools/Jevp/launch fgtBuilder -file filename -pdf outputfilename.pdf

    Jevp instructions can be found in

    OnlTools/Jevp/readme.txt

    The fgt specific code is in OnlTools/Jevp/StJevpBuilders/fgtBuilder.{h,cxx}

     

     

     

     

     

     

    Test Stand

    FGTEventDisplay Overview and Instructions

    At the moment FGTEventDisplay incorporates much of the code that we've been using to generate plots, and allows the viewing of individual events. It's certainly not perfect (and it will likely be replaced at some point in the near future), but since it contains this functionality, I thought I might give a brief overview of what it can currently do and how to use it.

    Here's what it can do:

    * Calculate and display pedestals

    Currently, pedestals are calculated from the first 1000 events in a file. These pedestals are only calculated the first time a daq file is viewed using the FGTEventDisplay, when the APV range is changed, or when you force the pedestals to be recalculated. Otherwise, the pedestals previously saved to a file in the FGTEventDisplay directory are loaded and reused.

    When the pedestals are displayed, they are displayed as ADC response per channel. Three graphs are shown with error bars, at 1, 2 and 3 sigmas. Many of the algorithms in the code use the three sigma cutoff when accepting an ADC response for use.

    Figure 1: Example pedestal plot
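
    The pedestal treatment described above amounts to a per-channel mean and sigma over the first events, with a 3*sigma threshold applied afterwards. Below is a generic sketch of that logic, not the actual FGTEventDisplay code.

        // Generic per-channel pedestal + 3-sigma cut (illustrative only).
        #include <cmath>
        #include <vector>

        struct Ped { double mean, sigma; };

        // adc[event][channel]: raw ADC values; use the first nPedEvents events.
        std::vector<Ped> makePeds(const std::vector< std::vector<double> >& adc,
                                  unsigned nPedEvents, unsigned nChannels) {
           std::vector<double> sum(nChannels, 0.), sum2(nChannels, 0.);
           for (unsigned ev = 0; ev < nPedEvents && ev < adc.size(); ++ev)
              for (unsigned ch = 0; ch < nChannels; ++ch) {
                 sum[ch]  += adc[ev][ch];
                 sum2[ch] += adc[ev][ch] * adc[ev][ch];
              }
           std::vector<Ped> peds(nChannels);
           for (unsigned ch = 0; ch < nChannels; ++ch) {
              peds[ch].mean  = sum[ch] / nPedEvents;
              peds[ch].sigma = std::sqrt(sum2[ch] / nPedEvents - peds[ch].mean * peds[ch].mean);
           }
           return peds;
        }

        // A channel is accepted as a hit if it exceeds pedestal + 3 sigma.
        bool isHit(double adc, const Ped& p) { return adc > p.mean + 3. * p.sigma; }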

    * Generate Radio Hits graphs

    The code can generate and save radio hits plots, similar to those that I have been posting to track down the APV mapping problem. The graphs that are generated are just raw 2-D histograms that are then saved to root files. The processing done to improve the appearance of those graphs, as well as applying the device boundaries, is actually done by a utility macro included with FGTEventDisplay. This macro is called Display.C and is in the same directory as FGTEventDisplay.

    Note that all 7 time bins are accumulated by this code separately, and all 7 time bins are saved as separate histograms to the root file.

    There are two types of radio hits graphs that are generated. The first plots only the maximum hits. The algorithm finds the maximum value in each event that is above pedestal + 3*sigma and then fills that location in the histogram/radio plot. The second algorithm finds all values that exceed 3*sigma above pedestal and then fills those locations in the histogram/radio plot. These two types of graphs are saved in separate root files.

    Figure 2: Example max hits plot

    Figure 3: Example all r/phi matches plot

    * Generate ADC Response graphs

    This code generates and saves per-phi, per-r, and per-channel ADC response plots. Once again, these are just 2-D histograms that are then saved to a root file, and processing to improve the appearance of these plots was done in Display.C. The algorithm selects all ADC responses greater than 3-sigma over pedestal for each event. The channel number in this case is the APV number (offset so that all APV ranges start at 0) times 128 plus the actual channel number. Phi and R values are determined based on the mapping that is included with the FGTEventDisplay (this is located in fgt_mapping.txt). Once again, note that all 7 time bins are accumulated by this code separately, and all 7 time bins are saved as separate histograms to the root file.

    Figure 4: Example ADC response vs channel plot

    Figure 5: Example ADC response vs phi plot

    Figure 6: Example ADC response vs r plot

    * Generate Raw ADC Response graphs

    This code generates and saves per-phi, per-r, and per-channel ADC response plots, very similar to the above, but it does not apply pedestal subtraction or thresholds. The channel number is still the APV number (offset so that all APV ranges start at 0) times 128 plus the actual channel number. Phi and R values are determined based on the mapping that is included with the FGTEventDisplay (this is located in fgt_mapping.txt). Once again, note that all 7 time bins are accumulated by this code separately, and all 7 time bins are saved as separate histograms to the root file.

    Figure 7: Example raw ADC response vs channel plot

    Figure 8: Example raw ADC response vs phi plot

    Figure 9: Example raw ADC response vs r plot

    * Display individual events

    FGTEventDisplay allows you to iterate both forward and (slowly) backwards through individual events in a daq file, as well as jump (slowly) to individual events. For each event, at least three graphs are shown, possibly four. The three that are always shown are ADC response versus R, ADC response versus Phi, and ADC response versus channel (using the same channel calculation method and mapping as described earlier).

    The fourth graph is only displayed if values are found in the current event that exceed 3*sigma over pedestal. It shows all possible hit locations in R and Phi for these values. Currently this is not a radio plot, but this may change in the future for clarity.

    The time bin selected for display here is always the fourth time bin.

    Figure 10: Example default event display plot



    Anselm's clustering code is currently in a developmental version of FGTEventDisplay. We are working on incorporating a correction for common mode noise, relative R/Phi gains, and gain matching into this clustering code.


    Here are instructions for downloading, compiling, and using FGTEventDisplay:

    To download:

    FGTEventDisplay is currently stored in a googlecode SVN repository. In order to use this repository, you have to set up your svn proxy properly so that you can contact googlecode.com (this has already been done on fgt-ops). Probably the easiest way to do this is to attempt to download the code first by issuing the command:

    svn co https://fgt-daq-reader.googlecode.com/svn/trunk/FGTEventDisplay FGTEventDisplay

    This will almost certainly fail, but it should create the file ~/.subversion/servers. You will need to edit that file by adding the following two lines to the end of the file:

    http-proxy-host=proxy.sec.bnl.local
    http-proxy-port=3128


    Once that is done, try issuing the command again. This time it should work (let me know if it doesn't), and it should create a directory FGTEventDisplay containing all the program files. From then on you can just update that working copy to get updates by issuing the command

    svn update

    in the FGTEventDisplay directory. This should automatically merge changes into your files, without clobbering your own changes, although if there are conflicts you may have trouble.

    If you want or need access to this repository, please send e-mail to Dr. Fatemi and let her know. At that point she'll probably ask me to add you, and I'll try and remember how to do that.

    To compile:

    Go into your FGTEventDisplay directory and issue the command:

    make

    Compilation stuff should happen, and you should be left with an FGTEventDisplay executable.

    Please note that you should NOT use Rts_Example.sh to compile. I can't guarantee that it will work, and it is included in the repository only for historical reasons (because, historically, we've been too lazy to remove it).

    To run:

    Go into your FGTEventDisplay directory (this is IMPORTANT. . . the code will not run from another directory) and issue the command:

    ./FGTEventDisplay <location of DAQ file>

    Replacing <location of DAQ file> with the actual path to a DAQ file.

    This will start the program. The code will either automatically generate or load pedestals, depending on whether or not they have already been calculated by some previous run.

    Then the program will show the main text menu. The menu lists most of your options, and most of them should be pretty straight forward.

    However, there are a few things that should be mentioned. First, by default, the code will assume that you are using the APV range 17 through 21. If you are using a different range, you need to set that difference in the program options; currently the software only supports ranges 17 through 21 and 12 through 16. To get to options, just type "o" at the main menu. Once there, type "a" to change the APV range. Then press "q" to return to the main menu. Doing this will now AUTOMATICALLY force pedestal recalculation, so at this point you should be able to use the code normally.

    Also, in options you can tell the program to display bar graphs when displaying events (instead of scatter plots), and change the marker style in the scatter plots. These ONLY affect the event display.

    Figure 11: Example default event display with bars


    The event display plots do not currently allow any user interaction. This is unfortunate, and I'm planning on fixing it in the future, but right now nearly all x-windows events are ignored by that window, so resizing and clicking and even moving it off screen (for some window managers) will not work as expected.

    The daq reader does not have a mechanism for iterating backwards, or a mechanism for jumping to an arbitrary event. As such, iterating backwards (using "h" or "k") may be very, very slow, as the program has to iterate forwards from the beginning of the daq file to the previous event. Similarly, though I have tried to make it as efficient as possible, jumping to an event may be very slow (although, jumping to an event *forward* of your current position in the file will start from your current position, so it should be more efficient).

    When jumping to an event directly, you should use the event number that is displayed by the program in the main menu as you are iterating through the daq file. That event number should appear right above the command prompt.

    Finally, many of the plots above are colorful and contoured. These are not the raw images that the FGTEventDisplay will produce. With the exception of the individual event display and the pedestal display, FGTEventDisplay will produce root files containing histograms. Generating the plot for these histograms must be done separately. A macro, Display.C, is included with FGTEventDisplay that can be used to generate these nice, colorful plots, however it currently requires modification to function with every possible root file and contained histogram.

     

    Organization

    FGT. . . . . . . . . . V A R I A

    1. Frank's Web page at MIT
    2. Dave's Web page Argonne

    3. ccc

     

    TPC resistor chain at phi=106 deg

     Here is what we know about the TPC resistor chain in the IFC positioned at 106 deg in phi.

    • the foil covering the resistors is made of Al plus Kapton and is 0.09 percent of a radiation length thick for a straight-through track.
    • the two round cylinders supporting that foil are made of 1" pipe made from G10, wall thickness ~1 mm.
    • the foil structure is described on the TPC page; we are using one Kapton foil with Al stripes:
      http://www.star.bnl.gov/public/tpc/tpc.html
    • resistors: between stripes we put a 2 MOhm resistor, consisting of 2 resistors of 1 MOhm nominal value - see picture in attachment.
      I don't know the power rating, but it has a 4 mm diameter and a 13 mm ceramic body.
      There are 360 one-MOhm resistors on each side of the TPC. We have 182 stripes to define the uniform field, and between each pair of stripes we put 2 MOhm.

     


    As seen from the East toward West:

    BAD drawing:

    Good drawing:

     

     


    As seen from the West toward East:

     

    FGT. . . . . . . . . . x -T R A S H (node for recycling)

    Any child pages belonging here are trash.
    Note: only the owner can recycle them, by attaching them to a new mother page and changing the title, URL, and content.

    Jan

    fgt-subpage-test-Renee

    this is my link to my previous blog

    my trash1- jan

     this page can be used for some tests

    trash1

     Bhla Bhla

     

    Deleted Documents

    Deleted documents.

    FPD & FMS & FPS

     The STAR Forward Pion Detector (FPD) and Forward Meson Spectrometer(FMS)

     

    Database

    Now the FPD/FMS database tables have been created, including geometry, mapping and calibration tables. These pages describe the data structure and how to use the FPD/FMS database. Please note the online database browser is a nice tool to get an intuitive view of the tables stored. StFmsDbMaker, which acts as the interface between end-user makers and the database, is developed and described in more detail in the corresponding subsection.

    Calibration

    Please refer to the mapping section for more information about detectorId definition and positions.
     



    The following datasets (with names) are in the database:
    St_fmsGain ("Calibrations/fms/fmsGain")
    St_fmsGainCorrection ("Calibrations/fms/fmsGainCorrection")
    St_fmsBitShiftGain ("Calibrations/fms/fmsBitShiftGain")
    St_fmsLed ("Calibrations/fms/fmsLed")
    St_fmsLedRef ("Calibrations/fms/fmsLedRef")
    St_fmsPi0Mass ("Calibrations/fms/fmsPi0Mass")
    St_fmsRec ("Calibrations/fms/fmsRec")


     


    fmsGain and fmsGainCorrection are the original calibrations inherited from the FPD.
    For the FMS:
    Large cells, north -> detectorId=8
    Large cells, south -> detectorId=9
    Small cells, north -> detectorId=10
    Small cells, south -> detectorId=11

    The channels are counted from near-beam outward and from bottom up.
    Large cells are arrays of 17x34, small cells are 12x24.
    The corners and central holes are included in the channel count, i.e. these entries have gain=0.
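
    The channel bookkeeping above can be summarized in a short fragment (illustrative only; the row/column-to-channel indexing convention is not spelled out here, so only the totals and the gain = 0 convention for corners and holes are encoded):

        // Expected channel counts per FMS detectorId (fragment for a ROOT macro).
        Int_t nFmsChannels(Int_t detectorId) {
           if (detectorId ==  8 || detectorId ==  9) return 17 * 34;  // large cells: 578
           if (detectorId == 10 || detectorId == 11) return 12 * 24;  // small cells: 288
           return 0;                                                  // not an FMS detector
        }
        // Entries with gain == 0 in fmsGain mark the corners and the central hole.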

    $STAR/StDb/idl/fmsGain.idl

    struct fmsGain
    {
        octet detectorId;     /* DetectorId */
        unsigned short ch;     /* Ch 1-578 */
        float gain;     /* gain GeV/ch */
    }; 

    $STAR/StDb/idl/fmsGainCorrection.idl

    struct fmsGainCorrection
    {
        octet detectorId;     /* DetectorId */
        unsigned short ch;     /* Ch 1-578 */
        float corr;     /* gain correction factor */
    }; 

    $STAR/StDb/idl/fmsBitShiftGain.idl

    /*
     * description: // FMS & FPD detector bit shift gain
     *   0 for no bit shift
     *   +1 for shifting up (adc=1,2,3,4... becomes 2,4,6,8...)
     *   -1 for shifting down (adc=1,2,3,4... becomes 0,1,1,2...)
     *   up to +-5
     */

    struct fmsBitShiftGain
    {
        octet detectorId;     /* DetectorId */
        unsigned short ch;     /* Ch 1-578 */
        short bitshift;     /* bit shift gain */
    }; 

     

     



    The following datatables are not indexed. They are only used for the FMS.
    The BLOB arrays follow the same order as the channels for the gains (starting with 0 instead of 1, obviously).

    $STAR/StDb/idl/fmsLed.idl

    struct fmsLed
    {
        float adc_ln[578];     /* LED ADC value Large North */
        float err_ln[578];     /* LED uncertainty Large North */
        float adc_ls[578];     /* Large South */
        float err_ls[578];     /* */
        float adc_sn[288];     /* Small North */
        float err_sn[288];     /* */
        float adc_ss[288];     /* Small South */
        float err_ss[288];     /* */
    };
    
    

    $STAR/StDb/idl/fmsLedRef.idl

    struct fmsLedRef
    {
        float adc_ln[578];     /* LED ADC value Large North */
        float err_ln[578];     /* LED uncertainty Large North */
        float adc_ls[578];     /* Large South */
        float err_ls[578];     /* */
        float adc_sn[288];     /* Small North */
        float err_sn[288];     /* */
        float adc_ss[288];     /* Small South */
        float err_ss[288];     /* */
    };
    
    

    $STAR/StDb/idl/fmsPi0Mass.idl

    struct fmsPi0Mass
    {
        float mass_ln[578];     /* Reconstructed pi0 mass Large North */
        float err_ln[578];     /* Mass uncertainty Large North */
        float mass_ls[578];     /* Large South */
        float err_ls[578];     /* */
        float mass_sn[288];     /* Small North */
        float err_sn[288];     /* */
        float mass_ss[288];     /* Small South */
        float err_ss[288];     /* */
    };
    
    

     



    fmsRec contains parameters for the cluster and photon reconstruction, which have previously been hard coded or stored in local files. These parameters are not likely to change, but have been moved into the database for consistency and backwards compatibility.

    $STAR/StDb/idl/fmsRec.idl

    struct fmsRec
    {
        unsigned short ROW_LOW_LIMIT;     /* */
        unsigned short COL_LOW_LIMIT;     /* */
        float CEN_ROW_LRG;     /* */
        unsigned short CEN_ROW_WIDTH_LRG;     /* */
        unsigned short CEN_UPPER_COL_LRG;     /* */
        float CEN_ROW_SML;     /* */
        unsigned short CEN_ROW_WIDTH_SML;     /* */
        unsigned short CEN_UPPER_COL_SML;     /* */
        float CORNER_ROW;     /* */
        float CORNER_LOW_COL;     /* */
        unsigned short CLUSTER_BASE;     /* */
        unsigned short CLUSTER_ID_FACTOR_DET;     /* */
        unsigned short TOTAL_TOWERS;     /* */
        float PEAK_TOWER_FACTOR;     /* */
        float TOWER_E_THRESHOLD;     /* */
        float BAD_2PH_CHI2;     /* */
        float BAD_MIN_E_LRG;     /* */
        float BAD_MAX_TOW_LRG;     /* */
        float BAD_MIN_E_SML;     /* */
        float BAD_MAX_TOW_SML;     /* */
        float VALID_FT;     /* */
        float VALID_2ND_FT;     /* */
        float VALID_E_OWN;     /* */
        float SS_C;     /* */
        float SS_A1;     /* */
        float SS_A2;     /* */
        float SS_A3;     /* */
        float SS_B1;     /* */
        float SS_B2;     /* */
        float SS_B3;     /* */
        unsigned short CAT_NTOWERS_PH1;     /* */
        float CAT_EP1_PH2;     /* */
        float CAT_EP0_PH2;     /* */
        float CAT_SIGMAMAX_MIN_PH2;     /* */
        float CAT_EP1_PH1;     /* */
        float CAT_EP0_PH1;     /* */
        float CAT_SIGMAMAX_MAX_PH1;     /* */
        float PH1_START_NPH;     /* */
        float PH1_DELTA_N;     /* */
        float PH1_DELTA_X;     /* */
        float PH1_DELTA_Y;     /* */
        float PH1_DELTA_E;     /* */
        unsigned short PH2_START_NPH;     /* */
        float PH2_START_FSIGMAMAX;     /* */
        float PH2_RAN_LOW;     /* */
        float PH2_RAN_HIGH;     /* */
        float PH2_STEP_0;     /* */
        float PH2_STEP_1;     /* */
        float PH2_STEP_2;     /* */
        float PH2_STEP_3;     /* */
        float PH2_STEP_4;     /* */
        float PH2_STEP_5;     /* */
        float PH2_STEP_6;     /* */
        float PH2_MAXTHETA_F;     /* */
        float PH2_LOWER_NPH;     /* */
        float PH2_LOWER_XF;     /* */
        float PH2_LOWER_YF;     /* */
        float PH2_LOWER_XMAX_F;     /* */
        float PH2_LOWER_XMAX_POW;     /* */
        float PH2_LOWER_XMAX_LIMIT;     /* */
        float PH2_LOWER_5_F;     /* */
        float PH2_LOWER_6_F;     /* */
        float PH2_UPPER_NPH;     /* */
        float PH2_UPPER_XF;     /* */
        float PH2_UPPER_YF;     /* */
        float PH2_UPPER_XMIN_F;     /* */
        float PH2_UPPER_XMIN_P0;     /* */
        float PH2_UPPER_XMIN_LIMIT;     /* */
        float PH2_UPPER_5_F;     /* */
        float PH2_UPPER_6_F;     /* */
        float PH2_3_LIMIT_LOWER;     /* */
        float PH2_3_LIMIT_UPPER;     /* */
        float GL_LOWER_1;     /* */
        float GL_UPPER_DELTA_MAXN;     /* */
        float GL_0_DLOWER;     /* */
        float GL_0_DUPPER;     /* */
        float GL_1_DLOWER;     /* */
        float GL_1_DUPPER;     /* */
        float GL_2_DLOWER;     /* */
        float GL_2_DUPPER;     /* */
    };

    FPS DB tables


    Geometry related DB tables for FPS
    • fpsConstant.idl (1 row) : Basic constants
      struct fpsConstant {
      unsigned short nQuad; /* 4 */
      unsigned short nLayer; /* 3 */
      unsigned short maxSlat; /* 21 */
      unsigned short maxQTaddr; /* 8 */
      unsigned short maxQTch; /* 32 */
      };
    • fpsChannelGeometry.idl (12 rows) : Number of slats for each Quad & Layer
      struct fpsChannelGeometry {
      unsigned short quad; /* 1=Q1(South Top), 2=Q2(South Bottom),3=Q3(North Top), 4=Q4(North Bottom) */
      unsigned short layer; /* 1=layer1, 2=layer2, 3=layer3 */
      unsigned short nslat; /* # of slat (19,20 or 21) */
      };
    • fpsSlatId.idl (252 rows) : Slat Id to Quad & Layer & Slat#
      struct fpsSlatId {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (Slat-1)*/
      unsigned short quad; /* 1-4, 0 for non-existent slat */
      unsigned short layer; /* 1-3, 0 for non-existent slat */
      unsigned short slat; /* 1-21, 0 for non-existent slat */
      };
    • fpsPosition.idl (252 rows) : Detector dimensions and positions
      struct fpsPosition {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (slat-1)*/
      float xwidth; /* x width (cm) */
      float ywidth; /* y width (cm) */
      float zwidth; /* z width/thickness (cm) */
      float xoffset; /* xoffset from beam line to center of detector (cm) */
      float yoffset; /* yoffset from beam line to center of detector (cm) */
      float zoffset; /* z position from IR (cm) */
      };
    • fpsMap.idl : QT map (252 rows) :
      struct fpsMap {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (slat-1)*/
      short QTaddr; /* 0-7 : QT Address */
      short QTch; /* 0-31 : QT channel */
      };

    Calibration related DB tables for FPS
    • fpsGain.idl : Gain (252 rows) :
      struct fpsGain {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (slat-1)*/
      float MIP; /* Single MIP peak ADC [ch] */
      float Sigma; /* MIP peak width [ch]*/
      float Valley; /* Valley location between noise and MIP peak [ch]*/
      };
    • fpsStatus.idl : Status (252 rows) :
      struct fpsStatus {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (slat-1)*/
      unsigned short status; /* 0=Good, 1=bad, 9=Unused */
      };
    • fpsPed.idl : Pedestals(252 rows) :
      struct fpsPed {
      unsigned short slatid; /* 0-251: slat Id = (quad-1)*nLayer*maxSlat + (layer-1)*maxSlat + (slat-1) */
      float Mean; /* Mean of Gaussian Fit for ADC [ch] */
      float Sigma; /* Sigma of Gaussian Fit for ADC [ch] */
      };

    FPost DB tables

    FMS Postshower detector DB tables (Geometry/fpost)
    • fpostConstant.idl (1 row) : Basic constants
      struct fpostConstant {
      unsigned short nQuad; /* 2 (South/North) */
      unsigned short nLayer; /* 6 */
      unsigned short maxSlat; /* 43 */
      unsigned short maxQTaddr; /* 8 */
      unsigned short maxQTch; /* 32 */
      };
    • fpostChannelGeometry.idl (12 rows) : Number of slats for each Quad & Layer
      struct fpostChannelGeometry {
      unsigned short quad; /* 1=Q1(South), 2=Q2(North) */
      unsigned short layer; /* 1=layer1, 2=layer2, 3=layer3 4=layer4 5=layer5 6=layer6 */
      unsigned short nslat; /* # of slat (9,14,25,43, or 34) */
      };
    • fpostSlatId.idl (241 rows) : Slat Id to Quad & Layer & Slat#
      struct fpostSlatId {
      unsigned short slatid; /* 0-240: slat Id */
      unsigned short quad; /* 1-2, 0 for non-existent slat */
      unsigned short layer; /* 1-6, 0 for non-existent slat */
      unsigned short slat; /* 1-43, 0 for non-existent slat */
      };
    • fpostPosition.idl (241 rows) : Detector dimensions and positions
      struct fpostPosition {
      unsigned short slatid; /* 0-240: slat Id */
      float length; /* length ( Depends on S1,S2,etc. cm) */
      float width; /* width (5 cm) */
      float thickness; /* thickness (1 cm) */
      float angle_xy; /* angle in the xy plane measured with respect to the positive x-axis (45 South, 135 North)*/
      float xoffset; /* xoffset from beam line to center of detector (cm) */
      float yoffset; /* yoffset from beam line to center of detector (cm) */
      float zoffset; /* z position from IR (cm) */
      };
    • fpostMap.idl : QT map (241 rows) :
      struct fpostMap {
      unsigned short slatid; /* 0-240: slat Id */
      short QTaddr; /* 0-7 : QT Address */
      short QTch; /* 0-31 : QT channel */
      };

    Calibration related DB tables for FPOST
    • fpostGain.idl : Gain (241 rows) :
      struct fpostGain {
      unsigned short slatid; /* 0-240: slat Id */
      float MIP; /* Single MIP ADC ch */
      };
    • fpostStatus.idl : Status (241 rows) :
      struct fpostStatus {
      unsigned short slatid; /* 0-240: slat Id */
      unsigned short status; /* 0=Good, 1=bad, 9=Unused */
      };
    • fpostPed.idl : Pedestals(241 rows) :
      struct fpostPed {
      unsigned short slatid; /* 0-240: slat Id */
      float Mean; /* Mean of Gaussian Fit for ADC [ch] */
      float Sigma; /* Sigma of Gaussian Fit for ADC [ch] */
      };

    Geometry & Mapping

     The following FMS database tables are defined for geometry and mapping.

     

     

    • Channel geometry

       $STAR/StDb/idl/fmsChannelGeometry.idl 

    /* fmsGeometry.idl
    *
    * Table: fmsGeometry
    *
    * description: // FPD & FMS & FHC detector geometry
    */
    /* Detector Name detectorId ew ns type nX nY */
    /* FPD-North 0 0 0 0 7 7 */
    /* FPD-South 1 0 1 0 7 7 */
    /* FPD-North-Pres 2 0 0 1 7 1 */
    /* FPD-South-Pres 3 0 1 1 7 1 */
    /* FPD-North-SMDV 4 0 0 2 48 1 */
    /* FPD-South-SMDV 5 0 1 2 48 1 */
    /* FPD-North-SMDH 6 0 0 3 1 48 */
    /* FPD-South-SMDH 7 0 1 3 1 48 */
    /* FMS-North-Large 8 1 0 4 17 34 */
    /* FMS-South-Large 9 1 1 4 17 34 */
    /* FMS-North-Small 10 1 0 4 12 24 */
    /* FMS-South-Small 11 1 1 4 12 24 */
    /* FHC-North 12 1 0 5 9 12 */
    /* FHC-South 13 1 1 5 9 12 */

    struct fmsChannelGeometry {
    octet detectorId; /* detector Id */
    octet type; /* 0=SmallCell,1=Preshower,2=SMD-V,3=SMD-H,4=LargeCell,5=HadronCal */
    octet ew; /* 0=east, 1=west */
    octet ns; /* 0=north, 1=south */
    octet nX; /* # of columns, max_channel is nX*nY */
    octet nY; /* # of rows, max_channel is nX*nY */
    };
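
    As a rough illustration of how nX and nY define the channel numbering, the sketch below converts between a channel number (1 .. nX*nY) and a (row, column) pair, assuming channels are numbered row by row; the authoritative conversion is getRow()/getColumn() in StFmsDbMaker (described further down this page).

    /* Illustrative only: channel <-> (row, column) for a detector with nX columns,
       assuming channels 1..nX*nY are numbered row by row. */
    int channelFromRowColumn(int row, int column, int nX) {
      return (row - 1) * nX + column;        /* 1-based row/column -> 1-based channel */
    }
    void rowColumnFromChannel(int ch, int nX, int& row, int& column) {
      row    = (ch - 1) / nX + 1;
      column = (ch - 1) % nX + 1;
    }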

    • Detector position

       $STAR/StDb/idl/fmsDetectorPosition.idl 

    /* fmsPosition.idl
    *
    * Table: fmsPosition
    *
    * description: // FPD & FMS & FHC detector width and positions
    */

    struct fmsDetectorPosition {
    octet detectorId; /* detector Id */
    float xwidth; /* x width */
    float ywidth; /* y width */
    float xoffset; /* xoffset from beam line to inner edge of detector */
    float yoffset; /* yoffset from beam line to center of detector */
    float zoffset; /* z position where we measure x,y */
    };

    • Detector map

       $STAR/StDb/idl/fmsMap.idl 

    /* fmsMap.idl
    *
    * Table: fmsMap
    *
    * description: // FMS & FPD detector map
    */

    struct fmsMap {
    octet detectorId; /* DetectorId */
    unsigned short ch; /* Ch 1-578*/
    octet qtCrate; /* QT crate# 1-4 & 7 */
    octet qtSlot; /* QT slot# 1-16 */
    octet qtChannel; /* QT channel# 0-31 */
    };

    • PatchPanel to detector map

       $STAR/StDb/idl/fmsPatchPannelMap.idl 

    /* fmsPatchPanelMap.idl
    *
    * Table: fmsPatchPanelMap
    *
    * description: // FMS detector to patch panel map
    */

    /* module 1=North Large, 2=South Large, 3=North Small, 4=South Small => moduleIDs */

    struct fmsPatchPanelMap {

    /* channel# 1-548 for L and 1-288 for S*/
    octet ppPanel[548]; /* panel# 1-2 */
    octet ppRow[548]; /* row# 1-20 */
    octet ppColumn[548]; /* column# 1-16 */
    };

    • PatchPanel to QT map

       $STAR/StDb/idl/fmsQTMap.idl 

    /* fmsQTMap.idl
    *
    * Table: fmsQTMap
    *
    * description: // FMS patch panel to QT map
    */

    /* north=1/south=2 => sideIDs */

    struct fmsQTMap {
    /* panel# 1-2 */
    /* row# 1-20 */
    /* column# 1-16 */
    octet qtCrate[2][20][16]; /* QT crate# 1-4 */
    octet qtSlot[2][20][16]; /* QT slot# 1-16 */
    octet qtChannel[2][20][16]; /* QT channel# 0-31 */
    };
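
    For illustration, a hedged sketch of how the fmsQTMap arrays might be indexed after being read back, assuming one table entry per side (north=1/south=2, the module ID) and 1-based panel/row/column as in the comments above:

    /* Sketch: look up QT crate/slot/channel for a given side/panel/row/column.
       qtmap is assumed to be the fmsQTMap_st array returned from the DB,
       one entry per side (moduleID). */
    void lookupQT(const fmsQTMap_st* qtmap, int side, int panel, int row, int column,
                  int& crate, int& slot, int& channel) {
      const fmsQTMap_st& m = qtmap[side - 1];   /* side: 1=north, 2=south */
      crate   = m.qtCrate  [panel - 1][row - 1][column - 1];
      slot    = m.qtSlot   [panel - 1][row - 1][column - 1];
      channel = m.qtChannel[panel - 1][row - 1][column - 1];
    }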

     

    R/W DB

    Below is mostly copied from Akio's page on how to read/write DB tables in a macro.

     FMS DB example

    • Simple "How to Create Offline DB Tables" from Dmitry 
        o Anyone in STAR should be able to read DB from RCAS machines
        o To write to DB, you need write permission to DB. Currently only Akio and Dmitry can write. Dmitry can add people as needed.
        o Only Dmitry can delete, or mark as de-activated DB entries.
        o For reading DB : unsetenv DB_ACCESS_MODE or setenv DB_ACCESS_MODE read
        o For writing DB : setenv DB_ACCESS_MODE write, and do not forget to unsetenv DB_ACCESS_MODE when done.

    • STAR DB Browser : Calibration DB , Geometry DB

    • Look into other DB table definitions in $STAR/StDb/idl/*.idl 
        o Up to 3-dimensional arrays are supported 
        o In addition, with one query the DB can return an array (ModuleID) of this table 
        o Thus a 4-dimensional array (3-dim array in the table + 1 module ID) is used in fmsQTMap.idl (see below). 
        o Arrays are fixed length, but the module ID can be variable length. 
        o For DB efficiency, big array blobs are disfavored; small tables & moduleID are recommended.

    • Edit Computing/Subsystem drupal page for documentation!

    • To read/write into DB using macros 
           copy ~akio/dbServers.xml ~/
           unsetenv DB_SERVER_LOCAL_CONFIG

    • For usual DB access through BFC/St_db_Maker 
          setenv DB_SERVER_LOCAL_CONFIG /afs/rhic.bnl.gov/star/packages/conf/dbLoadBalancerLocalConfig_BNL.xml

    • Proposed tables and codes to read from and write into the database (examples are located at ~jgma/psudisk/fms/)
        o fmsPatchPanelMap.idl for FMS detector to patchpanel map
        Input file : qtmap2pp.txt
        Example root macro : fms_db_patchpanelmap.C 
           ** To read input file and write to DB
           root4star -q -b fms_db_patchpanelmap.C'("readtext writedb")'

          ** To read DB and write to text file
          root4star -q -b fms_db_patchpanelmap.C'("readdb writetext")'
          diff qtmap2pp.txt qtmap2pp.txt_dbout

        o fmsQTMap.idl for FMS patchpanel to QT map
        Input file for run8 : qtmap_run8.txt
        Input file for run9 : qtmap2009V1.txt
        Example root macro : fms_db_qtmap.C
           ** To read input file and write to DB for run8 (replace 2nd argument = 8 with 9 for run9)
           root4star -q -b fms_db_qtmap.C'("readtext writedb",8)'
           ** To read DB and write to text file for run8
           root4star -q -b fms_db_qtmap.C'("readdb writetext",8)'
           diff qtmap_run8.txt qtmap_run8.txt_dbout

        o fmsMap.idl for FPD,FMS,FHC detector to QT map
        For FMS, combine 2 DB tables (fmsPatchPanel & fmsQTMap) to create this table
        Root macro for FMS: fms_db_pp_qt_merge.C
           ** To read 2 maps from DB (fmsPatchPanel & fmsQTMap) and write fmsMap to DB for run8 (replace 2nd argument = 8 with 9 for run9)
           root4star -q -b fms_db_pp_qt_merge.C'("merge writedb",8)'
           ** To read fmsMap DB and write to text file for run8
           root4star -q -b fms_db_pp_qt_merge.C'("readdb writetext",8)'
           more fms_db_pp_qt_merge.txt

        Root macro For FPD : fpd_db_map.C

        o fmsChannelGeometry.idl for FPD,FMS,FHC detector basic numbers (id, type, East/West and North/South, # of rows/columns)
        Root macro : fms_db_ChannelGeometry.C

        o fmsDetectorPosition.idl for FPD,FMS,FHC detector position in STAR frame 
        Root macro : fms_db_detectorposition.C

        o fmsGain.idl/fmsGainCorrection.idl for FPD,FMS,FHC gain(detectorId, channel number, gain/gaincorr)
        Root macro : fms_db_fmsgain.C/fms_db_fmsgaincorr.C (they work in a similar way)
           ** To read input gain file and write gain information to DB for run8 pp200 (the macro takes a combined option of run8/run9 and dAu200/pp200)
           root4star -q -b fms_db_fmsgain.C'("readtext writedb", "run8 pp200")'
           ** To read fmsGain DB and write to text file for run8 pp200
           root4star -q -b fms_db_fmsgain.C'("readdb writetext", "run8 pp200")'
     

    StFmsDbMaker

    StFmsDbMaker is the interface between the STAR FMS database and user makers. It provides access methods to all FPD/FMS related data in the STAR database, such as mapping and calibration. (Currently it lives only at /star/u/jgma/psudisk/fms/StRoot/StFmsDbMaker.) A small chain to test: root4star -q -b /star/u/jgma/psudisk/fms/mudst.C. The following is a list of functions provided by StFmsDbMaker. Please refer to the source code for details (it will be checked into CVS after peer review).

    List of functions

    //! getting the whole table
    fmsDetectorPosition_st* DetectorPosition();
    fmsChannelGeometry_st* ChannelGeometry();
    fmsMap_st* Map();
    fmsPatchPanelMap_st* PatchPanelMap();
    fmsQTMap_st* QTMap();
    fmsGain_st* Gain();
    fmsGainCorrection_st* GainCorrection();

    //! utility functions related to FMS geometry and calibration
    Int_t maxDetectorId(); //! maximum value of detector Id
    Int_t detectorId(Int_t ew, Int_t ns, Int_t type); //! convert to detector Id
    Int_t eastWest(Int_t detectorId); //! east or west to the STAR IP
    Int_t northSouth(Int_t detectorId); //! north or south side
    Int_t type(Int_t detectorId); //! type of the detector
    Int_t nRow(Int_t detectorId); //! number of rows
    Int_t nColumn(Int_t detectorId); //! number of columns
    Int_t maxChannel(Int_t detectorId); //! maximum number of channels
    Int_t getRow(Int_t detectorId, Int_t ch); //! get the row number for the channel
    Int_t getColumn(Int_t detectorId, Int_t ch); //! get the column number for the channel
    Int_t getChannel(Int_t detectorId, Int_t row, Int_t column); //! get the channel number
    StThreeVectorF detectorOffset(Int_t detectorId); //! get the offset of the detector
    Float_t getXWidth(Int_t detectorId); //! get the X width of the cell
    Float_t getYWidth(Int_t detectorId); //! get the Y width of the cell
    Float_t getGain(Int_t detectorId, Int_t ch); //! get the gain for the channel
    Float_t getGainCorrection(Int_t detectorId, Int_t ch); //! get the gain correction for the channel
    StThreeVectorF getStarXYZ(Int_t detectorId,Float_t FmsX, Float_t FmsY); //! get the STAR frame coordinates
    Float_t getPhi(Int_t detectorId,Float_t FmsX, Float_t FmsY); //! get the STAR frame phi angle
    Float_t getEta(Int_t detectorId,Float_t FmsX, Float_t FmsY, Float_t Vertex); //! get the STAR frame pseudo rapidity

    //! fmsMap related
    Int_t maxMap();
    void getMap(Int_t detectorId, Int_t ch, Int_t* qtCrate, Int_t* qtSlot, Int_t* qtChannel);

    //! fmsPatchPanelMap related
    Int_t maxModule();

    //! fmsQTMap related
    Int_t maxNS();

    //! fmsGain/GainCorrection related
    Int_t maxGain();
    Int_t maxGainCorrection();

    //! set time stamp
    //void setDateTime(Int_t date, Int_t time);

    //! text dump for debugging
    void dumpFmsChannelGeometry(Char_t* filename="dumpFmsChannelGeometry.txt");
    void dumpFmsDetectorPosition(Char_t* filename="dumpFmsDetectorPosition.txt");
    void dumpFmsMap (Char_t* filename="dumpFmsMap.txt");
    void dumpFmsPatchPanelMap (Char_t* filename="dumpFmsPatchPanelMap.txt");
    void dumpFmsQTMap (Char_t* filename="dumpFmsQTMap.txt");
    void dumpFmsGain (Char_t* filename="dumpFmsGain.txt");
    void dumpFmsGainCorrection (Char_t* filename="dumpFmsGainCorrection.txt");

    How to use it in user maker

    #include "StFmsDbMaker/StFmsDbMaker.h"

    Then use the global pointer to get access to all the FMS db table information by the methods above. For example to dump one table to a text file:

    gStFmsDbMaker->dumpFmsChannelGeometry("ttest.txt");

    To get a pointer to the channel geometry table:

    fmsChannelGeometry_st *channelgeometry = gStFmsDbMaker->ChannelGeometry();
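
    As a further (hedged) illustration, typical per-channel lookups through the same global pointer, using only the functions listed above; the detector Id value is just an example:

    int det = 8;   // FMS-North-Large in the fmsChannelGeometry table above (example value)
    for (int ch = 1; ch <= gStFmsDbMaker->maxChannel(det); ch++) {
      int crate, slot, qtch;
      gStFmsDbMaker->getMap(det, ch, &crate, &slot, &qtch);
      float gain = gStFmsDbMaker->getGain(det, ch);
      float corr = gStFmsDbMaker->getGainCorrection(det, ch);
      // use crate/slot/qtch and gain*corr as needed ...
    }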

    In your macro you'll need to load the .so files and create maker instances in the chain:

    gSystem->Load("StDbBroker.so");
    gSystem->Load("St_db_Maker.so");
    gSystem->Load("StFmsDbMaker.so");

    cout << "Setting up chain" << endl;
    StChain* chain = new StChain;

    cout << "Setting up St_db_Maker" << endl;
    St_db_Maker* dbMaker = new St_db_Maker("db", "MySQL:StarDb", "$STAR/StarDb");
    dbMaker->SetDEBUG();
    dbMaker->SetDateTime(20090601, 0);

    cout << "Setting up StFmsDbMaker" << endl;
    StFmsDbMaker* fmsdb = new StFmsDbMaker("fmsdbmaker");
    fmsdb->setDebug(1);

    //Then your makers...

    Please pay attention to the SetDateTime option: it sets the timestamp, so the data extracted from the DB will correspond to that timestamp. Please refer to Dmitry's page for more information about timestamps.

    Time-dependent correction for run 11 and tower masking

    Since tower masking is done with the same inputs in the database, by setting the time dep. correction values negative for bad towers, the following are explained below: 1) how towers are masked, 2) how time dep. correction files are written, 3) how they are loaded into the database, and 4) how the information can be fetched from the database.

     

    • How towers are masked

    The xml file (Sub_L.xml) submits the jobs run by run and lists the towers for each run.

    Channels are filled into histograms with different ADC & energy thresholds: 
    /star/data01/pwg/mriganka/fms2015/jetData2011/new/hotCh/StRoot/StFmsHitMaker/StFmsHitMaker.cxx
    SET-1
                h0->Fill(((d-8)*1000)+c);
                hh0->Fill(((d-8)*1000)+c,adc);
    SET-2
                if(adc>10){
                h10->Fill(((d-8)*1000)+c);
                hh10->Fill(((d-8)*1000)+c,adc);
                }
    SET-3
                if(adc>50){
                h50->Fill(((d-8)*1000)+c);
                hh50->Fill(((d-8)*1000)+c,adc);
                }
    SET-4
                if(adc>100){
                h100->Fill(((d-8)*1000)+c);
                hh100->Fill(((d-8)*1000)+c,adc);
                }
    SET-5
               he0->Fill(((d-8)*1000)+c);
                hhe0->Fill(((d-8)*1000)+c,e);
    SET-6
               if(e>1){
                he1->Fill(((d-8)*1000)+c);
                hhe1->Fill(((d-8)*1000)+c,e);
                }
    SET-7
               if(e>10){
                he10->Fill(((d-8)*1000)+c);
                hhe10->Fill(((d-8)*1000)+c,e);
                }
    SET-8
                if(e>20){
                he20->Fill(((d-8)*1000)+c);
                hhe20->Fill(((d-8)*1000)+c,e);
                }

     
    For each SET the 4 towers with the highest bin content are selected. A tower is marked to be masked if it belongs to this 4*8 set. There is considerable overlap between the sets; generally the number of masked towers is less than 20. A sketch of this selection is given below.
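
    A minimal sketch of that selection logic (the histogram names follow the snippets above; the helper functions are illustrative and not part of the official code):

        #include <set>
        #include "TH1.h"
        // Sketch: take the 4 highest-content bins from each threshold histogram and
        // keep the union as the hot-tower list. Identifiers are (d-8)*1000+c,
        // assuming one channel per bin. Note: the histogram is modified in place.
        void topFourBins(TH1* h, std::set<int>& hot) {
          for (int n = 0; n < 4; n++) {
            int bin = h->GetMaximumBin();
            hot.insert((int)h->GetBinCenter(bin));
            h->SetBinContent(bin, 0);   // zero it so the next maximum is found
          }
        }
        std::set<int> selectHotTowers(TH1* sets[8]) {  // {h0,h10,h50,h100,he0,he1,he10,he20}
          std::set<int> hot;
          for (int i = 0; i < 8; i++) topFourBins(sets[i], hot);
          return hot;   // typically fewer than 20 unique towers
        }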

    The hot channels are written run by run into the "txt" directory (with gif files in "gif") by /star/data01/pwg/mriganka/fms2015/jetData2011/new/hotCh/test/run.C, one *.txt file per run. The information from these *.txt files is used when the database is written run by run.

    • How time dependent files are prepared

    The basic concept is to mark the channels for which the correction is significant, so that with minimal inputs in the database we can perform the time dep. correction without overloading the database.
    Some details are written  here : https://drupal.star.bnl.gov/STAR/blog/mriganka/fms-led-time

    In starver SL14i, TimeDepCorr.root was generated locally at PSU. We list the channels to be corrected for each time slot; each time slot is a section of 10k events. Since the event number is unique in the trigger file and the microDst files, there is a one-to-one relation between time slots and MuDst event numbers.

    /star/data01/pwg/mriganka/fms2015/jetData2011/old/led/test1

    Running the macro txt1/read.C lists: Run Number, #Events(fms_st), time-slots, Entries in Database.
    It takes f_runNumber.txt, which is generated by Sub_L.xml (RunFastJet.C) from the list file (schedA96107FA7AD49F7008BAA3DE299BDEE7_$JOBINDEX.list) written for each run.

    -- inputs in the file (f_runNumber.txt) are written in StRoot/StJetMaker/mudst/StjFMSMuDst.cxx
    StjFMSMuDst::RCorrect(TMatrix* padc,TMatrix* pEmat,Int_t det,Int_t iEW,Int_t runnum,Int_t EvNum,Int_t SegNM)
    /star/data02/pwg/mriganka/root12fms/fmstxt/RunDep.root (Steven Heppelmann run11 list)

    fprintf(f,"%10d %6d  %6d\n", mCurrentRunNumber, nEnt, timeSlot);
    fprintf(f," %hu %hu %lu  %f \n", detectorId, ch, 999000000, 1); -- for last time slot
    fprintf(f," %hu %hu %lu  %f \n", detectorId, ch, endEvent, corr); -- for time slots where chages are required

     

    • Writing in database

     Together with the time dependent corrections, the masked tower inputs are set into the database (time dep. correction values are set to negative for masked towers).
    /star/data01/pwg/mriganka/fms2015/jetData2011/old/led/test1/AKIO/writeFMSAll.C
    Masked tower inputs : sprintf(fileTowers,"/star/data01/pwg/mriganka/fms2015/jetData2011/new/hotCh/test/txt/tower_rejected_%d.txt",iR);
    Time dep. correction inputs : sprintf(fileTimeDep,"txt/f_%d.txt",iR);

    • Reading after implementation of time dep. and hot tower rejection in StFmsHitMaker

    The macro runs jet reconstruction with FMS towers using tower masking and time dep. correction.
    /star/data01/pwg/mriganka/fms2015/jetData2011/new/hotCh/t/mod/RunJetFinder2009pro.C
    Two switches are set in StFmsHitMaker
    fmshitMk->SetTimeDepCorr(1);   // does time dep. correction : default is "0"
    fmshitMk->SetTowerRej(1);      // hot tower rejection : default is "1"

     

    Software

    The FPD/FMS related software page.

    FMS data model

    StFmsHit, StFmsCluster and StFmsPoint are the data model defined for storing FMS data in StEvent/StMuDST:

    • StFmsHit
      protected:
      UShort_t mDetectorId; // Detector Id
      UShort_t mChannel; // Channel in the detector
      UShort_t mQTCrtSlotCh; // QT Crate/Slot/Ch, 4 bits each for Crate and Slot, 8 bits for channel (see the unpacking sketch after this list)
      UShort_t mAdc; // ADC values
      UShort_t mTdc; // TDC values
      Float_t mEnergy; // corrected energy
    • StFmsCluster
      protected:
      Int_t mDetectorId;
      Float_t mTotalEnergy ; // total energy contained in this cluster (0th moment)
      Float_t mX ; // mean x ("center of gravity") in local grid coordinate (1st moment)
      Float_t mY ; // mean y ("center of gravity") in local grid coordinate (1st moment)
      Float_t mThetaAxis; // theta angle in x-y plane that defines the direction of the least-2nd-sigma axis
      Float_t mSigmaX ; // 2nd moment in x
      Float_t mSigmaY ; // 2nd moment in y
      Float_t mSigmaMin; // 2nd sigma w.r.t. the least-2nd-sigma axis
      Float_t mSigmaMax; // 2nd sigma w.r.t. the axis orthogonal to the least-2nd-sigma axis
      StPtrVecFmsHit mHits;
    • StFmsPoint
      protected:
      Int_t mDetectorId;
      Int_t mCategory ; // category (1: 1-photon, 2: 2-photon, 0: could be either 1- or 2-photon)
      Float_t mFittedEnergy ; // fitted energy of the hit
      Float_t mX ; // fitted x in local grid coordinate (1st moment)
      Float_t mY ; // fitted y in local grid coordinate (1st moment)
      Float_t mChiSquare; // Chi-square of the fitting
      StPtrVecFmsCluster mMotherCluster;
      StPtrVecFmsPoint mSiblingPoint;
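
    The mQTCrtSlotCh packing noted above (4 bits each for crate and slot, 8 bits for channel) could be unpacked as in this sketch; the exact bit order is an assumption here, so the StFmsHit accessors should be treated as authoritative.

      // Sketch (assumed bit layout): crate in the top 4 bits, slot in the next 4,
      // channel in the low 8 bits of the packed UShort_t.
      void unpackQT(unsigned short qtCrtSlotCh, int& crate, int& slot, int& channel) {
        crate   = (qtCrtSlotCh >> 12) & 0xF;
        slot    = (qtCrtSlotCh >>  8) & 0xF;
        channel =  qtCrtSlotCh        & 0xFF;
      }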

    StFmsCollection is the entry point to FMS related data in StEvent/StMuDST.

    public:
    void addHit(StFmsHit*);
    void addCluster(StFmsCluster*);
    void addPoint(StFmsPoint*);

    unsigned int nHits() const;
    unsigned int nClusters() const;
    unsigned int nPoints() const;

    StSPtrVecFmsHit& hits();
    const StSPtrVecFmsHit& hits() const;
    StSPtrVecFmsCluster& clusters();
    const StSPtrVecFmsCluster& clusters() const;
    StSPtrVecFmsPoint& points();
    const StSPtrVecFmsPoint& points() const;
    protected:
    StSPtrVecFmsHit mHits;
    StSPtrVecFmsCluster mClusters;
    StSPtrVecFmsPoint mPoints;
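
    A hedged usage sketch, assuming the collection is retrieved from StEvent via fmsCollection() inside a user maker's Make():

    StEvent* event = (StEvent*)GetInputDS("StEvent");
    if (event && event->fmsCollection()) {
      StFmsCollection* fms = event->fmsCollection();
      for (unsigned int i = 0; i < fms->nHits(); i++) {
        StFmsHit* hit = fms->hits()[i];
        // hit->detectorId(), hit->channel(), hit->adc(), hit->energy(), ...
      }
    }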

    StFmsPointMaker

    This page describes the StFmsPointMaker.

    FTPC

     

     

    Welcome to the FTPC Homepage

    First FTPC Event

    Detector Operation - all information necessary to operate FTPCs

    Data Quality Assurance

    Pad Monitor

    Calibrations

    Documentation

    Software

    Hardware

    DAQ

    InterlockSystem

    SlowControl

     

     

     

    DAQ

    FTPC DAQ

    The FTPC data acquisition system is tied in with the main STAR DAQ. From the receivers onward, all hardware components are identical. The software for the electronics channel mapping is committed to the STAR cvs repository in $CVSROOT/online/ftpc/MapFtpcElectronicsToDaq.

     

    The signals from the FTPC electronics are sent to DAQ, where each readout board, mezzanine card, and ASIC is mapped to an FTPC sector, daqrow, and daqpad.

    HARDWARE:

    2 identical FTPCs - Ftpc West = 1, Ftpc East = 2
    5 rings/FTPC
    2 padrows/ring
    10 padrows/FTPC
    6 sectors/padrow
    6 sectors/padrow x 10 padrows/FTPC = 60 hardware sectors/FTPC
    160 pads/sector
    9600 pads/FTPC
    = 19200 electronics channels for both FTPCs

    ELECTRONICS:
    20 readout boards (RDOs)
    10 RDOs/FTPC
    3 mezzanine cards/RDO
    = 3 mezzanine cards/RDO x 10 RDOs/FTPC = 30 electronic sectors/FTPC

    DAQ:
    The signals from the FTPC electronics are mapped to the hardware with the FTPC_PADKEY.h file.

    DAQ notation:
    30 daqsectors/FTPC (Ftpc West 1-30, Ftpc East 31-60)
    2 daqrows/daqsector
    320 daqpads/daqsector = 2 x 160 daqpads/daqrow

    Ftpc Ring Padrow Sector daqrow daqpads
    1 1 1,2 1,7 1,2 1-160
    2,8 1,2 161-320
    3,9 1,2 321-480
    4,10 1,2 481-640
    5,11 1,2 641-800
    6,12 1,2 801-960
    2 3,4 13,19 1,2 1-160
    14,20 1,2 161-320
    15,21 1,2 321-480
    16,22 1,2 481-640
    17,23 1,2 641-800
    18,24 1,2 801-960
    3 5,6 25,31 1,2 1-160
    26,32 1,2 161-320
    27,33 1,2 321-480
    28,34 1,2 481-640
    29,35 1,2 641-800
    30,36 1,2 801-960
    4 7,8 37,43 1,2 1-160
    38,44 1,2 161-320
    39,45 1,2 321-480
    40,46 1,2 481-640
    41,47 1,2 641-800
    42,48 1,2 801-960
    5 9,10 49,55 1,2 1-160
    50,56 1,2 161-320
    51,57 1,2 321-480
    52,58 1,2 481-640
    53,59 1,2 641-800
    54,60 1,2 801-960
    2 1 1,2 6,12 1,2 1-160
    5,11 1,2 161-320
    4,10 1,2 321-480
    3,9 1,2 481-640
    2,8 1,2 641-800
    1,7 1,2 801-960
    2 3,4 18,24 1,2 1-160
    17,23 1,2 161-320
    16,22 1,2 321-480
    15,21 1,2 481-640
    14,20 1,2 641-800
    13,19 1,2 801-960
    3 5,6 30,36 1,2 1-160
    29,35 1,2 161-320
    28,34 1,2 321-480
    27,33 1,2 481-640
    26,32 1,2 641-800
    25,31 1,2 801-960
    4 7,8 42,48 1,2 1-160
    41,47 1,2 161-320
    40,46 1,2 321-480
    39,45 1,2 481-640
    38,44 1,2 641-800
    37,43 1,2 801-960
    5 9,10 54,60 1,2 1-160
    53,59 1,2 161-320
    52,58 1,2 321-480
    51,57 1,2 481-640
    50,56 1,2 641-800
    49,55 1,2 801-960


    Data Quality Assurance


    FTPC Data Quality Assurance and Quality Control

       During data taking it is very important to monitor the quality of the data being taken. This should be done at least once a day and whenever the FTPC has been turned back on.

    •   Online histograms - "Panitkin Plots"
    •   FTPC Fast Offline QA histograms 
    •   FTPC temperature readings - check if the FTPC temperature readings are in the Slow Control Archive. There should be readings for both Ftpc body (West 6 readings, East 6 readings) and Ftpc extra (West 6 readings, East 7 readings) temperatures. It is especially important to check these after the FEEs have been turned off; there is a non-negligible possibility that the body temperature readings for FTPC East do not come up again. If this is the case, call an expert.

     

     

     

     

     

     

    Fast Offline QA Histograms

    FTPC Fast Offline Histograms

     

     

    Online histograms - "Panitkin Plots"

     

    During a run, online histograms are collected for each sub-system. The online histograms give the first indication of detector hardware problems. Therefore it is extremely important to check these histograms in the control room on evp.starp.bnl.gov at the end of each run. They should also be saved so that they are available for future viewing via the RunLog Browser.

    A histogram description has 9 arguments:
    1 for 1-D, 2 for 2-D histo
    histogram name
    histogram title
    number of bins x-axis
    lower bound x-axis
    upper bound x-axis
    0 for 1-D histos, number of bins y-axis for 2-D histos
    0 for 1-D histos, lower bound y-axis for 2-D histos
    0 for 1-D histos, upper bound y-axis for 2-D histos

    The following histograms are collected for the FTPC (they are on pages 25-27 up until January 3, 2008 when BEMC histograms were added. The FTPC histograms moved to pages 30-32 starting with run 9003070):

    Please inform the FTPC group if you observe large scale changes in any of the FTPC histograms.
    The plots on page numbers 26 and 27/ pages 31 and 32 starting with run 9003070 are the most important. They must be checked for every run.

    Currently the sample histograms shown below are for the dAu 2007 Run 8340066
    • page 6  FTPC event size on a log10 scale
                  Contains the event size histograms for TPC, BEMC, FTPC, L3, SVT and TOF
                  Args: 1,h11_ftp_evsize,log of FTPC Buffer Size,50,0.,10.,0,0,0

    •  
    • page 7  FTPC fraction of total event in %
                  Contains the event size fraction histograms for TPC, BEMC, FTPC, L3, SVT and TOF
                  Args: 1,h105_ftp_frac,FTP Event Size Fraction (%),50,0.,100.,0,0,0

    •  
    • page 8  FTPC event size on a log10 scale for the last 2 minutes
                  Contains the event size vs time histograms for the TPC and FTPC
                  Args: 2,h337_ftp_time_size_2min,FTPC Event Size vs time(sec),600,0.,600.,80,0.,8.

    •  
    • page 25/page 30:
                   % of FTPC occupied channels
                  Args: 1,h49_ftp,FTPC Occupancy (in %),100,0.,100.,0,0,0

                  FTPC event size on a log10 scale
                  Args: 1,h11_ftp_evsize,log of FTPC Buffer Size,50,0.,10.,0,0,0

                   % of FTPC occupied channels for laser runs
                  Args: 1,h51_ftp_OccLaser,FTPC Occupancy (in %) Lasers,100,0.,100.,0,0,0

                   Total charge in FTPC on log10 scale
                  Args: 1,h48_ftp,log of Total FTPC Charge,30,0.,6.,0,0,0

                   % of FTPC occupied channels for pulser runs
                  Args: 1,h50_ftp_OccPulser,FTPC Occupancy (in %) Pulsers,100,0.,100.,0,0,0
    • page 26/page 31: FTPC chargestep histograms
                  Args: 1,h109_ftp_west_time,FTPC West timebins,256,-0.5,255.5,0,0,0
                  Args: 1,h110_ftp_east_time,FTPC East timebins,256,-0.5,255.5,0,0,0

                  The chargestep corresponds to the maximum drift time in the FTPC (clusters from inner radius
                  electrode) and is located near 170 timebins. This position will change slightly with atmospheric
                  pressure. The other features of these plots are due to electronic noise and pile-up background.


                  If there is no step visible near 170, something is wrong. Please contact an FTPC expert.

    • page 27/page 32:
                  Args: 2,h338_ftp_west,FTPC West pad charge: pad vs row,10,0.,10.,960,0.,960.
                  Args: 2,h339_ftp_east,FTPC East pad charge: pad vs row,10,0.,10.,960,0.,960.

                  Red areas = hot FTPC electronics
                  White areas = dead FTPC electronics

                  Always check these 2 plots in the first run after an FTPC trip. If new red or white areas show up,
                  first check the FTPC anode voltages. If the anode voltage readings are correct, please contact
                  an FTPC expert.


                  On December 17, 2007, Alexei masked out RDO 10 East. On page 27/32 RDO 10 East is now a white area.

     

     

    Slow Control Archive

    FTPC Slow Control Archives

     

     

     

     

    Detector Operation

     

       FTPC expert(s) currently on call:

       Alexei Lebedev           alebedev@bnl.gov         Phone: 3101         Cell phone: 631 255 4977

     

    TO ALL DETECTOR OPERATORS:

    Please remember to place the FTPC into "Physics" mode and verify anode voltages (1800 W, 1800 E) after taking pedestals.

    In case of an FTPC alarm, please check the "Temperature-Pressure" window - if ALL readings in FTPC East and/or West = 0, call Alexei.

    In case of an anode trip, please enter the time the trip occurred and which FTPC and sector tripped into the FTPC log book.

    Y2004 Standard set points

      Detector always ON
      Change configuration to operate in different modes or to set the detector to rest waiting for beam
      Set the system in ``Magnet ramp'' configuration ONLY when the magnet is ramping

    ALWAYS!!!!!! verify Anode Voltages = voltages for selected run mode !!!


    Voltage Settings
    Cathode voltage (2 channels)     -10kV±5V 
    Anode voltage (2x6 channels), Physics    +1800V ±2V West, +1800V ±2V East
    Anode voltage (2x6 channels), laser    +1200V±2V West, +1200V±2V East
    Anode voltage (2x6 channels), pedestals    +1000V±2V West, +1000V±2V East
    Anode voltage (2x6 channels), stand by   0V West/East, (+~15V are displayed) 
    Gating grid (2x4 channels)  -76V±2V open, -76V±115V closed

     



    Y2007/2008 Standard Values
    Water pressure in (West/East)     -400mbar -> -100mbar 
    Water temperature (West/East)     <31 C 
       
    O2(ppm) (West/East)     <10ppm
    H2O (dp C) (West/East)     <-50 C dp
    Ar flow = CO2 flow (West/East)     72l/h->78l/h; West=East 
       
    Cathode current 0.14mA 
    Anode current  -15nA < I < 15nA 

     

     

     

    For more information, please consult:

    * FTPC Operations Manual


    * FTPC Shift Checklist


    * FTPC Run School

     

     

     

     

    InterlockSystem

    From this page it is possible to read or download (in the form of a 3-page postscript file) the FTPC interlock document.

    Last update 9/2/2000 by Gaspare Lo Curto

     

     

    Pad Monitor

    FTPC Pad Monitor

      

    The pad monitor we use was developed by Andreas Schuettauf. It is referred to as the Munich Pad Monitor. (The FTPC pad monitor development was started by Jennifer Klay in Davis. Her documentation contains a lot of useful information. Unfortunately, Jennifer left STAR before she finished the pad monitor.)

     

     

    documentation

    FTPC PadMonitor Project

    Welcome! You have found the webpage dedicated to providing information and documentation on the FTPC PadMonitor. The FTPCs (Forward Time Projection Chamber) are a key sub-system of the STAR Experiment at RHIC. The PadMonitor is a software program designed to allow for monitoring of FTPC performance. The program can be separated into two basic parts: the GUI (Graphical User Interface) and the data I/O interface. The GUI has been designed using Java with the data I/O interface provided by the Java Native Interface to C and C++ code. This choice of languages reflects the desire to marry cross-platform transportability with legacy code already written for STAR DAQ data. In addition, we hope to be able to run the PadMonitor as a Servlet or Javascript from the Web, allowing collaborators access to view detector performance or issue trigger commands from a distance.


     

    The following pages provide code, a description of the code and its development, and links to useful sites, as well as information about the STAR data acquisition and the FTPC prospective raw data format.

     

  • Raw Data Format This page provides background on the STAR DAQ Raw Data Format and the proposed FTPC version of the DAQ RDF.
  • FTPC PadMonitor Code Information includes a Java Class Library description, links to STAR DAQ Documentation and an explanation of Herb Ward's "Mock Data" writer code.
  • Source Code The most recent updates of the code can be found here. Please note that DAQ Format Reader code may be modified and older than what is available from the STAR CVS Repository.
  • Current Status/Immediate Future Informational page; also contains screenshots of the current program.
  • Links Various resources for this project as well as links to STAR information may be found here.
  •  

    Raw Data Format

    STAR DAQ Links

    STAR DAQ Home Page This is the local working home page for the DAQ Group. Specific links of interest on this site include:

  • Software Documentation This page contains the Format Reader Specification for the DAQ RDF Format Reader written by M. Levine, M. Schulz and J. Landgraf as well as other DAQ documentation.
  • Raw Data Format Document describes the structure of data files written out by DAQ. Information in this document details the general pointer structure and the specific pointer structure for the main TPC. Space has been provided to include documentation from sub-system groups.


  •  

    FTPC Raw Data Format

    The FTPC data format resembles that of the main TPC in many ways. Both systems utilize the same basic readout electronics; however, certain physical differences between the two detectors call for different handling. These differences are outlined here:

    Main TPC

    24 Sectors-each one handled by a single VME crate
    Each VME crate contains 6 receiver boards and one "Sector Broker" (to handle global sector characteristics and communication)
    Each receiver board contains 3 mezzanine boards which buffer the data and host the STAR Cluster Finding ASICs (pedestal subtraction, gain correction, 10bit->8bit data conversion, 2D cluster finding)

    To reconstruct a single sector's data, one must gather:
    From each of six receiver boards, the contributions from all three mezzanine boards

    24 sectors in the main TPC
    384 pads per sector
    45 padrows per sector
    Number of pads per padrow variable (due to wedge-shape of sectors)
    512 timebins per pad

    Forward TPCs

    2 Chambers-each one handled by a single VME Crate
    Each crate contains 10 receiver boards and one "Chamber Broker" (performs the same functions as the Sector Broker but for a single FTPC Chamber)
    Each receiver board handles three FTPC Sectors (30 sectors per chamber)
    Each receiver board has 3 mezzanine boards. The simplest sector->mezzanine mapping is 1:1, but may not necessarily be so. In order to be general, the pointer structure is set up such that from the receiver board, one points to a sector and from the sector one points to the mezzanine board.

    To reconstruct a single sector's data, one must gather:
    From one receiver board, the sector via contribution from one mezzanine board

    2 chambers in the FTPC sub-system
    30 sectors per chamber
    320 pads per sector
    2 padrows per sector
    160 pads per padrow
    512 timebins per pad

    Ideally, one would like to hide this hierarchy behind a simpler user interface. This has been done by making the FTPC Format Reader very similar to the main TPC. Users request data from a specific sector, numbered 1 to 60 (1-30 for West FTPC, 31-60 for East FTPC). The user numbering scheme follows the FTPC Cabling design drawings. The mapping to correct receiver board and mezzanine contributions for a given sector is provided by a header file included with the Format Reader.

    FTPC Raw Data Format Document (postscript)


    DAQ/Data Schematics

    View some schematic pictures of the DAQ design and the current Raw Data Format:

  • Schematic of DAQ Design
  • Schematic of Data Format


  • The following is a diagrammatic sketch of the information path explained in the DAQ Raw Data Format Document.

  • STAR Data Model...1
  • STAR Data Model...2
  • STAR Data Model...3
  • STAR Data Model...4
  • STAR Data Model...5
  • STAR Data Model...6
  •  

    SlowControl

     

    FTPC Slow Control

     

     

     

    Software

     

    FTPC Software

    • Overview of the FTPC software
    • List of the FTPC software location and contact people

    Calibration Software

    Simulation and Reconstruction Software

    •  SEE FTPC in ITTF for information on Maria Mora's integration of the FTPC into ITTF

     

    How to

     

     

     

     

     

     

     

     

     

     

     

     

    List

    Group: DAQ
    • Map FTPC electronics to DAQ
      Source code: $CVSROOT/online/ftpc/MapFtpcElectronicsToDaq
      Contact: Janet
    • FTPC "gain table" for DAQ
      Contact: Frank

    Group: Online
    • FTPC Detector Control
      Documentation; interactive control panels running under ftpccrew@cassini.starp
      Source code: /afs/rhic.bnl.gov/star/doc_public/www/ftpc/Operations
      Contact: Terry
    • Slow Control
      Contact: Terry
    • FTPC Slow Control Monitoring Facility
      Plots and lists contents of slow control archives
      (On January 23 the FTPC Y2004 slow control archive was split into 2 parts.)
      Contact: Terry
    • Pad Monitor
      Source code: $CVSROOT/online/ftpc/FtpcPadMonitor
      Contact: Janet
    • Online Tools:
      1) Event Pool Reader - read and histogram events from /evp
      2) FTPC Display - evp event -> *.ps file which contains 2-D pad vs. timebin plots and 3-D plots of deposited charge
      3) Noise Finder - evp event -> *.ps file which shows dead and/or noisy pads (see page 23 !!)
      Contacts: Terry, Janet
    • Online histograms ("Panitkin plots")
      Contact: Janet
    • Drift Velocity Monitor (OBSOLETE)
      LabView program running on bond.starp
      bond.starp -> virgo.starp samba connection
      virgo.starp cron job - copies files from /DV2/Today to /DV2/Store
      Conversion of DVM data files to root format
      StFtpcDVMMaker - DVM data analysis programs
      Source code: $CVSROOT/offline/StFtpcDVMMaker

    Group: Calibration
    • Noise Finder: GetGain, FindNoise, WriteAmpSlope
      Contact: Terry
    • Drift Maps: Magboltz1, Magboltz2
      Source code: $CVSROOT/StRoot/StFtpcDriftMapMaker
                   $CVSROOT/StRoot/macros/examples/FtpcDriftMapMaker.C
                   $CVSROOT/online/ftpc/Magboltz2
      Contact: Janet
    • Laser Analysis:
      1) Convert st_laser*.daq files to *.root files
      2) StFtpcLaserMaker
      Contact: Terry

    Group: Databases
    • Slow Control -> Online database -> Offline database
      Contact: Terry
    • Offline database:
      Geometry_ftpc/
        ftpcAsicMap, ftpcClusterGeometry, ftpcDimensions, ftpcInnerCathode, ftpcPadrowZ
      Calibrations_ftpc/
        ftpcAmpOffset, ftpcAmpSlope, ftpcCoordTrans, ftpcDeflection, ftpcDriftField,
        ftpcEField, ftpcElectronics, ftpcGas, ftpcGasOut, ftpcGasSystem, ftpcTimeOffset,
        ftpcVDrift, ftpcVoltage, ftpcdDeflectiondP, ftpcdVDriftdP
      Contact: Terry
    • StDb/idl - idl definition files for FTPC database tables
      Source code: $CVSROOT/StDb/idl
      Contact: Janet

    Group: Simulation/Reconstruction
    • StFtpcSlowSimMaker
      Source code: $CVSROOT/StRoot/StFtpcSlowSimMaker
      Contact: Frank
    • StFtpcClusterMaker
      Source code: $CVSROOT/StRoot/StFtpcClusterMaker
      Contact: Joern
    • StFtpcTrackMaker
      Source code: $CVSROOT/StRoot/StFtpcTrackMaker
      Contact: Markus
    • pams/ftpc/idl - FTPC idl files used in FTPC Slow Simulator
      Source code: $CVSROOT/pams/ftpc/idl
      Contact: Janet

    Group: QA
    • St_QA_Maker
      Source code: $CVSROOT/StRoot/St_QA_Maker
      Contacts: Gene, Janet

    Group: Embedding
    • StFtpcMixerMaker
      Source code: $CVSROOT/StRoot/StFtpcMixerMaker
      Contact: Frank

    Group: Analysis
    • StFtpcMcAnalysisMaker
      Creates the NTuples used for efficiency, momentum resolution and so on by a wide variety of tools.
      (Code works but is not yet ready to pass a CVS code review)
      Contact: Frank

    Group: ITTF
    • StiFtpc
      Source code: $CVSROOT/StRoot/StiFtpc
      Contact: Maria Mora-Corall

    Group: Documentation
    • Web pages
      /afs/rhic.bnl.gov/star/doc_public/www/ftpc
      /afs/rhic.bnl.gov/star/doc_private/www/ftpc
      Contact: Janet

     
    This page was written by Janet Seyboth on February 10, 2004

     

     

    Online

     

    FTPC "Online" Calibration Software

    The FTPC "online" calibration software is a collection of programs and macros which run on the FTPC
    online machines (virgo-run09.starp,cassini-run09.starp) processing either data directly from the event pool or from
    a *.daq file.


    All the FTPC calibration software is committed to the online/ftpc branch of the STAR CVS repository.
    Each committed module contains a README file and a doc subdirectory which contain all the pertinent
    information regarding the purpose and use of the program and/or macros contained in the module.

    The FTPC "online" calibration program library is located on virgo-run09.starp in /data/FtpcOnlineLibrary
    and can be used from the ftpccrew account on virgo-run09.starp.



    Attention: The FTPC 3-D display software does not run with ROOT_LEVEL 4.04.02


                                                          Programs in the FtpcOnlineLibrary

    Calib_Tool:
                calib_tool creates a graphical user interface which allows the user to interactively analyze
                the *.root files produced when the daq files from an FTPC laser run are processed
                with the FTPC "private chain". The FTPC "private chain" produces a special output
                file when DEBUGFILE is defined in StFtpcClusterMaker and StFtpcTrackMaker.

    CardFinder:
                The CardFinder utilities, PadAnalysisCreate and PadAnalysisWrite, analyze the pads, locate
                the bad and/or noisy chips and produce lists of the bad electronics.

    EvpPoolReader:
                Reads, processes and displays event(s) from the event pool: /evp/a, /evp/b or
                the "live" stream (the event which is currently being input into the event pool).
                Due to firewall restrictions you can only access the event pool and the live stream from the starp sub-net.
                Click here for running instructions.

    FtpcPadMonitor:
                padmon is a software program designed for monitoring the FTPC hardware performance.

    NoiseFinder:
                NoiseFinder contains the programs and macros which produce the FTPC online and offline
                gain tables. Run the NoiseFinder programs GetGain and FindNoise from the FtpcOnlineLibrary
                on virgo.starp.bnl.gov to create a gain table. The gain table flags out dead
                and/or noisy pads. The NoiseFinder utility, WriteAmpSlope_cc.so, converts the
                gain table into an ftpcAmpSlope.C file which is added to the offline database
                in the Calibrations_ftpc/ftpcAmpSlope table.

    Online_Tool:
                Online_Tool contains the shell script ftpc_display and the L3 display
                macros. The ftpc_display shell script provides an interface to FTPC online
                event processing. An event from the event pool or from a daq file is read in
                and the cluster finding and tracking results are displayed with the 3-D viewer.
                Click here for running instructions.

     

    calib_tool

    FTPC Calib_Tool

    calib_tool creates a graphical user interface which allows the user to interactively analyze
    the *.root files produced when the daq files from an FTPC laser run are processed with
    the FTPC "private chain"
               "ftpc db globT detDb tpcDb dbutil in dst event"

    The FTPC "private chain" produces a special output file when DEBUGFILE is defined in
    StFtpcClusterMaker and StFtpcTrackMaker.

    For information on calib_tool, type

         calib_tool -h

    which will print the following:

    Usage: calib_tool [-l] [-b] [-n] [-q] [dir] [file1.root]
    Options:
            -b : run in batch mode without graphics
            -n : do not execute logon and logoff macros as specified in .rootrc
            -q : exit after processing command line macro files
            -l : do not show splash screen
           dir : if dir is a valid directory cd to it before executing

        ?        : print usage
       -h        : print usage
      --help    : print usage
      -config  : print ./configure options

    Choose [dir] and [file] via panel. There is a sample *.root file in the examples
    sub-directory.

     

    This page was created by Janet Seyboth on March 7, 2006

    Overview

    This page was updated by Janet Seyboth on January 25, 2001

     

    Reconstruction

     

    FTPC Reconstruction

     

     

     

    Hit Finding

                                   StFtpcClusterMaker Documentation

    If raw data already exists, either produced by the FTPC slow simulator (StFtpcSlowSimMaker) or from a daq data import, StFtpcClusterMaker will immediately invoke the cluster finder StFtpcClusterFinder . Otherwise, the FTPC fast simulator StFtpcFastSimu will be invoked to generate hits from geant data.


     

    Tracking

     

    StFtpcTrackMaker - FTPC Conformal Mapping Tracker

    The StFtpcTrackMaker replaces St_fpt_Maker in the FTPC reconstruction chain.

     

    StFtpcTrackMaker uses the clusters from StFtpcClusterMaker to reconstruct the tracks in the FTPC using conformal mapping. A list of all the "found" hits along with the number actually found and the maximum number of hits possible are saved for each track.

    Then these "found" hits are fit using a 2x2-D track fitter. The impact parameter at the pre-vertex is calculated. All tracks with an impact parameter less than max_Dca are flagged as primary track candidates whose vertex is the pre-vertex. The momentum fit results for the unconstrained fit are saved in the track table.

     

  • Software documentation
  • FTPC Tracking Algorithms
  • Efficiency and contamination plots
  • Contamination study
  • Some nice pictures
  •  

     

     

     

     

     

     

     
     
     
     

     14553 Geant hits (about 1000 tracks per Ftpc) after being tracked with the new Conformal Mapping Algorithm.


    Markus Oldenburg

     

    Last modified: Apr 20 2005

     

     

     

     

    Simulation

     

    FTPC geometry

    • ftpcgeo  - FTPC geant geometry definitions

     

    Momentum Resolution

     

    Mock Data challenges

     

    FTPC Fast Simulator

     

    FTPC Slow Simulator

     

     

     

     

     

    HF Momentum Resolution Study

     

    FTPC Half Field Momentum Resolution Study

    This page shows results of some tests done in Munich to estimate the effect running at half the normal magnetic field strength would have on the FTPCs' momentum resolution.

    The tests were done comparing the momentum resolution of two venus runs produced for mock data challenge 2. MDC 2 data is not very suitable for FTPC testing as it was calculated at extremely high geant resolution, so that each track caused several hits in each padrow. However, all the available data at half field is from MDC 2.

    One event of each run was processed through the fast simulation chain, assuming that the quality of ExB corrections will not change with the field strength. Fast simulation makes it possible to compare reconstructed tracks to geant tracks by simply comparing the constituent points.

    The plots show the reconstructed momentum divided by the geant momentum and are in good agreement with earlier studies done with realistic simulation parameters and the measured magnet field. (Earlier simulations done by Michael Konrad assuming a perfectly uniform field looked somewhat better.) Only perfectly reconstructed tracks with 10 hits that actually belonged to the same geant track are used in the plot.

    The first plot shows the resolution at full magnetic field, the second at half field. Both peaks are nicely centered around one, showing essentially correct momentum resolution, but the RMS of the distribution increases from 15 to 20 percent when going to half the magnetic field. This is in contradiction with the obvious assumption of a linear increase of the errors, which, however, is not really to be expected at closer inspection. Also, the number of properly reconstructed tracks is smaller in the second plot, but it is yet unclear if this is due to a larger range of delta spirals in the smaller field, to some other effect or just statistics.

    Full field:

    Half field:

     


     

     

    MDC1

    FTPC and MDC1

    The FTPC slow simulator chain:

    • fss --- The FTPC Slow Simulator
    • fcl --- The FTPC CLuster finder
    • fpt --- The FTPC tracker
    • fte - track evaluator

       

    was included in the bfc.kumac for MDC1. NO dst information was written out.

    Results:

    We were able to find and correct programming errors which caused NaN's.

    This page was written by Janet Seyboth on February 6, 1999

     

     

    MDC2

    FTPC and MDC2

     

    In MDC2, the GEANT step size in the FTPC acceptance was reduced, so that every passing particle left a series of geant hits in every padrow. This became a serious challenge for the FTPC software and, even after optimization, increased the calculation time significantly. To get high-statistics for strangeness studies in the TPC, the planned schedule was changed in favor of more TPC fast simulator runs. Therefore, only a small number of events was run through the FTPC chain in MDC2, both with and without the slow simulator.

    The complete FTPC slow simulator chain was run in the ROOT chain macro (bfc.C):

    • St_fss_Maker --- The FTPC Slow Simulator
    • St_fcl_Maker --- The FTPC CLuster finder
      In runs when no raw_data from the FTPC slow simulator exists, St_fcl_Maker ran the FTPC fast simulator module
      ffs.

       

    • St_fpt_Maker --- ran the following FTPC modules:

      fpt - track finder

      fte - track evaluator

      fde - dE/dx calculator

    The FTPC track, point and dE/dx information was written out to the dst by St_glb_Maker.

    This page was written by Janet Seyboth on February 6, 1999

     

     

    MDC4

    FTPC and MDC4

    BNL, April 26 - May 10, 2001

    Status History for FTPC in MDC4

    Reconstruction

    Purpose - MDC4 gave us the opportunity to test the performance and stability of our reconstruction chain with a large amount of simulated data before real data taking begins. It was the first large scale test of our slow simulator and cluster finder.

    MDC4 Datasets - Dataset A (20,000 Au+Au MEVSIM events), DataSet B (100,000 pp PYTHIA events) and Dataset C (Au+Au peripheral STARLIGHT events) were processed in MDC4

    BFC Chains - The FTPC ran only in the Au+Au BFC chain since our simulators can not handle pile-up

    starnew new->SL00d

    ROOT 3.00.06

    bfc.C(#events,"MDC4","input dataset")

    Identified Reconstruction Tasks

    Implement StAssociationMaker

    Pileup - we have to add pileup to our simulators in order to process pp simulation data (Contact: Akio,Jan)

    Chisq - determine the correct values for the $STAR/StarDb/ftpc/ftpcClusterPars.C parameters 
                       timeDiffusionErrors[1]
                       timeDiffusionErrors[2] 
    

    Reconstructed Datasets on FTPC BNL Cluster

    There are 80 MEVSIM events in /cassini/data1/MDC4/Gstar/rcf0181_01_80evts.fzd

    A complete set of output *.root files for these 80 events is available in /cassini/data1/MDC4

    Analysis

    Status - The analysis half of MDC4 was held for Friday, May 4 - Thursday, May 10

    PWGs Expectations - The analysis focused on the new detectors and their physics capabilities

    Spectra PWG - larger acceptance with FTPC

    EbyE PWG - flow with FTPC

    Strangeness PWG - first attempts to find v0's in FTPC

    MDC4 Status Meetings

    There was a telephone conference each Tuesday and Thursday at 3pm EST in the White Pit in Bldg. 118 (dial in x8261)

    MDC4 Wrap-up Meeting

     

    On Thursday, May 10 starting around 1:30pm EST there was a wrap-up meeting. Each sub-system was requested to summarize their experiences gained in MDC4 regarding efficiencies, problems encountered, etc. This summary should also contain a short summary on the new physics capabilities.

    JPWG Meeting

    A PWG Workshop was held for Thursday, May 10 - Monday, May 14


    This page was written by Janet Seyboth on April 27, 2001

     

    Sim vs. Real Data - dAu

    Comparison Slow Simulator and Real data dAu Min Bias


    The distribution of the number of hits on global tracks is NOT the same for real data and for Hijing simulation run through the slow simulator (?). What happens with all these 10-hit tracks? Is the gain table used not good enough to reproduce the holes?
     
     

    [Figures: FTPC West and FTPC East; AuAu minbias with low multiplicity in comparison with dAu minbias; REAL DATA and SIMULATION]

     

    The residuals are much better for Simulation than for Real data. We still need to do something here )-:

     

     

    Sim vs. Real Data - AuAu

    Comparison Slow Simulator and Real data for AuAu minbias events

    The trend of the number of hits on track versus multiplicity is the same for real data and for Hijing simulation run through the slow simulator.
     
     

    [Figures: FTPC East and FTPC West; AuAu minbias from low multiplicity (red) to high multiplicity (blue); REAL DATA and SIMULATION]

     

    The residuals are much better for Simulation than for Real data. We still need to do something here )-:

     

     

    StFtpcFastSimu

    StFtpcFastSimu - FTPC Fast Simulator

    The FTPC fast simulator is implemented in C++ (as a part of StFtpcClusterMaker).

    It was converted from Fortran (pam/ffs) to C++ by Holm Hümmler and is supported by Janet Seyboth.

    StFtpcFastSimu simply takes the hit points registered by geant and turns them into FTPC points. Some cuts are applied to remove points that are outside the sensitive volume of the FTPC sectors and to account for the loss of hits due to cluster merging. Some of the geant information is kept in the gepoint table to be used in efficiency studies.

     

    ftpcgeo

    ftpcgeo.g

    ftpcgeo.g defines the geometry for the FTPC in geant simulations. It contains information about the main aluminum cylinder, its support structures, the fieldcage and a rather detailed description of the readout chambers, and it defines the sensitive volumes.

    Other geometry files of interest for the FTPC simulation are pipegeo.g for the beampipe and svttgeo.g with a description of the SVT, its support cone, the beampipe support and the shield layers.

    This page was updated by Holm Hümmler on September 16, 1999

    FTT

    fobToHV Database table request

    fobToHV
    1) Variables and update frequency:
    ```
    octet fob; /* fob 1-96 */
    octet cable; /* HV cable 1 - 32 */
    octet board; /* HV board 0 - 2 */
    octet channel; /* HV channel on board 1 - 11 */
    ```

    Update frequency:
    A few times per year. In 2022 it has changed twice so far.

    2) idl structure
    ```
    /* fobToHV.idl
    *
    * Table: fobToHV
    *
    * description: // sTGC (ftt) map from FOB to HV
    *
    */

    struct fobToHV {
        octet fob[96]; /* fob 1-96 */
        octet cable[96]; /* HV cable 1 - 32 */
        octet board[96]; /* HV board 0 - 2 */
        octet channel[96]; /* HV channel on board 1 - 11 */
    };
    ```
    3) non-indexed, update entire database each time.
    4) size of structure: 384
    5) jdb
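
    A hedged sketch of how this table might be fetched in an offline maker once it is in the DB, assuming the usual St_fobToHV wrapper generated from the idl and a Geometry/ftt path (both are assumptions, not confirmed names):
    ```
    // Sketch (assumed names): fetch fobToHV through St_db_Maker in a user maker.
    TDataSet* db = GetDataBase("Geometry/ftt");       // path is an assumption
    if (db) {
        St_fobToHV* table = (St_fobToHV*)db->Find("fobToHV");
        if (table) {
            fobToHV_st* row = table->GetTable();      // one row holding the 96-element arrays
            for (int i = 0; i < 96; i++) {
                // row->fob[i], row->cable[i], row->board[i], row->channel[i]
            }
        }
    }
    ```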

    fobToStation

    fobToStation
    1) Variables and update frequency:
    ```
    octet fob; /* fob 1-96 */
    octet station; /* station 18 - 35 */
    ```

    Update frequency:
    A few times per year. In 2022 it has changed once so far.

    2) idl structure
    ```
    /* fobToStation.idl
    *
    * Table: fobToStation
    *
    * description: // sTGC (ftt) map from FOB to Station
    *
    */

    struct fobToStation {
        octet fob[96]; /* fob 1-96 */
        octet station[96]; /* station identifier 1 - 35 */
    };
    ```
    3) non-indexed, update entire database each time.
    4) size of structure: 192
    5) jdb

    fttDataWindows DB request

    DATA WINDOWS
    1) Variables and update frequency

    ```
    octet uuid; /* fob(1-96) x vmm(1-4) = index 1 - 384 */
octet mode; /* 0 = timebin, 1 = bcid */
    short min; /* time window min > -32768 */
    short max; /* time window max < 32768 */
    short anchor; /* calibrated time anchor for BCID */
    ```

    Update frequency:
    Every single run.
    This is essentially a status table which defines a few things:
    uuid: identifies the VMM
    mode: time mode to use for determining good data ranges (e.g. 0 OFF, 1 use timebin, 2 use bcid, etc)
    min, max: define time window in timebins or bcids
    anchor: calibrated bcid=0 for each VMM - changes every run

    2) IDL structure
    ```
    /* fttDataWindows.idl
    *
    * Table: fttDataWindows
    *
    * description: // sTGC (ftt) data time windows
    *
    */

    struct fttDataWindows {
        octet uuid[385]; /* fob(1-96) x vmm(1-4) = index 1 - 384 */
        octet mode[385]; /* 0 = timebin, 1 = bcid */
        short min[385]; /* time window min > -32768 */
        short max[385]; /* time window max < 32768 */
        short anchor[385]; /* calibrated time anchor for BCID */
    };
    ```

    3) non-indexed, update entire database each time.
    4) size of structure: 3080 bytes
    5) jdb
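A rough sketch of how one such window entry might be applied when selecting hits is given below (hypothetical names; the exact encoding of mode and the handling of the anchor should be taken from the description above, which is the authoritative one):

```
#include <cstdint>

// Hypothetical per-VMM view of one fttDataWindows row.
struct DataWindow {
    uint8_t mode;   /* time mode: off / timebin / bcid (see description above) */
    int16_t min;    /* time window minimum */
    int16_t max;    /* time window maximum */
    int16_t anchor; /* calibrated bcid = 0 reference, updated every run */
};

// Accept a hit if its time, measured in timebins or in BCIDs relative to the anchor,
// falls inside [min, max]. Assumes mode == 0 means "no window applied".
bool inTimeWindow(const DataWindow& w, int timebin, int bcid)
{
    if (w.mode == 0) return true;                 // window disabled
    const int t = (w.mode == 1) ? timebin         // timebin mode
                                : bcid - w.anchor; // BCID mode, relative to the calibrated anchor
    return t >= w.min && t <= w.max;
}
```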

fttHardwareMap request info

    1) Variables and update frequency:
    Row_num    
    FEB_num    
    VMM_num    
    VMM_ch         
    strip_ch

    Very rare update - stores a hardware map which is static unless there is a mistake discovered or a hardware failure that requires remapping. May be updated < 10 times in next few years.

    2)  IDL structure:
    ```
    /* fttHardwareMap.idl
    *
    * Table: fttHardwareMap
    *
    * description: // sTGC (ftt) hardware map
    *
    */

    struct fttHardwareMap {
        octet feb[1251]; /* 1-6 */
        octet vmm[1251]; /* 1-4 */
        octet row[1251]; /* 0-4 */
        octet vmm_ch[1251]; /* 0-64 */
        octet strip[1251]; /* 0-166*/
    };
    ```

    3) I am always a little confused about this. Updates rare, expect 1250 rows each with the structure above
    4) size of structure
    5 bytes * 1251 array lengths
Total size = 6255 bytes

    5) jdb

    HFT

HFT (Heavy Flavor Tracker), the Silicon Vertex Upgrade of the STAR experiment

    HFT MANAGEMENT
    CD4

    SOFTWARE
    Integration Status


    Run-15
     

    PXL (Main)
    Xin's early Alignment page 2014
    Shusu's QA pages on PDSF (protected)
    QA pages in Online Area (need update for Run15)
    2014 Final Geometry Tables 
    LBNL Pixel pages 

    IST [new]
    2014 Calibration page

    SSD [new]

    AFS WEB-DOC AREA

    PUBLICATIONS
     
    PR - FOTOS 
    EVENT Displays
    Production 2014

     


     

    HFT Run-15

Information and links to other pages relevant for operation, QA, calibrations, and running considerations. Content will be added as it becomes available.

    Physics Goals and Triggering

Sub-systems

    • PXL
      • Expert List
      • Operational Procedures
      • Online Q&A plots
    • IST
      • Expert List
      • Operational Procedures
    • SSD
      • Expert List
      • Operational Procedures









     

    Software

ALIGNMENT: [CVSDOC, XIN's RUN14 pages: METHODS, SIMU, DATA]
SURVEY: [PIXELIST, SSD]
GEOMETRY: [PXL-PROTOTYPE (old)]
QA (Shusu), RUN14-QA pages
REPORTS: [Software MONTHLY]
SIMULATIONS: [starlight-docs, CVS] [digmaps]
MEETINGS
Disk Space Needs
Developers guidelines/help
Dmetric definition
CURRENT ACTIVITIES/TASK LIST

    CURRENT ACTIVITY/TASK LIST

(Columns of the original table: ACTIVITY/TASK, STATUS, WHO; the Comments and Links columns were empty.)

RECONSTRUCTION
  PXL: Cluster Finder - In progress (Jan/Hao/?); Hit Finder - In progress (Jan/Hao/?)
  IST: Cluster Finder - pending (Yaping); Hit Finder - pending (Yaping)
  SSD: Cluster Finder - TBD (?/Jonathan); Hit Finder update - TBD (?/Jonathan)

SIMULATION
  GEOMETRY:
    IDS - BeamPipe - Other: In progress (Flem./Jon./Amilkar/?)
    PXL - RUN13: In progress (Flem./Jon./Amilkar)
    PXL - RUN14: In progress (Flem./Jon./?)
    IST: In progress (Yaping/?)
    SSD: In progress (Jon./Amilkar/?)
  PXL SIMULATORS (Mustafa, ?):
    Fast: In progress
    Slow: Design phase
    Mixer (embedding): Pending
  IST SIMULATORS (Yaping, ?):
    Fast: In progress
    Slow: Pending
    Mixer (embedding): Pending
  VMC:
    Hit recorder: studying examples (Spiros)
    Tracking: TBD (???)

CALIBRATIONS [CVS]
  PXL: Calibrations (includes local->Global), Db makers - Long, Hao, Xin, Jon.
  IST: Calibrations (includes local->Global), Db makers - Yaping, Zhenyu
  SSD: Calibrations (includes local->Global), Db makers - (unassigned)

ANALYSIS CODES: (empty)
     

    OFFLINE

     RECONSTRUCTION:

    CLUSTER [PXL,IST,SSD]

    HIT [PXL, IST, SSD]

     FAST-OFFLINE  QA
    SIMULATION  DATA STRUCTURES 
    Overview
    PxlRawHit 
    PxlHit
    IstRawHit
    IstHit
    SsdHit

    Physics Performance Plots/Figures

    DCA Plots for Run14 and Aluminum only

    The document with the latest DCA plots on page 8 [all ladders] and page 9 [ Aluminum only] 
    is attached below (HFT Transitions to Operations).

    I also put the .root files of these figures on the web here:

     

    http://phys.kent.edu/~margetis/STAR/HFT/index.php?dir=&file=Prod14-DCAxy-p-Alum.root

    http://phys.kent.edu/~margetis/STAR/HFT/index.php?dir=&file=Prod14-DCAz-p-Alum.root

    http://phys.kent.edu/~margetis/STAR/HFT/index.php?dir=&file=Prod14new-DCAxy-p-Full.root

    http://phys.kent.edu/~margetis/STAR/HFT/index.php?dir=&file=Prod14new-DCAz-p-Full.root

     

    DATA

     
    SIMULATIONS

    • You can also look/get figures from the HFT PROPOSAL or the HFT CDR
    • ALL figures in the proposal and the CDR are included, in nice picture format, in the subdirectories below

    B - mesons

     

    • Yifei's B-meson plots, using the subtraction method (with known D-meson spectra), in the CDR are in various formats (starting with Fig16 and up) HERE

     

    D+

    D0

     

    • Yifei maintains a web page with the CDR and CD1 D0 figures in various formats HERE

    Ds

     

    • UCLA's latest figure with some explanation on Ds-> 3-body via the Phi channel is HERE
    • The low pt bin is not an artifact as finer binning reveals, see figure HERE

    Lc

     

    • Jan's Lc money plots with explanation are HERE
• He recently released a Lc pt spectrum, which is attached as Lc_pt_spectra.png

    2014 Geometry

    TpcOnGlobal [from Db]

     entryTime  | beginTime           | flavor   | LocalxShift | LocalyShift | LocalzShift | PhiXY      | PhiXZ       | PhiYZ       | XX         | YY         | ZZ         | PhiXY_geom | PhiXZ_geom  | PhiYZ_geom  |    
     

    2014-03-17 00:59:50 | 2014-02-12 00:00:01 | ofl  | -0.17800000 | -0.67519999 | -0.08086000 | 0.00000000 | -0.00035000 | -0.00027000 | 1.00000000 | 1.00000000 | 1.00000000 | 0.00096220 | -0.00018970 | -0.00004400 |

DEFINITIONS: $STAR/StDb/idl/tpcGlobalPosition.idl

/* Table: tpcGlobalPosition description: */

struct tpcGlobalPosition {
    float LocalxShift;   /* cm : x position of TPC center in magnet frame */
    float LocalyShift;   /* cm : y position of TPC center in magnet frame */
    float LocalzShift;   /* cm : z position of TPC center in magnet frame */
    float PhiXY;         /* radians: rotation angle around z axis (not used) */
    float PhiXZ;         /* radians: rotation angle around y axis  XTWIST */
    float PhiYZ;         /* radians: rotation angle around x axis  YTWIST */
    float XX;            /* XX element of rotation matrix (not used) */
    float YY;            /* YY element of rotation matrix (not used) */
    float ZZ;            /* ZZ element of rotation matrix (not used) */
    float PhiXY_geom;    /* radians: geometrical rotation angle around z axis psi, -gamma (not used) */
    float PhiXZ_geom;    /* radians: geometrical rotation angle around y axis theta, -beta */
    float PhiYZ_geom;    /* radians: geometrical rotation angle around x axis psi, -alpha */
};
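For orientation, the ofl row quoted above maps onto this structure as follows (a sketch only; the field order follows the IDL and the values are the beginTime 2014-02-12 entry):

```
// Sketch: the Run-14 "ofl" row above, written out field by field.
tpcGlobalPosition pos;
pos.LocalxShift = -0.17800000;  // cm
pos.LocalyShift = -0.67519999;  // cm
pos.LocalzShift = -0.08086000;  // cm
pos.PhiXY       =  0.00000000;  // rad (not used)
pos.PhiXZ       = -0.00035000;  // rad, XTWIST
pos.PhiYZ       = -0.00027000;  // rad, YTWIST
pos.XX = pos.YY = pos.ZZ = 1.00000000;  // rotation-matrix diagonal (not used)
pos.PhiXY_geom  =  0.00096220;  // rad (not used)
pos.PhiXZ_geom  = -0.00018970;  // rad
pos.PhiYZ_geom  = -0.00004400;  // rad
```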


    PXL PxlOnPst 
    [/star/u/qiuh/hft/DbRun14/surveyAndCalibration/StarDb/Geometry/pxl ]

    1, 0.000129826, 0.000299546, -0.000129848, 1, 7.36199e-05, -0.000299536, -7.36588e-05, 1,
    0.0183753, -0.000644915, -0.0221809

    PXL Half-on-Pixel

    Original Survey (Bob):

     0.999999, -0.000890449, -0.00131775, 0.000890639, 1, 0.000143418, 0.00131762, -0.000144591, 0.999999,  
    -0.0470471, 0.00022503, -0.00608269,

       0.999999, 0.000890661, 0.00131761, -0.000890471, 1, -0.000144552, -0.00131774, 0.000143379, 0.999999,  
    0.0470471, -0.00022503, 0.00608269, 

    Alex's original corrections:

    M =
    |0.999988 -0.00049038 -0.00482335|
    |0.000485125 0.999999 -0.00109049|
    |0.00482388 0.00108814 0.999988|

    shift_vector = {-0.199987, -0.0586518, -0.00557364} in cm

    After averaging

       0.999999, -0.000646411, 0.00109379, 0.000645657, 1, 0.000689533, -0.00109424, -0.000688826, 0.999999, 
    0.052902, 0.0295242, -0.00356489,

       0.999999, 0.000646971, -0.00109437, -0.000647725, 0.999999, -0.000688232, 0.00109393, 0.000688943, 0.999999, 
    -0.052902, -0.0295242, 0.00356489,

    PXL SectorOnHalf [sample]
    [/star/u/qiuh/hft/DbRun14/surveyAndCalibration/StarDb/Geometry/pxl ]

    Sector-1
    0.999998, -0.00195912, -0.00070653, 0.00195803, 0.999997, -0.0015296, 0.000709523, 0.00152821, 0.999999,
    0.0110013, 0.00218197, -0.00736481
    ......
    Sector-10
    0.999996, 0.00259448, 0.00143927, -0.00258847, 0.999986, -0.00454193, -0.00145104, 0.00453818, 0.999988, 0.0134133, -0.0565213, 0.00166389


    PXL SensorTPS/SensorOnLadder/LadderOnSector are here:
     
    [/star/u/qiuh/hft/DbRun14/surveyAndCalibration/StarDb/Geometry/pxl ]

     

    CD4

    ----->CD4 DOCUMENTS<-------
    https://drupal.star.bnl.gov/STAR/future/proposals/hft-project/opa-review-july-31-2014

    ----->Older stuff<---------

    Yifei's efficiency plots are here
    http://www.star.bnl.gov/protected/heavy/yfzhang/hft/plots/CD4/

    Daniel's UPC electrons page:  here and here (recent) and here (latest)

    Amilkar's Hijing events area:

    /star/u/aquinter/ksu/Run14/Simulation/Hijing/VFMinuit or KFVertex/hijing.*.root

    For Thin/Thick case DCA documents:

    https://drupal.star.bnl.gov/STAR/system/files/cd23_homework_0.pdf
    https://drupal.star.bnl.gov/STAR/system/files/pointingResolution_summary_nolog.pdf

    Disk Space request

    Here is a summary of Points of View and Space requests

    General Comments:

- We should consider this to be a request in addition to our existing institutional disks, which are mainly devoted to user analysis and were not planned specifically for the HFT calibration.

- (Xin) I think for us, the big request will come from the simulation production and tracking test. My verbal comment on Friday about >10 TB is a rough estimate of the buffering space needed for this test. I believe the total space we really need may be larger than that (which is hard to estimate now), but 10 TB should be a good start. We may also approach the S&C team, as some of the tracking tests are not only related to us; they should be of global interest.

    Subsystem Calibrations:

    The PXL calibration shouldn't need too much space, we may need to finish some missing runs, but that in principle can be made as a subsystem request to the production team.

    SSD calibration may need some amount of space. Jonathan/team may have a better estimate.

IST needs ~4 TB of disk space for calibration and performance studies. At the moment, Yaping is using 1.5 TB of disk space for IST performance studies, alignment and code development, while Babak is using 1.1 TB for IST calibration. They are using their own data01+data02 disk space plus temporary data05, as UIC does not have institutional disk space (and even if we did, this kind of IST work should be on dedicated disk space, not institutional space). So when I add these numbers up with 1 TB of contingency, I get 4 TB.

Request: It looks like a request of ~15 TB if we consider the IST/SSD calibration needs and the big chunk of ~10 TB for simulation and data sample analyses.
      

    P.S. the numbers others are requesting

    MTD: 25TB

    BEMC: 2TB

    FGT: 1TB

    EEMC: 1TB

     

    Documents

     

    THE HFT SOFTWARE DOCUMENTATION AREA
    • Older DOC pages (but with many goodies) are HERE
    • The Pixel detector maintains its own documentation area HERE

     

    Older documentation pages

    Data Sets/Chain Options/Geometry Tags


    1. The data sets used for the CD0 proposal in the Spring of 2007 are in the production pages: www.star.bnl.gov/public/comp/prod/TrackingUpgrade.html
    2. The BFC options used for that production are also there:  www.star.bnl.gov/devcgi/dbProdOptionRetrv.pl
    3. The production log files are gzipped in the prodlog area, eg.  /star/rcf/prodlog/P07ia/log/trs/rcf1298*.log.gz
    4. Geometry tags and explanation (sort of) are here: drupal.star.bnl.gov/STAR/comp/simu/r-d-geometry-tags/

    CD0 -> CD1 Software TASK LIST

     

     There are two important areas that fall under our responsibility:

    1) Physics analysis code/performance simulations

The committee (see also the attached .doc file) asked for more detailed simulations of

       a) D0s : flow and RAA performance with realistic systematic errors

b) B-meson capabilities quantified (see Question-3/4 related attached docs)

           * reconstruction channels, methods, rates

           * triggering

           - To begin look at the attached ppt file on B-meson work done for the SVT and some thoughts in attached Question_3.ppt file

           - The Yale group is willing to help someone start on this. Gang Wang (UCLA) and also Quan Wang (Purdue) have volunteered to work on this.

       c) L_c capabilities also quantified since these are unique to STAR

           - Jan Kapitan is currently refining his analysis using the CD0/UPGR13 production

           - YiFei is also interested (also in using TOF+HFT)

     

    2) Infrastructure/Development

       a) Geometry. We need to move on and build/use a new geometry for our simulations. This impacts the Simulators and the tracking optimization. Gerrit initialized already the discussion with Andrew but expert help is critical.

b) Tracking. We need to reconcile hand/Geant numbers, i.e. debug/optimize Sti performance. BNL core group involvement is a must here. We need to clean our code of fuzzy factors etc. Simplification of the geometry for this particular study might be a good idea. Work on this item is reported in Tracking Efficiency Investigations

      c) Simulators. Xin Li (Purdue) showed interest in this task. He will present an outline of a plan soon.

      d) Calibration/Alignment procedures. Spiros to visit LBL to gather info from survey people.

      e) Web interface uplift. Willie will help with initial effort. Some templates were posted (see minutes of 3/21/08) HFT software meeting

     

This is a non-prioritized list. We need to agree on an action plan with a timeline since the time for a new production for CD1 is near (before summer).

     

    CDR-Software

    Drafts for Software Chapters in HFT-CDR

     

    Fast/Slow Simulators

    1. Nov. 2007: Willie made some updates to StPixelFastSim and his documentation is HERE
    2. TBD

    Full GEANT/MC simulations

     

    1. Yifei's production page: http://www.star.bnl.gov/protected/lfspectra/yfzhang/hft/HFTproduction.htm
2. My 10/12/2007 presentation on DCA resolution is at a node with restricted access
3. Willie's single track efficiency study (Nov/2007) is at Single-particle results

    Hand Calculations by J. Thomas

    The area where most recent talks/results/ were presented is at
    rnc.lbl.gov/~jhthomas/public/HFT/

    The code is on rcf in:

    ~jhthomas/STAR/hft/DetectorDesignV5.C (good on 10/1/2007)
    It compiles in root with
    >.x DetectorDesignV5.C++

    The macro is sort of self-documented.

     

    Hand Calculations-Formulas by H.Wieman/Gene VB/Victor P./


    WARNING:
    Many pages in Howard's area can only be viewed by IE or Safari browsers.
    I have attached pdf files  of these pages at the bottom.


    The point to begin about the derivation of 'Probability of finding the correct hit' is here:
    www-rnc.lbl.gov/~wieman/HitFinding2DXsq.htm
    Once you are there you can follow (backwards) the whole history through the embedded links.

    The 'Hit association formula' is then compared with MC (MathCAD) and agreement is achieved here:
    www-rnc.lbl.gov/~wieman/HitFindingStrip2DMC2Gory.htm

    The specific area where 'strips' or 'pads' are handled (some warnings are posted there too):
    www-rnc.lbl.gov/~wieman/HitFindingPadVsStrip.htm

    Part of the warnings is discussed in this page by Gene:
    www.star.bnl.gov/~genevb/IST_Study/Corrections.html


    A summary of HW work on this in a presentation form is here:
    www-rnc.lbl.gov/~wieman/HitAssociationMar2007Wieman.ppt

    Gene's work was presented in March 2007 in a talk:
    www.star.bnl.gov/~genevb/IST_Study/IST_HitMatching_03_2007.html

    How-to pages

    Code developers need to keep in mind that:

    1) Need to follow STAR code standards
2) Need to supply some documentation: Tutorial [example]
    3)
     Packages need to be peer reviewed

    Below are attached some older 'HOW TO' documents.......

    Patch Pixel prototype


The initial study for a Pixel prototype (patch) was done in MathCAD by Howard Wieman.
The goal was set to measure the D cross section (flow and high-pT suppression seemed unrealistic for a
short run/lifetime device). To that end one aims to catch the mean-pT region, i.e. around 1 GeV in pT.

    The study is here:
    www-rnc.lbl.gov/~wieman/D_efficiency.htm which was updated, for comparison to GEANT purposes, to this: www-rnc.lbl.gov/~wieman/D_efficiency_2.htm and later updated to this:
    www-rnc.lbl.gov/~wieman/D_efficiency_3.htm
    WARNING
    :
    These pages require IE or Safari to be viewed and played properly.
    A pdf version is also attached (no .avi movies though) for universal viewing.

    A GEANT simulation with some less ideal situation can be found here:
    www.star.bnl.gov/protected/strange/margetis/hft/patch3.pdf


    Pile-Up Studies


    This page is for documentation of pile-up work. Pile-Up refers to
    1. Non-triggered events which occur during the Pixel readout
      1. Au+Au or p+p
      2. UPC background (mostly electrons)
    2. General beam background due to Pixel proximity to the beam
    Initial hand estimates (9/2007) by Jim Thomas are encouraging and they can be found in the file below named 'SimulationRequest'.

    Tracking Efficiency Investigations

     

    • A simplified geometry was used for these tests. It is tagged as UPG15. Some GEANT snapshots are attached at the bottom.
    • Resolution ....

     

    documentation for analysis of simulated (McEvent) data


    Jan's pages are
    HERE

    HFT project

    The Heavy Flavor Tracker (HFT) project is part of the STAR upgrades for midterm and RHIC-II running.
A CD-0 proposal has been prepared and submitted to BNL (August 30) for submission to DOE. The proposal was submitted to DOE by the ALD, S. Vigdor, on or about October 2. It can be found HERE
    and it is also attached at the bottom of the page.

A CD-0 review is expected sometime in January 2008.
    This page will keep final documents on the project and documentation for the sub-systems (PIXEL, IST) and links to relevant other information.

    Beam Pipe Doc-1

    Beam Pipe Doc-2

    Project Meetings

The project meetings are documented mainly via links to agendas and presentations or via other pages.
So far this only includes the monthly meetings.

    Later meetings can be found here



    CD0 preview

This page will contain the talks given at the meeting. The program was:

    HFT cd0- dry run and vetting review

     

    8:30-9:00    Committee closed session
     09:00          Introduction T. Hallman 20+10
     09:40          Physics motivation N.Xu 40+10
     10:50          Break
     11:15          Software overview S. Margetis 20+10 (by Phone from Kent)
     11:45          Performance simulations J. Thomas 30+10
     12:30          Lunch - committee closed session
     14:00          Monte Carlo results X. Dong 20+10
     14:40          IST B. Surrow 20+10
     15:30          PIXEL H. Wieman 30+10
          
     17:00 -18:00 Committee closed session (home work questions)
     

    Tuesday, Dec 18

     09:00          HFT Mechanics E. Anderssen 30+10 (presentation by Phone from CERN)
     09:40          Project overview HG. Ritter 30+10
     10:20          Cost & Schedule S. Morgan 20+10

      10:40          Summary F. Videbaek 10+10
      11:30         Presentation on Home work
     12:00          Committee Lunch
     14:00-         Closeout

    Panel Members:

    Carl Gagliardi (Chair)
    Carlos Lourenco
    Tom LeCompte
    Dick Majka
    Lanny Ray

    Steve Vigdor, ex-officio


    RHIC beam and luminosity

The first attached file describes the expected performance of RHIC with stochastic and e-cooling. The emphasis is on the bunch lengths and distributions.

    HFT related publications

This page will contain information on publications, conference proceedings and presentations of HFT-related talks.
These are all non-project talks.

    Work in progress (12/10/14)

    Publications

    Conference Proceedings

    Presentations


    Integration Status

     

>>> Integration Status >>> (columns: module; status in the development area, e.g. $CVSROOT/offline/hft/ or private directories; status in the official area, $CVSROOT/)
    PXL
    StRoot/ StPxlClusterMaker/ Review complete Deployed
    StPxlDbMaker/ Review complete Deployed
    StPxlHitMaker/ Review complete Deployed
    StPxlMonMaker/ StPxlQAMaker/ Under review. Non critical for data reconstruction chain  
    StPxlRawHitMaker/ Review complete Deployed
    StPxlSimMaker/ Review complete Deployed
    StPxlUtil/ Review complete Deployed
    StEvent/ XXX: List individual files for clarity Deployed. May benefit from changes as for IST. For example, there are constants defined multiple times
    StiPxl/   Deployed but still may benefit from a review
    StarDb/Geometry/pxl/ Currently we use the local tables in this directory. Not clear if we can run without them. Need to understand what is available in the official STAR DB tables DB tables initialized
    IST
    StRoot/ StIstCalibrationMaker/ To be reviewed. Non critical for data reconstruction chain  
    StIstClusterMaker/ Review complete Deployed
    StIstDbMaker/ Review complete Deployed
    StIstFastSimMaker/
    StIstSimMaker/
    Review complete Deployed
    StIstHitMaker/ Review complete Deployed
    StIstQAMaker/ To be reviewed. Non critical for data reconstruction chain. Should it be renamed to StIstMonMaker similar to StPxlMonMaker?  
    StIstRawHitMaker/ Review complete Deployed
    StIstUtil/ Review complete Deployed
    StEvent/ Reviewed by Thomas. XXX: List individual files for clarity. There seems to be an issue with StIstFastSimMaker that depends on StEvent Deployed
    StiIst   Deployed but still may benefit from a review
    StarDb/Geometry/ist Currently we use the local tables in this directory. Not clear if we can run without them. Need to understand what is available in the official STAR DB tables DB tables initialized
    SSD
    StRoot/ StSsdClusterMaker/   Empty module. The code is in StSsdPointMaker/
    StSsdDaqMaker/   Expect new code in StSstDaqMaker/
    StSsdDbMaker/   Reuse existing code
    StSsdMonMaker/ Updates available in Jonathan's private directory Updated existing module with SST related code
    StSsdEvalMaker/   Empty module. Apparently not used
    StSsdFastSimMaker/   Reuse existing code
    StSsdPointMaker/ Updates available in Jonathan's private directory Updated existing module with SST related code
    StSsdSimulationMaker/   Reuse existing code
    StSsdUtil/   Reuse existing code
    StSstDaqMaker/ Peer review in progress...  
    StiSsd   Updated existing module with SST related code
    StarDb/Geometry/ssd/   Directory exists but the content is ~3 years old

           — Implementation stage
           — Review stage
           — Code can be moved to $CVSROOT
           — Ready for extensive testing with production libraries

    REVIEWS

     
• The HFT (CD0) proposal, as submitted to DOE, can be found HERE
• The HFT CDR, as submitted to DOE at the CD1 review, is HERE. CD1 documents are HERE
• The HFT (TDR) is HERE

    HLT

    Those pages are provided for the High Level Trigger activities.

    HLT Linux Cluster User's Guide

    HLT Linux Cluster User's Guide

- How to get an account on the HLT farm

    1. Get an account to the STAR Protected Network. See instructions at https://drupal.star.bnl.gov/STAR/comp/onl/accessing-star-protected-network
2. Once you can access the STAR online gateway stargw.starp.bnl.gov, send an email to starsupport@bnl.gov (cc: kehw@bnl.gov) to request access to xeon-phi-dev.starp.bnl.gov, which is the entry point to the HLT cluster.

- How to log in to the HLT cluster
The STAR HLT cluster sits in a local network, which has an entry point on the STAR online network.
To reach the cluster, one needs to make a few ssh hops.
    1. (if you are outside of BNL) Connect to the RCF gateway rssh.rhic.bnl.gov or cssh.rhic.bnl.gov.
    The RCF NX service is recommended for better graphics performance.
    https://www.racf.bnl.gov/docs/services/nx

    2. From the RCF gateway or the NX node, connect to the STAR gateway: stargw.starp.bnl.gov
    You must have a STAR online enclave account to be able to do this.
    https://drupal.star.bnl.gov/STAR/comp/onl/accessing-star-protected-network

    3. From the STAR gateway, connect to the HLT cluster entry point: xeon-phi-dev.starp.bnl.gov (aka L409)
    Now you are on STAR HLT cluster.

The ssh ProxyJump or ProxyCommand option is recommended for jumping through multiple hosts:
    https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts#Passing_Through_One_or_More_Gateways_Using_ProxyJump
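For example, from an offsite machine the whole chain can be done with a single ProxyJump command (assuming the same account name on every host; adjust to your own accounts):

% ssh -J myuser@rssh.rhic.bnl.gov,myuser@stargw.starp.bnl.gov myuser@xeon-phi-dev.starp.bnl.gov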

    - Computing Environment

       * SL 7.3
       * No STAR environment pre-configured.
* If you need the STAR environment, add the following lines to your ~/.cshrc
    --------------------------------------------------------------------------

    source /net/l402/data/fisyak/STAR/packages/.DEV2/setupDEV2.csh
    setup NODEBUG
    setup 64b
    starver TFG18m
    --------------------------------------------------------------------------

    - NFS mounted home dir: /star/u/[user name]
    Size: 1.3TB, RAID 5, no quota yet, but please use your home dir wisely

     - NFS mounted Storage
    /net/l401/data/scratch[1,2]
    /net/l402/data
    /net/l403/data
    /net/l404/data
    * 10+TB each, available on all HLT nodes.

- Ceph storage
    On all nodes
    /hlt/cephfs

     - Condor Queue: Intel Xeon E5-2670 @ 2.50GHz
    condor submitter: xeon-phi-dev.starp.bnl.gov (aka l409), l404

     - Intel Xeon Phi coprocessor 7120P
    l405-l408, l410-l417, two cards per node
    l409(xeon-phi-dev) one card
In order to use the coprocessor, the Intel C/C++ compiler is needed; it can be found at
    /opt/intel_17
    Documentation for Intel compiler and libraries can be found at
    https://software.intel.com/en-us/intel-parallel-studio-xe-support/documentation
    More information about Xeon Phi KNC (1st generation, x100 family) can be found at
    https://software.intel.com/en-us/articles/intel-xeon-phi-coprocessor-codename-knights-corner

    - PGI Compiler
    The PGI compiler community edition has been installed in
    /software/pgi

One needs the following settings to use the compilers:

    % setenv PGI /software/pgi
    % set path=(/software/pgi/linux86-64/18.10/bin $path)
    % setenv MANPATH "$MANPATH":/software/pgi/linux86-64/18.10/man
    % setenv LM_LICENSE_FILE /software/pgi/license.dat:"$LM_LICENSE_FILE" 

    ---------------------------------------------------------------------------------
    Contact Person:
    Hongwei Ke
    kehw@bnl.gov
    631-344-4575

    HLT QA

     

    KFParticle Tutorial

KF Particle Finder is a package for the reconstruction and selection of short-lived particles using Kalman filter based mathematics. Properties of the package:

• the vector of parameters includes the physical parameters: particle position, momentum and energy;
    • all parameters are provided together with their errors estimated by the Kalman filter;
    • short-lived particles and their decay products are treated exactly in the same way;
    • as a result, complicated decay topology can be easily reconstructed;
    • because of the general parametrisation, the package does not depend on the detector geometry;
• the package is easily portable between experiments;
    • the package is optimised and fully vectorised;
     

KF Particle Finder is being developed by the FIAS group. Documentation on the package:

An example macro showing how to run KF Particle Finder is located in "/star/u/mzyzak/KFParticleFinderExample/". The folder contains:

    • setDEV2.csh - a script to configure the environment to TFG18d release;
    • StRoot - the latest version of the KF Particle software together with its interfaces;
    • data - a folder with one MuDst and one PicoDst file of the run 2016;
    • Analysis.C - the main macro to run the example;
• lMuDst.C - macro to load the needed libraries, called by Analysis.C.

    In order to run the test:

    • copy the folder;
    • configure the environment running "source setDEV2.csh";
    • compile the code by running "cons";
    • to run PicoDst: root -l Analysis.C;
    • to run MuDst: root -l 'Analysis.C(100000, 1)'.

    The output is a table with efficiencies every 100 events and "pico.root" or "mu.root" with the histograms.


    STAR HLT User's Manual

    STAR HLT Weekly Discussion

     

    STAR HLT Weekly Discussion

    ---------------------------------------------------------------------- 

    Bluejeans Meeting Info.

    https://bluejeans.com/934506067

    To join via Phone:

    1) Dial:

    +1.408.740.7256

    +1.888.240.2560(US Toll Free)

    +1.408.317.9253(Alternate number)

    2) Enter Conference ID: 934506067

    ---------------------------------------------------------------------- 
    Jan 23, 2018
     - Maksym, Reconstruction of open charm in STAR with KF Particle Finder
         - Petr, KF Overview

    Oct 3, 2017
         - Biao Tu, updates on dNdx fit with derivative

    Sep 26, 2017
         - Biao Tu, improve dNdx fit by providing derivative of fitting function 

    Aug 22, 2017

     - Hongwei. Produce K and p embedding samples with HFT tracking in order to check the efficiency, DCA distributions, etc. due to the real alignment. These samples will also be used to check the PID, such as dEdx and TOF.
    - Maksym and Iouri. Check the reconstruction efficiency for simulation with real alignment for Lambda_c, D0, D+ and Ds. This task is to check that the alignment and analysis mechanisms are working. After that, check the efficiency and purity from embedding data.
    - Michal. For ideal geometry, check the KFParticle reconstruction of decays. Results are expected to be presented in a table of purity and efficiency.
    - Yuri. Fixed the library version TFG17g. This version should be used for the above tasks instead of .DEV2 for better stability.

    Aug 15, 2017
          - Michal Kocan, Efficiency of reconstruction D0->K0(PiPi)PiPi

    Apr 11, 2017
          - Hongwei Ke, HLT Job Management System

    Feb 13, 2017
          - Yongjin Ye, SC calibration with Run12 pp 510GeV data

    Jan 31, 2017
           - Yongjin Ye, SC study in Run12 pp 510GeV

    Jan 24, 2017
           - Yongjin Ye, SC calibration for Run12 pp 510GeV

    Jan 10, 2017
           - Irakli Chakaberia, TPC new cluster finder comparison

    Jan 3, 2017
       - Run 17 schedule, Hongwei Ke

    Oct 18, 2016
            - HLT code refactoring for Xeon Phi, Jeff Landgraf, Hongwei Ke

    Sep 6, 2016
            - Hongwei Ke, resources estimation for BES II
            - Yuri Fisyak, compiler tests

    Jun 21, 2016

        - Hongwei, HLT has switched back all the calibration tables and settings for AuAu 200 GeV. QA plots look OK. If nothing changes dramatically, we may not need any new calibration before the end of this run.

            - Grigory is working on the merger, which merges the HFT tracks and TPC tracks.

        - Zhengqiao updated the HLT QA plot for Vz because the requirement was changed during the low energy dAu runs. Zhengqiao will also resume the development of the SC auto-calibration code for HLT.


    Feb 15, 2016

        - Grigory and Zhengqiao are working on the data preparation for the merger test. Some coordinate conversion needs to be done between the STAR global coordinates and the detector local coordinates. This is well understood and HLT already has the code to do that for the TPC.
        - Zaochen Ye is working on the new QA code for the MTD-HLT triggers.
        - Maksym Zyzak fixed the problem of sending data to the Phi via SCIF and is improving the sender.

    Feb 9, 2016

            - Zhaochen Ye. Finished the first t0 calibration for Run16. Working on additional QA plots for the new diElectron algorithm.

            - Zhengqiao Zhang. Finished the first space charge calibration for Run16.

        - Grigory Kozlov. Working on the merger now, but needs simulation data. Zhengqiao will help to prepare the data.


    Jan. 25, 2016
       - Grigory Kozlov, The CA-HFT tracker is now running at ~90 ms per event. Speed is expected to improve when we run the code on an AVX-capable CPU.
       - Maksym Zyzak, working on using SCIF APIs to transfer data between CPU and Phi.
       - Hongwei Ke, HLT is almost ready for the run, except for a few calibration tables that need to be generated during the first few days after collisions are available.
           - Zaochen Ye, Working on HLT online QA code to add plots for MTD quarkonium.

    Jan. 19, 2016
        - Hongwei Ke, HLT is prepared for the coming run. Since the new machines have not arrived yet, we prepared the current machines to be used for now. Will do a test later today.

        - Maksym Zyzak, Keeps working on the KFParticle server on the Phi card side.
        - Grigory Kozlov, The CA-HFT tracker is now running at ~200 ms per event. One more optimization will hopefully be finished by the end of this week.

    Jan. 13, 2016
        - Maksym Zyzak, Status of the KF Particle Finder package for the Xeon Phi. Discussed the KFParticle secondary vertex reconstruction and the task division of the FIAS group.

    Dec. 22, 2015
        - Mike Lisa, 4Sd: the "other" di-baryon state searching in STAR. Discussed the idea of searching for the 4Sd in the HLT. Grigory Nigmatkulov will use KFParticle to estimate the selection rate with Run14 data.
        - Maksym Zyzak, Investigation of decay reconstruction in the HLT. Discussed the KFParticle secondary vertex reconstruction and the task division of the FIAS group.
        - Ivan Kisel, Status of the primary vertex estimation. Presented the recent developments on primary vertex estimation algorithms, which are intended to be used to start CA-HFT tracking.

    Dec. 15, 2015
            - Aihong, Thoughts on Exotic Searches at STAR.


    Nov. 17, 2015
           
    - Grigory, Status of HFT CA track finder Grid. Triplet calculation has been vectorized and gives a 7.7x speedup.

    Nov. 10, 2015
        - Zhengqiao, Simulation events with pileup. Simulated ~1.5k AuAu 200 GeV events with pileup for CA-HFT tracker tuning.
        - Grigory, Status of HFT CA track finder Grid. A grid of HFT hits has been implemented and shows a very good speed-up, especially for the doublet calculation.
        - Rongrong, HLT performance using embedded muons. We invited Rongrong to come and talk about the study of the HLT di-muon trigger efficiency. The new matching algorithm shows a significant improvement.


    Nov. 3, 2015
        - Valentina. PV reconstruction with KF Particle. Discussed preliminary results of using KFParticle to reconstruct the PV in AuAu collisions at 200 GeV.
        - Grigory. Implemented a grid for the CA-HFT tracker which will significantly increase the speed of the CA-HFT tracker.
        - Maksym and Hongwei. The Vc CPU version and MIC version both work, but we still have problems building the offload version.


    Oct. 20, 2015

            Hongwei Ke, Some experience with Vc

Hongwei - This problem was triggered by the speed optimization of the local z calculation of HFT hits, StThinPlateSpline::z(). It turns out that a simple implementation with raw C++ arrays works best with icc auto-vectorization. Explicit vectorization with the Vc library works second best, with gcc. Surprisingly, icc + Vc runs five times slower than gcc + Vc and 10 times slower than icc auto-vectorization. Maksym thinks it may have something to do with how icc treats inline functions, which are heavily used by the Vc library. This needs more study.

Grigory - Needs more simulation data with TPC and HFT without pile-up. Zhengqiao will provide that. Grigory is going to try tracking TPC and HFT together, i.e. extrapolating TPC tracks to the HFT.
     
The STAR offline software group has obtained a lot of experience with this method, which should be very fast because only a few percent more time is needed to track the HFT hits. The difficulty here is that the pointing resolution of TPC tracks is large compared to the HFT hit errors and the pile-up. Usually one will see multiple HFT hits in a projected area on the PXL. Requiring good, long TPC tracks for the extrapolation is helpful: statistically, good TPC tracks point to the right HFT hits. The other thing one needs to pay attention to is that the Kalman filter needs to treat the Coulomb scattering in the HFT differently from what has been done for the TPC.

    Oct. 13, 2015
            Gregory Kozlov, Status of HFT CA track finder

        - Grigory talked about the new update of the CA tracker for the HFT. By calculating the doublets and triplets from the outer station instead of calculating all of them, we see a very good speed-up, especially for low multiplicity events. There are still many optimization opportunities to improve the speed.
        - Maksym contacted Matthias, the author of the Vc library, and it seems that there should not be a problem running Vc in offload mode. However, Matthias needs a real example to work on. Maksym will contact Matthias again next week.
           What we do not understand now is how to call the same Vc code in the CPU version and the MIC offload version in the same program. It may be helpful to know what Matthias thinks about this problem.
        - Hongwei has some news from Supermicro tech support about the low frequency problem of the Phi cards on the HLT servers, model SYS-1027GR-TRFT+. The problem is that the Phi card can only run at the lowest allowed frequency when it is plugged into the right-hand slot (seen from the front). After discussing and testing for about three weeks, Supermicro tech support seems sure that a new riser card on the right side is needed, revision 1.01 instead of 1.00. We will send our card to Supermicro and get a new one back.
           Hongwei will also talk with Zhengqiao about how to dump HFT hits for both MC and real events for Grigory. We may be able to join efforts with Yuri, because he is producing MC events including HFT and pile-up for the StiCA evaluation.

    Oct. 6, 2015
Of interest to this group: Hongwei, Sti and StiCA Performance Comparison

    Sep. 29, 2015
            Gregory Kozlov, Status of HFT CA track finder

    Jul. 8, 2014
        We discussed the HLT tracking efficiency again. There are two approaches we can use. The first choice is Yuri's HLTCAInterface, which can run the HLT TPC maps and the CA tracker at the same time. We used this method before, but found an asymmetry between the east and west TPC. We suspect that there is something wrong with the TPC maps; however, we have not identified the problems yet. The second choice is to write our own association tools, using the common-hits scheme used by Xiangming before. We conclude that we will need these tools anyway, so we will write them this summer.

    Nov. 12, 2013
            Hongwei Ke, PV Reconstruction at 19.6GeV

    Oct. 29, 2013
            Hongwei Ke, HLT TPC Maps' Problem

    Sep. 10, 2013
            Hongwei Ke, HLT+KFParticle PV finder

    Aug. 13, 2013
            Hongwei Ke, HLT+CA

    Jul. 30, 2013
            Hongwei Ke, HLT Status


    SpaceCharge and GridLeak Calibration for offline

Calibration of SpaceCharge (SC) and GridLeak (GL)
     
    cd /star/u/yjye/yjye1/HLT/SC_GL/run2018/AuAu27
     
    operation step:
     
1. Get the initial values of StarDb6.8_9/StarDb6.8_10/StarDb6.8_11, the bfc chain options and the STAR version from Gene.
   a. time relationship: beginTime on the website (early) < file name of the StarDb* < daq file (late).

2. Download the daq files of the year which needs calibration.
  a. open the STAR RunLog and choose the daq files
  b. 20-30 daq files are needed, with more than 5000 events in each daq file
  c. the events in the daq files should cover a wide luminosity range
  d. hpss_user.pl -f trans4
  e. the download result can be confirmed on the website: http://www.star.bnl.gov/devcgi/display_accnt.cgi
     
    3. cd submit
     
4. Use StSpaceChargeEbyEMaker to create the root files from the daq files for the following calibration.
       ./submit_all.bash
     
5. Use doEvent_SCGL_Calib.C to calibrate the SC and GL from the input root files and create histograms for the next fitting step.
       ./run_step2.bash
     
    6. ./moveHist.bash
     
    7. Fit the histogram and get the SC and GL.
       cd ../ and ./run_all.csh

    SpaceCharge and GridLeak Calibration for online

Calibration of SpaceCharge (SC) and GridLeak (GL) online

     
    cd /star/u/yjye/yjye1/HLT/SpaceCharge/SpaceCharge/online/RTS/src/L4_HLT
     
    operation step:
     
1. We need two files, "HLTConfig" and "conf_18171003.xml", in run_l4Emulator.csh; these are provided by Hongwei.
     
2. Set the value of "spaceCharge" in online_tracking_TpcHitMap.cxx. The value is determined by the SC calculated
   offline, and is changed in steps of 0.001 or 0.002.
     
    3. ./run_ye18171003.csh

    L3

    HLT Review Book Page - Common Content

    This page can be used for common content for the HLT review, Summer 2010.

     

    Received HLT response to 2nd review (PDF)

    Directions:

    1. Run 10 performance (Manuel)
    2. Run 11 goals & means: archiving, simulations, integration w/offline (Jan)
3. Does the response address the concerns of the 2nd review (Thorsten)
    4. Future: - man power &  expansion plans (Gang)

    For Executive Summary, click here.

    A) Questions/Comments on Run 10 Performance

1. In the overview of HLT in run 2010, there is a comment that the HLT has been used as a real trigger.  This should be qualified, as by construction the HLT needs TPC readout, and therefore this use does not reduce deadtime like a level-0 trigger.  Rather, the gain here is to save disk space and offline processing time for those HLT-triggered events.  The accounting for cross section purposes when an event gets recorded only after an HLT trigger is a topic that has been raised as a concern.  It could be useful to exemplify how this has been/is being handled.
    2. Figure 1: We would like to see a more detailed description of the data flow. In particular, addition of detector information such as the HFT can have very different design issues if the data is sent to SL3 (so it can be used for tracking) or to GL3 (so it can only be used to refit).
3. Section 3 on Trigger efficiency: The comparison of offline counts vs. HLT counts as a way to estimate efficiency suffers from the fact that the comparison is not done on a track-by-track basis.  Therefore, it is impossible to determine if there is a significant number of split tracks.  For example, in Table 1, in the counts of the 0.5 - 1 GeV/c rigidity bin, there are 615 offline counts, and 543 HLT counts.  Without more information, it is not possible to know if these 543 counts are all good, or if say only 300 tracks are good and the other 243 are ghost or split tracks. A minor point on the same table is that the errors shown are wrong: the efficiency cannot be greater than 100%, so quoting 99 +/- 7 % is incorrect.  See the note from Thomas and Zhangbu on the treatment of errors for efficiency calculations.
    4. We also wondered why the J/psi efficiency was so much smaller than the gamma conversion electron efficiency.  Could be just low statistics, but the difference appears significant.  Any further studies on the J/psi efficiency would be useful.
    5. Section 4 on performance: it would be extremely useful to have figures that show how the performance, e.g. speed, deadtime, etc., scale with the occupancy, and in particular with the luminosity.  This can then be used to make projections for the future, which is one of the key issues of this review.
    6. Section 5 on calibration: We had several questions, as this is an important topic.
      1. How is it decided what is an acceptable resolution?
      2. How are the calibration constants archived? Are they sent to the star offline database so that they can be used in production and analysis? This is a necessary condition to guarantee that any analysis that relies on HLT-triggered events only uses those tracks that satisfy the given trigger condition, so these tracks have to be available in the event.  Is the HLT tracking re-run during production and the HLT tracks stored in the MuDst?
      3. The report says that the necessary calibrations for the TPC took one week to achieve.  After that one week, did it need someone to take care of it again during the run?  This is important to consider vis-a-vis manpower issues.
  4. Regarding the 7.4% resolution achieved for HLT tracks, it would be useful to understand which observables or studies can be done (or not) with this kind of resolution.  For the future, are there studies that could rely on PID at the HLT level (e.g. J/psi, D-mesons, Lambda_c)? What dE/dx (and possibly secondary vertex) resolution would they need?
  5. It is not mentioned how much manpower and time the TOF calibrations needed.
      6. Regarding the BEMC calibration: what is meant by the sentence "the gains are from theoretical calculation".  We think the BEMC gain calibration is one of the procedures that needs to be done before the L0 High-Tower triggers can work, so this calibration should happen regardless of whether the HLT is running. 
    7. Section 6 on online monitoring: in the vertex x-y comparison between online and offline, the offline vertex finder has the capability to find many vertices and then decide among them in order to reject out of time pile-up.  How does the comparison take this into account?  Is there any pile-up rejection in the HLT vertex finder?  Does the HLT vertex finder use information from the VPD also? If not, should it?
    8. Section 7 on the monte-carlo embedding to study the efficiency.  It is written that "the tracking efficiency is defined as the ratio of the number of successfully found MC tracks by the HLT tracker to that of the embedded MC tracks".  How is a track deemed to be "successfully found"?  How is the amount of split and ghost tracks studied?  If these are not specifically studied, simply counting tracks is not enough to correctly determine the tracking efficiency.
    9. Section 7.2 : we would like to hear about any progress on why a difference exists in Figure 11 left and right.  Is it simply that the embedded tracks have a different eta-pt range and therefore would have a different nhits distribution?  If so, that is easy to correct.  If the discrepancy persists when selecting tracks of similar pt and eta (and phi to be complete) then this is a potentially very serious issue.
    10. On the physics results: both the charge -2 and the di-electron results require dE/dx and the di-electrons require TOF also.  With the cuts discussed in section 8 and the resolution discussed earlier for dE/dx (the TOF resolution was not discussed), what efficiency and purity is expected for these observables?  How is this expected to degrade in larger luminosity future runs?
    11. For electron id using the BEMC, in the L2 trigger studies done for the J/psi and Upsilon L2 triggers we concluded that clustering the BEMC towers improved the mass resolution.  It probably improves the E/p as well.  Has this been studied with the HLT algorithms?
    12. Since there are 3 possible ways that an electron can be selected according to the cuts mentioned in section 8.2, it must be made clear to anyone using these triggered events for physics that they have to reproduce the complete trigger conditions, and that it will make the analysis more complicated (each possible selection should be simulated in order to estimate its efficiency, and the more variables and conditions, the better the Monte Carlo has to be tuned to reproduce the data. It is not encouraging that, as mentioned in Point 9 above, even reproducing a simple nhits distribution is not currently within our milestones.)

     

    B) Follow up questions for Run 11 plans:

    1. What physics trigger(s) will use HLT in run 11 ?
    2. Show intended data flow diagram for HLT in the context of trigger and daq machines
      1. show details of GPUs configuration
3. Estimation of HLT dead time for run 11 (pp500, HI running)
      1. Show impact of different luminosity levels (25%, 50%, 75%,100%, 125% of projected peak lumi for run 11) 
4. Describe the procedure for establishing 'good enough' on-line HLT calibration for the detectors below. Estimate the number of days needed and name the people responsible for the on-line HLT calibration of each detector.
      1. TPC, expected 2x larger lumi than previously, ramp up for first ~6 weeks
      2. BTOW
      3. BTOF
      4. MTD
5. How can an end-user simulate the HLT efficiency for the y2011 M-C geometry & Pythia events in the off-line framework (root4star on rcas6nnn)?
6. How will the TPC, BTOW, MTD calibrations used by HLT be archived?
7. How will the HLT code used on-line be archived?

     

    C) Responses to the 2nd HLT  review report

1. The performance of two physics triggers has been successfully demonstrated. It is assumed that we do not deal with the performance in this part.
2. It's impressive to see a calibrated TPC after only one week. However, in the current report some information is missing to fully assess the calibration quality achieved:
      1. comparison of the calibration table to the "final" offline table (I assume that Fig3 is a comparison of the table to the offline calculation available at this time).
      2.  was there only one calibration table used or was it constantly upgraded during the run?
      3.  Is there an agreed-on workload sharing between the HLT and subsystem teams?
    3. On the section for future developments: The 2nd HLT review recommended a close collaboration with offline efforts to implement a new tracking/seed finder (if needed). Has there been a common activity? Why was there a decision to rewrite the SL3 code based on the old concept?
4. On section 11:  Will there be sufficient human resources to do any R&D towards HFT integration? Has there been collaboration with the HFT group?
5. (Section 11.4) Good to see that common activities with other groups have helped. Something not mentioned is reduced needs in the trigger system, i.e. are there any plans to obsolete L2? (At least the last time we discussed this, L2 had no benefits compared to the HLT.)
6. The point behind the separate readout and tracking PCs was not for the current run, but for any further R&D. With the strong coupling you are very limited; decoupling will offer new possibilities (especially when the HFT comes online). For reference, here is the relevant part of the last report: Before proceeding with the installation of the SL3 tracking computers, it should be clarified if the coupling of TPC readout and HLT tracking is compatible with the envisioned further development of the HLT and its tracking algorithm or if a separate system would be more beneficial. As a byproduct, this will also result in the recommended clarification of the HLT-DAQ interface.
7. OpenCL vs CUDA: Keeping it as an option is not too informative. There should be a decision on what to use - note that there were strong feelings in favor of an OpenCL based system.
8. Not discussed: Given the impressive number of proposed physics algorithms, the impact of a large number of faststreams on offline computing (additional storage, CPU etc.) should be clarified in collaboration with the STAR computing group.

     

    D) Future development

    1. The inclusion of HFT is necessary, but not urgent, since the completion of HFT is aimed for run-14 with Pixel available. The HFT hit information will be passed to GL3 machines, while the TPC information will be passed to SL3 machines, and the committee is curious about how and where the re-fit of the track is carried out.  
    2. The efficiency study via Monte Carlo seems to focus on the relative efficiency between HLT and offline data, instead of the full/absolute efficiency. The advantage is that the offline performance is well understood, and much less effort is needed than the full efficiency study. The caveat is that the relative efficiency has to be safely factorized out of the full efficiency.  
    3. What's the time line for rewriting the SL3 tracking package?  
    4. How necessary is it to add the HLT information into MuDst? Once the MuDst is produced, all the information that HLT provides can be obtained with the analysis code. And the MuDst production is usually separated into several physics streams, so some streams can be defined with HLT triggers, and then the HLT information doesn't need to be written into MuDst.
5. Manpower: Hao will graduate next year, and Xiangming could be away. They both work on tracking, which means the HLT project could be badly short of manpower in tracking. On the other hand, the development of a secondary vertex finder with GPUs may not be so urgent.

     Executive Summary

    1. Run 10 Performance
      • HLT and DAQ/Trigger Interface: During Run 10, the interface between HLT and trigger seemed adequate.  The important changes in the operation were 1) to include all BEMC tower information, requiring HLT to access the EVB information directly instead of via L2, and 2) TOF information, sent directly from DAQ machine.  For the future, a design issue to be solved involves the incorporation of the HFT information, and whether this information should be sent to GL3 to be used for track refitting or whether it can/should be sent to SL3 machines to be used during the track-finding stage.  The addition of the MTD is expected to be similar to the current TOF interface and DAQ communication.
      • Trigger efficiency: For charge-2 events, an estimate of the trigger efficiency yielded values of 90% or above (here, a "trigger efficiency" of 100% would mean that the same events were found by the HLT as in offline).  For J/psi events, the trigger efficiency, based on looking at photonic electrons, was estimated to be 71%. NOTE before finalizing bullet: Needs a quantitative way to discuss whether these performance numbers are adequate, or not.
      •  Speed and deadtime: One test was done where the load was increased on the HLT CPUs and no noticeable deadtime was seen.  However, it is not yet known at what rate dead time will become apparent.

     

    MTD

     Muon Telescope Detector (MTD)

    The MTD is a large-area and cost-effective detector at mid-rapidity for the STAR experiment at RHIC. It utilizes the precise timing of the new Time-Of-Flight system and provides excellent muon triggering as well as identification capabilities in the high-luminosity era at RHIC.

    Data production


    Calibration


    Offline software

    Operation


    Plots & pictures

    Publications

    Calibration

    MTD Calibration

    Calibration Parameters

    • Trigger Time Windows -- limit the MTD hits to those that are within a certain window w.r.t. the central TOF clock
    • INL -- apply the Integral Non-Linearity electronics calibration (specific to the HPTDC chips) 
    • T0 -- align the timing of all the MTD channels
    • Slewing -- calibration for amplitude (time-over-threshold) dependent timing effects
    • Alignment -- correct MTD/TPC alignment, using the residuals between matched TPC track and MTD hit pairs
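
    As an illustration only, the sketch below shows schematically how these parameters enter a corrected hit time. The function and variable names, the sign conventions, and the use of a single pre-computed slewing value are assumptions made for this write-up; this is not the StMtdCalibMaker implementation.

      // Schematic sketch only -- names and sign conventions are illustrative assumptions.
      // A hit time relative to the trigger (THUB) time must fall inside the trigger time
      // window (pileup rejection); it is then aligned by the per-channel T0 and corrected
      // for the time-over-threshold dependent (slewing) effect.
      double correctedMtdTime(double tRaw, double tTrigger,
                              double winLow, double winHigh,
                              double t0, double slewCorr)
      {
        const double dt = tRaw - tTrigger;            // time w.r.t. the trigger time
        if (dt < winLow || dt > winHigh) return -1e9; // outside the trigger time window: reject
        return tRaw - t0 - slewCorr;                  // T0 alignment + slewing correction
      }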

    Procedure

    A schematic workflow is shown here, and a detailed description can be found in Calibration Workflow. Further details about the calibration procedure can be found here.



    Parameter Storage

    • The calibration parameters are stored in the database
    • These parameters can also be found here


    Calibration Parameters

    Run21 (200 GeV O+O)

    • T0, slewing and position calibration parameters are taken from Run18 
    • Determine timing window for real data: OO 200 GeV
    • Global T0 timing: OO 200 GeV
    • Check all calibration parameters: OO 200 GeV

    Run19 (BES II)

    Run18 (200 GeV Isobar, 27 GeV Au+Au)

    Run17 (500 GeV p+p, 54 GeV Au+Au)


    Run16 (200 GeV Au+Au, 20-200 GeV d+Au)

    Run15 (200 GeV p+p, p+Au)

    Run14 (200 GeV Au+Au)

    • Determine timing window for cosmic ray data: slides
    • Determine timing window for real data: slides
    • Compare mean and sigma of dT distributions between data and cosmic ray: mean, sigma

    Run13 (510 GeV p+p)

    • Determine timing window for cosmic ray: slides
    • Determine timing window for real data: 130-149, 151-161
    • Check timing window vs. day: slides
    • Compare mean and sigma of dT distributions between data and cosmic ray: mean, sigma

    Database

     Database Tables and Handling

    Usage

    • Database tables are used in every StEvent-based Maker, with run-dependent information ranging from electronics maps and timing windows to calibration information and detector status.
    • Web access to offline database browser: STAR Database Browsers

    MTD database summary

    • Geometry/MTD
      • mtdTrayToTdigMap: map from the Tdigit number stored in raw data to the tray number, which ranges from 1 to 5
      • mtdTrayIdMap: map to the tray Id number in the UT database. This is the only valid way to find out which Tdigit board has been mounted onto the tray in case some trays are moved around onsite.
      • mtdTdigIdMap: map to the Tdigit board Id from the UT tray Id. This is needed for the INL correction.
      • mtdGeant2BacklegIDMap: map to convert from the backleg ID in GEANT to that in real data.
      • mtdModuleToQTmap: mapping between MTD backleg/module to the triggering QT board.
    • Calibrations/MTD
      • mtdTriggerTimeCut: cut on the timing difference between MTD hits and trigger time recorded by THUB to reject the MTD hits that are not from the triggered collisions, i.e. pileup
      • mtdT0Offset: T0 offset for MTD calibration
      • mtdSlewingCorr: slewing correction for MTD calibration
    • Calibrations/tof
      • tofINLSCorr: the INL corrections for MTD modules reside together with the TOF parameters
    For detailed definitions of the MTD tables in the database, please refer to: MTD tables.
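
    As a rough sketch of how such tables are typically picked up inside a maker: the snippet below assumes the usual STAR St_<table> / <table>_st wrapper and struct naming convention for the mtdT0Offset table listed above, and the exact case of the database path should be checked against the database browser.

      // Hedged sketch (e.g. in a maker's InitRun()): fetch the MTD T0 offsets.
      // St_mtdT0Offset and mtdT0Offset_st are the conventional generated names
      // for the mtdT0Offset table and are assumptions here.
      TDataSet *dbDataSet = GetDataBase("Calibrations/mtd");
      if (dbDataSet) {
        St_mtdT0Offset *t0Table = (St_mtdT0Offset*) dbDataSet->Find("mtdT0Offset");
        if (t0Table) {
          mtdT0Offset_st *t0 = t0Table->GetTable();  // per-channel T0 offsets
          // ... copy the offsets into the maker's own arrays ...
        }
      }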

    MTD operation

    This page collects information regarding the MTD operation.


    MTD+TOF operation manual for detector operator and on-call experts


    MTD dimuon trigger commissioning
    MTD trigger map

    MTD trigger algorithm
    The algorithm documents can be found at https://www.star.bnl.gov/public/trg/
    • For QT algorithm: the link name is "Algorithm 6c - MTD (MT001->4) "
    • For DSM and TCU algorithm: the link name is "TOF Branch: TOF, MTD and PP2PP Algorithms for Beam Running"

    MTD reference plots for shift crew
    2019-03-01
    2017-04-26


    Offline Software

    Offline Software Organization & Development

     

    Schematic overview

    An overview of the offline software structure for the MTD is shown. The detailed documentation can be found here: MTD software document


     

    Example 

    Macro: for running all the standard MTD makers on StEvent or MuDst input, see MTD macro.
    Access MTD information: for examples to access the MTD hit information and pid information, please refer to the function processStEvent() and processMuDst() in StMtdQAMaker.
     

    BFC options/workflow

    For explicit definitions, see CVS log for StRoot/StBFChain/BigFullChain.h

    • mtd -- loads all MTD chain Makers: StMtdHitMaker, StMtdMatchMaker, and StMtdCalibMaker.
      • note: the chain may also need to include StBTofHitMaker and StVpdCalibMaker to provide the VPD start time information
    • mtdSim -- Simulation chain: Loads StEvent and StMtdSimMaker
    • mtdDat -- Raw data chain only: loads StMtdHitMaker
    • mtdUtil -- loads StMtdUtil (usually automatically included, no need to call it explicitly)
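
    As an illustration (the event count and daq file name are placeholders, and any additional chain options needed for a given production are omitted), the MTD makers would be pulled into a reconstruction chain with something like:

      root4star -b -q 'bfc.C(100, "mtd", "st_physics_XXXXXXXX_raw_XXXXXXX.daq")'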

    Useful links

    STAR CVS repository: STAR CVS

    STAR Doxygen pages: StRoot (doxygen)


    MTD Offline Software Projects

    MTD Offline Software Projects: Description & Status of MTD Software Projects

    This page lists the various ongoing MTD offline development projects, with contact information and current status

    StMtdHitMaker

    • raw data decoding, basic mapping, persistent storage in StEvent and MuDST. [done]
    • StMtdRawHit implementation, and offline INL application. [done]
    • include electronics mapping
    • INL calibration
    • StMtdHit creation (incl. MuDST)
    • people: Xinjie Huang, Xianglei Zhu, Frank

    StMtdMatchMaker

    • hit selection & sorting
    • track extrapolation
    • hit-track matching algorithm development & implementation
    • people: Chi Yang, Wangmei Zha, Bingchu Huang, Frank

    StMtdCalibMaker

    • database table design
    • algorithm development & implementation
    • StMtdPidTrait creation
    • people: Frank + TBD

    StMtdSimMaker

    • implementation of the framework, read GEANT data and convert to basic hits. [done]
    • Fast & Slow simulation algorithm implementation
    • apply electronics mapping
    • data comparison
    • people: Shuai Yang, Ming Shao, Frank

     

    Operation

    MTD operation
     

    Detector operation

    For MTD operation, please see STAR operation


    Trigger documents

    TOF_MTD_PP2PP_Run14

    TOF_MTD_PP2PP_Run13

    QT_Algorithm_MTD_Run13

    STAR_DSM_MapE

    More trigger related documents can be found on the trigger page.

    MTD HV CAEN Board (A1534) Settings.

     When installing a new board into an SY4527 mainframe, the board comes with the settings from its previous use, i.e. factory settings or test bench settings.  To check/configure the board, there are (at least) two ways to go about it: the CAEN HV Control Software or EPICS.

    The boards used in mtd-hv are A1534s.

    Easiest(?) Method:
    Use the CAEN HV Control Software: on mtd@mtd-cr, cd to /home/mtd/CAENHVControlSoftware-1.1.2/bin/
    then run: ./CAENControlSoftware

    [This software is outdated, but does the job.]

    A GUI should pop up; in the top left menu, click File->Connect.

    Log in to mtd-hv as admin.

    A couple more GUIs should then pop up inside the main GUI, looking something like this: https://drupal.star.bnl.gov/STAR/system/files/MTDResetFlagConfig.png .

    Then click on the parameter you want to adjust, type in the new value, and press Enter (carriage return).

    [02/02/2016 a new board was installed into slot 5]

    In this case, Board05_Chan00*.

     So here I changed:
     • V0Set to 6400 V for full voltage (was 0; could have set it to standby instead)
     • I0Set to 100 uA to match the other boards (was 20 uA)
     • RUp to 8 V/s (was 50 V/s)
     • RDWn to 30 V/s (was 50 V/s)

    Everything else was the same on the board (except for V1Set & I1Set, but we do not use those; we could use them for standby and change the control operations a bit...).

    Second Method:
    Use EPICS. The commands to monitor and set the parameters through EPICS can be sent from almost anywhere on starp that has EPICS running, but use mtd-cr just to be safe.

    To monitor:
    caget MTD:HV:Negative:5:0:v0set
    [MTD:HV:$Polarity:$Board:$Channel:$variable]

    v0set = the demand voltage setting
    i0set = the max current setting
    rampup = ramp up rate
    rampdn = ramp down rate

    To set:
    caput MTD:HV:Negative:5:0:v0set 6400

    To see which variables are available to read/set, you can look at (on mtd-cr) /home/mtd/MTD/HV/HVCAENx527_3.7.2/db/MTDHV.db . Use less to view the file; we do not want to modify it!
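
    For reference, the same slot-5 changes made above through the GUI would look like this through EPICS, assuming the PV naming pattern above and that the IOC uses the same units as the GUI (repeat for the other channels on the board as needed):

    caput MTD:HV:Negative:5:0:v0set 6400
    caput MTD:HV:Negative:5:0:i0set 100
    caput MTD:HV:Negative:5:0:rampup 8
    caput MTD:HV:Negative:5:0:rampdn 30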


    Random tidbits:
    New board installed: A1534 (neg.), serial no. 71, board firmware 04.
    Current boards (all A1534): slots (1, 3, 5, 7); serial nos. (59, 69, 71, 61); firmware (3.01, 3.01, 04., 03.); polarity (-, +, -, +). Ramp up rate 8 V/s, ramp down rate 30 V/s, max current 100 uA, max V 8 kV.



    Random


    MTD HV SY4527 Firmware Upgrade

    An entry describing the steps needed to update the firmware for the MTD HV crate (mtd-hv.starp, model SY4527).
    (Firmware installed by Joey & Shuai.)

    Originally,
    MTD crate firmware: 1.0.0
    FPGA firmware: 0.04 build c723

    This is shown on the tech. info. page of the CAEN HV Control Software, as indicated here:

    To update the crate, the firmware (1.3.1) was downloaded from the CAEN website, http://www.caen.it/csite/CaenProd.jsp?parent=20&idmod=752 .
    The package, sy4527-5527-HVFw-1.3.1-b20150608.zip, was unzipped into a local directory on mtd@mtd-cr.starp,
    and the firmware image, sy4527-5527-HVFw-1.3.1-b20150608.bin, was uploaded to the crate.

    To upload the firmware to the crate, there are (at least) two possible methods: via the web server or via a USB stick.  Uploading via the web server was used for this upgrade.
    This was done by going to the Upgrade Firmware menu, which should be accessible through the crate's web configurator.  In this instance, the CAEN HV Control Software was used to access the web configurator.
    To get there, click on the "Settings" button located on the left menu, as shown in the image below:

    Once there, go to the "Upgrade Menu" at the top and select "Firmware Upgrade".
    Here, select and upload the firmware bin file.
    Once done, you should see the message "Update Done!" in a green box, as indicated in the image below:

    Turn off the crate.
    Wait 30 seconds.

    At this point, make sure everything trying to talk to the crate is turned off, e.g. the CAEN HV Control Software, HV IOCs, etc.  [Do not worry about the ethernet cable itself.]
    We do not want to mess with the crate while it is installing the new firmware (no inadvertent power reset, etc.).

    Turn on the crate.
    Wait 10 minutes for the installation to complete.  [Do not touch/access the crate until 10 minutes have passed.]

    Check that the installation is complete.
    One can check the tech. info. page (or the web configurator's sidebar) and see that it was successful:

    Note: the firmware version is mis-reported by this version of the CAEN HV Control Software, but it is correctly reported through the web configurator.

    The crate firmware has now been updated, along with the FPGA firmware:
    firmware: 1.3.1
    FPGA firmware: 0.06 build d910

    PicoDst production

    This page collects information about the PicoDst productions. The MTD information is integrated into the Berkeley PicoDst structure developed by Xin. Here are some useful links about the structure and details of the PicoDst:
    Xin's webpage: http://rnc.lbl.gov/~xdong/SoftHadron/picoDst.html
    CVS repository: http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/offline/users/dongx/pico/
    Documentation of the PicoDst structure: http://www.star.bnl.gov/protected/lfspectra/marr/documents/PicoDst.pdf

    MTD PicoDst production list 

    For each data set below, the following information is listed: trigger IDs, vertex selection & event cuts, track cuts, event/track filter, and PicoDst production code & storage.

     

     

     

     

    Run 2013
    pp @ 500 GeV

    Di-muon: 430103, 430113
    Single-muon: 430101, 430111
    e-mu: 430102, 430112, 430122 
     
    Vertex selection: select the primary vertex that has at least two associated primary tracks matched to MTD hits, otherwise the default primary vertex is used.
     

    • ! (Vx==0 && Vy==0 && Vz==0)
    • |Vz| < 2000 cm
    • |Vr| < 10 cm
    • RefMult >= 0   

     

     

    • Global tracks
    • 0 <= flag <= 1000
    • pT > 0.1 GeV/c
    • nHitsFit >= 15
    • nHitsFit/nHitsPoss > 0.52
    • gDCA < 10 cm 
    N/A

    Library: SL15e_embed


    Code in CVS


    Storage:
     /gpfs01/star/pwg_tasks/hf01/MTD_Run13_pp500_Pico 

     

     

     

     

    Run 2014
    AuAu @ 200 GeV

     

    Di-muon: 450601, 450611,
    450621, 450631, 450641
    Dimuon-30-hft: 450604
    Dimuon-5-hft: 450605, 450606
    e-mu: 450602, 450612,
    450622, 450632, 450642
    Single-muon: 450600, 450610,
    450620, 450630, 450640

     

     

    Vertex selection: select the primary vertex that is within 3 cm of the VPD vertex;
    otherwise the default primary vertex is used. 
     

     

    • ! (Vx==0 && Vy==0 && Vz==0)
    • |Vz| < 1e4 cm
    • |Vr| < 1e4 cm
    • RefMult >= 0   
    • Global tracks
    • 0 <= flag <= 1000
    • pT > 0.1 GeV/c
    • nHitsFit >= 15
    • gDCA < 10 cm  

     

    Save only electron and muon candidates.

    Electron PID: |nσe|<3.0 
    |1/β - 1| < 0.05
    OR
    EMC match && pT>1.5 GeV/c

    Muon PID: match to MTD

     

    Library: SL15e_embed


    Code in CVS


    Storage: 
    /gpfs01/star/pwg_tasks/hf02/picodsts/Run14/AuAu/200GeV/mtd/P15ie 

     

     

     

     

     Run 2015
    pp
     @ 200 GeV 

     

    Dimuon: 470602, 480602, 490602
    e-mu:  470601, 480601, 490601
    single-muon: 470600, 480600, 490600

     

    Vertex selection: default vertex
      

    • ! (Vx==0 && Vy==0 && Vz==0)
    • |Vz| < 1e4 cm
    • |Vr| < 1e4 cm
    • RefMult >= 0   

     

     

    • Global tracks
    • 0 <= flag <= 1000
    • pT > 0.1 GeV/c
    • nHitsFit >= 15
    • gDCA < 10 cm 

     

     

    Save only muon candidates

    Muon PID: match to MTD 

     

    Library: SL16c

    Code on RCF: /star/u/marr/mtd/PicoDst/Run15/StRoot/StPicoDstMaker/

    Storage: /star/u/marr/data02/PicoDst/Run15_pp200/PicoProd

     

     Run 2015 pAu @ 200 GeV

    Dimuon: 500602
    e-mu: 500601
    single-muon: 500600 

     

     Vertex selection: closest to VPD vertex

      

     

    • ! (Vx==0 && Vy==0 && Vz==0)
    • |Vz| < 1e4 cm
    • |Vr| < 1e4 cm
    • RefMult >= 0   
    • Global tracks
    • 0 <= flag <= 1000
    • pT > 0.1 GeV/c
    • nHitsFit >= 15
    • gDCA < 10 cm 

     

    Save only muon candidates

    Muon PID: match to MTD  

     Library: SL16c


    Code on RCF: /star/u/marr/mtd/PicoDst/Run15_pAu200/StRoot/StPicoDstMaker/

    Storage: /star/u/marr/data02/PicoDst/Run15_pAu200/PicoProd

     

    Plots & pictures

     MTD plots, pictures and photos

    Plots can be downloaded in the attachment or the corresponding link.

    Physics plots

    Performance plots


    Physics projection

    Display



    Calibration


    Sketch of a cosmic ray traversing MTD



    Mechanical pictures


      

    Photos

    Installation photos

    Production afterburner

     The MTD afterburner mode allows new or alternative calibration parameters to be applied to MuDst files. Usually, the MTD hits and PidTraits are modified to reflect a better understanding of the MTD performance. These makers should be run before users' analysis makers.

     


    Run13, pp 500, second part (day 130-161)

    The following makers need to be run to select and calibrate MTD information:

    // libraries
      gSystem->Load("StBTofUtil");
      gSystem->Load("St_db_Maker");
      gSystem->Load("StMagF");
      gSystem->Load("StMtdHitMaker");
      gSystem->Load("StMtdUtil");
      gSystem->Load("StMtdMatchMaker");
      gSystem->Load("StMtdCalibMaker");

    // setting up chain and MuDST/DB Makers
    StChain *chain = new StChain("StChain");
    StMuDstMaker *muDstMaker = new StMuDstMaker(0,0,"",fileList,"MuDst.root",nfiles);
    St_db_Maker *dbMk = new St_db_Maker("db","MySQL:StarDb","$STAR/StarDb","StarDb");

    // initiate MTD hit maker to apply trigger time window cut used to reject background hits
      StMtdHitMaker *mtdHitMaker = new StMtdHitMaker("mtdHitMaker");
      mtdHitMaker->setSwapBacklegInRun13(2);

    // match maker needs to be re-run every time the hit maker is re-run
      StMagFMaker *magfMk = new StMagFMaker; 
      StMtdMatchMaker *mtdMatchMaker = new StMtdMatchMaker();

    // Use calibration maker to apply the calibration parameters
      StMtdCalibMaker *mtdCalibMaker = new StMtdCalibMaker("mtdcalib"); 
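
    After the makers above (plus the user's analysis makers, added after StMtdCalibMaker), the chain is driven in the usual way. The event loop below is a generic sketch with a placeholder event count; it is not part of the official afterburner macro, and fileList/nfiles passed to StMuDstMaker above are assumed to be defined by the user.

      // generic StChain event loop (nEvents is a placeholder)
      chain->Init();
      int nEvents = 1000;
      for (int iev = 0; iev < nEvents; iev++) {
        chain->Clear();
        if (chain->Make() != kStOK) break;  // stop at end-of-file or on error
      }
      chain->Finish();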

    Publications

    MTD publications

    2019

    Measurements of inclusive J/psi suppression in Au+Au collisions at \sqrt{sNN} = 200 GeV through the dimuon channel at STAR Link
    Measurements of the transverse-momentum-dependent cross sections of J/psi production at mid-rapidity in proton+proton collisions at \sqrt{s} = 510 and 500 GeV with the STAR detector Link

    2016

    Muon identification with Muon Telescope Detector at the STAR experiment NIM

    2014

    Calibration and performance of the STAR Muon Telescope Detector using cosmic rays NIM 

    2012 

    Multigap RPCs in the STAR experiment at RHIC NIM

    2011

    Performance of a new LMRPC prototype for the STAR MTD system NIM 

    2010

    Perspectives of a mid-rapidity dimuon program at the RHIC: a novel and compact muon telescope detector proposal arXiv JPG

    2008

    New Prototype Multi-gap Resistive Plate Chambers with Long Strips arXiv NIM

     

    Quality assurance

     This page collects the QA plots for data production

    Run17

    • 20180615: Run17_pp510_picoDst
    • Issue 1
      • I ran my QA code on the picoDst files, and found an issue with the production. Only about 30% of the events survived the standard vertex selection cuts, i.e. |vz_TPC| < 100 cm and |vz_TPC - vz_VPD| < 6 cm. I think the reason for such a low vertex cut efficiency is that the default vertex is selected for picoDst files as the option "PicoVtxDefault" was used. However, given that these events were triggered with the VPD coincidence condition, it is better to use the option similar to that used for the standalone picoDst production, namely "PicoVtxMode:PicoVtxVpdOrDefault TpcVpdVzDiffCut:6", which selects the vertex closest to the VPD vertex. Therefore, a reproduction of the picoDst files will be needed by using the chain option mentioned above.
      • Jerome: At this stage, I suggest (request) we finish what we have ongoing and do not start a new production wave of picoDSTs. The incoming format change (being discussed in a few forums) makes moving forward
        for all scheduled samples not optimal.
      • Reference: http://www.star.bnl.gov/HyperNews-star/protected/get/starsoft/10136/2.html

    MTD NPS Maps

    This page hosts the map of MTD's NPS.

    Please find the TOF NPS Maps over here: https://drupal.star.bnl.gov/STAR/subsys/tof/tof-nps-maps

    Last Updated:  February 4, 2021

    mtdnps: (telnet)
    
    1   MTD LV 1: 25-6
    2   MTD LV 2: BL 7-13, 24
    3   MTD LV 3: BL 14-22
    4   MTD-HV (CAEN)
    

    PMD

    Calibration

    Procedure

    STAR-PMD calibration Procedure

    Experimental High Energy Physics Group, Department of Physics, University of Rajasthan, Jaipur-302004

    The present STAR PMD calibration methodology is as follows:

    1. The events are cleaned and hot cells are removed.
    2. The cells with no hits in their immediate neighbours are considered as isolated hits and are stored in a TTree.
    3. The data for each cell, whenever it is found as an isolated cell, are collected, and the ADC distribution forms the MIP spectrum for that cell.
    4. The MIP statistics are then used for relative gain normalization.

    Steps (1), (2) and (3) have been discussed in detail in the past. This writeup concentrates only on (4), i.e. the gain normalization procedure. It attempts to understand the variations in the factors affecting the gains, and to determine how frequently the gain factors should be determined.

    Gain Normalization Procedure.

    We studied the gain normalization factors of different datasets of CuCu 200 AGeV data. We observed that the gain normalization factors within an SMChain do not vary from one day's dataset to another, but the gain normalization of one SMChain w.r.t. another does. So we have factorized the total gain normalization factor into two factors:

    Total_GNF = Cell_GNF * SMChain_GNF

    • We have to use large statistics to determine Cell_GNF. Here 330K events of day 22 data were used, because we needed to collect enough statistics for each cell.
    • The gain normalization factor of one SMChain w.r.t. the other SMChains, SMChain_GNF, is determined as follows:
      1. Collect isolated cells for a small number of events.
      2. Using a pre-determined set of Cell_GNF, normalize the ADC values of these isolated cells.
      3. Since these are now relatively normalized, the isolated cell ADC distribution of all cells within an SMChain is expected to fit a single Landau. Even in cases where a clearly developed MIP was not seen in the uncorrected data, a MIP was observed in the corrected data.
      4. The SMChain_GNF can then be determined by two methods:
        • using the MPV values of the Landau fit to the SMChain MIPs: mpv_SMChain_GNF
        • using the mean values of the SMChain MIPs within the range 0-500 ADC: mean_SMChain_GNF
    • The values of mean and mpv SMChain_GNF differ from each other. These factors are found to vary within a short time span and, as a result, differ from one day to another (discussed later). The SMChain_GNF used here were determined using 3-4% (here 13K events of day 22 data) of the statistics required for determining Cell_GNF.

    Question: Is it okay to use the Cell_GNF determined from one set of data for normalizing another set of data?

    To show that the Cell_GNF determined from one set of data (here day 25) can be used for normalizing another day's data (here day 22), we have compared the SMChain_GNF for day 22 data determined using the gain factors from the day 22 and the day 25 CuCu 200 AGeV data.

    1. The Cell_GNFs were determined for the data from both of these days independently.
    2. For a small subset of day 22 data, we determined SMChain_GNF using the Cell_GNF for the whole day 22 data and hence calculated the total gain factors (Total_GNF).
    3. For the same day 22 data, but using the day 25 Cell_GNF, we again calculated the total gain factors.
    4. The differences in the Cell_GNF for an SMChain between the two datasets were found to be small, of the order of 5-10%. See the figure for the difference between the Cell_GNF of day 22 and day 25; here the fractional difference between the Cell_GNF values for all cells of SM 19 and chain 40 is plotted.
    5. The mean and RMS of the distribution of the fractional difference for all the SMChain combinations (a total of 49 combinations) are plotted in plot 1 of this figure. That all the mean values are close to zero shows the stability of these factors over a certain time span.
    6. The other plots of this figure give the difference in the Total_GNF, which includes the difference in Cell_GNF as well as SMChain_GNF. Plot 2 gives the fractional difference when using mpv_SMChain_GNF and plot 3 gives the fractional difference when using mean_SMChain_GNF.
    7. Question: Which of the two SMChain_GNF gives a better correction?

      In order to determine which of the two SMChain_GNF gives a better correction, we applied them to a small dataset of unnormalized isolated cell ADC values. After applying the Total_GNF, the mean and the MPV of the resulting SMChain MIP were collected. We observed:

      1. After applying only Cell_GNF and no SMChain_GNF, the mean and MPV values were very scattered, as expected. See the blue lines in plot 1 and plot 2 of this figure.
      2. If we apply mean_SMChain_GNF along with Cell_GNF, the mean and MPV values are more clustered around a mean value. See the green curves in plot 1 and plot 2 of this figure.
      3. If we apply mpv_SMChain_GNF along with Cell_GNF, the MPV values show a sharp peak, while the distribution of means is also better than that observed in (ii). See the red curves in plot 1 and plot 2 of this figure.
      4. Plots 3 and 4 of this figure show the resultant PMD MIP for 13K events after applying the Total_GNF using mean_SMChain_GNF and mpv_SMChain_GNF respectively.
      5. The above study was repeated using the day 25 Cell_GNF instead of the day 22 Cell_GNF, and the results are given in this figure.

      Question: How frequently do we need to store cell_GNF and SMChain_GNF

      This study shows two things:
      • Cell_GNF are stable and can be used for a long span of data (as long as the difference is <20%). These require larger statistics and can be stored once for, say, every 5 days of data taking.
      • mpv_SMChain_GNF are more effective than mean_SMChain_GNF; these 50 numbers (one for each SMChain combination) should also be stored in the DB. The numbers used in the present example are based on 13K events, which is a very small amount of data. SMChain_GNF are fast varying, but within a day they vary by ~6% (see this figure). So we need to determine these quantities more frequently than Cell_GNF; I would propose that they are determined twice a day.

    PP2PP

    PP2PP page

    A_NN, A_SS GPC paper review

    Transverse double spin asymmetries in proton-proton elastic scattering
    at sqrt(s)=200 GeV and small momentum transfer


    Target Journal: Physics Letters B


    PAs: Igor Alekseev, Andrzej Sandacz, Dmitry Svirida


    Abstract:

    Precise measurements of transverse double spin asymmetries A_NN and A_SS in proton-proton elastic scattering at very small values of four-momentum transfer squared, t, have been performed using the Relativistic Heavy Ion Collider (RHIC) polarized proton beams. The measurements were made at the center-of-mass energy sqrt(s) = 200 GeV and in the region 0.003 < |t| < 0.035 (GeV/c)^2 , which was accessed using Roman Pot devices incorporated into the STAR experimental setup. The measured asymmetries are sensitive to the poorly known hadronic double spin-flip amplitudes. While one of these amplitudes, \phi_4 , is suppressed as t \to 0 due to angular momentum conservation, the second double spin-flip amplitude, \phi_2 , was found to be negative and small, but significantly different from zero. Combined with our earlier result on the single spin asymmetry A_N, the present results provide significant constraints for the theoretical descriptions of the reaction mechanism of proton-proton elastic scattering at very high energies.


    Figure 1:

    Figure 1: Difference in R2 double spin normalization ratio for BBC and ZDC as a function of the RHIC fill number during the experiment data taking.

    Figure 2:

    Figure 2: (a) STAR BBC small tiles: white - inner, hatched - outer; the central circle and the dot show the beam pipe and the beam; (b) hit multiplicity distribution in STAR BBC.

    Figure 3:

    Figure 3: Difference in R2 ratio for various BBC parts as a function of the RHIC fill number during the experiment data taking: (a) east and west arms compared for high multiplicity events; (b),(c) and (d) - correspondingly: 'inner' tiles, 'outer' tiles and high multiplicity events compared to the BBC as a whole.

    Figure 4:

    Figure 4: Angular distributions for the asymmetry \epsilon2(\phi)/(P_B P_Y) and their fit with A_2+ + A_2- cos 2\phi: top left - full t-range of the experiment, other panels - individual t-intervals.

    Figure 5:

    Figure 5: Magnified background part of the \chi2 distribution for one of the t intervals and its fit with a sum of an exponential and a linear function.

    Figure 6:

    Figure 6: Dependence of the extracted asymmetries on the \chi^2_cut value for one of the t-intervals and their fits with quadratic polynomials.

    Figure 7:

    Figure 7: Results on the double spin asymmetries A2+ (left) and A2- (right) and their fits to extract relative amplitudes r2 and r4 .

    Figure 8:

    Figure 8: 1\sigma confidence level ellipses for the relative amplitudes r2 (left) and r4 (right).

    Paper Conclusions:

    In conclusion, we present precise measurements of transverse double spin asymmetries in elastic proton-proton scattering at the CNI region and sqrt(s) = 200 GeV. The experimental uncertainty of the result is about a factor of 10 smaller than that of the previous measurements at the same energy.

    Extensive studies were performed to evaluate and reduce systematic uncertainties originating from the relative luminosity and a background asymmetry. A detailed analysis of the data from various detectors and processes was carried out in a search for an optimal monitor of the relative bunch luminosity, which should be insensitive to double spin asymmetries. It led to the choice of the BBC detectors. The background asymmetry is not related to any physics process, but is dominated by accidental coincidences of scattered protons with beam halo particles. This background effect was studied and subtracted using two approaches, each with its own advantages and disadvantages. The conservative estimate of the corresponding uncertainty was obtained by comparing the results from both approaches.

    The measured asymmetry (A_NN - A_SS )/2 is compatible with zero. On the contrary the values of (A_NN + A_SS )/2 are significantly below zero. Its t-dependence is flat and the absolute values are of the order of 0.005. Our results are at variance, both for the sign and t-dependence, with the latest predictions [4] of the model based on the Regge theory. Using the extracted values of the relative double spin-flip amplitudes r2 and r4 , we conclude that the hadronic double spin-flip amplitudes \phi_2^had and \phi_4^had are different at our kinematic range. This indicates that the exchange mechanism is more complex than an exchange of Regge poles only. This conclusion is further supported by comparing \phi_2^had with the STAR result on the single spin-flip amplitude \phi_5^had [3].

    The STAR measurements of the double and single spin asymmetries, with small uncertainties and at high energy, provide important constraints for theoretical models aiming to describe the spin-dependence of elastic scattering.


    Recent Presentations:

    Early stage discussion

    PWGC preview

    DUBNA-SPIN2013 talk on asymmetries

    DUBNA-SPIN2013 talk on asymmetry uncertainties

    SPIN2014 talk on asymmetries

    SPWGC paper draft discussion

    Comments exchange / paper discussion


    Supporting Documents:

    Relative luminosity and normalization uncertainties

    Double spin asymmetries analysis note


    Paper Drafts:

    Collection of Paper Draft revisions


    References:

    [1] V. Barone, E. Predazzi, High-Energy Particle Diffraction, number XIII in Theoretical and Mathematical Physics, Springer Verlag, 2002. ISBN: 3540421076.

    [2] S. Donnachie, G. Dosch, P. Landshoff, O. Nachtmann, Pomeron Physics and QCD, Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology, Cambridge University Press, 2005. ISBN: 9780521675703.

    [3] L. Adamczyk, et al. (STAR Collaboration), Phys. Lett. B719 (2013) 62-69. doi:10.1016/j.physletb.2013.01.014. arXiv:1206.1928.

    [4] T. Trueman, Phys. Rev. D77 (2008) 054005. doi:10.1103/PhysRevD.77. 054005. arXiv:0711.4593.

    [5] K. Ackermann, et al. (STAR Collaboration), Nucl. Instrum. Meth. A499 (2003) 624-632. doi:10.1016/S0168-9002(02)01960-5.

    [6] I. Alekseev, A. Bravar, G. Bunce, S. Dhawan, K. Eyser, et al., Phys. Rev. D79 (2009) 094014. doi:10.1103/PhysRevD.79.094014.

    [7] S. Bueltmann, et al., Phys. Lett. B647 (2007) 98-103. doi:10.1016/j. physletb.2007.01.67. arXiv:nucl-ex/0008005.

    [8] N. H. Buttimore, B. Kopeliovich, E. Leader, J. Soffer, T. Trueman, Phys. Rev. D59 (1999) 114010. doi:10.1103/PhysRevD.59.114010. arXiv:hep-ph/9901339.

    [9] E. Leader, T. Trueman, Phys. Rev. D61 (2000) 077504. doi:10.1103/ PhysRevD.61.077504. arXiv:hep-ph/9908221.

    [10] L. Lukaszuk, B. Nicolescu, Lett. Nuovo Cimento 8 (1973) 405.

    [11] T. Trueman (2005). arXiv:hep-ph/0604153.

    [12] N. H. Buttimore, E. Gotsman, E. Leader, Phys. Rev. D18 (1978) 694-716. doi:10.1103/PhysRevD.18.694.

    [13] R. Battiston, et al. (Amsterdam-CERN-Genoa-Naples-Pisa Collaboration), Nucl. Instrum. Meth. A238 (1985) 35. doi:10.1016/0168-9002(85) 91024-1.

    [14] D. Svirida, for the STAR Collaboration (STAR Collaboration), Conf. Proc. of XV Advanced Research Workshop on High Energy Spin Physics (DSPIN- 13, Dubna, October 8-12, 2013) C131008 (2014) 319-322.

    [15] C. Adler, A. Denisov, E. Garcia, M. J. Murray, H. Strobele, et al., Nucl. Instrum. Meth. A470 (2001) 488-499. doi:10.1016/S0168-9002(01) 00627-1. arXiv:nucl-ex/0008005.

    [16] C. Whitten (STAR Collaboration), AIP Conf. Proc. 980 (2008) 390-396. doi:10.1063/1.2888113.

    [17] CNI Polarimeter Group at BNL (2012). URL: https://wiki.bnl.gov/rhicspin/Results.

    A_N Paper GPC and Collab. Review

    This is the webpage for the GPC review of the paper 

    Single Spin Asymmetry AN in Polarized Proton-Proton Elastic Collisions at sqrt(s) = 200 GeV

    Analysis Note

    Analysis Note (Kin Yip)

    There is also a copy at the STAR Notes area:

    http://drupal.star.bnl.gov/STAR/starnotes/private/psn0559 .





    A_N Paper Draft

    Paper Draft Feb. 24

    Paper Draft (Feb. 2 2012)

    Paper Draft Feb. 28, 2012

    Version 1 (Apr. 4-12, 2012)

    Version 1 --- maintained by Kin Yip

     


    Done

    Version 2 (after Collaboration Review)

    Originally from the complete version (http://drupal.star.bnl.gov/STAR/blog/yipkin/2012/apr/26/paper-q-a), which includes the implementation of a lot of obvious text corrections; here only those related to "Physics" are included:


    Flemming Videbaek :

     
    > - Is there a reason that 'transverse' is not in the title of the paper?
     
    "Transverse single spin" comes up at the first sentence in the abstract.  We would not like to change the title now.
      
    > line 168. I suggest you introduce East and West here rather than later. Also I think downstream is
    > ill defined in a collider enviroment.
     

    A few persons/groups have made suggestions to this sentence.  Now it's changed to be:

    "The Roman Pot stations are located on either side of the STAR interaction point (IP) at 55.5 m and 58.5 m with horizontal and vertical insertions of detectors respectively."
     

    > line 197 -- I am a little surprised that the signal spreads over up to 5 strips (500microns). Is that
    > reasonable? Do you have a explanation.             Also why not give the S:N values.
     
    The text just indicates that the set of thresholds are used mostly to deal with the clusters with a length of 3-5 strips.
     
    Concerning your comment on the five-strip signal spread, most proton hits involve fewer strips; one and two strips are normal and most common.  A hit of two strips is due to charge sharing between neighboring strips.
     
    There is a simple explanation which is due to delta rays from the dE/dx process.
     
    Otherwise, there is also non-negligible electronic coupling between adjacent strips. We saw already during the pp2pp days that very large signals (energy deposits) caused activity in neighboring channels.
     
    Protons can also disintegrate, for example at the entrance window or inside the silicon itself. In some cases this can still lead to a valid hit, if the secondary particles are in the forward direction, very close to the original proton's momentum, and no veto occurs.
     
    > line 225. Is the beam p really know to this accuracy?
     
    The fractional error is of the order of 1E-4.  We changed it to 100.2 GeV/c (instead of 100.22 GeV/c).
     
    > Fig 6. Suggest to add in caption that error is stat+syst (as far I can tell).
     
    Added a sentence "All error bars shown include both statistical and systematic errors."  at the end of the caption.
     

     

    Andrzej Sandacz:

      

    > B) line 211 and Eqs (7), (8), (9)

    > L^{eff}, and also L^{eff}_x, L^{eff}_y are not defined explicitely.

    > One can try to infer which elements of the transport matrix may correspond to L^{eff} by guessing that

    > they must be the largest values, but still distiction between L^{eff}_x and L^{eff}_y is not

    > straightforward. The problem arised after replacing Eq. (7) written formerly for the general case in

    > terms of symbols, by the present  selected example with values of the transport matrix elements for

    > a particular store. I understand the aim of the change: to give idea about values of TM elements.

    > But for an average reader these precise values probably are only of moderate interest.  Thus I would

    > propose to go back to the  previous version, i.e. general Eq. (7) with symbols, which is more transparent.

     

    Now, we have the transport matrix in both symbols and in real nos. and so L^{eff}_x and L^{eff}_y are clearly defined. 

     


    Hal for the Argonne group :

     

    > ** l.168  -  maybe change "downstream" -> "away from"??

     

    A few persons/groups have made suggestions to this sentence.  Now it's changed to be:

    "The Roman Pot stations are located on either side of the STAR interaction point (IP) at 55.5 m and 58.5 m with horizontal and vertical insertions of detectors respectively."

     

    > ** l.225  -  "... with p = 100.22 GeV/c the beam momentum."  or something like this to explain why p is not 100.00.

     

    Actually, even the total energy is not 100.00 GeV.  All of this comes from the fact that G-gamma has been set to 191.5, between 191 and 192 (to avoid integers which might result in resonance).  We've changed it to "p = 100.2 GeV/c", as the fractional error is of the order of 1e-4. 

     

    > ** Eq. 11  -  this equation assumes perfectly transversely polarized beams.  We believe it is possible that due to

    > non-ideal beam orbits and magnetic fields or magnet alignments, the beams may have a small longitudinal

    > component at the STAR IR even when the rotators are off. This could lead to a (A_LS = A_SL)sin(phi) term in the

    > equation.  It will (probably) also be negligible in your analysis as you describe ~ line 241. Please consider whether

    > you wish to add such a term in the equation and a couple words in the text about it.  Alternately, you might

    > mention that higher order correction terms are ignored in the equation.

     

    We now mention "higher order correction terms are ignored" under Equation 11, as you suggest.

       

    > ** l.272  -  perhaps quote the actual values instead of "~55" and "~10"??

     

    The "collinearity" in reality is slightly different from run to run, not just one single value.

     

     

    > ** l.283  -  do we understand correctly that there may be a false asymmetry that is proportional to the beam polarization, and it has not been ruled out by your tests? 

     

    There we try to explain in detail that the false asymmetry is ~ 0.

     

     

    > ** Fig.6  -  The text and error bar run into each other for (d).

    >   Fig.6 caption  -  is the vertical dashed line the average of experimental values or just zero or ...?? 

    > Please include in the caption. 

     

    We've added "The vertical dashed line indicates where Im($r_5$)=0." at the end of the Fig. 6 caption.

     

     

    > ** l.299,300  -  this sentence is a bit confusing.  You use "variable" for both deltaB and Re r_5, we believe.  Maybe

    > something like "The remaining lines show changes of Re r_5 and Im r_5 when the parameter was varied by +/- 1

    > sigma during the fit procedure."  We are not sure this is even what is meant.  Sorry. 

     

    Actually, you've understood perfectly. We've made the changes as you've suggested.

     

     

    > ** l.308,9  -  "... more emphasized in estimating ..."  we are confused by this and aren't sure what the  

    > authors are trying to say.  

     

    It just says the AN peak is more sensitive to Im(r5) and thus attracts more attention.  This explains why we look at the Im(r5)'s in Fig. 6.

     

     

     


     


    Panjab University :


    > Page 6 line no. 222
    > After the selection chi**2 < 9, please explain.

    Most cuts are at the 3-sigma level, and so this so-called chi**2 cut at 3 sigma is 3**2 = 9.


    > Page 7 FIG. 3
    > Is it possible to display the distribution of forbidden asymmetry  for the five t ranges in
    > FIG. 3(a)-(e) instead of showing for the whole range of t in FIG.3f?
    >In FIG. 3(a) the error bars are large for two points around phi 80. Similar trend is seen in FIG. 3(e).
    >It is not seen for the negative phi values. Any reason.

    We feel that the figures are already too busy and it's not good to add more. The larger error bars are because of low statistics, and they're related to how close the respective vertical Roman Pots were moved to the center of the beampipe. During the run, we tried to move the pots as close to the beampipe as possible without having too much background.


    > Page 8 Table 1
    > First bin (0.003< -t <0.005), the statistics is less(20%) as compared to other bin. Its width is
    > 0.002 as compared to 0.005 and 0.15 for other bins. I think fit in FIG. 4 should be made removing this point.

    The point with the smallest -t range is probably the most interesting point in this measurement, and it contributes a lot in determining the shape.
     

     



     Janusz from the Cracow group :

    > General:
    > Explain fully the coordinate system shown in Fig. 1

    In the caption of Fig. 1, I now add "Positive y is pointing towards the sky and positive x is pointing to the center of the RHIC ring."

    > 144. At very high energy sqrt(s) - -> At high centre of mass energy, sqrt(s),

    Now, we define "center of mass energy" for sqrt(s) in "Introduction" (the 1st time that it appears, line 127) and here, we just say "At very high sqrt(s)".

    > 154-155. The contribution of the two spin-flip amplitudes, …., to the asymmetry AN is small as  indicated by  both experimental estimates [] and th. pred.[18]

    OK.  We have changed it to :

    "The contribution of the two double spin-flip hadronic amplitudes, …., to the asymmetry AN is small, as  indicated by  both th. pred.[18] and experimental estimates [19,20]".


    > Section 3
    > 168. on each side - -> on West and East side

    A few persons/groups have made suggestions to this sentence.  Now it's changed to be:

    "The Roman Pot stations are located on either side of the STAR interaction point (IP) at 55.5 m and 58.5 m with horizontal and vertical insertions of detectors respectively."

    > 190. Give also value of the distance of 10 mm in units of the beam width at RPs.

    It's about 10-12 sigma's but it's probably difficult to be exact.

    > 214.  small correction less than 4 $\mu$rad, the full  - ->  small, less than 4 $\mu$rad, correction, the full

    We've adopted Hal's suggestion : "small corrections of less than 4 $\mu$rad" .

    > 218-221. are not very clear, especially the use of “similar’ or “typical”
    > replace: are taken from the fits similar to those in Fig. 2 - -> are taken from the fits to data performed for each run. An example is presented in Figs. 2(a) and 2(b).

    We've changed to  "are taken from the fits to data performed for each data segment.  An example is shown in Fig.~2."

    > 257.  the position of the t = 0 trajectory … - -> the position of the t = 0 elastically scattered proton trajectory   or   the beam position

    "t = 0 trajectory" is an ideal trajectory with no scattering. 

    > 268-269. The simulation included … optics - -> Simulation of the elastically scattered proton transport through the 
    > RHIC magnetic lattice and the apertures was performed and the detector acceptance was calculated.

    The present form seems cleaner and the acceptance is mentioned in the following sentence.

    > 274-275. Assuming the background is unpolarized - -> Assuming that the background is the beam polarization 
    > independent …

    "unpolarized" is probably easier to understand.

     

    >298-302. Table II shows the fitted values of Re r5 and Im r5 together with statistical and total systematic 
    > uncertainties. Also, the contributionsto the systematic uncertainty are given in this table. They are due to: 
    > systematic  uncertainty on Leff , alignment ……. .They were obtained by changing the value of the considered 
    > parameter by $\pm$1 standard deviation.

    A few people have commented/suggested, we've changed to:

    "In Table II, we show the central value of the fit  and uncertainties on Re$\:r_5$ and Im$\:r_5$ due to various effects. In the first line of the table, the statistical error to the fit with the central value of the parameters is shown. The remaining lines show changes of Re$\:r_5$ and Im$\:r_5$, when the parameter was varied by $\pm$1$\:\sigma$ during the fit procedure. ......"

    > 308-309 Remove sentence: Since the maximum... since it is not needed

    This sentence explains why we look at Im(r5), but not Re(r5).
     

    > Present the main result with bold face font

    It's difficult to say which nos. are more important in this table.   ( And we have changed 1st two lines into one line as Andrzej has suggested. )

    > Table II. …. Measurement induced uncertainties  (1) – statistical, …. (4). Uncertainties associated with the fit: (5) – the total cross-section …

    Changed to "Table II: .....  (1): Statistical Uncertainties. (2)-(4): Systematic uncertainties associated with this measurement.
    (5)-(7): Systematic uncertainties associated with ...."
      


    Steven Heppelmann for Penn State U. :

    >If I look for an issue it would be the rather large chi-square value on Figure 3b (32.68 for 17 DOF). This is fairly unlikely
    >(1-2%) and might suggest that the errors bars on the 17 points should actually be about sqrt[2] larger than the ones
    >plotted. In fact, most of the points of Figure 3 have chi2/dof >1 which as a group is somewhat unlikely too.

    >Because the statistical error from the fits are only about 1/2 of the error on polarization, I don't think the story would
    >change much but I just wanted to comment and ask if some words in the text should be added to indicate that this has
    >been considered in the systematic error analysis.

     

    If we look at the point-to-point variation, it's bigger than the statistical variations. That means there
    are some point-to-point systematic uncertainties. These variations are likely from the variations of the
    geometry of t=0, and these uncertainties are factored into the systematic uncertainty of delta_t (alignment).
    If we add systematic errors to the points before the fits, we'll certainly get better/more realistic chi-squares.


    The other point is that the function we are fitting with is not "perfect".
    If we add high order/off-set terms, such as phi0 which we've decided to drop, chi-squares get better.
    We have chi2/dof = 11.1/16,  25.8/16, 12.6/16,  23.2/16, and 14.2/16 for the 5-bins (when phi0 is included).

     
    The errors look small partly because we use the same “large scale” for all the plots, since
    we want to accommodate the plot with the largest AN scale, i.e. 3(a), in which the chi2/dof = 11.32/17 < 1.
    For the other 3 fits, chi2/dof ~ 1.18, 1.54 and 1.32, which seem to be reasonable experimental fitting results.
    And of course, we've used the method/formula to calculate the error on each bin.
     



    Shan Dong University  :
     

    > 1)In line 214,215, "full transport matrix was used", does this mean Eq(8),Eq(9) and L_{x,y}^{eff} are not used for

    > the calculation of angles? If so, how to understand the uncertainties of L^{eff} in line 257,263,274 and in table II? 

    > Or they are only used in estimating the uncertainties?

     

    L^{eff} (for x or y) are just two of the elements (the two > 20 m) in the transport matrix

    and so when the transport matrix was used, they were indeed used. These were the two dominating terms in the transport matrix and the other elements are very small

    compared to L^{eff}.  Conceptually, it's often easier to just consider these two terms

    when you try to understand various things in the analysis. So, equations (8) and (9) are

    just approximations to help people understand/grasp the main idea and the transport

    matrix has been used in the analysis.  And indeed, the uncertainties L^{eff} are

    essentially the uncertainties of the transport.

      

    > 3) Line 241: "preliminary results of this experiment [20] show that..."

    > Are these results not part of this analysis or not further checked using the

    > final data sample?  It is a bit surprising to cite preliminary results of ourselves

    > for the same analysis, as we are the same collaboration or group.

     

    The double-spin asymmetries A_NN/A_SS indeed belong to another set of analyses. Unlike the single-spin asymmetry (A_N) in this paper, there is NO square-root formula for extracting A_NN/A_SS. The square-root formula helps cancel out a lot of the luminosity/bunch variance etc. We therefore need reliable normalization (bunch intensities etc.) for the A_NN/A_SS analysis, which is an ongoing effort. We'd like to publish the A_N and r5 results first.

     

    > 4) In line 305, "Re r_5 =0.00167 +/- 0.0063 in line 306 Im r_5=0.00722+/-0.057",

    > The rounding of the digits should be consistent and make real sense as in other places

    > in the paper.

     

    What we've done is to show 3 significant figures for the measurements and 2 significant figures

    for the errors.  A "typical" (but not always) rule for displaying experimental uncertainties is to show

    1 significant figure less, since the uncertainty is an estimate and cannot be more precise than the best

    estimate of the measured value.  Some errors are too small compared to the dominant one (polarization),

    so we've also restricted the precision to 4 decimal places, which is ~O(1%) relative accuracy.

     

     



    Stephen Bültmann :

    > line 171 : only -> almost exclusively

    Done !

    > line 172 : insensitive -> nearly insensitive

    Done !

     


    Last edited by Kin (May 9, 2012)

     

     

    Version 3.2 (May 24, 2012)

    Version3.2(May 30, 2012)

    Vfinal: Submission to PLB

    The pdf file of the paper and tex/figures in tar-gzipped format are attached here.

    Alignment and Survey

    Note on Survey Alignment of the Roman Pots

    Local Alignment

    Corrections to Local Alignment

    Analysis Codes

    1. The MuDst (for the 2009 pp2pp/STAR run) was made using the $STAR/StRoot/St_pp2pp_Maker together with all other standard packages in the STAR reconstruction chain.


    2. The codes to analyze the MuDst are just AnalyzeMu.h and AnalyzeMu.C in $CVSROOT/offline/pp2pp/kinyip .
    3. Running it requires several input files, some of which depend on which run number you're running against.

       

      // ---------- Codes (say "mytest.C")  to compile and run once if you have the filename(s) in .file.list --------------

      // --- several run-dependent input files are really needed to run it meaningfully.

      class StChain;

      StChain *chain=0;   

      void mytest( string output_file = "hist.root" ){

        gROOT->LoadMacro("$STAR/StRoot/StMuDSTMaker/COMMON/macros/loadSharedLibraries.C");
        loadSharedLibraries();
        gSystem->Load("libStBFChain");    

        gROOT->LoadMacro("${HOME}/MuDstpp2pp/AnalyzeMu.C++");

        chain = new StBFChain();

        StMuDstMaker *maker = new StMuDstMaker(0,0,"",".file.list","",1000);

        AnalyzeMu( maker, output_file ) ;

        if ( chain != 0 )  delete chain ;

        return ;

      }

      // --------------------------  End -----------------------------------------------

    4. Officially, at the end of the GPC review, the above testing codes have been put in the official area of  $CVSROOT/offline/paper/psn0559.

       

    Comments by the GPC and replies by the PAs

    Optics and Magnet Strength Determination

    Phil's notes

    Response to Collaboration Review

    1. The latest Version 3.2 draft of the paper can be found:

    http://drupal.star.bnl.gov/STAR/subsys/pp2pp/anpaper-review/paper-draft/version32may-30-2012

    2. Our responses to the comments from the collaboration, with links to the responses to individual institutions, can be found below. You can tell which is which by the name of the file.

    http://drupal.star.bnl.gov/STAR/subsys/pp2pp/anpaper-review/response-collaboration-review

    We also created a file which contains a summary of the responses to the comments and major questions. The link to the file can be found at the bottom of the list.


    Spin Formalism A_N

    A_N Paper Proposal

    Paper Proposal (JH Lee)

    Collaborators' files

    Andrew Gordon


    Andrzej Sandacz

    Angelika Drees

    Dana Beavis

    Dave Underwood

    Dmitry Svirida

    Donika Plyku

    Igor Alekseev

    Ivan Koralt

    J.H. Lee

    Kin Yip

    My analysis note for the 2009 data

    May still update from time to time.

    Stephen Bueltman

    Steve Tepikian

    Tomek Obrebski

    Cuts Technical Notes

    Tonko Ljubicic

    Wlodek Guryn

    Conference proceedings

    Conference talks and presentations

    Phase II information

    Run 2009 information

    List of runs

     

                                         
    Run # Date Start Stop Duration # Events # Elastic Elas Frac Comment Store Pos B Left B Right B Top B Bot Y Left Y Right Y Top Y Bot
                                         
    10181085 30-Jun 22:53 23:21 0:28 999833 548950 0.55   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10181086 30-Jun 23:23 0:16 0:53 1999935 972055 0.49   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182001 1-Jul 0:17 0:32 0:15 559257 270138 0.48   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182002 1-Jul 0:34 1:29 0:55 1999945 970560 0.49   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182003 1-Jul 1:31 1:33 0:02 10001   0.00 pedestal 1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182004 1-Jul 1:34 2:32 0:58 1999950 964171 0.48   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182005 1-Jul 2:33 3:32 0:59 1999839 962600 0.48   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182006 1-Jul 3:34 4:36 1:02 1999942 950144 0.48   1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182011 1-Jul 5:58 6:00 0:02 10001   0.00 pedestal 1 1 10.3 10.3 15.4 15.2 10.4 10.6 10.3 10.5
    10182015 1-Jul 7:13 8:15 1:02 1999916 1016735 0.51   1 2 8.9 10.3 10.2 10.2 10.2 10.3 5.0 10.3
    10182016 1-Jul 8:20 9:31 1:11 1999957 980482 0.49   1 2 8.9 10.3 10.2 10.2 10.2 10.3 5.0 10.3
    10182021 1-Jul 10:12 10:36 0:24 675560 333461 0.49   1 2 8.9 10.3 10.2 10.2 10.2 10.3 5.0 10.3
    10182025 1-Jul 10:57 12:02 1:05 1593827 741882 0.47   1 2 8.9 10.3 10.2 10.2 10.2 10.3 5.0 10.3
    10183005 2-Jul 0:16 0:17 0:01 10001   0.00 pedestal 2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183013 2-Jul 1:44 2:03 0:19 778275 312823 0.40 no STAR 2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183014 2-Jul 2:04 2:17 0:13 484155 204489 0.42 no STAR 2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183015 2-Jul 2:20 3:12 0:52 1999933 799304 0.40   2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183016 2-Jul 3:13 4:08 0:55 1999935 795409 0.40   2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183017 2-Jul 4:10 5:09 0:59 1999969 793377 0.40   2 3 10.2 10.3 10.2 10.2 16.9 17.2 15.9 16.6
    10183018 2-Jul 5:23 6:19 0:56 1999933 842789 0.42   2 4 10.2 10.3 10.2 10.2 14.5 14.7 10.9 12.8
    10183019 2-Jul 6:22 6:24 0:02 10001   0.00 pedestal 2 4 10.2 10.3 10.2 10.2 14.5 14.7 10.9 12.8
    10183020 2-Jul 6:27 7:27 1:00 1999960 838079 0.42   2 4 10.2 10.3 10.2 10.2 14.5 14.7 10.9 12.8
    10183021 2-Jul 7:29 8:32 1:03 1999942 833429 0.42   2 4 10.2 10.3 10.2 10.2 14.5 14.7 10.9 12.8
    10183025 2-Jul 8:50 8:51 0:01 10001   0.00 pedestal 2 5 6.4 9.0 8.9 8.9 7.6 12.8 7.8 9.6
    10183027 2-Jul 9:10 9:46 0:36 1394100 638263 0.46   2 5 6.4 9.0 8.9 8.9 7.6 12.8 7.8 9.6
    10183028 2-Jul 9:53 11:24 1:31 3428889 1578849 0.46   2 5 6.4 9.0 8.9 8.9 7.6 12.8 7.8 9.6
    10183034 2-Jul 12:59 13:33 0:34 998537 448586 0.45   2 6 8.9 8.4 10.2 10.2 7.0 7.8 7.1 7.1
    10183035 2-Jul 13:36 14:16 0:40 1096472 479953 0.44   2 7 8.9 8.4 10.2 10.2 8.0 8.8 8.1 8.1
    10183036 2-Jul 14:17 14:18 0:01 267   0.00 CP trig test 2 7 8.9 8.4 10.2 10.2 8.0 8.8 8.1 8.1
    10183037 2-Jul 14:20 15:34 1:14 1999962 879881 0.44   2 7 8.9 8.4 10.2 10.2 8.0 8.8 8.1 8.1
    10183038 2-Jul 15:35 15:42 0:07 160125 70097 0.44 beam abort 2 7 8.9 8.4 10.2 10.2 8.0 8.8 8.1 8.1
    10183061 2-Jul 21:05 21:07 0:02 10001   0.00 X-shift 15 3 8 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0
    10183062 2-Jul 21:07 21:07 0:00 154   0.00 X-shift 17 3 8 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0
    10183065 2-Jul 21:16 21:17 0:01 10001   0.00 X-shift 11 3 8 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0
    10183066 2-Jul 21:18 21:20 0:02 10001   0.00 X-shift 13 3 8 20.0 20.0 20.0 20.0 20.0 20.0 20.0 20.0
    10184002 3-Jul 1:08 1:10 0:02 10001   0.00 pedestal 4 9 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
    10184016 3-Jul 4:09 4:52 0:43 1855459 805218 0.43   4 10 10.3 10.3 14.1 11.4 19.5 16.0 16.5 19.1
    10184017 3-Jul 4:53 5:43 0:50 1999853 866765 0.43   4 10 10.3 10.3 14.1 11.4 19.5 16.0 16.5 19.1
    10184018 3-Jul 5:45 6:36 0:51 1999925 804721 0.40   4 11 10.3 10.3 15.3 12.6 19.5 16.0 16.5 19.1
    10184019 3-Jul 6:37 7:28 0:51 1999957 798679 0.40   4 11 10.3 10.3 15.3 12.6 19.5 16.0 16.5 19.1
    10184020 3-Jul 7:30 8:24 0:54 1999951 796463 0.40   4 11 10.3 10.3 15.3 12.6 19.5 16.0 16.5 19.1
    10184021 3-Jul 8:25 8:30 0:05 181881 72200 0.40   4 11 10.3 10.3 15.3 12.6 19.5 16.0 16.5 19.1
    10184030 3-Jul 10:55 11:54 0:59 1999956 883236 0.44   4 12 9.1 9.1 9.6 8.9 8.3 8.3 8.4 8.4
    10184031 3-Jul 11:54 12:53 0:59 1999935 885745 0.44   4 12 9.1 9.1 9.6 8.9 8.3 8.3 8.4 8.4
    10184032 3-Jul 12:54 13:53 0:59 1999939 887591 0.44   4 12 9.1 9.1 9.6 8.9 8.3 8.3 8.4 8.4
    10184033 3-Jul 13:54 14:53 0:59 1999969 899709 0.45   4 12 9.1 9.1 9.6 8.9 8.3 8.3 8.4 8.4
    10184034 3-Jul 14:53 14:55 0:02 288 1 0.00 beam abort 4 12 9.1 9.1 9.6 8.9 8.3 8.3 8.4 8.4
    10184038 3-Jul 15:25 15:27 0:02 10001   0.00 pedestal 5 13 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
    10184044 3-Jul 18:35 18:37 0:02 10001   0.00 pedestal 5 13 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
    10185001 4-Jul 0:29 0:30 0:01 27297 10982 0.40 TPC limit 6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185002 4-Jul 0:32 0:32 0:00 8866 3991 0.45 rate limit 6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185003 4-Jul 0:34 0:40 0:06 253838 116882 0.46 TPC limit 6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185004 4-Jul 0:42 1:28 0:46 1999884 978901 0.49   6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185005 4-Jul 1:29 2:16 0:47 1999912 971976 0.49   6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185006 4-Jul 2:17 3:10 0:53 1999923 958470 0.48   6 14 9.0 9.8 19.3 16.6 20.1 17.9 17.3 19.1
    10185007 4-Jul 3:16 3:46 0:30 1125   0.00 Vernier scan 6 15 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0
    10185008 4-Jul 3:47 3:52 0:05 320   0.00 VPD min bias 6 15 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0
    10185013 4-Jul 4:32 4:35 0:03 10001   0.00 pedestal 6 15 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0
    10185015 4-Jul 5:00 5:02 0:02 10001   0.00 pedestal 6 16 6.5 8.4 10.2 7.0 13.2 10.9 10.3 12.8
    10185016 4-Jul 5:17 5:25 0:08 59102 31092 0.53 no-0 supp 6 16 6.5 8.4 10.2 7.0 13.2 10.9 10.3 12.8
    10185018 4-Jul 5:28 6:14 0:46 1999921 1068287 0.53   6 16 6.5 8.4 10.2 7.0 13.2 10.9 10.3 12.8
    10185019 4-Jul 6:19 7:04 0:45 1999908 1064505 0.53   6 17 7.1 8.4 10.8 7.6 13.2 10.9 10.3 12.8
    10185020 4-Jul 7:04 7:48 0:44 1999758 1086820 0.54   6 17 7.1 8.4 10.8 7.6 13.2 10.9 10.3 12.8
    10185023 4-Jul 7:58 8:32 0:34 1469066 799732 0.54 beam abort 6 17 7.1 8.4 10.8 7.6 13.2 10.9 10.3 12.8
                                         
    Sum       34:53:00 72,154,615 33,018,472 0.46                      
                                         

    MuDST documentation

    Copied from https://lists.bnl.gov/mailman/private/starpp2pp-l/2010-March/001007.html. Donika and Kin have put together a document describing the MuDST information:


    Run 2013 Information

     The list of Ethernet-connected devices

     
    Device                     IP East – 5 o'clock                                          IP West – 7 o'clock
                               Name            IP Address      MAC Address                  Name            IP Address       MAC Address
    Web Camera 1 (pot) [1]     pp2pp-cam03     130.199.90.6                                 pp2pp-cam01     130.199.90.9
    Web Camera 2 (elec.) [1]   pp2pp-cam04     130.199.90.7                                 pp2pp-cam02     130.199.90.22
    APC Power Switch [1]       pp2pp-iboot01   130.199.90.26   00 C0 B7 C3 3E F1            pp2pp-iboot02   130.199.90.38    00 C0 B7 C3 3E F7
    NI GPIB-ENET/100 [2]       pp2pp-gpib01    130.199.90.47   00 80 2F 0A 0D 37            pp2pp-gpib02    130.199.90.51    00 80 2F 0A 03 7C
    ADAM-ENET [2]              pp2pp-adam01    130.199.90.43   00 D0 C9 35 91 43            pp2pp-adam02    130.199.90.46    00 D0 C9 35 92 96
    Hyper Terminal [3]         pp2pp-star90    130.199.90.81                                 1007wdigi       130.199.90.120
    Slow Controls PC [2]       pp2pp-slow      130.199.90.72   00 04 5A 62 95 76


    [1] Name.bnl.gov
    [2] Name.pp2pp.bnl.gov
    [3] Name.c-ad.bnl.gov


    Default routers = 130.199.90.24   ( 130.199.60.24 )

    IP broadcast address = 130.199.91.255

    IP subnet mask = 255.255.254.0

     

    Site DNS server address =    130.199.1.1            130.199.128.31

    Site WINS server address =   130.199.128.32      130.199.1.2

    Installation and running of slow-server

    List of additional software for installation

    (You may also need to install some missing packages from the system distribution, such as kernel-devel, sox, etc.)
    1. National Instruments GPIB drivers.
    2. Advantech ADAM 4570 driver.
    3. LabView for Linux.
    4. Configure the GPIB devices using gpibexplorer (/usr/local/natinst/ni4882/bin/gpibexplorer in the default installation). The IP addresses are listed above.
    5. Compile and install the Advantech driver. Copy the configuration files from the attached advtty_conf.tgz to the proper directories:
    • 80-advtty.rules to /etc/udev/rules.d
    • advtty to /etc/init.d
    • advttyd.conf to /usr/local/advtty (for the default installation) --- (KY: 2014-9-5) this file contains the IPs for the ADAM units.
    6. Enable and start advttyd using system-config-services.

    Running of slow control

    1. Log in as user daq.
    2. You may need to rebuild the lv-pp2pp-slow.so library if the server IP has changed: cd to pp2pp-slow, edit pp2pp-slow.h with the new IP, and run make.
    3. Start the server if it is not already running: pp2pp-slow-server &. It is assumed that the server is kept running at all times.
    4. Start the LabView application lv-ppslow and press the "Run continuously" button. In general, the application can be run from a PC other than the server.
    5. If the configuration changes, edit the configuration file bin/pp2pp-slow.conf and restart the server, either manually or from the LabView application.

    Technical Notes

     

    Angelika Drees: Luminosity measurement

    Collaborators' files

    Spin Formalism

     

    Useful figures and pictures

    RICH

    Roman Pot Phase II*


    Technical information

    DAQ-expert materials

     DAQ-expert materials

    Test of Si detector packages:

    A-6 A :  STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A6_detAcorrect.pdf

    A-6 B :  STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A6_detBcorrect.pdf

    A-6 C :  STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A6_detCcorrect.pdf

    A-6 D:  STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A6_detDcorrect.pdf


    A-6 Trigger system: STAR/system/files/userfiles/2729/file/Assembly_test/A_6_cosmic.pdf



    A-1 A: STAR/system/files/userfiles/2729/file/Assembly_test/AssemblyA1_detA(1).pdf

    A-1 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A1_detB(1).pdf

    A-1 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembl_A1_detC.pdf

    A-1 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembl_A1_detD.pdf


    Trigger system for A-1 assembly: STAR/system/files/userfiles/2729/file/Assembly_test/PMT_A1.pdf



    A-4 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A_4_detA.pdf

    A-4 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A_4_detB.pdf

    A-4 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A4_detC.pdf

    A-4 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A4_detD.pdf

    Trigger system for A-4 assembly:  STAR/system/files/userfiles/2729/file/Assembly_test/PMT_A_4.pdf







    B-1 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B1_detA.pdf

    B-1 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_1_detB.pdf

    B-1 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_1_detC(1).pdf

    B-1 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_1_detD.pdf

    Trigger system for B-1 assembly: STAR/system/files/userfiles/2729/file/Assembly_test/B_1_cosmic.pdf



    B-2 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B2_detA.pdf

    B-2 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_2_B.pdf

    B-2 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_2_C.pdf

    B-2 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B_2_D.pdf

    Trigger system for B-2 assembly: STAR/system/files/userfiles/2729/file/Assembly_test/PMTB2.pdf



    B-4 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B4_detA.pdf

    B-4 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B4_detB.pdf

    B-4 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B4_C.pdf

    B-4 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_B4_detD.pdf

    Trigger system for B-4 assembly: STAR/system/files/userfiles/2729/file/Assembly_test/B_4_cosmic.pdf


    B(A)-5 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_BA5_detA.pdf

    B(A)-5 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly%20_BA5_detB.pdf

    B(A)-5 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_BA5_detC.pdf

    B(A)-5 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_BA5_detD.pdf


    Trigger system for B(A)-5 assembly: STAR/system/files/userfiles/2729/file/Assembly_test/PMT_BA5.pdf


    A-3 A: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A3_detA.pdf

    A-3 B: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A3_detB.pdf

    A-3 C: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A_3_detC.pdf

    A-3 D: STAR/system/files/userfiles/2729/file/Assembly_test/Assembly_A3_detD.pdf


    A-3 Trigger system: STAR/system/files/userfiles/2729/file/Assembly_test/A_3_cosmic.pdf
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


    Test of trigger counters:


    Setup
    Test of the trigger counters was performed using (see first photo):
    - high voltage power supply,
    - Tektronix MSO 3034 Mixed Signal Oscilloscope,
    - beta radiation source 90Sr.
    The trigger counters were left in their storage boxes to avoid possible damage. Both PMTs in a counter were supplied with the same voltage, which was varied in the range 800-1200 V. The beta source always sat on the surface of the scintillator, pointing at it (as in the second photo). Measurements were made with the source at a few different points on the scintillator surface in order to determine position dependencies. As shown in the third photograph, the storage boxes with the trigger counters were additionally placed inside a zippered folder to fully shield them from external light.
    To enable offline analysis, a number of waveforms from both channels (PMTs) were stored on a USB flash drive in *.isf format (the scope's internal save format). The .isf files were then converted into ROOT histograms and processed (contact me to obtain the conversion program).

      


    Dark noise measurement
    This measurement was done without the 90Sr source. Signals were triggered with a very low threshold (of order mV, depending on the supply voltage). The absence of a signal in the PMT other than the triggering one (as in the figure below; red = channel 1, blue = channel 2) confirmed that the pulse was not of cosmic-ray origin.



    For such events the dark-noise peak amplitudes were histogrammed. Links to plots of these histograms are given below.
    A-3   A-6   B(A)-5   B-1
    As output, the dark-noise level as a function of the supply voltage was prepared for each channel separately: DARK NOISE vs. VOLTAGE.
     

    Detailed tests with 90Sr source - part 1
    The main part of the test was done with the beta source placed at three points on the scintillator, shown in red in the right-hand sketch. For each source position and each PMT voltage, about 150 triggers were collected. The trigger threshold was set to ~2 sigma above the noise level, based on the dark-noise measurement (see above). Only one of the channels was used for triggering. Example oscilloscope output is presented in the left-hand figure (below), together with a description of the quantities whose distributions are available in the table below.

     


    quantity \ counter                                                                          A-3     A-6     B(A)-5     B-1
    Signal amplitudes (correlation between channels and 1-D amplitude distributions)            PDF     PDF     PDF        PDF
    Signal integrals (correlation between channels and 1-D integral distributions)              PDF     PDF     PDF        PDF
    Time difference between the moments of reaching the threshold by the signals in the two channels     PDF     PDF     PDF        PDF
    Rise time (correlation between channels and 1-D rise time distributions)                    PDF     PDF     PDF        PDF
    Fall time (correlation between channels and 1-D fall time distributions)                    PDF     PDF     PDF        PDF
     
     









     








    Explanation of the quantities in the table:
    Signal amplitude is simply the maximum absolute value of the pulse (at the peak).
    Signal integral is the integral of the pulse over time.
    Time difference (\delta t, the difference in discrimination time) is the time interval between the moments at which the pulses in the two channels reach the threshold level (see the left figure above).
    Rise time is defined as the time interval for the pulse to rise from 10% to 90% of its peak value (see the left figure above).
    Fall time is defined as the time interval for the pulse to fall from 90% to 10% of its peak value (see the left figure above).

    One should note that the time difference contains a component which depends, e.g., on the length of the cables connecting the PMTs to the oscilloscope, so only the relative time difference ("difference between time differences") should be studied.
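
    To make these definitions concrete, below is a minimal, self-contained C++ sketch (illustration only; this is not the conversion/analysis program mentioned above) that computes the amplitude, integral, rise time and fall time of a sampled pulse. The uniform sample spacing, the toy waveform and the use of absolute values (negative PMT pulses) are assumptions made purely for the example.

      // Illustrative sketch only: pulse quantities as defined above,
      // computed from a uniformly sampled waveform (sample spacing dt).
      // Absolute values are used so that "peak" means the maximum |value|.
      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct PulseQuantities {
          double amplitude;   // maximum absolute value of the pulse (at the peak)
          double integral;    // integral of the pulse over time
          double riseTime;    // time to go from 10% to 90% of the peak value
          double fallTime;    // time to fall from 90% to 10% of the peak value
      };

      PulseQuantities analyzePulse(const std::vector<double>& v, double dt) {
          PulseQuantities q{0., 0., 0., 0.};
          std::size_t peak = 0;
          for (std::size_t i = 0; i < v.size(); ++i) {
              const double a = std::fabs(v[i]);
              q.integral += a * dt;
              if (a > q.amplitude) { q.amplitude = a; peak = i; }
          }
          const double lo = 0.1 * q.amplitude, hi = 0.9 * q.amplitude;
          // rise time: last 10% crossing and last 90% crossing before the peak
          std::size_t i10 = 0, i90 = 0;
          for (std::size_t i = 0; i <= peak; ++i) {
              if (std::fabs(v[i]) <= lo) i10 = i;
              if (std::fabs(v[i]) <= hi) i90 = i;
          }
          q.riseTime = (i90 - i10) * dt;
          // fall time: first drop below 90% and below 10% after the peak
          std::size_t j90 = peak, j10 = v.size() - 1;
          bool found90 = false;
          for (std::size_t i = peak; i < v.size(); ++i) {
              if (!found90 && std::fabs(v[i]) <= hi) { j90 = i; found90 = true; }
              if (std::fabs(v[i]) <= lo) { j10 = i; break; }
          }
          q.fallTime = (j10 - j90) * dt;
          return q;
      }

      int main() {
          // toy triangular pulse with 1 ns sampling, just to exercise the code
          std::vector<double> wf;
          for (int i = 0; i < 20; ++i)  wf.push_back(-5.0 * i);
          for (int i = 20; i >= 0; --i) wf.push_back(-5.0 * i);
          const PulseQuantities q = analyzePulse(wf, 1.0e-9);
          std::printf("amp=%.1f integral=%.3g rise=%.3g s fall=%.3g s\n",
                      q.amplitude, q.integral, q.riseTime, q.fallTime);
          return 0;
      }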


    Detailed tests with 90Sr source - part 2
    The second part of the tests was also done with the beta source, but this time a more detailed position dependence (11 different points) of the pulse properties was checked.  For each source position and supply voltage only one waveform per channel was saved (instead of ~150 as in part 1); however, each waveform was itself an average of 512 triggered pulses (the same trigger thresholds were used for the corresponding voltages as in part 1). Examples of the average pulses from the two channels are presented in the figure below. Results of the second part of the tests are given in the table below.
     

     

    quantity \ counter                          A-3     A-6     B(A)-5     B-1
    Average-signal amplitudes                   PDF     PDF     PDF        PDF
    Time difference (for average signals)       PDF     PDF     PDF        PDF
     

     







    The black contour in the plots in the table above marks the area of the scintillator (8 cm x 5 cm). The colours of the markers correspond to the value of the presented quantity (see the z-axis description).

     

    Detector package information

    Detector package information

    Here you can find all the information about the performance of the components of the Roman Pot detector packages.
    Each cell in the table below corresponds to a single package (as assembled in 2009), whose label is given at the top of the cell.


    Two tables at the bottom of the page indicate the Roman Pot in which the components
    of each particular package (silicon planes A, B, C and D, and trigger counter TC)
    were installed during the 2009 and 2015 runs.
     

    A-3

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    B-4

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    A-1

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    B-1

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    B(A)-5

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
     
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    A-6

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    B-2

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:
    A-4

    Run 2009 performance:
    SVX pedestals:
    Cluster properties (silicon data):
    Trigger counter:
    Pre-2015 tests:
    SVX ped. and cluster properties:
    plane: A B C D
    Trigger counter:

    Run 2015
          E1U  E1D  E2U  E2D    W1U  W1D     W2U  W2D
    A     B-4  A-3  B-1  Spare  A-4  B(A)-5  A-1  B-2
    B     B-4  A-3  B-1  A-6    A-4  B(A)-5  A-1  B-2
    C     B-4  A-3  B-1  A-6    A-4  B(A)-5  A-1  B-2
    D     B-4  A-3  B-1  A-6    A-4  B(A)-5  A-1  B-2
    TC    B-4  B-1  A-4  A-6    A-3  B(A)-5  A-1  B-2



    Run 2009
          EHI  EHO  EVU  EVD  WHI     WHO  WVU  WVD
    A     A-3  B-4  A-1  B-1  B(A)-5  A-6  B-2  A-4
    B     A-3  B-4  A-1  B-1  B(A)-5  A-6  B-2  A-4
    C     A-3  B-4  A-1  B-1  B(A)-5  A-6  B-2  A-4
    D     A-3  B-4  A-1  B-1  B(A)-5  A-6  B-2  A-4
    TC    A-3  B-4  A-1  B-1  B(A)-5  A-6  B-2  A-4

    Operation instructions




    Power to most of the STAR Roman Pot systems can be controlled remotely with two APC Switched Rack Power Distribution Units. Individual outlets on each rack can be turned on or off using a browser pointing to the following IP addresses (inside the BNL firewall):
    East Roman Pots:   130.199.90.26
    West Roman Pots:   130.199.90.38

    A browser running on ppdaq1.phy.bnl.gov (in the STAR Control Room) is the preferred link to the APCs. If there is a need for remote access, a connection to an NX server will give you access to a window where you can launch Firefox and access the URLs listed above.
    Only one connection to an APC is possible at a time, but connections time out after 1 minute without action.
    Once a connection to the APCs is established you will be asked for a user name and password.

    A LabView-based monitor system called lv-pp2pp-slow runs a server on the ppdaq1.phy.bnl.gov computer. Clients communicate with the server to display the status of the bias and low-voltage power supplies and the relevant values of voltages, currents and temperatures. One can run a client remotely by first connecting to a BNL gateway (rssh.rhic.bnl.gov) and from there logging in to ppdaq1.phy.bnl.gov with the daq account. (All necessary passwords need to be requested from the relevant people.) The client program is then launched by ~/bin/lv-ppslow &. The following window will open on your computer:



    When a board fails (blown fuse or other problem), we need to change pp2pp-slow.conf and use the variable SiIgnore at the end of the file. That variable uses 32 bits which correspond to all planes as listed in the file (Plane00 corresponds to the least significant bit, SiIgnore = 0x00000001, and so on); a small illustrative sketch of how the mask is formed is given below. This only masks out the alarm on the OneButton GUI below; the alarms (one on the LV channel in question and another one in the "Global Alarm" section) would still appear in the lv-ppslow GUI above.
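
    A minimal illustrative sketch of how such a mask can be formed (this is not part of the slow-control code; the failed plane chosen here is hypothetical):

      // Illustrative sketch only: forming the 32-bit SiIgnore mask used in
      // pp2pp-slow.conf.  Plane00 corresponds to the least significant bit
      // (0x00000001), Plane01 to the next bit, and so on.
      #include <cstdio>

      int main() {
          unsigned SiIgnore = 0;
          SiIgnore |= (1u << 0);   // mask Plane00 -> 0x00000001
          SiIgnore |= (1u << 5);   // mask Plane05 -> 0x00000020 (hypothetical failed board)
          std::printf("SiIgnore = 0x%08X\n", SiIgnore);   // prints 0x00000021

          const int plane = 5;     // check whether a given plane is masked
          std::printf("Plane%02d masked: %s\n", plane,
                      ((SiIgnore >> plane) & 1u) ? "yes" : "no");
          return 0;
      }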


    Shift-crew detector operators will interact with the Roman Pots through the "Single button" client:



    The client appears above in the off position.

    This window is activated with the icon:

    What to do if the icon has disappeared from the Desktop?



    Possible faults as seen in the "single button" client.

    Persistent Blue field:

    Red Alarm:


    There is also a "command line" monitor, called pp2pp_cmd, which has the functionality to perform many actions. A snippet of the code gives you a list of all its functions:





    If there is a need to modify the pp2pp DAQ configuration files, one needs to log in as evpops (not "operator", since early 2017) on daqman.starp.bnl.gov.  To reach that computer you have to be inside the BNL firewall and log in to the STAR gateway with your certificates (ssh -Y -A username@stargw.starp.bnl.gov). Alternatively, you may "ssh -i ~pp2pp/.ssh/skm-key-yipkin-daqman.starp.bnl.gov  evpops@daqman" from our workstation blanchett in the STAR Control Room (and ask Kin Yip for the ssh passphrase).  The configuration files are in:

    /RTS/conf/pp2pp/pp01.ini  for the East Roman Pots;
    as a reminder of which RP plane is connected to a particular sequencer, use this figure:


    As shown in the figure below, each sequencer uses 4 bits to indicate which detectors (planes/chains) are active and will be read. For example, with all planes active the value is 0xF.



    If chain D failed in E1D, one should have 0x7 in ppSEQ02 (a small illustrative sketch of the bit logic follows below).
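
    A minimal illustrative sketch of the bit logic (not taken from the DAQ code); it assumes chains A, B, C and D map to bits 0 to 3, which is consistent with the 0xF -> 0x7 example above:

      // Illustrative sketch only: the 4-bit per-sequencer mask in pp01.ini/pp02.ini.
      // Assumption: chains A,B,C,D map to bits 0..3 (consistent with 0xF -> 0x7
      // when chain D is dropped).
      #include <cstdio>

      int main() {
          enum Chain { A = 0, B = 1, C = 2, D = 3 };
          unsigned mask = 0xF;          // all four planes/chains active
          mask &= ~(1u << D);           // chain D failed -> remove its bit
          std::printf("ppSEQ02 mask = 0x%X\n", mask);   // prints 0x7
          return 0;
      }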

    /RTS/conf/pp2pp/pp02.ini  for the West Roman Pots;
    as a reminder of which RP plane is connected to a particular sequencer, use this figure:



    with the corresponding variables in pp02.ini:





    The cameras are at:
    East air flow: 130.199.90.6/home/homeS.html
    East electronics rack: 130.199.90.7/home/homeS.html
    West air flow: 130.199.90.9/home/homeS.html
    West electronics rack: 130.199.90.22/home/homeS.html



    Igor/Dima's trick for checking the LV quickly from the command line (this is the same set of commands that pp2pp-slow-server.c uses):

    In ppdaq1, we need two terminals.
    1. In one terminal (like "xterm"), do

      "stty -F /dev/vttyAP1 time 1 min 1" and then "cat /dev/vttyAP1" (for the East), OR

      "stty -F /dev/vttyAP3 time 1 min 1" and then "cat /dev/vttyAP3" (for the West),         

      and wait;

    2. In another terminal, do:

      echo -ne '$00A\r' >> /dev/vttyAP1  (for the East)   OR
      echo -ne '$00A\r' >> /dev/vttyAP3 
       (for the West) ;

    3. Watch the first terminal for any output.  If that power board/card exists, we should see a reading (output); otherwise, provided all the cables (including the relevant serial link!) are connected, this may indicate that there is no such power board.

    In the above command "00A", "00" is the board number ("board 0" here) and "A" means "AVDD1 volt reading".  All numbers are in hexadecimal (so the board numbers range from "00" to "1F").  All the possible readings are listed below:
    • 'A' : AVDD1 volt reading 
    • 'B' : AVDD2 volt reading
    • 'C' : DVDD volt reading
    • 'D' : DPECL volt reading
    These and some more commands can be found here (from Stephen Boose).
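
    For example, following the pattern above, reading the DVDD voltage ('C') of board 0x0A on the East side would presumably be done with   echo -ne '$0AC\r' >> /dev/vttyAP1   in the second terminal while watching   cat /dev/vttyAP1   in the first (the board number here is only an illustration).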


    Igor's quick initialization check:

    After logging in to operator@pp01-l or operator@pp02-l (from daqman.starp.bnl.gov, for example):
    1. cd /RTS/IGOR
    2. Edit ppSEQ.ini to set the sequencer(s)/chain(s) that you want to initialize (just like pp01.ini/pp02.ini in /RTS/conf/pp2pp).
    3. ./pprun
    4. When it is working, you should see the least significant digit after "read" fluctuate between 0 and 1 as the number after "expect" changes between 2 and 3.


    Running pedestal:

    1. Get an account for blanchett.starp.bnl.gov (by asking Wayne Betts or Michael Poat and also requesting your account be put under the "pp2pp" group).
    2. cd /home/pp2pp
    3. ./pedestal.csh run_no
    4. Hit return to see all the pedestals and RMS values for all chains.   The figures, 8 for pedestals and 8 for RMS values, are stored as seq_1.gif ... seq_8.gif and rms_seq_1.gif ... rms_seq_8.gif.


    Looking for hints in DAQ Monitoring log files (/log/esb.log in daqman) :

    1. When you find errors while using the STAR RunControl to take data, you may go to daqman.starp.bnl.gov (one can log in with the operator account) to find more detailed error messages.

      For example, one can do the following to find out problems related to pp02 :

          grep pp02 /log/esb.log

      and one might see:

      [pp02-l   17:35:36 016] (pp2ppMain): ERROR: ppSEQ.C [line 98]: FAILURE: ppSEQ07: Switching to EXT clocks failed

      This means that something was wrong with the clock connection to the sequencer  #7.

    2. A hint for which VME to reboot in /log/esb.log or /log/daq.log in daqman or in STAR Run Control Panel:

      The first example may be seen in the STAR Run Control panel (used by the STAR Shifters) or in /log/esb.log

      esb.log:[pp01-l   08:32:39 060] (pp2ppMain): CRITICAL: det.C [line 140]: PP2PP: RDO 1 -- too many auto-recoveries. Stopping!
      esb.log:[pp01-l   08:32:49 060] (pp2ppMain): CRITICAL: esbTask.C [line 2458]: Recovery failed for RDO(s): 1  -- stopping run!

      If one sees the above "CRITICAL" message for pp01-l, one knows that it is the pp01-l VME (East) that is at fault, and so we should reboot that VME crate.   { From Tonko: this one was due to crashes in one of the sequencers, and we got a failed-recovery message because NO recovery is possible! }


      Another example, either in the Run Control panel or /log/daq.log, one might see:

      daq.log:[daqman   14:14:39 065] (scDeamon): CRITICAL: scDeamon.C [line 1218]: PP2PP[1] [0x6111] Rebooted

      In this case, from Tonko, "PP2PP[1]" also indicates that pp01-l (VME) should be rebooted.  Also from Tonko, we got this message saying the CPU rebooted because either the CPU or the Ethernet had crashed.

    3. Also, we don't need to worry about the "ERROR" messages about "External clock", such as:

      esb.log:[pp02-l   11:06:51 060] (pp2ppMain): ERROR: ppSEQ.C [line 97]: ppSEQ05: External clock reqired, but lost, status 0x80520C0F, trying to fix
      esb.log:[pp01-l   11:06:51 060] (pp2ppMain): ERROR: ppSEQ.C [line 97]: ppSEQ03: External clock reqired, but lost, status 0x80520C0F, trying to fix


      Tonko said: "Yes, the loss of clock seems to happen fairly often. It happens to both sides (crates) at the same time so it must be somehow related to the driver module at the TCD end. But it doesn't seem to cause any issues.

      Also, you will see this message at EVERY run-stop because of the way clock switch-over is sequenced. But this is completely innocuous because the run has already stopped."
       




    Chanaka's slow-control panel to control the bias HV's for our 16 trigger PMT's:

    In the Control Room, one usually just goes over to the terminal for the sc5 node.   But if one wants to change the HVs remotely, one may log in from any starp gateway/node (such as our "blanchett"):
    1. ssh sysuser@sc5 and find the password (for sysuser) from the folder in the STAR Control Room ;
    2. enter the alias command "pp2pphv" on the command prompt ;

      [ From a home computer or whatever, pp2pphv may abort.   So, instead of doing
      "medm -displayFont scalable -x /home/sysuser/GUI/trg/pp2pp.adl" (which is what "pp2pphv" does),  you may do:

      medm -displayFont alias -x /home/sysuser/GUI/trg/pp2pp.adl ]

    3. a "PP2PP HV Controls" panel would pop up ;
    4. by placing the cursor in the "demand voltage" field of the relevant channel, one can type in a value to raise the voltage (e.g. 1100 V).
       


    Moving Roman-Pots using the "pet" page:

    One must communicate with the MCR first and obtain their approval before moving a Roman Pot; unauthorized Roman Pot movement may result in complete loss of the beam.


    If it is not already open for you, you may go to either of the following "pet" pages to move the Roman Pots.
    •  "pet"  → "RHIC"  → "Interaction_Regions"  → "PP2PP"  → "RomanCtrl"  → "Sector 5" (East) or "Sector 6" (West)

      OR
    •  "pet"  → "RHIC"  → "Drives"  → "romanPots"  → "Sector 5" (East) or "Sector 6" (West)

    The two pet pages are mostly the same; we mostly use the first one, as it provides plots of the NMC radiation levels.  [ But the second one allows one to re-enable the motor-drive permit (by clicking "disable" and then "enable" at the bottom of each page), which the first one does not.   This may be needed after one moves a Roman Pot back to the retracted position after finishing some work during an access or maintenance, etc. ]

    There are 4 stations on each page and we should just pay attention to the familiar names of ours, such as E1U, E1D, etc.   For each plot, one should see "LVDT Top" or "LVDT Bottom" (under "Position Measurements"); at the fully retracted position, that number should be about ±89 mm.

    To move a pot, go to the space below the words "step cmd" and type in the "absolute" (not incremental) number of steps to move the pot; 1 mm corresponds to about 8000 steps.  ( The count is "0" if the pot is fully retracted. )    After typing in the number of steps, you should see the absolute LVDT number of about 87 mm decrease.   E.g., if 87 mm changes to 85 mm, the pot has moved 2 mm towards the center of the beam pipe.  To move further, you need to type in a bigger number of steps.
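
    A small worked example (illustration only, using the rule of thumb above; the exact step-to-mm calibration may differ):

      // Illustrative sketch only: converting a desired movement in mm into the
      // absolute step count typed under "step cmd", using ~8000 steps per mm and
      // 0 steps = fully retracted, as stated above.
      #include <cstdio>

      int main() {
          const double stepsPerMm = 8000.0;   // approximate value quoted above
          const double moveMm     = 2.0;      // e.g. insert 2 mm from full retraction
          const long   steps      = static_cast<long>(moveMm * stepsPerMm);
          std::printf("Absolute step command for %.1f mm: %ld steps\n", moveMm, steps);
          // With the LVDT reading ~87 mm at full retraction, this 2 mm move should
          // make the displayed value drop to ~85 mm, as in the example above.
          return 0;
      }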

    The hard stop is 15 mm from the center (0 mm), but the limit switch restricts the pot movement to around 16.5 mm or so.

    To go back to the fully retracted position, just hit "home" for that pot.





    ACresets :

    In the unlikely event that you want to AC-reset APC or network equipment inside the tunnel, you may go to the following pet page area:
    •  "pet"  → "RHIC"  → "Interaction_Regions"  → "PP2PP"  → "RomanCtrl"  → "ACresets"

    On each side (East/yo5 or West/bo6), you may "AC reset" the power of the APC (apc.acpwr), the Network Switch (nw.acpwr) or the Magnum Converter (mc.acpwr); the Magnum Converter converts between Ethernet and optical signals.


    Cronjob in ppdaq1.phy.bnl.gov

    There is a cronjob in /etc/cron.d/pp2pp-slow:

    # Check pp2pp slow server every 2 minutes
    */2 * * * * daq /home/daq/pp2pp-slow/check-server.sh >> /dev/null

    which checks whether the server is running or not and restarts it if not.

    During shutdown, it may be kept as a hidden file   /etc/cron.d/.pp2pp-slow   (note the "." before "pp2pp-slow").




     One can use the serial/DIGI link to watch the reboot of the VME in the tunnel; the instructions have been written up in:
     
     https://romanpot-logbook.phy.bnl.gov/Romanpot/25

     Basically, it is just "telnet 130.199.90.81 2101", and one needs to use the right IP:

     East:   130.199.90.81
     West:  130.199.90.120


    Restarting network daemon in ppdaq1.phy.bnl.gov:

    One should do :

    service NetworkManager restart

    NOT 

    service network restart

    !!



    Check lists for Installation and Repair of Roman Pot assemblies
    STAR/system/files/userfiles/2729/file/Assembly_test/installRP(2).pdf



    Running TCD manually:
    http://online.star.bnl.gov/daq/export/TCD_WWW/index.html?client=tcd-pp2pp
    Goto "Scheduler" and select "Start". User name: star_tcd.

     

    Run 15 preparation

    Run 15 preparation


      Documents

    Roman Pot naming convention
      


       Drawings / Schemes / Pictures

    Photographs of crates on EAST and WEST

     

    Photographs of crates

    East









    West




     




    Map of connections

    Click on a chosen scheme to enlarge. Similar schemes for 2009 setup can be found here.


    East







    West









    Trigger

     Trigger

    Data analysis

    Central Exclusive Production analysis

    The following webpage will be filled with information about the analysis of the central exclusive production (CEP) process
    p+p -> p+X+p,
    X = pi+pi-,  
    K+K-,  pi+pi-pi+pi-,  pi+pi-K+K-,  K+K-K+K-
    Preliminary plot of invariant mass of exclusively produced pion pairs in proton-proton collisions at \sqrt{s} = 200 GeV measured in the STAR experiment at RHIC

    Preliminary mass plots in PDF and EPS formats are attached at the bottom of webpage.

    Elastic proton-proton scattering

    Elastic proton-proton scattering

    analysis webpage

    Single Diffractive Dissociation

    Single Diffractive Dissociation

    analysis webpage

    pAu/pAl Ultra Peripheral Collisions

    pAu/pAl Ultra Peripheral Collisions

    analysis webpage

    Software

    Software

    StMuRpsUtil - Roman Pot data analysis utilities (afterburner)

    StMuRpsUtil (under development, for testing purposes only!!!)

    Should you have any questions/comments/remarks regarding this module please contact
    rafal.sikora@fis.agh.edu.pl.

    1. What is StMuRpsUtil
    2. Structure
    3. Utilities
    4. How to use
    5. Useful links


    What is StMuRpsUtil
    StMuRpsUtil is a user-friendly utility class which provides a set of post-processing corrections (an afterburner) to the Roman Pot data stored in the StMuRpsCollection class. It has built-in functionalities which extend the standard Roman Pot data collection.


    Structure
    StMuRpsUtil is a ROOT-based class intended to work in the STAR computing environment as well as in local environments, e.g. standalone machines. The typical STAR "Maker" format (inheritance from StMaker) was abandoned in order to allow the same code to run on MuDST files and on other storage formats, e.g. private picoDST files. The only limitation/requirement is that the Roman Pot data has to be stored in the StMuRpsCollection class.

    Usage of StMuRpsUtil involves creating a single instance of the class at the beginning of the analysis, and invoking the StMuRpsUtil::process() and StMuRpsUtil::clear() methods at the beginning and end of each event's analysis, respectively. StMuRpsUtil::process() returns a pointer to the StMuRpsCollection2 class, a mirror class of the standard StMuRpsCollection, which contains the RP data post-processed using the final calibrations. All elements of StMuRpsCollection2 can be accessed in the very same way as those of the StMuRpsCollection class.


    Utilities
    StMuRpsUtil provides the following corrections to data:
    • run-based alignment calibration
    • (to be implemented) run-based time slewing corrections
    • (to be implemented) hot strips removal

    The following functionalities are available to the user:
    • the user can set the position of the vertex that is used in the reconstruction of the proton kinematics:
      StMuRpsUtil::updateVertex(double x, double y, double z)
      This method should be invoked before StMuRpsUtil::process(). The unit of the arguments is the meter (see the usage sketch after this list).
    • (to be implemented) the user can select the type of selection criteria (loose, medium, tight): only proton tracks passing the cuts at the selected level will be present in the tracks collection
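
    A minimal usage sketch (illustration only, in the same fragment style as the examples in the "How to use" section below; the vertex values are placeholders) showing where updateVertex() fits relative to process() and clear():

      // Illustrative fragment only: updateVertex() must be called before process().
      Int_t MyAnalysisMaker::Make(){
         double vx = 0., vy = 0., vz = 0.;        // placeholder vertex position, in meters per the note above
         mAfterburner->updateVertex(vx, vy, vz);  // set the vertex used for proton-kinematics reconstruction
         StMuRpsCollection2* muRpsColl = mAfterburner->process();
         /* ... analysis of the event using muRpsColl ... */
         mAfterburner->clear();                   // always call at the end of Make()
         return kStOK;
      }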


    How to use
     MuDST analysis (working example: /star/u/rafal_s/StMuRpsUtil_tutorial/)
    1. Setup environment to SL16c or newer.
      starver SL16c
      Make sure you have the latest definitions of Roman Pot data classes in your StRoot.

    2. Download StMuRpsUtil package from repository.
      cvs co offline/users/rafal_s/StMuRpsUtil
    3. Put downloaded StMuRpsUtil catalogue under StRoot path in your analysis directory.
      mv offline/users/rafal_s/StMuRpsUtil myAnalysisDir/StRoot/.
    4. Edit setup.h file (myAnalysisDir/StRoot/StMuRpsUtil/setup.h) so that only the following line is uncommented.
      #define RUN_ON_MUDST // set if afterburner is used on MuDST
    5. Edit the header file of your analysis maker class.
      Add a declaration of the StMuRpsUtil class before the definition of your analysis maker class, and add a pointer to a StMuRpsUtil object as a member of your analysis maker class.
      /*...*/
      class StMuRpsUtil;
      /*...*/

      class MyAnalysisMakerClass: public StMaker{
      /*...*/
      StMuRpsUtil* mAfterburner;
      /*...*/
      };
    6. Edit the implementation file of your analysis maker class.
      Include StMuRpsUtil and StMuRpsCollection2 headers at the beginning.
      /*...*/
      #include "StMuRpsUtil/StMuRpsUtil.h"
      #include "StMuRpsUtil/StMuRpsCollection2.h"
      /*...*/
      In the analysis maker constructor, create StMuRpsUtil object passing as an argument pointer to StMuDstMaker.
      MyAnalysisMakerClass::MyAnalysisMakerClass(StMuDstMaker* maker): StMaker("MyAnalysisMakerClass"){
        /*...*/
        mAfterburner = new StMuRpsUtil(maker);
      }
      At the beginning of the MyAnalysisMaker::Make() method in your analysis maker class, invoke StMuRpsUtil::process(), which will provide you with the post-processed RP data collection. Don't forget to call StMuRpsUtil::clear() at the end of MyAnalysisMaker::Make().
      Int_t MyAnalysisMaker::Make( ){
         /*...*/
         StMuRpsCollection2* muRpsColl = mAfterburner->process(); // <-- muRpsColl can be used to get corrected proton tracks etc.
         /* here analysis of an event */
         mAfterburner->clear(); // <-- critical!!!
         return kStOK;
      }
    7. Download the RP data calibration files from http://www.star.bnl.gov/~rafal_s/protected/rpsCalibrations2015.tar.gz, unpack the archive, and put the extracted directories under myAnalysisDir (you should then have myAnalysisDir/Alignment etc.).
       
    8. Add the following line:
      gSystem->Load("StMuRpsUtil.so");
      to the macro which starts the analysis chain. It ensures that afterburner libraries are accessible.
       
    9. Edit your job submission XML file so that the directories with calibration files extracted from rpsCalibrations2015.tar.gz are included in the sandbox.
      <SandBox installer="ZIP">
              <Package>
                      <!-- Any other files.... -->
                      <File>file:./Alignment</File>
                      <File>file:./HotStrips</File>
                      <!--        etc.         -->
              </Package>
      </SandBox>
    After the above steps your code should compile and make use of the StMuRpsUtil afterburner in the MuDST analysis.

    If you find any problems using StMuRpsUtil (the code does not compile or crashes at execution), you are kindly requested to report them to the developers. We also kindly ask that you not edit the StMuRpsUtil code on your own.

     picoDST analysis (Krakow format) (description to be added)



    Useful links
    StMuRpsUtil repository in STAR CVS: http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/offline/users/rafal_s/StMuRpsUtil/
    Working example of analysis using StMuRpsUtil: /star/u/rafal_s/StMuRpsUtil_tutorial/
    Roman Pot data calibration files (run 2015): http://www.star.bnl.gov/~rafal_s/protected/rpsCalibrations2015.tar.gz
    StMuRpsCollection documentation (write-up): https://drupal.star.bnl.gov/STAR/system/files/RomanPotsInStEvent_0.pdf
    StMuRpsCollection documentation (doxygen): http://www.star.bnl.gov/webdata/dox/html/classStMuRpsCollection.html
    Roman Pot alignment description: to be added

    Analysis code for UPC picoDST (Krakow format)

    Analysis code for UPC picoDST (Krakow format)

    Should you have any questions/comments/remarks please contact
    rafal.sikora@fis.agh.edu.pl or leszek.adamczyk@agh.edu.pl.

    1. Introduction
    2. 
    Code structure
    3. How to run
    4. Configuration file (options)
    5. Useful links


    Introduction
    On this page you can find a set of instructions that will enable you to develop, run, and share the ROOT-based C++ code for the picoDST analysis created and maintained by the Krakow group of the UPC PWG. The code is shared between all data analyzers via the CVS repository http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/offline/UPC/.

    Code structure
     Shared files - can be edited by all users:
              rpAnalysis.cpp - analysis class (definitions)
              rpAnalysis.hh - analysis class (header)
              config.txt - configuration file
     Core files - do not edit those files and directories:
              runRpAnalysis - launching script (recommended to use)
              rpAnalysisLauncher.C - launching script
              clearSchedulerFiles.sh - utility macro which removes files created by STAR scheduler
              picoDstDescriptionFiles - folder with a core code describing picoDST content etc.

    The skeleton of the analysis class rpAnalysis (which inherits from TSelector) was created with ROOT's built-in method MakeSelector() (some more information about MakeSelector() can be found here).
    When the analysis starts, the methods rpAnalysis::Begin() and rpAnalysis::SlaveBegin() are invoked (the right place to create histograms etc.). Next, rpAnalysis::Process() is called for each event in the picoDST tree - you can put your selection algorithms, histogram filling, and so on there. After all events in the picoDST are processed, the methods rpAnalysis::SlaveTerminate() and rpAnalysis::Terminate() are invoked, where e.g. the output file can be written.

    The data of a single event accessible in rpAnalysis::Process() are stored in the particle_event object. Click here to see all elements of this class.

    The analysis should be launched with the runRpAnalysis script (executable). The script can be run with one argument, the name of a configuration file containing the definition of the trigger that you would like to analyze and some analysis options (they can be used to control which part of the code should be executed). If the script is run without any arguments, the configuration file config.txt is used to launch the analysis.


    How to run
     Preparation of the analysis environment (first run)
    1. Setup environment to stardev.
      stardev
    2. Create and enter a directory where you want to work with the UPC analysis code. Let's denote it MY_PATH.
      mkdir MY_PATH
      cd MY_PATH
    3. Download analysis code from repository. Enter the code directory. 
      cvs co offline/UPC
      cd offline/UPC
    4. Edit the configuration file config.txt. It is especially important to provide valid (absolute) paths under the options CODE_DIRECTORY and OUTPUT_DIRECTORY. You are encouraged to set the path for the analysis output outside offline/UPC.
      CODE_DIRECTORY=/absolute/path/to/MY_PATH/offline/UPC
      OUTPUT_DIRECTORY=/absolute/path/to/MY_PATH/output
      
      OUTPUT_DIRECTORY does not have to exist, in such case it will be automatically created by the analysis launching script.

    5. Now you are prepared to run the analysis. For the first execution do not edit the SINGLE_RUN option in the configuration (leave it set to "yes"). To start the analysis, simply type
      runRpAnalysis
      If there are any problems with the configuration file, e.g. a wrong data directory etc., you will receive appropriate message(s) in the terminal.
      If no problems are found by the launching script, you should see ROOT start and display messages about the compilation progress (please do not be bothered by the warnings related to the picoDST description files). When the compilation is done, the analysis code is finally executed. You can verify successful execution by checking the content of OUTPUT_DIRECTORY - you should find there a ROOT file with the analysis output.
     Regular code development/running
    1. Setup environment to stardev.
      stardev
    2. Enter the directory with UPC analysis code (MY_PATH is where you have offline/UPC directory).
      cd MY_PATH/offline/UPC
    3. Update the content of shared repository - this will ensure you are working with the latest version of all files in offline/UPC.
      cvs update
      
    4. Now you are free to work on analysis. You can change the analysis code (rpAnalysis.cpp, rpAnalysis.hh), edit the configuration file to run analysis code over various triggers, with different options etc., and launch analysis using runRpAnalysis script.
    5. NOTE: Use comments // or /* */ to describe the parts of the code you add, so that everybody can understand them.

    6. When you finish working with the code you should commit the changes you have made, so that all users are always working with the same version of the software. It is important to always commit if a change in the code has been made. Simply type
      cvs commit rpAnalysis.cpp rpAnalysis.hh
      NOTE: Before committing, always make sure that the code compiles and executes without errors! If the code doesn't work but you would like to save all your work, you can comment out the lines you have added, commit, and work out the problem later.
      NOTE 2: Do not commit files other than rpAnalysis.cpp or rpAnalysis.hh. It is especially important to avoid committing the configuration file, which is analyzer-specific.
      NOTE 3: CVS is "smart", so if somebody commits before you do, it can merge (typically with success) the changes in the latest committed version and your version of the file. If after doing 'cvs commit' you receive a message similar to
      cvs commit: Up-to-date check failed for `rpAnalysis.cpp'
      cvs [commit aborted]: correct above errors first!
      it means that the described conflict has occurred. In such a case simply do
      cvs update
      If you don't get any warnings, you can re-commit (the cvs commit command above). However, if you find a warning like
      rcsmerge: warning: conflicts during merge
      cvs update: conflicts found in rpAnalysis.cpp
      you need to manually edit the file you want to commit. Click here to learn about the details.

    If you find any problems (the code does not compile or crashes at execution) and you suspect it is an issue with the core code, you are kindly requested to report it to the developers.



    Configuration file (options)
    Find below a list of options available in the configuration file. Obligatory options are TRIGGER, SINGLE_RUN, RUN_NUMBER (only if SINGLE_RUN=yes), DATA_DIRECTORY, CODE_DIRECTORY and OUTPUT_DIRECTORY.
    If you think more options/utilities are needed in the configuration file, contact
    developers.
    • TRIGGER
      This option is the name of the trigger that you want to analyze. It should have the same form as at http://online.star.bnl.gov/rp/pp200/.
    • SINGLE_RUN
      • If set to "yes", this forces analysis of a single run (the run number is defined by the RUN_NUMBER option). In this case the analysis is launched without the STAR scheduler, using the node you are currently logged on to. The name of the output ROOT file in OUTPUT_DIRECTORY has the form analysisOutput.RUN_NUMBER.TRIGGER.root.
      • If set to "no", the full available dataset for the TRIGGER is analyzed using the STAR Scheduler, with job-splitting onto multiple RACF nodes.
      NOTE: It is recommended to check the validity of the code (compilation and execution with no errors) using SINGLE_RUN=yes before you run the analysis over the full dataset with SINGLE_RUN set to "no".
      The submission XML file is automatically created by the launching script. The number of files for a single job is set to 20 (this can be made configurable if needed), so typically a few dozen jobs are submitted. This results in plenty of scheduler files showing up in CODE_DIRECTORY, as well as log/error files in OUTPUT_DIRECTORY. If you want to clean CODE_DIRECTORY of the scheduler files, use the clearSchedulerFiles.sh script. You can check the progress of job execution with the command
      condor_q -submitter $USER
      or, if you do not have any other jobs submitted, use
      condor_q -submitter $USER | tail -n1
      If the output is:
      0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
      it means that all jobs are finished. If all jobs were successful, in your OUTPUT_DIRECTORY you should see a number of ROOT files called analysisOutput.SOME_LONG_NAME_WITH_VARIOUS_CHARACTERS.TRIGGER.root. Those are the output files from each single job (SOME_LONG_NAME_WITH_VARIOUS_CHARACTERS is the ID of the submission and the ID of the job, separated by an underscore "_"). To merge them into a single file, type
      hadd allRunsMerged.root analysisOutput.*
      This will create a single file called allRunsMerged.root. Remember to merge files from only one submission! If you suspect something went wrong during job execution you can check the log and error files of each single job; they are placed in OUTPUT_DIRECTORY and have the extensions .log and .err, respectively.
    • RUN_NUMBER
      is the ID number of the analyzed run (this option is omitted if SINGLE_RUN=no).
    • DATA_DIRECTORY
      Should contain the full path to the directory where the lists of available picoDST files are stored (the same place as the picoDSTs themselves). Currently it is /gpfs01/star/pwg/UPCdst.
    • CODE_DIRECTORY
      Should contain the full path to the directory where your private copy of the offline/UPC/ directory is placed.
    • OUTPUT_DIRECTORY
      Should contain the full path to the directory where you want the analysis output (ROOT files, log files) to be saved. If OUTPUT_DIRECTORY does not exist, it is created.
    • ANALYSIS_OPTIONS
      This option is intended to contain a set of options, separated by the "|" character, that are sent to the analysis program and can be used, for example, to control which parts of the code should be executed.


    Useful links
    UPC analysis code repository in STAR CVS: http://www.star.bnl.gov/cgi-bin/protected/cvsweb.cgi/offline/UPC/
    CVS tutorial @ drupal: https://drupal.star.bnl.gov/STAR/comp/sofi/tutorials/cvs
    Presentation on the Krakow picoDST: https://drupal.star.bnl.gov/STAR/system/files/talk_42.pdf
    StMuRpsCollection documentation (write-up): https://drupal.star.bnl.gov/STAR/system/files/RomanPotsInStEvent_0.pdf
    StMuRpsCollection documentation (doxygen): http://www.star.bnl.gov/webdata/dox/html/classStMuRpsCollection.html
    Roman Pot alignment description: to be added

    Other databases created for the 2015 reconstructions


    1.  Calibrations/pp2pp/pp2ppPMTSkewConstants


    There are 64 skew constants, as there are 4 constants for each PMT and 2 PMTs for each of the 8 RPs.


    Rafal's prescription:


    Constants in set1.* contain parameters for runs 16085056, 16085057 and >= 16090042.
    Constants in set0.* contain parameters for all other runs.
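
    As a small illustrative sketch only (not code used in production), the prescription above can be expressed as a run-number lookup:

      // Illustrative sketch only: which skew-constant set applies to a given run,
      // following the prescription quoted above.
      #include <cstdio>

      int skewSetForRun(long run) {
          if (run == 16085056 || run == 16085057 || run >= 16090042) return 1;  // set1.*
          return 0;                                                             // set0.*
      }

      int main() {
          std::printf("run 16085055 -> set%d\n", skewSetForRun(16085055));  // set0
          std::printf("run 16090042 -> set%d\n", skewSetForRun(16090042));  // set1
          return 0;
      }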


    Implementations:

    1st set:

    set0.*  entered at  "2015-01-01 00:00:01 GMT"


    2nd set:

    "RTS Stop Time" for 16085055 was "2015-03-26 23:05:04 GMT"
    "RTS Start Time" for 16085056 was "2015-03-26 23:06:39 GMT"

    set1.*  entered at  "2015-03-26 23:05:05 GMT"



    3rd set:

    "RTS Stop Time" for 16085057 was "2015-03-27 00:07:59 GMT"
    "RTS Start Time" for 16085058 was " 2015-03-27 00:14:32 GMT"

    set0.*  entered at  "2015-03-27 00:08:00 GMT"



    4th set:

    "RTS Stop Time" for 16090041 was "2015-03-31 22:38:57 GMT"
    "RTS Start Time" for 16090042 was "2015-03-31 22:39:59 GMT"

    set1.*  entered at  "2015-03-31 22:38:58 GMT"



    2.  Geometry/pp2pp/pp2ppAcceleratorParameters



    The order of entries in each set is:

    x_IP  y_IP  z_IP  theta_x_tilt  theta_y_tilt  distancefromDX_east  distancefromDX_west LDX_east  LDX_west  bendingAngle_east  bendingAngle_west conversion_TAC_time

    Entries entered:

    "2015-01-01 00:00:01 GMT" :  0 0 0  0 0  9.8 9.8  3.7 3.7  0.018832292 0.018826657  1.8e-11

    "2015-04-28 00:00:01 GMT" :  0 0 0  -0.003640421 0  9.8 9.8  3.7 3.7  0.011238936 0.026444185  1.8e-11
    ( The pp run stopped on Apr. 27 and there were a few days before C-AD could switch from pp to pAu operationally.  I picked the beginning of Apr. 28 for this pAu entry. )

    "2015-06-08 15:00:00 GMT" :  0 0 0  -0.002945545 0  9.8 9.8  3.7 3.7  0.012693812 0.025021968  1.8e-11
    ( The last *pAu* run was 16159025 on June 8 and the first *pAl* run was 16159030, which was a bad run and started at "2015-06-08 15:44:13 GMT".  The previous run --- a pedestal run, 16159029 --- ended at "2015-06-08 14:24:54 GMT".  So I arbitrarily selected the above time, roughly in the middle. )

    pp2ppRPpositions (STAR offline database) corrections

    Originally, this was mainly to correct for the malfunctioning LVDT readings of E1D between Mar. 18 and Apr. 6, 2015.   I have come across 7 blocks/sub-periods where the E1D LVDT readings need to be corrected.  Since the step count is the same, 505004 (with one exception below), I have used an average of the LVDT readings closest to the period of malfunction (usually the good readings before; if the last good readings were too long before, I have taken the averages of the good ones shortly after the period in question).   These are in the files ToCorrect1.txt, ToCorrect2.txt, ..., ToCorrect7.txt.  The average position used for each such period is listed at the end of the corresponding file. [ ToCorrect1.txt has one entry which I have "bracketed", as it has to be corrected with a position of ~-32 cm instead of ~-25 cm; this is explained in point 1 below. ]

    In all cases, in the "Calibrations/pp2ppRPpositions" table, I have just inserted a set of 8 entries with a "beginTime" 1 second later than the original "beginTime" (the latter of which appears in all the .txt files listed here), as Dmitry Arkhipkin instructed, even though there might be only one entry (E1D) that was changed.


    However, I have come across the following and so I have also corrected them:
    1. For run 16077055, the position is ~32 cm, or 449748 steps, and so I have used the average of the last good LVDT readings corresponding to the same number of steps (449748).  The file is "ToCorrect1.exception_32cm.txt".
       
    2. On Apr. 7, there was a period during which the LVDTs of E1D and E2D were swapped by mistake.  I've corrected this as well.  The file for this is "E2D.txt".  I have needed to correct both E1D and E2D; at the end of the file, the 1st position is for E1D and the 2nd one is for E2D.

    3. Accidentally, I've also come across a period (~6 pm on Apr. 3 to 10 am on Apr. 4) which had wrong entries because the online database was not updated (due to inaccessibility of the CDEV/STAR portal server).   Dmitry Arkhipkin has scanned the entire online database and found 5 periods which have such a gap of > 5 minutes, including the above-mentioned one.  I've checked that we only need to correct for 2 of them: 4 runs in the above period (Apr. 4) and 6 runs on May 8, the latter being in the heavy-ion period where only the West-side Roman Pots were actually inserted.  For the other 3, either they are not in the pp2pp (Roman Pot) data-taking period or the positions (steps) remained the same before and after the gap (so that the offline database just used the previous LVDT positions available in the online database).  The files for this are "NotUpdated.Apr4_.txt" and "NotUpdated.May8_.txt", for Apr. 4 and May 8 respectively.
       

    Pictures

    DX-D0 chamber / Roman Pot setup in the tunnel

    Roman Pot vertical station (old)

    Other useful figures

    2009 setup (Phase I)

      Analysis notes


       Drawings / Schemes / Pictures
     
       Other

    ADC and TAC distributions
    Silicon performance (cluster data)
     

    Paper review - A_NN and A_SS

    A_NN, A_SS GPC paper review link (PP2PP subsystem webpage):

    Photomultipliers data

    ADC and TAC distributions


    The sets of plots below contain ADC and TAC spectra from the Roman Pot trigger counters, as observed in the sample of reconstructed elastic proton-proton scattering events (sqrt{s} = 200 GeV). Five datasets were used to prepare the histograms: 0, 1, 4, 6, 9 (see numeration here).

    One should take into account that the TAC spectra are sensitive to the time-space structure of the colliding bunches (that's the reason why many "bumps" are present in the distributions). Also, the TAC spectra from the Roman Pots were biased due to the "TAC cut-off" (sharp right edge - loss of early events). The sets of points around TAC ~ 100 also result from the cut-off.

    The PDF versions of the plots are listed at the bottom of the page.

     ADC          TAC

    Racks schemes

    East

    West

    Silicon performance (cluster data)

    Silicon performance

    SVX chips pedestals: average and RMS (from here)


    EHI (average):  EHI (RMS):  EHO (average):  EHO (RMS):
    EVU (average):  EVU (RMS):  EVD (average):  EVD (RMS):
    WHI (average):  WHI (RMS):  WHO (average):  WHO (RMS):
    WVD (average):  WVD (RMS):  WVU (average):  WVU (RMS):





    --------------------------------------------------------------




    The part of the page below contains sets of graphics with data from the Silicon Strip Detectors mounted in the Roman Pots in the 2009 run at sqrt{s} = 200 GeV. Five datasets were used to prepare the histograms presented below: 0, 1, 4, 6, 9 (see numeration here). All histograms can be downloaded in PDF format (see the list of files at the very bottom).

    Number and length of clusters

    The sets of plots below contain distributions of the number of clusters (left) and the number of strips (length) in clusters (right) in each silicon plane. Each row represents a different Roman Pot, each column a silicon plane.
    Number of clusters                                


    Cluster energy:

    Below, the energy distribution of clusters is shown for each detector (file) / plane (row) / cluster length (column). The upper limit of the cluster length used in reconstruction is 5.

    EHI:                                                         EHO:                                                      EVU:                                                       EVD:
            

    WHI:                                                       WHO:                                                      WVD:                                                      WVU:
            
     

    Cluster energy vs. strip

    For clusters consisting of one strip (length=1) it was possible to draw the distribution of energy collected by the strip as a function of the strip number.

    EHI:


    EHO:


    EVU:


    EVD:


    WHI:


    WHO:


    WVD:


    WVU:




    Below is the piece of code used to fill the histograms presented above:
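    // Comments added for readability (the code itself is unchanged below):
    // loop over the silicon planes of Roman Pot "i" (presumably the Roman Pot index)
    // for one coordinate and fill, per plane, the number-of-clusters, cluster-length
    // and cluster-energy (per cluster length) histograms, plus energy vs. strip number
    // for single-strip clusters.  "rps" (the Roman Pot data object), "Planes", "Pitch"
    // and the histogram arrays are assumed to be defined elsewhere in the macro (not shown here).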
    for(Int_t j=0; j<nOfPlanesInRpPerCoordinate; ++j){
      Int_t nClusters = rps->numberOfClusters(i,Planes[coordinate][j]);
      nOfClusters[i][Planes[coordinate][j]]->Fill(nClusters);
    
      if(nClusters < maxNumberOfClusterPerPlane)
        for(Int_t k=0; k < nClusters; ++k){
          Int_t lenCluster = rps->lengthCluster(i,Planes[coordinate][j],k);
          clusterLength[i][Planes[coordinate][j]]->Fill(lenCluster);
    
          if(lenCluster <= maxClusterLength && lenCluster>0){
            Int_t enCluster = rps->energyCluster(i,Planes[coordinate][j],k);
            clusterEnergy[i][Planes[coordinate][j]][lenCluster-1]->Fill(enCluster);
      
            if(lenCluster==1)
              clusterEnergy_vs_strip[i][Planes[coordinate][j]]->Fill(1e3*rps->positionCluster(i,Planes[coordinate][j],k)/Pitch[coordinate], enCluster);
          }
        }
    }
     

    Run-18 calibrations

    27 GeV Calibrations

    First Half (days up to 141)

    • TBD

    Second Half (days 142-168)

    • VPD calibration QA
    • BTOF alignment
    • BTOF T0
    • Comments:
      • BTOF slewing and Local-Z re-used from past full calibration (Run16)
      • calalgo=1 (startless mode, VPD is not used for the BTOF start time)









    SSD


    SSD group

    Hardware and Software status

    The link below points to a document detailing the current (April 2006) status of the SSD hardware and software.

    List of SSD experts for Run VI

    For Run 6, the SSD experts can be reached at these numbers:

    Lilian MARTIN : +33 2 51 85 84 68 (lab), +33 6 88 06 45 32 (cell), +33 2 40 63 48 72 (home)

    Jerome BAUDOT : +33 3 88 10 66 32 (lab), +33 6 49 19 74 42 (cell), +33 3 88 32 59 72 (home)

    Storage crates at Bldg 1101

    Photos taken 22-jun-2007 at Bldg 1101 of various containers and boxes for SSD

    Web Site

    These pages contain information on the SSD web site.

    Latest updates

    Information on the latest posts on the SSD web server

    documentation on the new ADC HV decoupling capacitances

    During the summer 2005 shutdown, the decoupling capacitances on the ADC boards were changed. Documents describing their specifications have been posted in a PDF file.

    It can also be reached from the main page top menu: Hardware->Electronics->Ladder, then "specs" in the "adc board" section.

    Major changes

    Up to February 2006, the SSD web site was hosted by the French computing center in Lyon. Its URL was http://star.in2p3.fr. The web server in Lyon has been modified, so we decided to migrate the SSD web site into the STAR web site. It is now located at: http://www.star.bnl.gov/STAR/ssd/.

    Some pages have not been transferred yet, such as the picture collection.

    End of the migration

    The SSD web server is now fully transferred from Lyon to BNL.
    The "phototheque", for instance, is now accessible.
    I have changed several pages, but some pages may still point to the old location.

    West side cooling hoses -- photos

    Look at the seriously kinked hoses (red arrow in one photo, red box in the other).
    These photos were taken with the end cap removed, but all else intact.
    Note that the kinks appear at the place where there is a bend and no outer support
    tubing.

    Conclusion: the inner tubing is too soft to be routed in this way -- it just folds.

    photos of SSD storage/shipping containers

    photos of containers in Bldg 1101 22-June-2007

    SVT

    The SVT Group pages are available on the old Web site.

    Slow Controls

    STAR Slow Control description and operations

     

    Archive Viewer

     

    In general, all archives are available on the starp.bnl.gov network only.
    There are several options available:
    1) web interface
    2) java client
    3) command line
    A manual is available at: http://ics-web1.sns.ornl.gov/archive/viewer/files/manual.pdf

    If you are trying to get to the archive viewer remotely, try the web interface first; if you cannot get all the features you are looking for, then try the Java client.
    If you are in the control room, try the Java client first; it has a lot of nice features.
    Below is a brief description of how to use the different viewers; for more details please see the manuals.

     

    If you have any comments or suggestions, please send them to me. -- Yury Gorbunov

    WEB INTERFACE

    Fig. 1 Point your browser to http://sc.starp.bnl.gov:8080/archiveviewer/
    Specify the location of the cgi script in the text field on the web page that appears: http://sc.starp.bnl.gov/archive/cgi/ArchiveDataServer.cgi.
    Push the "Connect" button to establish the connection to the server.
    In a few seconds you should see a screen similar to what is shown in Fig 2.

    Fig. 2 In the left menu pick "CURRENT_ARCHIVE" and in the "search string" text field specify the name of the PV (in the current example I'm looking for vme59-related PVs).
    Push the "Search" button to search through the archive. In a few seconds you will be redirected to something similar to what is shown in Fig 3.

    Fig. 3 Pick the PVs you'd like to see on a plot (in the example I picked the temperature of crate 59).
    Click the "Configure for Plot" radio button. Push the "Go" button to get to the next page. In a few seconds you will be redirected to a page similar to what is shown in Fig 4.

    Fig. 4 Here you can specify the time range, titles and the range of the "y" axis. In the example I'm plotting the temperature for the last 24 hours by specifying "-1d" as the starting date and "now" as the final date. Push the "Submit" button to get a plot. In a few seconds you will be redirected to a page similar to what is shown in Fig 5.

    Fig. 5 Final plot, in png format - ready to be printed!

    JAVA CLIENT

     

    Fig. 1 To start the client:
    log into sc.starp.bnl.gov
    type: ArchiveViewer &

    Fig. 2 Go to "File", then to "New Connection", and specify the location of the cgi script as shown in Figure 2. Push the "Ok" button.

    Fig. 3 In the menu located in the top left corner select "CURRENT_ARCHIVE" and press the "Search" button.

    Fig. 4 In the search window, type in the search pattern and select the "Glob Pattern" radio button. Press the "Go" button.
    From the list of found channels pick (highlight) the channel(s) you are looking for and push the "Add" and then "Ok" buttons.
    The search screen will disappear.

    Fig. 5 In the top right corner specify the time range and push the "Plot" button. In a few seconds you will see something similar to Figure 6.

    Fig. 6 Final plot, in png format - ready to be printed!

    COMMAND LINE

    Check this manual (page 43), which describes the Perl script:
    http://ics-web.sns.ornl.gov/kasemir/archiver/manual.pdf
    For example, to list all channels in the current archive: ArchiveDataClient.pl
    -u http://localhost/archive/cgi/ArchiveDataServer.cgi -k 1 -l
    ...
    Channel tpchv:SUB_RD_TC_1_8.H, 01/03/2006 11:29:58.390279188 -
    01/08/2008 02:04:57.788410644
    Channel tpchv:SUB_RD_TC_1_8.I, 01/03/2006 11:29:58.390279188 -
    01/08/2008 02:04:57.788410644
    Channel tpchv:SUB_RD_TC_1_8.J, 01/03/2006 11:29:58.390279188 -
    01/08/2008 02:04:57.788410644
    Channel tpchv:SUB_RD_TC_1_8.K, 01/03/2006 11:29:58.390279188 -
    01/08/2008 02:04:57.788410644
    ..
    You then pick a channel from the list and a time period, and print
    the values recorded in the specified time interval (see the example after
    the usage listing below). You can also access the help with:
    ArchiveDataClient.pl -h
    USAGE: ArchiveDataClient.pl [options] { channel names }

    Options:
    -v : Be verbose
    -u URL : Set the URL of the DataServer
    -i : Show server info
    -a : List archives (name, key, path)
    -k key : Specify archive key.
    -l : List channels
    -m pattern : ... that match a pattern
    -h how : 'how' number; retrieval method
    -s time : Start time MM/DD/YYYY HH:MM:SS.NNNNNNNNN
    -e time : End time MM/DD/YYYY HH:MM:SS.NNNNNNNNN
    -c count : Count
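
    For example, to print the values of one of the channels listed above over a one-day window, something like the following should work (a minimal sketch built from the options above; the URL, archive key and channel name are taken from the listing earlier on this page, and the date range is purely illustrative):

    ArchiveDataClient.pl -u http://localhost/archive/cgi/ArchiveDataServer.cgi -k 1 \
        -s "01/03/2006 00:00:00" -e "01/04/2006 00:00:00" tpchv:SUB_RD_TC_1_8.H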

    How to get PV name and alarm limits

     

    Suppose you would like to find the PV name of a DB record displayed in the GUI or in the alarm handler.

    The easiest way is to do it with the help of the GUI in which the corresponding PV is displayed.

     

    1) open the GUI

    2) right mouse click somewhere on the GUI

        a) a small white dot with the letters "PV" next to it will appear

    3) navigate the white dot to the corresponding displayed PV (the displayed PV is shown in the white circle) and click the left mouse button

     

    A screen similar to the one shown below will appear, with the PV name displayed

     

    4) copy the PV name (shown in the white circle)

    5) open a session on one of the slow controls PCs (sc, sc5, sc3)

    6) type:

    caget pv_name.LOW - for low yellow alarm state

    caget pv_name.LOLO - for low red alarm state

    caget pv_name.HIGH - for high yellow alarm state

    caget pv_name.HIHI - for high red alarm state

    you should get something similar to what is shown in the screen shot below

     

     

    Here is a small script which will read the alarm limits for a channel. Just save it into a file and do "chmod a+x <filename>". It will work on sc.starp.bnl.gov and sc5.starp.bnl.gov.

    #!/bin/bash
    echo "Please, enter channel name"
    read NAME
    echo "Channel name is :  $NAME"
    echo "current value :"
    caget $NAME
    echo "LOW (yellow alarm) :"
    caget $NAME.LOW
    echo "LOLO (red alarm) :"
    caget $NAME.LOLO
    echo "HIGH (yellow alarm) :"
    caget $NAME.HIGH
    echo "HIHI (red alarm) :"
    caget $NAME.HIHI
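
    To use it, save the script into a file, for example check_alarms.sh (the file name is arbitrary), make it executable and run it; it will prompt for the channel name:

    chmod a+x check_alarms.sh
    ./check_alarms.sh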

    How to operate HV of pp2pp system

    1) log into one of the Slow Controls computers (sc.starp.bnl.gov or sc5.starp.bnl.gov) under sysuser (the password is in the red folder in the control room)
            a) Don't forget X forwarding
    2) type: pp2pp and hit enter; a GUI will appear on the screen as shown below
     

     

     

    3) push "on" to power up the system or "off" to switch it off

    4) the config file can be found on sc5.starp.bnl.gov at
    /home/sysuser/oldepics/appDir/BBC/SAVE/PPHV.save
    The first line of the text file corresponds to RPEVU1, the second line to RPEVU2, etc.
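
    To inspect it with numbered lines (a trivial example; run on sc5.starp.bnl.gov as sysuser), so that line 1 is RPEVU1, line 2 is RPEVU2, and so on:

    cat -n /home/sysuser/oldepics/appDir/BBC/SAVE/PPHV.save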
     

    Good luck!

    Slow Controls Operations Manual

    STAR

    Slow Controls Operations Manual

     
     
     
     
     
     
     
     

    Creighton University

    version 7.0

    Dec 02, 2007
     
     
     
     
     
     

    This manual is supplemented by the figures in the

    STAR Slow Controls Training presentation.

     

    Slow Controls Operations Manual

    Telephone Numbers
    Line Commands
    Slow Controls Architecture
    Hardware Controls Workstations
    Color Coding of Data
    The Alarm Handler
    Starting and Running the Alarm Handler
    Viewing Archived Data
    Accessing Front End Processors
    Special VME, DAQ, Trigger and Magnet Commands



    Telephone Numbers (Nov 12, 2007)

    Slow Controls at STAR (631) 344-8242

     

     
     


    Yury Gorbunov

    BNL apartment 22B  (631) 344-1043

    at Creighton (402) 280-2208
     

    Michael Cherney

    cell phone (will ring at BNL) (402) 305-0238

    at Creighton (402) 280-3039
     

     
     

    Line Commands
    Open a terminal on sc3.starp and get
    [sc3.starp.bnl.gov]:~>

    From sc3.starp.bnl.gov there are many alias commands for bringing
    up medm screens (guis) and launching tools.

    The most important thing to be able to do as a shift crew member is to start the
    Alarm Handler (sometimes called alh or ALH):
     

    startalh   &   --   starts the Alarm Handler
    Note: using the P button you can access  almost every subsystem medm screen from the Alarm Handler

    Other useful aliases that can be used from sc3.starp. All give the same screens as those available from
    the Alarm Handler.
     

    vme_plat1      ---  vme power supplies for the first floor of the platform
    vme_plat2      ---  vme power supplies for the 2nd floor of the platform
    eemc_canbus    ---  vme power supplies for the EEMC
    emc_canbus     ---  vme power supplies for the EMC
    zdc            ---  ZDC HV controls
    upVPD          ---  upVPD HV controls
    bbchv          ---  BBC HV controls
    richscal       ---  rich scalers
    tpctop         ---  gets the tpc top level user interface
    trighv         ---  starts the trigger high voltage control program
    and many others; see .user_aliases on sc3.starp and/or sc.starp
     

    The second most important command you can do from the terminal is
    to open a serial connection to an IOC processor by opening a telnet session.
    Note: This can be done from any terminal session on a starp network computer.
    Only one session can be open at a time.
    If a session is already open the command will fail!

    telnet scserv xxxx

    connects to the serial port of the front end processor connected to

    port xxxx of the terminal server on the platform.

    A list of processors and port numbers is posted to the left of sc3.starp
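
    For example, to open the serial console of the Field Cage processor (a purely illustrative choice; port 9001 and the processor name vtpc4.starp are taken from the port list later in this document):

    telnet scserv 9001

    Remember to close the session when you are done, since only one connection at a time is allowed.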

     

    Archived data may be viewed at the web site: http://sc3.starp.bnl.gov/archive

    Slow Controls Architecture

    Front End Processors (located on the platform or in the DAQ room) carry out all of the control and monitoring functions. Sun Workstations display the data (processes on the workstations listen for broadcasts of parameter values and broadcast user requests to the front end processors over Ethernet).

    The STAR Hardware Controls system sets, monitors and controls all subsystems. In addition, Hardware Controls can generate and display alarms and warnings for each subsystem. A real-time VxWorks environment is utilized for this purpose and a controls software package, EPICS, is used to provide a common interface to all subsystems. The baseline STAR detector consists of time projection chamber subsystems (anode high voltage, cathode field cage, gating grid, front-end electronics power supply, HDLC link, gas, laser, interlocks, and VME-crate control), mechanisms for the exchange of information with the STAR trigger and online, as well as external magnet and accelerator systems. Approximately 16,000 parameters governing experiment operation are currently controlled and monitored. Magnet status and accelerator information is obtained from the accelerator controls database using a Control DEVice (CDEV) client/server interface. The set of databases for detector configuration is maintained by Hardware Controls. STAR Hardware Controls maintains the system-wide control of the STAR detector with its EPICS databases at the subsystem level. Databases are constructed on the subsystem level, such that they can be used for subsystem testing and remain easily included in a larger detector configuration. An HDLC link provides the field bus used by the experiment for controls and an alternate data path.

    STAR hardware is required to be designed failsafe. All required personnel and property protection is mandated to be independent of STAR Hardware Controls. This does not mean that you cannot cause damage to the detector using the slow controls interface. If you are in doubt about an action contact an expert.

    EPICS was selected as the foundation for the STAR control software environment because it incorporates a standard means of sharing information and services, graphical display and control interfaces, and a robust, real-time operating system. At STAR, EPICS is run on Sun workstations connected to VME processors running the VxWorks operating system. The components of EPICS used at STAR are the Motif Editor and Display Manager (MEDM), the Graphics Database Configuration Tool (GDCT), the sequencer, the alarm handler, and the data archiver.

    EPICS is a set of software tools and applications jointly developed by Argonne National Laboratory and Los Alamos National Laboratory for the purpose of controlling Particle Accelerators and Large Experiments. Present and future development is being done cooperatively by Argonne (ANL), Los Alamos National Laboratory (LANL), Lawrence Berkeley Laboratory (LBL), the Jefferson National Laboratory (CEBAF), the Spallation Neutron Source Collaboration , BESSY (Berliner Elektronenspeicherring-Gesellschaft für Synchrotronstrahlung m.b.H.) and DESY (Deutsches Elektronen-Synchrotron).

    EPICS provides: interfaces to instrumentation from data acquisition, supervisory control and steady-state control through a table entry database, an operator interface to all control system parameters through interactive displays, data logging through a table entry archiving file, alarm management through a table entry alarm file, sequential control through a state definition language with convenient database interface routines, channel access routines for interfacing the control system data to data analysis, and other functions not provided in the control system. The basic components needed are the Operator Interface (OPI), Input Output Controller (IOC), and a Local Area Network, which allows the OPI and IOC to communicate.

    Hardware Controls Workstations

    Slow Controls workstations are located to the left of the Level 3 Workstation. The main controls workstation (sc3.starp.bnl.gov) and a second workstation (sc4.starp.bnl.gov) are positioned next to each other with sc4 to the right of sc3. The system should be running on sc3. Access from other computers is controlled. (Telnet is possible from sc4 and certain online computers.) The current username and password for shift operators are posted on the sc3.starp.bnl.gov monitor. These may be updated. Please check each time you are on shift and do not use old login names and passwords.

    Shutting down the workstation or restarting the workstation should have no effect on the front end processors. The user interface is independent of the front end controllers. Instructions for accessing these processors are included later in this document.

    Color Coding of Data

    Green entries identify data which is believed to be correct and within bounds.

    Yellow indicates entries that are slightly outside of bounds. Corrective action should be taken or logbook entries should be made.

    Red tags data that is beyond normal operating limits. Corrective action should be taken or an expert should be contacted. A logbook entry should be made.

    White entries indicate channels where data is missing or where the validity is suspect. An expert should be contacted. A logbook entry should be made.

    The Alarm Handler

    The Alarm Handler is used to identify alarms, study the symptoms, and contact an expert. It is an interactive graphical application used primarily to display and monitor EPICS database alarm states. It serves as an interface between an operator and the EPICS database and it communicates with the database using the same mechanism (channel access) as the controls screens. The user interface for the Alarm Handler contains a hierarchical display of an alarm configuration structure allowing both high level and detailed views of the alarm configuration structure in one window.

    Purpose of the Alarm Handler

    The Alarm Handler monitors alarm conditions of EPICS database records. The primary responsibilities of the Alarm Handler are to: bring alarms to the operator's attention, provide the operator guidance for handling of specific alarms, allow the operator to globally acknowledge alarms, provide a graphical view of current database alarms, and provide for logging alarms and display of the logged alarm history.

    Alarm Handler Terminology

    An alarm is an unexpected change of state for an EPICS database parameter. Examples of alarm conditions are: deviations from tolerance band, software or hardware errors, and loss of communication to hardware or linked parameters

    When a change in a parameter is detected, a message is sent to each process monitoring the parameter. The Alarm Handler may be one of these processes.

    There are two parts to an alarm: the alarm status and the severity of that alarm status. Alarm status and severity are set and checked whenever a parameter is processed. The alarm severity can take one of the following four values: no alarm, minor alarm, major alarm, or invalid data alarm. A correspondence between these alarm severities and the alarm state is preprogrammed in the EPICS database. The relationship is loaded when the front end processor is booted. The EPICS database specifies the alarm state of each parameter. The state is defined on the basis of the value of the parameter (being within certain bounds) and on the proper completion of data collection and processing cycles.

    The Alarm Handler displays alarms for an arbitrary set of channels. Each Alarm Channel refers to a specific EPICS database parameter. The Alarm Handler provides a grouping mechanism so that related channels can be grouped together. An Alarm Group consists of a named set of lower-level groups and/or parameters. An Alarm Channel will be considered in an error state if there is a loss of communication or communication error between the Alarm Handler and front end processor.
     
     
     
     

    Alarm Handler Windows

    The Alarm Handler display consists of two types of windows, a Runtime Window and a Main Window. While the Alarm Handler is executing, the Runtime Window is always displayed.

    The Runtime Window is a small icon-like window that contains a single button containing the name of the alarm configuration main Alarm Group. The color of this button is used to show the highest alarm severity of any outstanding alarms.

    White: Data Invalid Alarm

    Red: Major Alarm

    Yellow: Minor Alarm

    Background color: No Alarm

    Beeping and blinking of the button is used to show the presence of unacknowledged alarms. Pressing the Runtime Window button will open the Alarm Handler Main Window or, if already open, bring the MainWindow to the top of the window stack. The Close or Quit item on the window manager menu allows the user to exit the Alarm Handler.

    The Alarm Handler Main Window is divided into three parts: a menu bar, an alarm configuration display area, and a message area. On the menu bar, there are selections for pull-down menu items that perform all the functions of the Alarm Handler. The alarm configuration display area is divided into two major parts: an alarm configuration tree structure display and an Alarm Group content display. The current alarm configuration tree structure appears in the first area, and a list of the contents of the currently selected Alarm Group from the alarm configuration tree structure appears in the second area. Color is used to show alarm severity. A single character severity code is also provided to assist operators who may have difficulty distinguishing colors. The message area can show the current execution mode, the current execution state, and the name of the current configuration file. It also contains buttons to silence alarms, and explanatory descriptions of the alarm configuration summary abbreviation codes and status data which appear in the group summary and channel status lines.

    Starting and Running the Alarm Handler

    Invoke the Alarm Handler by executing the command: startalh

    The alarm handler should be operated at the left controls workstation (sc3). The Runtime Window appears. There will be beeping and the button will blink to indicate the presence of unacknowledged alarms.

    Some users prefer using tpctop, a generic user interface. Normally the monitoring functions found in tpctop are carried out at the subsystem workstations and you will not need to make any changes from this screen. The same control screens available in tpctop are in general accessible from the Alarm Handler.

    To get a top level control screen, at the UNIX shell prompt type: tpctop
    Click on the small square next to the name of the subsystem that you want to examine.

    From this screen you can make changes to the subsystems even if there is another similar interface open at the subsystem.
     
     

    Click on the Runtime Window to bring up the Main Window.

    You can then relocate the Runtime Window.

    Click in a box in the lower right corner of the Main Window to silence beeping.

    Silence Forever will stop the beeping forever. Silence Current will stop the beeping until you get another alarm. Silence Current is probably the sensible choice if you are watching the screen. Alarms will continue to be updated on the display regardless of your selection. Be sure Silence Forever is not clicked when you stop watching the screen.

    To acknowledge an alarm, click on the left-most box.

    To get the name of the expert responsible for this subsystem,

    Click on "G" for Guidance.

    To see the control screen for this subsystem, click on "P" for Process.

    There are two ways to see which channels are involved in a Group.

    Clicking on the Group name lists the channels in the frame on the right.

    Clicking on the right-pointing triangle lists the channels in the left frame in a tree.

    To collapse the tree back to the original view, click the triangle again.

    You may want to leave this window open at all times when the experiment is running.

    To Exit, click on "Close" on either the file menu on the menu bar or on the Runtime Window. This will bring up the Alarm Handler Exit dialog window.

    Make sure you kill any icons that are no longer in use when you exit.

    Enabling and Disabling Alarm Systems

    Individual subsystems may be partially or totally disabled using the control screen obtained by clicking on the process button of the DISABLE-ALARMS subsystem.

    Alarm Handler Tree Structure

    The alarm handler configures the display of parameters into a tree structure. The detector is divided into groups (subsystems) which are divided into subgroups that are made up of individual parameters. Branches of the tree (alarm groups) are displayed as separate Group Summary Lines. Buttons on the tree structure display allow the user to expand or deflate the alarm display.

    Group Summary Line

    This display line summarizes the status of all alarms for the group (normally a detector subsystem or part of a subsystem) named in this line and all lower level groups (smaller divisions of the subsystem or individual channels). The group summary display consists of the following items:

    Acknowledgment Button

    The button is activated only if an unacknowledged alarm is present for the group and the alarm handler is active. This button is color-coded and labeled with a corresponding letter representing the highest severity of unacknowledged alarm as follows:

    White: labeled with V indicates an invalid data alarm

    Red: labeled with R indicates a major alarm

    Yellow: labeled with Y indicates a minor alarm

    Background color indicates no alarm present

    This button describes the highest severity of unacknowledged alarms for this group and all associated subgroups. Clicking on this button while alarm handler is active will send alarm acknowledgments to all alarm channels associated with the group. It has the same effect as individually acknowledging each subgroup channel in an unacknowledged alarm state.

    Single Character Severity Code

    This code shows the highest level alarm whether or not the alarm has been acknowledged. (The acknowledge button only identifies the highest level unacknowledged alarm.) A one-character severity display code is present only if at least one channel associated with the group is in alarm or in an error state. It is color-coded and represents the highest severity outstanding alarm following the same coding system as the acknowledge button. The code shows the highest severity current alarm for this group and all its subgroups.

    Group Name Selection Button

    This button is used to look at the next level down the tree. The alarm handler allows the operator to choose any alarm group or channel which is lower on the tree by clicking on this button. The result is to display one level of the contents associated with the selected subgroup. This subgroup also becomes the currently selected group (your location in the tree). If this name is associated with a collection of channels, the button has an associated right-pointing triangle so that an operator can distinguish paths to channels (no triangle - end of tree) from paths to subgroups (triangle - additional channels below).

    Alarm Configuration Summary

    The Alarm Configuration Summary is a five character display. Each character in the display is a "-" or the character identifying special treatment of this alarm from the following list:

    C: cancelled - alarms will not be sent to the alarm handler

    D: disabled - alarms will be ignored by the alarm handler

    A: alarms are not required to be acknowledged

    T: transient alarms are not required to be acknowledged

    L: alarms are not logged

    If the current configuration for any parameter in this group or any subgroup is configured to be handled in a special way, the corresponding character is displayed, otherwise the character "-" is displayed. For example, the string "--A--" means that at least one parameter is configured not to require the acknowledgment of alarms and that no parameters have any other special configurations.

    Alarm Severity Summary

    The alarm severity summary allows the user to assess how widespread a particular alarm is. The display shows the total number of parameters with a particular alarm severity for this group and all its subgroups. This summary consists of a set of four entries: (data alarm, major alarm, minor alarm, no alarm)

    Each field value indicates the total number of parameters with the specified severity in the associated group and all its subgroups.
     
     

    Viewing Archived Data

    The archived data can be viewed from any web browser. To do this in the control room, open a browser on sc by typing the shell command: Netscape

    Go to: http://sc3.starp.bnl.gov/archive. This will work from any computer. (Experts can check from home.) Do not view Netscape from the same workstation where you are running the alarm handler or the control display program (tpctop). (Opening screens in the wrong order will mess up your display. It is easier to keep these programs separate to avoid this conflict.) At the website click on the slow controls archive button. Also at the home page are links to the STAR website and the RHIC website where the icons lead directly into those websites. The slow controls button will lead into the main listing of all the database variables listed in the archive. At the main listing choose the variables of the archive which are needed. From there it either leads directly into that information or into a subdirectory of more variables. If it goes into a subdirectory, continue to narrow the field down until it leads into a list of the information.

    This list of information will contain a list of all the different pieces of data contained in the particular directory and the times when they were first and last archived. First, in the box labeled "Names", take out all the variables that are not needed. Next set the start time and end time of the data that is to be reviewed. From there two options can be performed: a graph or a list of data. To get the graph click the plot button, and to get the list of data click the get button. They can only work for ten or fewer variables, so if more are needed the graphs or spreadsheets will need to be generated more than once for different sets of variables.

    Accessing Front End Processors

    In the case where a front end processor has a fault it may need to be examined or rebooted. This should only be done on the advice of an expert. This is accomplished by requesting an independent serial connection to a processor located on the platform obtained by typing: telnet scserv xxxx
    at a UNIX prompt where xxxx is the port address obtained from the expert. (The command "tip a" typed on sc3.starp will do the same thing for the processor in the DAQ room.) The processor can now be tested using VxWorks commands provided by an expert. If you are told to reboot, typing [control]x should initiate the process. Rebooting does not normally affect the front end electronics. It will not shut down the system. The system can only be shut down by taking appropriate action on the individual control screens.

    Special VME, DAQ, Trigger

    and Magnet Commands

    VME crates should not be turned on and off except at the advice of an expert. A better option may be to reset the VME bus. This can be done from the controls display. These processes can be carried out remotely for crates on the platform and in the DAQ room using the alarm handler. To obtain a controls screen, click on the program button (P) after the group name of the VME crates (PLATFORM-VME for the platform, or DAQ-VME for the DAQ room). The control program for the VME status for crates in the DAQ room can also be obtained by typing: daq_crate at the UNIX shell prompt. The crate control program for VME crates on the platform can also be accessed by selecting the VME option using tpctop.

    The trigger high voltage control program can be accessed by typing: trighv at the UNIX shell prompt.

    To restart the link to RHIC and STAR magnet data, type the command: cdev at the Unix shell prompt to get to the correct directory, followed by the commands:

    source new.cshrc and MagnetData at the Unix shell prompt.
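
    Putting the above together, the restart sequence looks roughly like this (a sketch of the commands named above, typed one after another at the Unix shell prompt; cdev is the alias that changes to the CDEV directory):

    cdev
    source new.cshrc
    MagnetData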



     

    #   Description                              Port#     Crate  Location  Processor         IP address

    ________________________________________________________________________________

    1.  CANbus (STAR) 1st & 2nd Floor            9003      51     2A9       grant.starp       130.199.61.103
    2.  CANbus (BARREL) Barrel crates            9040      100    2C4-1     bemccan.starp     130.199.60.59
    3.  CANbus (EEMC) EEMC/QT/West PT            9020      99     2C4-1     vtpc1.starp       130.199.60.189
    4.  Field Cage                               9001      56     2A4       vtpc4.starp       130.199.60.192
    5.  Gated Grid                               9002      54     2A6       vtpc3.starp       130.199.60.191
    6.  TPC FEE                                  9004      58     2B5       vtpc2.starp       130.199.60.190
    7.  Cathode HV                               9005      57     2A3       cath.starp        130.199.60.162
    8.  Inner Anode HV                           9006      52     2A7       vtpc7.starp       130.199.61.78
    9.  BBC HV, ZDCsmd, and upVPD                9010      77     1A7-1     bdb.starp         130.199.61.218
    10. Ground Plane Pulser                      9011      57     2A3       vsc2.starp        130.199.60.217
    11. Interlock / TPC Temperature              9012      52     2A7       epics2.starp      130.199.60.149
    12. Outer Anode HV                           9013      59     2A6       vtpc5.starp       130.199.60.193
    13. Platform Hygrometer / TPC Gas            9015      58     2B5       hdlc.starp        130.199.60.161
    14. Trigger HV / ZDChv programs              9021      63     1A6       cdb.starp         130.199.60.40
    15. SSD                                      9026      79     1C6       sdvmesc.starp     130.199.60.120
    16. SVT                                      not used                   svtmonitor.starp  130.199.61.50
    17. FTPC Slow Controls                       9033      71     1B5-1     ftpc.starp        130.199.61.83
    18. EMC TDC & Slow Controls                  9039      80     2C4-2     creighton5.starp  130.199.60.229
    19. DAQ temp & humidity & gain               DAQ room         DC2       burton.starp      130.199.61.104
    20. CDEV Scalars and Magnet                  DAQ room         DC3-2     vsc1.starp        130.199.60.188
    21. Autoramp anode & cathode & testbits      DAQ room         DC2-1     stargate.starp    130.199.61.48
    22. TOF_Gas program                          DAQ room         DC3-3     taylor.starp      130.199.60.6
    23. CANbus iowritest program                 DAQ room         DC3-1     tutor.starp       130.199.60.46
        (needs to be rebooted daily)
    24. DAQ Hygrometer & GID (PC in DAQ room)    DAQ room         DC3-1     medm.starp        130.199.60.49

     

     

    TPC Lecroy serial session for inner sectors           Port 9037

    TPC Lecroy serial session for outer sectors           Port 9038

    FTPC Lecroy serial session                            Port 9023

    SVT?? Lecroy serial session                           Port 9034

    SMD?? Lecroy serial session                           Port 9035

     

    REMOTE POWER SUPPLIES --- require a telnet connection

    rps1.starp.bnl.gov           130.199.60.26    2A4
    rps2.starp.bnl.gov           130.199.60.205   2A3
    rps3.starp.bnl.gov           130.199.60.206   2A6
    bemcpower.starp.bnl.gov      130.199.60.54    2C4
    eemccanpower.starp.bnl.gov   130.199.60.90    1C2
    pmdrps2.trg.bnl.local        172.16.128.208   1A3
    scdaqpower.starp.bnl.gov     130.199.60.95

     

     

    computone servers

     

    scserv.starp.bnl.gov  130.199.60.167 

    Name:   scserv2.starp.bnl.gov  Address: 130.199.60.96 

     

     

    scserv:notes

    telnet scserv 

    then type help

    There is also a  web interface http://scserv.starp.bnl.gov/

     

     

    Name:   scserv2.starp.bnl.gov

    Address: 130.199.60.96

    username and password not assigned yet

    X-Windows Remote Access to BNL with NX

    This document is based on http://request.nsls.bnl.gov/freenx/

    Yury Gorbunov

    22 February 2008

    Comments? gorbunov[attt]rcf_d_rhic_d_bnl_d_gov

    X-Windows Remote Access to BNL with NX

    This note is about how to use NX technology to gain remote access with full X-Windows support. Typically, one can use the nxclient to connect to the office/beamline Linux box (nxserver), with full X-Windows functionality (to access the BNL intranet, etc., almost like you are sitting in front of the remote BNL computer). The key advantage of using NX, as compared to others (for example, "ssh -X -C"), is that NX uses very efficient compression to achieve very fast response while using very low bandwidth. In my experience, while connecting to my office Linux box, with a cable modem connection, there is no noticeable delay at all, and the bandwidth usage is about or below 10KB/sec most of the time, while doing web browsing on the remote desktop with Firefox.  It's reported that one could do this through a modem line.

    Like most network applications, NX comprises a Client and a Server. I have already set up NX servers for many of the NSLS beamline Linux computers. To connect to the nxserver, the only thing you'll need is to download a free (as in beer) client (Linux/Windows/Mac/Solaris) and follow the procedures below to configure it.  From outside the BNL network (at home), there is a special procedure to configure port forwarding in PuTTY (Windows) or run a special "ssh" command (Linux), to go through the BNL SSH gateway.

    NX technology was developed at NoMachine ( www.nomachine.com ), who licensed the core technology (the core library) under the GPL. The community effort (free version) of NX is FreeNX. We use the NoMachine client (free download) and the FreeNX server. The FreeNX client is command-line driven; it's not as easy to use at the moment.

     

    Client installation

     

    Go to www.nomachine.com and download the appropriate version of the client.

    Also one can download the 1.5 versions of clients (a snapshot from Jan 06) here locally. (Accessible only inside BNL)
    Or I have a local copy of the clients on sc5.starp.bnl.gov

    Linux client

    sysuser@sc5.starp.bnl.gov:/home/sysuser/Gorbunov/arch/nxclient-1.5.0-141.i386.tar.gz

    Windows Client

     

    sysuser@sc5.starp.bnl.gov:/home/sysuser/Gorbunov/arch/Windows/nxclient-1.5.0-138.exe

      plus fonts

    rpms

     

    Mandrake

    /home/sysuser/Gorbunov/arch/rpm_mandrake/nxclient-1.5.0-141.i386.rpm

    redhat

    /home/sysuser/Gorbunov/arch/nxclient-1.5.0-141.i386.rpm

    and two files for debian

    /home/sysuser/Gorbunov/arch/deb_sarge/nxclient_1.5.0-141_i386.deb

    /home/sysuser/Gorbunov/arch/deb_woody/nxclient_1.5.0-141_i386.deb

     

    I guess you know what to do with rpm files and deb files but I have to explain what to do with *.tar.gz

    cd /usr

    tar -xzvf /where/you/have/nxclient-1.5.0-141.i386.tar.gz

    so you are ready to start it!

    The first thing to do is to establish port forwarding.

     

    Normally we have to go through the BNL ssh gateway (rssh.rhic.bnl.gov) to gain access to BNL computers, in a two step process, first ssh to rssh.rhic.bnl.gov, then from there "ssh BNL_host".  To make nxclient connect directly to the Linux server inside BNL firewall with SSH, we'll need to take advantage of the SSH port forwarding feature.

    For Linux, this used to be just a single ssh command,

    ssh -L 7777:sc5.starp.bnl.gov:22 your_username@rssh.rhic.bnl.gov

    but this is not exactly true any more: since stargw has been added, please use the set of commands below instead (if I find something simpler I will update the page).

    ssh -L 7777:localhost:7777 user_name@rssh.rhic.bnl.gov -A

    ssh -L 7777:localhost:7777 user_name@stargw.starp.bnl.gov -A

    ssh -L 7777:localhost:22 user_name@sc5.starp.bnl.gov

    Login to rssh.rhic.bnl.gov with the command above (don't forget to put a real user name) and leave the command window open.

     

    There is also a solution for Windows. It's a bit of a convoluted way to do it, but so far I don't have anything more elegant.

     

    First you should have pageant.exe running with your ssh key loaded.

    Next, go to the Start menu -> click Run -> type cmd. A window should appear, in which you should type:

    putty.exe -L 7777:localhost:7777 user_name@rssh.rhic.bnl.gov -A

    As a result a standard PuTTY terminal window will appear; in this window type:

    ssh -L 7777:localhost:7777 user_name@stargw.starp.bnl.gov -A

    ssh -L 7777:localhost:22 user_name@sc5.starp.bnl.gov

    and then start NX for Windows as usual.

     

    I will update this page if I find a better solution.

     

     

     

     

     

     

     

    Client Configuration

    Start the NX client with the command /usr/NX/bin/nxclient

    The window below will appear on the screen.

     

     

    Picture 1

    Login Screen

    Specify the username: sysuser

    Password: the sysuser password for sc5.starp.bnl.gov

    Name the session: whatever you like

    Now it's time to configure the NX client.  Click "Configure" on the NX client logon screen, and you'll get the configuration screen, as shown below.

     

     

    Picture 2

    Configuration Screen

    Now specify the host name: localhost

    port: 7777

    Desktop: Unix, with the GNOME X window manager

    Display

    I used 800x600, but that screen is a bit too small I think; there are several options available and you can try them to see what fits you best.

    Now we have to configure the public key!

    Download it from sc5.starp.bnl.gov to your computer:

    sc5.starp.bnl.gov:/home/sysuser/Gorbunov/arch/client.id_dsa.key

    or find it on sc5 at /etc/nxserver/client.id_dsa.key

    Click the “Key” button in the configuration screen.

    The screen below will appear

     

     

    Picture 3

    Public Key Specification Screen

    Click “Import”, navigate to where you stored the public key, select the file and click “Ok”,

    then click “Save”.

    You will get back to the configure menu; click “Save” and “Ok” and then click “Login” on the login menu.

    After a few seconds a new screen appears, as shown below:

     

     

    We need a PuTTY client and an X Windows server (Exceed or similar).

    First click on the PuTTY icon to configure it. Host Name = rssh.rhic.bnl.gov, port 22, as shown.

     


    Next click on SSH -> X11 on the left and enable X11 forwarding, to localhost:0

     

    Now click on Tunnels and fill in as Source port the local port number you chose (7777 in this example; this should be the same number that you input in the nxclient configure screen). Click on Add. Go back to the Session tab to save the session, so that you don't have to go through all this configuring again every time you use it. Log in to your rssh.rhic.bnl.gov account with your userid and password. Leave the PuTTY window open.

     

    TPC

    ALTRO Settings

    I toyed with Yuri's TPC simulation curve and I show you just one result:


    Blue line (before filter):
    --------------------------

    Using Yuri's curve and multiplying it to
    suit my needs I added two hits together.

    The first peak simulates a highly ionizing track.
    Note how high the peak ADC is.

    Second peak simulates a MIP (obtained by
    dividing the first by 10, moving 11 timebins
    to the right and adding up).



    Red line (after filter):
    -----------------------

    You can see the baseline restoration with a minimal
    loss of signal!

    Not bad.

    BTW, the ALTRO parameters were:

    K1 12807
    K2 50251
    K3 64797
    L1 65145
    L2 54317
    L3 8537

           -- Tonko

     

    FCF

     Please attach any information about the Fast Cluster Finder (FCF) here.

    Cluster flags

    TPC forum post 425:

    I am sending a short description of the
    TPX cluster finder flags. This is more an Offline issue
    but I guess this is a better group since it is a bit
    technical.
    
    1) The flags are obtained in the same way other
    variables such as timebin and pad are obtained.
    I understand that this is not exported to StHit *shrug*.
    
    See i.e. StRoot/RTS/src/RTS_EXAMPLE/rts_example.C
    
    2) They are defined in:
    
    StRoot/RTS/src/DAQ_TPX/tpxFCF.h
    
    The ones pertaining to Offline are:
    
    FCF_ONEPAD           This cluster only had 1 pad.
    Generally, this cluster should be ignored unless
    you are interested in the prompt hits where this
    might be valid. The pad resolution is poor, naturally.
    
    FCF_MERGED           This is a deconvoluted cluster.
    The position and charge have far larger errors
    than normal clusters.
    
    FCF_BIG_CHARGE    The charge was larger than 0x7FFF so
    the charge precision is lost. The value is OK but the precision is
    1024 ADC counts. Good for tracking, not good for dE/dx.
    
    FCF_BROKEN_EDGE       This is the famous row8 cluster.
    Flag will disappear from valid clusters
    once I have the afterburner running.
    
    FCF_DEAD_EDGE          Garbage and should be IGNORED!
    This cluster touches either a bad pad or an end
    of row or is somehow suspect. I need this flag for internal debugging
    but the users should IGNORE those clusters!
    
            -- Tonko
    


    TPC forum post 426:

    I committed the "padrow 8" afterburner to the DAQ_TPX
    CVS directory. If all runs well, you should see no more
    peaks on padrow 8. The afterburner runs during the
    DAQ_READER unpacking.
    
    However, please pay attention to the cluster finder flags
    which I mentioned in an earlier email. Specifically:
    
    "FCF_DEAD_EDGE          Garbage and should be IGNORED!
    This cluster touches either a bad pad or an end
    of row or is somehow suspect. I need this flag for internal debugging
    but the users should IGNORE those clusters!"
    
             -- Tonko
    

    Field Cage Shorts

    This page is for information regarding shorts or current anomalies in the TPC field cages.

    Excess current seen in 2006

    The attached powerpoint file from Blair has plots of the excess current seen in the IFC East for 2006.

    Modeled distortions

    Modeling the Distortion

    Using StMagUtilities, Jim Thomas and I were able to compare models of the distortions caused by shorts at specific rings in the IFC with the laser data. First, I'll have to say that I was wrong in my Observed laser distortions post: the distortion to laser tracks does not have the largest slope at the point where the short is. Instead, it has a maximum at that point! The reason is that the z-component of the electric field due to the distortion (without a compensating resistor) changes sign at the location of the short. So ExB also changes direction, and the TPC hits are distorted in one rPhi direction on the endcap side of the short, and in the opposite rPhi direction closer to the central membrane.

    This can be seen in the following plot, where I again show the distortion to laser tracks at a radius of 60cm (approximately the first TPC padrow) versus Z in the east TPC using distorted run 7076029 minus undistorted run 7061100 as red data points. Overlaid are curves for the same measure from models of a half-resistor short (actually, a 1.14 MOhm short as determined by the excess current of ~240+/-10nA [a full 2MOhm short equates to 420nA difference]) located at rings 9.5, 10.5, ..., 169.5, 179.5 (there are only 182 rings).
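
    (As an aside on the arithmetic: since the excess current scales essentially linearly with the missing resistance for a single shorted resistor, the quoted numbers imply R_short ≈ (240 nA / 420 nA) x 2 MOhm ≈ 1.14 MOhm. The 420 nA figure itself is consistent with a simple estimate: assuming a nominal cathode voltage of roughly 28 kV across a chain of about 182 x 2 MOhm ≈ 364 MOhm, the nominal current is ~77 uA, and removing 2 MOhm raises it by roughly V·dR/R^2 ≈ 0.42 uA. The 28 kV value is the nominal setting assumed here, not a number taken from this page.)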

    [note: earlier plots I have shown included laser data at Z = -55cm, but I've found that the laser tracks there weren't of sufficient quality to use; I've also tried to mask off places where lasers cross over each other]

    The above plot points towards a short which is located somewhere among rings 165-180 (Z < -190cm). As the previous years' shorted rings were rings 169 and 170 ( = short at ring 169.5), it seems highly likely that the present short is in the same place. More detail can be seen by looking at the actual laser hits. The first listed attached file shows the laser hits as a function of radius for lasers at several locations in Z. The dark blue line is a simple second order polynomial fit I used to obtain the magnitude of the distortion at radius 60cm, which I used in the above plot. The magenta line is the model of the half-resistor short at ring 169.5, and the light blue line is the same for ring 179.5 (the bottom two curves on the above plot). Either curve seems to match the radial dependence fairly well.

    Further refinement can be achieved by modeling the exact resistor chain. We have a permanent short at ring 169.5 (rings 169 and 170 have been tied together), and have replaced the two 2MOhm resistors between 168-169 and 170-171 with two 3MOhm resistors (see the attached photo of the repair, with arrows pointing to candidate locations for shorts via drops of silver epoxy). So it is more likely that we have a 1.14MOhm short on one of these two 3 MOhm resistors. The three curves in this next plot are:

    • red: 1.14MOhm short on a 3MOhm resistor at 168.5, full short at 169.5, 3MOhm resistor at 170.5
    • green: 1.14MOhm short on a 2MOhm resistor at 169.5, normal 2.0MOhm resistors at 168.5 and 170.5
    • blue: 3MOhm resistor at 168.5, full short at 169.5, 1.14MOhm short on a 3MOhm resistor at 170.5
    The difference between the curves does not come from the fact that we have treated the 3MOhm resistors properly (that difference is less than 5 microns in distortions, and only in a small region near the short, shown here), but rather from the movement of the effective short from location 168.5 to 169.5 to 170.5. From a visual inspection (not a fit), it appears that the blue curve is the best match to the data:


    We can also take a look at the data with the resistor in. Here is the same plot as before with a 1.14MOhm short at the same locations, but with an additional compensating resistor of 1.0MOhm. The fact that all the data points are below zero points again towards a short near the very end of the resistor chain, preferring a location of perhaps 177.5 over shorts near ring 170. These plots do not include the use of the 3MOhm resistors, but that difference is below the resolution presented here.

    Zoom in with finer granularity between rings (every other ring short shown):

    The second listed attached file shows the laser hits as a function of radius for lasers at several locations in Z for the case of the resistor in, again with magenta and blue curves for the model with shorts at ring 169.5 and 179.5 respectively.


    Applying the Correction

    I tried running reconstruction on the lasers using the distortion corrections for the 1.14MOhm short at three locations: 170.5 and 171.5 (two possible spots indicated in Alexei's repair photo), and 175.5 (closer to what the with-resistor data pointed to). The results are in the following plots. The conclusion is that the 175.5 location seems to do pretty well at correcting the data, slightly better than the 170.5 and 171.5 locations, for both with and without compensating resistor. For this reason (the laser data), we will proceed with FastOffline using a short at 175.5, even though we have no strong reasons outside the laser data to suspect that the short is anywhere other than the rings 168-172 area where the fix was made.

    (plots: laser reconstruction with the short modeled at 170.5, 171.5, and 175.5)


    Gene Van Buren
    gene@bnl.gov

    Choosing an external resistance for an IFC short

    It is clear that when a field cage short is close to the endcap, it is best to add an external resistance to compensate for the resistance missing due to the short. That restores the current along the length of the resistor chain and disrupts the potential only at the last (outermost) rings instead of along the full length. A short near the central membrane benefits less from this: restoring the proper current does restore the potential drop between each ring, but it leaves almost all rings at a potential offset from the intended potential, essentially tilting the E field over nearly the whole volume.

    But the question then becomes whether we can decide on the best external resistance for minimizing the distortion, in line with the principle that the best distortion is the one which requires the least correction, in case we're not quite correcting it accurately. To answer that, the distortion modeling was run with a variety of locations for an IFC west 2 MOhm single-resistor short, and a variety of external resistances. The code to run this modeling has been attached as a tar file to this Drupal page in case there is interest in re-running it (e.g. for an OFC short).
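
    As a rough illustration of what such modeling does (this is a sketch, not the attached code), the following computes ring potentials for an idealized field cage chain of 2 MOhm resistors with one resistor fully shorted and an external resistance appended at the grounded end. The ring count, per-ring resistance, and cathode voltage are nominal values assumed here; the distortion maps come from comparing such potentials to the nominal (unshorted, uncompensated) chain.

    #include <cstdio>
    #include <vector>

    // Idealized chain: nRings resistors of rRing [MOhm] from the cathode potential
    // down to ground, with the resistor at 'shortedRing' shorted out and an
    // external resistance rExt [MOhm] added at the grounded end of the chain.
    std::vector<double> ringPotentials(double vCathode, int nRings, double rRing,
                                       int shortedRing, double rExt) {
      double rTotal  = nRings * rRing + rExt - (shortedRing > 0 ? rRing : 0.0);
      double current = vCathode / rTotal;                 // [uA] for V and MOhm
      std::vector<double> v(nRings + 1);
      v[0] = vCathode;                                    // central membrane end
      for (int i = 1; i <= nRings; ++i) {
        double r = (i == shortedRing) ? 0.0 : rRing;      // shorted resistor drops nothing
        v[i] = v[i - 1] - current * r;
      }
      return v;                                           // v[nRings] = current * rExt
    }

    int main() {
      // e.g. a single-resistor short at ring 160 compensated by ~1.05 MOhm externally
      std::vector<double> shorted = ringPotentials(27960., 182, 2.0, 160, 1.05);
      std::vector<double> nominal = ringPotentials(27960., 182, 2.0,   0, 0.0 );
      printf("potential error at the last ring: %.1f V\n", shorted.back() - nominal.back());
      return 0;
    }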

    Here are the results:
    • Left: Surface plot showing the mean r-φ distortion we would get at the first iTPC pad row (radius = 55 cm) averaged over all active west-side z as a function of where the short is located ("ring of short") and how much external resistance is added.
    • Right: Same but drawn as contours. The curve of interest to follow, that leads to <distortion>=0, is the one that starts near (ring,external resistance) = (110,0.0) and ends near (180,2.0), as indicated by the red markers. If the short is at ring 160.5, for example, then this curve indicates an external resistance of ~1.05 MΩ minimizes the distortions averaged over all active west-side z.



    However, it may be more important to restrict the z range included in the distortion average, as most tracks of interest do not cross the inner pad rows at high z...
    • Left: Similar surface plot, but restricting the average to 0 < z < 100 cm.
    • Right: The curve of interest to follow in this contour plot, that leads to <distortion>=0, is the one that starts near (100,0.0) and ends near (180,2.0), as indicated by the red markers. If the short is at ring 160.5, then this curve indicates that an external resistance of ~1.25 MΩ minimizes the distortions averaged over 0 < z < 100 cm.




    Some additional observations:
    • The curve of interest should always end close to (180.5,2.0), as that approaches the condition where the external resistance is no different than an internal resistance at the very end of the chain.
    • The above is a simplification of what area should be integrated, as tracks with η ≠ 0 cross a variety of z at various radii, complicating the impact on their reconstruction. A track-by-track analysis of impact would be more meaningful, but a lot more work! The modeling shown here can serve as a rough guide to the best external resistance to use, but should not be taken as definitive for all physics.
    • It is interesting to note that the model implies that a negative external resistance would help minimize the <distortion> when the short is closer to the central membrane (ring 0). A way to think of this is like having a short at both ends, such that the potentials are too high near the central membrane, and then too low near the endcap, so that the E field tilts one way in the region near the central membrane, isn't tilted at half the drift length, and then tilts the other way near the endcap, resulting in opposing distortions for electrons which drift the full length that serve to cancel each other. This could in principle be achieved by reducing the overall resistance of the Resistor box at the end of the Field Cage chain. The STAR TPC has (as of this documentation) had no persistent shorts near the central membrane that would warrant this approach.

    -Gene

    OFC West possible distortion

    There remains a possible distortion due to a potential short in the OFC west as well. We see a bimodal pattern of 0 or 80 excess nanoAmps coming out of the OFC West field cage resistor chain (it has been there since the start of the 2005 run). The 80 nA corresponds to a 0.38 MOhm short (a full 2 MOhm short would give about 420 nA of excess current). The corresponding distortion depends on the location of the electrical short. The plot shown here is the distortion in azimuth (or rPhi) at the outermost TPC padrow near the sector boundaries (r=195 cm, the pads are closer to the OFC near the sector boundaries) due to such a short between different possible field cage rings:

    In terms of momentum distortion, a 1mm distortion at the outermost padrows would cause a sagitta bias of perhaps about 0.5mm for global tracks (and even less for primary tracks), corresponding to an error in pt in full field data of approximately 0.006 * pt [GeV/c] (or 0.6% per GeV/c of pt). This is certainly at the level where it is worthwhile to try to fix the distortion if we can figure out where the short is. It is also at the level where we should be able to use the lasers to locate the short to within perhaps 50 cm.
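
    Two quick back-of-the-envelope checks of the numbers above, using the nominal cathode voltage and full-chain resistance quoted in the resistor box section further down this page, and assuming full field (0.5 T) with a lever arm of roughly 2 m for the sagitta-to-pt estimate; all of these inputs are assumptions for illustration only.

    #include <cstdio>

    int main() {
      // (1) Excess current -> short resistance
      double vCathode = 27960.;                       // V (magnitude)
      double rChain   = 364.44;                       // MOhm, nominal full chain
      double iNominal = vCathode / rChain;            // ~76.7 uA
      double dI_2MOhm = vCathode / (rChain - 2.0) - iNominal;  // ~0.42 uA = ~420 nA
      double rShort   = 0.080 * rChain * rChain / vCathode;    // 80 nA excess -> ~0.38 MOhm
      printf("2 MOhm short -> %.0f nA excess; 80 nA excess -> %.2f MOhm short\n",
             dI_2MOhm * 1000., rShort);

      // (2) Sagitta bias -> relative pt error
      double B = 0.5;          // T (assumed full field)
      double L = 2.0;          // m (assumed lever arm, vertex to outermost padrow)
      double dSag = 0.0005;    // m (~0.5 mm sagitta bias)
      // sagitta s = 0.3*B*L^2/(8*pt)  =>  delta(pt)/pt ~ dSag/s = 8*pt*dSag/(0.3*B*L^2)
      double relErrPerGeV = 8. * dSag / (0.3 * B * L * L);
      printf("delta(pt)/pt ~ %.4f * pt [GeV/c]\n", relErrPerGeV);  // ~0.007 per GeV/c
      return 0;
    }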

    Just as a further point of reference, the plot for radius = 189cm, corresponding to the radius of the outermost padrow in the middle of a sector (its furthest point from the OFC) can be found here.

    This possible distortion remains uninvestigated at this time.


    Gene Van Buren
    gene@bnl.gov

    Other FC short distortion measurements

    I considered the possibility that other measurements might help isolate the location of the short in the TPC. So, using the Modeled distortions, I modeled the effects of adding even more compensating resistance to the end of the IFC east resistor chain. Below are the results for shorts located at ring 165.5 (between rings 165 and 166), 167.5, 169.5, ..., 179.5 as indicated for the colored curves. All plots use a 1.0cm range on the vertical scale so that they can be easily compared. I had hoped that one resistance choice or another would cause more separation between the curves, giving better resolving power between different short locations. But this dependence is small, and actually seems to decrease a little with increased compensating resistance. Remember also that the last laser is at Z of about -173cm.

    Also perhaps worth noting is that the 1.0MOhm compensating resistor probably helps reduce the distortions even more than an exact 1.14MOhm would. The latter causes the distortions not to change between the short location and the central membrane, but the former actually causes the distortion to re-correct the damage done between the short location and the endcap!

    Compensating resistance [MOhms]: 0.0, 1.0, 1.14, 2.0, 4.0, 20.0
    (one plot of the distortion on the first padrow vs. Z for each value)


    Gene Van Buren
    gene@bnl.gov

    Observed laser distortions

    First look

    The first listed attached file was my initial look at distortions to laser tracks in the TPC (note that it is upside down relative to subsequent plots, as I accidentally took the non-distorted minus the distorted here). These are radial tracks, and the plots are of the difference between run 7068057 (with excess current) and 7061100 (without excess current). Also, run 7068057 was taken with collisions ongoing, so there is some SpaceCharge effect as well (7061100 was taken without beam). I believe this explains the rotation of the tracks at low positive z (west side).

    Second look: dedicated runs

    On March 17, we took a couple laser runs without and with a compensating resistor to get the IFC east current at least approximately correct. I plotted 1/p of laser tracks and took the profile. Straight tracks give very low curvature = low 1/p. The distortion brings up the curvature, as can be seen in the IFC east without resistor. The same plot also shows large error bars for the negatively charged laser tracks because there aren't many: the curvature tends to bring them positive. The IFC west shows the appropriately low level without any distortion, but the timing on the west lasers was wrong, so they are not reconstructed where they are supposed to be in Z. I am uncertain whether this bears any relation to the odd behavior of the first laser on the west side (showing up here at Z of about +67cm). The IFC East plot for the no-compensating-resistor run also shows that 1/p begins to drop somewhere around Z of about -100cm. The short would be located where the largest slope occurs in this plot (because the distortion to tracks is an integral of the distortion in the field, and the short is where the field distortions are largest), but the data isn't strong enough to pin this down very well. The negative tracks indicate a short between the lasers at -145cm and -115cm. But the seemingly better quality positive tracks are less definitive on a location as the slope appears to get stronger at more negative Z, implying a short which is at Z beyond -170cm.

    No compensating resistor (run 7076029):

    With 1 MOhm compensating resistor (run 7076032), which brings IFC east current to correct value within ~20 nAmps, according to 10:27am 2006-03-17 entry in Electronic ShiftLog:

    Again, I looked at the distortion as seen by comparing TPC hits from the distorted runs to those from an undistorted run, as I did at the beginning of this page (but this time taking distorted minus undistorted). For the undistorted reference, I again have only run 7061100 to work with. The plots for each Z are in the second listed attached file below, where the top 6 plots are for the no-compensating-resistor run (7076029), and the bottom 6 are the same ones with the resistor in. I also put on the plots the value of the difference from a simple fit (meant only to extract an approximate magnitude) at a radius of 60cm (approximately the first TPC pad row). Those values are also presented in the following plot as a function of Z, confirming the improvement of the distortion with the resistor in place.

    These plots seem to point at a short which is occurring somewhere between the lasers at Z = -145 and -115cm.


    Gene Van Buren
    gene@bnl.gov

    Resistor box at the end of the Field Cage chain

     After ring 181, the potentials are determined by a box of resistors which sits outside the TPC. This is well documented, though the documentation is not complete at the time of this writing. This was particularly relevant during Run 9, when an electrical short developed inside the TPC between rings 181 and 182 of the outer field cage on the west end (OFCW). Shown here is a plot of the resistors:

     

    Note that the ammeter is essentially a short to ground, while the voltmeters are documented in the Keithley 2001 Manual to present over 10 GOhm of resistance (essentially infinite resistance). The latter only holds when the input voltage is below 20 V. The voltages at rings 181 and 182 are above 20 V and below 200 V (though actually at negative voltage), so they are stepped down by a factor of ~10 using the 1.11 and 10.0 MOhm divider shown. The readings are then multiplied by x10 before being recorded in the Slow Controls database.
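
    As a quick check of the x10 step-down implied by those two resistors (treating the voltmeter as effectively infinite resistance, so the meter reads the drop across the 1.11 MOhm leg):

    #include <cstdio>

    int main() {
      double ratio = 1.11 / (1.11 + 10.0);   // fraction of the ring voltage seen by the meter
      printf("divider ratio = %.4f, i.e. readings must be multiplied by ~x10\n", ratio);
      return 0;
    }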

    After Run 9, this box was disconnected and resistances were measured for the OFCW portion. Because the resistors were not separated from each other, equivalent resistances were actually measured. In the below math, R182eq refers to the resistance measured across resistor R_182, while Rfull is the resistance measured between the input to the box from ring 181 to the output for the ammeter. R111 is the combined 10 + 1.11 MOhm pair.

    double R111 = 10.0 + 1./(1./1.11 + 1./10000.)  // divider pair, with the ~10 GOhm voltmeter in parallel with the 1.11
    // = 11.11 MOhm
    double R182eq = 0.533  // MOhm, measured across R_182
    double Rfull = 2.31    // MOhm, measured from the ring 181 input to the ammeter output
    
    // Solve for R181 (quadratic from the two equivalent-resistance measurements)
    double pa = R111 - Rfull
    double pb = R111*(R111 - 2*Rfull)
    double pc = (R182eq - Rfull)*R111*R111
    
    double R181 = (-pb + sqrt(pb*pb - 4*pa*pc))/(2*pa)
    // = 2.3614 MOhm
    double R182 = 1./(1./R182eq - 1./R111 - 1./(R181+R111))  // subtract the parallel paths through the dividers
    // = 0.5841 MOhm
    
    double R182b = 1./(1./R182 + 1./R111)  // R182 in parallel with its divider
    
    // Nominal (unshorted) conditions
    double Vcm = 27960.    // V, cathode (central membrane) voltage magnitude
    double Rtot = 364.44   // MOhm, full chain
    double Inorm = Vcm/Rtot
    // = 76.720 uA
    double V181_norm = Rfull * Inorm
    // = 177.22 V
    double V182_norm = V181_norm * R182b/(R181 + R182b)
    // = 33.724 V
    
    // With rings 181 and 182 shorted together (R181 bypassed)
    double Rshorted = 1./(1./R111 + 1./R182 + 1./R111)
    // = 0.5286 MOhm
    double Rmiss = Rfull - Rshorted   // resistance removed from the chain by the short
    // = 1.7814 MOhm
    double Ishorted = Vcm/(Rtot - Rmiss)
    // = 77.097 uA
    double V181_short = Rshorted * Ishorted
    // = 40.750 V
    double Iexcess = Ishorted - Inorm
    // = 0.377 uA (377 nA)
    

    Note that many of these numbers would be different for the inner field cages.

    External resistors to make up for missing resistance can also be added to the chain here.

    GridLeak Studies

    Floating Grid Wire Studies

    Distortions in TPC data (track residuals) are seen in 2004-2006 data which are hypothesized to come from a floating Gating Grid wire. Notes from a meeting of TPC experts regarding the topic held in October 2006 can be found here.

    See PPT attachment for simulations of floating grid wires from Nikolai Smirnov which show that the data is consistent with two floating -190V wires in sector 8, and two floating -40V wires in sector 3 (all wires are at -115V when the grid is "open").

    GridLeak Distortion Maps

     Using the code in StMagUtilities, these are maps of the GridLeak distortion.

    First, this is a basic plot of the distortion on a series of hits going straight up the middle of a sector (black: original hits; red: distorted hits). The vertical axis is distance from the center of the TPC (local Y) [cm], and the horizontal axis is distance from the line along the center of the sector (local X). Units are not shown on the horizontal axis because the magnitude of the distortion is dependent on the GridLeak ion charge density, which is variable.

    The scale of the above plot is deceptive in not showing that there is some distortion in the radial direction as well as the r-phi direction. The next pair of plots show a map of the distortion [arb. units] in the direction orthogonal to padrows (left) and along the padrows (right) versus local Y and angle from the line going up the center of the sector (local φ) [degrees].

    One can see that the distortion is on the order of x2 larger along the padrows than orthogonal to the padrows. Also, it is clear that there is a small variance in magnitude of the distortion across the face of the sectors.

    The next plot shows the magnitude of the distortion [arb. units] along the padrow at the middle of the sectors vs. local Y [cm] and global Z [cm]. The distortion is largest near the central membrane (Z=0) and goes to zero at the endcaps (|Z| ≅ 205 cm), with a linear Z dependence in between, which flattens off at the central membrane and endcap due to boundary conditions that the perturbative potentials are due to charge in the volume and are constrained to zero at these surfaces.

    GridLeak Simulations

    Nikolai Smirnov & Alexei Lebedev:
    Data for STAR TPC supersector.   05.05.2005  07.11.2005
    Jon Wirth, who built all the sectors, provided these data.
    
    Gated Grid Wires: 0.075mm Be Cu, Au plated, spacing 1mm
    Outer Sector 689 wires, Inner Sector 681 wires. Total 1370 wires per sector
    
    Cathode Grid Wires: 0.075mm Be Cu, Au plated, spacing 1mm
    Outer Sector 689 wires, Inner Sector 681 wires. Total 1370 wires per sector
    
    Anode Grid Wires: 0.020mm W, Au plated, spacing 4mm
    Outer Sector 170 wires, Inner Sector 168 wires. Total 338 wires per sector
    Last Anode Wires: 0.125mm Be Cu, Au plated
    Outer Sector 2 wires, Inner Sector 2 wires. Total 4 wires per sector
    

    We are most interested in the gap between the Inner and Outer sector, where the ion leak is important for space charge. The wire set is shown in Fig. 1. The distance between the inner and outer gating grid is 16.00 mm. When the Gating Grid is closed, the border wires in the Inner and Outer sectors are at -40 V, the adjacent wires are at -190 V, and this alternating pattern is preserved across the whole sector - see Fig. 2. When the gating grid is open, each wire in the gating grid has the same potential, -115 V. Above the grid plane we have a drift volume with E ~ 134 V/cm to move electrons from tracks to the sectors and repel ions toward the central membrane. The cathode plane is at zero voltage, while the anode wires are held at +1390 V for the outer sector and +1170 V for the inner sector.


    Fig. 1. Wire structure between Inner and Outer sector.


    Fig.2 Voltages applied to Gating Grid with grid closed.

    Other configurations of voltages on the gating grid wires are presented in Fig. 3. All of these voltages can be achieved by changing wire connections in the gating grid driver. Garfield simulations should be performed for each configuration to find the minimum ion leak.


    Fig.3 Different voltages on closed gating grid (top: inverted, bottom: mixed).

    This is a key for Nikolai's files: there are four sets of files, and in each set there is a simulation for the Gating Grid voltages on the last wires. Additionally, he artificially put a ground shield at the level of the cathode plane and simulated collection for the last (thick) anode wire, and also for a ground shield plus the last thin anode wire.

    Setups:                   Standard | Inverted | Mixed | Ground Strip | Ground Strip and Wire
    Equipotentials:           PS       | PS       | PS    | PS           | PS
    Electron paths:           PS       | PS       | PS    | PS           | PS
    Ion paths (inner sector): PS       | PS       | PS    | PS           | PS
    Ion paths (outer sector): PS       | PS       | PS    | PS           | PS
    GridLeak: Gain Study

    In February 2005, Blair took some special runs with altered TPC gains so we could study the effect on the GridLeak distortion. What is shown in the following plots is the distortion (represented by the profile of residuals at padrows 12 and 14, which amounts to 0.5 * [residual at padrow 12 + residual at padrow 14]) as a function of Z in the TPC. Black points are the distortions from sectors 7-24, and red are 1-6, which are the sectors where the TPC gain was reduced. I have chosen as labels "Norm", "Half", and "Low" for these three conditions of no gain change, half gain change on the inner pads only, and half gain change on both inner and outer.
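
    A minimal sketch of how such a profile could be filled; only the 0.5 * (padrow 12 + padrow 14) proxy comes from the description above, while the function and histogram names, the binning, and how the residuals are obtained are placeholders.

    #include "TProfile.h"

    // Fill the distortion proxy, 0.5*(residual at padrow 12 + residual at padrow 14),
    // into a profile vs. Z. One such profile would be filled for sectors 1-6 and
    // another for the reference sectors.
    void fillDistortionProfile(TProfile& prof, double trackZ,
                               double residualPadrow12, double residualPadrow14) {
      double proxy = 0.5 * (residualPadrow12 + residualPadrow14);
      prof.Fill(trackZ, proxy);
    }

    // Example profiles (binning is illustrative):
    //   TProfile profLow ("profLow",  "sectors 1-6;Z [cm];distortion proxy",  50, -200., 200.);
    //   TProfile profNorm("profNorm", "sectors 7-24;Z [cm];distortion proxy", 50, -200., 200.);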

    First note: the ever-present problem that our model goes to zero distortion at the endcaps, while the measured distortions do not appear to do so (though the distortion curves should flatten out [as seen in the above plots] as a function of Z near the endcap and central membrane due to boundary conditions on the fields in the TPC).

    Second note: I have not excluded sector 20 from these plots, which partly explains why the east half (z<0) has slightly less distortion than the west in these profile plots. In reality, east and west were about even for a normal run (distortions excluding sector 20).

    Here is the z-phi plot for Low (it's almost difficult to see the distortion reduction in the z-phi (in "o'clock") plot for Half):

    Third note: (though not too important for this study because we generally ignore east/west comparisons) the sectors between 1-6 o'clock already tend to show somewhat less distortion than the sectors at 7-12 o'clock, and because it is true on both halves of the TPC, it is more likely to be due to SpaceCharge azimuthal anisotropy than asymmetries in the endplanes. Here are the distortions at |z|<50 for east (red) and west (blue) as a function of phi in "o'clock", where one can see the already present asymmetry, explaining why sectors 1-6 are already less distorted in the Norm run than sectors 7-12:

    We have to normalize to sectors 7-12 to see the drop in distortions, as the runs were taken at different times when the luminosity of the machine, and therefore the distortion normalizations, were different. Here are the ratios of sectors 1-6 / sectors 7-12:

    And the double ratios to see the drop in the Half and Low runs w.r.t. the Norm run:

    These plots show ratios in the Z = 25-150cm range of about 0.86 and 0.59 respectively, or reductions of about 14% and 41% give or take a few percentage points. Data beyond 150cm tends to be poor and there's little reason to believe that the ratio really changes by much there. However, there does appear to be some shape to the data, which is not understood at this time.

    Another way to calculate the difference in distortions is to take a linear fit to the slope of the distortions between z = 25-150cm. Those fit slopes are:

    Low:
    1-6: 0.000250 +/- 0.000019
    7-12: 0.000420 +/- 0.000024
    Half:
    1-6: 0.000365 +/- 0.000023
    7-12: 0.000433 +/- 0.000025
    Norm:
    1-6: 0.000401 +/- 0.000024
    7-12: 0.000422 +/- 0.000023
    
    Again, we need the ratio of ratios:
    [Half(1-6)/Half(7-12)] / [Norm(1-6)/Norm(7-12)] = 0.89+/-0.11 (12%)
    [Low(1-6)/Low(7-12)] / [Norm(1-6)/Norm(7-12)] = 0.63+/-0.08 (12%)
    Inner reduction = (11 +/- 11)%
    Outer reduction = (26 +/- 11)%
    Total reduction = (37 +/- 8)%
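
    The double ratios and uncertainties above can be reproduced with simple error propagation, treating the four fitted slopes as independent and adding their relative errors in quadrature; a small check using the slopes listed above:

    #include <cstdio>
    #include <cmath>

    // (a/b)/(c/d) with uncorrelated relative errors added in quadrature.
    double ratioOfRatios(double a, double ea, double b, double eb,
                         double c, double ec, double d, double ed, double* err) {
      double r   = (a / b) / (c / d);
      double rel = sqrt(pow(ea/a, 2) + pow(eb/b, 2) + pow(ec/c, 2) + pow(ed/d, 2));
      *err = r * rel;
      return r;
    }

    int main() {
      double err;
      double rHalf = ratioOfRatios(0.000365, 0.000023, 0.000433, 0.000025,
                                   0.000401, 0.000024, 0.000422, 0.000023, &err);
      printf("Half double ratio = %.2f +/- %.2f\n", rHalf, err);  // ~0.89 +/- 0.11
      double rLow  = ratioOfRatios(0.000250, 0.000019, 0.000420, 0.000024,
                                   0.000401, 0.000024, 0.000422, 0.000023, &err);
      printf("Low double ratio  = %.2f +/- %.2f\n", rLow, err);   // ~0.63 +/- 0.08
      return 0;
    }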
    
    These numbers are smaller than the reductions indicated by the above plot of double ratios of the distortions themselves. This likely reflects the errors in fitting the slopes. In that sense, the plot values may be more accurate. We need not worry in this study about getting the reduction numbers exact, but it is perhaps accurate enough to say that the inner sector gain drop reduces the distortion by about 13%, and the outer sector gain drop reduces it further by about 27% (about twice as much as the inner) from the original distortion. It is clear that both inner and outer TPC sectors contribute to the distortion, and that the outer TPC contributes significantly more to the distortion. If hardware improvements can only be implemented for either the inner or outer, then the outer is the optimal choice in this respect. It is not obvious offhand whether this is consistent with the GridLeak Simulations which we have done so far for these ion leaks.


    Gene Van Buren

    Long term life time and future of the STAR TPC

    In an effort to better understand the future of the TPC in the high luminosity regime, several meetings were held and efforts carried through.

    Present at the meeting(s) were: myself (Jerome Lauret), Alexei Lebedev, Yuri Fisyak, Jim Thomas, Howard Wieman, Blair Stringfellow, Tonko Ljubicic, Gene van Buren, Nikolai Smirnov, Wayne Betts (If I forgot anyone, let me know).

    It is noteworthy to mention that the TPC future was initially addressed at the Yale workshop (Workshop on STAR Near Term Upgrades), where Gene van Buren gave a presentation on the TPC - status of calibration, space charge studies, lifetime issues. While the summary was positive (new method for calibration, etc.), the track density and the appearance of new distortions as the luminosity ramped up remained a concern. As a reminder, we include here a graph of the initial Roser luminosity projection.

    Roser luminosity
    which was built from the following data:

    Year                          2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014
    Peak Au Luminosity               4   12   16   24   32   32   32   32   32   48   65   83   83
    Average Au Store Luminosity      1    3    4    6    8    8    8    8   16   32   55   80   80
    Total Au ions/ring [10^10]       4    8   10   11   12   12   12   12   12   12   12   12   12

    An updated summary of the status of the TPC was given by Jim Thomas at our February 2005 collaboration meeting (A brief look at the future evolution of TPC track distortions; see attachment below) to address ongoing GridLeak issues. The slide 19 summary for the future is reproduced here:
    Will the TPC Last Forever?

    1. Dynamic distortions driven by L
      1. 2x increase is feasible and this takes us to 2010 and (probably) beyond.
    2. Some static distortions need work
      1. e.g. Central Membrane is not flat
        Probably of academic interest
      2. Unlikely that any of these static unresolved issues will affect the useful lifetime of the TPC
    3. Beam backgrounds and ghost beams may be a problem
      1. PHENIX put up shielding
      2. Gene sees some bad runs
    4. TPC replacement parts will eventually be a problem
    5. TPC replacement people will definitely be a problem

    Questions arose regarding the reliability of the detector itself, i.e., aging issues (including shorts and side effects) were raised along with increasing concerns related to grid leak handling. Alexei Lebedev proposed a series of hardware modifications in May 2005 to account for those issues (see the What we can do with TPC while FEE are in upgrade attachment).

    Relevant to possible software and hardware solutions for the grid leak are GridLeak Simulations of the fields and particle paths in the region near the inner/outer TPC sector gap.

    Extensive analyses are also available from the SpaceCharge and GridLeak pages, and especially (for AuAu) the page QA: High Luminosity and the Future.

    Anode wire aging

     See the attached file: RD51-Note-2009-002.pdf

    SpaceCharge studies

     Studies of SpaceCharge in the TPC.

    Potential distortions at projected luminosities for different species

    Here I show the expected distortions in the STAR TPC as represented by the pointing error of tracks to the primary vertex (otherwise known as the DCA) due to experienced (star symbols) and projected luminosities (triangle and square symbols) at RHIC. One can think of these distortions as being caused by the accumulated ionization in the TPC due to charged particles traversing it from collisions (ignoring any background contributions), or the charge loading of the TPC.

    These measurements and projections are current as of November 2008 with the exception of the pp500 (pp collisions at √s = 500 GeV) projections (open symbols) [1,2]. In 2005 the expectation was that RHIC II could achieve pp200 peak luminosities of 150 x 10^30 cm^-2 sec^-1 and pp500 peak luminosities of 750 x 10^30 cm^-2 sec^-1 [3]. Presently (2008), the pp200 peak luminosity estimate at RHIC II has dropped to 70 x 10^30 cm^-2 sec^-1, and this is what is used in my plot. I have found no current estimate for pp500. The 2005 estimate indicated a factor of x5 more peak luminosity from the 500 GeV running than 200 GeV; I do not know if that is still possible, but I have chosen to use only a factor of x2 (should be conservative, right?) in the plot shown here.

    I have also had to estimate the conversion of luminosity into load in the TPC for pp500 as we have not yet run this way. We did run at √s of approximately 405 GeV in June 2005, which showed a 17% increase in load over pp200 per the same BBC coincidence rate [pp400 (2005)]. I have roughly estimated that this means approximately a 25% increase in TPC load going from pp200 to pp500, though with no serious basis for doing so.

    Finally, the documented projections show that RHIC II will achieve approximately a factor of x2.6 increase over RHIC I for both AuAu200 and pp200 [2]. I have taken the liberty to apply the same factor of x2.6 to CuCu200, dAu200, and pp500.

    (Note: The pp data use a different horizontal axis which was adjusted to lie amidst the ion data for ease of comparison, not for any physically justified reasons.)

     

     

    Of the data we've taken so far, AuAu collisions have presented the highest loading of the TPC. However, CuCu collisions have the potential to introduce the highest loading for ions, and the RHIC II projections lead to pointing errors close to 20 cm! Actually, SiSi may be even worse (if we ever choose to collide it), as the luminosity is expected to achieve a factor x4.2 higher than CuCu, while the load per collision may be in the 40-50% ballpark of CuCu [2,3].

    Loading due to pp collisions at 200 GeV is generally similar to AuAu collisions at 200 GeV, but 500 GeV pp collisions will load the TPC much more severely. Even in RHIC I, we may have pointing errors between 5-10 cm using my conservative estimate of x2 for the luminosity increase over pp200. If the factor of x5 is indeed possible, then use of the TPC in pp500 at RHIC II seems inconceivable to me, as the simple math used here would completely break down (the first TPC padrows are at 60cm, while 2.5*25.1 cm is larger than that)**. Even with a factor of x2, things may be quite problematic for pp500 in RHIC II.

     

    References:

    1. RHIC Run Overview
    2. RHIC Projections
    3. RHIC II - Ion Operation
    4. pp400 (2005)

    Some additional discussions from 2005 regarding the TPC and increased luminosities are documented at Long term life time and future of the STAR TPC.

    Note: This is an update of essentially the same plot shown in the 2007 DAQ1000 workshop (see page 27 of the S&C presentation).

    ** Actually, a rather major point: we never get full length tracks with DCAs to the primary vertex of more than a couple cm anyhow, due to the GridLeak distortion. These tracks get split and the DCAs go crazy at that point. Since GridLeak distortion corrections are only applied when SpaceCharge corrections are, a roughly appropriate SpaceCharge+GridLeak correction must be applied first to even find reasonably good tracks from which to determine the calibration.


    G. Van Buren

     

    SpaceCharge in dAu

    In Run VIII, dAu data was acquired at high enough luminosities to worry about SpaceCharge (it was ignored previously in the Run III dAu data (2003)). Attached is initial work by Jim Thomas to determine the charge distribution in dAu.

    TPC Sector Alignment Using Particle Tracks for Run 8


    TPC group meetings 2020-

    Provide minutes or link to minutes of meetings held.

    Earlier minutes from the construction and commissioning period can be found at
    https://drupal.star.bnl.gov/STAR/subsys/upgr/itpc/itpc-meetings/itpc-minutes

    Started on 10/14/2020. There are some holes in the records.


    TPC meeting November 14, 2024



    Present: Alexei, Mathias, Jim, Emmy, Yuri, Flemming, Gene, and myself
     
     
    == Hardware
    — plan is for Yuri to meet with Alexei this week or next (due to illness)
    — continuing to run gas system recovery
    — preparing for measurements locating TPC relative to magnet support
    — many precautions now needed for work in hall
    — should start TPC location measurement work next week
    — discussions have been initiated regarding how to remove the TPC from the magnet
    — no shorted rings to worry about from previous run
     
     
    == Electronics
    — Tonko and Tim working on west side
    — found several bad cables (some can be changed)
    — possible that run may start in early March (rather than late)
    — current proposed end date for run is October of 2025 (or later?)
     
     
    == Software
    — BES-II data Au+Au 19 GeV collision data is now a priority
    — some calibration done using new alignment
    — event.root files exist if Yuri wants to examine them
    — 19.6 GeV Au+Au from 2019 is also a priority
    — Gene started SC calibration for that
    — some portion has different RICH scalers
    — ran calibration with dev and with new alignment (SL24)
    — no noticeable difference in SC calibration result
    — good news for BES-II work
    — Run 22 p+p 500 may also be good with regard to SC even if we use new alignment
     
    — Mathias looking at several comparisons between TFG24e and the new alignment test production
    — has requested a small embedding production for study
     
    — Emmy looked at delta-m/m vs <Z> with and without tracks that cross the central membrane 
                Presentation
    — now have 1/pt dependences and eta dependence
    — eta plot shows interesting difference in structure between positive and negative eta
    — Gene did an additional study using h-/h+ and got a beautiful correlation with Emmy's results
    — comparing the studies suggests that Emmy is seeing alignment artifacts
    — allows the possibility of a new method of correction for distortions due to sector alignment 
    — should also look at newer data and understand how the new alignment impacts the work
     
    — Yuri working on dE/dx calibration sample 
    — cluster is very slow at present 
    — some runs have fluctuating voltage (average is down by ~300 V)
    — new KFParticle release (TFG24f)
    — Maxim did some clean-up on the code
    — some interfaces changed
    — also added some indications in KFParticle finder
    — added projections to cylinder
    — also has new STKFParticleAnalysisMaker
    — plan is to reintroduce header files (remove VC files)
     
     
    == AOB
    — Flemming added Emmy to the TPC list



    TPC meeting November 6, 2024

    Present: Alexei, Mathias, Jim, Emmy, Yuri, Flemming, Grazyna, Tommy, Gene, and myself
     
     
    == Hardware
    — some new rules about work in hall 
    — have not been able to redo TPC movement measurement
    — need second person in hall
    — Yuri will join next week to accompany Alexei
    — working on gas system
    — also working on laser alignment
    — west laser okay
    — east laser is weak and needs some optics replaced
    — next week will mostly be devoted to east laser repair
     
    == Electronics
    — Tonko and Tim continuing work on RDO recovery
    — question about sPHENIX TPC performance
    — Flemming mentioned there was a dE/dx plot in PAC meeting presentation
    — Yuri mentioned there was an ADC plot that was called a dE/dx plot
    — Gene mentioned they are facing an issue with low cluster charge in Au+Au (but not in p+p)
                         added: this was a function of drift distance
    — perhaps related to contamination
        
     
    == Software
    — Run 22 p+p 500 SC
    — nothing new for this week on SC
     
    — BES-II data Au+Au 19 GeV collision data is now a priority
    — need to look at effect of new alignment there
    — need to do SC calibration for this sample and produce a dE/dx sample
     
    — The production of 4p59GeV_fixedTarget with TFG24e continues. Now we have ~120 M events.
    — TFG24f release has been done. This release includes KFParticle and StKFParticleAnalysisMaker as cleaned up by Maxym Zyzak.
    — TFG24g release will be next, including some modifications with respect to Maxym's version of StKFParticleAnalysisMaker.
    — To address Mathias's plot for deuteron Yuri showed ( https://github.com/fisyak/star-sw/blob/TFG/macros/Proton.C )  
    — the η dependence of the deuteron peak position can be explained by energy loss in the beam pipe, IFC, and TPC gas.

     
     
    — Mathias looking at several comparisons between TFG24e and the new alignment test production
    — some anomalous concentrations in positions of first and last hits
    — primary vertices found all the way across the TPC in Z
    — vertexing may still be relying on some collider mode settings 
    — interesting fall-offs in global tracks with >10 hits that are called primaries compared to global tracks with >10 hits
     
    — Emmy looked at delta-m/m vs <Z> with and without tracks that cross the central membrane 
    — previously observed w shape became more exaggerated rather than less
    — not currently understood
     
     
    == AOB
    — Nothing new from this week.


    TPC meeting October 31, 2024

    Present: Alexei, Mathias, Jim, Emmy, Gene, Yuri, Flemming, Grazyna, Sonoraj, Aihong, Daniel, and myself
     
     
    == Hardware
    — started gas system recovery
    — TPC currently in nitrogen flow (after purge), about 80 l/min
    — most sensors are off
    — started with dryer, then cooling down
    — lasers: west looks okay, east has several mirrors damaged (and intensity is not very high)
    — may want to change mirrors inside box
    — will discuss further with Yuri how to measure TPC movement
    — during sensor install noticed that TPC was already at limit
    — will make another measurement with more accuracy
     
     
    == Electronics
    — Tonko and Tim are doing inventory and status checks
     
        
     
    == Software
    — Run 22 p+p 500 SC
    — nothing new for this week on SC
     
    — Mathias looking at test production with new alignment
    — see plots posted to list and/or on Drupal agenda
    — checked with and without 2 cm shift in proximity cut for z-axis to account for FXT target location
    — have also noticed an excess of forward rapidity tracks with decreasing energy (opposite of what is expected)
    — amounts to ~10% more tracks in forward rapidity in test production
    — may want to request a test embedding sample with the new alignment
    — Gene mentioned that with the 2 cm shift, we may need to tighten the 3 cm primary cut to avoid tracks from beampipe
    — there are also z-cuts and ToF matching cuts used
    — Emmy looking at matter antimatter mass differences for tritons, He3, and He4
    — see talk posted on Drupal
    — general agreement between protons and kaons
    — some azimuthal variation visible even in proton and kaon mass comparisons
    — difficult to see in pions due to difficulty of measurement itself
    — similar variation in eta
    — perhaps a static field issue?
    — Gene mentioned a possible cross-check to identify TPC sector alignment
    — also, there is a suggestion to exclude tracks that cross the central membrane
     
    — Gene mentioned that using the new chain option causes jobs to take longer in tracking
    — potentially doubles in time 
    — needs to be understood
    — Run 22 pp500 SC was waiting for new alignment, but may need to move forward with production if this increase in tracking time cannot be resolved in a reasonable time
    — if we proceed with calibration production without new alignment, SC would be considered already done, dE/dx would be next
    — Yuri has finished dE/dx  calibration with new TPC alignment, using BES-II fixed target data available on HLT farm.
    — see: https://drupal.star.bnl.gov/STAR/system/files/RunXIX_XXI_fixedTarget_dEdx.pdf
    — Yuri has also started 4p59GeV_fixedTarget with TFG24e which includes new dE/dx calibration and adjustment vertex seed cuts
    — see /gpfs01/star/data25/TpcAlignment/TFG24e/2019/RF/4p59GeV_fixedTarget/
    — So far ~31.983M events have been processed.
    — The kfp Analysis is coming.

    TPC meeting October 10, 2024
    Present: Alexei, Mathias, Jim, Emmy, Zhangbu, Frank, Gene, Yuri, Tommy, Grazyna, and myself
     
     
    == Hardware
    — most things going well
    — several RDOs bad
    — access tonight (primarily for sPHENIX), beam after midnight
    — tried 111x111 last night, failed so returned to 56x56
    — drift velocity has recovered (trending up toward 5.5 cm/us) since adding slightly more methane 
     
     
    == Electronics
    — Tonko may be able to repair some boards today depending on schedule
    — anything done will be reflected in the log book
        
     
    == Software
    — Run 22 p+p 500 SC
    — waiting on new TPC alignment
    — Emmy looking at mass difference between light nuclei and light anti-nuclei
    — asymmetric effect of SC on charge signs is important to her analysis
    — looked at difference in mass^2 for protons from isobar dataset 
    — clear difference in mass^2 observed over pt which even diverges with increasing pt
    — apply a correction based on assumption of inaccurate momentum measurement
    — momentum inaccuracy taken to be proportional to momentum
    — new correction works well out to about 2.5 GeV/c
    — may need to look at dependence on Z position of the tracks
    — another suggestion is to look at dependence on azimuthal position
    — need someone to also look at other datasets to see if same effect is there 
    — perhaps also look at luminosity dependencies 
    — 
     
    — TFG24d
    — going forward with dE/dx calibration production from HLT farm in meantime 
    — next is to go forward with new alignment and check other data samples
    — Gene noted that calibration production usually comes after distortion corrections have been applied
    — calibration production for fixed target could happen immediately
    — other (non-FXT) sets will need distortion correction pass first
     
    == AOB
    — some discussion of increase in hit errors for 2016 Au+Au
    — question of relative benefit attributable to increased errors vs inclusion of more noise
    — increase in hit errors serves to include hit in pattern recognition during track finding
    — but because errors are large, the tracks are not unduly distorted




    TPC meeting October 3, 2024



    Present: Alexei, Jim, Grazyna, Frank, Gene, Flemming, Yuri, Daniel, and myself


    == Hardware
        — things look good 
        — some small decrease in drift velocity
            — added more methane
        — timing of Au+Au run still under some discussion (meeting yesterday)
            — sPHENIX requested 6x6 bunches
            — discussion of parameters took place (do we support crossing angle or no crossing angle)
            — Gene says we need conditions like we will use in regular running 
        — field cage currents should be checked (however weather has been dry so no strong concern)


    == Electronics
        — Flemming showed summary of Run 24 p+p iTPC running 
            — see Flemming’s Drupal Blog
        — RDO outages reach approximately 15% in iTPC
        — 4 RDOs in TPX are probably unrepairable (similar number in iTPC) for various reasons
        — probably need a dedicated meeting after the run to discuss what can be repaired
        

    == Software
        — Run 22 p+p 500 SC
            — 245 out of 247 fills have been calibrated (last 2 don’t have enough runs, used neighboring fills)
            — waiting on new TPC alignment
            — an analyst in the LFSUPC group has been tweaking distortion corrections based on the mass^2 difference between p and pbar
            — will ask analysts to join future TPC meeting to discuss

        — TFG24d
            — going forward with dE/dx calibration with new alignment and now producing on HLT farm
            — will calibrate, with help of Rice BTOF group, dE/dx
            — once he returns from Germany, will produce fixed target data using test production based on available disk space
                — Frank has questions about status of answers to pull request questions
                — in particular, what is the effect of including hit sharing in the new TFG release
                — Yuri has code that is running to check effect of change in hit errors
                    — hits with flag 2 (deconvoluted hits from Tonko) have their errors increased by factor of 4
                    — explains why test production has about 10% more K0_S than P24ia
                    — increasing hit errors for deconvoluted hits increases efficiency in K0 reconstruction
                     — running jobs provide track by track comparisons and other tests
                    — Gene can recompile the code base and run tests that should take only a few hours


    TPC meeting September 26, 2024




    Present: Alexei, Jim, Grazyna, Daniel, Gene, Flemming, Tommy, and myself
     
     
    == Hardware
    — everything is working well
    — some minor laser issues (tuning etc.)
    — grid leak crate failed a couple days ago
    — David was notified, did hard reset through slow controls
    — problem seems to have been resolved and is now okay
    — can we update manuals to ensure this does not happen in future?
    — *Action Item* need to have "global reset" button programmed into the GUI, and also include instructions for manual reset
    — Wiener manual needs to be added to the web pages as well
    — may be ~20 hours of data taken without GL wall powered (may need special handling)
    — there were other mitigation measures put in place years ago, but have not been tested in absence of GL wall
    — bad module from GL crate (from 2021) was replaced and sent for repair yesterday
     
     
    == Electronics
    — may not have been enough time during APEX to fix the RDOs currently down
    — usual outages and recoveries continue
     
     
    == Software
    — Run 22 p+p 500 SC
    — a fourth pass was done with portion of data
    — four remaining fills with statistics just below what is required
    — ran 22 jobs with slightly less statistics, will finish today
    — everything possible to calibrate will be done at that point
    — will then be waiting on TPC alignment
     
    — TFG24d
    — new TPC alignment is pending approval of changes in GitHub
    — GitHub approval process still proceeding 
    — Need some questions answered regarding hit error changes
     
    TPC meeting September 19, 2024 /cancelled

    TPC meeting September 12, 2024
    Present: Alexei, Jim, Yuri, Tommy, Frank, Mathias, Sonoraj, Flemming, and myself
     
     
    == Hardware
    — everything is working well
    — checked TPC vs magnet position again for Yuri
    — sensors were installed this past February
    — no motion detected, even with polarity flip
    — overall TPC is shifted by small amount (~1 mrad), but this took place before sensor installation
     
    == Electronics
    — analyzed all runs for past two weeks
    — iTPC dead channels vary from 12 to 15 on a regular basis
    — TPX has 7 RDOs down
    — may be permanent through next year
    — TPX is about 15% dead (7 out of 48 RDOs) at present
                    itpc status day 220-256
     
    == Software
    — Run 22 p+p 500 SC
    — subset of fills (few hundred jobs) with one more pass
    — SC calibration is _done_ with old TPC alignment once that pass is done
    — will then proceed with more passes once new alignment is available
    — T0 study 
    — look at all data from 2019 - 2021
    — 9.2 GeV (days 178 - 180 and day 195-196) shows 100 ns timing split vertex
    — 9.8 GeV shows approximately 1 time bucket splitting
    — d+Au (2021, days 180-183) also shows approximately 1 time bin splitting
    — see: https://drupal.star.bnl.gov/STAR/blog/fisyak/STAR-Trigger-delays-Run-XIX-XXI
    — TFG24d
    — some fixes implemented including those accounting for old alignment scheme
    — jobs have been resubmitted to test fixes
    — in 2 weeks, Yuri will be gone for 2 weeks
    — QA plots from Mathias show slightly smaller reduced Chi^2 (1.83 vs. 1.95)
    — secondary and tertiary peaks in vertex distribution observed in P24ia are gone in TFG24d
    — discontinuity in DCA Y is still there (TFG24c vs TFG24d)
    — reduction in Chi^2 is thought to be too small
     
     
    == AOB
    — Readiness of alignment changes for FXT production?
    — many comments in PR
    — library has been built by Gene with new alignment
    — for FXT, Yuri would like to use that library to produce calibration sample
    — new alignment needs new options
    — test was done to ensure new options are compatible with older libraries (was successful)
    — status summary will be sent to TPC list
    — no follow-up yet on the 0.1 Hz ripple on power lines
    — weekly reservation still needs to be updated in Drupal (some difficulty getting meeting added, R. Witt will follow-up)





    TPC meeting September 5
    Present: Alexei, Jim, Yuri, Tommy, Frank, Richard T., and myself
     
     
    == Hardware
    — small laser issue likely due to crew tuning
    — expecting thunderstorm this weekend, we'll see how the system handles it
    — most of week was devoted to ToF
    — Yuri requested check of sensors 
    — Alexei has it on the list of things to do
    — so far sensors have not detected any measurable movement of TPC
    — drift velocity calculation failed for a block of runs
    — last night only one run failed
    — failures seem to come in blocks (crew procedure related?) 
     
    == Electronics
    — seems to be a continuing problem with RDOs
    — usually about 10-12 have a problem
    — typically they get repaired cyclically 
    — significant number were still out this morning (another lost about an hour ago)
     
     
    == Software
    — Humidity 
    — dehumidifier added (turned-on) in mid-August
    — observed an anti-correlation (decrease in field cage current with increase in humidity)
    — July 30th shows a sharp jump after power shutdown at STAR (dehumidifiers stopped)
    — in that specific case, the field cage currents show an increase
    — typically, as long as humidity stays below 47%, field cage currents are not significantly impacted
    — Run 22 SC
    — Plots from EMC calibration folks show nice convergence of N+/N- vs. 1/p from different fills compared with past test production
    — significant improvement
    — waiting on alignment to include last few dozen fills
    — will do a pass 4 in the meantime (on remaining fills), needs about a week + 10 days
    — will do pass 5 once alignment in place
    — overall a 3-5 week timescale until Run 22 SC is considered done
    — 2021 Full Field
    — differences in O+O FF and RFF
    — associated with 1 mrad rotation of TPC
    — correction was included in dB
    — no motion has been observed with Alexei's sensors
    — TFG24c
    — test production chi^2 and primary vertex issues observed by UCD group
    — work ongoing
    — TFG24d
    — some fixes with error calculations, particularly for outer sector 
    — K0 in P24ia, TFG24c, and TFG24d
    — can see pull effects (1-2%), error calculation
    — pulls in FXT vs collider do not show differences in simulation but do in real data
    — tried vertex reconstruction with KFParticle
    — see split vertex reco in FXT (at 2 m, separated in x and y)
    — may be vertex from Au on target and another from proton on target
    — need to confirm
    — after upgrade of cvmfs machine, exported code does not run on rcas nodes 
    — thought to be due to extended instruction set 
    — looking for flag to exclude extended instruction set since rcas nodes do not support them
     
     
    == AOB
    — appear to be, in Run 24, some ripple on the power lines at 0.1 Hz (noticed by Jamie)
    — seems to affect beams at that time scale 
    — may lead to SC effects on that time scale
    — weekly reservation needs to be updated in Drupal

    TPC meeting August 29, 2024

    Present: Alexei, Jim, Yuri, Daniel, Mathias, Prashanth, Tommy, and myself
     
     
    == Hardware
    — checked laser and water level during access yesterday
    — looks good, no need to change optics yet on lasers
    — next access in two weeks
    — gas system is still running well
     
     
    == Electronics
    — 2-3 RDOs repaired yesterday
    — 6 or 7 still out this morning
     
     
    == Software
    — TFG24c
    — working to answer the questions and comments associated with pull request
    — new TFG24d soon, hope to have update by next week
    — further discussion of multiple primary vertex clusters ongoing
     
     
    == AOB
    — received HV cable for new cathode power supply
    — David comes back in a couple months
    — plan is to put new power supply as a spare

    TPC meeting August 22, 2024
    Present: Gene, Jim, Flemming, Yuri, Frank, Tommy, and myself
     
     
    == Hardware
    — things continue running well in general
    — lasers are still looking good 
    — gas system is still running well
     
     
    == Electronics
    — about 8% of iTPC FEEs are out right now
    — were almost at 15% until Tonko fixed many yesterday
     status since day 200 
    — STAR has much more luminosity with the crossing-angle running
    — important to keep TPC deadtime below ~20% for present trigger mix
     
     
    == Software
    — Run 22 SC
    — pass 3 finished, mostly converged except for a couple regions of fill (reasons understood, beamline constraint)
    — will do pass 4 on ~25% of the data (needs 7-10 days)
    — once TPC alignment is in can do pass 5 to evaluate impact of new alignment on SC
    — Production with TFG24c
    — test production is finished with available statistics
    — see link: 
    — several improvements observed
    — primary vertex problems
    — strange cluster(s) in old position vs. new position 
    — need to understand in the context of vertex plots shown by UCD group
    — work on TFG24c GitHub pull request approval is ongoing
    — Frank requested expedited review of the code in GitHub and several other S&C folks have become involved
     
     
    == AOB
    — Flemming will not be present in 2 weeks
    — shutdown of air handling recently due to storms but nothing observable in field cage currents
    — Yuri needs estimate of real event rate on target; BBCA? BBCE? BBCW? Something else?
    — window for timing count can actually include next crossing, so window needed to be moved
    — observes small shifts in primary vertices in X and Y (millimeters), concern about multiple interactions in target
    — probably best to check with Dan Cebra 




    TPC meeting August 15, 2024

    Present: Alexei, Jim, Gene, Yuri and Flemming

    short meeting
    -hardware
    Laser is now in good working condition after the maintenance day

    TPC electronics: the usual masking on/off

    TPC space charge calibrations
    -    Current pass 3 done within about a day
    -    Would like to repeat with new alignment
    -    If this is not ready soon, has another 12 fills

    Yuri reported a double counting error in the analysis code of the TFG production that caused the apparent gain of a factor ~5. Actual analysis with the corrected code gives about a 12% improvement in e.g. K0s reconstruction.
    See https://drupal.star.bnl.gov/STAR/system/files/TPCFXT2024Aug15.pdf

    Status of approval for the mini pull request: some comments received, and some actions to be taken. For details see the above link too.
    Once approved, need calibration (ToF and dE/dx) and a test sample to confirm proper working.

    AOB none




    TPC meeting August 8 2024
    Present: Alexei, Gene, Jim, Zachary, Flemming, Mathias, Grazyna, Yuri, Frank, Petr, Daniel, and myself
     
     
    == Hardware
    — things continue running well in general
     
    — lasers are looking good 
    — had a very good run overnight
    — gas system is running well
     
    — sector 11-8 voltage being kept at 1100 V, hasn't tripped since
     
     
    == Electronics
    — usual battle with RDO, we've lost about 7 iTPC RDOs and 4 TPX since last long access
    — each time many are fixed and then we lose some quickly
     
    — shift crew 
     
    — nothing else new to report
     
     
    == Software
    — Run 22 SC
    — calibration production about 2/3rds finished, need another week or so
     
    — Production with TFG24c
    — see slides on list from Petr
    — 50% increase in probability of finding 3pi K0 decay with TFG test production
    — increase seems to come from short tracks, high negative eta
    — need to check with cuts to exclude possibility of reconstructing same vertex with split tracks
    — maybe effect of loopers?
    — also want to check proximity of vertices within an event
    — Update on test production with new alignment (3.2 GeV FXT)
    — see slides on list and in Drupal from Mathias
    — observes what appears to be an azimuthal dependence of DCA Y shift 
    — also observes an eta dependence of dE/dx integrated over azimuth
    — also observes a dependence of amplitude, mean, and width of Gaussian fits to dE/dx momentum window as function of azimuth
    — dE/dx in test production may need some calibration
    — also a concern that observed azimuthal dependence may be affected by observed DCA Y shift
    — Status of TFG24c production 
    — see slides from Yuri at 
    — checking to ensure complete data sets have been processed (2019 and 2020 3, 4, 5, and 7 GeV FXT)
    — K0 still shows a factor 5 improvement with better signal to background and improvement in width
    — significant improvement in lambda baryons and hyper nuclei as well
      — ~factor 10 improvement in hypernuclei signal
    — study of pi- dE/dx peak further shows need for dE/dx recalibration
    — pull request has been made for TFG24c mini


    Mathias' slides from August 8th 2024 are here: https://drupal.star.bnl.gov/STAR/system/files/August82024_testProductionQA.pdf

     
    I added my analysis track cuts on slide 2. And I added the mean(mean(eta)) plot as a function of Vz on slide 21.
     
     

    TPC meeting August 1, 2024





    TPC meeting July 25, 2024
    Present: Alexei, Gene, Jim, Zachary, Flemming, Mathias, Grazyna, Yuri, and myself
     
     
    == Hardware
    — things running well in general
    — laser runs are good with the usual caveats
    — methane flow was slightly increased in response to decrease in drift velocity over the past couple weeks
    — crews likely won't see the difference for at least a few days of operations
     
     
    == Electronics
    — nothing new to report
     
     
    == Software
    — Run 22 SC
    — pass 3 has been started and is proceeding well
    — good to have calorimeter fix for this pass
    — dE/dx is turned on for this pass (no event.root files, but dE/dx is there if someone wants to look at it)
    — may want to take one more look with new TPC alignment 
     
    — Production with TFG24c
    — some delays due to farm capacity and killed jobs (~1-2 week delay)
    — have now increased number of jobs by factor of 4 by splitting files into chunks
    — so far 5-20% of the data sample has been processed; progress is slower than expected

    — laser runs
    — Yuri has prepared a list of good/failed laser runs; see his blog
     
    — Update on test production with new alignment (3.2 GeV FXT)
    — see slides sent to list by Zachary
    — several tracks in test production are assigned zero momentum
    — seems to be event-by-event fluctuation not track-by-track
    — some example events are Run 20183003, events 1264727 and 1246798
    — most events are okay and match the P24ia production, but need to understand the root of the issue
     
    == AOB
    — nothing else new reported this week.




    TPC meeting July 18, 2024
    Present: Alexei, Gene, Jim, Yuri, Zachary, Tommy, Flemming, Mathias, Prashanth, and myself
     
     
    == Hardware
    — thunderstorm yesterday, almost 3 mbar pressure spike
    — gas system behaved very well
     
     
    == Electronics
    — some power problems to RDOs as usual
    — tried to fix some yesterday; others are out for the run
    — overall, the number out at any given time is more or less stable
    — more TPX problems this year than last
    — seems most concern is about the power system
     
    == Software
    — Run 22 SC
    — starting pass 3
    — about 247 fills, but calibration only works for about 235
    — trying to use calibration from nearby fills for the ones that don't work
     
    — Starting production with TFG24c
    — some problem with long jobs (> 3 days on a file)
    — trying to increase statistics to 100 million events
    — K0 width looks better in TFG vs most recent production slides
    — may need one or two more weeks depending on speed of processing
     
    — Early QA from production with new alignment (3.2 GeV FXT) Mathias slides
    — DCAX and DCAY look good
    — double peak structure due to broken tracks is gone
    — comparison with embedding also looks very good for low rapidity (not crossing CM)
    — some difference for higher eta with CM crossing tracks, but only in DCAY
    — nHits distributions also look good between test production and embedding 
    — maybe some systematics above eta~2
    — perhaps some relation to revised formulation of TPC hit errors in TFG test production?
    — PV distribution also shows some differences between test production and P24ia
    — can look at mean eta vs PVZ between test and P24ia



    TPC meeting July 11, 2024

    Present: Alexei, Gene, Jim, Yuri, Mathias, Pavel, Daniel, Frank, and myself
     
     
    == Hardware
    — installed magnetometer on July 6th
    — data stream goes to dB
    — readings look good
    — magnet has been at half field for the past couple of days due to humidity;
                (once down 2 hours during APEX), a couple of trips (FV)
    — gas system is also okay
    — some ongoing network problems as well and a failed switch
    — maybe run with 3/4 field to reduce trips?
    — possible but will reduce momentum resolution
    — also, we have no official field map for 3/4 field (we only have a map for 1/2 field, FF only)
    — also, it is not likely that analysis groups would utilize such a data set
     
     
    == Electronics
    — problems with PROMs
    — increasing currents in some FEEs, and their parent RDOs not working properly
    — perhaps due to chips suffering radiation damage(?)
    — more problems on iTPC (inner) sectors than on TPX (outer) sectors
    — two RDOs lost last night, may not be recoverable
            Tonko will work with Tim on the next access day, and nudge the output from the power supplies, in particular those in TPX which look shaky (FV)
     
    == Software
    — Run 22 SC
    — calibration is proceeding
    — third pass starting end of this week
    — studying second pass right now
    — will have a beam line calibration as well for third pass
    — expect about 3 weeks, should be ready for TPC dE/dx sample in early August
    — Progress on FXT final data production
    — see slides at https://drupal.star.bnl.gov/STAR/system/files/TPCFXT2024.pdf
    — TPC alignment nearly finalized
    — Prepared TFG24c release
    — Adjust trigger delay based on PV reconstruction comparison with 2 different methods
    — use either hits or tracks from West or East TPC only (see slide 3)
    — in preparing TFG24c, need to split repo due to number of files reaching cvmfs upper limit
    — no access to file needed for splitting
    — instructions for using TFG24c are on slide 7 of this presentation
    — use of production cluster was refused for producing calibration samples
    — using TFG allocated space on RCF to run smaller batches to start FXT test production
    — will take about 3 weeks at current rate
    — began preparation of TFG  release for merger with official STAR (star-sw) repo
    — need to separate alignments between old and new on merger so as not to introduce confusion
    — Mathias will begin looking at produced files as they appear
    — More resources are needed to finish in 2 weeks, 100 TB and more CPU (about a factor 5 increase)
    — need a STAR management statement on these requirements
     

    TPC meeting June 27, 2024

    Present: Alexei, Jim, Frank, Dan, Yuri, Mathias, Gene and Flemming

     

    -hardware:

    The position and composition of the NMR probe have been documented by Alexei. A Drupal blog entry has been added: https://drupal.star.bnl.gov/STAR/blog/videbaks/NMR-probes.

    This will require an update to the detector description in GEANT, as it is in the active area of the BBC and EPD.

     

    Tim has ordered an additional interface so the NMR readout can be attached to slow controls.

     

    Shift crews, as usual, struggle to operate the laser well. Alexei provides hands-on training to the crew, and this helps a lot.

     

    -software

    Yuri presented the status and a detailed plan at the Tuesday management meeting for the alignment and calibration needed for the FXT data and beyond. The plan was well received by management and will form the basis for work in the coming months, including calibration tasks for BTOF, eTOF, … The presentation is available at https://drupal.star.bnl.gov/STAR/event/2024/06/25/STAR-management-meeting/TPC-Alignment2024

     

    -electronics

    Usual loss of ~1 RDO per day of running. Several have been brought back to life by Tonko when there is sufficient downtime available to diagnose issues (PROMs, power distribution).

    Fairly stable overall.

     

    -    The QA team identified data where offline cluster finding failed to read ADC files, causing memory overwrites and leaks. The issue was identified by Tonko and relates only to offline cluster finding. A pull request has been generated.

     

    -software

    Yuri is working on optimizing t0 for FXT to minimize residual track breaking, using the position of the target (200 - 200.5 cm).
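
    A minimal toy sketch of the idea (made-up numbers and a trivial stand-in for the real refit; not Yuri's actual procedure): scan trial t0 corrections and keep the one that places the reconstructed FXT vertex closest to the target window at 200-200.5 cm.

        #include <cstdio>
        #include <cmath>

        int main() {
          // Toy model: a residual t0 error dt (in us) shifts the reconstructed FXT vertex
          // by vDrift * dt along z, away from the target window centered near 200.25 cm.
          const double vDrift  = 5.55;    // cm/us, typical TPC drift velocity
          const double zTarget = 200.25;  // center of the 200-200.5 cm target window
          const double trueT0  = 0.12;    // the unknown "true" t0 error in this toy (us)

          double bestTrial = 0.0, bestDist = 1e9;
          // Scan trial corrections; with real data one would refit tracks and the vertex
          // for each trial value instead of using this one-line toy model.
          for (double trial = -0.50; trial <= 0.50 + 1e-9; trial += 0.01) {
            double vz   = zTarget + vDrift * (trueT0 - trial);  // toy reconstructed vertex
            double dist = std::fabs(vz - zTarget);
            if (dist < bestDist) { bestDist = dist; bestTrial = trial; }
          }
          std::printf("recovered t0 = %.2f us (true %.2f us)\n", bestTrial, trueT0);
          return 0;
        }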

     

    Richard T. and Jae N. are working with Gene on the 2022 space charge. The fill-by-fill correction is proceeding well.

     

     

    TPC meeting June 20, 2024 (solstice)
    Present: Frank, Jim, Yuri, Alexei, Gene, Mathias, Flemming, Tommy

    Hardware:
    Progress with the NMR probes. Could not read out through the RS232 port; it turns out the 488 port (GPIB) works, and Tim is purchasing a readout board. Further discussion will take place with Alexei, Tim, and David on how to incorporate this into the slow controls readout and data storage. Will need an interface to IP readout so it can be stored. Discussion on absolute vs. relative readout. Certainly, it is very useful for understanding time stability; much better than the current readout. The value should be calibrated against analysis, e.g. of the K0s mass. We have already learned since installation that the current readout is not a proper reference; its stability is not good.
    The nominal readout value is 0.4988(6) T, i.e. a 1.2 mT (12 gauss) deviation from the nominal 5 kG (0.5 T) field.
    After the meeting Alexei supplied information on the position:
    “There are two probes sitting on East side of TPC, Z position is about 1300mm from magnet center. So Z is -1300mm,  and X is about -60mm, Y is about. +/-70 mm.
    After this run I'll order survey service to have more accuracy. Now my estimation is about 20-30mm.”

    Laser operation by the crew is not always good. Alexei will (again) instruct the crew, as one has to refocus during data taking because the laser drifts.

    Electronics:
    Tonko fixed 4 RDOs today and had to mask out one FEE, due to power issues for the RDOs.

    Software:
    Gene
    1) Space charge calibrations for Run 22 pp are proceeding well; on pass 2. Had to redo about 10% as they had failed (overwritten files). Looks pretty good and will proceed (soon) to pass 3.
    2) Joe Nam (Temple), who had done space charge earlier, looked at some pp Run 22 data to check h+/h- for a few fills. Does see a fill-to-fill effect. Looks like a good tool to confirm the new calibrations.

    3) Gene has discussed the full production time for Run 22 with the (QCD) PWG. With the default setup it looks like >= 1 year, so there is no time for additional repeats.
    Looking for ways to improve the time: he showed that skipping ~1/2 of the iTPC rows, in a couple of different configurations, could improve reconstruction time by ~50% (a hypothetical illustration of such a row mask follows below). It has not been demonstrated (a task for the PWGs) whether this affects the physics. Cold QCD views this favorably, but other PWGs have not yet been asked.
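
    A purely hypothetical illustration of the kind of row selection discussed (not the actual STAR configuration mechanism): build a keep/skip mask over the 72 padrows that drops every other inner (iTPC) row while keeping all outer rows.

        #include <cstdio>
        #include <array>

        int main() {
          // Rows 1-40 are the iTPC inner padrows, 41-72 the outer padrows.
          std::array<bool, 73> useRow{};            // index 1..72; index 0 unused
          for (int row = 1; row <= 72; ++row) {
            bool inner  = (row <= 40);
            useRow[row] = !inner || (row % 2 == 1); // keep all outer rows, odd inner rows only
          }
          int kept = 0;
          for (int row = 1; row <= 72; ++row) if (useRow[row]) ++kept;
          std::printf("keeping %d of 72 padrows\n", kept);  // 52 of 72
          return 0;
        }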

    4) Gene had an e-mail from the dAu flow analysis group, which has observed a loss of hits near the central membrane. This differs from the FXT group's observation. It was mentioned that the flow group has always taken such inefficiencies into account (Jim). There is a possible difference between Sti and StiCA.

    5) Observation by Zach (?) from UCD that there is a difference between the expected and observed dE/dx at low pT. Documentation was requested through Mathias; possibly a presentation next week by Yuri, so he knows what to look for.


    Software II
    Yuri had nothing new to report. He is reviewing the alignment found and presented last week.

    Frank presented the current expectation that the proton cumulants are of high importance in view of the recent very successful presentation of the collider data at CPOD and the upcoming publications. DOE & BNL management have requested that a timeline be presented this FY that contains milestones, which will be followed, and that can lead to presentation of final results by QM25 and subsequent publication. Based on PWG discussions and timelines for analysis, discussion, and approval, it seems that production of the FXT energies (3.2, 3.5, 3.9, and 4.5 GeV) has to be done by the end of September. Thus, incorporating the newly obtained alignment that resolved the track breaking observed by the UC Davis group into the standard software as a first step, and confirming that its quality is sufficient for the next calibration steps, should be given high priority.
    Frank asked Yuri to prepare a presentation for next week's management meeting on the status and baseline.
    Frank reaffirmed that management would help expedite the approval to incorporate the code into GitHub, strongly recommending splitting it into several smaller, more manageable pull requests.




    TPC meeting June 13, 2024

    Present: Yuri, Alexei, Gene, Jim, Flemming, Tommy, Dan

     

    Hardware:

    Worked on the magnetometer interface. The RS232 does not seem to work; had help from David and Tim. We agreed to have the shift crew record the reading once per hour; this is now instituted. Stability seems to be of order 0.1 %%, much better than the current reading. Alexei will pursue options for readout into slow controls.

     

    Laser:

    Lasers in general are not great. The weak laser (east or west?); the crew is not tuning it well while taking data.

     

    Electronics:

    Kind of steady: losing maybe 1 RDO per day; they are brought back by Tonko when time is available (i.e. no beam).

     

    Software:

     

    pp500 Run 22 data processing is proceeding. Now on pass 2. It mostly looks good and improved, but a few fills seem not to have converged in pass 2. Will repeat those, but continue to pass 3 starting with the pass 2 results. The beamline calibration also needs to be repeated with the pass 2 calibration.

     

    Yuri has a new 2024 alignment (supersectors) and applied it to all of the Run 19-21 FXT data that showed the large CM track breaking found by the Davis group. It seems to have fixed that problem; see https://www.star.bnl.gov/~fisyak/star/Tpc/Alignment/FXT_CM_Tracks_splitting/

    This is very encouraging. Yuri will continue to look at t0 adjustments, and once that is done he would like to make a TFG production of about 100M events of the 11.5 GeV data (2020) to check the results, in particular the K0s mass dependence on phi-eta.

     

     

    TPC meeting 6/6/2024

    Present: Alexei, Flemming, Gene, Jim, Yuri, Mathias, Daniel, and myself
     
     
    == Hardware
    — magnetometer is now stable and running 
    — readings are more accurate than we used to have
    — working to establish readout and get into slow control and dB
    — Alexei will send image of readings to list for discussion
     
     
    == Electronics
    — multiple RDOs go out and are repaired weekly.
    — about 10% of iTPC out now and some TPX
    — some may be result of power distribution (iTPC vs TPX)
     
     
    == Software
    — Run 22 SC
    — calibration production finished ~222 out of 246 fills calibrated (fill-by-fill) 
    — missing barrel tower was problem for some fills in pass2 
    — pass2 expected to start today
    — pass2 needs a couple weeks, but will be first fill-by-fill SC corrected calibration for Run 22
    — pass3 will hopefully be a confirmation that pass2 is good
    — will look at h-/h+ from pass1 to pass2 to judge effectiveness of fill-by-fill calibration (an illustrative sketch follows this list)
    — Discussion of track breaking in FXT runs
    — Gene looked at differences in alignment and the effect on track breaking
    — different field orientations may show different effects
    — perhaps some evidence for alignment effects
    — Cosmics alignment is finished
    — Yuri putting together summary of cosmics alignment
    — want to use that alignment for real data production (both collider and FXT)
    — Work ongoing on track breaking at UC Davis 
    — looking also at consistency of effect from energy to energy
    — some unexpected inconsistencies from energy to energy
    — see shift in XY vertex reconstruction using broken tracks
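
    An illustrative sketch of the h-/h+ check mentioned above (invented fill numbers and counts; in a real comparison the per-fill track counts would come from the pass1 and pass2 productions): a fill-dependent ratio points to residual space charge that the fill-by-fill calibration should remove.

        #include <cstdio>
        #include <map>

        struct FillCounts { long nPos = 0; long nNeg = 0; };

        int main() {
          // Accumulate positive/negative primary-track counts per fill and print h-/h+.
          std::map<int, FillCounts> byFill;

          // Made-up entries standing in for the loop over produced track samples.
          byFill[23456] = {1050000, 1002000};
          byFill[23457] = { 980000,  949000};

          for (const auto& entry : byFill) {
            const FillCounts& c = entry.second;
            double ratio = (c.nPos > 0) ? double(c.nNeg) / double(c.nPos) : 0.0;
            std::printf("fill %d: h-/h+ = %.4f\n", entry.first, ratio);
          }
          return 0;
        }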
     
     
    == AOB
    — Some question about sPHENIX status and commissioning (affects STAR plan), but no answers
    — Hope to learn something new next week at RHIC/AGS meeting 
    — Nothing else new reported this week.




    TPC meeting 5/30/24

    Present: Alexei, Frank, Flemming, Tommy, Gene, Jim, Yuri, Mathias, Daniel, and myself
     
    == Hardware
    — tried to use outer readings of magnetometer but did not get reliable result
    — looking for way to use manual mode of magnetometer
    — discussion of whether whole TPC cylinder may have a sag (endcaps have some incline relative to each other)
    — gas system is running well
    — once per week there is a half-second gas system alarm (not understood)
     
     
    == Electronics
    — system is stable and running well
    — were up to almost 13% of iTPC channels out, but as of yesterday we're back down to only 2 RDOs out (1 is permanent)
    — occasionally see a whole TPX RDO light-up, needs to be power cycled
    — shift crew has been alerted to look for this effect and cycle the affected RDO
     
     
    == Software
    — Run 22 SC
    — calibration production finished ~3328 runs (about 12% did not produce calibratable output)
    — doing fill-by-fill; can do about 210-220 of the fills
    — expect to start pass2 at the start of next week
     
    — Discussion of track breaking in FXT runs
    — effect may be more radial than azimuthal which would preferentially affect high eta more than low eta
    — recently Mathias looked at -70 < vz < -50 in a collider run at 19.6 GeV 
    — the chosen eta range of 0.4 to 0.6 may still be too low to clearly see the effect
    — found an apparent 6 degree shift between positives and negatives in 19.6 GeV collider data
    — also an interesting gap in middle of sector for negatives but not positives in same data
    — need Run numbers to look more closely at what is happening with the detector in that time
    — lastly, track breaking does not seem to appear in embedding
    — Mathias will post slides on the list









    TPC meeting 5/23/2024


    Present: Alexei, Tommy, Gene, Jim, Yuri, Mathias, Daniel, and myself


    == Hardware
       — working with magnetometer
       — some work for MTD gas
       — pressure jumped about 2 mbar today due to thunderstorm
       — TPC gas system responded very well
       — laser system is functioning well
       — some lost connection to GUI for TPC, but gating grid appears to have been on all the time
       — some language and instructions will be added to handle this situation should it occur in the future
           — put the system in Standby (Pedestal mode)
       

    == Electronics
       — system is stable and running well
       — 4 RDOs out in the iTPC
           — 9.7, 9.8, 10.7, and 10.8


    == Software
       — Run 22 SC
           — calibration production is nearly finished, 3338 jobs total
               — multiple DAQ files from a single run
               — typical jobs take more than 3 days
               — about 50 jobs have continued for 7 days and stopped (queue limitation)
               — may need other jobs from that fill, need to check
               — move to pass2 in next week or so
       — Fixed target sets from Run 22  presentation
               — event-by-event T0
               — some flaw in calibration for some events (about 10%)
                — a zero TAC is inadvertently used
                — the TAC used comes from the west side, but only east-side TACs should be used (see the guard sketched after this list)
               — also affects about 10%
               — seems to correlate with track breaking that is seen by fixed target analyses
               — have other methods to determine T0 
               — Mathias presented work he has done looking at track breaking in fixed target data
       — Alignment work is still underway
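
       A hypothetical sketch of the guard discussed above (invented names, not the actual calibration code): take only a non-zero east-side TAC for the event-by-event T0 and flag the event otherwise, rather than silently falling through to a zero or west-side value.

           #include <cstdio>

           // Hypothetical helper: returns the TAC to use for the event T0, or flags the event.
           int chooseTacForT0(int tacEast, int tacWest, bool& usable) {
             if (tacEast > 0) { usable = true; return tacEast; }  // good east-side TAC
             usable = false;                                      // no usable TAC: mark the event
             (void)tacWest;                                       // west-side TAC deliberately not used
             return 0;
           }

           int main() {
             bool usable = false;
             int tac = chooseTacForT0(0, 1234, usable);
             std::printf("tac = %d, usable = %s\n", tac, usable ? "yes" : "no");
             return 0;
           }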
               
       
    == AOB
       — Discussion of GMT
           — Nikolai took over as detector expert after Dick passed
           — the subsystem is actually running with one of the minbias-like triggers that reads-out the TPC
               — runs at 50 Hz
           — need someone identified to check the usual online plots 
               — determine if it is taking good data
               — pursue use of the data
       
    TPC meeting 5/16/2024



    Present: Alexei, Tommy, Gene, Jim, Yuri, and myself
     
    == Hardware
    — working with magnetometer
     
    == Electronics
    — same 4 RDOs out in the iTPC
    — 8.1, 9.4, 10.3, 12.1, and 12.2
     
    == Software
    — Run 24 
    — gating grid transparency test was conducted
    — got to 5.6 kHz, ultimately 7.8 kHz
    — previous calculations show around 10 kHz should show transparency effects
    — early result shows no change in SC up to 7.8 kHz
    — cosmics with open gating grid show distortions, so 
    — Run 22 SC
    — calibration production is running, 3338 runs from Run 22 (247 fills)
    — about 30% have finished as of yesterday morning
    — so far can see different categories of runs with different SC behaviors
    — next week will calibrate fill-by-fill and prepare for 2nd pass
    — Alignment work with cosmics from last 5 years
    — trying to sort problems with Run 22 due to short
    — FF and RFF differences due to different accounting of short (too many dB entries)





    TPC meeting 05/09/2024

     

    Present: Yuri, Gene, Alexei, Tommy, Jim and Flemming

     

    Hardware: Alexei

    Working on Magnetometer readout.

    Laser status: East is weak; only one raft.

    West is OK, 2 rafts. Yuri is concerned about the East status; it has, though, been like this for several years.

     

    Methane content has been changed to make it more stable.

     

    The pressure spike in yesterday's thunderstorm was handled very well by the system.

     

    Software:

    Gene presented a couple of studies

    a) Background in the early parts of fills. Concerning due to unknown charge distributions from the backgrounds; blog: https://drupal.star.bnl.gov/STAR/blog/genevb/Background-Run-24-pp200

    b) study of cluster widths (time) for tracks

    blog entry https://drupal.star.bnl.gov/STAR/blog/genevb/Run-24-TPC-clusters-wide-time

    c) transparency study. Blog https://drupal.star.bnl.gov/STAR/blog/genevb/TPC-Gating-Grid-Transparency-study

    Concluded that at these luminosities (40 kHz BBC) there is no observable effect at a 5 kHz triggering rate.

     

    Run 22 pp space charge calibration: it is ready for production of 3,400 jobs (runs) covering 250 fills. For each run several DAQ files must be processed. Ready to go.

     


    TPC meeting April 25, 2024


    Present: Alexei, Yuri, Tommy, Gene, Flemming, Jim, and myself
     
    == Hardware
    — moved magnetometer from right side to left
    — also bought new HV power supply but need plug installed (on device)
    — also working on ToF and MTD gas system 
    — tuning methane content slowly
     
    == Electronics
    — 4 RDOs out in the iTPC
    — 12.1, 12.2, 5.1, 21.3
     
     
    == Software
    — DAQ files have been staged for run 22 pp500 SC 
    — expect to start calibration production next week when Richard T. is back
    — developing plan for investigating ion backflow in 5 kHz running
    — need to bring to trigger board
    — waiting on collider data for tuning T0
    — work on cosmics continues as well
    — if cosmics are taken during running it's important to also have lasers taken to track changes in drift velocity
     



    TPC meeting April 18, 2024

    Present: Alexei, Yuri, Tommy, Gene, Flemming, Jim, and myself
     
    == Hardware
    — Gas system is more stable now 
    — watching MTD gas as well, appears to be working now
    — now moving to working on magnet and verifying sensors
    — expecting first collisions around April 23rd-25th
    — drift velocity is roughly flat, trying to get back to ~5.55 cm/us
    — lasers every 2 hours
     
    == Electronics
    — Flemming's dead channel scan was successful
    — Tonko has list (~7 pads that have always been dead)
    — when poletips were installed there were some problems with various electronics going in and out
    — maybe the test done after the poletips are installed should be modified?
    — perhaps the test is too short
    — summarized cosmics
    — for FF have ~15 hours, 30 million triggers (Yuri sees 50 million)
    — for RFF have ~12 million triggers
    — zero field has ~9.7 million triggers (~8 million HLT)
    — fast offline processes all events in all ADC files right now (not limited to 1000)
     
    == Software
    — Run 22 pp 500 SC
    — Gene preparing staging for having enough DAQ files for each run to get SC for each run
    — in pp500 that's challenging due to the significantly higher SC (need to have enough good tracks to determine SC)
    — developing a procedure to determine how many files will be needed to get the needed number of good tracks (a toy version of this estimate is sketched after this section)
    — good procedure already developed for finding SC efficiently
    — putting the two together, we should be ready for production to find SC 
    — expecting about 1.5 weeks for each SC pass 
    — Yuri looking at cosmics that have been collected
    — also watching lasers and drift velocity
    — Tommy finished TpcRs tune-up for 2023 data
    — Yuri will include as soon as possible
    — working on fixing spread in ADC and time bucket values in simulation to match data
    — simulation currently underestimates both
    — also working on T0 offset and dE/dx differences between data and simulation
    — several parameter adjustments have produced much better agreement
    — final parameters and details available at link below
    — https://www.star.bnl.gov/~ctsang/daq_2023AuAu200_TPC23_final
    -- https://drupal.star.bnl.gov/STAR/system/files/Tpc%20simulation%20tune%20up%20for%20year%202023.pdf
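
    A toy version of the file-count estimate (made-up numbers): sample a few DAQ files from a run to get the average number of good tracks per file, then stage enough files to reach the track count required for a stable SC determination.

        #include <cstdio>
        #include <cmath>

        int main() {
          const double goodTracksNeeded  = 2.0e6;  // made-up target for a stable SC fit
          const double goodTracksPerFile = 1.5e5;  // made-up average from a small sample of files
          // Stage ceil(needed / perFile) DAQ files for this run.
          int nFiles = (int)std::ceil(goodTracksNeeded / goodTracksPerFile);
          std::printf("stage %d DAQ files for this run\n", nFiles);  // 14 with these numbers
          return 0;
        }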

    TPC meeting April 11, 2024


    Present: Alexei, Yuri, Tommy, Gene, Flemming, Jim, and myself
     
    == Hardware
    — TPC running well
    — lasers started and we have cosmic rays
    — gas system was interrupted yesterday, has been restored
    — leak has increased somewhat, started around 1.2-1.3 l/min
    — running forward full field right now
    — expecting first collisions around April 22nd
     
    == Electronics
    — running well
    — Tonko has been checking along the way
    — we have about 16 hours (wall clock) of FF data
    — Flemming will do a first pass dead channel scan
     
     
    == Software
    — some cosmic data has been run through fast offline
    — hope to have some produced soon
    — Run 22 pp 500 SC
    — Richard T. working on fill-by-fill study, preparing required software tools
    — calendar will free-up somewhat in May, April will be busy
    — jobs for voltage power supplies, magnet and lasers started
    — jobs for laser runs after a day failed
    — running time for processing laser jobs is very long (>1 day for a job)
    — usually ~10 seconds per event and about 2-3 hours for a file
    — we have big files, about 4000 events per file, but watching process
    — have cosmics from level 4 as of yesterday
    — will process as soon as we have drift velocity for this run
    — can then make statement on quality of data so far
    — Alexei will examine last day of laser runs to see if there is any issue
     
    == AOB
    — Flemming, Yuri, and Gene met with eTOF folks to talk about alignment
    — some slides at eTOF page
    — the issue may be related to the extrapolation of tracks out to the eTOF
    — helix model parameters and constant magnetic field assumptions at eTOF
    — work is ongoing, several suggestions were made for possible diagnostics to check
    — maybe O+O data, for example 
     

    TPC meeting Apr 4, 2024
    Present: Alexei, Jim, Flemming, Gene, Frank, Yuri, Yannik


    Hardware:

    Argon started Monday. Blue sheet has been delayed.
    Argon will be delivered next Monday.

    Will start flammable gases today; it did not happen earlier due to a breaker issue.
    No short currently.
    Discussed what to do if a short is found; Gene's preference is to try to fix it.
    Might help to have the laser to diagnose. Should pole tip

    Sensors for surveying motion are live and should be used when the magnetic field is changed.

    Electronics:

    Ready for the run, except that Tonko will perform pulser runs and determine gain tables and dead/hot channels once the magnet is stabilized. Maybe early tomorrow.

    Software:

    Run 22 space charge (Rich S.): did another pass over several days (4 days), ongoing, using the production system with the long queue.
    Should be able to get a fill-by-fill outcome from this pass.
    If it has to be done for the full production, that is about 20,000 runs. Preparing for this.

    Yuri is working on alignment. There is a problem getting stable results from data from different runs. Possibly weeks are needed for results; there is no firm estimate for when results will be available (time scale: weeks).
    Preparing for data taking: a cron job for the filling of tables.

    Frank brought up the point, as at the S&C meeting, that production for pp500 (Run 22) really has to start soon.

    Longer discussion on findings from the eTOF group: it seems that projections from the iTPC only match with large deviations.

    Rather than iterate the discussion here, we will have a joint discussion of these issues at next week's eTOF meeting (Apr 11, 10:00 am EDT).
    Some points brought up -
    -    Track extension is done with a helix. What is the effect of tracking in a non-homogeneous magnetic field, in particular radial changes?
    -    Some effects are of order up to 8 cm deviations; it is unlikely this can be a result of misalignment, as current alignment deviations are of order mm.
    -    Are there effects from the very long clusters in FXT data on the iTPC response?
    -    Is this also seen in collider data?
    -    Is there a difference in OO with FF vs RFF data?

    TPC meeting March 28, 2024

    Present: Alexei, Yuri, Tommy, Gene, Jim, and myself
     
    == Hardware
    — gas shifts should start this coming Tuesday
    — sensor calibration to be done tomorrow
    == Electronics
    — nothing new reported this week
     
    == Software
    — status of Run 22 SC was shown last week at collaboration meeting
    — Richard T. looking at fill-by-fill results now
    — the difference in SC now compared to the preview production is not yet within the desired range
    — in FXT the eTOF group sees interesting effects when aligning to TPC
    — looks like space charge
    — but also seen in same magnitude in several other colliding and FXT systems
    — that's usually not characteristic of space charge
    TPC meeting March 21,  2024 - cancelled due to collaboration meeting

    TPC meeting March 14,  2024


    Present: Alexei, Yuri, Tommy, Gene, Jim, Flemming, and myself
     
    == Hardware
    — power drop on Monday due to strong winds
    — power was restored ~11pm
    — TPC was okay after water restored
    — Glassman operates okay
    — front display now working correctly (after power drop)(?)
    — laser inspection today
    — no problems found
    — oxygen sensors to be installed soon
    — new power supply was ordered last summer 
    — now in transit from Germany
    — will serve as our spare
    — with additional existing Glassman will have complete backup HV system
     
    == Electronics
    — talked to Tim Tuesday
    — seem to have one remaining RDO in the TPC (needs replacement connector)
    — otherwise are done 
     
    == Software
    — continuing work on SC in Run 22
    — DCA distribution is tightening
    — so calibration is likely converging in recent iterations
    — working on pass 7
    — want to now compare fill-by-fill calibration (looks like it's needed)
    — will scale calibration runs to "production" mode
    — alignment work continuing
    — struggling to get convergence
    — Tommy's evaluation of TPCRS for 2023 dataset
    — comparison needs some tuning
    == AOB
    — Gene showed a plot of local hit coordinate (colored by drift distance) from pp500
    — long drift distance hits are more distorted (SC)
    — the r-phi cut on hits causes losses on the innermost pad rows




    TPC meeting March 7,  2024

    Present: Alexei, Yuri, Tommy, Gene, Jim, Flemming, and myself
     
    == Hardware
    — Tim (electrician) will look at Glassman as soon as he returns from vacation
    — he was able to fix it last time
    — if it cannot be fixed will move to an alternate
    — oxygen sensors to be installed later this week
     
    == Electronics
    — one RDO changed with spare
    — possibly 2 FEEs need to be switched
    — should be finished and ready for testing by end of this week
     
    == Software
    — sector 18 in run 22
    — calorimeter folks took a closer look 
    — proportion of hits lost (unmatched) in sector 18 is same as rest of calorimeter
    — problem is further up in applied cuts on EMC side
    — Run 22 SC (& GL)
    — queuing problem at SDCC resulted in production queue jobs (>3 days) being sent to wrong nodes
    — now resolved, will see Saturday morning
    — differences noticed in SC corrections between jobs run in batch vs. interactively seem to be due to rounding on different processors (AMD vs. Intel)
    — very small (last-digit differences; see the illustration after this list)
    — about 1/3 of SC jobs hit an assert in tracking code
    — the tracking code was changed in 2016 to allow skipping over "bad" nodes on a track
    — track node quality cut on position with respect to "detector" in which they are found
    — track nodes moved off the end of a padrow by the distortion corrections are numerous enough that an assert gets thrown by other parts of the tracking chain
    — immediate fix is to undo the 2016 change and throw away remaining portion of tracks when they hit a bad node
    — Yuri continuing alignment work
    — applied new survey with Alexei's corrections to data from last 5 years
    — iterating on that
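
    One way such last-digit differences can arise (an illustration only, not necessarily the exact cause in StMagUtilities): whether a multiply-add is contracted into a single fused operation, which depends on compiler flags and the CPU's FMA support, changes the rounding.

        #include <cstdio>
        #include <cmath>

        int main() {
          // Separate multiply and subtract round twice; std::fma rounds once at the end.
          // Compilers may or may not contract x*x - 1.0 into an FMA depending on the target,
          // so otherwise identical jobs can disagree in the last digits.
          double x        = 1.0 + std::ldexp(1.0, -27);  // 1 + 2^-27
          double separate = x * x - 1.0;
          double fused    = std::fma(x, x, -1.0);
          std::printf("separate = %.20e\nfused    = %.20e\ndiff = %.3e\n",
                      separate, fused, fused - separate);
          return 0;
        }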
    == AOB
    — Richard W needs input for the TPC summary talk at the collaboration meeting
    — draft list of topics will be circulated on the list 


    TPC meeting February 29, 2024


    Present: Alexei, Flemming, Tommy, Gene, Jim, and myself
     
    == Hardware
    — second oxygen sensor received
    — to be installed later this week or next
    — tomorrow west pole tip will be closed so measuring voltages
    — strange readings on front panel
    — tried using alternate (Glassman) meter to check 
    — remote cathode HV is okay, currents are okay with remote meter
    — front end problem not yet understood
    — will also check end HV
     
    == Electronics
    — waiting for eToF work to finish on east side
    — nothing in notes about sector 18 
    — otherwise nothing new reported this week.
     
    == Software
    — sector 18 does not show any evidence of an issue on the TPC side
    — Gene will circle back to the EMC folks to discuss further
    — Run 22 SC (& GL)
    — SC histograms shows no z-dependence of variations (would indicate presence of GL)
    — so, even with high luminosity, it seems hardware fix with iTPC works well
    — also, jobs running for SC corrections were running past their limit (< 3 days)
    — working to have production account run these jobs
    — getting asserts due to errant RICH scaler values (they move TPC hits beyond reasonable positions)
    — sometimes SC correction can actually move hits by significant amount however
    — result is, due to high SC in run, some hits move beyond 10 cm
    — in the meantime, temporarily raised the limit to 20 cm and will revisit
    — more troubling is there are differences between jobs run via the batch queue and jobs run manually
    — Gene is pursuing this
    — also noticed difference between optimized and unoptimized code 
    — none of these affect the overall tracking results
    — shows that the math done mostly in floats in StMagUtilities is good (no observable difference in tracking either way)
    == AOB
    — Richard W will present the TPC summary talk at the collaboration meeting
    — final list of topics is TBD

    TPC meeting February 22, 2024

    TPC meeting 2/22 2024

    Present : Jim,Yuri,Tommy, Gene and Flemming

     

    Hardware

      Last repairs of RDO (2) should be done in next two weeks; closeout approaching

      Alexei reported during week

    “I finished measurements of target displacement on TPC wheel versus actual position. For all holes target dropped vertically down due to target gravity.

     I did only inner holes on wheel. There are too  difficult to measure others.”

    And provided the numbers to Yuri. The resolution of the survey should be around 150 microns.

     

    Software

    a) Work on the space charge calibration of pp 200 for 2022 has continued. Studied 5 fills with high statistics.

    The runs are long and some jobs were killed due to 3 day limit.

    Comparisons of fill vs ZDC rate shows

    1) nice linearity with ZDC rate which is good

    2) significant difference between fills corresponding to 4-5 mm DCA.

    So seems necessary to have space charge done per fill.

     

    b) Continued looking at the matching between TPC tracks and BEMC towers. Confirmed that the phi definitions are indeed the same between the subsystems, as they should be. So the issue of reduced matching is in the area behind sector 18. Other plots (e.g. h+/h-) do not show any issues with sector 18 vs. time.

    Gene suggested looking at whether it could be a BEMC electronics issue. The matching looked slightly off.

     

    c) Discussion on track matching to eTOF, which was also the subject of an earlier meeting today. New information appeared during the meeting, and the issue may very well be a space charge issue. Have suggested that the eTOF group review the matching in FF and RFF settings for the OO system to confirm this is space charge driven.

     

     

     




    TPC meeting February 15, 2024

    Present: Alexei, Flemming, Tommy, Gene, Yuri, Iouri, and myself
     
    == Hardware
    — continuing measurement
    — now have right scale and additional magnification glasses
    — have 1 oxygen sensor and 1 was ordered this week 
     
    == Electronics
    — Nothing new reported this week.
     
    == Software
    — looked into the sector 18 issue mentioned last week
    — Gene was given a phi region, but there may be some misunderstanding 
    — upon initial examination, signed DCA is a bit messy on east, but sector 18 doesn't seem to be exceptional
    — charged ratios show some acceptance effects due to dead regions in some sectors, but not sector 18
    — sector 21 shows a strange split, maybe this is the sector that was noticed
    — Continuing work on alignment
    — looking at high voltages
    — looks like different direction of electric field in different sides of TPC (perhaps due to sag?)
    — Work by Iouri  Vassiliev on dE/dx in P23id
    — Hyperon reconstruction at 3.5 GeV in P21id and P23ie
    — Very nice results with newer calibration (P23ie)
    — see the link to the talk at the eTOF meeting, which will be posted to the TPC list separately
    == AOB
    — Need to discuss list of topics for collaboration meeting
    — Also need to decide on speaker for that talk

    TPC meeting February 8, 2024

    Present: Alexei, Flemming, Tommy, Gene, Yuri, Jim, Frank, and myself
     
    == Hardware
    — tuning procedure for survey related measurements
    — gas systems is ready
    — need to replace 2 oxygen sensors (1 in hand, 1 needs to be ordered)
    — also need to do methane sniffer calibration
     
    == Electronics
    — talked with Tim
    — had to fix 2 readout boards on east side
    — those are expected to be the last ones
    — all that should be left is testing after pole tips are closed, etc.
     
    == Software
    — Richard Thrutchley has needed to be away so SC work is currently paused
    — pp500 BEMC calibration are moving forward
    — towers outside TPC sector 18 are seeing significantly reduced matching
    — Z close to endcap sees better matching than closer to central membrane
    — probably not space charge and should not be grid leak (but...?)
    — Gene looking more closely into it
    — Continuing work on alignment
    — looking at high voltages
    — looks like different direction of electric field in different sides of TPC (perhaps due to sag?)
    — Some difference in dE/dx between P21 and P23 production 
    — helium band not easily visible for eta<-2
    — J/Psi team checked 2019 data set with new calibration (4.5 GeV FXT)
    — do not see any degradation
    — asked to repeat for 2020 data sample
     
    == AOB
    — Possible topics for collaboration meeting
    — DAQ 5k?
    — alignment work?




    TPC meeting February 1, 2024
    Present: Alexei, Flemming, Tommy, Gene, Yuri, Jim, and myself
     
    == Hardware
    — making a new target holder with flexibility to move target
    — expect to finish tomorrow
    — will resume measurements related to survey
     
     
    == Electronics
    — Tonko has restarted work now that interlock testing is done
     
    == Software
    — Continuing work on alignment
    — goal is 1.4% momentum resolution at 1 GeV (right now it is ~2%)
    — expect to need to modify supersector alignment procedure and TPC overall alignment
    — also waiting for systematics and uncertainties from survey (Alexei is working on them)

    TPC meeting  January 25 2024


    Present: Alexei, Flemming, Tommy, Gene, Yuri, Jim, and myself
     
    == Hardware
    — long test of global interlocks
    — almost finished tuning check of survey and target installation
     
     
    == Electronics
    — testing will start when things are more quiet on site (repairs and checks, etc.)
     
     
    == Software
    — TPC field cage shorts in 2022
    — external resistor was added to compensate for a short that was found
    — in 2023 the short was repaired (via cleaning) and external resistor was removed
    — dB can be updated
    — Working with Richard Thrutchley to have enough statistics for SC
    — Yuri sees ~1 mrad rotation in survey
    — needs survey uncertainties to verify
    — Alexei mentioned survey target appears to have dropped in position by as much as ~1.5 mm
    — will determine precision on that number 
    — Yuri continuing work to advance Irakli's approach to alignment


    TPC meeting 18 January 2024

    Present: Alexei, Tommy, Gene, Yuri, Jim, and myself
     
    == Hardware
    — Jan 7th-8th was large power dip
    — several parts of the ring had no power
    — STAR was without power for ~3 days
    — restoration was a challenge
    — lost Allen-Bradley interlock (1st time ever), CMOS battery had (finally) died
    — Jim Thomas managed to recover Allen-Bradley 
    — Prashanth helped get new system connected to interlock
    — also cleared flag to allow update 
    — system has been okay since last Friday
    — Cathode was ramped yesterday
    — no shorts appeared
    — Tonko will start electronics testing
    — possibly there will be an update at the Ops meeting in a couple of hours
    — Working to find uncertainty on target used for TPC position survey
    — Expecting ~1 mm but currently unknown
    — Start of run postponed until April 15th 
     
    == Electronics
    — Nothing new reported this week
     
    == Software
    — Working through issues with Richard Thrutchley having enough statistics per DAQ file for SC
    — Yuri sees some tilts in sector positions
    — needs uncertainties mentioned above to 
    — Yuri also working on 5 years of cosmics to advance Irakli's approach to alignment
    — so far only ~20% improvement in momentum resolution

    TPC meeting 1/11/2024

    Present: Jim,Yuri,Gene,Tommy, Flemming, Frank

    Hardware:
    Power, after the power dip and subsequent power loss several days ago, is slowly being restored to the WAH.
    There are severe issues with the Allen-Bradley safety system that prevent it from being brought back. Alexei and Jim are looking into this. Wayne has also been asked to bring (several) computers back to life. It looks like either the CPU or the fiber optic boxes have issues (in the gas room or on the platform).

    Data:
    Gene had looked further into the field cage current discussed last week.
    1)    The central membrane voltage changed on day 157 from 27,995 V to 28,005 V after the shutdown caused by the smoke incident and the shutdown due to the HDSD system. This change persisted to the end of the run.
    2)    A bimodal response in the currents, mainly inner, was seen during days 175-180.
    3)    Humidity caused issues (due to chilled water problems) for ~10M events on day 168. Considering marking these as bad; it is only ~1% of the total data set.

    Alignment
    Yuri discussed the results of the survey of the TPC relative to the magnet. He compared the results from the 2004 survey with the 2023 survey. In general there are small differences of < 0.1 mrad, possibly 0.2 mrad in alpha (rotation around the x axis).
    One puzzle: comparing some distance measurements in x, y, they were ~1 mm from the design drawings. See Yuri's blog entry from 1/10.

    FXT issue
    Discussion on special issues for FXT tracking.
    -    Trigger jitter
    -    Ways to possibly improve tracking for heavier masses.
    -    Yuri will pursue this further.
    TPC meeting  January 4, 2024

    Present: Tommy, Gene, Jim, Flemming, Yuri, and myself
     
    == Hardware
    — Nothing new reported, Alexei back next week
     
    == Electronics
    — Tim and Tonko started working on outstanding issues (replacing cables, fuses, etc.)
     
    == Software
    — TPC survey was done, Yuri does not yet have results he needs from the survey 
    — they were promised to him for next week
    — Yuri revisited alignment based on the studies shown by Rongrong before the holidays
    — was some T0 change that was not accounted for and exploring alignment parameters
    — would like to have alignment done for all of past 5 years
    — Tommy working on TPC RS
    — comparing DAQ 5k and DAQ 1000
    == AOB
    — Gene looking at field cages shorts from past couple years
    — spotted a sharp jump of ~15 volts in the central membrane voltage on June 8, 2023
    — decayed down by ~5 volts over a few days
    — all 4 field cage currents showed a sharp rise at the same time though the decay profile was different

    note from Jim after meeting
    June 7th evening shift was a shutdown due to the Canadian Fires.  The Fire Department was worried about particles in the air setting off their high sensitivity smoke alarms so they shut the alarm system down ... consequently all of STAR had to shut down too.  So power was off to the Hall for several hours.   Also, it appears that major work was done on the MCW system while other things were off.
     
    Perhaps, when STAR came back up, the TPC Cathode was set to a slightly different value?


     

    ^^^^^^^^^ Beginning 2024   ^^^^^^^^^^






    TPC meeting 12/21/2023
    Present: Alexei, Tommy, Gene, Jim, Flemming, Rongrong, and myself
     
    == Hardware
    — Still no leaks after repair
    — Survey was done, all RDO back in position
    — Will hear back from Yuri on survey data after New Year's 
    — Alexei checked TPC movement against previous data, no movement of TPC within ~200 um 
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — Need to double check TPC field cage shorts records from 2022 and 2023 (Gene)
    — Rongrong showed studies of hadron ratios and K0s widths
    — see https://drupal.star.bnl.gov/STAR/system/files/20231221_Ks_hratio.pdf
    — previously observed degradation of upsilon and J/psi resolution
    — no obvious answers came up during the meeting
    — some question about what to look at next to help diagnose the issue
     
    == AOB
    — see the link to Rongrong's presentation above

    TPC meeting 12/14/2023
    Present: Yuri, Alexei, Jim, Tommy, Gene and Flemming


    Hardware:


    TPC survey this week; prepared yesterday by moving RDOs. A full survey would take 5 days and will not be done. Alexei and the survey group agreed on several points on the magnet, with magnets on, to compare with the old/recent setup.

    Will need additional oxygen sensors.

    Comment from Yuri: the previous survey was incomplete. Hope that the two measurements can be combined.

    Electronics:

    After the survey, the checkout and repairs can be completed.


    Software:

    Space charge correction work slowed down due to final exams.

    Yuri checking alignment to revisit all cosmic data. No results yet.

    Tommy confirmed that (what??) DEV2 is working properly. (I missed this.)
    Also working on TpcRS.
    Issues with alpha reconstruction in the FXT environment; Tommy is looking at this.



    From Irakli regarding revisiting the cosmic data for multiple years:

    “Just wanted to say that the files that are on RCF for the old cosmics data are only MuDsts; I prepared the same list to stage daq files. Will clean up my 2021 cosmic event.root files and stage those to look at some hit-level alignment just in case.”

    AOB
    Several people will be out next week; we will have a short meeting on 12/21, but not one on 12/28.
     TPC meeting 12/07 2023

    Present: Alexei, Tommy, Gene, Jim, Flemming, Rongrong, Weizhang, Yuri, Yue, Sonoraj, and myself
     
    == Hardware
    — Monday Alexei fixed the bottle leak (using old style clamps)
    — will watch over next 3 months
    — Finished moving RDOs to make room for target for TPC survey
    — now need to ensure all holes are clean enough for survey
    — will request at today's 2 o'clock meeting that the survey be done next week
    — some beneficial adjustments made to scaffolding 
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — Run 22 SC (pp500) work is ongoing
    — some impact on vertex finding recently due to a pedestal correction
    — Yuri just finished dE/dx calibration for Run 19 AuAu200
    — checked and found no surprises
    — Now working on revisiting Irakli's alignment work with cosmics 
    — all five years with the iTPC
    — lacked some statistics for Run 19 and 20
    — recovered those and now making some progress
    — see https://drupal.star.bnl.gov/STAR/blog/fisyak/AuAu200GeV2019-dEdx-recalibration-new-model-has-finished
     
    == AOB
    — Yue Xu presented on light nuclei acceptance in Runs 21 and 23
    — links to slides will be posted to TPC list
    — requesting a 10k production, keeping event.root, with libraries SL21d and SL23e 
    — Weizhang presented photonic electrons from  AuAu 14.6 GeV using SL21c and SL23d 
    — comparison of nSigmaE (electron) between libraries
    — links to slides will be posted to the TPC list 
    You can find the slides from Yue Xu (https://drupal.star.bnl.gov/STAR/system/files/Acceptanceindffproduc_20231208.pdf) and Wei Zhang (https://drupal.star.bnl.gov/STAR/system/files/nsigmaE_Distribution.pdf) here. They posted the links in a small group discussion thread. 
     
    Yue also mentioned:
    And the expected dE/dx (Bichsel function) is:

    1. For triton:
        // Most-probable dE/dx from the Bichsel model, scaled by charge^2;
        // the argument is log10(p*|q|/m), i.e. log10(beta*gamma) of the candidate.
        dedx_expected = charge*charge*TMath::Exp(Bichsel::Instance()->GetMostProbableZM(TMath::Log10(p*TMath::Abs(charge)/mass), 1));
    2. For helium-3:
        // Empirical parameterization of the mean He3 dE/dx as a function of momentum p.
        Float_t ParfHe3Mean[4] = {29.3642, -0.983422, 0.0958638, 0.169811};
        dedx_expected = ParfHe3Mean[0]*TMath::Power(p, ParfHe3Mean[1] + ParfHe3Mean[2]*TMath::Log(p) + ParfHe3Mean[3]*TMath::Log(p)*TMath::Log(p));
     
    Best
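
    A possible way to use the expected values quoted above (a sketch only, assuming an approximately Gaussian relative dE/dx resolution of ~8%; not Yue's actual selection):

        #include "TMath.h"

        // Sketch: compare measured and expected dE/dx via z = ln(measured/expected)
        // and keep the candidate if it lies within nSigmaMax of the band.
        bool passDedxBand(double dedxMeasured, double dedxExpected,
                          double relSigma = 0.08, double nSigmaMax = 3.0) {
          double z = TMath::Log(dedxMeasured / dedxExpected);
          return TMath::Abs(z) < nSigmaMax * relSigma;
        }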

    TPC meeting 11/30/23

    Present: Yuri, Jim, Gene, Rongrong, Flemming, Irakli, Alexei, Weizhang

     

    Discussion on TPC high-pT resolution (cosmics, J/psi).

    Lead-in presentation by Rongrong (link).

    Presented results from looking at cosmics, 2011,2014 and 2018. 

    Irakli will take a look at these data, after having gotten location of these.

    Rongrong will update the presentation to include the opening angle vs. pT for J/psi, to see if this can possibly explain some of the flat dependence of resolution vs. pT.

    Will also take a look at h-/h+ at high pT, as Gene used this as one measure for QA.

     

     

    Hardware:

    Preparing for the survey: cleaning holes and adding pins in order to get a better handle on the TPC position relative to the magnet. Will bring up the request for the survey at the next ops meeting.

     

    Fixing of the recent leak is underway; spare connectors have been obtained.

     

    Electronics:

    According to Tim, they will likely wait until the hardware repairs (leak) and survey are done to finish the TPC electronics repairs.

     

    Software:

     

    Space charge correction: quite good after 3 iterations; put in the DB and will be used for the test production. Will proceed to a 4th iteration but need more statistics.

     

    Test production fails for some DAQ files (an FST issue causes a crash).

     

    Yuri is back and will go back to looking at dE/dx and alignment.

    He still has an outstanding pull request that has to be approved.

     




    TPC meeting November 16, 2023


    Present: Alexei, Tommy, Gene, Flemming, Jim, Irakli, and myself
     
    == Hardware
    — busy with ToF and MTD systems 
    — west laser down, needs some retuning
    — another water leak found at 5 o'clock on west side (switched off)
    — question about whether Tonko is done with sectors 5 and 6 in that area
    — looks like sectors 1, 4, and 7 still need some work
    — east laser has a burned spot on one of the mirrors
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — Run 22 SC (pp500) 2nd pass looked good
    — 3rd pass delayed a bit, still working on it
    — Run 19 AuAu200 
    — some interest in reproduction
    — calibration production for dE/dx should be done by end of week
    — Irakli will follow-up with Xianglei on issue of which dB table to pick-up 
    — one question is where the jobs are actually running (LBNL or SDCC @ BNL?)
    — Irakli gave Yuri all the alignment parameters, Yuri can update us when he returns
     
    == AOB
    — No meeting next week (Thanksgiving)

    Weekly TPC Meeting November 9, 2023


    Present: Alexei, Tommy, Gene, Flemming, and myself
     
    == Hardware
    — nothing new with TPC at present
    — finished ToF recovery
    — discussion with Yuri suggests survey of TPC and magnet should be redone
    — need to coordinate with Tonko and others 
    — requires moving a RDO
    — work appears to be ongoing, so need to know status
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — working on Run 22 SC calibration 
    — 2nd iteration complete
    — job processing is working
    — improved SC in 2nd calibration also reconstructed pileup better (!)
    — ended-up cutting more vertices and lost some statistics due to this
    — overall is okay though because the efficiency is better
    — jobs have also been optimized to run faster
    — expect a 3rd and perhaps 4th iteration next week
    — can then produce a preview production of pp500 data
    — also want to look for fill-by-fill effects in SC
    — right now it seems there is very little fill-by-fill variation


    Weekly TPC November 2, 2023
    Present: Alexei, Tommy, Yuri, Gene, and myself
     
    == Hardware
    — busy with ToF gas system
    — was power interruption on platform
    — Tonko continuing repairs
     
    == Electronics
    — nothing new to report this week
     
    == Software
    — work continuing on Run 22 SC
    — hope for initial pass done within next 2 weeks (will allow first pass of about 20% of Run 22 data)
    — Gene starting work on Run 23 SC
    — some interest in reprocessing Run 19 AuAu 200 (does not have latest dE/dx model)
    — would need recalibration with new dE/dx model
    — calibration run coming in next couple weeks
    — Yuri working with Irakli's alignment process
    — checking results on Run 19 - Run 22 data
    — improvement in momentum resolution looks too big at the moment
    — Tommy working on gain, T0, and flag
    — flag needs to be defined
    — dead channels/FEE should also be included in dB (only in ASCII file at present)

    Weekly TPC meeting October 26, 2023

    Present: Alexei, Tommy, Yuri, Jim, and myself
     
    == Hardware
    — busy with ToF gas system
    — still see activity from Tonko and others
    — burned VME module was replaced, others cleaned
     
    == Electronics
    — nothing new to report this week
     
    == Software
    — TPC gating grid
    — 3.85 GeV FXT data from May 2021
    — see Yuri's slides posted to meeting agenda on Drupal
    — simulation does not properly reproduce the time difference in GG opening between inner and outer sector
    — anomalous delay of ~200 ns and ~600 ns for outer and inner sectors
    — Comparison of prompt hits
    — prompt hit distribution on inner sector is different between reconstructed and MC 
    — MC shows two peaks which is unusual in itself (possibly due to cluster finder?)
    — RC shows three peaks
    — in outer sector RC distribution is about 33 ns wider than MC (beta* = 10 m perhaps)
    — Modifications in StTpcRSMaker, pull request #614
    — unknown how to reproduce the problems of the failed tests (about half)
    — need to run tests interactively with particular library (but we don't know how yet)
    — work continuing on Run 22 SC 
    — may need to start Run 23 SC soon, but currently no additional person power available
     
    == AOB
    — some desire to check alignment results with new data 
    — look at 2023 data with 10 Mevts (cosmics)
    — looked at old and new (IDL) alignment, but also used new hit errors 
    — see approximately a factor 2 improvement in resolution in RFF 
    — in addition, the number of matched pairs drops by ~2 so there is a need for supersector alignment as well




    Weekly TPC meeting October 12,  2023


    Present: Alexei, Flemming, Daniel, Mathias, Tommy, Yuri, Irakli, Gene, and myself
     
    == Hardware
    — last Monday was smoke alarm in STAR
    — smelled something from TPC FEE rack
    — found problem yesterday, was burned VME module (discussed on Star-Ops list)
    — Tonko now continuing with TPC repair (FEEs etc.)
    — replaced valves to fix leak (valve between purifier and dryer)
    — replaced dryer and started purifier recovery yesterday
     
    == Electronics
    — nothing new to report this week
     
    == Software
    — FXT TPC Embedding studies by Mathias Labonte
    — 3.2 and 7.7 GeV
    — differences noted in nHits and DCA distributions as functions of eta and pt
    — evidence of track breaking at central membrane?
    — concern about jitter in trigger being root cause
    — excess of hits in real data vs. embedding near end caps
    — not yet well understood
    — slides to be posted to the TPC list 
    — supersector alignment work from Irakli
    — still see some improvement from outer sector then inner sector alignment
    — setting supersector alignment parameters to zero seems to already give resolution < 200 um (in Y)
    — Yuri would like to repeat the K0short study with no alignment parameters (survey result only)
    — compare K0 mass for tracks from the same sector and from different sectors (see the sketch after this list)
    — reminder that for supersectors, some overall shift, overall rotation, and overall expansion are convoluted
    — new method needed for supersector alignment
    — Yuri looking at effect of different length cables on region near end cap
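    For context on the K0short check above, a minimal sketch (Python) of the comparison logic: the pi+pi- invariant mass is collected separately for candidates whose daughter tracks lie in the same TPC sector and for candidates whose daughters cross sectors; a relative shift of the K0short mass peak between the two samples would point to sector-to-sector misalignment. The data layout (momentum 3-vectors plus a per-track sector number) and the function names are hypothetical illustrations, not the actual STAR analysis code.

        import math

        M_PI = 0.13957  # charged pion mass [GeV/c^2]

        def inv_mass_pipi(p1, p2):
            """Invariant mass of a pi+ pi- candidate from two momentum 3-vectors [GeV/c]."""
            e1 = math.sqrt(p1[0]**2 + p1[1]**2 + p1[2]**2 + M_PI**2)
            e2 = math.sqrt(p2[0]**2 + p2[1]**2 + p2[2]**2 + M_PI**2)
            px, py, pz = (p1[i] + p2[i] for i in range(3))
            return math.sqrt(max((e1 + e2)**2 - (px**2 + py**2 + pz**2), 0.0))

        def split_by_sector(pairs):
            """Split pi+pi- candidates into same-sector and cross-sector mass lists.
            `pairs` holds ((p3vec, sector), (p3vec, sector)) tuples, where `sector` is
            the TPC sector contributing most hits to that track (hypothetical field)."""
            same, cross = [], []
            for (p1, s1), (p2, s2) in pairs:
                (same if s1 == s2 else cross).append(inv_mass_pipi(p1, p2))
            return same, cross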
     
    == AOB
    — Gene asks for some feedback on the level of statistics needed in a calibration sample for Mathias' studies 


    Weekly TPC meeting October 5,  2023
    Present: Alexei, Flemming, Jim, Tommy, Yuri, Gene, and myself (Richard)
     
    == Hardware
    — work on both laser finished
    — replaced lamps
    — measured power
    — switched work to gas system regeneration
    — was another global alarm today
    — already repaired once 
    — recent one (overnight last night) was a smoke alarm on second floor
    — shutdown power to platform around sectors 22-24
    — Alexei investigated this morning and could smell something that may have burned
    — also got a second opinion which concurred
    — will follow-up with appropriate contacts
    == Electronics
    — nothing new to report this week
     
    == Software
    — trying to understand some observations in simulations
    — working on FXT and new parameterization
    — needs more time
     
    == AOB
    — Flemming will contact Irakli


    Weekly TPC meeting September 28, 2023

    Present: Alexei, Flemming, Jim, Tommy, Yuri, Gene, and Richard 
     
    == Hardware
    — working on west laser
    — some trouble with piezo driver
    — found a way to make a better alignment tool
    — began work on implementing it
    — hope to finish west laser this week
    == Electronics
    — reply from Tim that they need about 1 more month
    — need to fix fuses on many (~7)  iRDO
     
    == Software
    — working on FXT and new parameterization
    — needs more time
     
    == AOB
    — perhaps we could hear soon from Irakli about any progress
    — understand there is some problem with iterations
    — Flemming will contact Irakli
    — Yuri sent some images of ALICE Pb+Pb events to TPC list 
           




    Weekly TPC meeting September 21, 2023

    Present: Alexei, Flemming, Jim, Tommy, Gene, and myself (Richard)
     
    == Hardware
    — working on west laser
    — PCB was changed but didn't work properly
    — can use board with external relay (but relay failed)
    — have driver to run UV crystal
    — will finish alignment this week
    == Electronics
    — Tim going through list from Tonko
    — a couple power cables have been fixed
    — still many fuses on FEEs that must be fixed
     
    == Software
    — continuing to meet with Richard to work on SC calibrations
    — Run 21 FXT dE/dx calibration is done
    — see https://drupal.star.bnl.gov/STAR/blog/fisyak/Runs-XXI-fixed-Target-dEdx-calibration-has-been-done
     
    == AOB
    — Jim's thoughts on sector alignment to beamline
    — used to assume the sectors were rigid (solid) bodies
    — actually need 5 parameters, 2 additional to account for stretches of the pad plane
    — for iTPC, Jim accounted for all 5
    — perhaps we should be doing the same 5 parameter procedure for alignment work?
    — allow stretching for old outer sectors as well
                    — additional info from Jim: drupal.star.bnl.gov/STAR/system/files/TPC%20padplane%20surveydocx.docx (attached docx)

    Weekly TPC meeting September 14, 2023

    Present: Alexei, Yuri, Flemming, Jim, Tommy, Gene, and myself
     
    == Hardware
    — have received a couple additional 6-packs (have 4 full and 1 ~30% now)
    — this is about 90 days of supply
    — moved 1 6-pack to front of building, sitting on concrete
    — may need a roof of some kind
    — measurements of TPC supports are complete
    — TPC cannot move in north and west direction  
    — can move only in south direction 
    — will present next week
    — west laser failed again
    — have 2 spare boards, will check with those
    == Electronics
    — a couple of boards pulled (sector 4 for example)
    — some contacts corroded on RDOs (due to water leak)
    — question about corrosion on pad plane itself
     
    == Software
    — working on SC calibration
    — may do an initial calibration and continue working if there is immediate interest
    — Yuri sent a link on (i)TPC measurement systematics
                https://drupal.star.bnl.gov/STAR/system/files/Revision%20Tpc%20sytematics%20and%20errors_1.pdf
    — see slides for conclusions and plans
    — 2D parameterization seems to be insufficient, want to try 4D parameterization
    — Gene mentioned there is some priority on FXT productions including 3p85 from Run 21
    — may need to account for any new parameterization after those productions
     
    == AOB
    — Yuri out next week 

    TPC weekly meeting 9/7/2023

    Present: Alexei, Jim, Gene, Yuri & Flemming

    Hardware:
    Alexei working on tuning lasers, tuning intensities, drivers on west side.
    Revisiting CCD camera pictures taken at various times, with different field configs. In total about 8-9. Will present findings next week.
    Yuri and Alexei inspected TPC and supports. 4-point setup for each; 1 point fixed (for each), the others can expand/move.

    Got delivery of methane, but in a 12-pack without wheels. Not useful, will be returned. Attempting to get enough inventory on site for next years' (24-25) running.

    Yuri presented findings of analysis of RF and FF cosmics
    Presentations at https://drupal.star.bnl.gov/STAR/system/files/Cosmics_2023_RF_FF_0.pdf
    Main findings:
    There are offsets in momentum and dcaXY.
    No explanation yet. Alignment?
    Discussion raised the issue of whether another survey of TPC+magnet is needed. Yuri pointed out the previous survey was lacking a good magnet survey.
    Yuri is also working on FXT TPC response. Sees a difference in vtxZ for east vs. west tracks, and FXT hit errors different from collider data. Needs to be revisited.
    recent minutes mainly written by Richard Witt



    August 31, 2023

    Present: Alexei, Yuri, Flemming, Gene, and myself (Richard)
     
    == Hardware
    — both poletips have been opened as of Friday
    — Tonko has started working on electronics (coordinating with Alexei's work)
    — replaced laser lamps and tuning
    — diagnosing west laser
    — may even replace trigger board
    — gas system 
    — hope to be allowed to have 2 or 3 6-packs of gas on pad
    — will have an additional concrete pad with additional 6-packs near STAR as well
    — will continue laser work through this week and next
    — will then begin to recover gas system
    — Alexei has begun measurements of each support of TPC
    — will show Yuri the measurement process used today (4 pm)
    == Electronics
    — nothing new this week
     
    == Software
    — FXT 2021 calibration data are available 
    — started 14.5 GeV data production from 2019 data (with latest dE/dx)
    — Jeff downloaded cosmic data from HLT farm for Yuri
    — have ~3M matched cosmic pairs, ~5M RFF.
    — comparison of FF and RFF started
    — have ADC files for cluster finder comparison
    — some apparent rejection in online when cluster touches edge
    — dE/dx in P23id
    — Yuri has put pages documenting dE/dx production quality in production versions
    — noticed a splitting in proton (and heavier) bands in newer P23id production
    — thought to be related to ADC separation which is different between iTPC and TPX
    — needs further investigation 


    August 24, 2023
    Present: Alexei, Yuri, Flemming, Gene, Jim, Tommy, and myself
     
    == Hardware
    — will clean gas system 
    — prepare for recovery of purifier
    — lasers working well as of yesterday
    — will check sensors tomorrow with Yuri and discuss measurement of TPC movement during field flip
    — retreat was yesterday
    — information on gas limits was forwarded to CAD
    — cabinets were cleared of old stock
    — questions on limits were sent to CMS (Chemical Management System)
    — limit was returned as 3,000 ft^3 (6-pack alone is 2,000 ft^3)
    — looking for ways to redistribute supply so as to come within limits
     
    == Electronics
    — presentations (Flemming and Tonko) were circulated and placed on the agenda page link
    — priority for next Run is to fix things and make small optimization changes, nothing major
    — Gene mentioned we should also look at a test of the system at 5 kHz (limited this year due to run stop)
     
     
    == Software
    — Calibration production with larger vertex window is nearly done (about 5% left)
    — still pushing on SC in Run 22 
    — may be some interest in preproduction of Run 22 data
    — might need an early SC calibration to make that happen
    — 2023 data 
    — offline vs online comparison of cluster finders should be repeated, should not be any change
    — want to check quality of cluster finder in 2023 data
    — want to revisit alignment using cosmics, but cannot currently access data
    — last three days in particular
    — HPSS blocked by production and HLT farm down(?)
    — request in carousel appears to be blocked (sitting for 2 weeks)
    — Looking at simulations (PHQMD model)
    — difference in slewing between east and west in FXT
    — cluster sizes and shapes are different in east vs west
    — for FXT a different correction will be needed in east than in west
    — dE/dx in FXT vs collider mode
    — some strange behavior noted in some data samples
    — trying to finish combined PID and can compare with different productions
     
    == AOB
    — Was mentioned at retreat, after Run 25, all the computing systems in the counting area will be de-commissioned (HLT farm, etc.)
        see Jerome's contribution to the retreat (not posted yet)
    — Cool-down will be Jan. 8th
    — starting species is still undetermined (pp, AuAu, ?)
    — PAC meeting Sept. 11-12th


    August 17, 2023
    Present: Alexei, Yuri, Flemming, Gene, Jim, and myself
     
    == Hardware
    — problem about gas from chemical management system
    — exceeded our methane limit for building 1006
    — we have 10,000 ft^3, limit is apparently 3,000 ft^3 (1 6-pack is 2,000 ft^3)
    — will talk to safety folks
    — perhaps limit is related to building, not gas pad?
    — two issues, accounting and amounts we are allowed to use and store according to new limits
    — Alexei will contact new safety liaison to pursue issue and solution
    == Electronics
    — Operations retreat next week
    — Tonko will give update on status and plans 
    — Alexei mentioned operation meeting coming after this meeting
    — plans will be discussed for next week
    — plans for next cooldown are early January
    — 28 weeks for Run 24 + 6 weeks remaining from Run 23
    — Jim mentioned if Run 24 is going to go into July next year, we should develop plans for orderly weather stand-downs of STAR 
     
    == Software
    — Opened vertex window from -200 cm to +225 cm 
    — production submitted
    — will have event.root, picoDSTs, and muDSTs
    — pushing on Run 22 calibrations
    — dE/dx calibration for dAu200_2021 and 17p3GeV_2021 have been done, put in MySQL, and checked.
    — Yuri is still working on fixed target central membrane track splitting and combined PID with new dE/dx.


    August 10, 2023
    Present: Alexei, Yuri, Flemming, Gene, Jim, Daniel, Tommy, and myself
     
    == Hardware
    — Last weekend ran cosmics
    — TPC circulation was stopped Monday
    — Argon was run for two days and yesterday switched to "summer flow"
    — West laser was inspected and has started working again
    — replaced a couple of blown fuses
    — laser stopped again, but has since returned
    — Alexei will start investigating with spare trigger boards
    — No official information on start of next Run (24)
    — will be at least 18 weeks away
    — next run should be 20+6 weeks total
    — Discussion about repair of TPC 
    — expect a timeframe of September to October
    == Electronics
    — Report on cosmics recorded was sent to the list (copied below)
    — The cosmic FF runs were very successful. Some runs were lost due to west trim coil trips
    — one went unnoticed for several hours, and one happened during the night with slower response for help.
    — In all we took good data for 34 hours since Friday with
    — 16.88 M cosmic1-cosmic3 triggers and 50.7 M cosmic wide triggers
    — including the first sets of triggers.
    — This is about 75% more events than Yuri requested for these data sets.
     
    == Software
    — FXT 2021 3.85 GeV had higher beam intensity than any other energies
    — requires attention on SC 
    — Yuhang did simulation with Gene of what SC should be 
    — results were used and included in dB
    — T0 fields are empty in the dB for this dataset
    — Gene uses prompt hits in his method to determine e-by-e T0
    — does not correct for the mean T0
    — Yuri says there is a T0 offset (jitter) of about 14 ns (1/2 time bucket)
    — only seen in 2019 3 GeV data
    — but need to check other energies
    — need to confirm that trigger decision time has not changed
    — Yuri needs ~10k — 100k events and disk space for studies of T0 and vertex resolution issues
    — please remove any data from TPC spaces that you no longer need
    — Daniel says a preview production of FXT 3.85 GeV would be interesting and useful
    — Yuri supports a preview production as well and a dE/dx calibration production
    — important to have all primary vertices there and all tracks from pileup vertices as well
    — Gene mentioned that the T0 calibration usually done may not be useful for zero bias data
    — d+Au SC calibration
    — calibration was done for dE/dx and ToF
    — day 180-188 is the window
    — days 180-182 had collider trying to introduce beam crossing, but introduced high background
    — distortions were symmetric between east and west (but should not have been)
    — more calibrations work to be done for days 180-182 (see mass^2 splitting in BToF for example)
    — days 180-182 account for about 20% of the set, may want to skip that part at least for now
    — Yuri on 17.3 Au+Au and 2021 d+Au 
    — made first pass over data
    — dAu has slightly worse dE/dx resolution 
    — would like a few more days to update values currently in dB
    — Gene mentioned 17.3 GeV is higher priority if need to choose 

    August 3, 2023
    Present: Alexei, Tommy, Yuri, Gene, Jim, Zachary, and myself
     
    == Hardware
    — West laser function was lost and is being repaired
    — electrical problem, not mechanical
    — polarity switch is ongoing (started today at 9 am)
    — position of TPC was noted yesterday
    — also trying to fix temperature in DAQ room (still 90 deg.)
    — several systems currently off due to high temp
    — have ~10 days methane supply remaining
    — decision on whether to resume run should be made tomorrow
    — can decide on what to do about low methane supply then
    — current consumption is about 1.5 l/min
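    As a rough cross-check of the supply numbers quoted in these minutes (a back-of-the-envelope sketch with an assumed unit conversion and constant consumption, not an official estimate), the ~2,000 ft^3 quoted per 6-pack and the ~1.5 l/min consumption rate translate into roughly the ~20-26 days per 6-pack mentioned elsewhere in these notes:

        # Rough cross-check of methane supply numbers (assumed values from the minutes).
        FT3_TO_LITRES = 28.317          # litres per cubic foot
        SIX_PACK_FT3 = 2000.0           # volume quoted per 6-pack
        FLOW_LITRES_PER_MIN = 1.5       # quoted consumption rate

        litres_per_six_pack = SIX_PACK_FT3 * FT3_TO_LITRES      # about 56,600 litres
        litres_per_day = FLOW_LITRES_PER_MIN * 60 * 24          # 2,160 litres/day
        print(f"~{litres_per_six_pack / litres_per_day:.0f} days per 6-pack")  # about 26 days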
    == Electronics
    — Flemming sent an update to the list separately
    == Software
    — Zachary looking at dE/dx in FXT with new calibration production
    — in FXT from 2019 and 2020 there is a ~2 sigma offset for the proton band
    — in all energies but 7.2 GeV (which had not yet been produced)
    — new calibration is closer but still ~1 sigma off (band is above expected position)
    — has a recalibrated set, using Bichsel fits to proton band, up to 5.2 GeV
    — ToF uses dE/dx identified protons and pions for finding common T0
    — above reported offset affects that algorithm due to magnitude of offset
    — implemented a T0 "afterburner" with recalibrated nsigma
    — demonstrates recovered identified protons with recalibrated method
    — will look at effect in new P23id production
    — Bao Xin working on SC calibration for Run23
    — all working, now using most recent data
    — can ask him to document his procedures for the next person who works on SC
    — Richard T was at BNL for AGS/RHIC meeting, met with Gene, and is working on Run23 SC
    — Yuri was blocked (job submission) since Monday
    — still exploring split in nHitsFit near membrane in 3.85 GeV FXT
    — also working on jitter in FXT and anomalous temp and pressure dependence of dE/dx in 17.3 GeV
    — then will finish combined PID

    July 26, 2023
    Present: Alexei, Flemming, Tommy, Yuri, Gene, Jim, and myself
     
    == Hardware
    — TPC systems are stable and working well despite temperature fluctuations in control room, etc.
    — currently on last supply of gas bottles
    — order has been in since February, delivery is the problem
    — remaining gas may last about 17 days
    — backup plan is to find single bottles as needed if current supply runs out 
    — Alexei says there are no single bottles of methane on site at BNL
    — Received all parts to connect magnetometer to the system
    — will start to monitor field when returns from vacation
    — Gene checked humidity from dB
    — getting up above 40 most days
    — unusual spike in humidity that corresponds to temperature spike
     
    == Electronics
    — system continues to perform well
    — temperature in the DAQ room is quite high
    — some CPUs are 20 years old and may not shutdown automatically if they overheat
    == Software
    — conversing with ops folks on a test for gating grid during high luminosity running
    — likely won't happen this week, sPHENIX not ready for high lumi
    — maybe next week, but Jeff is gone then
    — Gene was asked question about abort gap cleaning
    — causes large background dumps into STAR
    — trade-off is beam quality degrades over fill
    — alternative is to do cleaning every second
    — causes a nice sawtooth pattern of SC in TPC every second
    — problematic for low energy running
    — Gene looked at cleaning test this past week
    — could not see anything above ~0.2 mm distortion as a result
    — if beam intensity is increased will need to re-check, but okay for now
    — SC calibrations
    — Work on Runs 22 and 23
    — Temple group on Run 22
    — Bao Xin on Run 23, nearly complete with 1st calibration, maybe done next week
    — Calibration productions for AuAu 17.3 and dAu 200 are done
    — Embedding seems to have issue with respect to real data in 2019
    — nothing to be done for real data
    — production was started with no change to e-by-e T0
    — Yuri put presentation on meeting agenda
    — Yuanjing Ji noticed splitting in nHitsFit near central membrane in FXT 3.85 in embedding
    — Yuri found same effect in real data for tracks associated with best primary vertex
    — seems not to depend on eta
    — 2nd problem noted by Tommy, broadening of vertex difference distribution using tracks only from east or only from west
    — could be related to increase in beta* (~1m for collider mode, ~10m for fixed target)
    — 3rd problem is the need to revisit primary vertex ranking for FXT data
    — for example, does not use information about tracks with prompt hits or pileup hits, etc.
    — also need to consider effect on combinatoric background for secondary vertices
    — it seems that the PVZ cut is not sufficient to eliminate this problem
    — plan is to introduce -35 ns offset in trigger, get estimate of target Z and use Z_w from KFCS to control jitter
    — will also ask S&C to optimize primary vertex ranking for fixed target and remove PVZ cut
    — should not interfere with QM2023


    July 19, 2023
    Present: Alexei, Flemming, Tommy, Yuri, Gene, Jim, and myself
     
    == Hardware
    — nothing new to report today
    — only concern is about methane delivery 
    — have about 30 days
     
    == Electronics
    — system continues to perform well
    — test was done with TPC at 70 kHz (ZDC rate) for rare trigger program
    — performed well
    — no more problems than usual
     
    == Software
    — dAu and AuAu 17.3 SC calibrations are done (Run 21)
    — 17.3 calibrations took only a day
    — all information has been uploaded to dB
    — large calibration productions have been produced from each of those datasets
    — TPC dE/dx can now go forward
    — no activity in SC from Run 22 and 23
    — high luminosity coming, should check for GG leak since have best sensitivity there
    — will be looking for changes to distortions due to different GG rate
    — Event-by-event T0 in FXT data
    — PWG has seen track breaking across central membrane in FXT data
    — concern that may be T0 (but have not ruled out drift velocity)
    — Gene looked at prompt hits (sent findings to list yesterday)
    — only ~20% of events Irakli looked at may have issues (due to side of EPD used or 0 timing from chosen side)
    — actual impact on dataset from that 20% is not easy to see
    — Tommy suggested using simulations to determine expected effect
    — Gene asked for a link to the presentation that originally reported the issue but we do not know which one
    — dE/dx for FXT in Runs 19 and 20 is done and entered in dB
     
     
    == AOB
    — Richard is being added as a reviewer for TPC code

    June 29, 2023
    TPC meeting June 29, 2023
    Present: Gene, Jim, Yuri, Tommy, Alexei, Flemming

    Hardware-
    Summer is here. TPC handles pressure variations during storms well.
    Laser ok. Vdrift dropping to 5.42. There is gas in the current six-pack for another 5 days.
    Alexei on staycation, and will follow the gas situation closely.

    The quote for the new cathode HV supply was finally pinned down. BNL wanted a US firm, but we ended up having Wiener approved for this purchase. 28 weeks delivery.
    Discussion on connectors for the Wiener being different from the Glassman, and how to implement them. Alexei will consider and discuss with Tim C. how to do this (for next run).

    Question on gating grid performance. No issues with the device. Gene would still like to take several runs
    with the same beam condition and rates from 1 to 5 Hz. Will come with the request again, as it was not well received at the 10 o'clock meeting.
    Suggestion in retrospect: send the request to the ops list and outline the plan; indicate it was already discussed at the trigger board.


    Electronics
    DAQ5K is now operational, with all of the TPX and iTPC firmware updated. Some TPX RDOs have issues at high rate, likely related to electronics components. Some FEEs have been masked out.
    Some tuning still to be done for the iTPC part. MB-zdc triggered data runs at ~4.2 kHz, and is optimal at 13 kHz input trigger rate and ~50% dead time.

    Software

    Yuri discussed work by him and Tommy regarding the RFF vs FF data.
    See https://drupal.star.bnl.gov/STAR/system/files/OO200GeV2021RFvsFF_2023ReV_1.pdf
    for details. The main point is that only Run 21 O+O FF data shows issues. Implemented a not-yet-understood fix of an additional 1 mrad rotation for Run 21 FF running.
    Not in agreement with measurements taken before this year's run. Will have to be revisited with measurements at the end of Run 23 (September): FF field cosmics and measurements.

    Yuri also showed results from dE/dx calibrations of the 7.7 and 19.6 GeV data. Overall looks good, but small differences when looking at decays of phi and lambdas.
    See https://drupal.star.bnl.gov/STAR/system/files/RunXIX_XXFXT.pdf
    for details.

    Yuri also pointed out that the new cluster finders and pad response have to be investigated. Important for embedding.

    Some progress on space charge (Gene).
    Three prongs:
    1. Looking at dAu data, symmetric (background) vs asymmetric (dAu collisions)
    2. pp500 Temple work; identifying runs, training new person
    3. Initial Run 23 SC, in particular for HLT online.


    Have a good July 4th.

    June 22, 2023

    Minute TPC meeting June 22, 2023

    Present:

    Jim, Gene, Alexei, Tommy, Flemming, Yuri

     

    Hardware

    --quote for the cathode has been requested for a 60 kV supply, 14 k$ and lead time 28 weeks.

    This is what Tom Hemmick is using for sPHENIX, and Alexei decided to pick the same after discussion with Tom.

    - gas impurity showed up again with the most recent exchange of the 6-pack and the vDrift is down to

    The same as what happened with the first 6-pack from the new supplier.

    Methane left for 12 days. Note after meeting: Prashanth said today (Friday) that he expects delivery today of 2 six-packs.

    - Could it be possible to have a sample bottle, and check the gas composition (chromatography?), e.g. in Chemistry? (FV). Alexei will follow up.

     

    Electronics –

    3 more sectors (4-6) now with DAQ5K. Some readout issue with ADCs for sector 4-1, clusters ok.

     

    Software

    - pull requests that had been stalled (some of Yuri's) have been approved

     

     Finished up calibration of the fixed-target dataset. Looks good. Will need a couple more days for checking. Should be done by Monday.

    The 19.6 GeV is in the database. Gene will start production. 7.7 GeV needs an update due to a minor correction (a few %).

     

    Working on combined PID.

     

    Yuri had asked Tommy to look at the RFF and FF (full field) production data going back to 2010 and look at the quantity

    “dY in SCS versus sector and Z zx projection”. 

    Summary plots as well as a summary table showing clear modulation for FF data are in https://www.star.bnl.gov/~ctsang/Alignment/dYS/

    Tommy’s table summarizing this is in

    https://www.star.bnl.gov/~ctsang/Alignment/dYS/summary.html

     

     

    The summary from Tommy is

    Significant sinusoidal modulation in cluster-minus-track residuals vs sector is only found in full field results in 2021. It is not seen in reversed full field or old full field results. Therefore a special rotation should be applied to FF data in 2021.

     

     

    Recall that this was not supported by Alexei's measurements. Suggestion is to introduce a rotation for Run 21.

     

     

     


     




    June 15, 2023
    Present: Alexei, Flemming, Tommy, Yuri, Gene, David, and myself
     
    == Hardware
    — water leak at blower
    — problem with cathode HV supply
    — discussed with sPHENIX
    — Alexei investigating and getting quote for power supply
    — long delay in production (~20 weeks)
    — looking into use of alternative PS from test stand
    — some concern about absence of good laser runs
    — were not taken due to water leaks in magnet cooling system (under repair currently)
    — expect will be better laser runs soon 
     
     
    == Electronics
    — good progress
    — TPX FPGA code is mostly done
    — about 3 RDOs done in sector 1, started moving through sector 2
    — may be some issue with timing 
    — but new code is running on a number of RDOs in system
    — online plots are filling correctly
    — hope to test much larger part of system next week
     
    == Software
    — Gene noted the volume of zero field data and its utility in calibrations and alignment
    — discussed work with Temple 
    — students Charlie Clark and Richard Thrutchley will be working
    — SC for run 23, usually done by HLT group, new person this year, Baoshan
    — Gene putting Baoshan in touch with Babu
    — all 2019 and 2020 FXT runs done for dE/dx calibration 
    — Yuri working on dE/dx FXT 2019 and 2020
    — FF vs RFF for 2021 is still a puzzle since no TPC movement has been detected
    — O+O and 7.7 Au+Au were taken with FF
    — other data after RFF change do not show same distortion
    — 1 mrad TPC rotation shown by survey, but Yuri does not see that in his work
    — only for 2021 FF does TPC appear to be rotated
    — old survey position works before and after that period
    — Irakli increased statistics by factor ~5
    — Yuri working on combined PID in O+O data
    — now trying to understand ToF
    — want to better understand how ToF resolution is calculated
    — Yuri looking into RDO activity map
    — 2020 and 2021 data
    — need confirmation from 2 independent experts 
    — currently only have one such expert
    — expect strong effect on embedding for 2020/21
    — New cluster finder 
    — after FPGA work is done, need to check what is produced offline and online (compare)
    — expecting new drift velocity today
    — Alexei says can check data from last night and this morning for new drift velocity
     
     
    == AOB
    — Request from Frank to present status of several TPC topics in next week's S&C meeting
    — Richard will call to discuss with Frank and update the group
    — Yuri also needs some help from Tommy to run over as much data as possible from previous years for FF and RFF comparison
    — years all the way back to 2013, etc.
    — want to understand if the problem seen in O+O can be found in earlier data


    June 8, 2023
    Present: Alexei, Flemming, Tommy, Yuri, Gene, and myself
     
    == Hardware
    — two shutdowns due to water maintenance
    — some smoke spotted
    — STAR was shutdown as precaution 
    — coming back in next couple hours
     
    == Electronics
    — about 5 RDOs masked out for various reasons
    — speed upgrade is moving forward
    — TPC readout scheme issue understood
    — will take out 1 RDO for tests
    — confirm new FPGA code
    — full test maybe next week
     
    == Software
    — Gene on SC for d+Au 200
    — https://drupal.star.bnl.gov/STAR/blog/Geneva/Investigating-symmetry-Run-21-dAu200-SpaceCharge
    — appears to be some additional component to SC from Au beam impacting the 5 cm -> 3 cm flange near z=400 cm due to high crossing angle
    — some concerns about blue beam background in Run 23
    — similar feature in "S" plots, but in upper left rather than lower right
    — much smaller contribution overall
    — perhaps a result of crossing angle
    — Gene also mentioned PWGs are interested in dE/dx from 4.59 FXT data
    — need some feedback on amount required for calibration 
    — 7.3 GeV dE/dx sample (~300k evts)
    — approximately 1% improvement from new work relative to what's in dB now
    — Yuri says better to do all runs together in one calibration pass rather than run by run
    — work on O+O is still in progress

    May 31, 2023

    Present: Alexei, Jim, Flemming, Tommy, Prashanth, Yuri, Gene, Irakli, and myself
     
    == Hardware
    — ready to measure magnetic field
    — cause of drop in drift velocity is still not certain
    — new gas remains primary suspicion
    — expecting some leak in methane
    — Alexei is watching
    — access yesterday
    — restored driver for laser, both now operational 
    == Electronics
    — no significant update this week
    — things are going well
    — watching for persistent effects in QA
    — Yuri asks for a final date when TPC is completely upgraded
    — no solid date, but group expectation is a couple weeks
    == Software
    — dE/dx calibration production for 19.6 and 14.5 was done 
    — will also be requested
    — 200 GeV d+Au space charge calibration is being examined
    — was symmetric which was puzzling
    — perhaps a large part of SC is not coming from collisions
    — meeting with Temple group later this week 
    — discuss Run 22 500 GeV p+p SC calibration
    — Yuri reported on check of dE/dx calibration for SL23c 19.6 and 14.5 AuAu
    — resolutions are same so issue considered closed
    — Yuri also reported on look at Run23 
    — anomalous drift velocity drop began after methane bottle was changed 
    — extended from May 24th to May 31st
    — DV began to recover after bottle was changed again on May 30th
    — no severe effect observed on dE/dx (effect is ~10%)
    — some offset observed in z direction (need start time adjustment)
    — some offset (~0.6 cm) observed between east and west with opposite sign charges
    — expectation is it's due to space charge
    — K0short is slightly heavier than expected 
    — Yuri has posted slides shown above to the list
    — Irakli reported on supersector alignment
    — anomalous shift in local y (radial) as iterations appear to be converging in x and z
    — small group of closely clustered sectors
    — Irakli will post a link to the list with current results

    May 24, 2023

    Present: Alexei, Jim, Flemming, Tommy, Yuri, Gene, and myself (Richard)
     
    == Hardware
    — some anode trips
    — some problem with triggers
    — currently running 6x6 bunches
    — yesterday was good test for TPC 
    — ~7 mbar jump due to front
    — laser plot (drift velocity) dropped about 1% this morning
    — suspicion of new gas quality?
    — still watching, has not bottomed out yet
    — no motion of TPC was observed during polarity flip last week
    — some discussion with magnet experts
    — maximum current ripple occurs on ramp-up
    — trip goes straight through diode to zero, no ripple
    — conclusion is magnet trips cannot be the cause of TPC movement
    == Electronics
    — ASIC Thresholds
    — request from Tonko to increase TPX ASIC thresholds from 3 to 4
    — intended to reduce data volume to facilitate speed increase
    — originally set threshold at 3 during BES since there was no data volume limitation
    — Yuri does not anticipate a problem with change to 4
    — needs to redo dE/dx from scratch in any event due to new cluster finder
    — some concern raised regarding impact of new threshold on low pt electrons
    — thought is we will actually have better ID on electrons than before 
    — will still require some work, Yuri is watching low pt electrons closely
    — have better understanding of dE/dx than 5 years ago
    — proposal to collect 2 million events with old threshold (3) at some point  (for comparison)
    — group agrees with changing threshold to 4
    — Tonko is also improving FPGA code
    — progress slowed due to downtime for water leaks
    — ran successful test on sector 1 only, easily 5 kHz
    — currently around 2.3 kHz for TPX
    == Software
    — Gene reports that there are currently many error and warning messages in the log (from Tonko's code)
    — not urgent problem, just a note
    — Tommy's script is setup running automatic QA work
    — Yuri checked calibration sample for 14.5 and 19 GeV Au+Au 
    — looked at new dE/dx model
    — ran 2 versions dev2 and SL23c, both are consistent
    — no problem in 14.5 GeV, 2% shift in 19 GeV
    — could just adjust by 2% and update calibration, could be done by Monday
    — Yuri will send link to study plots
    — few days to decide whether to keep as it is or update, by Monday
    — Yuri still working on combined particle identification in O+O data (new dE/dx model)
    — using information from TPC and ToF
    — dE/dx needs new corrections for this, spline correction

    May 11, 2023
    Present: Alexei, Jim, Flemming, Tommy, Yuri, and myself (Richard)
     
    == Hardware
    — water leak was fixed
    — monitored 3D sensor during polarity flip this week
    — no motion of TPC observed, somewhat surprising
    —  will continue to work with sensors to monitor any movement
    — Jeff and Tonko working on DAQ
    — some questions about magnet trips and magnet ramping procedure 
     
    == Electronics
    — did not see data in outer TPX
    — was error in new data structures
    — event builders did not find proper data structure to fill
    — is now fixed in online but not propagated to offline yet, request submitted
    — most cosmics data taken so far has tracks only in half of the TPC (about 5 million triggers)
    — due to problems with ToF
    — Yuri will need to evaluate these for usability
    == Software
    — some questions about combined PID and dE/dx tuning relative to ToF
    — some deviation from model
    — different deviation for pions coming from lambdas vs. K0short
    — trying to understand this difference
    — for >1 GeV momentum
    — Tommy delivered QA script to Gene
    — waiting for response when Gene returns

    May 4, 2023

    Present: Alexei, Gene, Jim, Flemming, Tommy, and Richard
     
    == Hardware
    — another water leak has occurred
    — opened pole tip
    — in same place as before
    — was repaired yesterday
    — not leaking as of today
    — poletip will be closed around 1 pm today
    — performed one successful laser run, but frequency was unusual
    — need more runs to compare
    — Monday will change magnet polarity
    — cosmics running now expected to likely begin this Friday
     
    == Electronics
    — Tonko working on problems that have developed since energizing the magnet
    — maybe some remaining issues with power supplies
    — goal is to be done by tomorrow
    — will get about 2 days of FF cosmics at best 
    == Software
    — nothing new this week
     
    April 27, 2023
    Present: Alexei, Gene, Jim, Yuri, Flemming, Prashanth, and Richard
     
    == Hardware
    — many problems currently
    — cathode readout producing spontaneous trips (fake, but shutdown PS)
    — cleaned-up and testing today
    — pulse production for calibration also lost proper control
    — replacing battery and reprogramming
    — TPC circulation started yesterday
    — magnet testing today (FF polarity)
    — after magnet testing done will try for cosmics and lasers
    — TPC appears to be sitting at fullest extent of motion in one direction (about 5 mm shift)
    — northwest corner is fixed, look at motion on opposite side
    — will check again when we have RFF polarity
    — approximate time scale for having lasers and cosmics is today if all goes well
    — STAR will be ready in the next couple days at the outset
     
    == Electronics
    — Tonko (on vacation now) communicated with Flemming
    — struggling with updating TPX firmware (not stable)
    — running in test setup (not on TPC itself)
    — can reach 3.2 kHz, but faster scrambles data (likely software related)
    — could impact physics goals for run (goals were based on 4.5-5 kHz)
    — taking a step back temporarily
    — will use first few weeks of run to improve
    — new software framework on all DAQ PCs
    — new cluster finder on TPC and TPX
    — new network and events builders
    — one iTPC sector (sector 1) running new firmware (need to watch it)
    — make sure it's stable in cosmics
    — if so will update other firmware
    — will start changing TPX firmware upon return
    — Gene can process incoming cosmics once a day at start
    == Software
    — about presentation from Yiding (updated from last week)
    — checked low pt e- band that was observed in SL22b in SL23a
    — claim is strange band is visible in all BES data 
    — band is not visible in SL23b processed data from 200 GeV O+O
    — would be good to confirm by reprocessing an older set in SL23b
    — Gene mentioned that the 19.6 GeV is already in the queue but may be far down
    — will look at doing calibration production in SL23c for comparison
    — on extra gain calibration 
    — mismatch in dB tables with actual RDO outages
    — fixed some RDOs by hand for 2019
    — tried applying same search to Runs 20-22 and found many mismatches
    — lost RDOs
    — Yuri has cleaned up Run 22, but many mismatches remain in other runs
    — many are in Run 20 but 21 as well
     
    == AOB
    — Yuri will be absent next week
    — but watching what's happening with detector and cosmics
    — Yuri will also look at O+O to analyze some questions about TPC but also ToF


    April 20, 2023

    Present: Alexei, Gene, Jim, Tommy, Yiding, Irakli, Yuri, and Richard
     
    == Hardware
    — starting gas next week
    — on west side there are a couple RDOs out
    — thought to be on DAQ side rather than poletip closure
    — was problem during interlock check
    — sensor appeared to be malfunctioning on inner field cage air blower
    — sensor was verified on bench
    — was connected with cable that had been disconnected from global interlock
    — report is CAD personnel removed resistor from cable but Alexei will verify
    — resistor is crucial to operation
     
    == Electronics
    — Nothing new reported this week
     
    == Software
    — SC for fixed target in Run 21 3.85 GeV Au+Au
    — iterative procedure appears to be converging
    — some difficulties particular to fixed target must be overcome (vertex position for example)
    — may not be able to do east west SC differential 
    — may be an additional T0 correction recalibration after SC is finalized
    — Meeting with Temple University has been delayed slightly, forthcoming
    — Yuri finishing RDO cleanup for embedding (in '20, '21)
    — due to blocks of runs where RDO outages were apparently not recorded in dB
    — changes committed but some iteration needed
    — dE/dx calibration in O+O is in progress
    — Irakli discusses his work on alignment
    — see https://drupal.star.bnl.gov/STAR/blog/iraklic/Run-21-Alignment
    — confirmed correct data set was used in plot discussed 2 weeks ago (bottom of page)
    — some improvement in pt resolution
    — work on study of residuals for new alignment and pre pass should be done by end of day today
    — may not need supersector alignment 
     
    == AOB
    — Yiding reporting on  pt < 200 GeV tracks in 19.6 GeV (~200M events)
    — see talk posted to TPC meeting agenda
    — Yuri requested check of 7.7 and 11 GeV data 
    — anomalous band should not be present



    April 6, 2023

    Present: Alexei, Flemming, Jim, Yuri, Tommy, and myself
     
    == Hardware
    — Not much new to report
    — Mostly tracing cables
    — Did walkthrough
    — told need new locks for lasers
     
    == Electronics
    — adjusting some PS voltages to find somewhat better operating point
    — this is to try to mitigate some blown fuses
    — about 12 adjustments done so far
    — some firmware changes still being made
     
    == Software
    — Irakli posted an update page on his alignment work
    — see https://drupal.star.bnl.gov/STAR/blog/iraklic/Run-21-Alignment
    — referring to last plot on page:
    — need some checks, but it seems initial survey of inner sectors was sufficient without additional alignment procedure for outer sectors
    — some asymmetry observed in high(er) momentum tracks for existing and new alignment procedures
    — possibly need to confirm with opposite field polarity as well
    — need to understand 
    — would desire to start cosmics this year with FF to increase statistics
    — next step will be supersector alignment
    — final pass for clean-up of dead FEEs in BES data was prevented by power loss yesterday
    — most corrections will be for Runs 20 and 21, 19 is mostly fixed
    — expect to finish in few days
    — may be an additional correction for PID in O+O that depends on mass hypothesis
     
    == AOB
    — Flemming expects to be absent for next two weeks

    March 30, 2023
    Present: Alexei, Flemming, Jim, Yuri, Gene, and myself
     
    == Hardware
    — last week installed 2 magnetic probes
    — poletip was inserted
    — electronics all checked-out okay after poletip insertion
    — have sufficient gas for a couple months (3 full 6-packs)
    — each lasts for about 20 days
     
    == Electronics
    — nothing new for this week
     
    == Software
    — O+O production from Run21 has started (2 mag fields)
    — SC calibrations 
    — discussed with Temple group (was change in personnel)
    — work is transitioning to Richard Thrutchley
    — will meet with them again in 2 weeks
    — still have d+Au 200 calibration to do
    — 3.85 GeV Run 21 FXT still needs SC calibration
    — Yuri working on cleaning up dead FEEs for BES data
    — looked at O+O production for PID
    — some differences noticed but need to summarize 
    — no word from Irakli yet
     
    == AOB
    — nothing new reported this week

    March 23, 2023

    Present: Alexei, Flemming, Jim, Tommy, Prashanth, Yuri, Gene, and myself
     
    == Hardware
    — Working on lasers 
    — east alignment is finished
    — both lasers are working and operational
    — spare pump is in hand
    — checked cathode today
    — no shorts so far
    — finishing structure to hold magnetic coils
    — next week east poletip going in (on 28th)
    — gas flow starting April 18th per operations meeting
    — magnet polarity switching time about 3-5 hours (related to discussion last week)
    — (Prashanth) now buying gas from Lindy (sp?) 
    — Prashanth ordered all needed gases
    — AirGas actually supplying methane since Lindy couldn't deliver
    — also ordered another 2 6-packs from different company 
    — Alexei suggested another possibility, Mathison (or Madison), Prashanth will follow-up
     
    == Electronics
    — summary in slides from yesterday
    — just one RDO remains with some fuses to change
    == Software
    — information from Irakli
    — will provide resolution vs. momentum for new and old alignment soon?
    — update on O+O 200 GeV (RFF vs FF observed difference)
    — revisited RFF O+O from August-September 2022
    — see plots in agenda for today's meeting
    — evidence from FF for an approximately 1 mrad rotation of TPC
    — looked at vertices reconstructed from tracks in east only and west only in low lumi fixed target data
    — see approximately 2 time bucket shift between east and west for fixed target (as reported earlier by Xiangli)
    — see additional plot in meeting agenda
    — Temple University transitioning TPC work from Babu to Richard Thrutchley
    — will deal with Run 22 calibration
    — good positioning for this work due to previous use of calibration in calorimeter for TPC calibration
     
    == AOB
    — Yuri updated RDO mask and dead FEE map for all BES data sets (fixed target runs only??)
    — need to check if any issues remain for '20 and '21 sets



    Friday: East side was finished
    from Christian
    Finished up around 2:30.
    -14-4 RDO was removed, replaced with working alternate
    -14-2 & 14-3 RDO cables had nylon braid put in over the splices and were put back in the cable tray
     
    -17-1-3 FEE and its cable were replaced (again), still failed; moved from -3 to -5 and it worked, Tonko will update mapping
    Tonko checked all of sector 14 and RDO 17-1.
     
    I looked at the other cable trays and didn't see any obvious problems with the cables and water lines.
    We should be ok to insert the poletip whenever.

    March 16, 2023

    TPC meeting March 16, 2023
    Present:  Gene, Tommy, Yuri, Alexei, Jim and Flemming

    -Hardware
     Alexei completed East laser alignment; the laser system is now ready for the upcoming run.
     Next week will install ‘new’ magnetic sensors for additional monitoring.
     Q: can these be interfaced to Slow Control (monitoring)?

    -    Electronics
    -    The TPX s8-3 RDO issue turned out to be a network card problem in the DAQ PC. West is ok; a few FEEs are masked off for the run.
    -    East TPC should also be ready.

    Gating grid tests
    -    Gene is proposing that we test the effectiveness (transparency) of the GG under different beam conditions.
    -    Aim to take short runs (<1 min) with 2, 3, 4, 5 kHz rates for MB data taking and similar for high-lumi running later in the run.
    -    Gene will talk to Jeff on setting up run configs so it can easily be run by crew. And with JG on beam conditions.
    -    The request will be added to the TPC readiness status.

    Pre run calibrations
    Had long discussion on FF/RFF information that should be extracted. STAR magnet is currently wired for FF.
    Yuri would like, now that Alexei has installed a detector to monitor possible movement, that we switch a couple of times between FF, 0, and RFF to see a) if there is a movement and b) if it is reproducible. This does not need cosmic data.
    It was not clear how this knowledge would translate into improved calibrations of the TPC. Jim pointed out that (some) E-B misalignment is already included in space charge distortions (static?). Gene commented that the main geometry calibration issues/uncertainties are related to the supersector-to-supersector alignment.
    Yuri reiterated that understanding whether there are movements of the TPC between FF and RFF will help calibration methods.
    Will bring the proposal up at TPC readiness, but expect to get pushback on the ‘cost’ (time, people, ...) of doing multiple field switches.

    De/dx calibrations
    Yuri pointed to the presentation at the last S&C meeting that identified an FF/RFF difference in a correction that has been used previously for dE/dx vs. position along the wire, attributed to wire-to-pad-plane differences. See https://drupal.star.bnl.gov/STAR/system/files/RunXXI%20OO200GeV.pdf

    FV raised the issue of why this shows up in O+O at 200 GeV, which has about equal numbers of positive and negative charge tracks with opposite curvatures.




    + Run 21 dAu200 SpaceCharge
      - Babu continues to find essentially no asymmetry between east and west, despite historically seeing ~20% more in east
      - Gene sees asymmetry in the lower luminosity Run 21 dAu data, but it disappears at high luminosity
      - Recommendation from Gene was for Babu to process more high luminosity data to help investigate dependencies further

    - Run 22 pp500 SpaceCharge
      - Babu has plenty of SpaceCharge work to do even without Run 22, plus his physics analysis
      - Gene has asked the PWGs to provide an additional helper to investigate possible fill-by-fill variations as seen in Run 17
      - Good to educate more people about SpaceCharge anyhow, even if Babu does find time to work on it

    + New cluster finder
      - Tonko expects it to be not just different, but "better"
      - Tommy proposed a test with embedding to see if momentum resolution and acceptance improves as Tonko hopes
      - Yuri suggests not to focus on momentum resolution, but rather efficiency
      - Tommy will discuss with Tonko more

    + dE/dx adjustments by analyzers
      - Tommy brought up artificial, empirical dE/dx nsigma shifting by analyzers for BES-I datasets
      - Yuri says that's ok as a band-aid for older productions of iTPC era data, but that the new dE/dx model introduced last year should help avoid this
      - Tommy is willing to work on a standard version of the empirical band-aid for people to use, and no one objected
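
    For illustration only, a minimal sketch of what such an empirical nsigma band-aid could look like: re-centering an nSigma distribution (e.g. nSigmaProton) in momentum slices so the peak sits at zero with unit width. The function name, the binning, and the use of a simple mean/RMS instead of a proper fit to the proton band are assumptions of this sketch, not the actual STAR calibration code.

        import numpy as np

        def nsigma_recenter(nsigma, p, bins, min_entries=50):
            """Empirically re-center and re-scale nsigma in momentum slices.
            nsigma, p : per-track nsigma values and momenta [GeV/c]
            bins      : momentum bin edges used for the slices"""
            out = np.asarray(nsigma, dtype=float).copy()
            idx = np.digitize(p, bins)
            for b in range(1, len(bins)):
                sel = idx == b
                if sel.sum() < min_entries:      # skip poorly populated slices
                    continue
                mu, sigma = out[sel].mean(), out[sel].std()
                if sigma > 0:
                    out[sel] = (out[sel] - mu) / sigma
            return out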


    March 9, 2023
    Present: Flemming, Yuri, Gene, Tim, and myself
     
    == Hardware
    — SC3 very slow (controlling Gating Grid driver)
    — several things have been tried but nothing successful yet
    — may ask Wayne to reboot once more
     
    == Electronics
    — Discussion with Christian this morning
    — east end work is done except for a few cable wraps
    — problem since STGC installation
    — fiber connection to one TPC sector 8 RDO 3 not working
    — whole RDO is out
    — will be investigated
    — worst case would have to reopen poletip and fix connection
    — but that should be a last resort and the decision would be based on size of dead area
     
    == Software
    — Babu working on d+Au SC and GL for Run 21
    — Yuri showed slides on TPC23 cluster finder
    — see https://www.star.bnl.gov/~fiscal/star/TPC/TPC23/
    — differences in number of found clusters in outer sectors between online and TPC23
    — different finders putting different bounds on found clusters resulting in more from TPC23
    — efficiencies are lower for TPC23 at low-pt, high eta
    — result is some track losses at high eta for TPC23 compared to online but losses are small and likely acceptable
    — Guanan working on TpcRs
    — still looking at differences in data width compared to simulation
    — Yuri will send URL to TPC list
    — Yuri still needs position of TPC with respect to magnet from survey
    — Also waiting on comparison of momentum resolution from old and new alignment from Irakli
    — also see https://drupal.star.bnl.gov/STAR/meetings/STAR-Collaboration-Meeting-Spring-2023/Pleanary-Session-I/iTPC-performance
     
    == AOB
    — Flemming will give an update to Operations meeting on status of TPC on March 22nd


    February 23, 2023


    Present: Alexei, Flemming, Yuri, Gene, Jim, Tommy, and myself
     
    == Hardware
    — East laser power supply had a leak
    — pump was leaking
    — have a spare and beginning repair today
    — TPC water system has leak from spiral hose as well
    — was repaired yesterday
    — notified electronics folks that they may resume work
    — some discussion about how to align with movement of TPC
     
    == Electronics
    — Nothing new reported this week
     
    == Software
    — Babu working on d+Au SC and GL for Run 21
    — Run 21 O+O large calibration production is finishing today
    — can provide to Yuri to look at dE/dx
    — For O+O Yuri still has a puzzle from survey
    — difference between FF and RFF
    — factor 2 deviation was reduced but some residual still present
    — looking at 2011 27 GeV FF data to see if rotation of TPC is there 
    — TPC23 cluster finder 
    — using same DAQ file for online and offline
    — Yuri sees some difference in TPX outer sector(s)
    — comparison of online and (TPC23) offline shows about 10% difference in clusters
    — checking to see if difference is due to gain file
    — Yuri says Irakli reported to have finished alignment with his new approach
    — report is there is a big improvement in alignment, but need to hear from Irakli directly
    — Yuri needs position of TPC with respect to magnet from survey
    — Guanan has finished tune-up of TpcRs
    — still some question about tails
    — data slightly wider than simulation in time direction
    — want to understand that
    — next will be to check data for fixed target runs
     
    == AOB
    — Some discussion of plan for fixed target (with 3 new targets)
    — we would like to see the drawings of the location(s) of new targets on east side
    — Tonko back in Croatia
    — software expected to allow us to start with ~4 kHz
    — will get to 5 kHz a little later

    February 16, 2023
    Present: Yuri, Gene, Tommy, and myself (Richard)
     
    == Hardware
    — Nothing new reported this week
     
    == Electronics
    — Nothing new reported this week
     
    == Software
    — Babu working on O+O and d+Au SC and GL
    — FF redone due to issue with tables
    — high luminosity looks good
    — low lumi shows some discrepancies
    — will revisit once alignment is done
    — can start large calibration production for O+O as early as tomorrow
    — d+Au needs more attention once 
    — Yuri has some questions for Alexei on survey
    — Guannan working on TpcRs
    — investigating some issue with prompt hits
    — Noticed Tonko has updated cluster finder
    — some mismatch between inner and outer sectors
    — misses last two padrows (71 and 72)
    — believe there is problem with pad maps

    February 9, 2023

    Feb 9, 2023 TPC meeting
    Present: Yuri, Prashanth, Alexei, Flemming, Tommy

    --hardware
    West poletip installed. 3D sensor installed beforehand and can be viewed, so we can observe movement when the field is turned on, polarity is changed, etc.
    Water leak in East laser box.  Alexei will investigate.

    -    After insertion 2 bad FEEs identified. Likely due to a squeezed cable. Need to ensure better pre-closure inspection in future years.

    Software
      Tommy and Tonko identified one source of differences between online and offline clusters: the simulation of the hardware ASIC behavior in which time sequences separated by just one empty (below-threshold) timebin are merged into one.

    Yuri reported that analysis of recent survey revealed that TPC is rotated relative to RFF.
    Still needs to talk to survey group to understand coordinate systems.

    Yuri reported (see meeting page) that the QA plots of pbar vs. p vs. pT for period c (production nomenclature), i.e. from after the COVID shutdown, are different from those before the shutdown, periods “” and b.


    January 25, 2023

    Present: Alexei, Yuri, Gene, Flemming, Tommy, Prashanth, and myself (Richard)
     
    == Hardware
    — 3D sensor installed on west side 
    — glue is now dry, need to install wire
    — expect a couple hundred microns accuracy
    — poletip to be installed Monday or Tuesday next week
    — no shorts detected so far
     
     
    == Electronics
    — Tonko is done with checking west side electronics
    — waiting for poletip installation for final check 
    — if check is okay will wait for run
    — testing will occur at regular intervals 
    — all RDOs on east have been checked
    — some FEEs remain to be checked and possibly replaced
    — Tonko working on PC programming for DAQ speed-up work
     
    == Software
    — Gene found a workaround for SC and GL calibration jobs
    — Babu paused dAu work to work on O+O
    — RFF and FF OO calibrated as of Tuesday evening
    — large calibration production will be ready to go shortly (once values are in dB)
    — Richard will reach out to Irakli to ask about alignment work (CC Gene)
    — Yuri working on understanding differences between data and simulation timing tails
    — simulation tails are longer than in data
    — using 2009 data (with no shaper)
    — Also working on data from survey group
    — Yuri may be out for a couple weeks 
    — Gene will be out week after next
    — Tommy presented work on discrepancies between online and offline clusters
    — shows comparison of clusters found by online and offline
    — seems online CF is merging some clusters that offline finds separately
    — one possibility mentioned is that dead pad mask may make a difference in how potential clusters are connected
     
    January 11, 2023

    Present: Alexei, Yuri, Mike, Gene, Flemming, Tommy, and myself
     
    == Hardware
    — survey finished with new pins
    — will begin laser work once current measurements are done
    — Tonko has been working with electronics since Monday
    — still have not received documents from survey group
    — will reach out to see what's happening with documents
     
     
    == Electronics
    — Tonko is presently at BNL
    — going through electronics on both west and east ends
    — just a couple RDOs have some problems
    — hoping to have that work done by end of week
     
    == Software
    — Babu still facing some issues with SC and GL work
    — O+O will still be higher priority than d+Au, so Babu will focus on that next
    — Yuri says TpcRs tuning continues
    — need to understand relation between actual chip parameters and those in the dB
    — some question regarding signal shape on anodes wires and ALTRO chip tail cancellation
    — Tommy working on online and offline cluster finding match
    — trying to find mismatched clusters and recover pixel information
    — difficult work, at text file level
    — stay tuned


    January 4, 2023


    Present: Alexei, Yuri, Prashanth, Tommy, Gene, Flemming, and myself
     
    == Hardware
    — east side is finished from bake-out
    — will start now fixing RDOs on east side
    — also seems that survey is done on east side but Alexei has not received official word yet
     
    == Electronics
    — some delay in work waiting on beam pipe bake-out heat to dissipate 
     
    == Software
    — Babu still facing some issues with SC and GL work
    — O+O will still be higher priority than d+Au, so Babu will focus on that next
    — final SC work should use final version of alignment from Irakli
    — but should have the interim alignment results
    — 9.2 and 11.5 GeV dE/dx calibration finished
    — checked with standard library, no diffs with dev
    — dB timestamp should be taken from this year
    — link to current status of work with new model (dN/dx)
    — see https://drupal.star.bnl.gov/STAR/system/files/Revison%20STAR%20TPC%20dEdx%20Model_0.pdf
                    




    December 21, 2022

    Present: Alexei, Yuri, Guannan, Tommy, Gene, Flemming, Irakli, and myself

    == Hardware
        — survey done on west but not east side
            — Alexei (currently on vacation) trying to push a little to get it done
            — found good way to put cameras for TPC movement
        — Alexei will prepare short presentation upon return

    == Electronics
        — inner sector RDOs on west side have been put back up by Alexei
        — but on east side some RDOs still need to be returned to proper position
        — Alexei needs to talk to Tonko about east side before Tonko resumes work with RDOs on or after 1/1/23

    == Software
        — Gene mentions that Babu is still facing some issues on SC and GL
        — PWGs now prioritizing O+O
            — SC is ongoing for that but Babu needs some help from Gene
            — once that’s done will move forward with TPC dE/dx etc.
            — Irakli working on alignment, RFF seems to be converged
                — FF does not seem to be in right place in X after RFF convergence (within 100 um)
        — Yuri finished with dE/dx calibration (7.7 and 9.2 from 2020)
            — have put calibration constants from new model in dB
            — need final check with Standard Library
        — Guannan is continuing work on deep tuning of TpcRS
            — some problems observed
        — Tommy continuing with offline/online cluster comparison
            — 0.5% difference observed and must be understood

    December 14, 2022

    No meeting, but Yuri presented an update on dE/dx at the s&c meeting  - presentation
    Electronics status presented by Tonko at analysis meeting - presentation

    December 7, 2022


    Present: Alexei, Yuri, Gene, Flemming, Mike, and myself
     
    == Hardware
    — finished cleaning of all TPC except 12 points on east side
    — interrupted by water leak
    — expected that leak will be fully repaired this week
    — Tonko nothing with TPC until Jan. 1st
    — plenty of time for Alexei to finish work
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — Yuri finished with dE/dx calibration (7.7 and 9.2 from 2020)
    — will put calibration constant in dB and request production
    — will prepare calibration for entire BES data set
    — some other activities ongoing around combined PID (from all detectors)
    — Guannan is continuing work on deep tuning of TpcRS
    — Gene mentioned Babu has started working on SC and GL
     
     
    == AOB
    — Flemming mentioned minutes from QA board
    — Run 20192001 shows anomalous drop in sector 1 in phi vs. eta
    — not easily seen in QA plots (small)
    — sector 1 had many auto-recoveries in that run
    — seems that one RDO did not come back fully resulting in an inefficiency
    — likely need a new QA plot to help identify this problem in the future

    November 30, 2022

    Present: Alexei, Yuri, Irakli, Gene, Tommy, Guannan, Flemming, and myself
     
    == Hardware
    — Tonko finished yesterday
    — much easier for Alexei to work on holes now
    — expects to finish cleaning today or tomorrow
    — needs to invite survey group to make extensive plan
    — moving scaffolds, etc.
    — need to make 3D tool to move fiber
    — Flemming asked about water leak
    — exact location has yet to be identified
    — Robert Pak was notified (instead of Alexei?)
    — CAD requested some parts to fix leak properly
    — Alexei finished cleaning but would like to work on whole inner sector 
    — last 2 RDOs will shift a little bit (approximately 0.5 inch)
     
    == Electronics
    — talked to Christian on Thursday, some issues remain
    — Tonko switched off all electronics yesterday
    — however, this morning sector 15 (or 16 or 17) was found to be on
     
    == Software
    — Irakli presents alignment work on '21 cosmics data
    — previously kept outer sectors fixed and aligns inner sectors to them
    — some issues found so now reversed procedure (fix inner and align outer)
    — related to lack of knowledge about exact position of pad plane on outer sector
    — we have a good survey of pad plane position on sector for inner (iTPC) sectors 
    — so, inner sectors are now being used as reference instead of outer
    — deltaZ and deltaY converge quickly, deltaX is slower but seems to be converging with continued iterations
    — iterations will continue
    — supersectors still to come
    — Gene on production priorities
    — Run20 11.5 and 9.2, then 200GeV d+Au from Run21(SC and GL need to be finished)
    — next comes 200 GeV O+O (has 2 field settings)
    — need to understand static distortion and alignment issues  beforehand
    — working groups want a test production anyway
    — will finish SC and GL for O+O
    — then will move forward with preview production with whatever corrections state exists at the time
    — Tommy working on automatic pull of histograms for QA Fast Offline analysis
    — finishing work on differences in cluster finding results in simulation and production
    — Yuri continuing iteration on constant dE/dx work (each needs a couple days)
    — Guannan work on tuning TpcRS
    — some discrepancy in clusters noticed by Xianglei 
    — needs to be understood
     


    November 16, 2022

    Present: Alexei, Yuri, Tommy, Geydar, Vinh, Gene, and myself
     
    == Hardware
    — checked almost all holes
    — need to coordinate with Tonko now about non-interference with electronics work
    — did not get to check 3D sensor fit yet due to activity on west side
     
    == Electronics
    — nothing new for this week
     
    == Software
    — presentation from Vinh on dE/dx QA from 7.7 GeV
    — conclusion is that ToF m^2 cuts give, at best, only factor 10 improvement in reduction of misidentification (see talk)
    — Gene asked if ToF group has studied the effect of tails on the distributions
    — ultimately attempting to do combined PID with all detectors
    — this is first step comparing TPC and ToF
    — Yuri has frozen new modified dE/dx model
    — calculating dE/dx from all BES data
    — continuing iteration and work on methodology
    — expect new dE/dx calibration for all BES II data soon, but not clear how soon
    — Gene asked about choice of TFG22e library vs. SL22b
    — only TFG offers KFParticle finder

    November 09, 2022
    Present: Flemming, Alexei, Tommy, Mike, Gene, and myself
     
    == Hardware
    — continuing to clean TPC wheels for survey
    — about 70% done
    — Tonko is working inner sector electronics, so being careful not to interfere
    — 3D sensor prototype has been built, looks okay
    — want to put on TPC to check fit
    — if okay will request manufacture of a support structure
     
    == Electronics
    — talked with Christian
    — Christian has list of FEE cards to look at (about 10 in iTPC)
    — few will need to be changed
    — seems like mostly cable issues
    — everything still moving forward 
    — Alexei needs some access to some areas being used 
    — need access once for testing and then again for mounting
    — need to coordinate with Tonko to deconflict access
    — will first finish with holes to which he currently has access
     
     
    == Software
    — reached out for space charge work 
    — project with Rice students on PID
    — students should join TPC group for discussions
    — Richard will check with Yuri to ensure the right names are added to the TPC list
    — Gene mentioned the code updates to cluster finder from Tommy and Yuri look good
    — some tuning still to be done

    November 02, 2022
    present:
    Gene, Yuri, Tommy, Alexei, Chenlian Jin, Flemming
    excused: Richard

    Alexei: working on probes for survey.

    Open questions cosmics for survey field A/B before run?

    Gene: There is no progress on space charge for some BesII years: dAu, 17.6 GeV FXT
    Also need for run 22
    Will contact Temple.

    Nhits issues solved. Will pay more attention to cluster status when selecting/counting cluster distributions

    Yuri:
    a) will wrap up de/dx model for higher Z. Couple of weeks.
    Request that we have a presentation of the model before presenting to the collaboration.


    b) There is interest from the Rice group in making a common PID (de/dx, btof, etof, ...) for the BESII data set. They would like to use the TPC meetings and e-mail list for this activity.
    Suggestion: add and invite this group



    October 26, 2022

    Present: Flemming, Alexei, Tonko, Yuri, Tommy, Mike, Gene, and myself
     
    == Hardware
    — continuing to clean holes for survey
    — building 3D sensor for TPC displacement measurement
    — no measurement possible until March or April due to construction
     
    == Electronics
    — expectation is everything will be done, including debugging and verifying, by March, 2023
    — includes rewrite of FPGA firmware, rewrite of DAQ framework, and new cluster finder
    — note that verification of TPX will require tracking (deconvoluting stage)
    — should be done within next 2 weeks 
    — cannot spend time on discrepancies in embedding right now, will be revisited later
    — hardware should be done earlier, by end of January
    — current hardware work (RDO repair, etc.) is proceeding slowly
    — requests will be made regarding how to adjust the scheduling
     
    == Software
    — Yuri and Tommy investigating embedding
    — checks have been done for Runs 18 and 19
    — software commits have been made
    — some issue with backward compatibility
    — one line of code should fix
    — similar problem to be addressed in Run 17
    — work on dE/dx model continues
    — Irakli continuing alignment work for Run 21


    October 19, 2022
    Present: Flemming, Alexei, Yuri, Tommy, Mike, and myself
     
    == Hardware
    — determined 3D sensor to be used
    — tools ordered to make extensions for CCD cameras
    — need to order better scales (0.5 mm scales)
    — working with survey targets
    — will move some RDOs to have 4 points visible
    — question to Yuri if 150 um accuracy is sufficient
     
    == Electronics
    — talked to Christian
    — some small problems remain with the RDOs
    — have water back now and some things taken to lab
    — Tonko was notified that water has been restored
     
    == Software
    — Yuri and Tommy investigating embedding
    — efficiency online and offline are different in Run 18
    — due to gain table
    — interface changes were made to fix loading of gain file
      — some differences remain, investigation continuing
    — need 1-2 more weeks
    — work on dE/dx model continues
    — calibration files generated for Run 20
    — Irakli joined alignment work for Run 21
    — results will be forthcoming
     


    October 5, 2022
    Present: Flemming, Alexei, Yuri, Gene, Tommy, and myself
     
    == Hardware
    — still working on two options (sets of readouts)
    — these are for the 3D sensor
    — need a non-magnetic 3D sensor but almost no options for that
    — Bill is done with the survey device
    — tested and has ~50 um accuracy
    — various holes around TPC need to be cleaned (filled with corrosion or debris)
    — work also proceeding on TPC electronics (power supply checks, etc)
     
    == Electronics
    — nothing new reported this week
     
    == Software
    — Two issues discussed
    — embedding for Run18 was first
    — offline cluster finder finds more clusters than online cluster finder
    — was due to gain table
    — issue has returned, Tommy is working on it
    — Flemming agreed the problem looks to be related to the gain table based on plots shown in the software and computing meeting
    — other issue is 7.7 GeV dE/dx 
    — Rice group has found some deviation
    — shift is thought to be due to muons from pion decays 
    — Yuri working to improve dE/dx model
    — need to use data to correct model
    — ultimately would want to use integrated detector subsystem information to correct model
    — Rice may have one or two more students that can contribute to this effort
    — also 1 more student from Dubna who will work on PID issues

     August 17, 2022
    Present: Flemming, Alexei, Yuri, Mike, Tommy, and myself (Richard)
     
    == Hardware
    — methane leak is now something like 2 cc/min (very small) 
    — Alexei can now access scaffolds for lasers
    — also need to access areas that are outside scaffold volume (for 8 and 10 o'clock laser boxes)
    — will need an additional platform of some kind
    — talked with Bill Strubele about repeating survey with new tool
    — will report back any issues
     
    == Electronics 
    — Tonko was waiting on feedback regarding energizing the electronics
    — this is for remote operation of FEEs
    — Tim will contact Tonko
    — important for inner TPC sectors where replacements have been done
    — Tonko believes he will be done with most work in programming FPGAs in about a month 
    — can give status update at collaboration meeting
     
    == Software
    — discussion of some disagreement on nSigma in 7.7 GeV ('21) at Software Group meeting
    — some discrepancy with regard to 
    — in work on dE/dx for composites see ~3% shift between positives and negatives
    — positives slightly higher than negatives
    — same effect is also observed _in_simulations_ (puzzling)
     
    Hardware update,
    More on de/dx for 7.7 GeV auau
    update from Alexei of TPC electronics readiness

    About 2-3 weeks ago we had a power dip, and since then I had not paid attention to the platform. I found some TPC VME crates and TPC FEE VME crates were OFF. I powered the FEEs from the platform (all PS have green LEDs) and the control room; everything looks OK, but the red LEDs on TPX PCs 2, 24, and 25-36 remained red.

    August 10, 2022

    minutes

    July 27, 2022
    Present: Alexei, Yuri, Gene, Tommy, and myself

    == Software
     — some slides on current state of dE/dx model work
       — will be updated as work progresses
       — will post slides with current status (keeping in mind there is more to do)
      -- see https://drupal.star.bnl.gov/STAR/system/files/Revison%20STAR%20TPC%20dEdx%20Model.pdf

    July 7, 2022
    Hardware update. Survey done, Short repaired,
    Software de/dx model update
    minutes

    June 29, 2022
    hardware update. Prepare for survey. Concern for Ar availability
    software: Tommy and Yuri have been comparing cluster finders offline/online
    minutes 

    June 22, 2022

    Hardware update, survey. TPC electronics speed upgrade, repair status
    software update
    minutes

    June 15, 2022
    Updates on survey plans
    minutes

    June 8, 2022
    Updates on old vs. new cluster finder, simulated charge distribution from fixed target
    minutes

    June 1, 2022
    Updates on dE/dx work, old vs. new cluster finder, simulated charge distribution from fixed target
    minutes

    May 18, 2022
    TPC survey delay, inner sector wires, electronics update, 7.7 dE/dx work
    minutes

    May 11, 2022
    gap measurements, 3D sensor prep work, inner sector wires, electronics update, 7.7 dE/dx work
    minutes

    April 27, 2022
    nitrogen switchover, 3D sensor prep work, spare inner sectors, space charge corr., 7.7 dE/dx work
    minutes
     
    April 20, 2022
    survey results, electronics status end of run, 7.7 de/dx calib done
    minutes

    April 7, 2022
    prep for FF/RFF survey, space charge corr, updated manuals
    minutes

    February 2, 2022
    discussion on added CF4 to TPC, methane shortage, run19 FEE status tables complete
    minutes

    January 12, 2022
    to from run22 submitted, many RDO failures, look at skipping alt row for reco (pp500)
    minutes

    December 1, 2021
    Analysis of shorts, laser response, tpc blower fixed
    minutes 

    November 24, 2021
    Some results from short measurements, Open issues ahead of run, missing Vdrift for a day.
    minutes

    November 17, 2021
    Magnetic field settings reading, GG slow control layout, short discussion, variable vdrift
    minutes 

    November 10, 2021
    Plan for measurements for field cage short, 
    minutes

    November 3, 2021
    West field cage short; run 19 space charge; upcoming calibration issues.
    minutes

    October 27, 2021
    TPC electronics status (Tonko), Laser West repaired, 19.2 GeV space charge nearly complete. GG slow control.
    minutes

    October 20, 2021
    hardware status, GG driver, calibration update(Yuri) see slide
    minutes


    October 13, 2021
    hardware status, cosmic request for run22 startup
    minutes

    September 22, 2021
    9.2 TPC space charge cal completed,
    Embedding issue for FXT Xianglei discussed. 
    minutes

    September

    September 8, 2021
    brief meeting ; hardware cathode control
    minutes

    September 1, 2021
    minutes

    August 25, 2021 
    NMR status, laser progress
    de/dx short term time dependence, helpers for calibration
    minutes

    August 18  no meeting

    August 11, 2021 meeting link
    agenda: hardware, calibration and software update

    minutes

     August 4, 2021 meeting link
    agenda:
    - hardware status
    -- gating grid
    - calibrations
    - TPC tasks for calibrations, software maintenance 
    -- presentations on meeting link

    minutes
     


    December 9, 2020
    Agenda:
    -hardware status
    - software
    --- alignment progress 
    --- https://drupal.star.bnl.gov/STAR/system/files/he34embedding4th.pdf
    --- AOB

    meeting link: https://drupal.star.bnl.gov/STAR/event/2020/12/09/tpc-weekly


    December 2, 2020
    agenda hardware status
    meeting link and minutes : https://drupal.star.bnl.gov/STAR/event/2020/12/09/tpc-weekly

    November 18, 2020
    agenda:
    hardware - SF4 tests; laser system
    software: 3He efficiencies, track splitting - see talk on meeting event page
    alignment progress

    meeting place: https://drupal.star.bnl.gov/STAR/event/2020/11/18/tpc-weekly

    November 6, 2020
    Discussion points:
    TPC power cables
    Gating Grid reinstallation
    Presentation on 3He efficiencies
    Clusterfinder offline vs online
    meeting place  https://drupal.star.bnl.gov/STAR/event/2020/11/04/tpc-weekly


    October 28, 2020

    Discussion points:
    hardware: TPC gas investigation; electronics repair
    software: GG turnon effects 
    meeting place  https://drupal.star.bnl.gov/STAR/event/2020/10/28/tpc-weekly

    October 21, 2020
    Discussion points:
    hardware: gating grid driver
    software: Super sector alignment procedure
    meeting page https://drupal.star.bnl.gov/STAR/event/2020/10/21/tpc-weekly

    October  14, 2020
    Discussion points
    Hardware: TPX sector 13, 14 power supply cable repairs
    Software:  Super sector alignment procedures 

    meeting page https://drupal.star.bnl.gov/STAR/event/2020/10/14/tpc-meeting
    minutes available on that page


    TPC performance

     Documentation on performance of the TPC
     

     

     

    TPC Hit Errors (2008)

    NOTE: this was for 2008

     These are the TPC hit errors as parameterized by Victor Perevoztchikov in order to normalize pulls (and thus chi squares) of ITTF tracks versus z, dip angle, and crossing angles:

    error_xy = sqrt([0]+[1]*((200.-y)/100.)/(cos(x)*cos(x))+[2]*tan(x)*tan(x))
    error_z  = sqrt([0]+[1]*((200.-y)/100.)*(1.+tan(x)*tan(x))+[2]*tan(x)*tan(x))

    where y in the equations is the z coordinate in the TPC, and x in the equations is the crossing angle for error_xy, and the dip angle for error_z. The parameters are:

    Inner TPC:

    error_xy->SetParameters(0.0004,0.0011513,0.01763);
    error_z ->SetParameters(0.00093415,0.0051781,0.014985);

    Outer TPC:

    error_xy->SetParameters(0.0012036,0.0011156,0.063435);
    error_z ->SetParameters(0.0026171,0.0045678,0.052361);
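
    These parameterizations can be evaluated directly in ROOT as TF2 objects. Below is a minimal illustrative macro using the Outer TPC parameters above; the choice of z = 50 cm and a 0.2 rad angle is arbitrary, and the errors are assumed to be in cm:

      // hitErrorEval.C -- evaluate the hit-error parameterizations above (illustrative only)
      // x = crossing angle (error_xy) or dip angle (error_z); y = z coordinate in the TPC
      {
        TF2 error_xy("error_xy","sqrt([0]+[1]*((200.-y)/100.)/(cos(x)*cos(x))+[2]*tan(x)*tan(x))",-1.,1.,0.,200.);
        TF2 error_z ("error_z", "sqrt([0]+[1]*((200.-y)/100.)*(1.+tan(x)*tan(x))+[2]*tan(x)*tan(x))",-1.5,1.5,0.,200.);
        error_xy.SetParameters(0.0012036,0.0011156,0.063435);   // Outer TPC values from above
        error_z .SetParameters(0.0026171,0.0045678,0.052361);   // Outer TPC values from above
        printf("outer error_xy at z=50 cm, 0.2 rad: %.4f cm\n", error_xy.Eval(0.2,50.));
        printf("outer error_z  at z=50 cm, 0.2 rad: %.4f cm\n", error_z .Eval(0.2,50.));
      }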

    Plotting these as a function of z and crossing/dip angles gives:

    Inner TPC:

    Outer TPC:

    EPS versions of these images are attached.

     

    TPC Hit Errors (Run 9)

     As with TPC Hit Errors (2008), hit errors were found for Run 9 by Victor Perevoztchikov.

    I created a macro which can be used by anyone to generate these plots, obtainable by CVS checkout from offline/users/genevb/hitErrors.C. Attached to this page are text files used as input for pp500 and pp200 (Victor ran on data with timestamps of 20090326.082803 and 20090425.093105 respectively), and eps versions of these plots.

     

    pp500:

    __________________________________

    pp200:

     

    -Gene Van Buren

    TPC Pt and DCA resolution

    [Update on 2010-06-21: global track momentum resolution has now also been studied using Cosmic rays study, version 2, with some as yet unexplained good performance at very high pT.]

    _____________ 

    Jim Thomas has written code to model transverse momentum (pT) and pointing resolution near the primary vertex (DCA) of tracks using various detectors. The code has been tuned to match TPC performance under low luminosity conditions, but assumes (for simplicity) TPC hit errors of 0.06 cm in rφ and 0.15 cm in z (which can be compared with the TPC Hit Errors (2008)), throwing tracks at η = 0.5 and including hits in "nearly all" 45 padrows (padrows 1 and 13 are dropped).

    Shown below are the pt resolutions for various options:

    • Black filled triangles: embedded antiprotons at half field
    • Black open triangles: embedded pions at half field
    • Cyan: model prediction for global (anti)proton tracks at half field
    • Green: model prediction for global pion tracks at half field
    • Blue: model prediction for global pion tracks at full field
    • Red: model prediction for primary pion tracks at full field (primary vertex known to 1 mm)
    • Magenta: model prediction for primary pion tracks at full field (primary vertex known to 0.3 mm)

     

    Note that the embedding data was shown in Figure 10 of the TPC NIM paper [1]. At low pT, the resolution is dominated by multiple Coulomb scattering (MCS) effects. At high pT, the momentum resolution approaches a C * pT^2 dependence (note that C can be considered the inverse transverse momentum resolution, as δ(1/pT) = δ(pT) / pT^2 for small δ(pT)). For the above curves, C (in units of inverse momentum [c/GeV]) is approximately:

    • Cyan and Green: 0.018
    • Blue: 0.009
    • Red: 0.005
    • Magenta: 0.003
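
    As a rough worked example, the blue curve (C ≈ 0.009 c/GeV) implies δ(pT) ≈ C * pT^2 ≈ 0.9 GeV/c at pT = 10 GeV/c, i.e. δ(pT)/pT ≈ 9%, while the magenta curve (C ≈ 0.003 c/GeV) gives about 3% at the same pT.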

    Here is the same model run with only using every other padrow of the TPC:

     For the above "less hits" curves, C is approximately 50% higher in all cases:

    • Cyan and Green: 0.027
    • Blue: 0.013
    • Red: 0.008
    • Magenta: 0.0045

    Additional studies of the momentum resolution come from Yuri Fisyak for Monte Carlo simulations (similar to a thorough embedding study done using TPT in 2002 by Jen Klay) in the following plots for globals and primaries in AuAu200 and pp500 [with pileup] collisions. The comparison is best made with the blue [AuAu globals] and magenta [AuAu primaries] lines above. The data matches reasonably well with the "less hits" model curves, with the exception of an additional offset of about 0.5% for global tracks. The pileup in the pp500 simulation probably causes its results for primaries to have even further degraded resolutions than the red [pp primaries] lines above.

     

    AuAu200 [Run 10 FF setup]:

     

    pp500 [Run 9 RFF setup, with pileup]:


    Shown below are the DCA resolutions from the same model described earlier, using tracks with "nearly all" hits, for 

    • Cyan: model prediction for global (anti)proton tracks at half field
    • Green: model prediction for global pion tracks at half field
    • Blue: model prediction for global pion tracks at full field

     

    [note: I do not know what the dashed lines are]

    As can be seen, the field makes only a small impact on the DCA resolution. When using "good" (high quality) tracks for calibrations, I regularly find a mean DCA resolution for global tracks in full field of just under 3 mm.

    EPS versions of these images are attached.

     

    References:

    1. M. Anderson et al., Nucl. Instr. and Meth. A499 (2003) 659-678

    TPC speed upgrade 2022

     This page is set up to keep track of meeting documents for the task force work on the
    TPC speed upgrade for Runs 23-25.



    Status as of December 14, 2022
    Presentation from Tonko at collaboration meeting (pdf)

    TPC meeting June 22

    Update from Tonko on progress (pdf)


    Meeting March 24

    Discussion points sent out ahead of meeting (Flemming) pdf
    Progress slides - Tonko pdf
    minutes - Richard pdf




    First meeting February 17

    overview presentation and discussion (ppt)
    minutes (pdf)
    -
    after meeting Jeff posted the projected resource needs for handling data to S&C (pptx)


    Charge and Committee

     We would like to form the STAR TPC DAQ improvement task force. With the firmware changes on the TPC electronics, it is possible for us to double the TPC electronics readout rate at minimal cost. This will greatly enhance STAR's physics capability. Our past Beam Use Request has taken the planned upgrade into account. The long shutdown anticipated after Run 22 provides us an opportunity to make the firmware change and evaluate the impact on physics data analysis. The task force is charged to evaluate the readiness of the TPC DAQ improvement for Run 23 and beyond:

     

    1. What are the resources required to realize DAQ improvement?
    2. Where is the bottleneck for this upgrade? What are the risks?
    3. What software changes (online and offline) are required to accommodate the upgrade?
    4. What hardware and network changes are required to handle this upgrade?
    5. Evaluate the impact of proposed changes on physics capabilities.
    6. What is the timeline and path toward completion of this upgrade? 
    7. Report to management regularly and provide input to beam use request for Run 23 and beyond.

     

    The members are:

    Flemming Videbaek (co-Chair)

    Richard Witt (co-Chair)

    Zhenyu Chen

    Xin Dong

    Yuri Fisyak

    Carl Gagliardi

    Jeff Landgraf

    Tonko Ljubicic

    Chun Yuen Tsang

    Gene Van Buren

     

    Lijuan and Helen

     

    TPX high rate test

     

    TRG

    Inventory

    Module             Schematic                    Test Procedure   Req. Doc.
    QT32b              link, QT32b_schematic.pdf    file             file
    QT8                QT8_schematic.pdf
    DSM
    QT32c              link
    RCC2
    STP-PMC
    STP-Concentrator

    Trigger Detectors

    L2 documentation


    2008 pp run

    Details of L2-algos (triggers) as implemented for pp run in February of 2008

    Mapping used during data taking is in attachment 1)

     

    a) Overview (Jan, later Ross)

    The graph illustrates the key changes in the L2 processing scheme allowing for fast event aborts. Jan

    b) B+EEMC common calib (Jan)

    Common BEMC, EEMC calibration algos (do not abort)

    1. Goal: calibrates Barrel, Endcap towers, result is used by other L2-algo
    2. RTS params:
        int   par_dbg; // use 0 for real event processing
        int   par_gainType; enum {kGainZero=0, kGainIdeal=1, kGainOffline=2};
        int   par_nSigPed;    // ADC,  filters towers
        float par_twEneThres; // GeV, filters towers
        float par_hotEtThres; // GeV, only monitoring histos
    3. Class name: L2btowCalAlgo08 : public L2VirtualAlgo2008, and similar for ETOW; 1 instance of each.
    4. Event processing uses method calibrateBtow(int token, int bemcIn, ushort *bemcData); similar for ETOW
      Not used: void computeUser(int token); bool decisionUser(int token)
    5. Hardcoded params: mxListSize: BTOW =500, ETOW=200
    6. Details of Algo:
       ------- tower threshold definition------
         float adcThres=x->ped + par_nSigPed*fabs(x->sigPed);
         float otherThr=x->ped + par_twEneThres*x->gain;
         if(adcThres < otherThr) adcThres=otherThr;
      -------- computing ideal gains -------
       for(i=0;i<BtowGeom::mxEtaBin;i++ ){
          float avrEta=-0.975 +i*0.05; /* assume BTOW has fixed eta bin size */
          if(i==0) avrEta=-0.970;// first & last towers are smaller
          if(i==39) avrEta=0.970;
          btow.cosh[i]=cosh(avrEta);
          btow.idealGain2Ene[i]=par_maxADC/par_maxET/btow.cosh[i];
      }
      
      ---------- peds,gains, masking bad towers by threshold --------
          if (par_gainType!=kGainIdeal) return -102;
          geom->btow.gain2Ene_rdo[x->rdo]=geom->btow.idealGain2Ene[ietaTw];
          geom->btow.gain2ET_rdo[x->rdo]=geom->getIdealAdc2ET();
          
          geom->btow.thr_rdo[x->rdo]=(int) (adcThres);
          geom->btow.ped_rdo[x->rdo]=(int) (x->ped);
          geom->btow.ped_shifted_rdo[x->rdo]=(unsigned short)(par_pedOff - x->ped);
      
      -------- event loop-------
           for(rdo=0; rdo < BtowGeom::mxRdo; rdo++){
            if(rawAdc[rdo] < thr[rdo]) continue;
            if(nTower >= mxListSize) break; // overflow protection
            adc=rawAdc[rdo]-ped[rdo];  //do NOT correct for common pedestal noise
            et=adc/gain2ET[rdo];
            hit->rdo=rdo;
            hit->adc=adc;
            hit->et=et;
            hit->ene=adc/gain2Ene[rdo];
            hit++;
            nTower++;
            }
          btowCalibData.hitSize=nTower;
       
    7. Execution time:
      L2-btowCal08  Compute   CPU/eve MPV 42 kTicks,  FWHM=6
      Reference:
      L2:jet06-algo  CPU/eve MPV 57 kTicks,  FWHM=11
      
    8. Output files:
      * ASCII logfile: run8.l2BtowCal08.log
      * binary histo: run8.l2BtowCal08.hist.bin
    9. QA info in log file:
      #BTOW_hot tower _candidate_ (bHotSum=67 of 50000 eve) :, softID 1397 , crate 21 , chan 156 , name 07tj37
      #BTOW_token_QA:  _candidate_ hot token=2 used 13 for 50000 events, token range [1, 4095], used 4095 tokens
      
    10. QA plots : PDF

     

    c) high-energy-Filter (Jan)

    L2-high-energy-Algo ( 100% accept)

     

    L2-high-ene-algo (does not abort)

    1. Goal: preserves a subset of ETOW & BTOW towers with ET above a certain threshold.
    2. RTS params:
       
        int  par_dbg; // use 0 for real event processing
        int  par_maxList; // can't exceed hardcoded value of 150
        int  par_adcThres; // in ADC counts above peds
    3. Class name: L2hienAlgo08 : public L2VirtualAlgo2008, the same for ETOW; 2 instances.
    4. Event processing uses method
      void computeUser(int token); - build internal list
      bool decisionUser(int token, void **myL2Result); only for QA histos
      Output list of towers:
       int  hiSize=getListSize(token);
        const unsigned int  *value=getListData(token);
        printf("pr2-%s: dump %d acceted towers for token=%d\n softID   ADC-ped\n",getName(),hiSize, token);
        for(int ic=0;ic < hiSize;ic++,value++) {
          int adc=(*value) > > 16;
          int softID=(*value)&0xffff;
          printf("%4d %d, \n",softID,adc);
        }
    5. Hardcoded params: mxListSize: BTOW =ETOW=150
    6. Details of Algo:
       -------- event loop-------
         for(i=0; i < hitSize; i++,hit++) {
           if(hit->adc < par_adcThres) continue;
           if(hiTwEve->size >= par_maxList) break; // overflow protection
           int softID=mRdo2towerID[hit->rdo];
           (*value)= ((hit->adc)<<16) + softID; // store composite value
           hiTwEve->size++;
           value++; // increment index
         }
      Description:
      This algo selects high-energy towers from BTOW & ETOW data
       and takes advantage of the common calibration to be deployed
       at L2 in February of 2008.
      The common (for B & E) ADC threshold is defined
       in units of ADC above ped, i.e. in ET.
      The output of this algo is one list of pairs {adc,softID}.
      You need 2 instances of this algo to cover E- & B-EMC.
      Saving of those lists to disk is beyond the scope of this algo.
      SoftID is defined as follows:
      *BTOW : traditional softID [1-4800] used since the 20th century.
      *ETOW: range [0..719], eta index changes slower
          int ieta= (x->eta-1);
          int iphi= (x->sec-1)*EtowGeom::mxSubs + x->sub-'A' ;
          int softId= iphi+EtowGeom::mxPhiBin*ieta;

      There is a hardcoded limit on the max list length at 256 towers.
      In case of an overflow a random (not really) selection of towers will be added to the list until the software limit is reached.
    7. Execution time:
      L2:hienBtow08  Compute   CPU/eve MPV 1 kTicks,  FWHM=2
      L2:hienBtow08  Decision  CPU/eve MPV 0 kTicks
      L2:hienBtow08  Deci+Comp CPU/eve MPV 2 kTicks,  FWHM=3
      
    8. Output files:
      * ASCII logfile: run8.l2hienBtow08.log
      * binary histo: run8.l2hienBtow08.hist.bin
    9. QA info in log file: just timing
    10. QA plots : PDF

     

    d) ped-monitor (Jan)

    L2-pedestal monitor-Algo ( 100% accept)

    1. Goal: accumulate pedestals for all towers of ETOW & BTOW.
    2. RTS params:
       
        par_pedSubtr  =rc_ints[0]!=0;
        par_speedFact =rc_ints[1];
        par_saveBinary=rc_ints[2]!=0;
        par_dbg       =rc_ints[3];
        par_prescAccept=rc_ints[4];
      
    3. Meaning of speed 
      time spent in L2-ped is proportional to # of towers processed per event.
      
      speed=1 means for every event process all 4800+720 towers, takes 1mSec
      
      speed=2 means 
        for the first event process tower 1,2, 5,6, 9,10,...., takes 500 uSec
        for the second event process tower 3,4, 7,8, 11,12,...., takes 500 uSec 
        repeat
        So L2ped runs 2x faster but effectively you get 1/2 of stats for all towers.
      
      speed=4 means : 
          1st eve works with towers: 1,2,3,4, 17,18,19,20,... takes 250 uSec
          2nd eve works with towers: 5,6,7,8, 21,22,23,24,... 
          3rd  ...
          4th eve ...
          repeat
      
      You get the pattern. Allowed 'speeds' are common divisors of 4800 & 768.
      Allowed speed values: 1,2,4,8,16,32,64,192
      
      But even if you type something absurd, L2ped will correct it to the closest allowed value.
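
      For orientation, one way to realize the selection pattern in the speed=2 and speed=4 examples above is sketched below; the names and structure are hypothetical (this is not the actual L2ped code, and the real behavior, in particular for the largest speed values, may differ):

        // Illustrative sketch of the rolling tower selection described above.
        // Each event processes groups of 'speed' consecutive towers spaced speed*speed apart;
        // the groups shift by 'speed' towers from one event to the next.
        const int kNTowers = 4800 + 720;     // BTOW + ETOW, as quoted above

        void processPedEvent(int eventCounter, int speed) {
          int phase = eventCounter % speed;              // which slice this event covers
          for (int t = 0; t < kNTowers; t++) {
            if ((t / speed) % speed != phase) continue;  // tower not in this event's slice
            // ... accumulate the ADC of tower t into its pedestal histogram ...
          }
        }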
      

     

    e) gamma (B=Jan, E=Jason+Paul)

    RTS params

    2 int dbg, prescaleAccept
    4 float  seedEtThres, clusterEtThres, eventEtThres, isolationEtThres
    
    two sets for B & E.
    
    ======== ALGO =======
    
    1. Get list of towers provided by the L2 main shell
    2. Sort the list in descending E_T
    3. Loop over list until tower below seed threshold is found
    
       a. Find the highest group of 2x2 clusters adjacent to seed tower
       b. Form a cluster
       c. Save to list of clusters
       d. Mark all towers in 2x2 cluster as "used"
    
    4. Repeat loop until all seed towers are exhausted
    5. Loop over all clusters
    
       a. Find highest E_T cluster
       b. If ( cluster E_T > cluster threshold )
    
          i. Accept event
          ii. Store information in the L2Result array
    
              - mean eta-bin
              - mean phi-bin
              - tower ID for seed tower
              - a relative ID specifying which of the 4 possible tower
                clusters was chosen
              - ET of cluster
              - number of nonzero towers in cluster
              - an isolation E_T sum (placeholder for later, set = 0).
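
    As an illustration of steps 3a-3b above, the four 2x2 patches containing a given seed tower can be summed and the largest kept. The sketch below is schematic only: getEt(ieta,iphi) is a hypothetical accessor (assumed to return 0 for out-of-range or masked towers), and this is not the production L2 gamma code:

      float getEt(int ieta, int iphi);     // hypothetical tower-ET accessor

      float best2x2AroundSeed(int seedEta, int seedPhi) {
        float best = 0.f;
        // the four 2x2 patches containing the seed start at eta = seedEta-1 or seedEta,
        // and phi = seedPhi-1 or seedPhi
        for (int e0 = seedEta - 1; e0 <= seedEta; e0++)
          for (int p0 = seedPhi - 1; p0 <= seedPhi; p0++) {
            float sum = getEt(e0, p0)     + getEt(e0 + 1, p0)
                      + getEt(e0, p0 + 1) + getEt(e0 + 1, p0 + 1);
            if (sum > best) best = sum;
          }
        return best;     // compared against the cluster E_T threshold
      }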
    
    

     

    f) L2-jet (B+E) ver2006=Jan, ver2008=Will

    Params:

     // unpack params from run control GUI
      par_cutTag     =  rc_ints[0];
      par_useBtowEast= (rc_ints[1]&1 )>0;
      par_useBtowWest= (rc_ints[1]&2)>0;
      par_useEndcap  =  rc_ints[2]&1;
      int par_adcThr =  rc_ints[3]; // needed only in initRun()
      par_minPhiBinDiff=rc_ints[4];
    
      par_oneJetThr   =  rc_floats[0];
      par_diJetThrHigh=  rc_floats[1];
      par_diJetThrLow =  rc_floats[2];
      par_rndAccProb  =  rc_floats[3];
      par_dbg      =(int)rc_floats[4];
    

     

    2009 pp run

    •  pre-run 2009 L2 code release , January 15, 2009, by Ross 
    • L2 mapping used in 2009 data taking, 56 BTOW towers were swapped (attachment 1)

     

    0) lastDSM masks

    In order to accommodate the larger number of desired triggers in the old TCU, the following algorithms can now read in masks to be checked against the 128 lastDSM input bits before decision() is called:

    L2btowGamma

    L2etowGamma

    L2jet

    L2jetHigh

    L2upsilon

    L2btowW

     

    The masks are located in ~developer/emc_setup on the l2 machine, and must be named %s.lastDSM_mask, where %s is the name of the algorithm (subtract the 'L2' from the list above).  These files contain 8 integers that correspond to the 8 unsigned shorts in lastDSM[] in the trigger data.

    If TCU_type!=2 we are assumed to be using the old TCU, and at the start of each run the code that initializes each l2 algorithm will attempt to read these files.  If they can't be found, then the mask is assumed to be not used for that particular algorithm.

    Each time decision() is about to be called for one of the above algorithms, we first pass in the lastDSM array to be checked against these 8 unsigned shorts.  Each ushort in the array is byte-swapped so that the endian-ness of the two sets agrees.  Note that lastDSM[0] does not seem to correspond to input channel 0 in Eleanor's LD301 documentation.

    At latest look (Feb 27), the values in the array seem to be swapped in the same fashion as they are in the barrel code that looks at layer 2 of the DSM:

    lastDSM[0] corresponds to input channel 3

    lastDSM[1] corresponds to input channel 2

    lastDSM[2] corresponds to input channel 1

    lastDSM[3] corresponds to input channel 0

    lastDSM[4] corresponds to input channel 7

    lastDSM[5] corresponds to input channel 6

    lastDSM[6] corresponds to input channel 5

    lastDSM[7] corresponds to input channel 4

     

     

    01 changes to L2 algos from last year

     * verify every aborting algo is using the random accept prescale based on the prescale method and is randomly initialized upon start. I know L2-jet & L2-http-barrel are using rnd(); I suspect L2-http-endcap is not randomly initialized.  Check them all.

    * L2-jet needs change of jet size to 1x1.

    * experiment with L2-Wbos algo in the barrel, needs TOF unpacking

    * consider converting L2-http into L2-eta, for EMC calibration purposes; Jason may have a working prototype, but a slow version due to memset

    Jan

     

    1) L2Result Offsets

    Every algorithm has the opportunity to write out a small report into the L2Result array after each decision().  The sizes of these results are fixed within a year so that they don't overwrite others unexpectedly.  They should not be changed without talking to the L2 algorithm maintainer.  The starting position for each is defined in terms of the offset from the start of the L2Array.  The values during the run were as follows:

    Algorithm Offset
    (EMC_CHECK) 1
    EMC_PED 2
    Barrel Gamma 3
    Endcap Gamma 6
    Dijet 9
    Upsilon 17
    Barrel W 20
    Dijet (Higher Et) 25
    Endcap Hien 0
    Barrel Calib 0
    Endcap Calib 0
    Barrel Hien 0 (C2Result)

     

    a) L2emulL0_2009 (L0 hardware emulation)

    This algorithm is intended to provide access to the 16 lastDSM output bits, and should only be needed when the old TCU is being used.

    This algorithm reads in 5 ints from run control:

    [0] onbits0

    [1] offbits0

    [2] onbits1

    [3] offbits1

    [4] number of  onbits+offbits pairs

    The logic used is as follows:

      if((lastDSM & p->onbits[0]) != p->onbits[0]) return 0;
      if((~lastDSM & p->offbits[0]) != p->offbits[0]) return 0;
     

    repeated for [1] if the number of pairs is > 1.  A byte swap occurs before this logic is reached, so that both ushorts should have the same endian-ness.  If none of the 'if's fire, the algorithm returns 1.
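
    For illustration (the values are hypothetical, not taken from any run configuration): with onbits0 = 0x0001 and offbits0 = 0x0002, an event survives only if bit 0 of the byte-swapped lastDSM word is set and bit 1 is clear; any other combination makes one of the two 'if's above fire and the algorithm returns 0.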

    b) L2btowGammaAlgo (barrel gamma algorithm)

    This algorithm searches for localized energy distributions in the barrel calorimeter, and will not run if the BTOW is not in the run.

    This algorithm reads in 2 ints and 2 floats from run control:

    [i0] debug

    [i1] Random Accept Prescale

    [f0] Et threshold for seed

    [f1] Et threshold for cluster

     

    The logic used is as follows:

    Every tower with Et>f0 is added to a list.  The list is then checked in order, summing the cluster around it and returning Accept if sumEt>f1

    The Random Accept Prescale value sets the prescale for random accept, described elsewhere.

    If debug>0 debug messages are printed.

    c) L2etowGammaAlgo (endcap gamma algorithm)

    This algorithm searches for localized energy distributions in the endcap calorimeter, and will not run if the ETOW is not in the run.

    This algorithm reads in 2 ints and 3 floats from run control:

    [i0] debug

    [i1] Random Accept Prescale

    [f0] Et threshold for seed

    [f1] Et threshold for cluster

    [f2] Et threshold for event

     

    The logic used is similar to L2btowGamma, but etowGamma saves more information about each event.  All possible clusters per event are made (save for those that overlap other clusters) and added to a list if clusterEt>f1.  If at least one cluster has clusterEt>f2, the event is accepted.

    The Random Accept Prescale value sets the prescale for random accept, described elsewhere.

    If debug>0 debug messages are printed.

     


    The mapping is defined thusly

    >> tow = EtowGeom::mxEtaBin *phi + eta;

    tow = 0    ==>    01TA01
    tow = 1    ==>    01TA02
    ..
    tow = 718  ==>    12TE11
    tow = 719  ==>    12TE12

    12 consecutive towers are in the same slice in phi.
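
    As a cross-check of this mapping, the tower label can be reconstructed from the index. The sketch below is illustrative; it assumes 12 eta bins per phi slice and 5 subsectors (A-E) per sector, consistent with the tow = 0 and tow = 719 examples above:

      #include <cstdio>

      // reconstruct an ETOW label such as "01TA01" from the linear tower index (sketch only)
      void towLabel(int tow, char label[8]) {
        const int mxEtaBin = 12;
        int eta  = tow % mxEtaBin;        // 0..11
        int phi  = tow / mxEtaBin;        // 0..59
        int sec  = phi / 5 + 1;           // sector 1..12
        char sub = 'A' + phi % 5;         // subsector A..E
        sprintf(label, "%02dT%c%02d", sec, sub, eta + 1);
      }
      // towLabel(0, ...) gives "01TA01"; towLabel(719, ...) gives "12TE12".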

    d) L2hienAlgo (passive barrel and endcap high tower filter)

    This algorithm packages tower ADC values and soft IDs for use outside of L2.  It always accepts, and in the current configuration it cannot be attached to a particular trigger, but rather runs for each event.

    This algorithm reads in 3 ints, which are hard-coded in algo_handler.c:

    [i0] debug=0

    [i1] ADC threshold=60

    [i2] max L2hienList size=120

     

    The logic used is as follows:

    A switch is set in the constructor so that one instance of the L2hienAlgo reads barrel towers and the other endcap towers.

    For each calibrated tower, if ADC>i1 it is added to L2hienList (up to i2 towers total).  The barrel version then adds the first (up to) 40 towers (whether above i1 or not) to the L2hienResult, which is written to C2Result in l2new.

    If debug>0 debug messages are printed.

    e) L2jetAlgo (barrel and endcap combined jet algorithm)

    This algorithm searches for jet-like energy distributions in the barrel and endcap calorimeters, and will run even if one or the other is not in the run.

    This algorithm reads in 1 int from run control:

    [i0] setup version, which controls which of ~developer/emc_setup/algoJet2009.setup* is used to set the parameters for the algorithm

    This algorithm reads in 5 ints and 9 floats from the selected file:

    [i0] What parts of the BEMC to use (1=East, 2=West, 3=both)

    [i1] Whether to use the EEMC (>0 = yes)

    [i2] par_adcThr (unused?)

    [i3] debug

    [i4] Random Accept Prescale

    [f0] Et threshold for single jet

    [f1] par_diJetThrHigh

    [f2] par_diJetThrLow

    [f3] par_diJetThr_2

    [f4] par_diJetThr_3

    [f5] par_diJetThr_4

    [f6] par_diJetThr_5

    [f7] par_diJetEtaHigh

    [f8] par_diJetEtaLow

     

    The logic is fairly complex.  Brian Page is the maintainer of the jet code, so questions should be directed to him.

    The Random Accept Prescale value sets the prescale for random accept, described elsewhere.

    If debug>0 debug messages are printed.

    f) L2pedAlgo (barrel and endcap pedestal monitor)

    This algorithm records the ADC values from the ETOW and BTOW for pedestal studies.

    This algorithm reads in 5 ints from run control:

    [i0] ped subtraction flag

    [i1] speed factor

    [i2] 'save binary' flag

    [i3] debug

    [i4] prescale for accept

     

    The logic used is as follows:

    Each event, ADCs from 1/i1 (where i1 is rounded off to be 1,2,4,8,16,32,64, or 192) of the barrel and endcap are histogrammed by rdo.  The selected regions roll, so that [0,1/i1] is histogrammed the first event, [1/i1,2/i1] the next, etc.

    If i0>0, instead of histogramming the raw ADC, the pedestal is subtracted first, allowing the residual to be seen.

    if i2>0 the entire histogram is saved at the end of the run rather than just keeping [-10,100] ADC.

    If the prescale for accept is set greater than zero, then the algorithm no longer fills histograms, and simply does the following for every event:

        if((rand()>>4) % par_prescAccept ) return false;

        return true;

    This is NOT the same as the procedure for the Random Accept Prescale.
     

    If debug>0 debug messages are printed.

    g) L2upsilonAlgo (barrel upsilon algorithm)

    This algorithm searches for upsilons in the barrel, and will not run if BTOW is not in the run.

    This algorithm reads in 1 int from run control:

    [i0] setup version, which controls which of ~developer/emc_setup/algoUpsilon2009.setup* is used to set the parameters for the algorithm

    This algorithm reads in 4 ints and 9 floats from the selected file:

    [i0] prescale

    [i1] Whether to mask hot towers dynamically

    [i2] how many events to take between updates to the dynamic mask

    [i3] Random Accept Prescale

    [f0] fL0SeedThreshold

    [f1] fL2SeedThreshold

    [f2] fMinL0ClusterEnergy

    [f3] fMinL2ClusterEnergy

    [f4] fMinInvMass

    [f5] fMaxInvMass

    [f6] fMaxCosTheta

    [f7] fHotTowerTheshold

    [f8] fThresholdRatioOfHotTower

     

    The logic is fairly complex.  Haidong Liu is the maintainer of the upsilon code, so questions should be directed to him.

    The Random Accept Prescale value sets the prescale for random accept, described elsewhere.

    h) W-algo BTOW

    L2 W-algo for BTOW, p+p 500 GeV , 2009 run

     

    This algorithm searches for energy depositions in the barrel, and will not run if BTOW is not in the run.

    This algorithm reads in 2 ints and 2 floats from run control:

    [i0] debug

    [i1] Random Accept Prescale

    [f0] Et threshold for seed

    [f1] Et threshold for cluster

    The logic is as follows:

    For each tower with Et>f0, each 2x2 tower patch around it is summed.  As soon as one of these 2x2 patches has Et>f1, we accept the event.

    The Random Accept Prescale value sets the prescale for random accept, described elsewhere.

    if debug>0, various debug information will be printed.

     

    Run time params
    2 int    dbg, randomAcceptPrescale 
    2 float  seedEtThres, clusterEtThres  (GeV)
    
    This algo works only with BTOW ADCs
    
    ======== L2 W ALGO =======
    
    1. Get list of towers w/ energy provided by the common L2 calib algo
    2. Form smaller list of seed towers above seedTH
    3. Loop over seed towers
       a. Find the highest cluster ET out of 4 groups of 2x2 towers containing the seed tower
       b. if Et > clusterET accept event, set trgBit=2, do not try any other cluster
    
    4. try random accept 
       - set event counter at random start
       - increment by 1 for every input event
       - issue random accept if (eventCounter %  randomAcceptPrescale == 0)
       - if accepted set trgBit=1
    5. Accept event if trgBit is 1 or 2
    6. Store information in the L2wResult[.] array
       - seed eta-bin
       - seed phi-bin
       - seed ET
       - cluster ET
       - trgBits
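
    For orientation, the random accept in step 4 amounts to a simple counter; the sketch below is illustrative only (the names are hypothetical, and this is not the production code):

      #include <cstdlib>

      static int eventCounter = 0;

      void initRunRandomAccept(int randomAcceptPrescale) {
        eventCounter = rand() % randomAcceptPrescale;       // start the counter at a random phase
      }

      bool randomAccept(int randomAcceptPrescale) {
        eventCounter++;                                      // incremented for every input event
        return (eventCounter % randomAcceptPrescale) == 0;   // if true: accept and set trgBit=1
      }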

     Access to L2W results in muDst, also to oflTrigIds

      StMuEvent* muEve = mMuDstMaker->muDst()->event();
    
      TArrayI& l2Array = muEve->L2Result();
      unsigned int *l2res=(unsigned int *)l2Array.GetArray();
      printf(" L2-jet online results below:\n");
      int k;  for (k=0;k<32;k++) printf("k=%2d  val=0x%04x\n",k,l2res[k]);
      const int BEMCW_off=20; // valid only for 2009 run
      L2wResult2009 *out1= ( L2wResult2009 *) &l2res[BEMCW_off];   
      L2wResult2009_print(out1);
    
      StMuTriggerIdCollection *tic=&(muEve->triggerIdCollection());
      assert(tic);
      const StTriggerId &l1=tic->l1();
      vector<unsigned int> idL=l1.triggerIds();
      printf("nTrig=%d, trigID: ",(int)idL.size());
      for(unsigned int i=0;i<idL.size();i++) printf("%d ",idL[i]);
      printf("\n");

    Simulation of L2-W-Algo for Filtered QCD & W events , LT=~10 /pb.

    Used thresholds: seedET>9 GeV, clusterET>14 GeV


    Fig 1. LT=10/pb. Left: W events, right: QCD events

     

    Fig 2. Uniformity of L2-W trigger for accepted QCD events, LT=10/pb.
    It is lower only at the physical edges of barrel at eta=+/-1 due to energy leak and smaller tower size.


    Below you will find full set of L2 monitoring plots for both sets of events


     


    L2 Documentation: 2014 Comments

    The most current documentation can be found on Kevin Adkins blog page. It's not the 2014 comments any longer (2015 at this point!), though this page still bears that name. One should remove the following URL and add a more current one if this documentation is updated.
    Current L2 Documentation as of Run15

     This documentation includes a few things:

    - A detailed guide to L2 monitoring
    - A detailed guide to monitoring the L2 pedestals
    - A detailed guide to updating the L2 tower masking file
    - A few comments about the L2 algos and how they work together

    This document will probably be in other places too, but it should be safe here.

    content of setup file for L2 algos

     This description refers to 2011 setup, before moving L2 to HLT

    The real L2 algos read all peds and status tables from l2ana01, user=developer, dir=emc_setup (or something similar). Only the files with the word 'current' in the name are read in. Those are symbolic links pointing to the real files. All other ped and status files, with some date attached, are kept only as a record.

     

     Those are all setup files needed by L2 algos:

     [developer@l2ana01 emc_setup]$ ls *current
     btowCrateMask.current  etowCrateMask.current  pedestal.current
     btowDb.current         etowDb.current         towerMask.current

    Common format: all lines starting with '#' are ignored as comments. Every file has a short description of its content. One needs to be familiar with the EEMC DB to understand the details.

    btowCrateMask.current and etowCrateMask.current are used to mask out whole crates or individual towers. They are edited by hand as needed during the run. Typically everything is unmasked at the beginning of each run.

    pedestal.current is produced by the L2-ped algo under a different name (e.g. run12058037.l2ped.log) after every run. Once the person monitoring the L2 pedestal residua decides the pedestals used by L2 have drifted too far, pedestal.current is re-linked to the fresh runXXX.l2ped.log. Typically twice per week.

     

    btowDb.current and etowDb.current contain the full DB mapping for both EMCs. Those files are produced once per year, by running $STAR/StRoot/StEEmcDbMaker/StEmcAsciiDbMaker.cxx on the offline STAR DB. If no mapping was changed in the offline DB there is no need to regenerate those 2 files.

     

    convert L2 bin histo to root histo

    Compile this stand-alone code:

     $STAR/StRoot/StTriggerUtilities/L2Emulator/macros/binH2rootH.c 

    It needs L2Histo.cxx and L2Histo.h from

     $STAR/StRoot/StTriggerUtilities/L2Emulator/L2algoUtil/

    and CERN ROOT.

    It does not need STAR root4star.

     

     

    offline framework description

    How to compile & run the full suite of L2 algos offline.

    
    1) take the current copy deployed at the real L2 and manage to compile and run it offline.
    Run over 100K events (takes ~1 minute).
    2) fire the macro producing the full set of L2-jet plots, so it will produce a PDF with all plots for these 100K events. Only then can you check whether the L2-jet algo works.
    3) Next, add your changes to L2jet, compile, run, view the plots.
    I'll ask Ross to take care of the I/O for the new params (cuts) you have added/changed to select the jets you want. Just make sure all new params are _variables_ defined in the .h and have names starting with
    parWill_cut1, .... (see the sketch below)
    You just set values in initRun() and recompile each time you want to change them.
    Once you are happy with the cuts, tell me where this version is.
    Make sure to run it on the full 2006 event file (~600K events).
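
    As a concrete illustration of the parWill_ convention above, here is a hedged sketch; the class name and the initRun() signature are assumptions made for illustration, so check the actual code in StRoot/StTriggerUtilities/L2Emulator/L2jetAlgo/ for the real declarations.

      // In the algo's .h file (class name is illustrative only):
      class L2jetAlgo {
      public:
        void initRun();                  // called once per run
      private:
        float parWill_cut1;              // new user cut, declared as a member variable
        float parWill_cut2;              // add more parWill_ variables as needed
      };

      // In the .cxx: hard-code the values here and recompile each time you change them.
      void L2jetAlgo::initRun() {
        parWill_cut1 = 5.0;              // example value only
        parWill_cut2 = 0.5;              // example value only
      }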
    
    

    Detailed instruction

    1. create a new directory at rcas6nnn and copy the recent version of code
      mkdir aaa
      cd aaa
      cp -rp  XXX/onlineL2 .
      cp -rp  XXX/StRoot .
      (ask Ross for actual location of both directories)
      
    2. compile full version of library (no exec yet)
       
      cd onlineL2
      make lib
    3. compile & exec the main code
       make
      m 5000
    4. To modify L2jet finder code open another terminal on rcas6nnn
       
      xterm&
      cd ../StRoot/StTriggerUtilities/L2Emulator/L2jetAlgo/
      edit the code you want
      make 
      make install
      go back to the primary terminal in "onlineL2" and type:
        touch multiTest.c
        make
      Now you can run 'm' again
    5. to view plots convert hist.bin to hist.root and run root macro (for the moment all output files are stored with the name: out5/run8...)
       binH2rootH-exe out5/run8.l2jet.hist.bin out5/run8.l2jet.hist.root
      root.exe plJetsL2.C
      Now you can display one page at a time by typing within the root session
      plJetsL2(1)
      plJetsL2(2)..
      Note, for L2jet algo the valid pages are: {1,2,3,4,5,6,10,20,21,22}

     

    production of binary events for L2 algos

     Here is a prescription for converting any muDst to the binary event format needed to run the L2 algos stand-alone (via multiTest.C).

    Let me start with a warning: this procedure requires execution of some additional code not relevant to this task (the production of binary event files), but I wanted to reuse the existing offline trigger simu code and there was no consensus on how to rearrange it.

    The goal is to produce events consisting of two data blocks, BTOW & ETOW, equivalent to the raw data saved in the daq file.

    1. Prescription to produce R9067013.eve.bin from muDst from  run9067013 :
      mkdir aaa
      cd aaa
      mkdir out2
      stardev
      cp $STAR/StRoot/StTriggerUtilities/macros/rdMu2binL2eve.C .
      cp /star/institutions/mit/balewski/2008-L2events/R9067013.lis .
      root4star -b -q rdMu2binL2eve.C >& L1 &  
      
      You need no private code, just the .C macro set up to read the run-list; the muDst must still exist on the disk (fix it if needed).
    2. Inspect the .C macro to find out how to change the # of processed events and the # of muDst files to read in. If you use muDst from M-C you need to activate the useMC switch in the macro and decide on the time stamp - this is one of the irrelevant things I warned you about. The current default is reasonable.
    3. Output: the binary event file (R9067013.eve.bin) will show up in the 'aaa/' directory. The content of binary events is different for real data vs. M-C. This can be fixed later (by Ross?), but for now you must take it into account when reading such events with multiTest.C
      • binary events for real data contain raw BTOW & ETOW data blocks. This is certainly true for the Endcap; I'm less sure for the barrel, since I'm forced to use StEmcADCtoEMaker and I do not know if it removes the ADCs for masked towers. You need to figure it out. The main point is that for real data you will see 4800+720 non-zero ADCs before pedestal subtraction. This is exactly what the L2 algos receive online.
        Note, your L2-algo will receive data after ped subtraction.
      • binary events for M-C contain only fired towers; peds are not added, nor smeared. Therefore, you must use different L2-pedestal files when reading such events, with peds set to 0 - otherwise the common L2-calib algo will subtract peds anyway.

        This could be changed if one properly activates the slow simu for BTOW & ETOW - talk to Ross. The advantage would be that identical L2-peds & status tables could be used for M-C events and for real data. The disadvantage of using the slow simu is that you must set the DB time stamp during production of the binary events and have L2-peds for exactly the same time stamp - or you will end up with residual energies seen by the L2 algos.

         

    4.  QA output: look at the file aaa/out2/run9067013.l2ped.out; it will show the pedestal residua using the online L2 peds for R9067013. This is one of those confusing steps. While producing binary events, rdMu2binL2eve.C also runs the L2-ped algo (I can't turn it off due to code dependencies), but L2-ped does NOT affect the content of the output binary event file. However, you can use the L2-ped output to check whether reasonable ADCs have been found in the muDst.
      This is an ASCII file; at the top there are 720 Endcap pedestal residua:
      #L2-ped algorithm finishRun(9067013), compiled: Jan 30 2008 , 23:25:07
      #params: pedSubtr=1 speedFact=8 saveBin=0 debug=0 prescAccept=0
      #L2H10,6,total event counter; x=cases,100,13,13,0,0,0,
      #L2ped  CPU/eve MPV 27 kTicks,  FWHM=2, seen eve=100
      # L2ped-Adc spectra, run=9067013, Z-scale is ln(yield), only first digit shown;  maxPedDev=5  table format:
      # name, ped, sigPed, crate, chan, softID-m-s-e, RDO_ID;
      #                                   ADC spectrum: [-20 ...  <=-10 ... *=0  ...  >=+10 ... :=+20 ... 100],  Xaxis=ADC - ped + 20
       01TA01   0  1.3 0x01 0x38         P01TA01  336 :.........<.........2211......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA02   1  0.9 0x01 0x39         P01TA02  342 :.........<........1121..1....>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA03   2  0.9 0x01 0x3a         P01TA03  348 :.........<.........2121.1....>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA04   0  0.9 0x01 0x6d         P01TA04  654 :.........<.......112121......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA05   0  0.9 0x01 0x6c         P01TA05  648 :.........<........1211.......>......1..:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA06   0  1.3 0x01 0x6e         P01TA06  660 :.........<........122........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA07   0  0.9 0x01 0x6f         P01TA07  666 :.........<........1311...1...>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA08  -1  1.3 0x01 0x3b         P01TA08  354 :.........<........221........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA09   0  1.3 0x01 0x3c         P01TA09  360 :.........<........1221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA10   0  0.9 0x01 0x3d         P01TA10  366 :.........<........222........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA11   1  0.9 0x01 0x3e         P01TA11  372 :.........<.........231.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA12   0  0.9 0x01 0x3f         P01TA12  378 :.........<........1311.......>.........:...1.....:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB01   0  1.7 0x01 0x40         P01TB01  384 :.........<.......1121111.....>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB02   0  0.9 0x01 0x41         P01TB02  390 :.........<.......1121111.....>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB03   0  1.7 0x01 0x42         P01TB03  396 :.........<........222........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB04  -1  0.9 0x01 0x70         P01TB04  672 :.........<.......1221........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB05   0  0.9 0x01 0x71         P01TB05  678 :.........<.......112.1....1..>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
      
      
      Once you scroll about 1000 lines, you will see 4800 barrel pedestal residua:
       01tg22   0  1.3 0x1d 0x19 id0122-004-1-02  754 :.........<.........221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg23   0  1.7 0x1d 0x1a id0123-004-1-03  784 :.........<........1222.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg24   1  0.9 0x1d 0x1b id0124-004-1-04  814 :.........<........2121.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg25   2  0.9 0x1d 0x38 id0125-004-1-05 1684 :.........<.......121121......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg26   0  1.3 0x1d 0x39 id0126-004-1-06 1714 :.........<........1221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg27   1  0.9 0x1d 0x3a id0127-004-1-07 1744 :.........<........1231.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg28   0  1.3 0x1d 0x3b id0128-004-1-08 1774 :.........<......1..221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg29   1  1.3 0x1d 0x58 id0129-004-1-09 2644 :.........<.......1122.1......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg30   1  1.3 0x1d 0x59 id0130-004-1-10 2674 :.........<.......11221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg31  -1  0.9 0x1d 0x5a id0131-004-1-11 2704 :.........<........21111......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg32  -1  1.7 0x1d 0x5b id0132-004-1-12 2734 :.........<.......2221..1.....>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg33   1  1.3 0x1d 0x78 id0133-004-1-13 3604 :.........<........1221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg34  -1  1.3 0x1d 0x79 id0134-004-1-14 3634 :.........<.......122.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg35  -1  1.7 0x1d 0x7a id0135-004-1-15 3664 :.........<........222........>.........:..1......:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg36   0  1.3 0x1d 0x7b id0136-004-1-16 3694 :.........<........2211.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg37   1  1.3 0x1d 0x98 id0137-004-1-17 4564 :.........<........1221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg38   0  0.9 0x1d 0x99 id0138-004-1-18 4594 :.........<........1221.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01tg39 -20 -0.4 0x1d 0x9a id0139-004-1-19 4624 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01tg40   0  1.3 0x1d 0x9b id0140-004-1-20 4654 :.........<.......1221........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01th01   0  1.3 0x0e 0x8b id4460-112-1-20 4187 :.........<........122.1......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01th02 -20 -0.4 0x0e 0x8a id4459-112-1-19 4157 3.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01th03  -2  2.2 0x0e 0x89 id4458-112-1-18 4127 :.........<......12221........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01th04  -1  1.3 0x0e 0x88 id4457-112-1-17 4097 :.........<......112211.......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01th05   0  2.2 0x0e 0x6b id4456-112-1-16 3227 :.........<.......112211......>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
      
      The above lines are for real data, where the peds seen by the L2-ped algo match the real pedestals.

      If you display the same file for M-C events, you will see just the high-energy ends of the ADC spectra, because at the moment binary events for M-C do not include pedestals.

      #                                   ADC spectrum: [-20 ...  <=-10 ... *=0  ...  >=+10 ... :=+20 ... 100],  Xaxis=ADC - ped + 20
       01TA01 -17  0.9 0x01 0x38         P01TA01  336 :..3......<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA02 -12  0.9 0x01 0x39         P01TA02  342 :.......3.1.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA03 -20 -0.4 0x01 0x3a         P01TA03  348 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA04 -20 -0.4 0x01 0x6d         P01TA04  654 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA05 -20 -0.4 0x01 0x6c         P01TA05  648 31........<........1*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA06 -20 -0.4 0x01 0x6e         P01TA06  660 31........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA07 -20 -0.4 0x01 0x6f         P01TA07  666 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA08 -20 -0.4 0x01 0x3b         P01TA08  354 3.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA09 -12  0.9 0x01 0x3c         P01TA09  360 :.......3.<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA10 -20 -0.4 0x01 0x3d         P01TA10  366 31........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TA11  -4  0.9 0x01 0x3e         P01TA11  372 :.........<.....1...*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TA12 -17  0.9 0x01 0x3f         P01TA12  378 :..3......<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB01 -20 -0.4 0x01 0x40         P01TB01  384 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TB02 -15  0.9 0x01 0x41         P01TB02  390 :....1....<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB03 -20 -0.4 0x01 0x42         P01TB03  396 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TB04 -16  0.9 0x01 0x70         P01TB04  672 :...3.....<1........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=0
       01TB05 -20 -0.4 0x01 0x71         P01TB05  678 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
       01TB06 -20 -0.4 0x01 0x72         P01TB06  684 :.........<.........*.........>.........:.........:.........:.........:.........:.........:.........:.........:......... qa=?
      

     

     

     

    Simulator Documentation

    The STAR trigger simulator is designed to

    1. reproduce trigger algorithms in offline software that can be run on Monte Carlo or real data ("online" mode)
    2. allow for use of improved calibration coefficients, pedestals, status tables, etc. to "clean up" trigger spectra ("offline" mode)
    3. aid in the development of new triggers ("expert" mode)

    It should provide a common framework for simulators of all trigger detectors.

    How to Run

    Ideally, this could be as simple as adding StTriggerSimuMaker to your chain and then asking

    trigSimu->isTrigger(trigId);

    in your analysis Maker.  Because the trigger simulator is still in development this is not always the case.  Here is a "straight from real-life" description of how to get it running:

    ( don't ask me, I'm just writing the docs .... )
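
    Since the details change from year to year, the following is only a minimal, hedged macro sketch (not the official example) of what such a chain might look like. The library loading, the StMuDstMaker arguments and the trigger ID are assumptions to adapt to your own setup; in practice the simulator also needs database access and detector-specific libraries, so consult the working macros in StRoot/StTriggerUtilities/macros.

      // runTrigSimu.C -- hedged sketch, to be run with root4star.
      void runTrigSimu(const char* muDstList = "some.lis", int nEvents = 1000,
                       int trigId = 0 /* put the trigger ID you want to test here */) {
        gROOT->LoadMacro("$STAR/StRoot/StMuDSTMaker/COMMON/macros/loadSharedLibraries.C");
        loadSharedLibraries();
        gSystem->Load("StTriggerUtilities");   // library with StTriggerSimuMaker and the subdetector simulators

        StChain* chain = new StChain("trigSimuChain");
        StMuDstMaker* muMk = new StMuDstMaker(0, 0, "", muDstList, "MuDst", 1000);  // read the muDst list
        StTriggerSimuMaker* trigSimu = new StTriggerSimuMaker();  // configure subdetectors / DB access as needed

        chain->Init();
        for (int i = 0; i < nEvents; i++) {
          chain->Clear();
          if (chain->Make() != 0) break;       // 0 == kStOK
          if (trigSimu->isTrigger(trigId))     // the call documented above
            printf("event %d: trigger %d fired (simulated)\n", i, trigId);
        }
        chain->Finish();
      }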

    Code Structure

    Everything lives in StRoot/StTriggerUtilities.  StTriggerSimuMaker is the "head Maker" and owns all of the subdetector simulators.  Each subdetector simulator inherits from StVirtualTriggerSimu, which requires an implementation of isTrigger(int trigId).  This means that a user can

    1. ask the head Maker if the trigger fired for this event.
    2. ask the head Maker for a pointer to a subdetector simulator, and then ask this subdetector if the event passed the trigger.  For instance, a simple BEMC HT trigger is typically run in coincidence with a minbias trigger from the BBC, ZDC, or VPD.  By querying the BEMC and BBC/ZDC/VPD separately a user can find out why a particular event might have failed an HT trigger (i.e. was it because the HT threshold wasn't satisfied, or was it because the minbias condition failed?). A hedged sketch of this is shown right after this list.
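
    For illustration only, here is a hedged continuation of the sketch in "How to Run": the public pointer names bemc and bbc are assumptions (check StTriggerSimuMaker.h for the actual member names), and the two trigger IDs are placeholders.

      // Inside the event loop of the sketch above:
      int htTrigId = 0;   // fill in the BEMC HT trigger ID you care about
      int mbTrigId = 0;   // fill in the minbias (e.g. BBC) trigger ID
      bool htOK = trigSimu->bemc->isTrigger(htTrigId);   // HT condition alone (StVirtualTriggerSimu interface)
      bool mbOK = trigSimu->bbc->isTrigger(mbTrigId);    // minbias condition alone
      if (!htOK && mbOK)
        printf("HT threshold not satisfied, minbias OK\n");
      else if (htOK && !mbOK)
        printf("HT OK, minbias condition failed\n");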

    Subsystem Simulators

    Detailed documentation of the subsystem simulators follows below:

    BBC

    Detailed docs for the BBC trigger simulator

    BEMC

    Detailed docs for the BEMC L0 simulator.

    EEMC

    Detailed docs for the EEMC L0 simulator.

    L2

    L2 simulator details.

    ZDC

    Hardware and software related to ZDC

    VPD

     Vertex Position Detector (VPD)

    Procedural Notes and Run Operations

    TOF noise rate pattern

     

    VPD Operations (HV Scan Instructions)

    [from Isaac Upsal's instructions to star-ops, dated May 17, 2023]

    If stable collisions are available and the BBC (ZDC) is ready to use as a trigger then we can take the VPD gain scan. The procedure is as follows:
    Make sure that you are using a **non-VPD-based** Trigger for all data taking.

    First :
    On the sc3 machine there needs to be a "small GUI" for the VPD with 4 buttons - upVPD HV A, upVPD HV B, upVPD HV C, and Default. If it is not open, or you are unsure, open a terminal and execute "upvpd_settings". I just opened this ~an hour ago.
    The "large GUI" shows all the channel values and is where one powers them on/off.
    The "small GUI" selects between the different gain sets A, B, C, & default.

    start with the VPD HV **OFF**

    1) On the small GUI choose the HV Gain setting *A*
    2) On the large GUI power on the VPD HV
    3) Make sure that the upper-left corner (channel 13-0) of the large GUI reports a voltage of 1101 / 1251 / 1401 for gain settings A, B, and C respectively.
    4) Wait about a minute or two for the VPD PMT channel voltages to settle
    5) Start a run: TRG+DAQ+TOF (it is OK to include other detectors if you want, but those three are required), 200k events. Use the string "VPD HV A" (or B, C as needed) in the shiftlog entry.
    6) Use the "large GUI" to power off the VPD HV
    7) Wait for all channels to read a voltage of 0

    Repeat these steps for each of the gain settings (A, B, C) - making the appropriate entry in the shiftlog to identify which gain setting is used (upVPD HV A, upVPD HV B, upVPD HV C ).

    Once this is complete, use the "small GUI" to select "default" then power the VPD HV back on. Now you can close the "small GUI".

    More information on the software and analysis workflow can be found at this GitHub repository: https://github.com/RiceU-HeavyIons/BTOF-online/tree/master/gainfit


    VPD Run14 HV Map

     

    VPD Run15 HV Map

     

    VPD Run16 HV Map

     

    VPD Run17 HV Map

     

    VPD Run18 HV Map

     

    VPD Run22 HV Map

     

    Vpd Simulation Database Table

     Variables:
        short tubeRes[38];          // The resolution of a single Vpd tube in picoseconds to be simulated.
        octet tubeId[38];           // The offline ID (0-37) of the Vpd tube.
        octet tubeStatusFlag[38];   // Flag for whether the tube was active (True) or not (False) in a given Run.
        octet tubeTriggerFlag[38];  // Flag for whether the tube was triggered on (True) or not (False) in a given Run (again, simulation). Note that there will be overlap with tubeStatusFlag, but there could be additional False entries as well.

     

    Frequency:

    This table will be updated whenever the Calibrations_tof::vpdTotCorr table is updated, generally once per RHIC Run

     

    Index:

    The table is not indexed.

     

    Size:

    The cumulative size of the 4 arrays is 190 bytes (38 × 2 bytes for tubeRes plus 3 × 38 × 1 byte for the three octet arrays).

     

    Write Access:

    jdb - Daniel Brandenburg (Rice University)

    nluttrel  - Nickolas Luttrell (Rice University)

     

    See below for the full .idl file.

    VpdConfig.idl:

    /* vpdSimParams.idl
     *
     * Table: vpdSimParams
     *
     * description: Simulation Parameters for use in StVPDSimMaker
     *
     * author: Nickolas Luttrell (Rice University)
     *
     */

    struct vpdSimParams{
        short tubeRes[38];          /* Single Tube Resolutions for Vpd tubes in picoseconds */
        octet tubeID[38];           /* Vpd Tube ID, values of 0-37 */
        octet tubeStatusFlag[38];   /* Flag for whether tube was active (True) or not (False) */
        octet tubeTriggerFlag[38];  /* Flag for whether tube was triggered on (True) or not (False) */
    };

    /* End vpdSimParams.idl */
