Analysis Blinding Committee Meetings

March 21, 2018 (4:30 PM Eastern/3:30 PM Central)

  • Doodle poll for regular meeting time: https://doodle.com/poll/72aumyv7bkbu45rq
  • Updated ABC membership
  • New role for ABC. From Helen's e-mail on Feb. 9, 2018
  • In my discussions with Jim and others it is clear that we need someone
    outside of the PWG and blind analysis analyzers to oversee things. I am
    therefore also writing to ask that the ABC continue and serve in this role.
    I know this is more than you signed up for when agreeing to serve on the
    ABC, so understand if you prefer to step down at this point. However, this
    group has the clearest view of how the procedure should proceed so you
    are the best group to ensure the steps are adhered to as closely as we can.
    Jim has agreed to stay on as Chair. I also ask that those of you on the
    committee planning to sign up for a blind analysis step down, so there
    is no conflict of interest.
    
    
  • First looks at Q/A and trigger plots
    • A few plots (here and here) pose a challenge for the event mixing
    • Other discussion from early stages of on-going Q/A

January 11, 2018 (2 PM Eastern/1 PM Central)

  • Discussion of Recommendation Draft version 0
  • Lanny's comments:
    This is a very good draft to get us started.  At this time I have some
    general comments and suggestions for the document which we should 
    try to discuss and clear up at tomorrow's meeting.
    
    Exec. Summary:  Perhaps a bit more information about what is supposed
    to happen in steps 1 and 2 would be helpful. At the end we might add a
    sentence about the "after unblinding" stage.
    
    Blinding Techniques section:
    
    The text should, and more or less does, follow the chronological order
    envisioned for the analysis. However, it doesn't say how bad runs are handled
    and doesn't discuss the special handling needed for calibrations. The latter
    is discussed later (middle of p. 4) and should be moved to the early part
    of this section. The bad run removal procedure should also be added. Bad run
    removal cannot be done during step #1 since the events are scrambled, even
    if the time-dependence is minimized. It could be done during step #2; however,
    if bad runs are included in step #1, then that analysis step is compromised.
    
    Why is the 4th bullet after heading "Assuming we will" included?
    
    One thing we discussed, but which is not in this draft, is who controls the
    decoding key (Jerome's functions F and G). Also, do we rely on CAD alone
    to keep track of the run number and isobar information or do we want a
    copy to be maintained by STAR?  I think we all agreed that someone in STAR
    must have control of the run-isobar information. Who this is and how it is
    to be done and safeguarded should be in our recommendations.
    
    In a few places we refer to the "first step of unblinding," which I assume is
    what happens at the start of step#2, using true data run files but hiding the
    isobar labels.  This phrase should be carefully defined in the final draft.
    
    The Mock Data Challenge section is very clear and well motivated. The
    MC sensitivity study is vague, however. Are we "recommending" this or merely
    suggesting it? I think it is important in order to evaluate the efficacy of
    each analysis.
    
    "After unblinding" section:
    Based on the comments I've heard from the CME presenters, I think getting
    agreement on the CME measurements between multiple groups is relatively
    straightforward. The problem has been, and likely will be, interpreting the
    results. How does STAR disseminate multiple interpretations of essentially
    the same measurements such that the community does not become suspicious
    of selection bias or suppression, or become confused and dismissive?
    I realize that this issue was not in our charge; however, it is a good idea
    to alert the council and management that this problem is on the horizon.
    
  • Ernst's comments
  • Happy New Year! I am still reading the draft write-up, and agree with Lanny
    that it is very good to get us started. Let me nevertheless send a few initial
    comments now. First, regarding MC, my take on our discussion is that it
    would serve not only “for checking the integrity of analysis codes”
    (as stated in the note) but in particular also to quantify sensitivity. Second, I
    would like to propose a change in the section “after unblinding”
    to the recommendation “The committee recommends that the collaboration
    consider announcing the results from a blind analysis simultaneously with
    submission of the papers to the journals and preprint arXiv.” I also
    propose to add a recommendation that all papers from 2018 state explicitly
    whether the analysis was blind, in the sense of the one-and-only blinding
    procedure STAR adopts (it would be good to make this explicit somewhere
    as well).
    

Meeting Coordinates

To join the Meeting:
https://bluejeans.com/569548190

To join via Room System:
Video Conferencing System: bjn.vc -or- 199.48.152.152
Meeting ID: 569548190

To join via phone:
1)  Dial:
	+1.408.740.7256 (United States)
	+1.888.240.2560 (US Toll Free)
	+1.408.317.9253 (Alternate number)
	(see all numbers - http://bluejeans.com/numbers)
2)  Enter Conference ID: 569548190

December 21, 2017

  • Jim's slides
  • Jie's slides
  • Jie's comment:
    Is it possible for the detector non-uniform acceptance correction to be done
    as a basic data calibration (a track-level weight, or <cos nφ>/<sin nφ> terms),
    e.g. as part of the TPC calibration, or during the picoDst production? That
    would make most STAR data analyses much easier, more or less like a model
    data analysis.
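
The <cos nφ>/<sin nφ> terms mentioned in the comments correspond to the standard recentering correction for non-uniform acceptance. A minimal sketch of how such a track-level correction could be applied (a hypothetical illustration, not STAR calibration code; the function name and arguments are assumptions):

```python
import math

def recentered_unit_vector(phi, n, mean_cos, mean_sin):
    """Recenter a track's unit flow vector u_n = (cos n*phi, sin n*phi)
    by subtracting run-averaged <cos n*phi>, <sin n*phi> terms, which
    encode the detector's non-uniform azimuthal acceptance."""
    return (math.cos(n * phi) - mean_cos,
            math.sin(n * phi) - mean_sin)
```

For a perfectly uniform detector the mean terms vanish and the vector is unchanged; storing the means at the calibration or picoDst level, as suggested above, would let every analysis apply the same correction.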
    
  • Sergei's followup:
    I support Jie's proposal to calculate the quantities listed in his slide
    5, under the "TPC subevents" section. I would add the fourth harmonic there,
    and clarify that all those quantities are needed separately for the positive-
    and negative-eta subevents. I would calculate not the mean but just the flow
    vectors (though if all multiplicities are known it is not that important...).
    
    My only "worry" is that those quantities should be calculated using
    "good" tracks, so it is not clear whether they can be calculated at the
    level of dst production.
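
Keeping flow vectors Q_n = (Σ cos nφ, Σ sin nφ) rather than means, for harmonics up to n = 4 and separately for the positive- and negative-eta subevents, can be sketched as follows (a hypothetical illustration; the function name, the eta-gap handling, and the upstream "good"-track selection are assumptions):

```python
import math

def subevent_q_vectors(phi, eta, eta_gap=0.0, harmonics=(1, 2, 3, 4)):
    """Per-event flow (Q) vectors for positive- and negative-eta subevents.

    phi, eta: lists of track azimuth and pseudorapidity ("good" tracks,
    selected upstream). Returns {(subevent, n): (Qx, Qy, M)} with the
    subevent multiplicity M, so means can still be formed later."""
    out = {}
    for label, keep in (("pos", lambda e: e > eta_gap),
                        ("neg", lambda e: e < -eta_gap)):
        tracks = [p for p, e in zip(phi, eta) if keep(e)]
        for n in harmonics:
            qx = sum(math.cos(n * p) for p in tracks)  # sum, not mean
            qy = sum(math.sin(n * p) for p in tracks)
            out[(label, n)] = (qx, qy, len(tracks))
    return out
```

Because the multiplicity is stored alongside each Q vector, the means <cos nφ> and <sin nφ> can be recovered at any later stage, which matches the point that storing sums loses nothing if all multiplicities are known.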
    
  • Input from participants
    • From Ernst: inserted comments/questions directly into writeup
    • From Wei:
      - Assuming our recommended procedure is considered acceptable to the physics
      groups, would it be possible to do a mockup exercise using MC events (e.g. AMPT)?
      Since we are establishing a new procedure, analyzers may not be able to foresee
      some of the complications until they actually give it a try. I think this is one
      of the reasons we have received limited feedback so far. We may also have missed
      things in preparing such a procedure. Of course, MC samples won't have the signal
      we are looking for, but I don't think that's relevant.
      
      - About unblinding conditions: you mentioned that "Code checked into CVS
      and passes review". My question is how exactly the codes will be reviewed.
      Does that only require someone else to run the code and produce the same
      results as the main analyzers? After unblinding, the result should go straight
      to publication without further tweaking (except for detailed wording of the
      paper). What exactly will be frozen will have to be put in the context of the
      physics message of the paper (to some extent, the paper draft can also be
      prepared beforehand). So I think the GPC review should begin before unblinding,
      and the GPC should review and approve the analysis to be unblinded. Once the
      GPC is happy, the analysis should also be presented to the collaboration for
      approval of unblinding. There shouldn't be time pressure on the GPC, but they
      should be made aware of the importance of the analysis. This is how blind
      analyses are handled in CMS, although each blind analysis is handled
      independently there. In our case, I think it is possible to synchronize
      several CME analyses to go together.
      
      Maybe this is already clear, but I just want to make sure (my understanding
      from talking to Zhangbu was that we will essentially be the GPC of the CME
      analyses).
      
      - It is crucial to establish and document all systematic uncertainties, or at
      least the exact procedures for obtaining them for all sources that should be
      considered, before unblinding, because they will drive the interpretation one
      way or the other. This is also a reason that analyses need a full review by
      the PWG and GPC, instead of just a code review, before unblinding.
      
    • From Jiangyong
      One question to think about before today's meeting: whatever recommendations
      are produced by the ABC committee on the blinding procedure/requirements,
      should we require them to be "signed off" or explicitly agreed to by the
      majority of the CME analyzers (to avoid issues/mistakes that appear later on)?
      
  • Other relevant links

Meeting Coordinates

To join the Meeting:
https://bluejeans.com/796847846

To join via Room System:
Video Conferencing System: bjn.vc -or- 199.48.152.152
Meeting ID: 796847846

To join via phone:
1)  Dial:
	+1.408.740.7256 (United States)
	+1.888.240.2560 (US Toll Free)
	+1.408.317.9253 (Alternate number)
	(see all numbers - http://bluejeans.com/numbers)
2)  Enter Conference ID: 796847846