2023 ops meeting notes

Notes from STAR Operations Meeting, Run 23

August 7, 2023

RHIC Plan:

Shutdown early.


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet trimWest tripped again”

Evening shift: Cosmics

“Expert managed to bring the magnet back around 17:05.”

Owl shift: Cosmics

 “Smooth cosmics data taking during the whole night, no issues.”

Other items:

“I stopped TPC gas system ~8:10 at circulation mode and started high Ar flow. Magnet is down.”

“I started N2 flow for TOF, MTD and eTOF systems.”

“We turned off EPD and currently we are turning off VME crates”

“I powered down btow & gmt01 DAQ PCs. For now.”

Tonko will shut down iTPC and TPX after the meeting (leaving 1 for tests). Schedule time with Christian for maintenance.

Jeff will keep 1 or 2 evbs up but tomorrow will shut the rest down.

Cosmics summary: 17% of runs bad. Final count: 51M events (1.8x what Yuri wanted)

Shifters need to stay until end of morning shift (and help experts with shutdown). Officially cancel evening shift.


August 6, 2023

RHIC Plan:

Shutdown early.


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet trimWest tripped, called the CAD, they will try to bring it back” - no details

“Now, FST is completely shut down.”

“Alexei arrived, he solved the TPC oxygen alarm (gap gas O2) and confirmed that west laser does not work.” - will work on it tomorrow; will look at east laser today

Evening shift: Cosmics

“Magnet trimWest tripped; called the CAD.”

“Power dip and magnet dip around 10 PM.”

“TR[G] components are blue, but when all the components are included, the run won't start. When we only include bbc and bbq, the run can start but DAQ Evts stays zero. DAQ: multiple VMEs are bad, including VME1; we masked out all the bad VMEs.”

Owl shift: Cosmics

“L0 seems to have some issues, as Tonko also noted in the ops list; we rebooted the L0L1 VME, but still could not start a run after that — the daq was stuck in the configuring stage.”

Other items:

“GMT gas bottle was changed.”

“Alarm handler computer was completely stuck, we had to hard restart the machine.”

“We powercycled L0 crate once more and tried to run pedAsPhys with TRG + DAQ only and it worked.”

“Trigger rates were high; I called Jeff and he helped me realize that the majority of trigger nodes had been taken out and I needed to include them.”

5 hours of good cosmics (25M of the 30M goal so far, ~1M/hr) — tomorrow morning will communicate with the SL and start purging first thing, assuming we hit the goal. If a detector is not part of cosmic running, its purge can start earlier. sTGC will be done Monday.

Advice to shifters: power cycling a VME crate a few times is fine; after 3 or 4 cycles, something might genuinely be wrong.

Tomorrow, after the end of the run, all trigger crates and all flammable gases will be turned off.


August 5, 2023

RHIC Plan:

Shutdown early. (See email forwarded to STARmail by Lijuan at 3:30 PM yesterday for more details.)


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Magnet is ramped up.”

“Temperature in the DAQ room is low enough; Tonko and Prashanth brought machines back. The moving cooler in the DAQ room is turned off so the repair crew could monitor how the AC runs.”

“We turned on TPC, TOF, MTD and GMT for the cosmics”

“Tried to include L4 to the run, l4evp seems to be off”

“Alexei fixed the laser, both sides now work.”

Evening shift: Cosmics

“Will Jacobs called that he turned off the EEMC HV and LV to the FEE. We should leave EEMC out of the running over the weekend.”

“Trim west magnet tripped around 7:30 PM; called 2024 at 10:00 PM. They brought back the trim west magnet.” (Will follow up this evening) — these runs were marked as bad

Owl shift: Cosmics

“West camera is not showing anything” (Flemming sees no tracks) → “Both sides were working for us”

Other items:

Need to make sure shifters don’t come in.


August 4, 2023

RHIC Plan:

Decision coming later today (fix starting in a week and resume vs. end and start early [STAR’s position]). Once official, will inform next shift crews.


Notable items/recap from past 24 hours:

Day shift: No data

“Magnet polarity is switched but the magnet is not ramped up yet.”

“MIX VME seems to have some hardware problem” -> fixed during the evening shift [Tim power cycled and cleared a memory error on the fan tray]

Evening shift: No data

“Nothing to report”

Owl shift: No data

“Nothing to report”

Other items:

Magnet up → waiting for DAQ room AC to be fixed this morning (hopefully) [UPDATE: fixed] → DAQ room computers turned back on → cosmics for 1.5-2 days → end Monday and purge → week after next, things coming down

Looks like we’re out of water again in the trailer


August 3, 2023

RHIC Plan:

No official decision yet. Likely end of tomorrow. Nothing changes (shift crews, etc.) until we have that info.


Notable items/recap from past 24 hours:

Day shift: No physics

Travis: “calibrated star gas detection system”

“etof_daq_reset command now works”

“FST Cooling was refilled. Reservoir level was filled from 66.6% to 90.4%. Swapped from pump 2 to pump 1.”

“We put the detectors into safe states to prepare for the transfer switch test. Magnet is ramping down right now.” -> “The test is done and VMEs are back with David's help.”

“To reduce heat load while the DAQ Room A/C is offline, I'm starting to shut down DAQ computers at this time (almost everything in the DA Rack Row is a candidate for shutdown).”

“DAQ computers which were shut down by Wayne: tpx[1-36] except tpx[14], which is not remotely accessible (dropped out of Ganglia at ~12:40 pm - possible hardware failure?); itpc[02-25]; fcs[01-10]; EVB[02-24]”

Tim: “Replaced QTD in EQ3 with the unused QTD in EQ4”

“BCE crate: DSM1 board in slot 10 (Id9) and slot 11 (Id10) are swapped. Board address changed accordingly.”

Evening shift: No physics

Tonko “shut down even more DAQ machines; all stgc, all itpc, all tpx, all fcs, all fst, tof, btow, etow.”

Jeff and Hank fixed the trigger problems mentioned last time.

SL had a medical emergency and was transported to hospital. Thanks to Daniel for coming a bit early to take over. I will take her shift tonight.

Owl shift: No physics

Nothing to report

Other items:

Magnet polarity flipping today: 2 - 3 hours starting now. Will run cosmics for 1.5 - 2 days.

AC work yesterday, ongoing today. DAQ room still hot. Computers will not be turned back on unless this is fixed.

Just use TPC, TOF, MTD, BEMC


August 2, 2023

RHIC Plan:

Today: maintenance. Tomorrow - rest of run: ?


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs + cosmics

At about 12:30, helium leak at 4 o’clock (blue ring — fixed target not possible either). Developing situation — we may get the decision to end the run within the next few days. JH advocates reversing polarity for two days after this maintenance before ending (because we couldn’t get it done before/during the run). STAR PoV: given data-taking efficiency and machine status, the best benefit comes from shutting down and saving funds for next year. There are 4 months between the end of this run and the beginning of the next one. Discussion point raised by Lijuan: how long do we need for cosmic data taking? Switch polarity immediately after maintenance, for 2 to 3 days. Prashanth will talk to Jameela. When polarity is switched, Flemming will talk to Yuri.

Evening shift: Cosmics

“MCR called that due to the failure they won't be staffed over the night. In case anything happens, we need to call 2024”

Owl shift: Cosmics

“There was an alarm on a VME on the first floor platform (cu_vme62_minus12volts, cu_vme62_plus12volts, cu_vme62_plus5volts & cu_vme62_fanspdm_nms). So we turned on VME62 in the first floor platform control, and the alarm stopped.”

“we had `L1[trg] [0x8002] died/rebooted -- try restarting the Run` critical message in the DAQ, then lots of `Error getting event_done client socket` messages. Also, vme-62_lol1 alarm sounded, DOs restarted crate. We rebooted all in the DAQ, then did the etof restart procedure as well.”

Summary: “had daq issues which we were not able to solve during the night; trigger was showing 100% dead (see details in shiftlog). We tried rebooting crates, first only BBC, then all of them one by one, but it did not solve the issue.” — Ongoing problem… To make sure the TCD is OK, do pedAsPhys_tcdonly with trigger and DAQ. Tonko thinks something is wrong with BBC.

Other items:

Modified ETOF procedures in detector readiness checklist and printed out/uploaded new ones (ETOF critical plot instruction, Canbus restart procedure also updated)

Should crate 54 still be out? — 54 is part of the old GG (control). And can be left off, yes.

Accesses? Tim for EQ3-QTD. Gavin: “Te-Chuan and I plan to refill the FST cooling system during the access tomorrow.” Alexei: west laser. Tim & Christian swapping BE-005, BE-006 to isolate the 10 missing trigger patches which come and go.

Will make a list of detectors needed for cosmics and reduce shift staffing now. SL can decide (SL+DO minimum until gas watch mode).

DAQ room temperature going up while the AC is being worked on today.


August 1, 2023

RHIC Plan:

Today: physics. Wednesday: maintenance (7:00 - 16:00). Thursday - Monday: physics.


Notable items/recap from past 24 hours:

Day shift: Cosmics + mostly smooth physics running

“We tried to powercycle EQ3 crate and reboot trigger, the purple parts in the EPD plots belong to eq3_qtd and the orange to eq3.” — EQ3 problem seems to be fixed. EQ3_QTD problem won’t be until the board is swapped. Pedestals were not being subtracted correctly when qtd died

Evening shift: Cosmics + physics

“Two attempts at injection failed at late stages; a third one made it to PHYSICS ON, but it lasted only about two hours”

Owl shift: Mostly smooth physics running

“ETOF critical plot had a new empty strip in Run 24213007, after run was stopped DOs followed the restart instructions, we rebooted ETOF in the daq [etof_daq_off], critical plots look fine in Run 24213008. Note: it should be clarified if this is indeed the right thing to do, because it takes more than 5 minutes between the runs which could be used for data taking.” — should be done between fills, as instructions say. Update: SL wrote an entry in the shift log clarifying the ETOF procedures.

“The very first physics run of the new fill (Run 24213004) was a complete 45 minute run without any noticeable issue; however, strangely, it only shows about 244K events (much less than the usual ~10M). Also, Run 24213012 was a complete 45 minute run, and it shows about half of the expected events, around 4.5M”. Database issue? Rate was fine. Talk to Jeff (out for the week). Flemming: if a run is marked as good before counting is finished, it shows a wrong number.

Other items:

“we just started the last big water bottle”

Another medical issue with SL trainee (SL starting today), but will hopefully not miss any shift.

“L3 Display: strange issue with lots of tracks [clusters?] at 7 o'clock in some events” (changeover checklist from owl shift) [check 24212006]

Large beta* test for sPHENIX (normal for STAR) with 12 bunches, lower lumi. Normal physics run after that. Update: sPHENIX requested no-beam time after that normal fill for 4 hrs.

Accesses tomorrow: Tim [removing bad board, EQ4 put in]


July 31, 2023

RHIC Plan:

Today-Tuesday: physics. Wednesday: maintenance


Notable items/recap from past 24 hours:

Day shift: Cosmics

“eq3_qtd is still out” — affects EPD. Hank is looking. Christian will swap in a QTD, or take the one out of EQ4 (which is not being used and configures fine), during Wednesday’s maintenance. Up to Hank. Haven’t heard back from Chris this morning.

ETOW: “_crate_ 1 lost its ID and so results [from] that crate are junk.”

“sTGC yellow alarm for pentane counter; called Prashanth. He said that we should monitor it, and if it changes rapidly, we should call him again.”

Evening shift: Physics

“PHYSICS is ON @ 7:40 pm. Finally”

“low luminosity as it is almost 6.5 kHz at STAR ZDC.” — voted to dump. Refilled with higher rates ~ 13 kHz.

Owl shift: Physics

“Stopping the run did not succeed; attached is the trigger status (everything is in the ready state on the webpage, including trigger).” “[E?]Q2 was in an incorrect state — it was at least a communication issue, and EQ2 needed a reboot, which could have been tried from the slow controls GUI (1st floor control platform), but Jeff did it from the command line. He also said in such a case (after realizing this is a trigger issue) a trigger expert could also have been contacted.” — procedure: reboot; power cycle if necessary; call Hank.

“There are two empty bins in BTOW HT plot. We saw it earlier today, too. This issue seems to come and go.” — be005 blank. No idea of cause of problem or of recovery right now.

“TPC:The iTPC #cluster vs sector QA plot has a hot spot for sector 19 (attached). This issue has persisted since the beginning of this fill (run 24211047)” — max # of clusters is a bit smaller in that sector. Has been going on the whole run and is not an issue.

“DO switched Freon line from line A to line B following an alarm that said that the pressure went below 5 psi.”

Other items:

Shifters doing better; one DO trainee returned to shifts, one may return today. Both seem set to assume their duties as DOs next week, with affirmative statements from their SLs.

Methane: identified methane source — 18 cylinders before running out, good for rest of run. (Also 2 bottles from national labs).


July 30, 2023

RHIC Plan:

Sunday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

“They have problems with injecting blue ring and need short access”

Evening shift: Cosmics

Storm => “Magnet trip at ~8:25”; “VME crates 63, 77 and 100 tripped…Lost Connection to BBC, VPD and EPD but we believe that this is because they all use BBC LeCroy. Will try to restore connections soon. TPC FEE were off before the storm.”

Owl shift: Cosmics

Persistent “ETOW: Errors in Crate IDs: 1 -- RESTART Run or call expert if the problem persists.” message, which continued after a load write and read on the individual FEE crates and a master reload. ETOW seemed to be recording normal data, so they kept it in the run. “Tonko said this issue should be fixed for physics.” — he suggested power cycling the crate, but the crew didn’t know how to do it. Oleg may know how if Will doesn’t respond. Corruption means stale data. Update: the DO from today’s morning shift was able to fix the problem by following the manual’s instructions for power cycling before the load write and read. They think the instructions could be updated to be a bit clearer.

Other items:

Another DO trainee had a health problem and needed to stay home from this owl shift. Will update with any developments. DO trainee from evening shift is back from the hospital resting for a few days. Hopefully will be able to take her DO shift next week as normal. Need to verify their capabilities before they would start as DOs next week.

Jim suggests a “Weather Standdown [w]hen a thunderstorm is reported to be approaching BNL”. Will be implemented.

From this shift: “l2new.c:#2278 Most timed out nodes : EQ3_QTD::qt32d-8 (2000)” ”We were not able to bring back EQ3_QTD, restarted the EQ3 crate multiple times and rebooted the triggers. When I try to start the run after the reboot, error message says Check detector FEEs. Contacted Mike Lisa, he will bring it up at 10 o'clock meeting. Right now we started run without eq3_qtd.” David Tlusty has been contacted about a button not working for restarting the crate (#64). Alternative with network power switches? Not just QTD affected, but entire crate. VME board not coming back up. May need access. Update: now can turn it on in slow controls, but STP2 monitor says it’s off. Akio couldn’t be reached about this, and eq3_qtd remains out.

Alexei made an access for the laser (laser run was taken and drift velocity and other plots look good, but west laser is not working and will require longer access on Wednesday), but DOs have been informed and will pass on that only east camera should be used. Alexei also looked at EQ3: not responding. Will send Hank an email after trying a hard power cycle. Seems to still be on but not communicating.

Primary RHIC issues: power supplies; power dip on Thursday; magnet in ATR line is down. Weather looks better for the next week.

New procedure: “After rebooting eTOF trigger (or rebooting all triggers)[,] in etofin001 console (eTOF computer) command "etof_daq_reset". It should be typed after bash.” This is now written on a sticky note by the ETOF computer and Norbert is contacting Geary about adding it to the ETOF manual.


July 29, 2023

RHIC Plan:

Saturday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

Tim: “replaced compressor contactor for STGC air handler. Compressor now runs SAT.”

“Only subsystem which is not working now is the laser”

Evening shift: Cosmics

“one of the main magnet @ AGS has tripped and they are going to replace it”

“MCR changed the plan as they have a problem with one of the booster magnet”

“Alexei came around 8:00 pm and he fixed the east side camera, but not the west as he needs an access in order to fix it.” (not during night shift, after Saturday 20:00)

“…event display…shows the cosmic rays but not the laser tracks.”

Owl shift: Cosmics

“Laser run at 7:15 AM, the drift velocity plot is empty” (leave it out for now)

Other items:

Related to SGIS trip: Removed Prashanth’s office number from expert call list. JH printed signs now posted in the control room with an instruction of what to do in the case of an alarm. Shift leaders have been briefed on the procedure.

“Noticed that EVB[6] is put back, there is no info about it in the log.” — since it seems to be working, leave it in.

DO trainee from evening shift had medical emergency. Shift crew from this current shift is with her at hospital. For this week, can operate without DO trainee, but she has two DO weeks (Aug 1, Aug 15). Will hopefully get an update on her condition today and plan accordingly.


July 28, 2023 

RHIC Plan:

Friday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Mostly smooth physics runs + Cosmics

“EVB1 stopped the run, was taken out for further runs, Jeff was notified.” (Can put it back in the run; was actually a small file building problem)

“Temperature in the DAQ room was high in the morning; experts went to the roof and half-fixed the problem. They need access for a longer time. Prashanth brought another portable fan and the temperature is now ok.”

Evening shift: Cosmics

“6:41 pm at flattop; then unexpected beam abort…problem with the power supply”

“magnet trips and the TPC water alarm fires…A few minutes later the water alarm system fires in the control room…MCR informed us there is a general power issue and many systems have tripped…slow control systems are down”

Owl shift: No physics

“We tried to bring back all the subsystems over the night.” Ongoing problems: “Laser: No, called Alexei…TOF: No, cannot reset CANBUS need to call Geary, already called Chenliang and Rongrong…MTD: same as TOF…ETOF: No…sTGC: No, air blower problem, Prashanth is aware” (Tim is currently checking on it; will let Prashanth, David know when it’s done)

“MCR is also having multiple issues with bringing back the beam"

Other items:

Thanks to experts (Jim, Oleg, Prashanth, Chengliang, Rongrong, Chris, anyone else I missed) for help during the disastrous night

Clear instructions for shift leaders: call the global interlock experts on the call list, and turn off everything water cooled on the platform. These have been written down, and the PC (or outgoing SL) will talk to each shift leader, walking them through logging in and doing it.

Bring back TOF first (Geary will look at it after this meeting), laser second, …

Experts: if your device is on network power switch, send David email with the information so he can upload list to Drupal


July 27, 2023

RHIC Plan:

Thursday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Cosmics

“Run restarted ETOF>100 errors” (multiple times) + “Tried eTOF in pedAsPhys_tcd_only - failed, excluded eTOF”

“Temperature in DAQ room still slightly rising, needs to be monitored.” (as of 9:30: room around 84 F; high for next 3 days: 89, 91, 90). 90+ is danger zone => shutdown
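The temperature rule above (around 84 F is tolerable but must be watched; 90+ F is the danger zone forcing a shutdown) can be sketched as simple threshold logic. This is a hypothetical illustration only — the thresholds are the ones quoted in these notes, but the function name and return values are invented, not any actual STAR monitoring tool:

```python
# Hypothetical sketch of the DAQ-room temperature policy from the notes.
# Thresholds: ~84 F observed and "needs monitoring"; 90+ F => shutdown.

WATCH_F = 84.0    # level observed on July 27; monitor closely
DANGER_F = 90.0   # 90+ F is the danger zone => shut down DAQ computers

def daq_room_action(temp_f: float) -> str:
    """Return the action implied by the notes for a given room temperature."""
    if temp_f >= DANGER_F:
        return "shutdown"
    if temp_f >= WATCH_F:
        return "monitor"
    return "ok"
```

With the forecast highs quoted above (89, 91, 90), this sketch would flag two of the next three days for shutdown.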

Evening shift: Cosmics + mostly smooth physics running

“I had to stop this run due to a critical message from evb01: daqReader.cxx line 109 states "Can't stat '/d/mergedFile/SMALLFILE_st_zerobias_adc_24207054_raw_2400013.daq' [No such file or directory]"” (also happened this morning; Jeff is looking into it.)

“When the beam is dumped, take a pedAsPhys_tcd_only run with TOF, MTD, ETOF, 1M events, and HV at standby, and mark the run as bad, per Geary's request via the star-ops list. If there are no ETOF EVB errors and no trigger deadtime, then ETOF can be included in the run when the beam is back again.”

Owl shift: Mostly smooth physics running

“The run was stopped due to unexpected beam abort and FST HV problem (error 2).”

ETOF check mentioned above was attempted; not enough time to complete before beam returned.

“itpc 9, RDO2 was masked out”

Other items:

Roof access scheduled for next Wednesday, with no beam, for AC servicing. Prashanth will ask an expert to come look at it before Wednesday (today?) to determine if a half-hour access (at end of this fill, ~ 11:00) is needed or not. [UPDATE: AC techs are going to do a roof access after the fill.] Reflective covers for windows in the assembly hall could also be used.
If it gets too hot, we might need to do an unscheduled stop.

Longer term: is there any computing that doesn’t need to be done there? Could maybe take some of L4 offline.


July 26, 2023

RHIC Plan:

Today: APEX “Plan A” = 7:00 - 23:00. Affected by power supply failure — decision by 12:00. Thursday—Monday: Physics


Notable items/recap from past 24 hours:

Day shift: Mostly smooth physics runs

“Lost beam around 3:20 PM, and had a bunch of trips on TPC, FST, TOF.”

“The DAQ room temp. kept going up. Prashanth put a blower in the room, but the temperature needs to be monitored.”

Evening shift: No beam

“Only a cosmic run with the field on during the entire shift…A machine issue, namely the power supply failure, is still under investigation”

Owl shift: Cosmics

“The JEVP server seems to have a problem and is stuck at run 24207007” — “Jeff fixed the online plots viewer.”

Other items:

“Controlled access started around 8:40 AM. C-AD electricians went in to reset the fuses on a faulty AC.”


July 25, 2023

Notes from RHIC plan:

• Today: Physics run

• Wed: APEX

• Thu-Mon: Physics runs


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs before noon + 1 beam for sPHENIX BG test (2 hrs)

• Jeff: Updated production_AuAu_2023 and test_HiLumi_2023 configuration files:

production: increased UPC-JPsi & UPC-JPsi-mon from 50->100hz (nominal rates 100->200)

test_HiLumi: 1. set phnW/E to low rates; 2. removed BHT1-vpd100; 3. remove forward detectors from dimuon trigger; 4. set upc-main to rate of 100hz; 5. set upc-JPsi and UPC-JPsi-mon to ps=1

• Jim: PI-14 Methane alarm (Yellow); switched Methane 6 packs on the gas pad; added Alexei's magic crystals to TPC gas system which help enhance the Laser tracks

• Magnet down (2:00pm)

Evening shift: Smooth physics runs

Owl shift: Smooth physics runs

• EEMCHV GUI shows one red (chn 7TA) and two yellow (4S3, 3TD) channels.

 MAPMT FEE GUI is all blue in the small one, and all red in the detailed view.

 However, no apparent problem seen in the online monitoring plots

• EPD PP11 TILE 2345 had low ADC values. Rebooted EQ3, TRG and DAQ, and took trigger pedestals; the issue was fixed.

Other items:

• Outgoing PC: Zaochen Ye --> Incoming PC: Isaac Mooney

• Methane 6-packs were ordered at the beginning of the run, but will discuss offline

• Water bottles are empty; get some from the other trailer room


July 24, 2023

Notes from RHIC plan:

• Today: Physics run + single beam experiment (for sPHENIX BG test) around noon (~1 hour)


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs

• BTOW-HT plots have missing channels near trigger patch ~200. Oleg suggested rebooting the trigger; we rebooted, but the problem persists. Hank called and suggested that we power cycle the BCE crate; we power cycled it, but the problem persists.

• TOF Gas switched PT1 Freon line B to line A

Evening shift: Smooth physics runs

• Jeff called in and helped us fix the L4Evp.

• It was not working because:

1. l4evp was not included in the run. It was not clearing from the "waiting" state because it had been disabled from the run, so when L4 was rebooted, l4evp was NOT rebooted. Putting it back in the run fixed this.

2. xinetd is used in the communication between the Jevp and the DAQ server. It was in an inconsistent state, so I restarted xinetd.

Owl shift: Physics runs with a few issues

• Beam dumped around 2:20am due to power dip issue

• Magnet went down, VME crates went down as well

• TPC cathode was unresponsive; power cycling the VME crate associated with the cathode (57) fixed the issue

• The LeCroy that goes to BBC/ZDC/upVPD went down. DOs restarted the LeCroy, and BBC and upVPD came back; the ZDC IOC was still not good. There were 2 screen sessions running LeCroy; killing both and restarting the IOCs fixed the issue.

• Back to physics around 5am.

Other items:

• Gene: “Distortions from Abort Gap Cleaning on 2023-07-21”

• MB DAQ rate dropped from 41k to 37k (due to TPC deadtime), now back to 41k

• High-lumi test, next week?


July 23, 2023

Notes from RHIC plan

• Today-Monday: Physics run


Notable items/recap from past 24 hours: 

Day shift: Smooth physics runs

• Empty areas in the eTOF digi density plot; Geary suggests a full eTOF LV/FEE power cycle + noise run during a 2 hour access.

Evening shift: 3 physics runs + a few issues

• MTD HV trip for BL4,5,6,7 before flattop. DO power cycled HV blocks 4-7 following the manual and fixed the issue

• Online QA plots were not updating; restarting the Jevp server from the terminal on the desktop near the window fixed it

• L4 had an error: l4Cal, l4Evp, and L4Disp were not responding, preventing the run from starting. Tried rebooting L4, but it did not work. Jeff Landgraf helped work on the issue. In the meantime, L4 was taken out and data taking restarted.

• Once l4Evp is sorted out by Jeff, the issue will be fully solved.

• BBQ from the L2 trigger had a problem: "Most timed out nodes : BBQ (2000)". The DO could not power cycle it because the GUI was not responding; Jeff power cycled it. The DO contacted expert David, who restarted the CANbus to fix the GUI.

Owl shift: Smooth physics runs when beam is on

• Beam lost twice (2:27-4:00am, 7:25-9:15am)

Other items:

• MB rate drop (from the previous normal 4100 Hz to the current 3700 Hz); Jeff should check the prescale. Affected by the UPC trigger? Dead time from TPC?

• Oleg: need to replace a DSM board? Hank: no need to do it. Oleg and Hank will follow up offline.

• BG level at the beginning of a run is too high, triggering lots of trips/current spikes in different detectors (sTGC, MTD, TOF, eTOF…). Solution: wait for “physics” (not “flattop”) to bring up detectors.

• Geary: to minimize eTOF's effect on data taking for physics runs (rest eTOF for a while; Geary will talk to eTOF experts to get a solution on this), temporary solution: leave eTOF out when it has an issue and wait for eTOF expert notice to include it in the run.


July 22, 2023

Notes from RHIC plan

• Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

• Loss of EPD connection (did not affect EPD data taking); the connection later came back.

• TOF gas is running low; the gas change will be this Sunday. Shifts should pay special attention.

• DAQ room AC stopped working. Experts replaced the problematic unit.

Evening shift: Smooth physics runs

• Alexei came, worked on the TOF gas (isobutane)

Owl shift: Smooth physics runs

Other items:

• The shift leader slot for the July 25 day shift is filled


July 21, 2023

RHIC plan: 

Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

Evening shift: Smooth physics runs

FST: HV alarm (failure code 2). DO followed the power cycle procedure and fixed it.

Masked evb01 out

DAQ dead time was noticed 20 minutes later than it should have been; shifts need to pay more attention to it.

Owl shift: Smooth physics runs 

Other items:

eTOF operation should not cost any physics run time; Geary shared new instructions

Operation with continuous abort gap cleaning (maybe every hour) is possible; we should have a plan for data taking under this condition.

A shift leader is missing for the week of July 25

Bill can help a few days and Dan will get a solution today

Run log is not working well

More attention is needed on the deadtime from DAQ


July 20, 2023

RHIC plan: 

Today-Monday: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Maintenance

Jeff fixed the Run Control GUI issue by rebooting the X server

sTGC gas, re-adjust the pressure

Eleanor performed CosmicRhicClock test run 24200043

Evening shift: No beam due to (sPHENIX TPC laser work + power supply issue) 

Owl shift: Smooth physics runs from 3am 

Other items:

DAQ rate at high-lumi runs is ~2-3 kHz; we can reach 5 kHz for the MB trigger. Gene wants special runs of a few minutes each (DAQ: 5-4-2-4-5 kHz), sometime next week.

eTOF operation should not cost any physics run time:

Remove it from the run if eTOF has an issue, and try a pedestal test run after the beam is dumped and before the next fill. If eTOF runs well in the test, it can be included in the next physics run; otherwise keep it out.

 

July 19, 2023

RHIC plan: 

Today: Maintenance (7:00-17:00)

Thu-Mon: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Smooth physics runs + Hi-Lumi Test runs (90m)

Slow response/refresh of the Run Control GUI; it can be improved by moving the GUI window, but is not completely solved.

Evening shift: Smooth Physics runs 

Owl shift: Smooth physics runs 

Maintenance:

Hours are needed in the morning from 10:30 AM; TPC water will be out (TPC FEEs should be off)

sTGC gas, re-adjust pressure, reducing valve

tour for summer students


July 18, 2023

RHIC plan: 

Today: Physics run

Wed: Maintenance (7:00-17:00)

Thu-Mon: Physics run

 

Notable items/recap from past 24 hours:

Day shift: Smooth physics runs before 11am

Wayne replaced a disk in EEMC-SC

MCR: power supply issue

Jeff: 1. Removed zdc_fast 2. Put zdc_fast rate into the UPC-mb trigger 3. Added contamination protection to UPC-mb 4. updated production ID for UPC-mb; 5. Added monitor trigger for zdc-tof0; 6. added test configurations: CosmicRhicClock & test_HighLumi_2023

Evening shift: Smooth Physics runs since 6:30 pm

Owl shift: Smooth physics runs 

Other items:

Remind shifts about the eTOF instructions for this year's run

Plan for Wednesday's maintenance:

Hours are needed in the morning from 10:30 AM; TPC water will be out (TPC FEEs should be off)

sTGC gas, re-adjust pressure, reducing valve

tour for summer students

 

July 17, 2023

RHIC plan: 

Today: Physics run

 

Notable items/recap from past 24 hours:

Day shift: physics runs

“Error writing file st_X*.daq: No space left on device”; masked out EVB[5]

Evening shift: Physics runs

sTGC cables 4, 27, 28 were dead. DO power cycled the LV and fixed the issue.

eTOF 100% dead. DO powercycled eTOF LV

EVB[24] [0xF118] died/rebooted. After two occurrences, masked EVB[24] out. (When this happens, try rebooting it only once; if that does not work, mask it out directly.)
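The EVB policy stated above (on a died/rebooted error, try one reboot; on a repeat, mask the node out) amounts to a retry-once rule. A minimal sketch, with an invented function name and not STAR's actual run-control code:

```python
# Hypothetical sketch of the EVB failure policy from the notes:
# first "died/rebooted" -> one reboot attempt; any repeat -> mask it out.

def handle_evb_failure(failure_count: int) -> str:
    """failure_count is how many times this EVB has died so far."""
    if failure_count <= 1:
        return "reboot"      # try rebooting it only once
    return "mask_out"        # if it dies again, mask it out directly
```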

Owl shift: Smooth physics runs when beam was on

Magnet tripped at 3:40 AM; CAS fixed it, and we were back to normal running after 1 hour (the reason for the magnet trip is still not clear)

Other items:

Plan for Wednesday's maintenance:

* Hours are needed in the morning; TPC water will be out (TPC FEEs should be off)

* sTGC gas, re-adjust pressure, reducing valve

 

July 16, 2023

RHIC plan: 

Today-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: 3 physics runs, mostly no beam

Tonko: Reburned failing PROM in iS02-4; Brand new iTPC gain file installed. Should fix issues with S20, row 35; Added code to automatically powercycle TPX RDOs if required

Jeff: L0 software update so that prescale determination (and run-log scaler-rate logging) uses the contamination-adjusted scaler rate; Jeff will follow up on this issue.

Magnet tripped at 1:47 pm and stayed down until the end of this shift (reason for this trip is unclear; needs follow-up)

Evening shift: Physics run started at 7pm

BTOW ADC empty entry

eTOF 100% dead

TPX and iTPC both had high deadtime ~ 70%

Owl shift: Smooth physics run except beam dump (2:50-4:45am)

2:35 AM: sTGC gas pentane counter yellow alarm; Prashanth reset the counter in the sTGC gas system panel to fix it

MTD gas bottle changed from Line A to Line B (operators need to pay closer attention to the gas status)

Other items:

Geary added instructions for the eTOF DAQ issue to the eTOF manual

 

July 15, 2023

RHIC plan: 

Today-Monday: Physics run

Now, CAD is working on AC issue, will call STAR when they are ready to deliver beam


Notable items/recap from past 24 hours:

Day shift: Smooth physics runs

ZDC_MB_Fast was tested; needs further tuning

Evening shift: Smooth physics run

VME lost communication at 5 pm; David rebooted the main CANbus

sTGC fan temperature is higher than threshold, expert fixed it

Owl shift: Smooth physics run till beam dump

Other items:

eTOF DAQ issue was solved by Norbert; eTOF can join the runs

 

July 14, 2023

RHIC plan: 

Today: Physics run

~ 1 hour CeC access around noon

Friday-Monday: Physics run

 

Notable items/recap from past 24 hours:

Day shift: no beam

Prashanth changed the sTGC gas.

Evening shift: Physics run

7pm, sTGC gas had an alarm. Expert came over to fix it.

iTPC and TPX high deadtime issue; problematic RDO iTPC 18(3); lost ~1 hour

Oleg came over and helped DO to fix the BTOW

Owl shift: Smooth physics run, except 2 hours no beam

Other items:

zdc_mb_fast: Jeff will monitor and keep tuning it

eTOF kept out of runs because it causes a high trigger rate

Water is leaking in the control room from the AC, close to eTOF, but with no harm at this moment; people are working on it.


July 13, 2023

RHIC plan: 

Today: 2 hours control access, may have beam early afternoon

Friday-Monday: Physics run


Notable items/recap from past 24 hours:

Day shift: APEX

1 EPD ADC was missing since the night shift; the EPD expert was called, and the issue was solved by powercycling EQ1 and taking a rhicclock_clean run. Shift crew should watch the online plots more carefully and compare to the reference plots more frequently.

Evening shift: APEX

Jeff added an inverse prescale for ZDC_MB_FAST (not tested; if the shift crew sees problems, e.g., deadtime ~100%, please inform Jeff. Aim for taking data at 4k at the very beginning of the fill, trying to get a uniform DAQ rate. Jeff will also watch it)
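For context, a minimal sketch of the idea behind an inverse prescale (illustrative Python, not STAR trigger code; the 4 kHz target and the decaying raw rates are made-up numbers): instead of accepting a fixed fraction of triggers, the prescale factor is scaled to the instantaneous raw rate so the recorded rate stays uniform as luminosity decays across a fill.

```python
# Sketch: an inverse prescale targets a fixed *recorded* rate rather than
# a fixed fraction of triggers, so the DAQ rate stays uniform over a fill.
# All numbers are illustrative, not actual STAR settings.

def inverse_prescale(raw_rate_hz: float, target_rate_hz: float) -> float:
    """Prescale factor so that raw_rate / prescale equals the target rate
    (clamped to >= 1: we never accept more triggers than arrive)."""
    return max(1.0, raw_rate_hz / target_rate_hz)

# Raw trigger rate decaying over a hypothetical fill:
for raw in (20000.0, 10000.0, 5000.0, 2000.0):
    ps = inverse_prescale(raw, target_rate_hz=4000.0)
    recorded = raw / ps
    print(f"raw={raw:.0f} Hz  prescale={ps:.2f}  recorded={recorded:.0f} Hz")
```

While the raw rate is above the target, the recorded rate stays pinned at the target; once the raw rate falls below it, the prescale saturates at 1 and everything is recorded.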

Owl shift: Cosmics

Ingo fixed eTOF DAQ issue


12 July 2023 

RHIC plan:

Today: APEX starting 7:30 am (~16 hours)

Thu - Mon: Physics run

sPHENIX requested no beam for a laser test (5 hours), either on Thu or Fri


Notable items/recap from past 24 hours:

Day shift: not much good beam; pedestal runs, 3 good runs

Evening shift: TRG issue, Beam dump due to power failure, pedestal runs 

TRG experts power-cycled triggers and nodes and got TRG back after 3 hours of work

Owl shift: Smooth physics runs 2:20-6:45 am


3 July 2023


RHIC/STAR Schedule

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

Students from Texas are visiting STAR. It would be good to arrange a STAR tour for them.

Tally: 3.43 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Yesterday morning beam loss after about 20 minutes at flattop. Some FST HV tripped.

· Beam back at flattop around 10:50 but PHYSICS ON declared half an hour after that.

· Smooth datataking after that with a TPC caveat (see below)

· This morning: a beam loss that will take a few hours to bring back.

· 107x107 bunches last couple of days to address the yellow beam problems.

Trigger/DAQ

TPC/iTPC

· Tonko worked on iTPC RDOs. Most have been unmasked.

· At some point, problems with 100% deadtime started. Restarting the run and/or FEEs did not always solve the problem. Tonko was working with the shift crew.

· Three RDOs are down (iTPC). Two may come back after the access.

BEMC

· Two red strips around phi bin 1.2 in run 24184004, normal otherwise

EPD

· West tiles did not show up in one run, but were back again in the next one.

FST

· On-call expert change


Hanseul will take over as a period coordinator starting tomorrow.


2 July 2023

RHIC/STAR Schedule [calendar]

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

Air quality has substantially improved today, but this depends very much on the winds and may worsen again.

Tally: 3.23 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Beam loss around 17:45, TPC anodes tripped.

· Ran some cosmics until we got beam back around 22:00

· Smooth running after.

· EPD and sTGC computers were moved away from the dripping area.


EPD

 West tiles did not show up in one run, but were back again in the next one.

eTOF

· EVB errors once. Was in and out of runs. Some new empty areas reported.

· ETOF Board 3:16 Current(A) is 3A (normally it is ~2A). Shift crew says there was no alarm. Incident was reported to Geary.

 

1 July 2023

RHIC/STAR Schedule [calendar]

Running AuAu until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

AIR QUALITY!!!

AQI is not great but nowhere near the HSSD trip levels. The document is growing, but need more input if it is to become a procedure.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 3.02 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· sPHENIX requested access yesterday morning.

· Tim swapped out the troubled BE005 DSM board with a spare. It was tested, and Oleg ran the bemc-HT configuration and verified that the BTOW problem is fixed.

· Beam back (after the access) around 13:40.

· Beam loss around 20:40 causing anode trips

· Problems with injection. Beam back around half past midnight

· Very smooth running after that.


Trigger/DAQ

· Jeff made the agreed modifications to the zdc_fast trigger and added it back

· Also put DAQ5k mods into the cosmic trigger and improved scaler rate warning color thresholds

TOF/MTD

· Gas switched from A to B.

eTOF

· New module missing.

 

30 June 2023

RHIC/STAR Schedule [calendar]

F: STAR/sPHENIX running

 sPHENIX requested a 2-hour RA from 9 to 11.

Running until maintenance day on Wednesday

 sPHENIX requested 5-6 hours of no beam after the maintenance.

AIR QUALITY!!!

AQI is not great but nowhere near the HSSD trip levels. The document is growing, but need more input if it is to become a procedure.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 2.86 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Around 12:50 one beam was dumped for the sPHENIX background studies

· 12x12 bunches beam around 16:40. This was to test the blue beam background. MCR was step by step (stepping in radii) kicking the Au78 away from the beam pipe. This resulted in a much cleaner beam, with yellow and blue showing the same rates. Now they are confident in the cause of the background, but creating the lattice to address this problem is a challenge.

· New beam around 2:20


Trigger/DAQ

· BHT3 high rates happened overnight

· Geary was able to remove the stuck TOF trigger bit.

· Tonko suggested leveling at 20 kHz, based on last night's beam and rates/deadtime.

TOF/MTD

· Lost connection to the TOF and ETOF HV GUIs. David suggested that it could be a power supply connection problem. The problem resolved itself.

sTGC

· STGC pT2(2) pressure alarmed frequently in the evening. The SL suggested changing the pressure threshold from 16 psi to 15.5 psi; I do not know if it was changed. David will have a look at it and decide whether to lower the alarm threshold or to increase the pressure a little.

Discussion

· For the moment, keep the leveling at 13 kHz and discuss the adjustment of triggers during the next trigger board meeting.

· Tim will replace the DSM1 board and Jack will test it.

· During next maintenance day magnet will be brought down to fix the leak in the heat exchanger that occurred after last maintenance.


29 June 2023

RHIC/STAR Schedule

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

AIR QUALITY!!!

We were warned about the air quality index reaching 200 today, which means the HSSDs will go crazy; therefore the fire department would like them off, which means turning the STAR detector off, as we did a couple of weeks ago.

Experts please be ready and please contribute to this document so we have a written procedure in case this happens again.

https://docs.google.com/document/d/1-NhZJmS9MjIotvHUd9bPRVwObS-Uo7pWdjML36DjgeI/edit?usp=sharing

Tally: 2.65 B ZDC minbias events.


Summary

· Continue AuAu200 datataking.

· Beam back around 22:10

· Pretty smooth running except stuck TOF bit starting around 2:00. Geary is working on it.


Trigger/DAQ

· Jeff added tcucheck into the logs, so that does not need to be done manually anymore.

TPC/iTPC

· TPC anode trip in sector 11.

· Tonko worked on the problematic RDOs on the outer sectors that were masked in recent days. It seems that some FEEs have problems with DAQ5k; he masked them, and the RDOs are back in runs.

· Plan for inner RDOs is to take a look today or at the next opportune moment.

eTOF

· One more empty fee

Discussion

· Power cycle the MIX crate to try to fix the stuck TOF bit. The shift crew did it, but it did not seem to help.

· If the board for the TOF stuck bit problem needs to be replaced we will need an access.

· 8 o’clock run seems to have proper rate.


06/28/2023

RHIC/STAR Schedule

W: APEX 16 hours

 It will most probably be over around 19:00.

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

Tally: 2.53 B ZDC minbias events.


Summary

· Continue AuAu200 datataking. 45-minute runs. Detectors ON at FLATTOP.

· Beam was extended well beyond its dump time due to problems with the injectors. Dumped around 19:00

· sPHENIX requested a short controlled access (30 min), after which beam was back around 20:50

· First run was taken with no leveling for tests; after this we are running with leveling at 13 kHz.

· There is water dripping in the control room over the sTGC station.


Trigger/DAQ

· Tonko changed DAQ_FCS_n_sigma_hcal threshold from 2 to 5.

TPC/iTPC

· TPC anode sector 13 channel 7 tripped three times.

BEMC

· Overnight high rates of BHT3 and BHT3-L2Gamma.

· Oleg was contacted. Rebooting the trigger when a run restart does not help seems to work.

· Oleg: DSM boards need to be replaced; otherwise we see them picking up masked trigger pages.

EPD

eTOF

· Geary worked on eTOF and it was included in the runs. It worked without major problems.

· Lost a couple of fees and then the entire module was gone.


06/27/2023

RHIC/STAR Schedule [calendar]

T: STAR/sPHENIX running

sPHENIX wants to run some steering tests, so the beam will be dumped 2 hours earlier

W: APEX 16 hours

Th: STAR/sPHENIX running

F: STAR/sPHENIX running

Tally: 2.28 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Beam dumped around 12:45 and we went to a controlled access requested by sPHENIX

• Beam back around 19:00 but lost and then back in about 45 minutes.

• A/C in the control room is fixed.

• We asked MCR to level at 13 kHz zdc rate to take advantage of the DAQ5k. With the new beam we got 4.2 kHz DAQ rate, TPC deadtime around 40%.

• This morning we requested MCR to remove leveling. Without leveling, DAQ rates are ~4.2 kHz; zdc_mb deadtimes are around 51-56%.

• Around 23:00 the DAQ monitoring page had some problems but was restored to normal in an hour or so. Perhaps it is related to a single corrupt message which the DAQ monitoring cannot display; it will restore itself.

• There was also an intermittent problem loading the shiftLog page in the evening. 

• Vertex looks well under control.
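The rate and deadtime numbers above are related by the usual livetime bookkeeping; a small illustrative sketch (plain Python, not STAR DAQ code) inverting it to estimate the trigger rate implied by a ~4.2 kHz recorded rate at 51-56% deadtime:

```python
# Sketch of standard deadtime bookkeeping (illustrative only):
# recorded DAQ rate = trigger rate * livetime, with livetime = 1 - deadtime.

def implied_trigger_rate(daq_rate_hz: float, deadtime_frac: float) -> float:
    """Invert daq_rate = trigger_rate * (1 - deadtime) to get the
    trigger rate that would produce the observed recorded rate."""
    return daq_rate_hz / (1.0 - deadtime_frac)

# ~4.2 kHz recorded with zdc_mb deadtime around 51-56%:
for dt in (0.51, 0.56):
    rate = implied_trigger_rate(4200.0, dt)
    print(f"deadtime {dt:.0%}: implied trigger rate ~ {rate / 1000.0:.1f} kHz")
```

Under this reading, the offered zdc_mb trigger rate would be roughly 8.6-9.5 kHz, which is why leveling the ZDC rate matters for keeping the deadtime manageable.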

Trigger

• Jeff made a bunch of changes to the trigger setup as agreed at the trigger board meeting. Some low-rate triggers were implemented (~2 Hz and ~50 Hz).

TPC/iTPC

• Alexei checked the laser system during the access.

• Couple of additional RDOs could not be recovered and were masked out.

• Tonko will look at the masked RDO status tomorrow during the APEX.

BEMC

• Oleg has masked out Crate 0x0F.

• Tonko suppressed BTOW CAUTION message for Crate 4, Board 4.

• The high DHT3 trigger rate showed up but was resolved by restarting the run.

eTOF

• Geary worked on eTOF. It was briefly included in the runs, but the problems persisted. So, it is out again.


In progress / to do

• Increasing run duration.

o Currently we are running 30-minute runs.

o Perhaps we can increase the run duration to 45 minutes?

o AGREED: switch to 45 minute long runs.

• Bringing detectors up at flattop.

o Currently detectors are brought up after PHYSICS ON is declared.

o If experts agree that the beams at FLATTOP are stable enough to bring up detectors, we could opt for this.

o AGREED: to bring up detectors at FLATTOP.


Discussion

• Tonko mentioned that sometimes FCS04 starts recording data at a very high rate, causing deadtime. Perhaps a better ADC (nSigma) cut should be applied to remove the noise, which it most likely is at those high data rates.

 

06/26/2023

RHIC/STAR Schedule

T: STAR/sPHENIX commissioning

sPHENIX will need 4 hour access today. Time TBD around 10:30.

Tally: 2.12 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Fills around 10:00, 18:00, and 4:40 this morning.

• Many eTOF EVB errors. Much more than usual.

• Many BHT3 high trigger rate issues.

• Temperature in the control room was in the low 80s and could not be adjusted using the thermostat. The fan blows constantly because the thermostat is set low, but the air it blows is not cold.

• MCR is periodically correcting the vertex position.

• They are monitoring it and will trigger a correction at 10 cm. They also said they are working on an automated vertex-correction procedure.


TPC/iTPC

• Tonko updated sectors 1-12 (both inner and outer) to DAQ5k.

• TPX RDOs S11-5 and S08-6 masked as Tonko sees some problem with them.

• iTPC: RDO S24:1 masked later (FEE PROM problem)

• iTPC RDO S18:3 masked early this morning

• Gas alarm briefly chirped twice this morning.

• This morning Tonko finished updating the entire TPC to DAQ5k

• 24177033 first run with DAQ5k configuration

BEMC

• A lot of BHT3 high rate trigger issues

• Oleg masked out BTOW TP 192, 193 and 159 from trigger.

• Issue with high rate of triggers still persisted.

• Oleg: some crates lose configuration mid-run. Symptoms similar to radiation damage, which is strange with the AuAu beam.

• Power cycling of the BTOW power supply should not be used so often.

• Oleg will mask the problematic boards to eliminate the problem.

eTOF

• Many EVB errors. eTOF was mostly out of runs overnight and this morning.

• After many attempts to fix and bring back to runs it was decided to keep it out.


Discussion

• J.H. will let CAD know that we would like to level the ZDC rate at 13 kHz to accommodate DAQ5k rates.

 

06/25/2023

RHIC/STAR Schedule [calendar]

Su: STAR/sPHENIX commissioning

Tally: 2.01 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• Shift leaders were in contact with MCR to have z vertex steered back to center

• Smooth running otherwise.

• MCR was checking on their injectors this morning.


Trigger

• Jeff moved triggers to the recovered bits UPC-JPSI-NS slot 9->15, UPC-MB slot 14->31, fcsJPSI slot 12->34

TPC/iTPC

• jevp plots updated and show the missing RDO data in sectors 4, 5

• PT1 and PT2 alarm threshold lowered to 15.5 PSI, alarms sounded when they dropped below 16 PSI.

• With the new fill around 18:00, the shift crew noticed higher deadtime and lower rates (1.8 kHz). Tonko was able to fix the problem by power-cycling TPX Sector 8 FEEs, which seem to have been causing this issue.

• Tonko continued working on updating sectors.

• The drift-velocity parameters used by the HLT were just changed. This should properly account for the changing drift velocity when reconstructing the z vertex

BEMC

• Issue with BHT3 trigger firing with very high rate reappeared. Oleg was contacted and suggested to power cycle BEMC PS 12 ST when simple run restart does not help.

FST

• Settings/configuration reverted back to the pre-timebin-9-diagnosis setup.


Discussion 

In case of a dew point alarm, contact Prashanth


06/24/2023

RHIC/STAR Schedule

Sat: STAR/sPHENIX commissioning

Su: STAR/sPHENIX commissioning

Tally: 1.89 B ZDC minbias events.


Summary

• Continue AuAu200 datataking.

• The MCR computer at the SL desk pops up a message about needing to update something.

• We had about 2 hours with just one beam circulating, as requested by sPHENIX

• Z vertex is drifting away during the fill

• Unexpected beam dump around 1am. TPC anodes tripped.

• Took cosmic data until beam returned around 6:40 this morning.

• LV1 crate lost communication which caused FCS and sTGC alarms. Back after quick recovery.

• Smooth running since.


Trigger

• Jeff worked on trigger configuration

• Set pre/post = 1 for fcsJPsi, UPC-mb, UPC-Jpsi-NS triggers. (Bits 9,12,14). In order to debug issue with lastdsm data not matching trigger requirements.

• Jeff also changed the scalers that we send to CAD, which had been zdc-mb-fst; now it is changed back to zdc-mb.

• This morning Jeff moved these bits again to the slots that were previously considered “bad” and proved to be usable.

TPC/iTPC

• Methane gas has been delivered.

• Tonko checked problematic RDOs in iTPC sectors 3, 4, 5. The problem is now fixed and needs the jevp code to pick up the changes and be recompiled.

• Drift velocity continues to go down but shows signs of plateauing.

TOF/MTD

• TOF gas bottle switched from B to A - 14:20

• TOF LV needed to be power cycled

FST

• A progress update was distributed by email; experts will discuss it to reach a conclusion.

• The inclination seems to be to switch the time bin back

• The switch will happen at the end of the current fill.


06/23/2023

RHIC/STAR Schedule

F: STAR/sPHENIX commissioning

Sat: STAR/sPHENIX commissioning

Su: STAR/sPHENIX commissioning

Tally: 1.79 B ZDC minbias events.


Summary

· From the 9 o’clock coordination meeting

o CAD has a plan to go back to the blue background issue and try to eliminate it.

o They will also work on tuning the beam to get our vertex centered.

o sPHENIX requested an hour long tests with single beam configuration (one hour for each). At the end of the fill one beam will be dumped and another one at the end of the next fill.

· Yesterday beam back around 13:15 after a short access that we requested.

· sPHENIX requested a short access around 17:00

· Beam back around 18:30 but without sPHENIX crossing angle. It was put in around 19:30 and that seemingly improved our background

· Smooth running after that.

· This morning PSE&G did some work. There was just a split second light flicker in the control room, but nothing else was affected.

Trigger

· Jeff updated the MTD-VPD-TACdiff window: MTD-VPD-TACDIF_min 1024->1026. The TACDIF_Max stays the same at 1089

TPC/iTPC

· About 11 days of methane gas supply is available.

· Expectation to deliver 2 six-packs today.

· Drift velocity continues to decline

BEMC

· Oleg took new pedestals for the BEMC and noise problem has vanished. Must have had bad pedestals.

EPD

· Tim used access time to check on EPD problem.

· The East TUFF box CAT5 cable was disconnected. After reconnecting it, everything seemed back to normal.

FST

· Gene: FST crashes the reconstruction chain, so it is out until fixed

Discussion

 Jeff: added monitoring to trigger bits and noticed that some triggers are not behaving as expected. There are some slots marked “bad” that could be used for the newly noticed “corrupted” triggers, after checking whether they are actually “bad” or not.


06/22/2023

RHIC/STAR Schedule

Th: STAR/sPHENIX commissioning

12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning

About 1.69 B ZDC minbias events collected.


Summary

• Magnet was down for cooling maintenance (heat exchanger cleaning)

• Maintenance team was not able to wrap up early, so we kept magnet down overnight.

• Took zero field cosmics during the RHIC maintenance day.

• Beam back around 1:00 am with 56 x 56 bunches.

• We took data with production_AuAu_ZeroField_2023 configuration.

• Gene reported the DEV environment on the online machines to be back to normal operations. Problems are reported to be gone.


Trigger

• Tonko corrected the deadtime setting. Now it is set to the requested 720. This fixed the FST problems seen in the beginning of this fill.

TPC/iTPC

• About 12 days of methane gas supply is available. Suppliers are being pressed to deliver more ASAP.

• Tonko worked on moving more sectors to DAQ5k configuration. Came across problems with sector 6.

• iTPC iS06-1 masked

• Some empty areas in sectors 4,5,6

• Tonko will look once the beam is back. The clusters seem to be there but are not seen on the plots (sectors 4 and 5)

BEMC

Oleg asked to power cycle crate 60 to address noise issues in BEMC. It did not help. Access is needed to attempt to fix this issue. The problem seems to have started on Saturday. Only a few minutes of access to the platform is needed.

It was suggested to power cycle DSM as an initial measure to see if it helps, but this problem might also be coupled with the EPD problem we are seeing.

EPD

• EPD ADC east is empty; EPD ADC west has a limited number of entries.

• Experts are looking into this problem. It may be due to a problem in the QA plot making.

• Some sections were also reported to have problems.

• Might be the problem with the FEE.

• To check this issue access will be needed as well – up to an hour.

FST

• FST experts made changes for the time-bin diagnostics.

• It was having problems in the beginning of the fill but was settled after Tonko corrected the deadtime settings.

• Experts are looking at the data after the change.

• The timebin distribution might indicate the presence of an out-of-time trigger. Jeff will also investigate this.


06/21/2023

RHIC/STAR Schedule

W: maintenance day: 7:00 – 20:00

sPHENIX TPC commissioning 5 hours after maintenance – no beam

Th: STAR/sPHENIX commissioning

12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning

 

Summary

• AuAu 200 GeV continues.

• Around 11:00 sPHENIX asked for a one hour access. Took a few cosmic runs.

• Beam back around 12:45 with 50 x 50 bunches

• 111 x 111 bunch beam around 19:45, although the MCR monitor showed 110 x 111

• About 1.69 B ZDC minbias events collected.

• Dumped this morning around 6:30. Prepared for the magnet ramp and brought the magnet down (and disabled). Around 7:00 David Chan confirmed that the magnet was down and said that work on heat exchanger cleaning will start and that we will be kept updated throughout the day.

• Depending on how it goes, we may or may not keep the magnet down overnight.

Trigger

Jeff made some changes to the production trigger and L0 code

DAQ

• The BHT3 trigger high-rate issue that causes deadtime reappeared yesterday. A run restart did not help, nor did any of the other superstitious attempts. Coincidentally, the beam was dumped and refilled around that time. Once we came back with a new beam, the problem was gone.

• Oleg looked and saw no error messages while this was happening. If it happens again, the suggestion is to power cycle the LV of this crate [4 crates are affected by the power cycle].

TPC/iTPC

• Needed some attention from time to time (power cycling FEEs).

• Multiple peaks in drift velocity in a couple of laser runs (not all)

• Drift velocity keeps falling after the gas change

• Tonko will update about 6 sectors probably once beam is back 

TOF/MTD

EEMC

• Brian noted that EEMC tube base 7TA5 seems dead and can be masked

eTOF

• DAQ restarted and kept out for one run because of an additional empty strip (13) noticed by the shift crew.

FST

• Time bin diagnostics plan? Doing the time bin change diagnosis in parallel with the offline analysis might be prudent.

• Ziyue will distribute the summary of the plan for this 9 time bin diagnosis.

• Jeff: changes have to be made in the trigger setup associated with the FST time bin change for us to run properly.

 

Discussion 

• Zhangbu: MCR was using the ZDC rate without the killer bit for their beam tuning. It seems they are now using the right rate (with the killer bit). We might need to redo the vernier scan.

• Maria: EPD QA monitoring plots have been lost since day 166. Akio had the same problem. Gene has been working on the DEV environment on the online machines. There is some improvement, but automatic running of jobs is failing.

 

06/20/2023

RHIC/STAR Schedule

T: STAR/sPHENIX

W: Maintenance day : 7 :00 – 20 :00

 sPHENIX TPC commissioning 5 hours after maintenance – no beam

Th: STAR/sPHENIX commissioning

 12 x 12 bump test @ 8:00

F: STAR/sPHENIX commissioning


Summary [last 24 hrs]

· AuAu 200 GeV continues.

· Over 1.56 B ZDC minbias events collected thus far.

· Beam extended past the scheduled dump time due to issues at CAD. Unexpected beam dump around 2:20 this morning. Back around 6:50, then a quick loss. Back for physics around 7:30 again. Running since.


DAQ

· Yesterday afternoon: TPC showing 100% deadtime. Power cycling TPC FEEs did not help. Many things were tried, but it was fixed only after PedAsPhys, although the culprit was not clear to the crew. The problem was caused by BHT3, which was firing at a very high rate. If this happens, restarting the run should fix the issue; if not, a call to Oleg should help.


TPC/iTPC

· Tonko: TPX sectors 3 and 4 updated – an ongoing process. Waiting for Jeff to discuss a couple of ideas about token issues in iTPC. 2 iTPC sectors updated so far.


FST

· From the discussion at the FST meeting: Test setting 9 time bin running for diagnostics. To test timing shift. This will slow down the datataking.

· Experts will discuss it further to come up with the action plan for this test.

· Tonko: the plan is to split off the forward triggers in DAQ5k. After that, a slow FST will only affect forward triggers and thus be less of a problem. Perhaps it is a good idea to wait for that to happen before these tests.


Discussion

· Alexei: changed the gas. The old one was affecting the drift velocity because of contamination. This change should stabilize the drift velocity, which has already started to drop.

 

06/19/2023

(Weather: 59-76F, humidity: 74%, air quality 22)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store yesterday.

· 111x111 store since last night 10:30pm.


§ STAR status

· Full field: zdc_mb = 1.45B, 280 hours of running.

· DAQ5k tested with two sectors; ran at 5.2 kHz with 37% deadtime. See the star-ops email from Tonko for details. Tonko: we should produce FastOffline for this run, 24170017, to analyze the output.

Gene: /star/data09/reco/production_AuAu_2023/ReversedFullField/dev/2023/170/24170017
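As a sanity check on the tallies quoted in these status sections, the average recorded rate follows directly from events divided by running time. A small illustrative sketch (plain Python; the only inputs are the 1.45 B events and 280 hours quoted above):

```python
# Quick arithmetic check of the average zdc_mb rate implied by the
# quoted tally: 1.45 B events over 280 hours of full-field running.

def avg_rate_hz(n_events: float, hours: float) -> float:
    """Average recorded rate in Hz from an event tally and running time."""
    return n_events / (hours * 3600.0)

full_field = avg_rate_hz(1.45e9, 280.0)
print(f"average zdc_mb rate ~ {full_field:.0f} Hz")  # roughly 1.4 kHz
```

So the fill-averaged recorded rate is around 1.4 kHz, well below the 5.2 kHz reached in the two-sector DAQ5k test, which is the headroom the DAQ5k upgrade is meant to exploit.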


§ Plans

· Continue to take data thru the long weekend.

· Tonko, slowly ramp up the DAQ5k next week, 1hour/day ~ each day.

· FastOffline production for DAQ5k test runs.

· Reminder:

1) Trigger-board meeting tomorrow at 11:30am, see Akio’s email. To discuss trigger bandwidth.

2) RHIC scheduling meeting at 9:30am (was 3pm Monday).

3) Irakli will be Period Coordinator starting tomorrow, running 10am meeting. I will be giving the STAR update for the Time meeting at 1:30pm.


06/18/2023

(Weather: 59-78F, humidity: 66%, air quality 72)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.29B, 259 hours of running (+120M events since yesterday 2pm)

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 168M, 30 hours of running

· Smooth running and data taking since 2pm yesterday. Magnet, PS, cooling, all worked.

· Carl: lowered TOFmult5 threshold from 100 to 20 for the FCS monitoring trigger.

· GMT gas bottle switched. Shift crew should silence the alarm for the empty bottle.


§ Plans

· Continue to take data thru the long weekend.


06/17/2023

(Weather: 59-76F, humidity: 86%, air quality 29)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· 56x56 nominal store until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.17B, 241 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 168M, 30 hours of running

· STAR magnet is down, and we are doing PS cooling system work (heat exchanger cleaning)

A lot of junk accumulated on the tower side, while the PS side is clean as expected.

· Blue beam background seems to be only a factor of 5 higher than yellow.

· Shift overlap issue: the evening shift DO trainee is also the owl shift DO. My proposal is to dismiss him early so he is prepared for the owl shift. Carl: ask him not to come in for the evening shift.

· David: MCW temperature changed from 67F to 65F. David proposes setting it to 63F, given the dew point of ~51-54F. Prashanth will set it to 63F.


06/16/2023

(Weather: 58-79F, humidity: 61%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Today will be 6x6 from now to ~1pm, and 12x12 in the afternoon.

· 111x111 nominal store starting this evening until Tuesday.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· 500A field: zdc_mb = 68M, 11 hours of running

· Zero field: zdc_mb = 160M, 28 hours of running

· STAR magnet is at full field!

· TOF: pressure alarm from Freon, shift crew missed it.

· Tonko: DAQ5K, some tests were interrupted due to the magnet ramping.

· Blue beam background: now it seems the mystery is understood but not yet confirmed:

- Au78 is the source of the background. CAD did some calculations (can remain in RHIC for ~ 3 turns?, big spikes on Q3 magnet)

- 2016 didn’t have it because we had the “pre fire protection bump”.

JH: CAD will come up with a new lattice or plan to remove the background.

 

§ Plans

· Ready to take data!!!

· Tonko will finish the tests that were left unfinished.

· David: VME crate temperature sensors – what should we do with the alarm?

· FST: no more adjustment until next Tuesday.

· Lijuan: talked with David Chan; preparation work (e.g., chiller, heat exchanger, cooling system) should be done during the shutdown, well in advance of the run.

Communication with the support group should go through one person, e.g., Prashanth, instead of through multiple people, which could potentially cause miscommunication.


06/15/2023

(Weather: 58-77F, humidity: 67%, air quality 29)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Thursday: PSEGLI work at Booster cancelled. Moved to next Wednesday.

12x12 bunches 6:00-13:00, no beam 13:00-18:00.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· Zero field: zdc_mb = 159M, 28 hours of running

· STAR magnet tripped due to the water supply issue. A few SCR fuses were blown. CAS is still working on it; the current estimate is that it can be back online this afternoon.

· Tonko: DAQ5K will be tested with real data, zero or half field.


§ Plans

· Magnet will be ramped up from half to full field in small steps.

· FST: APB timing, experts will look into it.

· FST running with DAQ5K. Jeff provided possible trigger setups for the PWGs to choose from; Carl made some suggestions. Jeff provided code to Gene for the FastOffline production.


06/14/2023

(Weather: 60-74F, humidity: 77%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics this week,

· Wednesday APEX. (07:00-17:00) Overnight Physics.

· Thursday: PSEGLI work at Booster for 12-16 hours. Only one store during the day, if STAR has magnet.

12x12 bunches for the morning, no beam for the afternoon.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B, 226 hours of running.

· Half field: zdc_mb = 247M, 38 hours of running.

· Zero field: zdc_mb = 124M, 21 hours of running

· STAR chiller is still being fixed. See Prashanth’s photos.

· David rebooted the main CANbus; the VME crate issues are resolved.

· Tonko did some DAQ tests during the morning shift, including Elke’s request for sTGC. See shift log for details.

· Tonko: Data format is different for the DAQ5k, and online-found clusters are there but not the ADC plot.

· Shift crew reported that the online QA plots don’t have many entries for laser runs, where the events did not get “aborted”. A JEVP plot issue? Alexei: need to train the DOs to tune the lasers better.

· Zhen Wang had some issues recovering daq files from HPSS; he should contact star-ops (expert: Jeff). Ziyue had similar issues (FST).

· Shift: one DO trainee came to shift all day without having taken the RHIC Collider Training.

This is not acceptable, and each institute council representative needs to be responsible!

One possible solution: the Period Coordinator checks ALL shift crew’s training status online each week, e.g., on Friday.
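The weekly check proposed above amounts to a diff between the shift roster and the training records; a minimal sketch, assuming hypothetical name lists rather than the actual shift-signup or training databases:

```python
def untrained(roster, trained):
    """Return shift-crew members on the upcoming roster who have no
    RHIC Collider Training on record (inputs are illustrative only)."""
    return sorted(set(roster) - set(trained))

# Example: flag who must complete training before their shift.
print(untrained(["Alice", "Bob", "Carol"], {"Bob"}))  # -> ['Alice', 'Carol']
```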


§ Plans

· Shift: Email reminder to the entire Collaboration. Bill: talk to CAD about training/schedule.

· Elke: some updates are needed on sTGC. Elke will send it to star-ops.[1]

· DAQ5K is hoped to be working before next week…

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.

· FST: APB timing, experts will look into it.

· FST running with DAQ5K. How to make the trigger? FST limit is at 3k. (prescale for the time being). Also follow up with PAC, PWGC, and trigger board.

Jeff will provide possible trigger setup for PWG to choose from.
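For context on the prescale mentioned above: a prescale keeps every Nth triggered event, so the accepted rate is the input rate divided by N. A minimal sketch (illustrative only, not the actual STAR run-control code) of choosing the smallest integer factor that respects a readout limit such as the 3 kHz FST cap:

```python
import math

def prescale_factor(input_rate_hz: float, max_accept_hz: float) -> int:
    """Smallest integer N such that input_rate / N <= max_accept.
    Rates are hypothetical inputs for illustration."""
    if input_rate_hz <= max_accept_hz:
        return 1
    return math.ceil(input_rate_hz / max_accept_hz)

# e.g., a 10 kHz trigger against a 3 kHz readout limit:
print(prescale_factor(10_000, 3_000))  # -> 4 (accepted rate 2.5 kHz)
```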

 

[1] Summary from today’s sTGC meeting.

So Tonko uploaded the correct software to the one RDO which was replaced before the run; this definitely improves the time-bin plot on page 144 of the online plots.

Based on the recent runs we will keep the time window at -200 to 600, so we do not cut into the distribution; we will also need it if the luminosity goes up.

The multiplicity plot has not improved yet, first because the online plots have a cut on it; can we please remove the time-window cut on the multiplicity plot, page 142?

But of course one still needs to check the multiplicity plots per trigger, to explain the shape offline.

Additional observations: page 139, plane 4, quadrant C, VMM 10 to 12 are hot; this is most likely FOB 87, which looks strange on page 148.

Should we disable it, live with it, or can we wiggle a cable during an access?


06/13/2023

(Weather: 63-77F, humidity: 74%, air quality 28)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Currently 111x111 bunches; the store started yesterday.

12x12 bunches after this store for sPHENIX.

Physics this week,

· Tuesday: 100 Hz leveling at sPHENIX. ~ No leveling at STAR.

· Wednesday APEX.

· Physics for the rest of the week.


§ STAR status

· Full field: zdc_mb = 1.08B.

· Half field: zdc_mb = 235M, 34 hours of running.

· Shift changeover went smoothly.

· STAR chiller is being installed now.

· VME crate 77: Tim went in yesterday during the access and checked the voltage on those crates; they were fine. Is the issue in Slow Controls or the monitoring?

David: reboot the main CANbus.

· Tonko did some DAQ tests.

· FST running with DAQ5K. How to make the trigger? FST limit is at 3k. (prescale for the time being). Also follow up with PAC, PWGC, and trigger board.

Elke: we should think of which trigger needs FST first, e.g., how much data needed.


§ Plans

· For VME crate 77, David is going to reboot the main CANbus today.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.

Tonko suggests: look at some low-activity events, e.g., UPC triggers.

FST: APB timing, experts will look into it.


06/12/2023

(Weather: 65-74F, humidity: 79%, air quality 61)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

After the current store (dump time @ 12pm), it will be 111x111 for one store until 9pm.

· Controlled access 45mins after this store.

· Machine testing next store.

Physics this week,

· Mon: 1kHz, Tu: 3kHz, leveling at sPHENIX, but normal rate at STAR.

· Wednesday APEX.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.


§ STAR status

· Full field: zdc_mb = 1.08B.

· Half field: zdc_mb = 99M, 15 hours of running.

· TOF issue resolved. NW THUB is now running on the external clock.

· Magnet tripped again when ramping up at midnight. Outdoor temperature was ~65F.

· STAR chiller ready on Tuesday. JH: first thing in the morning, confirmed, a few hours expected. Tonko: use this time to run tests on the TPC with zero field.

· Many “Didn’t build token because of ..abort” error messages. Remind the shift crew for next week. Jeff will take this caution message out.

· VME crate 77 (BBQ) LV PS seems to have problems. Akio looked thru the QA plots and found nothing is wrong. Trigger group should investigate it, and Tim can be ready around 9am to go in, if we request controlled access.

· Jamie mentioned the drift velocity isn’t great? [1] (run 24163024); HLT people will look into it. Tonko: could be a half-field effect?


§ Plans

· Hank will look at the problem of the crate 77 (BBQ) LV PS, and Tim will go in during the Controlled Access.

· Diyu will grab new drift velocity from this year.

· Tonko: going to test the DAQ5K, mask RDO 6, Sector 1 in the code. DON’T mask it in Run control.

· Jeff will update ALL the trigger ids after the fix of TOF issue.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.


06/11/2023

(Weather: 60-78F, humidity: 73%, air quality 58)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· zdc_mb = 1.08B, 226 hours of running time. (~+90M since yesterday)

· Three magnet trips over the last ~16 hours!

· STAR chiller ready on Tuesday.


§ Plans

· Will be running half-field now.

· TOF: change or swap a cable to a different port. Tim can go in Sunday night 9pm-2am during the no-beam downtime. Geary will monitor/check.

Tim: check NW THUB if it is on local clock mode.

· David: if half-field running, will look into the alarm handler.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors. 


06/10/2023

(Weather: 54-75F, humidity: 69%, air quality 20)


§ RHIC Schedule

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· zdc_mb = 994M, 212 hours of running time. (~+60M since yesterday)

· Vernier scan finally happened last night. (background seems to be different when vernier scan happened at IP8)

· TOF investigation. Tim went in to move the NW-THUB TCD cable to a spare fanout port. Problem persists.

· RHIC seemed to have injection problems yesterday, and just lost the beam at 9am.

· STAR magnet chiller status: will be ready on Tuesday.

· sTGC timing is off. The RDO was changed; did Tonko look into this?


§ Plans

· TOF: change or swap a cable to a different port. Tim can go in Sunday night 9pm-2am during the no-beam downtime. Geary will monitor/check.

· sTGC group needs to come up with a plan. QA team needs to look into forward detectors.


06/09/2023

(Weather: 53-70F, humidity: 71%, air quality 59)


§ RHIC Schedule

HSSDs enabled in STAR Thursday, and resumed operation.

This week transverse stochastic cooling (one plane each for both blue and yellow).

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday, however,

· Today: sPHENIX requests 20 mins of access after this store. First 6x6 bunches for MVTX; vernier scan with 56x56 without crossing angle.

· No beam: Fri-Sun, 9pm-2am

· Next week, Mon: 1kHz, Tu: 3kHz, Wed: 5kHz leveling at sPHENIX, but normal rate at STAR.

Monday 3-5pm, tour at STAR control room, guided by Lijuan and Jamie.

06/14 APEX.


§ STAR status

· STAR is back to running. zdc_mb = 933M, 202 hours of running time. (~10% of goal)

· Yesterday, first fill was 6x6 bunches and 56x56 afterwards.

· We followed the procedure for turning all systems back on, with the help of experts. Everything was brought back within 1h 5min, except the TPC; the total was about 3 hours. The TPC cathode power supply (Glassman) and two control/monitor cards (4116 and 3122) were replaced. Alexei: contacted sPHENIX (Tom Hemmick); we need to build a spare for the cathode HV system. David: buy a new power supply, but Tom also has some spares in the lab.

· TOF: Since the beginning of Run 23, ¼ of TOF has been lost; only ¾ of TOF works (?). Not sure what the cause is. Offline QA should look at the TOF trays. Bunch IDs were not right, and the data was not right. More investigation is needed.

· UPC-jet trigger rates were much higher after STAR restarted, regardless of whether ETOW had problems. For other triggers, please also pay attention to any differences. (W. Jacobs just fixed/masked one of the trouble bits; rates seem OK)

· DAQ: Event-abort errors happened a few times. Look at the online QA plots to see if they are empty. Jeff will remove that caution message.

 

§ Plans

· TOF experts should provide instructions to star-ops and/or offline QA team.

· We need to update the procedure after the Power dip to bring back STAR (2021 version missed EEMC, all forward detectors, MTD, RICH scaler). Experts should provide short instructions.

· Reference plots are more or less updated. Subsystems that did not respond/provide feedback: sTGC, EPD. (These experts were busy the past few days in the control room.) https://drupal.star.bnl.gov/STAR/content/reference-plots-and-instructions-shift-crew-current-official-version


06/08/2023

(Weather: 48-70F, humidity: 64%, air quality 162)


§ RHIC Schedule

This week stochastic cooling transverse.

toward 2x10^9 per bunch, 56x56 will be regular. 

Physics all week until Monday but NOT at STAR until further notice.

and 06/14 APEX


§ STAR status

· STAR at full field; Field on to ensure RHIC running.

· No physics was taken after access Wednesday. STAR is shut down due to the poor air quality.


Lab decided to turn off HSSDs lab wide -> No HSSD in STAR -> No STAR running.

Details:

The reason to shut down STAR is that the HSSDs (high-sensitivity smoke detectors) needed to be turned off. The worry was that the air quality would get worse, all the HSSDs might go off, and the fire department would not know what to do or whether there was a real fire. Since the HSSD is within our safety envelope for operation, we cannot operate STAR if we turn off the HSSD. (sPHENIX is different, so they have been running.)

· Since last night, 2-person gas-watch shift started. See Kong’s email on star-ops.

§ Plans

· MCR just called to ask us to prepare to ramp up! (09:58am)

· We need to come up with a procedure to shut down STAR safely and quickly. (Note: The process to shut down STAR yesterday was not as smooth as expected. Clearly, we do not do this every day.)

· We can use the procedure after the Power dip to bring back STAR.

· Jeff needs time to investigate DAQ.


06/07/2023

(Weather: 51-73F, humidity: 63%)


§ RHIC Schedule

This week stochastic cooling transverse.

VDM scan Wednesday after access (postponed from yesterday)

no cooling and no crossing angle (1h for physics), then add the angle back.

toward 2x10^9 per bunch, 56x56 will be regular. 

Access today (07:00-18:00), then physics;

and 06/14 APEX


§ STAR status

· STAR at full field;

· zdc_mb = 854M over 190 hours; (~104M+ since yesterday)

· MCW work is being done right now.

· STAR chiller for magnet update. Parts are here, the work will be finished today, but won’t switch over. The switch over does NOT need access.

· Blue Beam Background:

Akio: performed BBC test yesterday and confirmed the blue beam background. Run number: 24157039 was taken with bbcBackgroundTest. (Offline analysis on the background events would be helpful, but not easy without modifying the vertex reco code.)

During the 5-min store yesterday, which was supposed to be the Vernier scan, the background was still present without a crossing angle.

· Akio instructed the shift crew to perform a localClock and rhicClock test to understand the rate jump issue. Changed DetectorReadinessChecklist [1]

Jeff: run “setRHICClock” after cosmic runs, which is already updated in DetectorReadinessChecklist.

· One daughter card on EQ3, will be done by Christian.

· Overnight shift observed a few blue tiles in the EPD ADC. Experts? Mike: two SiPMs died, two are a database issue. Will make a note to the shift crew. (Mike: going in today to look at the tiles)

· asymmetric vertex distribution for satellite bunch, but not the main peak. 

· pedestals

L2 waits for 2 minutes before stopping the run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card (will be discussed at the Trigger Meeting today).
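The pedestal criteria above can be expressed as a small QA check. A minimal sketch, assuming a hypothetical flat channel-summary format (crate name plus per-channel mean and rms), not any real STAR online tool:

```python
def flag_pedestals(channels):
    """Flag channels failing the pedestal criteria from the notes:
    MXQ rms > 50 (very large), EQ1-EQ4 mean > 200.
    `channels` is a list of dicts {"crate": str, "mean": float, "rms": float}
    (hypothetical layout, for illustration only)."""
    bad = []
    for ch in channels:
        crate = ch["crate"]
        if crate == "MXQ" and ch["rms"] > 50:
            bad.append(ch)  # retake pedestal right after the fill
        elif crate in ("EQ1", "EQ2", "EQ3", "EQ4") and ch["mean"] > 200:
            bad.append(ch)  # check the daughter card
    return bad
```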

 

§ Plans

· Update the DetectorReadinessChecklist for Vernier scan. (a copy of the production config. Bring up detectors at flattop, don’t stop the run regardless of detector conditions.)

· MCW fixes for the electronics, 9am Wednesday, 3 hours expected. But likely needs longer.

for the MCW fix: TOF LV needs to be off and the full list of subsystems will be sent on star-ops by Prashanth. (DONE)

· TCU bits; Jeff/trigger plan for Wednesday down time with delay tests;

Jeff: will take 4-5 runs and 1h after the water work is done.

· ETOW Crate #4 (W. Jacobs/Tim) on Wednesday (Tim is working on the fix now)

· Spare QTD tests; Chris continues to work on it;

· DAQ5K, outer sectors; Tonko will do this on Thursday with beam.

Tonko: mask RDO6 sector 1, and perform tests.

· After the water work is done, who needs to be called? Email star-ops first, and make a call list.

· Passwords update (Wayne Betts)

· Reference plots for online shift; experts of subsystems provide reference for a good run.

FST: run22 is the reference, no update needed.

EPD: will get to us.

GMT: will provide after the meeting.

MTD: ask Rongrong

sTGC: will get back to us


06/06/2023

RHIC Schedule

This week stochastic cooling transverse, (yellow done, but not blue)

toward 2x10^9 per bunch, 56x56 will be regular. 

06/07 APEX cancelled, sPHENIX access (07:00-18:00), then physics;

and 06/14 APEX


§ STAR status

· STAR at full field;

· zdc_mb = 750M over 176 hours; (~100M+ since yesterday)

· asymmetric vertex distribution for satellite bunch, but not the main peak. 

(could test without the crossing angle, 0.5mrad each, to see if the structure disappears)

· Blue Beam Background, due to the fixed target we installed? The investigation indicated it is not related to the fixed target. FXT data from yesterday only show background on the positive-x horizontal plane;

Akio: perform BBC test today.

· Overnight shift observed a few blue tiles in the EPD ADC. Experts? Mike: two SiPMs died, two are a database issue. Will make a note to the shift crew.

· Triggers: 2 upc-jet triggers (3,17) should be promoted (back) to physics;

(From yesterday)

· pedestals

L2 waits for 2 minutes before stopping the run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card (will be discussed at the Trigger Meeting today).

 

§ Plans

· Magnet will be ramped down tomorrow 8:30am by shift leader, and Prashanth will take out the key.

· Magnet: chill water pump issues, prepare to be fixed on Wednesday morning.

JH: The oil line of the chiller is the problem. A few hours are expected, hopefully fixing the issue.

· MCW fixes for the electronics, 9am Wednesday, 3 hours expected.

for the MCW fix: TOF LV needs to be off and the full list of subsystems will be sent on star-ops by Prashanth.

· TCU bits; Jeff/trigger plan for Wednesday down time with delay tests; (plan for the afternoon after the water work done, and will be discussed at the Trigger Meeting Tuesday June 06 noon)

· ETOW Crate #4 (W. Jacobs/Tim) on Wednesday? (Tim plans to fix this tomorrow; may need to replace a card in this crate)

· Spare QTD tests; Chris continues to work on it;

· DAQ5K, outer sectors; Tonko will do this on Thursday with beam

· Reference plots for online shift; experts of subsystems provide reference for a good run.


06/05/2023

1. RHIC Schedule

This week stochastic cooling transverse,

toward 2x10^9 per bunch, 56x56 will be regular; 

chill water pump issues, prepare to be fixed in next few days, but STAR at full field;

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 645M over 159 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. bunch crossing and vertex fingers;

maybe transverse SC will fix everything;

move beam 0.6mm and 0.3mm both directions;

still investigating; 

c. STAR chill water pump issues,

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well so far;

clean water tank on Wednesday; still searching for parts;

d. Blue Beam Background, due to fixed target we installed?

FXT data yesterday, only see background at positive x horizontal plane;

e. ZDCSMD ADC issues;

Chris reported gain file issue; understood and will be fixed; remove pxy_tac.dat file 

f. pedestals

L2 waits for 2 minutes before stopping the run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; check daughter card

g. dimuon trigger:

MXQ calibration is good; loose trigger time window than used to be;


3. Plans

a. Kong Tu is going to be the period coordinator for the next two weeks;

b. TCU bits; Jeff/trigger plan for Wednesday down time with delay tests;

c. Spare QTD tests; Chris works on it;

d. DAQ5K, outer sectors; Wednesday test during down time;

10 days on low luminosity; another week for high luminosity;

e. Reference plots for online shift;

f. Water group (coordination) starts Wednesday morning, 3+ hours;


06/04/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

1.3x10^9 per bunch, 56x56 will be regular; 

chill water pump issues, prepare to be fixed in next few days, but STAR at full field;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning;

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 450M over 143 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. bunch crossing and vertex fingers;

storage cavity not fully functional, asymmetric?

Yellow (WEST) second satellite bunch colliding with blue main bunch;

keep it as is;

c. STAR chill water pump issues,

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well overnight so far

d. ZDCSMD ADC issues;

Hank confirmed the issues (potentially internal timing issue)?

all channels; NOT in EPD QTD; some features need further investigation;

work with Chris on this

e. Blue Beam Background, due to fixed target we installed?

a FXT test?

FXT configuration flip east vs west; DONE;

HLT needs to change to FXT mode, DONE; 

J.H. coordinates the fast offline (~0.5-1 hours);

f. eTOW out quite frequently (one crate is out);

g. pedestals

L2 waits for 2 minutes before stopping the run;

MXQ rms>50, very large; take another pedestal right after the fill;

EQ1,2,3,4 pedestals; mean>200; discuss it tomorrow?

Or give shift leader specific instruction to ignore specific boards;


3. Plans

a. TCU bits;

b. Spare QTD tests;

c. Blue beam background FXT test right after the meeting;

d. DAQ5K, outer sectors; Wednesday test during down time;

10 days on low luminosity; another week for high luminosity;

e. FCS monitoring trigger (discuss at triggerboard);


06/03/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

56x56 will be regular; 

chill water pump issues, no full field until 8PM last night, tripped at 11PM.

sPHENIX magnet quench yesterday, ramp up successfully;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning

06/07 APEX cancelled, sPHENIX 8 hours access;

and 06/14 APEX


2. STAR status

a. zdc_mb = 452M over 127 hours;

zero field: zdc_mb = 45M

half field: zdc_mb = 17M

b. STAR chill water pump issues, magnet trip at around 11PM last night

shift leader can ramp STAR magnet while beam is ON, but need to coordinate with MCR ahead of time; run well overnight so far

c. ZDCSMD ADC issues;

Han-sheng found and reported to QA board.

Does EPD see this feature in QTD?

fencing feature with one ADC count per bin;

d. Blue Beam Background, due to fixed target we installed?

a FXT test?

FXT configuration flip east vs west; TODAY;

HLT needs to change to FXT mode (Dayi)?

J.H. coordinates the fast offline?

e. Shift leader found a sizable snake in the assembly hall and moved it to the RHIC inner ring area. If you spot one, you can call the police.


3. Plans

a. TCU bits

b. Spare QTD tests

c. Blue beam background FXT test


06/02/2023

1. RHIC Schedule

Thursday stochastic cooling longitudinal done, transverse next week,

56x56 will be regular; 

STAR magnet tripped yesterday morning, has not been at full power since;

chill water pump issues, no full field until 5PM tonight.

sPHENIX first cosmic ray track in TPC;

9PM-2AM no beam both Saturday and Sunday, sPHENIX TPC conditioning

06/07 APEX cancelled, PHYSICS data?

and 06/14 APEX



2. STAR status

a. zdc_mb = 405M over 117 hours;

zero field: zdc_mb = 40M

half field? zdc_mb and upc_main

b. a few changes in trigger conditions:

zdc killer bit applied on coincidence condition;

UPC-JPSI and UPC-jets require eTOW in;

c. MTD QT noise is back, need to retake pedestal;

d. Cannot start the chill water pump; will start at 5PM,

next few days, temperature low, should be able to run

e. BBC route to RHIC, blue background high


3. Plan

a. TCU bit work on-going

b. High luminosity configuration;


06/01/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening 8PM. 

Sweep experiment areas at 6PM Wednesday; physics data at 8:30PM;

1.3x10^9 per bunch, leveling at STAR;

sPHENIX magnet has been ON; 

Thursday stochastic cooling after this current store (56x56),

06/07 and 06/14 APEX


2. STAR status

a. zdc_mb = 385M

b. Access task completion:

BEMC done, MTD BL-19 sealant to gas connector for minor gas leak;

BBC scaler, fixed a dead channel (move from #16 to different label),

need to route from DAQ room to RHIC scaler;

ZDC TCIM: fixed a broken pin and dead processor,

setting deadtime for scaler output (was 20us, set to 1us)

gain to sum output set to 1:1 (was 1:0.5)

Pulser to TCU: 3 TCU bits out of time, need look into this;

sTGC 4 FEEs did not improve (still dead)

EPD 2 channels remap done; QTD into spare slot;

VPD MXQ calibration does not look correct; contact Isaac/Daniel

c. Trigger condition updates, and production IDs

all physics triggers are promoted to production ID;

EJP trigger 10x higher; hot towers?

UPC-JPSI trigger too high after access; ETOW was out while related triggers are IN; 

set up reasonable range expected with color scheme for DAQ monitoring;

Jeff and the specific trigger ID owners

reference plots, still run22 plots for online shift crew; need to work on this once the beam and STAR operation are stable (next few days)

d. Magnet trip this morning at 9:29AM

bringing back the magnet in progress;

no errors on our detector; beam loss 3 minutes later;

magnet is back up;

magnet temperature is high; work in progress; ramp down to 0 and call the chill water group;


3. On-going tasks and plans

a. BBC scaler need to route from DAQ room to RHIC scaler;

b. ETOW readout is out but trigger is ON;

Jeff needs to set up a scheme for eTOW-related triggers when ETOW is out;

c. TCU bits, trigger group continues the work on bit issues using the pulser

d. QTD, Chris will look into the one we just put back into EQ4

e. MXQ VPD need further work on calibration

JEVP online plot of BBQ VPD vertex distribution missing;


05/31/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening 8PM (access could be up to 6PM). 

Sweep experiment areas at 3PM Wednesday;

1.3x10^9 per bunch, leveling at STAR; 

Thursday stochastic cooling first,

then sPHENIX magnet ON exercise, we should probably put STAR detector on safe status for the first fill. 

06/07 and 06/14 APEX


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

dimuon-MTD;

UPC-jets, UPC-photo;

zdc_mb_counter no production ID, zdc_mb_y and zdc_mb_ny removed

b. Another two incidents of DOs and shift crew not showing up

DO from SBU started Wednesday owl shift

c. Water tower work plan in a couple of weeks


1. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 (done) work

MTD gas leak BL19 (11:30) Rongrong/Bill

b. Pulser for TCU bit checking, Christian/Tim, 107ns pulse; connected, waiting for Jeff's test

c. Laser in progress

d. MTD/VPD splitters (swap out with a spare) not done yet, 3 dead channels, Christian/Tim

e. EPD QTC remapping two QTC channels happens today;

QTD put into the crate to EQ4 spare slot? 

f. sTGC 4 FEEs no signals, reseat cables (magnet OFF) on-going

g. BBC B&Y background signals, single and coincidence issues to RHIC Blue background;

h. BCE crate errors; fixed by Power cycle

i. Measurement of dimensions of rails for EIC (Rahul/Elke, 12-1PM)


05/30/2023

1. RHIC Schedule

no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1.3x10^9 per bunch, leveling at STAR; 

Vacuum issues with store cavity in both yellow and blue, BPM issues, debunch issues on Monday 1 hour store;

Thursday stochastic cooling first,

then sPHENIX magnet ON exercise, we should probably put STAR detector on safe status for the first fill. 

06/07 and 06/14 APEX


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

dimuon-MTD;

Not promoted on UPC-jets, UPC-photo;

b. TPC Cathode trips during beam dump;

change procedure on TPC Cathode turn OFF before beam dump and right after beam dump, turn cathode back ON;

eTOF standby with high current a few days ago; 

c. Air conditioners in trailer (Bill will check on this)

d. Trigger BCE crate, dsm1 STP error, took out BCE crate;

update outdated document (on removing BBC crate);

e. Arrange for sTGC/MTD HV crate repairs

f. FST refill coolant


1. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

MTD gas leak BL19 (maybe) Rongrong/Bill

b. Pulser for TCU bit checking

Christian/Tim 107ns pulse;

c. Laser

d. MTD/VPD splitters (swap out with a spare)

e. EPD QTC remapping two QTC channels 

f. sTGC 4 FEEs no signals, reseat cables (magnet OFF)

g. BBC B&Y background signals, single and coincidence issues to RHIC

h. BCE crate errors

i. Measurement of dimensions of rails for EIC (Rahul/Elke, 12-1PM)


05/29/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1x10^9 per bunch (+20%); 16KHz zdc rate; STAR requests leveling at 10KHz for about 10-20 minutes;

automatic script does not work yet.

No stochastic cooling now; one of the five storage cavities in Yellow failed; store length is about 1.5 hours;

1.3x10^9 per bunch, leveling at STAR; 


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC, UPC-JPsi (prescale not decided);

nothing from UPC-jets, UPC-photo, dimuon-MTD;

b. MTD calibration is done; tables uploaded,

need to apply the TAC cuts, and then production ID:

MXQ VPD maybe minor issues need to address

c. Water out of the cooling tower, this is by design for more efficient cooling; small AC unit to cool down the chill water

d. Replaced MTD PS crate (Dave), was successful;

need to ship the spare for repair; currently use sTGC spare for operation

Tuesday access to check HV mapping

e. FST additional latency adjustment;

FST in pedestal runs

f. Add eTOF into TOF+MTD noise run if eTOF is operational


3. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

b. Pulser for TCU bit checking

c. Laser

d. MTD/VPD splitters

e. EPD QTC west daughter card need to swap out?

performance seems to be OK, need further check before swap;

Christian/Tim swap whole module?

f. sTGC 4 FEEs no signals, reseat cables 

g. BBC B&Y background signals, single and coincidence issues to RHIC


05/28/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

1x10^9 per bunch (+20%); 16KHz zdc rate; STAR requests leveling at 10KHz for about 10-20 minutes;

automatic script does not work yet.

No stochastic cooling now


2. STAR status

a. Trigger condition updates, and production IDs

promote everything from BTH, UPC,

nothing from UPC-jets, UPC-photo,

elevate on UPC-JPSI triggers

b. Trigger event too large, some crashed L2,

zdc_mb_prepost prepost set to +-1 (was -1,+5)

c. tune_2023 for calibration and test;

Production should be for production ONLY

d. RHIC leveling STAR luminosity at 10KHz ZDC rate; STAR requested this.

e. Event counts: zdc_mb = 218M

f. FST latency adjustment is done;

4 APV changed by 1 time bin


3. On-going tasks and plans

a. EPD bias scan done;

a couple of channels have been adjusted;

higher threshold for zero suppression; need to implement;

gate on C adjusted; TAC offset and slewing corrections

b. MTD calibration 

c. Fast Offline st_physics events not coming out today

d. TOF noise rate does not need to be taken daily if there is

continuous beam injection and Physics


4. Access plans for Tuesday and Wednesday

a. Magnet OFF Wednesday

BEMC and MTD BL8 work

b. Pulser for TCU bit checking

c. Laser

d. MTD/VPD splitters

e. QTC west daughter card need to swap out?

Christian/Tim swap whole module?

f. sTGC 9 FEEs no signals, reseat cables 

g. BBC B&Y background signals, single and coincidence issues to RHIC


05/27/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

ZDC_MB ~= 5 kHz

No stochastic cooling; the Landau cavity for blue tripped yesterday.

Rebucket vs. Landau cavity RF: with 56 bunches, every other bunch in phase;

changing the fill pattern solved the trip issue. Leveling works at 10 kHz; the automatic script does not work yet.


2. STAR status

a. Trigger condition updates, and production IDs

UPC_JPsi, ZDC HV and production ID;

UPC_JET, UPC_photo not in production ID;

FCS bit labels not changed yet; new tier1 files are in effect;

need clarification today.

b. Any remaining trigger issues? (-1,+5)? zdc_mb_prepost

RCC plot not updating;

c. EPD scans

timing scan done; 4 channel stuck bit;

bias scan next; onl11,12,13 for online plotting cron servers;

zero suppression: 30-40% occupancy, 0.3 MIP (~50)
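
A minimal sketch of the kind of threshold cut being discussed (hypothetical channel values; reading the "~50" above as an ADC threshold of roughly 0.3 MIP is an assumption):

```python
# Minimal zero-suppression sketch: drop channels whose pedestal-subtracted ADC
# is below a fixed threshold (~50 ADC counts ~ 0.3 MIP per the scan notes).
def zero_suppress(adc_values, pedestals, threshold=50):
    """Keep (channel, adc-ped) pairs whose pedestal-subtracted ADC exceeds threshold."""
    return [(ch, adc - ped)
            for ch, (adc, ped) in enumerate(zip(adc_values, pedestals))
            if adc - ped > threshold]

# Hypothetical readings: only channel 1 is well above pedestal + threshold.
hits = zero_suppress([60, 180, 95], [55, 50, 60])
print(hits)  # [(1, 130)]
```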

d. MXQ VPD calibration done, MTD calibration next

e. BBC B&Y background scalers not working

Christian has a pulser; order a few more?

f. Confusion about FST off status and message

DOs need to make sure FST is OFF

g. Jamie’s goal tracking plots? zdc_mb, BHT3?

h. eTOF ran for 6 hours, and failed,

If failed, take out of run control;

eTOF follows HV detector states as TOF for beam operation;

i. TPC, drift velocity changes rapidly; new gas?

new vendor, same old manufacturer; online plots show stable drift velocity


3. On-going tasks and plans

a. Pulser for TCU, MTD BL8 and BEMC work on Wednesday

b. sTGC FEE reseat the cable on Wednesday; Magnet OFF

c. ESMD overheating; inspect on Wednesday, talk to Will Jacobs

d. East laser tuning Tuesday


05/26/2023

1. RHIC Schedule

Beam for physics over the long weekend, (56 bunches);

No 9AM meeting over long weekend, no beam from Tuesday 7:30 to Wednesday evening. 

Sweep experiment areas at 3PM Wednesday;

Blue beam Landau cavity tripped; half the beam lost at the beginning, which seemed to light up the iTPC;

Stochastic cooling setup hopefully today; no expert available today or over the weekend;

three-hour fill with Landau cavity on (or without if it does not work)


2. STAR status

a. We had a couple of incidents where the shift crew and shift leader did not show up; please set your alarm; it is an 8-hour job, so try to rest/sleep for the rest of the day

b. Laser: DOs always need to keep monitoring the intensity

need to pass the experience to evening shifts

c. zdc_mb = 65M

d. VPD calibration; BBQ done, MXQ not done, dataset done

e. MTD dimuon_vpd100 out until expert calls

f. L4 plots are not updating; online plot server is not available;

g. FST fine tuning of the latency setting; update online plot;

beam with updated online plot;

h. New production ID; vpd100, BHT#? BHT3?


3. On-going tasks and plans

a. Pulser for TCU monitoring;

b. sTGC 4 FEE not working;

HV scan, gain matching; (Prashanth/Dave instructions)

c. L2ana for BEMC

The l2BtowGamma algorithm has been running. L2peds had not been; Jeff just restored them.

d. QTD

Chris fixed the issue, EPD looks good;

QTC looks good;

pedestals width large when EPD ON

ON for the mean, MIP shift correlated with noise rate?

gain QTD>QTC>QTB

Eleanor new tier1 file?

afterward, EPD time, gain, offset, slewing, zero-suppression items

QTB->QTD swap back? Wait for trigger group?

leave it alone as default

ZDC SMD ADC noisier, but it is OK.


05/25/2023

1. RHIC Schedule

another PS issue, and storage cavity problem,

Stores will be back to 56 bunches after stochastic cooling commissioning, first with 12 bunches; sPHENIX requested 6 bunches


2. STAR status

a. No beam yesterday and this morning

b. Laser: DOs always need to keep monitoring the intensity

c. zdc_mb = 50M

d. VPD slewing waiting for beam


3. On-going tasks

a. QTD issues,

LV off taking pedestal file

threshold and readout speed

Chris confirmed by email that indeed 0-3 channels in QTD

are off by 1 bunch crossing on bench test;

Chris and Hank are going to discuss after the meeting

and send out a summary and action items later today.

I feel that we may have a resolution here


05/24/2023

1. RHIC Schedule

Abort kicker power supply issue (blue beam), no physics collisions since yesterday.

They may do APEX with just one beam;

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. zdc_mb = 50M

b. VPD slewing and BBQ upload done,

NEXT MXQ 

c. sTGC sector#14 holds HV;

a few FEEs do not show any hits;

d. sTGC+FCS in physics mode

FST still offline, need online data QA to confirm

Latency adjustment,

e. eTOF HV on, included in run

OFF during APEX


3. On-going tasks

a. TCU pulser another test during APEX


4. Plans for the week and two-day access next week

a. MTD calibration and dimuon trigger after VPD done

b. EPD bias scan and TAC offset and slew correction

c. Next week, electronics for pulser in the hall (Christian)

d. Wednesday: fix BEMC crate on top of magnet PS (Bill/Oleg)

e. Wednesday MTD BL-8 THUB changed channel (Tim)

f. Plan for resolving QTD issues:

before Sunday,

taking data with zdc_mb_prepost (-1,+2) in production;

Aihong observed changes in ZDC SMD signals during the BEMC time scan;

Jeff will follow up on what time delays in TCD on those scans; 

After Sunday, Chris will do a time scan or other tricks to figure out

the issues with QTD; we need a clean understanding of the issues and solutions. If this is NOT successful,

on Wednesday replace all QTDs with QTBs and derive a scheme to selectively read out QTB for DAQ5K for both BBQ and MXQ (EPD and ZDC SMD).

Mike sent out a scheme for EPD 


05/23/2023

1. RHIC Schedule

MCR working on stochastic cooling, longitudinal cooling first, will reduce the background seen at STAR and sPHENIX. 

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. TPC in production, DAQ 5K tested this morning with iTPC sector, TPC current looks good;

Deadtime is rate dependent; outer sector RDO optimization for rate (gating grid); 15 kHz to saturate the bandwidth; Tonko would like to keep the ZDC rate high (~5 kHz)
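
As an illustration of rate-dependent deadtime (a generic non-paralyzable model, not the actual STAR DAQ behavior, which depends on RDO buffering): with an effective per-event dead time of ~67 us, the accepted rate can never exceed 1/tau, which is about the 15 kHz bandwidth-saturation figure quoted above.

```python
# Generic non-paralyzable dead-time model (illustration only).
def accepted_rate(input_hz, dead_time_s):
    """Accepted event rate under a non-paralyzable per-event dead time."""
    return input_hz / (1.0 + input_hz * dead_time_s)

# The accepted rate saturates toward 1 / dead_time (~15 kHz for ~67 us):
for r_in in (5_000, 15_000, 50_000):
    print(r_in, round(accepted_rate(r_in, 67e-6)))
```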

b. EPD gain and time scan

Timing scan last night and set for 1-3 crates, EQ4 very different timing,

need update on individual labels for settings; needed for the next-step bias scan; QTD first 4 channels' signals low (1/20), same observed in ZDC SMD; Eleanor needs to change the labels in the tier1 and tune files, and Jeff will move them over. QTD->QTB replacement works.

c. VPD scan

Daniel and Isaac BBQ data using HLT files for fast calibration;

VPD_slew_test from last year (BBC-HLT trigger)

MXQ board address change? Discuss at the 12:30 trigger meeting;

d. BSMD time scan; scan this morning, will set the time offset today


3. On-going tasks

a. ZDC SMD QTD board issues

ZDC SMD QTD shows same issues with first 4 channels

MXQ power cycled, SMD readout is back

pre-post +-2 zdc_mb trigger data taking after the meeting

b. TCU bit test with the pulser RCC->NIM Dis->Level->TTL->RET

bit to TCU 12,15

c. Some triggers are being actively updated, BHT4 UPCjet at 13

d. Adding more monitoring trigger (ZDC killer bits)

plan: discuss at trigger meeting; pulser 100ns


4. Plans for the days

a. FCS close today?

coordinate with MCR for a short controlled access today

b. BSMD helper from TAMU

BSMD only operates at high luminosity

ESMD only operates at high luminosity

Will discuss action items at later time


05/22/2023

1. RHIC Schedule

access at 10AM 2 hours of controlled access.

Stores will be back to 56 bunches after stochastic cooling commissioning first with 12 bunches


2. STAR status

a. TPC in production, DAQ 5K is NOT ready yet,

outer sectors firmware optimization, need about 3 weeks,

rate at about 3KHz, 

laser runs well,

b. sTGC sector 14 masked out; checking will be done behind the scenes;

sTGC and FST will be in production

c. FCS: watch the luminosity and background for the next few days; decide whether we close the calorimetry

d. Trigger system: TCU bit slots #21-35 BAD (BHT1, dimuon, zdc_mb_gmt)

a few other triggers with high scaler deadtime, zdc_killer should discuss at triggerboard meeting,

TCU spare daughter card good, two spare motherboards,

highest priority,

e. TOF

no issues in production

f. VPD

working on slewing correction, an issue with TAC offset with MXQ

VPD MXQ one and BBQ two channels (Christian is going to check them next access)

g. ZDC and ZDC SMD

SMD timed correctly, need Aihong to check again

SMD no signal at QT

h. EPD

replace EQ4 QTD now

EPD time scan and LV bias scan tonight,

Need to do time and offset matching among tiles, need more time,

i. BEMC is timed; one crate on top of the magnet stopped sending data, a failure never seen before (coincided with the beam dump); 3% of total channels

j. BSMD in middle of time scan BSMD02 failed,

need pedestal online monitoring helper (star management follows up)

k. FCS needs to be in closed position; LED run procedure; trigger not commissioned; stuck bit needs to be re-routed; thresholds need to be discussed; a week from today

l. MTD, Tim THUB push in, trigger needed VPD and MTD timing calibration

m. Slow control

fully commissioned; MCU unit for sTGC more resilient against radiation,

HV IOC updated, trip level set according to luminosity

TOF and MTD IOC updated (fixed connection issues)

need to update the instruction procedure

SC general manual updates.

n. Fast Offline

started on Friday, and processing st_physics and request to process st_upc streams, st_4photo?

QA shift fast offline in China, google issues, alternative path to fill histograms and reports

o. FST, commissioning,

Field OFF beam special request after everything ready


05/21/2023

1. RHIC Schedule

No 9AM CAD meeting. Stores with 56 bunches, will continue over the weekend,

Potential access Monday for RHIC work, sPHENIX and STAR


2. STAR status

a. production_AuAu_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS

Fixed a few issues yesterday, zdc_mb promoted to production ID.

TCU hardware issue, avoid tcu slot#21-25

Need to check whether same issue occurs with other tcu slots: external pulse (Christian)

b. Fix blue beam sync bit

c. Fix L4 nhits 

d. ESMD time scan done

e. TPX/iTPC done

f. UPS battery and the magnet computer dead, need replacement by CAS


3. Ongoing tasks

a. VPD scan for slew correction, update to “final”, QTC in BBQ and MXQ

pedestal run needed to apply the slewing and offset corrections

L4 needs new VPD calibration file.

VPD TAC look good now after pedestal run, last iteration will be done.

VPD on BBQ is fine, but need to check MXQ

b. Minor issues need work on TPC

c. Fast offline production comes (contact Gene)

d. BSMD: one of two PCs has memory errors; need to swap it out in the DAQ room

e. EPD time and bias scan after QTD replacement

f. MTD one backleg need work (canbus card need push-in, magnet off, need VPD calibration done)

g. Beam loss at 10:30 during a chromaticity measurement; the beam aborted unexpectedly. MCR called STAR to inform about the measurements, but the CAD system showed "PHYSICS ON" and the STAR shift turned on the detector, thinking that MCR was done with the measurement and PHYSICS was ON. Mitigation: make sure that information (calls and instructions) from MCR overrides the BERT system.


4. Plan of the day/Outlook

a. Collision stores over the weekend

b. Access Monday

c. FCS position, wait until we get more information about the abort, takes 15 minutes to close.

d. sTGC status and plan?

e. FST is good status, will check if further calibration is needed

f. Monday magnet OFF during access? Shift leader


Confirm with Christian about access Monday


05/20/23

I. RHIC Schedule

 Stores with 56 bunches since yesterday evening, will continue over the weekend


II. STAR status

 production_AuAu_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS+STGC+FST


III. Ongoing tasks

 Production configuration, trigger rates, BBC tac incorrect

 Autorecovery for TPX not available, crews powercycle the relevant FEE

 EPD bias scan to resume today, timing scan for QTD

 VPD tac offsets taken overnight, slew correction to take

 Series of sTGC HV trips after beam loss yesterday evening, keep off over weekend

 BSMD, ESMD need timing scan

 zdc-mb production id

 Access requirements, list of the needs


IV. Plan of the day/Outlook

 Collision stores over the weekend


05/19/23

I. RHIC Schedule

 We had stores with 56 bunches till this morning.

 Possible access till 11am, beam development during the day

 Collisions overnight


II. STAR status

 tune_2023 TRG+DAQ+ITPC+BTOW+ETOW+TOF+GMT+MTD+L4+FCS+STGC+FST running overnight

 ZDC HV calibration done

 

III. Ongoing tasks

 TPX prevented starting the run, Tonko working on it, ok now

 EEMC air blower is on, chill water not yet

 BSMD had corrupt data in bsmd02 in cal scan

 EPD calibrations ongoing, work on QTD, ok for physics

 eTOF worked on by experts

 VPD HV updated, will do TAC offsets

 sTGC plane 2 is empty in some places

 Production trigger configuration by Jeff today


IV. Plan of the day/Outlook

 Possible access till 11am

 Beam development during the day

 Collision stores overnight and during the weekend


05/18/23

I. RHIC Schedule

We had store with 56 bunches till this morning.

1 - 3 stores are scheduled today overnight

Beam development during the day, opportunity for controlled access


II. STAR status

Runs with tune_2023 TRG+DAQ+ITPC+TPX+BTOW+TOF+GMT+MTD+L4+FCS+STGC overnight

Done with BBC gain scan, and EPD scan without EQ4, BTOW timing scan without ETOW


III. Ongoing tasks

EEMC turn on (email by Will J.), BTOW + ETOW timing scan in upcoming store

VPD-W, cat-6 to be connected, VPD data from this morning ok, VPD should be off till then, controlled access needed with magnet off

sTGC ROB #13 has its TCD cable disconnected; needs to be fixed or masked out; access with magnet off

EQ4 does not run for EPD, 25% of the detector not available, ongoing with trigger group

Trigger FPGA issues in the beginning of the store, could not get past 15 events, started to take data when different part of the FPGA was used (temporary workaround)

TOF LV yellow alarms

BSMD timing scan (Oleg, tonight) + endcap shower max


IV. Plan of the day/Outlook

Beam development during the day for rebucketing

Opportunity for controlled access after rebucketing is done (work on collimators)

Collision stores (1 - 3 stores) overnight, no crossing angle


05/17/23

I. RHIC Schedule

Restricted access till 6pm (scheduled)

First collisions today early overnight


II. Ongoing tasks

Access ongoing for poletip (scheduled till 6pm), reinsertion in progress

All TPC RDOs were replaced yesterday and tested

FST tested ok, water leak is fixed

TPC lasers: work in progress on the control computer; waiting for a new repeater; for now it works only from the platform


III. Plan of the day/Outlook

Access till 6pm, poletip insertion, will finish earlier (before 4pm)

Collisions early overnight, could be in 2 hours after the access is done, lower intensity because of no stochastic cooling for now

Cosmics + lasers after poletip closed and magnet on


05/16/23

I. RHIC Schedule

Restricted access till 10pm.

Beam ramps overnight, both beams

First collisions as early as Wednesday night, likely on Thursday


II. Ongoing tasks

Poletip removal in progress, access till 10pm today + access tomorrow till 6pm

TOF LV for tray 18 west 2 was too low, the channel was swapped to a spare (ok), work in progress on GUI update


III. Plan of the day/Outlook

Access till 10pm, beam development overnight

Collisions on Thursday


05/15/23

I. RHIC Schedule

Restricted access ongoing till 2:30pm to prepare for poletip removal

Beam development overnight, blue and yellow ramps

First collisions on Wednesday night, 6 bunches


II. Ongoing tasks

Preparation for poletip removal (BBC, EPD, sTGC), access today till 2:30pm

ETOW and ESMD off (FEE LV and cooling water)

TOF LV is too low for tray 18 west 2, caused high trigger rate, taken out of the run, call to Geary, mask it off now

MTD THUB-N new firmware (Tim today, behind the barrier)

Tier-1 for timing on Wed (Jeff+Hank)

Inform SL over zoom of any work from remote, like ramping up/down HV/LV

sTGC LV to standard operation in the manual (David)


III. Plan of the day/Outlook

Access till 2:30, likely done earlier, beam development overnight

Collisions on Wednesday night, 56 bunches (10 kHz) + then 6 bunches for sPHENIX