RUN 2B L1CAL MEETING MINUTES
20 February, 2003

Present
  o Fermilab:     V.Pavlicek
  o MSU:          P.Laurens
  o Nevis:        H.Evans, J.Mitrevski
  o Northeastern: D.Wood
  o Saclay:       J.Bystricky, D.Calvet

Online Monitoring of L1Cal in Run IIa: Philippe Laurens
-------------------------------------
o Different functions - collection, display, distribution
  - must be done on different time scales: hours, weeks, months

o Different Customers
  - commissioning/debugging
    . this has been a lot of work - more for Run IIa than Run I
      > IIa system more complex/flexible
      > more problems observed from Cal side
      > all new tools had to be made
    . will probably be even more time-consuming for Run IIb
  - check for run readiness b/w stores
  - shifter support
    . hot towers, etc.
  - instantaneous view of operation
  - online hardware verifier
    . compare with simulation

o Run IIa Monitoring Tools
  1) Examine/Offline Analysis (see Bob Kehoe's talk of 23-Jan)
     . analyze events taken as part of D0 DAQ
     . detect problems: noisy channels, cabling, timing, etc.
     . check E-scale, linearity, etc.
       > compare L1Cal w/ precision readout
     . generate plots for shifters
  2) L1 TCC Monitoring Information
     . collects a block of info every ~5 s
       > from a triggered crossing
       > currently: collect EM and H Et for 8 consecutive crossings
         (for all TTs)
     . simple display program
       > ascii dump of each time slice, w/ ave and std over the 8 crossings
       > example on web: see talk for url
     . this is all used to verify the system before beam
     . missing from the current system
       > ability to choose the 8 consec. crossings
       > monitoring of outputs: and-or terms, counts, sums, etc.
  3) "Find_DAC"
     . input pedestal DAC: programmable by TCC
     . currently used to set each TT to 8 ADC counts as baseline pedestal
       > run this when there is no beam to find the DAC value that gives
         an 8 ADC count pedestal
     . this happens before Et corr., low-energy cut, etc.
     . runs on TCC - indep. of DAQ
       > much faster than full readout - uses VME access to the system
       > also reports the noise in each TT
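The Find_DAC procedure above is essentially a search for the DAC setting whose measured baseline lands on the 8 ADC count target, with the spread of the samples reported as a per-TT noise diagnostic. A minimal sketch, assuming a monotonic DAC-to-ADC response; `set_dac` and `read_adc` are hypothetical stand-ins for the real TCC/VME accesses:

```python
# Sketch of a Find_DAC-style pedestal calibration for one trigger tower.
# Assumes the ADC pedestal rises monotonically with the DAC setting;
# set_dac()/read_adc() are hypothetical stand-ins for the VME accesses.
import statistics

TARGET_PED = 8     # desired baseline pedestal, in ADC counts
N_SAMPLES = 8      # samples per measurement (cf. the 8 crossings above)

def measure_pedestal(read_adc, n=N_SAMPLES):
    """Average pedestal and noise (std dev) over n ADC samples."""
    samples = [read_adc() for _ in range(n)]
    return statistics.mean(samples), statistics.pstdev(samples)

def find_dac(set_dac, read_adc, lo=0, hi=255):
    """Binary-search the DAC value whose pedestal is closest to TARGET_PED."""
    best = (None, float("inf"), 0.0)   # (dac, |ped - target|, noise)
    while lo <= hi:
        mid = (lo + hi) // 2
        set_dac(mid)
        ped, noise = measure_pedestal(read_adc)
        if abs(ped - TARGET_PED) < best[1]:
            best = (mid, abs(ped - TARGET_PED), noise)
        if ped < TARGET_PED:
            lo = mid + 1               # pedestal too low: raise the DAC
        else:
            hi = mid - 1
    dac, _, noise = best
    return dac, noise                  # noise doubles as a diagnostic
```

The binary search is one plausible strategy; the real tool may well step linearly or calibrate all TTs in parallel, but either way the VME path keeps it independent of, and much faster than, the full DAQ readout.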
     . helpful as a diagnostic tool as well

o Run I Monitoring Tools (not yet implemented in Run II)
  1) Low Level Tools: System Exerciser
     . indep. of data taking - tests that the electronics is operational
     . run on TCC w/out interfering w/ DAQ
     . input simulated Et's - capture the result for that crossing(s)
     . can be run b/w stores
       > especially after work on BLS
     . some crucial design goals - must be built into hardware
       > maximize the amount of electronics switching
       > run hardware at full speed
       > need to capture the appropriate crossing
       > useful to capture multiple time slices (1 before and 1 after)
  2) Low Level Tools: Pulser System
     . pulser injects charge at the Cal preamp
       > see pulser pattern in L1Cal data
       > go through a list of these patterns
       > cannot calculate exactly how much energy you get from each
         pattern - but can compare w/ previous runs
     . useful for finding
       > noisy/dead channels
       > cross-talk
       > mis-wiring
     . took approx 1 min / pattern ==> ~1 hour for a total run
     . Bob Kehoe is working on getting this operational for Run II
  3) Low Level Tool: Find_DAC
     . used to detect "slow-drift" problems w/ front-end capacitors
  4) High Level Tool: Analysis of L1Cal Readout Data
     . E-scale, gain, saturation, resolution studied on a timescale of weeks
  5) High Level Tool: Calorimeter Examine
     . L1Cal was included in this
     . useful for: ped drift, hot towers
     . watched over months/years
       > important that this was done by the same people
         (Jan and Joan Guida)
  6) High Level Tool: L1Cal Verifier
     . ran on Examine data
     . an extension of L1sim
     . compared hardware outputs w/ the results of feeding the hardware
       inputs into the simulator
       > an important bit-level check for validating both hardware and
         simulator
     . currently no one working on this for Run IIa
       > emphasis has not been put on bit-level checking capability

o More Comments from Philippe (after the meeting)
  I think the main lessons are:
  - It takes lots of time to commission the system, after and in
    addition to building and installing the equipment.
    It takes lots of time to develop tools to analyze the physics runs,
    pulser runs, etc., and lots of time to learn what you need to do and
    how to interrogate the data; systematically hunting down every
    problem is tedious and never-ending.
  - TCC-based tests are very powerful, and need hardware support planned
    and built into the system.
  - A simulator that is exact at the bit level is a precious tool, and
    one should read out enough data to feed a Verifier test.

Saclay Status:
-------------
o ADF Layout
  - currently placing components
    > FPGA filtering capacitors left
o Interfaces
  - SBS PCI-VME: shipped back b/c it didn't work
  - working on the possibility of using a VME Single-Board Computer
    . using a spare board from Saclay
    . starting development of a VME library
    . will use this long-term at Saclay for tests
  - Datel PCI interface: a different one ordered
o Crates
  - need to purchase a 2nd crate for Fermilab
  - accounting for ordering this through Fermilab will be difficult
    > supposed to be purchased by MSU
    > probably not an insurmountable problem - but will take a little time
  - money available at Saclay for this purchase
o Firmware
  - using the ModelSim VHDL simulator
  - Xilinx tools for place-and-route
  - Leonardo
  - will need a PC at Fermilab with the appropriate licenses installed
    . can bring a PC to Fermilab - but will need to deal w/ licenses

Nevis Status: Hal Evans
------------
o First-pass layout of TAB done
  - now checking over firmware to make sure that we haven't missed anything
  - start ordering the last parts to make the prototype board
o L3 Readout
  - plan for the TAB/GAB system is to send (on each L1 accept) identical
    copies of the data to L2Cal and to a VRB (for readout to L3)
  - question: is this sufficient? Does L1Cal have to send any other
    timing information to the VRB?
  - answer (from Philippe): this is sufficient. All timing of the readout
    of data from the VRBs for a geographic sector is done using the
    VRB Controller (VRBC).
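The Run I L1Cal Verifier described above boils down to replaying the hardware's captured inputs through the simulator and demanding exact, word-for-word agreement on the outputs. A rough sketch of that comparison, where `simulate` is a toy stand-in for L1sim and the event layout is invented purely for illustration:

```python
# Sketch of a L1Cal Verifier-style bit-level check: feed the hardware's
# captured inputs to a simulator and flag every output word that differs.
# simulate() and the event dict layout are hypothetical illustrations.

def simulate(em_et, h_et, threshold=5):
    """Toy stand-in for L1sim: total Et per TT plus an over-threshold bit."""
    tot = [e + h for e, h in zip(em_et, h_et)]
    andor = [int(t > threshold) for t in tot]   # crude and-or term
    return {"tot_et": tot, "andor": andor}

def verify(event):
    """Compare hardware outputs against simulated outputs; list mismatches."""
    sim = simulate(event["em_et"], event["h_et"])
    mismatches = []
    for key, sim_words in sim.items():
        hw_words = event["hw"][key]
        for i, (s, h) in enumerate(zip(sim_words, hw_words)):
            if s != h:   # exact disagreement, i.e. a bit-level failure
                mismatches.append((key, i, h, s))   # (output, TT, hw, sim)
    return mismatches
```

An empty mismatch list validates hardware and simulator against each other on that event; any non-empty result localizes the disagreement to a specific output and tower, which is what makes a bit-exact simulator so valuable.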