CHEP2013: October 14 - 18, 2013 in Amsterdam, The Netherlands

Conference Topics and Plenary Programme

Plenary keynotes

The CHEP programme consists of plenary keynote talks in the morning, and oral presentations and posters during the afternoon sessions. We are pleased to announce an exciting range of keynote talks for this CHEP conference. We have confirmed the following speakers:

  • Brian Bockelman of Nebraska on big data processing in HEP;
  • Philippe Canal of Fermilab on future directions for HEP software packages such as Geant4 and ROOT 6;
  • Pirjo-Leena Forsström of CSC Helsinki on data archiving and data stewardship;
  • Jim Kowalkowski of Fermilab on massive multicore and GPU developments;
  • Robert Lupton of Princeton on software engineering for science in LSST;
  • Inder Monga of ESnet on software defined networking and bandwidth-on-demand;
  • Axel Naumann of CERN on parallel languages, the C++11 concurrency model, and the migration strategy for HEP;
  • Harvey Newman of Caltech on trends in advanced networking;
  • Stefano Spataro of Torino on the computing design choices made for PANDA at FAIR, and how trigger-less data acquisition and massive multi-core can change our thinking;
  • Edwin Valentijn of the Groningen Kapteyn Institute on data processing for the LOFAR radio telescope with the Target project, and how that is moving forward towards SKA;
  • Torre Wenaus of Brookhaven on the lessons learned in computing during the LHC shutdown and the development programme currently underway;
  • Kostas Glinos of the European Commission on how the e-Infrastructure vision affects the research infrastructure landscape, and how European e-Infrastructure can work towards international collaboration across continents in Horizon 2020;
  • Toon Moene of the Royal Dutch Meteorological Institute KNMI on data- and compute-intensive challenges in weather forecasting.

and Oxana Smirnova of Lund University will be leading the plenary panel discussion "HEP computing is ‘doomed’ to become less HEP-specific", looking at why we share, or do not share, software between HEP and other sciences, industry and the world at large.
We expect a few more keynote talks to be announced in due course.

Parallel programme and submitted contributions

The parallel programme has six tracks. The Distributed Processing and Data Handling track includes coordinated focus sessions on Infrastructure, Sites, and Virtualization, and on Experiment Data Processing, Data Handling and Computing Models.

For those participants active in the ICFA DPHEP study group, a separate workshop is organised by the DPHEP community on the Wednesday afternoon.

The parallel tracks in the programme cover the following topics:

Data Acquisition, Trigger and Controls
event building and farm networks; compute farms for high-level triggering; configuration and run control; describing and managing configuration data and conditions databases; online software frameworks and tools; online calibration procedures; remote access to and control of data acquisition systems and experimental facilities.
Event Processing, Simulation and Analysis
event generation, simulation and reconstruction; detector geometries; physics analysis; tools and techniques for data classification and parameter fitting; event visualization and data presentation; frameworks for event processing; toolkits for simulation, reconstruction and analysis; event data models.
Distributed Processing and Data Handling
grid computing; virtualization; infrastructure as a service; clouds; distributed data processing; data management; distributed analysis; distributed processing experience, including experience with grids and clouds; experience with production and data challenges; experience with analysis using distributed resources; interactive analysis using distributed resources; solutions for coping with a heterogeneous environment; mobile computing; monitoring of user jobs and data; grid and cloud software and monitoring tools; global usage and management of resources; middleware reliability, interoperability and security; experiment specific middleware applications.
Data Stores, Data Bases, and Storage Systems
storage management; local I/O and data access; mass storage systems; object dictionaries; event stores; metadata and supporting infrastructure; databases; access patterns and caching strategies; data preservation; data curation and long-term data reproducibility.
Software Engineering, Parallelism & Multi-Core
CPU/GPU architectures; tightly-coupled systems; GPGPU; concurrency; vectorization and parallelization; mathematical libraries; foundation and utility libraries; programming techniques and tools; software testing and quality assurance; configuration management; software build, release and distribution tools; documentation.
Facilities, Production Infrastructures, Networking and Collaborative Tools
basic hardware, benchmarks and experience; fabric virtualization; fabric management and administration; local and wide-area networking; private networks; collaborative systems: progress in technologies and applications; tele-presence and teleconferencing systems; experience in the use of teleconferencing tools.