EDITE – Doctoral thesis subjects

Performance optimization for the LHCb experiment

Subject proposed by
Thesis advisor:
PhD candidate: Florian LEMAITRE
Research unit: UMR 7606 Laboratoire d'informatique de Paris 6

Field: Information and communication sciences and technologies

Project

The candidate will analyze representative algorithms and codes (data-intensive computations, control-dominated codes) and design new versions of them for current state-of-the-art parallel architectures: SIMD multi-core processors, GPUs and the Xeon Phi. For each architecture, specific optimizations will be developed. If a straightforward parallelization is not enough to leverage the full computing power (typically because of memory bandwidth limitations), the candidate will focus on memory layout optimizations and data management in order to design algorithms with better throughput. Numerical stability can also be studied in order to validate these optimizations. Finally, the architectures will be compared against one another to determine which is the most appropriate for each type of algorithm.

Typical examples are:
- curve fitting for particle tracking (linear or parabolic trajectories, depending on the nature of the particle),
- Kalman filtering,
- pattern recognition algorithms for track reconstruction in the detector apparatus.

The thesis could involve analyzing and transforming the code in order to help vectorization and to enhance data locality in caches. These transformations would include loop transformations and high-level transformations (HLTs) such as memory layout changes and function merging.
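As a minimal illustration of the kind of memory layout change mentioned above (a sketch only; the type and field names are hypothetical and not taken from the LHCb code base), the following C++ fragment contrasts an array-of-structures layout with a structure-of-arrays layout, which compilers can vectorize much more easily:

#include <cstddef>
#include <vector>

// Array-of-Structures (AoS): the coordinates of each hit are
// interleaved in memory, so a loop over x values has a strided
// access pattern that hinders SIMD code generation.
struct HitAoS { float x, y, z; };

// Structure-of-Arrays (SoA): each coordinate is stored
// contiguously, so the loop below maps directly onto
// SIMD loads and stores.
struct HitsSoA {
    std::vector<float> x, y, z;
};

// Shift all hits along x; with the SoA layout this loop is a
// straightforward candidate for compiler auto-vectorization.
void shift_x(HitsSoA& hits, float dx) {
    for (std::size_t i = 0; i < hits.x.size(); ++i)
        hits.x[i] += dx;
}

The same idea applies to the tracking kernels mentioned above: storing the data of many tracks or hits in SoA form lets one loop process several of them per SIMD instruction.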

Challenges

LHCb is one of the four particle physics detectors located on the Large Hadron Collider ring at CERN (Geneva). The aim of the LHCb experiment is to understand the difference between matter and antimatter and to search for signs of physics beyond the Standard Model using decays of beauty and charm particles. The simulation and tracking code used to reconstruct the passage of charged particles through the apparatus is written in C++ and runs on CERN's computing farms. Currently, these codes use neither the SIMD extensions of current general-purpose processors, nor GPU and Xeon Phi accelerators.

In order to scale to the requirements of the upcoming experiment [LHCb], both the codes and the machines must evolve. It is indeed planned to remove the first hardware event filter in order to eliminate the biases introduced by this low-level trigger. As a consequence, the LHCb computing farms will need to process about 40 times more input events than previously with nearly the same number of machines, so fast computation is more critical than ever. This problem is very specific: it is not one big problem that has to be computed as fast as possible, but rather a very large number of small problems that have to be processed at the highest possible rate.

The thesis could also involve the numerical stability analysis of several algorithms such as matrix inversion, curve fitting and the Kalman filter. This is important in order to optimize computation speed and memory footprint, and thereby maximize the global throughput of the whole computation chain. Indeed, current processors support several floating-point representations with different precisions, such as double precision (64 bits), single precision (32 bits) or even half precision (16 bits). This study would be based on tools like MPFR, MPFI and CADNA, wrapped in a C++ library so that the same code can be used both for tests and for production.
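One possible way to get "the same code for tests and for production" is sketched below. This is an assumption about the design, not an existing library: the kernel and type names are hypothetical. The idea is to template the numerical code on the floating-point type, so that the very same implementation can be instantiated with float or double in production and with a higher-precision or instrumented type (for example a wrapper around MPFR or CADNA) when validating numerical stability:

#include <cstddef>

// Hypothetical example: ordinary least-squares fit of a straight
// line y = a*x + b, written once and templated on the floating-point
// type. In production Real would be float or double; for stability
// studies it could be a multiprecision or instrumented type, without
// duplicating the algorithm.
template <typename Real>
void fit_line(const Real* x, const Real* y, std::size_t n,
              Real& a, Real& b) {
    Real sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    const Real denom = Real(n) * sxx - sx * sx;
    a = (Real(n) * sxy - sx * sy) / denom;
    b = (sxx * sy - sx * sxy) / denom;
}

// Usage: the exact same code compiles for single precision
// (production) and double precision (reference), so comparing the
// two only requires changing the template argument.
// float  af, bf;  fit_line(xf, yf, n, af, bf);
// double ad, bd;  fit_line(xd, yd, n, ad, bd);

With such a design, switching between precisions or plugging in an analysis tool does not require touching the algorithm itself, which is what makes a systematic precision/throughput study practical.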

International openness

At CERN, in Geneva