Many linear algebra libraries, such as Intel MKL, MAGMA, or Eigen, provide fast Cholesky factorization routines. These libraries are tuned for large matrices but perform poorly on small ones. Although state-of-the-art studies have begun to address small matrices, they usually consider matrices with a few hundred rows, whereas fields like computer vision or High Energy Physics use tiny matrices. In this paper we show that the Cholesky factorization of tiny matrices can be sped up by grouping them in batches and using highly specialized code. We provide high-level transformations that accelerate the factorization on current multi-core and many-core SIMD architectures (SSE, AVX2, KNC, AVX-512, Neon, Altivec). We emphasize that on some architectures compilers are unable to vectorize this code, and on others the vectorized code they produce is inefficient; handwritten SIMDization is therefore mandatory. Combining these transformations with SIMD, we achieve, relative to the naive code on an AVX2 machine, a speedup from ×14 to ×28 for the whole resolution in single precision and from ×6 to ×14 in double precision, both with strong scalability.
Journal of Systems Architecture, Elsevier, 2017. ISSN 1383-7621. doi:10.1016/j.sysarc.2017.06.005. https://hal.archives-ouvertes.fr/hal-01550129 (deposited 2017-06-16)