Many linear algebra libraries, such as Intel MKL, MAGMA, or Eigen, provide fast Cholesky factorization. These libraries are tuned for large matrices but perform poorly on small ones. Although state-of-the-art studies have begun to address small matrices, they usually consider matrices of a few hundred rows. Fields like computer vision or high-energy physics use tiny matrices. In this paper we show that the Cholesky factorization of tiny matrices can be sped up by grouping them in batches and using highly specialized code. We provide high-level transformations that accelerate the factorization on current Intel SIMD architectures (SSE, AVX2, KNC, AVX512). Combined with SIMD, these transformations achieve a speedup of 13 to 31 for the whole resolution compared to the naive code on a single AVX2 core, and a speedup of 15 to 33 with multithreading compared to the multithreaded naive code.
Design and Architectures for Signal and Image Processing (DASIP), Oct 2016, Rennes, France. pp. 1--8. https://hal.archives-ouvertes.fr/hal-01361204 https://ecsi.org/dasip