FERRO Quentin
Supervision : Fabienne JÉZÉQUEL, Stef GRAILLAT
Co-supervision : Thibault HILAIRE
Precision auto-tuning and numerical validation
This thesis focuses on the auto-tuning of floating-point precision, particularly via the PROMISE tool. Auto-tuning consists in automatically reducing the precision of the floating-point variables in a code while satisfying an accuracy constraint on the final result. Lowering the precision of variables offers numerous advantages in terms of execution time, memory, and energy consumption, making it especially relevant to HPC codes. Many tools exist for this purpose. The particularity of PROMISE lies in its use of Discrete Stochastic Arithmetic (DSA) through the CADNA numerical validation library, which allows it to reliably estimate the accuracy of computed results.
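To make the auto-tuning idea concrete, here is a minimal sketch, not PROMISE's actual mechanism: PROMISE searches type configurations far more cleverly and validates accuracy with DSA/CADNA, whereas this toy version uses a naive greedy search and checks the number of correct digits against a double-precision reference. All names (`run_program`, `accurate_enough`) are illustrative.

```python
import numpy as np

def run_program(precisions):
    # toy computation: a dot product whose two "variables" are each
    # stored in the floating-point format assigned by `precisions`
    x = np.arange(1, 6, dtype=np.float64) / 7.0
    y = np.arange(1, 6, dtype=np.float64) / 3.0
    x = x.astype(precisions["x"]).astype(np.float64)
    y = y.astype(precisions["y"]).astype(np.float64)
    return float(np.dot(x, y))

# reference result with every variable in double precision
reference = run_program({"x": np.float64, "y": np.float64})

def accurate_enough(result, digits=5):
    # accuracy constraint: at least `digits` correct decimal digits
    # with respect to the double-precision reference
    return abs(result - reference) <= 0.5 * 10.0 ** (-digits) * abs(reference)

# greedily lower each variable's precision while the constraint holds
config = {"x": np.float64, "y": np.float64}
for var in config:
    for fmt in (np.float32, np.float16):
        trial = dict(config, **{var: fmt})
        if accurate_enough(run_program(trial)):
            config[var] = fmt      # keep the lower precision
        else:
            break                  # this variable cannot go lower

print({k: np.dtype(v).name for k, v in config.items()})
```

By construction the final configuration still satisfies the accuracy constraint; the point of a real tool is to find such a configuration without testing every combination of formats.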
While neural network size reduction generally relies on dedicated methods, neural networks can also benefit from precision auto-tuning. PROMISE was therefore applied to four different neural networks, demonstrating that floating-point precision can be reduced during the inference phase without compromising the accuracy of results. Two approaches were tested. The first assigns the same precision to all the parameters of a single neuron, allowing the maximum reduction of precision. The second assigns one precision per layer. Although it keeps more variables in high precision, it produces faster results; and since PROMISE's outcome depends on the input chosen for inference, it is also less specific to a single input. Auto-tuning with PROMISE was also studied during the training phase of neural networks. Although limited by the randomness of this phase, the study showed that reducing the precision of floating-point variables has very little impact on training.
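The two granularities can be illustrated on a toy two-layer network. The numpy sketch below is purely hypothetical (it is not PROMISE's implementation): per-layer tuning assigns one format to each layer's weights, while per-neuron tuning assigns one format to each row of a weight matrix; for brevity only the first layer is tuned per neuron here.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1: 4 neurons, 3 inputs
W2 = rng.standard_normal((2, 4))   # layer 2: 2 neurons
x = rng.standard_normal(3)

def infer_per_layer(dtypes):
    # one floating-point format per layer: fewer configurations to
    # search, and uniform types keep the matrix products vectorizable
    h = np.maximum(W1.astype(dtypes[0]) @ x.astype(dtypes[0]), 0)
    return W2.astype(dtypes[1]) @ h.astype(dtypes[1])

def infer_per_neuron(neuron_dtypes):
    # one format per neuron: each row of W1 may use its own precision,
    # allowing more aggressive lowering at the cost of mixed types
    h = np.array([np.maximum(W1[i].astype(dt) @ x.astype(dt), 0)
                  for i, dt in enumerate(neuron_dtypes)])
    return W2 @ h

ref = infer_per_layer([np.float64, np.float64])
mixed = infer_per_neuron([np.float16, np.float32, np.float16, np.float32])
print(np.max(np.abs(ref - mixed)))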
The application of PROMISE also highlights significant performance improvements. In terms of memory, the results are equivalent to the theoretical data on memory usage for each floating-point format. In terms of time, the acceleration results obtained for vectorized and non-vectorized codes are close to the theoretical results but are somewhat hindered by certain operations (casts and library function calls). All these results confirm the interest in reducing precision, particularly within vectorized codes.
In addition to examining the performance of the studied codes, the performance of PROMISE was also evaluated. The main algorithm used by PROMISE was parallelized. The implementation of an instrumentation tool based on Clang/LLVM was also carried out. This tool allows for the instrumentation of codes for CADNA, replacing a Perl script that was neither robust nor ad hoc. It also allows for the automatic instrumentation of codes for PROMISE, which had to be done manually. A third version of this tool, in the form of a Python API, replaces the analysis and code generation performed within PROMISE, making these steps more robust.
Defence : 10/16/2024
Jury members :
Daniel Menard, IETR, INSA Rennes [Rapporteur]
Guillaume Revy, LIRMM, Université de Perpignan [Rapporteur]
Pierre Fortin, CRIStAL, Université de Lille
Stef Graillat, LIP6, Sorbonne Université
Thibault Hilaire, LIP6, Sorbonne Université
Fabienne Jézéquel, LIP6, Université Paris-Panthéon Assas
2022-2024 Publications
-
2024
- Q. Ferro : “Auto-ajustement de la précision et validation numérique”, thesis, phd defence 10/16/2024, supervision Jézéquel, Fabienne Graillat, Stef, co-supervision : Hilaire, THibault (2024)
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel : “Auto-ajustement de la précision grâce au logiciel PROMISE”, CANUM 2024, 46th National Congress on Numerical Analysis, Le-Bois-Plage-en-Ré, France (2024)
-
2023
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel : “Performance of precision auto-tuned neural networks”, MCSoC 2023 (16th IEEE International Symposium on Embedded Multicore/Manycore Systems-on-Chip), special session POAT (Performance Optimization and Auto-Tuning of Software on Multicore/Manycore Systems), Singapore, Singapore (2023)
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel : “Precision Auto-Tuning of High-Performance Neural Networks”, European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), minisymposium "Mixed Precision Computations in Theory and Practice", Lisbon, Portugal (2023)
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel : “Precision auto-tuning using stochastic arithmetic”, 10th International Congress on Industrial and Applied Mathematics (ICIAM), minisymposium ``Exploring Arithmetic and Data Representation Beyond the Standard in HPC”, Tokyo, Japan (2023)
-
2022
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel, B. Lewandowski : “Neural Network Precision Tuning Using Stochastic Arithmetic”, RAIM 2022 : 13es Rencontres Arithmétique de l'Informatique Mathématique, Nantes, France (2022)
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel, B. Lewandowski : “Neural Network Precision Tuning Using Stochastic Arithmetic”, NSV'22, 15th International Workshop on Numerical Software Verification,, Haifa, Israel (2022)
- Q. Ferro, S. Graillat, Th. Hilaire, F. Jézéquel, B. Lewandowski : “Neural Network Precision Tuning Using Stochastic Arithmetic”, Sparse Days conference, Saint-Girons, France (2022)