HADJ ALI Mahdi

PhD student at Sorbonne University
Team: DECISION
https://lip6.fr/Mahdi.Hadj-Ali

Supervision: Nicolas MAUDET
Co-supervision: Pierre-Henri WUILLEMIN, Yann LE BIANNIC

Enhancing AI Interpretability through Causal Reasoning: Preventing Misinterpretations with Improved Explanations

Recent work in eXplainable AI (XAI) has improved the general interpretability of models by quantifying the contributions of input features to predictions. Despite these advances, practitioners still seek causal insight into the underlying data-generating mechanisms. One possible solution is to rely on classical probabilistic causal analysis, which offers tools to quantify causal effects. Building on this foundation, this thesis explores the intersection of machine learning and causality, focusing on how predictive models can be leveraged to infer causal relationships. While traditional approaches such as randomized controlled trials (RCTs) offer robust causal evidence, they are often impractical in real-world settings. This manuscript examines alternative methods, such as uplift modeling and meta-learners, which approximate causal insights from observational data.
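To illustrate the meta-learner family mentioned above, a minimal T-learner can be sketched as follows: fit separate outcome models for the treated and control groups, then estimate the conditional average treatment effect (CATE) as the difference of their predictions. The synthetic data and the choice of simple linear outcome models are illustrative assumptions, not the setup used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))                 # observed covariates
t = rng.integers(0, 2, size=n)              # binary treatment indicator
# synthetic outcome: the true treatment effect is a constant 1.5
y = 2.0 * X[:, 0] - X[:, 1] + 1.5 * t + rng.normal(scale=0.1, size=n)

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# T-learner: one outcome model per treatment arm
mu1 = fit_linear(X[t == 1], y[t == 1])
mu0 = fit_linear(X[t == 0], y[t == 0])

# CATE estimate for each unit; its mean approximates the ATE
cate = predict(mu1, X) - predict(mu0, X)
ate = cate.mean()
print(round(ate, 2))
```

In practice the two base learners would be flexible regressors rather than linear fits, and, as the thesis argues, the choice of which covariates enter them is exactly where causal reasoning becomes necessary.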

However, these methods have limitations, particularly with regard to selecting the appropriate variables for training the model. To address these shortcomings, this thesis introduces new methodologies that integrate causal reasoning with the tools of XAI. These methodologies refine the quantification of causal effects by ensuring that model results are interpreted within the context of the underlying causal structure. By tailoring predictive models to specific causal queries, the proposed methodologies enhance interpretability, aligning it more closely with human intuitive understanding.
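Why the underlying causal structure matters can be illustrated with a classic backdoor-adjustment sketch (the data-generating process below is hypothetical): a naive treated-vs-control contrast is biased by a confounder that influences both treatment and outcome, while stratifying on that confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, size=n)                                # confounder
t = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)  # Z influences T
y = 1.0 * t + 2.0 * z + rng.normal(scale=0.1, size=n)         # true effect of T is 1.0

# Naive contrast: biased upward, since treated units tend to have Z = 1
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment: average within-stratum contrasts, weighted by P(Z = z)
adjusted = sum(
    (z == v).mean() * (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean())
    for v in (0, 1)
)
print(round(naive, 2), round(adjusted, 2))
```

The adjusted estimate is close to the true effect of 1.0, while the naive one is not; selecting which variables play the role of Z for a given causal query is precisely the kind of decision the proposed methodologies aim to support.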

Finally, this thesis presents a comprehensive case study using synthetic data to validate the proposed methodologies. By simulating complex production scenarios, we rigorously evaluate the performance of these methods, demonstrating their robustness and effectiveness in estimating causal effects, even in the presence of indirect effects and confounding variables. This empirical validation not only reinforces the theoretical contributions of the thesis but also provides perspectives for applying this work in real-world contexts.


PhD defence: 11/14/2024

Jury members:

Marianne CLAUSEL, Professor at Université de Lorraine [Reviewer]
Éric GAUSSIER, Professor at Université Grenoble Alpes [Reviewer]
Hervé ISAMBERT, Research Director at Institut Curie
Éric SIMON, Research Director at SAP France
Nataliya SOKOLOVSKA, Professor at Sorbonne Université [Examiner]
Nicolas MAUDET, Professor at Sorbonne Université
Yann LE BIANNIC, Research Engineer at SAP France
Pierre-Henri WUILLEMIN, Associate Professor at Sorbonne Université

Departure date: 11/15/2024

2021-2024 Publications