Programs obtained through machine learning often transform data into a representation better suited to the task at hand, allowing them to perform that task more efficiently. However, these transformations are hard to understand because of the inherent complexity of high-capacity learning models, which is an obstacle to interpreting how the model works. While an increase in model capacity often comes at a cost in interpretability, the acceptance and certification of a learned model by the various stakeholders involved depend on our understanding of, and confidence in, these models.
This research theme therefore focuses on the fundamental aspects of the two main forms of interpretability: transparency, which concerns the interpretability of the model as a whole, and explainability, which concerns the interpretability of specific predictions or decisions made by the model.
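To make the distinction concrete, here is a minimal sketch, assuming scikit-learn is available; the dataset, models, and attribution method are illustrative choices, not the theme's actual methods. A shallow decision tree illustrates transparency (its full decision logic can be read directly), while per-prediction feature contributions of a linear model illustrate explainability (a local account of one decision).

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y, names = data.data, data.target, data.feature_names

# Transparency: a shallow tree is interpretable as a whole; its entire
# decision logic can be printed and audited directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))

# Explainability: account for one specific prediction. For a linear model,
# coefficient * feature value is an exact local attribution for input x.
clf = LogisticRegression(max_iter=1000).fit(X, y)
x = X[0]
pred = int(clf.predict(x.reshape(1, -1))[0])
contributions = clf.coef_[pred] * x
for name, c in sorted(zip(names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```

The same contrast carries over to high-capacity models, where neither form of interpretability comes for free: the model as a whole resists direct inspection, and local explanations must be approximated rather than read off exactly as in the linear case above.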
Leader: Mario Marchand, Université Laval