Are we sure why the model made that prediction?

Towards nuanced decision-making via uncertainty quantification of post-hoc explanations

ONGOING

This project is concerned with explaining the predictions of black-box models such as deep neural networks and random forests.

Several post-hoc explanation methods (also known as explainers) already exist and are used in practice, such as LIME, Shapley values, and Integrated Gradients. However, running a training algorithm and/or an explainer several times can produce contradictory explanations, thereby reducing trust in these techniques. In this project, we study the variability of explanations arising both from model uncertainty and from the explanation techniques themselves, with the goal of developing trustworthy strategies for explaining black-box predictions.
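
To illustrate the kind of variability the project targets, below is a minimal sketch (assuming the lime and scikit-learn packages; the dataset, model, and number of runs are illustrative choices, not the project's actual setup) that explains the same random forest prediction several times with LIME and measures how the feature attributions fluctuate across runs:

# A minimal, self-contained sketch of explanation variability: the same
# LIME explainer, run repeatedly on the same random forest prediction,
# returns different feature attributions because LIME fits its local
# surrogate on randomly sampled perturbations. Dataset and model are
# illustrative choices only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), mode="classification"
)

# Explain the same instance 10 times, collecting the weight LIME
# assigns to each feature on every run.
n_runs, instance = 10, X[0]
weights = np.zeros((n_runs, X.shape[1]))
for run in range(n_runs):
    exp = explainer.explain_instance(
        instance, model.predict_proba, num_features=X.shape[1]
    )
    for feat_idx, weight in exp.as_map()[1]:
        weights[run, feat_idx] = weight

# Spread across runs for the most influential features: nonzero standard
# deviations quantify how much the "explanation" depends on the
# explainer's internal randomness, even though model and instance are fixed.
order = np.argsort(-np.abs(weights).mean(axis=0))[:5]
for i in order:
    print(f"{data.feature_names[i]:>25s}  "
          f"mean={weights[:, i].mean():+.3f}  std={weights[:, i].std():.3f}")

Here the randomness comes from the explainer alone; retraining the model with different seeds would add a second, model-side source of variability, and the project considers both.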


PROJECT TEAM