How can mathematical logic help in learning interpretable models?

Sparse Decision Trees Based on Logic for Increased Interpretability

ONGOING

Interpretability, whether in the form of transparency or explainability, is a pillar of certifiability.

Interpretability of Artificial Intelligence is the property that enables an expert to understand why a prediction was made. Providing interpretable models is of great importance in applications where health, freedom, or safety is at stake, or where racial bias is a concern. It also plays a crucial role in production, allowing engineers to understand the failures of learned models in order to address them. In research, interpretable algorithms are valuable because they often reveal new avenues of investigation. This study combines two supervised machine learning algorithms, taking advantage of tools from mathematical logic, to optimize both interpretability and performance. The resulting algorithm aims to improve predictions by slightly increasing model complexity while preserving a high level of interpretability.
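
To give a concrete picture of the kind of interpretable model targeted here, the sketch below trains a shallow, sparsity-constrained decision tree with scikit-learn and prints its decision rules. This is a minimal illustration, not the project's actual method; the dataset, depth limit, and pruning parameter are assumptions made for the example.

```python
# A minimal sketch of an interpretable model: a shallow, sparsity-constrained
# decision tree whose rules can be read directly by a domain expert.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# Sparsity is encouraged by limiting depth and applying cost-complexity
# pruning, trading a little accuracy for a much smaller, readable rule set.
tree = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=0)
tree.fit(X_train, y_train)

print(f"test accuracy: {tree.score(X_test, y_test):.3f}")
# Each root-to-leaf path is an if-then rule an expert can audit.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Every prediction of such a tree can be traced to a short conjunction of threshold tests, which is precisely the property that slightly richer, logic-based models seek to preserve while improving accuracy.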

Team

Datasets used for experiments
