How can we learn an interpretable model with theoretical guarantees?

Learning Interpretable RKHS Weightings of Functions


The goal of this project is to develop a new interpretable mathematical model with strong theoretical guarantees.

Simple predictors are easy to interpret. The core idea of this work is to aggregate simple predictors, weighting them by a function from a Reproducing Kernel Hilbert Space (RKHS) defined over the predictor parameters. This aggregation turns a family of simple predictors into a rich function class that inherits desirable theoretical properties from the RKHS. The resulting model is therefore highly flexible despite being built from basic components. The objective of the project is to instantiate this new machine learning model in ways that are eminently interpretable.
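To make the aggregation concrete, here is a minimal sketch under assumed design choices: the simple predictors are decision stumps parameterized by a feature index and a threshold, the RKHS weighting is a Gaussian-kernel expansion over sampled parameters, and the expectation over parameters is approximated by Monte Carlo sampling. All names, kernel choices, and coefficients below are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stump(X, theta):
    """Simple base predictor h(x; theta): theta = (feature index, threshold)."""
    feature, threshold = theta
    return np.sign(X[:, int(feature)] - threshold)

def gaussian_kernel(t1, t2, gamma=1.0):
    """Kernel on predictor parameters, inducing the RKHS of weightings."""
    return np.exp(-gamma * np.sum((t1 - t2) ** 2))

# Sample predictor parameters theta_1, ..., theta_m (Monte Carlo
# approximation of an expectation over the parameter space).
X = rng.normal(size=(20, 3))
thetas = np.column_stack([rng.integers(0, 3, size=50),
                          rng.normal(size=50)]).astype(float)

# RKHS weighting alpha(theta) = sum_j beta_j * k(theta_j, theta);
# the coefficients beta here are random placeholders (in practice
# they would be learned from data).
beta = rng.normal(size=len(thetas))

def alpha(theta):
    return sum(b * gaussian_kernel(t, theta) for b, t in zip(beta, thetas))

def model(X):
    """Aggregate prediction: average of alpha(theta_i) * h(x; theta_i)."""
    weights = np.array([alpha(t) for t in thetas])
    preds = np.column_stack([stump(X, t) for t in thetas])
    return preds @ weights / len(thetas)

print(model(X).shape)  # one aggregated prediction per example
```

Because each stump is individually interpretable and the weighting alpha lives in an RKHS, the aggregate can be analyzed through the kernel's properties while each sampled component remains human-readable.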