Can “Mixture of Experts” provide interpretable models?

Generative Explanation Systems: interpretable similarity measures for example-based explanations, attention mechanisms, and rule generation


Mixtures of Experts can potentially provide powerful interpretable models by combining multiple interpretable models.

The policy prescribing how to rely on the individual expertise of each model depends on the query input and is handled through gating or attention mechanisms. While mixtures of experts were proposed in the early 1990s, attention mechanisms have become a very popular way to apply them to sequence-to-sequence prediction problems, such as natural language translation. However, the most commonly used modules are generally very complex and offer few or no theoretical guarantees. In this project, we seek to design interpretable and efficient algorithms for learning mixtures of experts with guaranteed performance.
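To make the gating idea concrete, here is a minimal sketch of a mixture of two interpretable (linear) experts combined by a softmax gate that depends on the query input. All names, parameters, and the toy setup are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear "experts", each an interpretable model y = w . x.
experts = [rng.normal(size=3), rng.normal(size=3)]

# Gating network: per-query weights over experts via a softmax of linear scores.
gate_params = rng.normal(size=(2, 3))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    # The gate decides how much to trust each expert for this query input x.
    g = softmax(gate_params @ x)              # shape (2,), sums to 1
    preds = np.array([w @ x for w in experts])
    return float(g @ preds), g                # weighted combination + weights

x = np.array([1.0, 0.5, -0.2])
y, weights = predict(x)
```

Because the gate weights form a probability distribution over experts, the prediction for any input can be read off as an explicit convex combination of simple models, which is the source of the interpretability.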

Project team

Datasets used for experiments

Real helicopter flight-test data, including sensor measurements with features related to flight conditions and to the condition of the high-pressure pneumatic system at the pitch link.