How much can we trust our trained models?

Assessing Robustness of Deep Learning Models

ONGOING

In this research, we aim to answer the question of how certain a neural network is about its prediction on a given input data point. This information can be leveraged to build robust models that are able to detect out-of-distribution samples and adversarial attacks.

Deep learning has proven its power to solve complex tasks and to achieve state-of-the-art results in various domains. However, due to the distributional shift between the collected training data and the real test data, a model may assign high probability to a prediction that it should in fact be uncertain about. This problem makes it hard to deploy neural network models in safety-critical applications such as autonomous driving, robotics, medical image analysis, and passport control. Our objective is therefore to quantify the uncertainty of a neural network around its predictions and thereby increase the robustness of deep learning models.
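To make the gap between softmax confidence and actual predictive uncertainty concrete, the sketch below uses Monte Carlo dropout, one common way to approximate a model's predictive uncertainty. It is an illustration only, not necessarily the method pursued in this project; the network, input dimensions, and number of stochastic passes are hypothetical.

```python
# Minimal sketch (hypothetical model and data): comparing a single-pass softmax
# confidence with an MC-dropout estimate of predictive uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical classifier with dropout; any dropout-equipped network would do.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 10),
)

x = torch.randn(1, 20)  # a single (synthetic) input data point

# Standard prediction: dropout disabled, one softmax vector.
model.eval()
with torch.no_grad():
    single_conf = torch.softmax(model(x), dim=-1).max().item()

# MC dropout: keep dropout active at inference time and average several
# stochastic forward passes to obtain a predictive distribution.
model.train()  # enables dropout during inference
with torch.no_grad():
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(50)]
    )  # shape: (50, 1, 10)

mean_probs = probs.mean(dim=0)  # averaged predictive distribution
predictive_entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum().item()

print(f"single-pass max softmax confidence: {single_conf:.3f}")
print(f"MC-dropout predictive entropy:      {predictive_entropy:.3f}")
# A high softmax confidence combined with high predictive entropy is a signal
# that the input may be out of distribution or adversarial.
```

In practice, such an uncertainty score can be thresholded to flag inputs the model should not be trusted on, which is the kind of robustness mechanism this project investigates.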


Datasets used for experiments

PROJECT TEAM