Can neural networks quantify the uncertainty in their outputs?

Predictive uncertainty estimation in deep learning

Completed

Probabilistic representations make it possible to quantify the cost of prediction mistakes, an important first step toward improving generalization and robustness.

Deep neural networks usually produce point estimates as predictions, making it difficult to judge how trustworthy their outputs are. We design neural networks that estimate a probabilistic representation of their outputs. Our goal is to avoid restricting this representation to a specific parametric probability distribution while keeping memory and runtime scalable. The representation correctly quantifies both the severity of prediction mistakes and the lack of knowledge about inputs never seen during training. As a result, downstream systems can decide how to act on each prediction based on its quantified uncertainty. This work is especially important for deploying neural networks in safety-critical robotics and automation.
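To illustrate the idea, here is a minimal sketch of one common way to represent a predictive distribution without committing to a parametric family: the network scores a fixed grid of output bins, a softmax turns the scores into a discrete distribution, and the distribution's entropy serves as an uncertainty signal. All names (`softmax`, `predictive_summary`, `bin_centers`) and the binned design itself are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_summary(logits, bin_centers):
    """Turn per-bin logits into a distribution, a point estimate, and an
    entropy-based uncertainty score."""
    probs = softmax(logits)
    mean = (probs * bin_centers).sum(axis=-1)            # point prediction
    entropy = -(probs * np.log(probs + 1e-12)).sum(-1)   # uncertainty score
    return probs, mean, entropy

# Toy example: 5 bins covering the output range [0, 4].
bins = np.arange(5.0)

confident = np.array([8.0, 0.0, 0.0, 0.0, 0.0])  # mass on a single bin
uncertain = np.zeros(5)                           # uniform: "I don't know"

_, mean_c, h_confident = predictive_summary(confident, bins)
_, mean_u, h_uncertain = predictive_summary(uncertain, bins)
# The uniform case (e.g. an input unlike anything seen in training) has
# strictly higher entropy, so a downstream system can defer or fall back
# to a safe action instead of trusting the point estimate.
```

A discretized output like this stays cheap in memory and runtime (one extra softmax layer), which is one way to reconcile flexible, non-parametric uncertainty with the scalability constraint mentioned above.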
