What can internal computations of a Deep Neural Network tell us?

Models of Computational Profiles to Study the Likelihood of DNN Adversarial and Metamorphic Test Cases

COMPLETED

This project aims to assess the robustness of a neural network in critical domains, where it may be difficult to obtain a proper and representative sample of unknown or unexpected cases, so that the network can ultimately be certified.

The project analyzes the internal computations a neural network performs when training images are passed through it, and compares these computational profiles with those of foreign inputs to estimate the likelihood of the network’s predictions. Experiments assess the network’s robustness against adversarial attacks and metamorphic variations of inputs, and test whether foreign images can be separated from the more familiar images of the training set.
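A minimal sketch of the idea, assuming a PyTorch classifier: record a hidden-layer activation profile for each training class, then score a new input by its distance to the nearest class profile. The model, the chosen layer, and the distance-based score are illustrative assumptions, not the project’s exact method.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier; any trained model exposing a hidden layer would do.
class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def class_profiles(model, loader):
    """Mean hidden activation per class, computed over the training set."""
    sums, counts = {}, {}
    for x, y in loader:
        h = model.features(x)                     # internal computations of the network
        for c in y.unique().tolist():
            mask = (y == c)
            sums[c] = sums.get(c, 0) + h[mask].sum(dim=0)
            counts[c] = counts.get(c, 0) + mask.sum().item()
    return {c: sums[c] / counts[c] for c in sums}

@torch.no_grad()
def familiarity_score(model, x, profiles):
    """Distance from an input's activation profile to the nearest class profile.
    Large values suggest a foreign input (adversarial or metamorphic) rather
    than a training-like one."""
    h = model.features(x)
    dists = torch.stack([torch.norm(h - mu, dim=1) for mu in profiles.values()])
    return dists.min(dim=0).values
```

A threshold on this score, calibrated on held-out training images, would then separate familiar inputs from foreign ones in the spirit of the tests described above.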


Datasets used for experiments

PROJECT TEAM