This project aims to assess the robustness of a neural network, with a view to its certification, in critical domains where it is difficult to obtain a representative sample of unknown or unexpected cases.
It analyzes the internal computations a neural network performs when training images are passed through it and compares them with the computations produced by foreign inputs, in order to estimate the likelihood that the network's predictions are correct. Experiments are designed to evaluate the network's robustness against adversarial attacks and metamorphic variations of its inputs. Furthermore, the tests attempt to separate foreign images from the more familiar images of the training set.
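One common way to realize this idea, sketched below as an illustration rather than the project's actual method, is to collect internal activations for the training set and score a new input by its distance to the nearest training activations: foreign or adversarial inputs tend to land far from the familiar region. All names here (`familiarity_score`, the synthetic activation vectors) are hypothetical; a real pipeline would extract activations from a chosen layer of the trained network.

```python
import numpy as np

def familiarity_score(train_acts, x_act, k=5):
    """Mean Euclidean distance from x_act to its k nearest
    training activations; lower means more familiar."""
    dists = np.linalg.norm(train_acts - x_act, axis=1)
    return float(np.sort(dists)[:k].mean())

def is_foreign(train_acts, x_act, threshold, k=5):
    """Flag an input as foreign when its score exceeds a threshold
    calibrated on held-out training activations."""
    return familiarity_score(train_acts, x_act, k) > threshold

# Synthetic stand-ins for layer activations (hypothetical data).
rng = np.random.default_rng(0)
train_acts = rng.normal(0.0, 1.0, size=(500, 32))  # familiar activations
in_dist = rng.normal(0.0, 1.0, size=32)            # resembles training data
foreign = rng.normal(6.0, 1.0, size=32)            # shifted distribution

print(familiarity_score(train_acts, in_dist) < familiarity_score(train_acts, foreign))
```

A threshold for `is_foreign` would typically be chosen so that a small, fixed fraction of held-out training inputs is rejected.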
Team
Datasets used for experiments