Learning algorithms have shown excellent results across a wide range of tasks and fields. However, they have also proved remarkably fragile when faced with situations that differ even slightly from the data on which they were trained. This fragility is a significant limitation for applications where it is difficult, if not impossible, to ensure that the training data represents real-world situations. In the transportation industry, the robustness of a system refers to its ability to operate outside its usual conditions while maintaining a predefined level of performance.
This research theme therefore aims to develop learning methods that produce robust models, as well as a methodology suited to evaluating the robustness of these complex and often opaque models. This notably includes the study of adversarial attacks, anomaly detection, robustness to distribution shift and out-of-distribution data, and transfer learning.
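To make the notion of an adversarial attack concrete, the sketch below shows the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. This is a minimal illustration, not part of the research program itself; the weights and inputs are hypothetical, chosen only to show how a small, targeted perturbation of the input increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    Perturbs the input x by eps in the direction that increases the
    cross-entropy loss: x_adv = x + eps * sign(dL/dx).
    """
    p = sigmoid(x @ w + b)      # model prediction in (0, 1)
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

def cross_entropy(x, y, w, b):
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy classifier and clean input (hypothetical values for illustration).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.3)
print(cross_entropy(x, y, w, b), cross_entropy(x_adv, y, w, b))
```

Even with this tiny perturbation budget, the loss on `x_adv` is noticeably higher than on the clean input, which is the fragility described above: a model that performs well on its training distribution can be degraded by inputs only slightly outside it.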
Leader: Liam Paull, Université de Montréal