The security of computer systems is essential for machine learning. In particular, it must be possible to guarantee that learning is carried out on secure information systems; otherwise, the owners of sensitive data may be unwilling to adopt these techniques. This becomes especially important when learning is delegated to a third party (for example, a computing center) that has the infrastructure, tools, and expertise needed to compute automatic decision rules.
This research thrust therefore addresses security issues specific to machine learning, such as guaranteeing data confidentiality and enabling collaborative learning.
Leader: Sébastien Gambs, Université de Montréal