Can we guarantee the privacy of Machine Learning models?

Quantitative Information Flow Analysis Related to Data Privacy and Model Confidentiality in Machine Learning Based Systems

ONGOING

Protecting sensitive information from improper disclosure is a central concern in Artificial-Intelligence-based systems. In this context, patching flaws as they are discovered is not enough: a more disciplined approach, grounded in theoretical guarantees, is needed to embed privacy into the system's design.

Machine-learning-based approaches owe their success, in large part, to the abundance of data collected from a wide range of sources. Much of this data contains private information about individuals or institutions, which must be protected from leakage or improper disclosure. In addition, given the staggering costs of R&D and training, machine learning models have become the cornerstone of many products and services over the past few years, making them among the most valuable assets of many companies. Machine learning models therefore need to be equipped with efficient mechanisms that prevent them from being stolen or emulated, thus protecting the intellectual property of their owners.
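To make the notion of quantitative information flow concrete, the sketch below computes the standard QIF quantities for a small information-theoretic channel: prior and posterior Bayes vulnerability, and the resulting min-entropy leakage. These definitions follow the general QIF framework the project builds on; the channel matrix, prior, and function names are illustrative toy choices, not project code or data.

```python
import numpy as np

# A channel C maps a secret X to an observable Y, with
# C[x, y] = P(Y = y | X = x). The prior Bayes vulnerability is the
# adversary's best one-try chance of guessing X before observing Y;
# the posterior vulnerability is that chance after observing Y.
# Their ratio, in log2, is the min-entropy leakage in bits.

def prior_vulnerability(pi):
    """V(pi) = max_x pi[x]."""
    return np.max(pi)

def posterior_vulnerability(pi, C):
    """V(pi, C) = sum_y max_x pi[x] * C[x, y]."""
    joint = pi[:, None] * C  # joint[x, y] = pi[x] * P(y | x)
    return np.sum(np.max(joint, axis=0))

def min_entropy_leakage(pi, C):
    """Leakage in bits: log2(V(pi, C) / V(pi))."""
    return np.log2(posterior_vulnerability(pi, C) / prior_vulnerability(pi))

if __name__ == "__main__":
    pi = np.array([0.5, 0.5])      # uniform prior over two secrets
    C = np.array([[0.9, 0.1],      # a leaky channel: Y usually equals X
                  [0.2, 0.8]])
    print(f"prior vulnerability:     {prior_vulnerability(pi):.3f}")
    print(f"posterior vulnerability: {posterior_vulnerability(pi, C):.3f}")
    print(f"min-entropy leakage:     {min_entropy_leakage(pi, C):.3f} bits")
```

A leakage of 0 bits means observing the system reveals nothing about the secret, while log2 of the number of secrets (under a uniform prior) means full disclosure; bounding such quantities is what allows privacy to be guaranteed by design rather than patched after the fact.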

Team

Datasets used for experiments:

Reference:

QIF (DEEL Workshops)
