Ulrich Aïvodji
Professor of Computer Science
Ulrich Aïvodji is an Assistant Professor of Computer Science at ÉTS Montréal in the Software and Information Technology Engineering Department. His research interests include computer security, data privacy, optimization, and machine learning. His current research focuses on several aspects of trustworthy machine learning, such as fairness, privacy-preserving machine learning, and explainability. Before his current position, he was a postdoctoral researcher at UQAM, working with Sébastien Gambs on machine learning ethics and privacy. He earned his Ph.D. in Computer Science at Université Toulouse III, under the supervision of Marie-José Huguet and Marc-Olivier Killijian.
During his Ph.D., he was affiliated with LAAS-CNRS as a member of both the TSF and ROC research groups and worked on privacy-enhancing technologies for ridesharing.
Hiccups on the Road to XAI
Post-hoc explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden from the end-user and generally complex -- produces its outcomes. Current approaches to this problem include model explanations and outcome explanations. While these techniques can provide valuable interpretability, there are two fundamental threats to their deployment in real-world applications: the risk of explanation manipulation, which undermines the trustworthiness of post-hoc explanation techniques, and the risk of model extraction, which jeopardizes their privacy guarantees. In this talk, we will discuss common explanation manipulation and privacy vulnerabilities in state-of-the-art post-hoc explanation techniques, as well as existing lines of research that aim to make these techniques more reliable.
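To make the notion of an outcome explanation concrete, the sketch below is a minimal, illustrative LIME-style local surrogate: a black-box model's prediction for a single input is approximated by a weighted linear model fit on nearby perturbations, whose coefficients act as the explanation. The dataset, model choice, and all variable names are assumptions made for illustration and are not taken from the talk.

```python
# Minimal sketch of a perturbation-based outcome explanation (LIME-style).
# All names and data here are illustrative assumptions, not the talk's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # the opaque model

x = X[0]  # the instance whose outcome we want to explain
perturbations = x + rng.normal(0.0, 0.3, size=(200, x.shape[0]))
preds = black_box.predict_proba(perturbations)[:, 1]

# Weight perturbed samples by proximity to x, then fit a linear surrogate;
# its coefficients serve as local feature attributions.
weights = np.exp(-np.linalg.norm(perturbations - x, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(perturbations, preds, sample_weight=weights)

print("Local feature attributions:", surrogate.coef_)
```

Explanation manipulation attacks in the literature exploit exactly this kind of pipeline, for example by shaping the model's behavior off the data manifold so that the surrogate reports innocuous attributions, which is one of the vulnerabilities the talk discusses.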