Sébastien Da Veiga
Head of the AI Team for Design and Simulation
Sébastien Da Veiga is a senior expert in statistics and optimization at Safran, an international high-technology group and supplier of systems and equipment in the aerospace and defense markets. He received his PhD in statistics from Toulouse University in 2007, and his habilitation thesis on interpretable machine learning in 2021. He currently heads a research team working on the use of artificial intelligence for design and simulation. His research interests include computer experiments modeling, sensitivity analysis, optimization, kernel methods, and random forests.
Interpretability in an Industrial Context – A Sensitivity Analysis Perspective
Manufacturing production and the design of industrial systems are two examples where the interpretability of learning methods makes it possible to grasp how the inputs and outputs of a system are connected, and therefore to improve the system's efficiency. Although there is no consensus on a precise definition of interpretability, several requirements can be identified: "simplicity, stability, and accuracy", which are rarely all satisfied by existing interpretable methods. In this talk, we will discuss two complementary approaches for designing interpretable algorithms. First, we will focus on designing a robust rule learning model, which is simple and highly predictive thanks to its construction based on random forests. Second, for explaining black-box machine learning models directly, we will develop some connections between variable importance and sensitivity analysis. The objective here is to use sensitivity analysis as a guide for analyzing available importance measures, and conversely to use machine learning tools for proposing new powerful methods in sensitivity analysis.
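To give a concrete flavor of the sensitivity-analysis side mentioned above, the sketch below estimates first-order Sobol' indices (the share of output variance explained by each input alone) with the classical pick-freeze Monte Carlo estimator. This is a minimal illustrative example, not the speaker's method; the function name, the test model `4*x1 + 2*x2`, and the choice of i.i.d. uniform inputs are assumptions made for the demonstration.

```python
import random

def pick_freeze_sobol(f, d, n, seed=0):
    """Estimate first-order Sobol' indices of f by the pick-freeze method.

    f    : function taking a list of d inputs, drawn i.i.d. uniform on [0, 1]
    d, n : input dimension and Monte Carlo sample size
    Returns a list [S_1, ..., S_d] of estimated first-order indices.
    """
    rng = random.Random(seed)
    # Two independent input samples A and B
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(d):
        # "Freeze" coordinate i from A, "pick" the other coordinates from B
        AB = [B[k][:i] + [A[k][i]] + B[k][i + 1:] for k in range(n)]
        yAB = [f(x) for x in AB]
        # S_i = Cov(Y_A, Y_AB) / Var(Y): large when input i drives the output
        cov = sum(yA[k] * yAB[k] for k in range(n)) / n - mean * mean
        indices.append(cov / var)
    return indices
```

For the linear model `f(x) = 4*x1 + 2*x2` with independent uniform inputs, the exact indices are S1 = 0.8 and S2 = 0.2, so the estimates should land near those values for moderate sample sizes. The same variance-decomposition viewpoint is what the talk connects to variable-importance measures for black-box models.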