Interpretability and explainability in machine learning

Activity: Talk or presentation types › Invited talk › Scientific

Description

In recent years, AI, and machine learning in particular, has seen a clear increase in interest from outside academia. The unprecedented performance of machine learning algorithms in solving complex tasks from high volumes of structured and unstructured data has caught the attention of industry, governments, and society. This performance on very specific tasks has largely been driven by sub-symbolic techniques such as ensembles and deep learning. However, the responsible use of machine learning adds another variable to the equation: interpretability. In many application domains, predicting with high accuracy is not enough; for high-stakes decisions affecting humans, explaining why or how an intelligent algorithm made a decision or took an action is also required.
Period: 3 Mar 2022
Held at: Flemish Artificial Intelligence Academy, Belgium