Last week we had a seminar on Explainable Artificial Intelligence (XAI), given by Dr. Sebastian Lapuschkin, tenured researcher and head of the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI) in Berlin, Germany.
In the seminar, he covered three main topics:
- Why XAI is necessary, several local XAI approaches, and application examples of local XAI.
- Evaluating models with local XAI: how do we quantify model behaviour with explanations? The difference between local and global XAI, and how the two lines of XAI can be connected.
- How to improve models with techniques from XAI: improving model efficiency by pruning parts that XAI identifies as unnecessary, and improving model behaviour by avoiding or unlearning erroneous prediction strategies.
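To make the first topic concrete, here is a minimal sketch of one of the simplest local XAI ideas: occlusion-based attribution, where each input feature is zeroed out in turn and the resulting drop in the model's score is taken as that feature's relevance. The model `f`, its weights, and the input are hypothetical toy values, not taken from the seminar; for a linear model the occlusion score of feature i reduces exactly to w_i * x_i, which makes the behaviour easy to verify.

```python
import numpy as np

# Hypothetical linear scoring model f(x) = w . x (toy weights for illustration).
w = np.array([2.0, -1.0, 0.5])

def f(x):
    return float(w @ x)

def occlusion_attribution(f, x):
    """Relevance of each feature = score drop when that feature is zeroed."""
    base = f(x)
    attr = np.zeros_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0          # remove one feature at a time
        attr[i] = base - f(occluded)
    return attr

x = np.array([1.0, 3.0, 2.0])
attr = occlusion_attribution(f, x)  # for a linear f this equals w * x
```

The same perturb-and-measure loop works for any black-box model; more refined local methods (gradients, LRP) trade this O(features) cost for a single backward pass.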
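The third topic, pruning with XAI, can be sketched the same way. The toy network, weights, data, and the relevance proxy below (|v_j * h_j| averaged over inputs, i.e. each hidden unit's absolute contribution to the output) are illustrative assumptions, not the method presented in the talk; the point is only that units with consistently low relevance can be removed with little change to the model's output.

```python
import numpy as np

# Toy one-hidden-layer ReLU network (hypothetical weights for illustration).
W = np.array([[1.0, 0.5],
              [-0.5, 1.0],
              [0.2, 0.1]])        # 3 hidden units, 2 input features
v = np.array([1.0, -1.0, 0.01])   # third unit barely contributes to the output

def forward(x, W, v):
    h = np.maximum(0.0, W @ x)    # ReLU hidden activations
    return v @ h, h

# Simplified relevance proxy per hidden unit: |v_j * h_j| averaged over data.
X = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
rel = np.zeros(len(v))
for x in X:
    _, h = forward(x, W, v)
    rel += np.abs(v * h)
rel /= len(X)

# Prune hidden units whose relevance falls below 1% of the maximum.
keep = rel >= 0.01 * rel.max()
W_pruned, v_pruned = W[keep], v[keep]

y_full, _ = forward(X[0], W, v)
y_pruned, _ = forward(X[0], W_pruned, v_pruned)
```

After pruning, `y_pruned` stays close to `y_full` because only the near-irrelevant unit was removed; in practice the same idea is applied with attribution scores (e.g. from LRP) instead of this simple activation-based proxy.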