iToBoS project at AI for Good – Discovery – Trustworthy AI

Geneva, 24/10/2022.

The iToBoS project was presented at AI for Good, a series of talks driving forward technological solutions that measure and advance the UN's Sustainable Development Goals. The series brings together a broad network of interdisciplinary researchers, nonprofits, governments, and corporate actors to identify, prototype, and scale solutions that engender positive change.

On October 24th, 2022, Sebastian Lapuschkin was invited as a speaker for the Trustworthy AI track.

His talk covered the latest XAI research conducted at FHHI: The emerging field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. However, the vast majority of current approaches to XAI provide only partial insights and leave the burden of interpreting the model's reasoning to the stakeholder. In this talk we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions for individual predictions in a post-hoc manner, without imposing additional constraints on the model. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model. We thereby lift the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and thus enable novel analyses for gaining insights into the reasoning of AI.
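The contrast between Activation Maximization and Relevance Maximization can be illustrated with a minimal sketch. The arrays, function name, and toy scores below are hypothetical, not from the talk or the CRP implementation: the idea is simply that reference samples for a concept are ranked by the relevance attributed to that concept during the model's decision, rather than by how strongly the concept activates.

```python
import numpy as np

def select_reference_samples(activations, relevances, k=3, by="relevance"):
    """Pick the k samples that best represent a concept (e.g. one channel).

    activations: per-sample activation of the concept (shape: [n_samples])
    relevances:  per-sample relevance attributed to the concept for a
                 prediction, e.g. from a CRP-style backward pass
    by: "activation" mimics classic Activation Maximization; "relevance"
        follows the Relevance Maximization idea of ranking samples by
        their usefulness to the model's decision.
    """
    scores = relevances if by == "relevance" else activations
    # Indices of the k highest-scoring samples, best first
    return np.argsort(scores)[::-1][:k]

# Toy scores for 6 samples: the most strongly activating sample (index 0)
# is not the one the model actually relies on (index 4).
acts = np.array([9.0, 1.0, 3.0, 2.0, 5.0, 0.5])
rels = np.array([0.1, 0.0, 0.2, 0.1, 0.9, 0.0])

print(select_reference_samples(acts, rels, k=2, by="activation"))  # [0 4]
print(select_reference_samples(acts, rels, k=2, by="relevance"))   # [4 2]
```

The two rankings can disagree, which is the point made in the abstract: a sample that maximally activates a concept is not necessarily the sample most useful to the model's prediction.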

More information is available at https://aiforgood.itu.int/event/towards-human-understandable-explanations-with-xai-2-0/