xAI Technical Reporting – Part 3

This is the final blog in a three-part series explaining the technical reporting for explainable artificial intelligence (xAI) used in the iToBoS project.

The iToBoS project is developing an intelligent full body scanner to improve the diagnosis of skin cancer using artificial intelligence (AI). The scanner uses a number of deep learning models to identify and track changes in skin lesions, helping doctors identify potential cancers faster than the current state of the art. However, the AI algorithms involved introduce a problem: the “black box” issue, whereby a model does not reveal how it arrives at its outputs, making it impossible for humans to provide meaningful oversight of its decisions. In real-world contexts with potentially significant impacts on human life, such as healthcare, this issue prevents the deployment of potentially life-saving technology.

In AI research and development, explainability, or xAI, refers to efforts to counter the black box problem by identifying exactly how models reach their outputs. Understanding these processes provides the transparency necessary to deploy such tools in real-world contexts. In iToBoS, xAI tasks are led by the Fraunhofer Heinrich Hertz Institute (FHHI), which must identify the most appropriate xAI approach for each type of model included in the tool.

We’ve broken the explanation of xAI in iToBoS into a series of blogs to make the technical details more digestible. The first blog explained the two broad categories of xAI methods used in the project: local and global. The second blog covered the mole detection, mole tracking, and mole classification models and the xAI methods applied to each. This blog concludes the series by explaining the final two models and their accompanying xAI approaches.

UV Damage Assessment Model

This model uses a convolutional neural network (CNN) architecture originally designed for classification and modified to address a regression task. The model extracts imaging phenotypes related to a patient’s sun damage, which are then used to calculate overall risk scores. For local explanations, Layer-wise Relevance Propagation (LRP) is applied, with some modifications to account for the continuous nature of the target variable. For global explanations, Concept Relevance Propagation (CRP) is applied to the last convolutional layer of the model.
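As a rough illustration of how LRP can be adapted from classification to a continuous target, the sketch below uses PyTorch together with the open-source zennit library (a common LRP implementation, not necessarily the project’s actual tooling). The CNN architecture, input size, and seeding choice are placeholders: instead of a one-hot class vector, the backward pass is seeded with ones on the scalar output, so the resulting relevance map decomposes the predicted sun-damage score across the input pixels.

```python
import torch
from torch import nn
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

# Hypothetical stand-in for the UV damage regressor: a small CNN whose
# final layer outputs a single continuous sun-damage score.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 1),
)
model.eval()

x = torch.rand(1, 3, 64, 64, requires_grad=True)    # dummy skin image

composite = EpsilonPlusFlat()                        # a standard set of LRP rules
with Gradient(model=model, composite=composite) as attributor:
    # A regression head has no one-hot class vector; seeding the backward
    # pass with ones attributes the scalar sun-damage score itself.
    score, relevance = attributor(x, torch.ones(1, 1))

print(score.item(), relevance.shape)                 # pixel-wise relevance map
```

A global CRP analysis builds on the same propagation but groups relevance by channels (concepts) in the last convolutional layer, rather than mapping everything back to individual pixels of a single image.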

Clinical and Imaging Risk Assessment Model

This model is responsible for integrating clinical and imaging data, whether pre-existing or generated by other techniques, to estimate an individual’s melanoma risk score. The model incorporates the LRP method for explanations, which extends beyond CNN architectures and is applicable to various data types, including the numerical data used here. The results are a detailed analysis for each subpopulation, as well as the five features that contribute most to the risk score.
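To give a sense of how LRP applies to tabular data, here is a minimal sketch assuming a small fully connected PyTorch regressor; the layer sizes, epsilon value, and feature count are illustrative, not the project’s actual configuration. The epsilon rule redistributes the predicted risk score layer by layer back to the input features, which can then be ranked to report the most relevant ones.

```python
import torch
import torch.nn as nn

def lrp_epsilon_dense(layers, x, eps=1e-6):
    """Backpropagate relevance from a scalar risk score to the input
    features through a stack of Linear/ReLU layers (epsilon rule)."""
    activations = [x]
    for layer in layers:
        activations.append(layer(activations[-1]))
    relevance = activations[-1].clone()              # seed: the predicted risk score
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        if isinstance(layer, nn.Linear):
            z = layer(a)
            z = z + eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
            s = relevance / z                        # stabilised contribution ratios
            relevance = a * (s @ layer.weight)       # redistribute to the layer's inputs
        # ReLU layers pass relevance through unchanged
    return relevance

# Hypothetical risk model: 12 clinical/imaging features -> scalar melanoma risk score.
model = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()

x = torch.rand(1, 12)                                # one patient's feature vector
with torch.no_grad():
    relevance = lrp_epsilon_dense(list(model), x)

top5 = torch.topk(relevance.abs(), k=5).indices      # the five most relevant features
print(top5)
```

Running the same attribution over a whole subpopulation and aggregating the per-patient relevances is one straightforward way to obtain the kind of group-level feature analysis described above.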

This blog series covered the iToBoS project’s approach to explainability, a crucial element in the implementation of AI-powered technologies in real world contexts. You can read the other two blogs here and here.