XAI Hyperparameter Optimization

Rule-based eXplainable AI (XAI) methods, such as layer-wise relevance propagation (LRP) and DeepLIFT, offer great flexibility through configurable rules, allowing AI practitioners to tailor the XAI method to the problem at hand.

This flexibility comes at the cost of a large number of potential XAI parameterizations, especially for complex models with many layers. However, finding optimal parameters has barely been researched and is often neglected, which can cause these methods to yield suboptimal explanations.
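To get a feel for how quickly the search space grows, here is a back-of-the-envelope sketch. The rule count and layer count below are illustrative assumptions (VGG-11 has 11 weight layers; the number of candidate rules per layer depends on the chosen rule set), not the numbers used in the post:

```python
# Illustrative search-space size for per-layer rule assignment.
# Assumption: each of the 11 weight layers of a VGG-11 model can be
# assigned one of 3 candidate LRP rules (real rule sets differ).
n_rules = 3
n_layers = 11

# Each layer's rule is chosen independently, so the number of distinct
# parameterizations grows exponentially with depth.
n_configs = n_rules ** n_layers
print(n_configs)  # 177147
```

Even under these modest assumptions, an exhaustive grid search is already impractical, which is why a principled optimization strategy is needed.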

In this blog post, we demonstrate hyperparameter optimization for LRP using the XAI evaluation framework presented in an earlier post. Specifically, we want to explain a VGG-11 model with BatchNorm (BN) layers trained on the ILSVRC2017 dataset.