Transformers

Transformers are deep learning architectures, proposed in [1], designed to solve sequence-to-sequence tasks such as language translation.

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

Just over a year ago, the Quantus toolkit v0.1.1 was shared with the Machine Learning (ML) community as a pre-print on arXiv.org.

XAI Beyond Explaining

Explainable Artificial Intelligence (XAI) can do more than provide insight into the reasoning process of an Artificial Intelligence (AI) model.

Attention is all you need

The paper ‘Attention Is All You Need’ introduces the Transformer, an attention-based sequence-to-sequence architecture.
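
At the core of the architecture is scaled dot-product attention, which the paper defines as

    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,

where Q, K and V are the query, key and value matrices and d_k is the dimensionality of the keys. Stacking several such attention heads, together with position-wise feed-forward layers, replaces the recurrence used in earlier sequence-to-sequence models.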

XAI Hyperparameter Optimization

Rule-based eXplainable AI (XAI) methods, such as layer-wise relevance propagation (LRP) and DeepLIFT, offer great flexibility thanks to configurable rules, allowing AI practitioners to tailor the XAI method to the problem at hand.
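
As a rough illustration, the sketch below computes LRP attributions with a configurable rule set using the zennit library; the library choice, model and rule parameters are assumptions made for this example and are not prescribed by the post. Swapping or re-parameterising the composite (for example EpsilonGammaBox versus EpsilonPlusFlat) is exactly the kind of rule configuration meant above.

    import torch
    from torchvision.models import vgg16
    from zennit.attribution import Gradient
    from zennit.composites import EpsilonGammaBox

    model = vgg16().eval()                 # placeholder model for the example
    data = torch.randn(1, 3, 224, 224)     # placeholder input image

    # The composite bundles per-layer LRP rules; its hyperparameters (here the
    # input-range bounds of the first-layer ZBox rule) are the configurable part
    # that can be tuned to the problem at hand.
    composite = EpsilonGammaBox(low=-3.0, high=3.0)

    with Gradient(model=model, composite=composite) as attributor:
        # attribute the output unit for class 0 of the 1000 output classes
        output, relevance = attributor(data, torch.eye(1000)[[0]])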

DenseNet Canonization

As we saw in a previous post, some of the challenges that arise when explaining neural network decisions can be overcome via canonization.
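
For intuition, a typical canonization step folds a BatchNorm layer into the preceding convolution, leaving the model's function unchanged while making its structure friendlier to rule-based attribution. The helper below is a minimal, illustrative sketch (the function name and layer handling are ours, not taken from the post); after folding, the BatchNorm module can be replaced by an identity.

    import torch
    from torch import nn

    def fold_batchnorm(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
        # Illustrative canonization step: fold BatchNorm statistics into the
        # preceding convolution so that conv(x) alone reproduces bn(conv(x)).
        with torch.no_grad():
            scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
            bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
            conv.weight.mul_(scale.reshape(-1, 1, 1, 1))
            conv.bias = nn.Parameter((bias - bn.running_mean) * scale + bn.bias)
        return conv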

What happens when you’re diagnosed with melanoma?

iToBoS is aiming to streamline melanoma diagnosis, but what happens once you are diagnosed with melanoma?

The fourth issue of the iToBoS newsletter has been launched

With a handful of assorted articles and updates, this issue offers content about the project, technology and trends.

Optical Technologies for Skin Cancer Detection

Optical technologies are a promising tool for the early detection of melanoma, a type of skin cancer that can be aggressive and deadly if it is not found and treated in its early stages.