Continuing from the previous blog (Ethical AI – Perspectives from Patient Advocates: Ethics and Emerging Technology – Group 2 (Part 1)), this blog focuses on the remaining results from Group 2. These results include discussions on explainability, trust and transparency.
Explainable AI (XAI) has shown the potential to increase understandability, for example by helping users make sense of the underlying biological processes a model draws on[1]. “Explainability” is the idea that we can objectively determine why an AI system makes specific inferences from the data it is analysing or learning from – essentially, why it gives the output that it does. At the same time, explainability is concerned with how the system relays that information to a human.
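To make this idea concrete, the short Python sketch below is a purely illustrative, hypothetical example (it is not the iToBoS pipeline, and the feature names and data are invented). It trains a simple logistic regression on synthetic data and prints each feature's contribution to one prediction, which is the kind of "why did the model give this output" explanation that XAI aims to provide.

```python
# Minimal, hypothetical sketch of a per-prediction explanation.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["lesion_size_mm", "asymmetry_score", "patient_age"]

# Synthetic training data: 200 cases, 3 features, binary label.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single prediction: for a linear model, each feature's
# contribution to the (log-odds) decision score is coefficient * value.
case = X[0]
contributions = model.coef_[0] * case

print(f"Predicted probability of class 1: {model.predict_proba([case])[0, 1]:.2f}")
for name, value, contrib in zip(feature_names, case, contributions):
    print(f"{name}: value={value:+.2f}, contribution={contrib:+.2f}")
```

Even a toy example like this shows the two sides of explainability noted above: working out why the system produced a result, and then relaying that reasoning in a form a person can follow.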
Therefore, language such as “Data quality”, “How smart is this” and “Data is compromised” highlights the importance of any result produced by an AI system being transparent, with a clear explanation of how and why that result was obtained, especially for those who may be affected by it.
Both “Competence” and “Integrity” may carry two meanings. “Competence” might relate to the efficacy of the algorithm, while “Integrity” may refer to the reliability of the data used (“Data quality”). Equally, both terms may relate to the capabilities of those developing the AI solution, or of those interpreting the final result.
It is clear from the word cloud results that “Transparency” is vital when using AI-assisted tools. Highlighted as one of the key themes, its meaning can be interpreted in various ways. For instance, “Infoxication” refers to overloading non-experts with so many details that relevant and important information is masked, making the overall message non-transparent.
“Transparency” surrounding “privacy” is vital when developing XAI systems. In the health sector, it is the public’s health data that drives the accuracy and efficiency of these systems, including iToBoS.
Being transparent about what data is collated, how it is used, how it is stored and who has “access” to it is critical to persuading willing participants to share their data. Transparency builds trust, and may therefore help alleviate fears the public may have, such as becoming a “Clear target” for a “Data leak”, or the potential misuse of their data.
There is a strong overlap in the ethical norms discussed in the analysis of this cartoon. Demonstrating clear accountability for each aspect of an XAI system such as iToBoS builds trust among stakeholders. Transparency also builds trust and may help alleviate the fears that end users (healthcare providers and patients) may have. Being transparent about how a person’s privacy is protected may lead to large-scale uptake of these systems.
However, while it is essential to be transparent about, and able to explain, how an XAI system produces its end result, it is also important not to overcomplicate the message.
Toussaint et al.[1] wrote that “using AI in clinical practice is still unresolved and problematic due to open medical, legal, ethical and societal questions”.
If XAI systems are built with consideration of the fundamental themes identified by the patient advocacy participants, who provide real-life expertise, there is potential for their use and uptake as healthcare tools across the wider health ecosystem. This applies to iToBoS as well as to the wider application of AI to health, health data, and the provision of health services.
[1] Philipp A Toussaint et al., ‘Explainable Artificial Intelligence for Omics Data: A Systematic Mapping Study’, Briefings in Bioinformatics 25, no. 1 (22 November 2023): bbad453, https://doi.org/10.1093/bib/bbad453.