This blog is part one of the third in a series of five discussing the results of the Ethical AI workshop at MPNE Consensus Data 2024.
During the workshop, participants from the wider MPNE community, drawn from across Europe, were divided into four groups to discuss different cartoons, each depicting a different topic or theme related to emerging technology, health, and ethics. This blog examines the perspectives of the participants assigned to Group 2, in relation to the cartoon below.
The patient advocates who engaged with the workshop provided a wealth of subject matter expertise on topics such as governance, accountability, privacy, fairness, trust and transparency. Applying these ethical requirements in the health domain is now more critical than ever, given the increasingly rapid changes taking place within the sector, including the introduction of emerging technologies.
Trilateral Research moderated the workshop with participation from partners in the iToBoS project (IBM, Fraunhofer, and MPNE). The participants presented their perspectives and experiences in relation to their cartoons, in a safe and protected environment under the Chatham House Rule. The aim of the workshop was to identify key themes to incorporate into WP2 deliverables centred on the assessment of law, ethics and the societal impact of the iToBoS technologies.
The participants from Group 2 were asked not only to give their opinions and perspectives on the cartoon below, but also to describe how they felt towards it.
Using Slido, the following 23 responses from the two participants were collated as they openly, honestly and freely discussed the cartoon.
The results from the discussion imply that accountability, explainability, trust and transparency are extremely important when implementing an AI system such as iToBoS as a tool in healthcare. In this blog, the focus is on accountability.
“Accountability”, “Ownership”, “Who is the owner” and, to a lesser extent, “Irresponsible” all suggest there needs to be a very clear understanding of who is responsible and liable for the accuracy and efficiency of the AI solution.
“Not well understood by developers” may suggest it is the developers alone who are accountable for how an AI system determines a result. However, responses such as “Data management” and “Data quality” suggest that those responsible for data sources might also be considered accountable. Poor quality data, misinterpretation of data, or the risk of data tampering (“Potential for data manipulation”) will all affect the end result.
The final results from these discussions will be presented in a follow-up blog: Ethical AI – Perspectives from Patient Advocates: Ethics and Emerging Technology – Group 2 (Part Two).