Artificial Intelligence (AI) and Machine Learning (ML) in healthcare

Despite the technology risks related to AI, the MDR regulatory framework does not include any specific requirements for the use of artificial intelligence in medical devices, and no law, common specifications, or harmonized standards exist to regulate the application of AI in medical devices.

Currently, AI-based software (SW) in the medical sector has to meet the general requirements of safety and performance valid for all medical devices and listed in Annex I of the Medical Device Regulation (MDR), specifically:

  • Demonstration of safety and performance, meaning that the software must be developed and manufactured in keeping with the state of the art.
  • Validation of the device against its intended purpose and verification against its specifications.
  • Development of the software in a way that ensures repeatability, reliability, and performance, including a description of the methods used for verification.
  • Clinical evaluation of the software and its algorithms, which may be based on a comparator device with sufficient technical equivalence.

Following its own path, the Special Committee on Artificial Intelligence in a Digital Age (AIDA), established by the European Parliament, has directed its efforts at filling the current regulatory gaps in order to develop a European framework on Artificial Intelligence (AIDA, 2021); this work fed into the proposal for the Artificial Intelligence Act (AIA), a regulation applicable to all AI-driven SW (used in all applications, not only medical ones).

In February 2020 the European Union published a position paper on the regulation of artificial intelligence in medical devices and, following a public hearing, AIDA recently published a “Working Paper on AI and Health”. These initiatives led the European Commission to propose a Regulation laying down harmonized rules on Artificial Intelligence (COM, 2021).

As the FDA has done in the USA, the European Commission’s proposal foresees a total product lifecycle approach, involving a pre-market assessment along with ongoing reviews in which manufacturers are expected to meet several legal requirements.

According to the proposal, an AI system used as a medical device shall be classified as high risk and shall be subject to scrutiny by a Notified Body (COM, 2021, Art. 6). Moreover, the device shall fulfill all the requirements already established by the Medical Device Regulation, as well as those laid down in Chapter II of the proposal (COM, 2021, Arts. 8, 43(3)), which include:

  • Variety and completeness of collected data: the data used to train the AI-based medical device must be broad and representative of all relevant scenarios (a minimal representativeness check is sketched after this list).
  • Manufacturers must accurately record and retain the data used to train, build, test, and validate the AI-based medical device.
  • Regarding transparency, manufacturers are required to provide information on the AI-based medical device’s capabilities, limitations, and purposes for its intended use, along with the conditions under which it should function and the expected levels of accuracy.
  • Manufacturers are required to demonstrate the robustness, accuracy, and ability of the AI-based medical device to correct errors and inconsistencies at all phases of the life cycle.
  • Manufacturers must ensure that their AI-based medical device has an appropriate level of human oversight.
  • Robust risk assessment and management, taking into consideration the continuous training and adaptation of the algorithms.
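
To make the first requirement concrete, the sketch below illustrates one possible representativeness check: a chi-square goodness-of-fit test comparing subgroup proportions in a training set with those of a reference patient population. The column name, age bands, and reference proportions are hypothetical placeholders; actual subgroups and acceptance criteria would have to be defined in the manufacturer’s risk analysis.

```python
# Minimal sketch: checking that training data cover relevant subgroups.
# Column names, subgroup labels, and reference proportions are hypothetical.
import pandas as pd
from scipy.stats import chisquare

# Hypothetical reference proportions for the target patient population.
reference = {"18-40": 0.30, "41-65": 0.45, "66+": 0.25}

def check_representativeness(train_df: pd.DataFrame, column: str,
                             reference: dict, alpha: float = 0.05) -> bool:
    """Chi-square goodness-of-fit test: do subgroup counts in the
    training data match the reference population proportions?"""
    counts = train_df[column].value_counts()
    observed = [counts.get(group, 0) for group in reference]
    expected = [p * len(train_df) for p in reference.values()]
    stat, p_value = chisquare(observed, expected)
    print(f"{column}: chi2={stat:.2f}, p={p_value:.3f}")
    return p_value >= alpha  # True: no significant mismatch detected

# Example usage with a toy training set of 1000 records.
train_df = pd.DataFrame(
    {"age_band": ["18-40"] * 290 + ["41-65"] * 460 + ["66+"] * 250})
assert check_representativeness(train_df, "age_band", reference)
```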

The adoption of a harmonized regulatory framework for Artificial Intelligence at the European level may help resolve open questions concerning the implementation of the rules, the risks, and the safe use of AI-enabled devices in healthcare, particularly those whose algorithms are based on self-learning and in continuous evolution.

Some open issues remain; we report several of them below:

  • The ratio between the two required data sets (training data and data for the validation of the ML model); the state of the art recommends approximately 80% training data to 20% validation data, but the ratio actually used depends on many factors and is not set in stone (see the sketch after this list).
  • Use of independent data for validation.
  • The number of data sets; it depends on the number of properties (the dimensionality of the data), on the statistical distribution of the data (which, to prevent bias, must represent the statistical distribution of the application environment), on the learning methods used, and on other characteristics.
  • The quality and accuracy of the data and of their labeling, particularly in deep learning models that rely on a supervised learning approach.
  • The management by the manufacturer of a fault condition (single fault condition) through the implementation of measures capable of minimizing unacceptable risks and any reduction in the performance of the medical device (MDR Annex I, 17.1). This is difficult when AI models are “black boxes”, since there is no transparency in how such models arrive at their decisions; explainable and approvable AI decisions are a prerequisite for the safe use of AI on actual patients (an illustrative explainability sketch follows this list).
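
As an illustration of the 80/20 split mentioned in the first item, the following sketch uses scikit-learn’s train_test_split. The synthetic data and the chosen ratio are placeholders; as noted above, the appropriate ratio depends on the specific application.

```python
# Minimal sketch of the approx. 80/20 train/validation split discussed above.
# The feature matrix X and labels y are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1000, 10))    # 1000 samples, 10 features
y = rng.integers(0, 2, size=1000)  # binary labels

# test_size=0.2 gives the commonly cited 80/20 ratio; stratify=y keeps the
# class distribution identical in both sets, which matters when the data
# must mirror the statistical distribution of the application environment.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(len(X_train), len(X_val))  # 800 200
```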
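
Regarding the last item, explainability techniques can provide a degree of transparency into otherwise opaque models. The sketch below applies permutation feature importance (one common technique among many, e.g. SHAP or LIME) to a synthetic black-box classifier; it is illustrative only, not a validated clinical approach.

```python
# Minimal sketch: probing a "black-box" model with permutation feature
# importance. Data and model are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # labels driven by features 0 and 2

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in validation accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_drop:.3f}")
```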