# AI: Risks and Challenges (June 2022 edition)

Last week’s virtual e-Health conference and tradeshow featured some intriguing examples of how AI and machine learning are being used in the Canadian healthcare context – from developing a screening blood test for breast cancer to helping public health officials manage the COVID-19 pandemic.

Perhaps more significant was a panel discussion on “practicing responsible AI”, which noted that while AI has the potential to expand health services in underdeveloped regions globally, it also risks creating “data poverty” when populations are not properly represented in the databases used to build the algorithms driving AI-based clinical programs.

A just-published report by the European Parliament Panel for the Future of Science and Technology provides a deeper dive into the risks and the ethical and societal impacts of AI and machine learning touched on in the e-Health panel discussion.

The European report was based on “a comprehensive interdisciplinary (but non-systematic) literature review and analysis of existing scientific articles, white papers, recent guidelines, governance proposals, AI studies and results, news articles and online publications.”

The report notes that “AI has progressively been developed and introduced into virtually all areas of medicine, from primary care to rare diseases, emergency medicine, biomedical research and public health. Many management aspects related to health administration (e.g. increased efficiency, quality control, fraud reduction) and policy are also expected to benefit from new AI-mediated tools.”

In the clinical setting specifically, the European report authors state the potential of AI “is enormous and ranges from the automation of diagnostic processes to therapeutic decision making and clinical research.”

The report goes on to identify and elaborate upon seven main risks associated with the use of AI in medicine and healthcare:

  • patient harm due to AI errors
  • the misuse of medical AI tools
  • bias in AI and the perpetuation of existing inequities
  • lack of transparency
  • privacy and security issues
  • gaps in accountability
  • obstacles in implementation

“Not only could these risks result in harms for the patients and citizens, but they could also reduce the level of trust in AI algorithms on the part of clinicians and society at large,” the authors state. “Hence, risk assessment, classification and management must be an integral part of the AI development, evaluation and deployment processes.”

Even with large-scale datasets of sufficient quality for training AI technologies, the report says there are still at least three major sources of error for AI in clinical practice:

  1. AI predictions significantly impacted by noise in the input data during use of the AI tool – for example, scanning errors when using AI in ultrasound imaging.
  2. AI misclassifications due to dataset shift, which occurs when the statistical distribution of the data seen in clinical practice drifts, even slightly, from the distribution of the dataset used to train the AI algorithm (see the sketch after this list).
  3. Erroneous predictions caused by the difficulty AI algorithms have in adapting to unexpected changes in the environment and context in which they are applied.
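To make the dataset-shift risk concrete, here is a minimal sketch (mine, not from the report) of one common monitoring approach: comparing the distribution of each input feature seen in deployment against the training distribution with a two-sample Kolmogorov–Smirnov test. The feature names, threshold, and simulated drift are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the report's method): flagging possible
# dataset shift by comparing live input data against the training distribution.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_shift(train_data, live_data, feature_names, alpha=0.01):
    """Run a two-sample Kolmogorov-Smirnov test per feature and return
    features whose live distribution differs from the training one."""
    shifted = []
    for i, name in enumerate(feature_names):
        stat, p_value = ks_2samp(train_data[:, i], live_data[:, i])
        if p_value < alpha:  # distributions differ more than chance allows
            shifted.append((name, stat, p_value))
    return shifted

# Hypothetical example: simulate a subtle calibration drift in one feature.
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))
live = np.column_stack([
    rng.normal(loc=0.0, scale=1.0, size=2000),  # unchanged feature
    rng.normal(loc=0.3, scale=1.0, size=2000),  # subtly shifted feature
])

for name, stat, p in detect_feature_shift(train, live, ["hr_signal", "scan_intensity"]):
    print(f"Possible dataset shift in '{name}': KS={stat:.3f}, p={p:.2e}")
```

A monitor like this would not fix the shift, but it could alert the deployment team before slightly drifted inputs quietly degrade clinical predictions.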

The report authors outline the potential for misuse of medical AI tools and potential mitigating factors in the chart below:

[Chart: potential misuse of medical AI tools and mitigating factors. From the European Parliament report, Artificial Intelligence in Healthcare]

When it comes to the well-demonstrated potential for bias in AI, the report suggests mitigating these risks by:

  • Systematic AI training with balanced and representative datasets (a minimal illustration follows this list)
  • Involving social scientists in interdisciplinary approaches to medical AI
  • Promoting more diversity and inclusion in the field of medical AI
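As a concrete illustration of the first point, here is a short sketch of one common balancing technique: reweighting training examples so an under-represented group contributes proportionally to model fitting. The synthetic data, group labels, and model choice are hypothetical assumptions of mine, not the report’s prescription.

```python
# Minimal sketch (illustrative assumption, not the report's method):
# reweighting training samples so an under-represented group is not
# drowned out during model fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)

# Hypothetical training set: 90% of examples from group A, 10% from group B.
n_a, n_b = 900, 100
X = np.vstack([rng.normal(0.0, 1.0, (n_a, 3)), rng.normal(0.5, 1.0, (n_b, 3))])
group = np.array(["A"] * n_a + ["B"] * n_b)
y = rng.integers(0, 2, n_a + n_b)  # placeholder labels for the sketch

# Weight each sample inversely to its group's frequency so both groups
# contribute equally to the loss ("balanced" is a built-in heuristic).
weights = compute_sample_weight(class_weight="balanced", y=group)

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
print(f"Group A weight: {weights[0]:.2f}, Group B weight: {weights[-1]:.2f}")
```

Reweighting is only one option – collecting more representative data, as the report emphasizes, addresses the root cause rather than compensating for it statistically.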

The report notes accountability is key to the greater acceptance of AI in the field of medicine. “…clinicians that feel that they are systematically held responsible for all AI-related medical errors – even when the algorithms are designed by other individuals or companies – are unlikely to adopt these emerging AI solutions in their day-to-day practice. Similarly, citizens and patients will lose trust if it appears to them that none of the developers or users of the AI tools can be held accountable for the harm that may be caused.” For this reason, the report authors state: “There is a need for new mechanisms and frameworks to ensure adequate accountability in medical AI …”

Of course, the European report is far more comprehensive than the summary above and also provides detailed suggestions for mitigating the risks it identifies – some specific to the European policy environment and others not.
