How do you license a black box?



Amidst all the recent hype and hope surrounding the use of machine learning and AI in medicine, it is good to know someone is also asking how the heck we are going to figure out whether a proposed algorithm or application is doing what it is supposed to be doing.

Enter Health Canada’s Digital Health Division (you will be pleased to know we have one), which was created a year ago to do just that (among other things).

At the recent CADTH symposium in Edmonton, Dr. Tyler Dumouchel, a senior scientific evaluator in that division, gave a fascinating overview of the regulatory challenges currently facing Health Canada, as applications utilizing machine learning and AI are already being submitted (and receiving approval).

While noting that AI has the potential to revolutionize health care, Dumouchel said there is currently no established regulatory framework for assessing AI in medical devices. Hence the challenges he enumerated, which include:

  • How to balance encouraging innovation and facilitating market access with safety and effectiveness.
  • What pre-market authorization should be required from the manufacturer, and should we be regulating manufacturing processes rather than the finished product?
  • How reliable and representative are the training data used to develop the particular application (a biggie)? Dumouchel noted such data need to reflect the Canadian patient population, be accurate, be collected on a multi-centre basis, and include accurate disease prevalence data.
  • How do we know the algorithm used is generalizable and transferable, and what is the best approach to ensure the algorithms used generate correct and predictable results?
  • What are the best metrics to use to assess the performance of an AI algorithm?
  • How can we ensure that AI is integrated appropriately into its intended environment without unintended consequences?
  • When and how do you assess continuous learning algorithms, whose results can vary over time?
  • What sort of post-market regulation framework is appropriate for AI applications?
  • Who is accountable if the application makes a mistake?
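To see why accurate prevalence data matter when judging performance metrics (two of the points above), consider that a classifier's sensitivity and specificity are properties of the model, but the positive predictive value a clinician actually experiences depends heavily on how common the disease is in the target population. Here is a minimal sketch in Python using purely illustrative numbers, not any real application's figures:

```python
# Illustrative sketch: the same hypothetical AI classifier looks very
# different depending on disease prevalence in the population it serves.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value (probability a positive result is a
    true positive), computed via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical model that is 90% sensitive and 90% specific:
print(f"PPV at 1% prevalence:  {ppv(0.9, 0.9, 0.01):.1%}")   # roughly 8%
print(f"PPV at 20% prevalence: {ppv(0.9, 0.9, 0.20):.1%}")   # roughly 69%
```

In other words, a model trained and validated in a high-prevalence specialist setting could be badly misleading when deployed for general screening, which is one reason training data that reflect the actual Canadian patient population matter.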

These are all obviously important issues and it is gratifying that the federal government has acknowledged them and is taking them into consideration.

But there is some sense of urgency here. As Dumouchel points out, Health Canada doesn’t want to be left behind internationally when setting standards for medical devices that use AI algorithms.

