AI in healthcare: Are we dinosaurs waiting for the comet to hit?

In the last couple of months, lectures and articles about the impact AI will have on healthcare have grown at an exponential pace, especially in the wake of interest in ChatGPT and GPT-4 and how these large language models could revolutionize well … everything.

Witness for example this tweet by Canadian Medical Association President Dr. Alika Lafontaine while attending #TED23 earlier this month.

The world’s largest health IT conference – HIMSS – held in Chicago at the same time, also saw the potential impacts of new AI applications dominate the talks from the podium, but largely in a positive fashion. Here the emphasis was on how large electronic medical record vendors and other software developers are integrating models such as ChatGPT into their systems in ways that could help physicians dramatically reduce the amount of paperwork they have to do or enhance care delivery.

However, for many, the uneasiness felt by Dr. Lafontaine dominates. Perhaps one of the best showcases of the current state of those concerns in a policy and ethical context was a recent lecture and discussion on the regulatory and ethical challenges of AI in healthcare, held by the University of Ottawa Centre for Health Law, Policy and Ethics.

For Dr. Colleen Flood, director of the centre and University Research Chair in Health Law & Policy, issues related to the use of AI in healthcare represent a prime opportunity for the federal government and Health Canada to proactively intervene and set policies in this area to protect the public.

“I think most legal scholars understand, even if the public and patients don’t, the incredible uphill battle that patients face to successfully sue a doctor or health care professional for negligent treatment,” said Dr. Flood in her introductory remarks at the session. “I think that the present difficulties that patients have in this regard will be greatly exacerbated by the black box of algorithmic decision making.”

“In the future … all doctors will be supported by AI, in their decision making, and in some cases may replace actual professional judgement or decision making.” She noted this could significantly impair the trust that is fundamental to the physician-patient relationship.

Dr. Flood said Health Canada must play a role in improving the safety and quality of medical devices that use AI. She returned to this theme in her closing remarks, stating that “we have an opportunity here through federal regulation, through Health Canada, to really make the right platforms for Canadian AI innovation in healthcare that can set appropriate standards.

“We’re flat-footed a lot of the time when it comes to regulation and legal responses, like dinosaurs waiting around in the swamp for the comet to hit us. We have got to get a lot smarter, faster, more flexible and innovative and actually use some of these technologies to help us regulate as well.”

Keynote speaker for the session was Glenn Cohen, deputy dean and professor at Harvard Law School and someone whose work was described by Flood as being “foundational” in the area of AI and healthcare.

After outlining the reasons for using AI in healthcare (see note below in ‘Bonus content’), Cohen then outlined five use cases of AI:

  • Choosing cancer therapeutics
  • ICU bed allocation
  • Starting dose of FSH (follicle-stimulating hormone) during ovarian stimulation
  • Using AI to select the embryo with the highest chance of successful pregnancy
  • Endocardial boundary detection for LVEF (left ventricular ejection fraction)

Cohen then presented a detailed analysis of the ethical considerations in each of the phases of building and implementing AI tools that use predictive analytics. For example, when it comes to acquiring data to build the algorithms to train and use AI, he said questions include:

Do patients need to explicitly consent if we want to use their data – all the electronic medical records and the data that has gone into them, produced over the course of your lifetime? Has anybody ever asked you whether an artificial intelligence agent can be trained on it? Is it enough to be notified, or do we need actual consent? How representative is the data we’re going to have? What about people in rural settings? What about racial and other minorities? What about First Nations peoples?

At the end of the day, if you develop an AI model or tool that does work, Cohen asked, “how do you ensure that it’s disseminated and available and licensed in a way that’s also equitable,” rather than just being used in a concierge medicine setting.

When it comes to liability, Cohen said that, just like physicians, AI tools will also make errors in patient care. “Under the current law, if you’re a physician, you face liability only when you do not follow the standard of care and an injury results, (so) the safest way to use medical AI is to confirm the thing you were going to do anyways.” But if you consider that the benefit of medical AI is to catch cases where a physician should do something different from the standard of care, he said, this approach “is leaving most of the value on the table.”

Cohen also dealt with the concept of explainable AI – having clinicians work in an environment where the algorithms used by AI tools to make decisions can be understood and explained. (A topic also dealt with by Dr. Jeremy Petch (PhD), director of health innovation at Hamilton Health Sciences Centre, at the recent HIMSS conference.) The challenge with this, said Cohen, is that the explanations given for the “black box” AI function may fit the data but may not be accurate.

One of the session commentators, Maggie Keresteci, executive director of the Canadian Association for Health Services & Policy Research, brought a strong patient and caregiver focus to the discussion. In her remarks, Keresteci stressed the importance of patient and caregiver involvement in the development of AI in medicine, its implementation and its governance.

In using AI in healthcare, Keresteci said she worried that AI would ignore patient stories, “reducing us to a specific demographic diagnosis or a disease profile. Data alone is not sufficient to provide excellence in healthcare.”

Bonus Content

Just because I could, I had an AI-driven tool from Humata.ai prepare a summary of the 1-hour 25-minute lecture. Here, in slightly edited and modified form, is what it produced.

“(The session) discusses the legal, ethical and practical considerations surrounding the use of artificial intelligence (AI) in healthcare. It highlights the need for adequate regulation of the safety, quality and privacy of AI before it comes to the market. Patients may face challenges in successfully suing healthcare professionals for negligent treatment due to the black box of algorithmic decision making. Health Canada has the ability to regulate medical devices with AI, including software. But there are concerns with the present approach and the need to deal with transparency and issues of algorithmic bias. The use of AI in the medical field raises a number of ethical and legal issues, including data privacy, bias and discrimination as well as the need for separate governance of anonymized data. The goals of medical AI can be categorized into four areas:

  • Democratizing expertise
  • Automating drudgery
  • Optimizing resources
  • Expanding frontiers

The process of building AI tools involves acquiring data, building and validating the model, testing the model in real-world settings and disseminating the model. Liability is a concern as medical AI tools will inevitably make errors. Questions about informed consent, privacy, bias and explainability are also involved especially for underserved and underrepresented groups. The design of a comprehensive, coordinated health technology control system is needed to ensure the seamless and reliable use of AI in healthcare.

ICAM, I saw, I conquered: The International Congress on Academic Medicine (#ICAM2023)

They scheduled plenary sessions on ableism, humanism and the concept of One Health that links the health of humans, animals and the planet itself.

Then, they had patients sharing the podium.

Those who organized the first International Congress on Academic Medicine (ICAM), just concluded in Quebec City, intended it to be many things, but what it certainly was not was the type of medical education conference that William Osler or Abraham Flexner would have imagined.

With more than 1500 delegates attending and another 100 tuning in virtually, the conference can be viewed as a success for the Association of Faculties of Medicine of Canada and its president and CEO Dr. Geny Moineau who initiated the idea.

With most of the luminaries of medical education in attendance and dozens of closed meetings hosted by several Canadian medical education organizations and focused on the business of medical education, the conference did bear many of the hallmarks of a traditional #meded gathering.

But it was during the plenary sessions dealing with the critical issues facing medicine today that the conference really came through. And many of the themes touched on in plenary such as the prevalence of racism in medicine and medical education and the urgent need to address EDI seemed to filter through many of the other breakout sessions.

The role for having patient participation in medical conferences has long been problematic for both patients and physicians with charges of tokenism being brought on one side and clinicians lamenting the loss of a space where they can network with just their peers on the other.

At ICAM, the organizers committed wholeheartedly to the self-assessed Patients Included charter for medical conferences, which calls for patients to be meaningfully involved in all aspects of a conference from planning to presenting. At ICAM this approach resulted in a rich exchange of information and views, perhaps best encapsulated in the closing plenary session where Dr. Brian Hodges, chief medical officer at the University Health Network in Toronto and president of the Royal College of Physicians and Surgeons of Canada, engaged in conversation with patient partner and advocate Cecilia Amoakohene.

“Some of the best learnings we have had this week have come from patients,” said Dr. Moineau in her closing remarks.

For many, some of the remarks made at plenaries cannot have been comfortable as Canadian medical schools were blasted for failing to deal with many issues from ableism to the wellbeing of medical learners to adequately training learners to deal with the realities of medical practice today.

In terms of living up to the international part of its title, the conference had delegates from 44 countries and speakers from around the world dealing with global topics. Perhaps the most important issue facing Canada and other countries in healthcare today – the crisis in health human resources – had its own plenary with a strong global focus.

With a conference this diverse and ambitious it’s hard to assess whether it will do anything other than reinforce views of those who attended or who watched the plenaries virtually. But it certainly checked a lot of boxes. And the opportunity to hear CMA President Dr. Alika Lafontaine’s masterful, concise assessment of how we got to the muddle we are in today with respect to the Canadian healthcare system or Cecilia Amoakohene discussing why she feels she must dress well when going to the emergency room to try and counter racist views made it time well spent.

(Image: Patient advocate Cecilia Amoakohene in conversation with Dr. Brian Hodges)