In the deluge of recent presentations on artificial intelligence (AI) and medicine, one of the more relevant for Canadian doctors came at last month’s annual meeting of the Canadian Medical Protective Association (@CMPAMembers).
The video of that presentation has just become available and gives people an opportunity to hear from CMPA Executive Director and CEO Dr. Hartley Stern and Dr. David Naylor (@CDavidNaylor), professor of medicine and president emeritus of the University of Toronto and one of the most respected voices in Canadian health care.
Given that the CMPA provides legal advice to physicians, Dr. Stern’s comments were particularly relevant, and Dr. Naylor had the opportunity to expand on remarks he published last year in the Journal of the American Medical Association on the topic.
“Every area with images at its core for practice will become a realm where AI will have transformative effects,” Dr. Naylor said, because of the ability of a computer algorithm to quickly analyze individual pixels rather than just patterns and to characterize images in much greater detail.
“Used intelligently (and) used to augment human intelligence, AI can streamline workflow and relieve us of drudgery and give us time to be better physicians caring for people.” In addition, he said, AI can help solve complex analytic problems and capitalize on the widening availability and richness of data.
“Many of these algorithms will be at their best supporting us and not making decisions on their own,” he said.
On the negative side, Dr. Naylor said, AI has the potential to devalue judgement and dehumanize care, is dependent on the quality of information on which algorithms are based and makes decisions where causal pathways are hard to determine.
When it comes to the potential uses for AI in medicine, Dr. Naylor said, “these are early days,” noting Dr. Eric Topol’s assessment that few of the algorithms currently in use have been rigorously tested and evaluated. And, he said, nobody has yet established how to critically evaluate publications dealing with AI because they vary so much.
Dr. Naylor said physicians should take the middle ground and not be stampeded into either unquestioningly rejecting or accepting the value of AI in medicine. The integration of AI into medicine will either be smooth or disruptive, Dr. Naylor concluded, and which way it goes will depend to a large degree on physicians.
Speaking after Dr. Naylor, Dr. Stern reiterated the potential benefits and challenges of using AI and deep learning in medicine. “There is great promise that we can improve diagnostic accuracy,” he said, and physicians will be better able to develop treatment plans, while reducing costs and the overuse of medical tests.
The promise, if properly implemented, is that AI will be able to transform the healthcare system and in so doing improve well-being and quality of life for physicians, Dr. Stern said.
However, he added, the regulatory framework and legal environment for using AI lags significantly behind the development of the technology.
For AI to be properly integrated into health care, Dr. Stern said, individual physicians need to be able to trust that the technology will do what it says it is going to do. One of the roles of the CMPA, he said, will be to provide a bridge between the interest of physicians and patients in these technologies and their trust in them.
With poor communication being a main reason for complaints about physicians to regulatory authorities and the CMPA, the ability of physicians to properly explain the algorithms used by AI to their patients will be of critical importance. “You are going to have to learn how to tell that patient what this AI is going to do for you,” he said.
Dr. Stern went on to elaborate a number of key challenges to integrating AI, including considerations of patient privacy while gathering the immense amounts of data required to develop reliable algorithms.
“In our environment, health care professionals are accountable for clinical diagnosis and treatment plans. It’s you, not the machine.” He noted the Canadian regulatory and legal framework is not yet established for determining accountability when a wrong diagnosis is made based on an incorrect algorithm.
Dr. Stern urged the audience to get involved with medical associations and regulatory authorities in developing frameworks to provide adequate protections for physicians. “Without a clear policy, you are at risk.”
The session was accompanied by the release of a CMPA background document: “Can I get an (artificial) second opinion?” That document notes: “While AI can provide information for you to consider, it is important to ensure that actual medical care provided to the patient reflects your own recommendations based on objective evidence and sound medical judgment.”
(Image: Dr. David Naylor from CMPA video)