
Inevitably, some innovative Canadian physicians are already using ChatGPT to help produce written documents for use in their practices.
However, when it comes to how and when to use such artificial intelligence (AI) tools or language models to help write letters to third parties or prepare clinical summaries for patients, Canadian physicians currently have little in the way of specific official guidance.
The use of AI in clinical situations to assist in making diagnoses and preparing treatment options has been creeping up on the medical profession for the past decade. In contrast, ChatGPT appeared in a blaze of light on Nov. 30, 2022, and has supercharged debate about just how it and similar AI language tools can or should be used. In Canada, where the Canadian Federation of Independent Business has just released a report identifying unnecessary paperwork as a major burden on physicians, the potential of ChatGPT and similar tools seems obvious.
Developed by the US company OpenAI, ChatGPT can instantly tailor and write documents across the spectrum of written language and mirror human conversation. To date, it has been available free of charge and recently topped 100 million active users.
It is important to note upfront that ChatGPT has severely limited applications in clinical medicine: it draws only on information available up to the end of 2021, and it cannot reliably produce references and has been known to fabricate them. However, there are indications that OpenAI and other organizations such as Google are moving swiftly to overcome these barriers, and it is also worth noting that ChatGPT has already shown itself able to pass the US Medical Licensing Exam.
Reflecting the frenzied interest in this new technology, articles such as this one examining the potential impact of ChatGPT and similar AI tools in medicine are appearing at an accelerating rate.
In an article published Feb. 2, the US publication Medical Economics quoted Dr. Ali Parsa, founder and CEO of Babylon, a global AI and digital health platform, as saying that conversational AI tools such as ChatGPT “can be trained to draft letters seeking prior authorizations, appeals of insurance denials, and other claims.” He also identified improving patient education by simplifying medical notes as another potential use of these tools.
Writing in Stat, Rushabh H. Doshi and Simar S. Bajaj, students at the Yale School of Medicine and Harvard University, respectively, gave a brief overview of the promises and pitfalls of using ChatGPT in medicine and identified administrative work as one of the potential areas of benefit. “ChatGPT could be used to help health care workers save time with nonclinical tasks, which contribute to burnout and take away time from interacting with patients,” they wrote. However, they added that ChatGPT gave several wrong answers when asked to supply US billing codes.
Dr. Bertalan Meskó (PhD), a prominent Hungarian futurist, released a YouTube video in early February on potential uses of ChatGPT in which he predicted such tools could help relieve the shortage of physicians. With a global shortage of 5 million doctors, he said, “the risk of missing care due to capacity shortages … will soon outweigh the risk of (medical chatbot) algorithms being wrong.”
Dr. Meskó focused on the ability of ChatGPT to be “trained on a dataset of medical records to assist doctors and nurses with creating accurate and detailed clinical notes… it could also potentially take a bigger bite and help facilities with summarizing medical records or analyzing research papers.”
Noting the current unreliability of some information generated by ChatGPT, he said, “If you as a doctor sent a letter generated by ChatGPT to an insurance company, and the diagnostic test gets rejected because it doesn’t cite the proper literature, it’s pretty problematic. On the other hand, the letter itself looks fine. If you’re not too lazy to actually oversee it and include real references, it can still save you some time. This way is not much different from using templates.”
A canvass of Canadian national medical associations found that specific policies on ChatGPT and other AI language tools have yet to be developed.
The Canadian Medical Protective Association (CMPA) produced the most detailed statement, noting in part that “CMPA is aware of the existence of ChatGPT as an emerging AI communication tool. We are closely monitoring its emergence, as well as other AI tools, which may impact doctors’ medical practices.” The statement recommended a structured approach by physicians to using AI technologies, based on three considerations:
- Critically reviewing and assessing whether the AI tool is suited for the intended use and nature of the doctor’s medical practice.
- Being mindful of a physician’s legal and medical professional obligations, including privacy and confidentiality obligations.
- Being aware of bias and seeking to mitigate it when possible by pursuing alternate sources of information and consulting colleagues.
The CMPA statement concluded that “in today’s environment, and for the foreseeable future, AI is not intended to replace a doctor’s clinical experience and appropriate assessment of a patient’s condition. The healthcare provider remains accountable for the information and care provided to the patient.” The statement complements guidance the association published in 2019 on the use of AI in clinical decision-making.
Dr. Alika Lafontaine, president of the Canadian Medical Association (CMA), which has identified reducing the administrative burden on physicians as a top priority, released a statement in response to a request about the association’s stance on ChatGPT. Dr. Lafontaine acknowledged that while the association does not have an official position on the use of ChatGPT to support physicians in their daily administrative tasks, it “recognizes the role that technology has always had as a disruptive force in healthcare.”
“ChatGPT and similar AI tools may eventually transform the practice of medicine, but those tools must be properly matched to problems. Addressing the administrative burden on physicians will not only require application of new technology, but redistributing the workload amongst health-care team members and investigating whether current administrative demands are necessary or useful.”
Dr. Eric Cadesky, a BC family physician and keen observer of medical technology advances, says ChatGPT is “potentially revolutionary” in easing the burden of administrative tasks and democratizing medical education. But he adds that “just as CT is not a replacement for a thorough history and physical examination, AI is still (just) complementary to what we do” and can suffer from the same biases seen in society at large.
A spokesperson for the College of Physicians and Surgeons of Ontario said the College does not currently have a specific policy covering the use of ChatGPT and other AI tools but noted that a number of principles involved in providing care would apply. “Physicians have a responsibility to ensure that their work is complete and accurate. So just like if they’re using a scribe to do their records or if they’re delegating tasks to someone else, (physicians) are responsible.”
Both the College of Physicians and Surgeons of BC and the Collège des médecins du Québec noted they currently have no policy or standards in this area.
As for medical publishing, ChatGPT has already had an immediate impact. Once again, physicians and other researchers have been experimenting with the tool, this time to prepare and in some instances publish papers in medical journals. Not surprisingly, this has prompted a lively debate about the feasibility and acceptability of this approach, as well as how to acknowledge use of the tool.
For example, Radiology, the official journal of the Radiological Society of North America, has just published an article and editorial dealing with ChatGPT and medical writing. The article’s author, Dr. Som Biswas, a pediatric radiology fellow at Le Bonheur Children’s Hospital, University of Tennessee Health Science Center, Memphis, said the article was written by ChatGPT and edited by him. In the accompanying editorial, Dr. Felipe Kitamura, head of applied innovation and AI at Dasa and affiliated professor of neuroradiology at the Universidade Federal de São Paulo, wrote that “there is a hypothetical future in which this article will be one of the last to be written without the help of AI.”
The World Association of Medical Editors (WAME) has been quick to address the issue, publishing its recommendations on the use of ChatGPT and chatbots in relation to scholarly publications on Jan. 21.
“While ChatGPT may prove to be a useful tool for researchers, it represents a threat for scholarly journals because ChatGPT-generated articles may introduce false or plagiarized content into the published literature,” the document states. “Peer review may not detect ChatGPT-generated content.” The paper goes on to note that what ChatGPT does can “go against the very philosophy of science.”
The first and most important recommendation made by WAME is that Chatbots such as ChatGPT cannot be authors on scientific papers “as they cannot understand the role of authors or take responsibility for the paper.” The second recommendation from WAME states “Authors should be transparent when chatbots are used and provide information about how they were used.”
Canada’s most important general-interest medical publication, CMAJ, is a member of WAME and has said it will be guided by the association’s recommendations in dealing with ChatGPT involvement in submissions to the journal.
Meanwhile, academic institutions, including medical schools, are wrestling with the fact that students can now use ChatGPT to produce credible written assignments that cannot yet be reliably detected by software tools. While some feel they are bowing to the inevitable and are adapting the learning process accordingly, others, such as the Paris Institute of Political Studies (Sciences Po), have announced a total ban on students using ChatGPT or other software for academic assignments “without transparent referencing.”
Currently, experimentation with the potential uses of ChatGPT in medical practice and academia is running far ahead of the policies and standards needed to guide such usage. As more doctors actually make use of such tools, the situation is likely to change.