This
appeared last week:
Docs Using AI? Some Love It, Most Remain Wary
Christine Lehmann, MA
August 15, 2023
When OpenAI released ChatGPT publicly last November, some doctors decided
to try out the free AI tool that learns language and writes human-like text.
Some physicians found the chatbot made mistakes and stopped using it, while
others were happy with the results and plan to use it more often.
"We've played around with it. It was very early on in AI and we noticed
it gave us incorrect information with regards to clinical guidance," said
Monalisa Tailor, MD, an internal medicine physician at Norton Health Care in
Louisville, Kentucky. "We decided not to pursue it further," she
said.
Orthopedic spine surgeon Daniel Choi, MD, who owns a small medical/surgical
practice in Long Island, New York, tested the chatbot's performance with a few
administrative tasks, including writing a job listing for an administrator and
prior authorization letters.
He was enthusiastic. "A well-polished job posting that would usually
take me 2-3 hours to write was done in 5 minutes," Choi said. "I was
blown away by the writing — it was much better than anything I could
write."
The chatbot can also automate administrative tasks in doctors' practices
from appointment scheduling and billing to clinical documentation, saving
doctors time and money, experts say.
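To make that administrative use case a little more concrete, here is a minimal sketch of drafting a prior authorization letter programmatically. It assumes the 2023-era openai Python package (the 0.x SDK) and an API key of your own; the clinical details, prompt wording, and model choice are hypothetical illustrations, not details taken from Choi's practice or the article.

# A minimal sketch, for illustration only: drafting a prior authorization
# letter with the 2023-era openai Python SDK (0.x). The patient details,
# prompt, and model choice below are assumptions, not from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # supplied from your own OpenAI account

prompt = (
    "Draft a prior authorization letter to an insurer requesting approval "
    "for a lumbar spine MRI for a patient with six weeks of radicular pain "
    "unresponsive to physical therapy and NSAIDs. Keep it under 300 words."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,  # lower temperature keeps the wording conservative
)

# The draft still needs clinician review before it goes to a payer.
print(response["choices"][0]["message"]["content"])

The same pattern would cover the job listing Choi described; only the prompt changes, and the output still needs human review before it is used.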
Most physicians are proceeding cautiously. In a March poll by the Medical Group
Management Association (MGMA), only about 10% of more than 500 medical group
leaders said their practices regularly use AI tools.
More than half of the respondents not using AI said they first want more
evidence that the technology works as intended.
"None of them work as advertised," said one respondent.
MGMA practice management consultant Dawn Plested acknowledges that many of
the physician practices she's worked with are still wary. "I have yet to
encounter a practice that is using any AI tool, even something as low-risk as
appointment scheduling," she said.
Physician groups may be concerned about the costs and logistics of integrating
ChatGPT with their electronic health record (EHR) systems, said Plested.
Doctors may also be skeptical of AI based on their experience with EHRs, she
said.
"They were promoted as a panacea to many problems; they were supposed
to automate business practice, reduce staff and clinician's work, and improve
billing/coding/documentation. Unfortunately, they have become a major source of
frustration for doctors," said Plested.
Drawing the Line at Patient Care
Patients are worried about their doctors relying on AI for their care,
according to a Pew
Research Center poll released in February. About 60% of US adults say they
would feel uncomfortable if their own healthcare professional relied on
artificial intelligence to do things like diagnose disease and recommend
treatments; about 40% say they would feel comfortable with this.
"We have not yet gone into using ChatGPT for clinical purposes and will
be very cautious with these types of applications due to concerns about
inaccuracies," Choi said.
Practice leaders reported in the MGMA poll that the most common uses of AI
were nonclinical, such as:
· Patient communications, including a call center answering service to help
triage calls, sorting and distributing incoming fax messages, and outreach such
as appointment reminders and marketing materials
· Capturing clinical documentation, often with natural language processing or
speech recognition platforms that act as virtual scribes (a minimal
transcription sketch follows this list)
· Improving billing operations and predictive analytics
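As a rough illustration of the virtual scribe item above, the sketch below transcribes a dictated visit note with OpenAI's Whisper transcription endpoint, again assuming the 2023-era openai Python SDK (0.x); the audio file name is hypothetical, and turning a raw transcript into structured documentation would be a separate step that the poll does not describe.

# A minimal sketch, for illustration only: speech-to-text for a dictated
# visit note using the Whisper API via the openai Python SDK (0.x).
# The audio file name is an assumption; note structuring is left out.
import openai

openai.api_key = "YOUR_API_KEY"

with open("visit_dictation.m4a", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

# Raw transcript only; this is not a finished clinical note.
print(transcript["text"])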
Some doctors also told The
New York Times that
ChatGPT helped them communicate with patients in a more compassionate way.
They used chatbots "to find words to break bad news and express
concerns about a patient's suffering, or to just more clearly explain medical
recommendations," the story noted.
Is Regulation Needed?
Some legal scholars and medical groups say that AI should be regulated to
protect patients and doctors from risks, including medical errors, that could
harm patients.
"It's very important to evaluate the accuracy, safety, and privacy of
language learning models (LLMs) before integrating them into the medical
system. The same should be true of any new medical tool," said Mason
Marks, MD, JD, a health law professor at the Florida State University College
of Law in Tallahassee.
In mid-June, the American Medical Association approved two resolutions
calling for greater government oversight of AI. The AMA will develop proposed
state and federal regulations and work with the federal government and other
organizations to protect patients from false or misleading AI-generated medical
advice.
Marks pointed to existing federal rules that apply to AI. "The Federal
Trade Commission already has regulation that can potentially be used to combat
unfair or deceptive trade practices associated with chatbots," he said.
In addition, "the US Food and Drug Administration can also regulate these
tools, but it needs to update how it approaches risk when it comes to AI. The
FDA has an outdated view of risk as physical harm, for instance, from
traditional medical devices. That view of risk needs to be updated and expanded
to encompass the unique harms of AI," Marks said.
There should also be more transparency about how LLM software is used in
medicine, he said. "That could be a norm implemented by the LLM developers
and it could also be enforced by federal agencies. For instance, the FDA could
require developers to be more transparent regarding training data and methods,
and the FTC could require greater transparency regarding how consumer data
might be used and opportunities to opt out of certain uses," said Marks.
What Should Doctors Do?
Marks advised doctors to be cautious when using ChatGPT and other LLMs,
especially for medical advice. "The same would apply to any new medical
tool, but we know that the current generation of LLMs are particularly prone to
making things up, which could lead to medical errors if relied on in clinical
settings," he said.
More here:
https://www.medscape.com/viewarticle/994892
I have to say this all seems like pretty sensible advice for those who want
to start getting a feel for what is possible and how well it can work.
There is no need to hurry, but being King Canute and hoping to resist is also
not wise.
In passing, I would be keen to pass on any references in the space that
others have found useful!
David.