Quote Of The Year

Timeless Quotes - The Sadly Late Paul Shetler - "It's Not Your Health Record, It's A Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Wednesday, October 16, 2019

Now Here Is An Issue It Will Take A While To Get Our Collective Heads Around!

This was published last week.

Potential Liability for Physicians Using Artificial Intelligence

JAMA. Published online October 4, 2019. doi:10.1001/jama.2019.15064
Artificial intelligence (AI) is quickly making inroads into medical practice, especially in forms that rely on machine learning, with a mix of hope and hype.1 Multiple AI-based products have now been approved or cleared by the US Food and Drug Administration (FDA), and health systems and hospitals are increasingly deploying AI-based systems.2 For example, medical AI can support clinical decisions, such as recommending drugs or dosages or interpreting radiological images.2 One key difference from most traditional clinical decision support software is that some medical AI may communicate results or recommendations to the care team without being able to communicate the underlying reasons for those results.3
Medical AI may be trained in inappropriate environments, using imperfect techniques, or on incomplete data. Even when algorithms are trained as well as possible, they may, for example, miss a tumor in a radiological image or suggest the incorrect dose for a drug or an inappropriate drug. Sometimes, patients will be injured as a result. In this Viewpoint, we discuss when a physician could likely be held liable under current law when using medical AI.

Medical AI and Liability for Physicians
In general, to avoid medical malpractice liability, physicians must provide care at the level of a competent physician within the same specialty, taking into account available resources.4 The situation becomes more complicated when an AI algorithmic recommendation becomes involved. In part because AI is so new to clinical practice, there is essentially no case law on liability involving medical AI. Nonetheless, it is possible to understand how current law may be likely to treat these situations from more general tort law principles.
The Figure presents potential outcomes for a simple interaction—for instance, when an AI recommends the drug and dosage for a patient with ovarian cancer. Assume the standard of care for this patient would be to administer the chemotherapeutic bevacizumab at 15 mg/kg every 3 weeks.
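As an aside of my own (this sketch is not from the article, and the patient weight is purely an assumption for illustration), the dose arithmetic involved here is trivial for software to carry out and, just as importantly, to show its working for:

    # Hypothetical worked example of the weight-based dosing in the article's scenario.
    weight_kg = 70.0        # assumed patient weight, for illustration only
    dose_mg_per_kg = 15.0   # bevacizumab dose from the article's scenario
    interval_weeks = 3      # dosing interval from the article's scenario

    total_dose_mg = weight_kg * dose_mg_per_kg
    print(f"Dose: {total_dose_mg:.0f} mg every {interval_weeks} weeks")
    # -> Dose: 1050 mg every 3 weeks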
There is vastly more in the full article.
The last two paragraphs of the article have a clear warning:
“Although current law around physician liability and medical AI is complex, the problem becomes far more complex with the recognition that physician liability is only one piece of a larger ecosystem of liability. Hospital systems that purchase and implement medical AI, makers of medical AI, and potentially even payers could all face liability.5 The scenarios outlined in the Figure and the fundamental questions highlighted here recur and interact for each of these forms of liability. Moreover, the law may change; in addition to AI becoming the standard of care, which may happen through ordinary legal evolution, legislatures could impose very different rules, such as a no-fault system like the one that currently compensates individuals who have vaccine injuries.
As AI enters medical practice, physicians need to know how law will assign liability for injuries that arise from interaction between algorithms and practitioners. These issues are likely to arise sooner rather than later.”
To me, the most important thing would be to take great care to fully understand the basis of any recommendation an AI makes and, better still, to have the AI fully document the basis and reasoning behind that recommendation.
With this in hand it would be possible to realistically assess what was recommended and why, and then decide how much of the advice to follow, while noting the reasons for any variance from it.
I suspect some variation of the above is where we will end up.
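To make that concrete, here is a minimal sketch of what such a record might look like. This is entirely hypothetical – the field names and structure are my own invention, not drawn from any real product or standard:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AIRecommendation:
        """Hypothetical record of an AI recommendation and its documented reasoning."""
        drug: str
        dose: str
        rationale: List[str]    # the basis and reasoning the AI documents
        evidence_refs: List[str] = field(default_factory=list)

    @dataclass
    class ClinicianDecision:
        """What the clinician actually did, and why any variance occurred."""
        recommendation: AIRecommendation
        accepted: bool
        action_taken: str
        variance_reason: str = ""   # recorded whenever the advice is not followed in full

    # Illustrative use, loosely mirroring the article's ovarian cancer scenario:
    rec = AIRecommendation(
        drug="bevacizumab",
        dose="15 mg/kg every 3 weeks",
        rationale=["matches guideline dosing for this indication",
                   "no interacting medications found in the current list"],
    )
    decision = ClinicianDecision(
        recommendation=rec,
        accepted=False,
        action_taken="dose reduced pending renal function review",
        variance_reason="recent creatinine rise not visible to the algorithm",
    )

The point is simply that both the machine's stated reasons and the clinician's reasons for departing from them end up on the record together.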
What do you think? This technology is already here – drug interaction detection, for example – and it is only going to get more complex and opaque.
David.
