Friday, May 20, 2016

A Really Interesting Discussion Regarding AI In Medicine. I Wonder Where It Will Really Go?

This appeared last week:

News in brief

Monday, 9 May, 2016
The role of the doctor as an expensive problem-solver may become redundant in the future, according to health experts commenting in the New Zealand Medical Journal. The authors believe that over the coming years, artificial intelligence (AI) will diagnose most health problems and even decide what treatment the patient should have. The health experts say that humans would continue to be an important part of health care delivery, but in many situations they would only be trained to fill the gaps where artificial intelligence is less capable. “Human doctors make errors simply because they are human, with an estimated 400 000 deaths associated with preventable harm in the US per year,” the authors wrote. “Furthermore, the relentless growth of first world health care demands in an economically-constrained environment necessitates a new solution. Therefore, for a safe, sustainable health care system, we need to look beyond human potential towards innovative solutions such as AI. Initially, this will involve using task-specific AI as adjuncts to improve human performance, with the role of the doctor remaining largely unchanged. However, in the longer term, AI should consistently outperform doctors in most cognitive tasks. Humans will still be an important part of health care delivery, but in many situations less expensive, fit-for-purpose clinicians will assume this role, leaving the majority of doctors without employment in the role that they were trained to undertake.”
The full news brief is here:
Here is the abstract:
6th May 2016, Volume 129 Number 1434
William Diprose, Nicholas Buist


Artificial intelligence (AI) is a rapidly growing field with a wide range of applications. Driven by economic constraints and the potential to reduce human error, we believe that over the coming years AI will perform a significant amount of the diagnostic and treatment decision-making traditionally performed by the doctor. Humans would continue to be an important part of healthcare delivery, but in many situations, less expensive fit-for-purpose healthcare workers could be trained to ‘fill the gaps’ where AI are less capable. As a result, the role of the doctor as an expensive problem-solver would become redundant.
Full text etc. is here for subscribers.
To me, the first area of success is likely to be systems like Watson and Isabel providing interactive decision support. Beyond that, many barriers remain, with development needed in all sorts of supporting domains (natural language processing, interface design and so on).
What do you think?


Andrew McIntyre said...

My opinion is biased, but we need a focus on helping doctors do their job, not on turning them into technical implementers of pathways and protocols who no longer think about the patient and their problem or work out the best pathway themselves. Decision support should be just that - "support" - and not an attempt to replace the clinical role. Rapid, reliable transmission of patient data between providers is essential, and that data should be high quality, standards based and as atomic as practical. Something like Watson would be fabulous for helping you "search" the literature for similar patients and situations, and for suggesting rare conditions and little-known treatments.

Searching the web and using YouTube etc. is fabulous for finding information, but to make sense of it you need a medical degree or a lot of time to do your own learning, or you could wind up in very strange places. It is possible for patients on a mission to become experts in a narrow area and actually make a difference, but that is not the average patient.

I don't think the current government focus is primarily on providing doctors with easy access to high quality information and improving the quality of information that would make decision support possible. It appears to be directed at command and control, and lacks any clear path to actually improving care. The PCEHR is not reliable or complete enough to add much value. Being handed a bunch of documents about a patient that you cannot rely on, or drill down into for the atomic data (because it's not there), may actually make the job and the outcome worse. How often have "good ideas" turned out to be harmful when trialled in the real world, even though they seemed sound? Vioxx, for example.

I personally would love to have access to Watson loaded with all the medical literature. Currently I use UpToDate in that role, but that is a summary and excludes rare and anecdotal cases, which can sometimes at least give you some guidance in difficult situations. I already have the ability to rapidly transfer data between providers, but the quality and atomicity of that data is not good enough to support major decision support. There has been very little focus on good data, and that's the foundation stone of getting benefit from eHealth.

Bernard Robertson-Dunn said...

Andrew said "I don't think the current government focus is primarily on providing doctors with easy access to high quality information and improving the quality of information that would make decision support possible."

Agree totally. The fact that they have made the health record personally controlled and allow patients to hide data means it is useless and untrustworthy in helping health care professionals make decisions.

Systems like Watson are useful when it comes to population health but not at the individual health care decision level.

And when it comes to AI replacing doctors, who is responsible for any decisions the system might make? Who can you sue if there is an error, omission or sub-optimal decision?

Andrew's correct when he says it's all about support. IMHO, the myHR supports very few health professionals, but it does waste a lot of health carers' time pumping dubious data into a big, rarely used bucket.

And it still only reports uploads and registrations. No mention of anyone actually using the system, of better health care, or of money saved.

Bernard Robertson-Dunn said...

In a keynote speech, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, noted that we are still in the Dark Ages of machine learning, with AI systems that generally only work well on well-structured problems like board games and highway driving.

Nevertheless, Etzioni considers it far too early to talk about regulating AI: “Deep learning is still 99 percent human work and human ingenuity. ‘My robot did it’ is not an excuse. We have to take responsibility for what our robots, AI, and algorithms do.”

What to Do When a Robot Is the Guilty Party