It rather looks like AI in Healthcare is already very much on the move!
Opinion
The AI revolution in health care is already here
Pay attention to the media coverage around artificial intelligence, and it’s easy to get the sense that technologies such as chatbots pose an “existential crisis” to everything from the economy to democracy. These threats are real, and proactive regulation is crucial. But it’s also important to highlight AI’s many positive applications, especially in health care.

Consider the Mayo Clinic, the largest integrated, nonprofit medical practice in the world, which has created more than 160 AI algorithms in cardiology, neurology, radiology and other specialties. Forty of those have already been deployed in patient care.

To better understand how AI is used in medicine, I spoke with John Halamka, a physician trained in medical informatics who is president of Mayo Clinic Platform. As he explained to me, “AI is just the simulation of human intelligence via machines.”

Halamka distinguished between predictive and generative AI. The former involves mathematical models that use patterns from the past to predict the future; the latter uses text or images to generate a sort of human-like interaction. It’s that first type that’s most valuable to medicine today.

As Halamka described, predictive AI can look at the experiences of millions of patients and their illnesses to help answer a simple question: “What can we do to ensure that you have the best journey possible with the fewest potholes along the way?”

For instance, let’s say someone is diagnosed with Type 2 diabetes. Instead of giving generic recommendations for anyone with the condition, an algorithm can predict the best care plan for that patient using their age, geography, racial and ethnic background, existing medical conditions and nutritional habits. This kind of patient-centered treatment isn’t new; physicians have long been individualizing recommendations. So in this sense, predictive AI is just one more tool to aid in clinical decision-making.

The quality of the algorithm depends on the quantity and diversity of data. I was astounded to learn that the Mayo Clinic team has signed data-partnering agreements with clinical systems across the United States and globally, including in Canada, Brazil and Israel. By the end of 2023, Halamka expects the network of organizations to encompass more than 100 million patients whose medical records, with identifying information removed, will be used to improve care for others.

Predictive AI can also augment diagnoses. For example, to detect colon cancer, standard practice is for gastroenterologists to perform a colonoscopy and manually identify and remove precancerous polyps. But some studies estimate that 1 in 4 cancerous lesions are missed during screening colonoscopies.

Predictive AI can dramatically improve detection. The software has been “trained” to identify polyps by looking at many pictures of them, and when it detects one during the colonoscopy, it alerts the physician to take a closer look. One randomized controlled trial at eight centers in the United States, Britain and Italy found that using such AI reduced the miss rate of potentially cancerous lesions by more than half, from 32.4 percent to 15.5 percent. Halamka made a provocative statement that within the next five years, it could be considered malpractice not to use AI in colorectal cancer screening.
But he was also careful to point out that “it’s not AI replacing a doctor, but AI augmenting a doctor to provide additional insight.” There is so much unmet need that technology won’t reduce the need for health-care providers; instead, he argued, “we’ll be able to see more patients and across more geographies.”

Generative AI, on the other hand, is a “completely different kind of animal,” Halamka said. Some tools, such as ChatGPT, are trained on uncurated materials found on the internet. Because the inputs themselves contain inaccurate information, the models can produce inappropriate and misleading text. Moreover, whereas the quality of predictive AI can be measured, generative AI models produce different answers to the same question each time, making validation more challenging. At the moment, there are too many concerns over quality and accuracy for generative AI to direct clinical care.

Still, it holds tremendous potential as a way to reduce administrative burden. Some clinics are already using apps that automatically transcribe a patient’s visit. Instead of creating the medical record from scratch, physicians can simply edit the transcript, saving them valuable time.

Though Halamka is clearly a proponent of AI’s use in medicine, he urges federal oversight. Just as the Food and Drug Administration vets new medications, there should be a process to independently validate algorithms and share the results publicly. Moreover, Halamka is championing efforts to prevent AI applications from perpetuating existing biases in health care.

This is a cautious and thoughtful approach. Just like any tool, AI must be studied rigorously and deployed carefully, while heeding the warning to “first, do no harm.” Nevertheless, AI holds incredible promise to make health care safer, more accessible and more equitable.

Here is the link:
https://www.washingtonpost.com/opinions/2023/07/11/ai-health-care-revolution
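To make Halamka’s “predictive” category a little more concrete, here is a minimal sketch of the idea: a model trained on patient features to score how likely a given care plan is to suit a particular patient. Everything below is invented for illustration (synthetic features, a made-up outcome rule, scikit-learn as the toolkit); it is a sketch of the technique, not anything the Mayo Clinic actually runs.

```python
# Minimal sketch of "predictive AI" on tabular patient data.
# All features, labels and numbers are synthetic -- illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Invented stand-ins for age, comorbidity count and HbA1c.
X = np.column_stack([
    rng.integers(30, 85, n),   # age (years)
    rng.integers(0, 5, n),     # number of comorbidities
    rng.normal(7.5, 1.2, n),   # HbA1c (%)
])

# Made-up rule generating the label "responded well to care plan A",
# so the example is self-contained and runnable.
logits = 4.0 - 0.04 * X[:, 0] - 0.5 * X[:, 1] - 0.8 * (X[:, 2] - 7.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model returns a probability, not a decision: the clinician stays
# in the loop, which is exactly the "augmenting a doctor" framing.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out patients: {roc_auc_score(y_test, probs):.2f}")
```

The output is a risk score that informs a clinician’s judgment rather than replacing it, which is the point Halamka keeps returning to.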
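The validation problem he raises with generative tools can also be shown in miniature: when a model samples its answer from a probability distribution, the same question can produce a different answer on every run. The vocabulary and probabilities below are entirely made up.

```python
# Toy next-word sampler illustrating why generative output is hard to
# validate: repeated runs of the same "prompt" give different answers.
# This is not any real model; the distribution is invented.
import random

# Invented probabilities for the word after
# "The recommended first-line treatment is ..."
next_word = {"metformin": 0.55, "lifestyle changes": 0.30, "insulin": 0.15}

def sample_answer(rng: random.Random) -> str:
    words, weights = zip(*next_word.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random()  # unseeded, like a production sampler
for run in range(1, 6):
    print(f"run {run}: {sample_answer(rng)}")

# A predictive model yields one checkable number per input; a sampler can
# answer differently each time, so a single spot-check proves little.
```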
If the penetration and use of AI at the Mayo Clinic is anything to judge by, it’s all over bar the declaration of victory. It really feels as though things are moving faster than is good for us all! Those asking for a short pause may not be that wrong!
Is your head spinning, or are you relaxed about the pace of progress?
David.
"The quality of the algorithm depends on the quantity and diversity of data."
A better phrasing is:
"The quality of the output of the algorithm depends on the quantity, the quality, the completeness and the diversity of data."
Which is why MyHR should never be used in predictive AI.
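That caveat is easy to demonstrate on toy data: feed the same algorithm a narrow, unrepresentative training slice and its output quality collapses on the rest of the population. The sketch below uses entirely invented data (nothing drawn from MyHR or any real records).

```python
# Toy illustration of the comment above: the same algorithm trained on a
# narrow slice of the population degrades badly on everyone else.
# All data are synthetic -- illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(6_000, 2))          # two invented patient features
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # outcome depends on their interaction

X_train, y_train = X[:4_000], y[:4_000]
X_test, y_test = X[4_000:], y[4_000:]

for name, keep in [("diverse training set", np.ones(4_000, dtype=bool)),
                   ("narrow training set ", X_train[:, 0] > 0)]:
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[keep], y_train[keep])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy {acc:.2f}")
```

The narrow model looks fine on the kind of patients it saw in training and fails on the rest, which is the completeness-and-diversity problem in miniature.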
The basis is founded on humans, so nothing to worry about, David.