This appeared last week.
Artificial intelligence won’t replace a doctor any time soon, but it can help with diagnosis
September 19, 2017 2.16pm AEST
Author Luke Oakden-Rayner
Radiologist and PhD candidate, University of Adelaide
In the next few years, you will probably have your first interaction with a medical artificial intelligence (AI) system. The same technology that powers self-driving cars, voice assistants in the home, and self-tagging photo galleries is making rapid progress in the field of health care, and the first medical AI systems are already rolling out to clinics.
Thinking now about the interactions we will have with medical AI, the benefits of the technology, and the challenges we might face will prepare you well for your first experience with a non-human health care worker.
How AI can diagnose illness
The technology behind these advances is a branch of computer science called deep learning, an elegant process that learns from examples to understand complex forms of data. Unlike previous generations of AI, these systems are able to perceive the world much like humans do, through sight and sound and the written word.
While most people take these skills for granted, they actually play a major role in human expertise in topics like medicine. Since deep learning grants computers these abilities, many medical tasks are now being solved by artificial intelligence.
In the last 12 months, researchers have revealed computer systems that can diagnose diabetic eye disease, skin cancer, and arrhythmias at least as well as human doctors. These examples illustrate three ways patients will interact with medical AI in the future.
The first of these three ways is the most traditional, and will occur where specialised equipment is needed to make a diagnosis. You will make an appointment for a test, go to the clinic and receive a report. While the report will be written by a computer, the patient experience will be unchanged.
Google’s diabetic eye disease AI is an example of this approach. It was trained to recognise the leaky, fragile blood vessels that occur at the back of the eye in poorly controlled diabetes, and the AI is now working with real patients in several Indian hospitals.
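For readers curious what "trained to recognise" means in practice, here is a minimal sketch of the kind of supervised image classifier that underlies systems like this. It is emphatically not Google's actual model; the folder layout, class names and training settings are illustrative assumptions.

```python
# A toy sketch of transfer learning for retinal image classification.
# The dataset path "retina_images" and its two-class layout
# (healthy/ vs retinopathy/) are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Resize and normalise photographs the way ImageNet-pretrained models expect.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_data = datasets.ImageFolder("retina_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos, then retrain only
# the final layer to separate healthy from diseased retinas.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labelled examples
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

Real diagnostic systems are trained on vastly larger, carefully graded datasets and validated clinically, but the basic learn-from-labelled-examples loop is the same.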
The second way of interacting with medical AI will be the most radical, because many diagnostic tasks don’t need any special equipment at all. The Stanford team that created a skin cancer detector as accurate as dermatologists is already working on a smartphone app.
Before long, people will be able to take their own skin lesion selfies and have their blemishes analysed on the spot. This AI is leading the race to become the first app that can reliably assess your health without a human doctor involved.
The third method of interaction is somewhere in between. While detecting heart rhythms requires an electrocardiogram (ECG), these sensors can be incorporated into cheap wearable technology and connected to a smartphone. A patient could wear a monitor daily, record every heartbeat, and only occasionally see their doctor to review their results. If something serious occurs and the rhythm changes suddenly, the patient and their doctor could be immediately notified.
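As a rough illustration of that alerting step, the sketch below flags a sudden departure of the beat-to-beat (R-R) interval from its recent baseline. The window size, tolerance threshold and notify() hook are illustrative assumptions, not a clinical arrhythmia detector.

```python
# Toy rhythm-change alerting for a wearable ECG feed. Thresholds are
# made up for illustration; a real device would use validated logic.
from collections import deque

WINDOW = 60        # number of recent beats that define "normal"
TOLERANCE = 0.30   # fractional change treated as a sudden shift

recent = deque(maxlen=WINDOW)

def notify(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for a push notification

def on_heartbeat(rr_interval_ms: float) -> None:
    """Called once per detected beat with the R-R interval in milliseconds."""
    if len(recent) == WINDOW:
        baseline = sum(recent) / len(recent)
        if abs(rr_interval_ms - baseline) / baseline > TOLERANCE:
            notify(f"rhythm change: {rr_interval_ms:.0f} ms "
                   f"vs baseline {baseline:.0f} ms")
    recent.append(rr_interval_ms)
```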
Many groups are now working to bring medical wearables into the clinic.
What are the benefits?
These systems are incredibly cheap to run, costing a fraction of a cent per diagnosis. They have no waiting lists. They never get tired or sick or need to sleep. They can be accessed anywhere with an internet connection.
Medical artificial intelligence could lead to accessible, affordable health care for everyone.
What are the downsides?
The largest concern is probably unrealistic expectations, created by the hype surrounding the technology. The enormous volume of carefully and expensively curated data required to train a system that can do everything a doctor can is currently far beyond our reach. Instead, we will see narrow systems performing individual tasks for the foreseeable future. To counter these inflated expectations, we need to promote informed voices in these discussions.
The privacy of our medical data will also be a challenge. Not only will many of these systems run in the cloud, but some forms of useful medical data are inherently identifiable. You can’t blur out a patient’s face, for instance, if the system analyses the face for signs of disease. High profile data breaches will be inevitable and will harm confidence in the technology.
The other major issue is the problem of accountability. Who is responsible for a medical error if no doctor is involved in the diagnosis, and we can’t even tell why the system got it wrong? Who do we blame when a doctor accepts the wrong recommendation of an AI? Patient advocates, doctors, governments, and insurance companies are wrestling with this issue, but we don’t have good answers yet.
Medical AI is coming, and you will experience it soon. Most of it will be invisible, working behind the scenes to make your care cheaper and more effective. Some of it will be in your hands, assessing your health at the touch of a button. The best thing to do right now is think about the various challenges, and be prepared for your first appointment.
The full article is here:
This is clearly one man’s view of medical AI, but I would suggest it might be a little limited.
IBM’s Watson – while controversial in some quarters – is having a direct impact on the personalisation of cancer therapy, among other things.
The following explores some additional areas:
HIT Think: 5 components that artificial intelligence must have to succeed
Published September 20 2017, 3:25pm EDT
It seems like it was only a few years ago that the term “big data” went from a promising area of research and interest to something so ubiquitous that it lost all meaning, descending ultimately into the butt of jokes.
Thankfully, the noise associated with “big data” is abating as sophistication and common sense take hold. In fact, in many circles, the term actually exposes the user as someone who doesn’t really understand the space. Unfortunately, the same malady has now afflicted artificial intelligence (AI). Everyone I meet is doing an “AI play.”
AI is, unfortunately, the new “big data.” While that is not a good thing, it is not all bad either. After all, the data ecosystem benefited from all of the “big data” attention and investment, creating some amazing software and producing some exceptional productivity gains.
The same will happen with AI: with the increased attention come investment dollars, which in turn will drive adoption, enhancing the ecosystem.
But the question should be raised—what qualifies as AI?
First, the current definition of AI is focused on narrow, application-specific AI, not on the more general problem of artificial general intelligence (AGI), in which software would simulate the full intelligence of a person.
Second, the vast, vast majority of the data that exists in the world is unlabeled. It is not practical to label that data manually, and doing so would likely create bias anyway. Unlabeled data presents a different challenge, but the key point here is that it is everywhere and represents the key to extracting business value, or any value.
Third, we are not producing data scientists at a rate that can keep pace with the growth of data. Even with its moniker as the “sexiest job of the 21st century,” the profession isn’t growing at anywhere near the rate we are seeing in data.
Fourth, data scientists are, for the most part, not UX designers or product managers or, in many cases, even engineers. As a result, the subject matter experts, those who sit in the business, don’t have effective interfaces to the data science outputs. The interfaces they do have (PowerPoint, Excel or PDF reports) have limited utility in transforming the behavior of a company. What is required is something that actually shapes behavior: applications.
So what does qualify as intelligence? Here is a take on what AI should display, framed as a set of elements. While some of these elements may seem self-evident when taken individually, intelligence has a broader context: all the elements must work in conjunction with each other to qualify as AI.
Discover
Discovery is the ability of an intelligent system to learn from data without upfront human intervention. Often, this needs to be done without being presented with an explicit target. It relies on unsupervised and semi-supervised machine learning techniques (such as segmentation, dimensionality reduction, anomaly detection and more), as well as supervised techniques where there are one or more outcomes of interest.
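To make that concrete, here is a minimal sketch of discovery on unlabelled data using scikit-learn: dimensionality reduction, segmentation and anomaly detection, with no target variable specified up front. The random input matrix is a stand-in for a real table of records.

```python
# Discovery without a target: compress, segment, and flag outliers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # stand-in for real unlabelled data

# Dimensionality reduction: compress 20 features into 2 summary axes.
embedding = PCA(n_components=2).fit_transform(X)

# Segmentation: find natural groupings no one asked about in advance.
segments = KMeans(n_clusters=4, n_init=10).fit_predict(embedding)

# Anomaly detection: flag records that fit none of the patterns (-1 = anomaly).
outliers = IsolationForest(random_state=0).fit_predict(X)

print("segment sizes:", np.bincount(segments))
print("anomalies flagged:", int((outliers == -1).sum()))
```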
Usually, in enterprise software, the term discovery refers to the ability of ETL/MDM solutions to discover the various schemas of tables in large databases and automatically find join keys, etc. This is not what we mean by discovery. We use the term very differently, and the difference has important implications.
In complex datasets, it is nearly impossible to ask the “right” questions. To discover what value lies within the data, one must understand all the relationships that are inherent and important in the data. That requires a principled approach to hypothesis generation.
One technique, topological data analysis (TDA), is exceptional at surfacing hidden relationships in data and identifying those that are meaningful, without having to ask specific questions of the data. The result is an output that can represent complex phenomena, and can therefore surface weaker signals as well as stronger ones.
This permits the detection of emergent phenomena. As a result, enterprises can now discover answers to questions they didn’t even know to ask and do so with data that is unlabeled.
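For the curious, a minimal sketch of the Mapper construction from TDA follows, using the open-source kepler-mapper library (assuming it is installed); the random data and parameter choices are illustrative only.

```python
# A toy Mapper pipeline: project the data through a "lens", cover the
# projection with overlapping bins, cluster within each bin, and link
# clusters that share points. The resulting graph summarises the shape
# of the data without a specific question being asked of it up front.
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

X = np.random.rand(300, 8)  # stand-in for real unlabelled data

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=PCA(n_components=2))

graph = mapper.map(
    lens, X,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=0.5, min_samples=3),
)

print(f"{len(graph['nodes'])} nodes in the Mapper graph")
```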
Lots more here:
My feeling is that AI is going to make all sorts of impacts, some we have yet to recognise or even imagine. Keeping a watching brief on the serious trials and clinical outcomes is vital!
David.