Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's Not Your Health Record, It's A Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Friday, August 18, 2023

This Is A Useful Introduction To AI In Healthcare.

This appeared a few months ago, but is well worth the read:

We need to chat about artificial intelligence

Enrico W Coiera, Karin Verspoor and David P Hansen

Med J Aust 2023; 219 (3) || doi: 10.5694/mja2.51992
Published online: 12 June 2023

With the arrival of large language models such as ChatGPT, AI is reshaping how we work and interact

Long foretold and often dismissed, artificial intelligence (AI) is now reshaping how we work and interact as a society.1 For every claim that AI is overhyped and underperforming, only weeks or months seem to pass before a new breakthrough asks us to re-evaluate what is possible. Most recently, it is the very public arrival of large language models (LLMs) such as the generative pre-trained transformers (GPTs) in ChatGPT. In this perspective article, we explore the implications of this technology for health care and ask how ready the Australian health care system is to respond to the opportunities and risks that AI brings.

GPTs are a recent class of machine learning technology. Guided by humans who provide it with sample responses and feedback, ChatGPT was initially trained on 570 gigabytes of text, or about 385 million pages of Microsoft Word, and, at first release, the language model had 175 billion parameters.2 This massive model of the relationships between words is generative in that it produces new text, guided by the model, in response to prompts. It can answer questions, write songs, poems, essays, and software code. Other generative AIs such as DALL‐E, which is trained on images, can create startlingly good pictures, including fictitious or “deep fake” images of real people.3

Today's LLMs are story tellers, not truth tellers. They model how language is used to talk about the world, but at present they do not have models of the world itself. The sheer size of ChatGPT means that it can perform tasks it was not explicitly trained to do, such as translate between languages. ChatGPT amassed 100 million users in the first two months that it was available.4 So compelling are the linguistic skills of LLMs that some have come to believe such AI is sentient,5 despite the prevailing view that as statistical pattern generators, they cannot have consciousness or agency. Australian singer Nick Cave called ChatGPT “a grotesque mockery of what it is to be human” after seeing it generate new songs in his style.6
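To make the "statistical pattern generator" point concrete, here is a deliberately tiny sketch of the idea (mine, not the authors'): a bigram model that learns only which words tend to follow which in its training text, then generates fluent-sounding output with no notion of whether that output is true. Real LLMs are incomparably larger and more sophisticated, but the distinction between modelling how language is used and modelling the world itself is the same.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in the training
# text. Real LLMs use billions of parameters rather than bigram counts, but
# the principle holds: the model captures patterns of word use, not facts.
corpus = (
    "the patient reported chest pain . the patient was given aspirin . "
    "the doctor reviewed the record . the record showed chest pain ."
).split()

bigrams = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    """Produce fluent-looking text by sampling statistically likely next words."""
    words = [seed]
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # plausible, not necessarily true
    return " ".join(words)

print(generate("the"))
# e.g. "the patient was given aspirin . the record showed chest pain"
# Grammatical and plausible-sounding, but nothing checks it against reality.
```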

The health care uses of generative models will soon become clearer.7 Epic has agreed with Microsoft to incorporate Microsoft's GPT‐4 model into Epic's electronic health records, which are used for over 305 million patients worldwide.8 LLMs are likely to find application in digital scribes, assisting clinicians to create health records by listening to clinical conversations and summarising their clinical content.9,10 They can also power conversational agents, which change the way we search medical records and the internet, synthesising answers to our questions rather than retrieving a list of documents.11
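As a rough illustration of the digital-scribe idea (again my sketch, not anything described in the article), the core pattern is to prompt a general-purpose LLM to convert a consultation transcript into a draft clinical note. The complete() function below is a hypothetical stand-in for whatever LLM API a real product would call; here it returns a canned response so the sketch runs end to end.

```python
# A rough sketch of the "digital scribe" pattern: prompt a general-purpose
# LLM to turn a consultation transcript into a draft clinical note.
# `complete` is a hypothetical placeholder, not a real library call.

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g. a hosted GPT-style API)."""
    return (
        "Presenting complaint: two-week cough, exertional breathlessness.\n"
        "History: no fever; ex-smoker (quit five years ago).\n"
        "Plan: [UNCERTAIN - clinician to complete]"
    )

TRANSCRIPT = """\
Doctor: What brings you in today?
Patient: I've had a cough for two weeks and I get short of breath on stairs.
Doctor: Any fever? Do you smoke?
Patient: No fever. I quit smoking five years ago.
"""

PROMPT = (
    "You are a clinical documentation assistant. Summarise the consultation "
    "below into a draft note with headings Presenting complaint, History and "
    "Plan, and flag anything uncertain for clinician review.\n\n" + TRANSCRIPT
)

draft_note = complete(PROMPT)
print(draft_note)
# As the article stresses, convincing text is not the same as correct text:
# the draft is a starting point that a clinician must verify before filing.
```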

We should prepare for a deluge of articles evaluating LLMs on tasks once reserved for humans, either being surprised by how well the technology performs or showcasing obvious limits because of the lack of a deep model of the world.12 Especially when it comes to clinical applications, producing text or images that are convincing is not the same as producing material that is correct, safe, and grounded in scientific evidence. For example, conversational agents can produce incorrect or inappropriate information that could delay patients seeking care, trigger self‐harm, or recommend inappropriate management.13 Generative AI may answer patients’ questions even if not specifically designed to do so. Yet all such concerns about technology limitations are hostage to progress. It would be foolish indeed to see today's performance of AI as anything other than a marker on the way to ever more powerful AI.

The unintended consequences of AI

It is the unintended consequences of these technologies that we are truly unprepared for. It was hard to imagine in the early innocent days of social media, which brought us the Arab Spring,14 just how quickly it would be weaponised. Algorithmic manipulation has turned social media into a tool for propagating false information, enough to swing the results of elections, create a global antivaccination movement, and fashion echo chambers that increasingly polarise society and mute real discourse.

Within two months of the release of ChatGPT, scientific journals were forced to issue policies on "non‐human authors" and whether AI can be used to help write articles.15 Universities and schools have banned its use in classrooms, and educators are scrambling for new ways to assess students, including a return to pen and paper in exams.16 ChatGPT is apparently performing surprisingly well on questions found in medical exams.17

The major unintended consequences of generative models are still to be revealed.18 LLMs can produce compelling misinformation and will no doubt be used by malicious actors to further their aims. Public health strategies already must deal with online misinformation; for example, countering antivaccination messaging. Maliciously created surges of online messages during floods, heat events, and pandemics could trigger panic, swamp health services, and encourage behaviours that disrupt the mechanics of society.19

The national imperative to respond to the challenges of AI

With AI's many opportunities and risks, one would think the national gaze would be firmly fixed on it. However, Australia lags most developed nations in its engagement with AI in health care, and has done so for many years.20 The policy space is embryonic, with its focus mostly on limited safety regulation of AI embedded in clinical devices and avoidance of general-purpose technologies such as ChatGPT.

Much more here:

https://www.mja.com.au/journal/2023/219/3/we-need-chat-about-artificial-intelligence

This is a good starting point for understanding the AI-in-healthcare field, and it is well worth following up the various leads in the article. We are much closer to the beginning of all this than the end!

David.


1 comment:

Bernard Robertson-Dunn said...

re "The major unintended consequences of generative models are still to be revealed."

Can I suggest they need to be mitigated before these models get widespread use? Or, at least, we need to know the costs and risks associated with the use of generative models and the major unintended consequences.

It may well be a good idea to treat any AI as the medical profession would treat any other drug, treatment or diagnostic tool that is approved for use by healthcare professionals.

Otherwise they are just guessing.

One last thought - who pays for the testing and approval?