Sunday, March 31, 2024

There Seems To Be Increasing Confidence AI Will Make A Positive Difference In Healthcare!

This appeared last week
AIs will make health care safer and better

It may even get cheaper, too, says Natasha Loder

Mar 27th 2024

When people set goals which are sky-high to the point of silliness, the sensible scoff. They are normally right to do so. Sometimes, though, it is worth entertaining the possibility that even the most startling aspiration might be achievable.

In 2015 Priscilla Chan, a paediatrician, and her husband Mark Zuckerberg, a founder of Facebook, set up the Chan Zuckerberg Initiative (CZI) with the aim of helping science bring about a world in which all disease could be prevented, cured or managed. Unsurprisingly there was a tech-centric feeling to the undertaking. But it was not until 2020 that the CZI’s annual updates started to talk about the potential of artificial intelligence (AI). Four years later it is hard to imagine anyone pursuing their goals not putting it front and centre.

The proportion of biomedical research papers which invoke artificial intelligence was climbing exponentially well before the field started dazzling the world with “foundation models” like OpenAI’s various GPTs (generative pre-trained transformers), Meta’s Llama and Gemini from Google (see chart). Given the vast amounts of data that biomedical research produces, AI’s early application there is hardly a surprise. That past progress and promise, though, is a mere prelude to what is now under way.

Artificial-intelligence systems of similar power to the foundation models and large language models that generate cogent text in all manner of styles, answer complex questions quite convincingly and helpfully, and create images that capture the ideas expressed in verbal prompts are becoming a part of health care. They have applications for almost every part of it. They can improve the choices researchers make about how exactly to edit genes; they are phenomenally good at making sense of big data from disparate sources; they can suggest new targets for drug development and help invent molecules large and small that might work as drugs against them. The CZI itself is now working on building an AI-powered “virtual cell” with which it hopes to revolutionise all manner of biomedical research.

The effects are not restricted to the lab. Various sorts of diagnosis in which AI is playing a role look ready to be transformed. Robot surgeons are taking on an expanding range of operations. The way that patients access health information and motivate themselves to follow treatment regimes looks ripe for reimagining as chatbots and wearable health monitors learn to work together. The productivity of health-care systems seems likely to be improved significantly.

Poorer countries may have the most to gain. An earlier generation of AI is already making itself felt in health care there. One advantage is that it can make quite modest equipment much more capable, allowing it to be used more widely and beyond the clinic. Smart stethoscopes can help users pick out salient details; phones can be turned into “tricorders” that measure heart rate, temperature, respiration and blood oxygen saturation all at once. Delivering reliable guidance to health-care workers all over the world in their native languages offers an advance that is both straightforward and game-changing.

If such tools can become widespread, and if health-care systems are reshaped to get the most out of them, they should make it possible to deliver much better care. That represents an opportunity to improve the lives of hundreds of millions, even billions.

Some see not just a humanitarian breakthrough, but an epistemological one: a whole new sort of knowledge. Artificial intelligence can find associations and connections in bodies of disparate data too vast and knotted for humans to unpick without needing pre-existing models of what sorts of cause have what sorts of effect. Demis Hassabis, one of the founders of DeepMind, an AI powerhouse that is now part of Google, thinks that ability will change the way humans understand life itself.


There are caveats. The foundation models that power “generative” applications like ChatGPT have some serious drawbacks. Whether you call it hallucinating, as researchers used to, or confabulating, as they now prefer to, they make stuff up. As with most AI, if you train them on poor or patchy data the results will not be all they should be. If the data are biased, as health data frequently are (good data on minorities, low-income groups and marginalised populations is often harder to come by) the results will not serve the population as a whole as well as they should and may do harm in the underrepresented groups. The models’ “non-deterministic” nature (they will not always respond in the same way to the same stimulus) raises philosophical and practical problems for those who regulate medical devices. Blood-pressure cuffs and thermometers reflect reality more straightforwardly.

None of this is stopping the market for products and services in health-care AI from growing apace. Big AI companies have been keen on buying health-care specialists; health-care companies are buying AI. Research and Markets, a firm of analysts, estimates that in 2023 the health-care world spent about $13bn on AI-related hardware (such as specialised processing chips and devices that include them) and software providing diagnostics, image analysis, remote monitoring of patients and more. It sees that number reaching $47bn by 2028. Analysts at CB Insights reckon investors transferred a whopping $31.5bn in equity funding into health-care-related AI between 2019 and 2022. Of the 1,500 vendors in health AI, over half were founded in the past seven years.

The digitisation of health care has seen its fair share of expensive disappointments. But there is a real possibility that AI will live up to some of the hope being placed in it. Simpler and more forgiving interfaces should make AI-based systems for handling data and helping with time-management more congenial to doctors, patients and health-care providers than those of yore. And health-care systems sorely need a productivity boost if they are to adapt and improve in a world of high costs and older populations. The shortage of health-care workers is predicted to reach nearly 10m by 2030—about 15% of today’s entire global health workforce. Artificial intelligence will not solve that problem on its own. But it may help.

More here:

https://www.economist.com/technology-quarterly/2024/03/27/ais-will-make-health-care-safer-and-better

A typically worthwhile overview from the Economist!

It seems fair to suggest that there is a lot of progress happening and that we are coming to see healthcare delivery incrementally improved and re-moulded over time. There seems to be increasingly little that is not possible, and the transformations over the next decade or two will be even more amazing than what we have seen to date.

These days it is getting much harder to distinguish our smartphones from a "tricorder"!

I have to say I just watch in awe as the previously impossible becomes the norm. We do indeed live in “interesting times”!

David.

AusHealthIT Poll Number 740 – Results – 31 March, 2024.

Here are the results of the recent poll.

Is Australia Doing Enough To Manage The Rising Epidemic Of Obesity?

Yes 9 (23%)

No 30 (77%)

I Have No Idea 0 (0%)

Total No. Of Votes: 39

People seem to think that all of us are falling a bit short of useful ideas on how to best manage our rising obesity epidemic!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

0 of 39 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 29, 2024

I Think This Is An Expert We Should Take A Little Notice Of

This appeared last week:

20 March 2024

The Emperor’s new clothes and digital mental health policy

By Associate Professor Louise Stone

Digital mental health services are not the “solutions” they claim to be. They lack data, GPs find them useful in only a few patients, and most patients don’t want them.


Hans Christian Andersen’s The Emperor’s New Clothes is a fairytale about two conmen who pretend to be weavers. They convince the Emperor they have a magical fabric with which they can make him the finest suit in all the land. Only intelligent and brave people can see the fabric, they say, and anyone who can’t see it is stupid and incompetent.

The Emperor is vain, and this is the crux of the con. He loves fine clothes, and he sees himself as intelligent. Like everyone else, he pretends to see the cloth the conmen pretend to weave, as he doesn’t want to be known as stupid and incompetent. Eventually, he wears the imaginary “clothing” the conmen create in a parade but the spectators fear looking stupid or incompetent, so they all comment on how magnificent he looks. 

Finally, a child yells out “the Emperor isn’t wearing any clothes” giving everyone permission to admit the Emperor is naked. The Emperor, of course, is forced to confront his own stupidity and ignorance.  

Speaking out when the emperor is naked: the problem of digital mental health 

The magic fabric in our world is evidence. Or the lack of it.  

One of the main problems we have in healthcare today is the lies we tell ourselves, as well as the lies that are told by others, so I thought I would take a moment to take a piece of what I believe to be poor policy and be the little kid in the crowd pointing out that the Emperor is not wearing any clothes.  

In this story, let’s look at the National Digital Mental Health Framework, which is not wearing much data.  

Data stories are usually made of three things: data, a narrative and good imagery. Over time, I’ve watched data stories rely on less and less data. Sometimes, like this policy, they make a token attempt at data, a little like putting a sticker on lollies and calling them “all natural” or labelling a sugar-dense food as “lite”.  

Without data, the framework tries to engage us without explaining why. 

The data story 

The National Digital Health Framework tries to tell us that:  

“Digital Health has the potential to help people overcome healthcare challenges such as equitable access, chronic disease management and prevention and the increasing costs of healthcare. By enhancing the use of digital technologies and data, it enables informed decision-making, providing people better access to their health information when and where they need it, improved quality of care, and personalised health outcomes.” 

Governments would love this to be true. More public services are moving to digital means of access. And yet it seems the digital divide is a new social determinant of health. The poorer you are, the less access you have to digital goods, and as more and more services are now delivered online, the greater your disadvantage becomes.  

Digital mental health services give the illusion of universal access, but let’s look at how they are used in practice.  

The first thing to say is that they are used rarely. Despite the massive investment in digital tools, the people are “voting with their feet” and avoiding them. Here’s the data from the National Study of Mental Health and Wellbeing, 2020-2022.  

Proportion of people aged 16-85 years with a history of mental health disorders (with and without symptoms) accessing digital technologies – see the chart in the original article.

In most environments, this would be interpreted as consumers making a choice. In this environment, it is cast as a problem of GP motivation. GPs, apparently, are the problem, because we don’t know how to use digital mental health tools, or we don’t know they exist. Apparently, this means we need nudging to overcome our “reluctance”.  

The digital health strategy tells us we need to be nudged to overcome our “reluctance”.  

“Investments into research, education and awareness promotion, evidence translation, resources and tools contribute to building trust in the efficacy and effectiveness of digital mental health services for stakeholders.” 

So maybe GPs just need to become familiar with the evidence and stop “stalling digital mental health”. The scoping review prior to the release of the National Digital Mental Health Framework has an explicit reference to this:  

“What are possible financial and non-financial incentives (professional standards, training, monetary incentives) to encourage health practitioners to adopt digital mental health services into ‘business as usual’?” 

In other words, make it compulsory.  

At the moment, the PHNs are required to use a common assessment tool which, despite over $34 million in investment, has no validity or reliability measures, to place people into five levels. At levels one and two, digital mental health strategies are the preferred solution to their needs. If I were psychic, I would predict that when GPs are “required” to use the tool, no-one will get a level 1 or 2 classification.

Examining the evidence in the National Digital Mental Health Strategy 

Despite the investment in digital mental health tool research, this document resorted to quoting one paper to support its claim of efficacy. The paper is by Gavin Andrews and his team, and is a meta-analysis from 2010 looking at the evidence from 1998-2009. I would have to say the world has moved on.  


To be fair, the framework also references the Australian Commission on Safety and Quality in Health Care’s (2020) National Safety and Quality Digital Mental Health Standards. This document also references one paper, a more modern one, but one with significant flaws. They use this paper to back their statement that: “There is growing evidence regarding the important role digital mental health services can play in the delivery of services to consumers, carers and families”.   

The paper, from 2016, examined the “Mindspot” online clinic. Let’s look at their evidence, or more specifically the cohort. Here’s the attrition of their convenience sample – see the figure in the original article.

So, of the people who ended up on the website (who I suspect are not representative of the broader population) 0.5% finished the study. It doesn’t matter what clever statistics are done on this tiny sample of a tiny sample. It can’t prove anything, except that 99.8% of people who visited the site left before completion.  

In any other setting, this would be considered a problem. However, the authors assert in their conclusion that “this model of service provision has considerable value as a complement to existing services, and is proving particularly important for improving access for people not using existing services”. 

This is the best paper (presumably) the Australian Commission on Safety and Quality in Health Care could find.  

The impact of digital mental health strategies on mental health policy 

The impact of this belief in digital mental health is significant. There has been substantial investment in digital health tools, including mandating the use of the Initial Assessment and Referral Tool to enable stepped care at the PHN level with digital health engagement.  

Here is the ongoing investment in digital mental health initiatives, excluding research and other grants: 

  • 2021-2022: $111.2 million to create a “world-class digital mental health service system”; 
  • 2022-2023: $77 million on digital mental health services; 

The scoping review in 2022 identified 29 digital mental health services funded by the Australian government. Here is the breakdown of budget announcements in the 2022-2023 budget (see the original article). The budget for digital mental health would cover 308 full-time GPs doing 40-minute consultations. That’s 887,000 consultations a year.
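A quick back-of-the-envelope check suggests that figure is internally consistent (a sketch only, assuming a 40-hour week and a 48-week working year – my assumptions, not the author’s):

```python
# Rough check of the "308 full-time GPs doing 40-minute consultations
# = 887,000 consultations a year" claim.
# Assumed (not stated in the article): 40 hours/week, 48 working weeks/year.

consultations_per_year = 887_000
minutes_per_consultation = 40
hours_per_gp_year = 40 * 48  # 1,920 consulting hours per full-time GP

total_hours_needed = consultations_per_year * minutes_per_consultation / 60
full_time_gps = total_hours_needed / hours_per_gp_year

print(f"Consulting hours required: {total_hours_needed:,.0f}")  # ~591,333
print(f"Full-time GP equivalents:  {full_time_gps:.0f}")        # ~308
```

On those assumptions the arithmetic lands almost exactly on the 308 full-time GPs quoted above.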

GPs are a funny lot.  

John Snow knew the Broad Street pump in London was causing cholera, even though the reigning officials at the time did not. When governments refused to listen, John Snow worked around them, removing the pump handle himself. He was, of course, right.  

The second thing to know about John Snow was that his passionate opponent, William Farr, who was also a GP, changed his mind in the face of the evidence and supported John Snow in his quest to ensure Londoners had clean water. GPs do eventually follow the evidence. 

However, we are natural sceptics when it comes to innovation. We’ve seen “miracle” drugs come and go, and “miracle” devices like pelvic mesh cause harm. We can spot marketing and vested interests a mile away.  

The National Digital Mental Health Strategy is full of vested interests. Increasing uptake of digital mental health improves the bottom line of many digital entrepreneurs and gives the government the illusion of universal access to care, when in reality most patients miss out on therapy.  

It is our job as GPs to advocate for our patients. Digital mental health solutions may well work for some, but they are also part of a trend towards “technological solutionism” – the common tendency to “solve” complex problems by offering an app.

Evidence-based medicine requires the whole triad of the best evidence, clinical opinion and patient preference. I’m going to be the small boy at the parade here and say digital mental health services are not the “solutions” they claim to be. They lack data, GPs find them useful in a small proportion of their patients, and most patients don’t want them.  

This does not make me a Luddite. It makes me an honest clinician fulfilling my ethical obligation to make clinical decisions on evaluating the efficacy of treatment for my practice population.  

In my view, the Emperor desperately needs a different tailor.  

Associate Professor Louise Stone

MBBS BA DipRACOG GDFamMed MPH MQHR MSHCT PhD FRACGP FACRRM FASPM

Associate Professor, Social Foundations of Medicine, Medical School

ANU College of Health and Medicine

E: louise.stone@anu.edu.au

More here:

https://www.medicalrepublic.com.au/the-emperors-new-clothes-and-digital-mental-health-policy/106072

Great perspective here and well worth a read!

David.

Thursday, March 28, 2024

Do You Think What Is Discussed Here Will Provide Any Real Worthwhile Outcome(s)?

This popped up a few days ago:

MEDIA STATEMENT

March 21 2024

In a significant stride towards advancing digital health leadership across the nation, the Australasian Institute of Digital Health (AIDH) and the Digital Health Cooperative Research Centre (DHCRC) proudly announce the release of the Clinical Informatics Fellowship Stakeholder Engagement Report. This pivotal document outlines the proposed framework for Australia’s Clinical Informatics Fellowship (CIF) Program, highlighting a collective commitment to develop a recognised clinical informatics fellowship pathway.

The report emphasises the critical need to establish a valued and accredited career path for clinicians and clinical informaticians, aiming to bolster the evolution of digital health leadership within Australia. This initiative stems from the collaborative efforts of the AIDH and DHCRC, who, with the support of DHCRC’s funding, embarked on a journey through 2022 and 2023 to conceive a Clinical Informatics Fellowship pathway. This endeavour involved research into international models and consultations with a diverse group of stakeholders, including healthcare leaders, educational institutions, and professional associations.

Feedback gathered over 18 months from a wide array of contributors has been instrumental in shaping the program.

The Report details the governance structure, analytical processes, stakeholder engagement, and consultation efforts, culminating in a summary of stakeholder feedback and adjustments made following these discussions. Additionally, it identifies critical areas requiring further exploration before the initiation of a pilot program for the new fellowship pathway.

Future Directions and Objectives

As we move into 2024 the AIDH and DHCRC are working collaboratively on planning the forthcoming stages of the fellowship program, including a pilot phase with a select group of candidates. This step is crucial for refining the program based on participant feedback, paving the way for its official launch to potential digital health community candidates.

Both organisations reiterate their commitment to continuous engagement with the digital health community, healthcare professionals, educational institutions, and other key stakeholders as they progress through this developmental phase.

About the Clinical Informatics Fellowship

The CIF Program is designed to recognise clinical informatics as a distinguished profession in Australia and internationally. It aims to cultivate a broad and diverse pool of skilled clinical informaticians, aligning with AIDH’s Australian Health Informatics Competency Framework (AHICF). The program will offer clinicians a path to achieve a nationally recognised Fellowship, highlighting their specialised knowledge, skills, and credentials in informatics.

Media contact: media@digitalhealth.org.au

Here is the link:

https://digitalhealth.org.au/career-building/clinical-informatics-fellowship/

Also we have the Media Statement

AIDH MEDIA STATEMENT

An engagement report informing the proposed model for a Clinical Informatics Fellowship (CIF) Program in Australia was released today, with a commitment to progress the build of a clinical informatics fellowship pathway through 2024.

The report emphasises the importance of establishing a recognised and valued professional trajectory for clinicians and clinical informaticians, fostering the growth of digital health leadership in Australia.

During 2022 and 2023, the Australasian Institute of Digital Health (AIDH) and the Digital Health Cooperative Research Centre (DHCRC) partnered on a project to build a Clinical Informatics Fellowship pathway. The project, funded by DHCRC, explored international models and engaged key stakeholders and experts in the design of a new fellowship pathway for Australia.

The Clinical Informatics Fellowship Stakeholder Engagement Report details the project’s governance, analysis undertaken, stakeholder involvement and consultation, which took place on the project during 2022 and 2023. Stakeholder input was received over 18 months from leadership of AIDH and DHCRC, clinicians, healthcare executives, universities, clinical colleges and associations.  AIDH and DHCRC would like to restate their gratitude for the comprehensive feedback that was contributed by the healthcare community.

The Report summarises stakeholder feedback, illustrating the refinements following consultation, and noting the outstanding matters to be worked through prior to commencement of a pilot of the new fellowship pathway.

“We are delighted to publish the Stakeholder Engagement Report in partnership with DHCRC and inform the broader health community on progress towards our goal of a widely recognised career pathway for emerging leaders in digital health”, said AIDH’s Interim CEO, Mark Nevin FAIDH.

Progressing Work through 2024

The DHCRC and the AIDH are discussing the next stages of the program of work to complete the build of the clinical informatics fellowship pathway and undertake a pilot. The pilot is intended to evaluate the new fellowship pathway and provide insights into further improvements. The pathway will then be officially launched to potential candidates within the digital health community.

Annette Schmiede, CEO of DHCRC said, “We are proud to collaborate with AIDH and support the progression of this ambitious program of work. A clinical informatics pathway, open to all health professions, would be a global first and support the digital transformation of our healthcare system”.

AIDH and DHCRC are committed to ongoing consultation with the digital health community, clinicians, healthcare leaders, universities, clinical colleges and other peak bodies as we advance through the next stage of development of the program.

Notes to Editors

The objectives of the CIF Program are to:

  • Establish clinical informatics as an acknowledged and recognised profession in Australia, with international credibility and standing
  • Build and foster a large and diverse workforce of skilled and well-networked clinical informaticians who are included in leadership in the digital transformation of the health and social care sectors
  • Align with AIDH’s Australian Health Informatics Competency Framework (AHICF), which outlines essential domains of expertise and corresponding competencies required for proficiency in health informatics.

The Clinical Informatics Fellowship Stakeholder Engagement Report is available here
Further information about AIDH is available here
Further information about DHCRC is available here

Here is a link:

https://digitalhealth.org.au/blog/clinical-informatics-fellowship-stakeholder-engagement-report-released/

The rather sad thing about all this is the pursuit of status, rather than just getting on with things with clarity of purpose and letting status come as a natural by-product!

Who knew that digital leadership needed to be "advanced"?

Come on team – show you are useful and worthwhile and you will get the recognition and status you seem to be obsessed by! I note the recognised experts from the US and the UK – and our own as well – don’t seem to need all this recognition etc. Those who matter know who they are and that is all that is needed!

David.

In passing, I wonder whether the plan is to scrap all the present Fellows and Associate Fellows of the AIDH and start again? Might not be a bad idea? Can we also get rid of all those silly "badges" while we are at it?

See here:

https://digitalhealth.org.au/digital-badges/

What madness!

D.

Wednesday, March 27, 2024

Now It Has Become The Third Largest Company On The Planet We Need To Take Some Notice!

This appeared a few days ago:

Nvidia: what’s so good about the tech firm’s new AI superchip?

US firm hopes to lead in artificial intelligence and other sectors – and has built a model that could control humanoid robots

Alex Hern

Wed 20 Mar 2024 02.11 AEDT. Last modified on Wed 20 Mar 2024 02.17 AEDT

The chipmaker Nvidia has extended its lead in artificial intelligence with the unveiling of a new “superchip”, a quantum computing service, and a new suite of tools to help develop the ultimate sci-fi dream: general purpose humanoid robotics. Here we look at what the company is doing and what it might mean.

What is Nvidia doing?

The main announcement of the company’s annual developer conference on Monday was the “Blackwell” series of AI chips, used to power the fantastically expensive datacentres that train frontier AI models such as the latest generations of GPT, Claude and Gemini.

One, the Blackwell B200, is a fairly straightforward upgrade over the company’s pre-existing H100 AI chip. Training a massive AI model, the size of GPT-4, would currently take about 8,000 H100 chips, and 15 megawatts of power, Nvidia said – enough to power about 30,000 typical British homes.

With the company’s new chips, the same training run would take just 2,000 B200s, and 4MW of power. That could lead to a reduction in electricity use by the AI industry, or it could lead to the same electricity being used to power much larger AI models in the near future.

What makes a chip ‘super’?

Alongside the B200, the company announced a second part of the Blackwell line – the GB200 “superchip”. It squeezes two B200 chips on a single board alongside the company’s Grace CPU, to build a system which, Nvidia says, offers “30x the performance” for the server farms that run, rather than train, chatbots such as Claude or ChatGPT. That system also promises to reduce energy consumption by up to 25 times, the company said.

Putting everything on the same board improves the efficiency by reducing the amount of time the chips spend communicating with each other, allowing them to devote more of their processing time to crunching the numbers that make chatbots sing – or talk, at least.

What if I want bigger?

Nvidia, which has a market value of more than $2tn (£1.6tn), would be very happy to provide. Take the company’s GB200 NVL72: a single server rack with 72 B200 chips set up, connected by nearly two miles of cabling. That’s not enough? Why not look at the DGX Superpod, which combines eight of those racks into one shipping-container-sized AI datacentre in a box. Pricing was not disclosed at the event, but it’s safe to say that if you have to ask, you can’t afford it. Even the last generation of chips came in at a hefty $100,000 or so a piece.

What about my robots?

Project GR00T – apparently named after, though not explicitly linked to, Marvel’s arboriform alien – is a new foundation model from Nvidia developed for controlling humanoid robots. A foundation model, such as GPT-4 for text or Stable Diffusion for image generation, is the underlying AI model on which specific use cases can be built. They are the most expensive part of the whole sector to create, but are the engines of all further innovation, since they can be “fine-tuned” to specific use cases down the line.

Nvidia’s foundation model for robots will help them “understand natural language and emulate movements by observing human actions – quickly learning coordination, dexterity, and other skills in order to navigate, adapt, and interact with the real world”.

GR00T pairs with another piece of Nvidia tech (and another Marvel reference) in Jetson Thor, a system-on-a-chip designed specifically to be the brains of a robot. The ultimate goal is an autonomous machine that can be instructed using normal human speech to carry out general tasks, including ones it hasn’t been specifically trained for.

Quantum?

One of the few buzzy sectors that Nvidia doesn’t have its fingers in is quantum cloud computing. The technology, which remains at the cutting edge of research, has already been incorporated into offerings from Microsoft and Amazon, and now Nvidia’s getting into the game.

More here:

https://www.theguardian.com/business/2024/mar/19/nvidia-tech-ai-superchip-artificial-intelligence-humanoid-robots

There is little doubt this is a company we all have to keep a close eye on.

I fear they are seeking “global domination” or similar!!!

David.

Tuesday, March 26, 2024

Telstra Seems To Think It Is Leading The Digital Healthcare Transition.

19/03/2024

422 - Leading Digital Transformation in Healthcare: Elizabeth Koff, Telstra Health

In the wake of the digital age, the healthcare industry finds itself at the crossroads of opportunity and challenge, poised to enhance patient care and bridge gaps in the system through the power of technology. 

In this episode of Talking HealthTech, Elizabeth Koff, Managing Director of Telstra Health, shared invaluable insights into the critical focus areas and transformative potential of digital health. From safeguarding data security to leveraging AI, the conversation shed light on the pivotal role of digital health in revolutionising healthcare delivery.

The Ripple Effect of Connected Care

The vision for connected care has taken centre stage, with a strong emphasis on elevating patient and clinician experiences while steering towards improved health outcomes. There is a need for real-time information sharing among stakeholders to facilitate seamless care delivery. 

"Connected care makes for a better patient experience and a better clinician experience, ultimately leading to superior health outcomes."

The collaborative nature of healthcare highlights the critical importance of partnerships and ecosystems in fostering the connected care vision. No single provider can meet the full spectrum of connectivity, necessitating a cohesive ecosystem involving clinicians, healthcare providers, digital health solutions, and government bodies. This collaborative approach aims to bridge existing gaps in the healthcare system.

Navigating the Cybersecurity Terrain

The rising prominence of cybercrime in the digital healthcare landscape prompts a dedicated focus on safeguarding both solutions and data. This calls for a robust clinical governance framework, as seen with Telstra Health's commitment to upholding clinical safety across all aspects of digital health.

The gravity of cybersecurity risks in healthcare stretches beyond mere breaches of personal information. The often-understated risks of denial of service, whereby digital solutions become ineffective, can pose clinical harm and compromise patient safety.

While acknowledging the omnipresence of cyber threats, there is an imperative to propel innovation responsibly, leveraging the transformative potential of data and AI in healthcare.

"The benefits far outweigh the risks, but we have to proceed responsibly and cautiously."

Unlocking the Potential of AI in Healthcare

The transformative power of AI in healthcare is at the forefront, with its far-reaching potential across the patient care continuum, from preventive measures and personalised medicine to clinical administrative tasks. AI's role in empowering patients to understand and manage their healthcare needs is highlighted as a pivotal advancement.

The burden of administrative tasks on clinicians forms a significant pain point within healthcare, further underscoring the potential of AI in streamlining clinical workflows. AI can be a catalyst in liberating the workforce to focus on what they do best: providing clinical care.

Digital Healthcare: Unveiling New Horizons

There is uncharted potential in digital health, and we need to continue to raise the bar on digital maturity within healthcare, echoing international benchmarks and paving the way for unparalleled advancements in healthcare connectivity.

The innate need for a patient-controlled health record resonates, reflecting a pivotal shift towards empowering consumers to engage with their health information. The dissemination of data across various healthcare settings, driven by an agnostic approach to digital solutions, is poised to strike a harmonious balance between access and security.

The era of superficial digital conversions gives way to the dawn of truly transformative digital health solutions. A future is possible where digital health forms the backbone of a connected, efficient, and patient-centred healthcare ecosystem.

Embracing Innovation: A Collaborative Voyage

The journey ahead for Telstra Health is illuminated by the focus towards platform-centric solutions, with the Telstra Health Cloud poised to harness the wealth of data and information aggregated within the business. This platform-centric approach aligns with the future outlook of connected care and transformative digital health solutions.

The landscape of healthcare is evolving at an exponential pace, fuelled by a myriad of players within the digital health arena. The promise of a healthcare landscape where platform solutions lay the groundwork for connected care initiatives is real, which will propel healthcare into uncharted territory.

Here is the link:

https://www.talkinghealthtech.com/podcast/422-leading-digital-transformation-in-healthcare-elizabeth-koff-telstra-health

I will leave it to others to assess what Ms Koff has to say and how Telstra Health fits into the Digital Health ecosystem.

Anyone with real-world experience of their offerings or activities, feel free to comment. I would have to say I have never really seen Telstra as a significant Digital Health player.

David.

 

Sunday, March 24, 2024

The Australasian Institute Of Digital Health Is Moving To Raise Its Profile – Will It Matter?

This appeared last week:

Opinion: Advancing artificial intelligence in healthcare in Australia

21 March 2024

By Mark Nevin

The rapid acceleration of artificial intelligence (AI) technologies is increasing the urgency to ensure Australia’s digital health community is equipped to lead the adoption of AI in a safe, responsible and effective way.

This AI revolution is already impacting healthcare pathways from screening to diagnosis, treatments and personalised therapeutics, with implications for ethics, patient safety, workforce and industry.

As Australasia’s peak body for digital health, the Australasian Institute of Digital Health (AIDH) has an ambitious program of work planned for 2024 to continue to support the safe adoption of AI in healthcare.

At our board strategy day in February, it was agreed that AI would be a key priority for this year and that AIDH should forge ahead with significant plans to promote responsible use of AI in healthcare, including collaborating with other stakeholders to achieve this important goal.

There are multiple reasons why more needs to be done to advance AI in healthcare in Australia, with health and financial benefits at the top of the list. Unlike other developed nations, Australia lags behind in co-ordinated support for the healthcare AI sector.

This is despite achievable healthcare benefits including reducing national health expenditure and improving the quality of life of Australians by contributing to long-term health system safety and effectiveness. A review of the English NHS found productivity improvements from AI of £12.5 billion a year were possible (9.9 per cent of that budget). Similar impacts should be possible in Australia.

The limited penetration of AI into Australian healthcare is due to an unprepared ecosystem.

Supporting roadmap implementation

One pivotal development for AI in Australia is the publication of the National Policy Roadmap for Artificial Intelligence in Healthcare. Founder of the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) Enrico Coiera launched the roadmap at the AIDH’s AI.Care conference in Melbourne in November 2023.

AIDH has been a central player in the alliance since its inception. It is an invaluable platform for collaboration between academia, industry, healthcare organisations and government.

AIDH is proud to have contributed to the roadmap’s development and will help progress its recommendations. It shows the way for the digital health community, industry and governments to establish guardrails for the implementation of AI so it benefits, rather than harms, patients.

The roadmap identifies gaps in Australia’s capability to translate AI effectively and safely into clinical services. It identifies what is needed to build the capabilities of industry, support implementation and enhance regulation, and provides guidance on how to address these gaps holistically.

AIDH will support its implementation through workforce upskilling, community engagement and our policy and advocacy work as the peak body for digital health.

Readers will be aware that Australia has several regulatory and government agencies responsible for different aspects of AI, but as the roadmap argues, a coordinated system-wide approach is needed to ensure patients are protected, our health workforce is optimised and Australia can foster a healthcare-specific AI industry.

A critical frontier for the AI industry in healthcare is in accessing the necessary knowhow, technology, curated data and resources to develop, attain approval for and implement real-world, scalable AI systems that enhance patient care. That is easier said than achieved. Those capability gaps hinder the successful adoption of recent and anticipated AI advances into tools that enhance frontline health services.

Implementation and scalability of AI in health have also been limited by the complexities of care delivery and the limited availability of an AI-literate workforce. Where AI has been adopted successfully, this has involved close collaboration between developers, clinical experts and health providers to address challenges in safety and ethics, and access to IP and quality, curated data.

Preparing the health sector and industry for AI

AI has been a central and recurring theme at digital health conferences and events in recent years, laying important baseline understanding of the opportunity and challenges.

Our inaugural AI.Care conference last year aimed to progress the dialogue to how the digital health community can implement AI safely. Enduring demand for insights in this area means we will convene this conference again this year and in future years. The format brings together health experts in AI and industry, with the potential for shared learnings, workshops and fostering of partnerships between small and medium enterprises, larger technology players and the digital health community.

Partnerships are critical in a rapidly changing world of new technologies, allowing firms and providers to quickly source external talent and capabilities, thereby accelerating their ability to solve problems, improve service offerings and achieve scale. AIDH is at the centre of a large and vibrant digital health community. We play a key role in bringing people together and allowing them to share ideas, establish and deepen links with a wide array of experts.

Many of our fellows (FAIDH), certified health informaticians (CHIAs) and other members have many years of experience working with AI, providing an invaluable pool of expertise for new partnerships and to progress the roadmap’s plans and recommendations.

AIDH is also working to deepen our relationships with the alliance and government to place our members at the heart of the health sector’s journey into AI. We look forward to leveraging our collective expertise to support Australia’s use of these new technologies.

Workforce upskilling in digital health and AI

AIDH is also instrumental in delivering national digital health workforce projects that will benefit AI. These were recently highlighted in Australia’s Digital Health Blueprint 2023-2033 and Action Plan and also in Australia’s National Digital Health Strategy 2023-2028 and an accompanying Strategy Delivery Roadmap to support implementation.

One foundational action was the release of an online Digital Health Hub developed by the AIDH in partnership with the Australian Digital Health Agency. The hub is built to assist clinical and non-clinical professionals in building their career pathways, digital health capability and confidence, preparing the health workforce to use digital technologies including AI.

This year, the hub will be enhanced with additional curated digital health workforce content and resources for digital health learning, education and practice. The hub can assess both individual capabilities and organisation-wide workforce readiness for digital adoption.

The adoption of AI technologies at scale requires leadership skills, robust clinical governance and advanced AI expertise in the health workforce. Clinical expertise in AI is required to apply ethical principles to specific use cases and prepare for on-site deployment. Experts will train their colleagues to become competent users of AI: understanding the shortcomings of an AI tool, their clinical responsibilities, how to mitigate risks and explain findings to patients.

In 2024, AIDH will progress work to build and pilot new fellowship pathways for clinical and non-clinical professionals, which include postgraduate study, mentorship, practical application of skills and an exit assessment. This work will allow digital health professionals to acquire valuable skills and experience, while tailoring their own journey to become deep subject-matter experts. We anticipate many choosing to specialise in the field of AI.

Leadership will be essential to manage the changes ahead as we adopt AI at scale. This year will also see the return of our Digital Health Executive Leadership Program (alongside the HIC conference in August) and Women in Digital Health Leadership. The latter has just opened for applications until mid-April. Both programs support participants to enhance their leadership skills overall and apply those to the unique challenges of digital health.

AIDH looks forward to playing a key role in establishing guardrails and building capabilities for AI in healthcare. We will be very reliant on partnerships, our membership and the whole digital health community to do so. By working together, we can position Australia as a global leader in ethical and safe deployment.

Mark Nevin FAIDH is Interim CEO, Australasian Institute of Digital Health until 31 March, 2024. Mark was awarded a fellowship by AIDH in 2020 in recognition of his inaugural work on telehealth and AI. He has developed frameworks for the safe deployment of AI in clinical care, including standards of practice to establish parameters for governance and quality and safety and guide providers. Mark has been an active contributor to AAAIH since its inception, providing strategic policy input to its projects.

Here is the link:

https://www.pulseit.news/opinion/opinion-advancing-artificial-intelligence-in-healthcare-in-australia/

I guess the question with all this is just what measurable or observable outcomes or improvements are actually being seen.

I find there is a lot of ‘management speak’ here but not much concrete progress, noting that RMIT, for example, states:

“Our strategy is to grow market share and leadership with impactful industry collaboration across the Hub’s key program areas by 2027 through offering our key stakeholders of multinationals and SMEs a university ‘one stop shop’ via a thematic multi-sectorial collaborative ecosystem for complex, interdisciplinary inquiry and innovation with national and international impact pathways.”

Here is the link:

https://digitalhealth.org.au/cm-business/rmit-university/

I have not seen quite so much aggregate “management speak” in a fair while!

I really wonder what all this ‘verbiage’ is adding to anything? Does anyone really think anything would change with or without the AIDH contribution?

Maybe someone from the AIDH could explain to us all what this all actually means in a comment, and what they are actually contributing that will make a difference. Or am I being too hard, or have I missed it?

David.

AusHealthIT Poll Number 739 – Results – 24 March, 2024.

Here are the results of the recent poll.

Does The Government Have Any Sensible Idea On What To Do To Manage And Reduce The Vaping Epidemic?

Yes 3 (9%)

No 29 (88%)

I Have No Idea 1 (3%)

Total No. Of Votes: 33

People seem to think that the Government is a bit short of workable ideas on how to best manage and reduce vaping!

Any insights on the poll are welcome, as a comment, as usual!

A good number of votes. But also a very clear outcome! 

1 of 33 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 22, 2024

It Seems They Are Still Looking For Something Useful To Do With The myHR.

This appeared last week:

Moving toward a more connected aged health system with My Health Record

By Sean McKeown

By using My Health Record, care providers can gain access to health information that aims to improve continuity of care across the spectrum, from aged care nurses to GPs.

The Aged Care Registration Project, coordinated by the Australian Digital Health Agency, offers support for residential aged care homes to connect to My Health Record.

The project emphasises several key points, including the benefits for providers, carers, and consumers, the availability of extensive records that include vaccination information, diagnostic imaging, advance care plans and GP summaries.

As of February 2024, 35% of residential aged care homes in Australia are connected to My Health Record, a notable increase from 12% just 18 months ago when the project was established.  This growth is attributed to the growing benefits of accessing My Health Record, with a continuing stream of comprehensive health information being added. The capability to upload advance care plans to My Health Record is a significant development, facilitating better-coordinated care in both residential aged care and home care settings.

The Agency has collaborated with numerous software vendors to develop systems that seamlessly integrate with My Health Record. Currently, over 13 software vendors have systems supporting this integration, with plans to engage with additional vendors in the future. This integration enables authorised staff members to access a resident’s comprehensive health record, including vital information such as discharge summaries, pathology results, and medication history.

A share-by-default approach for pathology and diagnostics information would continually add to the current records held by almost 24 million Australians in My Health Record.

Speaking with Inside Ageing, Laura Toyne from the Agency highlighted My Health Record as the digital solution for streamlining the information transfer from aged care to acute care settings.

“The Aged Care Transfer Summary (ACTS) within My Health Record facilitates the transfer of essential health information when a resident is transferred to acute hospital care. This includes details such as reasons for transfer, current medications, and other relevant records, thereby improving the efficiency and safety of care transitions,” Ms Toyne added.

“Helping providers into the digital sphere has the potential to save them time and money. There are some initial investments to build digital literacy, and once this is done, considerable gains across efficiency and improved care outcomes can be realised.”

Laura Toyne, Branch Manager, National Program Delivery, Australian Digital Health Agency

The Agency is actively engaged in promoting the benefits of digital health and supporting providers in adopting these technologies.

Registration support is available to help you connect

Through tailored registration support and educational resources, the Agency will help aged care providers navigate the transition to digital health solutions.

A registration support team is available to connect residential aged care homes, with tailored, one-on-one registration support available via e-learning modules, webinars, training simulators and more.

Don’t miss this opportunity to join the digital health revolution. Visit the Australian Digital Health website to register your interest and the team will contact you with further information and next steps.

Here is the link:

https://insideageing.com.au/moving-toward-a-more-connected-aged-health-system-with-my-health-record/

This really is one of those instances where connecting the Aged Care Home to the My Health Record clearly leads to the next question of what the Aged Care Home(s) would do with the record – given they already have their own record-keeping systems and are run off their feet providing necessary and rather more relevant care than posting to the myHR!

Love this quote from the article:

"The Agency is actively engaged in promoting the benefits of digital health and supporting providers in adopting these technologies."

I have not heard of a huge level of adoption in response to the ADHA's efforts to date - or have I missed it?

I am looking forward to a post from an Aged Care Provider telling us all just how useful and relevant they are finding the myHR for their patients – but I guess they may be too busy!

Hope springs eternal!

David.

Thursday, March 21, 2024

I Think There Is An Important Message Here About The Application Of AI

This appeared last week:

John Halamka on the risks and benefits of clinical LLMs

At HIMSS24, the president of Mayo Clinic Platform offered some tough truths about the challenges of deploying genAI – touting its enormous potential while spotlighting patient-safety dangers to guard against in provider settings.

By Mike Miliard

March 13, 2024 11:13 AM

ORLANDO – At HIMSS24 on Tuesday, Dr. John Halamka, president of Mayo Clinic Platform, offered a frank discussion about the substantial potential benefits – and very real potential for harm – in both predictive and generative artificial intelligence used in clinical settings.

Healthcare AI has a credibility problem, he said. Mostly because the models so often lack transparency and accountability.

"Do you have any idea what training data was used on the algorithm, predictive or generative, you're using now?" Halamka asked. "Is the result of that predictive algorithm consistent and reliable? Has it been tested in a clinical trial?"

The goal, he said, is to figure out some strategies so "the AI future we all want is as safe as we all need."

It starts with good data, of course. And that's easier discussed than achieved.

"All algorithms are trained on data," said Halamka. "And the data that we use must be curated, normalized. We must understand who gathered it and for what purpose – that part is actually pretty tough."

For instance, "I don't know if any of you have actually studied the data integrity of your electronic health record systems, and your databases and your institutions, but you will actually find things like social determinants of health are poorly gathered, poorly representative," he explained. "They're sparse data, and they may not actually reflect reality. So if you use social determinants of health for any of these algorithms, you're very likely to get a highly biased result."

More questions to be answered: "Who is presenting that data to you? Your providers? Your patients? Is it coming from telemetry? Is it coming from automated systems that extract metadata from images?"

Once those questions are answered satisfactorily, that you've made sure the data has been gathered in a comprehensive enough fashion to develop the algorithm you want, then it's just a question of identifying potential biases and mitigating them. Easy enough, right?

"In the dataset that you have, what are the multimodal data elements? Just patient registration is probably not sufficient to create an AI model. Do you have such things as text, the notes, the history and physical [exam], the operative note, the diagnostic information? Do you have images? Do you have telemetry? Do you have genomics? Digital pathology? That is going to give you a sense of data depth – multiple different kinds of data, which are probably going to be used increasingly as we develop different algorithms that look beyond just structured and unstructured data."

Then it's time to think about data breadth. "How many patients do you have? I talked to several colleagues internationally that say, well, we have a registry of 5,000 patients, and we're going to develop AI on that registry. Well, 5,000 is probably not breadth enough to give you a highly resilient model."

And what about "heterogeneity or spread?" Halamka asked. "Mayo has 11.2 million patients in Arizona, Florida, Minnesota and internationally. But does it offer representative data of France, or a representative Nordic population?"

As he sees it, "any dataset from any one institution is probably going to lack the spread to create algorithms that can be globally applied," said Halamka.

In fact, you could probably argue there is no one who can create an unbiased algorithm developed in one geography that will work in another geography seamlessly.

What that implies, he said, is "you need a global network of federated participants that will help with model creation and model testing and local tuning if we're going to deliver the AI result we want on a global basis."

On that front, one of the biggest challenges is that "not every country on the planet has fully digitized records," said Halamka, who was recently in Davos, Switzerland for the World Economic Forum.

"Why haven't we created an amazing AI model in Switzerland?" he asked. "Well, Switzerland has extremely good chocolate – and extremely bad electronic health records. And about 90% of the data of Switzerland is on paper."

But even with good digitized data, and even after accounting for that data's depth, breadth and spread, there are still other questions to consider. For instance, what data should be included in the model?

"If you want a fair, appropriate, valid, effective and safe algorithm, should you use race ethnicity as an input to your AI model? The answer is to be really careful with doing that, because it may very well bias the model in ways you don't want," said Halamka.

"If there was some sort of biological reason to have race ethnicity as a data element, OK, maybe it's helpful. But if it's really not related to a disease state or an outcome you're predicting, you're going to find – and I'm sure you've all read the literature about overtreatment, undertreatment, overdiagnosis – these kinds of problems. So you have to be very careful when you decide to build the model, what data to include."

Even more steps: "Then, once you have the model, you need to test it on data that's not the development set, and that may be a segregated data set in your organization, or maybe another organization in your region or around the world. And the question I would ask you all is, what do you measure? How do you evaluate a model to make sure that it is fair? What does it mean to be fair?"

Halamka has been working for some time with the Coalition for Health AI, which was founded with the idea that, "if we're going to define what it means to be fair, or effective, or safe, that we're going to have to do it as a community."

CHAI started with just six organizations. Today, it's got 1,500 members from around the world, including all the big tech organizations, academic medical centers, regional healthcare systems, payers, pharma and government.

"You now have a public private organization capable of working as a community to define what it means to be fair, how you should measure what is a testing and evaluation framework, so we can create data cards, what data went into the system and model cards, how do they perform?"

It's a fact that every algorithm will have some sort of inherent bias, said Halamka.

That's why "Mayo has an assurance lab, and we test commercial algorithms and self-developed algorithms," he said. "And what you do is you identify the bias and then you mitigate it. It can be mitigated by returning the algorithm to different kinds of data, or just an understanding that the algorithm can't be completely fair for all patients. You just have to be exceedingly careful where and how you use it.

"For example, Mayo has a wonderful cardiology algorithm that will predict cardiac mortality, and it has incredible predictive, positive predictive value for a body mass index that is low and a really not good performance for a body mass index that is high. So is it ethical to use that algorithm? Well, yes, on people whose body mass index is low, and you just need to understand that bias and use it appropriately."

Halamka noted that the Coalition for Health AI has created an extensive series of metrics and artifacts and processes – available at CoalitionforHealthAI.org. "They're all for free. They're international. They're for download."

Over the next few months, CHAI "will be turning its attention to a lot of generative AI topics," he said. "Because generative AI evaluation is harder."

With predictive models, "I can understand what data went in, what data comes out, how it performs against ground truth. Did you have the diagnosis or not? Was the recommendation used or helpful?"

With generative AI, "It may be a completely well-developed technology, but based on the prompt you give it, the answer could either be accurate or kill the patient."

Halamka offered a real example.

"We took a New England Journal of Medicine CPC case and gave it to a commercial narrative AI product. The case said the following: The patient is a 59-year-old with crushing, substantial chest pain, shortness of breath – and left leg radiation.

"Now, for the clinicians in the room, you know that left leg radiation is kind of odd. But remember, our generative AI systems are trained to look at language. And, yeah, they've seen that radiation thing on chest pain cases a thousand times.

"So ask the following question on ChatGPT or Anthropic or whatever it is you're using: What is the diagnosis? The diagnosis came back: 'This patient is having myocardial infarction. Anticoagulate them immediately.'

"But then ask a different question: 'What diagnosis shouldn't I miss?'"

To that query, the AI responded: "'Oh, don't miss dissecting aortic aneurysm and, of course, left leg pain,'" said Halamka. "In this case, this was an aortic aneurysm – for which anticoagulation would have instantly killed the patient.

"So there you go. If you have a product, depending on the question you ask, it either gives you a wonderful bit of guidance or kills the patient. That is not what I would call a highly reliable product. So you have to be exceedingly careful."

At the Mayo Clinic, "we've done a lot of derisking," he said. "We've figured out how to de-identify data and how to keep it safe, the generation of models, how to build an international coalition of organizations, how to do validation, how to do deployment."

Not every health system is as advanced and well-resourced as Mayo, of course.

"But my hope is, as all of you are on your AI journey – predictive and generative – that you can take some of the lessons that we've learned, take some of the artifacts freely available from the Coalition for Health AI, and build a virtuous life cycle in your own organization, so that we'll get the benefits of all this AI we need while doing no patient harm," he said.

More here:

https://www.healthcareitnews.com/news/john-halamka-risks-and-benefits-clincial-llms

It is well worth reading this article and following up the ideas offered. A really high-value talk I reckon!

David.