Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Sunday, February 11, 2024

AusHealthIT Poll Number 733 – Results – 11 February, 2024.

Here are the results of the poll.

Do You Believe Introduction Of The myHealthRecord Has Made A Significant Positive Impact On The Quality And Safety Of Clinical Health Services In Australia?

Yes: 1 (4%)

No: 27 (96%)

I Have No Idea: 0 (0%)

Total No. Of Votes: 28

As clear an outcome as you can get. The bottom line is that the myHealthRecord is a fizzer!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

0 of 28 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, February 09, 2024

One Man’s View On What We Can Do To Control AI

This appeared last week:

How we can control AI

By Eric Schmidt

The Wall Street Journal

12:26PM January 31, 2024

Today’s large language models, the computer programs that form the basis of artificial intelligence, are impressive human achievements.

Behind their remarkable language capabilities and impressive breadth of knowledge lie extensive swathes of data, capital and time.

Many take more than $US100m ($152m) to develop and require months of testing and refinement by humans and machines. They are refined, up to millions of times, by iterative processes that evaluate how close the systems come to the “correct answer” to questions and improve the model with each attempt.
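As a rough illustration only (the actual training pipelines are proprietary and vastly more complex), the evaluate-and-improve cycle Schmidt describes resembles a simple iterative optimisation loop: score the output against a "correct answer", then nudge the model slightly closer on each attempt.

```python
# Toy sketch of iterative refinement. This is an illustrative caricature,
# not how large language models are actually trained.

def score(output, target):
    """Higher is better: negative squared distance from the correct answer."""
    return -(output - target) ** 2

def refine(param, target, steps=10_000, lr=0.001):
    for _ in range(steps):
        # Evaluate how close the current output is to the correct answer...
        gradient = 2 * (target - param)
        # ...and improve the model slightly with each attempt.
        param += lr * gradient
    return param

model_param = refine(0.0, target=42.0)
print(round(model_param, 2))  # → 42.0
```

Repeated millions of times over billions of parameters, this kind of loop is what the months of testing and refinement amount to.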

What’s still difficult is to encode human values. That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs. This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.

But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behaviour: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies. At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks — all based on publicly available knowledge.

There’s little consensus around how we can rein in these risks. The press has reported a variety of explanations for the tensions at OpenAI in November, including that behind the then-board’s decision to fire chief executive Sam Altman was a conflict between commercial incentives and the safety concerns core to the board’s non-profit mission.

Potential commercial offerings like the ability to fine-tune the company’s ChatGPT program for different customers and applications could be very profitable, but such customisation could also undermine some of OpenAI’s basic safeguards in ChatGPT.

Tensions like that around AI risk will only become more prominent as models get smarter and more capable.

We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.

While most agree that today’s programs are generally safe for use and distribution, can our current safety tests keep up with the rapid pace of AI advancement? At present, the industry has a good handle on the obvious queries to test for, including personal harms and examples of prejudice. It’s also relatively straightforward to test for whether a model contains dangerous knowledge in its current state.

What’s much harder to test for is what’s known as “capability overhang” — meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.

Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves. This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch.

But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.

Another example would be “multi-agent systems,” where multiple independent AI systems are able to co-ordinate with each other to build something new.

Having just two AI models from different companies collaborating together will be a milestone we’ll need to watch out for. This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
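Schmidt's point about oversight capacity is easy to quantify with a back-of-the-envelope count: even restricting attention to pairwise collaborations, the number of possible combinations of systems grows quadratically, and far faster once larger groupings are allowed.

```python
from math import comb

# Distinct pairings possible among n independent AI systems, plus the
# count of every possible team of two or more (all subsets of size >= 2).
for n in (10, 100, 1000):
    pairs = comb(n, 2)          # n * (n - 1) / 2
    any_team = 2 ** n - n - 1   # all subsets minus singletons and the empty set
    print(n, pairs, any_team)
```

With just 1,000 systems there are already 499,500 possible pairs, which is the sense in which combinations "quickly exceed the capacity of human oversight".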

Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur. Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive. AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.

Europe has so far attempted the most ambitious regulatory regime with its AI Act, imposing transparency requirements and varying degrees of regulation based on models’ risk levels. It even accounts for general-purpose models like ChatGPT, which have a wide range of possible applications and could be used in unpredictable ways. But the AI Act has already fallen behind the frontier of innovation, as open-source AI models — which are largely exempt from the legislation — expand in scope and number.

President Biden’s recent executive order on AI took a broader and more flexible approach, giving direction and guidance to government agencies and outlining regulatory goals, though without the full power of the law that the AI Act has. For example, the order gives the National Institute of Standards and Technology basic responsibility to define safety standards and evaluation protocols for AI systems, but does not require that AI systems in the US “pass the test.” Further, both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.

I recently attended a gathering in Palo Alto organised by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivised to out-innovate each other — in short, a robust economy of testing.

…..

Eric Schmidt is the former CEO and executive chairman of Google and cofounder of the philanthropy Schmidt Sciences, which funds science and technology research.

The Wall Street Journal

The full article is here, with suggestions on how to check if the AIs are getting ahead of themselves and us!:

https://www.theaustralian.com.au/business/the-wall-street-journal/how-we-can-control-ai/news-story/454fae637233dada7d0adf7b6ff99541

Worth a read to get a handle on the issues surrounding all this!

David.

Thursday, February 08, 2024

This Looks Like (And Indeed Is) A Must Not Miss Paper On AI In Health.

This appeared last week:

Collective action for responsible AI in health

19 Jan 2024

Brian Anderson, Eric Sutherland

Publisher

OECD Publishing

Topics: Health services planning; Health; Primary health care; Person centred; Artificial Intelligence (AI)


Description

Artificial intelligence (AI) will have profound impacts across health systems, transforming health care, public health, and research. Responsible AI can accelerate efforts toward health systems being more resilient, sustainable, equitable, and person-centred. This paper provides an overview of the background and current state of artificial intelligence in health, perspectives on opportunities, risks, and barriers to success. The paper proposes several areas to be explored for policy-makers to advance the future of responsible AI in health that is adaptable to change, respects individuals, champions equity, and achieves better health outcomes for all.

The areas to be explored relate to trust, capacity building, evaluation, and collaboration. This recognises that the primary forces that are needed to unlock the value from artificial intelligence are people-based and not technical. The OECD is ready to support efforts for co-operative learning and collective action to advance the use of responsible AI in health.

Publication Details

DOI: 10.1787/f2050177-en

Copyright: OECD 2024

License type: All Rights Reserved

Access Rights Type: open

Series: OECD Artificial Intelligence Paper 10

Post date: 30 Jan 2024

----

Here is the link:

https://apo.org.au/node/325384

All I can say is that time spent reading this will provide a very useful perspective on what is happening globally and where to look next to follow up advances in this area.

Download the PDF linked from the referring page and I think you will be very pleased with what you read in its 41 pages. It is very up to date and well referenced, with material from 2023.

Enjoy and be grateful for the work that has already been done! I am.

David.

Wednesday, February 07, 2024

We Seem To Be Under Attack From AI From All Sides.

This appeared last week:

Special AI laws needed for financial services: Longo

Tom Burton and James Eyers

Jan 31, 2024 – 6.07pm

ASIC chairman Joe Longo says he is concerned about the high risks associated with AI and financial services, and he is unconvinced current rules will be enough to prevent unfair practices.

Risks from data poisoning, model manipulation and generative AI making up answers (so-called hallucinations) were warnings for firms not to become over-reliant on AI models that can’t be understood, examined or explained, Mr Longo warned.

To promote innovation, the federal government has embraced a light-touch voluntary approach to AI oversight, but with special rules to be developed for higher-risk environments such as healthcare and financial services.

Mr Longo endorsed the recent federal AI review’s finding that existing laws do not adequately prevent AI-facilitated harms before they occur.

“It’s clear … that a divide exists between our current regulatory environment and the ideal,” he told an AI governance conference at University of Technology Sydney’s Human Technology Institute on Wednesday.

“There’s a need for transparency and oversight to prevent unfair practices – accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure,” Mr Longo said.

“Does it prevent blind reliance on AI risk models without human oversight that can lead to underestimating risks? Does it prevent failure to consider emerging risks that the models may not have encountered during training?”

The ASIC chairman said current laws meant there were already obligations for boards of companies using AI applications and that firms should not make the mistake of thinking it was the Wild West as far as AI goes.

“Businesses, boards, and directors shouldn’t allow the international discussion around AI regulation to let them think AI isn’t already regulated. Because it is,” he said. He promised ASIC would continue to act, “and act early” to deter bad behaviour.

But Mr Longo pointed to AI-driven credit checks that arbitrarily blocked customers from getting loans as examples of current rules struggling to be effective and the need for special regulations to stop financial harm.

“Will they [customers] even know that AI was being used? And if they do, who’s to blame? Is it the developers? The company?

“And how would the company even go about determining whether the decision was made because of algorithmic bias, as opposed to a calculus based on broader data sets than human modelling?”

He said AI fraud controls raised similar risks.

“What happens to the debanked customer when an algorithm says so? What happens when they’re denied a mortgage because an algorithm decides they should be? When that person ends up paying a higher insurance premium, will they know why or even that they’re paying a higher premium? Will the provider?”

“...with ‘opaque’ AI systems, the mechanisms by which that discrimination occurs could be difficult to detect.”

“Even if the current laws are sufficient to punish bad actions, their ability to prevent the harm might not be,” Mr Longo said.

The market regulator also raised concerns about AI-powered investment apps.

“When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility… when there’s a lack of detection systems… yes, our regulations around responsible outsourcing may apply – but have they prevented the harm?

“Or a provider might use the AI system to carry out some other agenda, like seeking to only support related party product, or share offerings in giving some preference based on historic data,” he said.

It was important for policymakers to ask the right questions about AI, Mr Longo said.

“And one question we should be asking ourselves again and again is this: ‘Is this enough?’”

Questions about transparency, explainability and the rapidity of the technology deserved careful attention.

“They can’t be answered quickly or off-hand,” he said. “But they must be addressed if we’re to ensure the advancement of AI means an advancement for all.

APRA chairman John Lonsdale said the prudential regulator recognised AI’s ability to “significantly improve efficiency” in the system and to improve innovation, which was good for customers, but said it did involve a variety of risks that lenders had to be attuned to.

Mr Lonsdale said it was important for banks to have the right controls for AI apps.

“So all of that is wrapped up in now, in CPS 230 [the new standard on operational risk], which is coming into play, and so, at this point in time, we think we’ve got enough in terms of the regulation to guide through the risks that we’re seeing.”

More here:

https://www.afr.com/politics/federal/special-ai-laws-needed-for-financial-services-longo-20240131-p5f1b4

I found this a fascinating article as it raised so many potential issues and problems I could barely keep up.

I am not sure we can rely on the regulators to catch all potential problems, so it will be up to us all to remain ‘alert but not alarmed’!

I hope we can avoid too much in the way of unanticipated fraud!

David.

Monday, February 05, 2024

You Can Be Sure This Is Just The Beginning Of What Will Be Possible In Years To Come.

This appeared a few days ago:

Gene therapy hailed as ‘medical magic wand’ for hereditary swelling disorder

Single-dose treatment transformed lives of patients with potentially deadly condition in first human trial

Medical research

Ian Sample Science editor @iansample

Thu 1 Feb 2024 09.00 AEDT

A groundbreaking gene therapy has been hailed as a “medical magic wand” after the treatment transformed the lives of patients with a hereditary disorder that causes painful and potentially fatal swelling.

Patients who took part in the first human trial of the therapy experienced a dramatic improvement in their symptoms, and many were able to come off long-term medication and return to life as normal.

Dr Hilary Longhurst, the principal investigator at Te Toka Tumai, Auckland City hospital, said the single-dose therapy appeared to provide a permanent cure for her patients’ “very disabling symptoms”.

Hereditary angioedema, or HAE, is a rare disease that affects about one in 50,000 people. It is caused by a genetic mutation that leaves patients with leaky blood vessels. This produces erratic bouts of swelling that typically affect the lips, mouth, throat, bowels, hands and feet.

Attacks strike as often as twice a week and last from hours to days. People can end up bedridden if the swelling affects the bowel, and its disfiguring effect on the face can stop people leaving the home. The most serious flare-ups affect the throat and can lead to suffocation and death.

Cleveland, a 54-year-old from Suffolk who took part in the trial, has been free from attacks since having the therapy 18 months ago. “I’ve had a radical improvement in my physical and mental wellbeing,” he said. “The randomness, unpredictability and potential severity of the attacks has made trying to live my life almost impossible. I spent my life constantly wondering if my next attack would be severe.

“The swellings are painful and disfiguring. I was embarrassed to go out in case of an attack. I’ve been hospitalised with swellings on my neck and throat that have affected my ability to breathe.”

Judy Knox, a nurse in New Zealand who also took part in the trial, said the therapy was “like a medical magic wand”. Before her diagnosis, she suffered abdominal swelling with vomiting and severe pain that lasted for days. Dental work prompted dangerous swelling in her mouth, which threatened to suffocate her. “It’s changed my life,” she said.

Knox previously managed the condition with androgen medication, but supplies have not always been reliable. She is now off the medication and feels she has a “whole new life”.

HAE is caused by a mutation in the C1 inhibitor gene. When the gene stops working, people overproduce a protein called kallikrein. This drives the buildup of another protein, bradykinin, which is responsible for the leaky blood vessels and swelling.

Ten patients took part in the small phase-one trial in the UK, New Zealand and the Netherlands. Each received an infusion of “nanolipids” designed using Crispr, a Nobel prize-winning gene editing tool, to enter liver cells and knock out the kallikrein gene. The therapy stops the body overproducing bradykinin, with dramatic effects for the patients.

“It’s transforming patients’ lives,” said Dr Padmalal Gurugama, a consultant in clinical immunology and allergy at Cambridge University hospital. “My patient was having attacks every three weeks and that gentleman has not had any attacks in the past 18 months. He is not taking any medications. That is amazing.”

The results from the first patients are published in the New England Journal of Medicine, and larger trials are under way. Doctors have treated 25 more patients in a phase-two trial and hope to recruit for a final phase-three trial next year.

Despite the dramatic results, the therapy is not expected to be available soon. Beyond proving itself in remaining trials, such one-shot gene therapies are among the most expensive treatments in the world, and far from a shoo-in for the NHS.

More here:

https://www.theguardian.com/science/2024/jan/31/gene-therapy-hailed-as-medical-magic-wand-for-hereditary-swelling-disorder

This is really very good news! There are many diseases that are caused by a single genetic mistake that could be remedied if there was a clever way of disseminating a change throughout the body.

This has been achieved in this disease – and the approach may work for some other diseases – but actually repairing the genome in live patients will require a set of techniques we are still working on. A good first step here but I would not expect rapid progress with many other diseases all that soon!

We just have to wait – but with more hope than before!

David.

Sunday, February 04, 2024

Drones In War And Peace Look To Be Transformational In All Sorts Of Different Domains.

It seems only a few years ago that we were reading about what were essentially ‘toy’ drones doing deliveries of tablets and some limited observation tasks! Well, a decade has really wrought change!

The flavour of this article shows just how much movement there has been and how serious drone technology has become!

Eight-year wait for anti-drone solution as radical ideas take flight

EXCLUSIVE

By Ben Packham

Foreign Affairs and Defence Correspondent

Updated 11:40AM February 1, 2024, First published at 9:30PM January 31, 2024

Australian soldiers won’t be protected from killer drones for at least eight years on the army’s current timetable, as Defence takes its time to assess potential options including the use of trained eagles and hawks to attack unmanned aircraft in mid-flight.

An army briefing document obtained by The Australian reveals Defence will spend until the middle of 2028 deciding which counter-drone capabilities are mature enough to be purchased.

Options to be considered in the 4½-year “strategy and concepts” phase include jamming, lasers, ­microwave pulses, projectiles, birds of prey, and nets that can be fired into the air.

A further 2½ years will be spent on risk mitigation and contracting processes before acquisition is scheduled to commence in late 2030.

The approach outlined in the November 11, 2023, document flies in the face of last year’s Defence Strategic Review, which called for Defence to “abandon its pursuit of the perfect solution … and focus on delivering timely and relevant capability”.

According to the unclassified industry briefing document, the army has begun assessing electromagnetic and “kinetic” technologies such as machine guns and missiles to attack incoming drones. It will examine a range of other options this year including directed energy weapons, birds of prey, and “projected netting” rounds being developed by the Canadian military. It will also look at “passive defence” options including camouflage, decoys and “dummies”.

Other countries have used birds of prey to attack enemy drones. India revealed just over a year ago it had been training native black kites to prey on Pakistani drones along the countries’ disputed border since 2020. Dutch police also trialled a program in 2015 using eagles trained by a private company to protect airspace over airports and other sensitive areas.

The army document says its project will deliver protection against armed and unarmed drones operating individually and in swarms, and be used to defend troops, vehicles, logistics, airfields, vessels and ports.

The drawn-out process comes despite the success of a number of Australian companies in bringing counter-drone systems to market.

Canberra-based Electro Optic Systems’ Slinger – a radar-guided 30mm cannon – is already in service in Ukraine, along with Sydney-based company DroneShield’s electromagnetic drone jammers.

The US is also using a range of technologies including signal disrupters, “spoofers” that manipulate drones’ GPS systems, high-energy lasers and anti-drone guns.

Former Defence official Marcus Hellyer said the ADF was “stuck in a paralysis of indecision” because it feared acquiring something that didn’t meet 100 per cent of its requirements.

“They spend all this time scanning the market, spending little drops of money here and there, doing a little bit of research, which is simply a way of deferring having to make decisions about acquisition,” said the Strategic Analysis Australia research director.

“But of course, the threat is moving just as fast. So by the time you’ve done your strategising and your conceptualising, the world has moved on. The whole thing is kind of ludicrous. The bad guys have been using this stuff for a decade, and we’re still just admiring the problem.”

The army briefing tells potential industry partners that their systems must be low-cost, safe to use, resilient enough for the battlefield, and have a high level of technical readiness.

Despite Defence’s failure so far to select any Australian system on the market, the army briefing note advises firms to “respect the project”, and warns it is “not a BD (business development) contest”.

Dr Hellyer said Defence must “start putting money into industry to start making stuff and giving it to your soldiers”.

The Australian revealed this week that the army’s new $13bn armoured vehicle fleet will be vulnerable to killer drones, requiring major upgrades to protect Australian personnel from the growing battlefield threat.

None of the service’s $5bn Boxer combat reconnaissance vehicles, its new $3bn fleet of Abrams tanks, nor its $4.5bn infantry fighting vehicles have been ordered with specialised counter-drone systems.

But one senior army commander said this week: “I am working with industry to develop a counter-UAS capability that I can fit to any land vehicle that has a remote weapons station.”

More here:

https://www.theaustralian.com.au/nation/defence/eightyear-wait-for-antidrone-solution-as-radical-ideas-take-flight/news-story/827452b3e87a466d7fdc9a92734b2a50

You only have to look to the other side of the world and the Ukraine – Russia war to see just how central drone technology has become in war, and you can be sure that applications of a more peaceful nature will follow on behind.

Where these technologies lead no one can be sure, but the peaceful applications in health care and other domains are surely coming as well.

It is hard to fathom just how stupid and obtuse our defence planners are to not see the implications of what is going on globally and get organised swiftly, for all our sakes. That the Army is not full bottle and well equipped in this vital area just defies belief! What is it they say about generals always fighting the ‘last war’?

Sometimes you really have to wonder!

David.

AusHealthIT Poll Number 732 – Results – 04 February, 2024.

Here are the results of the poll.

Medicare Is Now 40 Years Old. Overall, Has It Been A Success?

Yes: 36 (96%)

No: 1 (2%)

I Have No Idea: 1 (2%)

Total No. Of Votes: 38

As clear an outcome as you can get. It seems most think that overall Medicare has been a success, with pretty strong views that it is a work in progress with a lot more to do!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

1 of 38 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, February 02, 2024

This Should Mean We In OZ See Some Digital Health Related AI In Action Soon.

This appeared last week:

24 January 2024

Epic’s generative AI goes live

Wendy John

The EHR maker has made good on its promise, but what does that mean for Aussie health data?

The NSW government’s future electronic health record now has the possibility of using generative AI, but whether or not it will be included is anyone’s guess. 

Epic announced last week that its promised generative AI is now fully embedded in its electronic health record. It is being used by more than 150 hospitals and health systems across the US but NSW Health cannot, at this stage, reveal if it will be a part of the state’s new EHR. 

Epic’s new feature is provided by Microsoft’s Nuance Dragon Ambient eXperience (DAX). DAX combines Nuance’s conversational and ambient AI – which engages with speech and sensor information – with the problem-solving power of the GPT4 AI model. 

The feature will enable clinicians to draft notes automatically from the exam room, or via telehealth, for quick clinical review and completion after each patient visit. 

Brent Lamm, CIO at UNC Health, said that DAX enhances the Epic EHR by making clinical documentation “intuitive and straightforward”.  

“Patients also say how much they value the conversational interactions they have with their providers and how they truly feel seen and heard during visits while DAX Copilot unobtrusively takes care of the clinical documentation,” he said at the announcement.

However, there is controversy over how far AI should assist with diagnosis. Using AI to assess imaging and offer diagnoses is becoming more widespread, and some clinical AI tools, both local and from abroad, are now assisting doctors with diagnosis during face-to-face consultations.

Epic came under fire last year after Mayo Clinic published research that spotlighted the poor results of Epic’s AI-based algorithm for predicting the onset of sepsis. Epic’s sepsis AI tool failed to identify two-thirds of the patients who went on to develop sepsis. 

Dr John Halamka, president of Mayo Clinic Platform, said those kind of findings “amplify the reservations many clinicians have about trusting AI-based clinical decision support tools”. 

eHealth NSW has not yet shared which Epic features are to be included in its contract with the global behemoth but advised in October last year that “work is now starting on the initial design and build of this next generation system”. 

An eHealth spokesperson said last year that the new EHR will enable clinicians to access a patient’s clinical records quickly, securely, and safely, regardless of their location.  

“The SDPR will also provide simplified clinical workflows in an intuitive, user-friendly system with streamlined technical support,” said an eHealth NSW spokesperson. 

The Epic rollout across NSW Health hospitals is expected to take six years with the Hunter New England Local Health District to be the first LHD to go live with the new platform in 2025-26. 

More here at this link:

https://www.medicalrepublic.com.au/epics-generative-ai-goes-live/104521

We can only hope it does not turn out to be an expensive add-on NSW Health baulks at!

Sounds like it will be interesting to watch progress.

David.