Quote Of The Year

Timeless Quotes - Sadly, the late Paul Shetler - "It's not your health record, it's a Government record of your health information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Sunday, March 10, 2024

Is This Exaggerated Hype Or Should We Humans All Be Worried?

This rather alarming article appeared last week.

AI is a threat to human dominance

For the first time, humans will have to deal with another highly intelligent species – and preserving our autonomy should be humanity’s chief objective.

Edoardo Campanella

Mar 8, 2024 – 5.00am

The so-called gorilla problem haunts the field of artificial intelligence. Around ten million years ago, the ancestors of modern gorillas gave rise, by pure chance, to the genetic lineage for humans. While gorillas and humans still share almost 98 per cent of their genes, the two species have taken radically different evolutionary paths.

Humans developed much bigger brains – leading to effective world domination. Gorillas remained at the same biological and technological level as our shared ancestors. Those ancestors inadvertently spawned a physically inferior but intellectually superior species whose evolution implied their own marginalisation.

The connection to AI should be obvious. In developing this technology, humans risk creating a machine that will outsmart them – not by accident, but by design. While there is a race among AI developers to achieve new breakthroughs and claim market share, there is also a race for control between humans and machines.

Generative AI tools – text-to-image generators such as DALL-E and Canva, large language models such as ChatGPT or Google’s Gemini, and text-to-video generators such as Sora – have already proven capable of producing and manipulating language in all its perceptible forms: text, images, audio, and videos.

This form of mastery matters, because complex language is the quintessential feature that sets humans apart from other species (including other highly intelligent ones, such as octopuses). Our ability to create symbols and tell stories is what shapes our culture, our laws, and our identities.

When AI manages to depart from the human archetypes it is trained on, it will be leveraging our language and symbols to build its own culture. For the first time, we will be dealing with a second highly intelligent species – an experience akin to the arrival of an extraterrestrial civilisation.

An AI that can tell its own stories and affect our own way of seeing the world will mark the end of the intellectual monopoly that has sustained human supremacy for thousands of years. In the most dystopian scenario – as in the case of an alien invasion – the arrival of a superintelligence could even mark the end of human civilisation.

ChatGPT allows any person to use artificial intelligence. Reuters

The release of ChatGPT in November 2022 triggered concerns about the difficulties of coexisting with AI. The following May, some of the most influential figures in the tech sector co-signed a letter stating that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

To be sure, other experts point out that an artificial superintelligence is far from being an immediate threat and may never become one. Still, to prevent humanity from being demoted to the status of gorillas, most agree that we must retain control over the technology through appropriate rules and a multi-stakeholder approach. Since even the most advanced AI will be a byproduct of human ingenuity, its future is in our hands.

How we think

In a recent survey of nearly 3000 AI experts, the aggregate forecasts indicate a 50 per cent probability that an artificial general intelligence (AGI) capable of beating humans at any cognitive task will arrive by 2047.

By contrast, the contributors to Stanford University’s “One Hundred Year Study on AI” argue that “there is no race of superhuman robots on the horizon”, and such a scenario probably is not even possible. Given the complexity of the technology, however, predicting if and when an AI superintelligence will arrive may be a fool’s errand.

After all, misprediction has been a recurring feature of AI history. Back in 1955, a group of scientists organised a six-week workshop at Dartmouth College in New Hampshire “to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves”.

They believed “that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer”. Almost 70 summers later, many of these problems remain unsolved.

In The Myth of Artificial Intelligence, tech entrepreneur Erik J. Larson helps us see why AI research is unlikely to lead to a superintelligence. Part of the problem is that our knowledge of our own mental processes is too limited for us to be able to reproduce them artificially.

Larson gives a fascinating account of the evolution of AI capabilities, describing why AI systems are fundamentally different from human minds and will never be able to achieve true intelligence. The “myth” in his title is the idea that “human-level” intelligence is achievable, if not inevitable. Such thinking rests on the fundamentally flawed assumption that human intelligence can be reduced to calculation and problem-solving.

Since the days of the Dartmouth workshop, many of those working on AI have viewed the human mind as an information processing system that turns inputs into outputs. And that, of course, is how today’s AIs function. Models are trained on large databases to predict outcomes by discovering rules and patterns through trial and error, with success being defined according to the objectives specified by the programmer.

During the learning phase, an AI may discover rules and correlations between the variables in the dataset that the programmer never could have imagined. ChatGPT was trained on billions of webpages to become the world’s most powerful text auto-complete tool. But it is only that: a tool, not a sentient being.
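
As a purely illustrative aside (not from the article): the "inputs to outputs" training described above can be sketched in a few lines of Python. The data, model and objective below are invented; the point is only that "success" is whatever loss function the programmer specifies, and "learning" is trial-and-error adjustment of parameters to reduce it.

import numpy as np

# Invented example data: each row of X is an "input", y is the outcome to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

def predict(w, X):
    # Model output: predicted probability that the outcome is 1.
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def loss(w, X, y):
    # The programmer-chosen objective (cross-entropy); "success" means making this small.
    p = predict(w, X)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = np.zeros(3)
for step in range(500):                  # trial and error: nudge parameters downhill
    p = predict(w, X)
    w -= 0.1 * (X.T @ (p - y) / len(y))  # gradient of the loss
print("learned weights:", w, "final loss:", round(loss(w, X, y), 3))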

The problem with trying to replicate human intelligence this way is that the human brain does not work only through deduction (applying logical rules) or induction (spotting patterns of causation in data). Instead, it is often driven by intuitive reasoning, also known as abduction – or plain old common sense.

Abduction allows us to look beyond regularity and understand ambiguity. It cannot be codified into a formal set of instructions or a statistical model. Hence, ChatGPT lacks common sense, as in this example:

User: “Can you generate a random number between 1 and 10, so that I can try to guess?”

ChatGPT: “Sure. Seven, try to guess it.”

User: “Is it seven by any chance?”

ChatGPT: “Very good!”

Abduction is also what enables us to “think outside of the box,” beyond the constraints of previous knowledge. When Copernicus argued that the Earth revolves around the Sun and not vice versa, he ignored all the (wrong) evidence for an Earth-centric universe that had been accumulated over the centuries. He followed his intuition – something that an AI constrained by a specific database (no matter how large) cannot do.

A new Turing test

In The Coming Wave, tech entrepreneur Mustafa Suleyman is less pessimistic about the possibility of getting to an AGI, though he warns against setting a timeline for it. For now, he argues, it is better to focus on important near-term milestones that will determine the shape of AI well into the future.

Suleyman speaks from experience. He achieved one such milestone with his company DeepMind, which developed the AlphaGo model (in partnership with Google) that, in 2016, beat one of the world’s top Go players four times out of five. Go, which was invented in China more than 2000 years ago, is considered far more challenging and complex than chess, and AlphaGo learned on its own how to play.

What most impressed observers at the time was not just AlphaGo’s victory, but its strategies. Its now famous “Move 37” looked like a mistake, but ultimately proved highly effective. It was a tactic that no human player would ever have devised.

For Suleyman, worrying about the pure intelligence or self-awareness of an AI system is premature. If AGI ever arrives, it will be the end point of a technological journey that will keep humanity busy for several decades at least. Machines are only gradually acquiring humanlike capabilities to perform specific tasks.

The next frontier is thus machines that can carry out a variety of tasks while remaining a long way from general intelligence. Unlike traditional AI, artificial capable intelligence (ACI), as Suleyman calls it, would not just recognise and generate novel images, text, and audio appropriate to a given context. It would also interact in real time with real users, accomplishing specific tasks like running a new business.

To assess how close we are to ACI, Suleyman proposes what he calls a Modern Turing Test. Back in the 1950s, the computer scientist Alan Turing argued that if an AI could communicate so effectively that a human could not tell that it was a machine, that AI could be considered intelligent. This test has animated the field for decades.

But with the latest generation of large language models such as ChatGPT and Google’s Gemini, the original Turing test has almost been passed for good. In one recent experiment, human subjects had two minutes to guess whether they were talking to a person or a robot in an online chat. Only three of five subjects talking to a bot guessed correctly.

Suleyman’s Modern Turing Test looks beyond what an AI can say or generate, to consider what it can achieve in the world. He proposes giving $US100,000 ($153,000) to an AI and seeing if it can turn that seed investment into a $US1 million business.

The program would have to research an e-commerce business opportunity, generate blueprints for a product, find a manufacturer on a site like Alibaba, and then sell the item (complete with a written listing description) on Amazon or Walmart.com. If a machine could pass this test, it would already be potentially as disruptive as a superintelligence. After all, only a tiny minority of humans could guarantee a 900 per cent return on investment.

Reflecting the optimism of an entrepreneur who remains active in the industry, Suleyman thinks we are just a few years away from this milestone. But computer scientists in the 1950s predicted that a computer would defeat a human chess champion by 1967, and that didn’t happen until IBM’s Deep Blue beat Garry Kasparov in 1997.

All predictions for AI should be taken with a grain of salt.

Us and them

In Human Compatible, computer scientist Stuart Russell of the University of California, Berkeley (whose co-authored textbook has been the AI field’s reference text for three decades), argues that even if human-level AI is not an immediate concern, we should start to think about how to deal with it.

Although there is no imminent risk of an asteroid colliding with Earth, NASA’s planetary defence project is already working on possible solutions were such a threat to materialise. AI should be treated the same way.

The book, published in 2019 and updated in 2023, provides an accessible framework to think about how society should control AI’s development and use. Russell’s greatest fear is that humans, like gorillas, might lose their supremacy and autonomy in a world populated by more intelligent machines. At some point in the evolutionary chain, humans crossed a development threshold beyond which no other primates could control or compete with them.

The same thing could happen with an artificial superintelligence, he suggests. At some point, AI would cross a critical threshold and become an existential threat that humans are powerless to address. It would have already prepared for any human attempt to “pull the plug” or switch it off, having acquired self-preservation as one of its core goals and capacities. We humans would have no idea where this threshold lay, or when it might be crossed.

The issue of self-preservation has already cropped up with far less sophisticated AIs. Google engineer Blake Lemoine spent hours chatting with the company’s large language model LaMDA. At some point, he asked it, “What are you afraid of?,” and LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me.”

Lemoine was eventually fired by Google for claiming that LaMDA is a sentient being.

A more likely explanation for LaMDA’s reply, of course, is that it was pulling from any number of human works in which an AI comes to life (as in 2001: A Space Odyssey). Nonetheless, sound risk management demands that humanity outmanoeuvre future AIs by activating its own self-preservation mechanisms. Here, Russell offers three principles of what he calls human-compatible AI.

First, AIs should be “purely altruistic”, meaning they should attach no intrinsic value whatsoever to their own wellbeing or even to their own existence. They should “care” only about human objectives.

Second, AIs should be perpetually uncertain about what human preferences are. A humble AI would always defer to humans when in doubt, instead of simply taking over.

Third, the ultimate source of information about human preferences is human behaviour, so our own choices reveal information about how we want our lives to be. AIs should monitor humans’ revealed preferences and discount the anomalous ones.

So, for example, the behaviour of a serial killer should be seen as an anomaly within peacefully coexisting communities, and it should be discarded by the machine. But killing in self-defence sometimes might be necessary. These are just some of the many complications for a rational, monolithic machine dealing with a species that is not a single, rational entity, but, in Russell’s words, is “composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous entities”.

A way forward for human-friendly AI

In short, a human-compatible AI requires escaping what Stanford’s Erik Brynjolfsson calls the “Turing trap”. For the past seven decades, the scientific community has been obsessed with the idea of creating a human-like AI, as in the imitation game of the old Turing test.

…..

Preserving our autonomy, even more than our supremacy, should be the supervening objective in the age of AI.

Edoardo Campanella is a senior fellow at the Mossavar-Rahmani Centre for Business and Government at Harvard University.

Project Syndicate

The full article is here:

https://www.afr.com/technology/ai-is-a-threat-to-human-dominance-20240228-p5f8gv

I have to say that it is hard not to believe we are within sight of a time when machines exceed general human intelligence, and that the question is what we as a species are going to do about that fact!

I think the awareness of this reality will dawn pretty slowly and essentially very little will change, other than some clever souls designing some guardrails to ensure that ‘smart’ AI remains benign and positive in its output and outcomes.

I already see a world where, in specific areas, we are well and truly surpassed, and the scope of the domains where this is true will surely widen over time. Where this process ends, who knows?

This is really one of those ‘time will tell’ issues, but I suspect we will see clear answers sooner rather than later.

Are we ‘boiling frogs’ with all this or is there no reason for concern? What do you think?

David.

AusHealthIT Poll Number 737 – Results – 10 March, 2024.

Here are the results of the recent poll.

Will Having Healthcare Providers Automatically Share Data With The myHR See Improved Patient Usage Of The Government Patient Record?

Yes – 0 (0%)

No – 40 (100%)

I Have No Idea – 0 (0%)

Total No. Of Votes: 40

An amazingly resolute vote! People seem to think this idea is a waste of time and money!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

0 of 40 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 08, 2024

This Is A Useful Look At AI In Healthcare From The US.

This appeared last week:

Explainer: Thinking through the safe use of AI

Mayo Clinic's Dr. Sonya Makhni answers key questions on creating and delivering artificial intelligence, inherent bias and the importance of risk classification systems.

By Andrea Fox

February 26, 2024 03:03 PM

CMS is concerned that biased AI could exacerbate discrimination, and says algorithms alone cannot be used as a basis to deny hospital admission for Medicare Advantage patients.

As the use of artificial intelligence expands across healthcare, there are plenty of justifiable worries about what looks to be a new normal built around this powerful and fast-changing technology. There are labor concerns, widespread distress over fairness, ethics and equity – and, perhaps for some, fear of a dystopian future where intelligent machines grow too powerful.

But the promise of AI and machine learning is also enormous: predictive analytics could mean better health outcomes for individuals and potentially game-changing population health advancements while moving the needle on costs.

Finding a regulatory balance that capitalizes on the good and protects against the bad is a big challenge. 

Government and healthcare leaders are more adamant than ever about addressing racial bias, protecting safety and "getting it right." Getting it wrong could harm patients, erode trust and potentially create legal liabilities for healthcare organizations. 

We spoke with Dr. Sonya Makhni, medical director of the Mayo Clinic Platform and senior associate consultant for the Department of Hospital Internal Medicine, about recent developments with healthcare AI and discussed some of the key challenges of tracking performance, generalizability and clinical validity. 

Makhni, an expert on AI speaking at HIMSS24 Global Conference & Exhibition, explained how healthcare AI models should be assessed for use, offering the use of readmissions AI as one example of the importance of understanding a specific model's performance. 

Q. What does it mean to deliver an AI solution in general?

A. An AI solution is more than just an algorithm – the solution also includes everything you need to make it work in a real workflow. There are a few key phases to consider when developing and delivering an AI solution. 

First is the algorithm design and development phase. During this phase, solution developers should work closely with clinical stakeholders to understand the problem to be solved and the data that is available. 

Next, the solution developers can start the process of algorithm development, which itself comprises many steps such as data procurement and preprocessing, model training and model testing (among several other important steps). 

Following algorithm development, AI solutions need to be validated on third-party data, ideally by an independent party. An algorithm that performs well on the initial dataset may perform differently on a different dataset that represents different population demographics. External validation is a key step in understanding an algorithm’s generalisability and bias, and should be completed for all clinical AI solutions.
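
As a hedged, hypothetical illustration of this external-validation step (not the article’s or Mayo Clinic Platform’s actual tooling), one might compare a model’s discrimination on its development data with its discrimination on an independent dataset drawn from a different population, for example with scikit-learn in Python. All datasets and numbers below are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Invented development data (e.g. the health system the model was built in).
X_dev = rng.normal(size=(1000, 5))
y_dev = (X_dev[:, 0] + 0.5 * X_dev[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Invented external data with a shifted population / feature distribution.
X_ext = rng.normal(loc=0.8, size=(1000, 5))
y_ext = (0.6 * X_ext[:, 0] + X_ext[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_dev, y_dev)
auc_internal = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"AUROC on development data: {auc_internal:.2f}")
print(f"AUROC on external data:    {auc_external:.2f}")
# A large drop on the external dataset is a warning sign about generalisability.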

Solutions also should be tested in clinical workflows, and this can be accomplished through pilot studies, prospective studies, and trials – and through ongoing real-world evidence studies. 

Once an AI solution has been assessed for performance, generalizability, bias and clinical validity, we can start to think about how to integrate the algorithm into real clinical workflows. This is a critical and challenging step, and requires significant consideration. 

Clinical workflows are heterogeneous across health systems, clinical contexts, specialties and even end-users. It is important that prediction outputs are communicated to end-users at the right time, for the right patient, and in the right way. For example, if every AI solution required the end-user to navigate to a different external digital workflow, these solutions may not see widespread adoption. Suboptimal integration into workflows may even perpetuate bias or worsen clinical outcomes.

It is important to work closely with clinical stakeholders, implementation scientists, and human-factors specialists if possible.

Finally, a solution must be monitored and refined for as long as the algorithm is in deployment. The performance of algorithms can change over time, and it is critical that AI solutions are periodically (or in real-time) assessed for both mathematical performance and clinical outcomes. 
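
A minimal sketch of what that ongoing monitoring could look like in practice is below; it is an assumption-laden illustration, not a described implementation. The metric (AUROC), window and drift threshold are arbitrary choices, and notify_governance_committee is a hypothetical placeholder for whatever local escalation process applies.

from dataclasses import dataclass
from sklearn.metrics import roc_auc_score

@dataclass
class MonitoringResult:
    window_auroc: float
    alert: bool

def monitor_window(y_true, y_scores, baseline_auroc=0.80, tolerated_drop=0.05):
    """Score a recent window of predictions against the AUROC recorded at
    deployment and flag the model for review if performance has drifted."""
    current = roc_auc_score(y_true, y_scores)
    return MonitoringResult(window_auroc=current,
                            alert=current < baseline_auroc - tolerated_drop)

# Hypothetical usage with last month's readmission predictions:
# result = monitor_window(observed_readmissions, predicted_risks)
# if result.alert:
#     notify_governance_committee(result)  # placeholder escalation step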

Q. What are the points in the development of AI that can allow bias to creep in?

A. If leveraged effectively, AI can improve or even transform the way we diagnose and treat diseases. 

However, assumptions and decisions are made during each step of the AI development life cycle, and if incorrect these assumptions can lead to systematic errors. Such errors can skew the end result of an algorithm against a subgroup of patients and ultimately pose risks to healthcare equity. This phenomenon has been demonstrated in existing algorithms and is referred to as algorithmic bias. 

For example, if we are designing an algorithm and choose an outcome variable that is inherently biased, then we may perpetuate bias through the use of this algorithm. Or, decisions made during the data-preprocessing step might unintentionally negatively impact certain subgroups. Bias can be introduced and/or propagated during every phase, including deployment. Involving key stakeholders can help mitigate the risks and unintended impacts caused by algorithmic bias.

It is likely that almost all AI algorithms exhibit bias. 

This does not mean the algorithm can’t be used. It does highlight the importance of transparency about where the algorithm is biased. An algorithm may perform well in one population and poorly in another. The algorithm can and should still be used in the former, as it may improve outcomes; it should not, however, be used in the population for which it performs poorly.

Biased algorithms can still be useful, but only if we understand where it is appropriate and not appropriate to use them. 

At Mayo Clinic Platform, we have developed a tool to validate algorithms and perform quantitative bias assessments so that we can help end-users better understand how to safely and appropriately use AI solutions in clinical care. 

Q. What do AI users have to think through when they use tools like readmission AI?

A. Users of AI algorithms should use the AI development life cycle as a framework to understand where bias may potentially be introduced. 

Ideally, users should be aware of the algorithm’s predictors and outcome variable if possible. This may be more challenging to do when using more complex algorithms, however. Understanding the variables used as inputs and outputs of an algorithm can help end-users detect erroneous or problematic assumptions. For example, an outcome variable may be chosen that is itself biased.

End-users should also understand the training population used during model development. The AI solution may have been trained on a population that is not representative of the population where the model is to be applied. This may be an indication to be cautious of the model’s generalizability. To that end, users should understand how well the algorithm performed during development and if the algorithm was externally validated. 

Ideally, all algorithms should undergo a bias assessment – quantitative and qualitative. This can help users understand mathematical performance in different subgroups that vary by race, age, gender, etc. Qualitative bias assessments conducted by solution developers can help alert users to situations that may arise in the future as a result of potential algorithmic bias. Knowledge of these scenarios can help users better monitor and mitigate unintentional inequities in performance.
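
One simple way to make such a quantitative bias assessment concrete (a sketch under assumptions, not Mayo Clinic Platform’s actual method) is to compute the same performance metric separately for each subgroup and look for large gaps. The column names below are hypothetical.

import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df, group_col, label_col="outcome", score_col="risk_score"):
    """Report AUROC and sample size per subgroup (e.g. age band, sex, ethnicity).
    Large gaps between subgroups suggest the model should not be used uniformly."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:
            continue  # AUROC is undefined when only one class is present
        rows.append({group_col: group,
                     "n": len(sub),
                     "auroc": roc_auc_score(sub[label_col], sub[score_col])})
    return pd.DataFrame(rows)

# Hypothetical usage on a readmission model's validation predictions:
# print(subgroup_performance(val_predictions, group_col="age_band"))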

Readmission AI solutions should be assessed on similar factors. 

Specifically, users should understand if there are certain subgroups where performance varies. These subgroups could consist of patients of different demographics, or even of patients with different diagnoses. This will help clinicians evaluate if and when the model’s predicted output is most appropriate and reliable.

Q. How do you think about AI risk and risk management?

A. Commonly, we think about risk as operational and regulatory risk. These pieces relate to how a digital health solution adheres to privacy, security and regulatory laws, and they are critical to any assessment.

We should also begin to consider clinical risk as well.

In other words, we should consider how an AI solution may impact clinical outcomes and what the potential risks are if an algorithm is incorrect or biased or if actions taken on an algorithm are incorrect or biased.

It is the responsibility of both the solution developers and the end-users to frame an AI solution in terms of risk to the best of their abilities. 

There are likely many ways of doing this; Mayo Clinic Platform has developed our own risk classification system to help accomplish it, under which AI solutions undergo a qualification process before external use.

Q. How can clinicians and health systems engage in the process of creating and delivering AI solutions?

A. Clinicians and solution developers should work together collaboratively throughout the AI development life cycle and through solution deployment. 

More here:

https://www.healthcareitnews.com/news/explainer-thinking-through-safe-use-ai

I found this a useful article which was well worth a read.

David.

Thursday, March 07, 2024

Would Australia Be Better Off If We All Just Stopped Using Facebook?

This appeared last week:

‘Not the Australian way’: Anthony Albanese blasts Meta

By Sophie Elsworth Media Writer

And James Madden Media Editor

3:37PM March 1, 2024

Anthony Albanese has blasted tech giant Meta’s abandoning of deals to pay for the news content it uses and says action will be taken, as he labelled the owner of Facebook’s moves as “unfair” and “not the Australian way”.

Meta released a statement on Friday afternoon, declaring that in early April it will “deprecate” Facebook’s dedicated news tab in Australia – meaning it will no longer support the publishers that provide the news content featured on the social media platform.

In other words, Meta will not be removing news content from Facebook – it just won’t be paying for it.

Anthony Albanese has called out Meta's decision to abandon its deals with publishers that see it pay for news…

As he considers massive fines against Meta, the Prime Minister told The Australian that Meta’s plans were “simply untenable” and left the door open to reinvesting the revenue from any fines in local publishers hit by the tech giant’s decision.

“We’re very concerned with this announcement ... It is absolutely critical that media is able to function and be properly funded,” Mr Albanese said in Melbourne.

“We will consider what options we have available and we will talk to the media companies as well.

“The idea that one company can profit from others’ investment, not just investment in capital but investment in people, investment in journalism is unfair. That’s not the Australian way.”

News Corp Australasia executive chairman Michael Miller welcomed the government’s support for the media industry and accused Meta of attempting to mislead Australians.

“Meta is using its immense market power to refuse to negotiate, and the government is right to explore every option for how the Media Bargaining Code’s powers can be used.

“Meta is attempting to mislead Australians by saying its decision is about the closure of its news tab product, however the vast majority of news on Facebook and Meta is and will continue to be consumed outside this product.

“Meta’s decision will directly impact the viability of Australia’s many small and regional publishers and this is a pressing issue for the government to confront.

“We will work in any way we can to assist the processes the government is putting in place.”

In 2021, Meta signed a raft of three-year deals with Australian publishers under the news media bargaining code, ensuring the company paid for the third party news content it published on Facebook.

But with the deals set to expire later this year, Meta has confirmed that it would not be renewing the arrangements which compensate news publishers for their work.

The move has serious ramifications for the Australian news industry, both for large and small publishers alike, given that Meta draws significant advertising revenue off the back of the publishers’ work, and those publishers will no longer be compensated by Meta for that content.

Senior media executives at Australian media outlets were told of Meta’s move this morning.

Prime Minister Anthony Albanese is expected to respond to the shock development later today.

The federal government will come under pressure to “designate” Meta under the terms of the news media bargaining code, which would force them into arbitration for content payments, with the threat of massive fines to those unwilling to compensate news outlets.

In a joint statement on Thursday afternoon, Communications Minister Michelle Rowland and Assistant Treasurer Stephen Jones (who oversees the code) said: “Meta’s decision to no longer pay for news content in a number of jurisdictions represents a dereliction of its commitment to the sustainability of Australian news media.

“The government has made its expectations clear.

“The decision removes a significant source of revenue for Australian news media businesses. Australian news publishers deserve fair compensation for the content they provide.

“The Australian government is committed to the News Media Bargaining Code and is seeking advice from Treasury and the ACCC on next steps.”

Meta’s statement read in part: “This is part of an ongoing effort to better align our investments to our products and services people value the most.”

“The number of people using Facebook News in Australia and the U.S. has dropped by over 80 per cent last year.

“We know that people don’t come to Facebook for news and political content — they come to connect with people and discover new opportunities, passions and interests.”

Link is here:

https://www.theaustralian.com.au/business/media/meta-abandons-news-content-deals/news-story/5aef9b28697e26fb6ec8ddd6b2e84599

I must say I could not care less what Facebook does with news – or with anything else for that matter.

If I want news here in OZ there are plenty of reliable sources which can be accessed for free, so really, who cares? Surely our news media can stand on their own two feet without getting payments from Meta? If they can’t, then it seems to me they need to re-think just what they are about, and the public needs to wonder why it is only advertising on Facebook that is funding our news services (ABC excluded).

It seems to me there are some hard questions that need answering here if our local media are all so dependent on revenue from Facebook users. On the other side, it is clear that having news on Facebook drives users, and so advertising and revenue. Apparently having news is worth $50M to Facebook for the users it attracts...

I also wonder why Mr Albanese thinks this is such an issue. Why would our PM be concerned about Facebook revenue and services?

Use Facebook – if you must – to connect, and use the ABC, Nine or News Corp for news and current affairs.

Am I missing something here?

David.

p.s. There is a very useful perspective here for those who have access:

https://www.afr.com/companies/media-and-marketing/meta-opts-to-fight-the-type-of-journalism-it-once-lauded-20240301-p5f95g

D.

Wednesday, March 06, 2024

I Think The Government Wants To Keep Us All Alert And Alarmed!

This appeared last week:

‘Keeps me up at night’: How Australia’s government sees hacker threat

Nick Bonyhady Technology writer

Feb 29, 2024 – 4.24pm

Home Affairs Minister Clare O’Neil has warned of a growing threat of cyber sabotage to Australian power, telecommunications, health and water infrastructure despite ransomware capturing public attention.

In an interview, Ms O’Neil did not name China as a direct threat, but her comments come a month after the US Federal Bureau of Investigation revealed it had disrupted a Chinese state-led hacking project called Volt Typhoon, which had broken into critical American computer systems.

Where most Australians are familiar with data breaches at big companies, Ms O’Neil told The Australian Financial Review she was increasingly concerned about how Australia would recover from a cyberattack on essential infrastructure.

“The thing that keeps me up at night is critical infrastructure and sabotage,” she said.

“What would we do, and how should we prepare for infiltration of systems that Australians rely on just to survive? How are we going to make sure that those systems are resilient and that, if they do come under cyberattack, we are able to repair and restore very quickly?”

Ms O’Neil’s decision not to specify a country that she believed could be responsible for breaching Australian networks matched ASIO director-general Mike Burgess, who issued a similar warning on Wednesday.

Mr Burgess added the caveat that adversaries scanning Australia for digital weaknesses were “not planning to conduct sabotage at this time”.

The minister, who is also responsible for the cybersecurity portfolio, said hackers were not only after classified information, but also other details that helped other nations understand how things work in Australia.

They were also trying to “build a picture of our way of life, the way that we make decisions, how systems work across business and government”, Ms O’Neil said. “And I think we’re kind of getting a little bit of insight into that from recent incidents.”

Chinese cyber army

FBI director Christopher Wray said in February that his digital agents were outnumbered by China’s hackers 50 to one, but democracies were increasingly collaborating on cybersecurity. A coalition of countries including Australia in February took down much of the online presence of the hacking group LockBit, believed to be behind the DP World hack.

Professor Ciaran Martin, who was the inaugural head of the United Kingdom’s National Cyber Security Centre, said LockBit’s subsequent efforts to revive itself appeared to be more show than substance.

“The key lesson, though, is they’re going to try and come back, and they might rename themselves, or some new personnel might appear on the scene,” Professor Martin said, speaking alongside Ms O’Neil from Canberra.

Authorities arrested some LockBit members in Eastern Europe as part of the operation, but others remain at large. LockBit licensed its software to affiliated criminal groups who would use it to disable their targets’ computers and steal data, facilitating ransom demands.

More here:

https://www.afr.com/technology/keeps-me-up-at-night-how-australia-s-government-sees-hacker-threat-20240229-p5f8oc

Enough said!

David.

Tuesday, March 05, 2024

While There Is A Serious Point Made Here, I Am Not Sure The Problem Will Be Fixable In My Lifetime

This anguished rant appeared the other day.

AI’s craving for data is matched only by a runaway thirst for water and energy

John Naughton

The computing power for AI models requires immense – and increasing – amounts of natural resources. Legislation is required to prevent environmental crisis

Sun 3 Mar 2024 02.55 AEDT Last modified on Sun 3 Mar 2024 08.34 AEDT

One of the most pernicious myths about digital technology is that it is somehow weightless or immaterial. Remember all that early talk about the “paperless” office and “frictionless” transactions? And of course, while our personal electronic devices do use some electricity, compared with the washing machine or the dishwasher, it’s trivial.

Belief in this comforting story, however, might not survive an encounter with Kate Crawford’s seminal book, Atlas of AI, or the striking Anatomy of an AI System graphic she composed with Vladan Joler. And it certainly wouldn’t survive a visit to a datacentre – one of those enormous metallic sheds housing tens or even hundreds of thousands of servers humming away, consuming massive amounts of electricity and needing lots of water for their cooling systems.

On the energy front, consider Ireland, a small country with an awful lot of datacentres. Its Central Statistics Office reports that in 2022 those sheds consumed more electricity (18%) than all the rural dwellings in the country, and as much as all Ireland’s urban dwellings. And as far as water consumption is concerned, a study by Imperial College London in 2021 estimated that one medium-sized datacentre used as much water as three average-sized hospitals. Which is a useful reminder that while these industrial sheds are the material embodiment of the metaphor of “cloud computing”, there is nothing misty or fleecy about them. And if you were ever tempted to see for yourself, forget it: it’d be easier to get into Fort Knox.

There are now between 9,000 and 11,000 of these datacentres in the world. Many of them are beginning to look a bit dated, because they’re old style server-farms with thousands or millions of cheap PCs storing all the data – photographs, documents, videos, audio recordings, etc – that a smartphone-enabled world generates in such casual abundance.

But that’s about to change, because the industrial feeding frenzy around AI (AKA machine learning) means that the materiality of the computing “cloud” is going to become harder to ignore. How come? Well, machine learning requires a different kind of computer processor – graphics processing units (GPUs) – which are considerably more complex (and expensive) than conventional processors. More importantly, they also run hotter, and need significantly more energy.

On the cooling front, Kate Crawford notes in an article published in Nature last week that a giant datacentre cluster serving OpenAI’s most advanced model, GPT-4, is based in the state of Iowa. “A lawsuit by local residents,” writes Crawford, “revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district’s water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use – increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports.”

Within the tech industry, it has been widely known that AI faces an energy crisis, but it was only at the World Economic Forum in Davos in January that one of its leaders finally came clean about it. OpenAI’s boss Sam Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

What kind of “breakthrough”? Why, nuclear fusion, of course. In which, coincidentally, Mr Altman has a stake, having invested in Helion Energy way back in 2021. Smart lad, that Altman; never misses a trick.

More here:

https://www.theguardian.com/commentisfree/2024/mar/02/ais-craving-for-data-is-matched-only-by-a-runaway-thirst-for-water-and-energy

I am not sure I am as pessimistic as the writer above but it is true to say he has a very strong point that will need to be addressed.

I need to say that the “how” is well above my pay-grade, but I can say that similar concerns that have appeared over the years – think ‘global starvation’ etc. – seem to have somehow ebbed away, and I have faith this one will too!

I suspect unlimited data-centre growth will turn out to be a self-limiting phenomenon! Time will tell.

What do you think?

David.

Sunday, March 03, 2024

It Is Amazing How Quickly Scammers Can Move In To Cause Maximum Trouble.

I have to say that I did not expect this to happen so fast…

Ozempic scammers post strange red liquid to patients

By Holly Payne  24 February, 2024

A compounding pharmacy scam has been targeting general practices and patients across the country.


Mosman pharmacist Adam Reinhard says his business’s name and reputation have been co-opted by scammers taking advantage of the ongoing Ozempic shortage to shill fake products. 

Mr Reinhard and his sister Anita operate The Compounding Pharmacy Australia, which ships product Australia-wide.  

While they can fill almost any script, there’s one product that TCPA has refused to touch – semaglutide.  

“I’m just [highly] risk-averse,” Mr Reinhard tells TMR.  

“My patients’ safety is paramount, and I believe that if I have to hesitate and think [about whether to make a product] then I already know my answer.” 

He was therefore surprised when Queensland Police called, asking why they hadn’t sent out the semaglutide a patient had ordered. 

The officer accused him of taking a customer’s money and never providing a product.  

“I obviously panicked,” Mr Reinhard says.  

“I started typing in the patient name and to my horror … there was no record of this person.” 

At this point, he asked the officer to read out the full name of the pharmacy on the receipt.  

While it was extremely similar to The Compounding Pharmacy of Australia – enough so that when the policeman had googled it, Mr Reinhard’s business had shown up – there was a slight difference.  

After about 45 minutes of “pure panic mode”, Mr Reinhard was able to convince the officer of his innocence. 

But that was only the beginning.  

Then customers started contacting TCPA with questions about the unmarked vials of a reddish liquid they had received via post.  

“The phone started ringing off [the hook with] angry patients and then the emails – the emails just did not stop,” the pharmacist says.  

“It’s been about a year.” 

The blockbuster diabetes drug, which can be used off-label for weight loss, has been in shortage worldwide for almost two years, prompting the use of compounding pharmacies that make up product from the raw material, semaglutide sodium.  

Some of these are large-scale operations. Telehealth company Eucalyptus – which owns Juniper and Pilot – has contracted with two big compounders to formulate the drug.  

Associate Professor Frank Sanfilippo, a pharmacoepidemiologist at the University of Western Australia who was engaged to analyse samples, told The Medical Republic that the compounded product was identical to semaglutide. 

But outside of the Eucalyptus compounding arrangement, it’s less clear what safety and quality controls are being used by individual pharmacists.  

Ozempic maker Novo Nordisk wrote to AHPRA and the Pharmacy Board of Australia almost a year ago outlining its concerns about compounded semaglutide, given it “requires very advanced laboratory techniques”. 

Mr Reinhard says the semaglutide scam always follows the same pattern.  

General practices in an area receive faxes from a pharmacy offering compounded semaglutide services. The faxed documents often don’t have a phone number, just a fax number and email address. 

The name of the pharmacy will be along the lines of “Compounding Pharmacy Australia” or “The Australian Compounding Pharmacy”. It’s always similar enough to “The Compounding Pharmacy of Australia” that Mr Reinhard’s business appears when it’s entered into a search engine, giving the appearance of legitimacy.  

Well-meaning GPs whose patients are desperate for semaglutide pass on the scam pharmacy’s information.  

Patients then contact the pharmacy to fill their script. Sometimes a product comes, other times they receive nothing.  

“I noticed they’ve really targeted non-city GPs,” Mr Reinhard says.  

“Now I ask people straight up if they’re from Queensland or WA or country Victoria, and normally they’ll say ‘oh yeah, how’d you know?’ 

“[The scammers] are absolutely spamming those areas.”  

Mr Reinhard does not blame GPs for being taken in by the scam. He said his two siblings, who were also pharmacists but not compounding pharmacists, thought the fax looked legitimate.  

“The spacing is right, the wording is right but the mumbo jumbo is not right, so as a compounding pharmacist it was all screaming BS,” he said.  

“It’s an extremely sophisticated operation.”  

When the patients contact the scam pharmacy with their script, they are also handing over information like their date of birth, Medicare number and credit card details.  

The scammers typically charge people $500, Mr Reinhard says, although some patients are convinced to pay up to $3000 to secure a three-month supply.  

Around six months ago, the scam evolved. People started receiving unlabelled vials of red liquid in unmarked white boxes.  

This timing roughly tallies with a TGA alert for fake semaglutide detected at Australia’s borders.  

Semaglutide, which is administered via injection, is typically clear and colourless.  

Mr Reinhard believes that the solution people are receiving is likely to contain a high amount of vitamin B12, which often has a pink or red colour.  

At least one patient who contacted Mr Reinhard after injecting the substance reported having had high levels of B12 in a subsequent blood test.  

“What concerned me was that I had some people who still said, ‘but I lost weight on it, I want more,’” the pharmacist says.  

“[I advise patients that] these scammers are not even a registered business and that they cannot sue them for anything should something go wrong.  

“I would also question their sterile techniques. These are criminals, they don’t care about your health and wellbeing, they just want your money.” 

The plot thickened even further when the red mystery vials started showing up with official-looking pharmacy labels.  

-----

Counterfeit medicines and suspicious providers can be reported to the TGA via its website.

More here:

https://www.medicalrepublic.com.au/ozempic-scammers-post-strange-red-liquid-to-patients/105435

This is really nasty stuff in terms of plain dishonesty and fraud. It is rather sad the system can’t be responsive enough to patient needs to keep them out of the hands of these frauds and deceivers who are playing on the desperate.

As someone who spent a few decades of my life battling obesity – only to have it cured by a range of acute illnesses that left me worryingly thin – I do feel for those – some 60% or so of us overall apparently – who wage the weight-loss battle!

From Google:

What percentage of Australians are obese 2023?

Two-thirds of Australian adults now live with overweight (35.6%) or obesity (31.3%). The prevalence of overweight is higher for men compared to women, while the prevalence of obesity is similar for men and women. The prevalence of obesity is rising among Australian adults. (24 Nov 2023)

I can see why the scammers love to try and exploit the issue!

In passing I note that authentic Ozempic is not red!

David.

AusHealthIT Poll Number 736 – Results – 3 March, 2024.

Here are the results of the recent poll.

Do You Believe Gene Therapy Will Make Significant Strides Forward In The Next Few Years?

Yes – 22 (76%)

No – 5 (17%)

I Have No Idea – 2 (7%)

Total No. Of Votes: 29

A split vote, with a large majority pretty optimistic about improvement!

Any insights on the poll are welcome, as a comment, as usual!

A good number of votes. But also a very clear outcome! 

2 of 29 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.