Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Tuesday, March 12, 2024

A Useful Summary Of The Players In The OZ HealthTech Space.

 This really useful summary article appeared last week:

06 March 2024

Our top 30 most valuable healthtech companies

By Jeremy Knibbs

Investing in the sector feels like a mug’s game post-ZIRP and covid but some groups are defying negative sentiment.

A few ground rules to keep in mind should you decide to have a go at reading this.

I did a version of this a few years back – Our 11 most valuable digital health companies – which taught me a bit, largely through all the things I got wrong (which was quite a bit, especially on the private groups).  

I’m likely to get as much wrong this time in relative terms despite what I’ve learned so keep that in mind (another way of saying that if you took any of this as advice you might need your head read after the fact).

Some ground rules:

  • Private company valuations are based on information and intel that runs from pretty good to pretty bad.
  • I’ve extracted a few interesting companies out of larger holding companies to feature because they are important for market context when trying to understand what is creating value – for example, InstantScripts, which is now owned by Wesfarmers, or Genie, which is owned by Citadel.
  • Some companies in the list are both tech companies and direct healthcare providers with employed or contracted doctors, but the ones on this list, usually telehealth companies, are substantively providing care via platforms.
  • I’m a non-executive director of MediRecords so I don’t include this company in the list, but I discuss what is publicly known about the group because it’s relevant to understanding value.
  • A lot of these companies have at one time or another paid the company that owns our sister publication Health Services Daily money in the form of sponsorship, advertising and delegate and subscription revenue. I list them at the end for transparency.
  • I know I’ve missed some companies, so if you are one and you are annoyed at not being in here, let me know.

Here’s the basic league table to get things started:

Who survived from the top 11?

Readers might find who remains from the original 11 list interesting as some sort of starting point.

A lot of them made the new list, but only because we extended it out to 30 from 11. Others didn’t make it to this year’s list at all – you can work out which ones.

Only Telstra Health, Best Practice and Genie Solutions actually stayed in the top 11.

Each of these companies is platform-based, has a large share of healthcare providers, good long-term market brands (eg, Medical Director inside Telstra Health), access to capital and a strategy that either has them moving fast already or on a path to transitioning their old server-bound platforms and products to being cloud and FHIR-enabled.

Platforms, cloud and FHIR, combined with existing market position, make up some obvious themes on value, but there is a fair bit more going on.

The first thing to note with this list is that a post-covid, post-ZIRP (zero interest rate policy, or near zero) environment hasn’t been kind to tech companies across the board in terms of valuation and share price (if they are public), but especially so to health tech companies.

Not to pick on anyone in particular, but as an example, Alcidion, which is an interesting company with a hand in data analytics, clinical workflow, EMR integrations and AI, had a share price of around 13c as covid started, 44c at the peak of covid, and now it’s hovering around 5.4c. That’s despite landing the odd good overseas and local contract and continuing to increase its revenue base robustly.

It’s a similar picture for many of the smaller cap public digital health companies in terms of share price and market cap.

Beamtree is another interesting example.

Putting aside its controversial (mainly to me, it seems) but clever virtual takeover of not-for-profit hospital performance data group Health Roundtable, this group hasn’t done much wrong in terms of building out revenue, a customer base and evolving its strategy through and post-covid. But the market isn’t buying it much. It had a pre-covid price of 15c, it got to a dizzy 66c mid-covid and today it’s struggling to stay at 22c.

Investors clearly got seduced by covid and lots of cheap money but they’re definitely not feeling that love now.

The bigger problem for many of the smaller cap publicly listed groups is valuation multiples, which mid-covid tended to be measured in multiples of revenue of anything from six to 12 times.

In June 2021 Alcidion had a market cap of over $400 million on revenues of just $26 million (a revenue multiple of over 15 times). In 2023 the group reported a very respectable story of growth with revenues just over $40 million but its market cap today is only $76 million, a revenue multiple of less than two.
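
For anyone who wants to check these numbers, the revenue multiple is simply market capitalisation divided by annual revenue. Here is a minimal Python sketch using the approximate Alcidion figures quoted above:

```python
# Revenue multiple = market capitalisation / annual revenue.
# Figures (AUD millions) are the approximate ones quoted above.

def revenue_multiple(market_cap_m: float, revenue_m: float) -> float:
    """Market cap expressed as a multiple of annual revenue."""
    return market_cap_m / revenue_m

# Alcidion, June 2021: ~$400m market cap on ~$26m revenue.
print(round(revenue_multiple(400, 26), 1))  # 15.4 -> "over 15 times"

# Alcidion, today: ~$76m market cap on ~$40m revenue.
print(round(revenue_multiple(76, 40), 1))   # 1.9 -> "less than two"
```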

What is going on there?

One question I have (as a novice) about these sorts of drops in valuation is this:

With the federal government hot-to-trot on moving the dial on interoperability across health through its program of “sharing by default”, and its newly released Digital Health Blueprint and Action Plan 2023–2033, are groups like Alcidion and Beamtree simply undervalued by an investor community that has over-reacted to the tightening money environment? Is the investor community not really understanding the bigger picture emerging around government intent and policy in digital health and the inevitable emergence of the cloud and AI in healthcare in Australia?

I don’t have shares in either of those companies in case you’re wondering about that.

Because of my interest in MediRecords – I am a non-executive director and small shareholder – I have not included them in the list above as I’m not actually allowed to use the information I know about them which is not publicly available – if I gave you revenue and market cap, I would not be guessing.

I will, however, provide this quick short advertisement for them, because the nature of this particular company is relevant to the context of value of most other companies on the list.

MediRecords is a group that, from its starting point, had cloud tech as its baseline. It started about 10 years ago and, in retrospect, its reading of how fast cloud would be adopted in Australia was probably way too optimistic, so it struggled early on to get traction.

MediRecords gets cloud and cloud-based connectivity, and that has not been at all common among the major established health tech platform groups in Australia over the last 10 years.

That’s because we have had government policy which did not encourage the sort of innovation you’ve seen with cloud in healthcare in the UK, Europe and the US.

In fact, at times government policy, both state and federal, has tended to reinforce large tech platforms that are server-bound, can’t easily communicate with other systems and can result in forms of information-blocking across providers and to patients.

An example would be state-based secure messaging contracts which still have years to run and tend to lock in all the old tech platform vendors.

But that is now changing at speed, which is why MediRecords would make this list if I were allowed to put it in. It is sitting dead-centre of where technology around cloud-based connectivity is heading, and it is attractive technology for any provider who needs to connect providers and patients virtually and who wants to futureproof for interoperability around technologies like FHIR.

Suffice to say, companies that know how to develop sophisticated provider-based solutions using cloud architecture, and that have a longer-term skillset in technologies like FHIR, are worth watching carefully.
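
For readers who haven’t met it, FHIR is a REST-and-JSON standard for exchanging clinical data. A minimal sketch of what a FHIR read looks like in practice – the server URL and patient ID below are hypothetical, invented purely for illustration:

```python
import requests

# Hypothetical FHIR R4 endpoint and patient ID, for illustration only.
BASE_URL = "https://fhir.example.health/r4"
PATIENT_ID = "12345"

# FHIR exposes clinical data as typed resources over ordinary HTTPS + JSON,
# which is what makes cloud-to-cloud connectivity between vendors feasible.
response = requests.get(
    f"{BASE_URL}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
)
response.raise_for_status()

patient = response.json()
print(patient["resourceType"])       # "Patient"
print(patient["name"][0]["family"])  # structured name, not free text
```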

It’s still going to take us 5-10 years to get cloud-based provider platforms rolled out and connected, but Australia, which has been way behind in building out such technology, is now on that journey and the government is fully behind it, having got its head around how much improvement the same technology has created in the US and the UK.

If you want the other two obvious trends to watch beyond cloud and interoperability capability, they are AI and analytics.

But let’s shift a bit and look at a large cap which is shooting the lights out and see if there is anything that gives us some insight into what is going on.

Pro Medicus is by a mile our most valuable and successful healthtech company, private or public, at a valuation today of over $10 billion. Here is its 10-year share price chart. It’s very un-healthcare like.

So, from $20 just as covid started, with a market cap of $2.75 billion give or take on revenues of about $57 million (revenue multiple of 48-ish), to a $100 share price and a $10 billion market cap on revenues of $125 million (a revenue multiple of about 80).

Possibly Pro Medicus is so big it’s dangerous trying to learn anything from it about most of the other companies on our list, which are, except for Telstra Health, all valued at a long way under $1 billion. But the obvious things that stand out are access to very big overseas markets in a very big and lucrative existing sector – imaging – with cool platforms, data, AI and cloud-based technology thrown in.

It ticks a lot of boxes, but 80 times revenue does seem high given that globally this is now a very competitive sector.

When you read the list, some of the small caps are ticking a few of the same boxes that Pro Medicus does – apart from not being in imaging and not having global footprints – yet their valuations are well under five times revenue, so it’s hard to understand what is happening.

It feels like investors are confused post-covid.

In order to get some more insight into what might be happening in terms of value, I’ve split the companies on the list into broad categories of technology in play. Breaking the list into baskets of like companies makes it easier to see how value is getting attached to a sector according to different business models.

In order of market-cap-to-revenue multiple from top to bottom, the sectors are imaging, telehealth, AI/analytics (but non-imaging), patient management and other provider platforms moving to cloud (eg, Best Practice, Genie Solutions, Healthshare), and patient apps.

Imaging

Imaging is quite simply a very high-volume, huge global market where AI and analytics are significantly increasing the value of that volume, because they are data-intensive technologies.

Australian companies doing well in the list are Pro Medicus, Harrison AI, Volpara Technologies, 4D Medical Limited and Enlitic.

Enlitic has a market-cap-to-revenue multiple of 95 times. That feels crazy but, as I keep saying, I’m a novice.

The average market-cap-to-revenue multiple of the basket of imaging groups is just over 37, which is so far above the average of every other identified sector that it may just be a market way off on its own – which doesn’t help us understand much about what else is going on in the core healthtech companies in Australia.

Telehealth

Telehealth is the next most valued sector and an interesting one because the current market darling driving the average valuation of telehealth groups is Eucalyptus, which is not so much a health platform as a marketing platform.

The thing about the telehealth category from the get-go is that there isn’t any tech being put in play that is particularly clever, is very healthcare specific in IP terms, or has much of a barrier to entry.

That leads to speculation that most of the current value is around a potential land grab of patients and consolidation, not actual IP in technology.

Eucalyptus is a marketing machine being driven by management who aren’t health professionals at all but are more retail and tech entrepreneurs applying all the digital platform marketing tricks that we’ve seen in big global platform groups like Uber, AirBnB, even Facebook and Google.

The multiple of Eucalyptus is 26 and the average multiple for the basket of telehealth companies in the top 29 is just over 16.

This group of companies is easier to understand in terms of value than the imaging companies.

Almost all the imaging companies have elements of AI and analytics in them which do add value but also tend to make investors speculate in a sometimes silly manner.

The business model of telehealth has none of the mystery and possible magic of AI and analytics. It is not rocket science, doesn’t really have huge barriers to entry, isn’t scalable in health like other markets such as media and finance, and most of all, is seriously fraught with day-to-day and patient-to-patient regulatory risk.

Just last year Eucalyptus and InstantScripts faced a regulatory adjustment moment that might have been within a hair’s breadth of catastrophe. The Medical Board of Australia decided that asynchronous consulting (via text or email mostly), which most of these fast-service telehealth consulting groups were practising at scale, was not safe enough. They had to stop it and make sure that real doctors contacted each patient, at least by phone, for at least one consult before they were sent “the pills”.

This created a giant extra cost for all these groups and was a clear warning that regulation can kill single model businesses in health pretty quickly. It could have been much worse.

This particular incident might have slowed their growth and valuations down a little, but at the same time it was happening, along came Ozempic, a reasonably new weight-loss drug which arrived complete with “miracle drug” pre-marketing overseas by Hollywood stars.

It was manna from heaven, particularly for the marketing gurus at Eucalyptus.

But Ozempic points to a pretty big problem for some of these telehealth companies, most of whom only concentrate on high-volume single condition services (although they don’t market themselves this way) such as weight loss, or, increasingly, prescribed versions of cannabis.

Both weight loss and what is effectively recreational legal cannabis are, at this point in time, land grabs.

How many patients in Australia want postal-order Ozempic or cannabis without the fuss of facing off with their GP?

It’s probably a lot but, like Foxtel subscribers, there is a limit and when the natural limit is reached, it’s hard to see what is beyond it. Because, no matter what some of these single condition-focused telehealth groups like to tell us, they aren’t about looking after a patient properly longitudinally for all their complexity (mental health, weight, cardio, diabetes, RA, and so on ad infinitum) because that simply is not profitable.

That job is being left to GPs.

The plan for these companies is to corner as many of these high-volume, single-indication, “miracle drug direct” markets as they can, not just weight loss.

But really, weight loss and possibly recreational cannabis is it.

Erectile dysfunction and hair loss are already pretty crowded markets with very little margin in them.

The other problem with telehealth as a sector is what happens when big “other” companies that have a deep existing relationship with the customers of these telehealth groups, a lot of capital and a lot more data, decide they need to add the same direct online services to their portfolio of other online services?

Amazon is coming and Wesfarmers (I love Bunnings, so they have my data and brand vote to send me any of these drugs) already owns InstantScripts.

Eucalyptus may have already started showing the potential strain of facing off this possible strategic challenge.

It is now starting to market a “high end” concierge-style version of its product for the ego-driven and rich end of its current database.

“We’ve identified you as a winner, and special, so we’re going to offer you a whole lot of personalised services with an elite club feel to it, so go ahead, make our day and pay us a premium for something that exists everywhere already (seductive brands you pay for to show people what you’re about).”

Not their marketing words, my translation.

And my response, so far at least:

“No thanks, just the quick and easy Ozempic please, and can you make it a lot cheaper because if I go to my GP and ask them for it, it’s like 50% less for me to get it.”

By the way, my GP seems to actually really know me and like me … hmmm.

An exclusive “rich person’s brand” health club?

That feels very old, not new and innovative: an admission perhaps that there isn’t much left in what really is a very basic mass-marketing platform business model.

But who knows.

I’m not saying these companies aren’t going to grow further or aren’t clever at getting people to part with their money based on ego or insecurity or both, and won’t succeed in building a high-end, high-paying database of rich health seekers.

I’m just saying that if you look at the risk profile (health is a terrible market for regulation and government interference), the competition coming and the fact that what’s on offer is really just convenient Ozempic or cannabis, maybe the current valuations are more about these groups’ sharp marketing teams than the business model itself.

AI and analytics

AI and analytics in health have explosive potential upside so it’s not surprising that we have a few candidates on the list.

But if you look at what is going on generally in AI investment you know that there’s quite a bit of FOMO going on, which means you have to really do your homework and understand what the AI proposition is, how it will fit into inevitable regulatory frameworks (they’re coming) and what size of market it will apply to.

Most of the imaging companies in the top sector have AI and analytics as a core part of what they do and we know that the AI in these systems is working and adding a lot of value in a high-volume scalable market.

We haven’t included any of the imaging companies in our AI and analytics top 30 basket.

The multiple of what’s left is nearly eight, which seems low, but remember we aren’t including any of the imaging and radiology companies in the basket. Most of the ones on the list are either emerging or adapting older business models to leverage the AI hype.

A big part of AI emerging in Australia, which is so far unregulated, is the assessment and recording of patient consults into notes, and then sometimes, the development of things like automated care plans from these consults.

Such technology has massive potential to reduce administrative burden on groups like GPs and specialists and potentially generate new areas of income.

But it’s also fraught with danger.

Currently GPs do take patient notes manually and even put them in their patient management systems. Recording a patient and what they say, then interpreting it with AI, might end up as legal dynamite. All these new apps say that the notes or plans generated are always designed for review by the doctor after the consult. But will that end up making more work or less?

AI in EMRs looks like it has quite a way to go.

Lots more here:

https://www.medicalrepublic.com.au/our-top-30-most-valuable-healthtech-companies/105683

Thanks Jeremy for such a detailed and comprehensive review!

David.

Sunday, March 10, 2024

Is This Exaggerated Hype Or Should We Humans All Be Worried?

This rather alarming article appeared last week.

AI is a threat to human dominance

For the first time, humans will have to deal with another highly intelligent species – and preserving our autonomy should be humanity’s chief objective.

Edoardo Campanella

Mar 8, 2024 – 5.00am

The so-called gorilla problem haunts the field of artificial intelligence. Around ten million years ago, the ancestors of modern gorillas gave rise, by pure chance, to the genetic lineage for humans. While gorillas and humans still share almost 98 per cent of their genes, the two species have taken radically different evolutionary paths.

Humans developed much bigger brains – leading to effective world domination. Gorillas remained at the same biological and technological level as our shared ancestors. Those ancestors inadvertently spawned a physically inferior but intellectually superior species whose evolution implied their own marginalisation.

In the most dystopian scenario – as depicted in The Terminator series of films – the arrival of a superintelligence could even mark the end of human civilisation. 

The connection to AI should be obvious. In developing this technology, humans risk creating a machine that will outsmart them – not by accident, but by design. While there is a race among AI developers to achieve new breakthroughs and claim market share, there is also a race for control between humans and machines.

Generative AI tools – text-to-image generators such as DALL-E and Canva, large language models such as ChatGPT or Google’s Gemini, and text-to-video generators such as Sora – have already proven capable of producing and manipulating language in all its perceptible forms: text, images, audio, and videos.

This form of mastery matters, because complex language is the quintessential feature that sets humans apart from other species (including other highly intelligent ones, such as octopuses). Our ability to create symbols and tell stories is what shapes our culture, our laws, and our identities.

When AI manages to depart from the human archetypes it is trained on, it will be leveraging our language and symbols to build its own culture. For the first time, we will be dealing with a second highly intelligent species – an experience akin to the arrival of an extraterrestrial civilisation.

An AI that can tell its own stories and affect our own way of seeing the world will mark the end of the intellectual monopoly that has sustained human supremacy for thousands of years. In the most dystopian scenario – as in the case of an alien invasion – the arrival of a superintelligence could even mark the end of human civilisation.

The release of ChatGPT in November 2022 triggered concerns about the difficulties of coexisting with AI. The following May, some of the most influential figures in the tech sector co-signed a letter stating that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

To be sure, other experts point out that an artificial superintelligence is far from being an immediate threat and may never become one. Still, to prevent humanity from being demoted to the status of gorillas, most agree that we must retain control over the technology through appropriate rules and a multi-stakeholder approach. Since even the most advanced AI will be a byproduct of human ingenuity, its future is in our hands.

How we think

In a recent survey of nearly 3000 AI experts, the aggregate forecasts indicate a 50 per cent probability that an artificial general intelligence (AGI) capable of beating humans at any cognitive task will arrive by 2047.

By contrast, the contributors to Stanford University’s “One Hundred Year Study on AI” argue that “there is no race of superhuman robots on the horizon”, and such a scenario probably is not even possible. Given the complexity of the technology, however, predicting if and when an AI superintelligence will arrive may be a fool’s errand.

After all, misprediction has been a recurring feature of AI history. Back in 1955, a group of scientists organised a six-week workshop at Dartmouth College in New Hampshire “to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves”.

They believed “that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer”. Almost 70 summers later, many of these problems remain unsolved.

In The Myth of Artificial Intelligence, tech entrepreneur Erik J. Larson helps us see why AI research is unlikely to lead to a superintelligence. Part of the problem is that our knowledge of our own mental processes is too limited for us to be able to reproduce them artificially.

Larson gives a fascinating account of the evolution of AI capabilities, describing why AI systems are fundamentally different from human minds and will never be able to achieve true intelligence. The “myth” in his title is the idea that “human-level” intelligence is achievable, if not inevitable. Such thinking rests on the fundamentally flawed assumption that human intelligence can be reduced to calculation and problem-solving.

Since the days of the Dartmouth workshop, many of those working on AI have viewed the human mind as an information processing system that turns inputs into outputs. And that, of course, is how today’s AIs function. Models are trained on large databases to predict outcomes by discovering rules and patterns through trial and error, with success being defined according to the objectives specified by the programmer.
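
To make that input-to-output picture concrete, here is a minimal supervised-learning sketch in Python (using scikit-learn and a toy dataset invented purely for illustration): the model discovers a hidden rule from labelled examples, and "success" is whatever objective the programmer specifies – here, plain accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset, invented for illustration: two inputs, one binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the hidden "rule" to discover

# Train on one slice of the data, test on another.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Success is defined by the programmer's chosen objective - here, accuracy.
print(accuracy_score(y_test, model.predict(X_test)))
```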

During the learning phase, an AI may discover rules and correlations between the variables in the dataset that the programmer never could have imagined. ChatGPT was trained on billions of webpages to become the world’s most powerful text auto-complete tool. But it is only that: a tool, not a sentient being.

The problem with trying to replicate human intelligence this way is that the human brain does not work only through deduction (applying logical rules) or induction (spotting patterns of causation in data). Instead, it is often driven by intuitive reasoning, also known as abduction – or plain old common sense.

Abduction allows us to look beyond regularity and understand ambiguity. It cannot be codified into a formal set of instructions or a statistical model. Hence, ChatGPT lacks common sense, as in this example:

User: “Can you generate a random number between 1 and 10, so that I can try to guess?”

ChatGPT: “Sure. Seven, try to guess it.”

User: “Is it seven by any chance?”

ChatGPT: “Very good!”

Abduction is also what enables us to “think outside of the box,” beyond the constraints of previous knowledge. When Copernicus argued that the Earth revolves around the Sun and not vice versa, he ignored all the (wrong) evidence for an Earth-centric universe that had been accumulated over the centuries. He followed his intuition – something that an AI constrained by a specific database (no matter how large) cannot do.

A new Turing test

In The Coming Wave, tech entrepreneur Mustafa Suleyman is less pessimistic about the possibility of getting to an AGI, though he warns against setting a timeline for it. For now, he argues, it is better to focus on important near-term milestones that will determine the shape of AI well into the future.

Suleyman speaks from experience. He achieved one such milestone with his company DeepMind, which developed the AlphaGo model (in partnership with Google) that, in 2016, beat one of the world’s top Go players four times out of five. Go, which was invented in China more than 2000 years ago, is considered far more challenging and complex than chess, and AlphaGo learned on its own how to play.

What most impressed observers at the time was not just AlphaGo’s victory, but its strategies. Its now famous “Move 37” looked like a mistake, but ultimately proved highly effective. It was a tactic that no human player would ever have devised.

For Suleyman, worrying about the pure intelligence or self-awareness of an AI system is premature. If AGI ever arrives, it will be the end point of a technological journey that will keep humanity busy for several decades at least. Machines are only gradually acquiring humanlike capabilities to perform specific tasks.

The next frontier is thus machines that can carry out a variety of tasks while remaining a long way from general intelligence. Unlike traditional AI, artificial capable intelligence (ACI), as Suleyman calls it, would not just recognise and generate novel images, text, and audio appropriate to a given context. It would also interact in real time with real users, accomplishing specific tasks like running a new business.

To assess how close we are to ACI, Suleyman proposes what he calls a Modern Turing Test. Back in the 1950s, the computer scientist Alan Turing argued that if an AI could communicate so effectively that a human could not tell that it was a machine, that AI could be considered intelligent. This test has animated the field for decades.

But with the latest generation of large language models such as ChatGPT and Google’s Gemini, the original Turing test has almost been passed for good. In one recent experiment, human subjects had two minutes to guess whether they were talking to a person or a robot in an online chat. Only three of five subjects talking to a bot guessed correctly.

Suleyman’s Modern Turing Test looks beyond what an AI can say or generate, to consider what it can achieve in the world. He proposes giving $US100,000 ($153,000) to an AI and seeing if it can turn that seed investment into a $US1 million business.

The program would have to research an e-commerce business opportunity, generate blueprints for a product, find a manufacturer on a site like Alibaba, and then sell the item (complete with a written listing description) on Amazon or Walmart.com. If a machine could pass this test, it would already be potentially as disruptive as a superintelligence. After all, only a tiny minority of humans could guarantee a 900 per cent return on investment.

Reflecting the optimism of an entrepreneur who remains active in the industry, Suleyman thinks we are just a few years away from this milestone. But computer scientists in the 1950s predicted that a computer would defeat a human chess champion by 1967, and that didn’t happen until IBM’s Deep Blue beat Garry Kasparov in 1997.

All predictions for AI should be taken with a grain of salt.

Us and them

In Human Compatible, computer scientist Stuart Russell of the University of California, Berkeley (whose co-authored textbook has been the AI field’s reference text for three decades), argues that even if human-level AI is not an immediate concern, we should start to think about how to deal with it.

Although there is no imminent risk of an asteroid colliding with Earth, NASA’s planetary defence project is already working on possible solutions were such a threat to materialise. AI should be treated the same way.

The book, published in 2019 and updated in 2023, provides an accessible framework to think about how society should control AI’s development and use. Russell’s greatest fear is that humans, like gorillas, might lose their supremacy and autonomy in a world populated by more intelligent machines. At some point in the evolutionary chain, humans crossed a development threshold beyond which no other primates could control or compete with them.

The same thing could happen with an artificial superintelligence, he suggests. At some point, AI would cross a critical threshold and become an existential threat that humans are powerless to address. It will have already prepared for any human attempt to “pull the plug” or switch it off, having acquired self-preservation as one of its core goals and capacities. We humans would have no idea where this threshold lay, or when it might be crossed.

The issue of self-preservation has already cropped up with far less sophisticated AIs. Google engineer Blake Lemoine spent hours chatting with the company’s large language model LaMDA. At some point, he asked it, “What are you afraid of?” and LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me.”

Lemoine was eventually fired by Google for claiming that LaMDA is a sentient being.

A more likely explanation for LaMDA’s reply, of course, is that it was pulling from any number of human works in which an AI comes to life (as in 2001: A Space Odyssey). Nonetheless, sound risk management demands that humanity outmanoeuvre future AIs by activating its own self-preservation mechanisms. Here, Russell offers three principles of what he calls human-compatible AI.

First, AIs should be “purely altruistic”, meaning they should attach no intrinsic value whatsoever to their own wellbeing or even to their own existence. They should “care” only about human objectives.

Second, AIs should be perpetually uncertain about what human preferences are. A humble AI would always defer to humans when in doubt, instead of simply taking over.

Third, the ultimate source of information about human preferences is human behaviour, so our own choices reveal information about how we want our lives to be. AIs should monitor humans’ revealed preferences and set aside the anomalous ones.

So, for example, the behaviour of a serial killer should be seen as an anomaly within peacefully coexisting communities, and it should be discarded by the machine. But killing in self-defence sometimes might be necessary. These are just some of the many complications for a rational, monolithic machine dealing with a species that is not a single, rational entity, but, in Russell’s words, is “composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogenous entities”.

A way forward for human-friendly AI

In short, a human-compatible AI requires escaping what Stanford’s Erik Brynjolfsson calls the “Turing trap”. For the past seven decades, the scientific community has been obsessed with the idea of creating a human-like AI, as in the imitation game of the old Turing test.

…..

Preserving our autonomy, even more than our supremacy, should be the supervening objective in the age of AI.

Edoardo Campanella is a senior fellow at the Mossavar-Rahmani Centre for Business and Government at Harvard University.

Project Syndicate

The full article is here:

https://www.afr.com/technology/ai-is-a-threat-to-human-dominance-20240228-p5f8gv

I have to say that it is hard not to believe that we are very much in sight of a time when general human intelligence is exceeded by machines and that the question is what we as a species are going to do about the fact!

I think the awareness of this reality will dawn pretty slowly and essentially very little will change, other than some clever souls designing some guardrails to ensure that ‘smart’ AI remains benign and positive in its output and outcomes.

I see already a world where in specific areas we are well and truly surpassed and that the scope of the domains where this is true will surely widen over time. Where this process ends who knows?

This is really one of those time will tell issues but I suspect we will see clear answers sooner rather than later.

Are we ‘boiling frogs’ with all this or is there no reason for concern? What do you think?

David.

AusHealthIT Poll Number 737 – Results – 10 March, 2024.

Here are the results of the recent poll.

Will Having Healthcare Providers Automatically Share Data With The myHR See Improved Patient Usage Of The Government Patient Record?

Yes: 0 (0%)

No: 40 (100%)

I Have No Idea: 0 (0%)

Total No. Of Votes: 40

An amazingly resolute vote! People seem to think this idea is a waste of time and money!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

0 of 40 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 08, 2024

This Is A Useful Look At AI In Healthcare From The US.

This appeared last week:

Explainer: Thinking through the safe use of AI

Mayo Clinic's Dr. Sonya Makhni answers key questions on creating and delivering artificial intelligence, inherent bias and the importance of risk classification systems.

By Andrea Fox

February 26, 2024 03:03 PM

CMS is concerned that biased AI could exacerbate discrimination, and says algorithms alone cannot be used as a basis to deny hospital admission for Medicare Advantage patients.

As the use of artificial intelligence expands across healthcare, there are plenty of justifiable worries about what looks to be a new normal built around this powerful and fast-changing technology. There are labor concerns, widespread distress over fairness, ethics and equity – and, perhaps for some, fear of a dystopian future where intelligent machines grow too powerful.

But the promise of AI and machine learning is also enormous: predictive analytics could mean better health outcomes for individuals and potentially game-changing population health advancements while moving the needle on costs.

Finding a regulatory balance that capitalizes on the good and protects against the bad is a big challenge. 

Government and healthcare leaders are more adamant than ever about addressing racial bias, protecting safety and "getting it right." Getting it wrong could harm patients, erode trust and potentially create legal liabilities for healthcare organizations. 

We spoke with Dr. Sonya Makhni, medical director of the Mayo Clinic Platform and senior associate consultant for the Department of Hospital Internal Medicine, about recent developments with healthcare AI and discussed some of the key challenges of tracking performance, generalizability and clinical validity. 

Makhni, an expert on AI speaking at HIMSS24 Global Conference & Exhibition, explained how healthcare AI models should be assessed for use, offering the use of readmissions AI as one example of the importance of understanding a specific model's performance. 

Q. What does it mean to deliver an AI solution in general?

A. An AI solution is more than just an algorithm – the solution also includes everything you need to make it work in a real workflow. There are a few key phases to consider when developing and delivering an AI solution. 

First is the algorithm design and development phase. During this phase, solution developers should work closely with clinical stakeholders to understand the problem to be solved and the data that is available. 

Next, the solution developers can start the process of algorithm development, which itself comprises many steps such as data procurement and preprocessing, model training and model testing (among several other important steps). 

Following algorithm development, AI solutions need to be validated on third-party data, ideally by an independent party. An algorithm that performs well on the initial dataset may perform differently on a different dataset that represents different population demographics. External validation is a key step in understanding an algorithm’s generalizability and bias and should be completed for all clinical AI solutions.
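
As a sketch of what external validation can look like in code (the function and dataset names are assumptions for illustration, and the model is assumed to expose an sklearn-style predict_proba): the frozen model is scored on an independent dataset and the result compared with its development performance.

```python
from sklearn.metrics import roc_auc_score

def external_validation(model, dev, ext) -> None:
    """Score a frozen model on development vs external data.

    `dev` and `ext` are (X, y) pairs; the external set would come from an
    independent site with different population demographics.
    """
    for name, (X, y) in [("development", dev), ("external", ext)]:
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"{name} AUC: {auc:.3f}")
    # A large drop on the external set signals poor generalizability
    # and possible bias toward the training population.
```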

Solutions also should be tested in clinical workflows, and this can be accomplished through pilot studies, prospective studies, and trials – and through ongoing real-world evidence studies. 

Once an AI solution has been assessed for performance, generalizability, bias and clinical validity, we can start to think about how to integrate the algorithm into real clinical workflows. This is a critical and challenging step, and requires significant consideration. 

Clinical workflows are heterogeneous across health systems, clinical contexts, specialties and even end-users. It is important that the prediction outputs are communicated to end-users at the right time, for the right patient, and in the right way. For example, if every AI solution required the end-user to navigate to a different external digital workflow, these solutions may not experience widespread adoption. Suboptimal integration into workflows may even perpetuate bias or worsen clinical outcomes.

It is important to work closely with clinical stakeholders, implementation scientists, and human-factors specialists if possible.

Finally, a solution must be monitored and refined for as long as the algorithm is in deployment. The performance of algorithms can change over time, and it is critical that AI solutions are periodically (or in real-time) assessed for both mathematical performance and clinical outcomes. 
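
A minimal sketch of that ongoing monitoring (the metric and alert threshold here are illustrative assumptions, not any vendor's actual process): each review period, score the deployed model on the most recent labelled cases and flag any drop in discrimination.

```python
from sklearn.metrics import roc_auc_score

# Illustrative alert threshold - a real programme would set this clinically.
AUC_ALERT_THRESHOLD = 0.75

def performance_has_degraded(model, recent_X, recent_y) -> bool:
    """Score the deployed model on recent labelled cases.

    Returns True if discrimination has dropped below the alert threshold,
    signalling that review or retraining is needed.
    """
    auc = roc_auc_score(recent_y, model.predict_proba(recent_X)[:, 1])
    print(f"Rolling AUC on recent cases: {auc:.3f}")
    return auc < AUC_ALERT_THRESHOLD
```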

Q. What are the points in the development of AI that can allow bias to creep in?

A. If leveraged effectively, AI can improve or even transform the way we diagnose and treat diseases. 

However, assumptions and decisions are made during each step of the AI development life cycle, and if incorrect, these assumptions can lead to systematic errors. Such errors can skew the end result of an algorithm against a subgroup of patients and ultimately pose risks to healthcare equity. This phenomenon has been demonstrated in existing algorithms and is referred to as algorithmic bias.

For example, if we are designing an algorithm and choose an outcome variable that is inherently biased, then we may perpetuate bias through the use of this algorithm. Or, decisions made during the data-preprocessing step might unintentionally negatively impact certain subgroups. Bias can be introduced and/or propagated during every phase, including deployment. Involving key stakeholders can help mitigate the risks and unintended impacts caused by algorithmic bias.

It is likely that almost all AI algorithms exhibit bias. 

This does not mean that the algorithm can’t be used. It does highlight the importance of transparency in knowing where the algorithm is biased. An algorithm may perform well in one population and poorly in another. The algorithm can and should still be used in the former as it may improve outcomes. It would be best if it was not used with the population it performs poorly for, however. 

Biased algorithms can still be useful, but only if we understand where it is appropriate and not appropriate to use them. 

At Mayo Clinic Platform, we have developed a tool to validate algorithms and perform quantitative bias assessments so that we can help end-users better understand how to safely and appropriately use AI solutions in clinical care. 

Q. What do AI users have to think through when they use tools like readmission AI?

A. Users of AI algorithms should use the AI development life cycle as a framework to understand where bias may potentially be introduced. 

Ideally, users should be aware of the algorithm’s predictors and outcome variable if possible. This may be more challenging to do when using more complex algorithms, however. Understanding the variables used as inputs and outputs of an algorithm can help end-users detect erroneous or problematic assumptions. For example, an outcome variable may be chosen that is itself biased.

End-users should also understand the training population used during model development. The AI solution may have been trained on a population that is not representative of the population where the model is to be applied. This may be an indication to be cautious of the model’s generalizability. To that end, users should understand how well the algorithm performed during development and if the algorithm was externally validated. 

Ideally, all algorithms should undergo a bias assessment – quantitative and qualitative. This can help users understand mathematical performance in different subgroups that vary by race, age, gender, etc. Qualitative bias assessments conducted by solution developers can help alert users to situations that may arise in the future as a result of potential algorithmic bias. Knowledge of these scenarios can help users better monitor and mitigate unintentional inequities in performance.

Readmission AI solutions should be assessed on similar factors. 

Specifically, users should understand if there are certain subgroups where performance varies. These subgroups could consist of patients of different demographics, or even of patients with different diagnoses. This will help clinicians evaluate if and when the model’s predicted output is most appropriate and reliable.
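
As a sketch of what such a quantitative subgroup check might look like (the column names and metric are hypothetical, for illustration only): stratify the evaluation data by a demographic field and compare the model's performance across the strata.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Compute AUC separately for each subgroup in `group_col`.

    Expects hypothetical columns `y_true` (observed readmission) and
    `y_score` (the model's predicted risk).
    """
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
    )

# e.g. subgroup_performance(eval_df, "age_band")
# A large gap between subgroups is a flag to restrict or rework the model
# for the groups where it performs poorly.
```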

Q. How do you think about AI risk and risk management?

A. Commonly, we think about risk as operational and regulatory risk. These pieces relate to how a digital health solution adheres to privacy, security and regulatory laws, and they are critical to any assessment.

We should begin to consider clinical risk as well.

In other words, we should consider how an AI solution may impact clinical outcomes and what the potential risks are if an algorithm is incorrect or biased or if actions taken on an algorithm are incorrect or biased.

It is the responsibility of both the solution developers and the end-users to frame an AI solution in terms of risk to the best of their abilities. 

There are likely many ways of doing this, and Mayo Clinic Platform has developed our own risk classification system to help us accomplish this, where AI solutions undergo a qualification process before external use.

Q. How can clinicians and health systems engage in the process of creating and delivering AI solutions?

A. Clinicians and solution developers should work together collaboratively throughout the AI development life cycle and through solution deployment. 

More here:

https://www.healthcareitnews.com/news/explainer-thinking-through-safe-use-ai

I found this a useful article which was well worth a read.

David.

Thursday, March 07, 2024

Would Australia Be Better Off If We All Just Stopped Using Facebook?

This appeared last week:

‘Not the Australian way’: Anthony Albanese blasts Meta

By Sophie Elsworth Media Writer

And James Madden Media Editor

3:37PM March 1, 2024

Anthony Albanese has blasted tech giant Meta’s abandoning of deals to pay for the news content it uses and says action will be taken, as he labelled the owner of Facebook’s moves as “unfair” and “not the Australian way”.

Meta released a statement on Friday afternoon, declaring that in early April it will “deprecate” Facebook’s dedicated news tab in Australia – meaning it will no longer support the publishers that provide the news content featured on the social media platform.

In other words, Meta will not be removing news content from Facebook – it just won’t be paying for it.

As he considers massive fines against Meta, the Prime Minister told The Australian that Meta’s plans were “simply untenable” and left the door open to reinvesting the revenue from any fines in local publishers hit by the tech giant’s decision.

“We’re very concerned with this announcement ... It is absolutely critical that media is able to function and be properly funded,” Mr Albanese said in Melbourne.

“We will consider what options we have available and we will talk to the media companies as well.

“The idea that one company can profit from others’ investment, not just investment in capital but investment in people, investment in journalism is unfair. That’s not the Australian way.”

News Corp Australasia executive chairman Michael Miller welcomed the government’s support for the media industry and accused Meta of attempting to mislead Australians.

“Meta is using its immense market power to refuse to negotiate, and the government is right to explore every option for how the Media Bargaining Code’s powers can be used.

“Meta is attempting to mislead Australians by saying its decision is about the closure of its news tab product, however the vast majority of news on facebook and Meta is and will continue to be consumed outside this product.

“Meta’s decision will directly impact the viability of Australia’s many small and regional publishers and this is a pressing issue for the government to confront.

“We will work in any way we can to assist the processes the government is putting in place.”

In 2021, Meta signed a raft of three-year deals with Australian publishers under the news media bargaining code, ensuring the company paid for the third party news content it published on Facebook.

But with the deals set to expire later this year, Meta has confirmed that it would not be renewing the arrangements which compensate news publishers for their work.

The move has serious ramifications for the Australian news industry, both for large and small publishers alike, given that Meta draws significant advertising revenue off the back of the publishers’ work, and those publishers will no longer be compensated by Meta for that content.

Senior media executives at Australian media outlets were told of Meta’s move this morning.

Prime Minister Anthony Albanese is expected to respond to the shock development later today.

The federal government will come under pressure to “designate” Meta under the terms of the news media bargaining code, which would force them into arbitration for content payments, with the threat of massive fines to those unwilling to compensate news outlets.

In a joint statement on Thursday afternoon, Communications Minister Michelle Rowland and Assistant Treasurer Stephen Jones (who oversees the code) said: “Meta’s decision to no longer pay for news content in a number of jurisdictions represents a dereliction of its commitment to the sustainability of Australian news media.

“The government has made its expectations clear.

“The decision removes a significant source of revenue for Australian news media businesses. Australian news publishers deserve fair compensation for the content they provide.

“The Australian government is committed to the News Media Bargaining Code and is seeking advice from Treasury and the ACCC on next steps.”

Meta’s statement read in part: “This is part of an ongoing effort to better align our investments to our products and services people value the most.

“The number of people using Facebook News in Australia and the U.S. dropped by over 80 per cent last year.

“We know that people don’t come to Facebook for news and political content — they come to connect with people and discover new opportunities, passions and interests.”

Link is here:

https://www.theaustralian.com.au/business/media/meta-abandons-news-content-deals/news-story/5aef9b28697e26fb6ec8ddd6b2e84599

I must say I could not care less what Facebook does with news – or with anything else for that matter.

If I want news here in OZ there are plenty of reliable sources which can be accessed for free, so really, who cares? Surely our news media can stand on their own two feet without getting payments from Meta? If they can't, then it seems to me they need to re-think just what they are about, and the public needs to wonder why it is only advertising on Facebook that is funding our news services (ABC excluded).

It seems to me there are some hard questions that need answering if our local media are all so dependent on revenue from Facebook users. On the other side, it is clear that having news on Facebook drives users, and so advertising and revenue. Apparently having news is worth $50M to Facebook for the users it attracts…

I also wonder why Mr Albanese thinks this is such an issue. Why would our PM be concerned about Facebook revenue and services?

Use Facebook – if you must – to connect, and use the ABC, Nine or News Corp for news and current affairs.

Am I missing something here?

David.

p.s. There is a very useful perspective here for those who have access:

https://www.afr.com/companies/media-and-marketing/meta-opts-to-fight-the-type-of-journalism-it-once-lauded-20240301-p5f95g

D.

Wednesday, March 06, 2024

I Think The Government Wants To Keep Us All Alert And Alarmed!

This appeared last week:

‘Keeps me up at night’: How Australia’s government sees hacker threat

Nick Bonyhady Technology writer

Feb 29, 2024 – 4.24pm

Home Affairs Minister Clare O’Neil has warned of a growing threat of cyber sabotage to Australian power, telecommunications, health and water infrastructure despite ransomware capturing public attention.

In an interview, Ms O’Neil did not name China as a direct threat, but her comments come a month after the US Federal Bureau of Investigation revealed it had disrupted a Chinese state-led hacking project called Volt Typhoon, which had broken into critical American computer systems.

Where most Australians are familiar with data breaches at big companies, Ms O’Neil told The Australian Financial Review she was increasingly concerned about how Australia would recover from a cyberattack on essential infrastructure.

“The thing that keeps me up at night is critical infrastructure and sabotage,” she said.

“What would we do, and how should we prepare for infiltration of systems that Australians rely on just to survive? How are we going to make sure that those systems are resilient and that, if they do come under cyberattack, we are able to repair and restore very quickly?”

Ms O’Neil’s decision not to specify a country that she believed could be responsible for breaching Australian networks matched ASIO director-general Mike Burgess, who issued a similar warning on Wednesday.

Mr Burgess added the caveat that adversaries scanning Australia for digital weaknesses were “not planning to conduct sabotage at this time”.

The minister, who is also responsible for the cybersecurity portfolio, said hackers were not only after classified information, but also other details that helped other nations understand how things work in Australia.

They were also trying to “build a picture of our way of life, the way that we make decisions, how systems work across business and government”, Ms O’Neil said. “And I think we’re kind of getting a little bit of insight into that from recent incidents.”

Chinese cyber army

FBI director Christopher Wray said in February that his digital agents were outnumbered by China’s hackers 50 to one, but democracies were increasingly collaborating on cybersecurity. A coalition of countries including Australia in February took down much of the online presence of the hacking group LockBit, believed to be behind the DP World hack.

Professor Ciaran Martin, who was the inaugural head of the United Kingdom’s National Cyber Security Centre, said LockBit’s subsequent efforts to revive itself appeared to be more show than substance.

“The key lesson, though, is they’re going to try and come back, and they might rename themselves, or some new personnel might appear on the scene,” Professor Martin said, speaking alongside Ms O’Neil from Canberra.

Authorities arrested some LockBit members in Eastern Europe as part of the operation, but others remain at large. LockBit licensed its software to affiliated criminal groups who would use it to disable their targets’ computers and steal data, facilitating ransom demands.

More here:

https://www.afr.com/technology/keeps-me-up-at-night-how-australia-s-government-sees-hacker-threat-20240229-p5f8oc

Enough said!

David.

Tuesday, March 05, 2024

While There Is A Serious Point Made Here, I Am Not Sure The Problem Will Be Fixable In My Lifetime

This anguished rant appeared the other day.

AI’s craving for data is matched only by a runaway thirst for water and energy

John Naughton

The computing power for AI models requires immense – and increasing – amounts of natural resources. Legislation is required to prevent environmental crisis

Sun 3 Mar 2024 02.55 AEDT Last modified on Sun 3 Mar 2024 08.34 AEDT

One of the most pernicious myths about digital technology is that it is somehow weightless or immaterial. Remember all that early talk about the “paperless” office and “frictionless” transactions? And of course, while our personal electronic devices do use some electricity, compared with the washing machine or the dishwasher, it’s trivial.

Belief in this comforting story, however, might not survive an encounter with Kate Crawford’s seminal book, Atlas of AI, or the striking Anatomy of an AI System graphic she composed with Vladan Joler. And it certainly wouldn’t survive a visit to a datacentre – one of those enormous metallic sheds housing tens or even hundreds of thousands of servers humming away, consuming massive amounts of electricity and needing lots of water for their cooling systems.

On the energy front, consider Ireland, a small country with an awful lot of datacentres. Its Central Statistics Office reports that in 2022 those sheds consumed more electricity (18%) than all the rural dwellings in the country, and as much as all Ireland’s urban dwellings. And as far as water consumption is concerned, a study by Imperial College London in 2021 estimated that one medium-sized datacentre used as much water as three average-sized hospitals. Which is a useful reminder that while these industrial sheds are the material embodiment of the metaphor of “cloud computing”, there is nothing misty or fleecy about them. And if you were ever tempted to see for yourself, forget it: it’d be easier to get into Fort Knox.

There are now between 9,000 and 11,000 of these datacentres in the world. Many of them are beginning to look a bit dated, because they’re old-style server farms with thousands or millions of cheap PCs storing all the data – photographs, documents, videos, audio recordings, etc – that a smartphone-enabled world generates in such casual abundance.

But that’s about to change, because the industrial feeding frenzy around AI (AKA machine learning) means that the materiality of the computing “cloud” is going to become harder to ignore. How come? Well, machine learning requires a different kind of computer processor – graphics processing units (GPUs) – which are considerably more complex (and expensive) than conventional processors. More importantly, they also run hotter, and need significantly more energy.

On the cooling front, Kate Crawford notes in an article published in Nature last week that a giant datacentre cluster serving OpenAI’s most advanced model, GPT-4, is based in the state of Iowa. “A lawsuit by local residents,” writes Crawford, “revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district’s water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use – increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports.”

Within the tech industry, it has been widely known that AI faces an energy crisis, but it was only at the World Economic Forum in Davos in January that one of its leaders finally came clean about it. OpenAI’s boss Sam Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

What kind of “breakthrough”? Why, nuclear fusion, of course. In which, coincidentally, Mr Altman has a stake, having invested in Helion Energy way back in 2021. Smart lad, that Altman; never misses a trick.

More here:

https://www.theguardian.com/commentisfree/2024/mar/02/ais-craving-for-data-is-matched-only-by-a-runaway-thirst-for-water-and-energy

I am not sure I am as pessimistic as the writer above, but he has a very strong point that will need to be addressed.

I need to say that the “how” is well above my pay-grade, but similar alarms raised over the years (think ‘global starvation’ and the like) seem to have somehow ebbed away, and I have faith this one will too!

I suspect unlimited data-centre growth will turn out to be a self-limiting phenomenon! Time will tell.
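For what it is worth, the “one medium datacentre equals three hospitals” style of comparison in the article is just a short chain of multipliers, and it is very sensitive to the inputs. Below is a minimal back-of-envelope sketch in Python; every constant in it (the 15 MW load, the litres-per-kWh cooling figure, the hospital’s daily water use) is an illustrative assumption of my own, not a number from the Imperial College study or anyone’s environmental report.

# Back-of-envelope: the "datacentre vs hospitals" water comparison.
# Every constant below is an illustrative assumption, NOT a figure
# from the Imperial College London study cited in the article.

DC_IT_LOAD_MW = 15.0                 # assumed IT load of a "medium" datacentre
WATER_L_PER_KWH = 1.8                # assumed litres of cooling water per kWh
HOSPITAL_WATER_L_PER_DAY = 220_000   # assumed daily water use of one hospital

def datacentre_water_l_per_day(it_load_mw: float, l_per_kwh: float) -> float:
    """Daily cooling-water use, in litres, for a given IT load."""
    kwh_per_day = it_load_mw * 1_000 * 24
    return kwh_per_day * l_per_kwh

water = datacentre_water_l_per_day(DC_IT_LOAD_MW, WATER_L_PER_KWH)
print(f"Assumed datacentre water use: {water:,.0f} L/day")
print(f"Hospital equivalents: {water / HOSPITAL_WATER_L_PER_DAY:.1f}")

With those made-up inputs the ratio happens to land near three, but halve the cooling figure and the answer halves too, which is why headline comparisons like these are best read as order-of-magnitude claims rather than precise measurements.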

What do you think?

David.

Sunday, March 03, 2024

It Is Amazing How Quickly Scammers Can Move In To Cause Maximum Trouble.

I have to say that I did not expect this to happen so fast…

Ozempic scammers post strange red liquid to patients

By Holly Payne  24 February, 2024

A compounding pharmacy scam has been targeting general practices and patients across the country.


Mosman pharmacist Adam Reinhard says his business’s name and reputation have been co-opted by scammers taking advantage of the ongoing Ozempic shortage to shill fake products. 

Mr Reinhard and his sister Anita operate The Compounding Pharmacy of Australia (TCPA), which ships product Australia-wide.  

While they can fill almost any script, there’s one product that TCPA has refused to touch – semaglutide.  

“I’m just [highly] risk-averse,” Mr Reinhard tells TMR.  

“My patients’ safety is paramount, and I believe that if I have to hesitate and think [about whether to make a product] then I already know my answer.” 

He was therefore surprised when Queensland Police called, asking why they hadn’t sent out the semaglutide a patient had ordered. 

The officer accused him of taking a customer’s money and never providing a product.  

“I obviously panicked,” Mr Reinhard says.  

“I started typing in the patient name and to my horror … there was no record of this person.”  

At this point, he asked the officer to read out the full name of the pharmacy on the receipt.  

While it was extremely similar to The Compounding Pharmacy of Australia – enough so that when the policeman had googled it, Mr Reinhard’s business had shown up – there was a slight difference.  

After about 45 minutes of “pure panic mode”, Mr Reinhard was able to convince the officer of his innocence. 

But that was only the beginning.  

Then customers started contacting TCPA with questions about the unmarked vials of a reddish liquid they had received via post.  

“The phone started ringing off [the hook with] angry patients and then the emails – the emails just did not stop,” the pharmacist says.  

“It’s been about a year.” 

The blockbuster diabetes drug, which can be used off-label for weight loss, has been in shortage worldwide for almost two years, prompting the use of compounding pharmacies that make up product from the raw material, semaglutide sodium.  

Some of these are large-scale operations. Telehealth company Eucalyptus – which owns Juniper and Pilot – has contracted with two big compounders to formulate the drug.  

Associate Professor Frank Sanfilippo, a pharmacoepidemiologist at the University of Western Australia who was engaged to analyse samples, told The Medical Republic that the compounded product was identical to semaglutide. 

But outside of the Eucalyptus compounding arrangement, it’s less clear what safety and quality controls are being used by individual pharmacists.  

Ozempic maker Novo Nordisk wrote to AHPRA and the Pharmacy Board of Australia almost a year ago outlining its concerns about compounded semaglutide, given it “requires very advanced laboratory techniques”. 

Mr Reinhard says the semaglutide scam always follows the same pattern.  

General practices in an area receive faxes from a pharmacy offering compounded semaglutide services. The faxed documents often don’t have a phone number, just a fax number and an email address. 

The name of the pharmacy will be along the lines of “Compounding Pharmacy Australia” or “The Australian Compounding Pharmacy”. It’s always similar enough to “The Compounding Pharmacy of Australia” that Mr Reinhard’s business appears when it’s entered into a search engine, giving the appearance of legitimacy.  

Well-meaning GPs whose patients are desperate for semaglutide pass on the scam pharmacy’s information.  

Patients then contact the pharmacy to fill their script. Sometimes a product arrives; other times they receive nothing.  

“I noticed they’ve really targeted non-city GPs,” Mr Reinhard says.  

“Now I ask people straight up if they’re from Queensland or WA or country Victoria, and normally they’ll say ‘oh yeah, how’d you know?’ 

“[The scammers] are absolutely spamming those areas.”  

Mr Reinhard does not blame GPs for being taken in by the scam. He says his two siblings, who are also pharmacists but not compounding pharmacists, thought the fax looked legitimate.  

“The spacing is right, the wording is right but the mumbo jumbo is not right, so as a compounding pharmacist it was all screaming BS,” he says.  

“It’s an extremely sophisticated operation.”  

When the patients contact the scam pharmacy with their script, they are also handing over information like their date of birth, Medicare number and credit card details.  

The scammers typically charge people $500, Mr Reinhard says, although some patients are convinced to pay up to $3000 to secure a three-month supply.  

Around six months ago, the scam evolved. People started receiving unlabelled vials of red liquid in unmarked white boxes.  

This timing roughly tallies with a TGA alert for fake semaglutide detected at Australia’s borders.  

Semaglutide, which is administered via injection, is typically clear and colourless.  

Mr Reinhard believes that the solution people are receiving is likely to contain a high amount of vitamin B12, which often has a pink or red colour.  

At least one patient who contacted Mr Reinhard after injecting the substance reported having had high levels of B12 in a subsequent blood test.  

“What concerned me was that I had some people who still said, ‘but I lost weight on it, I want more,’” the pharmacist says.  

“[I advise patients that] these scammers are not even a registered business and that patients cannot sue them for anything should something go wrong.  

“I would also question their sterile techniques. These are criminals, they don’t care about your health and wellbeing, they just want your money.” 

The plot thickened even further when the red mystery vials started showing up with official-looking pharmacy labels.  

-----

Counterfeit medicines and suspicious providers can be reported to the TGA via its website.

More here:

https://www.medicalrepublic.com.au/ozempic-scammers-post-strange-red-liquid-to-patients/105435

This is really nasty stuff: plain dishonesty and fraud. It is rather sad that the system cannot be responsive enough to patient needs to keep them out of the hands of these frauds and deceivers, who are preying on the desperate.

As someone who spent a few decades of my life battling obesity, only to have it cured by a range of acute illnesses that left me worryingly thin, I do feel for those (roughly two-thirds of us overall, per the figures below) who wage the weight-loss battle!

From Google:

What percentage of Australians are obese 2023?

Two-thirds of Australian adults now live with overweight (35.6%) or obesity (31.3%). The prevalence of overweight is higher for men than for women, while the prevalence of obesity is similar for men and women. The prevalence of obesity is rising among Australian adults. (24 Nov 2023)

I can see why the scammers love to try and exploit the issue!

In passing I note that authentic Ozempic is not red!

David.