Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Sunday, July 06, 2025

AusHealthIT Poll Number 801 – Results – 6 July 2025.

Here are the results of the recent poll.

Should Australia Increase Defence Spending To 3% Of GDP?

Yes 14 (42%)

No 14 (42%)

I Have No Idea 5 (16%)

Total No. Of Votes: 33

A totally split vote with a few staying out of it!

Any insights on the poll are welcome, as a comment, as usual!

Not bad voter turnout – question must have been decent. 

5 of 33 who answered the poll admitted to not being sure about the answer to the question or wanted to stay out of it!

Again, many, many thanks to all those who voted! 

David.

Friday, July 04, 2025

It Really Seems Like Just Yesterday That Dire Straits Hit Our Consciousness!

This appeared last week:

It is 40 years since we embraced Dire Straits’s Brothers in Arms

Alan Howe

12:00 AM, June 28, 2025

When it comes to recorded music, the tail has always wagged the dog; the length of songs has long been dictated by the recording format. The three-minute pop song wasn’t the idea of any artist. It came about because the early recording formats – particularly shellac 78s which exploded in affordable popularity between the wars – ran about that length.

And it stuck fast as radio formats and advertising were tailored to fit. A few longer songs – Don McLean’s American Pie, for instance – were defiantly broken in two with the first half on the A side and finishing when you flipped over the seven inch single. But they were oddities.

Arguably the Beatles’ two most creative albums were Rubber Soul and Revolver, and across them both there are just three songs that are more than three minutes long – one by two seconds, one by seven, the third by 18.

By the time Mark Knopfler’s band Dire Straits started recording, longer songs were tolerated by radio, but the band’s debut single, Sultans of Swing, at 5 minutes 47 seconds, was still an exception. Not for Knopfler, though. He has never been inclined to short musical ideas.

Over the next few albums he would take full control of the Dire Straits project. He was already writing all the songs and did so on every album the band recorded, not a moment of it boxed in by commercial radio rules. The first album, produced by Steve Winwood’s brother Muff, had come out of the blue. Their second came out of Nassau in the Bahamas less than a year later. It was a planned project with the Muscle Shoals Sound Studios team of Jerry Wexler and Barry Beckett in the driver’s seat. This was their American label, Warner Bros, trying to guarantee a winner like its unexpected predecessor. It came with a hit single, Lady Writer, assumed by many to be a leftover from the first album sessions. Indeed, it sounded like it might have been recorded the same day as Sultans of Swing.

Despite the laidback Bahamas seaside setting, Wexler and Beckett brought aspects of that undeniable funk and soul feeling to the sessions even if the Alabama grit didn’t travel. Knopfler’s brawny baritone barks and coaxes the storylines out of the music while his 1962 Fender leads its way lyrically and apparently effortlessly through unlikely melodies.

By the next album, Making Movies, Dire Straits was down to a three-piece, Knopfler’s brother David stepping away from the never-ending cycle of album-tour-album leaving bassist John Illsley and drummer Pick Withers. Knopfler co-produced Making Movies with Jimmy Iovine who had just come from making Patti Smith’s Easter album from which came the hit single Because The Night. Perhaps because of Iovine’s connections with Bruce Springsteen and his band, E Streeter Roy Bittan joined in on keyboards, cheekily reprising those first ethereal lines from the storming Jungleland that closed Springsteen’s breakthrough Born To Run album, for the cinematic single Romeo and Juliet. It could well be Knopfler’s finest and most enduring moment.

If he sounds like he means it, it’s because he did. He had just broken up with Holly Vincent, the Chicago-born lead singer of the punk rock band Holly and the Italians. The US outfit had moved to England in 1978 where punk rock was flourishing, but their shallow, derivative sounds – a flimsy mix of punk and new wave – failed to find an audience, and in any case Holly was deported.

But Knopfler’s heart was broken. Good thing, too. It led to the creation of one of the finest rock songs ever written. And it is so beautifully rendered; the arpeggios pour from his 1937 National resonator guitar as he pulls together lyrical couplets as if he were Jimmy Webb.

When you can fall for chains of silver, you can fall for chains of gold

You can fall for pretty strangers and the promises they hold.

And then referencing West Side Story, that famous New York City retelling of Romeo and Juliet by Leonard Bernstein and Stephen Sondheim:

There’s a place for us, you know the movie song.

When you gonna realise it was just that the time was wrong, Juliet?

But Australians weren’t buying it. Nor, apart from the UK, where it rose to No.8 on the charts, was anybody else. It is one of the mysteries of music that such wonderful songs can sometimes pass us by.

But Dire Straits was always an albums band. They have had four No.1 albums in Australia, and the worst performer, Making Movies, came in at No.6. The other single from it, Tunnel of Love, reached only 62 on the local chart.

Knopfler took over production duties for 1982’s Love Over Gold and it showed. Side one had just two songs: Telegraph Road at just over 14 minutes and the single Private Investigations at almost seven. Reluctantly trimmed to just less than six minutes for the single release, it must surely rate as one of the most unusual rock songs. Over a glorious guitar melody Knopfler narrates the three verses. It has no chorus. There is a musical interlude of 50 seconds and then he’s back with another line: “Scarred for life, no compensations. Private Investigations.” Knopfler then narrates the pain of an untrusting relationship with distant chords until it all fades to nothing.

What could follow? What did is one of the remarkable stories in music. Australians have often defied international trends to adopt albums others overlooked. In 1972 we fell in love with Neil Diamond’s live double album Hot August Night. It had risen to No.5 in the US during a 19-week chart run, and managed only No.32 in a fortnight on the UK charts. But across 1973 and 1974 it lodged at No.1 for five months on the Australian charts, where it spent 224 weeks. Four years later, Australians adopted Boz Scaggs’ white soul classic Silk Degrees album and we were the only country in which it topped the charts, staying on across 1976 and beyond in a 102-week chart residency.

Forty years ago this month we similarly adopted Dire Straits’ Brothers in Arms. No great effort went into the album artwork, just a striking photo of that treasured 1937 guitar set against a beautiful sunset. But the album entered the charts at No.1 and stayed there for 22 weeks. Then the band toured here – a monster run of 55 dates for which 900,000 tickets were sold. A record broken only recently. Three singles, So Far Away, Money For Nothing and Walk Of Life, charted strongly and Brothers in Arms once again topped the charts for another consecutive 11 weeks. It was the first CD in the world to sell a million copies, and we did our bit; it was the biggest-selling record in Australia in 1985, and the second biggest-selling album the following year.


It has sold 1,225,000 copies here, which is 17 times Platinum. It lost out to Phil Collins’ No Jacket Required for the album of the year Grammy – a sin with which The Recording Academy must forever live. It did win Best Engineered Album, and the surround-sound re-release in 2006 won another. For Knopfler, that year was a whirlwind of concerts – 248 of them, starting on April 25, 1985, taking in the London end of Live Aid, and finishing in Sydney with 16 consecutive nights. The tour then moved to Queensland, Perth and even Uluru before returning to Sydney for another four-night stint at the old Entertainment Centre, ending a year and a day after it had begun.

The songs from Brothers in Arms came to Knopfler at various points, the title song arriving during the Falklands War from a comment his dad made in 1982: “My old man said ‘Isn’t it ironic that the Russians were brothers in arms with the fascist collection of generals in Argentina’. And that’s what put the phrase into my head.”

The band toured repeatedly leading up to the recording of Brothers in Arms. “We were working like trucks. It felt like a 24-hour day shift all the time,” he said, adding that he had to learn to compose songs on the road. “You’d see me walking up a hotel corridor with a chair with no arms on it. It was because there wasn’t one in my room, and I’d gone and found a chair I could play guitar on,” he said laughing at the memory.

The album was mostly recorded over an indulgent three-month stay at George Martin’s small AIR Studios on Montserrat in the West Indies. It had the latest 24-track digital tape machines and desks. But some of the new tapes were faulty and three tracks had errors on some channels. These were finished off at the Record Plant in Manhattan, giving Knopfler the chance to bring in the sophisticated brilliance of the Brecker brothers, Randy and the late Michael, to add trumpet and sax to Your Latest Trick. Two years before, Michael’s sax had also defined the glorious melody Knopfler wrote for Going Home, from the Local Hero soundtrack. “He’s just got New York in that sound in that sax. I don’t think he’s actually ever been equalled,” Knopfler said of the virtuoso who stood tallest among the narrow canyons of Manhattan’s brass-section giants.

English rock band Dire Straits performing in concert at Football Park, South Australia, in 1986. Lead singer guitarist Mark Knopfler, right

Terence Williams, who’d replaced Withers on drums, played on all the Montserrat recordings, but was uneasy about his contributions and flew home to make way for David Bowie’s drummer Omar Hakim, who’d earlier made his name in Weather Report. Williams’ fierce drum intro to Money For Nothing stayed, though. Not everything else did: weeks after Brothers in Arms was released, Knopfler and Sting performed the song at Live Aid with Knopfler changing the word “faggot” to “queenie” and later “maggot”.

Knopfler remembers the Australia legs of that tour well.

“I like the smell of Australia,” he said. “I loved it there. It became a bit of a break for us. I still treasure my time there.”

Dire Straits had toured in 1981 and 1983 and Knopfler had become friends with eccentric Sydney artist Brett Whiteley and had seen his dramatic, 18-panelled Alchemy work, which these days can sometimes be seen at the Art Gallery of NSW in all its 16m glory. Knopfler adapted some of it for the cover of his 1984 double live album of the same name.

“I think Brett was always very, very interested in what was happening in the rest of the world,” Knopfler says on a Zoom call from London. He agrees Whiteley was a bit bonkers. “Oh, yeah. Brett was Brett, bless him. He was bright, bright”, he says of the artist who died in 1992 at the age of 53 of a drug overdose. “I don’t ever think he was looking for a way out because absolutely he wasn’t.”

Sydney Entertainment Centre secret facts playing cards featuring artists, such as Dire Straits (pictured), who have played concerts at the venue. The Card set was found during the demolition of the Sydney Entertainment Centre. Picture: John Fotiadis

Just before he died, Whiteley had told Knopfler that “he was looking forward to his Paris show – which was tremendous – and he was looking forward to something else in painting and exhibitions. So he was really enthused by what he was doing. He just made a stupid mistake.”

Knopfler maintains many friends here in Australia and particularly likes Rushcutters Bay in Sydney, a short stroll down from Elizabeth Bay’s Sebel Townhouse where the band stayed for so many weeks as the city fell in love with seeing them live.

He tells me his sister, who lives in England, still buys him his favourite chocolate bar. “She goes out to a special candy sweet shop and buys those raspberry ripply things that get made in Australia and gives them to me. It’s the taste of Australia!”

I have never before heard a Cherry Ripe described in that way, but I understood.

MAKING MAGIC WITH BOB DYLAN

Mark Knopfler’s good friend from his early days was Steve Phillips, a blues enthusiast, guitarist, instrument builder and painter. In The Gallery, a song from Dire Straits’ first album, was about Phillips’ sculptor father Harry. Phillips’s mother was a painter. Phillips is the bloke who bought that National guitar which, of course, shines like the Mississippi Delta across so many of Knopfler’s songs.

That was in the 1960s when Phillips played around Leeds pubs and at one point was interviewed by a young reporter called Knopfler. They became friends and Phillips introduced his mate to older American blues. Years later, Phillips would record some of it, including songs written by Blind Willie McTell. “It was a tremendous broadening of my knowledge,” said Knopfler of playing and rehearsing with Phillips. “My finger-picking became more refined and more and more bluesy. I was going forward with jazz a little bit, going forward with chords on the one hand, and on the other going backwards into the past with the National and with other string instruments.”

It was timely training. By 1977, Knopfler had formed Dire Straits. They recorded their debut album in a deconsecrated church in London in February the following year and by early 1979 had the single Sultans of Swing climbing charts worldwide and were performing in small venues in Los Angeles where, on February 28, 1979, Bob Dylan went to see them at the late show at the Roxy.

Dylan’s concerts and records had been poorly received for a few years, and the man who believed Christ had visited him in a Tucson hotel the year before was bent on making changes to his life – and certainly to his sound. He went backstage and asked Knopfler to help out on his next album, which would be the Christian/folk/blues shock that was Slow Train Coming.

Knopfler was back to work on Dylan’s Infidels in 1983, an album they co-produced and which should have included perhaps Dylan’s most exquisitely perfect composition – a stark, melancholy, lyrically absolute song Dylan had just written called Blind Willie McTell. Just the two of them: Knopfler on an acoustic 12-string and Dylan on piano. The dull foot tapping throughout is Knopfler’s.

McTell himself had been recorded by musicologist John Lomax who, apart from teaching himself ancient Greek and Latin, realised before anyone else the invaluable contribution to American music that had been made by black American cowboys. He and his sons John Jnr and Alan spent their lives recording the music that was already fading from American life and whose practitioners were dead, in jail or otherwise silenced.

McTell’s thin volume of music – he died in 1959 – is a jaunty mix of blues and country and perhaps with hints of Scott Joplin. One of his songs to survive – with that distinctive clear, penetrating vocal – is Statesboro Blues which has been recorded by Taj Mahal, David Bromberg and was famously part of most Allman Brothers’ sets. English folkie Ralph May recorded it before changing his surname to McTell.

Dylan’s song traces lines in American history from slavery, chain gangs, hostile Confederate “rebel yells” to the Civil War and the burning of rundown plantations after it when the cheap labour that sustained them dried up.

For Dylan, “God is in His heaven” but man proved himself untrustworthy in the Garden of Eden through “power and greed and corruptible seed”. Knopfler too was struck by this composition’s undecorated beauty. “I love that song,” he said. Indeed, he had been discussing influences with Dylan, “who was big into Robert Johnson, and I said ‘do you listen to Blind Willie McTell?’. It could be that I put Blind Willie McTell into Bob’s head”.

Indeed it could. It’s not a song about McTell; he is just a device to link the verses together, and, unlike Johnson, McTell rhymes with lots of words. Dylan clearly thought he had never nailed the song he heard in his head. There are three versions of it about: the one with Knopfler that came out on the official Bootleg Series, another with the Rolling Stones’ Mick Taylor on slide, and a third yet-to-surface version, of which Knopfler said: “I did (it) with electric guitar and piano. I don’t know what happened to that, which was really spaced out.”

On May 5, 1983, Dylan and Knopfler recorded it a final time, a hauntingly spare rendition. Still Dylan was unhappy. He never returned to that song. It sounds like another manufactured myth of Dylanology to point out that it would have been Blind Willie’s 80th birthday.

Alan Howe

Here is the link:

https://www.theaustralian.com.au/arts/review/it-is-40-years-since-we-embraced-dire-straitss-brothers-in-arms/news-story/935d211b4a46c31fef97551c069ffb76

The track (Telegraph Road) is on YouTube here:

https://www.youtube.com/watch?v=jhdFe3evXpk

The soundtrack of my PhD! I was introduced to them by my research assistant and never looked back!

What with doing all my training I missed them until I started my PhD research, by which time they were “old hat”! A very great band IMVHO!

David.

Thursday, July 03, 2025

Finding New, Safe And Effective Drugs Is A Major And Important Health System Activity

This appeared last week:

Even a sceptic can see AI will transform the discovery of drugs

Tom Whipple

At every stage in the drugs pipeline there are ways in which AI could, in theory, make it less leaky.

10:00 AM, March 29, 2025

If you haven’t got idiopathic pulmonary fibrosis, ISM001-055 may not seem immediately interesting. In truth the molecule, designed to treat the lung condition, may not even seem that interesting if you do.

A study last month into its effectiveness in about 80 people was written in typically cautious language. The side effects weren’t awful. There were hints it improved lung function. The main conclusion was that there was enough evidence to justify collecting more evidence.

And the reason we should indeed be interested? It was found and designed using AI.

The drug is less an attempt by Insilico, the company behind it, to solve a rare lung disease than to tackle a widespread industry disease.

Today, getting a new drug costs £2 billion ($4.2bn). Of molecules considered promising enough to enter one end of the trial pipeline, 10 per cent emerge successful from the other. Computers have Moore’s law, the idea that performance doubles every couple of years. Pharma has Eroom’s law, the idea that drug discovery does the reverse.

ISM001-055 is a bet Eroom will end. Since the release of ChatGPT, we have used AI to write coursework, write generic PowerPoints and write bad poetry in the style of a pirate. In that time, in parallel, there has been a different, actually useful AI revolution. It seeks not to understand language but biology.

At every stage in the drugs pipeline there are ways in which AI could, in theory, make it less leaky. Currently, we seek drug targets through intuition and experimental data. AI, in theory, allows us to understand what is actually happening at the level of the protein. These are the molecular machines in cells whose structure determines their function but which for so long had structures that were fiendishly hard to determine.

AI, again in theory, allows us to map and parse the fiendishly complex metabolic pathways involving those proteins that, if they go wrong, are what we call “disease”.

Today, to get a drug, scientists look for plausible molecules we know of, screen them, then hope one does something to those proteins that is useful. AI potentially allows us to pick molecules from the 10⁸⁰-odd possibilities, choosing those most likely to lock into the protein we want.

Will it work? ISM001-055 is the furthest such an AI-made drug has got. Insilico is not alone in betting on others getting further. The UK government announced a project to gather data on how drugs interact with proteins, to train models to identify new drugs and “slash development costs by up to £100 billion”. If that sounds over-ambitious, Demis Hassabis, chief executive of Google DeepMind and Isomorphic Labs, Google’s AI drug company, goes further. He thinks AI will cure all diseases in a decade.

Absurd? It certainly sounds it to me. It also sounded absurd, though, when he suggested that within 10 years AI would be able to make a good guess at the structure of every protein, a grand challenge of biology that would all but guarantee someone got the Nobel. Well, today Hassabis is the someone with the Nobel.

But it doesn’t much matter if you think this has the whiff of hype. Currently, 90 per cent of drugs fail. If we get that to 80 per cent through AI we will have doubled our effectiveness. Humans are sufficiently bad at drug discovery that the bar for AI success is very low indeed.
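That closing arithmetic is worth making concrete. A minimal back-of-envelope sketch, using only the article's round numbers (90 per cent failure today, a hoped-for 80 per cent with AI) rather than any real pipeline data:

```python
# Back-of-envelope sketch of the article's closing arithmetic.
# Figures are the article's illustrative round numbers, not real pipeline data.

def candidates_per_approval(success_rate: float) -> float:
    """Average number of molecules entering trials per approved drug."""
    return 1.0 / success_rate

today = candidates_per_approval(0.10)    # 90% failure rate today
with_ai = candidates_per_approval(0.20)  # 80% failure rate with AI's help

print(today)            # 10.0 candidates per approved drug
print(with_ai)          # 5.0
print(today / with_ai)  # 2.0 -- "we will have doubled our effectiveness"
```

Halving the number of failed candidates carried per approval is why even a modest drop in the failure rate would be transformative against a roughly £2 billion cost per new drug.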

The Times

Here is the link:

https://www.theaustralian.com.au/health/medical/even-a-sceptic-can-see-ai-will-transform-the-discovery-of-drugs/news-story/3e85ccb81b45c40186f7dcbd4a589d48

All I can do is wish all the boffins the best of luck in finding the molecules that work safely and effectively!

Zillions of lives have been saved or enhanced by what has been done to date, and there is no reason for that progress to stop! AI is surely going to help all these efforts for the good of all!

David.

Wednesday, July 02, 2025

Clearly Chatbots Are Providing Something Much Worse Than Harmless Fun…

This appeared last week:

Experts alarmed over chatbot harm to teens

Marcus de Blonk Smith

12:00 AM, June 28, 2025

AI chatbots are more dangerous to Australian teenagers than social media and YouTube, with the technology being linked to the suicide and radicalisation of teens overseas, experts warn.

Australia late last year passed groundbreaking legislation aimed at preventing anyone under the age of 16 from registering to join social media platforms, including Facebook, Instagram, X and Snapchat – and this week there were calls for the popular video-sharing app YouTube to be added to the ban.

There is also a push among some experts to extend the legislation to include generative AI companions and bots, with concerns about the manipulative nature of the technology.

A Florida mother in 2024 sued chatbot creator Character.ai, claiming it contributed to her 14-year-old son’s suicide.

Megan Garcia alleged the AI chatbot app had “abused and preyed on my son, manipulating him into taking his own life”.

In 2023, a UK court sentenced a 21-year-old man to nine years in jail for breaking into Windsor Castle with a crossbow. The trial heard how Jaswant Singh Chail had exchanged more than 5000 messages with an online companion he’d named Sarai, which included discussions about how he wanted “to assassinate the queen of the royal family”.

In an interview with The Australian, leading men’s health expert and clinical psychologist Zac Seidler warned the technology was being “weaponised” against young men.

The fact this technology “is being weaponised against young men, and the fact that their likelihood and impulse control and risk taking makes them more at risk for suicide and self-harm is really, really concerning”, he said.

Dr Seidler said AI chatbot platforms, which were “popping up everywhere”, were “purposefully going to create harm” and “exacerbate symptoms of disconnection and depression over time among young men”.

“If it continues with the trends that we’re witnessing at the moment, I am really concerned that this could get very, very bad in ways we can’t even see right now,” he said.

Mhairi Aitken, a senior ethics fellow at the Alan Turing Institute, said AI technologies had not been designed in ways that were safe for children, adding the risks were, in many ways, “much more dangerous” than social media.

“The risk here is that it’s exacerbating or amplifying a lot of the risks and harms of social media,” she said.

“We should have learned the lessons … there’s a real risk that this is repeating (those same mistakes) and potentially in a worse way.”

Toby Walsh, a leading expert in artificial intelligence at the University of NSW, agrees.

He says “we should be very concerned” at the development of companion AI chatbots, adding that “social media should’ve been a practice run”.

“We’re about to run another experiment on our young children and potentially the consequences could be quite detrimental,” he said.

Professor Walsh urged politicians and regulators to closely consider banning the technology, which would “incentivise the tech companies to build age-appropriate spaces for young people”.

“There’s a lot of money at stake and there’s a lot of commercial pressure for them to do this,” he said.

“The only way to alleviate that pressure is to outlaw it if they’re not going to do the right thing.”

Meetali Jain, director and founder of the non-profit Tech Justice Law Project, said she was not in favour of banning the technology, but instead would like “to see a change in how Silicon Valley does business”.

“As the mother of two children myself, it’s really frightening,” she told The Australian.

“The more I’ve started to learn about companion chatbots, it terrifies me in ways that I never thought possible with social media.”

Ms Jain was contacted by Ms Garcia in May last year. She said the Florida mother had had a hard time being believed and finding legal representation, so she had decided “to jump into it” and represent her, helping to draft a complaint.

Since launching Ms Garcia’s case in October in the federal courts in Florida, other families had come forward, Ms Jain said, adding that she was now representing three families who were alleging that AI companion chatbots had caused their children “pretty grievous harm”.

Asked what concerned her most about AI chatbots, Ms Jain said it was the fact the technology was being developed “under our nose in plain view, and yet we didn’t even know about it”.

She added: “I had a parent asking me a few months ago, ‘Do I know any sex addiction therapists for her 11-year-old?’”

Australian Psychological Society president Sara Quinn said she was “gravely concerned” about the potential harms of the technology for young people.

Dr Quinn called for a “comprehensive framework in Australia” and “more rigorous oversight from tech companies” to ensure that these tools were safe for children.

Speaking at the National Press Club on Tuesday, eSafety Commissioner Julie Inman Grant flagged that Australia’s social media ban would not address emerging AI harms for young people, but said the unregulated use of AI companions was of particular concern.

“The rise of powerful, cheap and accessible AI models without built-in guardrails or age restrictions are a further hazard faced by our children today,” she said.

In a statement, Ms Inman Grant told The Australian that eSafety had “been monitoring the rise in popularity” of AI chatbots and had “taken a number of steps to inform parents about the risks”.

“It is important that parents are aware these apps are out there and talk to their children as much as possible about their online activities,” she said.

Ms Inman Grant added that work was under way “to protect children from online harms associated with generative AI”.


Here is the link:

https://www.theaustralian.com.au/nation/experts-alarmed-over-chatbot-harm-to-teens/news-story/7d721c70a1ff1991989f103ec46fe20d

All in all a salutary tale about just what can go wrong with technology if implemented without careful observation of the outcome(s).

The pace of innovation is such that I suspect we will always be running behind the consequences of implementation – rather than being ahead of the curve!

David.

Tuesday, July 01, 2025

This Has To Really Be The Most Awful Potential Fate Hanging Over Any Family

This appeared last week:

Three Siblings, One Fatal Gene: A Family’s Fight Against Early-Onset Alzheimer’s

Among members of the Richardson family who carry a mutation in the PSEN1 gene, the average age when symptoms start is unthinkably young.

Dominique Mosbergen

Dow Jones

Hannah Richardson is hopeful about her future and its endless possibilities. But the 24-year-old’s plans are clouded by an unthinkable reality – there is a 50 per cent chance she will develop Alzheimer’s disease in her 30s.

Hannah’s family has a history of a rare genetic mutation that, when inherited, virtually guarantees that the carrier will die of an aggressive form of Alzheimer’s early in life. No drug has been found to stop it. But now researchers are exploring a new avenue: Could pre-emptive treatment slow or even halt the memory-robbing disease in people at high risk of developing it?

Hannah and her two siblings will help researchers test that theory. They are enrolling in a new clinical trial led by doctors at the Washington University School of Medicine. As part of the trial, the siblings will finally find out if they carry the fatal gene.

“I don’t know if being in the trial is going to save me or my siblings. But in my head, it’s the least I can do. Research is how cures are found,” said Hannah, who dreams of becoming a physician assistant and is applying to graduate programs. Her brother Jacob, 22, and sister Rylee, 19, are both in college.

Unlike most cases of Alzheimer’s, which are unpredictable, “in this population we know who will develop the disease and when they will develop it,” said Heather Snyder, senior vice president of medical and scientific operations at the Alzheimer’s Association, a major funder of the trial.

Doctors have identified over 300 inheritable genetic mutations that cause early-onset Alzheimer’s. These rare genes account for less than 1% of people with the condition, but researchers say that studying families like the Richardsons can offer insights into how to prevent and treat Alzheimer’s in everyone.

Among the people in Hannah’s family who carry a mutation in what is known as the presenilin-1, or PSEN1, gene, the average age when Alzheimer’s symptoms start is 39, according to the family. The disease is aggressive once symptoms appear and the decline is swift, the family said.

“I call it the monster,” said Mary Salter, Hannah’s grandmother. Hannah’s grandfather and three of his four brothers died of Alzheimer’s in their early 40s, soon after developing symptoms. Hannah’s uncle died last year of the disease at the age of 44. And Hannah’s mother, Carrie Richardson, started showing subtle signs of the disease in her early 40s. Now at age 44, Carrie has started to decline mentally.

Carrie’s children remember vividly the day in 2012 when she learned she was a carrier of the PSEN1 mutation. Her eyes were red and puffy when she picked the kids up from school. Pressed by Hannah to tell them why, a sobbing Carrie told them the news in the car. The siblings, who were all under 12, remember crying, too, though not fully understanding why.

Today, Carrie has memory lapses and sometimes struggles to communicate, her children say. Her worsening symptoms forced her to quit her job as a preschool teacher and this month, she started the process of leaving her own home to move in with Mary.

Carrie and her brother enrolled in a clinical trial that began at WashU Medicine in 2012. That trial tested whether the early use of experimental drugs that target a sticky protein in the brain known as amyloid could slow the progression of Alzheimer’s in people who carried PSEN1 mutations and other rare genes. Hannah’s uncle hadn’t been able to join an extension of the trial because his symptoms became too pronounced, but her mum continues to be a participant and receives a bimonthly infusion of an antiamyloid drug.

In a paper published in Lancet Neurology in March that detailed interim results of the extended trial, WashU Medicine researchers said treating people like Hannah’s mum with antiamyloid drugs before Alzheimer’s symptoms began delayed the onset of the disease — in some cases lowering participants’ risk of developing symptoms by 50 per cent.

The first antiamyloid drugs were approved by the Food and Drug Administration in 2021. These drugs clear accumulations, or plaques, of amyloid in the brain, which researchers once thought could be the root cause of Alzheimer’s. But some doctors have since questioned this hypothesis, as well as whether the benefits of these treatments outweigh their risks.

The drugs don’t stop Alzheimer’s in its tracks and though they have been shown to reduce cognitive decline in some large clinical trials, the slowing was modest at best, said Dr. Scott Small, a Columbia University neurologist.

“The science now predicts that amyloid plaques are not the root source of Alzheimer’s,” Small said.

Swelling and bleeding in the brain is a possible side effect of antiamyloid drugs. Rarely, people have died from this complication. In about half of all patients in the WashU Medicine study, some brain swelling and microbleeds were detectable in MRI scans, though researchers said about 95% of participants had no symptoms from the medications.

Antiamyloid drugs remain the only FDA-approved treatment available that can change the course of Alzheimer’s. Dr Randall Bateman, a neurologist who led the WashU Medicine study, said early use of the treatments could improve their efficacy and safety. He said he remains optimistic that removing — or even preventing altogether — the build-up of amyloid could slow or even halt the progression of the disease.

Hannah and her siblings believe that antiamyloid drugs have helped stall their mum’s Alzheimer’s. That belief spurred them to enrol in a similar WashU Medicine clinical trial — one that seeks to treat carriers of rare genes with a different experimental antiamyloid drug, called remternetug, many years before they develop symptoms and, in some cases, even before amyloid has built up in their brains. Drugmaker Eli Lilly, which makes the drug, said it works similarly to earlier antiamyloid treatments but has the benefit of being administered as an injection rather than an intravenous infusion.

Rylee Richardson, a cheerleader and rising sophomore at Tulane University, said she and her siblings thought long and hard about participating in the trial. They ultimately decided that it was worth the potential risks.

“I will do anything that gives me and my siblings a better chance,” she said. Until now, Rylee and her siblings had decided not to find out if they carry the PSEN1 mutation out of fear that it could up-end their lives. But if they want to participate in the extended phase of the trial, they will have to learn the truth. The initial phase will last for two years and focus on basic efficacy and safety questions, said Dr. Eric McDade, a colleague of Bateman’s who is leading the trial.

Only people who test positive for a high-risk genetic mutation will be randomised to receive either a low dose of the active drug or a placebo. Those who test negative can stay in the trial if they choose not to find out their genetic status, and would be given a placebo.

Dr Richard Isaacson, a neurologist at the Institute for Neurodegenerative Diseases who isn’t involved in the trial, said he believes antiamyloid drugs could be helpful in slowing cognitive decline when used preventively in patients at risk of Alzheimer’s. He himself prescribes them to patients at his clinics in New York and Florida, but only in people who already have some amyloid build-up. He questioned whether it made sense to use these drugs in people who don’t have amyloid at all.

“That is not something that sits well with me,” said Isaacson. WashU Medicine researchers said that in animal experiments, antiamyloid treatments were most effective when used before evidence of amyloid build-up. But such an experiment has yet to be conducted in people.

Alzheimer’s has been a spectre that has haunted the Richardson siblings since they were children. The three of them made a pact years ago to not have children if they learned that they carried the Alzheimer’s gene.

“We decided that we would be the last three. No one has to suffer anymore from our family,” Hannah said. “We want to do everything we can to stop it for us and everybody else.”

Wall Street Journal

Here is the link:

https://www.theaustralian.com.au/health/medical/three-siblings-one-fatal-gene-a-familys-fight-against-earlyonset-alzheimers/news-story/267367843d5555748d7e957c9fe0a6ab

This is an awfully sad story which it seems those involved are approaching with considerable courage and insight.

It is hard to imagine being caught by such a horrible fate and I find the whole set of circumstances deeply confronting.

I am not sure I would be as stoic if I was caught in such a saga!

David.

Sunday, June 29, 2025

I Am Not Exactly Sure This Is A Path We Want To Travel Down!

This appeared last week:

The brain implant revolution is here. Why is its inventor Tom Oxley terrified?

Is our cognitive liberty at risk in Elon Musk’s new era of human enhancement? Amid warnings that brain implants could dismantle the concept of self, experts are joining forces to establish the rules for the future.

Natasha Robinson

10:30PM June 27, 2025.

The Weekend Australian Magazine

In Australia, the announcement landed quietly, buried in the technology pages of newspapers. The scant column inches ­devoted to this harbinger of the true AI revolution belied its significance. But the man at the centre of the crest of an era of superintelligence is in no doubt of what is coming. It infects his dreams.

“It’s just blowing me away, what is coming,” says Australian neurologist Tom Oxley, the co-inventor of the world’s most innovative brain-computer interface (BCI) that is at the forefront of the world’s progression towards cognitive artificial intelligence. “It’s phenomenal. The next couple of decades are going to be very hard to predict. And every day, I’m increasingly thinking that BCIs are going to have more of an impact than anyone realises.”

Brain-computer interfaces are tiny devices inserted directly into the brain, where they pick up electrical signals and transmit them to an external computer or device where they are ­decoded algorithmically. The subject of a cover story in this Magazine in 2023, a BCI called the Stentrode, developed at the University of ­Melbourne by Oxley’s company Synchron, is inserted into the brain non-invasively through the jugular vein.

In 2022, Synchron, which initially received funding from the US Defense Advanced ­Research Projects Agency (DARPA) and the Australian Government, and later attracted ­investment from the likes of Bill Gates and Jeff Bezos, had become the first company in the world to be approved by the US Food and Drug Administration to conduct a human trial of its BCI in the US – outpacing Elon Musk’s company Neuralink, which is operating in the same space. Since then the Stentrode has been implanted into 10 people with neurodegenerative disease, enabling them to control devices such as computers and phones with their thoughts.

While Oxley and his company co-founder Nicholas Opie’s vision for the company remains dedicated to restoring functionality in those with paralysis, Oxley is realistic that the technology will in coming years have wider ­application and demand: an era of radical human enhancement.

A seismic development in Synchron’s ­evolution occurred in March, when Oxley ­announced a partnership between the company and chipmaking giant Nvidia, to build an AI brain foundation model that learns directly from neural data. The model, dubbed Chiral, connects Synchron’s BCI – developed in Melbourne – with Nvidia’s AI computing platform Holoscan, which allows developers to build AI streaming apps that can be displayed on Apple’s Vision Pro spatial computer, the tech giant’s early foray into extended reality.

“A core human drive, encoded in our DNA, is to improve our condition,” says Oxley, a professorial fellow at the University of Melbourne’s department of medicine and now based in New York City. “For patients with neurological ­injury, this means restoring function. In the ­future, it seems inevitable that it will include enhancement [in the wider population]. BCIs will enable us to go beyond our physical limitations, to express, connect and create better than ever before. Neurotechnology should be a force for wellbeing, expanding human potential and improving quality of life.”

But the collision of the development of BCIs with the now-supercharged development of AI has ramifications almost beyond imagining. Currently, AI computational systems like ChatGPT learn from data, with machine ­learning technology modelling neural networks trained by large language models from text drawn from across the ­internet and digitised books.

The prospect of AI platforms accessing data streams directly out of the brain opens up a future in which our private thoughts could be made transparent. While the US Food and Drug Administration is tightly controlling the application of AI in the BCIs it will assess and approve, the prospect of these devices directly accessing neural data ­nevertheless opens up great potential for ­surveillance, commercial exploitation, and even the loss of what it means to be human.

“Liberal philosophers John Stuart Mill and John Locke and others, but even back further to ancient Eastern philosophers and ancient Western philosophers, wrote about the importance of the inner self, of cultivating the inner self, of having that private inner space to be able to grow and develop,” says Professor Nita Farahany, a leading scholar on the ethical, legal and social implications of emerging technologies.

She is working closely with Oxley on establishing an ethical framework for the future of ­neurotechnology. “It’s always been one of the cornerstones of the concept of liberty. The core concept of autonomy, I think, can be deeply ­enabled by neurotechnology and AI, but it also can be incredibly eroded.

“On the one hand, I think it’s incredible to enable somebody with neurodegenerative ­disease – who is non-verbal, or has locked-in syndrome – to reclaim their cognitive liberty and their self-determination, and to be able to speak again. I think that’s incredibly exciting. On the other hand, I find it terrifying.

“How do we make sure the AI interface is acting with fidelity and truth to the user and their preferences?”

Two decades ago, American inventor and ­futurist Ray Kurzweil predicted a moment in human history that he dubbed the “singularity”: a time when AI would reach such a point of ­advancement that a merger of human brains and the vast data within cloud-based computers would create a superhuman species. ­Kurzweil has predicted the year 2029 as the point at which AI will reach the level of human intelligence. The combination of natural and artificial intelligence will be made possible by BCIs which will ultimately function as nanobots, Kurzweil recently said in an interview; he reckons human intelligence will be expanded “a millionfold”, profoundly deepening awareness and consciousness.

Billionaire Elon Musk – whose company Neuralink is also developing a BCI – believes AI may surpass human intelligence within the next two years. Musk, who has previously described AI as humanity’s biggest existential threat, has warned of catastrophic consequences if AI gets out of control. He has stressed that AI must align with human values, and is now positioning BCIs as a way to mitigate the risks of artificial superintelligence. He believes BCIs hold the key to ensuring that the new era of AI – in which the supertechnology could become sentient and even menacing – does not destroy humanity. Musk’s vision for Neuralink’s BCI is to enhance humankind to offset the existential risks of artificial ­intelligence – a theory dubbed “AI alignment”. It’s an ­outlook in step with transhumanist philosophy, which holds that neurotechnology is the gateway to human evolution, and that technology should be used to transcend our physical and mental limitations.

But Oxley is at odds with Musk on AI alignment – and believes that using BCIs as a vehicle to ­attempt to match the power of AI is ethically problematic. He’s focused instead on laying the groundwork to ensure the future of AI does not undermine fundamental human liberty.

“BCIs can’t solve AI alignment,” Oxley says. “The problem isn’t bandwidth, it’s behavioural control. AI is on an exponential trajectory, while human cognition – no matter how enhanced – remains biologically constrained. AI safety depends on governance and oversight, not plugging into our brains. Alignment must be addressed in a paradigm where humans will never fully comprehend every model output or decision. This represents the grand challenge of our time, yet it is not one that BCIs will fix.”

Almost two years after I first reported on the development of Synchron’s ­pioneering, non-­invasive BCI, I’m sitting down with Oxley at a cafe in Sydney; he’s on a brief trip home from New York to see family. It’s difficult to reconcile his achievements with the unassuming, youthful 44-year-old sitting opposite, as he grapples with the enormous weight of responsibility he now feels around his invention.

“Starting to understand that there are going to be mechanisms of subconscious thought process detection enabled by BCIs has made me realise that there is a danger with the technology,” Oxley says. “I am cautiously optimistic about the trajectory in the US, which I think is going to be gated by the FDA [Food and Drug Administration], which is kind of playing a global role [in regulating safety]. But there’s work to be done. Algorithms already manipulate human cognition. Integrating them directly into our brains puts us at risk of AI passively shaping our thoughts, desires and decisions, at a level we may not even perceive.

“I think this technology is just as likely to make us vulnerable as it is to help us, because you expose your cognitive processes that up until this point have been considered sacrosanct and very private. The technology is going to enable us to do things that we couldn’t previously do, but it’s going to come with risk.”

The magnitude of that risk, and the burden of conscience and intellect that comes with being an agonist in opening up the possibility of what AI pessimists fear could be a dystopian future, has triggered Oxley to shift gear from ­entrepreneur and inventor to the ethical ­steward of a cutting-edge tech company. He’s at the forefront of worldwide efforts to embed the right to cognitive liberty within a set of governing principles for the future of neurotechnology. It’s an extraordinary gear shift for the neurologist, whose career as an inventor was initially purely focused on wanting to improve the lives of patients who were paralysed. Now he finds himself leading what is essentially a burgeoning tech company valued at about $US1 billion.

“I did have a sense starting out that what we were doing was going to be hugely impactful,” he says. “I was looking to commit my intellectual, academic life to something that I thought was going to be impactful on a big scale. But the way it’s morphing and evolving now is quite humbling and exciting.

“I had an epiphany a couple of months ago that probably the most important thing I can do right now is to try and get the ethics of all of this right. That’s where I find myself right now. It’s in my dreams. It’s in my subconscious. It’s become probably the most important thing that I want to do.”

Cognitive liberty is a term popularised by Farahany, who says the concept of rights and freedoms embedded within liberal philosophy and democratic governance must be urgently updated and reimagined in the digital era.

“The brain is the final frontier of privacy. It has always been presumed to be a space of freedom of thought, a private inner sphere, a secure entity,” Farahany says. “If you think about what the concept of liberty has meant over time, that privacy and the importance of the cultivation of self is at the core of the concept of human autonomy.

“The right to cognitive liberty in the digital age is both the right to maintain mental privacy and freedom of thought, and the right to access and change our brains if we choose to do so. If we have structures in place, like a base layer that’s just reading neural data and a guardian layer that is adhering to the principles of ­cognitive liberty, we can align technologies to be acting consistent with enabling human flourishing. But if we don’t, that private inner space that was held sacred from the ­earliest philosophical writings to today – the capacity to form the self – I think will collapse over time.”

The future of AI-powered neurotechnology is already moving apace. Nvidia – which makes the chips used worldwide by OpenAI systems, and which now has a market capitalisation of $A5.47 trillion, closely rivalling Microsoft at the top of the leaderboard of the world’s largest companies by market cap – in January announced its predictions for the future of AI in healthcare. It named digital health, digital biology including genomics, and digital devices including robotics and BCIs as the most significant new emerging technologies. That reflected bets already placed by the market: the BCI ­sector is now powered by at least $33 billion in private investment.

Neural interface technologies are already hitting the consumer market prior to BCIs coming to fruition. Apple has patented a next-generation AirPods Sensor System that integrates electroencephalogram (EEG) brain sensors into its earphones. The devices’ ability to detect electrical signals generated by neuronal activity, which would be transmitted to an iPhone or computer, opens up the ability to ­interact with technology through thought ­control, and would give users insights direct from the brain into their own mental health, productivity and mood. Meta is working on wristwatch-embedded devices that utilise AI to interpret nerve impulses via electromyography, which would enable the wearer to learn, adapt and interact with their own mental state.

But the prospect of AI accessing neural data directly via BCIs is a whole new ball game. Transmitting neural data direct from the brain to supercomputers means an individual’s every thought – even subconscious thoughts one is not even aware of – could be made transparent, akin to uploading the mind. Beyond that, our thoughts could be manipulated by powerful algorithms that open up the possibility of a terrifying new era of surveillance capitalism or even coercive state control. “Our last fortress of privacy is in jeopardy,” writes Farahany in her seminal book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. “Our concept of liberty is in dire need of being updated.”

Farahany describes the early neurotech devices that are beginning to hit the market as “harbingers of a future where the sanctity of our innermost thoughts may become accessible to others, from employers to advertisers, and even government actors”.

“This is how we find ourselves at a moment when we must be asking not just what these technologies can do, but what they mean for the unseen, unspoken parts of our existence,” Farahany writes in her book. “This is about more than preventing unwanted mental ­intrusions; it is a guiding principle for human flourishing on the road ahead. We should move quickly to affirm broader interpretations of self-determination, privacy and freedom of thought as core components of cognitive liberty.”

The rise of social media, with its rampant ­algorithmic-enabled commercial exploitation, surveillance without consent and devastating impacts on human mental states, has already provided a glimpse of the consequences if the world does not achieve a critical balance ­between the positive potentials of AI-powered neurotechnology and the risks. Human concentration spans have been shredded by social media models that exploit dopamine-driven addiction to likes and attention; the mental health of many young people has deteriorated as a consequence, and data has been harvested and monetised on a massive scale. Oxley is ­determined not to let BCIs go in the same ­direction.

“The dopaminergic drive within a human makes us very vulnerable,” says Oxley. “And if AI opens up to market forces and is able to prey on the weakness of humans, then we’ve got a real problem. There is a duty of care with this technology.”

Oxley is now co-chairing, with Farahany, the newly formed Global Future Council on Neurotechnology, which convenes more than 700 experts from academia, business, government, civil society and international organisations as a time-bound think-tank. The Council – an ­initiative of the World Economic Forum – is concerned with ensuring the responsible development, integration and deployment of neurotechnologies including BCIs to unlock new avenues for human advancement, medical treatment, communication and cognitive augmentation.

UNESCO is also drafting a set of cognitive AI principles, while some Latin American countries have already moved to direct legislative regulation.

Oxley has now put forward his own vision for addressing the existential risks to human autonomy, privacy and the potential for discrimination. He has structured his neurotechnology ethical philosophy around three pillars: Human Flourishing, Cognitive Sovereignty and Cognitive Pluralism.

“Innovation should prioritise human agency, fulfilment, and long-term societal benefits, ensuring that advancements uplift rather than diminish human dignity,” Oxley stated in a public outline of his ideas in a LinkedIn post earlier this year. “Regulation should enable ­responsible progress without imposing unnecessary restrictions that limit personal autonomy or access to life-enhancing technologies. If we get it right, BCIs would become a tool for human expression, connection and productivity, enabling humans to transcend physical limitations.

“Individuals must have absolute control over their own cognitive processes, free from ­external manipulation or coercion. Privacy and security are paramount: users must own and control their brain data, ensuring it is protected from exploitation by corporations, governments, or AI-driven algorithms. BCIs must ­prevent subconscious or direct co-option and safeguard against covert or overt AI influence in commerce and decision-making. This may require decentralised, user-controlled infrastructure to uphold cognitive autonomy. Above all, BCIs should enhance personal ­agency, not erode it.”

If cognitive sovereignty cannot be guaranteed, AI-driven coercion and persuasion looms as a menacing prospect. “Advanced algorithms could exploit subconscious processes, subtly shaping thoughts, decisions and emotions for commercial, political or ideological agendas,” Oxley says. Rather, BCIs should enhance human agency, ensuring AI is “assistive, not intrusive… empowering individuals without shaping their decisions or subconscious cognition”.

Neither Oxley nor Farahany are in favour of centralised regulation. They favour “decentralised cognitive autonomy ... a user-controlled, secure ecosystem [which] ensures that thoughts, choices and mental experiences remain free from corporate or governmental influence.”

Oxley is also wary of the rise of “a singular model of intelligence, perception or cognition” that could promote tiered class systems, the rise of a “cognitive elite”, or deepen social inequalities.

“Cognitive diversity, much like neurodiversity, must be protected and upheld,” he says. “This includes addressing cultural discrimination between users and non-users of neurotechnology, particularly as enhancements become more widespread. Access to neurotechnologies must be democratised, ensuring that enhancements do not become a tool of exclusion but a potential means of empowerment for all.

“BCIs will either empower individuals or risk becoming tools of control. By prioritising human flourishing, cognitive sovereignty and cognitive pluralism, we can help ensure they enhance autonomy and creativity. There is much work ahead,” Oxley says.

That work must begin, says Farahany, with a worldwide collective effort to reshape core ­notions of liberty for the modern age.

“Having an AI that auto-completes our thoughts, that changes the way we express ourselves, changes our understanding of ourselves as well,” she says. “The systems that are sitting at the interface between this merger of AI and BCIs don’t have our empathy, don’t have our history, don’t have our cultural context and don’t have our brains, which have been built to be social and in relation to each other. And so I worry very much about how much of what it means to be human will remain as we go forward in this space.

“How much of what it means to be human will remain is up to us, and how we design the technology and the safeguards that we put into place to really focus on enhancing and enabling human self-determination. But I think that unless we’re thoughtful, that isn’t an inevitable outcome. When our private inner sphere becomes just as transparent as everything else about us, you know, will we simply become the Instagram versions of ourselves?”

Oxley remains confident that we can keep the radical advancements that he is facilitating in check. “I think that if you look back at history, humanity has been through multiple periods of revolution and there was always this fear that things were about to go downhill, and they didn’t,” he says. “I think we stand on the precipice of the potential to expand the human experience in an incredibly powerful way. The thing that I’m most excited about with this technology is that it could help us overcome a lot of pain and suffering, and especially the human challenge of expressing our own experience. I think BCIs will ultimately enhance what it means to be human.”

Here is the link:

https://www.theaustralian.com.au/weekend-australian-magazine/the-brain-implant-revolution-is-here-why-is-its-inventor-tom-oxley-terrified/news-story/aa71edf8b3adc0ab971b0afbe352da5a

This is, I suspect, an optimistic take on where progress stands just now! I fear where we are headed is rather unplanned and vulnerable to being led down all sorts of paths that may lead to very negative outcomes.

I suspect working out what to do with, and how to safely regulate, these advances is well above the paygrade and capabilities of most global and Australian regulators, which has to be a pretty worrying situation!

Just how such advances should be regulated, and how that can be done, is a question I do not see any clear answers to right now, and I suspect most of Government is totally clueless as to what to do next, if they even realise there might be a problem!

I look forward to some useful comments and suggestions! In the meantime I would avoid wiring too many brains with plugs etc.

David.

AusHealthIT Poll Number 800 – Results – 29 June 2025.

Here are the results of the recent poll.

Do You Believe Iran Can Survive Intact Having Found Itself In A Major War In The Middle-East With The US and Israel?

Yes                                                                     13 (54%)

No                                                                      11 (46%)

I Have No Idea                                                    0 (0%)

Total No. Of Votes: 24

A tiny majority reckons Iran has a future – many do not. A close-run thing as they say!

Any insights on the poll are welcome, as a comment, as usual!

Not bad voter turnout – question must have been too easy. 

0 of 24 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, June 27, 2025

This Is Really A Wonderful Australian Software Story! Pity About The Trolls

This appeared a few days ago:

Patent ‘trolls’ come for Canva as it prepares for share sales

Amelia McGuire Business reporter

Jun 22, 2025 – 8.00pm

Canva is being targeted by a Canadian serial litigant alleging the Australian design software giant has stolen its patents for an artificial intelligence voice generator that is now integrated into its platform.

Cedar Lane Technologies has made similar claims against hundreds of companies over the last five years including Amazon, Zoom and Huawei. Its lawyer, Isaac Rabicoff, has lodged separate patent claims against Canva in Texas on behalf of other companies over the last two months.

While Cedar Lane is regarded as a nuisance litigator – a California court found it had made “objectively frivolous and misleading arguments in defending their defective filings” in 2020 – it has had remarkable success in squeezing settlements and payments out of Silicon Valley giants.

The litigation comes as Canva rolls out a suite of AI tools, making a number of significant acquisitions in a bid to bolster its design software platform and attract more paying users. Last week, Canva said 25 million people or a 10th of its users were now paying, an increase of 5 million since September.

An increase in paying users is key for the company as it prepares for a long-awaited float. Canva confirmed on Sunday that it would facilitate a sale of employee and founder shares at the end of the year, as first revealed by The Australian Financial Review’s Street Talk column last week.

Secondary share sales are popular with mature private tech firms because they allow employees and investors to cash out instead of waiting for a public listing. Potential investors have been told the sale would amount to up to $US500 million ($773 million) worth of shares and would value Canva at $US37 billion, The Information reported at the weekend.

Canva’s peak valuation was $US40 billion at the height of the tech bubble in 2021, when interest rates were near zero and there was a lot of capital flowing into tech start-ups. Canva’s last share sale was in October when it said it had increased its valuation by $US6 billion to $US32 billion.

While Canva has not committed to a float, co-founder Cameron Adams said last week the company had been “IPO ready” for a long time. “It’ll happen when it makes sense but we’re not any closer to firming it up,” he said.

Canva’s platform is used for everything from marketing and sales presentations to educational audiobooks and YouTube clips, and the platform’s voice generator lets users add audio to projects using an AI-powered tool that draws from hundreds of voices.

In a five-page filing lodged with the District Court in Texas earlier this month, Cedar Lane claimed that Canva’s AI voice generator used the company’s “system, method and apparatus” for generating audio.

In response, Canva said the claims were baseless and “strongly opposed” them. “This kind of litigation misuses laws intended to support genuine creativity, instead using them to pressure companies into a quick settlement,” a Canva spokesman said. “Allowing actions like this to go unchallenged risks normalising a damaging trend across the broader industry, and we intend to vigorously defend ourselves.”

Patent litigation has been booming in the United States, often brought by companies with little public information disclosed.

Intel, for instance, was sued in 2019 by VLSI Technologies, a semiconductor producer that has actually been defunct for decades. The new VLSI business, according to the Wall Street Journal, is backed by a New York-headquartered hedge fund, and tied up Intel in years of litigation.

Cedar Lane has been described in congressional hearings as a business whose main source of revenue is suing larger companies, a practice known as patent assertion. Such businesses are known by detractors as “patent trolls”.

Another of the firms suing Canva in the Texas courts, Hyperquery, has a similar business to Cedar Lane. In one week alone earlier this year, it launched litigation against another Australian-headquartered software giant, Atlassian, along with ByteDance and Sony. It is also suing Xero, the ASX-listed cloud-based accounting software business.

Here is the link:

https://www.afr.com/technology/patent-trolls-come-for-canva-as-it-prepares-for-share-sales-20250622-p5m9bg

I suppose these sorts of commercial attempts at profiting from others’ hard work are just inevitable here, as in other areas of endeavour!

Sad that, but Canva is a great Australian story!

David.