Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Thursday, April 04, 2024

This Is A Fantastic Story On How One Of Our Most Important Tech Companies Emerged.

This appeared last week:

Morris Chang turned 55. Then he started the world’s most important company

By Ben Cohen

The Wall Street Journal

1:24AM March 31, 2024

The world’s most valuable tech companies were founded in dorm rooms, garages and diners by entrepreneurs who were remarkably young. Bill Gates was 19. Steve Jobs was 21. Jeff Bezos and Jensen Huang were 30.

But what might just be the world’s most invaluable company was founded by Morris Chang when he was 55 years old.

Never has anyone so old created a business worth so much as Taiwan Semiconductor Manufacturing Company, known simply as TSMC, the chip manufacturer that produces essential parts for computers, phones, cars, artificial-intelligence systems and many of the devices that have become part of our daily lives.

Mr Chang had such a long career in the chip business that he would have been a legend of his field even if he’d retired in 1985 and played bridge for the rest of his life. Instead he reinvented himself. Then he revolutionised his industry.

But he wasn’t successful despite his age. He was successful because of his age. As it turns out, older entrepreneurs are both more common and more productive than younger founders. And nobody personifies the surprising benefits of mid-life entrepreneurship better than Mr Chang, who had worked in the US for three decades when he moved to Taiwan with a singular obsession.

“I wanted to build a great semiconductor company,” he told me.

What he built was unlike any existing semiconductor company. You probably use a device with a chip made by TSMC every day, but TSMC does not actually design or market those chips.

That would have sounded completely absurd before the existence of TSMC. Back then, companies designed chips that they manufactured themselves.

Mr Chang’s radical idea for a great semiconductor company was one that would exclusively manufacture chips that its customers designed. By not designing or selling its own chips, TSMC never competed with its own clients. In exchange, they wouldn’t have to bother running their own fabrication plants, or fabs, the expensive and dizzyingly sophisticated facilities where circuits are carved on silicon wafers.

The innovative business model behind his chip foundry would transform the industry and make TSMC indispensable to the global economy.

Now it’s the company that Americans rely on the most but know the least about. Morris Chang isn’t a household name, either, but he should be.

TSMC’s founder has shaped the chip business over the past 70 years and still finds himself playing an important role today. His longevity puts him right at the top of the list of the people most responsible for cultivating the world’s most vital technology.

“Hardly anyone has been more influential,” says Chris Miller, the author of the book Chip War.

I recently spoke with Mr Chang by video chat to find out what others can learn from his adventures as a middle-aged entrepreneur and why it’s never too late to try something new.

As the demand for chips intensifies and US-China relations deteriorate, the world increasingly depends on TSMC, and there are lots of questions about the future of this company that Mr Chang founded on a geopolitically vulnerable island. But the topic of our conversation was TSMC’s past.

Mr Chang, now 92, officially retired as TSMC’s chairman in 2018, but the white-haired pioneer was sitting at his desk in a suit and tie as he sipped from a glass of Diet Coke during our 90-minute interview.

I wanted to know more about his decision to start a new company when he could have stopped working altogether. What I discovered was that his age was one of his assets. Only someone with his experience and expertise could have possibly executed his plan for TSMC.

“I could not have done it sooner,” he says. “I don’t think anybody could have done it sooner. Because I was the first one.”

Texas, then Taiwan

Long before he moved to Taiwan in middle age, Morris Chang moved to the US as a teenager.

Mr Chang was born in mainland China and had a peripatetic childhood as his family bounced around the war-torn country. When he fled to the US in 1949, America felt to him like paradise. He later became a US citizen.

Mr Chang grew up dreaming of being a writer — a novelist, maybe a journalist — and he planned to major in English literature at Harvard University. But after his freshman year, he decided that what he actually wanted was a good job.

He transferred to the Massachusetts Institute of Technology, where he studied mechanical engineering, earned his master’s degree and would have stayed for his PhD if he hadn’t failed the qualifying exam. Instead, he got his first job in semiconductors and moved to Texas Instruments in 1958.

Back then, chips were known as things made from potatoes. But he came along as the integrated circuit was being invented, and his timing couldn’t have been any better, as Mr Chang belonged to the first generation of semiconductor geeks. He developed a reputation as a tenacious manager who could wring every possible improvement out of production lines, which put his career on the fast track.

Three years after he moved to Dallas, the company sent him to Stanford University for his PhD in electrical engineering. This time, he aced the qualifying exam and returned as Dr Chang. By the late 1960s, he was managing TI’s integrated-circuit division. Before long, he was running the entire semiconductor group.

Mr Chang was such a workaholic that he made sales calls on his honeymoon and had no patience for those who didn’t share his drive. These days, TSMC is investing $US40 billion ($61.3bn) to build plants in Arizona, but the project has been stymied by delays, setbacks and labour shortages, and Mr Chang told me that some of TSMC’s young employees in the US have attitudes toward work that he struggles to understand.

“They talk about life-work balance,” he says. “That’s a term I didn’t even know when I was their age. Work-life balance. When I was their age, if there was no work, there was no life.”

Mr Chang climbed the executive ranks at TI, but he was passed over for top jobs and felt like he was being put out to pasture. He wanted TI to focus on semiconductors, but the company wanted to keep selling consumer products.

“Home computers and all that stuff,” he says. “That was a serious distraction and a serious diversion of corporate resources.” In 1983, once he accepted that he wouldn’t be promoted, and his company wasn’t going to bet on a market that he believed was the future, he quit Texas Instruments.

Almost immediately, he was hired by electronics manufacturer General Instrument as president and chief operating officer. Almost immediately, he realised that he’d made a huge mistake. “I was a mismatch — a complete misfit,” Mr Chang says. After one year, he quit General Instrument, too.

Now he was turning 54 and had no clue what he was going to do next. He knew he wanted to work again and had venture-capital offers that he might have accepted if Taiwan hadn’t beckoned. But he could afford to wait for a better opportunity.

Mr Chang says he wouldn’t have taken the risk of moving to Taiwan if he weren’t financially secure. In fact, he didn’t take that same risk the first time he could have.

In 1982, Mr Chang received a tempting job offer from a powerful Taiwanese official named K.T. Li, the man credited with orchestrating the country’s postwar economic development and galvanising the nation’s tech industry. He wanted Mr Chang to be the president of Taiwan’s leading tech institute and spin research into profit.

By then, Mr Chang knew that he wasn’t long for Texas Instruments. But his stock options hadn’t vested, so he turned down the invitation to Taiwan. “I was not financially secure yet,” he says.

“I was never after great wealth. I was only after financial security.” For this corporate executive in the middle of the 1980s, financial security equated to $US200,000 a year. “After tax, of course,” he says.

Mr Chang’s situation had changed by the time Mr Li called again three years later. He’d exercised a few million dollars of stock options and bought tax-exempt municipal bonds that paid enough for him to be financially secure by his living standards. Once he’d achieved that goal, he was ready to pursue another one.

He calls moving to Taiwan his “rendezvous with destiny,” but the truth is that nothing about TSMC was destined.

“There was no certainty at all that Taiwan would give me the chance to build a great semiconductor company, but the possibility existed, and it was the only possibility for me,” Chang says. “That’s why I went to Taiwan.”

He had spent most of his career in Texas and thought he would retire in the US after 15 years in Taiwan. That was almost 40 years ago.

When older is better

Is Morris Chang an outlier?

Not long ago, a team of economists investigated whether older entrepreneurs are more successful than younger ones. By scrutinising Census Bureau records and freshly available Internal Revenue Service data, they were able to identify 2.7 million founders in the US who started companies between 2007 and 2014. Then they looked at their ages.

The average age of those entrepreneurs at the founding of their companies was 41.9. For the fastest-growing companies, that number was 45. The economists also determined that 50-year-old founders were almost twice as likely to achieve major success as 30-year-old founders, while the founders with the lowest chance of success were the ones in their early 20s. Every shred of evidence led them to a counterintuitive takeaway.

“Successful entrepreneurs are middle-aged, not young,” they wrote in their 2020 paper.

This is not the image of startup founders that most people have in their minds. They are more likely to think of Steve Jobs tinkering in a garage or Mark Zuckerberg coding in his dorm room. Microsoft, Apple, Nvidia, Alphabet, Amazon and Meta Platforms had founders who were 30 or younger, and Silicon Valley’s venture capitalists throw money at talented young entrepreneurs in the hopes they will start the next trillion-dollar company.

They have plentiful energy, insatiable ambition and the vision to peek around corners and see the future. What they don’t typically have are mortgages, family obligations and other adult responsibilities to distract them or diminish their appetite for risk. Mr Chang himself says that younger people are more innovative when it comes to science and technical subjects.

But in business, older is better. Entrepreneurs in their 40s and 50s may not have the exuberance to believe they will change the world, but they have the experience to know how they actually can.

Some need years of specialised training before they can start a company. In biotechnology, for example, founders are more likely to be college professors than college dropouts. Others require the lessons and connections they accumulate over the course of their careers.

“There are ideas that you can only have once you’ve been around and you’ve had a real job,” said MIT Sloan School of Management professor Pierre Azoulay, one of the paper’s authors. “Those are not typically challenges solved by twenty-somethings, because you need to be up close and personal with the problems of a corporate customer to imagine a solution.”

There was one more finding from their study of US companies that helps explain the success of a chip maker in Taiwan. It was that prior employment in the area of their startups — both the general sector and specific industry — predicted “a vastly higher probability” of success.

“The closer the industry match,” they wrote, “the greater the success rate.”

The founding of a foundry

Morris Chang had 30 years of experience in his industry when he decided to uproot his life and move to another continent. He knew more about semiconductors than just about anyone on earth — and certainly more than anyone in Taiwan. As soon as he started his job at the Industrial Technology Research Institute, Chang was summoned to K.T. Li’s office and given a second job.

“He felt I should start a semiconductor company in Taiwan,” Mr Chang says. “So that was the start of TSMC.”

When he sat down to figure out what TSMC’s business model should be, Mr Chang started by recognising what it couldn’t be.

“I decided right away that this could not be the kind of great company that I wanted to build at either Texas Instruments or General Instrument,” he says.

TI handled every part of chip production, but what worked in Texas would not translate to Taiwan. The only way that he could build a great company in his new home was to make a new sort of company altogether, one with a business model that would exploit the country’s strengths and mitigate its many weaknesses.

Mr Chang determined that Taiwan had precisely one strength in the chip supply chain. The research firm that he was now running had been experimenting with semiconductors for the previous 10 years. When he studied that decade of data, Mr Chang was pleasantly surprised by Taiwan’s yields, the percentage of working chips on silicon wafers. They were almost twice as high in Taiwan as they were in the US, he said.

Mr Chang knew his company wouldn’t have the resources to compete with Silicon Valley when it came to designing, selling or marketing chips. But he believed there was one potential competitive advantage for the company that would become TSMC: manufacturing chips — and only manufacturing chips.

The seeds of a pure chip foundry had been planted in his mind by Gordon Campbell, a semiconductor entrepreneur who visited Mr Chang during his otherwise regrettable year at General Instrument.

Mr Campbell was familiar with the agonies and the inefficiencies of building and operating a fab. He felt startups were better off designing chips and outsourcing the manufacturing. To some in his business, this was unthinkable. “Real men have fabs,” the famous saying went. One man thought the future was fabless.

“People were ingrained in thinking the secret sauce of a successful semiconductor company was in the wafer fab,” Mr Campbell told me. “The transition to the fabless semiconductor model was actually pretty obvious when you thought about it. But it was so against the prevailing wisdom that many people didn’t think about it.”

He was thinking about it when he spoke with Mr Chang in late 1984. And soon Mr Chang was thinking about it, too. He began to think that every fabless company would need a foundry.

Taiwan’s government took a 48 per cent stake, with the rest of the funding coming from the Dutch electronics giant Philips and Taiwan’s private sector, but Mr Chang was the driving force behind the company. The insight to build TSMC around such an unconventional business model was born from his experience, contacts and expertise. He understood his industry deeply enough to disrupt it.

“TSMC was a business-model innovation,” Mr Chang says. “For innovations of that kind, I think people of a more advanced age are perhaps even more capable than people of a younger age.”

Mr Chang says the idea behind TSMC was also the result of the personal philosophy that he’d developed over the course of his long career. “To be a partner to our customers,” he says. That founding principle from 1987 is the bedrock of the foundry business to this day, as TSMC says the key to its success has always been enabling the success of its customers.

More here:

https://www.theaustralian.com.au/business/the-wall-street-journal/morris-chang-turned-55-then-he-started-the-worlds-most-important-company/news-story/9b1de69660a81bcbfd4b87673dfabce9

What a great story showing you are never too old to become an ‘overnight success’!

David.

Wednesday, April 03, 2024

This Is An Amazing Story Of Being At The Right Place, With The Right Skills And Technology, At The Right Time!

This appeared last week:

The Observer | Artificial intelligence (AI)

How did a small developer of graphics cards for gamers suddenly become the third most valuable firm on the planet?

John Naughton

By turning his computer chip-making company Nvidia into a vital component in the AI arms race, Jensen Huang has placed himself at the forefront of the biggest gold rush in tech history

Sun 31 Mar 2024 03.00 AEDT | Last modified on Sun 31 Mar 2024 09.16 AEDT

A funny thing happened on our way to the future. It took place recently in a huge sports arena in San Jose, California, and was described by some wag as “AI Woodstock”. But whereas that original music festival had attendees who were mainly stoned on conventional narcotics, the 11,000 or so in San Jose were high on the Kool-Aid so lavishly provided by the tech industry.

They were gathered to hear a keynote address at a technology conference given by Jensen Huang, the founder of computer chip-maker Nvidia, who is now the Taylor Swift of Silicon Valley. Dressed in his customary leather jacket and white-soled trainers, he delivered a bravura 50-minute performance that recalled Steve Jobs in his heyday, though with slightly less slick delivery. The audience, likewise, recalled the fanboys who used to queue for hours to be allowed into Jobs’s reality distortion field, except that the Huang fans were not as attentive to the cues he gave them to applaud.

Still, it made for interesting viewing. Huang is an engaging speaker and he has built a remarkable company in the years since 1993, when he first sketched his idea for Nvidia in a Silicon Valley diner. And the audience were in awe of him because they regard him as a man who saw the future long before they did, and hoped to catch a glimpse of what might be coming next.

And in this they were not disappointed. What’s coming next is Nvidia’s Blackwell B200 chip, complete with its 208bn transistors, and the family of monster machines that it will enable, including a formidable supercomputer that fits into a rack and has almost two miles of copper cabling neatly intertwined in its innards. Cue wild applause.

Watching this spectacle, the thought that came to mind was this: how did a small company specialising in graphics cards for gamers come to be the third most valuable company on the planet? And how did it happen so quickly at the end? After all, Nvidia was only worth $278bn in October 2022 and is now worth $2.3 trillion, trailing only Apple and Microsoft.

It’s a good story, and no doubt someone is already working on the screenplay. But even a cursory account produces a picture of a company that from the beginning was good at anticipating the needs of a particularly demanding class of users – gamers – and eventually realised that in developing processors that could address their needs it had produced a new kind of computer: a graphics processing unit (GPU) that could perform many calculations in parallel, as opposed to conventional CPUs that did everything serially.

A pivotal moment came in 2013, when Huang decided that GPUs could be useful for an emerging technology called machine learning, and that henceforth the company would focus on that. It was a bold bet at the time, and initially Wall Street thought it foolish. But when machine learning really started to take off and desperately needed parallel processing machines to handle the associated heavy computation, Nvidia hit the jackpot. If you wanted to do this kind of AI then you needed Nvidia GPUs – lots of them. And, more importantly, you needed a way to enable them to work seamlessly together – a kind of operating system. A Stanford software genius named Ian Buck created one for Huang. They called it CUDA (for compute unified device architecture), and from then on buying Nvidia kit became a no-brainer for anyone aspiring to get into the AI business.
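
To make the serial-versus-parallel distinction concrete, here is a minimal sketch of the kind of data-parallel kernel CUDA makes possible: adding two large arrays, with each GPU thread computing one element, where a conventional CPU would walk through the elements one at a time in a loop. This is an illustrative toy, not Nvidia's or Mr Buck's actual code; the kernel name and sizes are my own, and it assumes a CUDA-capable GPU and the nvcc compiler.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one array element. Thousands of these run
// concurrently, unlike a CPU loop that processes elements serially.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // about a million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) buffers, filled from the host.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements at once.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The same computation on a CPU would be a single "for" loop visiting all million elements in order; the GPU version dispatches them across thousands of cores simultaneously, which is exactly the property machine-learning workloads exploit.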

Which is how Huang found himself virtually the only trader with a supply of ready-made shovels in the biggest gold rush in tech history. In February his company reported record quarterly revenue of $22.1bn, up 22% from the previous quarter and up 265% from a year ago.

The Blackwell B200 chip is his latest super shovel. And in his keynote, Huang unveiled what can be done with it. He rolled out the DGX GB200 NVL72 (Nvidia doesn’t do human-friendly labelling), which is a powerful supercomputer with 72 Blackwell processors in a single water-cooled rack. I couldn’t find any information about pricing, but then I guess that if you have to ask you can’t afford it.

Google can, though. So can Microsoft, Meta, Oracle, Tesla, Amazon and Dell. From what their bosses say, they’ve already joined the queue for supplies. In this case, Huang has indeed seen the future. And it works for him. Whether it works for the rest of us, though, remains to be seen.

More here:

https://www.theguardian.com/commentisfree/2024/mar/30/nvidia-jensen-huang-ai-gold-rush-computer-chip-maker

This really is an example of cometh the hour, cometh the man with just the chips you needed to support the AI revolution.

It is another of those 20-year ‘overnight successes’!

Enough said.

David.

Tuesday, April 02, 2024

The Scale And Severity Of The Autism Epidemic Seem To Have Gone Unnoticed By Many.

I found this article a bit of a wake-up call!

Thousand-fold increase: What is driving the rise of autism?

By Natassia Chrysanthos

March 31, 2024

In the late 1980s, when Professor Cheryl Dissanayake started researching autism, she estimated in her doctoral thesis that three or four babies out of 100,000 would be diagnosed with the condition.

“Now, I’m telling you it’s three in 100,” she says.

That thousand-fold increase isn’t because anything has changed in us, biologically. But our understanding of autism has changed. The evolution of the diagnosis is having a profound influence on Australian society, from our schools and workplaces to our popular culture and policy debates.

A massive expansion in the medical criteria means children who were diagnosed with Asperger’s syndrome in the late ’90s are considered part of the autism spectrum today. Evolutions in research mean children are identified younger, sometimes as early as 18 months. And as awareness has improved and the neurodiversity movement flourished, adults who weren’t seen in childhood have figured out they, too, are autistic.

Now, there are signs Australia has higher rates of autism than comparable countries. It’s raised thorny questions. Could the $42 billion National Disability Insurance Scheme have inflated diagnosis rates? On the other hand, rates of diagnosis for women and girls are on the rise, but remain significantly lower than for boys and men. Are true rates of autism higher than we think? And is “the autism spectrum” even a useful term, given it captures such a range of human experiences?

There aren’t definitive answers. But most people point to 2013 as the point things really shifted. In the same year that Australia started rolling out a world-first disability insurance scheme, the official diagnostic criteria for autism expanded significantly.

Ten years later, the way those two forces have converged is driving a complex debate about the NDIS – now one of the federal government’s biggest budget pressures – and how public funding should be delivered to support Australians with autism.

Autism is a neurodevelopmental disability, which means that the brain develops differently to what we would typically expect. There is no single known cause, but researchers generally accept it’s a genetic condition that may also be influenced by someone’s environment.

But because there’s no genetic giveaway, autism is diagnosed based on behaviour. There are two core traits that autistic people display: they have difficulty with communication and social interaction, and have restricted interests or repetitive behaviours.

It’s a new condition in the scheme of things. The first paper to describe autism was written by Baltimore child psychiatrist Leo Kanner in 1943. He observed 11 children who were solitary, needed routine and had some language difficulty. This included the first person diagnosed with autism, Donald Triplett, who died last June. Austrian physician Hans Asperger was undertaking similar work at the same time.

But it wasn’t until 1980 that a diagnosis called “autism disorder” was added to the psychiatry bible: the Diagnostic and Statistical Manual of Mental Disorders, better known as the DSM. It was mainly children – and mainly boys – with intellectual disability and significant language impairments who were diagnosed, and they needed to have very significant difficulties to meet the threshold.

That stayed true until 1994. Researchers had realised there were some people who showed the core behaviours of autism – communication differences and repetitive interests – but did not have an intellectual or language impairment. Quite the contrary, they could be highly verbal or intelligent. And so several new diagnoses were introduced, reflecting those varying levels of ability.

There was Asperger’s syndrome for children who became hyper-fixated on certain interests and struggled to interact with others, but could also be highly intelligent. Classic autism was a diagnosis reserved for children who had more difficulty with language and communication. Childhood disintegrative disorder, or Heller’s syndrome, described those whose late-onset developmental delays might have led to reversals in language, bladder control or motor skills. Children who did not meet the full criteria of those conditions were diagnosed with “pervasive developmental disorder not otherwise specified”. Australian children born in the ’90s and noughties were raised in this framework.

Then in 2013, “the spectrum” was born. The fifth and most recent edition of the DSM collapsed all those conditions under the umbrella term autism spectrum disorder, and marked the most significant change yet to how we understand autism.

Some autistic people are intellectually disabled, some are highly intelligent. One person could be non-speaking while another is highly verbal. Autism can present in hundreds of different ways, and no two people are the same. “That’s what we call the autism spectrum. We went from a unitary condition, where everyone had a similar and high level of impairment, to, actually: you can show all these behaviours. And so the numbers of children we diagnosed went from really small to large,” says Professor Andrew Whitehouse, head of the autism research team at Telethon Kids Institute.

But it’s not just children, any more. One of the fastest-growing categories of participant on the National Disability Insurance Scheme is now autistic adults. “Twenty years ago, adult diagnoses were extremely rare. Whereas in our contemporary world, adult diagnoses are almost as frequent as child diagnoses,” Whitehouse says.

“When these adults were children, we had a different conception of autism: that it was only for children who had very significant difficulty, like intellectual disability. As the diagnosis of autism has evolved, all of a sudden, the difficulties that [these adults] have been experiencing are seen in a new light and that they may actually meet criteria for autism.”

Still, there are two fundamental criteria for an autism spectrum disorder diagnosis: certain social and communication difficulties, and patterns of restrictive and repetitive behaviour. The DSM also describes three layers of severity, which are determined by how much support someone requires. These are referenced as levels one, two and three autism, providing a shorthand for families to describe the level of disability their child experiences.

Most people in the autistic community, however, don’t think of autism as a linear spectrum. Some talk about the condition as a constellation. Others describe a “spiky profile” that mixes incredible strengths and gifts in some areas – such as visual thinking, deep interests, an eye for detail, strong memory – with intense struggles in others.

Dr Melanie Heyworth, an autistic researcher who founded the organisation Reframing Autism, describes being autistic as having a busy brain, which can affect everything from movement to sensory experiences, communication, emotion and empathy. “Every autistic person’s brain is different than the next, and the way those elements interact look different for every autistic person,” she says.

“The primary differences that most autistic people would talk about are in the way that we communicate. I use spoken language as my primary form of communication, but other autistic people don’t [and are non-speaking], or they are multi-modal communicators.”

Heyworth, for example, needs to see a person’s face in order to engage with them. “I would not be able to have this conversation with you effectively over the phone,” she says. Some autistic people struggle to understand sarcasm or small talk, or find spoken language takes longer to process altogether. For that reason, they might find public speaking easier than casual conversation because it doesn’t require them to digest what they’re hearing at the same time as they’re trying to talk.

Autistic people often empathise, process their emotions, or show their feelings differently. Some find it difficult to regulate their emotions, which can be intense; others withdraw, and this is a common response spotted in children. Eye contact is often uncomfortable, and for some people it even leads to nausea or dizziness.

Sensory overload is also common. Some people might be hyper-sensitive, “where everything you touch feels like pain”, while others are overwhelmed by bright lights. “That’s the reason you get the stereotype of the child in school with the big ear muffs. Or you get me rocking back-and-forth at the moment,” Heyworth says. Some of those repetitive movements, such as body-rocking and hand flapping, are known as “stimming”, and can help autistic people regulate their emotions or cope with anxiety.

Many autistic people “mask”, which means they try to hide their unique behaviours, perhaps because they were bullied at school. “Those things impact mental health and it’s exhausting, they experience burnout,” Dissanayake says. Studies suggest masking is more common in girls, which is why autism in females is identified less frequently.

Children are usually diagnosed with autism following a multidisciplinary assessment, where clinicians from different professions – such as paediatricians, speech pathologists, clinical psychologists – watch them in a range of settings to understand whether they show signs of autism. Dissanayake says the point of treatment is to help children develop ways to communicate and learn from people around them.

“We’re not focused on making the child less autistic. What we want to do is improve their cognition. If they don’t develop an ability to communicate, it lessens their ability to learn, and they end up with a learning disability.”

These days, more autistic children in Australia are getting the help they need much earlier in their lives.

But as more families receive that support by signing up to the NDIS, which offers automatic entry with a level two autism diagnosis, there have been uneasy questions about whether Australian children are being steered towards a clinical diagnosis for funding in an otherwise expensive and hard-to-access system. More than 9 per cent of five- to seven-year-old children are now on the scheme, mostly for autism or developmental delays.

A research paper published last year, by Australian National University scholar Maathu Ranjan, lit a fuse under that debate.

Ranjan, who is on study leave from her role as a senior actuary at the National Disability Insurance Agency, looked at dozens of studies estimating autism prevalence around the world. While diagnosis rates have gone up everywhere over the past 20 years, she found the Australian growth rate for children was higher than in similar countries.

Her paper, which is not peer-reviewed, quoted international studies that showed autism prevalence rates of one in 36 children in the United States, one in 50 in Canada, and one in 57 in the United Kingdom. In Australia, it was up to one in 25.

She said there was “considerable controversy” around the drivers of rising autism rates worldwide. But in Australia, she pointed to the NDIS. “It is plausible that the growth of prevalence rates above the global average in Australia can be attributed to the financial incentives created by government policy, specifically the implementation of the NDIS.”

Whitehouse agrees that the NDIS has led to more kids being diagnosed and distorted our understanding of true prevalence rates in Australia. “I think it’s without question that the use of an autism diagnosis level two shifted the focus, and the centre of gravity, towards that diagnosis. Clinicians, we all want the best for the kids that we are seeing. And if there are children who require support, there is a focus towards how can we get them that support.”

He says the levels of severity outlined in the DSM were intended for doctors to use, not for policymakers to allocate disability funding. But while the NDIS was rolling out quickly, it was an effective way of identifying people who needed significant support and signing them up to the scheme.

…..

This is part one of three in a series about how our understanding of autism has changed and what it means for Australia.

More here:

https://www.smh.com.au/politics/federal/thousand-fold-increase-what-is-driving-the-rise-of-autism-20240221-p5f6sa.html

I have to say I found this a useful review of the present state of play. Well worth a browse!

David.

Sunday, March 31, 2024

There Seems To Be Increasing Confidence AI Will Make A Positive Difference In Healthcare!

This appeared last week:
AIs will make health care safer and better

It may even get cheaper, too, says Natasha Loder

Mar 27th 2024

When people set goals which are sky-high to the point of silliness, the sensible scoff. They are normally right to do so. Sometimes, though, it is worth entertaining the possibility that even the most startling aspiration might be achievable.

In 2015 Priscilla Chan, a paediatrician, and her husband Mark Zuckerberg, a founder of Facebook, set up the Chan Zuckerberg Initiative (CZI) with the aim of helping science bring about a world in which all disease could be prevented, cured or managed. Unsurprisingly there was a tech-centric feeling to the undertaking. But it was not until 2020 that CZI’s annual updates started to talk about the potential of artificial intelligence (AI). Four years later it is hard to imagine anyone pursuing their goals not putting it front and centre.

The proportion of biomedical research papers which invoke artificial intelligence was climbing exponentially well before the field started dazzling the world with “foundation models” like OpenAI’s various GPTs (generative pre-trained transformers), Meta’s Llama and Gemini from Google (see chart in the original article). Given the vast amounts of data that biomedical research produces, AI’s early application there is hardly a surprise. That past progress and promise, though, is a mere prelude to what is now under way.

Artificial-intelligence systems of similar power to the foundation models and large language models that generate cogent text in all manner of styles, answer complex questions quite convincingly and helpfully, and create images that capture the ideas expressed in verbal prompts are becoming a part of health care. They have applications for almost every part of it. They can improve the choices researchers make about how exactly to edit genes; they are phenomenally good at making sense of big data from disparate sources; they can suggest new targets for drug development and help invent molecules large and small that might work as drugs against them. The CZI itself is now working on building an AI-powered “virtual cell” with which it hopes to revolutionise all manner of biomedical research.

The effects are not restricted to the lab. Various sorts of diagnosis in which AI is playing a role look ready to be transformed. Robot surgeons are taking on an expanding range of operations. The way that patients access health information and motivate themselves to follow treatment regimes looks ripe for reimagining as chatbots and wearable health monitors learn to work together. The productivity of health-care systems seems likely to be improved significantly.

Poorer countries may have the most to gain. An earlier generation of AI is already making itself felt in health care there. One advantage is that it can make quite modest equipment much more capable, allowing it to be used more widely and beyond the clinic. Smart stethoscopes can help users pick out salient details, phones can be turned into “tricorders” that measure heart rate, temperature, respiration and blood oxygen saturation all at once. Delivering reliable guidance for health-care workers all over the world in their native language offers an advance both straightforward and game changing.

If such tools can become widespread, and if health-care systems are reshaped to get the most out of them, they should make it possible to deliver much better care. That represents an opportunity to improve the lives of hundreds of millions, even billions.

Some see not just a humanitarian breakthrough, but an epistemological one: a whole new sort of knowledge. Artificial intelligence can find associations and connections, without needing pre-existing models of what sorts of cause have what sorts of effect, in bodies of disparate data too vast and knotted for humans to unpick. Demis Hassabis, one of the founders of DeepMind, an AI powerhouse that is now part of Google, thinks that ability will change the way humans understand life itself.

There are caveats. The foundation models that power “generative” applications like ChatGPT have some serious drawbacks. Whether you call it hallucinating, as researchers used to, or confabulating, as they now prefer, these models make stuff up. As with most AI, if you train them on poor or patchy data the results will not be all they should be. If the data are biased, as health data frequently are (good data on minorities, low-income groups and marginalised populations are often harder to come by), the results will not serve the population as a whole as well as they should and may do harm to the underrepresented groups. The models’ “non-deterministic” nature (they will not always respond in the same way to the same stimulus) raises philosophical and practical problems for those who regulate medical devices. Blood-pressure cuffs and thermometers reflect reality more straightforwardly.

None of this is stopping the market for products and services in health-care AI from growing apace. Big AI companies have been keen on buying health-care specialists; health-care companies are buying AI. Research and Markets, a firm of analysts, estimates that in 2023 the health-care world spent about $13bn on AI-related hardware (such as specialised processing chips and devices that include them) and software providing diagnostics, image analysis, remote monitoring of patients and more. It sees that number reaching $47bn by 2028. Analysts at CB Insights reckon investors transferred a whopping $31.5bn in equity funding into health-care-related AI between 2019 and 2022. Of the 1,500 vendors in health AI, over half were founded in the past seven years.

The digitisation of health care has seen its fair share of expensive disappointments. But there is a real possibility that AI will live up to some of the hope being placed in it. Simpler and more forgiving interfaces should make AI-based systems for handling data and helping with time-management more congenial to doctors, patients and health-care providers than those of yore. And health-care systems sorely need a productivity boost if they are to adapt and improve in a world of high costs and older populations. The shortage of health-care workers is predicted to reach nearly 10m by 2030—about 15% of today’s entire global health workforce. Artificial intelligence will not solve that problem on its own. But it may help.

More here:

https://www.economist.com/technology-quarterly/2024/03/27/ais-will-make-health-care-safer-and-better

A typically worthwhile overview from the Economist!

It seems fair to suggest that there is a lot of progress happening and that we are coming to see healthcare delivery incrementally improved and re-molded over time. There seems to be increasingly little that is not possible, and the transformations over the next decade or two will be even more amazing than what we have seen to date.

These days it is getting much harder to distinguish our smartphones from a "tricorder"!

I have to say I just watch in awe as the previously impossible becomes the norm. We do indeed live in “interesting times”!

David.

AusHealthIT Poll Number 740 – Results – 31 March, 2024.

Here are the results of the recent poll.

Is Australia Doing Enough To Manage The Rising Epidemic Of Obesity?

Yes: 9 (23%)

No: 30 (77%)

I Have No Idea: 0 (0%)

Total No. Of Votes: 39

People seem to think that all of us are falling a bit short of useful ideas on how to best manage our rising obesity epidemic!

Any insights on the poll are welcome, as a comment, as usual!

A great number of votes. But also a very clear outcome! 

0 of 39 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, March 29, 2024

I Think This Is An Expert We Should Take A Little Notice Of

This appeared last week:

20 March 2024

The Emperor’s new clothes and digital mental health policy

By Associate Professor Louise Stone

Digital mental health services are not the “solutions” they claim to be. They lack data, GPs find them useful in only a few patients, and most patients don’t want them.

Hans Christian Andersen’s The Emperor’s New Clothes is a fairytale about two conmen who pretended to be weavers. They convinced the Emperor they had a magical fabric with which they could make him the finest suit in all the land. Only intelligent and brave people can see the fabric, they say, and anyone who can’t see it is stupid and incompetent.

The Emperor is vain, and this is the crux of the con. He loves fine clothes, and he sees himself as intelligent. Like everyone else, he pretends to see the cloth the conmen pretend to weave, as he doesn’t want to be known as stupid and incompetent. Eventually, he wears the imaginary “clothing” the conmen create in a parade but the spectators fear looking stupid or incompetent, so they all comment on how magnificent he looks. 

Finally, a child yells out “the Emperor isn’t wearing any clothes”, giving everyone permission to admit the Emperor is naked. The Emperor, of course, is forced to confront his own stupidity and ignorance.

Speaking out when the emperor is naked: the problem of digital mental health 

The magic fabric in our world is evidence. Or the lack of it.  

One of the main problems we have in healthcare today is the lies we tell ourselves, as well as the lies that are told by others, so I thought I would take a moment to take a piece of what I believe to be poor policy and be the little kid in the crowd pointing out that the Emperor is not wearing any clothes.  

In this story, let’s look at the National Digital Mental Health Framework, which is not wearing much data.  

Data stories are usually made of three things: data, a narrative and good imagery. Over time, I’ve watched data stories rely on less and less data. Sometimes, like this policy, they make a token attempt at data, a little like putting a sticker on lollies and calling them “all natural” or labelling a sugar-dense food as “lite”.  

Without data, the framework tries to engage us without explaining why. 

The data story 

The National Digital Mental Health Framework tries to tell us that:

“Digital Health has the potential to help people overcome healthcare challenges such as equitable access, chronic disease management and prevention and the increasing costs of healthcare. By enhancing the use of digital technologies and data, it enables informed decision-making, providing people better access to their health information when and where they need it, improved quality of care, and personalised health outcomes.” 

Governments would love this to be true. More public services are moving to digital means of access. And yet it seems the digital divide is a new social determinant of health. The poorer you are, the less access you have to digital goods, and as more and more services are now delivered online, the greater your disadvantage becomes.  

Digital mental health services give the illusion of universal access, but let’s look at how they are used in practice.  

The first thing to say is that they are used rarely. Despite the massive investment in digital tools, the people are “voting with their feet” and avoiding them. Here’s the data from the National Study of Mental Health and Wellbeing, 2020-2022.  

(Chart: Proportion of people aged 16-85 years with a history of mental health disorders, with and without symptoms, accessing digital technologies – see original article.)

In most environments, this would be interpreted as consumers making a choice. In this environment, it is cast as a problem of GP motivation. GPs, apparently, are the problem, because we don’t know how to use digital mental health tools, or we don’t know they exist. Apparently, this means we need nudging to overcome our “reluctance”.  

The digital health strategy tells us we need to be nudged to overcome our “reluctance”.  

“Investments into research, education and awareness promotion, evidence translation, resources and tools contribute to building trust in the efficacy and effectiveness of digital mental health services for stakeholders.” 

So maybe GPs just need to become familiar with the evidence and stop “stalling digital mental health”. The scoping review prior to the release of the National Digital Mental Health Framework has an explicit reference to this:  

“What are possible financial and non-financial incentives (professional standards, training, monetary incentives) to encourage health practitioners to adopt digital mental health services into ‘business as usual’?” 

In other words, make it compulsory.  

At the moment, the PHNs are required to use a common assessment tool which, despite over $34 million in investment, has no validity or reliability measures, to place people into five levels. At levels one and two, digital mental health strategies are the preferred solution to their needs. If I were psychic, I would predict that when GPs are “required” to use the tool, no-one will get a level 1 or 2 classification.

Examining the evidence in the National Digital Mental Health Strategy 

Despite the investment in digital mental health tool research, this document resorted to quoting one paper to support its claim of efficacy. The paper is by Gavin Andrews and his team, and is a meta-analysis from 2010 looking at the evidence from 1998-2009. I would have to say the world has moved on.

To be fair, the framework also references the Australian Commission on Safety and Quality in Health Care’s (2020) National Safety and Quality Digital Mental Health Standards. This document also references one paper, a more modern one, but one with significant flaws. They use this paper to back their statement that: “There is growing evidence regarding the important role digital mental health services can play in the delivery of services to consumers, carers and families”.   

The paper, from 2016, examined the “Mindspot” online clinic. Let’s look at their evidence, or, more specifically, the cohort. Here’s the attrition of their convenience sample (see original article).

So, of the people who ended up on the website (who I suspect are not representative of the broader population), 0.5% finished the study. It doesn’t matter what clever statistics are done on this tiny sample of a tiny sample. It can’t prove anything, except that 99.5% of people who visited the site left before completion.

In any other setting, this would be considered a problem. However, the authors assert in their conclusion that “this model of service provision has considerable value as a complement to existing services, and is proving particularly important for improving access for people not using existing services”. 

This is the best paper (presumably) the Australian Commission on Safety and Quality in Health Care could find.  

The impact of digital mental health strategies on mental health policy 

The impact of this belief in digital mental health is significant. There has been substantial investment in digital health tools, including mandating the use of the Initial Assessment and Referral Tool to enable stepped care at the PHN level with digital health engagement.  

Here is the ongoing investment in digital mental health initiatives, excluding research and other grants: 

  • 2021-2022: $111.2 million to create a “world-class digital mental health service system”; 
  • 2022-2023: $77 million on digital mental health services.

The scoping review in 2022 identified 29 digital mental health services funded by the Australian government. Here is the breakdown of budget announcements in the 2022-2023 budget (see original article). The budget for digital mental health would cover 308 full-time GPs doing 40-minute consultations. That’s 887,000 consultations a year.

GPs are a funny lot.  

John Snow knew the Broad Street pump in London was causing cholera, even though the reigning officials at the time did not. When governments refused to listen, John Snow worked around them, removing the pump handle himself. He was, of course, right.  

The second thing to know about John Snow was that his passionate opponent, William Farr, who was also a GP, changed his mind in the face of the evidence and supported John Snow in his quest to ensure Londoners had clean water. GPs do eventually follow the evidence. 

However, we are natural sceptics when it comes to innovation. We’ve seen “miracle” drugs come and go, and “miracle” devices like pelvic mesh cause harm. We can spot marketing and vested interests a mile away.  

The National Digital Mental Health Strategy is full of vested interests. Increasing uptake of digital mental health improves the bottom line of many digital entrepreneurs and gives the government the illusion of universal access to care, when in reality most patients miss out on therapy.  

It is our job as GPs to advocate for our patients. Digital mental health solutions may well work for some, but they are also part of a trend towards “technological solutionism”, the common habit of “solving” complex problems by offering an app.

Evidence-based medicine requires the whole triad of the best evidence, clinical opinion and patient preference. I’m going to be the small boy at the parade here and say digital mental health services are not the “solutions” they claim to be. They lack data, GPs find them useful in a small proportion of their patients, and most patients don’t want them.  

This does not make me a Luddite. It makes me an honest clinician fulfilling my ethical obligation to make clinical decisions on evaluating the efficacy of treatment for my practice population.  

In my view, the Emperor desperately needs a different tailor.  

Associate Professor Louise Stone

MBBS BA DipRACOG GDFamMed MPH MQHR MSHCT PhD FRACGP FACRRM FASPM

Associate Professor, Social Foundations of Medicine, Medical School

ANU College of Health and Medicine

E: louise.stone@anu.edu.au

More here:

https://www.medicalrepublic.com.au/the-emperors-new-clothes-and-digital-mental-health-policy/106072

Great perspective here and well worth a read!

David.