Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Thursday, August 31, 2023

Medical Start-Up Lands $2m For Device To Let People Speak Again!

This appeared a few days ago:

Medical start-up lands $2m for device to let people speak again

3.30PM – Aug 24, 2023

Tess Bennett

Medical tech company Laronix, which has created a device that lets people who have had their larynx removed speak again, has raised $2 million in seed funding.

The round was led by VC firm Scale Investors and includes participation from Dr Elaine Saunders and Dr Peter Blamey, the co-founders of hearing aid company Blamey Saunders, and Kristy Chong, the ModiBodi founder who sold her company for $140 million last year.

Laronix was founded by Dr Farzaneh Ahmadi in 2020 after a decade of researching solutions for voice loss caused by the surgical removal of the larynx (laryngectomy) as the result of throat cancer.

The start-up has gained FDA and TGA approval for its wearable electronic voice prosthesis that monitors the respiration signals a person’s body makes when they are trying to speak and uses AI to produce a voice.

Having secured seed funding, Laronix will expedite the launch of its first products in the US and Australia.

Dr Ahmadi said the products give wearers the option to speak with a male or female voice, as well as sing.

“Our journey started when a permanent voice-loss patient emailed our research group and said ‘I am a singer, and my voice is my life, and I am going to lose it forever’. His genuine plea really affected me,” she said.

“I felt the immense pain of his condition, and the fact that there haven’t been any successful advancements over the last 40 years to improve these patients’ lives, especially when the suicide rate for laryngectomy patients is sadly so high.”

Article is found here:

https://www.afr.com/technology/the-tech-deals-you-need-to-know-20230614-p5dgkc

To those who need it, this technology would clearly be an utter godsend! Great stuff, but I have to say $2 million seems like a really tiny amount of money?

David.

Tuesday, August 29, 2023

The Story Of NVIDIA And Its Growth, Driven By AI, Must Be The Tech Story Of The Year!


This appeared last week.

How Nvidia became the world’s most important company overnight

By James Titcomb and Matthew Field

The three mid-ranking Silicon Valley engineers behind Nvidia had just $US40,000 ($62,350) between them when they started the company three decades ago, fuelled by a belief that 3D graphics would change the video game industry.

Now, it costs the same amount to buy just one of Nvidia’s microprocessors, and it can’t make enough of them.

The artificial intelligence (AI) boom has made Nvidia perhaps the world’s most important tech company and secured it a value of more than $US1.2 trillion.

Its latest boost came on Wednesday night, when the business revealed quarterly profits had climbed by a staggering 843 per cent in a year alone, up from $US656 million to $US6.2 billion.

Sales in its data centre business, which reflects demand for its top AI chips, climbed by 141 per cent in just three months, surpassing even Wall Street’s lofty expectations.

The best news for investors was that the company predicted the party would continue, forecasting another leap in sales in the third quarter of this year.

Shares, already at a record high, rose by 6.5 per cent as markets opened on Thursday and, to some, the only question is how high it can go.

The latest rally cemented Nvidia’s spot as a world-leading tech giant.

Big dreams

Jensen Huang, Nvidia’s chief executive and principal founder, always had big ambitions.

The company decided to focus on video games in the 1990s when Huang observed demand for increasingly advanced computer graphics and predicted a need for drastically more powerful chips as games moved towards more immersive and 3D worlds.

When Nvidia’s share price hit $US100, Huang had the company’s logo tattooed on his arm.

But even he would have failed to see the AI rush that boosted his personal fortune to $US42 billion.

Nvidia’s work on computer graphics in the 1990s led it to invent the graphics processing unit (GPU), a type of microchip dedicated to computer gaming and video tasks. GPUs, however, also excel at other types of number crunching.

Nvidia thrived during the cryptocurrency booms of 2017 and 2021, as its processors proved to be highly proficient at the high-powered mathematics needed to mint new Bitcoins.

It also rode a brief flurry of excitement around the “metaverse”, the virtual reality championed by Mark Zuckerberg.

However, the AI boom has eclipsed all that.

Large language models such as ChatGPT and Google Bard require thousands of GPUs, both for their initial “training” and for the subsequent interactions known as inferences.

Almost every company and government is falling over themselves to invest in AI.

Amazon, Microsoft and Google, which operate the giant cloud computing data centres on which AI models generally run, are placing orders worth billions of dollars with Nvidia.

To a degree, Huang is fortunate his video game chips are so well suited to AI, but his allies reject suggestions he simply stumbled upon a pot of gold.

“He was one of the early people to think about it (AI) and study it,” says a former Nvidia executive.

“Jensen had big ears, he was listening to what was happening, he was experimenting and investing in how they could tweak these GPUs to be better at this stuff.

“He took a decision to bet significantly on AI in around 2018.”

Today, the company has competition from chip making giants AMD and Intel but enjoys a huge head start. So much so that Nvidia is now practically synonymous with AI itself.

Vastly more here:

https://www.smh.com.au/business/companies/how-nvidia-became-the-worlds-most-important-company-overnight-20230825-p5dzbd.html

NVIDIA really seems to be one of a kind, and it is really hard to know just what comes next, but making it to a market cap of US$1.2 trillion is an indicator of considerable success, but surely also some hype?

It is an amazing success story, but I am sure it will be only one of a few if we manage to keep the planet on an even keel for the next 20 years or so. There are some real existential risks out there, sadly!!

David.


Sunday, August 27, 2023

Is There Any Way Such Incompetence And Incapacity To Act Can Be Excused?

This saga popped up last week.

Climate crisis

Scientific journal retracts article that claimed no evidence of climate crisis

Publisher Springer Nature says 2022 article ‘not supported by available evidence’ as editors launch investigation

Graham Readfearn
@readfearn

Sat 26 Aug 2023 01.00 AEST. Last modified on Sat 26 Aug 2023 01.01 AEST

One of the world’s biggest scientific publishers has retracted a journal article that claimed to have found no evidence of a climate crisis.

Springer Nature said it had retracted the article, by four Italian physicists, after an internal investigation found the conclusions were “not supported by available evidence or data provided by the authors”.

Climate sceptic groups widely publicised the article, which appeared in the European Physical Journal Plus in January 2022 – a journal not known for publishing climate change science.

Nine months later the article was reported uncritically in a page one story in the Australian newspaper and promoted in two segments on Sky News Australia – a channel that has been described as a global hub for climate science misinformation. The segments were viewed more than 500,000 times on YouTube.

The article claimed to have analysed data to find no trend in rainfall extremes, floods, droughts and food productivity.

“In conclusion on the basis of observational data, the climate crisis that, according to many sources, we are experiencing today, is not evident yet,” the article said.

Several climate scientists told the Guardian and later the news agency AFP that the article had misrepresented some scientific articles, was “selective and biased” and had “cherrypicked” information.

After those concerns were raised, Springer Nature announced in October it was investigating the article.

In a statement Springer Nature said its editors had launched a “thorough investigation”, which included a post-publication review by subject matter experts.

The authors of the article also submitted an addendum to their original work during the course of the investigation, the statement said.

“After careful consideration and consultation with all parties involved, the editors and publishers concluded that they no longer had confidence in the results and conclusions of the article,” the journal said.

“The addendum was not considered suitable for publication and retraction was the most appropriate course of action in order to maintain the validity of the scientific record.”

A retraction note appearing on the article says concerns were raised “regarding the selection of the data, the analysis and the resulting conclusions of the article”.

The note says the article’s conclusions “were not supported by available evidence or data provided by the authors”.

More here:

https://www.theguardian.com/environment/2023/aug/26/scientific-journal-retracts-article-that-claimed-no-evidence-of-climate-crisis

That a reputable journal publisher took more than 18 months to notice that an off-topic article it published was untrue, evidence-free rubbish really makes it hard to understand just how we poor ignoramuses at the end of the information food chain are meant to discern fact from fiction.

When you think about it, the implications of all this are really scary – to say the least! Even if material is not taken down, surely it can be flagged as possibly misleading within a week or two. No properly conducted peer-review process gets it this wrong unquestioningly!

Springer, this is just not OK! What do others think?

David.


AusHealthIT Poll Number 711 – Results – 27 August, 2023.

Here are the results of the poll.

Are The ADHA, ATO And Federal Government Doing Enough To Counter E-Mail Scams And Fraud?

Yes                                                                     22 (50%)

No                                                                      19 (43%)

I Have No Idea                                                     3 (7%)

Total No. Of Votes: 44

A mixed outcome suggesting that around half of readers feel enough is being done, but many don’t!

Any insights on the poll are welcome, as a comment, as usual!

A good number of votes. But also a rather mixed outcome. 

3 of 44 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.


Thursday, August 24, 2023

It Is Hard To Pass Up Cold Hard Cash.

This appeared last week:

Pharmacies to charge for blood pressure checks and medication deliveries amid profits squeeze

By Natasha Robinson, Health Editor

9:00PM August 16, 2023

Chemists will begin charging customers for blood pressure checks, medication home deliveries, wound dressing and baby weighing – all services previously provided for free – as community pharmacies face reduced revenues amid plunging confidence in the sector’s financial outlook.

The CommBank Pharmacy Insights 2023 report reveals four out of five community pharmacies intended to begin charging for previously free services to offset the impact of the federal government’s 60-day dispensing policy, with 90 per cent expecting profits to decrease by one-third.

While confidence in the sector prior to this May was at its highest point in 10 years, with a strong business value outlook, sentiment declined sharply after the announcement of the 60-day dispensing policy, plunging almost 100 points on the UTS Community Pharmacy Barometer Index to the lowest ebb in a decade.

After May, some 72 per cent of community pharmacies expected the value of their business to decrease compared to 10 per cent six months earlier.

The report, produced in partnership with the University of Technology Sydney and health analytics company IQVIA, is the most comprehensive research available on community pharmacies on an annual basis in Australia and tracks the confidence, perceptions and attitudes of pharmacy owners and employees. It is due to be publicly released on Thursday.

“Many are considering whether keeping their workforce and opening hours intact is financially viable, while others are still grappling with shortages,” said CommBank Health chief Albert Naffah.

Pharmacists in NSW will march in the Sydney CBD on Thursday in protest at the 60-day dispensing policy, an action organised by grassroots pharmacy owners independent of the Pharmacy Guild, which has been waging an organised campaign.

The policy, which will allow 60 days’ worth of 320 common medications to be prescribed for the price of one PBS co-payment, cleared the Senate last week amid a Coalition attempt to block it, and the legislation will formally come into effect next month.

Southwest Sydney pharmacy owner Quinn On said the plans by most chemists to open up extra revenue streams would soon come into effect.

“Many pharmacies do free deliveries of medication for their regulars and elderly patients, and that won’t happen anymore after 60-day dispensing,” he said. “We are having lots of meetings about how we can mitigate the impact of 60-days dispensing, we’re doing everything we can.”

Mr On named free wound dressing and blood pressure checks as services that would no longer be provided for free. Blood pressure checks would attract a $10-$15 charge. He currently employs a midwife 4½ hours a week who does baby weighing and advises new mums. “I get a line-up of people every Friday morning … I am now looking at can I continue to provide that service.”

Pharmacies were also planning to join more pharma programs, in which patients are enrolled on to pharmaceutical company programs that monitor blood pressure, cholesterol and medication compliance.

Pharmacies get a small fee for enrolling each patient.

The CommBank Pharmacy Insights confirmed moves across the sector to expand service delivery and push for a greater role in patient care.

“The breadth and size of the expected (profits) decrease is leading to a range of strategic responses,” the report said. “The top growth opportunity for pharmacists in November 2022 was expanding professional services.

More here:

https://www.theaustralian.com.au/nation/politics/pharmacies-to-charge-for-blood-pressure-checks-and-medication-deliveries-amid-profits-squeeze/news-story/8a55bf5db1ee38903d44a1b1b6312596

I guess this had to come!

David.


Wednesday, August 23, 2023

It Seems Doctors Are A Little Wary Regarding The AI Push Presently Underway.

This appeared last week:

Docs Using AI? Some Love It, Most Remain Wary

Christine Lehmann, MA

August 15, 2023

When OpenAI released ChatGPT-3 publicly last November, some doctors decided to try out the free AI tool that learns language and writes human-like text. Some physicians found the chatbot made mistakes and stopped using it, while others were happy with the results and plan to use it more often.

"We've played around with it. It was very early on in AI and we noticed it gave us incorrect information with regards to clinical guidance," said Monalisa Tailor, MD, an internal medicine physician at Norton Health Care in Louisville, Kentucky. "We decided not to pursue it further," she said.

Orthopedic spine surgeon Daniel Choi, MD, who owns a small medical/surgical practice in Long Island, New York, tested the chatbot's performance with a few administrative tasks, including writing a job listing for an administrator and prior authorization letters.

He was enthusiastic. "A well-polished job posting that would usually take me 2-3 hours to write was done in 5 minutes," Choi said. "I was blown away by the writing — it was much better than anything I could write."

The chatbot can also automate administrative tasks in doctors' practices from appointment scheduling and billing to clinical documentation, saving doctors time and money, experts say.

Most physicians are proceeding cautiously. About 10% of more than 500 medical group leaders said their practices regularly use AI tools when they responded to a March poll by the Medical Group Management Association.

More than half of the respondents not using AI said they first want more evidence that the technology works as intended.

"None of them work as advertised," said one respondent.

MGMA practice management consultant Dawn Plested acknowledges that many of the physician practices she's worked with are still wary. "I have yet to encounter a practice that is using any AI tool, even something as low-risk as appointment scheduling," she said.

Physician groups may be concerned about the costs and logistics of integrating ChatGPT with their electronic health record systems (EHRs) and how that would work, said Plested.

Doctors may also be skeptical of AI based on their experience with EHRs, she said.

"They were promoted as a panacea to many problems; they were supposed to automate business practice, reduce staff and clinician's work, and improve billing/coding/documentation. Unfortunately, they have become a major source of frustration for doctors," said Plested.

Drawing the Line at Patient Care

Patients are worried about their doctors relying on AI for their care, according to a Pew Research Center poll released in February. About 60% of US adults say they would feel uncomfortable if their own healthcare professional relied on artificial intelligence to do things like diagnose disease and recommend treatments; about 40% say they would feel comfortable with this.

"We have not yet gone into using ChatGPT for clinical purposes and will be very cautious with these types of applications due to concerns about inaccuracies," Choi said.

Practice leaders reported in the MGMA poll that the most common uses of AI were nonclinical, such as:

·         Patient communications, including call center answering service to help triage calls, to sort/distribute incoming fax messages, and outreach such as appointment reminders and marketing materials

·         Capturing clinical documentation, often with natural language processing or speech recognition platforms to help virtually scribe

·         Improving billing operations and predictive analytics

Some doctors also told The New York Times that ChatGPT helped them communicate with patients in a more compassionate way.

They used chatbots "to find words to break bad news and express concerns about a patient's suffering, or to just more clearly explain medical recommendations," the story noted.

Is Regulation Needed?

Some legal scholars and medical groups say that AI should be regulated to protect patients and doctors from risks, including medical errors, that could harm patients.

"It's very important to evaluate the accuracy, safety, and privacy of language learning models (LLMs) before integrating them into the medical system. The same should be true of any new medical tool," said Mason Marks, MD, JD, a health law professor at the Florida State University College of Law in Tallahassee.

In mid-June, the American Medical Association approved two resolutions calling for greater government oversight of AI. The AMA will develop proposed state and federal regulations and work with the federal government and other organizations to protect patients from false or misleading AI-generated medical advice.

Marks pointed to existing federal rules that apply to AI. "The Federal Trade Commission already has regulation that can potentially be used to combat unfair or deceptive trade practices associated with chatbots," he said.

In addition, "the US Food and Drug Administration can also regulate these tools, but it needs to update how it approaches risk when it comes to AI. The FDA has an outdated view of risk as physical harm, for instance, from traditional medical devices. That view of risk needs to be updated and expanded to encompass the unique harms of AI," Marks said.

There should also be more transparency about how LLM software is used in medicine, he said. "That could be a norm implemented by the LLM developers and it could also be enforced by federal agencies. For instance, the FDA could require developers to be more transparent regarding training data and methods, and the FTC could require greater transparency regarding how consumer data might be used and opportunities to opt out of certain uses," said Marks.

What Should Doctors Do?

Marks advised doctors to be cautious when using ChatGPT and other LLMs, especially for medical advice. "The same would apply to any new medical tool, but we know that the current generation of LLMs are particularly prone to making things up, which could lead to medical errors if relied on in clinical settings," he said.

More here:

https://www.medscape.com/viewarticle/994892?icd=login_success_email_match_norm

I have to say this all seems like pretty sensible advice for those who want to start getting a feel for what is possible and how well it can work.

There is no need to hurry, but being King Canute and hoping to resist is also not wise.

In passing, I would be keen to pass on any references in the space that others have found useful!

David.