Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's Not Your Health Record, It's A Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Sunday, June 23, 2024

It Seems The Backlash Against Social Media Is Growing!

This appeared a day or so ago:

US floats warnings for social media

By Adam Creighton, Washington Correspondent, and Elizabeth Pike, Cadet Journalist

5:54PM June 18, 2024

Communications Minister Michelle Rowland says the government will consider tobacco-style warnings on social media platforms following a proposal in the US, as concerns mount over children’s online safety.

Ms Rowland said “every parent and caregiver was concerned” about youth social media usage as the “vectors for harm have never been more exemplified, have never been better understood, and continue to be better understood as each day goes by”.

US Surgeon General Vivek Murthy on Tuesday (AEST) recommended tobacco-style health warnings be placed on popular social media platforms such as TikTok and Instagram, amid mounting evidence of damage to users’ mental health, especially teenagers.

However, Ms Rowland questioned whether such a move was equally applicable in Australia, where social media platforms have been subject to tougher controls than in the US.

“Some of the comments the Surgeon General made were that these platforms are operating ‘under no rules’,” she said.

“We’re well placed in Australia; we do have a legislative framework, we also have a regulator in the eSafety Commissioner … We have always had this view the internet is not an ungoverned space.

“I think it also points (to the fact) smoking was once promoted as something that was healthy once upon a time. The challenges in retrofitting those harms is one that took decades, but it’s one that we are alive to as a government.”

Mr Murthy urged Congress to pass legislation that would enable the Surgeon General, one of the nation’s top health officials, to warn social media users about the increasingly well-documented mental health costs of excessive social media use, declaring it an “emergency”.

“Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” he wrote in an opinion piece published in The New York Times.

“It is time to require a Surgeon General’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents.”

The proposal follows calls in Australia by Peter Dutton for an outright ban on social media for children aged under 16, a suggestion Anthony Albanese is backing if appropriate legislation can be made “workable”.

“I want people to spend more time on the footy field or the netball court than they’re spending on their phones,” the Prime Minister said last week.

“And a ban, if it can be effective, is a good way to go.

“(Social media) is a scourge, it is negative, it is having a negative impact on young people’s mental health and on anxiety, and if you look at all of the figures then we have real issues to deal with.”

While a social media ban treads water in Australia, attention has turned to the US as an example of what may lie ahead.

In March, Florida governor Ron DeSantis, following similar moves in Arkansas, Ohio and Utah, signed a law making it illegal for children under 14 to be social media account holders, while 14- and 15-year-olds can do so only with parental consent.

Here is the link:

https://www.theaustralian.com.au/world/us-surgeon-general-wants-tobaccostyle-health-warnings-on-social-media/news-story/abaf6da9421dde5e6bbeb6ab6abc14c0

I have to say that, while this is all well and good, I really wonder about the practicality of applying such bans to, say, the under-16s. I cannot work out how such a ban might be enforced, and without enforcement this is really just ‘peeing into the wind’, where you are only likely to get wet!

Surely it would be better to fund education campaigns for parents, pointing out the potential harms of underage social media use, and to leave it to parents to regulate access as they see fit for their children.

Of course, it is important to point out that there are many legitimate and useful aspects of social media use, and we need to avoid a ‘throwing the baby out with the bathwater’ scenario!

Moderation in all things seems applicable here as it is so often elsewhere! 

While we are on the topic of managing social media, this also seems like a good idea!

https://www.theaustralian.com.au/business/technology/tech-giants-to-be-reined-in-by-esafety-code/news-story/3374b52dd7cadb114e7ce8ab2070b722

Tech giants to be reined in by eSafety code

EXCLUSIVE
By Noah Yim, Reporter

Updated 5:42AM June 21, 2024; first published at 12:00AM June 21, 2024

Tech giants will be forced to tackle child sexual abuse and pro-terrorist material on their platforms under new mandatory standards to be imposed by the eSafety Commissioner, after “resistance” from some of the world’s biggest companies to tackle abhorrent material.

David.

AusHealthIT Poll Number 752 – Results – 23 June 2024.

Here are the results of the poll.

Should Chiropractors Be Prevented From "Treating" Patients Who Cannot Provide Their Own Verbal Consent?

Yes: 22 (85%)

No: 4 (15%)

I Have No Idea: 0 (0%)

Total No. Of Votes: 26

A very clear-cut vote, but seemingly of little interest considering the vote count! The feeling seems to be to keep chiropractors away from non-adults!

Any insights on the poll are welcome, as a comment, as usual!

A poor voting turnout. 

None of the 26 who answered the poll admitted to being unsure about the answer to the question!

Again, many, many thanks to all those who voted! 

David.

Friday, June 21, 2024

Is This Technology Gone Mad Or An Essential Investment?

 This appeared last week:

We tried the $2000 baby tech dividing parents

There’s one question that many of today’s new parents or parents-to-be are asked over and over again. “Have you got a Snoo?”

For the unenlightened, the Snoo is a controversial $2000 piece of baby tech that automatically soothes your baby to sleep by imitating calming sensations of the womb.

It’s a smart bassinet that can listen for when your baby is fussing or crying and gently rock them, claiming to often calm crying in less than a minute.

The Snoo is like an extra set of arms for new parents – if that set of arms also had in-built Wi-Fi, microphones, white noise speakers and a motor.

The device is the brainchild of American pediatrician Harvey Karp, who wrote the book The Happiest Baby on the Block and pioneered a “5 S” approach to newborn babies: swaddle, side-stomach position, shush, swing and suck.

It’s become wildly popular with many new parents, given its allure of extra sleep for babies and their parents alike. Sleep deprivation can be one of the toughest aspects of parenting.

Snoo’s parent company, Happiest Baby, says Snoo boosts sleep by one to two hours a night from the very first days of life, and that by two to three months of age, most “Snoo babies” sleep more than nine hours a night.

Those are lofty claims and it’s understandable that some parents would be open to shelling out thousands of dollars for a bassinet, even one with plenty of bells and whistles.

A couple of disclaimers: Snoo parent company Happiest Baby sent over a review unit for the purposes of this write-up, so I didn’t pay the RRP. Secondly, I’ve only had one baby so far, who is at this stage a happy little two-month-old boy, so I don’t have a non-Snoo baby to compare.

Our experience, two months in at least, has been pretty remarkable. It’s no silver bullet by any means: our baby can often wake up at random times as all babies do, and sometimes won’t settle in the Snoo at all. But there have been plenty of moments of calm in what can otherwise be a chaotic, stressful and intense time.

One stand-out feature is that the Snoo automatically ramps its rocking speed up or down depending on if the baby is crying or settled, and often it works.
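
For readers who like to think in code, the behaviour described above amounts to a simple feedback loop: listen for crying, then ramp the rocking up or down accordingly. The minimal Python sketch below is purely conceptual; the function names and thresholds are invented and have nothing to do with Happiest Baby's actual firmware.

```python
import random
import time

# Hypothetical rocking levels, from gentle baseline to maximum soothing.
MIN_LEVEL, MAX_LEVEL = 1, 5

def cry_detected() -> bool:
    """Stand-in for the microphone and cry-detection step (random here)."""
    return random.random() < 0.3

def adjust_rocking(current_level: int) -> int:
    """Ramp rocking up while crying is heard, back down once the baby settles."""
    if cry_detected():
        return min(current_level + 1, MAX_LEVEL)
    return max(current_level - 1, MIN_LEVEL)

def soothing_loop(cycles: int = 10) -> None:
    level = MIN_LEVEL
    for _ in range(cycles):
        level = adjust_rocking(level)
        print(f"rocking level: {level}")
        time.sleep(1)  # a real device would poll its sensors on some interval

if __name__ == "__main__":
    soothing_loop()
```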

It does sometimes feel like a bit of a cheat code, and that’s what can make some parents and some paediatricians feel uncomfortable with it.

And that’s where the controversies come in. Critics say technology such as the Snoo could potentially negatively affect baby bonding, and that it can create bad habits by making the baby too reliant on rocking or white noise.

“I remain somewhat reluctant to have so much gadgetry around a newborn baby,” Australian paediatrician Dr Daniel Golshevsky, better known as Dr Golly, says in a blog post.

“I see countless babies with untreated causes of unsettled behaviour, from colic to eczema, protein intolerance to reflux, and everything in between. I deeply believe that glossing over these treatable conditions does a disservice to babies.

“Prevention is the key, so yes – babies should/can snooze without Snoos.”

Snoo parent company Happiest Baby says, on the other hand, that its device is safe to use, and that it’s easy to wean babies off its sound, swaddle and motion.

It points to research commissioned by the company that has found that Snoo’s rocking, swaddling, and white noise combo works just as well as parents’ soothing to calm fussy babies quickly, and that 90 per cent of nurses surveyed say that Snoo reduces infant fussing.

We haven’t yet reached the weaning phase ourselves, given babies can use Snoo for up to six months and our little one is just two months. On some occasions, we have had to settle him in bassinets that aren’t the Snoo, and we’ve found we are doing all the same things the Snoo does for us – rhythmic rocking, swaying and playing varying levels and types of white noise.

Is it worth the $2000? Like so many baby purchases, you need to keep your expectations in check. It’s not a cure-all for a crying or fussing baby. But at the same time, it’s hard to put a dollar value on a decent night’s sleep. And we’ve been having an increasing number of those in recent weeks.

It’s not for everyone, but the Snoo has been the most useful, high-impact gadget I’ve used in a long time. And I’ve used a lot. Like so many baby purchases, it’s also worth checking out Facebook Marketplace: used Snoos typically retail for around half the price of their brand-new counterparts.

What else did we try?

Australian company Tweetycam’s range of products really impressed and its baby monitor ($239) was our favourite.

The monitor doesn’t use Wi-Fi – meaning no real security concerns – instead relying on a technology called FHSS (frequency-hopping spread spectrum) to send the signal from the camera to the monitor.

The Tweetycam also monitors the room temperature, which is a real plus, and its picture and sound quality are excellent. The monitor itself is easy to use with no smartphone app required, and its battery life lasts about 12 hours. There’s definitely benefit in having a standalone monitor rather than needing to open a smartphone app every time you want to check on your baby.

The Tweetydreams nightlight and sound machine is also strongly worth considering, and at $120 is good value. That product requires a smartphone app, however, which can be a bit fiddly. A basic nightlight that just switches on and off has worked best for us, particularly when trying to bumble your way around in the middle of the night.

Our other favourite baby monitor for those who do prefer a smartphone app is the Lollipop smart baby camera ($319). It’s incredibly versatile, wrapping around the baby’s cot or bassinet, for example, or standing upright on its own. The Lollipop sends you phone notifications whenever it detects a noise, and you can monitor it from any device, be it a smartphone, tablet or desktop. Its picture quality is top-notch and it comes in fun colours such as cotton candy, pistachio and turquoise.

Other things to consider

The Snotty Boss nasal aspirator has been a godsend, and is superior (and decidedly less gross) to the manual method of sucking snot from baby’s nose. I didn’t try a “smart sock” – the baby monitor that tracks pulse and oxygen levels via a sock on the baby’s foot. That felt like a step too far.

And lastly, it might seem obvious to say, but technology is no replacement for human interaction. Parenting is a messy, tricky and intense thing, but no amount of technology will substitute for spending time with your little one. There are just some things that can make it that little bit easier.

https://www.smh.com.au/technology/we-tried-the-2000-baby-tech-dividing-parents-20240528-p5jh5y.html

It seems the life of infants in the 2020s is a fair bit different from when I was a nipper!

Amazing…

David.

Thursday, June 20, 2024

I Do Wonder About The ‘One Size Fits All’ Nature Of This Discussion.

This appeared last week:

Anthony Albanese backs under-16 social media ban


By Sarah Ison, Political Reporter

Updated 12:26PM June 14, 2024; first published at 10:30PM June 13, 2024

Anthony Albanese says a total ban on under-16s from accessing social media is a “good way to go” in curbing the serious online harms impacting children, declaring that Peter Dutton was just playing “catch up” by promising to legislate such a ban within the first 100 days of the Coalition taking office.

The Opposition Leader on Thursday doubled down on his pledge to use age verification to stop children accessing social media before the age of 16, saying it was “inconceivable” for tech giants to allow 13-year-olds on to their platforms.

CyberCX chief strategy officer and former eSafety commissioner Alastair MacGibbon told The Australian tech giants made significant profit from allowing as large a cohort on to their platforms as possible and that mandates stopping them from “monetising our kids” were needed.

“I applaud politicians for actually starting to talk about taking action on something that I think deep down most people in the public have wanted a stance on for a while,” he said.

However, Mr MacGibbon said the technology that would be needed to implement such a ban “still has a long way to go”.

Opposition communications spokesman David Coleman said the Coalition would announce the measures it would use to implement the ban in due course, but pointed to the fact social media platforms were already using age verification technology in some circumstances.

“They do it for Facebook dating in the US, they do it for Instagram if you change your age from say 15 and say you’re 18 – because in that case it’s so obvious the person is probably a child, they have to look into it – but they’ve been doing that for some time. So the idea that the technology doesn’t exist, or it’s not possible, is wrong,” he said.

“We’ll release further details in due course, but plainly, the companies will be required to comply with the new law, and that will include penalties if they don’t.”

Labor invested more than $6m into an age assurance trial earlier this year, but the initiative is largely aimed at investigating technologies that can prevent people under 18 accessing adult content such as pornography.

It is not yet clear whether the trial will look to test the technology on social media access.

In response to Mr Dutton on Thursday handing “an offer of friendship to the government to make sure that we can join up together on this really important issue”, the Prime Minister indicated a ban would have bipartisan support.

“A ban, if it can be effective, is a good way to go,” he said.

But in a veiled swipe at Mr Dutton, Mr Albanese said the Opposition Leader was a latecomer to the issue. “It’s good that he’s caught up, and I welcome him catching up,” he said.

When asked whether the age limit should be set at 14, as South Australia is currently investigating, or at 16, Mr Albanese said 16 was reasonable.

The increase in momentum for social media bans comes as eSafety research found a small number of “harmful voices”, including that of Andrew Tate – known for his misogynistic views – dominated online conversations about masculinity.

Here is the link:

https://www.theaustralian.com.au/nation/politics/anthony-albanese-backs-under16-social-media-ban/news-story/602f0cc94b74c1fe3863e7cd5d22623c

As an oldie, this proposal has no effect on me, but I wonder about the choice of 16 as the cut-off age. At that age the range of maturity is very wide, with some teenagers pretty mature and others still quite immature and young.

My feeling is that above age 12 or 13 there is a role for parents to decide what usage privileges are appropriate.

Of course, as the article points out, it is pretty hard to find or implement technology to regulate just what happens!

How do readers think this issue should be managed?

David.

Wednesday, June 19, 2024

Apple Seems To Really Be Convinced That “Apple Intelligence” Is Important! Time Will Tell.

This appeared last week:

What Apple Intelligence means for you

Though the “where” and “when” of Apple’s new AI system are still a mystery, we do know a lot about the “who”, “what” and “why”.

John Davidson, Columnist

There is one important thing to know about Apple’s big move this week into artificial intelligence: it’s not here yet, and strictly speaking it may never be here.

On Monday, US time, Apple announced dozens of new AI features for its phones, tablets, virtual-reality headsets and PCs, including a long-awaited overhaul of its Siri voice assistant, new ways to handle incoming emails and write outgoing ones, and new ways to generate, find and manage photos on Apple devices.

Many of the headline announcements, including the Siri overhaul and the deal Apple struck with OpenAI, which may see OpenAI’s ChatGPT occasionally popping up in response to Siri requests, fall under an AI system Apple calls “Apple Intelligence”.

Technically, Apple has only announced Apple Intelligence for the USA (or, to be even more pedantic, for “US English”), promising to start rolling out some AI features in the American autumn as part of the next major software update to the iPhone, iPad and Mac.

Apple has yet to reveal when, or even if, Apple Intelligence will come to Australia. The only public (or indeed private) statement is that “additional languages will come over the course of the next year” after the US English launch. So it could be 2025 before it rolls out here.

While it’s possible Australian users will be able to get earlier access to the features simply by going into the settings menus of their device and switching its language to US English, the geographic limit appears to be about more than just language (see private cloud compute below).

Early indications are Apple may enforce the limit through other means, too, such as checking the home address of the Apple iCloud account users will need to access the services. All is not lost, however. 


Apple Intelligence versus machine learning

Apple Intelligence refers primarily to the set of services that will require up-to-date hardware to run: the stuff that will need a Mac or iPad running an Apple Silicon chipset, or an iPhone greater than, or equal to, the iPhone 15 Pro.

Many of the interesting new AI features don’t fall under that umbrella. The enhancements to Apple’s browser Safari, for instance, which will allow web users to summarise and even index the content of websites, come under a different category that Apple generally refers to as “machine learning” rather than “Apple Intelligence”.

(In numerous background briefings I’ve had in the Apple Park headquarters in Cupertino this week, that’s the pattern that has emerged. If Apple is calling it “machine learning”, you won’t need a new iPhone for it, and it won’t be initially limited to the USA. Technically, Apple Intelligence could accurately be described as machine learning, too, but as you’ll see in a moment, it’s a much bigger and more ambitious framework than adding a bit of ML to a single app.)

The new calculator for the iPad falls under “machine learning” and should be available when iPadOS 18 is launched in September or October.

Another example of this machine learning/Apple Intelligence divide is the Photos app that appears on iPhones, iPads, Macs and, from July 12, on Apple’s Vision Pro.

It’s receiving a significant update, with new features such as the ability to organise photos according to the grouping of people and pets in them. All the photos with only you and your dog will be separated out into one group, while photos with you, your dog and your mother will have another group.

That’s a machine learning feature that every Apple user will get.

But click on one of those photos to edit it, and only customers who qualify for Apple Intelligence will see a button for a new “Clean Up” tool that scans an image, finds people who don’t belong in the photo, and deletes them.

In addition to the machine learning and Apple Intelligence features, Apple has also announced scores of other new features in devices that have nothing to do with AI or ML.

iPhone users will be able to type in messages and schedule their delivery up to two weeks in advance, so they can send birthday wishes when they remember to do so, rather than risk forgetting to do so on the day itself.

(A handy tip coming from Apple insiders who have been testing that feature for months is to schedule the message for a random time – 8.07am rather than 8.00am, for example – so the recipient doesn’t realise they’re not top of mind.)

It’s fair to say, however, that the most exciting features fall under Apple Intelligence, and that the wait for them to arrive could feel like a long one.

How it will work

At the core of Apple Intelligence is a database called the semantic index.

Whenever an iPhone, iPad or Mac gets a new message or takes a new photo, or whenever the user creates a new document, the same mechanism that indexes the file so it can be found by Apple’s existing Spotlight search system will also send it off for processing by an AI indexer, which will extract the meaning of that document and store the meaning in the semantic index.

Where Spotlight might index, say, a new iPhone photo by the date and time it was taken, the semantic indexer might identify the people in the photo, match them up to people in the phone’s contacts database, and index them by the relationship they have to the iPhone’s owner.
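
To illustrate the distinction, here is a small, hypothetical Python sketch contrasting a Spotlight-style entry (just file metadata) with a semantic-style entry that also records who is in a photo and how they relate to the device owner. None of these structures are Apple's actual data formats; they are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical contacts database: name -> relationship to the device owner.
CONTACTS = {"Alice": "mother", "Bob": "uncle"}

@dataclass
class SpotlightEntry:
    """A traditional index entry: just file metadata."""
    path: str
    created: datetime

@dataclass
class SemanticEntry:
    """A semantic entry: file metadata plus extracted meaning."""
    path: str
    created: datetime
    people: list = field(default_factory=list)
    relationships: list = field(default_factory=list)

def semantically_index(path: str, created: datetime, faces: list) -> SemanticEntry:
    """Pretend 'faces' came from an on-device vision model; map them to known contacts."""
    relationships = [CONTACTS[name] for name in faces if name in CONTACTS]
    return SemanticEntry(path, created, people=faces, relationships=relationships)

plain = SpotlightEntry("IMG_0001.jpg", datetime.now())
rich = semantically_index("IMG_0001.jpg", datetime.now(), faces=["Bob"])
print(plain)
print(rich)  # people=['Bob'], relationships=['uncle']
```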

Semantic indexes are protected by the same hardware-based “secure enclave” system that Apple uses to secure ultra-sensitive information such as passwords, and Apple insists they will never leave the device they’re created on.

Even when an Apple customer has multiple devices sharing data, the semantic index will never be shared between them. It will be created and maintained separately on each device.

Another database sitting on an Apple Intelligence device will contain a list of what Apple calls “app intents”.

It’s like an app store, except instead of holding lists of apps, it holds a list of the functions that the installed apps on the device can perform in an automated way.

If, say, the iPad version of Adobe’s Photoshop was able to take an image saved in the PNG format, and automatically export it as a JPEG file, Adobe would be able to inform the iPad of that function by calling up the app intent system when Photoshop was first installed on the iPad.
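
Conceptually, the app-intents database is a registry that maps a capability to the app able to perform it (Apple's real mechanism for developers is its App Intents framework in Swift). The toy Python registry below, mirroring the article's hypothetical Photoshop example, is only an illustration of the idea, not any real API.

```python
# A toy registry of "app intents": capability name -> (app, callable that performs it).
INTENT_REGISTRY = {}

def register_intent(capability: str, app: str, handler) -> None:
    """What an app would do once, at install time, to advertise an automatable function."""
    INTENT_REGISTRY[capability] = (app, handler)

def perform_intent(capability: str, *args):
    """What the system would do when a workflow needs that function."""
    app, handler = INTENT_REGISTRY[capability]
    print(f"Asking {app} to handle '{capability}'")
    return handler(*args)

# Hypothetical registration: Photoshop advertises a PNG-to-JPEG export function.
register_intent("convert_png_to_jpeg", "Photoshop",
                lambda path: path.replace(".png", ".jpg"))

print(perform_intent("convert_png_to_jpeg", "uncle_party_hat.png"))
```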

In addition to all of that, every Apple Intelligence device will contain dozens of different AI language models for analysing and generating text, as well as “diffusion” models for analysing and generating images.

Compared to the large language and diffusion models used in the cloud by chatbots such as OpenAI’s ChatGPT and Google’s Gemini, Apple’s models will be smaller and more numerous, each of them fine-tuned to perform a specialised task using as little processing power as possible.

(Samsung’s latest Galaxy phones and Google’s latest Pixel phones employ the same architecture: they contain numerous small models, which swap in and out of the phone’s AI processor depending on the task.)

Now, sitting above all of this will be a software layer known as an “orchestrator”, and it’s here that the fun begins.

The orchestrator takes incoming requests from the user, decides what data from the semantic index is needed to fulfil those requests and which model is best suited to the task, looks into the list of intents to see which apps might be able to perform useful functions for that task, and sets the wheels in motion to get the job done.

Using Apple Intelligence, it might be possible, for instance, for a user to ask Siri to find a picture of an uncle (identified using the semantic index), put a party hat on his head (using a diffusion model), convert that picture to a JPEG file (using the Photoshop function listed in the app intent database), and email it to him (address from the semantic index, email function found in the app intent database) on his birthday (from the calendar, via the semantic index).
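
To make that flow concrete, here is an illustrative Python sketch of an orchestrator handling the birthday example: it queries a stand-in semantic index, applies a stand-in diffusion model, calls a registered app intent, and assembles the email job. Every structure and function name here is hypothetical; this is a reader's sketch, not Apple's code.

```python
# Illustrative orchestrator pipeline for the "party hat for uncle's birthday" example.
# Every piece below is a stand-in for the on-device components described above.

SEMANTIC_INDEX = {
    ("person", "uncle"): {"name": "Bob", "email": "bob@example.com",
                          "photo": "bob.png", "birthday": "2024-07-01"},
}

def lookup(kind: str, key: str) -> dict:
    """Stand-in for a semantic index query."""
    return SEMANTIC_INDEX[(kind, key)]

def diffusion_edit(photo: str, instruction: str) -> str:
    """Stand-in for a small on-device diffusion model editing the image."""
    return f"{instruction}({photo})"

def app_intent(capability: str, payload: str) -> str:
    """Stand-in for calling a registered app intent (e.g. a PNG-to-JPEG export)."""
    return payload.replace(".png", ".jpg") if capability == "convert_to_jpeg" else payload

def orchestrate() -> dict:
    # A real orchestrator would first parse the user's spoken request with a language model.
    uncle = lookup("person", "uncle")                          # semantic index
    edited = diffusion_edit(uncle["photo"], "add_party_hat")   # diffusion model
    jpeg = app_intent("convert_to_jpeg", edited)               # app intent
    return {"to": uncle["email"], "attachment": jpeg,
            "send_on": uncle["birthday"]}                      # calendar data via the index

print(orchestrate())
```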

Private Cloud Compute

Some jobs, however, will require language or diffusion models that are simply too large, or require too much processing power, to be stored on the iPhone, iPad or Mac.

To run such models, Apple is filling data warehouses around the world with custom AI servers it’s building using the Apple Silicon chips that go into Macs and iPads, and running a highly secure version of the iPhone’s operating system, iOS.

That rollout is starting in the US, and it’s part of the reason Apple Intelligence will be only in US English when it first appears.

These “Private Cloud Compute” data warehouses are integral to Apple Intelligence, since they’re the only way Apple can send data from the user’s very private semantic index to the cloud, without breaking Apple’s promise that Apple Intelligence is private and secure.

When the orchestrator on a device running Apple Intelligence decides a job is too big for the models stored on the device itself, it may decide to bundle up the job, together with context data from the semantic index, and upload it to Private Cloud Compute.

There, the model will process the request, return the results, and then delete any record of the request.

(Unusually for a cloud server, PCC computers have no storage devices whatsoever, so not only can they not save semantic index data, they can’t even save the error logs and other logs that administrators usually rely on to manage server farms. It’s an ambitious strategy designed to assuage concerns about AI privacy.)

OpenAI

There is a final scenario, however, that has led to some controversy with the likes of Elon Musk.

If the orchestrator decides that the job is too big even for Private Cloud Compute, it will put some warning messages on the screen telling the user they’re about to leave Apple airspace and fly off into the unknown.

If the user agrees to throw caution to the wind, the orchestrator will bundle up the job and send it to a third-party system for processing.
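
The escalation logic described here, on-device first, then Private Cloud Compute, then a third party only with the user's explicit consent, can be summarised in a few lines. This is only a sketch of the behaviour as the article describes it; the thresholds and function names are invented for illustration.

```python
def user_consents_to_third_party() -> bool:
    """Stand-in for the on-screen warning and explicit user confirmation."""
    return input("Send this request to a third-party model? [y/N] ").strip().lower() == "y"

def route_request(estimated_cost: int) -> str:
    """Pick where to run a job based on a (hypothetical) size/cost estimate."""
    ON_DEVICE_LIMIT = 10        # invented thresholds, purely illustrative
    PRIVATE_CLOUD_LIMIT = 100

    if estimated_cost <= ON_DEVICE_LIMIT:
        return "on-device model"
    if estimated_cost <= PRIVATE_CLOUD_LIMIT:
        return "Private Cloud Compute"
    if user_consents_to_third_party():
        return "third-party model (e.g. ChatGPT)"
    return "request declined"

print(route_request(5))    # handled on-device
print(route_request(50))   # escalated to Private Cloud Compute
```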

On Monday, the only third party Apple had inked a deal with was OpenAI, though officials said they expected to get Google on board, too, allowing jobs to be sent to Gemini.

It’s likely, however, that OpenAI and Google are both stopgap measures, until Apple’s own Private Cloud Compute systems have models that can handle generic generative AI tasks themselves.

The real point of giving the orchestrator the last-resort option of going to a third party is to handle highly specialised AI tasks, such as medical models that take images of skin shot on an iPhone, and look for signs of skin cancer.

The very moment Apple gets its own Private Cloud Compute system to a level good enough to handle more generic generative AI queries, it’s nearly certain it will decide it’s cheaper, more private and better PR for the orchestrator to take that option instead of using OpenAI or Google.

The only thing uncertain is how long it will take Apple to get its models to that level. But before we even get to worry about that uncertainty, we have another one to deal with.

Just when is any of it coming here?

John Davidson attended Apple’s Worldwide Developer Conference as a guest of Apple.


Here is the link:

https://www.afr.com/technology/what-apple-intelligence-means-for-you-20240613-p5jlie

I have to say I have read this through a couple of times and I am not sure just where it is all heading. I suspect it will be a while before that is clear, as well as when it is going to reach good old Oz.

I suspect this is more of the revolution I have been chatting about in the last few months.

These newer technologies seem to have a life all of their own as they advance, and I have to say I am still not sure what the actual destination is!

As with all things, time will tell, I guess…

David.