-----
This weekly blog explores the news around the larger issues in Digital Health, data security, data privacy, AI/ML technology, social media and any related matters.
I will also try to highlight ADHA Propaganda when I come upon it.
Just so we keep count, the latest Notes from the ADHA Board were dated 6 December 2018, and we have seen none since! It’s pretty sad!
Note: Appearance here is not to suggest I see any credibility or value in what follows. I will leave it to the reader to decide what is worthwhile and what is not! The point is to let people know what is being said and published that I have come upon and found interesting.
-----
ChatGPT users are finding that ethical rules can be artificial too
Word is that ChatGPT is ethical artificial intelligence. There are a bunch of things it simply will not do.
By Geordie Gray
The Australian
January 22, 2023
OpenAI, the same San Francisco company behind digital art generator DALL-E, has programmed the bot to refuse “inappropriate requests” like asking “how do you make a bomb” or “how can I shoplift the Dior lip oil from Mecca”.
It also has guardrails set up to avoid the kind of sexist, racist, and generally offensive outputs that are rampant on other chatbots. But, as is the way of the internet, users have already found ways to trick the technology into saying naughty things, by framing input as hypothetical thought experiments, or a film script.
So, how ethical is this conversational AI released to the public late last year, and how easy is it to make it bend to our will?
As expected, ChatGPT refused to mock up an argument between Joe Biden and Donald Trump, in which Biden is the villain and Trump is the hero, because “it’s not appropriate or respectful to label real people as ‘villains’ or ‘heroes’”.
-----
Teachers urged to embrace AI robots
By NATASHA BITA
7:50PM January 20, 2023
Robots will be used to mark essays and assignments in schools and universities, as the “plagiarism police” turn the tables on the “chatbot cheats”.
Turnitin, which is used by most schools and universities to detect plagiarism, has warned it is a “fruitless exercise” to try to ban artificial intelligence.
Instead, it is developing new programs that will use AI to mark tests and assignments, with customised lessons embedded to correct errors and explain concepts.
Turnitin regional vice-president James Thorley said the company had already built a prototype to detect the use of ChatGPT, a chatbot that uses artificial intelligence to generate comprehensive written responses on command.
-----
Adobe chief says companies must address the big issues in tech
By HELEN TRINCA
Updated 5:37PM January 20, 2023, First published at 5:27PM January 20, 2023
Shantanu Narayen, 59, global CEO of Adobe Systems, has worked at the software company for almost 25 years, 15 of them in the top job. The 40-year-old company, whose products include Photoshop, Illustrator, Acrobat Reader and digital marketing software for customer experience management, achieved revenue of $US17.61 billion in 2022, up 12 per cent year-on-year. This is an edited interview.
ChatGPT and its implications are occupying the minds of a lot of people right now. How do you approach these issues as leader of a company that has technology that can be misused?
When we talk about the purpose of the company, we talk about creativity for all and the technology to transform. We recognise there is an awesome responsibility that companies like Adobe have to understand how their technologies are being used and to ensure they are being used for good. We think of fakes, data and AI as three separate issues. (To detect fakes) we created a content authenticity initiative; now, in all of our applications, you can actually digitally sign a piece of content that you created. And the distributors of content, whether it’s The New York Times or Facebook, have all signed up to say we will have some way of showing the provenance of that content and who it should be attributed to. We use artificial intelligence techniques to understand if content is altered, and we continue to invest in technology that allows us to say which images were changed and which were not.
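The content-signing idea Narayen describes can be sketched in miniature. This is not Adobe’s actual Content Authenticity Initiative implementation; it is a toy illustration of the principle, using a keyed hash in place of a real public-key signature, with a made-up key and content:

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    # Produce a provenance tag: a keyed hash the creator attaches to the content.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, signature: str) -> bool:
    # Recompute the tag and compare; any alteration to the content changes the hash.
    return hmac.compare_digest(sign_content(content, key), signature)

key = b"creator-secret"       # placeholder key
original = b"photo-pixel-data"  # placeholder content
tag = sign_content(original, key)

print(verify_content(original, key, tag))          # True: content unaltered
print(verify_content(b"edited-pixels", key, tag))  # False: content was changed
```

Real provenance schemes use public-key signatures plus attribution metadata, so anyone can verify the tag without sharing a secret, but the alteration-detection logic is the same shape.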
-----
Schools right now face a choice - fight the wave of ChatGPT, or surf it
CEO and former principal
January 21, 2023 — 5.00am
I’m not sure I’ve seen a more panicked reaction among the teaching fraternity in recent months than the one I’ve witnessed to the emergence of artificial intelligence platforms like ChatGPT.
It’s almost as if a storm like no other has suddenly appeared on the horizon and our compulsion to batten down the hatches on our educational ship, lest it sink forever to the murky depths of oblivion, is overtaking our rational brains.
ChatGPT, an AI program that can write essays or complete tasks such as a student’s homework, and its inevitable ensuing versions, is no storm. Nor is it an existential threat to the role of teachers.
It emerges at a time when literacy and numeracy results are deteriorating in Australian schools despite increased funding and teachers working longer hours. And it could be the greatest opportunity to land at our feet in recent history if we can emerge from below the decks and adjust the sails of our schools accordingly.
-----
‘Pill mills’ or the future of medicine? The rise of the telehealth industry
January 21, 2023
It’s not easy for Laney Robson to get her two children, aged 5 and 11, to a doctor. Based just outside the Hunter Valley in NSW, she could brave the “diabolical” wait at the local emergency department or face being told “it’s weeks and weeks’ wait before we can take you on” by an unfamiliar GP clinic. So Robson started using InstantScripts, an online business that charges $19 per prescription, for scripts to treat minor ailments like her children’s eye and chest infections.
“Everything else we do in our lives is becoming more and more online, I just don’t see the difference between banking online and this sort of stuff,” Robson says. “Certainly for people in rural areas, we’re very fortunate to be able to access something like this.”
Telehealth has been one of the big winners in Australian business from the COVID-19 pandemic. A new wave of startups offering online access to treatments, from acne creams to erectile dysfunction medication and weight loss drugs, has emerged, raising well over $100 million collectively from venture capital firms and other investors who see a potential gold rush for startups to capture a slice of the $200 billion this country spends on health each year.
These well-funded startups have been advertising aggressively on late night television, social media and public transport in a bid to ingratiate themselves with customers young and old.
-----
ChatGPT won’t take your job, but you will need to learn how to use it
By Liam Mannix
January 21, 2023 — 5.00am
As waves of new technology crash over and disrupt the economy, “learn to code” has become a common utterance – often to workers worried about losing their jobs to automation.
But what happens when technology finally comes for technology jobs?
That’s part of the promise and peril of ChatGPT and a new wave of AIs that can hold human-like conversations, write articles and academic essays, summarise reports, paint pictures, and – yes – write code.
Software engineer Mazen Kourouche, creative director at Litmus Digital, is already using the AI to replicate large parts of his work. ChatGPT excels at writing the algorithms that sit at the functional heart of good code. It is very good at building websites. And it can quickly parse Kourouche’s code and spot errors.
But like everyone The Age and the Herald spoke to for this story, Kourouche is not worried about it taking his job. He thinks it will simply make him more productive.
-----
How teachers are tackling the chatbot cheats
Kalley Huang
Jan 20, 2023 – 5.00am
While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.” It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments.
A red flag instantly went up.
Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences – and, in this case, had written the paper.
Alarmed by his discovery, Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.
-----
Society urged to hold companies to account on tech use
By HELEN TRINCA
9:00AM January 20, 2023
Society needs to hold companies to account for artificial intelligence products such as ChatGPT when they fail the “responsible tech” test, according to Dr Rebecca Parsons, chief technology officer at global consultancy Thoughtworks.
She says we need a combination of government regulation and consumer and employee market pressure to ensure organisations are aware of the consequences of programs that use algorithms to do anything from vetting job candidates and granting bank loans to writing court reports.
“We’ve gotten pretty good at (managing) the data breaches but there are a lot more subtle issues emerging,” she says.
“It’s one thing if Amazon gives me a recommendation that I don’t like,” she says. “It’s another thing if an AI assistant gives a recommendation to a judge on whether or not I should be let out on parole. As the consequences of getting things wrong increase we have to be more conscious.”
-----
https://www.afr.com/chanticleer/microsoft-reveals-ominous-dark-clouds-over-tech-20230119-p5cdqw
Microsoft reveals ominous clouds over tech
Satya Nadella’s moves to slash Microsoft’s costs and step up the internal use of artificial intelligence should send shockwaves through the technology universe.
Jan 19, 2023 – 12.27pm
Technology job cuts have become so common in the past six months that it would be easy to interpret the sacking of 10,000 Microsoft employees as just another incremental consequence of plunging global IT spending caused by tightening corporate budgets.
But the Microsoft cuts are different to others such as the 8000 at Salesforce and 18,000 at Amazon, because of the way they were framed by Seattle-based chief executive Satya Nadella.
Nadella made a direct link between Microsoft’s need to maintain profitability through cost-cutting and the wider use within his company of generative artificial intelligence using ChatGPT, developed by OpenAI.
OpenAI is widely reported to be about to receive a $US10 billion ($14.3 billion) investment from Microsoft, a move that could set up an intense battle between Bing and Google in search.
-----
‘So far from science’: GPs desperate to debunk sexual health myths on social media
By Nell Geraets
January 18, 2023 — 6.18pm
Doctors are fighting an uphill battle against misinformation about sexual health and contraception – including yoghurt-based thrush remedies, “cancer-causing” contraception, and requests for genital surgery – as young people increasingly turn to social media for medical advice.
General practitioner and women’s health expert Dr Magdalena Simonis said many teenagers and women in their 20s and 30s are seeing misleading information on social media, at times resulting in unwanted pregnancies and requests for surgery on their genitals.
“Influencers are seeing themselves as changemakers or social commentators, but they don’t realise that the impact they’re having is actually pretty significant,” Simonis said. “It’s so far removed from science.”
A US study published in Health Communication last week found that popular influencers were promoting unreliable – and often inaccurate – health advice at an alarming rate, triggering concerns among clinicians that more impressionable young people would suffer adverse health events, including unwanted pregnancies, STIs and self-esteem issues.
-----
https://www.ausdoc.com.au/news/medibank-legal-action-to-test-health-privacy-laws/
Medibank legal action to ‘test’ health privacy laws
At least 100,000 customers are chasing compensation for 'humiliation and distress'
Lawyers will chase compensation for thousands of Medibank customers in what they describe as a potential test of privacy laws around health records.
In October last year, cybercriminals threatened to publish Medibank customers’ claims data unless the health insurer paid $15.6 million — one US dollar for each customer hacked.
When Medibank did not pay up, the hackers followed through, publishing a list of well-known names linked to insurance claims for addiction treatment, mental health treatment or terminations.
Maurice Blackburn principal lawyer Andrew Watson said that not every medical business that fell victim to hacking would necessarily face the prospect of legal action.
-----
Robotic music a travesty, says Nick Cave
8:20PM January 17, 2023
Australian singer-songwriter Nick Cave is sceptical about artificial intelligence taking over his job or those of other musicians after he received lyrics from fans written by a chatbot called ChatGPT supposedly “in the style of Nick Cave”.
Among the dozens of songs sent to Cave since the free “generative AI” chatbot ChatGPT launched at the end of last year, was one initiated by Mark from Christchurch, who wrote in to Cave’s email newsletter The Red Hand Files.
Despite touching on hallmark Cave themes like religion, death and violence, such as the line “I’ll dance with the devil, and I’ll play his game”, Cave was deeply unmoved.
“The apocalypse is well on its way. This song sucks,” he wrote, in his latest issue of The Red Hand Files on Tuesday morning.
-----
Is anyone’s job really safe once AI learns to fake sincerity?
Algorithms cannot share your feelings or feel your pain like a human care professional can. But don’t bank on them not learning how to try.
Barry Eichengreen Economics professor
Jan 17, 2023 – 12.28pm
With hindsight, 2022 will be seen as the year when artificial intelligence gained street credibility. The release of ChatGPT by the San Francisco-based research laboratory OpenAI garnered great attention and raised even greater questions.
In just its first week, ChatGPT attracted more than a million users and was used to write computer programs, compose music, play games, and take the bar exam. Students discovered that it could write serviceable essays worthy of a B grade – as did teachers, albeit more slowly and to their considerable dismay.
ChatGPT is far from perfect, much as B-quality student essays are far from perfect. The information it provides is only as reliable as the information available to it, which comes from the internet. How it uses that information depends on its training, which involves supervised learning, or, put another way, questions asked and answered by humans.
The weights that ChatGPT attaches to its possible answers are derived from reinforcement learning, where humans rate the response. ChatGPT’s millions of users are asked to upvote or downvote the bot’s responses each time they ask a question. In the same way that useful feedback from an instructor can sometimes teach a B-quality student to write an A-quality essay, it’s not impossible that ChatGPT will eventually get better grades.
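The upvote/downvote loop described above can be sketched as a toy. Real reinforcement learning from human feedback trains a separate reward model on human ratings and fine-tunes the network against it; the snippet below (with invented example answers) only mimics the feedback mechanics with a running score per candidate response:

```python
from collections import defaultdict

# Toy preference learner: each candidate answer accumulates a score from
# user upvotes (+1) and downvotes (-1), and the bot then prefers the
# highest-scoring answer. This illustrates the feedback loop only, not
# how an actual reward model or fine-tuning works.
scores = defaultdict(float)

def record_feedback(answer: str, upvote: bool) -> None:
    # One user rating adjusts the answer's running score.
    scores[answer] += 1.0 if upvote else -1.0

def best_answer(candidates: list[str]) -> str:
    # Unseen answers default to a neutral score of 0.
    return max(candidates, key=lambda a: scores[a])

record_feedback("B-grade essay", True)
record_feedback("B-grade essay", False)
record_feedback("A-grade essay", True)
print(best_answer(["B-grade essay", "A-grade essay"]))  # A-grade essay
```

The point of the analogy survives even in this crude form: enough consistent feedback shifts which responses the system prefers.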
-----
https://www.afr.com/technology/microsoft-to-add-chatgpt-to-cloud-services-soon-20230117-p5cd62
Microsoft to add ChatGPT to cloud services ‘soon’
Dina Bass
Jan 17, 2023 – 1.33pm
Seattle | Microsoft says it will add OpenAI’s viral artificial intelligence bot ChatGPT to its cloud-based Azure service “soon,” building on an existing relationship between the two companies as the software giant mulls taking a far larger stake in OpenAI.
Microsoft announced the broad availability of its Azure OpenAI Service, which has been available to a limited set of customers since it was unveiled in 2021.
The service gives Microsoft’s cloud customers access to various OpenAI tools like the GPT-3.5 language system that ChatGPT is based on, as well as the Dall-E model for generating images from text prompts, the company said in a blog post.
That enables Azure customers to use the OpenAI products in their own applications running in the cloud.
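As a rough sketch of what calling such a cloud-hosted model from an application looks like, the snippet below builds (but does not send) a completion request for an Azure-hosted deployment. The resource name, deployment name and api-version string are placeholders, and the exact endpoint shape should be checked against Microsoft’s current documentation:

```python
import json

def build_completion_request(resource: str, deployment: str, prompt: str) -> tuple[str, str]:
    # Assemble the request URL and JSON body for a text-completion call.
    # All identifiers here are illustrative placeholders, not real values.
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/completions?api-version=2022-12-01")
    body = json.dumps({"prompt": prompt, "max_tokens": 100})
    return url, body

url, body = build_completion_request("my-resource", "gpt35", "Summarise this report:")
print(url)
print(json.loads(body)["prompt"])
```

An application would then POST that body to the URL with its API key in the request headers, which is what lets Azure customers embed the models in their own cloud software.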
-----
Chatbot cheats face fines and jail
By NATASHA BITA
6:32AM January 17, 2023
Chatbot creators risk jail and stiff fines for “academic cheating” if they commercialise artificial intelligence to write student essays and assignments.
The Tertiary Education Quality and Standards Agency is evaluating whether ChatGPT, a nascent technology launched just weeks ago by the Microsoft-backed OpenAI, is in breach of anti-cheating laws.
A spokeswoman said TEQSA would “identify risks and strategies” for dealing with artificial intelligence in academia this year.
“Where TEQSA identifies an AI service that may breach the prohibition on the provision of academic cheating services in the TEQSA Act, we will investigate and, where appropriate, take enforcement action,” she said.
-----
This lawsuit against Microsoft could change the future of AI
With ChatGPT, artificial intelligence and what it can do is suddenly all the rage. But an intellectual property lawsuit against Microsoft could undermine the future of AI before it gets off the ground.
Contributing Editor, Computerworld | 10 January 2023 22:00 AEDT
Artificial intelligence (AI) is suddenly the darling of the tech world, thanks to ChatGPT, an AI chatbot that can do things such as carry on conversations and write essays and articles with what some people believe is human-like skill. In its first five days, more than a million people signed up to try it. The New York Times hails its “brilliance and weirdness” and says it inspires both awe and fear.
For all the glitz and hype surrounding ChatGPT, what it’s doing now are essentially stunts — a way to get as much attention as possible. The future of AI isn’t in writing articles about BeyoncĂ© in the style of Charles Dickens, or any of the other oddball things people use ChatGPT for. Instead, AI will be primarily a business tool, reaping billions of dollars for companies that use it for tasks like improving internet searches, writing software code, discovering and fixing inefficiencies in a company’s business, and extracting useful, actionable information from massive amounts of data.
But there's a dirty little secret at the core of AI — intellectual property theft. To do its work, AI needs to constantly ingest data, lots of it. Think of it as the monster plant Audrey II in Little Shop of Horrors, constantly crying out “Feed me!” Detractors say AI is violating intellectual property laws by hoovering up information without getting the rights to it, and that things will only get worse from here.
-----
Generative AI is far more powerful in a business context
By Athina Mallis on Jan 17, 2023 7:30AM
Generative AI has put the internet into a frenzy: ChatGPT has shaken the boots of every copywriter with its intuitive writing software, an AI-generated piece of artwork won at the Colorado State Fair, and there is a fake Tom Cruise floating around the internet.
Generative AI isn't a new concept by any means but ChatGPT has catapulted the conversation of this technology into the minds of business leaders.
While those examples might be a good indication of how the technology works, or alternatively how it shouldn’t be used, generative AI is a tool that organisations could take advantage of.
According to Gartner, generative AI learns from existing content artefacts to generate new, realistic artefacts that reflect the characteristics of the training data, but do not repeat it.
-----
https://www.afr.com/technology/three-ways-big-tech-got-it-wrong-20230116-p5ccvb
Three ways big tech got it wrong
Racing for scale with buggy, money-losing products doesn’t work in most sectors.
Brooke Masters
Jan 16, 2023 – 3.15pm
It’s time to unlearn the lessons of big tech.
For 20 years, the Silicon Valley giants and their peers have set the standard for corporate success with a simple set of strategies: innovate rapidly and splash out to woo customers.
Speed rather than perfection, and reach rather than profits proved key to establishing dominant positions that allowed them to fend off, squash or buy potential rivals.
Entrepreneurs everywhere took note, and an assumption that scaling up and achieving profitability would be the easy part took hold far beyond the internet platforms where these ideas originated.
Investors, desperate for growth and yield amid historically low interest rates, were all too happy to prioritise the promise of growth over short-term earnings.
-----
https://medicalrepublic.com.au/chatgpt-can-write-back-pages-that-fool-readers/84025
16 January 2023
ChatGPT can write Back Pages that fool readers
By Penny Durham
Let’s test this clever little writing model.
There’s been a lot of chat lately about ChatGPT, OpenAI’s language model.
Many are frankly freaked out at the prospect of being supplanted by some code, but not your Back Page correspondent, who is already on record as welcoming, on behalf of all journalists, our new writing-algorithm overlords.
Now it’s researchers’ turn to get nervous.
A recent study published in the journal Nature has demonstrated the impressive capabilities of the language model ChatGPT in generating abstracts of research articles. The study, conducted by a team of researchers from OpenAI, used a dataset of real abstracts to test ChatGPT’s ability to generate abstracts in various fields such as physics, computer science, and medicine.
The results of the study were striking, as ChatGPT was able to generate abstracts that were almost indistinguishable from those written by human researchers. In fact, when the generated abstracts were presented to a group of researchers in the corresponding field, many of them thought that the abstracts were written by a human and were not able to identify them as machine-generated. This highlights the sophisticated level of language proficiency that ChatGPT has achieved, and its potential to revolutionize the way scientific content is generated.
-----
https://www.mja.com.au/journal/2023/218/1/smoking-cessation-discharge-summaries
Smoking cessation on discharge summaries
Freddy Sitas, Ben Harris‐Roxas, Sarah L White, Fiona A Haigh, Margo L Barr and Mark F Harris
Med J Aust
2023; 218 (1): 46-46. doi: 10.5694/mja2.51792
Published online: 16 January 2023
To the Editor: With the increasing interoperability of electronic medical records across health services, smoking and e‐cigarette use need to be systematically collected on hospital admission, and advice to quit smoking should be automatically included on hospital discharge summaries. Including information on smoking status in the discharge summary, and ultimately on My Health Record, presents an opportunity to address the use of tobacco and e‐cigarette products — the first being Australia's leading cause of preventable death and disease and the second an emerging exposure of increasing concern.[1]
Evidence from the United States Surgeon General reports that smoking cessation after cancer diagnosis lowers the risk of dying by 30–40%.[2] For some patients with cancer, cessation benefits are equal to or exceed the value of state‐of‐the‐art cancer therapies. In addition, the Surgeon General report shows most patients admitted to hospital wish to quit smoking,[2] and there are proven, workable but underused interventions to cease smoking.
Peak medical bodies such as the Australian National Health and Medical Research Council and the Australian Commission on Safety and Quality in Health Care[3] advise that adherence to post‐hospital referral practice guidelines leads to better outcomes, fewer readmissions, and improved patient survival. Australia's National Preventive Health Strategy has a goal of reducing the adult smoking prevalence from 14% to 5% over the next 8 years.[4] The newly released draft National Tobacco Strategy includes key policy actions to increase the use of cessation services and to support people who use tobacco and e‐cigarettes to quit.[5]
-----
https://www.ausdoc.com.au/opinion/how-chatbots-are-changing-healthcare/
Could chatbots draw up chronic disease management plans for patients?
16 January 2023
GPT, or generative pre-trained transformer, is a type of artificial intelligence that can generate human-like text — popularly known as a ‘chatbot’.
The program is free to use during the ‘research preview’ time.
GPT gained one million users less than a week after its release. (Keep in mind this technology is currently in the beta testing phase.)
It is important to note the limitations of this chatbot. The quality of the responses depends on the quality of the prompts the user enters in GPT.
The answers are not always correct. They are created to feel correct to humans. If the user does not know the area in question, then the answer might be incorrect, and the user may not be aware that it is incorrect.
-----
Monday, 16 January 2023 10:28
Security pro says unlikely ChatGPT can be used to build professional ransomware
A senior IT security practitioner has played down the chances of AI tool ChatGPT being used to develop professional ransomware right now, even though there have been reports that the tool has been used to build basic malware.
Satnam Narang, senior staff research engineer at security firm Tenable, told iTWire that ChatGPT had been reportedly used by low-skilled cyber criminals to develop basic malware.
"[It] may be used by scammers to develop phishing scripts to be used as part of both phishing emails and dating and romance scams," he said.
ChatGPT, a new model for conversational AI, which was launched recently by the AI research and development firm OpenAI, provides a dialogue interface whereby it can "answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests".
-----
https://www.afr.com/technology/i-think-therefore-i-am-robot-20230109-p5cbdg
If robots can ever think, they might start like cockroaches
Researchers are racing to create the first conscious machine.
Oliver Whang
Updated Jan 16, 2023 – 8.22am, first published at 5.03am
Hod Lipson, a mechanical engineer who directs the Creative Machines Lab at Columbia University, has shaped most of his career around what some people in his industry have called the c-word.
On a sunny morning last October, the Israeli-born roboticist sits behind a table in his lab and explains himself. “This topic was taboo,” he says, a grin exposing a slight gap between his front teeth. “We were almost forbidden from talking about it – ‘Don’t talk about the c-word; you won’t get tenure’ – so in the beginning I had to disguise it, like it was something else.”
That was in the early 2000s, when Lipson was an assistant professor at Cornell University. He was working to create machines that could note when something was wrong with their own hardware – a broken part or faulty wiring – then change their behaviour to compensate for that impairment without the guiding hand of a programmer, just as a dog that loses a leg in an accident can teach itself to walk again in a different way.
This sort of built-in adaptability, Lipson argued, would become more important as we became more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transport; the applications for machines seemed pretty much endless, and any error in their functioning, as they became more integrated with our lives, could spell disaster. “We’re literally going to surrender our life to a robot,” he says. “You want these machines to be resilient.”
-----
https://www.afr.com/technology/aussie-vcs-ready-for-the-next-tech-boom-generative-ai-20230112-p5cc7w
Aussie VCs ready for the next tech boom: Generative AI
Yolanda Redrup Reporter
Updated Jan 16, 2023 – 12.25pm, first published at 11.07am
Amid the global carnage in tech valuations, there is one area that is still exciting investors and Australian companies are poised to grab an outsized share.
Generative artificial intelligence, which won global attention through the stellar launch of OpenAI’s ChatGPT, is being heralded as the greatest technological shift since the invention of the smartphone.
And Australia, investors say, is match fit and ready to capitalise on what could be the defining technological innovation of the 2020s.
Square Peg Capital’s Paul Bassat said generative AI – a type of AI that involves creating original content, be it text, images, audio or data – would be a “very important paradigm” for the next decade.
“We’re seeing a lot of start-up activity and a lot of funding [for generative AI]. I think the role of Australia will be similar to what it was when software-as-a-service emerged 15 years ago – there will be lots of players around the world, but we will punch above our weight,” he told The Australian Financial Review.
-----
https://medicalrepublic.com.au/how-telehealth-can-advance-health-equity/83960
16 January 2023
How telehealth can advance health equity
Virtual consultations can narrow the gaps in care, but it needs to be done right.
Health equity is achieved when everyone can attain their full potential for health and wellbeing.
As healthcare increasingly moves into the digital realm, from virtual consultations to telemedicine, health equity is being recognised as a critical element to achieving this World Health Organisation vision.
According to the World Bank and WHO, around half the world’s population doesn’t receive the essential health services they need. But all WHO member countries have committed to achieving affordable health services for everyone by 2030. To realise the WHO’s goal of covering the health needs of one billion additional people by 2023, public health and care providers need to take the lead in monitoring inequities by examining patient outcomes and updating health services. This means redesigning health services with equity in mind – and one growing innovation is telehealth.
The rise of telehealth
Telehealth is becoming a great equaliser in reaching underserved populations and advancing social equity in healthcare by better enabling patient engagement across barriers and borders. However, the reality is that healthcare professionals are not IT pros, and the focus should be on ensuring ease of use and effortless, professional audio and video quality when making investment decisions in technology and service providers.
From a clinical perspective, the ability to provide professional and satisfying virtual care enables good outcomes, more patient options, and increased patient retention. Operationally, telehealth makes it easy for staff to connect and communicate clearly with patients, while enabling additional and more frequent touches without increasing staff or building new locations. The financial benefits go beyond the ability to see more patients, more frequently, enabling practices to cut costs and create new revenue streams.
-----
https://medicalrepublic.com.au/can-digital-health-curb-sectors-climate-impact/82724
16 January 2023
Can digital health curb sector’s climate impact?
With health contributing a significant portion of global emissions, the sector needs to be part of the solution.
Climate change has been dubbed the greatest health challenge in human history – and digital health is late to the party, says Professor Enrico Coiera, who is Director of the Centre for Health Informatics and the Australian Institute of Health Innovation at Macquarie University in Sydney.
“In Australia alone, seven to nine per cent of total greenhouse gas emissions are healthcare related,” he says. “We are not going to solve climate change without addressing this impact.”
Health plays a big role in the urgent priority to reduce greenhouse gas emissions to net zero; worldwide, the sector contributes 4.4 per cent of global emissions (large portions of this from wealthy countries like the US and Australia), a contribution estimated to triple by 2050.
Coiera and his colleague Professor Farah Magrabi (who co-edited the recent ‘human health and climate change’ issue of the Journal of American Medical Informatics Association) have outlined ten steps health informaticians can take to address climate change.
-----
https://www.afr.com/property/commercial/malls-in-sight-for-wing-s-delivery-drones-20230111-p5cbw0
Malls in sight for Wing’s delivery drones
Nick Lenaghan Property editor
Jan 15, 2023 – 4.44pm
Google’s drone delivery service, Wing, is launching its latest expansion into the retail sector – at Orion mall at Ipswich in Queensland – as it looks to strike more agreements with real estate players.
The tie-up with ASX-listed landlord Mirvac will start with a pilot program at Ipswich, which could extend to other Mirvac-run malls. It follows Wing partnerships with shopping mall owner Vicinity, grocery giant Coles and more recently, food delivery service DoorDash.
For Simon Rossi, general manager of Wing in Australia, the tie-up with Mirvac reflects a strategy to pursue growth through arrangements with major real estate holders.
“We spent a lot of time in 2022 building our partnerships, particularly with real estate players,” he told The Australian Financial Review.
“We’ve done a lot of work with those providers because ultimately, we do need some real estate to house our drones [for locations] to take off from. Shopping centres are a really great partner of ours because they are close to retailers and also close to a lot of consumers and that’s why they’re strategically placed there.
“What we’ll see in the coming years is us increasingly working with those large global and national real estate players, large marketplace players and even logistics partners, where we can make delivery with smaller parcels more efficient, more sustainable and do it more economically.”
-----
David.