Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Wednesday, October 09, 2024

This Is Quite A Useful Update On The Cyber World And Privacy Topics

This appeared last week:

Digital Bytes – cyber, privacy, AI & data update

Oct 2024 Articles Written by Helen Clarke (Partner), Sophie Dawson (Partner), Keith Robinson (Partner), Emily Lau (Senior Associate), Viva Swords (Senior Associate), Lydia Cowan-Dillon (Associate)

While all eyes have been on the recent introduction of the privacy reform Bill to Parliament, there have been a number of other updates that continue to inform the shifting patterns of opportunity, legal risks and regulatory focus in relation to cyber, privacy, AI and data over the last three months.

In addition to the more substantive updates below, also keep in mind:

  • Significant data breaches and cyber incidents continue to make the headlines. Many businesses were affected by the July 2024 CrowdStrike outage, raising questions about the legal implications for regulatory compliance (including privacy compliance), insurance, business continuity and supply chain disruption, as well as whether events like this will trigger a change in approaches to contracts and liability.
  • The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 was introduced to Parliament on 12 September 2024. The Bill provides the Australian Communications and Media Authority (ACMA) with new powers to address seriously harmful content (including misinformation and disinformation) on digital platforms, with strengthened protections for freedom of speech.
  • A recent report from Tenable indicates that a significant proportion of Australian companies interviewed can lower their cyber insurance premiums by 5-15 per cent by implementing proactive risk-management measures.
  • ACMA continues to take regular enforcement action – notably, two infringement notices were issued against Telstra for failures to comply with scam rules and disclosure of unlisted phone numbers.
  • Draft legislation to implement a ‘Scams Prevention Framework’ has been released for consultation. The Treasury Laws Amendment Bill 2024: Scams Prevention Framework would require designated sectors to prevent, detect, report, disrupt and respond to scams and to implement appropriate governance arrangements. The framework would initially apply to banks, telecommunication providers and digital platform service providers (including social media, paid search engine advertising and direct messaging services) – future designated sectors would likely include superannuation funds, digital currency exchange providers, other payment providers and online marketplaces. Consultation closes on 4 October 2024.
  • The European Union’s AI Act took effect on 1 August 2024, so we should soon start seeing how this risk-based regulatory model plays out in practice. More importantly, however, Australia is making progress in its own regulatory model for AI – see below.
  • The sale of personal information is a topic of increasing focus, with Oracle reaching a US$115 million settlement (without admitting liability) in litigation claiming that it sold “digital dossiers” with data about hundreds of millions of people. Another settlement has made the news – genetics testing company 23andMe has recently settled a suit in relation to its 2023 data breach for US$30 million, and a promise of three years of security monitoring.
  • A number of different government bodies, such as AUSTRAC and the Australian Cyber Security Centre, have recently issued updated guidance on recommended practices for outsourcing and procurement.

Some Australian privacy reforms progress with Bill introduced to Parliament

Our Technology and Privacy specialists take you on a tour of the reforms in this article.

For those in a hurry, the Privacy and Other Legislation Amendment Bill 2024 (Cth) contains:

  • new infringement notice powers for Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC);
  • a statutory tort for certain serious invasions of privacy, which includes a journalism exemption;
  • powers to prescribe foreign jurisdictions as having adequate privacy laws for the purpose of overseas disclosure of personal information;
  • clarification of entities’ information security obligations;
  • updates to the notifiable data breaches regime to provide additional flexibility in handling notifiable data breaches to reduce harm to individuals;
  • provisions mandating the development of a Children’s Privacy Code;
  • new offences to be included in the Criminal Code Act 1995 (Cth) for certain online communications that are “menacing or harassing” (referred to as “doxxing”); and
  • new transparency requirements in relation to entities’ use of personal information for automated decision-making.

Certain aspects of the reforms will be more important to some organisations than others, so it’s important to carefully identify those that may impact your business and operations.

However, at a minimum, businesses should be aware of the specific Australian Privacy Principle (APP) provisions that are proposed to be subject to the OAIC’s “infringement notices” power (see our earlier article for a list). Compliance with these provisions should be an area of focus, given the relative ease with which the OAIC will be able to take action in the event of non-compliance (if reforms are passed).

There is a raft of other recommended changes proposed through the reform process which were not included in this Bill. These reforms may be introduced at a later date.

Proposed mandatory guardrails for AI in high-risk settings

Following the Government’s previous announcement of its proposed risk-based approach to regulating AI (which we reported on in an earlier edition of Digital Bytes), on 5 September 2024 the Department of Industry, Science and Resources issued a proposals paper for introducing mandatory guardrails for AI in high-risk settings, containing:

  • two categories to define “high-risk” settings;
  • 10 proposed mandatory guardrails for the development and deployment of “high-risk” AI; and
  • three proposed approaches to regulation.

Submissions on the proposals paper are due by 4 October 2024.

Definition of “high-risk” settings

The paper proposes two categories that should be considered “high-risk”, where the proposed mandatory guardrails will apply.

The first category addresses known or foreseeable uses of AI, and proposes that what is “high-risk” will depend on the adverse impact (and the severity and extent of that impact) on:

  • individuals’ rights recognised by Australian law;
  • people’s physical or mental health or safety;
  • legal effects, defamation or similarly significant effects on individuals;
  • groups of individuals or collective rights of cultural groups; and
  • the broader Australian economy, society, environment and rule of law.

The second category deems any advanced and highly capable AI models, where all possible risks and applications cannot be predicted, to be “high-risk”.

Ten proposed mandatory guardrails

The proposed mandatory guardrails are:

  1. establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. establish and implement a risk management process to identify and mitigate risks;
  3. protect AI systems and implement data governance measures to manage data quality and provenance;
  4. test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. establish processes for people impacted by AI systems to challenge use or outcomes;
  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. keep and maintain records to allow third parties to assess compliance with the guardrails; and
  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Approaches to regulation

The proposals paper canvasses and seeks feedback on the following three options for implementing the proposed mandatory guardrails:

  1. domain specific approach – adapting existing regulatory frameworks to include the proposed mandatory guardrails;
  2. framework approach – introducing framework legislation, with associated amendments to existing legislation; and
  3. whole of economy approach – introducing a new cross-economy AI-specific Act that would define these “high-risk” applications of AI and outline the new mandatory guardrails.

Organisations developing or deploying AI in “high-risk” settings may wish to submit feedback on the proposals paper by 4 October 2024.

AI Voluntary Safety Standard released

On the same day as the release of the ‘mandatory guardrails’ paper described above, the Department of Industry, Science and Resources issued a Voluntary AI Safety Standard setting out 10 voluntary guardrails to help organisations deploying and developing AI to benefit from and manage the risks associated with AI.

The intended audience is both developers and deployers of AI.

The 10 voluntary guardrails are the same as the proposed mandatory guardrails, set out above, with the exception of the 10th guardrail, which is:

Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Organisations already developing or deploying AI should consider adopting the 10 steps in the Voluntary AI Safety Standard – even though this is not currently a legal imperative, adopting this approach will assist with risk management and may become industry practice.

Other key updates in the AI space

The Governance Institute of Australia (GIA) has released an issues paper on artificial intelligence (AI) and board minutes, addressing the growing use of transcription and generative AI tools to transcribe meetings and generate action items or summaries.

The issues paper flags confidentiality, cybersecurity, IP, inaccuracy and lack of transparency as key legal issues that may arise. It also highlights the importance of technological literacy of those using AI.

In light of directors’ statutory and common law fiduciary duties, the issues paper recommends directors ensure that any AI-generated minutes are a true reflection of board meetings.

To assist boards to navigate a disruptive technological trend, the Australian Institute of Company Directors (AICD) has released a suite of guidance materials for directors on AI governance, focusing on generative AI. ‘A Director’s Introduction to AI’ provides an overview of AI applications and relevance for directors, the risks and opportunities, and the applicable domestic and international regulatory environment.

Like the GIA AI issues paper, the AICD materials urge directors to be mindful of their duties when capitalising on the commercial benefits of generative AI in their organisations.

Generative AI is also raising competition law concerns. Competition and consumer law regulators in the United States, the European Union and United Kingdom have released a joint statement identifying trends in the AI market which they consider may impact a fair, open and competitive environment, and the following competition and consumer risks:

  • algorithms can allow competitors to share commercially sensitive information, engage in price fixing or collude on other terms that undermine competition;
  • providers of key inputs, such as specialised chips, substantial compute, data at scale and specialist technical expertise, may become bottlenecks in the AI supply chain and allow key players to have “outsized influence” in the market;
  • incumbent AI providers often have a significant market power and may seek to entrench their dominance, which may impact competition;
  • partnerships, investments and “other connections” between firms relating to the development of generative AI could in some cases attempt to influence competition and steer market outcomes;
  • deceptive or unfair use of consumer data to train AI models may give rise to regulatory non-compliances; and
  • businesses should be transparent with consumers when AI is incorporated in products and services.

AI’s impact on anti-competitive behaviour and detrimental outcomes to consumers will continue to be monitored by competition and consumer law regulators in these jurisdictions. In Australia, the Australian Competition and Consumer Commission (ACCC) recently released an announcement flagging competition issues in generative AI as a topic to be addressed in its 10th Digital Platform Services Inquiry report – so the issue is equally on the radar in Australia.

Key takeaways from regulators’ plans for 2024-25

In August 2024, key Australian regulators released their corporate plans for 2024-25, identifying their areas of focus for the year ahead. Rapid technological innovation was cited across the board as one of the driving factors impacting the regulators’ respective sectors and informing their strategic priorities.

The OAIC has outlined its focus on identifying the unseen harms that impact privacy rights in the digital environment. As part of this focus, it plans to implement a program of targeted, proactive investigations to uncover harms, provide avenues for remediation and set the standard for industry practice. It also flagged that it is looking to exercise its wider range of enforcement powers, which have been proposed through the privacy reforms.

The OAIC also has a new role in regulating the ‘Digital ID’ scheme, and has flagged that it is looking to increase the uptake of digital ID use in order to reduce avoidable over-sharing of identity information.

The OAIC states it is aiming to finalise 80 per cent of notifiable data breaches within 60 days, and 80 per cent of privacy complaints within 12 months.

ACMA has also released its annual compliance priorities for 2024-25, which include addressing misleading spam messages, and combatting misinformation and disinformation on digital platforms (note the new Bill referred to above).

Both the Australian Securities and Investments Commission (ASIC) and the Australian Prudential Regulation Authority (APRA) named cyber resilience as a key focus for 2024-25. 

In its corporate plan, ASIC stated that it intends to advance digital and data resilience and safety by:

  • implementing a supervisory cyber and operational resilience program;
  • monitoring the use of AI by Australian financial services licensees; and
  • monitoring the use of offshore outsourcing arrangements.

APRA plans to undertake a number of regulatory activities aimed at strengthening the cyber risk-management practices of regulated entities, including:

  • embedding Prudential Standard CPS 234 Information Security (CPS 234) and ensuring entities act on findings from CPS 234 independent reviews to lift minimum standards of cyber risk management;
  • releasing industry letters on high-risk cyber topics and expecting regulated entities to strengthen practices accordingly;
  • conducting a cyber operational resilience exercise to test industry preparedness in responding to cyber incidents; and
  • engaging with government initiatives on cyber regulation, generative AI, preparedness and incident response.

What you need to know from the latest OAIC reports and actions

The latest notifiable data breaches report released

The OAIC’s notifiable data breaches report for January to June 2024 was published on 16 September 2024. In the report’s foreword, the Australian Privacy Commissioner reminds entities that the scheme is now six years old, and “it is no longer acceptable for privacy to be an afterthought; entities need to be taking a privacy-centric approach in everything they do”.

The number of notifications received in this six-month period was the highest it has been since late 2020, with 527 notifications. Malicious or criminal attacks still make up the majority (67 per cent) of notified data breaches, with human error accounting for 30 per cent and system fault a mere 3 per cent. Incidents involving phishing (compromised credentials), ransomware and other compromised or stolen credentials make up the majority of reported cyber incidents.

Messages of note in the report include:

  • when assessing the relevance of a threat actor’s motivation, entities should not rely on assumptions and should weigh in favour of notification – in particular, the OAIC warns against taking at face value a threat actor’s assurance that data will not be mishandled if a ransom is paid, and points to the prevailing government recommendation that ransoms should not be paid;
  • there are a number of specific cyber threat mitigations identified that indicate the OAIC’s expectations for complying with information security obligations in APP 11.1 – e.g. multi-factor authentication where possible, password management policies, layering security controls, need-to-know access and security monitoring processes and procedures; and
  • there is a continued focus on supply chain management, and the steps entities should take when engaging a service provider that handles personal information on its behalf.

The current employee record exemption in the Privacy Act is interpreted narrowly

A recent privacy determination tests the limits of the employee record exemption in the Privacy Act 1988 (Cth) (Privacy Act).

In ALI and ALJ (Privacy) [2024] AICmr 131, an employee made a complaint after 110 staff were emailed an update about the employee’s (good) recovery following a medical episode in the workplace’s carpark, which was witnessed by a number of other employees.

The employer argued that disclosing the employee’s personal and sensitive information in the update fell within the employee record exemption because the update was directly related to the employment relationship. However, the employer’s argument focused on its employment relationship with other employees who were concerned with the complainant employee’s recovery after the incident. As such, the OAIC was not persuaded that the update was directly related to the employer’s employment relationship with the complainant employee.

The OAIC then found that use of the employee’s personal information in the update breached APP 6, because the employer could have discharged its duty to its other employees without identifying the employee by name in the update.

The OAIC awarded the employee $3,000 for non-economic loss and $125 for expenses. The OAIC declined to award other remedies sought by the employee, such as a charitable donation or an employment reference.

A recent privacy assessment of the ‘my health app’ shows the OAIC’s attention to detail when reviewing privacy policies

The OAIC has recently conducted a privacy assessment of Australian Digital Health Agency’s (ADHA’s) 'my health app', including a review of its privacy policy.

Notably, the OAIC’s assessment included consideration of how the app’s privacy policy addressed overseas disclosure. It recommended that catch-all statements intended to “allow for situational responsiveness and to avoid breaching the policy” should be replaced with a more detailed and specific description of any overseas disclosure based on current practice (if there were any such disclosures).

Further, the OAIC noted that the privacy policy was lengthy, repetitive, and included operational and instructional information not relevant to the management of personal information. It recommended that the privacy policy should only include descriptions of how the entity manages personal information. The OAIC also repeated its general guidance that privacy policies should be easy to understand (for example, by avoiding jargon and legalistic terms).

Organisations should consider reviewing their privacy policies against these recommendations.

OAIC takes no further action in relation to Clearview AI and TikTok

In 2021, the OAIC found Clearview AI had breached Australians’ privacy through the collection of images without consent, and ordered the company to cease collecting the images and delete images on record within 90 days. Clearview initially appealed the decision to the Administrative Appeals Tribunal but ceased its appeal in August 2023. The OAIC recently announced that further action against Clearview AI was not warranted.

Further, despite raising concerns about TikTok’s use of pixel technology, the Australian Privacy Commissioner has declined to investigate, citing deficiencies with existing privacy laws. Given the recent privacy reform Bill does not include amendments to the definition of personal information, it is possible that further reforms are required to investigate the practice.

ACMA releases updated guidance on consent to marketing under the Spam Act

In response to a surge of regulatory activity under the Spam Act 2003 (Cth) (Spam Act) and the Do Not Call Register Act 2006 (Cth), in July 2024, ACMA released its Statement of Expectations (Statement). This ‘outcome-focused guide’ establishes ACMA’s expectations of how businesses should obtain consumer consent when conducting telemarketing calls and e-marketing (via email, SMS and instant messages).

The key takeaways from the Statement are:

  • Express consent is preferred: while the Acts permit inferred consent in certain circumstances, ACMA recommends obtaining express consent, as it is clear and unambiguous.
  • Terms and conditions for express consent: ACMA recommends express consent based on clear terms and conditions, that are readily accessible (i.e. not hidden in fine print, lengthy privacy policies or behind multiple ‘click-throughs’). Terms and conditions should address what the consent is for (including for the types of products and marketing channels), who will use the consent (including affiliates and partners), how long the consent will be relied on, and how consent can be withdrawn.
  • Double opt-in: ACMA recommends taking a “double opt-in” approach, such as email confirmation of consent (e.g. an email providing a click-through link).
  • Do not rely on third parties: if working with third parties, businesses cannot assume that they will keep or obtain records of consent and marketing. Businesses must have their own robust and comprehensive processes in place to ensure that consent is reliably kept and maintained.
  • Obtain records of consent: businesses are required to retain records of consent and marketing. ACMA recommends that records include (but are not limited to), “the method used to obtain consent, the terms that applied and the date and time it was obtained”.
  • Requirements for valid consent: ACMA acknowledges the OAIC’s requirements for valid consent (informed, voluntary, current, specific, given by a person with capacity) under the Privacy Act, and says that those requirements “provide a framework to apply to consent gathering practices to ensure that they are consumer friendly”. However, ACMA falls short of expressly adopting those requirements.

The Statement also reinforces the existing legal requirements in relation to unsubscribe and opt-out options, including the fact that individuals should not be required to log in to a service to unsubscribe.

The release of this Statement indicates that practices regarding consent are on ACMA’s radar, and organisations should consider reviewing their practices against ACMA’s expectations in the Statement.

The consumer data right regime is expanded to include action initiation

The Consumer Data Right (CDR) regime is Australia’s data portability scheme. Introduced in 2019, the scheme has been rolled out sector by sector – so far, to banking and energy sectors – to allow consumers to direct their service providers (e.g. their bank) to provide their data directly to recipients accredited under the scheme (e.g. a budgeting app).

In a significant update to the scheme, legislation (originally introduced to Parliament in 2022) has recently passed which permits “action initiation”. Action initiation allows an accredited data recipient to take actions on the consumer’s behalf. For example, an accredited recipient (with the consumer’s consent) may be able to make payments, open and close accounts, switch providers and update details on the consumer’s behalf.

Action initiation will only be available for types of actions designated by the Minister, in relation to service providers designated as “action services providers” by the Minister.
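As a rough sketch of how an accredited recipient's systems might gate such actions — checking ministerial designation of both the action type and the provider, plus the consumer's consent — consider the following. All names and the rule logic are hypothetical illustrations, not drawn from the CDR rules:

```python
# Hypothetical sketch of "action initiation" gating under the CDR:
# an action proceeds only if (1) the action type is designated by the
# Minister, (2) the provider is a designated action service provider,
# and (3) the consumer has consented to that action type.
DESIGNATED_ACTIONS = {"make_payment", "close_account", "update_details"}

class ActionDeniedError(Exception):
    """Raised when an action initiation request fails a precondition."""

def initiate_action(action: str, consumer_consents: set[str],
                    provider_is_designated: bool) -> dict:
    if action not in DESIGNATED_ACTIONS:
        raise ActionDeniedError(f"{action!r} is not a designated action type")
    if not provider_is_designated:
        raise ActionDeniedError("provider is not a designated action service provider")
    if action not in consumer_consents:
        raise ActionDeniedError(f"consumer has not consented to {action!r}")
    return {"status": "instructed", "action": action}
```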

Treasury also released, for public consultation, exposure draft amendments to the Consumer Data Right Rules which include (among other changes) proposals to simplify:

  • rules relating to consents, including permitting bundled consents; and
  • arrangements for businesses to nominate representatives.

Submissions have now closed.

Online safety updates

Straight from recent headlines, the Government announced that it is consulting on a proposal to impose social media age restrictions – we examined the proposals in this recent article.

Further, Australia’s eSafety Commissioner has recently issued new industry standards, commenced development of the next phase of industry codes, and issued a number of notices to digital platforms to report and provide information about measures being taken:

  • Relevant electronic service providers (e.g. online gaming and messaging services) and designated internet service providers (e.g. apps, websites, storage services, and some services that deploy or distribute generative AI models) will be required to comply with new industry standards from 22 December 2024. The new standards require providers to adopt compliance measures for specific categories of harmful online content, such as implementing systems to detect and remove that content. The standards also seek to address new harms and risks associated with the development of generative AI.
  • In early July 2024, the eSafety Commissioner issued notices to key industry bodies and associations to develop “Phase 2” industry codes for ‘age-inappropriate’ online material. The new codes must address prevention and protection of Australian children from access or exposure to these materials and provide Australian end-users with the ability to limit access and exposure. Draft codes are to be developed by industry by the end of the year.
  • Later in July, the eSafety Commissioner issued notices to companies including Apple, Google, Meta and Microsoft, requiring them to periodically report (for the next two years) on measures implemented to address online child abuse material. The first reports are due in February 2025.
  • The eSafety Commissioner has also recently exercised its expanded transparency powers (under the updated Basic Online Safety Expectations Determination) and requested information from digital platforms about how many Australian children are on their platforms and what age assurance measures they have in place to enforce their platforms’ age limits.

APRA releases CPG 230 to help entities prepare for 1 July 2025

APRA’s Prudential Standard CPS 230 Operational Risk Management (CPS 230) sits within the risk-management pillar of APRA’s framework. Operational risk management is essential to ensure the resilience of an entity and its ability to maintain critical operations through disruptions.

Our earlier edition of Digital Bytes canvassed CPS 230’s requirements, which take effect on 1 July 2025. CPS 230 sets baseline expectations for all APRA-regulated entities. Each regulated entity has operational risks, however APRA expects Significant Financial Institutions (SFIs) to have stronger practices commensurate with the size and complexity of their operations.

In July 2024, APRA released the final version of Prudential Practice Guide CPG 230 along with an accompanying statement setting out responses to submissions made through earlier consultation.

Of particular note, APRA has:

  • agreed to allow non-SFIs an additional 12 months before they must comply with business continuity and scenario analysis requirements under CPS 230;
  • set out details of its supervision programme for 2025-2028, with details of the prudential reviews it intends to undertake; and
  • published a ‘day one compliance checklist’ to assist entities to prepare for 1 July 2025.

APRA-regulated entities should also be aware that APRA has published a letter to its regulated entities providing additional insights on common cyber resilience weaknesses.

As reported in our recent Above Board publication, the eight observations in APRA’s letter relate to security in configuration management, privileged access management and security testing. These include “inadequate management and oversight of security test findings”; APRA’s guidance is that test results should be reported to the appropriate governing body or individual, with associated follow-up actions formally tracked. Testing, like threat detection, only works if it is followed through.

What do we know is coming next?

Some of the updates we can expect in the coming months include:

  • the Government is expected to introduce legislation to Parliament soon, mandating that organisations with a turnover of $3 million or more, and government entities, must report ransomware payments. Indications are that while the Government’s use of reported information will be subject to limited-use or ‘safe harbour’ protections, the full immunity from legal action sought by businesses will not be included;
  • the Senate Select Committee on Adopting Artificial Intelligence has had its reporting deadline extended from 19 September 2024 to 26 November 2024;
  • the OAIC has indicated that it will soon release updated privacy guidance for the not-for-profit sector;
  • as part of the ‘Commonwealth Cyber Uplift Plan’, a new cybersecurity industry advisory board will be established to advise Government; and
  • the final report from the latest three-yearly review of Australia’s credit reporting framework (including Part IIIA of the Privacy Act) was due to be delivered to relevant ministers by 1 October 2024.

Finally, if you’re currently focused on what you can do to minimise the aged and redundant personal information you hold, a recent case in the US on Google’s destruction of employee chat records is a timely reminder to ensure that you also take the right steps to preserve evidence.

How we can assist

We have a large team of privacy and cyber specialists, with substantial experience across the whole spectrum of data, privacy and cyber compliance and incident management.

For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.

Big thanks to Alexandra Gauci, Bailey Britt, Dean Baker, James Finnimore, Leonie Higgins, Caitlin Abernethy and Saara Stenberg for their contributions to this edition of Digital Bytes.

Here is the link:

https://jws.com.au/insights/articles/2024/digital-bytes-cyber-privacy-ai-data-update-oct2024

And thanks to the authors for a thorough review!

David.

Tuesday, October 08, 2024

I Suspect We Should Be More Worried Than We Apparently Are About These Intrusions

This appeared last week:

Aussies’ private data being shared without consent through online advertisers, exposing them to scammers

Australians’ private information is being shared hundreds of times a day with online advertisers who resell and distribute it to unknown parties, putting people at risk of targeted scams.

Emma Kirk

October 4, 2024 - 9:28PM

NewsWire

Australians’ personal information is being shared 450 times a day through online ad-tracking systems without their consent or knowledge.

A new report from Reset Tech found Australians’ live location data is shared hundreds of times each day through real-time bidding systems, software used by ad-exchange companies that collect masses of consumer data to share with advertisers.

The digital advocacy group used data from the Irish Council for Civil Liberties’ investigation into Australia’s hidden security crisis and made shocking discoveries about how Australians’ personal information is being used without their consent.

Reset Tech uncovered that Australians’ “extraordinarily sensitive information” was being exposed to hundreds of unknown third-party actors every second of every day.

The group explained that RTB systems work each time a person opens a website or app carrying an ad, instantly launching an auction to help advertisers decide which ad space to bid on.

The report found that, every day, thousands of companies received data on every available ad slot on Australians’ devices, which they could copy to build their own databases about Australians and resell over and over again.

Data included information about people’s movements, sexual interests, financial concerns, service providers, personal problems, gambling, drinking habits and online purchases.

Data could be categorised to identify who overate to cope with stress, who acted on impulse, who got a thrill from shopping, and who was self-indulgent.

The report found one company had 17,500 unique data categories about Australians for sale.

Currently, Australia has no limits on how this sensitive information is used, which means residents’ private information can be sold to scammers and foreign-state actors.

The report found scammers could access information generated through ad-tracking systems to target scam ads to victims.

The report highlighted scammers could buy this information from businesses to personalise scam ads that appeared to be from people’s service providers such as banks or telcos, or use it to scam people in other ways.

Reset Tech executive director Alice Hawkins said no one knew who was buying the data or where it was going.

Ms Hawkins said there was no transparency over the transactions that were taking place in a largely business-to-business data trade.

“We don’t know who buys it and we don’t know what happens to the data after it is initially released and put on offer,” she said.

“There’s no way of knowing or controlling these data flows once they’ve been exposed through the RTB process.”

Ms Hawkins said it was widely recognised that browsing was not a private experience, but Australians would be shocked at the level of inferences and information that advertisers and data purchasers could collect.

She said the real point was how the information could be linked back to an identifiable person.

“The ad tech industry talk about the data being anonymous or anonymised, and the narrative of anonymous ads, which I just find so extraordinary,” she said.

“It’s the ad tech industry’s version of greenwashing.

“The whole point of the detail in these datasets is so you can target a person with ads relevant to them, such as through a cookie ID or a browser ID.

“So the notion that all of this effort goes into targeting ads and then it couldn’t possibly be linked to a person is so nonsensical, because that’s the entire point.”

Ms Hawkins said the Australian parliament needed to set clearer expectations on what types of data were protected.

She said there needed to be a useful framework for businesses that handled, processed, collected and traded Australians’ data.

In September, Attorney-General Mark Dreyfus introduced the Privacy and Other Legislation Amendment Bill 2024 into parliament to better protect Australians’ privacy.

A spokesperson for the Attorney-General said the government was committed to ensuring the Privacy Act worked for all Australians and was fit for purpose in the digital age.

“The Albanese Government’s landmark legislation now before the parliament will strengthen privacy protections for all Australians, including a statutory tort for serious invasions of privacy, targeted criminal offences to respond to doxxing and enable the development of a Children’s Online Privacy Code,” they said.

“This legislation is just the first stage of the Government’s commitment to providing individuals with greater control over their personal information.”

Consumer Policy Research Centre deputy chief executive officer Chandni Gupta said Australians deserved privacy protections that were centred around people, not profit.

“It is time for the Federal Government to modernise what it means to be identifiable to cover data points obtained from any source and by any means,” she said.

“It must put the onus on businesses by imposing clear obligations on collecting, sharing and using consumer data that leads to fair and safe outcomes for Australians.”

Here is the link:

https://www.news.com.au/technology/online/aussies-private-data-being-shared-without-consent-through-online-advertisers-exposing-them-to-scammers/news-story/09c009c57a33c6d7780804294240b648

It really is pathetic just how little the Government is doing to protect us all from exploitation and potential abuse. I guess we will only see action when the abuse becomes annoying enough to the man in the street that there is real demand for reform.

I fear that may well take a good while!

David.

Sunday, October 06, 2024

Those Mobile Phones In People’s Pockets Are Typically A Portable Cesspit!

This appeared last week:

‘Mobile petri dishes’: What bugs are lurking on GPs’ phones?

Did you take your phone to Sydney’s WONCA World Conference? Swabs taken at the event found doctors’ mobiles are a ‘breeding ground for contamination’.


Chelsea Heaney


04 Oct 2024

We take our phones everywhere – to work, in the car, running errands, and even to the toilet.
 
But a new study carried out at last year’s WONCA World Conference has revealed the concerning biohazard risks found on our mobile phones.
 
Newly published research, based on samples taken from 20 attendees’ mobile phones at the 2023 conference in Sydney, revealed the plethora of pathogens that GPs could be exposing their patients to.
 
Researchers swabbed the 20 phones and found 2204 microbes on the devices, including 882 bacteria, 1229 viruses, 88 fungi, and five single-celled protists.
 
Bond University Associate Professor of Genomics and Molecular Biology Lotti Tajouri has unveiled the scope of the disturbing bacterial breeding grounds we all carry around every day.
 
Associate Professor Tajouri told newsGP that not only were mobile phones a breeding ground for contamination, but they also have the potential to spread super bugs.
 
‘Antibiotic resistance genes were found at a very high amount,’ he said.
 
‘We found bugs that are not only big killers, but they have a strong amount of virulent factors for antibiotic resistance.’

The members whose phones were swabbed travelled far and wide to get to Sydney, including six from Europe, three from Central or South America, three from Northern Africa, and one from Southeast Asia.
 
With this in mind, Associate Professor Tajouri argues that GPs need to be more mindful around the sanitisation of their phones.
 
‘It doesn’t make sense that we have mobile phones in medical settings,’ he said.
 
‘It doesn’t make sense that conferences of medical staff come with their mobile phones from overseas and are not sanitising them or knowing that mobile phones are literally mobile petri dishes.’
 
Of those attendees surveyed, 98% used their phones for work and 99% at home, meaning contaminated phones are being transported between locations.
 
Additionally, 99% of respondents said they used their phones while travelling – bringing those bugs from one end of the globe to the other in a matter of hours.
 
While 94% of WONCA participants said they washed their hands at work and 97% washed their hands after using the toilet, more than half used phones in the toilet, and 71% used their phones while eating.
 
‘It is evident there is a need to sanitise hands, but also sanitise mobile phones to prevent phones from negating hand hygiene practices,’ Associate Professor Tajouri said.
 
‘We take our phones absolutely everywhere, we travel with our phone, and we spread microbes all over the world.
 
‘All your microbes, regardless of the pathogenic or not, can be opportunistic, infectious diseases for immunocompromised individuals.’
 
Research team member and Chair of RACGP Expert Committee – Quality Care Professor Mark Morgan told newsGP mobile phones could be a ‘weak link’ in the spread of disease around the globe.
 
‘I think there’s certainly a reason to investigate it further and consider having phone sterilising units at borders and in healthcare facilities,’ he said.
 
‘If you do need to access the phone during work, then it would be worth doing some hygiene immediately afterwards.’
 
Professor Morgan said he was surprised at how good mobile phones were at harbouring deadly microbes.
 
‘You can imagine every door handle and every keyboard that’s handled frequently to be covered in bacteria and we know they are,’ he said.
 
‘But there are all the things that contribute to them being such an excellent place for microbes to live.
 
‘I was surprised by the extent of resistance factors that were found.’
 
Associate Professor Tajouri said for GPs especially, who see multiple patients each day, phone sanitation is crucial.
 
‘Immunocompromised individuals in GP practices might be exposed to some bugs because the GP has not been sanitising,’ he said.
 
But according to Associate Professor Tajouri there is a solution in the form of making phone sanitation as simple as hand washing.
 
At WONCA, he demonstrated this with a UV-C phone sanitiser called CleanPhone, used for global health and infection prevention and control.
 
This sanitiser was designed to be practical and automatic, meaning medical staff can sanitise their phones while washing their hands.
 
Associate Professor Tajouri said he would also like to see these sanitation devices rolled out at airports across the world, as well as at conferences such as WONCA, on cruise ships and in food halls.
 
‘Yes, you have mobile phones that are contaminated but yes, there is a solution to it,’ he said.

Here is the link:

https://www1.racgp.org.au/newsgp/clinical/mobile-petri-dishes-what-bugs-are-lurking-on-gps-p

While, in normal use, this contamination does not seem to matter, there are clearly all sorts of circumstances where it potentially may. With that in mind, all we can do is be sensible and think about just where we are deploying, or merely carrying, our phones.

A little sensible thought and care will go a long way towards protecting those who may be at risk of cross-infection and the like!

David.

AusHealthIT Poll Number 767 – Results – 06 October 2024.

Here are the results of the poll.

Could Any Health App Be Worth More Than A Few Hundred Dollars A Month To Use, If That?

Yes: 2 (10%)

No: 17 (85%)

I Have No Idea: 1 (5%)

Total No. Of Votes: 20

A very clear vote, with the feeling being that health apps need to be relatively inexpensive.

Any insights on the poll are welcome, as a comment, as usual!

Not a wonderful voting turnout. 

1 of 20 who answered the poll admitted to not being sure about the answer to the question!

Again, many, many special thanks to all those who voted! 

David.

Friday, October 04, 2024

It Looks Like The Unwary And Asleep Are About To Be Caught Out!

This appeared last week:

Privacy time bomb: Australian businesses have six months to avoid legal firestorm

Chris Brinkworth

18 September 2024

The Australian Business Network

Many business entities and their marketing partners are unknowingly using tools and data in ways that could expose them to legal action within six months.

The long-anticipated Australian privacy reforms have arrived, offering businesses a unique opportunity to lead in data protection and consumer trust, while compelling them to act swiftly to avoid serious privacy penalties from very basic practices of which they may be unaware.

The “carrot and stick” at the centre of these reforms is the introduction of a “privacy tort”: the carrot is an opportunity for companies to strengthen consumer trust; the stick is the very real threat of class actions and litigation.

It’s crucial to understand that these changes represent just the beginning of a broader reform agenda. This is the first tranche of agreed recommendations from the Privacy Act Review, with consultation on a second tranche of reforms likely to come in 2025.

Forward-thinking businesses have a unique opportunity to get ahead of the curve by embracing these initial changes.

With a 25-year career in targeting and tracking hundreds of millions of people using billions of pieces of behavioural data, identifiers, pixels, cookies and more, I must emphasise the profound impact these partial reforms will have on basic current business practices.

The early warnings are already evident in published comments from the OAIC, ACCC and legal academics.

Attorney-General Mark Dreyfus articulates the context succinctly: “The digital economy has unleashed enormous benefits for Australians. But it has also increased the privacy risks we face through the collection and storage of enormous amounts of our personal data.”

This statement underscores the delicate balance between the industry’s desire for better targeting, measurement and identity resolution and the need to protect personal privacy. The introduction of a new privacy tort, set to take effect in six months, represents a strategic approach to reforming data practices in the digital economy, potentially reshaping how businesses approach these marketing objectives.

The reach of this new tort is extensive and should not be underestimated. While many may be quick to point out the failure to implement the vast majority of the proposed privacy reforms, we must give credit where it’s due.

The Attorney-General’s department has clearly thought strategically about how to significantly reform unethical and risky privacy practices in the digital economy through the use of the tort, a tool that has been in discussion for many years. This targeted approach, rather than rushing through wholesale reform just before an election, shows a measured response to a complex issue.

Do not be fooled into thinking partial reform means “no teeth that can bite”. The growing dependence on data and changes in global regulatory frameworks have led to a significant increase in privacy-related legal actions, providing plenty of examples of where to focus attention.

Privacy Commissioner Carly Kind has previously highlighted the extent of data collection: “Social media platforms and other websites receive personal information about internet users as they browse the web. This data can range from basic site visits to more detailed personal information like email addresses and mobile numbers.” She has noted “most people wouldn’t reasonably expect household brands, medical providers or news sites to disclose details about site visits, duration and content consumption to social media platforms,” describing such practices as “harmful, invasive and corrosive of online privacy.”

The new tort could potentially apply to various business practices, including excessive tracking and profiling, unauthorised mixing of personal data across business units, misleading privacy disclosures, risky data sharing practices, use of deceptive identifiers, lack of genuine user choice in data collection, and attempts to circumvent user privacy preferences.

Business leaders need to ask themselves immediately: what are our teams and technology partners doing with customer data, and are we aligned with reasonable consumer expectations? Are we inadvertently crossing lines that could expose us to legal action under the new privacy tort?

Kind has previously warned that “pixels are one of many tracking tools, including cookies, that permit granular user surveillance across the internet and social media platforms,” underscoring the sophisticated nature of tracking technologies and the need for robust regulation.

Dreyfus emphasises public sentiment driving these changes: “We know Australians are concerned about the protection of their personal information, and of the risks associated with the misuse or mismanagement of their information.” He adds, “Australians … expect that when they do (share their personal information), their information will be protected and that they will maintain control over it.”

For businesses, these reforms necessitate a thorough review of data practices. Companies must discuss with their legal and privacy teams the need to conduct Privacy Impact Assessments on every technology touching their customer data life cycle, overhaul processes and ensure comprehensive staff training.

As Kind has asserted, website providers “have an obligation to ensure that sharing web browsing data with social media platforms is in line with what internet users might reasonably expect”. This sets a new standard for transparency and user consent in data collection and sharing practices.

While these reforms may not represent a complete overhaul of Australia’s data protection landscape, the tort signals a significant shift towards greater accountability and transparency in data practices that will impact many businesses’ practices.

However, the complexity of modern data ecosystems and interwoven stacks, products and data partnerships means that internal reviews will not be sufficient. Without getting ahead of this through specialised audits and reviews that scrutinise data flows, tools and consents, businesses risk having these issues uncovered not by themselves or their BAU agency partners – but instead by litigators in the courtroom. The choice is clear: invest in expert-led Privacy Impact Assessments of your activity now, or potentially face costly legal battles and reputational damage.

Chris Brinkworth is managing partner at Civic Data.

Here is the link:

https://www.theaustralian.com.au/business/technology/privacy-time-bomb-australian-businesses-have-six-months-to-avoid-legal-firestorm/news-story/e6f67a9a06c15b2fc21156521fc30da2

I can hear the yelps now from those who thought change would never happen!

I predict a lot of fun and surprise about six months from now!

David.

 

Thursday, October 03, 2024

It Seems That IVF Is Not Quite As Innocuous As Was Initially Thought.

This important article appeared last week.

IVF babies have ‘significantly increased’ risk of serious heart defects

By Wendy Tuohy

September 27, 2024 — 2.15pm

Children conceived through IVF and other reproductive technologies have a significantly higher risk of serious heart abnormalities than naturally conceived children, a large international study has found.

A study of 7.7 million children in four northern European countries found babies born through assisted reproduction including IVF, intracytoplasmic sperm injection and embryo freezing have a 36 per cent higher risk of serious heart abnormalities.

The study, published on Friday in the European Heart Journal, found the risk of heart defects – the most common form of birth abnormality – was particularly associated with babies born in multiple births, a practice not encouraged in Australia. Some defects cause life-threatening complications.

However, overall the risk of such defects was still low for children born through assisted reproduction: 1.84 per cent, compared with 1.15 per cent for those conceived naturally. The risk for multiples born through IVF was higher, at 2.47 per cent, compared with 1.62 per cent for those conceived naturally.

Study leader Professor Ulla-Britt Wennerholm, of the University of Gothenburg in Sweden, said previous research had identified increased risks for babies born through assisted reproduction, including pre-term birth and low birth weight.

She said the fact the risk of heart defects was similar regardless of the type of assisted reproduction used may indicate a common factor underlying the parents’ infertility and congenital heart problems in their babies.

“Congenital heart defects can be extremely serious, requiring specialist surgery when babies are very young, so knowing which babies are at the greatest risk can help us diagnose heart defects as early as possible and ensure the right care and treatment are given,” Wennerholm said.

The study took into account factors that can increase the risk of congenital heart defects, such as the child’s year and country of birth, the mother’s age at delivery, whether the mother smoked during pregnancy, and whether she had diabetes or heart defects of her own.

Because technology is being used more widely around the world to aid conception, there may be a global increase in congenital heart abnormalities.

The University of New South Wales National Perinatal Epidemiology and Statistics Unit found in its 2023 assisted reproductive data report that a record one in 18 babies had been born in Australia through the technologies in 2021. The technologies helped with the conception of 18,594 babies.

A 2014 study by the Murdoch Children’s Research Institute found young adults conceived by assisted reproduction are just as healthy, smart and mentally stable as people conceived naturally, although they may carry a slightly higher risk of some illnesses such as asthma.

The researchers concluded that rates of chronic illness, growth measures such as puberty milestones, educational achievement and quality of life were generally similar between the naturally and medically conceived by the time they were young adults.

Fertility Society of Australia and New Zealand board member Dr Anne Clarke said the study was observational and could not be definitive in terms of cause and effect.

“The headline suggesting that IVF babies have a 36 per cent higher chance of major heart defects is misleading because the study highlights that the absolute risk is 1.84 per cent compared with 1.15 per cent in babies conceived without assisted reproductive technology (ART),” Clarke said.

Some of the births included in the study dated back to the 1980s, meaning part of the dataset is decades old.

“The study also concludes that the increased risk is particularly associated with multiple births in assisted reproduction, but this is not encouraged in ART practice in Australia,” she said.

Dr Nathalie Auger, of the University of Montreal Hospital Research Centre, said assisted reproduction accounted for 2 to 8 per cent of births worldwide, and “while most neonates born after assisted reproductive technology are healthy, these procedures are not without risks”.

“Patients who use assisted reproductive technology tend to differ from the general population. These patients may have underlying morbidities that affect both fertility and the risk of heart defects,” she said.

Professor Bernard Tuch, consultant endocrinologist and director of the New South Wales Stem Cell Network, said the new data confirmed previous findings of heart abnormalities.

“It has been well documented that children born after assisted reproductive technology have a slightly but significantly increased risk of congenital abnormalities,” he said. “The commonest such abnormalities are cardiac in nature.”

Here is the link:

https://www.theage.com.au/national/ivf-babies-have-significantly-increased-risk-of-serious-heart-defects-20240927-p5ke0x.html

It is unclear just how large the risk is, but it is clear we must keep careful monitoring in place so any problems can be minimised!

David.