Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Thursday, October 10, 2024

I Am Not Sure This Is Either Sensible Or Feasible. What Do You Think?

 This appeared last week:

The plan to save Australian lives with nationwide DNA screening

An ambitious vision to introduce mass genetic testing to the entire Australian adult population is being hailed as a solution to the country’s healthcare crisis. But can the health system cope?

Michael Smith, Health editor

Oct 4, 2024 – 12.43pm

Seven years ago, a scientist and a lawyer at Melbourne's Monash University began collaborating on an ambitious idea.

They knew genetic testing could save lives and ease the burden on Australia’s stretched healthcare system by exposing the hereditary risk of certain cancers and heart disease early.

The science had been around for decades, but it was only more recently that the cost of testing huge numbers of people had come down, the internet was making the logistics easier and the technology existed to stabilise samples being sent by mail.

Jane Tiller, a lawyer and genetic counsellor who led a push to clamp down on life insurers using genetic information to discriminate against policyholders, was surprised DNA screening was not more widespread. Though many Australians were familiar with bowel cancer kits, pap smears and breast screenings, genomic screening was unreachable for most people. Many still do not know it is an option.

Geneticist Jane Tiller wants to scale up the genetic screening project to test 100,000 adults, a possible precursor to a nationwide program. Louis Trerise

Tiller teamed up with Paul Lacaze, a genomics researcher at Monash, to start working on a bold project to test 10,000 adult Australians for gene variants that increase the risk of certain hereditary cancers and heart disease. Logistically, it was challenging, and they needed government funding to make it happen. Their big breakthrough came in 2020 when the project received a $3 million government grant.

“Until then we weren’t sure if we would get funded. We weren’t sure if the health system was ready for this kind of thing,” Tiller says. “When we got that grant that changed everything and that set our course quite clearly in terms of what we were aiming for. We wanted an equitable screening program open to everyone for prevention.”

Tiller and Lacaze are now preparing to take their project, DNA Screen, into its next important phase. They want to scale the program up to test 100,000 adults which, if successful, would be the precursor to a nationwide genomic screening program.

This means every adult Australian who wants a test could eventually have their DNA screened for free. Tiller and Lacaze say it would save countless lives by detecting the risk of disease early and save the healthcare system billions of dollars through prevention.

The projection estimates that 318,000 of Australia's 20 million adults are at high genetic risk of the three conditions being screened for. Since, on average, half of those people will develop a cancer or coronary heart disease in their lifetime if the risk is left unidentified and untreated, up to 119,000 cancers and 40,000 heart attacks could occur if carriers are not identified.
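The projection's arithmetic can be sanity-checked with a short, purely illustrative calculation (the 50 per cent figure is the article's stated average lifetime risk across the three conditions):

```python
# Illustrative check of the article's projection figures.
adults_at_high_risk = 318_000       # out of ~20 million Australian adults
avg_lifetime_risk = 0.5             # article: half develop disease if unmanaged

expected_events = adults_at_high_risk * avg_lifetime_risk
projected_cancers = 119_000
projected_heart_attacks = 40_000

# The two quoted outcome figures together match the 50% risk estimate.
print(expected_events)                              # 159000.0
print(projected_cancers + projected_heart_attacks)  # 159000
```

The two disease counts quoted in the article sum exactly to half the high-risk cohort, which is how the projection appears to have been derived.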

“Our long-term goal is for every adult Australian who wants testing to be able to access genomic screening for disease prevention,” Tiller says. “If Australia was to do this, we would be world leaders. There is no country that has a national genomic screening program for adult prevention in the way we are proposing. It is feasible in Australia because of the size of our population, the decreasing cost of testing and our nationally funded healthcare system.”


DNA tests have been growing in popularity for the past decade. Millions of people, including celebrities such as Oprah Winfrey, are spitting into test tubes to learn about their ancestry, and police are using the science to solve crimes decades after they were committed. Screening as a way to detect the risk of disease is not new, but it is not widespread either.

Angelina Jolie inspired thousands of women to get tested when she had a double mastectomy in 2013 after a screening found a genetic mutation which meant she had a high risk of breast cancer.

While to some the idea of mass DNA testing sounds like something out of a dystopian science fiction movie, Monash says demand was overwhelming when it launched its pilot in 2020.

Though the original plan was to recruit participants through social media over time, 10,000 people applied online on the first day after the story made the evening television news. “The interest was overwhelming. About 20,000 people signed up in the first week, and we had to turn our social media ads off,” Tiller says.

Monash University genomics researcher Paul Lacaze and senior research fellow and lawyer Jane Tiller teamed up to launch DNA Screen.  

The tests screened for three adult genetic conditions: hereditary breast and ovarian cancer; Lynch syndrome (which increases the risk of colorectal and other cancers); and familial hypercholesterolaemia, which increases the risk of coronary heart disease.

After a simple eligibility test, saliva testing kits were posted to the participants. They would spit into a tube with a temperature stabilising solution and mail the samples back to Melbourne for testing in Monash’s labs. The total turnaround time varied but for many was about six weeks. Samples deemed high risk were then analysed by a curation team and people were notified about the results.

The results of Monash’s DNA Screen national pilot study found about 2 per cent of those tested were high risk. Three-quarters of those people would not have qualified for any reimbursed genetic testing based on current criteria in the public healthcare system. The study also meant Monash could deliver a detailed cost-effectiveness analysis for a nationwide screening program based on testing costs of $200 to $400 per test.

Figures published in The Lancet medical journal state that at $200 per test, genomic screening would be cost-effective under the Australian threshold, and would require an investment of $832 million to screen half of the Australian population aged 18-40.
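The quoted cost figures can be cross-checked with a rough back-of-envelope calculation; note that the size of Australia's 18-40 population used below is an assumption for illustration, not a figure from the article:

```python
# Back-of-envelope check of the quoted cost figure (illustrative only).
cost_per_test = 200               # dollars per test, lower bound in the article
total_investment = 832_000_000    # dollars, quoted required investment

implied_cohort = total_investment / cost_per_test
print(implied_cohort)  # 4160000.0 people screened

# An implied cohort of ~4.16 million is broadly consistent with screening
# half of the 18-40 population if that group numbers roughly 8.3 million
# (an assumed population figure, used here for illustration only).
assumed_population_18_40 = 8_300_000
print(implied_cohort / assumed_population_18_40)
```

At the quoted $200 per test, the $832 million figure implies a screened cohort of about 4.16 million people, which lines up with "half of the Australian population aged 18-40" under the assumed population size.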

Monash is waiting on a funding decision for what it is calling the bridging phase. It proposes an investment by the federal government of $50 million to screen 100,000 Australians in partnership with the Department of Health over four to five years as a precursor to a nationwide government-funded screening program, which could be considered as early as 2030.

One problem with identifying such large numbers of people at risk of cancer or heart disease at the same time is the pressure it would put on the country’s already stretched health system. If Monash’s projections are correct, downstream genetic services would be significantly stretched if they received thousands of referrals. Tiller says the goal would be to prepare the health system and genetics services ahead of time.

“We know that genetic services would be significantly overstretched if they received thousands of referrals, and new, streamlined models are required to effectively deliver population genomics. These models need to be developed and tested in real time to generate evidence and ensure system readiness,” she says.

Critically, to achieve the cost-effectiveness, some government investment needs to be shifted into prevention.

With Australia’s healthcare system facing growing pressure from an ageing population and funding shortfalls, doctors are backing the Monash proposal as another investment in prevention. While DNA screening will add pressure onto the system, they say it is better to identify the risks early and save lives and stop unnecessary treatment in the years ahead.

“The system is absolutely creaking at the seams at the moment and the only way to do it is to pivot from treatment to prevention,” says Steve Robson, former Australian Medical Association president who supports DNA Screen.

A key concern about mass DNA testing was addressed last month when the federal government announced a ban on genetic discrimination. This means people cannot be obliged to share their test results with life insurers, who are also not allowed to ask for them.

Potential data breaches also worry some people. Last year, data belonging to more than 6 million customers of genetics testing company 23andMe was exposed in a data breach. Monash destroys samples if people request it, but fewer than 10 per cent of those screened have asked for this.

For some, the concept of screening an entire country's adult DNA will sound dystopian even if it saves lives. The 1997 science fiction movie Gattaca tells the story of a near future in which society is divided between those born with, or enhanced to have, more desirable genes and those without them.

Here is the link:

https://www.afr.com/companies/healthcare-and-fitness/the-plan-to-save-australian-lives-with-nationwide-dna-screening-20240911-p5k9uu

There is no doubt this is an ambitious ‘biggie’ and there is also no doubt that prevention is a very good thing for all concerned.

Whether anyone can successfully execute such a huge intervention is unknown at this point, and you can be sure it will cost more than the initial estimate. This is one of those projects where the costs and benefits need to be fully worked through, and a decently scaled feasibility study conducted at the very least!

If all that lines up, it will be a political and funding decision to go ahead, along with a public education campaign to get the populace on side!

David.

Wednesday, October 09, 2024

This Is Quite A Useful Update On The Cyber World And Privacy Topics

This appeared last week:

Digital Bytes – cyber, privacy, AI & data update

Oct 2024 Articles Written by Helen Clarke (Partner), Sophie Dawson (Partner), Keith Robinson (Partner), Emily Lau (Senior Associate), Viva Swords (Senior Associate), Lydia Cowan-Dillon (Associate)

While all eyes have been on the recent introduction of the privacy reform Bill to Parliament, there have been a number of other updates that continue to inform the shifting patterns of opportunity, legal risks and regulatory focus in relation to cyber, privacy, AI and data over the last three months.

In addition to the more substantive updates below, also keep in mind:

  • Significant data breaches and cyber incidents continue to make the headlines. Many businesses were affected by the July 2024 CrowdStrike outage, raising questions about the legal implications for regulatory compliance (including privacy compliance), insurance, business continuity and supply chain disruption, as well as whether events like this will trigger a change in approaches to contracts and liability.
  • The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 was introduced to Parliament on 12 September 2024. The Bill provides the Australian Communications and Media Authority (ACMA) with new powers to address seriously harmful content (including misinformation and disinformation) on digital platforms, with strengthened protections for freedom of speech.
  • A recent report from Tenable indicates that many of the Australian companies surveyed could lower their cyber insurance premiums by 5-15 per cent by implementing proactive risk-management measures.
  • ACMA continues to take regular enforcement action – notably, two infringement notices were issued against Telstra for failures to comply with scam rules and disclosure of unlisted phone numbers.
  • Draft legislation to implement a ‘Scams Prevention Framework’ has been released for consultation. The Treasury Laws Amendment Bill 2024: Scams Prevention Framework would require designated sectors to prevent, detect, report, disrupt and respond to scams and to implement appropriate governance arrangements. The framework would initially apply to banks, telecommunication providers and digital platform service providers (including social media, paid search engine advertising and direct messaging services) – future designated sectors would likely include superannuation funds, digital currency exchange providers, other payment providers and online marketplaces. Consultation closes on 4 October 2024.
  • The European Union’s AI Act took effect on 1 August 2024, so we should soon start seeing how this risk-based regulatory model plays out in practice. More importantly, however, Australia is making progress in its own regulatory model for AI – see below.
  • The sale of personal information is a topic of increasing focus, with Oracle reaching a US$115 million settlement (without admitting liability) in litigation claiming that it sold “digital dossiers” with data about hundreds of millions of people. Another settlement has made the news – genetics testing company 23andMe has recently settled a suit in relation to its 2023 data breach for US$30 million, and a promise of three years of security monitoring.
  • A number of different government bodies, such as AUSTRAC and the Australian Cyber Security Centre, have recently issued updated guidance on recommended practices for outsourcing and procurement.

Some Australian privacy reforms progress with Bill introduced to Parliament

Our Technology and Privacy specialists take you on a tour of the reforms in this article.

For those in a hurry, the Privacy and Other Legislation Amendment Bill 2024 (Cth) contains:

  • new infringement notice powers for Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC);
  • a statutory tort for certain serious invasions of privacy, which includes a journalism exemption;
  • powers to prescribe foreign jurisdictions as having adequate privacy laws for the purpose of overseas disclosure of personal information;
  • clarification of entities’ information security obligations;
  • updates to the notifiable data breaches regime to provide additional flexibility in handling notifiable data breaches to reduce harm to individuals;
  • provisions mandating the development of a Children’s Privacy Code;
  • new offences to be included in the Criminal Code Act 1995 (Cth) for certain online communications that are “menacing or harassing” (referred to as “doxxing”); and
  • new transparency requirements in relation to entities’ use of personal information for automated decision-making.

Certain aspects of the reforms will be more important to some organisations than others, so it’s important to carefully identify those that may impact your business and operations.

However, at a minimum, businesses should be aware of the specific Australian Privacy Principle (APP) provisions that are proposed to be subject to the OAIC’s “infringement notices” power (see our earlier article for a list). Compliance with these provisions should be an area of focus, given the relative ease with which the OAIC will be able to take action in the event of non-compliance (if reforms are passed).

There is a raft of other recommended changes proposed through the reform process which were not included in this Bill. These reforms may be introduced at a later date.

Proposed mandatory guardrails for AI in high-risk settings

Following the Government's previous announcement of its proposed risk-based approach to regulating AI (which we reported on in an earlier edition of Digital Bytes), the Department of Industry, Science and Resources issued a proposals paper on 5 September 2024 for introducing mandatory guardrails for AI in high-risk settings, containing:

  • two categories to define “high-risk” settings;
  • 10 proposed mandatory guardrails for the development and deployment of “high-risk” AI; and
  • three proposed approaches to regulation.

Submissions on the proposals paper are due by 4 October 2024.

Definition of “high-risk” settings

The paper proposes two categories that should be considered “high-risk”, where the proposed mandatory guardrails will apply.

The first category addresses where AI use is known or foreseeable, and proposes that what is “high-risk” will depend on the adverse impact (and severity and extent of the impact) on:

  • individuals’ rights recognised by Australian law;
  • people’s physical or mental health or safety;
  • legal effects, defamation or similarly significant effects on individuals;
  • groups of individuals or collective rights of cultural groups; and
  • the broader Australian economy, society, environment and rule of law.

The second category deems any advanced and highly capable AI models, where all possible risks and applications cannot be predicted, to be “high-risk”.

Ten proposed mandatory guardrails

The proposed mandatory guardrails are:

  1. establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. establish and implement a risk management process to identify and mitigate risks;
  3. protect AI systems and implement data governance measures to manage data quality and provenance;
  4. test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content;
  7. establish processes for people impacted by AI systems to challenge use or outcomes;
  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. keep and maintain records to allow third parties to assess compliance with the guardrails; and
  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.

Approaches to regulation

The proposals paper canvasses and seeks feedback on the following three options for implementing the proposed mandatory guardrails:

  1. domain specific approach – adapting existing regulatory frameworks to include the proposed mandatory guardrails;
  2. framework approach – introducing framework legislation, with associated amendments to existing legislation; and
  3. whole of economy approach – introducing a new cross-economy AI-specific Act that would define these “high-risk” applications of AI and outline the new mandatory guardrails.

Organisations developing or deploying AI in “high-risk” settings may wish to submit feedback on the proposals paper by 4 October 2024.

AI Voluntary Safety Standard released

On the same day as the release of the ‘mandatory guardrails’ paper described above, the Department of Industry, Science and Resources issued a Voluntary AI Safety Standard setting out 10 voluntary guardrails to help organisations deploying and developing AI to benefit from and manage the risks associated with AI.

The intended audience is both developers and deployers of AI.

The 10 voluntary guardrails are the same as the proposed mandatory guardrails, set out above, with the exception of the 10th guardrail, which is:

Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Organisations already developing or deploying AI should consider adopting the 10 steps in the AI Voluntary Safety Standards – even though this is not currently a legal imperative, adopting this approach will assist with risk management and may become industry practice.

Other key updates in the AI space

The Governance Institute of Australia (GIA) has released an issues paper on artificial intelligence (AI) and board minutes, addressing the growing use of transcription and generative AI tools to transcribe meetings and generate action items or summaries.

The issues paper flags confidentiality, cybersecurity, IP, inaccuracy and lack of transparency as key legal issues that may arise. It also highlights the importance of technological literacy of those using AI.

In light of directors’ statutory and common law fiduciary duties, the issues paper recommends directors ensure that any AI-generated minutes are a true reflection of board meetings.

To assist boards to navigate a disruptive technological trend, the Australian Institute of Company Directors (AICD) has released a suite of guidance materials for directors on AI governance, focusing on generative AI. ‘A Director’s Introduction to AI’ provides an overview of AI applications and relevance for directors, the risks and opportunities, and the applicable domestic and international regulatory environment.

Like the GIA AI issues paper, the AICD materials urge directors to be mindful of their duties when capitalising on the commercial benefits of generative AI in their organisations.

Generative AI is also raising competition law concerns. Competition and consumer law regulators in the United States, the European Union and United Kingdom have released a joint statement identifying trends in the AI market which they consider may impact a fair, open and competitive environment, and the following competition and consumer risks:

  • algorithms can allow competitors to share commercially sensitive information, engage in price fixing or collude on other terms that undermine competition;
  • providers of key inputs, such as specialised chips, substantial compute, data at scale and specialist technical expertise, may become bottlenecks in the AI supply chain and allow key players to have “outsized influence” in the market;
  • incumbent AI providers often have significant market power and may seek to entrench their dominance, which may impact competition;
  • partnerships, investments and “other connections” between firms relating to the development of generative AI could in some cases be used to influence competition and steer market outcomes;
  • deceptive or unfair use of consumer data to train AI models may give rise to regulatory non-compliances; and
  • businesses should be transparent with consumers when AI is incorporated in products and services.

AI’s impact on anti-competitive behaviour and detrimental outcomes to consumers will continue to be monitored by competition and consumer law regulators in these jurisdictions. In Australia, the Australian Competition and Consumer Commission (ACCC) recently released an announcement flagging competition issues in generative AI as a topic to be addressed in its 10th Digital Platform Services Inquiry report – so the issue is equally on the radar in Australia.

Key takeaways from regulators’ plans for 2024-25

In August 2024, key Australian regulators released their corporate plans for 2024-25, identifying their areas of focus for the year ahead. Rapid technological innovation was cited across the board as one of the driving factors impacting the regulators’ respective sectors and informing their strategic priorities.

The OAIC has outlined its focus on identifying the unseen harms that impact privacy rights in the digital environment. As part of this focus, it plans to implement a program of targeted, proactive investigations to uncover harms, provide avenues for remediation and set the standard for industry practice. It also flagged that it is looking to exercise its wider range of enforcement powers, which have been proposed through the privacy reforms.

The OAIC also has a new role in regulating the ‘Digital ID’ scheme, and has flagged that it is looking to increase the uptake of digital ID in order to reduce avoidable over-sharing of identity information.

The OAIC states it is aiming to finalise 80 per cent of notifiable data breaches within 60 days, and 80 per cent of privacy complaints within 12 months.

ACMA has also released its annual compliance priorities for 2024-25, which include addressing misleading spam messages, and combatting misinformation and disinformation on digital platforms (note the new Bill referred to above).

Both the Australian Securities and Investments Commission (ASIC) and the Australian Prudential Regulation Authority (APRA) named cyber resilience as a key focus for 2024-25. 

In its corporate plan, ASIC stated that it intends to advance digital and data resilience and safety by:

  • implementing a supervisory cyber and operational resilience program;
  • monitoring the use of AI by Australian financial services licensees; and
  • monitoring the use of offshore outsourcing arrangements.

APRA plans to undertake a number of regulatory activities aimed at strengthening the cyber risk-management practices of regulated entities, including:

  • embedding Prudential Standard CPS 234 Information Security (CPS 234) and ensuring entities act on findings from CPS 234 independent reviews to lift minimum standards of cyber risk management;
  • releasing industry letters on high-risk cyber topics and expecting regulated entities to strengthen practices accordingly;
  • conducting a cyber operational resilience exercise to test industry preparedness in responding to cyber incidents; and
  • engaging with government initiatives on cyber regulation, generative AI, preparedness and incident response.

What you need to know from the latest OAIC reports and actions

The latest notifiable data breaches report released

The OAIC’s notifiable data breaches report for January to June 2024 was published on 16 September 2024. In the report’s foreword, the Australian Privacy Commissioner reminds entities that the scheme is now six years old, and “it is no longer acceptable for privacy to be an afterthought; entities need to be taking a privacy-centric approach in everything they do”.

The number of notifications received in this six-month period was the highest it has been since late 2020, with 527 notifications. Malicious or criminal attacks still make up the majority (67 per cent) of notified data breaches, with human error accounting for 30 per cent and system fault a mere 3 per cent. Incidents involving phishing (compromised credentials), ransomware and other compromised or stolen credentials make up the majority of reported cyber incidents.

Messages of note in the report include:

  • when assessing the relevance of a threat actor’s motivation, entities should not rely on assumptions and should weigh in favour of notification – in particular, the OAIC warns against taking at face value a threat actor’s assurance that data will not be mishandled if a ransom is paid, and points to the prevailing government recommendation that ransoms should not be paid;
  • there are a number of specific cyber threat mitigations identified that indicate the OAIC’s expectations for complying with information security obligations in APP 11.1 – e.g. multi-factor authentication where possible, password management policies, layering security controls, need-to-know access and security monitoring processes and procedures; and
  • there is a continued focus on supply chain management, and the steps entities should take when engaging a service provider that handles personal information on its behalf.

The current employee record exemption in the Privacy Act is interpreted narrowly

A recent privacy determination tests the limits of the employee record exemption in the Privacy Act 1988 (Cth) (Privacy Act).

In ALI and ALJ (Privacy) [2024] AICmr 131, an employee made a complaint after 110 staff were emailed an update about the employee’s (good) recovery following a medical episode in the workplace’s carpark which was witnessed by a number of other employees.

The employer argued that disclosing the employee’s personal and sensitive information in the update fell within the employee record exemption because the update was directly related to the employment relationship. However, the employer’s argument focused on its employment relationship with other employees who were concerned with the complainant employee’s recovery after the incident. As such, the OAIC was not persuaded that the update was directly related to the employer’s employment relationship with the complainant employee.

The OAIC then found that use of the employee’s personal information in the update breached APP 6, because the employer could have discharged its duty to its other employees without identifying the employee by name in the update.

The OAIC awarded the employee $3,000 for non-economic loss and $125 for expenses. The OAIC declined to award other remedies sought by the employee, such as a charitable donation or an employment reference.

A recent privacy assessment of the ‘my health app’ shows the OAIC’s attention to detail when reviewing privacy policies

The OAIC has recently conducted a privacy assessment of Australian Digital Health Agency’s (ADHA’s) 'my health app', including a review of its privacy policy.

Notably, the OAIC’s assessment included consideration of how the app’s privacy policy addressed overseas disclosure. It recommended that catch-all statements intended to “allow for situational responsiveness and to avoid breaching the policy” should be replaced with a more detailed and specific description of any overseas disclosure based on current practice (if there were any such disclosures).

Further, the OAIC noted that the privacy policy was lengthy, repetitive, and included operational and instructional information not relevant to the management of personal information. It recommended that the privacy policy should only include descriptions of how the entity manages personal information. The OAIC also repeated its general guidance that privacy policies should be easy to understand (for example, by avoiding jargon and legalistic terms).

Organisations should consider reviewing their privacy policies against these recommendations.

OAIC takes no further action in relation to Clearview AI and TikTok

In 2021, the OAIC found Clearview AI had breached Australians’ privacy through the collection of images without consent, and ordered the company to cease collecting the images and delete images on record within 90 days. Clearview initially appealed the decision to the Administrative Appeals Tribunal but ceased its appeal in August 2023. The OAIC recently announced that further action against Clearview AI was not warranted.

Further, despite raising concerns about TikTok’s use of pixel technology, the Australian Privacy Commissioner has declined to investigate, citing deficiencies with existing privacy laws. Given the recent privacy reform Bill does not include amendments to the definition of personal information, it is possible that further reforms are required to investigate the practice.

ACMA releases updated guidance on consent to marketing under the Spam Act

In response to a surge of regulatory activity under the Spam Act 2003 (Cth) (Spam Act) and the Do Not Call Register Act 2006 (Cth), in July 2024, ACMA released its Statement of Expectations (Statement). This ‘outcome-focused guide’ sets out ACMA’s expectations of how businesses should obtain consumer consent when conducting telemarketing calls and e-marketing (via email, SMS and instant messages).

The key takeaways from the Statement are:

  • Express consent is preferred: while the Acts permit inferred consent in certain circumstances, ACMA recommends obtaining express consent, as it is clear and unambiguous.
  • Terms and conditions for express consent: ACMA recommends express consent based on clear terms and conditions, that are readily accessible (i.e. not hidden in fine print, lengthy privacy policies or behind multiple ‘click-throughs’). Terms and conditions should address what the consent is for (including for the types of products and marketing channels), who will use the consent (including affiliates and partners), how long the consent will be relied on, and how consent can be withdrawn.
  • Double opt-in: ACMA recommends taking a “double opt-in” approach, such as email confirmation of consent (e.g. an email providing a click-through link).
  • Do not rely on third parties: if working with third parties, businesses cannot assume that they will keep or obtain records of consent and marketing. Businesses must have their own robust and comprehensive processes in place to ensure that consent is reliably kept and maintained.
  • Obtain records of consent: businesses are required to retain records of consent and marketing. ACMA recommends that records include (but are not limited to) “the method used to obtain consent, the terms that applied and the date and time it was obtained”.
  • Requirements for valid consent: ACMA acknowledges the OAIC’s requirements for valid consent (informed, voluntary, current, specific, given by a person with capacity) under the Privacy Act, and says that those requirements “provide a framework to apply to consent gathering practices to ensure that they are consumer friendly”. However, ACMA falls short of expressly adopting those requirements.

The Statement also reinforces the existing legal requirements in relation to unsubscribe and opt-out options, including the fact that individuals should not be required to log in to a service to unsubscribe.

The release of this Statement indicates that practices regarding consent are on ACMA’s radar, and organisations should consider reviewing their practices against ACMA’s expectations in the Statement.

The consumer data right regime is expanded to include action initiation

The Consumer Data Right (CDR) regime is Australia’s data portability scheme. Introduced in 2019, the scheme has been rolled out sector by sector – so far, to the banking and energy sectors – to allow consumers to direct their service providers (e.g. their bank) to provide their data directly to recipients accredited under the scheme (e.g. a budgeting app).

In a significant update to the scheme, legislation (originally introduced to Parliament in 2022) has recently passed which permits “action initiation”. Action initiation allows an accredited data recipient to take actions on the consumer’s behalf. For example, an accredited recipient (with the consumer’s consent) may be able to make payments, open and close accounts, switch providers and update details on the consumer’s behalf.

Action initiation will only be available for types of actions designated by the Minister, in relation to service providers designated as “action services providers” by the Minister.

Treasury also released, for public consultation, exposure draft amendments to the Consumer Data Right Rules which include (among other changes) proposals to simplify:

  • rules relating to consents, including permitting bundled consents; and
  • arrangements for businesses to nominate representatives.

Submissions have now closed.

Online safety updates

Straight from recent headlines, the Government announced that it is consulting on a proposal to impose social media age restrictions – we examined the proposals in this recent article.

Further, Australia’s eSafety Commissioner has recently issued new industry standards, commenced development of the next phase of industry codes, and issued a number of notices to digital platforms to report and provide information about measures being taken:

  • Relevant electronic service providers (e.g. online gaming and messaging services) and designated internet service providers (e.g. apps, websites, storage services, and some services that deploy or distribute generative AI models) will be required to comply with new industry standards from 22 December 2024. The new standards require providers to adopt compliance measures for specific categories of harmful online content, such as implementing systems to detect and remove that content. The standards also seek to address new harms and risks associated with the development of generative AI.
  • In early July 2024, the eSafety Commissioner issued notices to key industry bodies and associations to develop “Phase 2” industry codes for ‘age-inappropriate’ online material. The new codes must address prevention and protection of Australian children from access or exposure to these materials and provide Australian end-users with the ability to limit access and exposure. Draft codes are to be developed by industry by the end of the year.
  • Later in July, the eSafety Commissioner issued notices to companies including Apple, Google, Meta and Microsoft, requiring them to periodically report (for the next two years) on measures implemented to address online child abuse material. The first reports are due in February 2025.
  • The eSafety Commissioner has also recently exercised its expanded transparency powers (under the updated Basic Online Safety Expectations Determination) and requested information from digital platforms about how many Australian children are on their platforms and what age assurance measures they have in place to enforce their platforms’ age limits.

APRA releases CPG 230 to help entities prepare for 1 July 2025

APRA’s Prudential Standard CPS 230 Operational Risk Management (CPS 230) sits within the risk-management pillar of APRA’s framework. Operational risk management is essential to ensure the resilience of an entity and its ability to maintain critical operations through disruptions.

Our earlier edition of Digital Bytes canvassed CPS 230’s requirements, which take effect on 1 July 2025. CPS 230 sets baseline expectations for all APRA-regulated entities. Every regulated entity faces operational risks; however, APRA expects Significant Financial Institutions (SFIs) to have stronger practices, commensurate with the size and complexity of their operations.

In July 2024, APRA released the final version of Prudential Practice Guide CPG 230 along with an accompanying statement setting out responses to submissions made through earlier consultation.

Of particular note, APRA has:

  • agreed to allow non-SFIs an additional 12 months before they must comply with business continuity and scenario analysis requirements under CPS 230;
  • set out details of its supervision programme for 2025-2028, with details of the prudential reviews it intends to undertake; and
  • published a ‘day one compliance checklist’ to assist entities to prepare for 1 July 2025.

APRA-regulated entities should also be aware that APRA has published a letter to its regulated entities providing additional insights on common cyber resilience weaknesses.

As reported in our recent Above Board publication, the eight observations in APRA’s letter relate to security in configuration management, privileged access management and security testing. These include “inadequate management and oversight of security test findings”; APRA’s guidance is that test results should be reported to the appropriate governing body or individual, with associated follow-up actions formally tracked. Testing, like threat detection, only works if it is followed through.

What do we know is coming next?

Some of the updates we can expect in the coming months include:

  • the Government is expected to introduce legislation to Parliament soon, mandating that organisations with a turnover of $3 million or more, and government entities, must report ransomware payments. Indications are that while the Government’s use of reported information will be subject to limited-use or ‘safe harbour’ protections, the full immunity from legal action sought by businesses will not be included;
  • the Senate Select Committee on Adopting Artificial Intelligence has had its reporting deadline extended from 19 September 2024 to 26 November 2024;
  • the OAIC has indicated that it will soon release updated privacy guidance for the not-for-profit sector;
  • as part of the ‘Commonwealth Cyber Uplift Plan’, a new cybersecurity industry advisory board will be established to advise Government; and
  • the final report from the latest three-yearly review of Australia’s credit reporting framework (including Part IIIA of the Privacy Act) was due to be delivered to relevant ministers by 1 October 2024.

Finally, if you’re currently focused on what you can do to minimise the aged and redundant personal information you hold, a recent case in the US on Google’s destruction of employee chat records is a timely reminder to ensure that you also take the right steps to preserve evidence.

How we can assist

We have a large team of privacy and cyber specialists, with substantial experience across the whole spectrum of data, privacy and cyber compliance and incident management.

For a more detailed briefing on any of these updates, or to discuss how we can assist your organisation to manage its risks in these rapidly evolving areas, please get in touch.

Big thanks to Alexandra Gauci, Bailey Britt, Dean Baker, James Finnimore, Leonie Higgins, Caitlin Abernethy and Saara Stenberg for their contributions to this edition of Digital Bytes.

Here is the link:

https://jws.com.au/insights/articles/2024/digital-bytes-cyber-privacy-ai-data-update-oct2024

And thanks to the authors for a thorough review!

David.

Tuesday, October 08, 2024

I Suspect We Should Be More Worried Than We Apparently Are About These Intrusions

This appeared last week:

Aussies’ private data being shared without consent through online advertisers exposing them to scammers

Australians’ private information is being shared hundreds of times a day with online advertisers who resell and distribute it to unknown parties, putting people at risk of targeted scams.

Emma Kirk

October 4, 2024 - 9:28PM

NewsWire


Australians’ personal information is being shared 450 times a day through online ad-tracking systems without their consent or knowledge.

A new report from Reset Tech found Australians’ live location data is shared hundreds of times each day through real-time bidding (RTB) systems, software used by ad-exchange companies that collect masses of consumer data to share with advertisers.

The digital advocacy group used data from the Irish Council for Civil Liberties’ investigation into Australia’s hidden security crisis and made shocking discoveries about how Australians’ personal information is being used without their consent.

Reset Tech uncovered that Australians’ “extraordinarily sensitive information” was being exposed to hundreds of unknown third-party actors every second of every day.

The group explained that RTB systems worked each time a person opened a website or app with an ad, instantly launching an auction to help advertisers decide which ad space to bid on.


The report found that, every day, thousands of companies received data on every available ad slot on Australians’ devices, which they could copy to build their own databases about Australians and resell the information over and over again.

Data included information about people’s movements, sexual interests, financial concerns, service providers, personal problems, gambling, drinking habits and online purchases.

Data could be categorised to identify who overate to cope with stress, who acted on impulse, who got a thrill from shopping, and who was self-indulgent.

The report found one company had 17,500 unique data categories about Australians for sale.

Currently, Australia has no limits on how this sensitive information is used, which means residents’ private information can be sold to scammers and foreign-state actors.

The report found scammers could access information generated through ad-tracking systems to target scam ads to victims.

The report highlighted scammers could buy this information from businesses to personalise scam ads that appeared to be from people’s service providers such as banks or telcos, or use it to scam people in other ways.

Reset Tech executive director Alice Dawkins said no one knew who was buying the data or where it was going.

Ms Dawkins said there was no transparency over the transactions that were taking place in a largely business-to-business data trade.

“We don’t know who buys it and we don’t know what happens to the data after it is initially released and put on offer,” she said.

“There’s no way of knowing or controlling these data flows once they’ve been exposed through the RTB process.”

Ms Dawkins said it was widely recognised that browsing was not a private experience, but Australians would be shocked to hear the level of inferences and information that could be collected from advertisers, purchasers and advertising data.

She said the real point was how the information could be linked back to an identifiable person.

“The ad tech industry talk about the data being anonymous or anonymised, and the narrative of anonymous ads, which I just find so extraordinary,” she said.

“It’s the ad tech industry’s version of greenwashing.

“The whole point of the detail in these datasets is so you can target a person with ads relevant to them, such as through a cookie ID or a browser ID.

“So the notion that all of this effort goes into targeting ads and then it couldn’t possibly be linked to a person is so nonsensical, because that’s the entire point.”

Ms Dawkins said the Australian parliament needed to set clearer expectations on what types of data were protected.

She said there needed to be a useful framework for businesses that handled, processed, collected and traded Australians’ data.

In September, Attorney-General Mark Dreyfus introduced the Privacy and Other Legislation Amendment Bill 2024 into parliament to offer better protection of Australians’ privacy.

A spokesperson for the Attorney-General said the government was committed to ensuring the Privacy Act worked for all Australians and was fit for purpose in the digital age.

“The Albanese Government’s landmark legislation now before the parliament will strengthen privacy protections for all Australians, including a statutory tort for serious invasions of privacy, targeted criminal offences to respond to doxxing and enable the development of a Children’s Online Privacy Code,” they said.

“This legislation is just the first stage of the Government’s commitment to providing individuals with greater control over their personal information.”

Consumer Policy Research Centre deputy chief executive officer Chandni Gupta said Australians deserved privacy protections that were centred around people, not profit.

“It is time for the Federal Government to modernise what it means to be identifiable to cover data points obtained from any source and by any means,” she said.

“It must put the onus on businesses by imposing clear obligations on collecting, sharing and using consumer data that leads to fair and safe outcomes for Australians.”

Here is the link:

https://www.news.com.au/technology/online/aussies-private-data-being-shared-without-consent-through-online-advertisers-exposing-them-to-scammers/news-story/09c009c57a33c6d7780804294240b648

It really is pathetic just how little Government is doing to protect us all from exploitation and potential abuse. I guess we will only see any action when the abuse becomes annoying to the man in the street and there is demand for reform.

I fear that may well take a good while!

David.