This
appeared last week:
Digital Bytes – cyber, privacy, AI & data update
Oct 2024 Articles Written by Helen Clarke (Partner), Sophie
Dawson (Partner), Keith Robinson (Partner), Emily Lau (Senior Associate), Viva
Swords (Senior Associate), Lydia Cowan-Dillon (Associate)
While all eyes have been on the recent introduction of the
privacy reform Bill to Parliament, there have been a number of other updates
that continue to inform the shifting patterns of opportunity, legal risks and
regulatory focus in relation to cyber, privacy, AI and data over the last three
months.
In addition to the more substantive updates below, also keep
in mind:
- Significant
data breaches and cyber incidents continue to make the headlines. Many
businesses were affected by the July 2024 CrowdStrike outage, raising
questions about the legal implications for regulatory compliance
(including privacy compliance), insurance, business continuity and supply
chain disruption, as well as whether events like this will trigger a
change in approaches to contracts and liability.
- The Communications Legislation Amendment (Combatting
Misinformation and Disinformation) Bill 2024 was introduced
to Parliament on 12 September 2024. The Bill provides the Australian
Communications and Media Authority (ACMA) with new powers to
address seriously harmful content (including misinformation and
disinformation) on digital platforms, with strengthened protections for
freedom of speech.
- A recent report from Tenable indicates that a
significant proportion of the Australian companies surveyed could lower their
cyber insurance premiums by 5-15 per cent by implementing proactive
risk-management measures.
- ACMA
continues to take regular enforcement action – notably, two infringement
notices were issued against Telstra for failing to comply with scam rules and for disclosing unlisted phone numbers.
- Draft
legislation to implement a ‘Scams Prevention Framework’ has been released
for consultation. The Treasury
Laws Amendment Bill 2024: Scams Prevention Framework would require
designated sectors to prevent, detect, report, disrupt and respond to
scams and to implement appropriate governance arrangements. The framework
would initially apply to banks, telecommunication providers and digital
platform service providers (including social media, paid search engine
advertising and direct messaging services) – future designated sectors
would likely include superannuation funds, digital currency exchange
providers, other payment providers and online marketplaces. Consultation
closes on 4 October 2024.
- The European Union’s AI Act took effect on 1 August 2024,
so we should soon start seeing how this risk-based regulatory model plays
out in practice. Closer to home, Australia is making progress
on its own regulatory model for AI – see below.
- The
sale of personal information is a topic of increasing focus, with Oracle
reaching a US$115 million settlement (without admitting liability) in litigation claiming
that it sold “digital dossiers” with data about hundreds of millions of
people. Another settlement has made the news – genetic testing company 23andMe
has recently settled a suit in relation to its 2023 data breach for US$30
million plus a promise of three years of security monitoring.
- A
number of different government bodies, such as AUSTRAC and the Australian Cyber Security Centre, have recently issued
updated guidance on recommended practices for outsourcing and procurement.
Some Australian privacy reforms progress with Bill
introduced to Parliament
Our Technology and Privacy specialists take you on a tour of
the reforms in this article.
For those in a hurry, the Privacy and Other Legislation
Amendment Bill 2024 (Cth) contains:
- new
infringement notice powers for Australia’s privacy regulator, the Office
of the Australian Information Commissioner (OAIC);
- a
statutory tort for certain serious invasions of privacy, which includes a
journalism exemption;
- powers
to prescribe foreign jurisdictions as having adequate privacy laws for the
purpose of overseas disclosure of personal information;
- clarification
of entities’ information security obligations;
- updates
to the notifiable data breaches regime to provide additional flexibility
in handling notifiable data breaches to reduce harm to individuals;
- provisions
mandating the development of a Children’s Privacy Code;
- new
offences to be included in the Criminal Code Act 1995 (Cth)
for certain online communications that are “menacing or harassing”
(referred to as “doxxing”); and
- new transparency requirements in relation to
entities’ use of personal information for automated decision-making.
Certain aspects of the reforms will be more important to
some organisations than others, so it’s important to carefully identify those
that may impact your business and operations.
However, at a minimum, businesses should be aware of the
specific Australian Privacy Principle (APP) provisions that are proposed
to be subject to the OAIC’s “infringement notices” power (see our
earlier article for a list). Compliance with these provisions should be an
area of focus, given the relative ease with which the OAIC will be able to take
action in the event of non-compliance (if the reforms are passed).
A raft of other changes recommended through the reform
process was not included in this Bill. These reforms
may be introduced at a later date.
Proposed mandatory guardrails for AI in high-risk
settings
Following the Government’s previous announcement of its
proposed risk-based approach to regulating AI (which we reported on in an earlier edition of Digital Bytes), on 5 September 2024
the Department of Industry, Science and Resources issued a proposals paper on introducing mandatory guardrails for AI in
high-risk settings. The paper contains:
- two
categories to define “high-risk” settings;
- 10
proposed mandatory guardrails for the development and deployment of
“high-risk” AI; and
- three
proposed approaches to regulation.
Submissions on the proposals paper are due by 4 October
2024.
Definition of “high-risk” settings
The paper proposes two categories of settings that should be considered
“high-risk”, in which the proposed mandatory guardrails will apply.
The first category addresses settings where the use of AI is known or
foreseeable, and proposes that whether a use is “high-risk” will depend on the adverse
impact (and the severity and extent of that impact) on:
- individuals’
rights recognised by Australian law;
- people’s
physical or mental health or safety;
- legal
effects, defamation or similarly significant effects on individuals;
- groups
of individuals or collective rights of cultural groups; and
- the
broader Australian economy, society, environment and rule of law.
The second category deems any advanced and highly capable AI
models, where all possible risks and applications cannot be predicted, to be
“high-risk”.
Ten proposed mandatory guardrails
The proposed mandatory guardrails are:
- establish,
implement and publish an accountability process including governance,
internal capability and a strategy for regulatory compliance;
- establish
and implement a risk management process to identify and mitigate risks;
- protect
AI systems and implement data governance measures to manage data quality
and provenance;
- test
AI models and systems to evaluate model performance and monitor the system
once deployed;
- enable
human control or intervention in an AI system to achieve meaningful human
oversight;
- inform
end-users regarding AI-enabled decisions, interactions with AI and
AI-generated content;
- establish
processes for people impacted by AI systems to challenge use or outcomes;
- be
transparent with other organisations across the AI supply chain about
data, models and systems to help them effectively address risks;
- keep
and maintain records to allow third parties to assess compliance with the
guardrails; and
- undertake
conformity assessments to demonstrate and certify compliance with the
guardrails.
Approaches to regulation
The proposals paper canvasses and seeks feedback on the
following three options for implementing the proposed mandatory guardrails:
- domain
specific approach – adapting existing regulatory frameworks to include
the proposed mandatory guardrails;
- framework
approach – introducing framework legislation, with associated
amendments to existing legislation; and
- whole
of economy approach – introducing a new cross-economy AI-specific Act
that would define these “high-risk” applications of AI and outline the new
mandatory guardrails.
Organisations developing or deploying AI in “high-risk”
settings may wish to submit feedback on the proposals paper by 4 October 2024.
AI Voluntary Safety Standard released
On the same day as the release of the ‘mandatory guardrails’
paper described above, the Department of Industry, Science and Resources issued
a Voluntary AI Safety Standard setting out 10 voluntary
guardrails to help organisations deploying and developing AI to benefit from
and manage the risks associated with AI.
The intended audience is both developers and deployers of
AI.
The 10 voluntary guardrails are the same as the proposed
mandatory guardrails, set out above, with the exception of the 10th
guardrail, which is:
Engage your stakeholders and evaluate their needs and
circumstances, with a focus on safety, diversity, inclusion and fairness.
Organisations already developing or deploying AI should
consider adopting the 10 steps in the Voluntary AI Safety Standard – even
though this is not currently a legal requirement, adopting this approach will
assist with risk management and may become industry practice.
Other key updates in the AI space
The Governance Institute of Australia (GIA) has
released an issues paper on artificial intelligence (AI) and board
minutes, addressing the growing use of transcription and generative AI
tools to transcribe meetings and generate action items or summaries.
The issues paper flags confidentiality, cybersecurity, IP,
inaccuracy and lack of transparency as key legal issues that may arise. It also
highlights the importance of technological literacy of those using AI.
In light of directors’ statutory and common law fiduciary
duties, the issues paper recommends directors ensure that any AI-generated
minutes are a true reflection of board meetings.
To assist boards to navigate a disruptive technological
trend, the Australian Institute of Company Directors (AICD) has released
a suite of guidance materials for directors on AI governance,
focusing on generative AI. ‘A Director’s Introduction to AI’ provides an
overview of AI applications and relevance for directors, the risks and
opportunities, and the applicable domestic and international regulatory
environment.
Like the GIA AI issues paper, the AICD materials urge
directors to be mindful of their duties when capitalising on the commercial
benefits of generative AI in their organisations.
Generative AI is also raising competition law concerns.
Competition and consumer law regulators in the United States, the European
Union and United Kingdom have released a joint
statement identifying trends in the AI market which they consider
may impact a fair, open and competitive environment, and the following
competition and consumer risks:
- algorithms
can allow competitors to share commercially sensitive information, engage
in price fixing or collude on other terms that undermine competition;
- providers
of key inputs, such as specialised chips, substantial compute, data at
scale and specialist technical expertise, may become bottlenecks in the AI
supply chain and allow key players to have “outsized influence” in the
market;
- incumbent
AI providers often have significant market power and may seek to
entrench their dominance, which may impact competition;
- partnerships,
investments and “other connections” between firms relating to the
development of generative AI could in some cases be used to influence
competition and steer market outcomes;
- deceptive
or unfair use of consumer data to train AI models may give rise to
regulatory non-compliance; and
- businesses
should be transparent with consumers when AI is incorporated in products
and services.
AI’s impact on anti-competitive behaviour and detrimental
outcomes to consumers will continue to be monitored by competition and consumer
law regulators in these jurisdictions. In Australia, the Australian Competition
and Consumer Commission (ACCC) recently released an announcement flagging competition issues in generative AI as a topic to
be addressed in its 10th Digital Platform Services Inquiry report –
so the issue is equally on the radar in Australia.
Key takeaways from regulators’ plans for 2024-25
In August 2024, key Australian regulators released their
corporate plans for 2024-25, identifying their areas of focus for the year
ahead. Rapid technological innovation was cited across the board as one of the
driving factors impacting the regulators’ respective sectors and informing
their strategic priorities.
The OAIC
has outlined its focus on identifying the unseen harms that impact privacy
rights in the digital environment. As part of this focus, it plans to implement
a program of targeted, proactive investigations to uncover harms, provide
avenues for remediation and set the standard for industry practice. It also
flagged that it is looking to exercise its wider range of enforcement powers,
which have been proposed through the privacy reforms.
The OAIC also has a new role in regulating the ‘Digital ID’
scheme, and has flagged that it is looking to increase the uptake of digital ID
use in order to reduce avoidable over-sharing of identity information.
The OAIC states it is aiming to finalise 80 per cent of
notifiable data breaches within 60 days, and 80 per cent of privacy complaints
within 12 months.
ACMA
has also released its annual compliance priorities for 2024-25, which include
addressing misleading spam messages, and combatting misinformation and
disinformation on digital platforms (note the new Bill referred to above).
Both the Australian Securities and Investments Commission
(ASIC) and the Australian Prudential Regulation Authority (APRA) named
cyber resilience as a key focus for 2024-25.
In its corporate plan, ASIC
stated that it intends to advance digital and data resilience and safety by:
- implementing
a supervisory cyber and operational resilience program;
- monitoring
the use of AI by Australian financial services licensees; and
- monitoring
the use of offshore outsourcing arrangements.
APRA
plans to undertake a number of regulatory activities aimed at strengthening the
cyber risk-management practices of regulated entities, including:
- embedding
Prudential Standard CPS 234 Information Security (CPS
234) and ensuring entities act on findings from CPS 234 independent
reviews to lift minimum standards of cyber risk management;
- releasing
industry letters on high-risk cyber topics and expecting regulated
entities to strengthen practices accordingly;
- conducting
a cyber operational resilience exercise to test industry preparedness in
responding to cyber incidents; and
- engaging
with government initiatives on cyber regulation, generative AI,
preparedness and incident response.
What you need to know from the latest OAIC reports and
actions
The latest notifiable data breaches report released
The OAIC’s notifiable data breaches report for January to June 2024
was published on 16 September 2024. In the report’s foreword, the Australian
Privacy Commissioner reminds entities that the scheme is now six years old, and
“it is no longer acceptable for privacy to be an afterthought; entities need to
be taking a privacy-centric approach in everything they do”.
The number of notifications received in this six-month
period was the highest it has been since late 2020, with 527
notifications. Malicious or criminal attacks still make up the majority (67 per
cent) of notified data breaches, with human error accounting for 30 per cent
and system fault a mere 3 per cent. Incidents involving phishing (compromised
credentials), ransomware and other compromised or stolen credentials make up
the majority of reported cyber incidents.
Messages of note in the report include:
- when
assessing the relevance of a threat actor’s motivation, entities should
not rely on assumptions and should weigh in favour of notification – in
particular, the OAIC warns against taking at face value a threat actor’s
assurance that data will not be mishandled if a ransom is paid, and
points to the prevailing government recommendation that ransoms should not
be paid;
- there
are a number of specific cyber threat mitigations identified that indicate
the OAIC’s expectations for complying with information security
obligations in APP 11.1 – e.g. multi-factor authentication where possible,
password management policies, layering security controls, need-to-know
access and security monitoring processes and procedures; and
- there
is a continued focus on supply chain management, and the steps an entity
should take when engaging a service provider that handles personal
information on its behalf.
The current employee record exemption in the Privacy Act
is interpreted narrowly
A recent privacy determination tests the limits of the
employee record exemption in the Privacy Act 1988 (Cth) (Privacy Act).
In ALI and ALJ (Privacy) [2024] AICmr 131, an employee
made a complaint after 110 staff were emailed an update about the employee’s
(good) recovery following a medical episode in the workplace’s carpark which
was witnessed by a number of other employees.
The employer argued that disclosing the employee’s personal
and sensitive information in the update fell within the employee record
exemption because the update was directly related to the employment
relationship. However, the employer’s argument focused on its employment
relationship with other employees who were concerned with the complainant
employee’s recovery after the incident. As such, the OAIC was not persuaded
that the update was directly related to the employer’s employment relationship
with the complainant employee.
The OAIC then found that use of the employee’s personal
information in the update breached APP 6, because the employer could have
discharged its duty to its other employees without identifying the employee by
name in the update.
The OAIC awarded the employee $3,000 for non-economic loss
and $125 for expenses. The OAIC declined to award other remedies sought by the
employee, such as a charitable donation or an employment reference.
A recent privacy assessment of the ‘my health app’ shows
the OAIC’s attention to detail when reviewing privacy policies
The OAIC has recently conducted a privacy assessment of the Australian Digital Health Agency’s (ADHA’s)
‘my health app’, including a review of its privacy policy.
Notably, the OAIC’s assessment included consideration of how
the app’s privacy policy addressed overseas disclosure. It recommended that
catch-all statements intended to “allow for situational responsiveness and to
avoid breaching the policy” should be replaced with a more detailed and
specific description of any overseas disclosure based on current practice (if
there were any such disclosures).
Further, the OAIC noted that the privacy policy was lengthy,
repetitive, and included operational and instructional information not relevant
to the management of personal information. It recommended that the privacy
policy should only include descriptions of how the entity manages personal
information. The OAIC also repeated its general guidance that privacy policies
should be easy to understand (for example, by avoiding jargon and legalistic
terms).
Organisations should consider reviewing their privacy
policies against these recommendations.
OAIC takes no further action in relation to Clearview AI
and TikTok
In 2021, the OAIC found Clearview AI had breached
Australians’ privacy through the collection of images without consent, and
ordered the company to cease collecting the images and delete images on record
within 90 days. Clearview initially appealed the decision to the Administrative
Appeals Tribunal but discontinued its appeal in August 2023. The OAIC recently
announced that further action against Clearview AI was not warranted.
Further, despite raising concerns about TikTok’s use of
pixel technology, the Australian Privacy Commissioner has declined to investigate, citing deficiencies with
existing privacy laws. Given the recent privacy reform Bill does not include
amendments to the definition of personal information, it is possible that
further reforms are required to investigate the practice.
ACMA releases updated guidance on consent to marketing
under the Spam Act
In response to a surge of regulatory activity under the Spam
Act 2003 (Cth) (Spam Act) and the Do Not Call Register Act 2006 (Cth),
in July 2024, ACMA released its Statement
of Expectations (Statement). This ‘outcome-focused guide’
establishes ACMA’s expectations of how businesses should obtain consumer
consent when conducting telemarketing calls and e-marketing (via email, SMS and
instant messages).
The key takeaways from the Statement are:
- Express
consent is preferred: while the Acts permit inferred consent in
certain circumstances, ACMA recommends obtaining express consent, as it is
clear and unambiguous.
- Terms
and conditions for express consent: ACMA recommends express consent based
on clear terms and conditions that are readily accessible (i.e. not
hidden in fine print, lengthy privacy policies or behind multiple
‘click-throughs’). Terms and conditions should address what the consent is
for (including for the types of products and marketing channels), who will
use the consent (including affiliates and partners), how long the consent
will be relied on, and how consent can be withdrawn.
- Double
opt-in: ACMA recommends taking a “double opt-in” approach, such as
email confirmation of consent (e.g. an email providing a click-through
link).
- Do
not rely on third parties: if working with third parties, businesses
cannot assume that those parties will keep or obtain records of consent and
marketing. Businesses must have their own robust and comprehensive
processes in place to ensure that consent records are reliably kept and maintained.
- Obtain
records of consent: businesses are required to retain records of
consent and marketing. ACMA recommends that records include (but are not
limited to), “the method used to obtain consent, the terms that applied
and the date and time it was obtained”.
- Requirements
for valid consent: ACMA acknowledges the OAIC’s requirements for valid
consent (informed, voluntary, current, specific, given by a person with
capacity) under the Privacy Act, and says that those requirements “provide
a framework to apply to consent gathering practices to ensure that they
are consumer friendly”. However, ACMA falls short of expressly adopting
those requirements.
The Statement also reinforces the existing legal
requirements in relation to unsubscribe and opt-out options, including the fact
that individuals should not be required to log in to a service to unsubscribe.
The release of this Statement indicates that practices
regarding consent are on ACMA’s radar, and organisations should consider
reviewing their practices against ACMA’s expectations in the Statement.
The consumer data right regime is expanded to include
action initiation
The Consumer Data Right (CDR) regime is Australia’s
data portability scheme. Introduced in 2019, the scheme has been rolled out
sector by sector – so far, to banking and energy sectors – to allow consumers
to direct their service providers (e.g. their bank) to provide their data directly
to recipients accredited under the scheme (e.g. a budgeting app).
In a significant update to the scheme, legislation
(originally introduced to Parliament in 2022) has recently passed which permits
“action initiation”. Action initiation allows an accredited data recipient to
take actions on the consumer’s behalf. For example, an accredited recipient
(with the consumer’s consent) may be able to make payments, open and close
accounts, switch providers and update details on the consumer’s behalf.
Action initiation will only be available for types of
actions designated by the Minister, in relation to service providers designated
as “action services providers” by the Minister.
Treasury also released, for public consultation, exposure
draft amendments to the Consumer Data Right Rules which include (among other
changes) proposals to simplify:
- rules
relating to consents, including permitting bundled consents; and
- arrangements
for businesses to nominate representatives.
Submissions have now closed.
Online safety updates
Straight from recent headlines, the Government announced
that it is consulting on a proposal to impose social media age restrictions –
we examined the proposals in this recent article.
Further, Australia’s eSafety Commissioner has recently
issued new industry standards, commenced development of the next phase of
industry codes, and issued a number of notices to digital platforms to report
and provide information about measures being taken:
- Relevant
electronic service providers (e.g. online gaming and messaging services)
and designated internet service providers (e.g. apps, websites, storage
services, and some services that deploy or distribute generative AI
models) will be required to comply with new industry standards from 22 December 2024. The new
standards require providers to adopt compliance measures for specific
categories of harmful online content, such as implementing systems to
detect and remove that content. The standards also seek to address new
harms and risks associated with the development of generative AI.
- In
early July 2024, the eSafety Commissioner issued notices to key industry
bodies and associations to develop “Phase 2” industry codes for
‘age-inappropriate’ online material. The new codes must address prevention
and protection of Australian children from access or exposure to these
materials and provide Australian end-users with the ability to limit
access and exposure. Draft codes are to be developed by industry by the
end of the year.
- Later
in July, the eSafety Commissioner issued
notices to companies including Apple, Google, Meta and Microsoft,
requiring them to periodically report (for the next two years) on measures
implemented to address online child abuse material. The first reports are
due in February 2025.
- The
eSafety Commissioner has also recently exercised its expanded transparency
powers (under the updated Basic Online Safety Expectations Determination)
and requested information from digital platforms about how many
Australian children are on their platforms and what age assurance measures
they have in place to enforce their platforms’ age limits.
APRA releases CPG 230 to help entities prepare for 1 July
2025
APRA’s Prudential Standard CPS 230 Operational Risk
Management (CPS 230) sits within the risk-management pillar of APRA’s
framework. Operational risk management is essential to ensure the resilience of
an entity and its ability to maintain critical operations through disruptions.
Our earlier
edition of Digital Bytes canvassed CPS 230’s requirements, which take effect on 1
July 2025. CPS 230 sets baseline expectations for all APRA-regulated entities.
Each regulated entity has operational risks; however, APRA expects Significant
Financial Institutions (SFIs) to have stronger practices commensurate with
the size and complexity of their operations.
In July 2024, APRA released the final version of Prudential Practice Guide CPG 230 along with an
accompanying statement setting out responses to submissions made through earlier
consultation.
Of particular note, APRA has:
- agreed
to allow non-SFIs an additional 12 months before they must comply with
business continuity and scenario analysis requirements under CPS 230;
- set
out details of its supervision programme for 2025-2028, with details of
the prudential reviews it intends to undertake; and
- published
a ‘day one compliance checklist’ to assist entities to prepare for 1 July
2025.
APRA-regulated entities should also be aware that APRA has
published a letter to its regulated entities providing additional insights on common cyber resilience weaknesses.
As reported in our recent Above Board publication, the eight observations in
APRA’s letter relate to security in configuration management, privileged access
management and security testing. These include “inadequate management and
oversight of security test findings”; APRA’s guidance is that test results
should be reported to the appropriate governing body or individual, with
associated follow-up actions formally tracked. Testing, like threat detection,
only works if it is followed through.
What do we know is coming next?
Some of the updates we can expect in the coming months
include:
- the
Government is expected to introduce legislation to Parliament soon,
mandating that organisations with a turnover of $3 million or more, and
government entities, must report ransomware payments. Indications are that
while the Government’s use of reported information will be subject to
limited-use or ‘safe harbour’ protections, the full immunity from legal
action sought by businesses will not be included;
- the
Senate Select Committee on Adopting Artificial Intelligence has had its
reporting deadline extended from 19 September 2024 to 26 November 2024;
- the
OAIC has indicated that it will soon release updated privacy guidance for
the not-for-profit sector;
- as
part of the ‘Commonwealth Cyber Uplift Plan’, a new cybersecurity industry
advisory board will be established to advise Government; and
- the
final report from the latest three-yearly review of Australia’s credit reporting
framework (including Part IIIA of the Privacy Act) was due to be
delivered to relevant ministers by 1 October 2024.
Finally, if you’re currently focused on what you can do to
minimise the aged and redundant personal information you hold, a recent case in
the US on Google’s destruction of employee chat records is a timely
reminder to ensure that you also take the right steps to preserve evidence.
How we can assist
We have a large team of privacy and cyber specialists, with
substantial experience across the whole spectrum of data, privacy and cyber
compliance and incident management.
For a more detailed briefing on any of these updates, or to
discuss how we can assist your organisation to manage its risks in these
rapidly evolving areas, please
get in touch.
Big thanks to Alexandra Gauci, Bailey Britt, Dean Baker,
James Finnimore, Leonie Higgins, Caitlin Abernethy and Saara Stenberg for their
contributions to this edition of Digital Bytes.
Here is the
link:
https://jws.com.au/insights/articles/2024/digital-bytes-cyber-privacy-ai-data-update-oct2024
And thanks to
the authors for a thorough review!
David.