This announcement appeared last week.
Timely tech sector code on disinformation
By Sunita Bose
February 22, 2021 — 12.00am
In December 2019, when the federal government asked the digital industry to develop a code of practice on disinformation, the world was a very different place.
But when the Digital Industry Group (DIGI) began developing the Australian Code of Practice on Misinformation and Disinformation that we’re launching today, it was the start of Australia’s first wave of COVID-19. The unprecedented challenges of these uncertain times remained front of mind.
The pandemic has stress-tested this code in a way that we may not have achieved outside of a time of crisis. This past year, the digital industry ramped up their efforts to combat misinformation and disinformation, adapting policies, directing people to official information, and initiating new collaborations with researchers.
This disinformation code has been adopted by Facebook, Google, Microsoft, TikTok, Twitter and Redbubble. It provides a consistent framework and transparency for such work in Australia, not just in relation to coronavirus but for any issue where harmful misinformation or disinformation may proliferate online.
The more DIGI engaged with the research, in order to ensure the code was evidence-based and reflected community expectations, the more it became apparent that the words ‘misinformation’ and ‘disinformation’ mean very different things to different people.
That’s why we sought advice from our academic and civil society partners at the University of Technology Sydney’s Centre for Media Transition and at First Draft, a global organisation that specialises in helping societies overcome false and misleading information.
First Draft’s Claire Wardle and Hossein Derakhshan have written: “Politicians have forever made unrealistic promises during election campaigns. Corporations have always nudged people away from thinking about issues in particular ways. And the media has long disseminated misleading stories for their shock value. However, the complexity and scale of information pollution in our digitally connected world presents an unprecedented challenge.”
Put another way, people misleading others – or people being misinformed – are not new problems. But companies signing this code want to reduce the risk of online misinformation causing harm to Australians. They also want to protect the privacy, freedom of expression and political communication Australians expect when they use digital services to express themselves or debate ideas.
We believe we have struck the right balance with this code.
Every company that signs it commits to safeguards to protect against online disinformation and misinformation. They commit to publishing and implementing policies on their approach, and providing a way for their users to report content that may violate those policies.
They also commit to adopting a range of scalable measures that reduce the spread and visibility of such content, such as content labelling and removal, restricting inauthentic accounts and behaviours, partnerships with fact-checking organisations, and technology to help people check the authenticity of digital content.
Every signatory commits to publicly releasing an annual transparency report about all of their efforts under the disinformation code, and the first set of these will be released in May.
The Government asked the digital industry to draw learnings from the European Union Code of Practice on Disinformation. The EU code provides an opt-in model to reflect the highly diverse nature of the digital industry, allowing different businesses to focus on the aspects of so-called “information disorder” where they are best placed to contribute. Unlike the EU code, the Australian code has an expanded focus on harmful misinformation, as well as the foundational commitments explained above that all signatories must adopt. Like the EU code, it offers some additional commitments that companies can choose if they’re relevant to their business.
For platforms that offer advertising, for example, there’s a commitment to address disinformation in paid content, which won’t be relevant to services that are funded through a subscription model. There’s a commitment to address fake bots and accounts that spread disinformation. And other commitments to help Australians know more about the source of content they see online, in both news and factual content, and political advertising.
The code encourages the digital industry to partner with universities and researchers which, together with the transparency reports, will go a long way to improving public understanding and addressing misinformation and disinformation over time.
We’ve consulted the public, the government and experts. A policy solution to these difficult issues isn’t easy. This code is a timely response to a complex issue of public concern, and provides a blueprint for best practice.
Sunita Bose is the Managing Director of the Digital Industry Group Inc. (DIGI).
Here is the link:
It is fair to say this has not exactly been universally hailed.
We have this:
Digi launches code of practice on disinformation; Reset Australia rejects as 'pointless'
Reset Australia describes the Australian Code of Practice on Disinformation and Misinformation newly promulgated by Digi – the body representing several of the digital giants – as "pointless".
The Digital Industry Group (Digi) representing Google, Facebook, Twitter, Microsoft, Redbubble, and TikTok in Australia today published its Australian Code of Practice on Disinformation and Misinformation [PDF] as its response to the Commonwealth Government's request for a voluntary code.
"Voluntary" means it is up to operators to opt in to the code. There seems to be no limit on the amount of time operators can take to respond to complaints, and the code does not require operators to delete or prevent access to material determined to be misleading, deceptive or fake.
So perhaps it is not surprising that Reset Australia – the local affiliate of the global initiative working to counter digital threats to democracy – describes the code as "pointless".
Reset Australia, which was involved in the consultation process for the code, further characterises it as "wholly inadequate."
Executive director Chris Cooper said "This limp, toothless, opt-in code of practice is both pointless and shameless. It does nothing but reinforce the arrogance of giants like Facebook.
"This code attempts to suggest it can help 'empower consumers to make better informed choices,' when the real problem is the algorithms used by Facebook and others actively promote disinformation, because that's what keeps users engaged.
"Any voluntary, opt-in code is inherently untrustworthy because we know it's not in the business interests of these platforms to take real action on misinformation.
Melissa Sweet, February 22, 2021
Marie McInerney writes:
A new code of practice to address online misinformation and disinformation, developed for and by the digital tech industry, has been slammed by critics as falling way short of the critical action needed to prevent growing harms to public health and democracy.
Leading public health experts, who have fought long battles with food, alcohol, tobacco and gambling industries, warned that voluntary codes rarely work and challenged the tech industry to invite the public health sector to play “a serious role” in efforts to combat disinformation.
The Public Health Association of Australia (PHAA) said Facebook’s move to block Australian news, which sparked a public health emergency by also shutting down multiple health agencies and services across Australia, showed how hard industry will resist regulation that it perceives as a threat.
PHAA CEO Terry Slevin welcomed the general direction of the Australian Code of Practice on Misinformation and Disinformation, including its acknowledgement that the Code must extend to “public goods such as public health”, but said, for the most part, “voluntary codes are an unenforced gossamer-thin veil of pretence, with the objective of avoiding genuine regulation”.
Seeing the way the ‘Facebook war’ is playing out, it seems unlikely that large digital platforms will sacrifice their advertising revenue or other interests lightly.
“One test of the bona fides of this process will be whether they invite independent, scientific or government public health experts to play a serious role advising on the health risks that online misinformation can cause.
“What we need is a mandatory vigorously enforced code with access to meaningful penalties for breaches, founded on scientific evidence, if we want to see any change in the prioritisation of public health over private profits.”
Slevin said misinformation, whether about health information, the marketing of unhealthy products, or unhealthy directions in politics, is a very real issue.
“This is not just about ‘fake news’ in politics – although that is certainly important. Online misinformation exists as disguised marketing strategies for many unhealthy products, including tobacco and vaping use, alcohol promotion, unhealthy food marketing – especially to children, gambling promotion, and many other goods and services that cause disease and other forms of harm,” he said.
“PHAA is working on many such policy fronts, and digital misinformation is certainly a massive problem.”
The Digital Industry Group Inc (DIGI) – an Australian peak body representing Google, Facebook, Twitter, Microsoft, Redbubble, and TikTok – developed the new voluntary code at the request of the Federal Government.
Perhaps stung by Facebook’s retaliation last week against its mandatory media bargaining code, the Government said it will be “watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms.”
Communications Minister Paul Fletcher said the Australian Communications and Media Authority (ACMA) will report to the Government by 30 June 2021 on the impact of the code and initial compliance, to “guide us on whether further action is needed”.
Reset Australia, the Australian arm of the global Reset initiative to counter digital threats to democracy, was scathing about the new code, saying it “does nothing but reinforce the arrogance of giants like Facebook” and that the Federal Government should not accept the “insouciant contempt” it shows for the Australian public.
It said the real problem with the code was that the algorithms used by Facebook and others “actively promote disinformation, because that’s what keeps users engaged”.
“This limp, toothless, opt-in code of practice is both pointless and shameless,” said Reset Australia executive director Chris Cooper, calling for an independent public regulator of the tech industry with the power to inspect and audit algorithms and to issue fines, notices, and other civil penalties.
It was a call backed by Melbourne University’s Public Health Professor Rob Moodie, who described self-regulation as “a complete smokescreen” and “window dressing”.
“Industry self-regulation virtually never works – why would it? Why would they ever develop rules that would in any way harm their bottom line?” he told Croakey, pointing to past lessons from the food, tobacco and gambling industries.
“Whether you use the metaphor of the ‘fox looking after the henhouse’ or the ‘burglars fitting the locks’, it’s the same result. As Reset Australia says, it is ‘a limp, toothless, opt-in code of practice which is both pointless and shameless’ – or shameful!” he said, urging also that the tech giants be required to pay taxes in Australia.
“That would be a great start,” he said.
Lots more here:
Seems to me we will just have to wait 6-12 months to see who is right about all this, but I rather suspect this code is the beginning of a process which may eventually make some difference rather than an end-point in itself!
Time will surely tell I guess.