The federal government will give the media regulator new legislative powers in an attempt to reduce the spread of misinformation and disinformation on global technology platforms such as Twitter, YouTube and Facebook.
Communications Minister Michelle Rowland is planning to introduce laws that will give Australia’s media watchdog the ability to extract information from the world’s most powerful tech companies if they fail to meet the standards of a voluntary misinformation and disinformation code of practice.
The previous government, under then communications minister Paul Fletcher, announced plans for the same laws but did not introduce them before the 2022 federal election.
“Misinformation and disinformation poses a threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy,” Rowland said. “A new and graduated set of powers will enable the [Australian Communications and Media Authority] to monitor efforts and compel digital platforms to do more, placing Australia at the forefront in tackling harmful online misinformation and disinformation.”
Under the proposed laws, which are expected to be legislated by the end of this year, the ACMA will have the legal power to request information from tech platforms such as Meta, Google and Twitter, including data on complaints handling and how they manage the spread of harmful content.
The ACMA will also be able to register and enforce new codes or industry standards, should voluntary efforts prove inadequate. This could include measures such as stronger tools to empower users to identify and report harmful content online.
Plans to give the media regulator more power come almost two years after DIGI, the tech sector’s lobby group, introduced a voluntary code of practice on disinformation and misinformation. Under the code, misinformation is defined as false or misleading information that is likely to cause harm, while disinformation is false or misleading information that is distributed by users via spam and bot accounts.
The voluntary code was established at the request of the federal government following the release of an inquiry into the market power of digital platforms, and was signed by tech companies including Google, Meta, Twitter, Microsoft and TikTok. In 2021, after the code was introduced, an ACMA report found 82 per cent of Australians had encountered misinformation about COVID-19 in the previous 18 months. The problem was exacerbated by the proliferation of harmful content after Russia invaded Ukraine.
DIGI reviewed its voluntary code in December 2022 and has since implemented measures to improve it, such as modifying transparency reporting requirements for smaller tech platforms and revising the definition of “harm”.
Sunita Bose, managing director of DIGI, welcomed the government’s plans.
“DIGI is committed to driving improvements in the management of mis- and disinformation in Australia, demonstrated through our track record of work with signatory companies to develop and strengthen the industry code,” Bose said.
The plan to give the media regulator more power over tech platforms is just one of several initiatives that Rowland has under way. She is also reviewing Australia’s Broadcasting Services Act, media diversity laws, and the anti-siphoning scheme – which determines which major cultural and sporting events should be available to the public on free-to-air television.
Attorney-General Mark Dreyfus is also hosting a roundtable next month with media organisations and other stakeholders to discuss press freedom reform.
“There is agreement across the parliament and the community that improved protections for press freedom are needed,” Dreyfus said on Wednesday. “The Albanese government intends to progress legislative reform as a priority.”
Australia is not the only country attempting to crack down on the spread of disinformation and misinformation. The British government is on the verge of introducing laws that could make CEOs such as Meta’s Mark Zuckerberg criminally liable for harmful content consumed by children on social media.
Under the proposed bill, tech companies will be required to “remove illegal content” and to prevent children from accessing harmful and age-inappropriate content.