Australia has passed a series of laws in recent years that enable the government to take more control of online content. (Getty Images: DigitalVision)

When we’re scrolling online, most of us give little thought to what’s happening behind the scenes — who is making decisions about the content we can or cannot see.

Often this decision is in corporate hands: Facebook, TikTok and most major social media platforms have rules about what material they accept, but enforcement can be inconsistent and less than transparent.

In recent years, the federal government has also passed a suite of often controversial legislation allowing it more control over what’s online.

There’s the new Online Safety Act, for example, which was quickly passed in the middle of last year.

Among other powers, it requires the technology industry — which includes not just social media, but messaging services like SMS, internet service providers and even the company behind your modem — to develop new codes that will regulate “harmful online content”.

Drafted by industry groups, these codes will have a lot of say about how our technology is governed, but some are concerned they may have unintended consequences, not least because they borrow from an out-of-date classification scheme.

What are the codes?

After the Online Safety Act came into effect, the eSafety Commissioner instructed industry to develop draft codes to regulate “harmful online content”.

As determined by the eSafety Commissioner, this “harmful” material is dubbed “Class 1” or “Class 2”.

These are borrowed from the National Classification Scheme, which is better known for the ratings you see on films and computer games. More on this in a moment.

In general, you can think of Class 1 as material that would be refused classification, while Class 2 might be classified X18+ or R18+.

Ultimately, industry groups have come up with draft codes describing how companies will put protections in place against the access or distribution of this material.

eSafety Commissioner Julie Inman Grant oversees the new Online Safety Act. (ABC News: Adam Kennedy)

They vary by sector and by the size of the business. For example, a code might require a company to report offending social media content to law enforcement, maintain systems to take action against users who violate its policies, and use technology to automatically detect known child sexual exploitation material.

What industries will the codes affect?

  • social media services
  • electronic services used for messaging, including SMS, WhatsApp and email
  • designated internet services that include websites and online storage, such as Dropbox and Google Drive
  • internet search engines
  • app distribution services, such as the Apple App Store and Google Play
  • hosting services such as Amazon Web Services
  • internet carriage services like Telstra and Optus
  • manufacturers and suppliers of equipment that connects to the internet, and those who maintain and install it — think modems and smart home devices
  Source: onlinesafety.org.au

What kind of content will be affected?

For now, the draft codes deal only with what’s been dubbed Class 1A and 1B material.

According to eSafety, Class 1A might include child sexual exploitation material, as well as content that advocates terrorism or depicts extreme crime or violence.

Class 1B, meanwhile, might include material that shows “matters of crime, cruelty or violence without justification”, as well as drug-related content, including detailed instruction in proscribed drug use. (Classes 1C and 2 largely deal with online pornography.)

Clearly, there is content in these categories the community would find unacceptable.

The problem is, critics argue, that Australia’s approach to classification is confusing and often out of step with public attitudes. The National Classification Scheme was enacted in 1995.

“The classification scheme has long been criticised because it captures a whole bunch of material that is perfectly legal to create, access and distribute,” said Nicolas Suzor, who researches internet governance at the Queensland University of Technology.

And rating a movie for cinemas is one thing. Categorising content at scale online is quite another.

Consider some potential Class 1B material — instructions in matters of crime or information about prohibited drug use.

There are scenarios where we might hypothetically want such information available, Dr Suzor suggested, such as the ability to supply information about safe medical abortions to people in certain states of the US.

“These are really hard categories to apply at any sort of ‘internet scale’, because you very clearly run up into all of the grey areas,” he said.

A recent review of Australian classification regulation delivered its report in May 2020, but it’s still unclear how that review might affect the proposed industry codes designed to regulate “harmful online content”.

Will companies have to monitor my messages now?

The codes are intended to affect almost any industry that touches the internet, and there are concerns about how privacy could be affected when they are applied to personal messages, files and other content.

Some large social media platforms already use digital “fingerprinting” technology that tries to proactively detect known child sexual exploitation or pro-terror material before it’s uploaded.
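Under the hood, this kind of fingerprinting is essentially a lookup against a shared database of hashes of known material. The sketch below, in Python, illustrates the idea only: real systems use perceptual hashes such as Microsoft’s PhotoDNA, which still match after an image is resized or re-encoded, whereas the plain cryptographic hash used here matches only byte-identical files, and the fingerprint database shown is hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited material.
# In real deployments these would be perceptual hashes supplied by
# clearinghouse organisations, not SHA-256 digests of example bytes.
KNOWN_FINGERPRINTS = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(data: bytes) -> str:
    """Compute a hex digest that acts as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> bool:
    """Return True if the upload matches known material and should be blocked."""
    return fingerprint(data) in KNOWN_FINGERPRINTS

# Example: a file is checked against the database before it is published.
if screen_upload(b"example upload bytes"):
    print("Upload blocked and reported.")
else:
    print("Upload allowed.")
```

The design point worth noting is that the platform compares fingerprints rather than inspecting content directly, which is why this approach can only catch “known” material and does nothing about content that has never been fingerprinted.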

The eSafety Commissioner’s office has indicated it wants the codes to require a level of proactive monitoring, catching “harmful” content before it’s posted.

In the draft codes, however, industry groups said when it came to private file storage or communications, extending proactive detection could have a serious impact on privacy.

There’s also concern that the codes could entrench an approach to content moderation that’s only really available to the big players. Scanning tools aren’t necessarily cheap or readily available.

“Many of these proposed solutions require big tech to stay big to meet these compliance requirements,” said Samantha Floreani, a program lead with Digital Rights Watch.

A spokesperson for eSafety said it would not expect the industry codes to place the same level of commitments on smaller businesses as on larger ones.

Then there’s the issue of whether proactive detection systems are accurate, and whether there are avenues for appeal.

Gala Vanting, national programs manager at the Scarlet Alliance, said the use of this technology is of particular concern for those in the sex work industry.

“It’s very likely to over-capture content. It’s very unskilled at reading context [around] sexual content,” she said.

Another complicating factor is that a review of the Privacy Act is also taking place, which could affect the operation of these codes, for example by introducing requirements that might limit scanning.

A spokesperson for Attorney-General Mark Dreyfus said the department would produce a final report later this year recommending reforms to Australian privacy law.

What’s next?

The draft industry codes are now open for feedback from the public. Then the eSafety Commissioner’s office will assess whether it considers the codes up to scratch.

But according to some accounts, consultation has been fractious, and many civil society groups think the consultation window is unrealistically short.

There’s also some frustration that the codes are being developed ahead of the Privacy Act review and other potential changes to online regulation that are on the table, which could leave Australia with a confusing regulatory system for online content.

Then there’s the debate over whether Australia is taking the right approach to these issues at all.

The Online Safety Act itself was controversial, particularly because of the amount of discretion it put in the hands of the communications minister and the eSafety Commissioner.

“Whilst there would be some self-evident material that would not pass muster … it’s enormous power in the hands of one person who is in effect determining what are community expectations,” said Greg Barns of the Australian Lawyers Alliance.

“The broader issues of what constitutes harm then starts to merge into freedom of speech issues, but also transparency and accountability.”

Dr Suzor said that in general, he’s “totally on board” with the idea that governments want more of a say in the standards set for acceptable online content.

But in practice, he suggested there was not much clarity about what the codes were designed to do.

“The codes are agreements to do basically what the industry is already doing, at least the larger end of the industry,” he said.

“I actually don’t know what they’re meant to achieve, to be honest.”

Source – https://www.abc.net.au/news/science/2022-09-21/internet-online-safety-act-industry-codes/101456902