The Australian Human Rights Commission (AHRC) has recommended a new national human rights law to reduce potential discrimination caused by artificial intelligence (AI).
In a report titled ‘Rights for a Digital Age,’ the AHRC warned about the potential for AI algorithms and decision-making processes to lead to bias in public decisions.
“The Commission has long supported the introduction of a federal Human Rights Act as the best way to anchor the promotion and protection of human rights in Australia, and this position paper offers a viable and actionable set of proposals to achieve this,” says AHRC President Rosalind Croucher in the report.
This news comes at a time when the Australian government continues to advocate for a digital identity plan, which includes using facial recognition technology and AI-driven systems — although there are few regulations governing such technologies. Biometrics and facial recognition are not explicitly mentioned in the 380-page report.
The risks posed by AI include the association of specific racial characteristics with crime risk, according to the report.
“In particular, AI decision-making relies on input data, which, if flawed or unrepresentative, may affect algorithmic ‘learning’ processes,” states the report. “AI learning relies heavily on correlation and can manifest discriminatory outcomes that are not necessarily obvious to human users.”
One example is the rise of policing based on algorithms that associate crime risk with certain postcode areas – often areas with high minority populations, according to the report. The commission says this illustrates how the use of AI technology in decision-making could lead to systemic bias across government departments and agencies.
It has recommended introducing a new Australian Human Rights Act to enshrine fairness principles and rights considerations for all public decision-making processes.
“The purpose of such an Act is to change the culture of decision-making and embed transparent, human rights-based decisions as part of public culture,” says Croucher. “The outcome needs to be that laws, policies and decisions are made through a human rights lens, and it is the upstream aspect that is so crucial to change.”
The report also cites the potential costs of failing to consider human rights early in decision-making. It notes the $1.8 billion settlement the government paid out due to its ‘robodebt’ program, an automated debt assessment and recovery program employed by Services Australia from 2015 to 2019.
Its publication follows the Human Rights and Technology Report from 2021, which made similar recommendations but has yet to receive a government response.
The AHRC report also calls for the commission to gain new powers to conciliate human rights complaints, as was envisioned when it was established in 1986. Under the proposal, courts would also be required to interpret legislation in line with the Human Rights Act where possible.