
Australia’s new approach to regulating AI: calculated risk?

Australia
15.06.23
Written by
Corrs Chambers Westgarth, Australia's leading independent law firm.
Australian companies may soon be required to comply with a regulatory framework for the use of Artificial Intelligence, which is to be developed following a public consultation run by the Department of Industry, Science and Resources.  

The Safe and Responsible AI in Australia discussion paper, announced by the Australian Minister for Industry and Science last week, poses a range of consultation questions concerning the direction and scope of Australia’s approach to regulating the rapidly developing technology. The paper has a particular focus on adoption of a ‘risk-based framework’ favoured in other advanced economies, including the European Union. 

Organisations now have an opportunity to make submissions on the direction of Australia’s regulatory approach, with public consultation closing on 26 July 2023. As this is an early-stage consultation, we anticipate further rounds of public consultation will follow, although no timeline has been provided for the finalisation of legislation. 

Regulatory models for AI

As regulators in jurisdictions around the world race to formulate a response to increasingly powerful AI platforms like OpenAI’s ChatGPT, a number of different approaches have taken shape. 

One model focuses on technological neutrality, as proposed in a white paper currently open for consultation in the United Kingdom entitled AI regulation: a pro-innovation approach. Under this proposal, no AI-specific laws would be developed. Instead, regulators are advised to consider five principles when applying existing regulatory frameworks to the use of AI. Also proposed are relaxed-regulation ‘sandboxes’, intended to avoid stifling development in the AI field. 

A contrasting and more prescriptive approach is that taken by the People’s Republic of China, which is developing regulations for specific use cases of AI. It has laws which govern how companies develop ‘deep synthesis technology’, used to generate deep fakes, and is currently consulting the public on draft rules to manage how companies develop generative AI products. 

The risk-based model, which is the focus of the Australian discussion paper, can be seen as a ‘Goldilocks’ model, striking a balance between the two other approaches, and is the approach to AI regulation endorsed by members of the G7 following its 49th summit in May this year. The European Parliament is also set to vote this month on a risk-based regulatory model that would introduce separate regulatory requirements for minimal, limited, high and unacceptable risks. Under the EU’s AI Act, AI systems posing a minimal risk would be permitted with no special mandatory obligations, while AI posing an unacceptable risk would be banned. The Act is likely to pass and be adopted by the end of 2023. 

While no commitments have been made regarding Australia’s approach to regulating AI, we consider it likely that a risk-based model will be adopted, given the international climate and the number of consultation questions devoted to this model in the discussion paper. However, as the regulation of AI remains uncharted territory worldwide (even in the EU, which typically leads the world in technology regulation), issues may arise with the risk-based approach as the technology evolves. 

One issue already emerging is whether certain aspects of AI require specific rules that cannot be dealt with under a universal framework. This was seen in the last-minute drafting of the EU’s AI Act, which inserted an obligation on generative AI platforms (platforms that generate text, images, video and other media) to disclose where models have been trained on copyright works. This would place a significant burden on language models like ChatGPT, which train on publicly available texts in which copyright may subsist. China is also specifically addressing generative AI, with a draft law now open for public comment that would impose requirements on the content generative AI models train on.

Australia’s discussion paper is apparently alive to the issue of technology-specific regulation, posing the consultation question of how a risk-based framework would apply to foundational models for generative AI (large language models and multi-modal foundation models). This is an important issue for many organisations, given that many AI platforms share the same underlying foundational model. In addressing it, lawmakers should be mindful of the risk of duplication and potential inconsistency in any compliance measures. 

Discussion paper proposals 

The discussion paper provides a ‘possible draft’ risk management framework, modelled on the EU’s approach. This would see the development of obligations attaching to platforms categorised as: 

  • low risk, with limited, reversible or brief impacts; 
  • medium risk, with ongoing and difficult-to-reverse impacts; or 
  • high risk, with systemic, irreversible or perpetual impacts. 


Though the specific obligations to be imposed as part of this framework are subject to consultation, the Australian discussion paper does provide the following draft elements, with varying standards of compliance for each risk level. We have set these out below alongside our comments on issues that organisations should consider.

The paper also poses more open-ended consultation questions about the general direction of Australia’s AI regulation, including whether sector-specific regulation should be considered, how regulation should apply to foundational models like ChatGPT and whether some AI implementations should be banned. This last question concerns a key difference between the Australian discussion paper’s draft framework and the model proposed in the EU, which imposes a complete ban on certain AI implementations which pose an ‘unacceptable risk’, such as government-sponsored social scoring. 

Another open question is the extent to which AI-related proposals in other discussion and policy papers, namely the recent Privacy Act Review Report and the Australian Human Rights Commission’s Human Rights and Technology Final Report, will be aligned with Australia’s eventual AI law. For example, the Privacy Act Review Report proposes that privacy impact assessments be undertaken for activities with high privacy risks, including automated decision-making. 

What's next?

AI is developing rapidly, as are attempts by regulators in jurisdictions around the world to grapple with and address its risks. 

The Safe and Responsible AI in Australia discussion paper represents Australia’s first step towards defining its own approach to regulating AI. Regardless of whether Australia adopts a risk-based framework or chooses another approach, the technical and regulatory burden on entities implementing AI will likely be significant. 


Authors
James North
Corrs Chambers Westgarth
Phoebe Wynn-Pope
Head of Responsible Business and ESG - Australia
Corrs Chambers Westgarth
