Mandatory Guardrails
Background
On 5 September 2024, Minister Husic and the Department of Industry, Science and Resources launched a follow-up to the earlier "Safe and Responsible AI" consultation. The Government’s response to that first consultation had called for further work to explore a risk-based approach to regulating AI.
The Mandatory Guardrails consultation proposed a principles-based system for defining high-risk AI, and then proposed 10 mandatory guardrails that would apply to all AI models and systems and to all AI developers and deployers.
The mandatory guardrails paper represents a substantial increase in maturity from the earlier safe and responsible AI paper. It acknowledges the potential for catastrophic harm from AI, including via weaponisation. The paper also discusses a range of serious risks from frontier AI systems, including the risk of loss of control and accidents involving agentic systems.
Our submission
Our submission supported the broad direction of the consultation. The Government is right to take the significant risks of AI seriously. We support imposing meaningful obligations on the developers of frontier systems. Some of the mandatory guardrails, like the obligation to establish procedures for identifying and managing foreseeable risks, could make a real difference when applied to leading AI labs.
Our submission focused on ways the overall regulatory model could be made more practical. We argued that applying a single set of guardrails to all AI models and systems and all developers and deployers is likely to result in overregulation, underregulation, or both. We emphasised that the manner of regulation is also crucial: Australia needs to be courageous and ensure our domestic regulations shape global behaviour. Finally, our submission included a detailed attachment providing a comparative analysis of the EU AI Act and Canada’s proposed AI regulation, focused on how best to define “high-risk AI” and “general-purpose AI”.
Unfortunately, the limited consultation window (less than half that of the earlier consultation) meant that we did not have time to coordinate the views of Australians for AI Safety.