Ministerial Directions Powers under the SOCI Act
Department of Home Affairs
Background
The Department of Home Affairs consulted on proposed amendments to the ministerial directions powers in Part 3 of the Security of Critical Infrastructure Act 2018 (SOCI Act). The Department argued that the current directions power is difficult to use in practice: it requires a formal Adverse Security Assessment from ASIO and a test demonstrating that no other regulatory system could address the risk. The Government outlined five measures to make the power easier to invoke and more flexible, including a new ability to issue directions across entire sectors where a specific vendor, product, or technology poses a systemic national security risk.
Our submission
Good Ancestors supported the proposed reforms and focused on two measures with particular relevance to AI.
On Measure 1 (reforming the s32 directions power), we argued that AI-related national security risks may emerge faster than traditional threats: a compromised AI model embedded across multiple critical infrastructure sectors could cause cascading harm within hours. The proposed reforms would allow the Government to respond at a speed closer to that at which AI risks can materialise.
On Measure 3 (high-risk vendor restrictions), we argued that the ability to issue directions across an entire asset class matches how AI is increasingly embedded throughout society: a single general-purpose AI model could be integrated into energy, water, communications, and transport systems simultaneously. The AI Legislation Stress Test, produced by Good Ancestors with input from 64 experts, found that 78–93% of experts rated current government measures for mitigating AI threats as inadequate.
For the Minister to exercise this power against AI risks, the Government needs information it does not currently have. The submission makes two recommendations.
Recommendations
1. Require transparency from frontier AI developers. Neither the Minister nor critical infrastructure entities can assess AI risks without information about model capabilities, vulnerabilities, and safety testing, and voluntary disclosure has proven unreliable. The submission argues that critical infrastructure entities should actively seek transparency documentation from AI vendors, and that the absence of such documentation should itself be treated as a risk factor under the SOCI Act.
2. Establish an AI incident monitoring and reporting mechanism. Australia has no system to track AI failures, security incidents, or near-misses in critical infrastructure. Comparable mechanisms exist for aircraft (ATSB), medical devices (TGA), and consumer goods (ACCC). The EU and California have established or are establishing mandatory AI incident reporting. An Australian mechanism could start with voluntary reporting and scale to mandatory reporting over time.