Cyber Security Legislative Package

Background

On 9 October 2024, the Parliamentary Joint Committee on Intelligence and Security (PJCIS) began an inquiry into the Cyber Security Legislative Package. The package aims to implement initiatives under the Australian Cyber Security Strategy and to close legislative gaps in cyber security.

Good Ancestors submitted evidence highlighting how AI capabilities create significant implications for cyber security and critical infrastructure protection. Since the legislative package seeks to prepare Australia for unexpected and catastrophic risks, it must adequately capture emerging AI risks.

AI will transform cyber threats

AI models are on track to dramatically increase the capability of cyber attackers. The underlying asymmetry favours attackers: defenders must secure entire systems, while an attacker needs to find just one vulnerability, and AI makes searching for vulnerabilities cheap and scalable.

Evidence shows that AI excels at the tasks necessary for conducting cyber attacks:

  • Researching vulnerabilities in targets
  • Creating targeted phishing attacks
  • Generating and modifying malware
  • Developing defence evasion techniques

Recent studies reveal concerning trends:

  • While GPT-3.5 was limited to basic attacks, GPT-4 could exploit 87% of "one-day" vulnerabilities
  • In roughly one year, AI capability jumped from succeeding at almost no attacks to succeeding at 90% of attacks
  • This improvement came from fine-tuning within a single generation of AI models

Within the next term of government, we could face a world where even unsophisticated actors can deploy thousands of highly capable AI bots working autonomously to find and exploit vulnerabilities. This represents a step-change in the cyber security landscape.

Beyond cyber: AI threats to critical infrastructure

AI also presents other significant risks to critical infrastructure and national security:

  • Biosecurity risks: AI systems are approaching the capability to help actors design and release novel pathogens
  • Dual-use capabilities: AI systems developed for beneficial purposes can be repurposed for harm

These risks have two key implications for critical infrastructure:

  1. Critical infrastructure will face potentially novel and catastrophic risks, increasing the need for flexible risk management planning
  2. Critical infrastructure itself could be both a victim and a source of these risks (e.g., cloud computing infrastructure powering AI systems could be used maliciously)

Our recommendations

Package-specific recommendations

  1. Clarify the definition of "serious deficiencies" to fully capture foreseeable and harder-to-foresee risks from AI
  2. Expand the meaning of "critical data storage or processing asset" to explicitly include AI systems, models, and related infrastructure
  3. Specifically reference "AI crises" in the Explanatory Memorandum alongside current references to terrorist attacks, floods, and bushfires

General recommendations

  1. Complete an all-hazards "national risk assessment" that considers future risks from AI
  2. Establish an Australian AI Safety Institute to evaluate AI models and inform policymakers of emerging capabilities and risks
  3. Use existing risk management powers to address national security risks from AI
  4. Invite Australian AI safety scientists to participate in international dialogues, including with China

International context

Australia has already acknowledged these risks through:

  • The Australian Cyber Security Strategy, which recognises that AI will bring new kinds of risk
  • The Mandatory Guardrails Proposal, highlighting AI's potential for catastrophic harm
  • International commitments through the Hiroshima Process and Bletchley Declaration
  • Alignment with US approaches under its Executive Order on AI Safety

The time has come to move from acknowledging these risks to taking concrete action. Our cyber security laws must anticipate this changing risk landscape and provide mechanisms to prevent dangerous capabilities from becoming widely available.