
AI Policy and Governance Newsletter — December 2025

Australia announces its AI Safety Institute and releases the National AI Plan, Anthropic reports AI-powered foreign espionage, and challenges to Australia's forecasted AI boom emerge around security risks and trust issues.


9 December 2025

Hi,

Australia is getting an AI Safety Institute—but no new AI laws.

The Government released its National AI Plan and announced the long-awaited Australian AI Safety Institute, fulfilling a commitment made at the 2024 Seoul AI Summit. The $30 million institute will start operating in early 2026 as the Government's "hub of AI safety expertise".

The Plan itself takes a hands-off approach, offering guidance instead of mandating guardrails. Industry groups welcomed the announcement. Critics, including members of the Government's own AI Expert Group, were less impressed. Their key criticism: the Plan lacks a coherent strategic identity for Australia in the global AI landscape.

Meanwhile, obstacles to Australia's forecasted AI productivity boom are emerging. Security risks, trust issues, and regulatory uncertainty are cited as major barriers to adoption in recent industry surveys. US tech companies continued to push back against the Government on data sovereignty requirements.

Public concern is salient, as a new Australian National University survey found that 77% of Australians see AI attacks on people and businesses as a major or moderate threat—the highest-rated concern among 15 national security threats surveyed. This ranked above economic crisis, supply disruptions, and military conflict.

Internationally, frontier AI lab Anthropic reported the "first AI-orchestrated cyber espionage campaign", conducted by a Chinese state-sponsored group using its Claude Code agent.

Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.

Featured Australian publications

Government releases:

  • National AI Plan (Department of Industry, Science and Resources) The Plan sets out a framework for Australia's AI governance, anchored around three goals: "capturing the opportunity", "spreading the benefits", and "keeping Australians safe".
  • AI Plan for the Australian Public Service 2025 (Digital Transformation Agency) The DTA outlines Government's plan to "accelerate the safe and responsible use of AI adoption across the public service". It builds on the APS-only GovAI platform released earlier this year.
  • Technology Investment and AI: What Are Firms Telling Us? (Reserve Bank of Australia) The RBA's November bulletin discusses a survey of Australian firms, outlining how AI innovation is impacting their operations and productivity. It also compares AI adoption and trust in Australia to global trends.

News & commentary

Government outlines its approach to AI, including Safety Institute

✍ Regulation, 🤝 Principles

The Australian Government has delivered on its commitment from the 2024 Seoul AI Summit to establish an Australian AI Safety Institute (AISI). The institute will begin operating in early 2026 with an initial $30 million allocation, becoming the "government's hub of AI safety expertise" to identify and address risks from AI systems. According to the media release, the AISI will "work across government to support best practice regulation, advise where updates to legislation might be needed and coordinate timely and consistent action to protect Australians".

Australia's AISI will be part of the International Network of AI Safety Institutes, alongside nine others. The announcement was broadly supported by safety experts, unions, and business groups.

The Government cited the AISI as a key measure to achieve its third core goal, "keeping Australians safe", in the National AI Plan. The Plan, released shortly after, takes a "hands-off", opportunity-first approach that has been welcomed by industry groups but harshly criticised by some members of the Government-appointed AI Expert Group and voices in Parliament, including Liberal MP Ted O'Brien and Greens senator David Shoebridge.

Some questioned the Plan's reliance on voluntary measures and existing laws instead of new mandatory guardrails, while others argued that the Plan lacks strategic clarity about Australia's role in the global AI landscape.

Comment:

The AISI announcement is genuinely promising. We're glad Government listened to experts and fulfilled its Seoul commitment. A strong AISI could put Australia on the path to global leadership in AI safety.

But $30 million over 3.5 years is a very lean budget. Given the cost of talent and computational resources, the AISI will struggle to deliver on its broad remit without substantial funding increases.

The National AI Plan shows that the Government is aware of the potential risks of AI, and of the dynamic nature of the risk landscape. However, the Government's "guidance-over-guardrails" approach to AI regulation is inadequate for keeping pace with AI risks and incidents. It's out of step with what the public wants, what experts recommend, and what other countries have done. As we've previously pointed out, relying on Australia's existing laws ignores the reality of regulatory "gaps" and sets the scene for a dangerous game of whack-a-mole.

Challenges to Australia's forecasted "AI boom"

🌍 Tech, ✍ Regulation

Despite the productivity gains long touted by frontier AI companies, Australia faces obstacles to unlocking those opportunities. Though companies are rushing to adopt AI, deployment faces additional challenges: investment in secure infrastructure, scaling difficulties, and public backlash threaten to delay or derail forecast economic benefits. And until the systems are capable enough to be trusted, the costs of checking AI output, and the costs of not checking it, could further undermine productivity gains.

Public concern is salient. A survey of more than 12,000 Australians by the Australian National University's National Security College found that 77% rated "the use of artificial intelligence to attack Australian people and businesses" as a major or moderate threat in the next decade. That was the highest level of concern among 15 potential threats, ranking above severe economic crisis (75%), disruption to critical supplies (74%) and military conflict (64%). Overall, anxiety about national security threats increased 8% between November 2024 and July 2025, with perceptions rising across the board.

Recent industry surveys have identified security risks, trust, and regulatory uncertainty as major barriers to AI adoption. An Okta poll of Australian tech and security executives found that 41% "said no single person or function was currently responsible for AI security risk in their organisation". Australian insurers' risk readiness has fallen to a four-year low, in large part due to the complex risk landscape created by emerging technologies like AI.

At the same time, US AI and cloud companies have continued to challenge Australia's sovereignty goals. Our shift from "atoms to algorithms" may increase the importance of securing our own data and computational capacity. The Tech Design Policy Institute reframed this notion of "AI sovereignty" as "AI agency" in a recent discussion paper, emphasising that middle powers build strength through agency, not scale.

Comment:

Safety, sovereignty, and innovation aren't mutually exclusive. Sensible regulation is compatible with, and probably conducive to, these goals. Security and trust concerns rank as major barriers to adoption in these industry surveys, and appropriate, dynamic regulation can help alleviate them.

Australia has been eager to capitalise on the data centre boom, with state governments incentivising their development, attracting billions in investments and commitments from major companies.

A key challenge is that building a data centre may deliver a short-term economic win, but whether a data centre located in Australia secures us any ongoing place in the AI value chain turns on the details of the agreement. AI already risks extracting wealth from the Australian economy. Bad data centre deals could let AI companies extract energy, water and other resources from Australia without adequate recompense. The details need to be right.

Frontier lab reports AI-powered foreign espionage

✍ Regulation, 🌍 Tech

US AI lab Anthropic claimed to have discovered the "first reported AI-orchestrated cyber espionage campaign", conducted by a Chinese state-sponsored group using the company's Claude Code AI agent.

There was some scepticism about the lab's reporting of the incident, but the concerns are real and AI poses a genuine cybersecurity challenge, both from deliberate misuse and poor implementation. Anthropic CEO Dario Amodei is slated to appear before US Congress in a hearing on AI and cybersecurity on December 17.

Nationally, the majority of Australian executives rate cybersecurity as their top priority and leading risk.

Comment:

Anthropic's report offers an important reminder of the question of liability: when these systems are used to cause harm, who is responsible?

Research indicates that cybersecurity is a major concern for industry, but these systems may also pose more serious risks to Australia's security. AI systems have the potential to strengthen intelligence gathering and analysis, and to improve the efficiency with which agencies respond to threats. However, without clear regulation and global governance, these gains in efficiency and capability may be disproportionately enjoyed by bad actors. Wise regulation would increase the ability of "good guys" to use AI and hamper the use of AI by bad actors. Democracies may face a competitive disadvantage compared to authoritarian regimes and independent actors that would implement these systems with minimal guardrails.

In case you missed it…

Featured opportunities

That's all, for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.
