AI Policy and Governance Newsletter — December 2025
Australia announces its AI Safety Institute and releases the National AI Plan, Anthropic reports AI-powered foreign espionage, and challenges to Australia's forecasted AI boom emerge around security risks and trust issues.
December Newsletter
9 December 2025
Hi,
Australia is getting an AI Safety Institute—but no new AI laws.
The Government has released its National AI Plan and announced the long-awaited Australian AI Safety Institute, fulfilling a commitment made at the 2024 Seoul AI Summit. The $30 million institute will start operating in early 2026 as the Government's "hub of AI safety expertise".
The Plan itself takes a hands-off approach, offering guidance instead of mandating guardrails. Industry groups welcomed the announcement. Critics, including members of the Government's own AI Expert Group, were less impressed. Their key criticism: the Plan lacks a coherent strategic identity for Australia in the global AI landscape.
Meanwhile, obstacles to Australia's forecasted AI productivity boom are emerging. Recent industry surveys cite security risks, trust issues, and regulatory uncertainty as major barriers to adoption. US tech companies have continued to push back against the Government on data sovereignty requirements.
Public concern is salient, as a new Australian National University survey found that 77% of Australians see AI attacks on people and businesses as a major or moderate threat—the highest-rated concern among 15 national security threats surveyed. This ranked above economic crisis, supply disruptions, and military conflict.
Internationally, frontier AI lab Anthropic reported the "first AI-orchestrated cyber espionage campaign", conducted by a Chinese state-sponsored group using its Claude Code agent.
—
Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.
—
Featured Australian publications
- Global AI Governance Law and Policy: Australia (International Association of Privacy Professionals, HCLTech) This article, part of a larger series, analyses the current state of AI governance in Australia and its historical context.
- Retrofitters, pragmatists and activists: Public interest litigation for accountable automated decision-making (Fraser and Stardust, Centre for Automated Decision-Making and Society) This Australian paper discusses the potential role of public interest litigation in promoting accountability for AI and automated decision-making.
- Agentic AI: Risks, opportunities, now and next – 60 expert interviews, executive insight, human intelligence (Omnicom Oceania for Mi3) This report evaluates expert views on the current and future impact of agentic AI on industry, and identifies gaps in current governance structures.
- From AI Sovereignty to AI Agency (Tech Design Policy Institute) The TDPi's discussion paper argues for a shared language around "AI agency" rather than "AI sovereignty". It assesses Australia's performance on relevant metrics and proposes a path forward.
Government releases:
- National AI Plan (Department of Industry, Science and Resources) The Plan sets out a framework for Australia's AI governance, anchored around three goals: "capturing the opportunity", "spreading the benefits", and "keeping Australians safe".
- AI Plan for the Australian Public Service 2025 (Digital Transformation Agency) The DTA outlines Government's plan to "accelerate the safe and responsible use of AI adoption across the public service". It builds on the APS-only GovAI platform released earlier this year.
- Technology Investment and AI: What Are Firms Telling Us? (Reserve Bank of Australia) The RBA's November bulletin discusses a survey of Australian firms, outlining how AI innovation is impacting their operations and productivity. It also compares AI adoption and trust in Australia to global trends.
—
News & commentary
Government outlines its approach to AI, including Safety Institute
✍ Regulation, 🤝 Principles
The Australian Government has delivered on its commitment from the 2024 Seoul AI Summit to establish an Australian AI Safety Institute (AISI). The institute will begin operating in early 2026 with an initial $30 million allocation, becoming the "government's hub of AI safety expertise" to identify and address risks from AI systems. According to the media release, the AISI will "work across government to support best practice regulation, advise where updates to legislation might be needed and coordinate timely and consistent action to protect Australians".
Australia's AISI will be part of the International Network of AI Safety Institutes, alongside nine others. The announcement was broadly supported by safety experts, unions, and business groups.
The Government cited the AISI as a key measure to achieve its third core goal, "keeping Australians safe", in the National AI Plan. The Plan, released shortly after, takes a "hands-off", opportunity-first approach that has been welcomed by industry groups but harshly criticised by some members of the Government-appointed AI Expert Group and voices in Parliament, including Liberal MP Ted O'Brien and Greens senator David Shoebridge.
Some questioned the Plan's reliance on voluntary measures and existing laws instead of new mandatory guardrails, while others argued that the Plan lacks strategic clarity about Australia's role in the global AI landscape.
Comment:
The AISI announcement is genuinely promising. We're glad the Government listened to experts and fulfilled its Seoul commitment. A strong AISI could put Australia on the path to global leadership in AI safety.
But $30 million over 3.5 years is a very lean budget. Given the cost of talent and computational resources, the AISI will struggle to deliver on its broad remit without substantial funding increases.
The National AI Plan shows that the Government is aware of the potential risks of AI, and of the dynamic nature of the risk landscape. However, its "guidance-over-guardrails" approach to AI regulation is inadequate for keeping pace with AI risks and incidents. It's out of step with what the public wants, what experts recommend, and what other countries have done. As we've previously pointed out, relying on Australia's existing laws ignores the reality of regulatory "gaps" and sets the scene for a dangerous game of whack-a-mole.
Challenges to Australia's forecasted "AI boom"
🌍 Tech, ✍ Regulation
Despite the productivity gains long touted by frontier AI companies, Australia faces obstacles to unlocking them. Though companies are rushing to adopt AI, deployment faces further challenges: investment in secure infrastructure, scaling difficulties, and public backlash threaten to delay or derail the forecast economic benefits. And until systems are capable enough to be trusted, the costs of checking AI output (and the costs of not checking it) could further undermine productivity gains.
Public concern is salient. A survey of more than 12,000 Australians by the Australian National University's National Security College found that 77% rated "the use of artificial intelligence to attack Australian people and businesses" as a major or moderate threat in the next decade. That was the highest level of concern among 15 potential threats, ranking above severe economic crisis (75%), disruption to critical supplies (74%) and military conflict (64%). Overall, anxiety about national security threats increased 8% between November 2024 and July 2025, with perceptions rising across the board.
Recent industry surveys have identified security risks, trust, and regulatory uncertainty as major barriers to AI adoption. An Okta poll of Australian tech and security executives found that 41% "said no single person or function was currently responsible for AI security risk in their organisation". Australian insurers' risk readiness has fallen to a four-year low, in large part due to the complex risk landscape created by emerging technologies like AI.
At the same time, US AI and cloud companies have continued to challenge Australia's sovereignty goals. Our shift from "atoms to algorithms" may increase the importance of securing our own data and computational capacity. In its recent discussion paper, the Tech Design Policy Institute reframes this notion of "AI sovereignty" as "AI agency", emphasising that middle powers build strength through agency, not scale.
Comment:
Safety, sovereignty, and innovation aren't mutually exclusive. Sensible regulation is compatible with, and probably conducive to, all three. Industry surveys rank security and trust concerns among the biggest barriers to adoption: concerns that appropriate, dynamic regulation can help alleviate.
Australia has been eager to capitalise on the data centre boom, with state governments incentivising their development, attracting billions in investments and commitments from major companies.
A key challenge is that building a data centre may deliver a short-term economic win, but whether a data centre located in Australia gives us any ongoing place in the AI value chain turns on the details of the agreement. AI already risks extracting wealth from the Australian economy. Bad data centre deals could allow AI companies to extract energy, water and other resources from Australia without adequate recompense. The details need to be right.
Frontier lab reports AI-powered foreign espionage
✍ Regulation, 🌍 Tech
US AI lab Anthropic claimed to have discovered the "first reported AI-orchestrated cyber espionage campaign": a Chinese state-sponsored group using the company's Claude Code AI agent.
There was some scepticism about the lab's reporting of the incident, but the concerns are real: AI poses a genuine cybersecurity challenge, both through deliberate misuse and through poor implementation. Anthropic CEO Dario Amodei is slated to appear before US Congress in a hearing on AI and cybersecurity on 17 December.
Nationally, the majority of Australian executives rate cybersecurity as their top priority and leading risk.
Comment:
Anthropic's report offers an important reminder of the question of liability: when these systems are used to cause harm, who is responsible?
Research indicates that cybersecurity is a major concern for industry, but these systems may also pose more serious risks to Australia's security. AI systems have the potential to strengthen intelligence gathering and analysis, and to improve the efficiency with which agencies respond to threats. However, without clear regulation and global governance, these gains in efficiency and capability may be disproportionately enjoyed by bad actors. Wise regulation would increase the ability of "good guys" to use AI and hamper the use of AI by bad actors. Democracies may face a competitive disadvantage compared to authoritarian regimes and independent actors that would implement these systems with minimal guardrails.
In case you missed it…
- Wave of model releases from AI companies: Google DeepMind released its latest iteration of Gemini and the image generation model Nano Banana, Anthropic released Claude Opus 4.5, xAI deployed Grok 4.1, OpenAI released two new iterations of GPT-5, and DeepSeek released V3.2.
- Second Key Update of International AI Safety Report: The UK's AI Security Institute has published a second report on technical approaches to managing AI risk.
- Trump to issue executive order creating national AI rule: After pausing a controversial draft in November, President Trump said he would sign an executive order this week creating a single national rule for AI that overrides state laws. State attorneys general have voiced bipartisan opposition, arguing states need the ability to protect their residents.
- Cybersecurity expert to review critical infrastructure laws: Cybersecurity expert Dr Jill Slay will lead an independent review of Australia's critical infrastructure laws.
- CSIRO restructures, cutting research teams: Up to 350 research positions at the CSIRO will be cut as the agency restructures around six focus areas, including AI.
- Statement on Superintelligence reaches 120,000 signatures: Politicians, researchers, celebrities, frontier AI company employees, and members of the public have signed a statement calling for a ban on superintelligence until it can be developed safely and with public buy-in.
- Australian workers worry about AI's effects: The ABC's reader callout has highlighted AI-driven uncertainty among workers at all career stages.
- OpenAI CEO issues internal 'code red': Sam Altman circulated an internal memo announcing that internal resources would be redirected to ensure ChatGPT maintains its edge over rival models, following the release of Google's Gemini 3.
- Senator David Pocock introduces bill to extend deepfake regulation: The independent's bill would allow stronger prosecution of those who share non-consensual deepfakes, extending existing protections for sexual content.
- RAND report outlines global options for countering rogue AI: This paper suggests three counterstrategies in the case of a globally distributed rogue AI: "high-altitude electromagnetic pulse, global Internet shutdown, and the deployment of specialised tool AI".
- EU accused of sacrificing privacy to please AI firms: The EU has drawn criticism for planning to compromise privacy restrictions for the benefit of industry, after proposing to delay key parts of its Artificial Intelligence Act and alter its data protection regulations.
Featured opportunities
- Grant: National Intelligence Postdoctoral Grants 2026 for postdoctoral early/mid-career researchers to undertake academic research related to national intelligence (Closes February 13)
- Grant: Australia-India Cyber and Critical Technology Partnership Round 5 which funds Australia-India research collaborations on AI standards, quantum technologies, cyber resilience, and internet governance, up to $200,000 per project (Closes December 19)
That's all, for now!
If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.
Subscribe to this newsletter
Get monthly updates on AI Policy and Governance from around the world.