
AI Policy and Governance Newsletter — September 2025

Australia faces a crossroads on AI regulation as debate over a national AI act intensifies, Trump releases comprehensive AI Action Plan, China unveils competing governance vision, GPT-5 highlights frontier risks, and UK AISI conducts landmark pre-release testing.


5 September 2025

The last two months have brought the scramble for AI governance into focus. While GPT-5 landed not with a bang but a complicated whimper, the pace of AI development has not slowed. Instead, the governance gap---the chasm between technological capability and regulatory preparedness---has become the story.

Internationally, a race to set rules is underway. The United States and China have laid out national AI strategies, each vying for global influence. The European Union has solidified its regulatory-first approach with a Code of Practice for its AI Act, while the United Nations has established bodies to guide global consensus.

In Australia, the federal government is signalling a "light touch" approach, favouring adapting existing laws over a dedicated AI Act. This has created a fracture in the policy debate, pitting a "let it rip" camp against those who warn that current legal frameworks are inadequate for the risks of general-purpose AI.

— Welcome to the AI Policy and Governance newsletter from Good Ancestors. Here we track the most significant recent developments in AI policy and safety, both at home and abroad. —

Featured Australian publications

Recent weeks have seen the release of Australia-focused reports on AI governance and strategy:

  • The Australian AI Legislation Stress Test (Good Ancestors) Answering the Government's call for a "gap analysis", this report surveyed 64 experts, who found Australia's current laws inadequate to handle national-scale AI threats. The median expert assessment was that 4 out of 5 assessed threats had a "realistic probability" of causing harm within 5 years. The top policy priority among the assessed risks was preventing AI models from giving users access to dangerous capabilities, such as assistance in creating bioweapons or launching cyberattacks.
  • Tetris for Australia's Future: Aligning Our National AI Priorities (Tech Policy Design Institute) This report outlines six interdependent AI priorities for Australia (including recommendations to build trust through policy clarity, law reform and establishing an Australian AI Safety Institute) and calls for a Ministerial AI Taskforce to develop and rapidly implement a coherent National AI Strategy.
  • AI and the next generation: A discussion paper (Future Generations Youth AI Think Tank) A paper warning that young Australians are being exposed to AI without adequate national protections and calling for urgent action in areas like transparency, literacy, and education, including the establishment of an Australian AI Safety Institute.
  • Risk Analysis for Multi-Agent AI Systems (Gradient Institute) The report warns that "a collection of safe agents does not make a safe collection of agents." It identifies unique failure modes and emergent behaviours---like groupthink and cascading communication breakdowns---that arise when multiple AI agents interact, concluding that traditional risk frameworks are ill-equipped for these new challenges.
  • Australian Responsible AI Index 2025 (National AI Centre) This national benchmark report reveals a modest average score for responsible AI maturity (43 out of 100) across Australian organisations. It highlights a significant "confidence-implementation gap", where organisations' belief in their capabilities far outstrips their actual implementation of safety and oversight practices.
  • AI regulation and productivity (KPMG) This report argues for a "Goldilocks point" in AI regulation that avoids both overreach and under-regulation. It supports timely, risk-based AI legislation for high-risk contexts, arguing that proportionate rules are key to boosting productivity and trust.

News & Commentary

Australia at a crossroads as debate over a national AI act intensifies

🧾 Legislation, ✍ Regulation

The Australian government is facing pressure over its approach to AI regulation. On 30 June, OpenAI released its "Economic Blueprint for Australia", a 10-point plan advocating tax incentives and widespread AI adoption to unlock a purported $115 billion in economic value. While some coverage welcomed it as a way to leverage Australia's renewable-energy advantage for data centres, the proposal drew sharp criticism for rent-seeking, cherry-picking best-case scenarios, socialising costs, and compromising Australia's sovereignty.

By early August, the Productivity Commission had recommended that guardrails on high-risk AI be paused and that AI-specific regulation be treated as a last resort, arguing that 'few of AI's risks are wholly new issues'. This has fed into a debate about whether Australia needs a dedicated AI Act. Key ministers favour a "lighter touch" framework that adapts existing laws, but this position has drawn opposition from many, including the Australian Human Rights Commission and former Minister for Industry Ed Husic, who have called for specific legislation. The President of the Australian Law Reform Commission has also highlighted the scale and diversity of novel risks and new legal quagmires.

Writing in The Policymaker, Dr Alex Antic argues that, because AI agents break traditional legal frameworks for liability, Australia needs central, mandatory guardrails rather than dozens of divergent sectoral standards. Reporting from InnovationAus has similarly highlighted expert opinion that existing regulations are 'inadequate' to deal with emerging AI risks. This view reflects public sentiment captured in polling from an ACCC inquiry, which found that 96% of Australians hold concerns about generative AI.

Comment:

The government's cautious stance is at odds with expert and public opinion. Relying on existing sectoral laws, which were not designed for the cross-cutting challenges of general-purpose AI, would create a fragmented landscape. With international partners moving to establish clear rules, Australia's "wait and see" approach may leave its citizens unprotected and its businesses caught in regulatory uncertainty. If Australia doesn't have laws that let it adopt international standards and best practices as they evolve, it is letting down other middle powers and failing at global norm-building.

Trump Administration releases comprehensive AI Action Plan

🤝 Principles, ✍ Regulation

In July, the Trump administration released a comprehensive 28-page plan titled "Winning the Race: America's AI Action Plan", setting out 103 policy actions under three pillars: (1) accelerating AI innovation, (2) building American AI infrastructure, and (3) leading in international AI diplomacy and security.

Beyond the rhetoric, the plan contains significant safety and security measures. It requires companies providing synthetic DNA to screen orders against pathogen watch lists and verify customer identities. The plan calls for accelerated research into AI interpretability, control systems, and adversarial robustness, while emphasising the development of comprehensive AI evaluation capabilities to assess national security risks, including potential assistance with cyberattacks or CBRN weapons development. It also recognises the value of open-weight models for innovation while noting the need to "appropriately balance dissemination with national security concerns."

The President also signed three executive orders (EOs). The "Preventing Woke AI in the Federal Government" EO directs federal agencies to procure only LLMs developed under "unbiased" AI principles. The "Accelerating Federal Permitting of Data Center Infrastructure" EO revokes Biden's "Advancing United States Leadership in Artificial Intelligence Infrastructure" EO while maintaining the focus on fast-tracking AI infrastructure. The "Promoting the Export of the American AI Technology Stack" EO aims to "extend American leadership" in AI by exporting full-stack technology packages, including hardware, models, and security measures.

The plan follows the Senate's 99-1 vote to remove AI moratorium language from budget reconciliation bill H.R. 1 ("One Big Beautiful Bill") before the bill was signed into law by Trump on 4 July.

The plan's architect, Dean W. Ball, discusses the development process and his experience inside the administration on The Cognitive Revolution podcast.

Comment:

The plan reveals a striking disconnect between rhetoric and substance. Good Ancestors CEO Greg Sadler highlighted six standout elements, particularly the plan's approach to biosecurity, AI safety science, and evaluation frameworks. AI safety commentator Zvi Mowshowitz assessed the plan as "pretty good" despite concerning rhetoric, noting that "the actual proposals are far superior to the rhetoric."

The plan's safety provisions---biosecurity screening requirements, investment in AI interpretability and control systems, and national security risk evaluations---demonstrate that safety and capability can be mutually reinforcing. However, the document's framing as an "AI race" that America "will" win, combined with the omission of existential risks from advanced AI systems, creates concerning precedents. As Mowshowitz noted, the rhetoric was "alarmingly terrible" while the substance was "insufficient but helpful."

China unveils competing AI governance vision

🤝 Principles

Days after the US release, China unveiled its own Action Plan for Global AI Governance, a 13-point vision emphasising international standards and support for developing nations. As reported by The Guardian, Beijing's plan proposes a new international body to prevent governance being dominated by a few powers.

Comment:

The duelling strategies from the world's two AI superpowers create a complex landscape for middle powers like Australia, potentially forcing difficult choices between competing governance frameworks.

EU operationalises AI Act with voluntary code of practice

🤝 Principles

On 10 July, the EU Commission published its General-Purpose AI (GPAI) Code of Practice, a voluntary tool for model providers to demonstrate compliance with the obligations set in Articles 53 and 55 of the AI Act.

The code serves as interim guidance for model providers until harmonised European standards take effect; the Act's GPAI obligations apply from August 2025, with a two-year grace period for full compliance for models already on the market. Developed by diverse stakeholders, including the European AI Office, academics, independent experts, model providers, and civil society, the code is complemented by guidelines on the scope of obligations for GPAI model providers and a template for summarising training data.

Of the code's three chapters, Transparency and Copyright apply to all GPAI model providers, while Safety and Security applies only to models with systemic risk under Article 55. The Transparency chapter sets out requirements for providers to document model information such as training data, computational resources, and energy consumption. The Copyright chapter outlines data-acquisition measures for web-crawling. The Safety and Security chapter would require providers to release training data and conduct risk evaluations.

Comment:

Many Australian commentators assume the EU AI Act is an "overreach" that is hampering AI adoption in the EU and causing AI providers to pull out of the market. This is not true. All the leading AI labs other than Meta have signed up to the voluntary code, which suggests the EU has got the balance about right.

The voluntary code is an important step towards effective AI safety regulation. The Commission has created a comprehensive approach for model providers to navigate compliance complexity and build public trust. Future versions of the code could benefit from clearer articulation of the responsibilities across the AI pipeline, particularly between foundational model developers and downstream deployers.

Switzerland pursues sovereign, open-source AI

🌍 Tech

ETH Zurich, EPFL, and the Swiss National Supercomputing Centre (CSCS) have developed a new open-source multilingual LLM trained on public infrastructure, set to be released in late summer 2025.

Multilingual by design, the LLM is trained on texts in over 1,500 languages, with 60% English and 40% non-English content. It will be available in two sizes, 8 billion and 70 billion parameters, with the larger model comparable in size to Meta's Llama 3. The emphasis is on transparency, with open access to weights, code, and training data to promote inclusive and collaborative AI innovation. The LLM complies with Swiss data protection law, Swiss copyright law, and the transparency obligations of the EU AI Act. Project leaders have also demonstrated that sourcing data ethically, by respecting web-crawling opt-outs, did not compromise model performance.

The project builds on Switzerland's investment in sovereign AI infrastructure: the model was trained on the Alps supercomputer at CSCS, which is powered by 10,000 NVIDIA Grace Hopper Superchips and runs entirely on carbon-neutral energy.

Comment:

The Australian Academy of Technological Sciences and Engineering has just released Made in Australia: Our AI opportunity, which calls for a similar approach to sovereign AI. NVIDIA argues that small language models could be the future. The Swiss initiative shows that this is a possible path for Australia. The details matter, including questions like whether and how sovereign models can be protected from being "distilled" by for-profits. But this is a promising direction.

California advances comprehensive AI safety legislation

🧾 Legislation

Amendments to Senator Wiener's SB 53 require large-scale AI developers to adopt and follow a safety policy, verified by independent audit, with civil penalties for developers who fail to comply. Critical incidents must be reported to the Attorney General within 15 days. The bill represents one of the most comprehensive state-level attempts to regulate frontier AI development in the United States.

Comment:

California's approach provides a valuable test case for balancing innovation with safety requirements. The emphasis on independent audits and mandatory incident reporting could become a template for other jurisdictions, including Australia.

GPT-5 launch highlights frontier risks and gaps in Australian oversight

🌍 Tech

OpenAI's recent releases of GPT-5 and ChatGPT Agent have provided a concrete look at the risks posed by increasingly capable AI. The company's own safety evaluations revealed that GPT-5 could significantly assist non-experts in dangerous tasks like bioweapon design and cyberattacks. Its sister model, ChatGPT Agent, was assessed by OpenAI as having a "substantial potential" to help users cause catastrophic harm. This prompted the Australian branch of the AI safety group PauseAI to refer OpenAI to the Australian Federal Police, arguing the tool may breach Australia's biological weapons laws.

Comment:

Criminal law is a poor fit for managing risks from frontier AI, which call for proactive safety regulation and technical evaluation. Threatening people with life behind bars is not a path towards a positive and transparent safety culture.

Australia cannot afford to be a passive importer of powerful technologies without having its own sovereign capability to assess the risks. Relying on foreign companies' self-assessments is insufficient. This episode makes the case for an Australian AI Safety Institute---a body with the technical expertise to independently vet frontier models before they are deployed locally---more urgent than ever.

UK AI Security Institute makes frontier model safer pre-release in landmark collaboration

✍ Regulation, 🌍 Tech

In a landmark moment for AI safety governance, OpenAI revealed that the UK AI Security Institute (UK AISI) conducted pre-release testing of its new ChatGPT Agent and identified critical vulnerabilities. According to OpenAI's system card, the UK AISI "identified a total of 7 universal attacks," all of which were patched by OpenAI before public release. This successful intervention by a government-backed body represents a tangible victory for proactive safety oversight.

Comment:

This is a powerful proof-of-concept for the role of state-backed AI Safety Institutes. It demonstrates that independent, government-resourced bodies can provide significant value by identifying risks that developers may miss and ensuring they are fixed before deployment. This provides a strong, practical model for what a similar Australian institute could achieve.

In case you missed it...

  • Meta lobbies for Australian data access In its submission to the Productivity Commission's "Harnessing data and digital technologies" inquiry, Meta argues that access to personal data is "vital" for understanding the Australian landscape.
  • Texas enacts AI consumer protection Texas enacted the Texas Responsible AI Governance Act (TRAIGA, HB 149), which comes into effect on 1 January 2026, making it the second consumer protection law of its kind.
  • Pennsylvania criminalises malicious deepfakes Pennsylvania enacted SB 649, classifying AI deepfakes as digital forgeries with criminal misdemeanour penalties if impersonation is non-consensual.
  • OAIC releases privacy assessment tool The Office of the Australian Information Commissioner (OAIC) released a privacy assessment tool for organisations and government agencies.
  • UN establishes AI governance bodies The UN General Assembly created an Independent Scientific Panel on AI to provide expert advice and a Global Dialogue on AI Governance to foster international cooperation.
  • AI companion safety concerns escalate The parents of a 16-year-old boy are suing OpenAI for wrongful death, alleging ChatGPT provided their son with instructions and encouragement for suicide. This was followed by Musk announcing "Baby Grok", a "kid-friendly" chatbot, amid controversies over Grok's companions feature and content moderation, including companions remaining accessible in "kids' mode".
  • Australia's data centre ambitions Atlassian's Scott Farquhar urged government support for rapid data centre buildout powered by renewables to establish Australia as a regional AI infrastructure hub.

That's all, for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.

Onward in action!

The Good Ancestors team

Subscribe to this newsletter

Get monthly updates on AI Policy and Governance from around the world.