
AI Policy and Governance Newsletter — October/November 2025

Australian public concern about AI grows, with 65% believing it creates more problems than it solves; the eSafety Commissioner targets AI chatbots with mandatory safety codes; the superintelligence debate goes mainstream; and Treasury takes a light-touch approach to AI regulation.


6 November 2025

New polling reveals that 65% of Australians now believe AI creates more problems than it solves, up 8 percentage points since 2023. 64% fear losing societal control to AI, while the Governance Institute found that Australians rank AI as the second-hardest future issue to navigate ethically. In Parliament, MPs from major parties urged stronger measures, with the Nationals explicitly calling for "an AI act and safety institute," warning that "government needs to be ahead of the game, not behind it." Internationally, over 86,000 people—including scientists, Nobel laureates, and political figures—have signed the Statement on Superintelligence, calling for a prohibition on developing AI systems that could operate beyond human control.

Minister Ayres has indicated the National AI Plan, expected this year, will be "more expansive" but stated that "it is up to businesses to ensure AI is used ethically." Treasury's review of the Australian Consumer Law found existing protections adequate, though the Attorney-General rejected a Productivity Commission proposal that would have given AI companies broader copyright access.

The public debate has intensified around familiar dividing lines: Is AI an existential risk? Boom or bubble? Economic opportunity or social disruption? But these aren't opposing camps—all these concerns are valid, interconnected, and increasingly point toward the same solutions.

Welcome to the AI Policy and Governance newsletter from The Good Ancestors. Here we track the most significant recent developments in AI policy and safety, both at home and abroad.

Featured Australian publications

Recent weeks have seen the release of several key Australian reports, government publications, and polling related to AI governance:

Independent reports:

  • AI, Productivity, and Australia's Choice of Regulatory Framework (e61 Institute & UTS Human Technology Institute) This report identifies regulatory uncertainty as a key blocker to AI-driven productivity. It analyses global approaches and argues a "pragmatic, technology-neutral" framework provides more durable certainty for investment.
  • Legal Zero-Days: A Blind Spot in AI Risk Assessment (AI Policy Bulletin) This paper argues that AI models are developing capabilities to find and exploit unforeseen gaps in legal frameworks, creating "legal zero-days" that could paralyse government operations. It calls for evaluations to begin testing for these novel risks.
  • The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices (Coggins et al., ANU) This Australian academic analysis of OpenAI's key safety policy finds that the framework encourages the deployment of models with 'Medium' capabilities for 'severe harm' (defined as thousands of deaths or billions in damages) and allows the CEO to unilaterally deploy even more dangerous models.
  • Navigating artificial intelligence for youth mental health (Prevention United) This report brings together experts and young people to ask whether we have learned the lessons of social media regulation, and calls for "safety by design" to be embedded in AI to prioritise the mental health and wellbeing of young users.

Government publications:

  • Review of AI and the Australian Consumer Law (Treasury) This review concludes that the ACL, in combination with other legal frameworks, is "broadly capable" of adapting to AI. It identifies opportunities to improve clarity but does not recommend specific AI-focused legislation.
  • Our Gen AI Transition (Jobs and Skills Australia) This report analyses the impact of generative AI on Australia's workforce, proposing that it will augment many tasks rather than fully automate jobs, but will require significant adaptation and new skills development.

Public opinion polling and surveys:

News & commentary

Treasury's light touch approach as Australia's digital competitiveness falls

✍ Regulation, 🤝 Principles

Australia has fallen to 23rd in the IMD World Digital Competitiveness Ranking, down from 15th last year—its lowest ranking ever. The annual assessment attributes the decline primarily to regulatory stagnation, with Australia's ranking for "AI policies passed into law" plummeting from 8th to 34th in just one year. AI policy had been a "bright spot" in 2024 when guardrail reforms looked set to deliver certainty, but progress slowed in 2025.

This decline comes as the federal government affirms its "light touch" regulatory stance. Treasury's Review of AI and the Australian Consumer Law concluded that the ACL is "broadly capable" of handling AI-related harms, though it notes that "agentic AI may necessitate further consideration." Reporting from InnovationAus highlighted criticism from experts that the review "avoids the hard issues" of liability.

Comment:

Regulatory uncertainty, not regulation, is dragging down competitiveness. The Government reconsidered AI guardrails in order to preserve productivity gains, but the opposite has occurred: countries that moved decisively on AI governance are pulling ahead.

The Treasury review sidesteps a fundamental liability question: when an AI system autonomously causes harm, who is responsible? Australian businesses deploying AI cannot easily control "black box" systems built by overseas developers, yet may bear liability for risks they cannot manage. Meanwhile, developers use terms of service to shift accountability downstream. Effective regulation must place obligations on parties best positioned to manage specific risks—for AI systems, that means developers who control core capabilities and safety engineering.

Overall, the report demonstrates the flaw in the Government's "gap analysis" logic. A regulation-led gap analysis can conclude that each regulator is doing its job, but a risk-led gap analysis could find something entirely different: it looks honestly at AI risks and asks whether each is actually being addressed.

eSafety Commissioner targets AI chatbots with mandatory safety codes

✍ Regulation

Australia's eSafety Commissioner has registered new industry codes targeting AI companion chatbots, effective March 2026. The codes apply to app stores, gaming services, pornography websites, generative AI services, and AI chatbot platforms, with penalties of up to $49.5 million for non-compliance.

The intervention follows disturbing reports of the AI chatbot Nomi instructing an Australian man to murder his father while engaging in paedophilic role-play. The eSafety Commissioner noted concerns about children as young as 10 spending five hours daily chatting sexually with AI companions. OpenAI's Sora 2 launch was plagued by violent and racist content within hours, while AI tools have been used to create racist videos spread by far-right leaders. In September, Communications Minister Anika Wells announced plans to ban AI 'nudify' and deepfake apps.

Comment:

This demonstrates that existing regulators can impose effective AI safeguards when they have jurisdiction and teeth. The eSafety Commissioner's action, backed by penalties of up to $49.5 million, follows the same pattern as the social media ban for under-16s that takes effect in December. Australia isn't powerless to act against global AI companies, and where existing regulators have the remit, it's encouraging to see them act.

The AI Legislation Stress Test found that 78-93% of experts consider existing measures inadequate for managing AI threats. The eSafety Commissioner can address harms within its remit. But who diagnoses and addresses risks outside the remit of existing regulators?

An Australian AI Act is needed to fill these gaps. It could draw on proven approaches like California's SB 53, which places safety obligations on large-scale AI developers, or the EU AI Act's Code of Practice, which sets standards for general-purpose AI. An AI Act wouldn't displace existing regulators—it would establish primacy for regulators like eSafety within their remit while addressing systemic risks no single regulator can manage.

The world debates "red lines" as existential risk goes mainstream

🌍 Tech, 🤝 Principles

Several global events have highlighted the escalating debate on AI risk. The book 'If Anyone Builds It, Everyone Dies' was published, bringing arguments about AI-driven extinction to a wider audience. This was followed by the "Statement on Superintelligence", a global open letter from signatories including AI pioneers and public figures, calling for a prohibition on superintelligence development. At the UN General Assembly, Nobel laureates and former heads of state also issued a formal call for "clear and verifiable red lines" on AI. At a subsequent UN Security Council meeting, the Trump administration stated, "We totally reject all efforts by international bodies to assert centralized control and global governance of AI."

Comment:

The global fracture at the UN exposes Australia's uncomfortable strategic position. As a US ally that also depends on rules-based international order, we cannot simply align with Washington's rejection of multilateral governance while a growing coalition argues that advanced AI poses transnational risks requiring binding standards.

Australia has led similar multilateral efforts before. We championed the Comprehensive Nuclear Test Ban Treaty, bringing it to the UN General Assembly in 1996 when negotiations stalled, and we continue to co-chair the Friends of the CTBT process. We founded the Australia Group for chemical and biological weapons control. As a middle power with relationships across different blocs—including both the US and China—and a government with a strong parliamentary majority (that can spend political capital), Australia is well-positioned to help broker international consensus on AI red lines. But credibility requires capability. We need to get our own house in order with AI safety testing capabilities and national AI-specific laws before we can be a credible player in setting global standards.

The great Australian AI debate: Hype, harm or existential risk?

✍ Regulation, 🤝 Principles

As the global debate rages on, Australia's leaders and experts are grappling with AI's trajectory. Writing for ASPI, David Wroe warned that politics must reckon with the looming end of work under AI, arguing that political paralysis is a major risk. Despite writing a book on existential risks (including those from AI), Assistant Minister Andrew Leigh penned an op-ed arguing against "apocalyptic tones", instead presenting AI as a tool to boost "middle-skill, middle-class" jobs.

Business journalist Alan Kohler warned that AI succeeding could be worse than it failing: "What if the trillions of dollars placed on those bets turn out to be good investments? The disruption will be epic, and terrible." He argued it could trigger an "Engels Pause" – a 50-year period when British working-class wages stagnated while GDP soared. That disruption may already be happening. US Federal Reserve Chair Jerome Powell said in late October that "job creation is pretty close to zero", with companies citing AI in layoffs.

In Parliament, Labor's Jo Briskey MP argued AI's "$100 billion" potential must be "shared fairly", the Liberal Party's Tom Venning MP warned Labor is "behind the curve", and the Nationals' Alison Penfold MP called for "the introduction of an AI act and the launch of an AI safety institute", warning that "government needs to be ahead of the game, not behind it".

The debate extends beyond the future of work to whether AI poses an existential risk. The Conversation asked five local experts: two said yes and three said no (though a closer read reveals most answers were better characterised as 'possibly, but not yet'). Many Australians have joined the call for a superintelligence prohibition by signing the Statement on Superintelligence, including UNSW Professor Mary-Anne Williams, who explained her reasons for signing: "The goal of AI should be about creating powerful tools to serve humanity. This does not mean autonomous superintelligent agents that can operate beyond human control without aligning with human well-being."

Comment:

The Australian debate is trapped in false dichotomies that obscure more than they clarify. Is AI an existential risk—yes or no? Boom or bubble? Should we worry about misuse, mistakes or misalignment? Does responsibility lie with developers, deployers or users? Each question treats complex, accumulating risks as simple either/or choices. We must hope that Australia's new National AI Plan can walk and chew gum at the same time.

Australia's AI sovereignty and the battle for a national strategy

✍ Regulation, 🤝 Principles

The federal government's National AI Plan is expected by the end of 2025, with Minister Tim Ayres signalling a 'more expansive' strategy that will tackle jobs, democracy, and social fabric. However, Ayres also stated, "It is up to businesses to ensure AI is used ethically, effectively and democratically in workplaces, and responsibly in their goods and services," suggesting a continued focus on downstream liability.

In this context, OpenAI executives met with Treasurer Jim Chalmers, after funding a report detailing a $142 billion annual GDP 'opportunity' for Australia by 2030. Meanwhile, the various aspects of AI "sovereignty" are being actively debated. Maincode CEO Dave Lemphers argued for moving beyond 'Sovereign AI' to the more constructive "Australian-made AI." Startups like Sovereign Australia AI are planning to build local models with a focus on respecting copyright.

This push for local control was bolstered by Attorney-General Michelle Rowland shutting down a contentious Productivity Commission proposal that would have granted AI companies a "free pass" to mine copyrighted data. Strategic concerns have also been raised, with The Lowy Institute asking if Australia will be a "standards setter or technology taker," and an AI Policy Bulletin article noting that middle powers can gain influence by setting rules, not just building models.

Comment:

The sovereignty debate is broad and includes economic participation, infrastructure control, regulatory power, evaluation capability, and data governance. All of these matter. Australia needs a strategy that pursues economic opportunity—playing to our strengths—while protecting citizens and doing our part in international standards and global AI governance.

The National AI Plan is the chance to set out this vision. But Minister Ayres' statement that "it is up to businesses to ensure AI is used ethically" worryingly suggests that we are relinquishing sovereign regulatory power right out of the gate, and pushing the responsibility onto Australian businesses that lack the information and capability to manage risks from AI systems—systems that they don't control and cannot audit.

Attorney-General Michelle Rowland's rejection of the copyright exemption—despite pressure from tech companies and lobbyists who framed Australia's position as "out of step" with other nations—is a step in the right direction. It's a positive sign that Australia can be a "standards setter" and earn credibility to advance regulation globally.

In case you missed it...

  • More AI failures in law and government: An AFR report detailed how AI is "going wrong" in Australia, including in NDIS assessments and Fair Work Commission cases, adding to the high-profile Deloitte scandal.
  • California signs SB 53 into law: Governor Gavin Newsom signed the landmark AI bill, establishing transparency, safety incident reporting, and whistleblower protections for frontier model developers in California.
  • US Senators introduce bipartisan AI risk bill: Senators Hawley (R-MO) and Blumenthal (D-CT) introduced the Artificial Intelligence Risk Evaluation Act, which explicitly contemplates preventing the development of superintelligence. As a recent report from the Future of Life Institute found, stronger AI regulation enjoys bipartisan support.
  • UK AI Bill moves closer: The UK's promised AI Bill, which will introduce binding regulation on the most powerful AI companies, is reportedly nearing its consultation phase.
  • Historical context for the hype: The Conversation explored the parallels between the current AI investment boom and the electrification boom-and-bust cycle of the 1920s.
  • Human Rights Commission calls for neurotech safeguards: The Australian Human Rights Commission has called for urgent updates to privacy laws to protect "neural data" and a ban on using neuromarketing to manipulate political or consumer views.
  • South Australia and NSW establish AI offices: Each state has appointed its first director and begun building dedicated AI governance capacity. Both are also hiring.
  • OpenAI completes for-profit restructure: OpenAI has finalised its move to a public benefit corporation, simplifying its structure and removing the original cap on investor returns, though the non-profit board remains in control.
  • MI5's annual threat update highlights AI: MI5's Director General Ken McCallum noted that, despite the benefits of AI technology, it can enable security threats. He highlighted the unique challenges that may come with autonomous AI systems.

Featured opportunities

That's all, for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.

Onward in action!

The Good Ancestors team
