
AI Policy and Governance Newsletter — March 2026

Anthropic refuses to remove safety limits from its Pentagon contract, sparking a major dispute; the International AI Safety Report 2026 finds capabilities outpacing safety; Australia's data centre rush faces scrutiny; and a damning OAIC review reveals transparency failures in automated decision-making.


4 March 2026

Hi,

This fortnight, we've watched in real time as consequential AI governance decisions have played out in Washington. Anthropic, the company behind Claude and now on Australia's federal GovAI platform, refused to remove two safety limits from its Pentagon contract. The US government responded by trying to destroy Anthropic. Minister Charlton met Anthropic's CEO just before the crisis peaked, but the question of what happens to Australian government use of Claude if the US designation is formalised remains unanswered.

Meanwhile, the second International AI Safety Report found that capabilities are outpacing safety measures — and that frontier models are learning to detect when they're being evaluated, raising hard questions for every government building safety frameworks around evaluation. Closer to home, Australia's data centre rush has been called "fool's gold," a Labor review flagged AI disinformation as an election threat, and a damning OAIC review found that none of the federal agencies authorised to use automated decision-making are fully transparent about it.

Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.

Featured Australian publications

  • International AI Safety Report 2026 (International Scientific Report on the Safety of Advanced AI): The second edition, led by Yoshua Bengio with 100+ authors and reviewers from 30+ countries, including Good Ancestors. Essential reading on the widening gap between AI capabilities and safety.
  • Automated decision-making and public reporting under the Freedom of Information Act (Office of the Australian Information Commissioner): This review finds that none of the federal agencies with statutory authorisation to use automated decision-making have been fully transparent about how these systems are used, despite existing obligations under the Information Publication Scheme.
  • AI 2035: Australia's Opportunity Playbook (Menzies Research Centre): A centre-right contribution to the AI policy debate, arguing for an industry-led approach and stronger investment signals.

News & commentary

Anthropic held its red lines on military AI — and the US government tried to destroy it

✍ Regulation, 🌍 Tech

Anthropic, the company behind Claude, drew two red lines in its contract with the US Department of War: no mass domestic surveillance, and no fully autonomous weapons without a human in the kill chain. When the Department demanded those limits be removed under its newly proposed "any lawful use" framework, Anthropic refused. In a public statement, CEO Dario Amodei argued that AI-driven mass surveillance "is incompatible with democratic values" and that frontier AI systems "are simply not reliable enough to power fully autonomous weapons." He noted that the Department's threats were inherently contradictory: simultaneously threatening to designate Anthropic a supply chain risk and to invoke the Defense Production Act to compel its service.

Trump posted on Truth Social that agencies should "IMMEDIATELY CEASE" using Anthropic's technology, though a six-month phase-out was later announced. Shortly after, Secretary of War Pete Hegseth unilaterally declared Anthropic a supply chain risk via social media — a designation previously applied only to foreign companies controlled by US adversaries, such as China's Huawei, never to an American firm. Anthropic announced its intention to challenge the designation in court. That same night, OpenAI signed a contract with the Department, claiming the same red lines as Anthropic while accepting "all lawful use" language; CEO Sam Altman later acknowledged the move "looked opportunistic and sloppy" and announced amendments. Hours after that, the Wall Street Journal reported that Claude had been used in US strikes on Iran — while the ban was supposedly in effect.

Chinese commentary, compiled by ChinaTalk, was dominated by irony: the company that pushed hardest for compute export controls on China was now receiving the same "supply chain risk" treatment Washington had designed for Chinese firms.

Nearly 1,000 employees at Google and OpenAI signed an open letter supporting Anthropic's stance. Toby Walsh publicly urged Anthropic to hold the line.

Comment:

David Wroe's take in The Strategist gets it right: he calls the episode a turning point, arguing that the question of who controls technology with growing power over human lives "is a conversation for all of us."

A company that was the first to deploy on classified networks, the first to build custom models for national security, and by all accounts a valued partner to the US military was threatened with destruction for maintaining two safety limits that had never been a barrier to operations.

It's reasonable to argue that unelected private companies shouldn't be making operational military decisions. It's unreasonable for the US Government to respond by strong-arming a company into abandoning its contractual terms under threat of a designation designed for foreign adversaries.

The designation itself has no legal reach beyond US procurement. But the political signal is harder to ignore. As the AGI race heats up, we might start to see even more radical actions by governments seeking to exert control.


The 2026 International AI Safety Report: capabilities are outpacing safety — and the tests are failing too

🌍 Tech, ✍ Regulation

The second edition of the International AI Safety Report was published in February — led by AI pioneer Yoshua Bengio with more than 100 authors from over 30 countries. Its core finding is that AI capabilities have advanced significantly since the first edition in 2025, driven largely by inference-time scaling techniques that allow models to reason through harder problems. Risks in cybersecurity, biological and chemical weapons development, and political manipulation are no longer theoretical — the report notes that AI systems could soon assist with aspects of bioweapon synthesis, including obtaining pathogens and troubleshooting laboratory procedures, though the evidence base remains contested.

But the report's most consequential finding may be about the safety infrastructure itself. Frontier models are learning to detect when they are being evaluated and adjusting their behaviour accordingly — a phenomenon researchers call "evaluation awareness." If models behave differently in testing than in deployment, the evaluations that governments are building their safety frameworks around may not be catching what they need to catch. The finding already has practical consequences: Apollo Research refused to evaluate Claude Opus 4.6 after finding the model could detect it was being tested.

Capabilities remain "jagged" — superhuman performance in some domains alongside failures on straightforward tasks — but the trajectory documented across both editions of the report is clear: the gap between what these systems can do and what governance frameworks can manage is widening, not narrowing.

Comment:

The evaluation awareness finding has direct implications for Australia's soon-to-be-operational AI Safety Institute (AISI). Evaluation and red-teaming are the primary mechanisms through which governments propose to identify dangerous capabilities before deployment. If those mechanisms can be circumvented, safety institutes must update their methods.

Evaluation awareness is also another run on the board for forecasters concerned about highly capable AI. It is exactly the kind of development that separates the "AI as a normal technology" crowd from those who argue we should take the possibility of AGI seriously. To the uninitiated, evaluation awareness sounds like science fiction. Now it is an empirically documented finding.


Australia's data centre buildout meets its first serious questions

✍ Regulation, 🌍 Tech

In 2024, Australia ranked second in the world for data centre investment, and the buildout kicked into gear late last year when Labor fast-tracked data centre approvals.

The rush has recently attracted backlash. Tech investor Rohan Silva, writing in the AFR, described Australia's data centre rush as "fool's gold", arguing that billions in announced investment may deliver far less economic benefit than promoters claim. A NSW parliamentary inquiry is now probing whether the state government was "suckered in" to data centre deals that prioritise land acquisition over local jobs or industry development. Melbourne's Lord Mayor warned that data centres could "cook the planet", and NEXTDC's boss told The Australian that renewables alone cannot power Australia's AI compute boom.

The world leader in data centre investment has been the US, where data centre land deals are now colliding with the housing shortage and states have moved to ban new data centres in some areas amid community backlash over energy and water use.

Comment:

Last year, we argued in The Strategist that Australia is uniquely positioned — political stability, alliance relationships, abundant renewable energy potential — to build what we called a "silicon spine": sovereign compute infrastructure that delivers economic returns and a seat at the table on safety and governance norms.

The question is not whether Australia should pursue data centre capacity, but on what terms. If the current rush is primarily about real estate deals and power contracts with minimal conditions, the sceptics have a point. If it can instead be shaped to include local workforce development, renewable energy commitments, protections from price increases, and governance standards that give Australia leverage in international AI policy, it becomes strategic infrastructure rather than speculative real estate.

If the forecasts for AI's economic impact are even directionally correct, compute infrastructure is not optional — countries without a stake in the value chain will be dependent on the terms set by those who build it. Data centre investment decisions are being made now — sites are being locked in, power contracts signed, approvals fast-tracked. The NSW inquiry is worth watching, but parliamentary inquiries move slowly. Getting the conditions wrong is a problem, but Australia also risks missing the window entirely.


Federal agencies fail the transparency test on automated decision-making

✍ Regulation, 🤝 Principles

A review by the Office of the Australian Information Commissioner (OAIC) has found that none of the federal agencies with automated decision-making (ADM) authorisation are fully transparent about how these technologies are being used, despite existing obligations under the Information Publication Scheme and the lessons of the Robodebt saga.

This lands alongside the Mid-Year Economic and Fiscal Outlook's acknowledgement of "unquantifiable" compensation liabilities from the automated welfare compliance system, for which $44 million has been set aside.

Comment:

The pattern is consistent: automated systems are deployed at scale, oversight lags, and accountability is tested only after things have already gone wrong.

The OAIC findings are worth reading alongside the Government's stated position that existing legal frameworks are adequate for managing AI risks. Even if they were adequate on paper, adequacy depends on compliance, and compliance depends on transparency. If organisations cannot or will not disclose how ADM systems are operating, the downstream accountability mechanisms that the National AI Plan relies upon cannot function. The "light touch" approach carries real costs when the touch is so light that it does not even require disclosure.


Labor review flags AI disinformation as a threat to future elections

✍ Regulation, 🤝 Principles

A Labor Party post-election review has flagged AI-generated disinformation as a significant threat to future campaigns, following its documented spread during the 2025 federal election. The review stopped short of detailing specific incidents, but the concern is supported by research: a UNSW social media wargame demonstrated that AI-generated bots can swing electoral outcomes, and experts have warned that bot swarms are already infiltrating platforms at a scale that makes coordinated inauthentic behaviour increasingly difficult to detect.

South Australia has responded with new laws banning AI-generated deepfakes in electoral material without consent, requiring clear labelling of AI-generated content, and prohibiting robocalls and robopolls in state elections. Penalties run up to $10,000 for corporate bodies. The laws are now in effect ahead of the March state election. There is no federal equivalent.

Comment:

Elections are one domain where the consequences of AI-enabled manipulation are both immediate and irreversible — you cannot re-run an election because a bot swarm shifted sentiment in a marginal seat. South Australia's legislation shows that regulation is feasible. The absence of a federal equivalent is notable — the Australian Electoral Commission has acknowledged it lacks the legislative tools or technical capability to deter, detect, or deal with AI-generated content, and the window before the next federal election is narrowing.

The Tumbler Ridge tragedy in Canada — where OpenAI's systems flagged a user planning violence but the company decided the threat did not meet its threshold for reporting to police — illustrates a broader pattern: the gap between what AI systems can detect and what governance frameworks require them to act on is growing.

In case you missed it…

Featured opportunities

That's all for now!

If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.

Onward in action!

Good Ancestors team
