AI Policy and Governance Newsletter — May 2025

Trump dismantles Biden administration AI safety initiatives, OpenAI announces the Stargate project expansion to US allies, and the Federal Court of Australia begins consultations on generative AI use in legal proceedings.

Welcome to the monthly newsletter from The Good Ancestors tracking public policy news related to AI safety. Each issue collates significant changes in AI policy around the world, lists and explains different initiatives, and offers brief high-level comments on the developments.

Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).

🧾Legislation = new law from a legislature (e.g. a parliament).

✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).

🤝Principles = a statement that does not change the legal or regulatory space but signals certain intent from a government (e.g. statement of principles).

Recommendation: Check out Good Ancestors' White Paper to read our recommendations for AI development in Australia!

Trump dismantles Biden administration AI Safety initiatives

Category: ✍Regulation, in force

Trump's term began with the revocation of Biden's 2023 Executive Order (EO) on 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI)' and the issuance of a new EO, 'Removing Barriers to American Leadership in AI'.

Signalling the administration's focus on innovation and growth over responsible AI, the new EO has fewer guardrails than Trump's previous 2019 EO, which included privacy and civil rights protections. The new EO states that AI development should be "free from ideological bias and engineered social agendas" and mandates that advisors and members of the executive office revoke any policies or regulations that deviate from Trump's agenda.

In April, memoranda M-24-10 and M-24-18, which outlined responsible government use and acquisition of AI, were replaced by M-25-21 and M-25-22. M-25-21 replaces the tiered AI risk classification system with a single 'high-impact' category and redirects Chief AI Officers to remove barriers to innovation, scale AI adoption and standardise datasets for reuse. M-25-22 seeks to optimise the acquisition of AI, standardising acquisition procedures and focusing on American-made AI.

The Biden-era EO 'Advancing United States Leadership in Artificial Intelligence Infrastructure', which focused on tight export controls on advanced US semiconductors for AI systems, was rescinded before trade negotiations with the UAE.

Comment

The reversal of AI safety initiatives endangers efforts by the Biden administration to address harms caused by AI systems, such as discrimination and disinformation. Trump's belief that regulation stifles innovation is more ideological than evidence-based. Balanced regulation can provide business confidence and increase user trust at the same time as addressing real downside risks.

The focus on "winning the race" suggests the Trump administration believes that transformative AI or Artificial General Intelligence could arrive soon and upend the current global strategic balance.

The US's relaxation of AI safety rules makes action by Australia and other middle powers more important. Australia has an opportunity to establish an AI Safety Institute and get the balance right in an AI Act to manage risks and build trust. If Australia gets it right, other middle powers will follow.

OpenAI announces extension of Stargate project to 10 US allies

Category: 🤝Principles, proposed

OpenAI partnered with the Trump administration, investment firm SoftBank and tech company Oracle to boost development of AI infrastructure in the US under the project name Stargate. It has now announced a global initiative, 'OpenAI for Countries', with the aim of spreading 'democratic AI'.

The initiative hopes to build AI infrastructure, develop custom ChatGPT systems that accommodate local language and culture, establish a national startup fund, and continuously improve AI safety and security to uphold democratic processes. In return, partners would have to agree to invest in US AI leadership, framed as an 'alternative' to authoritarian uses of AI for mass government surveillance.

Comment

The Stargate project presumably has countries like Australia in mind. AI compute—not just the data centres but the specific chips inside them that run and train AI models—could become an incredibly valuable resource if forecasts about AI growth are correct. Australia has the political and physical infrastructure to be an attractive destination for investment of this kind. Stargate could provide Australia a path to AI compute while also addressing specific issues around sovereign AI systems that respect local language and culture.

On the other hand, the fine print will be important. OpenAI realises that AI scaling laws mean that steadily increasing AI capability requires exponentially increasing compute. The US is seized by race dynamics and needs more investment to build more data centres to house more compute to "win the race". Countries signing up to Stargate must think carefully about the distribution of risks and benefits. In the worst case, they're spending public money to accelerate risky AI while benefits accrue mainly to offshore companies. Australia may be better served by building AI compute directly and making it available to our allies on our terms.
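The scaling-law point above can be made concrete with a minimal sketch. The power law and the exponent value below are illustrative assumptions for exposition, not figures from this newsletter: if model loss falls as compute^(-alpha), then each fixed improvement in capability requires a multiplicative jump in compute.

```python
# Illustrative sketch only: a stylised power-law scaling relationship,
# loss proportional to compute^(-alpha). The exponent is a hypothetical value.

def compute_multiplier(loss_ratio: float, alpha: float) -> float:
    """Factor by which compute must grow to scale loss by `loss_ratio` (< 1)."""
    return loss_ratio ** (-1.0 / alpha)

if __name__ == "__main__":
    # With an illustrative exponent alpha = 0.05, halving loss takes
    # roughly 2**20 (about a million) times more compute.
    print(f"{compute_multiplier(0.5, alpha=0.05):.3g}")
```

Under these assumptions, steady capability gains demand exponentially growing compute, which is why the race framing translates directly into demand for ever-larger data centres.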

Federal Court of Australia to begin consultations on GenAI use in legal proceedings

Category: 🤝Principles, proposed

In response to the increasing use of LLMs in legal submissions, Chief Justice D S Mortimer of the Federal Court released a statement addressing the judges' initial discussions on creating professional guidelines or Practice Notes for generative AI use.

The use of LLMs in court proceedings has led to false citations by legal practitioners, such as a Melbourne lawyer who used the legal tool Leap and cited a case 'hallucinated' by AI. While the court acknowledges that GenAI makes court proceedings more accessible and navigable for litigants without legal training, the rise of inaccurate citations and submissions by legal practitioners raises concerns about the future of the profession.

The court's AI Project Group will hold consultations from mid-June onwards and will accept submissions from legal professionals, self-represented litigants and the general public. The group will weigh accessibility, ethical practice, accuracy and efficiency in the upcoming discussions.

Submissions can be emailed to AI_Consultation@fedcourt.gov.au until the 13th of June.

Comment

AI will impact basically every part of society, so it's good to see the inclusion of the general public in these consultations.

Given that a third of Australians hold a negative sentiment towards AI, inappropriate integration of AI into legal proceedings could erode trust in the legal system. The use of AI will continue regardless, but establishing transparency in how and when AI is used will help. Accountability for legal practitioners for AI hallucinations should be a cornerstone, but how this should apply to self-represented litigants is less clear.

One way this consultation could go wrong is if participants focus too much on today's AI systems rather than possible futures. Hallucinated case law is a problem, but it is easy to imagine it being resolved as models become more capable, while future AI systems could bring novel problems. Some AI systems already match or exceed human persuasiveness, and hyper-persuasion would undermine the purpose of a court. Equally, if AI systems replace many solicitors and paralegals, the cost of litigation could fall dramatically, increasing the burden on the court system as parties who would previously have been deterred by cost choose court over alternative dispute resolution.

Subscribe to this newsletter

Get monthly updates on AI Policy and Governance from around the world.