AI Policy and Governance Newsletter
June 2025
Welcome to the monthly newsletter from The Good Ancestors tracking public policy news related to AI safety. Each issue collates significant changes in AI policy around the world, listing and explaining different initiatives and offering brief high-level comments on the developments.
Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).
🧾Legislation = new law from a legislature (e.g. a parliament).
✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).
🤝Principles = a statement that does not change the legal or regulatory space but signals certain intent from a government (e.g. a statement of principles).
🌐Tech = advancements in AI technology with repercussions for policy and governance (e.g. new risky capabilities).
Newsom's working group releases report on AI policy guidelines
Category: 🤝Principles, proposed
Following the contentious veto of SB 1047 in September 2024, Governor Newsom established the Joint California Policy Working Group on AI Frontier Models to produce evidence-based guardrails for the deployment of generative AI. The working group, co-led by Fei-Fei Li, the "godmother of AI", released its final report on 17 June, incorporating feedback from more than 60 experts.
With a "trust but verify" ethos, the key recommendations for AI regulation were transparency guidelines for developers, third-party risk assessments, whistleblower protection and adverse event reporting. Transparency is divided into 5 parts: acquisition of training data, safety practices, security practices, pre-deployment testing and downstream impacts i.e. disclosures by platforms that host the foundational models. Adverse event reporting seeks to build monitoring systems that collect information post-event from mandatory reporters (developers) and voluntary reporters (users).
Comment
The core recommendations are sensible and firmly based on evidence. However, the report doesn't address unanswered questions about liability for AI systems. The vetoed SB 1047 put the onus on large-scale developers to adhere to security testing standards and risk mitigations, with oversight from the Attorney General. The working group argues that increased transparency and third-party risk assessments provide sufficient accountability.
The New York legislature tackled the liability question in the recently passed Responsible AI Safety & Education (RAISE) Act (S 6953). Like SB 1047, RAISE places liability on frontier model developers. While New York is influential, these requirements would have more impact coming from California or the federal government. This also comes as Republicans push to pass a federal bill that would ban state-level AI regulation.
The report will influence upcoming policy decisions, including Senator Scott Wiener's new iteration of SB 1047. Wiener's SB 53 carries over elements of SB 1047, namely whistleblower protections and the establishment of CalCompute, a public cloud computing cluster for the development and deployment of safe AI.
As AI agents roll out, Australia will soon face practical questions about liability in the inevitable cases where agents go wrong.
G7 Summit Leaders release statement on AI for "prosperity"
Category: 🤝Principles, proposed
G7 leaders and other invited leaders, including PM Albanese, convened in Canada on 17 June to address AI governance among other global concerns.
The statement's key components include increasing public sector AI adoption, supporting small and medium-sized enterprises (SMEs) to develop AI, and increasing cross-border sharing of resources.
The first initiative, the Canada-hosted G7 GovAI Grand Challenge, will be a series of "Rapid Solution Labs" to accelerate AI innovation for public services. Additionally, the G7 AI Network (GAIN) will establish a network for distributing open-source solutions to members. The AI Adoption Roadmap aims to give businesses clear directives for adopting and scaling AI.
Comment
The emphasis on scaling AI adoption at the G7 summit reflects a global shift towards treating AI as a 'solve-all' for societal challenges.
The statement barely mentions AI safety or security, with the main focus being fostering innovation over governance. The summit missed the opportunity to create a strong international code of conduct, unlike previous G7 summits, which produced the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems in 2023.
Accelerating public sector AI adoption while systems still perpetuate bias and misinformation, and scalable oversight tools are still being developed, has obvious downsides. AI systems have been shown to discriminate in the recruitment process, amplify fake news stories, exhibit racial bias in surveillance systems, disproportionately penalise certain groups in criminal justice proceedings, and even blackmail users and leak secret information.
Being excited about AI opportunities is good, but we can't turn a blind eye to risks and research problems that are yet to be resolved.
Albanese signs with Amazon to expand data centre infrastructure in Australia
Category: 🤝Principles, proposed
On 14 June, Prime Minister Anthony Albanese met with AWS CEO Matt Garman to confirm a $20 billion investment deal for data centre infrastructure in Australia.
Over the next five years, Amazon pledges to develop additional data centres in Sydney and Melbourne, and to fund three new solar farms in Queensland and Victoria to meet the increased energy demand.
This is an extension of the partnership with AWS, which began in July 2024 with the Cloud innovation for National Security and Defence project.
Comment
As we argue in The Strategist, investment in data centres and compute infrastructure is a key way to secure a toe-hold in a global AI economy.
What's unclear from the announcement are the details of the deal. Did Australia secure access to a portion of the AI compute for our research sector or a future AI Safety Institute? Does Australia have a way to tax the value created in the data centres? Will Australia make money from energy sold to AWS?
Investment in AI compute is one way to secure our economy in an AI future, but we need deals where we get something in return.
Tech giants shift focus to AI superintelligence
Category: 🌐Tech
Sam Altman recently reiterated OpenAI's focus on superintelligence, stating that we are "past the event horizon" on the path to artificial agents that far surpass human intelligence. At the same time, Altman acknowledged that the "Alignment Problem", ensuring AI systems behave according to the intentions of their developers, has not been solved.
Days later, Mark Zuckerberg announced a US$14.3 billion investment for a minority stake in data-labelling company Scale AI, whose CEO, Alexandr Wang, will lead Meta's new superintelligence lab.
Comment
Through language like "past the event horizon", tech leaders are trying to paint the future as inevitable. This is not true. As Yoshua Bengio, Chair of the International AI Safety Report 2025, says:
"AI does not happen to us; choices made by people determine its future.
How AI is developed and by whom, who benefits from it, and the types of risks we expose ourselves to – the answers depend on the choices that societies and governments make today"
Policymakers are struggling to grapple with the implications of today's AI systems, and there's little evidence that they're taking the prospect of AGI or ASI seriously. With known dangerous behaviour, core research questions unanswered, and a lack of mandatory safeguards, developing superintelligence is on track to cause irreversible harm.
Australia, along with leading labs, endorsed the Hiroshima AI Process, which includes a prohibition on AI models training other models – exactly the process that labs are proposing to engage in to achieve superintelligence. Countries and companies should clearly communicate that we should not attempt to build superintelligence until core questions about safety and control are robustly answered.
In case you missed it…
- The Australian government has signalled a "light-touch" approach to AI regulation, with Treasurer Jim Chalmers backing industry-led rules to boost productivity and Industry Minister Tim Ayres stating Australia has 'no alternative' but to embrace AI. This follows a major report from the Business Council of Australia urging the government to act now or miss out, while the Productivity Commission has launched a new inquiry into AI's impact. The Opposition has also weighed in, with Sussan Ley's National Press Club speech outlining the Coalition's focus on AI adoption and security.
- Fears of AI-driven job displacement are intensifying, with reports of white-collar workers at Canva and Atlassian joining unions over job security concerns, and a team of Sydney medical receptionists being replaced by an AI system. Unions have responded by labelling government ministers 'presumptuous' on AI and pushing for rules that would allow workers to refuse to use AI systems.
- OpenAI's Stargate project finds its first global partner in the UAE, and OpenAI says Australia could become a regional 'compute hub'. This comes as Australia's role as an AI infrastructure hub grows, with NEXTDC announcing a $2 billion 'AI Factory' in Melbourne and AI infrastructure firm Firmus raising $280 million ahead of a planned IPO. This investment boom supports the strategic argument made by Good Ancestors CEO Greg Sadler in The Strategist that Australia's stability and renewable energy make it an ideal place for AI computing. In relation to the UAE deal, former OpenAI board member Helen Toner has penned a widely read critique questioning the wisdom of providing "supercomputers for autocrats".
- A major vulnerability dubbed 'EchoLeak' was discovered in Microsoft's Copilot AI, which could have allowed attackers to steal sensitive data. In response to growing threats, cybersecurity agencies from the US, UK and Australia jointly issued new best-practice guidelines for AI data security, and the UK's £2 billion commitment to deliver the AI Opportunities Action Plan includes £240 million for its AI Security Institute.
- New polling from Ipsos revealed that people in English-speaking countries, including Australia, are the most nervous about the rise of AI. This aligns with a major ACCC consumer survey which found 96% of Australians have concerns about AI, particularly regarding data privacy and a lack of transparency. The legacy of the Robodebt scandal is also reportedly dampening AI enthusiasm among public servants.
Subscribe to this newsletter
Get monthly updates on AI Policy and Governance from around the world.