AI Policy and Governance Newsletter
July 2025
Welcome to the AI Policy and Governance newsletter from The Good Ancestors, tracking public policy news related to AI safety. Each issue collates significant changes in AI policy around the world, lists and explains new initiatives, and offers brief high-level commentary on the developments.
Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).
🧾 Legislation = new law from a legislature (e.g. a parliament).
✍ Regulation = rules made by an executive arm of government (e.g. a government agency or department).
🤝 Principles = a statement that does not change the legal or regulatory landscape but signals intent from a government (e.g. a statement of principles).
🌐 Tech = advancements in AI technology with repercussions for policy and governance (e.g. new risky capabilities).
EU Commission publishes Code of Practice for General Purpose AI
Category: 🤝Principles, proposed
On July 10, the EU Commission published its General-Purpose AI (GPAI) Code of Practice, a voluntary tool for model providers to demonstrate compliance with the obligations set in Articles 53 and 55 of the AI Act.
The document serves as a guideline for model providers until the European standards come into effect in August 2025, allowing a two-year grace period for full adoption by models already on the market.
The code was developed by multiple stakeholders, including the European AI Office, academics, independent experts, model providers and civil society. It is complemented by guidelines on the scope of obligations for providers of general-purpose AI (GPAI) models and a template for the summary of training data.
Of the code's three chapters, Transparency and Copyright apply to all GPAI model providers, while Safety and Security applies only to models posing systemic risk under Article 55. The Transparency chapter sets out requirements for providers to document model information such as training data, computational resources and energy consumption. The Copyright chapter outlines data acquisition measures for web-crawling. The Safety and Security chapter would require providers to disclose training data and conduct risk evaluations.
Comment
The voluntary code is a necessary step toward effective AI safety regulation. In combination with the published guidelines and template, the Commission has offered a comprehensive approach that helps model providers navigate compliance complexity and build public trust.
The code could benefit from clearer articulation of the responsibilities across the AI pipeline, particularly between foundational model developers and downstream deployers. A clear distinction between shared versus individual obligations would support effective implementation.
In the next stage, the AI Office and AI Board will review the code, with approval to follow via an implementing act.
Switzerland enters AI race with multilingual LLM
Category: 🌐Tech
ETH Zurich, EPFL and the Swiss National Supercomputing Centre (CSCS) have developed a new open-source multilingual LLM trained on public infrastructure, set to be released late summer 2025.
Multilingual by design, the LLM is trained on texts in over 1,500 languages, with roughly 60% English and 40% non-English text. It will be available in two sizes, 8 billion and 70 billion parameters, with the larger model comparable in size to Meta's Llama 3.
The emphasis is on transparency, with open access to weights, code and training data to promote inclusive and collaborative AI innovation. The LLM complies with Swiss data protection law, Swiss copyright law and the transparency obligations under the EU AI Act. Project leaders also demonstrated that model performance was not compromised by ethical data sourcing, in particular by respecting web-crawling opt-outs.
The project builds upon Switzerland's investment in sovereign AI infrastructure: the model was trained on the Alps supercomputer at CSCS, which is powered by 10,000 NVIDIA Grace Hopper Superchips and runs entirely on carbon-neutral energy.
Comment
The release of an open-source model will shift reliance away from closed commercial models focused primarily on general-purpose use. This pro-social LLM takes a specialised approach, intended to advance AI applications in healthcare, education and climate science.
However, it remains uncertain how adherence to regulations will be monitored once the code is released and modified by others. The model has also yet to be evaluated against independent safety criteria such as the AI Safety scorecard.
Australia has two options: invest in sovereign AI capabilities and infrastructure with a focus on safety and privacy by design, or invest in risky models from foreign allies and remain dependent.
Trump Administration releases AI Action Plan
Category: 🤝Principles and ✍ Regulation, in progress
On July 23, the Trump administration released a 28-page plan titled "Winning the Race: America's AI Action Plan". It sets out 103 specific policy actions under three key pillars: 1. accelerating AI innovation; 2. building American AI infrastructure; 3. leading in international AI diplomacy and security.
In conjunction, the president signed three executive orders: "Preventing Woke AI in the Federal Government", "Accelerating Federal Permitting of Data Center Infrastructure" and "Promoting the Export of the American AI Technology Stack".
The "Woke AI" EO dictates that federal agencies only procure LLMs developed under two "unbiased" AI principles: truth-seeking and ideological neutrality. The "Data Center" EO officially revokes Biden's "Advancing United States Leadership in Artificial Intelligence Infrastructure" order while maintaining the focus on fast-tracking AI infrastructure development through financial incentives, streamlined regulatory procedures and permitted use of federal land. The "AI Export" EO aims to "extend American leadership" in AI by exporting full-stack technology packages including hardware, models and security measures.
The action follows the Senate's 99-1 vote to remove AI-moratorium language from budget reconciliation bill H.R. 1 ("One Big Beautiful Bill"), which Trump signed into law on July 4.
Comment
Six items in the plan stand out to us:
☣️ Biosecurity: The plan requires that companies providing synthetic DNA must screen all orders against a watchlist of dangerous pathogens and verify their customers' identities. Crucially, it also calls for these rules to be enforced rather than relying on voluntary checks. This is a crucial step in preventing AI from being used to aid in the creation of bioweapons.
🔬 AI Safety Science: There's a clear call to accelerate research into AI interpretability, control, and robustness. The document acknowledges that progress in these areas is essential for the safe deployment of AI in high-stakes domains like national security. It also highlights the need to protect AI systems from adversarial attacks such as data poisoning.
📊 AI Evaluations: The plan emphasises building a robust AI evaluation ecosystem. This includes developing the science of measuring AI models and assessing them for national security risks, such as their potential to aid in cyberattacks or the development of chemical, biological, radiological, or nuclear weapons. It also calls for evaluating the risks foreign AI systems could pose to critical infrastructure.
👐 Open-Weight Models: The plan recognises their value for innovation, academic research, and business adoption. While encouraging a supportive environment for open models, it notes the government must work with industry to "appropriately balance the dissemination of cutting-edge Al technologies with national security concerns".
🚨AI Incident Response: The plan calls for updating national cybersecurity incident response playbooks to specifically include considerations for AI systems. This is a crucial step in preparing for, and minimising, the impact of AI-related incidents and accidents.
🛡️Cybersecurity: Finally, there is a strong focus on bolstering the cybersecurity of critical infrastructure. The plan recognises that AI will enhance the capabilities of cyber adversaries and that the U.S. must adopt AI-enabled defensive tools to stay ahead of these emerging threats.
Overall, a global consensus is emerging on these points: the risks are real, and a path forward to address them is clear. Yet safety science is falling behind, and so is global governance. It's time for countries like Australia to act.
OpenAI's Economic Blueprint for Australia
Category: 🤝Principles, proposed
OpenAI, in collaboration with Mandala Partners, an economics and policy consulting firm, released a paper on June 30 detailing a recommended strategy for Australia to accelerate AI adoption.
The 16-page paper outlines targeted policy suggestions addressing the areas where Australia could benefit economically from AI's potential. The key focus areas are: 1. AI productivity growth; 2. integrating AI into the education sector; 3. AI for efficient government service delivery; and 4. infrastructure investment to establish Australia as the Indo-Pacific AI leader.
The 10-point AI Action plan highlights investment in AI skills training, digital literacy in schools, infrastructure development, tax incentives for businesses and public sector adoption of AI.
Comment
The report claims AI adoption will provide $115 billion in economic value by 2030, with $80 billion from productivity improvements. However, it does not address the potential economic loss from job displacement as AI replaces a large number of workers.
Page 6 of the report leans on the Tech Council of Australia's "high" estimated figures while omitting the factors those estimates depend on. As we argued in our submission to the Economic Reform Roundtable, Australia's low public trust could cost the country $70 billion annually in the AI transition.
While OpenAI promises adaptation to local regulations, the new ChatGPT Agent has already been referred to the AFP for investigation as a possible violation of Australia's Crimes (Biological Weapons) Act 1976.
Furthermore, offering tax incentives to SMEs and enterprises adopting AI when the foundational AI models are owned offshore would leave implementation costs with Australian taxpayers while economic profits are funnelled to OpenAI.
To benefit from the AI boom, Australia needs to invest in building public trust and domestic AI capability.
In case you missed it…
- Office of the Australian Information Commissioner (OAIC) released a [privacy assessment tool](https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/organisations/privacy-foundations-self-assessment-tool) for organisations and government agencies.
- Denmark, which recently assumed the Presidency of the Council of the European Union, signalled its intention to make [targeted revisions of the GDPR and ePrivacy](https://www.globalpolicywatch.com/2025/07/denmark-proposes-gdpr-and-eprivacy-directive-revision/) Directive to reduce compliance burdens and encourage competitiveness. Notable changes include exempting companies from cookie consent requirements for personal data acquired for statistical and functional purposes.
- Meta argues access to personal data is "vital" for understanding the Australian landscape in its [submission](https://engage.pc.gov.au/document/570) to the Productivity Commission's "Harnessing data and digital technologies" inquiry.
- Amendments to Senator Wiener's SB 53 shift the onus onto large-scale AI developers to follow a safety policy, verified by an independent audit, with civil penalties if developers do not adhere to the new stipulations. Critical incident reports must be made to the Attorney General within 15 days.
- Texas enacted the Texas Responsible AI Governance Act ("TRAIGA", [HB 149](https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149F.pdf#navpanes=0)), coming into effect January 1, 2026, making it the second state-level AI consumer protection law.
- Pennsylvania enacted [SB 649](https://www.palegis.us/legislation/bills/2025/sb0649), classifying AI deepfakes as digital forgeries with criminal misdemeanour penalties for non-consensual impersonation.
- Musk announced Baby Grok, a "kid-friendly" chatbot, amid controversies over Grok's new [companions feature and content moderation](https://time.com/7302790/grok-ai-chatbot-elon-musk/). Companions were accessible even in kids mode.
Subscribe to this newsletter
Get monthly updates on AI Policy and Governance from around the world.