
AI Policy and Governance Newsletter — October 2024

Australia progresses voluntary AI safety standards and mandatory guardrails, Newsom vetoes SB 1047 but signs other AI safety bills, and the UN releases practical proposals for AI governance.


14 October 2024

Welcome to the monthly newsletter from The Good Ancestors tracking public policy news related to AI safety. The newsletter collates significant changes in AI policy around the world, lists and explains different initiatives, and provides brief high-level comments on the developments.

Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).

🧾Legislation = new law from a legislature (e.g. a parliament).

✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).

🤝Principles = a statement that does not change the legal or regulatory space but signals certain intent from a government (e.g. a statement of principles).

Australia makes new voluntary AI safety standards and consults on mandatory guardrails

The Australian Government has progressed both a set of voluntary AI safety standards and a discussion paper proposing mandatory guardrails for AI developers and deployers. Each document contains 10 "guardrails" – voluntary or mandatory – which shape the behaviour of organisations involved with AI. The Voluntary AI Safety Standard builds on international standards, such as the ISO standard for AI management and the EU's AI Act. The mandatory guardrails proposal closed for public comment on 4 October.

Category: 🤝Principles, in progress

Comment: Good Ancestors appreciates the effort that has gone into the latest paper in the "Safe and Responsible AI in Australia" series. The "Mandatory Guardrails" paper substantially increases the sophistication of the Government's AI policy thinking. We have also submitted our own comments, which will be available shortly on our website.

The key shortcomings of the Paper are that (1) it fails to distinguish between general-purpose AI (GPAI) and GPAI that could pose serious risks and (2) it stretches one set of guardrails across all types of AI as well as both developers and deployers. Because the risk of systems varies so much and the ability of developers and deployers to mitigate risks is very different, applying a single set of guardrails to everything will almost certainly lead to both overregulation and underregulation. Although some refinements are required, the direction is promising.

Newsom vetoes SB 1047 but proceeds with other AI safety bills

California's Governor Gavin Newsom has vetoed the much-publicised AI safety legislation known as SB 1047. While that bill would have imposed significant requirements on major AI models, he has nevertheless signed a flurry of other new AI legislation in California. Three new acts aim to prevent the misuse of sexually explicit deepfakes: criminalising the creation and distribution of realistic sexually explicit images of a real person that cause serious emotional distress (SB 926), requiring social media platforms to implement mechanisms for users to report sexually explicit deepfakes (SB 981), and mandating that AI-generated content include a disclosure for users (SB 942). Two further acts prohibit the unauthorised use of digital replicas of individuals' voices or likenesses in contracts for personal or professional work (AB 2602) and ban the creation or distribution of digital replicas of deceased personalities without permission from their estate (AB 1836).

Category: 🧾Legislation, in force

Comment: The veto of SB 1047 is a setback for ensuring the safety of AI systems and building public trust. This is globally significant because California is home to many of the world's leading AI developers. If California is not going to act on safety, it becomes more important for other countries to act individually and multilaterally. That said, the other bills do represent steps towards lessening some of the potential negative effects of AI. Australia should take note of some of them, such as the requirement for social media platforms to provide a reporting mechanism for sexually explicit deepfake images, which would be an important complement to the legislation against sexually explicit deepfake material passed earlier this year. The protection of individuals' right to their likeness should also be considered in Australia.

The UN releases practical proposals for AI Safety

The UN has, through its High-level Advisory Body on Artificial Intelligence, released a report with seven recommendations for governing AI: (1) establishing an international scientific panel on AI; (2) regular international policy dialogue on AI; (3) an AI standards exchange; (4) an AI capacity development network to share key capabilities; (5) a global fund for AI; (6) a global AI data framework; and (7) a permanent AI Office within the UN.

Category: 🤝Principles, in progress

Comment: The United Nations' forays into AI safety are a welcome step towards increasing international cooperation on AI. The work of the Intergovernmental Panel on Climate Change (IPCC) in building a scientific consensus on the risks of climate change highlights the value that a similar panel could create for AI. A possible endpoint is a UN agency similar to ICAO that helps countries coordinate, share safety information, and balance economic output with safety and other concerns.
