AI Policy and Governance Newsletter
30 August 2024
Welcome to the monthly newsletter from The Good Ancestors tracking public policy news related to AI safety. Each edition collates significant changes in AI policy around the world, listing and explaining initiatives and offering brief, high-level comments on the developments.
Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).
🧾Legislation = new law from a legislature (e.g. a parliament).
✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).
🤝Principles = a statement that does not change the law or regulations but signals a government's intent (e.g. a statement of principles).
Breaking news: California bill SB 1047, with its groundbreaking safety requirements, seems headed for Gavin Newsom's desk. Watch this space for a special explainer!
Australia bans deepfake sexual material created without consent
The Australian Senate has passed the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, making it a criminal offence to create or share sexually explicit AI-generated images of real people without their consent. This follows highly publicised incidents of abusive deepfake images being created and shared.
Category: 🧾Legislation, in force
Comment: Deepfakes have real potential for abuse, both directly through humiliation and indirectly through their use in extortion and other threats. The issue disproportionately affects women and girls, who are often the targets of 'revenge porn', though boys and men can also be humiliated in this way, with teenage boys common targets of 'sextortion'. Easy access to highly capable AI-enabled software will make these problems more common. Unfortunately, the Bill addresses only users, not the developers and deployers of such tools. Practical action requires tackling development and deployment in addition to use.
Two new bills in the United States Senate propose restrictions on using people's works and likenesses to train AI models
The two bills are separate but tackle similar issues. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would mandate that AI developers allow journalists and artists to attach a machine-readable record to their work showing its origin, and would require developers to seek permission before using works labelled in this way to train their models (a sketch of what such a record might look like appears after the comment below). The NO FAKES Act aims to prevent the unauthorised use of individuals' voices and likenesses to produce AI-generated copies of either, and would "hold individuals or companies liable" for producing or hosting an unauthorised digital replica of a person. Both bills are bipartisan but have yet to be voted on by the Senate.
Category: 🧾Legislation, in progress
Comment: Both bills tackle issues related to the unauthorised use of content in AI models. Content creators worry that their works are being used to train AI models that can then reproduce them, and similar concerns arise around a person's unique features, such as their voice and likeness. Either can put a person out of work, harm their reputation or even violate their right to dignity. While some argue that AI models, like humans, merely learn by drawing inspiration from published works, it is not obvious that machine 'learning' should enjoy the same rights as human learning. The approach in the legislation seems broadly right, but we should watch for unintended consequences: the rules shouldn't, for example, stop new technologies from being used to caricature public figures. Preserving the same rights and obligations for new tools as for old ones is a good starting point. However, as with deepfakes, if easy access to highly capable AI-enabled software changes the balance and leads to harm, we may need to reconsider.
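To make the COPIED Act's provenance mechanism concrete, here is a minimal sketch of what a machine-readable provenance record might look like. The JSON format, field names and checking logic are illustrative assumptions only (loosely inspired by content-credential standards such as C2PA); the bill itself does not prescribe a format.

```python
import json

# Hypothetical provenance record a creator might attach to their work.
# Field names are our own illustration; the COPIED Act leaves the actual
# standard to be developed, so none of this is specified in the bill.
provenance_record = {
    "work_title": "Evening Harbour",
    "creator": "Jane Example",
    "created": "2024-07-01",
    "origin": "human-authored photograph",
    "ai_training_permitted": False,  # creator withholds consent for AI training
}

def may_use_for_training(record_json: str) -> bool:
    """Return True only if the attached record explicitly permits AI training.

    Under a COPIED-Act-style regime, a developer would need permission before
    using a labelled work; absent an explicit opt-in, the work is skipped.
    """
    record = json.loads(record_json)
    return record.get("ai_training_permitted", False) is True

if __name__ == "__main__":
    serialized = json.dumps(provenance_record)
    print(may_use_for_training(serialized))  # False: this work is off limits
```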
The US Senate also considers a bill allowing financial firms to test AI solutions outside existing regulatory frameworks
Another bill introduced in the US Senate would allow financial firms to deploy AI in their businesses without complying with existing rules. Firms could apply to regulators to lift specific regulations and instead apply alternative compliance methods, provided the products do not pose a systemic risk to the US financial system. The bill is bipartisan but has not yet been voted on by the Senate.
Category: 🧾Legislation, in progress
Comment: The use of 'regulatory sandboxes' to promote innovation is a novel approach with interesting potential uses. However, finance is an unusually risky sector to start with. We will undoubtedly see widespread development of AI in finance even without regulatory incentives, and we should be worried about negative consequences: high-frequency trading has already caused flash crashes. Thinking through the oversight and new regulations needed to ensure the safe development of AI should be the priority, not deregulation.
Subscribe to this newsletter
Get monthly updates on AI Policy and Governance from around the world.