
AI Policy and Governance Newsletter — July 2024

California advances regulation for costly AI models, Australian governments release a joint AI framework, and creating sexual deepfake images without consent is criminalised in Australia.


15 July 2024

Welcome to the monthly newsletter from Good Ancestors tracking public policy news related to AI safety. Each month we collate significant changes in AI policy around the world, listing and explaining different initiatives and offering brief high-level comments on the developments.

Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).

🧾Legislation = new law from a legislature (e.g. a parliament).

✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).

🤝Principles = a statement that does not change the legal or regulatory space but signals certain intent from a government (e.g. a statement of principles).

California is setting up regulation for costly new AI models

California's State Assembly, the lower house of the legislature of the world's most significant jurisdiction for AI, is discussing a bill to regulate the development of new AI models. The bill would require businesses that spend more than US$100 million on training a new model to perform safety testing on it. Without appropriate testing, developers would face liability for damages caused by their AI models.

Category: 🧾Legislation Status: In progress

Comment: The California bill would create a strong incentive for developers of the most powerful AI models to take safety seriously or face potential consequences. This is a step in the right direction by a well-placed legislature, and it would benefit from other jurisdictions following suit. Evidence suggests that a focus on benchmarking and red teaming can reduce dual-use hazards from AI models.

Australian governments release a joint framework for governing their use of AI

Australia's federal, state and territory governments have released a joint framework for the assurance of artificial intelligence in government. The framework sets out the approach Australian governments will follow when implementing AI solutions themselves. This approach is based on key principles, including "human-centred values", "privacy protection and security" and "transparency and explainability", among others.

Category: 🤝Principles Status: In force

Comment: The AI assurance framework is a valuable tool for ensuring that governments' use of AI is consistent and aligned with foundational principles. Coordinated, cross-cutting national policy has shown promise in tackling difficult issues such as COVID-19. However, the framework does not demonstrate substantial progress beyond the 2022 ethical principles and may face practical hurdles; for instance, governments may struggle to provide clear, simple explanations of how AI systems reach outcomes. The framework also does not implement the key ethical principle that AI systems should not pose unreasonable safety risks, and it should adopt safety measures proportionate to the magnitude of potential risks.

Creating sexual deepfake images without consent criminalised in Australia

The Australian (Commonwealth) Government has introduced new legislation in Parliament to criminalise the use of deepfake AI technology to create non-consensual sexual images of another person. Sending such images to other people will also be criminalised. This follows well-publicised incidents of such abuse in Australian schools.

Category: 🧾Legislation Status: In progress

Comment: This legislation is a significant step towards protecting Australians, both children and adults, from AI products that can be used to commit serious sexual offences. However, the Bill regulates only user behaviour and takes no action against apps or websites whose primary purpose is to create such material. AI developers who build tools without adequate safeguards also escape regulation. More effective harm reduction would consider developers and deployers, not just users.

Subscribe to this newsletter

Get monthly updates on AI Policy and Governance from around the world.