AI Policy and Governance Newsletter — December 2024
Australian Senate inquiry releases its AI report with recommendations for dedicated legislation, Singapore bans AI-manipulated election content, and Canada announces its national AI Safety Institute.
9 December 2024
Welcome to the monthly newsletter from The Good Ancestors tracking public policy news related to AI safety. The newsletter collates significant changes in AI policy around the world, explains each initiative, and provides brief high-level comments on the developments.
Initiatives are categorised according to their type (legislation, regulation, principles, other) and status (in force, in progress, proposed).
🧾Legislation = new law from a legislature (e.g. a parliament).
✍Regulation = rules made by an executive arm of government (e.g. a government agency or department).
🤝Principles = a statement that does not change the legal or regulatory space but signals a government's intent (e.g. a statement of principles).
Recommendation: Please check out this great article by the Good Ancestors Project's Greg Sadler on the risks posed by AI to biosecurity!
Australian Senate inquiry releases its report on AI
The Select Committee on Adopting Artificial Intelligence released its final report on 26 November. Its recommendations include:
- Introducing dedicated, whole-of-economy AI legislation (Recommendation 1) with a principles-based definition of high-risk AI uses, complemented by a non-exhaustive list covering areas like general-purpose AI models (Recommendations 2 and 3).
- Protecting workers' rights through tailored regulations and extending workplace safety laws to address AI-specific risks (Recommendations 5–7).
- Requiring transparency in AI training datasets, licensing of copyrighted material, and fair remuneration for creators impacted by AI-generated outputs (Recommendations 8–10).
- Implementing automated decision-making safeguards, as outlined in the Privacy Act review and the Robodebt Royal Commission, to ensure accountability in government use of AI (Recommendations 11–12).
- Taking a holistic, sustainable approach to developing sovereign AI capabilities and infrastructure, leveraging Australia's unique advantages and cultural perspectives (Recommendations 4 and 13).
Category: 🤝Principles, proposed
Comment: The final report acknowledged that advanced AI systems pose catastrophic risks, specifically calling out the weaponisation of advanced AI, and proposes that risk-based legislation can handle these challenges. Only Senator Pocock's additional comments called for technical as well as regulatory solutions, urging Australia to create an AI Safety Institute.
Overall, the process was messy, with a dissenting report from Coalition members and "additional comments" from the Australian Greens and Senator Pocock. Perhaps this does not bode well for the political journey of a future AI Act in Australia.
Singapore passes a bill to ban AI content manipulating elections
Singapore has enacted a new law prohibiting the use of deepfakes and other digitally manipulated content involving election candidates during the election period, aiming to safeguard against misinformation. Passed on 15 October, the bill bans online content, including AI-generated deepfakes, that falsely portrays candidates engaging in actions or making statements they did not. The ban applies from the issuance of the Writ of Election until polling closes, creating a deepfake-free election campaign period.
Category: 🧾Legislation, in force
Comment: The Singapore legislation sets an important example for the rest of the world, including Australia, to follow. The potential for AI interference in democratic elections is very high, and actions should be taken pre-emptively rather than reactively. The ability of AI misuse to damage institutions has already been shown in other domains, and consequences are hard to address after the fact. Getting ahead of these problems is the right thing to do.
Canada announces a national AI Safety Institute
The Government of Canada has announced the creation of the Canadian AI Safety Institute (CAISI). Led by Innovation, Science and Economic Development Canada, CAISI marks a significant step in advancing AI safety research in the country. It will also cultivate a national AI safety research community in partnership with Canada's leading AI institutes: Mila, Amii, and the Vector Institute. This initiative builds on Canada's prior $50M investment in AI safety and aligns with commitments under the Bletchley Declaration to strengthen international cooperation and focus on priority areas like cybersecurity and national security.
Category: 🤝Principles, in progress
Comment: The creation of an AI Safety Institute is an essential step towards a national AI safety policy, and one that Australia should also take. Because Canada's economy is similar to Australia's in size and structure, CAISI will give Australia a useful example to follow, learning from what works and what doesn't. Although Canada had previously announced its intent and funding, this new announcement provides real detail about the future of CAISI. We're excited to see how it progresses.