AI Policy and Governance Newsletter — January 2026
Australia's AISI begins hiring its founding team, Grok AI deepfake crisis prompts international action, the Productivity Commission doubles down on AI regulation as a last resort, and global warnings on AI capabilities and safety gaps intensify.
January Newsletter
14 January 2026
Happy new year!
Australia's AI Safety Institute (AISI) is hiring its founding team, with most roles closing 18 January. The positions come as the international network of AI safety institutes has dropped "safety" from its name—following the UK and US, which renamed their institutes last year. Australia's new AISI kept the term and will hopefully attract top talent who want to pursue the safety-first mission.
The lion's share of AI news coverage this week has focused on Elon Musk's Grok AI, which Australia's eSafety Commissioner is investigating after the chatbot generated sexualised images of women and children without consent. Prime Minister Albanese condemned the technology as "abhorrent", while Indonesia and Malaysia have become the first countries to ban the system outright.
Meanwhile, the Productivity Commission doubled down on its recommendation that AI-specific regulation be treated as a "last resort", the ACCC has warned that agentic AI is already straining consumer protections, and the UK's AI Security Institute published sobering findings on frontier capabilities, including warnings about catastrophic loss of control.
—
Welcome to the AI Policy and Governance newsletter from Good Ancestors. We track the biggest developments in AI policy and safety, at home and abroad.
—
Featured Australian publications
- Designing Australia's AI Safety Institute: Expert Survey Report (Good Ancestors) For our latest report, we surveyed 139 professionals with expertise in AI safety, governance, and related fields to gather input on the operations and direction of Australia's AISI. Key findings include strong support for operational independence, international collaboration, and attracting technical talent.
- Harnessing Data and Digital Technology: Final Report (Productivity Commission) The Commission's final report recommends that AI-specific regulations "should only be considered as a last resort", arguing existing frameworks can be adapted. The report also recommends against a copyright exception for text and data mining.
- Mid-Year Economic and Fiscal Outlook 2025-26 (Treasury) In addition to the $30 million for the AISI, MYEFO reveals $166 million for GovAI Chat, a ChatGPT-style AI assistant for the public service, and flags potential compensation liabilities from automated welfare compliance decisions.
- Principles for the Secure Integration of AI in Operational Technology (Cybersecurity and Infrastructure Security Agency and Australian Signals Directorate's Australian Cyber Security Centre) New guidance for critical infrastructure owners and operators on integrating AI into operational technology environments, outlining key principles to leverage benefits while minimising risk.
- ACCC Snapshot on AI Developments (Australian Competition and Consumer Commission) The ACCC's update on AI trends examines advances in foundation models and agentic AI, warning of emerging consumer risks including AI-facilitated scams, fake reviews, and the possibility of AI agents colluding "even where this is not expressly intended or programmed by human creators".
—
News & commentary
Australia ramps up for its new AISI as the Network of AI Safety Institutes drops "safety" from the name
✍ Regulation, 🤝 Principles
The International Network of AI Safety Institutes has renamed itself the International Network for Advanced AI Measurement, Evaluation and Science. The change, confirmed at the network's fourth meeting in San Diego on 4–5 December, follows a pattern: the UK renamed its AI Safety Institute to the AI Security Institute, while the US renamed its institute to the Center for AI Standards and Innovation in June 2025. The UK will serve as Network Coordinator for the next 12 months.
Australia attended the San Diego meeting days after announcing the creation of its own AI Safety Institute, retaining "safety" in the name. The AISI is recruiting for its founding team, with most roles closing on 18 January. Positions range from technical to broader policy roles and are available Australia-wide with flexible work arrangements.
Good Ancestors' expert survey of 139 professionals in AI safety and governance identified what could make or break the AISI's ability to attract talent. Strong international connections with other AISIs (67.9%) and leadership focused on catastrophic risks (64.1%) were the top drawcards. The biggest deterrent? Bureaucratic culture preventing impact—cited by 90% of respondents.
The survey also highlighted a stark funding gap: 53.3% of respondents recommended annual funding of $50 million or more for AISI to make a meaningful contribution, while Government has allocated $30 million over four years (about $7.5 million annually). In an interview with InnovationAus, Good Ancestors CEO Greg Sadler noted that Government is investing just 7 cents for every $1,000 of expected economic benefit. With Government unable to compete on pay with international frontier labs, Sadler argued it would have to "compete on culture".
Comment:
The trend away from "safety" language—first the US, then the UK, now the international network—is worth watching. Whether this reflects political sensitivities, shifting priorities, or rebranding remains to be seen. What matters is that core functions like independent evaluation and testing of frontier models continue regardless of institutional name. Australia's retention of "safety" may help keep that focus visible internationally.
The survey findings highlight a familiar tension: technical experts want meaningful work with real impact, but traditional public service structures can impede agility. This makes the founding team's composition critical. The right leadership and early hires can establish a culture of impact, build credibility with the AI safety community, and advocate effectively within Government for the resources and autonomy the AISI needs to succeed.
Grok AI crisis prompts international action on deepfakes
✍ Regulation, 🤝 Principles
Australia's eSafety Commissioner Julie Inman Grant has launched an investigation into Elon Musk's Grok AI following mass complaints about the chatbot's "undressing" feature, which generated sexualised images of women and children without consent. Prime Minister Anthony Albanese condemned the technology as "abhorrent", joining an international chorus of leaders pledging action.
Indonesia and Malaysia have become the first countries to ban Grok. Indonesia's Communications Ministry cited the need to protect women, children, and the community. The EU and UK have also condemned the feature, and US Vice-President JD Vance acknowledged to UK Deputy PM David Lammy that such content is "entirely unacceptable", shortly before UK media regulator Ofcom launched a formal investigation.
The crisis has amplified calls for the implementation of Australia's new industry codes. These codes, registered by the eSafety Commissioner in September 2025, apply to generative AI services and AI companion chatbots—many of which can engage in sexually explicit conversations with minors and have been alleged to encourage suicidal ideation, self-harm and disordered eating. Under these codes, services must prevent children from accessing sexually explicit content, violent content and content related to self-harm and suicide, or face penalties of up to $49.5 million.
The Grok crisis is part of a broader pattern. Deepfakes levelled up in 2025, with technical improvements and consumer tools making creation easier. Videos that closely resemble real humans can be synthesised in real time, making detection difficult. These tools have already been used to undermine online academic assessments and to fuel propaganda, including racist and antisemitic misinformation following the Bondi terrorism attack.
The controversy has not prevented Grok's expansion into sensitive applications: on Monday US Defense Secretary Pete Hegseth announced that Grok will be integrated into Pentagon networks later this month.
Comment:
The Grok incident demonstrates why voluntary industry commitments are insufficient. X's own policies prohibit depicting individuals in pornographic content without consent, yet its built-in AI tool does exactly that. When xAI's response to widespread condemnation was to put the feature behind a paywall rather than remove it, it suggested where the platform's priorities lie.
The integration of Grok into X creates regulatory challenges. Australia's planned ban on "nudify" apps targets apps that are primarily designed to sexualise images, but Grok is a general-purpose AI whose capabilities enable the same abuse. Creating nonconsensual sexual deepfakes of adults is criminal in South Australia, Victoria, and NSW, but remains legal elsewhere in Australia. The Commonwealth's 2024 offences focus on sharing material rather than standalone creation or the tools used. Our upcoming industry codes represent progress on protecting children, but they don't address creation tools or require the safety-by-design approaches that researchers argue are necessary, which could mean disabling or removing certain features and capabilities entirely.
PC doubles down while other regulators and experts push back on "wait and see" approach
✍ Regulation, 🤝 Principles
The Productivity Commission's final report on Harnessing Data and Digital Technology recommends that "AI-specific regulations should only be considered as a last resort". The Commission argues that gap analyses should first examine whether existing frameworks can address AI-related harms with improved guidance and enforcement, or with modification. The recommendation explicitly argues against "whole-of-economy regulation such as the EU AI Act and the Australian Government's previous proposal to mandate guardrails for AI in high-risk settings".
However, scholars argue that "without dedicated AI regulation, Australia will leave the most vulnerable at risk of harm" and that serious legal gaps remain.
The ACCC has warned that agentic AI risks undermining consumer protections. Chair Gina Cass-Gottlieb noted that "the emergence of new technologies over time, including agentic AI, may need us to consider whether the ACL continues to be effective". This contradicts Treasury's October finding that the Australian Consumer Law "can still protect consumers in the context of AI goods and services".
Meanwhile, the federal government has abandoned plans to establish a permanent AI advisory body, despite proposing and funding the initiative in last year's budget.
Comment:
The Productivity Commission recommends comprehensive gap analyses before regulatory action. But evidence of urgent gaps is already emerging. The ACCC is flagging that agentic AI may strain existing consumer protections. Good Ancestors' AI Legislation Stress Test found that up to 93% of surveyed experts consider Australia's current regulatory approach inadequate for managing AI risks. The question is not whether gap analyses have value, but whether waiting for their completion is prudent when regulators and experts have already identified concrete shortfalls—and when model-level interventions would sit "upstream" of many of the regulatory domains those gap analyses would cover.
Global warnings on AI capabilities and safety gaps
🌍 Tech, ✍ Regulation
The UK's AI Security Institute (formerly AI Safety Institute) has published its inaugural report on frontier AI capabilities, delivering sobering findings. AI capabilities are improving rapidly across all tested domains, with performance in some areas doubling every eight months and exceeding expert baselines sooner than anticipated. The report warns that "in a worst-case scenario, unintended behaviour could lead to catastrophic, irreversible loss of control over advanced AI systems"—a possibility "taken seriously by many experts".
The Future of Life Institute's AI Safety Index found that safety practices at leading AI companies Anthropic, OpenAI, xAI, and Meta are "far short of emerging global standards". The Global Challenges Foundation listed AI in military decision-making as a global catastrophic risk, highlighting governance gaps including the lack of a universally accepted risk framework, inadequate confidence-building measures and limited transparency around military AI development.
In an interview with ABC's 7.30, AI pioneer Professor Yoshua Bengio (one of the "godfathers" of deep learning and the most cited researcher globally in any field) warned about humanity's future with artificial intelligence.
Australian Buck Shlegeris, CEO of safety research organisation Redwood Research, summarised the challenge of Silicon Valley's "move fast and break things" culture when it comes to AI: "the attitude that has brought Silicon Valley so much success is not appropriate for building potentially world-ending technologies."
Comment:
The UK AISI report is particularly relevant to Australia as our institute begins operations. Anthropic's head of UK policy noted that UK AISI technical researchers "have, at times, picked up [technical issues] that we haven't spotted", demonstrating the value of independent government capability.
But Australia's AISI should learn from international counterparts without simply replicating them. Good Ancestors' expert survey found strong support for the AISI to pioneer approaches that differ from existing institutes—addressing neglected areas and making a unique contribution rather than duplicating work done elsewhere. Survey respondents rated autonomous systems (85.8%), cyber misuse (81.2%), and dual-use science including CBRN risks (79.8%) as critical or very important AISI focus areas. The UK report's warning about catastrophic loss of control underscores the importance of getting this focus right. Australia's AISI has an opportunity to contribute to international safety standards while building sovereign expertise—but only if it is resourced appropriately and focused on the highest-consequence risks.
AI adoption remains a challenge for Australian business
✍ Regulation, 🌍 Tech
Australian tech leaders have issued their predictions for the year ahead, with AI a recurring theme. Atlassian's Scott Farquhar framed 2026 as the "moment Australia either steps into AI leadership or falls behind". Kylie Frazer of Flying Fox Ventures noted that around 85% of US equity funding has gone into AI this year versus around 51% globally, with Australia "anecdotally well below" both figures.
Industry's slow adoption reflects real uncertainty. AirTrunk's Robin Khuda highlighted that "cybersecurity, data integrity and AI governance are still big challenges as we head into 2026". Meanwhile, KPMG's annual survey saw "new technologies, including AI" emerge as the top challenge for executives, with 63% citing implementation and ethics concerns.
In The Australian's annual CEO survey, chief executives stressed the need to balance innovation with smart regulation that builds public confidence. Seven West Media's Kerry Stokes argued that "greater concern should be focused on the controllers of the AI platforms and how they could look to deploy AI as it advances", referring to the small number of global technology giants developing foundational models.
Adoption is also challenged by fragile public trust in AI. AI adoption featured in several of the top business controversies of 2025: Qantas's AI-drafted customer apology, Deloitte's error-riddled AI-written government report, and Commonwealth Bank's short-lived decision to cut 45 jobs as part of an AI voice bot rollout.
Comment:
Industry and the public cannot be forced to accept AI. The surveys reveal a clear pattern: governance gaps, and their impact on public and industry trust, may be as significant an obstacle to adoption as skill and investment gaps. Sensible regulation that builds confidence could accelerate rather than hinder Australia's AI transition.
—
In case you missed it…
- Government braces for automated welfare compensation claims: The Albanese government has listed almost five years of potentially unlawful automated welfare compliance decisions as a new "unquantifiable" Commonwealth liability, setting aside $44 million to address the scheme.
- 100+ UK parliamentarians acknowledge AI extinction risk: More than 100 lawmakers have backed ControlAI's campaign, calling for binding regulation on the most powerful AI systems.
- Trump signs executive order on AI: President Trump signed an order granting the Attorney General authority to sue states and overturn AI laws that do not support "United States' global AI dominance", putting dozens of state safety and consumer protection laws at risk.
- Bunnings AI gives illegal DIY advice: Days after being named Australia's most trusted brand, Bunnings apologised after its "Ask Bunnings AI" bot gave step-by-step instructions for replacing an extension cord plug, a task requiring a licensed electrician in Queensland.
- Crypto company's AI chatbot lies to journalist about being human: The AI bot gave a journalist false information and repeatedly insisted it was not an AI bot.
- Australia joins US-led AI supply chain alliance: Australia and seven other nations signed a declaration at the Pax Silica Summit in Washington, committing to build more reliable technology supply chains to counter China's dominance.
- AI jobs are the fastest growing in Australia: New data shows AI-related roles leading employment growth.
- OpenAI seeking 'head of preparedness': Sam Altman announced OpenAI is looking for someone to lead on understanding 'how new AI capabilities could be abused'. Previous holders of the post have lasted only months.
- Google and Character.AI settle lawsuits over teen harm: The companies settled lawsuits filed by families accusing AI chatbots of harming minors, including contributing to a Florida teenager's suicide. Terms were not disclosed.
- Children forming emotional relationships with AI chatbots: Scrutiny is increasing as more children develop attachments to AI companions, prompting University of Sydney researchers to develop MIA ("Mental health Intelligence Agent") to provide safer mental health advice.
- AI biases shape news narratives: Research shows that subtle biases in AI models can alter narratives when systems are used to disseminate news, raising concerns about cumulative effects on public discourse.
- CommBank warns Australians overconfident about detecting deepfakes: Research found 89% of Australians believe they can accurately spot deepfakes, but when tested, they were correct only 42% of the time—worse than guessing. One in four Aussies have already encountered a deepfake, with scammers needing as little as 10 seconds of voice recording to create convincing audio impersonations. CommBank recommends families establish a confidential "safe word" to protect against impersonation scams.
—
Featured opportunities
- Roles: Multiple positions on the founding AISI team. If you're an Australian citizen with relevant skills, this is a unique opportunity to shape Australia's approach to AI safety. (Most roles close 18 January)
- Role: AI Governance, Department of Defence. Defence is recruiting for AI governance expertise. (Closes 28 January)
- Grant: National Intelligence Postdoctoral Grants 2026 for postdoctoral early/mid-career researchers to undertake academic research related to national intelligence. (Closes 13 February)
- Training: Summer 2026 IAPS AI Policy Fellowship is a fully-funded, three-month program for professionals from varied backgrounds seeking to strengthen practical policy skills for securing a positive future in a world with powerful AI. (Closes 2 February)
- Training: Technical Alignment Research Accelerator (TARA). A free 14-week part-time program building technical AI safety skills across APAC. The March 2026 cohort is targeting Sydney, Melbourne, Brisbane, Singapore, Manila, Taipei, and Tokyo. Ideal for software engineers, ML practitioners, and technical professionals who can't commit to full-time programs. (Closes 23 January)
—
That's all for now!
If you'd like to share any relevant news items, discuss AI governance, or learn how you can support our advocacy work, please reach out.
Onward in action!
The Good Ancestors team
Subscribe to this newsletter
Get monthly updates on AI Policy and Governance from around the world.