Pre-Budget Submission 2025-26

AI as an Economic Opportunity

In January 2025, Good Ancestors made a pre-budget submission to Treasury, presenting the economic and strategic case for establishing an Australian AI Safety Institute (AISI). The submission argues that addressing public concerns about AI safety through an AISI would not only mitigate the significant downside risks posed by frontier AI but also accelerate the economic benefits of faster adoption of safe AI, and help Australia capture a share of the growing global AI Assurance Technology market.

Key Arguments

Analysis shows that AI will be a major economic force, with Australian adoption potentially generating between $45B and $600B in annual value by 2030. However, Australians are uniquely concerned about AI safety, with 69% expressing apprehension—the highest level globally.

Our submission focuses on five key reasons why Australia should fund an AISI:

  1. AI will be a major economic force. Capital markets are backing AI with unprecedented investment, with over half of global venture capital now flowing to AI companies. Leading nations are committing substantial resources, from the UK's £100 million AI Safety Institute to the US's $500 billion Stargate Project. Multiple analyses forecast that AI will contribute tens to hundreds of billions in annual value to Australia by 2030.

  2. Australia has a unique opportunity in AI Assurance Technology. The global AI Assurance Technology (AIAT) market will reach $276B by 2030, presenting a significant opportunity that aligns with Australia's strengths. Our proven expertise in safety-critical domains such as mining safety, aviation standards, and food biosecurity provides a strong foundation. Our regional position and socio-technical capabilities give us strategic advantages in developing and exporting robust assurance capabilities.

  3. Public trust is essential to realising AI's benefits. Australians are more concerned about AI safety than the public of any other nation, with trust identified as the primary factor restricting AI adoption. The Tech Council of Australia's analysis shows that the difference between fast and slow AI adoption could result in a 156% difference in annual economic value by 2030, making public trust a critical economic factor.

  4. Safety requires institutional capability. Leading AI experts warn of potentially catastrophic risks if AI development continues without adequate safeguards. Early examples already show AI systems causing harm through mistakes, misalignment, and misuse. As AI capability grows rapidly, effective oversight requires the government to have in-house technical capability.

  5. Australia has made relevant commitments. The Seoul Declaration commits us to create or expand AI safety institutes. The Hiroshima Process and Bletchley Declaration require technical capability to evaluate and mitigate risks from AI systems. Domestically, Recommendation 17.2 of the Robodebt Royal Commission highlights the need for technical expertise in overseeing automated systems.

The proposed investment of $8–50M annually represents just 0.001–0.1% of the potential yearly economic benefits from accelerated AI adoption by 2030.
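As a rough sanity check, the quoted percentage range follows directly from the submission's own figures: the lowest proposed investment against the highest projected benefit, and the highest investment against the lowest benefit. A minimal sketch (all values are the estimates stated above, in Australian dollars):

```python
# Sanity check of the cost-to-benefit ratio cited in the submission.
# Figures are the submission's own estimates, in Australian dollars.
investment_low, investment_high = 8e6, 50e6   # proposed annual AISI funding ($8M-$50M)
benefit_low, benefit_high = 45e9, 600e9       # projected annual value by 2030 ($45B-$600B)

# Smallest ratio: lowest investment relative to highest benefit (in percent).
ratio_min = investment_low / benefit_high * 100
# Largest ratio: highest investment relative to lowest benefit (in percent).
ratio_max = investment_high / benefit_low * 100

print(f"{ratio_min:.4f}% to {ratio_max:.2f}%")  # → 0.0013% to 0.11%
```

These bounds round to roughly 0.001% and 0.1%, consistent with the range quoted in the submission.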