Insights from Ready for Sovereignty 2026 Canberra: Australia’s AI governance stalemate

Written by Andrew Robinson | Apr 23, 2026

The Ready for Sovereignty 2026 Forum in Canberra concluded on April 21, bringing together risk, audit, cyber, and AI leaders from defence, healthcare, education, and the Australian Public Service (APS) who work at the forefront of technology and governance.

What made it a success was the tight-knit discussion, which surfaced genuinely valuable insights. Two of them came through clearly enough that I can’t leave them in our internal notes:

  1.  Risk-first AI governance is crowding out opportunity in Australia. The country's approach to AI is almost entirely focused on what could go wrong, to the exclusion of what we stand to lose by not moving, or what we might gain if we get a few things right.
  2.  Regulatory frameworks are out of step with real-world AI use. Regulators responsible for governing AI are so structurally disconnected from how AI is actually being used in practice that the rules are either non-existent, irrelevant, or unenforced.

Neither of these is a new observation. What stood out was the level of consensus in the room. Comforting? No. These challenges are widely understood, yet they continue to hold organisations back daily, with no clear progress in sight.

The risk asymmetry nobody wants to name

One participant put it plainly: 90% of the conversation in Australia on AI is about risk. Not opportunity. Risk.

That is not a criticism of caution. Caution has its place. The Robodebt Royal Commission and the AI-generated Deloitte report handed to the head of a government department were real events with real consequences, and they rightly shaped how government thinks about automated decision-making.

But there is a cost to not moving, and nobody is being held accountable for it.

When a department freezes AI adoption because the risk of acting badly outweighs any personal incentive to act well, that is a rational response to a broken incentive structure. The downside of a bad AI deployment is visible, career-ending, and gets hauled before a committee. Meanwhile, the downsides of not deploying (delayed services, overwhelmed regulators, backlogs that harm citizens) are diffuse, slow, and attributable to nobody.

Until that asymmetry is corrected in policy, no amount of strategy documents or national AI plans will change behaviour at the operational level.

Compare that to the countries where we’ve held other discussions. The UAE has a minister for AI, an AI university, and AI officers in every government department reporting routinely back to the minister. Estonia treats its government like a startup, with failure acknowledged as a positive learning outcome. Singapore and Malaysia are pushing ahead in financial services and government efficiency. The reason those countries move faster is not that they are less careful. It is that they have more to lose by standing still.

Australia is a lucky country. That luck has made us comfortable with a pace of caution that, in the context of AI, is becoming a competitive liability. We ought to strive for more than a dependency on critical minerals.

The regulator relevance problem

The second thing that landed hard in the room was the state of regulatory maturity.

We presented a five-level model for how regulators and regulated entities share assurance information.

  • Level one: spreadsheets shared via email.
  • Level two: SharePoint instead of email, but still document-based.
  • Level three: structured data collection.
  • Level four: intelligence and automation added on top of that data.
  • Level five: continuous assurance with real-time feeds, already operating in some high-risk sectors.
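
For readers who think in code, here is the same ladder as a minimal sketch in Python. The enum names and the helper are illustrative only, not a 6clicks API; a real assessment would grade evidence per control rather than hang one label on an organisation.

    from enum import IntEnum

    class AssuranceMaturity(IntEnum):
        """The five-level assurance-sharing model as an ordered scale (illustrative names)."""
        SPREADSHEETS_VIA_EMAIL = 1   # ad hoc documents exchanged over email
        SHARED_DOCUMENTS = 2         # SharePoint instead of email, still document-based
        STRUCTURED_DATA = 3          # evidence captured as structured, queryable data
        INTELLIGENT_AUTOMATION = 4   # automated collection, analysis, and alerting
        CONTINUOUS_ASSURANCE = 5     # real-time evidence feeds to the regulator

    def levels_to_continuous(current: AssuranceMaturity) -> int:
        """How many maturity levels separate an entity from continuous assurance."""
        return AssuranceMaturity.CONTINUOUS_ASSURANCE - current

    # Where the room placed most Australian federal regulators:
    print(levels_to_continuous(AssuranceMaturity.SHARED_DOCUMENTS))  # prints 3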

The consensus on where most Australian federal government regulators sit? Level two, and struggling to get out of spreadsheets and documents shared via email.

That is not a technology problem. The capability to do better exists. It is a willingness problem, a literacy problem, and a structural problem. Regulators are writing rules about AI systems they have not used, cannot interrogate, and would not know how to audit. The controls they prescribe lag behind the systems they are meant to govern, not because regulators are slow, but because the architecture of regulation assumes a pace of change that AI simply does not respect.

Three specific failure modes came through clearly in our discussion:

  • Lag. By the time evidence is collected, reviewed, and assessed, the system has already changed. Internal and external audits happen annually. AI models in production can change weekly.
  • Drift. Controls that exist in documentation diverge from real-world behaviour. The system running in production is not the system that was approved. Vendors update models. Data changes. Nobody has real-time visibility.
  • False confidence. Documentation is complete, attestations are signed, everyone feels safe. Then something fails in production and the assurance picture unravels.

These three problems predate AI. AI makes them structurally worse, because the gap between documented design and production reality is wider, changes faster, and is harder for a non-technical reviewer to detect.
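
To make lag and drift concrete, here is a minimal sketch in Python of the kind of per-control check a continuous-assurance feed would run. Everything in it is hypothetical and simplified (the dataclass, the fingerprint comparison, the seven-day staleness threshold); it illustrates the idea, not 6clicks product code.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class ControlEvidence:
        control_id: str
        approved_fingerprint: str   # hash of the system design that was signed off
        collected_at: datetime      # when evidence was last gathered (timezone-aware)

    def check_control(evidence: ControlEvidence,
                      production_fingerprint: str,
                      max_age: timedelta = timedelta(days=7)) -> list[str]:
        """Flag lag and drift for one control. An empty result only means
        something if this check actually runs; otherwise it is false confidence."""
        findings = []
        # Lag: the evidence is older than the system's realistic rate of change.
        if datetime.now(timezone.utc) - evidence.collected_at > max_age:
            findings.append(f"{evidence.control_id}: evidence is stale (lag)")
        # Drift: the system running in production is not the system that was approved.
        if production_fingerprint != evidence.approved_fingerprint:
            findings.append(f"{evidence.control_id}: production diverges from approval (drift)")
        return findings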

One participant described agencies currently running forensic reviews to locate all the automated decision-making buried in complex IT systems, precisely because nobody mapped it when it was built. A regulator cannot know what the regulated entity has not already mapped. The question is whether regulators build the architecture to prevent issues now, or keep discovering them only after harm has occurred. I know which model I prefer.

Sovereignty is more than a data centre

The sovereignty conversation in the room was more sophisticated than the public debate usually is, and it is worth capturing here.

Sovereignty is not just where your data sits. A meaningful sovereignty posture covers compute infrastructure, the legal entity of your provider, where your LLM is hosted, and where the people supporting that infrastructure actually are. You can host data in Australia, grant overseas access, and end up with essentially no meaningful control. Sovereignty without access controls, audit rights, and practical support resources is marketing, not effective security.
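
Written down as a checklist, residency becomes one field among several, and any overseas access path vetoes the rest. This is a deliberately oversimplified Python sketch with field names of my own invention; a real assessment weighs contracts, audit rights, and personnel clearances, not booleans.

    from dataclasses import dataclass

    @dataclass
    class SovereigntyPosture:
        data_in_australia: bool
        compute_in_australia: bool
        provider_is_australian_entity: bool
        llm_hosted_in_australia: bool
        support_staff_in_australia: bool
        overseas_admin_access: bool   # any offshore path into the environment

    def meaningfully_sovereign(p: SovereigntyPosture) -> bool:
        # Onshore data plus overseas access yields no meaningful control.
        if p.overseas_admin_access:
            return False
        return all([p.data_in_australia, p.compute_in_australia,
                    p.provider_is_australian_entity, p.llm_hosted_in_australia,
                    p.support_staff_in_australia])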

Several participants made the point that sovereign AI environments, by their nature, are running older foundation model versions. The hyperscalers building trillion-dollar infrastructure are not prioritising sovereign deployments for Australian audiences. If you need air-gapped sovereignty, you are accepting a capability lag. That is a legitimate trade-off, but it should be made explicitly, not discovered after deployment.

The honest situation: many vendors claiming AI sovereignty in their contracts do not hold up to a half-decent audit. That is not a hypothetical. It was said in the room by people who have done the audits.

Which is precisely why we have taken a different approach with 6clicks, building it as sovereign GRC infrastructure designed to operate within the environments organisations actually need, not just where vendors prefer to run.

What the national AI plan tells us

The Albanese Government's National AI Plan, released in December 2025, and the Anthropic MOU signed three weeks ago, are the most coherent policy signals Australia has produced on AI. They are worth taking seriously on their own terms.

The APS AI Plan introduces Chief AI Officers across every agency. That matters because it creates named accountability at the agency level, which is the structural change the incentive architecture has been missing. GovAI, the central government AI platform, is designed explicitly to prevent vendor lock-in, which is the right direction. New Commonwealth contracting clauses requiring disclosure of AI use in consultancy work directly address past failure modes. These represent a credible starting point.

But the National AI Plan funds the AI Safety Institute at $29.9 million. Compare that to the UK equivalent. Compare it to what the UAE is investing. Australia's ambition on AI is modest, and the resourcing reflects it.

The Anthropic MOU makes headlines. Dario Amodei flies to Canberra, shakes hands with the Prime Minister, and signs a non-binding statement of intent. It is the first arrangement executed under a National AI Plan that is five months old, in a domain where the frontier shifts every few weeks. The optics are good. The execution gap it exposes is far bigger than the agreement it announces.

The three reforms that would actually move things

Based on everything that came through in Canberra, and the policy context around it, three interventions would have a disproportionate impact.

  • Name the cost of inaction. Policy reform should require agencies to assess and publish the cost of not deploying AI in priority service areas, alongside their risk assessments for deployment. Make inaction a visible, costed risk with an owner. Right now, only one side of that equation appears in decision-making.

  • Fix the accountability architecture. Named accountability for AI-assisted decisions, with a genuine safe harbour for following prescribed frameworks, changes the incentive structure for every senior officer in the APS. Right now, there is a stick without a carrot. The stick applies to people who act. People who do not act face no accountability.

  • Embed practitioners in regulators. Not advisory panels. Not consultation submissions. Mandatory secondments of working technologists into the regulatory bodies writing AI rules. The rules written by people who have built and run AI systems are usually implementable. The rules written without that knowledge stop at principles.

The harder conversation

The country comparison that stuck with me from the room was Estonia. A population of 1.4 million. A threat environment so acute that digital sovereignty is existential. When Russia moves, Estonia has no country. That is what motivates their AI-first government posture.

One participant said something I keep returning to: Australia's version of existential risk is not getting the next federal government contract.

That is sharp and mostly fair. The absence of an acute threat has made us comfortable with a pace of deliberation that is no longer affordable. The expectation gap is coming regardless. Either it arrives through citizen pressure when private sector AI service quality makes government look incompetent, or through a cyber incident that has to be responded to at AI speed with human-pace governance, or through the slow accumulated cost of productivity foregone.

We will move eventually. The question is whether we choose to move now, with governance architecture in place, or whether we get dragged forward reactively and build the governance after the fact.

Better governance enables faster movement. The faster the car, the stronger the brakes. That framing landed in the room because it is true, and because it reframes governance from a constraint on deployment into a precondition for it.

Australia has the governance institutions. We have the standards. We have practitioners who understand the problem with genuine depth. What we do not yet have is the political will to treat AI capability as a national economic priority rather than a risk to be managed.

That requires a champion. Not a minister who manages the downside of AI. A minister who owns the upside. We are waiting for that person.

Canberra set the tone. See it on demand.

The conversations in Canberra were too important to stay in the room. We’ve made the resources and key insights available on demand, so you can see exactly how leaders across government and critical sectors are thinking about AI governance, sovereignty, and what needs to change next.

Access the Ready for Sovereignty 2026 Canberra: On-demand forum

We’re continuing the discussion across Australia. Join us at the next stops: