TL;DR
On 23 March 2026, the Australian Government released formal expectations for data centre and AI infrastructure developers as part of the National AI Plan.
For compliance and risk leaders in critical infrastructure and government, this is not a background policy update. It is a direct signal that sovereign AI governance is now a live compliance obligation, and that organisations without structured assurance processes are already behind.
Australia has drawn a clear line in the sand on sovereign AI governance, and it starts with Defence. The Department of Defence's March 2026 policy on responsible AI use is not an advisory document. It is a binding governance framework that applies across the ADF and the broader defence portfolio, and its reach extends well into the organisations that support it.
Australia's AI infrastructure ambitions are accelerating fast. Data centre capacity is forecast to more than double from 1,350 MW in 2024 to 3,100 MW by 2030, with projected investment reaching AUD 26 billion, according to a Mandala report cited by the Department of Industry, Science and Resources. Against that backdrop, the government has made clear it will not allow unregulated growth.
The Expectations of data centres and AI infrastructure developers, released on 23 March 2026 by the Minister for Industry and Innovation, the Minister for Energy and Climate Change, and the Minister for Science, Technology and the Digital Economy, sets out five core pillars that apply to new or expanded hyperscale facilities and large-scale AI compute centres across Australia.
For Governance, Risk, and Compliance (GRC) leaders, the key signal is straightforward: organisations building or using AI infrastructure in Australia now face a layered set of assurance obligations that span national interest, energy, water, workforce, and security. And these obligations sit alongside existing requirements under the Security of Critical Infrastructure Act 2018 (SOCI Act), the Information Security Registered Assessors Program (IRAP), and the Hosting Certification Framework.
Join us in Brisbane on 30 April for the 2026 Sovereign AI and Regulatory Assurance Forum, a closed-door executive forum for senior leaders across AI, risk, audit, compliance, and resilience. Register your place at the Brisbane Forum.
The National AI Plan expectations are structured around five pillars. Each one creates a compliance surface that risk and governance teams need to map.
Projects must align with Australia's strategic, economic, and sovereign data objectives. Developments that demonstrate clear public benefit, rather than purely commercial outcomes, are more likely to gain government approval and priority treatment. For organisations operating in regulated sectors, this means demonstrating that your AI infrastructure strategy is aligned with Australian sovereign objectives, not just commercial efficiency.
Data centre operators are expected to underwrite new renewable power supply, fund their share of new grid connectivity, and participate in demand flexibility mechanisms. These obligations introduce operational and financial compliance requirements that go beyond traditional IT risk management and require coordination between infrastructure, sustainability, and risk teams.
Operators must minimise water use through efficient and innovative solutions, cover their share of water infrastructure costs, and be transparent about usage and efficiency. This creates a new category of environmental compliance reporting that connects directly to corporate sustainability governance frameworks.
The expectations require operators to demonstrate meaningful investment in Australian jobs, skills, and supply chains. For organisations that rely on foreign-owned AI infrastructure, this introduces supply chain due diligence obligations that sit naturally within vendor risk management programs.
Hyperscalers are expected to make compute available to Australian start-ups building Australian AI, and to partner with the domestic innovation ecosystem. This pillar reinforces the broader sovereign AI narrative: the government wants Australia's AI capability to be genuinely local, not simply hosted locally.
The expectations do not exist in isolation. They layer on top of a set of existing regulatory frameworks that compliance leaders in government, financial services, and critical infrastructure are already navigating.
Foreign direct investment in AI infrastructure is subject to Foreign Investment Review Board (FIRB) scrutiny, the Hosting Certification Framework, and potential national security review. For any organisation evaluating cloud or AI infrastructure providers with overseas ownership, this is not a theoretical risk. It is a live due diligence obligation.
Government and regulated entities using AI infrastructure for workloads up to and including the Protected level must ensure those platforms hold current IRAP assessments against the Australian Government Information Security Manual (ISM). Microsoft completed updated IRAP assessments for Azure, Dynamics 365, and Microsoft 365 in March 2026, but for organisations building or procuring AI infrastructure directly, the assurance burden falls internally.
Critical infrastructure owners and operators using AI in operational technology environments face additional obligations under the SOCI Act. The Australian Cyber Security Centre (ACSC) released joint guidance in late 2025 with international partners on securely integrating AI into operational technology systems, outlining four principles that critical infrastructure operators are expected to apply.
From December 2026, entities using automated decision-making (ADM) must disclose in their privacy policies how AI is used to make decisions that significantly affect individuals' rights or interests. Compliance leaders need to start mapping ADM use cases now.
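That mapping exercise can start with something as simple as a structured ADM register that flags which use cases will trigger the disclosure requirement. The sketch below is illustrative only; the field names and the gap-check logic are assumptions, not anything prescribed by the legislation.

```python
from dataclasses import dataclass

@dataclass
class ADMUseCase:
    """One automated decision-making use case in the register.

    Field names are illustrative, not prescribed by the Privacy Act changes.
    """
    name: str
    affects_individual_rights: bool    # decision significantly affects rights or interests?
    disclosed_in_privacy_policy: bool  # AI use already described in the privacy policy?

def disclosure_gaps(register: list[ADMUseCase]) -> list[str]:
    """Return use cases that appear to trigger the December 2026
    disclosure requirement but are not yet covered in the policy."""
    return [
        uc.name
        for uc in register
        if uc.affects_individual_rights and not uc.disclosed_in_privacy_policy
    ]

register = [
    ADMUseCase("credit limit decisioning", True, False),
    ADMUseCase("internal document search", False, False),
    ADMUseCase("insurance claim triage", True, True),
]
print(disclosure_gaps(register))  # → ['credit limit decisioning']
```

Even a register this minimal forces the two questions that matter for the disclosure obligation: does the decision significantly affect individuals, and does the privacy policy already say so.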
The cumulative effect of these obligations is a new category of compliance risk that does not fit neatly into any single framework. It sits at the intersection of infrastructure governance, cybersecurity assurance, environmental reporting, vendor risk management, and national security due diligence.
Organisations that approach sovereign AI compliance as a one-off checklist exercise will find themselves revisiting it repeatedly as the regulatory landscape continues to evolve. The more sustainable approach is to treat sovereign AI governance as a continuous assurance capability, one that is integrated into your existing GRC programs rather than managed as a parallel workstream.
For CISOs and heads of risk and compliance, the practical questions are:

- Can you map your current AI infrastructure dependencies against the five expectation pillars?
- Do your vendor risk assessments capture IRAP status, Hosting Certification Framework compliance, and foreign ownership exposure subject to FIRB scrutiny?
- Have you inventoried your ADM use cases ahead of the December 2026 privacy disclosure requirement?
- Is sovereign AI governance integrated into your existing GRC program, or managed as a parallel workstream?

If the answer to any of these is uncertain, the gap between your current state and where regulators expect you to be is wider than it looks.
6clicks is built for exactly this kind of layered, multi-framework compliance environment. Our platform connects Governance, Risk, and Compliance (GRC) programs across frameworks including the ISM, the SOCI Act, ISO 27001, and NIST, so organisations can manage sovereign AI obligations alongside their existing compliance programs rather than in a separate system. That capability maps directly onto the National AI Plan expectations.
The organisations that will navigate the sovereign AI regulatory shift most effectively are the ones that already have structured GRC programs in place. The ones starting from scratch will spend 2026 catching up.
Australia's National AI Plan is a government framework that sets out how AI will be developed and deployed in Australia. The March 2026 release of expectations for data centre and AI infrastructure developers is a direct output of that plan. For compliance leaders, it matters because it introduces new obligations around national interest, energy, water, workforce, and security that apply to organisations building or operating large-scale AI infrastructure in Australia.
The expectations apply to new or expanded data centre and AI infrastructure developments in Australia, including hyperscale facilities and large-scale AI compute centres. Small-scale edge or on-site enterprise data centres are currently excluded. Organisations procuring or using covered infrastructure, particularly in regulated sectors, should assess their exposure through their vendor risk and procurement processes.
The expectations sit alongside, not above, existing obligations under the Security of Critical Infrastructure Act 2018. Critical infrastructure owners and operators must already manage AI risk under the SOCI Act. The National AI Plan expectations add a sovereign and strategic layer on top of those existing security and resilience obligations.
IRAP stands for Information Security Registered Assessors Program. It is the Australian Government's framework for assessing the security of cloud and technology systems used for government workloads. For organisations building or procuring AI infrastructure that will process government or Protected-level data, ensuring the platform holds a current IRAP assessment is a core compliance requirement.
Start by mapping your current AI infrastructure dependencies against the five expectation pillars. Identify which dependencies involve foreign-owned providers subject to FIRB scrutiny. Review your vendor risk assessments for IRAP status and Hosting Certification Framework compliance. Then connect those findings to your existing GRC program so sovereign AI governance is managed continuously, not as a one-off project.
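The steps above can be sketched as a simple gap matrix. In the hypothetical example below, the five pillar names come from the expectations themselves, but the provider attributes, flag names, and scoring are assumptions that a real program would replace with actual assessment data.

```python
# Hedged sketch: map AI infrastructure providers against the five expectation
# pillars plus framework-level flags. Attribute names are illustrative only.
PILLARS = [
    "national interest",
    "energy",
    "water",
    "workforce and supply chain",
    "domestic ecosystem access",
]

providers = {
    # provider -> evidence held per pillar (True = assessed, False = gap)
    "hyperscaler-a": {
        "national interest": True,
        "energy": True,
        "water": False,
        "workforce and supply chain": False,
        "domestic ecosystem access": True,
        "irap_current": True,
        "foreign_owned": True,  # foreign ownership triggers FIRB due diligence
    },
}

def assurance_gaps(attrs: dict) -> list[str]:
    """List open items for one provider: unassessed pillars plus
    framework-level flags (IRAP currency, FIRB exposure)."""
    gaps = [p for p in PILLARS if not attrs.get(p, False)]
    if not attrs.get("irap_current", False):
        gaps.append("IRAP assessment not current")
    if attrs.get("foreign_owned", False):
        gaps.append("FIRB / foreign ownership review required")
    return gaps

for name, attrs in providers.items():
    print(name, "->", assurance_gaps(attrs))
```

The point of the matrix is not the scoring mechanics but the shape: each provider is assessed once against every pillar and every framework flag, so the output feeds the same continuous-assurance cycle as the rest of the GRC program instead of a one-off project spreadsheet.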