TL;DR
The Australian Department of Defence released its Policy Settings for Responsible Use of Artificial Intelligence in Defence in March 2026, a dedicated governance framework that sits outside the Digital Transformation Agency's (DTA) whole-of-government AI policy.
For any organisation that supplies, supports, or operates alongside the Australian Defence Force (ADF), this policy creates direct Governance, Risk, and Compliance (GRC) obligations that cannot be met with generic compliance tooling.
Australia has drawn a clear line in the sand on sovereign AI governance, and it starts with Defence. The Department of Defence's March 2026 policy on responsible AI use is not an advisory document. It is a binding governance framework that applies across the ADF and the broader defence portfolio, and its reach extends well into the organisations that support it.
The timing of this policy is not accidental. Globally, AI adoption in defence and critical infrastructure is outpacing the regulatory frameworks meant to govern it. Australia has watched this dynamic play out in allied nations and chosen to act ahead of a crisis rather than in response to one.
What makes the March 2026 release significant is its specificity. This is not a principles statement or a strategy paper. It is an operational governance framework that establishes three non-negotiable requirements for AI use across Defence: legality and international obligations, human accountability and explainability, and proportionate risk controls. Each is unpacked below.
For context on the depth of governance expected: the policy explicitly references Article 36 of Additional Protocol I to the Geneva Conventions, which requires legal review of new weapons, means, and methods of warfare, in the context of AI in weapon systems. That is not a footnote. It signals the maturity and seriousness of the obligations the defence supply chain now operates under.
On 21 April 2026, 6clicks is hosting a sovereign AI roundtable in Canberra for regulators, regulated entities, and defence-adjacent organisations navigating exactly these obligations. Register your place at the Canberra Sovereign AI Roundtable to join a select group of government and industry leaders for a focused conversation on what sovereign AI governance means in practice.
One of the most important distinctions in the March 2026 policy is that it is explicitly excluded from the DTA's whole-of-government AI policy. The defence portfolio operates under its own sovereign framework.
This is a critical point for organisations that assumed alignment with the DTA's guidance was sufficient. It is not, at least not for entities operating in or adjacent to the defence sector. Compliance teams, chief information security officers (CISOs), and risk managers in defence-adjacent organisations need to assess their exposure against this policy specifically, not just the broader APS AI framework.
Legality and international obligations. AI systems used by or supplied to Defence must comply with Australian domestic law, including the Privacy Act 1988, and with Australia's international obligations. For suppliers, this means your AI governance documentation needs to address legal compliance explicitly, not just cite general ethical principles.
Human accountability and explainability. This requirement has direct implications for GRC platform design and implementation. Any AI-assisted decision in a defence context must be explainable and must preserve human accountability in the decision chain. Tools that operate as black boxes do not meet this standard.
Proportionate risk controls. The policy requires testing, training, and evaluation scaled to the consequence of failure. This is a risk-based approach, which is good news for organisations already using structured GRC frameworks, but only if those frameworks are applied rigorously and documented in a way that is auditable.
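To make consequence-scaled controls concrete, here is a minimal sketch in Python of how a GRC team might encode control tiers. The tier labels, cadences, and control fields are illustrative assumptions; the policy itself does not prescribe them.

```python
from dataclasses import dataclass
from enum import Enum


class Consequence(Enum):
    """Illustrative consequence tiers; the policy does not prescribe these labels."""
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass(frozen=True)
class ControlProfile:
    """Testing, training, and evaluation requirements for one consequence tier."""
    evaluation_cadence_days: int   # how often the system is re-evaluated
    independent_review: bool       # whether evaluation must be independent of the builder
    explainability_evidence: bool  # whether explainability test results must be retained


# Hypothetical mapping: controls scale with the consequence of failure.
CONTROL_PROFILES: dict[Consequence, ControlProfile] = {
    Consequence.LOW: ControlProfile(365, independent_review=False, explainability_evidence=False),
    Consequence.MODERATE: ControlProfile(90, independent_review=True, explainability_evidence=True),
    Consequence.HIGH: ControlProfile(30, independent_review=True, explainability_evidence=True),
}


def controls_for(consequence: Consequence) -> ControlProfile:
    """Return the control profile proportionate to an AI system's consequence tier."""
    return CONTROL_PROFILES[consequence]


profile = controls_for(Consequence.HIGH)  # e.g. re-evaluate every 30 days, independently
```

The point of encoding tiers this way is auditability: an assessor can see exactly which control profile applied to a given system and why.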
The defence supply chain in Australia is large. Financial services institutions supporting defence contracts, critical infrastructure operators, state and federal government agencies, and technology providers to the ADF are all potentially within scope of the obligations this policy creates.
If your organisation handles data, develops systems, or provides services that touch the defence portfolio, your GRC posture needs to reflect the requirements of this policy. That includes your AI governance documentation, your risk assessment methodology, and your control testing regime.
Many organisations in Australia's government and defence technology sector hold or are pursuing Information Security Registered Assessors Program (IRAP) assessments. IRAP covers the information security baseline, but the Defence AI policy adds a layer of AI-specific governance on top of that baseline that IRAP alone does not address.
Organisations that treat IRAP as their complete compliance story for defence work will find gaps when this policy is applied to their AI systems and processes. A dedicated AI governance framework, mapped to the three pillars of this policy, is now a practical necessity.
General enterprise risk frameworks were not designed with AI-enabled systems in mind. The consequence-scaled testing requirement in the Defence AI policy obliges organisations to assess AI-specific risks: model drift, data poisoning, unintended outputs, and accountability gaps in automated decision chains.
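As one illustration of what AI-specific control testing can look like, the sketch below computes the Population Stability Index (PSI), a widely used drift metric that compares a model's input distribution in operation against its validation baseline. PSI is our choice of example, not a metric the policy names, and the 0.2 alert threshold is a conventional rule of thumb rather than a policy figure.

```python
import math


def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of bin proportions summing to 1).

    PSI = sum((actual_i - expected_i) * ln(actual_i / expected_i)); values above
    roughly 0.2 are commonly treated as significant drift.
    """
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )


# Hypothetical example: proportions of a feature falling into four bins
baseline = [0.25, 0.25, 0.25, 0.25]    # distribution at model validation
production = [0.10, 0.20, 0.30, 0.40]  # distribution observed in operation

psi = population_stability_index(baseline, production)
if psi > 0.2:  # conventional alert threshold, not a policy figure
    print(f"Drift alert: PSI = {psi:.3f}; trigger re-evaluation and log for audit")
```

Under a consequence-scaled approach, a failed check like this would trigger re-evaluation at a rigour matching the system's consequence tier, with the result retained as audit evidence.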
This is not a minor update to an existing risk register. It requires a structured approach to AI risk that sits alongside, and integrates with, existing GRC programs.
6clicks was built for exactly this environment: complex, multi-framework compliance in high-stakes sectors where accountability and auditability are non-negotiable.
For organisations responding to the Defence AI policy, 6clicks provides the AI governance documentation, consequence-scaled risk assessment, and auditable control testing that the policy's three requirements demand.
The Sovereign GRC capability within 6clicks is purpose-built for the Australian government and defence context, with deployment options that keep data onshore and within Australian jurisdictional control.
The policy directly governs the ADF and the defence portfolio. However, any private sector organisation that supplies AI-enabled systems, data services, or technology to Defence is expected to operate in compliance with its requirements. If your organisation is part of the defence supply chain, the policy's obligations are effectively your obligations too.
The Defence AI policy is explicitly excluded from the DTA framework and operates as a sovereign, stand-alone governance document. This means that aligning with the DTA's guidance alone is not sufficient for defence sector compliance. The two frameworks have different scope, different requirements, and should be assessed separately.
Human accountability means that a human must remain responsible and answerable for any decision made with AI assistance. It also means the AI system's reasoning must be explainable to the people reviewing or acting on its outputs. For GRC purposes, this requires documentation of AI decision chains, explainability testing, and clear governance over how AI outputs are used in operational processes.
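As a sketch of what that documentation can look like in a system of record (all field names here are illustrative assumptions, not policy-mandated), an AI-assisted decision might be captured so that a named human approver and the explainability evidence behind the recommendation are always preserved:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIDecisionRecord:
    """Audit record for one AI-assisted decision; all field names are illustrative."""
    system_id: str       # which AI system produced the recommendation
    recommendation: str  # the AI output presented to the decision-maker
    rationale: str       # explainability evidence shown to the reviewer
    human_approver: str  # the accountable person; must never be empty
    approved: bool       # whether the human accepted or overrode the output
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self) -> None:
        # Human accountability: a record without a named approver is invalid.
        if not self.human_approver.strip():
            raise ValueError("AI-assisted decision requires a named human approver")


record = AIDecisionRecord(
    system_id="supplier-risk-scorer-v2",  # hypothetical system name
    recommendation="flag for manual review",
    rationale="confidence 0.91; top features: route anomaly, duplicate vendor ID",
    human_approver="FLTLT J. Citizen",
    approved=True,
)
```

Structuring records this way means the decision chain, including who was accountable and what evidence they saw, can be reconstructed during an audit.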
The policy requires risk controls to be proportionate to the consequence of failure. This means starting with a consequence assessment for each AI system or use case, then designing testing, training, and evaluation requirements that match that consequence level. High-consequence AI applications require more rigorous controls, more frequent testing, and more detailed documentation than low-consequence ones.
Sovereign GRC refers to governance, risk, and compliance capabilities that operate entirely within Australian jurisdiction, with data sovereignty, onshore hosting, and governance frameworks aligned to Australian regulatory requirements. For defence-adjacent organisations, Sovereign GRC is not a preference. It is increasingly a procurement and compliance requirement.