TL;DR
AI adoption is accelerating globally, but AI governance frameworks lag behind deployment — creating a growing compliance gap
Gartner forecasts spending on AI governance platforms will reach $492 million in 2026 and surpass $1 billion by 2030 as global AI regulation intensifies
NIST CSF 2.0’s Govern function explicitly addresses cybersecurity supply chain risk management and can be applied to AI-related cybersecurity risks
NIST AI RMF 1.0 provides a complementary framework for governing AI risks — 6clicks maps controls across both
If you are deploying AI without a formal governance program, your alignment with NIST CSF likely has a material gap
Artificial intelligence (AI) is no longer a future technology challenge. It is a present cybersecurity risk. AI systems introduce new attack surfaces, data governance obligations, and accountability questions that traditional cybersecurity frameworks were not designed to address. NIST CSF 2.0’s new Govern function, combined with NIST’s dedicated AI Risk Management Framework (NIST AI RMF 1.0), provides the most comprehensive guidance available for organizations seeking to govern AI-related cybersecurity risks.
Gartner forecasts global AI governance spend will reach $492M in 2026 and surpass $1B by 2030, driven by AI regulation expanding across 75% of the world's economies.
NIST CSF 2.0’s expanded Govern function incorporates emerging technologies, including AI, into broader cybersecurity risk management. AI systems introduce unique risks across multiple NIST CSF functions:
Adversarial AI attacks
Adversarial attacks manipulate AI models by feeding them crafted inputs designed to cause incorrect outputs. For cybersecurity AI tools (intrusion detection, anomaly detection), this can mean attackers deliberately evading detection. NIST CSF 2.0’s Protect and Detect functions require controls to address adversarial manipulation of AI systems.
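The evasion pattern can be illustrated with a deliberately toy example. Nothing below is a real detector: the feature, threshold, and traffic values are invented purely to show how an attacker who understands a model's decision boundary can craft inputs that stay under it.

```python
# Toy illustration only: a naive threshold-based anomaly detector flags
# requests whose mean payload size exceeds a cutoff. An attacker who learns
# the cutoff can reshape malicious traffic to sit just below it.

THRESHOLD = 1000  # hypothetical cutoff learned from "normal" traffic


def anomaly_score(request_sizes):
    """Score a request batch by mean payload size (illustrative feature only)."""
    return sum(request_sizes) / len(request_sizes)


def is_flagged(request_sizes):
    """Flag the batch when its score crosses the detector's threshold."""
    return anomaly_score(request_sizes) > THRESHOLD


# A blatant attack is caught...
assert is_flagged([5000, 4000, 6000])

# ...but the same 15,000 bytes, split into many small chunks, slips under
# the threshold: the adversarial-evasion pattern the Detect function
# needs controls against.
assert not is_flagged([500] * 30)
```

Real evasion attacks target far more complex models, but the principle is the same: any detector with a learnable decision boundary can be probed and gamed.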
AI supply chain risk
Organizations increasingly rely on third-party AI models, data sets, and AI-as-a-service providers. Each creates a supply chain risk: a compromised AI model from a third-party provider can introduce vulnerabilities into the organization’s environment. NIST CSF 2.0’s cybersecurity supply chain risk management (C-SCRM) requirements apply directly to AI supply chain risk.
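One concrete control in this space is integrity verification of third-party model artifacts before they are loaded. The sketch below is a minimal example, not a complete supply chain program; the file path and digest are placeholders, and a full control would also cover provenance, signing, and dataset lineage.

```python
# Hypothetical C-SCRM control: verify a downloaded model artifact against a
# publisher-supplied SHA-256 digest before loading it into the environment.

import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path, expected_digest):
    """Refuse to load a model whose digest differs from the published value."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"model artifact digest mismatch: {actual}")
    return path
```

Gating deployment on a check like this turns "we trust the provider" into an auditable control that maps cleanly to CSF supply chain requirements.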
Data governance and privacy
AI systems require large data sets for training and operation. This creates data governance obligations: ensuring training data is appropriately classified, access-controlled, and does not create privacy compliance risks. NIST CSF 2.0’s Identify and Protect functions cover data asset management and protection.
AI system accountability
When an AI system makes a decision that causes harm — a false fraud alert, an incorrect medical diagnosis, a biased hiring recommendation — who is accountable? NIST CSF 2.0’s Govern function requires clear roles, responsibilities, and accountability for cybersecurity risk, including AI-related risk.
NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023, providing a structured approach to identifying, assessing, and managing AI-specific risks. The AI RMF’s four core functions — Govern, Map, Measure, and Manage — complement NIST CSF 2.0’s cybersecurity risk management approach.
Organizations deploying AI should implement both frameworks in an integrated program.
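As a rough sketch of what "integrated" means in practice, the snippet below pairs the two frameworks' functions. The function names come from NIST CSF 2.0 and NIST AI RMF 1.0; the pairings themselves are a simplified illustration of how one program might relate them, not an official NIST crosswalk.

```python
# Function names are from NIST CSF 2.0 and NIST AI RMF 1.0; the alignment
# below is an illustrative sketch, not an official NIST mapping.

CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]
AI_RMF_FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

# Hypothetical high-level alignment: each AI RMF function informs one or
# more CSF functions when AI risk is folded into the cybersecurity program.
ALIGNMENT = {
    "Govern":  ["Govern"],             # shared accountability, policy, roles
    "Map":     ["Identify"],           # inventory AI systems and their context
    "Measure": ["Protect", "Detect"],  # test, monitor, and evaluate AI risk
    "Manage":  ["Respond", "Recover"], # prioritize and act on identified risk
}


def csf_coverage(alignment):
    """Return the CSF functions touched by the mapped AI RMF functions."""
    return sorted({csf for targets in alignment.values() for csf in targets})


print(csf_coverage(ALIGNMENT))
# -> ['Detect', 'Govern', 'Identify', 'Protect', 'Recover', 'Respond']
```

Even at this level of abstraction, the exercise surfaces the point of an integrated program: one register of controls, with each control traceable to both frameworks.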
6clicks maps controls across NIST CSF 2.0 and NIST AI RMF 1.0, enabling organizations to manage AI governance and cybersecurity risk from a single platform without running parallel programs.
6clicks is Sovereign GRC Infrastructure — built for the AI era, with agentic connectivity and automation, available in sovereign deployment options. This is not generic AI-powered SaaS: 6clicks embeds AI governance capability within a platform designed to operate in the most demanding environments.
Prepare your cybersecurity program for the AI era. Book a strategy call to see how 6clicks integrates NIST CSF and AI governance into a single, sovereign-deployable program.