As AI adoption accelerates, CISOs face a dual challenge: fueling innovation while mitigating the risks of a rapidly expanding attack surface. Tenable’s five-step framework for securing AI offers a systematic approach to reducing AI security risks as your organization races to achieve the productivity benefits of AI.
Key takeaways
- Get a five-step framework to help you secure AI usage throughout your organization and mitigate the security risks that AI tools create.
- Securing enterprise AI use demands a combination of robust AI discovery capabilities, mechanisms to secure AI workloads and the infrastructure running AI, prompt-level visibility, the ability to analyze AI security risks alongside other exposures, and technical controls to ensure compliance with your organization’s AI acceptable use policy.
- Learn why your existing security controls may be inadequate when it comes to securing AI.
As AI transforms enterprises, security leaders like me are grappling with how to most effectively manage the security risks it creates.
The challenge is that AI is now embedded virtually everywhere across our organizations: in employee productivity tools, SaaS platforms, developer libraries, cloud services, APIs, and web apps. The result? Our teams are left with a growing AI exposure gap: a vast and largely invisible attack surface that our traditional security tools weren’t designed to monitor.
Complicating matters is that we often can’t isolate AI risk to a single asset. Rather, it emerges from a string of interconnected elements (such as applications, infrastructure, identities, and data) that in aggregate create exposure. Here’s an example of what I mean.
Let's say an employee uses an approved AI chatbot for technical support resolution that relies on Amazon Bedrock agents, and those agents have elevated privileges to access sensitive internal systems like enterprise resource planning and customer resource management tools. If a threat actor gains access to the agent through an unpatched vulnerability on the employee’s laptop, then the threat actor can use the agent to breach sensitive data, and a seemingly safe use of an approved AI tool turns out to be a high-impact exposure.
Protecting data in today’s AI-assisted work environments becomes exponentially more difficult because each one of the myriad interactions with AI assets (e.g., every prompt, file upload, generated response, integration, and configuration) can put intellectual property, customer information, and confidential plans at risk.
So, how do we tame this challenging new attack surface that grows unchecked as our organizations expand their use of AI? Here’s a strategic framework I’ve implemented for governing, discovering, and securing AI wherever it crops up and creates risk for your organization.
Strategic framework: 5 steps for securing enterprise AI
1. Establish an AI governance committee, framework, and acceptable use policy
Securing AI starts with setting clear expectations with employees about acceptable use. Establish an AI acceptable use policy that:
- provides a list of approved and unapproved AI tools;
- defines appropriate and inappropriate business use cases;
- explains the types of data that can and can’t be shared with LLMs;
- prescribes rules for data handling;
- addresses copyright laws; and
- states the consequences of policy violations.
Based on your organization’s AI acceptable use policy, you can implement controls to enforce and monitor compliance with it.
2. Discover AI across your entire attack surface
When I talk with other CISOs about securing AI, they say discovering and detecting it is one of their biggest challenges. And I get it: it’s freakin’ everywhere, and a lot of it is really hard to find, in part because AI’s presence extends well beyond centrally-managed systems that are clearly visible.
As security leaders, we need to account for:
- AI assets, agents, plugins, browser extensions, and workloads, regardless of whether they’re...
- run in the cloud or on-premises
- internally or externally accessible
- approved or unapproved
- Forgotten AI test deployments
- AI tools embedded in endpoints and applications
- All AI software, libraries, models, and services
- Publicly exposed AI services, large language model (LLM) APIs, and AI chatbots on endpoints and in cloud applications.
Your existing data loss prevention (DLP), cloud access security brokers (CASB), and cloud security posture management (CSPM) solutions can provide a good initial starting point to discover AI assets. But holistic discovery requires specialized discovery tools because the non-deterministic nature of AI defies traditional rules-based security protections. It also requires unique detection capabilities to identify embedded AI tools and libraries and understand how AI systems work together to create exposure.
With a continuous and complete view of your enterprise’s AI usage, you’ll know precisely what workloads and infrastructure you need to secure, and you can begin to assess your organization’s overall AI exposure and prioritize specific remediation actions accordingly.
3. Secure AI workloads and agents
Because AI workloads are deeply interconnected and often severely misconfigured or over-permissioned, this step involves proactively securing the infrastructure where AI runs and hardening AI workloads before attackers can exploit them. For example, if developers at your organization are building AI-enabled applications in the cloud, you want to make sure that cloud infrastructure is secure.
Effective protections require capabilities to:
- Identify misconfigurations and risky configurations in cloud-based AI workloads.
- Detect vulnerabilities that could expose models, agents, data, or APIs to unauthorized access.
- Implement identity-driven exposure reduction, since AI relies heavily on non-human identities.
- Detect the overprivileged service accounts, roles, and machine identities that AI workflows use and strictly enforce least-privilege access.
- Understand potential attack pathways from AI assets and workloads that could impact business-critical systems or lead to sensitive data.
- Quickly isolate erratic or compromised AI agents into controlled environments to minimize potential breach impact.
I’ll dive deeper into this specific topic of securing AI workloads and agents in a follow-up blog I have planned. In the meantime, you can understand how identity weaknesses and infrastructure flaws combine to create critical exposure by conducting deep risk analysis of your AI stack. Based on these insights, you can provide actionable playbooks to your security teams to harden environments and ensure services run on secure, resilient, and validated architectures.
4. Assess AI usage and interactions
This step involves understanding how your employees interact with generative AI tools and autonomous agents to make sure employees aren’t violating your organization’s AI acceptable use policy. It’s critical to understand how data flows through all AI applications and determine where exposure is being created.
This requires granular visibility into:
- Who’s using AI
- For what purpose
- Where risky behavior and misuse originates
- What data employees are sharing through prompts, uploads, or automated actions
- Attempts to jailbreak approved AI tools and provide malicious prompts
Prompt-level visibility into employee AI use allows your security team to detect policy violations and reinforce safe AI behavior. It also allows your security team to identify any sensitive data, including intellectual property and PII, that employees or agents share with AI tools via prompts, uploads, and automated interactions and that could create exposure via an accidental leak. And it enables your security team to detect and respond to new AI-specific threats and misuse, like prompt injection attempts and other malicious instructions designed to manipulate AI systems.
Whether it’s discovering a malicious tool connected to a Microsoft Copilot agent or an employee misusing AI for inappropriate situations that the tool was not designed for (e.g., internal hiring decisions), you need to respond quickly to address the exposure and reinforce safe use.
5. Analyze AI security risks in the context of other exposures
To mitigate AI security risks, it’s not enough to detect in isolation unpatched vulnerabilities in AI software, weak configurations of AI systems, and overprivileged agents. After all, AI is becoming fully integrated into all of our apps, data, and business processes.
Mitigating AI security risks requires a unified, automated approach to gathering contextually-rich AI security data and correlating and analyzing it alongside other exposure data, such as a publicly exposed S3 bucket, a vulnerable laptop, or an orphaned account with admin privileges. At Tenable, we call this approach exposure management, and we see the industry quickly catching on. Exposure management allows you to proactively see how security weaknesses across your environment combine to create exposure: high-risk attack paths leading to your organization’s most sensitive systems and data.
Exposure management also surfaces risks with precise context — including the specific AI engine, user, and session — to enable high-fidelity issues management and rapid response. It’s about understanding how toxic combinations of risk coalesce to create business exposure. One medium-criticality misconfiguration in Amazon Bedrock could be connected to an unsecured LLM providing over-provisioned entitlement access for agents. Exposure management requires this complete understanding of the entire environment and attack surface.
Securing the future of AI innovation
The rapid integration of AI across the enterprise has created a complex, interconnected attack surface that traditional security controls are simply not equipped to handle. To close the AI exposure gap, security leaders must shift from a reactive, tool-centric approach to a proactive, unified strategy.
By implementing this five-step framework, you can create a resilient security posture that evolves alongside AI technology. Ultimately, effective exposure management isn't about slowing down innovation; it's about providing the necessary guardrails to ensure your organization can embrace the power of AI safely and confidently.

The post Security for AI: A strategic framework for closing the AI exposure gap appeared first on Security Boulevard.
Robert Huber
Source: Security Boulevard
Source Link: https://securityboulevard.com/2026/05/security-for-ai-a-strategic-framework-for-closing-the-ai-exposure-gap/