Get a template for an AI coding acceptable use policy with security controls and a list of 25 security questions to ask software developers and “citizen developers” about their AI use. Mitigate the security risks of vibe coding and using AI in software development with Tenable One.
Key takeaways:
- The vast majority of your developers are embracing agentic AI, machine learning, and large language models (LLMs) for code completion and generation, automated testing, code reviews and analysis, and automated documentation, among other use cases.
- “Citizen developers” — business users with little to no coding experience and even less security experience — are also using agents, LLMs, and low-code/no-code (LCNC) platforms to build and deploy software without any security checks.
- While AI coding can be a gateway to innovation and efficiency, it also introduces significant cybersecurity risks. Know the right questions to ask your developers to understand the full scope of AI usage and how it’s reshaping the attack surface.
- Create an AI acceptable use policy (AI AUP) for business users, developers, and DevOps teams; implement training on cybersecurity best practices; and deploy an exposure management platform like Tenable One to reduce the risks of vibe coding, citizen developers, and using AI as part of the SDLC.
Your organization’s software developers and DevOps teams are using agentic AI, LLMs, and machine learning to do their jobs faster and more efficiently, whether you like it or not. In fact, 81% of developers surveyed by CodeSignal say they’re using AI for development, and some large tech companies mandate the use of AI for their developers.
In the most extreme cases, developers and non-developers (so-called “citizen developers”) are resorting to vibe coding, where they tell an agent or an LLM what they want the software to do, the LLM or agent builds it, and the “developer” takes the AI code and puts it into production without any vetting or review.
AI-generated code created for citizen developers in particular is prone to misconfigurations, excessive data permissions, and weak authentication.
As you build and implement your organization’s AI acceptable use policy, it’s important to familiarize yourself with the various developer and citizen developer use cases, which can differ greatly from how other employees are leveraging AI. In this blog, learn about:
- The top five AI coding use cases — and the cybersecurity risks they introduce
- Key questions to ask developers and business users about their AI coding usage to gauge risk
- An example of an AI coding governance policy
What are the top 5 uses of AI, LLMs, and machine learning in code creation and development?
1. AI-powered code completion and generation
Integrated development environments (IDEs) incorporate AI coding assistants to provide real-time suggestions. These can include auto-completing the next few words in a line, generating entire functions or code blocks (vibe coding), or even creating boilerplate code based on comments or partial code structure.
Security risks of AI-powered code completion and generation: These practices introduce the risk of insecure code suggestions. AI models trained on vulnerable code often replicate insecure patterns in their suggestions. They also raise concerns about intellectual property leaks if developers are using AI tools that may share proprietary code snippets as training data or in suggestions to other users (depending on the AI tool's license and configuration).
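To make the "insecure pattern replication" risk concrete, here is a minimal, illustrative Python sketch (not from the original article) of the classic pattern assistants trained on vulnerable code tend to reproduce — string-interpolated SQL — next to the parameterized form a reviewer should insist on:

```python
import sqlite3

def find_user_insecure(conn, username):
    # Pattern frequently replicated by code assistants trained on vulnerable
    # examples: string interpolation lets an input like
    # "x' OR '1'='1" rewrite the query and dump every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The point is not that AI always emits the first form, but that a suggestion can look syntactically clean while carrying a well-known injection pattern, which is why acceptance of AI completions still requires review.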
2. Automated testing and test case generation
Developers use AI and ML tools to analyze existing code, documentation, and user interaction patterns to automatically generate unit tests, integration tests, and even security tests. One famous example is Anthropic using its Claude Opus 4.6 model to discover 500 high-severity vulnerabilities in open source codebases.
AI tools can also prioritize which existing tests to run based on changes made to the code, significantly speeding up the continuous integration (CI) process.
Security risks of using AI to test software: While helpful, the quality of AI-generated tests can vary. An LLM may fail to generate tests that cover subtle logic flaws or security vulnerabilities, leading to a false sense of security. Human review of security-critical tests remains essential. Additionally, a significant drawback of using an LLM to discover security vulnerabilities in software is that it does so without any meaningful prioritization, resulting in even more noise for security and DevSecOps teams.
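A short, hypothetical Python sketch of the "false sense of security" problem: the function below has a missing authorization check, and a typical machine-generated happy-path test passes without ever probing it. The function and test names are illustrative, not from any real codebase.

```python
def delete_document(doc_store, doc_id, user):
    # Bug: ownership is never checked, so any user can delete any document.
    doc = doc_store.pop(doc_id, None)
    return doc is not None

# A typical AI-generated happy-path test: it passes, and coverage metrics
# look fine, yet it never asks whether a non-owner can delete the document.
def test_delete_existing_document():
    store = {"doc1": {"owner": "alice"}}
    assert delete_document(store, "doc1", user="alice")

# The security-relevant case a human reviewer should add -- this one would
# fail against the buggy implementation above and expose the flaw.
def test_non_owner_cannot_delete():
    store = {"doc1": {"owner": "alice"}}
    assert not delete_document(store, "doc1", user="mallory")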
3. Code review, analysis, and refactoring
LLMs can review pull requests by summarizing changes, identifying potential bugs, checking code against organizational style guides, and suggesting optimizations or refactoring. Developers also use them to explain complex or legacy code in natural language, reducing onboarding time and maintenance effort.
Security risks of using AI for code review, analysis, and refactoring: AI reviewers configured for static application security testing (SAST) will scan for known security vulnerabilities and suggest fixes. However, they might miss contextual vulnerabilities specific to your application architecture or suggest remediation that, while fixing one issue, introduces a new, subtle one.
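As an illustration of a "fix" that quietly introduces a new, subtle issue, consider this hedged Python sketch (the scenario is hypothetical): an AI refactor simplifies a token check from a constant-time comparison to plain `==`. Every functional test still passes, but the refactor reintroduces a timing side channel.

```python
import hmac

def check_token_safe(supplied: str, expected: str) -> bool:
    # Constant-time comparison resists timing side channels.
    return hmac.compare_digest(supplied, expected)

def check_token_after_ai_refactor(supplied: str, expected: str) -> bool:
    # An AI "simplification" to == keeps every test green but
    # short-circuits on the first mismatched character, leaking timing
    # information an attacker can use to guess the token byte by byte.
    return supplied == expected
```

Both functions return identical booleans for any input, which is exactly why a purely functional review — human or AI — will not catch the regression; the property that changed is contextual, not behavioral.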
4. Automated documentation and summarization
Developers use LLMs to automatically generate function docstrings, API reference material, and README files from source code. In a DevOps context, these tools can also summarize long log files or incident reports to quickly identify root causes and patterns.
Security risks of using AI to create documentation: AI-generated documentation can sometimes be inaccurate or incomplete, especially for complex security features or custom encryption logic. Relying solely on these tools for critical security documentation can lead to misunderstandings and misconfigurations.
5. Natural language-to-infrastructure generation
DevOps teams use LLMs to translate natural language requests (e.g., "Deploy a three-tier web application using AWS, with a PostgreSQL database and a load balancer") into working infrastructure-as-code (IaC) configurations (e.g., Terraform or CloudFormation scripts). This significantly accelerates the provisioning of environments.
Security risks of using AI for natural language-to-infrastructure generation: This is a major area of risk. LLMs may generate IaC scripts that contain insecure defaults (e.g., overly permissive firewall rules, unencrypted storage, or insecure port configurations) if not explicitly prompted otherwise. Integrate mandatory security checks and scanning tools into the continuous integration/continuous delivery (CI/CD) pipeline to validate all AI-generated IaC.
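A minimal sketch of what such a pipeline gate might look like, assuming Terraform-style HCL as the input. The patterns below are illustrative placeholders — a real pipeline should rely on a dedicated IaC scanner or policy-as-code tooling rather than regexes — but the sketch shows the shape of a mandatory check that fails the build when AI-generated IaC contains common insecure defaults:

```python
import re

# Illustrative checks only; real pipelines should use purpose-built
# IaC scanners rather than regex matching.
IAC_CHECKS = [
    (re.compile(r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"'), "ingress open to the world"),
    (re.compile(r'encrypted\s*=\s*false'), "unencrypted storage volume"),
    (re.compile(r'publicly_accessible\s*=\s*true'), "database publicly accessible"),
]

def scan_iac(hcl_text: str) -> list[str]:
    """Return findings for an AI-generated Terraform snippet; an empty
    list means the gate passes."""
    return [msg for pattern, msg in IAC_CHECKS if pattern.search(hcl_text)]
```

Wiring a check like this into CI as a required step means an LLM's insecure defaults are caught before provisioning, rather than discovered in production.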
As developer and coding use cases get augmented with agentic capabilities, leading to the semi-autonomous or fully autonomous execution of software development and testing tasks, the security situation is only going to get worse.
What should I ask my developers and DevOps teams about their AI usage?
The primary areas of AI security risk for CISOs are:
- Agentic coding, vibe coding, and citizen developers
- Code integrity and security debt
- Legal and supply chain
- Data privacy and IP
Below are key questions to ask your DevOps teams to help you assess your organization’s risk in each of these areas.
1. Questions to ask developers to assess the risks of agentic coding and vibe coding
According to an October 2025 McKinsey report, business leaders are rushing to embrace agentic AI. For developers, tools like OpenClaw, Cursor, and GitHub Copilot Workspace can execute commands without human intervention, raising concerns about the permissions these tools hold.
Questions to ask your developers:
- What identity is the AI acting as?
- Is the AI tool running in a sandboxed environment, or can it read the .env files on a developer's machine?
- Is there a human in the loop for every deployment?
- Can the AI autonomously push code to a repository, or is a human review mandatory?
- Is AI-generated code deployed straight to production?
- What kind of testing is performed on AI-generated code?
2. Questions to ask developers about how they handle code integrity and security debt in AI coding
Like much of the output from AI tools, AI-generated code is often functional but flawed. It can reintroduce long-standing vulnerability classes like SQL injection or hardcoded secrets.
Questions to ask your developers:
- Are we tagging AI-generated code?
- How do we identify which parts of the codebase were written by AI for future audits or incident response?
- Has our security-to-code ratio changed?
- If AI is helping us write code 20% faster, are we also increasing our security scanning capacity by 20% to keep up?
- How are we handling hallucinated libraries?
- What is the process for verifying that the packages or APIs the AI suggests actually exist and aren't malicious typosquatting packages?
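One lightweight way to operationalize the hallucinated-library question is a dependency vetting step. The sketch below is a hypothetical Python check against an internal allowlist: exact matches are approved, near-misses (one or two characters off an approved name — the classic typosquat or hallucination shape) are flagged as suspicious, and everything else requires manual verification. The allowlist contents are placeholders.

```python
import difflib

# Placeholder allowlist; in practice this would come from your internal
# package registry or an approved-dependencies manifest.
APPROVED_PACKAGES = {"requests", "numpy", "sqlalchemy", "cryptography"}

def vet_dependency(name: str) -> str:
    """Classify a package an AI assistant suggested."""
    if name in APPROVED_PACKAGES:
        return "approved"
    # A near-match to an approved name is the typosquat/hallucination shape:
    # e.g., "requestes" is one edit away from "requests".
    near = difflib.get_close_matches(name, APPROVED_PACKAGES, n=1, cutoff=0.85)
    if near:
        return f"suspicious: close to approved package '{near[0]}'"
    return "unknown: verify the package exists and review it before use"
```

A check like this can run in CI on every AI-assisted pull request, so a package the model invented (or that an attacker registered to catch such inventions) never reaches a build.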
3. Questions to help you assess the legal and supply chain risks of AI coding
Using AI-generated code can introduce legal risks such as violating copyrights and licenses. It also raises the stakes for supply chain risk, potentially passing flaws and vulnerabilities to your customers.
Questions to ask your developers:
- Are we accidentally violating GPL/AGPL licenses?
- Does the AI tool you’re using have a copyleft filter to prevent it from suggesting code that would require us to open-source our own product?
- Can the AI vendor indemnify us against copyright claims?
- If the AI produces code that is a direct copy of a copyrighted work, who is legally liable?
- Do we have a software bill of materials (SBOM) for AI-assisted builds?
- Can we prove to our customers exactly what went into the software we sold them?
4. Questions to help you assess the data privacy and IP considerations of AI coding
Are your developers accidentally leaking your company's proprietary code, API keys, or customer data into public AI models? You can mitigate some of this risk by restricting all employees to a closed enterprise-grade platform like ChatGPT Enterprise. Even so, it’s important to be clear about the scope of any tools.
Questions to ask your developers:
- What tools are you using? Are they approved?
- Is the vendor using our proprietary code to train their global model?
- Have we opted out of having our data used for model improvement?
- Are you using personal ChatGPT accounts or unvetted browser extensions instead of enterprise-sanctioned tools?
- What happens to the data in the prompt history?
- Who has access to it?
- Does the vendor delete it after a certain period?
What should you include in an AI coding acceptable use policy?
Here is a brief overview of key AI governance and AI accountability policies to consider:
| Policy area | Specific policy | Control implementation |
|---|---|---|
| Monitoring | Developers understand their use of AI is monitored for compliance with the organization’s broader AI acceptable use policy, including use of approved and unapproved tools, as well as for data leaks, secrets detection, hallucinated libraries, malicious prompts, etc. | Implement a platform capable of discovering AI in its various forms (agents, plugins, extensions, LLMs, etc.) across the entire organization — internal and external, on-prem and cloud, approved and unapproved — delivering a complete, risk-aware view of where AI operates, how it is connected, and where exposure is created. |
| Developer accountability | The developer who reviews, modifies, and commits the AI-generated code is fully accountable for its security, compliance, and legal standing (including licensing). | Incorporate this principle into your security awareness training and update your secure software development lifecycle (SSDLC) documents to reflect AI usage as a new form of third-party input. |
| Compliance and licensing | Scrutinize AI-generated code for potential open-source license infringement (a risk when AI models reproduce training code). | Use software composition analysis (SCA) tools and human legal review to check any significant AI-generated code block against your organization’s open-source licensing policies. |
| Training and awareness | All developers must complete annual training on the specific security risks of LLMs, including hallucination, prompt injection, and data leaks. | Create a modular training program focused on AI-specific secure coding patterns and the risks of developer overconfidence in AI-generated code. |
| Vibe coding | Decide if your organization will allow vibe coding, and if so, for what use cases. For instance, you may opt to allow vibe coding for the development of personal productivity scripts and wireframes but not for production systems, customer-facing systems, or any applications that touch sensitive customer, employee, or intellectual property data. | Consider implementing controls for environmental isolation, AI-generated test coverage, traceability, and verification gating. |
| Agentic AI policy | Any use of agentic AI (where an LLM can perform multi-step actions autonomously, like creating a pull request or deploying IaC) must have strict, predefined guardrails, including requirements for human-in-the-loop (HITL), and run in a sandboxed, low-privilege environment. | Require explicit security architecture review and approval before introducing any autonomous AI agent into the CI/CD pipeline. |
Source: Tenable, January 2026
Keep your innovation going with exposure management
As developers and business users embrace AI coding, you don't have to make them choose between innovation and security. Establishing an AI acceptable use policy for developers, providing them with sanctioned and secure AI platforms to use, educating them about cybersecurity best practices, and monitoring their use of AI tools will reduce your organization’s risk.
Tenable AI Exposure continuously discovers AI across your entire organization — approved and unapproved, internal and external, on-premises and cloud — to deliver a complete, risk-aware view of:
- where developers and others are using approved or unapproved AI tools;
- what they’re using AI for (e.g., prompt-level visibility);
- whether they’re intentionally or accidentally leaking proprietary code, API keys, customer data, or other sensitive information into public AI models;
- how AI applications, agents, plugins, and extensions connect to your organization’s systems and data to create real risk.
In short, Tenable One correlates the relationships among AI applications, infrastructure, identities, agents, and data to highlight and prioritize the AI exposures that matter most for remediation.
Use Tenable One to:
- Discover AI running inside your organization, whether it’s approved, unapproved, or embedded. Maintain a unified, up-to-date view of AI software, libraries, models, and services operating across your organization. See which AI components are outdated, vulnerable, or misconfigured so you can prioritize remediation based on real risk.
- Protect AI workloads and agents. Identify and prioritize risky configurations in cloud-based AI workloads and model environments that could expose models, agents, data, or APIs.
- Detect and respond to prompt injection attempts, jailbreak behavior, and malicious instructions designed to manipulate AI systems. Isolate erratic and misbehaving agents.
- Govern AI usage. Identify sensitive data, intellectual property, and PII being shared with AI platforms and agents. Surface AI-related risks with precise context, including the AI engine, user, and specific interaction or session involved, enabling rapid understanding and response.
Unlike point AI security tools that surface isolated findings, Tenable One correlates AI, infrastructure, agents, and data exposure into a unified view, so you can reduce AI risk across all environments, even as your developers leverage AI for scale and productivity.
Learn more
- See how Tenable AI Exposure can help you uncover AI coding risks and remediate issues without slowing innovation.

The post Security for AI: A guide to managing the risks of vibe coding and AI in software development appeared first on Security Boulevard.
Tomer Y. Avni
Source: Security Boulevard
Source Link: https://securityboulevard.com/2026/03/security-for-ai-a-guide-to-managing-the-risks-of-vibe-coding-and-ai-in-software-development/