Mandate Secure Prompt Engineering Practices for All Developers
Mandate the use of secure prompt engineering practices as the first line of defense in the AI-assisted development lifecycle. The prompt is the new "shift-left"; a vague or naive prompt will predictably generate insecure code, while an explicit, security-aware prompt will produce safer, more robust outputs. This practice is a form of proactive risk mitigation, not just an output-optimization technique.
Develop and enforce a standard for "secure prompt engineering" that all developers must follow. This standard should require prompts to be explicit about security requirements, such as input validation, error handling, data minimization, and the avoidance of hardcoded secrets.
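As a hypothetical illustration of the difference such a standard makes, compare a naive prompt with one that states its security requirements explicitly (the wording below is illustrative, not a mandated form):

```text
Naive prompt:
  Write a function that saves a user's profile update to the database.

Security-aware prompt:
  Write a function that saves a user's profile update to the database.
  Validate and sanitize every user-supplied field, use parameterized
  queries, handle errors without leaking internal details, store only
  the fields the feature needs, and never hardcode secrets; read
  credentials from the environment.
```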
AI-generated code introduces "AI-native" vulnerabilities, the most common of which is the "omission of necessary security controls". This happens because the AI model is "unaware of the risk model behind the code" and optimizes for the "shortest path to a passing result," not for security. For example, a prompt for "user login code" will likely produce code that works but lacks protection against brute-force attacks.
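To make the login example concrete, here is a minimal Python sketch of the kind of control a naive prompt typically omits: throttling repeated failed attempts. The function names, thresholds, and in-memory store are illustrative assumptions; a production system would use shared, persistent state (e.g., Redis) and hardened credential checks.

```python
import time
from typing import Callable

# Illustrative thresholds, not prescriptive values.
MAX_ATTEMPTS = 5        # failed attempts tolerated per lockout window
LOCKOUT_SECONDS = 300   # length of the lockout window

# In-memory store of failure timestamps, keyed by username.
# An assumption for this sketch; real services need shared state.
_failed_attempts: dict[str, list[float]] = {}

def login(username: str, password: str,
          verify_credentials: Callable[[str, str], bool]) -> bool:
    """Check credentials, rejecting accounts with too many recent failures."""
    now = time.monotonic()
    recent = [t for t in _failed_attempts.get(username, [])
              if now - t < LOCKOUT_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _failed_attempts[username] = recent
        raise PermissionError("account temporarily locked; try again later")
    if verify_credentials(username, password):
        _failed_attempts.pop(username, None)  # success resets the counter
        return True
    recent.append(now)
    _failed_attempts[username] = recent
    return False
```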
This practice should be mandated for all developers as soon as they are given access to AI coding assistants. It is a foundational skill that should be taught as part of "Level 2: Use and Apply AI" (from Recommendation 13). Apply this rigor to any prompt that generates new functionality, especially for code handling user input, authentication, data access, or API endpoints. This is particularly critical when working with brownfield code (pain-point-06-brownfield-penalty), where the AI lacks context on existing security patterns.
- Develop a Secure Prompt Template: Create a template for common tasks and store it in the shared prompt library (process-optimization/structure-your-ai-prompt-library). The template should include the following sections; a sketch of a filled-in template follows this list.
  - Context: e.g., "This code is for a public-facing API endpoint."
  - Security Requirements: e.g., "Must validate all user input per OWASP Top 10. Must use parameterized queries. Must not contain hardcoded secrets."
  - Data Handling: e.g., "Prioritize data minimization. Do not log PII."
  - Dependencies: e.g., "Use only approved libraries from our internal manifest."
- Mandate Task Decomposition: Train developers on the "breakdown" method. For any task larger than a single function, the developer must first instruct the AI to produce a plan (e.g., in a plan.md file) and review that plan for hallucinations or security oversights before instructing the AI to generate any code.
- Enforce Approval Gates: Teach developers to include "approval gates" in their prompts, such as "Generate the plan for refactoring this service, then stop and ask for my approval before modifying any files". This keeps the human "in control" and is the best defense against pain-point-03-hallucinated-capabilities.
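For reference, here is a minimal sketch of what such a stored template might look like once filled in. The headings mirror the four sections above; the specific wording is illustrative, not a mandated form.

```markdown
## Context
This code is for a public-facing API endpoint that accepts user-submitted data.

## Security Requirements
- Validate all user input per the OWASP Top 10.
- Use parameterized queries for all database access.
- Do not include hardcoded secrets; read credentials from the environment.

## Data Handling
- Prioritize data minimization: collect only the fields the feature needs.
- Do not log PII.

## Dependencies
- Use only approved libraries from our internal manifest.
```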
Sources:
- Understanding Security Risks in AI-Generated Code | CSA - https://cloudsecurityalliance.org/blog/2025/07/09/understanding-security-risks-in-ai-generated-code ("AI-generated code introduces 'AI-native' vulnerabilities, the most common of which is the 'omission of necessary security controls'.")
- Security-Focused Guide for AI Code Assistant Instructions | OpenSSF - https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions ("Key principles for secure prompts include treating all inputs as untrusted, explicitly forbidding hardcoded secrets, and prioritizing data minimization.")
- Five Best Practices for Using AI Coding Assistants | Google Cloud Blog - https://cloud.google.com/blog/topics/developers-practitioners/five-best-practices-for-using-ai-coding-assistants ("Adopting a 'task decomposition' pattern is critical: instead of giving the AI complex, high-level assignments, developers should 'break down... into several manageable components' and 'instruct the AI to ask for your approval before executing on new plan milestones'.")