Identify critical security gaps in your MCP servers, AI agents, and multi-agent workflows with the SAFE Framework.
Preview the questions below. Get instant access to complete your assessment.
Includes questionnaire + scanner agent. Cancel anytime.
This questionnaire is based on the SAFE Framework, a comprehensive security specification for AI agents initiated by Astha.ai, now part of the Linux Foundation and OpenID Foundation.
"The SAFE Framework provides critical guidance for securing AI agent interactions. This assessment tool makes it accessible for organizations to evaluate and improve their security posture based on industry-vetted attack techniques and mitigations."

Distinguished leader in open-source and cloud-native communities with 10+ years of Kubernetes and Docker experience. Co-authored NIST SP 800-204D on software supply chain security (used by Department of Defense). Also co-authored the AI Safety Standard Guidance (CTA-2114).
Our assessment covers all 10 critical security categories from the SAFE-MCP TOP 10 framework, ensuring your MCP servers are protected against the most common attack vectors.
Malicious prompt injection attacks
How do you defend your MCP-based agents against prompt injection when untrusted content (user input, web/file/tool output, or multimodal data) flows through MCP tools into the model that chooses tools?
How do you prevent MCP-aware agents from returning covert instructions, manipulative flows, or disinformation in their responses or tool outputs that downstream systems or users might blindly follow?
SAFE Framework adapts the proven MITRE ATT&CK methodology specifically for AI agent environments, providing a structured approach to understanding and mitigating security risks.
The framework covers 14 tactic categories and 80+ techniques, each with actionable mitigation and detection guidance.
Adversaries may poison or manipulate tool definitions to execute unauthorized actions.
Malicious prompts crafted to bypass safety controls and execute unintended commands.
Adversaries may consume excessive computational resources to degrade or deny service availability.
Industry experts recognize SAFE Framework as the standard for AI agent security
"SAFE Framework provides the critical security framework the AI agent ecosystem desperately needs. As AI adoption accelerates, having standardized attack patterns and mitigations is essential for building trustworthy AI systems."
"The SAFE Framework aligns perfectly with OpenSSF's mission to secure the open source ecosystem. By documenting real-world attack techniques and providing actionable mitigations, SAFE empowers developers to build secure AI agents from day one."