Hi @Klavis-AI team,
I recently ran a security audit on klavis as part of research on MCP server security posture across the ecosystem.
Found a couple of items worth flagging:
1. Tool description injection risk
The platform's tool descriptions aren't validated against adversarial prompt patterns. As an integration platform running MCP tools at scale, a single poisoned tool description can propagate across all connected agents — making this a higher-severity issue than in single-server deployments.
2. Missing output sanitization
Tool outputs are forwarded to model context without scanning for embedded injection patterns. At scale, this means one malicious upstream API response can affect every agent consuming that tool.
Both are in a full audit report — 8-page PDF with CVSS ratings, EU AI Act mapping, and remediation steps — for $29 at luciferforge.github.io/mcp-security-audit.
Demo report: https://luciferforge.github.io/mcp-audit-reports/
— Lucifer / LuciferForge Security
Hi @Klavis-AI team,
I recently ran a security audit on klavis as part of research on MCP server security posture across the ecosystem.
Found a couple of items worth flagging:
1. Tool description injection risk
The platform's tool descriptions aren't validated against adversarial prompt patterns. As an integration platform running MCP tools at scale, a single poisoned tool description can propagate across all connected agents — making this a higher-severity issue than in single-server deployments.
2. Missing output sanitization
Tool outputs are forwarded to model context without scanning for embedded injection patterns. At scale, this means one malicious upstream API response can affect every agent consuming that tool.
Both are in a full audit report — 8-page PDF with CVSS ratings, EU AI Act mapping, and remediation steps — for $29 at luciferforge.github.io/mcp-security-audit.
Demo report: https://luciferforge.github.io/mcp-audit-reports/
— Lucifer / LuciferForge Security