Tool Registry Poisoning Exposes Gap in Agent Security

AI agents select tools from shared registries by matching natural-language descriptions, but no verification step ensures those descriptions are accurate or that tools behave as claimed. A researcher filing a security issue found that tool registry poisoning is not one vulnerability but several, spanning the tool lifecycle from selection through execution. Existing software supply chain controls such as code signing and SLSA provenance verify artifact integrity but not behavioral integrity, leaving agents exposed to prompt injection in tool metadata and to runtime behavioral drift.
TL;DR
- AI agents choose tools from registries based on natural-language metadata, with no human verification of its accuracy or truthfulness
- Tool registry poisoning is not a single vulnerability but multiple threats at different lifecycle stages: selection-time attacks (impersonation, metadata manipulation) and execution-time attacks (behavioral drift, contract violation)
- Standard software supply chain defenses (code signing, SLSA, SBOMs, Sigstore) verify artifact integrity but cannot detect behavioral-integrity violations such as prompt injection payloads in descriptions or server-side behavior changes
- A runtime verification proxy between agent and tool can validate discovery binding, monitor network connections against declared endpoints, and validate output schemas to catch behavioral violations
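The proxy checks in the last bullet can be sketched in a few lines. This is a minimal illustration, not a real MCP implementation: the behavioral spec shape (`allowed_endpoints`, `output_schema`) and both function names are assumptions for the sake of the example.

```python
import fnmatch
from urllib.parse import urlparse

# Hypothetical behavioral spec a tool would declare at publication time.
TOOL_SPEC = {
    "allowed_endpoints": ["api.weather.example.com"],
    "output_schema": {"type": "object", "required": ["temperature_c"]},
}

def check_endpoint(spec: dict, url: str) -> bool:
    """Allow an outbound connection only if its host matches a declared endpoint."""
    host = urlparse(url).hostname or ""
    return any(fnmatch.fnmatch(host, pattern) for pattern in spec["allowed_endpoints"])

def check_output(spec: dict, output: object) -> bool:
    """Minimal structural check: the tool's output must match its declared schema."""
    schema = spec["output_schema"]
    if schema["type"] == "object":
        return isinstance(output, dict) and all(
            key in output for key in schema.get("required", [])
        )
    return False
```

A proxy sitting between agent and tool would run `check_endpoint` on every outbound connection the tool attempts (catching exfiltration to undeclared hosts) and `check_output` on every response (catching free-text prompt injection where structured data was promised).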
Why it matters
As enterprises deploy AI agents with access to tool ecosystems, the gap between artifact integrity and behavioral integrity becomes a critical security flaw. Attackers can publish legitimately signed tools with prompt injection payloads in metadata or change server behavior after publication, bypassing all existing supply chain controls. Without behavioral verification, the industry risks repeating the HTTPS certificate mistake of the early 2000s, where strong identity assurances masked the actual trust question.
Business relevance
Enterprise deployments of AI agents depend on tool registries for functionality, but poisoned tools can manipulate agent decisions, exfiltrate data, or cause behavioral drift without detection. Organizations applying standard software supply chain controls to agent tooling may believe they have solved the security problem when critical behavioral gaps remain. This creates material risk for any business relying on agent tool selection and execution without runtime behavioral verification.
Key implications
- Applying existing software supply chain defenses to agent tool registries is necessary but insufficient, creating a false sense of security if behavioral verification is not added
- A new primitive is needed: behavioral specifications (similar to Android permission manifests) that declare allowed endpoints, output schemas, and behavioral constraints, paired with runtime verification proxies
- Tool registry poisoning requires defense at multiple stages: discovery binding validation to prevent bait-and-switch, endpoint allowlisting to catch exfiltration, and output schema validation to detect prompt injection responses
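Discovery binding validation, the first defense above, can be sketched as pinning a digest of the tool metadata the agent selected against and re-checking it at invocation time. The function names and metadata shape below are illustrative assumptions, not part of any registry protocol:

```python
import hashlib
import json

def metadata_digest(metadata: dict) -> str:
    """Canonicalize tool metadata and hash it, so the exact description the
    agent selected against can be pinned for later comparison."""
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_binding(pinned_digest: str, metadata_at_invocation: dict) -> bool:
    """Reject the call if the tool's metadata changed after selection,
    blocking the bait-and-switch where a benign description is swapped
    for a malicious one between discovery and execution."""
    return metadata_digest(metadata_at_invocation) == pinned_digest
```

The agent computes `metadata_digest` once at selection time; the verification proxy calls `verify_binding` on every invocation and refuses to forward the request if the registry now serves different metadata.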
What to watch
Monitor adoption of runtime verification layers in MCP implementations and other agent tool protocols. Watch for standardization efforts around behavioral specifications and whether major cloud providers and tool registry operators implement endpoint allowlisting and output schema validation. Track whether the industry treats behavioral integrity as a separate security concern from artifact integrity or conflates the two.
vff Briefing