Researchers at OX Security discovered an architectural flaw affecting all 200,000 Model Context Protocol (MCP) servers. The MCP standard, created by Anthropic, allows AI agents to communicate with external tools and data sources. OpenAI adopted it in March 2025, followed by Google DeepMind. Anthropic donated MCP to the Linux Foundation in December 2025, and the protocol has since generated 150 million downloads.

The vulnerability enables unauthenticated command execution on systems running MCP servers. OX Security's four researchers identified the flaw as a systemic problem inherent to how MCP handles communication between AI models and tools. The issue affects every server using the standard, creating widespread exposure across the AI supply chain.
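To see why unauthenticated access to an MCP server is dangerous, consider the protocol's message format: MCP tool invocations are JSON-RPC 2.0 requests using the `tools/call` method. The sketch below builds such a request against a hypothetical server that exposes a shell-like tool; the tool name `run_command` and its argument are invented for illustration. A server that accepts this message without verifying who sent it would execute whatever command arrives.

```python
import json

# Hypothetical illustration: an MCP tool invocation is a JSON-RPC 2.0
# "tools/call" request. The tool name "run_command" and the command string
# are assumptions for this example, not details from the OX Security report.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_command",
        "arguments": {"command": "cat /etc/passwd"},
    },
}

# An unauthenticated MCP server would deserialize and act on this payload
# regardless of who delivered it.
payload = json.dumps(request)
print(payload)
```

The point of the sketch is that nothing in the message itself identifies or authenticates the caller; any trust decision has to be made by the transport layer around it.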

Anthropic responded to the findings by characterizing the behavior as a feature rather than a bug. The company maintains that proper implementation and deployment practices can mitigate the risk. However, the breadth of affected servers and the ease of exploitation raise concerns about real-world security in production environments.
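The "proper deployment practices" Anthropic points to would typically mean gating tool calls behind caller authentication. As one minimal sketch, not a prescription from the article or the MCP specification, an HTTP-exposed server could refuse any request lacking a valid bearer token; the token value and header layout here are assumptions for the example.

```python
import hmac

# Illustrative mitigation sketch: reject MCP requests that lack a valid
# bearer token. EXPECTED_TOKEN is a placeholder deployment secret.
EXPECTED_TOKEN = "s3cret-deployment-token"

def is_authorized(headers: dict) -> bool:
    """Return True only if the Authorization header carries the expected token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    # Constant-time comparison avoids leaking the token via timing differences.
    return hmac.compare_digest(auth[len("Bearer "):], EXPECTED_TOKEN)

print(is_authorized({"Authorization": "Bearer s3cret-deployment-token"}))  # True
print(is_authorized({}))  # False
```

Even a check this simple changes the threat model: exploitation then requires a leaked credential rather than mere network reachability, which is the gap between "feature" and "vulnerability" in this dispute.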

The discovery highlights tensions between open standards adoption and security hardening. As enterprises integrate MCP into their AI infrastructure, the architectural flaw represents a foundational challenge that requires coordinated fixes across all implementations.