The EU's AI regulation strategy faces a fundamental enforcement problem: it depends on the cooperation of the very companies it is trying to oversee. OpenAI has granted European regulators direct access to its GPT-5.5 Cyber model for security review, with discussions on the arrangement already underway. Anthropic, by contrast, remains resistant: after four to five regulatory meetings about its Mythos model, it still refuses to give Brussels officials lab access.
This asymmetry exposes a weakness built into Europe's regulatory framework. The EU AI Act requires high-risk systems to undergo review, but its enforcement bodies lack the teeth to compel transparency. When companies withhold access, regulators have little recourse beyond public pressure or legal threats that take years to resolve.
OpenAI's willingness to cooperate appears strategic. Providing early access to GPT-5.5 Cyber signals compliance to Brussels, potentially heading off friction as the company scales operations across the European market. The posture gives OpenAI regulatory cover and may strengthen its position as the emerging rules take shape.
Anthropic's stonewalling reflects a different calculation. By sitting through four to five meetings without granting access, the company signals that it views the current framework as either toothless or disadvantageous. Anthropic may be testing whether the EU will actually enforce its own rules or simply accept repeated "no" answers.
The real issue: Europe has written regulations that assume voluntary disclosure. The AI Act doesn't grant regulators explicit power to demand internal access to models, training data, or safety evaluations. Companies can delay indefinitely through negotiation or attrition.
This creates perverse incentives. Smaller or European-founded AI companies face stricter enforcement, while American incumbents like OpenAI and Anthropic can pick and choose which requests to honor. OpenAI's compliance may reflect confidence that even full transparency won't halt its growth. Anthropic's resistance suggests the opposite calculation: that stonewalling carries no real cost.