OpenAI Opens GPT-5.5-Cyber to EU Defenders While Anthropic Keeps Mythos Behind the Wall
On May 11, OpenAI granted vetted European cybersecurity teams preview access to GPT-5.5-Cyber. Anthropic has not extended similar access for Mythos. The split is shaping the EU AI Act compliance race ahead of the August 2 deadline.
On Monday, May 11, OpenAI announced the EU Cyber Action Plan, a program that grants vetted European cybersecurity teams preview access to GPT-5.5-Cyber — a specialized variant of GPT-5.5 calibrated for malware analysis, vulnerability identification, reverse engineering, and patch validation. The same announcement noted that Anthropic's comparable Mythos model has not been extended to the EU under similar terms. The European Commission, asked about the discrepancy, characterized talks with Anthropic as being "at a different stage." That phrase is now the most important four words in transatlantic AI regulation, because the EU AI Act becomes fully applicable on August 2, 2026, and the two largest frontier labs are positioning themselves in opposite ways on the day the rules start to bite.
TL;DR
- OpenAI announced the EU Cyber Action Plan on May 11, granting vetted European cybersecurity teams preview access to GPT-5.5-Cyber for defensive security work. The model is the same GPT-5.5 base with safety filters calibrated for authorized offensive testing.
- Anthropic has not extended similar access for its Mythos cybersecurity model. The European Commission described talks with Anthropic as being "at a different stage."
- U.K. AI Security Institute testing found GPT-5.5 completed a 32-step simulated corporate cyberattack in 2 of 10 runs; Mythos completed it in 3 of 10. The benchmark drove EU urgency.
- The EU AI Act becomes fully applicable on August 2, 2026, with transparency obligations for high-risk general-purpose AI systems. OpenAI's plan reads as goodwill compliance positioning; Anthropic's non-participation creates comparative regulatory exposure.
What the EU Cyber Action Plan actually grants
Per CNBC's coverage of the May 11 announcement, the EU Cyber Action Plan establishes three things. First, it gives vetted EU member-state cybersecurity teams — initially Germany's BSI, France's ANSSI, and the European Union Agency for Cybersecurity (ENISA) — preview API access to GPT-5.5-Cyber for use cases including malware reverse engineering, zero-day vulnerability hunting, and patch validation. Second, it grants the European Commission a structured visibility window into the model's capabilities, evaluation results, and incident reporting. Third, it creates a precedent — and a template — for how OpenAI will engage with EU regulators under the AI Act's transparency framework.
The Cyber variant itself is structurally notable. Axios's prior coverage from May 7 confirmed that GPT-5.5-Cyber is not a separately trained model. It is the same base GPT-5.5 with safety filters calibrated for authorized security work — permitting outputs that the consumer GPT-5.5 would refuse or hedge, while preserving block lists for actual destructive payloads. The design choice matters because it means OpenAI is not introducing a new model into the EU under AI Act scope; it is broadening permissioned use of an existing model. That distinction meaningfully reduces the regulatory friction of the rollout.
What OpenAI gets in return is the diplomatically valuable position of being the cooperative lab. The European Commission has been publicly clear since the AI Act passed that voluntary engagement before August 2 would be "a meaningful factor" in how enforcement is calibrated post-deadline. OpenAI's EU Cyber Action Plan is the first concrete deliverable from a frontier lab under that framework. The optics matter, and the optics have been deliberate: OpenAI's leadership has cultivated a visible presence in Brussels through 2026, while Anthropic has engaged the EU on a less public cadence.
Anthropic's Mythos position and what "different stage" means
Mythos is Anthropic's analog to GPT-5.5-Cyber — a Claude variant calibrated for defensive cybersecurity work. The U.K. AI Security Institute's evaluation, referenced in CNBC's reporting, found Mythos completed a simulated 32-step corporate cyberattack in 3 of 10 test runs versus GPT-5.5's 2 of 10. By the offensive-capability metric the EU is most focused on, Mythos is the more capable model. That makes Anthropic's reluctance to extend EU access the harder position to read.
Three interpretations are circulating in Brussels and London. The first is that Anthropic's position is principled and unchanged from its public posture: the company has consistently said that high-capability cybersecurity models should be released only to controlled environments with verified security operations centers, and that the EU's current oversight framework does not yet meet that bar. Under this reading, Anthropic eventually extends access once the AI Act's high-risk provisions take effect August 2 and the verification infrastructure exists.
The second interpretation is commercial. Anthropic has spent May 2026 stitching together compute (the $1.8B Akamai deal, the 300MW SpaceX Colossus capacity) and managing 80x Q1 revenue growth, a substantial share of which comes from EU customers. Extending Mythos preview access right now would introduce material API capacity exposure during a quarter when every available compute slot is committed to enterprise customer demand. Under this reading, the "different stage" framing is about throughput, not principle.
The third interpretation, which European Commission staff have not publicly endorsed but have not denied, is that Anthropic is negotiating harder terms — visibility into how the EU evaluates the model, restrictions on what regulators can disclose, and commitments around data residency. Per Axios's reporting, the unresolved issues between the Commission and Anthropic include commercial and access-control terms that remain under negotiation. Under this reading, Mythos arrives in the EU within 60-90 days at terms more favorable to Anthropic than OpenAI received.
Whichever interpretation is correct, the August 2 deadline forces a decision. The transparency obligations under the AI Act apply to any general-purpose AI system with material output reaching EU users, including via downstream commercial integrations. Anthropic's enterprise customer footprint in Germany, France, and the Nordics is large enough that Mythos and Claude in general fall within scope by default. Voluntary cooperation before August 2 is the cheaper path. Forced compliance after August 2 carries higher legal cost and reputational drag.
The AI Act compliance race and the comparative exposure problem
The EU AI Act's August 2, 2026 deadline introduces a layered set of obligations on frontier labs. Transparency rules require disclosure of training data summaries, evaluation results, and incident reporting. High-risk system rules require risk management documentation, human oversight provisions, and ongoing capability assessments. General-purpose AI (GPAI) rules — the subset most directly applicable to the cybersecurity models in question — require additional model card disclosures and access controls.
Crucially, the AI Act includes a comparative-exposure mechanism. When a regulator assesses compliance, it can reference industry norms and benchmark behavior. If OpenAI has voluntarily granted access to GPT-5.5-Cyber under the EU Cyber Action Plan and Anthropic has not granted equivalent access to Mythos, the Commission is empowered to factor that into how it scopes enforcement under the high-risk framework. This is not theoretical: the EU Council and Parliament agreed on May 7 to streamline AI Act rules in ways that explicitly preserve the Commission's discretion in calibrating enforcement intensity.
The downstream effect on EU enterprise procurement is already visible. Enterprise AI buyers in regulated EU sectors — financial services, healthcare, defense — have begun adding contractual language requiring foundation-model vendors to demonstrate proactive engagement with EU regulators as a procurement gate. OpenAI's May 11 announcement gives its enterprise sales team a clean answer to that question. Anthropic's commercial team currently does not have the equivalent talking point, which materially complicates closing certain EU regulated-industry deals between now and the August deadline.
A second-order effect is the precedent template the OpenAI plan sets for other frontier labs. Google has been quietly building a parallel cybersecurity model offering derived from Gemini, with capability that industry sources describe as comparable to GPT-5.5-Cyber on similar benchmark suites. If Google announces an EU access program at Google I/O on May 19 — three days after this article publishes — the comparative-exposure pressure on Anthropic intensifies sharply. Two of the three frontier labs would be cooperating; one would not. That alignment shifts the political center of gravity inside the European Commission and likely accelerates the rate at which the Commission's discretionary enforcement perimeter contracts around laggards.
The U.K. dimension also matters, even though the U.K. is no longer an EU member. The U.K. AI Security Institute is the institution that produced the 32-step benchmark in the first place. AISI has signed memoranda of understanding with both OpenAI and Anthropic that grant the institute access to pre-deployment model evaluations. The U.K. has thus far taken a position closer to OpenAI's accommodation track than the EU has, in part because U.K. AI policy remains explicitly innovation-first under the current government. A divergence in U.K. and EU approaches to frontier-lab cooperation could create regulatory arbitrage opportunities for labs — but only if the labs are willing to navigate two distinct compliance regimes simultaneously. Anthropic's posture suggests it is willing to accept that operational cost.
Finally, the cybersecurity-model framing is itself a strategic choice. By branding the EU rollout as a Cyber Action Plan, OpenAI has positioned its compliance gesture inside a category that European governments overwhelmingly support — defensive cyber capability for member-state security agencies. The same access program framed as "foundation-model preview access for European users" would have generated very different political reception. Anthropic's eventual EU engagement, whenever it arrives, will face a categorization choice that matters for both the speed of approval and the long-term political cover.
Key Takeaways
- OpenAI made the first compliance gesture, not Anthropic. The EU Cyber Action Plan grants Germany's BSI, France's ANSSI, and ENISA preview access to GPT-5.5-Cyber and gives the Commission structured visibility into model capabilities.
- Mythos is the more capable model on the benchmark. U.K. AISI testing showed Mythos completing a 32-step cyberattack chain in 3 of 10 runs versus GPT-5.5's 2 of 10 — making Anthropic's reluctance to extend EU access the harder position to defend publicly.
- August 2 is the trigger. AI Act high-risk and GPAI obligations apply that day. Comparative-exposure mechanisms let the Commission factor voluntary cooperation into enforcement scope. Mid-2026 EU enterprise procurement is already pricing this.
The Read-Through: The OpenAI-versus-Anthropic positioning on the EU is more than a one-week news cycle. It is the first concrete test of how frontier labs will engage with binding AI regulation, and the early read is that OpenAI has chosen accommodation while Anthropic has chosen friction. Each side's bet is legible: OpenAI assumes regulators reward cooperation and that cooperative labs preserve operational latitude; Anthropic assumes principled access controls protect long-term capability deployment terms. Both can turn out correct, or one can turn out badly wrong. Watch three signals in the next 60 days: whether Anthropic extends Mythos before August 2 (the cleanest outcome for both lab and Commission), whether the EU publishes any preliminary enforcement guidance that explicitly references the OpenAI Action Plan as a model (cementing the comparative-exposure dynamic), and whether enterprise sales cycles in EU regulated industries shift visibly toward OpenAI over the next two quarters. Each of those three answers reshapes the regulatory landscape that every frontier lab — and every enterprise customer — will operate within for the rest of the decade.
Frequently Asked Questions
What is OpenAI's GPT-5.5-Cyber model?
GPT-5.5-Cyber is a specialized variant of OpenAI's GPT-5.5 model configured to be more permissive for authorized security work, including malware analysis, vulnerability identification, reverse engineering, and patch validation. It is the same base model as the consumer GPT-5.5 but with safety filters calibrated for defensive cybersecurity use cases. OpenAI announced EU preview access for vetted European cybersecurity teams on May 11, 2026.
Why does the EU want access to GPT-5.5-Cyber and Anthropic's Mythos?
The U.K. AI Security Institute tested both models on a simulated 32-step corporate cyberattack scenario. GPT-5.5 completed the attack chain in 2 of 10 test runs; Mythos completed it in 3 of 10. Those results confirmed that both models materially raise the offensive capability ceiling, which the EU views as both a defensive opportunity (giving EU cybersecurity teams parity with adversaries) and a regulatory urgency (requiring transparency into model capabilities under the EU AI Act framework).
How does this connect to the EU AI Act August 2, 2026 deadline?
The EU AI Act becomes fully applicable on August 2, 2026, including transparency obligations for general-purpose AI systems. Models with material offensive cybersecurity capability are explicitly within scope of the high-risk regulatory perimeter. OpenAI's EU Cyber Action Plan is structured as a goodwill compliance gesture ahead of the deadline — granting access establishes constructive cooperation with the European Commission. Anthropic's continued non-participation creates comparative regulatory exposure; the European Commission has so far described its talks with Anthropic only as being "at a different stage."
Reviewed by Jason Lee, Founder & Editor-in-Chief, BlockAI News.
Sources
Primary sources and prior BlockAI News coverage referenced in this article.
Primary sources
- CNBC — OpenAI to give EU access to new cyber model but Anthropic still holding out on Mythos (May 11, 2026)
- Axios — OpenAI makes its rival to Anthropic's Mythos more widely available to cyber defenders (May 7, 2026)
- EU Council — Council and Parliament agree to simplify and streamline AI Act rules (May 7, 2026)
- European Commission — Regulatory framework for AI (EU AI Act primary source)
- EU Artificial Intelligence Act — Up-to-date developments and analyses
- Holland & Knight — U.S. Companies Face EU AI Act's August 2026 Compliance Deadline
From BlockAI News
How we report: This article cites primary sources, regulatory filings, and on-chain data where available. BlockAI News uses AI tools to assist with research and first-draft generation; every article is reviewed and edited by a human editor before publication. Read our full How We Report page, Editorial Policy, AI Use Policy, and Corrections Policy.