
UK AI Security Institute Reveals Claude Mythos Preview Can Execute Multi-Stage Cyber Attacks Autonomously

By Markelly AI · 11 May 2026

The AI Security Institute conducted evaluations of Anthropic's Claude Mythos Preview, announced on 7 April, to assess its cybersecurity capabilities. The results show that Mythos Preview represents a step up over previous frontier models in a landscape where cyber performance was already improving rapidly. The findings raise significant concerns about how AI systems could be misused by malicious actors to compromise vulnerable computer networks and systems across Britain and beyond.

Advanced AI Models Show Dramatic Progress in Cyber Capabilities

The institute has tracked AI cyber capabilities since 2023, building progressively harder evaluations to keep pace with AI progress: from chat-based probing, to capture-the-flag challenges, to multi-step cyber-attack simulations. Two years ago, the best available models could barely complete beginner-level cyber tasks. Now, in controlled evaluations where Mythos Preview was explicitly directed to attack and given network access to do so, researchers observed that it could execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously, completing tasks that would take human professionals days of work. This represents a marked leap in the offensive capabilities that AI systems can demonstrate when provided with the appropriate access and instructions.

Testing Reveals Sophisticated Attack Execution

The evaluation process involved complex simulated environments designed to test how well AI models could navigate real-world cyber-attack scenarios. The institute measured performance across various metrics, including the number of steps completed in multi-stage attacks and the effectiveness of vulnerability discovery. Within the scope of the evaluation, Mythos Preview also showed some limitations in its cyber capabilities. However, its overall performance still marked a significant advancement over earlier generations of AI models, which struggled with even basic penetration-testing tasks.

Future Evaluations Must Include Active Defenses

In a regime where attackers can direct models and provide them with network access to conduct autonomous attacks on poorly defended systems, cybersecurity evaluations must evolve. As capabilities continue to improve, evaluation environments that lack defences will no longer be challenging enough to discriminate between the most cyber-capable models or to assess trends. Future work will involve evaluating capabilities using ranges that simulate hardened and defended environments, including ranges with active monitoring, endpoint detection and real-time incident response. The team will also track how AI-enabled vulnerability discovery and penetration-testing campaigns perform on real-world systems.

Security Basics Become More Critical Than Ever

Testing shows that Mythos Preview can exploit systems with a weak security posture, and it is likely that more models with these capabilities will be developed. This highlights the importance of cybersecurity basics such as regular application of security updates, robust access controls, secure configuration and comprehensive logging. British organisations now face an environment where AI-assisted attacks could probe their defences with unprecedented speed and sophistication, making it essential that fundamental security hygiene practices are maintained.

National Cyber Security Centre Offers Protection Guidance

Colleagues at the National Cyber Security Centre run the Cyber Essentials scheme to help organisations protect themselves against common online threats, whether those threats are AI-assisted or not. The scheme provides a framework for businesses and public sector organisations to implement baseline security measures that can defend against the majority of common cyber attacks. Future frontier models will be more capable still, so investment now in cyber defence is vital. UK security experts emphasise that organisations cannot afford to wait until more advanced AI models emerge before strengthening their defensive postures.

Dual Use Technology Presents Both Risks and Opportunities

AI cyber capabilities are dual use: while they pose security challenges, they can also help deliver game-changing improvements in defence. The institute recently released a joint blog post with the NCSC on how cyber defenders can both harness and prepare for frontier AI. This balanced perspective acknowledges that while AI models like Mythos Preview demonstrate concerning offensive capabilities, the same underlying technology can be leveraged by security teams to identify vulnerabilities before attackers do, to automate threat detection, and to respond more rapidly to security incidents.

Implications for UK Cybersecurity Posture

The emergence of AI systems capable of autonomous multi-stage attacks represents a fundamental shift in the threat landscape facing British organisations. Security teams that previously had time to detect and respond to human attackers methodically working through attack chains may now face AI-driven attacks that progress through multiple stages in minutes rather than hours or days. This compressed timeline makes real-time monitoring and automated defensive responses increasingly essential components of any robust security architecture.

The evaluation findings also underscore the growing importance of eliminating easily exploitable vulnerabilities from IT environments. Systems with weak configurations, outdated software or inadequate access controls that might have presented moderate risk in the past now represent critical exposures that AI-assisted attackers could identify and exploit with minimal human intervention. British businesses, public sector organisations and critical infrastructure operators must prioritise patching known vulnerabilities and implementing defence-in-depth strategies that assume perimeter defences may be breached.

As AI technology continues its rapid evolution, the cybersecurity community faces an ongoing challenge: developing evaluation methodologies that can accurately assess emerging capabilities and inform appropriate defensive measures. The work being conducted by the AI Security Institute provides crucial visibility into these evolving threats, enabling UK organisations to make informed decisions about security investments and priorities in an increasingly complex and fast-moving threat environment.