Partnering with Irregular: Ahead of the Curve

Dan, Omer and their team are setting new security standards at the frontier of AI.

OMER AND DAN. (PHOTO: BEN HAKIM)

One of the first times we met with Dan Lahav, he told us AI would soon be able to autonomously bypass traditional endpoint security. At the time, most people would have said that wasn’t possible. But then Dan pulled up a test-environment example of an AI agent doing exactly what he described.

Indeed, 2025 has ushered in a step-change in agentic capabilities. Model reasoning, memory and action-taking have all improved dramatically, while on the infrastructure side AI has become faster and more affordable, accelerating development and simplifying integrations. Agents already generate the majority of code within many companies and projects, across everything from review and CI/CD processes to research. We are witnessing a massive and transformative shift.

In short, AI is redefining the enterprise—and as Dan showed us that day, security is no exception. The continuous improvements required to ensure safety are a massive operational challenge. Fortunately for all of us, the team at Irregular is thinking ahead.

A frontier AI security lab on a mission to defend against the next generation of threats, Irregular works side by side with world leaders in AI, including Anthropic, OpenAI and Google DeepMind. By embedding directly with these cutting-edge research labs, Dan, co-founder Omer Nevo and their team are able to see around corners others can’t, running offensive cyber evaluations on advanced models and developing defenses before those models are released. This unprecedented access has put them at the epicenter of the conversation around responsible deployment of these technologies, and made Irregular an indispensable partner in the race toward AGI.

The Irregular team is exceptional, with brilliant minds across AI, security and math. Dan himself has deep expertise in both AI and security, driven by a lifelong obsession with superintelligence. His childhood fascination with Asimov and sci-fi led him to begin working in tech when he was just 14, and his AI research has taken him from top tech companies to the cover of Nature. Omer, meanwhile, began his career as an elite mathematician before joining Google as a researcher, and has a remarkable track record of attracting top technical talent. As excited as we were to see Irregular’s technology in action, getting to know the people behind it has taken that enthusiasm to a whole new level. We are grateful for the opportunity to be their partners and lead their recent funding.

Customers tell us the Irregular team moves at an unmatched pace, not only flagging potential misuse and architecting robust, state-of-the-art defenses, but also creating compliance standards and roadmaps to help them keep critical systems online. Their work truly is setting standards for AI security: Irregular’s evaluations are cited (under their former name, Pattern Labs) in system cards for GPT-4, o3, o4-mini and GPT-5; the UK government and Anthropic both use the company’s SOLVE framework, including to vet risks in Claude 4; and the team’s research on model theft and misuse is guiding policy across Europe.

And while Irregular is already generating millions in revenue, there is ample room to grow. Outside of large or cloud-native companies, AI adoption is still early, and the market for AI security remains nascent. The efforts of this team today will protect the enterprises of tomorrow.

Irregular is building the tools and frameworks those organizations need. At a time when autonomous systems are becoming more powerful and sophisticated by the day, Dan, Omer and their team have shown they can stay ahead of the curve and keep AI safe, responsible and secure.