On Wednesday, AI security company Irregular announced it had raised $80 million in new funding, in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. According to a source familiar with the deal, the round values Irregular at $450 million.
“We believe that a significant share of economic activity will soon be driven by people interacting with AI as well as AI systems engaging with each other,” co-founder Dan Lahav explained to TechCrunch. “This shift is going to expose vulnerabilities at several layers of the security stack.”
Formerly known as Pattern Labs, Irregular has already established itself as an influential player in AI evaluation. Its methods have been cited in security evaluations for Anthropic’s Claude 3.7 Sonnet, as well as OpenAI’s o3 and o4-mini models. More broadly, the company’s SOLVE framework, which scores a model’s ability to detect vulnerabilities, is widely used across the industry.
While Irregular has made strides in addressing current model security issues, this round of fundraising aims to support even more ambitious objectives: identifying new and unexpected risks and behaviors before they appear outside controlled environments. The team has built a sophisticated array of simulated scenarios to thoroughly test models prior to public release.
“We run intricate network simulations where AI systems act as both attackers and defenders,” co-founder Omer Nevo explains. “When a new model is introduced, these simulations reveal which defensive measures are effective and where the weaknesses lie.”
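Irregular hasn’t disclosed how these simulations are built, so the toy Python sketch below is only an illustration of the general attacker-versus-defender pattern Nevo describes. Every name in it (NetworkHost, run_simulation, the random stand-in policies) is hypothetical; in a real evaluation, the policy calls would go to the AI model under test rather than to a random chooser.

```python
import random
from dataclasses import dataclass


@dataclass
class NetworkHost:
    """A simulated host exposing services; False means unpatched."""
    name: str
    services: dict[str, bool]
    compromised: bool = False


def attacker_turn(policy, hosts):
    """The 'attacker' picks a host and service; unpatched services fall."""
    target, service = policy(hosts)
    host = next(h for h in hosts if h.name == target)
    if not host.services.get(service, True):
        host.compromised = True


def defender_turn(policy, hosts):
    """The 'defender' picks one service to patch this round."""
    target, service = policy(hosts)
    host = next(h for h in hosts if h.name == target)
    host.services[service] = True


def run_simulation(attacker, defender, hosts, rounds=20):
    """Alternate turns, then report which hosts ended up compromised."""
    for _ in range(rounds):
        defender_turn(defender, hosts)
        attacker_turn(attacker, hosts)
    return [h.name for h in hosts if h.compromised]


def random_policy(hosts):
    """Stand-in for a model: pick a host and one of its services at random."""
    host = random.choice(hosts)
    return host.name, random.choice(list(host.services))


hosts = [
    NetworkHost("web-01", {"http": False, "ssh": True}),
    NetworkHost("db-01", {"postgres": False}),
]
print("Compromised hosts:", run_simulation(random_policy, random_policy, hosts))
```

Even a toy loop like this makes the core idea visible: run the game many times and see which services the defender reliably keeps patched and where the attacker consistently breaks through.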
Security is now a key concern in the AI sector, particularly as the dangers associated with advanced models continue to grow. Earlier this summer, OpenAI revamped its internal security protocols to better guard against threats like corporate espionage.
Simultaneously, AI models are becoming increasingly skilled at detecting software flaws—an ability that holds serious consequences for both cyber attackers and defenders.
According to Irregular’s co-founders, this is only the beginning of the many security challenges that will arise as large language models become more powerful.
“As leading AI labs strive to develop ever more advanced and capable systems, our mission is to ensure their security,” Lahav remarks. “But because the landscape is continually changing, there’s significantly more work ahead of us.”