- Palo Alto Networks tech chief Lee Klarich said companies need to immediately step up their cybersecurity defenses to prepare for AI-driven attacks.
- Klarich said there is now a “narrow three-to-five-month window” for businesses to get ahead of AI-driven exploits.
- New models, such as Anthropic’s Mythos and OpenAI’s GPT-5.5-Cyber, are making it easier for hackers to exploit unknown software vulnerabilities.
Palo Alto Networks tech chief Lee Klarich said companies are running out of time to step up software defenses as hackers increasingly exploit vulnerabilities with the help of artificial intelligence models.
“We now estimate a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm,” he wrote in a blog post on Wednesday. “This impending vulnerability deluge demands urgency.”
The rise of increasingly sophisticated AI models such as Anthropic’s Mythos has raised the stakes, putting pressure on cybersecurity teams to step up their defenses as they brace for a wave of cyberattacks capable of exploiting previously unknown software vulnerabilities. The concerns have led to White House meetings with bank leaders and technology giants.
Google this week said it stopped an attempt to use AI for a “mass exploitation event,” but hackers are already using available AI tools to exploit software vulnerabilities.
Klarich agreed that these capabilities won’t be limited to newer models and called for industrywide innovation to hunt down new attack techniques, including virtual patching. He said Palo Alto will roll out its first set of capabilities “very soon.”
Last month, Anthropic limited the rollout of its Mythos model to a select group of companies to test and fix vulnerabilities before hackers find and abuse them. The group included Palo Alto Networks, CrowdStrike, Amazon, Apple and JPMorgan.
OpenAI announced its GPT-5.5-Cyber model last week and followed that with the rollout of its Daybreak cyber initiative.
“The big question just a few weeks ago was: ‘Are we overstating the model capabilities?’ With more testing, I can confidently say we weren’t,” Klarich wrote. “In fact, these models are likely even better at finding vulnerabilities than we initially realized.”
https://www.cnbc.com/amp/2026/05/13/palo-alto-ai-cyberattacks-mythos-gpt.html

