Welcome to the second Cloud CISO Perspectives for March 2026. Today, Nick Godfrey details his conversation with Francis deSouza at RSA Conference, and how it’s part of our approach to bold and responsible AI use.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
RSAC ’26: AI, security, and the workforce of the future
By Nick Godfrey, senior director, Office of the CISO

You can’t bring traditional security to an AI fight, so how do we defend against AI-powered attacks, boost defenders with AI, and secure AI use? Answering those questions was top of mind at RSA Conference last week, where I spoke with Francis deSouza, Google Cloud’s COO and president, Security Products, about our approach at a Google-hosted breakfast for CISOs and other executives.
One of his key points is that organizations that adopt AI move through a three-stage journey:
- Automate tasks: Using AI for specific, repetitive tasks, such as summarizing notes.
- Redesign workflows: Using agents to manage entire end-to-end processes.
- Rethink functions: Completely reimagine how a department operates, such as the security operations center (SOC).
“The workforce of the future, across every function in an organization, is going to need to be bilingual,” deSouza said. “They need to understand their function — whether it’s cybersecurity or marketing or sales or development — and AI.”
He also said that part of AI-era resilience means being multi-model and multicloud. A durable AI strategy shouldn’t rely on a single model or a single cloud provider, as organizations need the ability to failover and adapt as leaderboards and technologies evolve.
“Organizations look to CISOs to drive those decisions and hold them accountable if they go wrong,” he said.
Over the course of the conference, Google discussed how AI itself is a new surface area that needs to be protected, and both attackers and defenders are looking to AI to strengthen their positions.
How we’re securing AI
Organizations should focus on models, agents, and data as the mission-critical points to secure in this new surface area.
We’ve been tracking a rising trend of model extraction and distillation attacks, which pose a long-term threat both to frontier model providers and to enterprises that build and operate their own models. Code vulnerabilities are an equally serious risk.
We’ve also seen how good intentions can go awry. OpenClaw has rapidly become a new supply-chain attack surface: attackers have used it to distribute droppers, backdoors, infostealers, and remote access tools in numerous incidents this year. (We’re partnering with OpenClaw to detect malicious skills through VirusTotal scanning.)
Supply chain security is even more important in the AI era. Threat actors in the second half of 2025 exploited software-based vulnerabilities (44.5%) more frequently than weak credentials (27.2%), a significant increase from the start of 2025.
Identity is once again the new perimeter, so managing shadow AI and governing agentic identities are vital parts of a robust AI strategy. Beyond identity as the key to securing agents, we advocate treating data as a perimeter of its own and prompts as code — a holistic approach reflected in our Secure AI Framework and industry collaborations.
How AI is changing offense
We’ve seen three key ways that adversaries have been using AI to accomplish their goals:
- New, less-skilled threat actors empowered by AI
- New and existing groups using new AI techniques
- A new level of speed, sophistication, and scale to attacks
AI has been lowering barriers to entry for less technically skilled actors, especially by letting them direct a model with natural-language instructions. AI has also made it easier to discover zero-day vulnerabilities, conduct phishing attacks (especially voice phishing), and develop malware.
AI agents are upending conventional wisdom about the techniques that threat actors use. Cybercriminals, nation-state actors, and hacktivist groups use agents to automate spear-phishing attacks, develop sophisticated malware, and conduct disruptive campaigns.
There’s more to AI-enhanced attacks than just agents. There are new classes of attacks on AI systems, including autonomous attacks, prompt injection, distillation attacks, AI-enabled malware that can evade signature-based detection, and even attacks against agentic ecosystems by exploiting their supply chains.
Adversaries are using autonomous attacks to scale their operations — and the impact they have against targeted systems. One example of this is Hexstrike AI, which represents a paradigm shift from manual hacking to AI-orchestrated warfare.
With a standardized interface for more than 150 offensive security tools, Hexstrike AI allows an agent to hand off tasks from one tool to another without human intervention. It’s also openly available, already in use by nation-state-aligned threat actors, and gaining significant attention in underground conversations.
AI, and agents in particular, is accelerating intrusions and has already begun to outpace human-driven controls. We’ve seen threat actors use AI-automated scanning to sift stolen data for hard-coded keys and access tokens that help them expand their attacks to other organizations. Meanwhile, hand-off times between threat groups have collapsed from eight hours in 2022 to 22 seconds last year.
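To make the secret-scanning technique concrete, here is a minimal defender-side sketch of what scanning text for hard-coded keys looks like. The patterns and the `scan_for_secrets` helper are illustrative assumptions, not actual threat-actor or Google tooling; real scanners use far larger, vendor-specific rule sets.

```python
import re

# Illustrative patterns only (an assumption for this sketch); production
# secret scanners maintain hundreds of vendor-specific rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_token": re.compile(
        r"\b(?:api|token|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=]{20,}['\"]",
        re.IGNORECASE,
    ),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = 'config = {"aws_key": "AKIAABCDEFGHIJKLMNOP"}'
print(scan_for_secrets(sample))  # -> [('aws_access_key_id', 'AKIAABCDEFGHIJKLMNOP')]
```

The same loop an analyst might run over a leaked repository is trivial for an agent to run over terabytes of stolen data, which is why rotating credentials quickly matters as much as detecting the leak.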
How AI is changing defense
Despite all the benefits that adversaries are seeing from AI, it’s also boosting defenders in three critical ways:
- We’re using AI to fight AI.
- We’re orchestrating defense at a new pace and volume, beyond human scale.
- We have a secret weapon: Context is the defender’s advantage.
AI-led defense is shifting from attack detection to pre-calculating and neutralizing the attack surface before the adversary arrives. Comprehensive identity management is key, with true Zero Trust access a necessary goal.
Organizations should turn to reputation-based risk modeling, agent observability, strong identity controls, and prompt sanitization. AI red teaming is also important, as part of a holistic approach that can isolate agents at machine speed when anomalies are detected.
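As a minimal sketch of what it means to sanitize prompts, the snippet below screens user input before it is interpolated into an agent’s instructions. The deny-list and the `sanitize_prompt` helper are hypothetical; production systems layer classifiers, policy engines, and model-side guardrails rather than relying on regexes alone.

```python
import re

# Hypothetical deny-list for this sketch; real injection defenses are
# far broader and combine multiple detection layers.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"disregard your guidelines",
]

def sanitize_prompt(user_input: str) -> str:
    """Reject obvious injection attempts, then escape template delimiters
    so user text cannot masquerade as agent instructions."""
    lowered = user_input.lower()
    for marker in INJECTION_MARKERS:
        if re.search(marker, lowered):
            raise ValueError("potential prompt injection detected")
    # Treat prompts as code: escape characters the template engine
    # would otherwise interpret as structure.
    return user_input.replace("{", "{{").replace("}", "}}")
```

This is the “prompts as code” idea in miniature: untrusted input gets validated and escaped before execution, just as web frameworks escape HTML before rendering it.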
It’s impossible to defend the ever-growing volume of surfaces and alerts without AI. We’ve seen early adopters use the new Triage and Investigation agent to collapse the time-to-investigate for complex alerts from two hours down to just 15 to 30 minutes. We’ve also seen additional benefits from our AI-enhanced defense, such as using our Big Sleep agent to uncover and fix vulnerabilities before they can be exploited.
Context has become the defender’s advantage. When you understand your network and user behavior, you can better detect anomalies and prioritize risks based on business impact — and harden systems accordingly.
We need to move from agents with a human in the loop to a human over the loop. Some of these gains will come from the agentic SOC, where AI agents automate security operations workflows at a speed and scale not possible before.
These changes can help reduce remediation from hours to seconds. We predict that by 2026, AI will autonomously resolve or escalate more than 90% of Tier 1 alerts, covering enrichment, categorization, and initial triage. The average enterprise analyst spends 30 minutes triaging a single alert; an agent can cut that to five minutes, potentially saving $2.7 million annually.
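For a sense of the arithmetic behind that savings figure, here is a back-of-envelope calculation. The analyst cost is an assumption (the post gives no hourly rate or alert volume); the calculation simply shows what alert volume the $2.7 million figure would imply under that assumption.

```python
# Back-of-envelope for the triage-savings claim above.
minutes_saved_per_alert = 30 - 5        # from the stated 30 -> 5 minute cut
analyst_cost_per_hour = 100.0           # ASSUMED fully loaded hourly cost
savings_per_alert = minutes_saved_per_alert / 60 * analyst_cost_per_hour

target_annual_savings = 2_700_000       # the $2.7M figure from the post
alerts_per_year = target_annual_savings / savings_per_alert

print(round(savings_per_alert, 2))  # -> 41.67 dollars saved per alert
print(round(alerts_per_year))       # -> 64800 alerts/year implied
```

Roughly 64,800 alerts a year is about 180 a day — a plausible volume for a large enterprise SOC, which is why the per-alert minutes add up so quickly.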
A big part of AI security posture management will be the continuous discovery and inventory of AI assets and vulnerabilities at scale across multicloud environments.
All our news from RSA Conference
In addition to discussing all things AI, we made several key announcements last week:
- Wiz news: We’ve completed our acquisition of Wiz, and revealed the AI-Application Protection Platform (AI-APP) and red, blue, and green security agents.
- M-Trends: New research from Mandiant’s M-Trends 2026 and special report on AI risk and resilience can help organizations better understand the current threat landscape and how to keep defenses current.
- Threat intelligence: Google Threat Intelligence Group (GTIG) officially debuted its Disruption Unit during the keynote from Sandra Joyce, vice president, Google Threat Intelligence. The unit will evaluate what we can do, within existing authorities and regulatory frameworks, to make it harder for malicious actors to succeed.
- Agentic SOC: We’re introducing new agents in the agentic SOC to help defenders focus on what matters most.
- Product innovations: Check out our new security innovations in Chrome Enterprise, Security Command Center, network management, and more.
You can check out everything we announced at RSA Conference here: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-rsac-26-ai-security-and-workforce-of-the-future

