Palo Alto Networks security-intel boss calls AI agents 2026's biggest insider threat
Interview  AI agents represent the new insider threat to companies in 2026, according to Palo Alto Networks Chief Security Intel Officer Wendi Whitmore, and this poses several challenges for executives tasked with securing the expected surge in autonomous agents.
"The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure - and massive workload - that the teams are under to quickly go through procurement processes, security checks, and understand if the new AI applications are secure enough for the use cases that these organizations have," Whitmore told The Register.
"And that's created this concept of the AI agent itself becoming the new insider threat," she added.
According to Gartner's estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This surge presents a double-edged sword, Whitmore said in the interview and in the company's 2026 predictions report.
On one hand, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years, doing things like correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats.
When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation
"When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation," Whitmore said.
Whitmore told The Register she had recently spoken with one of Palo Alto Networks' internal security operations center (SOC) analysts who had built an AI-based program that indexed publicly known threats against the cybersecurity shop's own private threat-intel data, and analyzed the company's resilience, as well as which security issues were more likely to cause harm.
This, she said, allows the firm to "focus our strategic policies over the next six months, the next year, on what kinds of things do we need to be putting in place? What data sources do we need that we are not necessarily thinking of today?"
The next step in using AI in the SOC involves categorizing alerts as actionable, auto-close, or auto-remediate. "We are in various stages of implementing these," Whitmore said. "When we look at agentic, we start with some of the more simple use cases first, and then progress as we become more confident in those from a response capability."
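To make that categorization concrete, here is a minimal sketch of how such triage routing could work. The alert fields, severity threshold, and playbook check are illustrative assumptions for the example, not Palo Alto Networks' actual pipeline.

```python
from dataclasses import dataclass

# Illustrative triage categories named in the article: actionable, auto-close, auto-remediate.
# The alert fields, scores, and thresholds below are assumptions for this sketch.

@dataclass
class Alert:
    rule: str           # detection rule that fired
    severity: int       # 0-100 score from the detection stack
    known_benign: bool  # matched an allow-listed, previously vetted pattern
    has_playbook: bool  # an approved automated response exists for this rule

def triage(alert: Alert) -> str:
    """Route an alert into one of the three buckets described above."""
    if alert.known_benign:
        return "auto-close"        # nothing for an analyst to review
    if alert.has_playbook and alert.severity >= 80:
        return "auto-remediate"    # high confidence plus an approved playbook
    return "actionable"            # everything else goes to a human

# Example: a high-severity alert with an approved blocking playbook
print(triage(Alert(rule="credential-stuffing", severity=92,
                   known_benign=False, has_playbook=True)))  # auto-remediate
```

Starting with the simpler buckets and widening automation only as confidence grows mirrors the progression Whitmore describes.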
However, these agents – depending on their configurations and permissions – may also have privileged access to sensitive data and systems. This makes agentic AI vulnerable – and a very attractive target to attack.
One of the risks stems from the "superuser problem," Whitmore explained. This occurs when the autonomous agents are granted broad permissions, creating a "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval.
"It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore said.
Does your CEO have an AI doppelganger?
"The second area is one we haven't seen in investigations yet," she continued. "But while we're on the predictions lens, I see this concept of a doppelganger."
This involves using task-specific AI agents to approve transactions or review and sign off on contracts that would otherwise require C-suite level manual approvals.
"We think about the people who are running the business, and they're oftentimes pulled in a million directions throughout the course of the day," Whitmore said. "So there's this concept of: We can make the CEO's job more efficient by creating these agents. But ultimately, as we give more power and authority and autonomy to these agents, we're going to then start getting into some real problems."
For example, an agent could approve an unwanted wire transfer on behalf of the CEO. Or imagine a mergers and acquisitions scenario in which an attacker manipulates the models so that an AI agent acts with malicious intent.
By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," adversaries now "have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database," according to Palo Alto Networks' 2026 predictions.
This also illustrates the ongoing threat of prompt injection. Researchers have repeatedly shown this year that prompt-injection attacks are a real problem, with no fix in sight.
"It's probably going to get a lot worse before it gets better," Whitmore said, referring to prompt-injection. "Meaning, I just don't think we have these systems locked down enough."
How attackers use AI
Some of this is intentional. "New systems, and the creators of these technologies, need people to be able to come up with creative attack use cases, and this often involves manipulating" the models, Whitmore said. "This means that we've got to have security baked in, and today we're ahead of our skis. The development and innovation within the AI models themselves is happening a lot faster than the incorporation of security, which is lagging behind."
Making attackers more powerful
In 2025, Palo Alto Networks' Unit 42 incident response team saw attackers abuse AI in two ways. First, it allowed them to conduct traditional cyberattacks faster and at scale. Second, they manipulated models and AI systems to conduct new types of attacks.
"Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore said. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf."
Whitmore, along with just about every other cyber exec The Register has spoken with over the past couple of months, pointed to the "Anthropic attack" as an example.
She's referring to the September digital break-ins at multiple high-profile companies and government organizations later documented by Anthropic. Chinese cyberspies used the company's Claude Code AI tool to automate intel-gathering attacks, and in some cases they succeeded.
While Whitmore doesn't expect AI agents to carry out fully autonomous attacks this year, she does expect AI to be a force multiplier for network intruders. "You're going to see these really small teams almost have the capability of big armies," she said. "They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against."
Whitmore likens the current AI boom to the cloud migration that happened two decades ago. "The biggest breaches that happened in cloud environments weren't because they were using the cloud, but because they were targeting insecure deployments of cloud configurations," she said. "We're really seeing a lot of identical indicators when it comes to AI adoption."
For CISOs, this means establishing best practices for AI identities and provisioning agents and other AI-based systems with access controls that limit them to only the data and applications needed to perform their specific tasks.
"We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue," Whitmore said. ®