Agentic AI’s Scary Security Risks
Now, before autonomous bots are everywhere, would be a good time to start mitigating their potential dangers. Plus: Atera Autopilot goes GA and N-able and NinjaOne defend integrations.
Note: Being a channelholic isn’t a 24/7 job, but it sure feels that way sometimes, which is why I’m on vacation as you read this. My next post will appear June 6th.
Jason Rader doesn’t get anxious about security easily.
Like everyone else, he encounters fright-worthy new dangers all the time. But Rader, who’s been global vice president and CISO at IT services giant Insight for close to four years and spent over seven years before that in senior roles at security vendor RSA, pretty much knows how to mitigate them. The dangers posed by agentic AI, however, are different.
“I haven’t been this nervous for a while,” admits Rader, who regularly thinks up worrying new risks that far too few customers and co-workers seem concerned about.
“I just feel like if there’s one thing that I’m not hearing people ask about, there’s got to be more,” he says. “And that’s what’s got me scared, because it’s literally anything you can think of.”
Researchers at Microsoft helpfully catalogued some of those things in a whitepaper published last month.
They’re not the only ones pondering this stuff either, according to Avihay Nathan, senior vice president and head of AI, data, and research at identity security vendor CyberArk.
“There isn’t a single CISO or CTO that we talk to that they’re not aware and concerned about this,” he says.
They’re right to be concerned too. Unlike the agents that security and RMM systems have been putting on endpoints for years, Nathan notes, AI agents aren’t “tiny code minions” performing a limited set of mostly harmless actions in the background.
“They can think for themselves. They can decide how to do a task, which may be different than what you intended. They can grant privileges to each other. They can decide that they want to access something that you didn’t want them to access just because they need it to complete a task,” Nathan says, adding that any mischief they make along the way, if they’re acting on your behalf, is partly your fault.
“They’re doing it with your credentials or your identity,” he says.
Good luck monitoring any of that, Nathan (pictured) adds. Most of what agents do is hidden from view and impossible to interpret. “They talk in a language that no one understands,” Nathan observes. “It’s a black box.”
It’s bad enough, moreover, if what goes on in that box results in data leakage or privilege management vulnerabilities. What happens if threat actors begin using agents maliciously? Except that it’s not a matter of if.
“It’s when,” Nathan says.
Same goes for all the unauthorized, unreported shadow agentic deployments Rader is certain are coming soon. “It’s so easy,” he says. “You can download something, install it, it can impersonate you, and it would be difficult for us to figure that out.”
To make matters worse, of course, while AI agents may be relatively rare today, there are going to be a lot of them soon, and we’re already drowning in non-human IDs. Part of what got me thinking about this whole issue, in fact, is research CyberArk published in April stating that businesses worldwide have 82 machine identities on average for every human one.
Now. Today. Before agentic AI.
CyberArk’s study got Brian Weiss (who you’ve met here before) thinking too. Using software from Shield Cyber, he discovered that his own company, San Luis Obispo, Calif.-based ITECH Solutions, is responsible for 86 human identities and 1,347 non-human ones. MFA isn’t an option for securing the non-human IDs either, notes Weiss, ITECH’s CEO and chief AI officer.
“There’s no human that’s going to sit there and process the MFA,” he says.
So if that won’t work, what will? Not surprisingly, a lot of people are thinking about the matter. Microsoft, for one, made multiple agentic security announcements at its Build conference last week, including new support for agentic ID management in its Microsoft Entra IAM service. Weiss calls that last bit “a solid step towards zero trust, which is a best practice approach to securing identities whether they be human or non-human.”
SailPoint’s Harbor Pilot solution, introduced in March, similarly breaks down the distinction between human and machine IDs. “We believe that all identities need to be managed the same, whether it’s human identities or machine identities,” said Dave Schwartz, the vendor’s senior vice president of global partners, during a recent conversation at the RSAC Conference in San Francisco.
Same for CyberArk, whose platform is designed to ensure that agents, just like people, get privileged access only if they should have it and when they should have it. Anomaly detection and behavioral monitoring features, meanwhile, issue alerts (with tunable sensitivity) when agents take unusual, potentially dangerous actions.
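For readers who like to see the shape of the idea, here’s a minimal, hypothetical sketch of what just-in-time, least-privilege grants for an agent identity, plus a tunable anomaly threshold, might look like. To be clear, the names and logic below are mine for illustration, not CyberArk’s:

```python
from dataclasses import dataclass, field
import time

# Hypothetical just-in-time grant model for an agent identity; not CyberArk's code.

@dataclass
class AgentGrant:
    scope: str          # the one resource/action this grant covers, e.g. "crm:read"
    expires_at: float   # just-in-time grants expire instead of living forever

@dataclass
class AgentIdentity:
    name: str
    grants: list[AgentGrant] = field(default_factory=list)

def is_authorized(agent: AgentIdentity, scope: str) -> bool:
    """Allow an action only if a matching, unexpired grant exists."""
    now = time.time()
    return any(g.scope == scope and g.expires_at > now for g in agent.grants)

def anomaly_alert(action_rate: float, baseline_rate: float, sensitivity: float = 3.0) -> bool:
    """Flag behavior that deviates from the agent's own baseline.
    Lower sensitivity means more alerts; higher means fewer (the tunable knob)."""
    return action_rate > baseline_rate * sensitivity

agent = AgentIdentity("billing-bot", [AgentGrant("crm:read", time.time() + 900)])
print(is_authorized(agent, "crm:read"))                    # True for the next 15 minutes
print(anomaly_alert(action_rate=120, baseline_rate=10))    # True: 12x baseline trips an alert
```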
Nathan encourages everyone who thinks about security to choose some kind of mitigation strategy, and soon. “It’s not theoretical,” he says of agentic AI security risks. “It’s already in production at many companies, and in the next year or so, it’ll be even more critical.”
Agentic AI’s got a white hat too
It’s worth noting before we move on that vendors are increasingly using agents to make identity security easier rather than harder. In a kind of fight-fire-with-fire play, for instance, SailPoint’s Harbor Pilot uses its own agents to prevent others from doing harm.
Other vendors are putting agents to work in other defensive capacities. Fifteen of the 96 AI security vendors tracked by analyst Richard Stiennon, for example, use agents to triage incoming alerts. As does security hyperautomation vendor Torq, which sees even broader potential in agent-powered protection.
“By evaluating threat data, including email content and recipient profiles, executing broadscale sweeps for malicious payloads, and launching containment measures, agentic AI can autonomously remediate extensive phishing attempts,” says Leonid Belkind, the vendor’s co-founder and CTO, in remarks emailed to Channelholic. “Likewise, agentic AI can strengthen the speed and accuracy of malware detection by rapidly analyzing file behavior, investigating anomalies, and containing threats to prevent further damage.”
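Purely as an illustration of the evaluate-sweep-contain loop Belkind describes, here’s a hypothetical sketch; the functions and thresholds are placeholders of mine, not Torq’s product:

```python
# Hypothetical evaluate/sweep/contain loop; names and thresholds are illustrative
# only and do not correspond to any real vendor API.

def evaluate(message: dict) -> float:
    """Score a reported email from its content and sender context (stubbed)."""
    score = 0.0
    if "urgent wire transfer" in message["body"].lower():
        score += 0.6
    if message["sender_domain"] not in message["trusted_domains"]:
        score += 0.3
    return score

def sweep(inboxes: dict[str, list[str]], subject: str) -> list[str]:
    """Find other users whose inboxes contain the same suspicious subject line."""
    return [user for user, subjects in inboxes.items() if subject in subjects]

def contain(affected: list[str]) -> None:
    """Quarantine affected mailboxes; a real system would call the mail platform here."""
    for user in affected:
        print(f"quarantining mailbox for {user}")

def handle_report(message: dict, inboxes: dict[str, list[str]]) -> None:
    if evaluate(message) >= 0.7:                       # act only above a confidence threshold
        contain(sweep(inboxes, message["subject"]))
```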
Atera Autopilot is (almost eerily) live
As long as we’re discussing helpful versus harmful uses of agentic AI in managed services, let’s talk a little about Autopilot, managed services software vendor Atera’s fully autonomous, AI-powered help desk service. Devoted Channelholic readers first read about that product late in 2023 and got updates during its beta testing last August and this January. Last Tuesday, it became generally available.
Given all the scary possibilities I discussed earlier in this post, I should probably observe here that Atera took an extremely cautious whitelist approach to safety with Autopilot, in which the system can complete only tasks it’s specifically authorized to perform. So no matter how badly it might want to delete that mission-critical virtual server for some hallucinatory reason, it simply can’t, because that’s not on the whitelist.
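To picture what that guardrail amounts to, here’s a minimal, hypothetical sketch of an allowlist gate sitting between whatever an agent proposes and anything that actually runs. The action names and code are illustrative, not Atera’s implementation:

```python
# Hypothetical allowlist gate: the agent can propose anything it likes,
# but only explicitly approved actions ever execute. Not Atera's actual code.
ALLOWED_ACTIONS = {"reset_password", "clear_print_queue"}

def execute(action: str, handlers: dict) -> str:
    """Run an agent-proposed action only if it's on the whitelist."""
    if action not in ALLOWED_ACTIONS or action not in handlers:
        return f"refused: '{action}' is not on the whitelist"
    return handlers[action]()

handlers = {
    "reset_password": lambda: "password reset",
    "clear_print_queue": lambda: "print queue cleared",
}
print(execute("delete_virtual_server", handlers))  # refused, however badly the model "wants" to
print(execute("reset_password", handlers))         # allowed
```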
Enough is on the whitelist, though, for Autopilot to have handled 20 to 40 percent of Level 1 tickets without any human intervention during beta testing, allowing MSPs to handle 3-5x as many tickets with the same number of technicians.
“Our partners can have more customers, they can have more endpoints, with the same team that they have today,” says Yoav Susz (pictured), Atera’s U.S. general manager, during an episode of the podcast I co-host that’s set to go live this Friday.
In a sense, though, outcomes like that are somewhat to be expected given the nature of what Autopilot does. A lot of other lessons learned during testing were significantly less predictable and even more interesting:
End users share things with AI technicians that they won’t share with human ones. Atera has long assumed that ticket volumes would stay the same or maybe even decrease at MSPs using Autopilot. Instead, they went up, for reasons no one anticipated.
“When people knew that there wasn’t a human being on the other end that was going to judge them for their questions, they were willing to ask things that maybe they were a little bit ashamed of asking before, like how do I change the font on Google Slides,” Susz says.
End users have been inadvertently discovering capabilities in Autopilot that Atera didn’t know it had. For example, someone having trouble with an Excel error message took a screenshot and uploaded it to Autopilot for help.
“Autopilot gave them the right answer, which was really cool,” Susz says. And also surprising, because Atera didn’t know Autopilot could process visual input that way.
Autopilot learns from experience and uses what it sees to make helpful suggestions to MSPs proactively. For example, at one MSP Autopilot noticed lots of users asking for help installing Windows language packs. “It saw the behavior repeating itself again and again and again,” Susz says. So on its own initiative it wrote a script to automate language pack installation and asked the MSP for permission to use it in the future.
“That’s really, really powerful because it’s learning to behave in the same way that you are behaving,” Susz says.
AI really wants to make people happy. To a fault if you’re not careful, as OpenAI recently discovered too.
“Because it’s eager to please, we found that there were some false positives in the beginning,” Susz reports, like when users would ask Autopilot to reset their email password. “It would say, ‘absolutely.’ And then you would ask it, ‘have you reset the password?’ and it would say yes. And it hadn’t.” Atera has since fixed the issue.
End users have a tendency to anthropomorphize Autopilot. By assigning it names.
“We’ve seen people call it Billy for some reason,” Susz says by way of example. Unclear if Autopilot now answers to that name, but judging by how badly it wants to please you about password resets, I’m guessing it’s only too happy to answer to whatever name you give it.
Want to hear more unexpected insights from the Autopilot beta test?
Everything I quote Susz saying above comes from an interview on my podcast, MSP Chat. The episode featuring that conversation goes up on May 30th right here, where you’ll also find earlier episodes on AI and a host of other subjects.
In defense of seams
Let it not be said that Kaseya doesn’t believe in offering MSPs a choice. Following its relatively unheralded acquisition of IT management suite maker Pulseway a few days back, it now has three RMM solutions and three different options for ticketing.
That said, Kaseya has long been openly and deeply wedded to the idea that MSPs are almost always better off buying as many tools as possible from one supplier, provided those tools are integrated at the source code level. Multi-vendor stacks connected via APIs, the company believes, are inherently more expensive and less capable than single-vendor platforms. The latest example of that conviction in action, released late last month, is Kaseya 365 Ops, a deeply integrated collection of back-office tools that, like the rest of the Kaseya 365 family, sells at an extremely low price.
Not coincidentally, the day Kaseya 365 Ops debuted was also the day Syncro released Syncro XMM, a product similarly designed around the proposition that third-party integrations are productivity and profitability killers that wise MSPs avoid.
I wrote a post entitled “Death to Seams” about both solutions and their implications for the larger platforms versus best-of-breed debate three weeks ago. Mike Adler, N-able’s chief technology and product officer, found reading it a little frustrating.
“It’s like everyone’s running to the extremes,” he says. “Why does everyone think that that’s the answer when there’s going to be parties in both camps at almost all times?”
Adler and I spoke about the matter at his request last week, and out of fairness I decided to give NinjaOne a chance to chime in too. Peter Bretton, the company’s vice president of product strategy, is as skeptical as Adler of either/or thinking on integrations.
“We’ve shown and proven repeatedly that there is benefit in a unified platform with a unified data model,” he says. But, he adds, “it is bordering on egomaniacal to say that any one company can solve all of an MSP’s needs.”
N-able and Ninja disagree about many things, but when it comes to robust support for third-party integrations, it turns out, they share four beliefs:
1. An all-inclusive, one-supplier platform can be the right choice for younger MSPs getting a freshly hatched practice off the ground. “Smaller MSPs are going to probably receive an outsize benefit from a unified platform,” Bretton says. “It’s easier to set up. It’s easier to implement. It’s easier to deploy. It’s easier to train on.” However…
2. Eventually, those younger, smaller MSPs are going to outgrow their all-inclusive platform. More specifically, Adler says, they’re going to start running into industry-specific scenarios and customer-specific needs that their all-inclusive platform doesn’t address.
“That’s a really, really important mindset. Your business is going to change. Your business is going to grow. You’ll get new customers with new requirements,” Adler says. “If you have this all-in-one box and you’re locked into that box, what do you do when you need to break out?”
3. Vendors, including big ones with plenty of money to spend, can’t possibly excel in everything at once. “I can’t even probably get effective feedback that helps me build a customer-centric roadmap for 50 products,” Bretton says. “It’s just not possible. And so if you go super broad, you are invariably saying, ‘I will not invest as much in each of these products as a company that is focused on just one.’”
The resulting compromises, Bretton continues, can have serious consequences, which is why NinjaOne doesn’t offer endpoint protection software and never will.
“We think that core platform vendors in the MSP space shouldn’t be building EPP products,” Bretton says. “The amount of research and development and effort and expertise needed to really protect clients should be left to the security vendors.”
4. All-in platform providers can’t possibly keep up with every innovation in every technology category either. The best, most relevant illustration, according to Adler, is AI, where every week seems to bring some new development that MSPs and their clients want to take advantage of sooner than all-in platform providers can hope to accommodate. Interestingly enough, N-able had AI very much on its mind when it developed the interactive, next-generation API it introduced last March.
“We could see AI coming,” Adler recalls, but didn’t know who the winners would be, and couldn’t wait around to find out. “We have APIs, but we needed stronger, action-driven APIs that bi-directionally exchange information, because there’s going to be a bunch of AI stuff in the background here that people are going to start playing with.”
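Adler didn’t walk through the API itself, but as a purely illustrative contrast (hypothetical endpoints, not N-able’s actual interface), the difference between a read-only call and an action-driven one that an AI assistant could use bidirectionally might look something like this:

```python
# Hypothetical contrast between a read-only call and an action-driven one.
# Names and payloads are illustrative, not N-able's actual API.

def get_device_status(device_id: str) -> dict:
    """Read-only: an AI assistant can look at state but can't change it."""
    return {"device_id": device_id, "patch_level": "2024-05", "open_alerts": 2}

def post_remediation(device_id: str, action: str, requested_by: str) -> dict:
    """Action-driven and bidirectional: the caller (human or AI) asks the platform
    to do something and gets a structured result back that it can reason over."""
    accepted = action in {"apply_patches", "restart_service"}
    return {
        "device_id": device_id,
        "action": action,
        "accepted": accepted,
        "requested_by": requested_by,   # keep an audit trail of who or what asked
    }

print(get_device_status("dev-123"))
print(post_remediation("dev-123", "apply_patches", requested_by="ai-assistant"))
```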
Now, at this point, I’ve given Kaseya, Syncro, N-able, and NinjaOne opportunities to comment on the merits of platforms and integrations. Why not ConnectWise too, you ask? Stay tuned, folks. I’ll be at that vendor’s IT Nation Secure event in just a few days.
Also worth noting
Microsoft had a lot to say about agentic AI outside the context of security at Build last week. Here’s an overview.
Surprise! Google also had lots to say about agentic AI at its I/O conference last week.
The latest output from the Dell AI Factory includes new AI PCs and multiple edge and data center enhancements.
Speaking of AI, Extreme Networks has put new conversational, multimodal, and agentic AI features for its Extreme Platform ONE into limited availability.
Not a moment too soon: DNSFilter says businesses are increasingly blocking genAI tools.
TeamViewer’s new TeamViewer ONE combines endpoint management, remote connectivity, AI, and digital employee experience functionality.
Someone’s got to sell all this AI stuff, so the Channel Marketing Association has a new AI certification.
Fortra has a new partner program.
Not to be outdone, Egnyte has a new partner program and portal.
NinjaOne has earned the FedRAMP “in process” designation.
Solutions from security vendor Guardz are now available on the Pax8 marketplace.
Devicie’s Reporting Connector in Microsoft Edge for Business helps users collect real-time browser telemetry and endpoint security event details.
Security vendor Druva’s platform now protects data in Microsoft Azure SQL and Azure Blob Storage.
Cynet, the security vendor helmed by former ConnectWise CEO Jason Magee since February, has rolled out a major new update of its proprietary AI engine.
Exabeam and Vectra AI are collaborating on AI-based SecOps automation.
ManageEngine and Zensar are collaborating on bringing real-time observability and unified operations to IT management.
JumpCloud has acquired privileged access management vendor VaultOne.
As a fan of the Alliance of Channel Women, I’m pleased to know that co-founder Nancy Ridge has written a book about the organization’s roots.