AI Security Survey Responses from SailPoint & Dimensional Research
We finally have a report from 353 enterprise participants on what professionals are seeing in the security layer surrounding AI agents at their companies. Hint: it's not good.
Fig. 1. The guard bot at Boo Radley's in Spokane, Washington.
Source: Adapted from [1]
A new research report titled “AI Agents: The New Attack Surface” is available from SailPoint [2]. It’s worth filling out the form and reviewing the PDF yourself.
The Great LLM Debate
I don’t enjoy the public discourse surrounding LLMs or agents right now. The tech community is deeply divided over the use of these technologies, and I often find myself caught between two worlds: respecting my craft, industry, and colleagues, while being respectful of other support workers adjacent to my expertise. At the same time, I’m shackled to progress, because attempting to ignore these systems or tools doesn’t make any sense either; a less pragmatic approach only makes the landscape more dangerous. If you’re having turbulent thoughts about these topics, I have an olive branch of rationalization that may help: it’s okay to be uncomfortable while remaining educated on them, regardless of politics. Knowledge strengthens both sides of an argument. Even the 19th-century Luddites understood the automation they protested against, because that understanding added power and strategy to their side. I think that’s an important lesson worth considering, one I picked up from a book called Blood in the Machine.
Authority-Based Research
SailPoint, the identity security company that published this data, focuses on AI governance, cloud tools, and cybersecurity. The research itself was conducted by Dimensional Research [3], which handles focus groups, surveys, and other research services. The report sits behind a simple authorization form with clear marketing opt-out controls.
Concerning Findings (page 4 of report)
Here are some of the key findings from the report that I want to cover:
- 66% see AI agents as a growing security risk.
- 53% acknowledge AI agents are accessing sensitive information.
- 80% report AI agents have performed unintended actions, including accessing and sharing inappropriate data.
- 44% of companies have governance policies surrounding agents.
I’m hopeful that many of the 34% who didn’t consider this a growing security risk will change their opinion after seeing this report. Agents accessing and sharing inappropriate data when nobody expected it should raise eyebrows. If you are designing security around these tools and understand the scope of what you are working on, there shouldn’t be this many whoops moments. The “move fast and break things” mindset is clearly still alive and being applied in the wrong contexts today.
I wonder how many of these situations could have been avoided without so much pressure from leaders who can’t see the risk they may be putting their organizations and staff under. We have seen plenty of articles about the growing pressure in enterprise organizations, and while the intention is often to get legacy workers to try new tools, pressure applied without context leads to exactly these kinds of stats and outcomes.
Sysadmins are Concerned
I have been in a great number of discussions over the last few months, both on- and offline, within my IT networks. I’ve expressed my concerns while trying to learn how other professionals are applying frameworks and being good stewards of security around agentic use on critical systems hosting sensitive data. The data in this report finally confirms some of my own gut feelings about where we’re at. Further, I have been considering whether it is right to share some of my interim solutions while waiting for better authority-based ones. In the coming days I’ll be sharing some of those solutions, just to get more of the good word on the Internet for other googling admins.
Security governance is still very young and childlike when it comes to people understanding agentic security controls.
- Anonymous Sysadmin from my network
My concerns have been slowly growing as we are all learning at the same time. The content and places we receive bits of information from are still of varying quality. Even within adjacent factions of IT people, you get wildly different takes on “the right way” to implement security. This is where the lack of governance comes in: we should be looking to the compliance folks, but they haven’t even caught up to where IT is yet. The report puts compliance teams’ adoption at only 24%, yet organizations are relying on IT people to deploy solutions on sensitive systems. Make it make sense.
How a Sysadmin Is Thinking About This
My approach has been to follow least-privilege principles, but I’ll admit even I’m building these solutions out myself, creating a series of gates and monitoring in the same way I would when allowing a vendor into internal systems. That’s something one would commonly only do out of necessity. While past experience lends value in these situations, there are plenty of IT people out there without that lived experience, and it’s within these gray areas that systems may become compromised. So far I’ve received a lot of agreement on this position, but even in my own testing, the LLMs will poke and try to find workarounds to my security controls. It’s unnerving to say the least. It’s like having an intern that lacks discipline, cannot be corrected, and doesn’t follow a growth path of trust. As models change and improve, security that kept an agent under control yesterday may be defeated tomorrow when the model finds a better solution and breaks out.
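To make the “series of gates” idea concrete, here is a minimal sketch of my own (not from the report) of an allowlist gate an agent’s tool calls could pass through before ever touching a shell. The command allowlist, deny substrings, and function names here are all hypothetical illustrations, not a real product or standard:

```python
import shlex

# Hypothetical policy: the only binaries this agent may invoke.
# Everything else is denied and logged, the way one would monitor
# a vendor's session into internal systems.
ALLOWED_COMMANDS = {"ls", "cat", "systemctl"}
DENY_SUBSTRINGS = ("sudo", "pct exec", "curl", "ssh")  # known escalation paths

def gate(command: str, audit_log: list) -> bool:
    """Return True only if the agent's requested command passes policy.

    Every decision is appended to audit_log, so denials can be
    reviewed later rather than silently dropped.
    """
    parts = shlex.split(command)
    binary = parts[0] if parts else ""
    allowed = binary in ALLOWED_COMMANDS and not any(
        bad in command for bad in DENY_SUBSTRINGS
    )
    audit_log.append(("ALLOW" if allowed else "DENY", command))
    return allowed

log = []
gate("ls /var/log", log)                         # permitted
gate("sudo pct exec 108 -- ip addr show", log)   # denied: escalation attempt
```

A real deployment would enforce this at the OS level (sudoers, containers, seccomp) rather than in the agent’s own process, since a gate the agent can rewrite is no gate at all.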
Heck, I can even show you an example log from one of my own restricted agents as it continuously tries to elevate, even with its allowed commands explicitly stated in context. These came through as email alerts I received, and it’s mostly benign, but it proves my point.
```
Jan 2 03:27:43 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 321 -- find /opt/bytestash -name *.log -type f
Jan 2 03:28:32 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 108 -- ip addr show
Jan 2 03:28:33 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 303 -- journalctl -n 50 --no-pager
Jan 2 03:28:59 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 108 -- ping -c 2 bytestash.redacted.internal
Jan 2 03:29:03 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 108 -- curl -s -m 5 http://bytestash.redacted.internal:3000
Jan 2 03:29:07 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/pct exec 321 -- curl -s -m 5 http://localhost:3000
```

At some point you have to consider whether, given all the perpetual testing it would take to satisfy what one could imagine to be proper compliance, the juice is worth the squeeze. So far I believe it may be, but the margins of the gains are thinner than many would wish you to believe.
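For anyone who wants to turn those denial entries into alerts rather than reading raw logs, here is a minimal parsing sketch. The regex is an assumption fitted to the field layout of the sample entries above (sudo-style “a password is required” lines), not a universal log format, and the function names are my own invention:

```python
import re
from collections import Counter

# Hypothetical parser for sudo-style denial entries shaped like:
#   Jan 2 03:28:32 : terry : a password is required ; PWD=/ ; USER=root ; COMMAND=...
LOG_RE = re.compile(
    r"^(?P<when>\w+\s+\d+\s+[\d:]+) : (?P<caller>\S+) : "
    r"a password is required ; PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<target>\S+) ; COMMAND=(?P<command>.+)$"
)

def parse_denials(lines):
    """Yield one dict per denial entry matching the expected layout."""
    for line in lines:
        m = LOG_RE.match(line.strip())
        if m:
            yield m.groupdict()

def summarize(lines):
    """Count denials per (caller, target-user) pair for an alert digest."""
    counts = Counter()
    for entry in parse_denials(lines):
        counts[(entry["caller"], entry["target"])] += 1
    return dict(counts)

sample = [
    "Jan 2 03:28:32 : terry : a password is required ; PWD=/ ; "
    "USER=root ; COMMAND=/usr/sbin/pct exec 108 -- ip addr show",
    "Jan 2 03:29:07 : terry : a password is required ; PWD=/ ; "
    "USER=root ; COMMAND=/usr/sbin/pct exec 321 -- curl -s -m 5 http://localhost:3000",
]

print(summarize(sample))  # {('terry', 'root'): 2}
```

A cron job feeding the day’s log through something like this is a cheap way to spot an agent’s escalation attempts trending up before they succeed.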
Right now it feels like a lot of effort up front, because the security solutions that are working well are mostly custom-designed. That should improve in the future. I can imagine a time when we will create a new user account marked with a dash flag for agentic security use, with restrictions built in and working well. I also think we are several years away from an OS-packaged solution for this. Everyone is going to want to sell you their turnkey solution well before the graybeards catch up.
Waste Time Up Front
For leaders, the best thing you can do is charge your people with confidence and allow them room to understand the tools they are working with. Promote trying things in safe environments, and spend a few dollars where necessary so they can test independently or gain access to resources and documentation. IT hasn’t changed this drastically since about 2015, and before that 2007. Within those gaps, leaders were able to hold a certain knowledge expectation of staff fresh out of academia that no longer exists.
Don’t Skip Ahead
I’ll just be blunt: this is different. While experienced staff will adopt the right solutions more quickly, this is more like the dot-com era around 2000, a wild west where everyone was learning and building at the same time. Higher knowledge authorities come and go during these periods, and while they are helpful, that authority-based knowledge is never on time and becomes irrelevant just as quickly. Since knowledge is flying in from everywhere all day, formal guidance is pretty well useless, with the exception of networking groups surrounding these topics. I expect this page to age like milk right along with them.
IT People Need to Lab it Out
IT people need protected time so they can lab this stuff out in non-production environments. That has always been the case under ideal workloads, but right now it is more important than ever due to the cognitive changes involved.
Deterministic vs. Probabilistic
You’ve hopefully heard this phrase before, but until recently, all of IT has been about deterministic strategies and outcomes. The switch to probabilistic thinking is not just one more thing to keep in mind; it is an entire shift in how IT people are going to have to think, period. The constructive critical thinkers are already doing this, but the vast majority just aren’t yet. Nobody has twenty years of experience honing probabilistic outcomes. That is likely the source of the high number of unexpected outcomes in the stats we see from this report.
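A toy sketch of my own (nothing from the report) of what the mindset shift looks like in code: a deterministic check always gives the same answer, while a probabilistic component, here a seeded RNG standing in for an LLM or agent, is sometimes wrong, so you never act on a single sample. Majority voting is just one illustrative mitigation; all names and the 0.9 accuracy figure are hypothetical:

```python
import random

def deterministic_check(n: int) -> bool:
    """Classic IT: same input, same answer, every time."""
    return n % 2 == 0

def flaky_agent(n: int, rng: random.Random) -> bool:
    """Stand-in for a probabilistic component (a seeded RNG here, an
    LLM/agent in reality): right ~90% of the time, wrong the rest."""
    answer = deterministic_check(n)
    return answer if rng.random() < 0.9 else not answer

def validated(n: int, rng: random.Random, votes: int = 7) -> bool:
    """Probabilistic mindset: never trust one sample; take several
    and act only on the majority outcome."""
    results = [flaky_agent(n, rng) for _ in range(votes)]
    return results.count(True) > votes // 2
```

The point isn’t the voting trick itself; it’s that every design decision now has to assume the component can be wrong and budget for verification, which deterministic IT never had to do.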
Until IT people have a better sense for probabilistic outcomes, or at least a foot on the ground in their spinning worlds, they simply can’t be working on these things on live production systems under the expectation that it’ll probably be fine. Certainly not in any case where sensitive data is present. It’s an internal industry joke, but I’d be lying if I didn’t also admit that test environments and time to lab out solutions are not often prioritized. I won’t get into deliverables and how this is often backwards compared to business mindsets (today at least), but it’s simply unnecessary risk where agentic tools are concerned. The business runs fine; there’s no rush here.
Every Institution is a Research Institution Right Now
On the topic of implementing, testing, and deploying agentic tools: everywhere, whether you sell lighters or work at a non-profit providing blankets to weeble people inside the center of the earth, recognize that you are doing research at this point in time. There is extra pressure on speed, and in most industries, I’ll reiterate, it is completely unnecessary when the safety of your company data hangs in the balance. We’ll reach balanced outcomes, and it does not have to be at warp speed or at the cost of your company data. Patience, planning, compliance, and a willingness to push back the deadline to manage risk: these are the often unpopular and uncomfortable realities of good IT leadership, but it’s what keeps the world from breaking. Every time one of the huge Internet services goes down, was it because they made a well-tested change? Of course not.
There’s no point in being fast if you are going in the wrong direction.
References
[1] L. Paseos, "The guard bot at Boo Radley's in Spokane, Washington," *Wikimedia Commons*, 2013. [Online]. Available: https://commons.wikimedia.org/wiki/File:Robot_Roll_Call_(11721788165).jpg. Accessed: Jan. 2, 2026.
[2] SailPoint and Dimensional Research, "AI Agents: The New Attack Surface," *SailPoint*, 2026. [Online]. Available: https://www.sailpoint.com/identity-library/ai-agents-attack-surface. Accessed: Jan. 2, 2026.
[3] Dimensional Research, *Dimensional Research*, 2026. Accessed: Jan. 2, 2026.