Maintaining cyber control when AI can act autonomously
Date:
Thu, 02 Apr 2026 10:40:22 +0000
Description:
The critical question is what AI agents are authorized to do: how they
trigger workflows, execute tasks and operate within delegated permissions.
FULL STORY ======================================================================

The ServiceNow AI platform vulnerability earlier this year reflects a broader shift happening in enterprise cyber
risk. There was no evidence of exploitation before a fix was in place, but
the incident serves as a warning to cybersecurity professionals.
Weaknesses in agentic AI capabilities can allow user impersonation and privileged workflow execution, illustrating how modern security threats are evolving beyond traditional data breaches. This is particularly pertinent today, as analysts predict that 40% of enterprise applications will include AI agents by the end of 2026. However, recent research also found that nearly half (47%) of AI agents are running without oversight. That equates to an estimated 1.5 million ungoverned agents in use across major organizations in the UK and US.

Matthew Lloyd Davies, Principal Cyber Security Author, Pluralsight.

For businesses operating across the supply chain, the risk of ungoverned AI agents can grow exponentially. Without proper oversight, autonomous agents could create disruptions that cascade across multiple organizations.
As agentic AI adoption increases and becomes embedded in business software, cybersecurity is no longer just about protecting data; it is about controlling the systems that can act on the organization's behalf. Organizations must move beyond a cybersecurity model centered solely on stopping breaches, and instead focus on how to maintain operational control when automated systems act beyond their intended scope.

A changing cybersecurity model

For most of the last two decades, the cybersecurity model was built around a clear perimeter. Cyber teams would typically manage and prevent compromises at individual server points: discrete, identifiable failures that could be isolated and contained. The rise of agentic AI has shifted their attention.
As AI becomes embedded into core business platforms, organizations don't just need to worry about hallucinations or output inaccuracies. The next major shift is from 'AI content risk' to 'AI action risk'. When AI agents interact across identities, APIs, platforms and workflows, they introduce new risk factors, and unlike a static data breach, these can propagate across multiple systems before anyone notices.
The critical question is what AI agents are authorized to do: how they
trigger workflows, execute tasks and operate within delegated permissions. When an agent is misconfigured, exploited or granted excessive privileges,
the consequences can escalate rapidly, because these systems automate decisions across multiple workflows simultaneously.
The question is no longer only "have we been breached?" but "are our systems still doing what we authorized them to do?" Those are different problems, and they demand different controls.

Retaining operational control

In testing scenarios, researchers demonstrated that unauthenticated external attackers, requiring only a target's email address, could embed malicious instructions in data fields that higher-privileged users' AI agents would later process. If left unmanaged, organizations can expect to see unauthorized workflow execution, cross-platform access expansion and rapid propagation of errors or malicious actions.
In effect, a familiar security flaw becomes more consequential when it sits inside a platform that can act across workflows, a dynamic often described as impact amplification. A reported security flaw that enables user impersonation and arbitrary actions within entitlements is exactly the kind of failure mode that leaders should worry about in AI-enabled workflow systems. It's why knowing how to retain operational control when automated systems behave unexpectedly is crucial.
For cybersecurity teams, this means treating AI features as changes to the organization's control environment. Organizations must reassess permissions, audit trails, monitoring and rollback paths at every AI implementation. Disciplined identity governance, least-privilege access design and tighter privilege management are essential.
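The least-privilege principle described above can be made concrete with a deny-by-default authorization gate that every agent action must pass. The sketch below is purely illustrative: the class names, permission strings and grant model are assumptions for this example, not any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical deny-by-default gate: an agent may only perform actions that
# were explicitly delegated to it. Anything outside the grant is rejected
# before it reaches a workflow.

@dataclass(frozen=True)
class AgentGrant:
    agent_id: str
    allowed_actions: frozenset  # e.g. {"ticket.read", "ticket.comment"}

class DelegationError(Exception):
    """Raised when an agent attempts an action outside its delegated scope."""

def authorize(grant: AgentGrant, action: str) -> None:
    """Deny by default: only explicitly delegated actions pass."""
    if action not in grant.allowed_actions:
        raise DelegationError(
            f"agent {grant.agent_id} is not delegated {action!r}"
        )

grant = AgentGrant("support-bot", frozenset({"ticket.read", "ticket.comment"}))
authorize(grant, "ticket.read")            # within the delegated scope
try:
    authorize(grant, "user.impersonate")   # outside the scope: blocked
except DelegationError as e:
    print(e)
```

The design choice to enumerate allowed actions, rather than block known-bad ones, is what keeps a misconfigured or exploited agent from escalating into workflows it was never meant to touch.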
This requires a shift in how organizations manage risk. Rather than focusing on supplier assessments, leaders should prioritize integration governance, concentrating on the small number of platforms that can trigger material business actions. It also involves controlling the seams: mapping key integrations, data flows and privileged automations, while monitoring them for abnormal behavior and tightening admin and service-account privileges.
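Monitoring the seams for abnormal behavior can start very simply. The sketch below, a toy example rather than a production detector, flags two basic anomalies in an agent's action stream: an action the agent has never performed before, and an action repeated unusually often within a sliding window. The window size and burst threshold are illustrative assumptions.

```python
from collections import deque

# Illustrative anomaly monitor for agent actions at integration seams.
# Flags (1) novel actions never seen for this agent, and (2) bursts of the
# same action within a recent window. Thresholds are arbitrary assumptions.

class AgentActivityMonitor:
    def __init__(self, window: int = 50, burst_threshold: int = 10):
        self.seen = set()                  # (agent, action) pairs observed
        self.recent = deque(maxlen=window) # sliding window of recent events
        self.burst_threshold = burst_threshold

    def observe(self, agent: str, action: str) -> list:
        key = (agent, action)
        alerts = []
        if key not in self.seen:
            alerts.append(f"NOVEL: {agent} performed {action!r} for the first time")
            self.seen.add(key)
        self.recent.append(key)
        count = self.recent.count(key)
        if count >= self.burst_threshold:
            alerts.append(f"BURST: {agent} repeated {action!r} {count} times recently")
        return alerts

monitor = AgentActivityMonitor(burst_threshold=3)
print(monitor.observe("billing-agent", "invoice.create"))  # novel action
monitor.observe("billing-agent", "invoice.create")
print(monitor.observe("billing-agent", "invoice.create"))  # burst detected
```

In practice, such signals would feed an alerting pipeline so that abnormal automation is surfaced before it propagates across systems.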
A rehearsed executive response to the exploitation of AI-enabled workflows will become increasingly important as the link between cyber and AI grows stronger. Set out clear escalation expectations, including rapid disclosure, clear mitigations and tested vendor comms channels. Time to clarity is a critical security capability in AI-controlled systems.

The cyber skills gap

Cybersecurity was identified as one of the top skills gaps in our Tech Skills Report, and 95% of IT and business professionals say they lack adequate support to build skills. Clearly, organizations must invest in the capability to govern AI-enabled systems effectively.
If AI agents are going to be added to an existing product, cybersecurity must be top of the agenda at the planning stage. That includes ensuring AI agents are narrowly scoped in terms of their privileges, and that risks are mapped out in case something goes wrong. It also demands investing in the technical capability to design, monitor and rapidly contain AI-driven automation.

But this requires professionals whose skills are up to date on the latest AI cyber risks. Currently, the knowledge gap in the majority of organizations makes it hard for security professionals to defend against AI-powered threats, let alone know what to do when something goes wrong.
Those organizations that get it right will see a wealth of new learning in
how security and privacy in AI work together.
Equally important is practice. Measuring readiness with sandbox assessments ensures decision-making has been exercised and recovery times are widely understood. Rehearsals should also include executive, legal and comms teams, who must be poised to react to threats and coordinate quickly with vendors.

What leadership should prioritize

As organizations accelerate the adoption of AI agents, leaders need to redefine risk. That means treating unauthorized actions, workflow manipulation and operational disruption as disaster scenarios worthy of the same rehearsal rigor applied to ransomware or a major outage. It's a responsibility that doesn't just lie with the cybersecurity team.

The questions every leadership team should already have answers to are: Who can act on our behalf? What's the kill switch? What's our containment move in the first hour? Organizations that have rehearsed those answers, across cyber, legal, comms and executive teams, will be the ones that keep core systems running when something goes wrong.
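The "kill switch" question has a simple technical core: a process-wide flag that, once tripped, blocks any further agent actions until humans have investigated. The sketch below is a hedged illustration of that pattern, assuming a single-process deployment; the names and wiring are invented for this example, not taken from any specific product.

```python
import threading

# Hypothetical kill-switch pattern: every agent action checks a shared flag
# before executing. Tripping the switch halts all subsequent actions.

class KillSwitch:
    def __init__(self):
        self._tripped = threading.Event()  # thread-safe one-way flag

    def trip(self, reason: str) -> None:
        print(f"KILL SWITCH TRIPPED: {reason}")
        self._tripped.set()

    def guard(self) -> None:
        if self._tripped.is_set():
            raise RuntimeError("agent actions are halted pending investigation")

switch = KillSwitch()

def run_agent_action(name: str) -> str:
    switch.guard()  # every action consults the switch first
    return f"executed {name}"

print(run_agent_action("ticket.triage"))  # runs normally
switch.trip("abnormal workflow execution detected")
try:
    run_agent_action("ticket.triage")     # now blocked
except RuntimeError as e:
    print(e)
```

In a distributed deployment the flag would live in shared configuration or a feature-flag service rather than process memory, but the containment logic, check before every action, trip once, investigate before reset, is the same.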
======================================================================
Link to news story:
https://www.techradar.com/pro/maintaining-cyber-control-when-ai-can-act-autonomously
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)