Agents. Agents. Agents. They’re everywhere. With the growing hum of agentic solutions tickling our eardrums, enterprise leaders are excited about the promise of agents taking over their business processes. That excitement is warranted: agents offer a remarkably low barrier to entry to what was previously a challenging automation space dominated by solutions like RPA, IDP, and Low Code/No Code. Then we start throwing around terms like self-healing, and anyone involved in enterprise automation catches themselves drooling at least a little bit.
However, the excitement is leading leaders to overlook one critical flaw in agentic AI solutions: process. In the people, process, technology framework, agentic AI addresses people and technology but completely overlooks process. When an agentic solution uses a large language model (LLM) to execute a “process,” it looks for creative solutions to a problem, so if a user relies on an agent to execute a process 100 times, the execution will vary slightly each time. These are the glaring issues that come from a lack of process in automation, issues no one is talking about yet, and they are why CIOs and other leaders need to tread carefully into an agentic future.
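To make that variance concrete, here is a minimal sketch. The `plan_process` function is hypothetical: it stands in for any LLM-backed agent planner, and it simulates with random sampling the run-to-run drift a real model exhibits at nonzero temperature.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM-backed agent's planner. Real agents
# sample from a model, so repeated calls with the same prompt can yield
# different step orderings or extra steps. We simulate that here.
def plan_process(prompt: str) -> tuple:
    steps = ["fetch invoice", "validate totals", "match PO", "post to ERP"]
    optional = ["email requester", "flag for review"]
    plan = steps.copy()
    if random.random() < 0.3:  # the agent sometimes improvises a step
        plan.insert(2, random.choice(optional))
    return tuple(plan)

# Run the "same" process 100 times and count how many distinct plans emerge.
plans = Counter(plan_process("process this invoice") for _ in range(100))
print(f"{len(plans)} distinct execution plans across 100 runs:")
for plan, count in plans.most_common():
    print(f"  {count:3d}x {' -> '.join(plan)}")
```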
The Illusion of Control
Imagine a car without a steering wheel. That’s essentially what enterprises are doing when they implement agentic AI solutions without proper human oversight. They feel like they’re in control because they create the prompts and check some boxes to build the agent. But what happens after launch? The agent lacks control mechanisms, the steering wheel of this metaphor, leaving users unable to effectively review or modify the AI’s planned actions.
This absence of control is particularly alarming in domains dealing with sensitive financial, legal, or healthcare matters. Think about how sensitive we are to mistakes of this nature from our flesh-and-bone human employees. We’re not very forgiving, are we?
Just as no one would trust a self-driving car without an emergency override, enterprises cannot afford to rely on AI agents that operate as black boxes, making decisions that could have far-reaching consequences while we remain in the dark, particularly when each run can vary slightly.
The Maintenance Nightmare
The whizbang features of generative AI are, in general, focused on the first part of every lifecycle, and agents are no different. Agentic solutions pitch how quickly and easily they can be spun up and put to work. The concept of “velocity to value” is thrown around wantonly. But again, this isn’t how the enterprise operates.
Maintaining AI agents is akin to building a skyscraper on quicksand. The challenge lies not just in the initial implementation but in the ongoing management and adaptation of these systems. Current agentic models offer no clear solution for maintenance, despite the fact that up to 95% of automation work comes after initial creation, in maintaining the processes.
The problem is compounded by the potential for cascading changes when modifying high-level prompts. Because users can’t control how agents function in detail, they must turn to the one lever they do control: the prompt. A small tweak to the prompt could lead to an entirely new execution plan by the agent, with no clear visibility into the details. This lack of granular control makes it nearly impossible to implement minor adjustments without risking unintended consequences across the entire system. To that end, does the agentic solution have the testing to understand those impacts at the scale of thousands of automations per day? Perhaps not.
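One way to get ahead of this is to treat prompts like code: capture the agent’s planned steps for a golden set of test cases and diff them whenever the prompt changes. A minimal sketch, with hypothetical hand-written plan logs standing in for real captures:

```python
import difflib

# Hypothetical plans captured from an agent before and after a "small"
# prompt tweak. In practice these would come from logging the agent's
# planned actions against a golden set of test cases.
plan_before = ["fetch invoice", "validate totals", "match PO", "post to ERP"]
plan_after  = ["fetch invoice", "match PO", "validate totals",
               "flag for review", "post to ERP"]

# Surface every step the prompt change added, removed, or reordered.
for line in difflib.unified_diff(plan_before, plan_after,
                                 fromfile="before prompt tweak",
                                 tofile="after prompt tweak", lineterm=""):
    print(line)
```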
The Reliability Mirage
For humans, 95% is pretty good most of the time. But for AI agents in complex use cases, we won’t be able to overlook an error rate of 5% or even 10%. AI systems are fundamentally imperfect, and this inherent unreliability makes agentic AI solutions a ticking time bomb in environments where precision is paramount.
We don’t allow for many mistakes in multi-million dollar transactions at a financial services organization, nor should we. Even if accuracy grows from 95% to 99%, a large enterprise could face hundreds of errors monthly, each potentially leading to significant financial losses or legal issues. If that were your bank, would you trust it? And the reputational damage might represent the worst of it. The stakes are simply too high for such a margin of error.
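The back-of-the-envelope math is sobering. Assuming a hypothetical enterprise running 50,000 automated transactions per month (the volume is an illustrative assumption):

```python
# Back-of-the-envelope error math for an assumed transaction volume.
monthly_volume = 50_000  # hypothetical automated transactions per month

for accuracy in (0.95, 0.99):
    errors = monthly_volume * (1 - accuracy)
    print(f"{accuracy:.0%} accuracy -> {errors:,.0f} errors per month")
# 95% accuracy -> 2,500 errors per month
# 99% accuracy -> 500 errors per month
```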
The Governance Gap
The rise of citizen development in AI poses a significant risk to enterprise governance. Without proper oversight, employees across the organization could create their own AI agents, leading to a chaotic landscape of uncontrolled automation without clear process. CIOs only recently returned to glory after the era of shadow IT, and now they face their toughest adversary yet in shadow AI.
This scenario is analogous to allowing every employee to create their own version of critical business processes. It undermines the carefully crafted workflows designed by process owners and introduces inconsistencies that could jeopardize compliance and operational integrity. Agentic solutions suggest that everyone should create business process automations, and that’s simply not true for an enterprise. Rather, the thinking should be that every person could technically create automations, thanks to natural language and the disappearance of complex coding bottlenecks, but only a select few should actually have that privilege, with visibility from IT and operational leadership.
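In practice, that privilege model can be as simple as an approval gate in front of publishing. An illustrative sketch, with hypothetical role names:

```python
# Illustrative governance gate (roles and names are hypothetical): anyone
# can draft an automation in natural language, but only approved builders
# can publish to production, and every publish is visible to IT.
APPROVED_BUILDERS = {"ops_lead", "automation_coe"}

def publish_automation(author_role: str, automation_name: str) -> bool:
    if author_role not in APPROVED_BUILDERS:
        print(f"BLOCKED: {author_role!r} drafted {automation_name!r}; "
              "routing to the automation CoE for review.")
        return False
    print(f"PUBLISHED: {automation_name!r} by {author_role!r} (logged for IT).")
    return True

publish_automation("finance_analyst", "vendor-refund-bot")  # blocked
publish_automation("automation_coe", "vendor-refund-bot")   # allowed
```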
The Learning Dilemma
We all know that agents offer increased adaptability and resilience in handling exceptions, but that may not be enough. Enterprises are dynamic entities, constantly evolving in response to market changes and internal improvements. The lack of a clear learning philosophy and lifecycle management for AI agents means that as businesses change, these systems may become increasingly out of sync with organizational needs.
This misalignment could result in AI agents making decisions based on outdated information or obsolete processes, potentially leading to costly mistakes or missed opportunities. If businesses opt to simply transition from one agent to a new version, they will need to consider what that change management looks like.
Be Hopeful But Critical of Agentic Adoption
The potential of agentic AI is undeniable, but the current state of agentic solutions makes it a risky choice for enterprise adoption, particularly in areas where accuracy and accountability are non-negotiable. The lack of process, human oversight, complex maintenance requirements, inherent reliability issues, governance challenges, and difficulties in adapting to business evolution all contribute to a perfect storm of potential failures.
At Kognitos, our HAL (hyperautomation lifecycle) platform provides the same benefits as agentic solutions without the challenges. Process is the backbone of our platform, offering the same speed to value and cost consolidation that have made agentic solutions an alluring option, while addressing the issues outlined here in ways that other agentic solutions simply can’t match.
Most importantly, Kognitos offers businesses the chance to truly standardize and automate their processes while also allowing for adaptability. HAL identifies a creative solution on the front end, then replicates the process exactly until it encounters a step it can’t repeat, at which point it asks for guidance and works the answer into the process going forward. Learn more here.
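For readers who want to picture that loop, here is a generic sketch of the record-once, replay-exactly, escalate-on-exception pattern just described. It is illustrative only, not Kognitos’ actual implementation, and every name in it is hypothetical.

```python
# Generic sketch of record-once, replay-exactly, escalate-on-exception.
# Illustrative only; not Kognitos' implementation.

def ask_human(question: str) -> str:
    # Stub for human-in-the-loop guidance; a real system would pause the
    # run and wait for the process owner's answer.
    print(f"ASKING: {question}")
    return "use the grand total on the last page"

def replay(process: list[str], document: dict[str, str]) -> None:
    """Repeat each recorded step exactly; never improvise on failure."""
    for step in process:
        if step in document:
            print(f"OK      {step}: {document[step]}")
        else:
            # Instead of inventing a workaround, escalate, then fold the
            # answer into the process so future runs handle it the same way.
            guidance = ask_human(f"Step {step!r} failed. What should I do?")
            document[step] = guidance
            print(f"LEARNED {step}: {guidance}")

replay(["read invoice number", "read total"],
       {"read invoice number": "INV-1042"})
```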
Discover the Power of Kognitos
Our clients achieved:
- 75% of manual data entry eliminated
- 30 hours saved on invoicing per week
- 2 million receipts analyzed per year