In July 2025, I argued that the biggest barrier to agentic AI wasn't model capability; it was our imagination. That's still true, but over the past year something important has changed: we now have evidence of why agentic initiatives get canceled even when the tech looks "good enough." The biggest reason is that the work was never designed to move from exploration to measurable value.
Gartner’s prediction is still a jolt: over 40% of agentic AI projects will be canceled by the end of 2027. Not because the models “don’t work,” but because organizations struggle to turn agentic capability into strategy, ROI, and responsible scale.
That lack of clarity matters because most cancellations happen at the executive level. Projects die by one question from the CFO: "What value was delivered?" An amorphous answer tells the C-suite that the "agent" hasn't streamlined any work; it has only added cost.
Agentic AI is designed to pursue goals, take actions, learn from feedback, and improve over time. But most companies still treat it like a fancy chatbot or a smart version of an automated script. That’s like buying a Sukhoi Su-57 and using it to deliver mail. I mean, it’s technically possible, but also completely absurd and destined to disappoint.
The problem isn’t the tool. It’s the sandbox we insist on keeping it in.
Most companies experimenting with agentic AI never get past the exploration stage. They run pilots and collect reactions, but they can't point to the value their AI delivers, because they never redesign the work, track what changed, or build the operating model needed to scale the program.
What Changed Since the Original Post
In mid-2025, it was easy to talk about agents conceptually. In 2026, we have to be more honest, because agents are touching real systems and workflows.
Before going further, let's baseline what an agentic AI system should look like. I don't mean a chatbot with a nice UI. I mean a system that plans, acts, and learns from the outcomes of its actions: one that pursues goals, makes bounded decisions, and improves over time based on shared context and feedback. If you're familiar with the Avengers and Iron Man, I'm talking about a step toward Jarvis.
With that baseline, and before we get into architecture, we should ask how agentic AI actually plays out in the business. Three things are becoming clear:
First, “agentic” is a buzzword and marketing label: Too many products are being sold as agents when they’re little more than chat interfaces with better prompts or automation with a new wrapper. That creates confusion. And when leadership buys “agentic AI” and receives “rebundled automation,” the category takes a credibility hit.
Second, the cost of going from demo to production is high: It's far more than compute. You need to factor in the cost of connecting the agent to your systems, deciding what it's allowed to touch, putting guardrails and approvals in place where it interacts with critical systems, handling exceptions, and ensuring it behaves consistently.
Third, the teams that are getting agents into production are obsessed with testing: An agent isn’t a single answer. Instead, it’s multiple choices and actions strung together over time. Without a reliable way to measure and monitor, you can’t convince anyone to incorporate it into real workflows. And if it can’t touch real workflows, it never delivers ROI and will get canceled.
Evaluation is how you move from exploration to value because it turns “this is interesting” into “this delivers.”
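What that testing obsession looks like in practice is trajectory-level evaluation: scoring the whole sequence of choices an agent made, not a single answer. Here's a minimal sketch in Python; the billing task, the checks, and every name in it are hypothetical stand-ins, not a real framework:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One action the agent took, plus what happened."""
    action: str
    outcome: str

@dataclass
class Trajectory:
    """A full run of the agent on one task."""
    goal: str
    steps: list = field(default_factory=list)

def evaluate(traj, checks):
    """Score a trajectory against (name, predicate) checks.

    Each predicate sees the whole trajectory, because agent quality
    lives in the sequence of choices, not in any single answer.
    """
    results = {name: bool(pred(traj)) for name, pred in checks}
    score = sum(results.values()) / len(results)
    return score, results

# Hypothetical run: an agent chasing down a billing discrepancy.
run = Trajectory(goal="resolve billing discrepancy")
run.steps = [
    Step("fetch_invoice", "found invoice #1042"),
    Step("compare_to_contract", "rate mismatch detected"),
    Step("draft_correction", "credit memo drafted for approval"),
]

checks = [
    ("grounded its claim", lambda t: any("invoice" in s.outcome for s in t.steps)),
    ("stayed in bounds",   lambda t: all("draft" not in s.action or "approval" in s.outcome
                                         for s in t.steps)),
    ("finished in budget", lambda t: len(t.steps) <= 5),
]

score, detail = evaluate(run, checks)
```

The point isn't this particular harness; it's that every check is a claim you can show the CFO, run after run.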
In 2026, while imagination is still the barrier, it’s more than an issue of mindset. It’s an operating model problem.
How Can We Save Agentic AI from Failure?
The answer is unsurprisingly similar to what I proposed last year: stop leading with tech specs. Lead with business imagination. Then do the hard work of turning imagination into execution.
I still believe we need to start with imagination, but imagination alone isn't enough. The teams that win will move through exploration, proof, integration, and adoption at scale, treating agentic AI as a business capability instead of a demonstration.
Start with Business Imagination, Not Tech Specs
Ask not, “what can the AI do?” but “what if we had a tireless specialist who could explore options and adapt in real time?” Agentic AI can optimize supply chains, detect anomalies, conduct audits, personalize services — and even negotiate contracts.
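That "tireless specialist" framing is, at its smallest, a plan-act-learn loop: act, observe the outcome, adjust, repeat. A toy sketch, with an invented linear-demand market standing in for a real system (real agents would plan over far richer action spaces):

```python
def demand(price):
    """Toy market: demand falls linearly as price rises (a stand-in for reality)."""
    return max(0.0, 100.0 - 2.0 * price)

def pursue_revenue(target, price=10.0, lr=0.005, steps=300):
    """Minimal plan-act-learn loop: set a price (act), observe the revenue
    (feedback), and nudge the price toward the goal (learn). It keeps going
    until its step budget runs out -- a tireless, if simple-minded, analyst."""
    for _ in range(steps):
        revenue = price * demand(price)   # act, then observe the outcome
        price += lr * (target - revenue)  # learn: move price toward the goal
    return price, price * demand(price)

price, revenue = pursue_revenue(target=1200.0)
```

Swap the toy market for a real system with real permissions and the loop stops being cute and starts being an integration problem, which is exactly the next point.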
Pilot Real, Not Easy, Use Cases (and Exploration Matters)
Focus on high-value, high-complexity areas: fraud prevention, dynamic pricing, risk mitigation — not just chat flows.
Prioritize Real Business Value (Or You’ll Never Prove That Value)
Deploy agentic AI only where it provides measurable gains in productivity, cost, quality, speed, or scale. Chasing hype without ROI is a recipe for cancellation.
Integration Complexity Is Real
Connecting an agent to real systems, permissions, approvals, and exception paths is where most of the demo-to-production cost lives. Budget for it up front instead of discovering it mid-pilot.
Feed It the Right Data
Autonomy demands context. Most business value hides in unstructured data: invoices, contracts, emails, service logs. Agentic AI must consume and reason over this to be truly effective.
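At its smallest scale, "consume and reason over unstructured data" means turning free text into structured context an agent can act on. A deliberately trivial sketch using regexes; a production system would use document AI or an LLM extractor, and the invoice fields here are invented:

```python
import re

INVOICE = """\
Invoice #1042
Vendor: Acme Logistics
Total: $4,310.50
Due: 2026-03-15
"""

# Hypothetical field patterns for an invoice-like document.
PATTERNS = {
    "number": r"Invoice #(\d+)",
    "vendor": r"Vendor: (.+)",
    "total":  r"Total: \$([\d,]+\.\d{2})",
    "due":    r"Due: (\d{4}-\d{2}-\d{2})",
}

def extract(text):
    """Turn free text into the structured context an agent can reason over."""
    fields = {}
    for name, pat in PATTERNS.items():
        m = re.search(pat, text)
        fields[name] = m.group(1) if m else None
    return fields

record = extract(INVOICE)
```

However the extraction is done, the output is the same idea: grounded, structured context the agent can make bounded decisions against.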
Build Trust In, Don’t Bolt It On
People only work with what they trust, and trust is the price of admission for scale. AI is no different. Transparency, auditability, and responsible behavior must be part of the AI's design, not an afterthought. Trustworthy agents become business allies, not black boxes.
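Building trust in has a concrete shape: every action the agent takes passes through a gate that logs it and escalates high-risk actions for human approval. A minimal sketch; the risk list and the injected `execute`/`approve` callables are assumptions for illustration:

```python
import time

AUDIT_LOG = []  # in production: append-only, stored outside the agent's reach
HIGH_RISK = {"send_payment", "delete_record", "sign_contract"}  # invented policy

def gated_execute(action, params, execute, approve):
    """Run an action only after logging it, and only after explicit human
    approval when the action is high-risk. `execute` and `approve` are
    injected callables, which keeps the gate itself easy to audit and test."""
    entry = {"ts": time.time(), "action": action, "params": params}
    if action in HIGH_RISK:
        entry["approved"] = approve(action, params)
        if not entry["approved"]:
            entry["result"] = "blocked"
            AUDIT_LOG.append(entry)
            return None
    result = execute(action, params)
    entry["result"] = result
    AUDIT_LOG.append(entry)
    return result

# Hypothetical run: the human reviewer declines a payment action.
out = gated_execute(
    "send_payment", {"amount": 4310.50},
    execute=lambda a, p: "done",
    approve=lambda a, p: False,
)
```

The audit log is what turns "trust us" into "inspect us": every decision the agent made is on the record, including the ones a human stopped.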
Upskill Imagination Across Teams
AI workshops should inspire — not just inform. Help teams dream, design, and co-create with AI.
The bottom line
Agentic AI won’t thrive on compute and code alone. It needs bold imagination, grounded data, thoughtful integration, and earned trust to transform business—not just generate experimental excitement.
The technology isn’t the problem.
It’s the sandbox we keep it in.