
Agentic AI Isn’t Magic and Won’t Cure Our Data, Process and People Issues

The Market Is Racing Past Foundational Data Challenges That Impact Any AI-Related Effort

GenAI, we hardly knew you. Just a year ago you were all everyone could talk about. Now it’s Agentic AI that will “change the world.”

What’s striking is that organizations continue to blow right past core data issues that have existed for decades, doubling and tripling down on acceleration without guardrails or fundamentals. Think about it like this:

  • Overall AI effectiveness is underwhelming in large part because organizations have ignored critical data foundation issues.
  • Agentic AI’s probabilistic outcomes are resource intensive, requiring constant monitoring, tuning and refinement that exceed many organizations’ current capabilities and strain their complex legacy processes.
  • And we still haven’t reckoned with the role humans need to play in AI, never mind how they will engage with agents.

Organizations need to get serious about solving the foundational data issues of quality and reliability. They need a clear understanding of how any AI (agents or otherwise) can and should be used. And they need to address the processes and policies that ensure reliable and ethical use.

The Agentic AI Data Puzzle

Since Agentic AI is designed to operate autonomously, data quality, consistency, and accessibility are crucial. Too many organizations still rely on siloed data or use outdated or poorly structured datasets. This creates real risk of flawed outputs and actions by agents.

This is why Agentic AI’s near-term successes will come from internal applications. Starting internally lets teams iron out the kinks, identify where Agentic AI fits well, and avoid over-engineering solutions that create more tech debt and drive low adoption.

Highly regulated industries like healthcare and finance will be cautious, since they can’t afford the risk of autonomous agents making direct care or investment decisions. Instead, back-office functions such as procurement, sales and marketing, HR and finance are prime targets: their data ecosystems are smaller, and their teams can provide expert context and oversight.

This will likely lead organizations to explore small language models (SLMs). SLMs make it easier for teams to define and guardrail the data environment and inputs while monitoring an agent’s actions. Still, teams need to stay vigilant, since a small language model pulls from a limited dataset that may not represent the diversity and nuance of the real world. This can perpetuate biases, create flawed conclusions, or even generate hallucinations: responses that sound plausible but are entirely inaccurate.

So even with SLMs we will need robust data strategies to properly curate, clean, balance and govern the data. I can see a world with multiple SLMs serving dedicated agents, where an observability control tower enforces strong, centralized policies and rules around data collection, storage, and usage, helping to avoid privacy issues, regulatory non-compliance, bias and potential legal repercussions.
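
To make that picture a bit more concrete, here is a minimal, purely illustrative sketch in Python of what one policy check inside such a control tower might look like. The agent names, data categories and rules are hypothetical assumptions for illustration, not a description of any real product or implementation.

```python
# Hypothetical sketch of a centralized "control tower" policy check that
# gates what data a dedicated agent (and its SLM) is allowed to touch.
# Agent names, data categories and rules below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DataPolicy:
    # Data categories this agent may read, and whether PII is permitted.
    allowed_categories: set[str] = field(default_factory=set)
    allow_pii: bool = False

# Central registry: one policy per dedicated agent.
CONTROL_TOWER: dict[str, DataPolicy] = {
    "procurement-agent": DataPolicy({"supplier_contracts", "purchase_orders"}),
    "hr-agent": DataPolicy({"job_postings", "training_records"}, allow_pii=True),
}

def check_access(agent: str, category: str, contains_pii: bool) -> bool:
    """Return True only if the central policy permits this read.
    In practice every decision would also be logged for observability and audit."""
    policy = CONTROL_TOWER.get(agent)
    if policy is None:
        return False  # unknown agents get nothing by default
    if contains_pii and not policy.allow_pii:
        return False
    return category in policy.allowed_categories

if __name__ == "__main__":
    print(check_access("procurement-agent", "purchase_orders", contains_pii=False))  # True
    print(check_access("procurement-agent", "training_records", contains_pii=True))  # False
```

The point isn’t the code; it’s that access decisions live in one governed, observable place where they can be logged, audited and tuned, rather than being buried inside each individual agent.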

Don’t Forget the People

Implementing Agentic AI is a significant change that heavily impacts organizational structure, roles, and workflows. Especially given Agentic AI’s reputation for replacing jobs, organizations need to be deeply focused on explaining clearly how agents will augment people, not displace them.

This puts an increased premium on upskilling and training. Humans must be in control of these systems to a) provide the context crucial to trusting outputs and b) shape the workflows those outputs feed so they match how the business should ideally operate. If organizations ignore this, they will certainly see resistance, misuse and flawed adoption that lead to weak or bad outcomes.

Imagine a healthcare AI making decisions about patient care without input from doctors, or a financial AI misinterpreting market data without human checks. Agentic AI must complement human expertise, not replace it. Humans are essential to validating decisions, intervening when necessary, and providing empathy and judgment that AI lacks.

As Always, Balance Is the Way Forward

The rush to adopt Agentic AI is understandable. Fear of missing out, being outpaced by the competition and senior-level pressure are all real. However, Agentic AI is really just the latest chapter in a conversation we need to keep having. That conversation is about balance: balancing cost vs. outcomes, potential vs. reality, and readiness vs. goals.

There is a saying that nature abhors a vacuum. As technologists, we’re more than eager to fill perceived gaps with the latest and greatest solutions. Agentic AI scratches that itch: what if we just automated all these burdensome processes? What a world we could create!

But there’s another truism: the best systems work in balance. We need to apply that thinking to Agentic AI, acknowledging the continued need to build and maintain solid data foundations, integrate effective change management practices, and keep humans in the loop so systems and people can thrive together.

Poornima Ramaswamy

AI, Data & Digital Thought Leader | P&L Leader | Customer Centric | Strategic Advisor
