- AI is not removing the need for strong data foundations. It is making their absence more costly.
- The biggest blocker to AI adoption is not the model. It’s whether the data can be trusted.
- Businesses that treat data and AI as operating infrastructure, not side initiatives, will be better positioned to create real value.
There’s a lot of noise in the market right now around AI.
When Maggie Laird, President of Pentaho, Hitachi Vantara’s data software business, and I spoke on my Data Redefined podcast last July, the AI market was already moving quickly. Since then, that pace has only accelerated. But the core point from our conversation is still true: for all the attention on AI, everything still comes back to data.
That is why I keep coming back to this conversation months later.
We have been talking about the promise of data for a long time. I said during the discussion that we have been talking about getting value from data for nearly the entire span of my 28-year career. And in many organizations, it still feels like a promise.
AI has not closed that gap. It has made it more visible.
“What’s very clear is that data is the number one blocker to successful AI,” said Laird.
That should get everyone’s attention.
Because the problem for most companies is not a lack of interest in AI. It is not even a lack of ideas. The problem is that many businesses are still dealing with the same foundational data issues they’ve had for years: fragmented environments, inconsistent quality, weak governance, and limited trust in the data moving across the organization.
That is why one of my responses during that conversation was simple: “The more things change, the more they remain the same.”
But what is changing is the cost of getting it wrong.
If bad data feeds a report, it can slow down a decision. If bad data feeds a model, an agent, or an automated workflow, the consequences move faster and spread wider. That’s a very different operating reality. Laird pointed out that AI isn’t necessarily creating entirely new data problems. In many cases, it is amplifying the ones that were already there.
That is exactly why companies need to think differently now.
In that conversation, I said the opportunity is to become more data-centric, not just data-driven. That distinction is important. A data-driven company uses data to support decisions. A data-centric company treats data as part of the business’s core infrastructure across functions, systems, workflows, and interactions.
That is the standard AI is pushing companies toward.
This is not really about whether an organization can run a pilot or test a tool. Most can. The harder question is whether the business can trust the data underneath it, move it effectively, and make it usable for the right consumer, whether that consumer is a person, a system, or a machine. That is where readiness becomes real.
And as Laird said, “There’s no shortcuts to quality data.”
That is the part of the AI story that more companies need to take seriously.
The winners are not going to be the companies making the boldest claims about AI. They are going to be the ones doing the less visible work to make their data usable, trusted, and operational. They are going to be the ones building the pipelines, governance, quality controls, and internal discipline needed to support AI at scale.
That may sound basic, but it is not. It is the foundational work required to scale usable AI.
And that’s why one of Laird’s other comments stayed with me: “Get your house in order. Get the fundamentals in order so that you can move quickly on the AI journey when it’s the right time.”
That was the real takeaway when we recorded the podcast last July, and it is even more true now.
The companies getting serious about the fundamentals today will be in a much better position to create real value from AI.
View the full podcast here.