By Chris Havemann, CEO at RealityMine
The rise of AI is reshaping the consumer insights industry, not just by speeding up analysis but by changing the rules of how we understand people. In this article, I share my take on where that shift is taking us.
AI is already changing how we work with consumer data. Whether it’s customer service agents generating responses or platforms like Meta dynamically building ad creatives, AI is now part of the ecosystem.
But when we talk about consumer insights, the picture gets more focused — and more interesting.
At every stage of the insights value chain, AI is starting to play a role.
From a business point of view, this sounds great. But there are risks, too.
There’s a rush in some parts of the industry toward synthetic data — often because traditional survey response rates are falling, fraud is on the rise, and people simply don’t want to spend time answering surveys like they used to.
So, in some circles, there’s an assumption that AI can just “fill the gap.”
But if your training data is poor, your synthetic data will be, too. There’s no shortcut to high-quality signal. And as AI scales, that problem scales with it.
At RealityMine, we’ve always been focused on capturing actual behaviour — what people do rather than what they say they do. That kind of data is particularly valuable now because it’s grounded in reality.
The shift we’re seeing is that AI can now work with these large behavioural datasets far more effectively. Where a few years ago the volume might have been too much for a typical client to process, AI now makes it possible to follow consumers across journeys, platforms, and behaviours at scale.
That means behavioural data isn’t just big — it’s finally usable in a more meaningful way.
If you’re a research agency, this is both a gift and a threat.
AI makes analysis faster, cheaper, and in some cases, better. But that efficiency gain isn’t going to land in your P&L. Your clients will expect more for less — and they’ll get it, whether from you or a competitor who embraces the tech faster.
So how do you stay relevant?
I’d argue there are two ways:
The tools may change, but judgment, creativity, and trust still matter.
One area that needs close attention is privacy. Behavioural data is, by its nature, potentially sensitive. Even if it isn’t explicitly personal, AI can draw inferences from it that amount, in effect, to synthetic personal data.
It’s not always about what data you have — it’s about what conclusions can be drawn from it.
So we need to be clear about how data is gathered, what consent looks like, and how we protect individuals in a world where patterns can reveal far more than we realise.
If I had to make a prediction, I’d say the insights industry will shift towards smaller volumes of higher quality data — backed by verification, richer context, and smarter tools.
It might mean going back to more rigorous recruitment. It might even mean face-to-face panel validation, despite the cost. But if AI can generate more with less, it makes sense to focus on the integrity of your inputs.
In short: the value will move upstream.
Perhaps the most underappreciated shift is how AI will change the speed of organisational learning.
We’ve all seen it — a business clings to an old assumption about its customers or market long after the facts have changed. Why? Because the data cycles were slow, and change was uncomfortable.
AI doesn’t care about your assumptions. It will discard yesterday’s conclusions if today’s data tells a different story. And that’s a mindset shift many organisations are not ready for.
But if we get it right, this could be a catalyst for more dynamic, responsive, and evidence-led decision-making, provided we’re still willing to challenge what we think we know.