Digital twins and AI are converging to transform industrial operations, but success demands deep data discipline, strategic humility, and an uncomfortable rethink of control. Real change starts with the unspectacular.
For all the headlines about AI remaking industry, few narratives grapple with the unglamorous reality at its foundation: if your data is poor, your AI is pointless. And if your digital twin is just a 3D visualisation, you have missed the point entirely. For Simon Bennett, Director of Research and Innovation at AVEVA, digital twins are not something that can be overlaid with intelligence; they are the substrate that makes it possible.
“You cannot talk about AI without talking about data, and by extension, digital twins. They are bound together,” Bennett explains. “But the digital twin is the heavier lift. It demands years of painful work to digitise a business properly, starting with how you define entities, capture topologies and build a unified data language. That groundwork is essential.”
This insistence on structure over spectacle is hardly fashionable. Executives are drawn to the promise of visual dashboards and intuitive intelligence. Yet beneath that sits a slow, relentless process of building semantic consistency across functions, assets and legacy systems. Without it, AI models will simply mirror and reinforce existing silos. Companies desperate for digital transformation often overlook the fact that meaningful intelligence can only be extracted from systems that first understand themselves.
Beyond the pretty picture
A digital twin, Bennett argues, is not merely a visual replica of physical infrastructure. It is a machine-readable model of how a business operates. And in that sense, a bakery with an Excel sheet of oven cycles, staff rotas and footfall patterns may be closer to a true digital twin than a 3D-scanned oil rig.
“There is a misconception that digital twins must be visual,” he says. “But it is really about the data model that underpins how your business functions. And it is challenging for most organisations to achieve that completely.”
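To make the bakery example concrete, here is a minimal sketch of what a "machine-readable model of how a business operates" might look like. The entities and figures (oven cycles, footfall) are hypothetical, taken only from the illustration above; the point is that value comes from joining previously siloed records through a shared data language, not from any 3D rendering.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical entities for the bakery illustration: a shared,
# machine-readable model of operations, not a visual replica.

@dataclass
class OvenCycle:
    oven_id: str
    day: date
    loaves_baked: int
    energy_kwh: float

@dataclass
class Footfall:
    day: date
    customers: int

def loaves_per_customer(cycles: list[OvenCycle],
                        footfall: list[Footfall]) -> dict[date, float]:
    """Join two previously siloed datasets through their shared key (the date)."""
    baked: dict[date, int] = {}
    for c in cycles:
        baked[c.day] = baked.get(c.day, 0) + c.loaves_baked
    visits = {f.day: f.customers for f in footfall}
    return {d: baked[d] / visits[d]
            for d in baked if d in visits and visits[d] > 0}
```

Even at this toy scale, the hard part is the one Bennett describes: agreeing on how entities are defined and keyed so that datasets from different functions can be joined at all.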
This difficulty is amplified by the complexity of industrial operations, where no two assets are quite the same and documentation is often incomplete. Many facilities that AI is expected to optimise were built long before anyone conceived of digital transformation. For a process plant commissioned in the 1970s, the challenge is not to plug in AI but to construct, from scratch, a structured data representation of reality.
It is here that passive sensing and robotic data collection are starting to offer lifelines. Equipped with cameras, vibration detectors and gas sensors, autonomous platforms are navigating harsh industrial environments and returning valuable insight. While this may seem like a shortcut to digitisation, it is more accurately a bridging mechanism, one that extends visibility into inaccessible corners of analogue infrastructure.
“We are seeing robots equipped with sensors, cameras and gas detectors being used in legacy environments to gather streams of data that would otherwise be inaccessible,” Bennett explains. “These are not digital-native environments. But with the right scaffolding, you can start to build a digital reflection of what is happening.”
The goal is not replication for its own sake. It is to create a shared language between operations, engineering, planning, and the systems that support them. Without this shared digital foundation, AI cannot function as a unifying intelligence. It simply becomes another silo.
Control in an autonomous age
If building the twin is the slow part, then trusting it to take control is the existential leap. The challenge is not just technical, but psychological and cultural. Operational control systems have always existed. But handing those reins to AI, even partially, triggers understandable concern.
“Some of our more visionary customers are looking to embed AI directly into the control loop,” Bennett explains. “But there are legitimate trust and ethics issues, especially in safety-critical environments. Until regulation and compliance frameworks mature, most organisations will hesitate.”
That hesitation is well-founded. The consequences of unintended behaviour in an AI-controlled environment are not hypothetical; they are operational, legal and reputational. While software-defined autonomy has become routine in digital-native sectors, heavy industry must operate under more constrained risk appetites. And yet, the appetite is growing. From energy utilities to advanced manufacturing, experiments in semi-autonomous control are accelerating.
There is a deeper issue, however, than technical readiness. It lies in how strategy itself is conceived. Traditional enterprise strategies operate on five- to ten-year planning cycles. AI moves in quarters, sometimes weeks. The result is a cognitive dissonance between what can be planned and what must be reacted to. As Bennett observes, many organisations now find themselves layering tactical AI deployments on top of strategic plans that were obsolete before they could be fully implemented.
“The pace of change is so fast that traditional strategies are struggling to keep up,” Bennett continues. “Strategic planning cycles cannot accommodate breakthroughs that happen quarter to quarter. So what we are seeing is an uncomfortable blend of tactics and strategy, and a lot of people looking sideways to see who will make the first big leap.”
This volatility undermines not just confidence but cohesion. In the absence of an enterprise-wide AI strategy, departments forge ahead independently. AI-enabled analytics in engineering. Chatbots in procurement. Predictive models in maintenance. Each is potentially useful in isolation, but uncoordinated they amount to a patchwork of tactical wins that masks systemic fragmentation.
From data love to data literacy
What separates AI winners from also-rans in industry will not be their choice of platform, their access to cloud infrastructure or even their algorithmic sophistication. It will be their ability to define business problems in machine-interpretable terms.
“There are many executives who believe AI has potential, but they have not translated their business problems into a form AI can work with,” Bennett observes. “You might say your profit margins are declining, but how is that defined in data terms? That is the new frontier of business literacy.”
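Bennett's own example can be made concrete. Below is a minimal, hypothetical sketch of what "declining profit margins" looks like once it is restated as a testable data definition rather than an intuition; the monthly figures are purely illustrative.

```python
# Hypothetical monthly records: (month, revenue, cost). Illustrative figures only.
monthly = [
    ("2024-01", 120_000, 84_000),
    ("2024-02", 118_000, 86_000),
    ("2024-03", 121_000, 90_000),
]

# Margin defined in data terms: (revenue - cost) / revenue, per period.
margins = [(month, (rev - cost) / rev) for month, rev, cost in monthly]

# "Declining" defined concretely: each period's margin is below the last.
declining = all(later < earlier
                for (_, earlier), (_, later) in zip(margins, margins[1:]))
```

The definitions chosen here (which costs count, which periods compare) are exactly the kind of decisions that require business and data expertise together, which is Bennett's point.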
This requires a new class of leadership, individuals who can operate at the interface between business outcomes and data architectures. Chief Digital Officers, data engineers and domain experts must collaborate to define problems not as abstract goals, but as measurable, structured, and solvable data challenges.
It also demands that companies move beyond passive consumption of AI tools. AI is not a product to be procured, but a capability to be cultivated. And that capability rests on ownership, not just of data, but of the models built from it. This, Bennett explains, is why AVEVA has chosen to avoid competing with its own customers for ownership of intelligence.
“We do not own the customer’s data, so we should not own their models,” Bennett explains. “Our job is to provide the tooling and the platform to let them create their own insight. It is a more strategic posture, and one that respects the customer’s ownership of value.”
The implication is profound. Enterprises must stop waiting for the perfect AI solution to arrive and start building the structures that will allow them to create their own: data pipelines, model governance and human oversight.
The future is hybrid and decentralised
There is no single future of AI infrastructure in industry. For some, the cloud will enable rapid scaling and centralised insight. For others, on-premise compute remains a necessity, dictated by regulation, latency, sovereignty or security constraints. What matters is not location but agility.
Equally, the obsession with generative AI obscures the more prosaic but valuable work being done by machine learning models in operational contexts. Large language models may dazzle with conversational fluency, but they are not optimising pumps, predicting failure or detecting anomalies in process data.
“The LLMs get the attention, but many of the AI models that matter in industrial contexts can run on a laptop,” Bennett says. “Predictive maintenance, real-time optimisation, safety monitoring: these are not about hallucinating new content. They are about understanding existing patterns with rigour and speed.”
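The kind of model Bennett describes really can run on a laptop. As a hedged illustration (not AVEVA's method), here is a rolling z-score anomaly detector over a stream of sensor readings, using only the Python standard library; the function name, window size and threshold are assumptions for the sketch.

```python
from statistics import mean, stdev

def zscore_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose reading sits more than `threshold` standard
    deviations from the trailing window's mean - a lightweight pattern
    detector of the kind used in condition monitoring."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

No GPU, no cloud round-trip: given a steady vibration signal, a single spiked reading is flagged immediately, which is what makes such models viable at the edge.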
This divergence in AI form factors is accelerating as compute power at the edge grows. Where once the cloud was the only viable platform for inference and analysis, now embedded AI agents on the plant floor can deliver decisions in real time, without latency, bandwidth or privacy concerns.
The result is a decentralised intelligence architecture, part cloud, part edge, and part human, that offers resilience without sacrificing control. But this only works if data integrity is preserved across every interface. Without standardisation, synchronisation and semantic consistency, distributed AI becomes distributed confusion.
For executives, this is where the real work lies. The future is not a straight line of increasing automation. It is a messy, hybrid, and multi-modal evolution. And it demands leadership that understands when to centralise and when to delegate, when to accelerate, and when to rebuild the foundations.
Strategy under pressure
Beneath all the noise about disruption, the real disruption is strategic. AI is not just a technology challenge. It is a planning problem. A cultural reset. An invitation to rethink what control, value, and competitive advantage look like in a world where intelligence can be embedded anywhere.
For many organisations, this will be deeply uncomfortable. It will require investment in the least glamorous parts of the business: data governance, model validation and edge integration. It will mean accepting that the most powerful use of AI may not be to reinvent the product, but to rewire how decisions are made, who makes them, and when.
As Bennett concludes, the most exciting AI use cases are built on the least exciting foundations. The companies that thrive will not be those who shout the loudest about AI. They will be those who listen hardest to their own data, build the structures to act on it, and trust their people to lead through ambiguity.
Ignore the hype. Embrace the discipline. And remember that transformation starts with the parts of the business no one wants to look at, until they have no choice.