Manufacturers are no longer relying on static schedules and human memory to manage complex assets. Embedded AI agents are changing the game by proactively detecting inefficiencies, optimising inspections, and capturing expertise before it disappears.
The traditional view of enterprise asset management as a reactive, records-based system is quietly collapsing. In its place, a new model is emerging, one that does not simply track breakdowns and schedule fixes, but actively predicts, prescribes, and intervenes. The agents leading this transformation are not human. They are digital co-workers: intelligent, embedded AI systems capable of interpreting, recommending, and in some cases executing operational decisions.
This shift is not theoretical. It is already happening in real-world industrial environments where legacy practices are proving inadequate to meet the complexity and speed of modern operations. Yet rather than replacing humans, these systems are reframing their role, turning frontline expertise into institutional intelligence and lowering the barrier to high-quality decision making.
From support tool to specialist peer
What distinguishes agentic AI from earlier generations of automation is its positioning. Rather than acting as a passive co-pilot, these systems operate alongside users as active collaborators, capable of initiating tasks, flagging inefficiencies, and drawing attention to latent risks. “The people using our systems are often trapped in day-to-day tasks, firefighting failures and executing routines without stepping back to assess their effectiveness,” Chris van den Belt, Head of Product Management, Ultimo, explains. “A digital co-worker can interrupt that cycle. It can say: you are inspecting this machine weekly, but your data shows no deviation for 30 weeks. Why not change the frequency?”
That kind of insight, previously the domain of seasoned engineers or process consultants, is now increasingly derived from internal operational data, provided that the data is complete and contextual.
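As a concrete illustration of the frequency check van den Belt describes, the minimal sketch below flags an asset whose recent inspections have all come back clean. It assumes nothing about Ultimo's implementation; the data model, threshold, and field names are illustrative only.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InspectionResult:
    asset_id: str
    performed_on: date
    deviation_found: bool

def suggest_interval_review(history: list[InspectionResult],
                            clean_streak_threshold: int = 30) -> str | None:
    """Flag an asset whose recent inspections show no deviations.

    If the last `clean_streak_threshold` inspections all came back clean,
    suggest that the planner reviews the inspection frequency.
    """
    recent = sorted(history, key=lambda r: r.performed_on)[-clean_streak_threshold:]
    if len(recent) < clean_streak_threshold:
        return None  # not enough history to say anything useful
    if any(r.deviation_found for r in recent):
        return None  # deviations seen recently, keep the current schedule
    return (f"{recent[-1].asset_id}: no deviation in the last "
            f"{clean_streak_threshold} inspections; consider extending the interval.")
```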
“The main limitation is not the technology,” van den Belt adds. “It is the completeness of the data. If a failure report just says ‘pump stopped, fixed it’, then there is nothing to learn. But if the report captures the root cause, the method, the outcome, then we can start to build systems that offer real decision support.”
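To make the point about completeness concrete, here is one possible shape for a failure record, sketched in Python. The fields are hypothetical rather than Ultimo's schema, but they show the difference between a note that says "pump stopped, fixed it" and a record a system can learn from.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FailureReport:
    """A failure record with enough context to support future decisions."""
    asset_id: str
    reported_at: datetime
    description: str                      # what the technician observed
    root_cause: str | None = None         # e.g. "seal degradation"
    resolution_method: str | None = None  # e.g. "replaced mechanical seal"
    outcome: str | None = None            # e.g. "restored to service, no recurrence"
    downtime_minutes: int | None = None

    def is_learnable(self) -> bool:
        # Only reports with cause, method and outcome can feed decision support.
        return all([self.root_cause, self.resolution_method, self.outcome])
```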
Building trust with visible logic
The ability of an agentic system to support and eventually drive decisions depends on more than technical capability. Trust is fundamental. That trust, in turn, depends on transparency. Van den Belt is pragmatic about the need for explainability, but sees it as a spectrum. “If the AI suggests some common symptoms to add to a failure report, explainability is less important,” he says. “But if it tells you to reduce a preventive maintenance interval, then it has to justify that recommendation. That is where trust is won or lost.”
Trust is also a function of experience. If a technician sees the same AI-generated suggestions repeatedly lead to positive outcomes, they are more likely to accept the next one without hesitation. At that point, the possibility of AI-generated actions becoming the default, subject to later human review rather than pre-approval, moves closer. “Eventually, yes, I think agent-to-agent ecosystems are likely,” says van den Belt. “But not now. Users still need to be in the loop. We need a phased transition, one that builds confidence and allows organisations to stay in control.”
Crucially, the shift towards more autonomous agentic workflows will not be determined solely by the capability of the model. It will be shaped by the organisation’s cultural readiness to adopt these tools, the confidence of its workforce to interpret and supervise outputs, and the maturity of its operational data.
Embedding intelligence without over-engineering
One of the paradoxes of AI in operational environments is that the use cases with the most obvious value (predictive maintenance, incident detection, and process optimisation) have often failed to scale. The reason is rarely technical. Instead, the bottleneck is infrastructural: fragmented data, lack of model training capability, or the absence of a scalable deployment architecture.
Van den Belt is clear that organisations should avoid over-complicating the foundations. “We deliberately use publicly available LLMs like OpenAI rather than building proprietary models,” he adds. “That way, AI becomes an embedded feature, not a parallel system. The user does not need to know what a prompt is. They just see a suggestion appear where it is relevant.”

This low-friction approach reduces both cost and resistance. It also ensures that the system evolves with minimal cultural disruption. “We are not asking someone to adopt a new interface or new way of working,” he continues. “We are just offering a better interaction with the system they already use.”
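The embedded-suggestion pattern he describes can be sketched in a few lines against a publicly available model. The example below uses the OpenAI Python client purely for illustration; the model name, prompt, and function are placeholders, not a description of what Ultimo ships.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_symptoms(report_text: str) -> str:
    """Ask a hosted chat model for likely symptoms to add to a failure report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any hosted chat model would do
        messages=[
            {"role": "system",
             "content": "You help maintenance technicians complete failure reports. "
                        "Suggest up to three likely symptoms, one per line."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

# The user never sees a prompt; the front end simply renders the returned lines
# as optional suggestions next to the report form, clearly marked as AI-generated.
```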
However, van den Belt is quick to differentiate this embedded model from heavier AI use cases such as predictive maintenance. “Those require structured central data, advanced architecture, and expert consultancy,” he says. “That is why the adoption has been slow. With agentic AI embedded into existing systems, the barrier to entry drops significantly.”
Capturing human expertise before it disappears
One of the biggest challenges facing industrial organisations is the rapid loss of tacit knowledge. As experienced engineers and operators retire, the expertise captured in systems of record is often worryingly thin. Digital co-workers offer a mechanism to close this gap. “Today, if a technician does not know what to do, they call the expert,” van den Belt says. “But what if the system could recall a similar failure from two years ago and show the diagnosis and resolution? If we capture that knowledge consistently, then everyone benefits.”
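A rough sketch of that recall step, assuming past failure reports are stored as free text: index them and return the closest matches for a new fault description. TF-IDF similarity stands in here for whatever retrieval approach a production system would actually use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_similar_failures(new_description: str,
                          past_reports: list[str],
                          top_n: int = 3) -> list[tuple[str, float]]:
    """Return the historical reports most similar to a new fault description."""
    vectoriser = TfidfVectorizer(stop_words="english")
    docs = vectoriser.fit_transform(past_reports)
    query = vectoriser.transform([new_description])
    scores = cosine_similarity(query, docs).ravel()
    ranked = scores.argsort()[::-1][:top_n]
    return [(past_reports[i], float(scores[i])) for i in ranked]
```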
Far from displacing human intelligence, this model amplifies it, turning individual experience into collective memory. In some cases, it also reduces the skill threshold for entry, a critical benefit in an industry facing a recruitment shortfall. “We have operators who are not comfortable using software forms,” van den Belt explains. “But what if they could just speak to a digital co-worker? Describe the issue, answer a few clarifying questions, and the system creates the report. It is more intuitive and closer to how they already work.”
In practical terms, this kind of conversational interface, layered with intelligent prompting and embedded reporting, makes AI feel less like a technology platform and more like a trusted colleague. It creates an experience that is natural rather than disruptive, and intuitive rather than burdensome.
The use of third-party models and AI-generated outputs inevitably raises questions around governance. “Transparency is non-negotiable,” van den Belt notes. “If a recommendation comes from AI, the user needs to know that. Not for novelty, but for accountability.” He is also cautious about imposing AI where it is not yet mature enough to be helpful. “Everything depends on impact,” he adds. “If an AI-generated suggestion changes your maintenance strategy, then it needs to be transparent. If it just helps you search better, that is a lower bar.”
The principle is that AI should support the user, not replace them. At least not yet. The transition to greater autonomy will be gradual and built on evidence.
Success is not a feature, it is a relationship
For all the complexity of AI deployment, van den Belt measures success in simple terms: usage and value. “Are users engaging with the AI features? Are they accepting the recommendations? Are those actions leading to measurable improvements?” he says. He points to one example where a digital co-worker is responsible for identifying under-reported safety incidents based on maintenance logs. The system does not remove human oversight but augments it. Safety officers see the AI-generated incident reports alongside those created by humans and can decide whether to act. “It is a low-risk way to introduce agentic AI,” he notes. “It helps address real problems, like under-reporting, without taking control away from the user.”
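The pattern itself can be sketched simply: scan maintenance log text for safety-related language and queue candidates for a safety officer to review, without creating anything the user has not approved. The keyword screen below is a stand-in for whatever classifier or model the real system uses; the terms and field names are illustrative.

```python
SAFETY_SIGNALS = ("injury", "near miss", "burn", "leak", "exposed wiring",
                  "guard removed", "slip", "fall")

def flag_possible_incidents(log_entries: list[dict]) -> list[dict]:
    """Queue maintenance log entries that may describe unreported safety incidents."""
    candidates = []
    for entry in log_entries:
        text = entry.get("notes", "").lower()
        hits = [term for term in SAFETY_SIGNALS if term in text]
        if hits:
            candidates.append({
                "work_order": entry.get("work_order"),
                "matched_terms": hits,
                "source": "ai_generated",    # shown alongside human-created reports
                "status": "pending_review",  # the safety officer decides what to do
            })
    return candidates
```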
Measurability is also essential. “Every AI capability we build is designed to track its own effectiveness,” van den Belt explains. “If a digital co-worker recommends an optimisation, we track the outcome. Did it reduce downtime? Did it increase inspection intervals without increasing failures? That is how we judge impact.”
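In code terms, that kind of self-measurement amounts to recording each recommendation alongside what happened next. A minimal, hypothetical record might look like the following; the field names and the before/after comparison are illustrative, not a description of Ultimo's tooling.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RecommendationOutcome:
    recommendation_id: str
    capability: str                 # e.g. "inspection_interval_optimisation"
    accepted: bool
    issued_on: date
    downtime_minutes_before: int    # rolling window before the change
    downtime_minutes_after: int     # same-length window after the change
    failures_before: int
    failures_after: int

    def improved(self) -> bool:
        # A recommendation counts as effective only if downtime and failures
        # did not get worse after it was applied.
        return (self.downtime_minutes_after <= self.downtime_minutes_before
                and self.failures_after <= self.failures_before)
```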
Success also means acceptance. The best algorithm is useless if users ignore its outputs. That is why cultural adoption, not model accuracy, remains the most complex challenge. “You cannot force a workforce to trust AI,” van den Belt concludes. “You earn that trust incrementally, by embedding intelligence in ways that support their tasks, not disrupt them.”
This is not a technical race. It is a design challenge, a cultural transition, and a strategic rethink of what enterprise systems are for. The goal is not to replace human expertise, but to preserve it, scale it, and make it accessible at the point of need. If that can be achieved, then agentic AI will not feel like a revolution. It will feel like common sense.