Digital cognition is closing the gap between human judgement and machine control

Executives across manufacturing are rethinking how decisions are made, moving beyond steady-state control to confront the messy, probabilistic choices that govern yield, safety, and uptime. Digital cognition in industrial automation offers a practical way to encode expertise, reduce variance, and improve performance without removing people from the loop.

Plant automation has excelled at holding a steady state. Closed loops keep temperatures, pressures, and flows within limits with a precision that would have astonished past generations. The moments that still bend outcomes, however, do not live in those neat intervals. They appear when start-ups compress schedules, when alarms cascade without a clear root cause, and when new feedstock or fouling pushes equipment into unfamiliar territory. These are the moments that rely on human control loops, the informal but critical chains of recognition, reasoning, and response that experienced teams carry in their heads.

“Operators and engineers still determine what to make and when to make it, what to fix and when to fix it,” Jason Urso, Vice President R&D and Chief Technology Officer, Honeywell Industrial Automation, explains. “The best-performing sites are often those that have the most experience on the floor, because human judgement in abnormal or transient conditions remains decisive. Automation has made steady state close to autopilot, but those one per cent moments carry real risk and value, and they still depend on people.”

The striking theme is not a call for autonomy at any cost. It is a call for tools that capture and scale the cognition behind good decisions. By framing these human loops as targets for systematic improvement, digital cognition becomes a design objective rather than a slogan. The aim is not to replace expert judgement, but to provide timely, context-aware advice that narrows the variance between a veteran’s response and a newcomer’s.

Probabilistic methods meet deterministic plants

Traditional control thrives on determinism. Given a set of inputs and models, the outputs can be predicted within tight bounds. Human cognition, by contrast, works on probability, pattern, and precedent. The technological inflexion point is the arrival of methods and compute that can model those probabilistic choices with enough fidelity to be useful in live operations.

“We finally have the mathematical techniques and the compute to treat the probabilistic side of the plant with the same seriousness as the deterministic side,” Urso continues. “If a cluster of five alarms appears, we can mine event histories to find similar patterns from years ago, match them to operator actions that worked, and present a ranked recommendation in the moment. That does not remove the operator. It equips a three-year operator to respond more like a thirty-year operator, because the pattern memory is no longer trapped in a single mind.”

To achieve this, two ingredients matter. The first is data already collected for compliance and diagnostics, including alarm histories, operator interventions, and time-series context. The second is trusted process models and simulators that can generate synthetic scenarios and forecast the likely outcome of a recommended action. A credible advice engine blends both, weighting precedent with physics so that suggestions are intelligible, testable, and bounded by plant reality.
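
As a rough illustration of how those two ingredients might be blended, the sketch below scores candidate actions by combining the similarity of a live alarm pattern to historical episodes with a forecast from an online process model. The alarm tags, the `simulate` stand-in, and the fifty-fifty weighting are assumptions made for the example, not a description of any vendor's engine.

```python
from dataclasses import dataclass

@dataclass
class Precedent:
    alarms: frozenset          # alarm tags active in the historical episode
    action: str                # what the operator did at the time
    outcome_score: float       # 0..1, how well the plant recovered afterwards

def jaccard(a: frozenset, b: frozenset) -> float:
    """Similarity between two alarm patterns (shared tags / total tags)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_actions(live_alarms, history, simulate, top_n=3):
    """Rank candidate actions by blending precedent with a physics forecast.

    `simulate(action)` is a stand-in for an online process model that returns
    a 0..1 score for the predicted outcome of applying `action` now.
    """
    live = frozenset(live_alarms)
    scored = []
    for p in history:
        similarity = jaccard(live, p.alarms)
        if similarity == 0.0:
            continue
        # Weight precedent (what worked before) with physics (what the model
        # predicts will work now) so suggestions stay bounded by plant reality.
        score = 0.5 * similarity * p.outcome_score + 0.5 * simulate(p.action)
        scored.append((score, p.action))
    return sorted(scored, reverse=True)[:top_n]

# Example: five live alarms matched against two invented historical episodes.
history = [
    Precedent(frozenset({"TI101_HI", "PI204_HI", "FI310_LO"}), "reduce_feed_rate", 0.9),
    Precedent(frozenset({"LI450_LO", "PI204_HI"}), "open_bypass_valve", 0.6),
]
live = {"TI101_HI", "PI204_HI", "FI310_LO", "TI102_HI", "AI077_HI"}
print(rank_actions(live, history, simulate=lambda action: 0.8))
```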

There is an essential cultural point in how advice is delivered. No responsible operator wants a black box to ‘press the button’ in a high-hazard process. They will accept observable, graded recommendations that can be validated against live telemetry or run against an online process model in milliseconds. That distinction between opaque instruction and transparent decision support is what turns digital cognition from a research project into a practical instrument for performance improvement.

“Autonomy is a continuum, not a cliff,” Urso adds. “Driver assistance that warns of an unsafe lane change makes a young driver better without removing responsibility. The same principle applies on the board in a control room. We are presenting likely causes and proven action sequences, and we are learning from operator scoring of those recommendations so that the guidance improves over time.”

Connectivity changes the operating model

If digital cognition is the engine, secure connectivity is the drivetrain. For decades, industrial sites have been islands of expertise, each with its own tacit knowledge and practices. The combination of site-to-HQ and vendor-to-site connectivity allows expertise and monitoring to scale, turning episodic field support into continuous assurance. The operational and commercial model shifts from paid-when-broken to paid-to-prevent, aligning incentives and raising the standard of care.

“Connectivity lets us invert the service relationship,” Urso explains. “Instead of waiting for a call when something degrades, we can see anomalies as they emerge, correlate them across a large installed base, and recommend targeted fixes before a small issue turns into a production event. The benefit to the customer is higher reliability and faster resolution. The benefit to the provider is that success is defined by stable operation rather than billable emergency visits.”

The same connection fabric enables centralised expertise to influence local outcomes without stripping local control. A headquarters team can see the health posture of dozens of control systems and supporting assets, while the site retains authority over actions and scheduling. Advisory analytics operate alongside human oversight, and lessons learnt in one location become available to all in near real time. During the pandemic, the need for remote support shifted from an option to a necessity, and many organisations discovered that long-planned architectures worked when tested. That discovery continues to pay dividends as labour markets tighten and experienced personnel retire.

Cybersecurity in an open systems world

The gain in connectivity and openness increases the attack surface. Modern OT environments run on familiar IT components, share protocols, and exchange data with enterprise systems. The threat landscape has changed as well: more actors, more automation, more toolkits. The plant does not get to choose whether this is the world it lives in; it can only decide how to defend itself.

“Obscurity is not a defence and perimeter thinking is not enough,” Urso adds. “You need asset visibility, vulnerability assessment, and threat detection in real time, and you need it at enterprise scale, not site by site. The right posture combines patch intelligence, anomaly detection, strict control of interfaces between IT and OT, and compensating controls when production realities mean a perfect patch cadence is impossible.”

The practical stance looks like a layered defence and continuous instrumentation. Discover what is connected. Classify and prioritise vulnerabilities by risk, not volume. Monitor for lateral movement and dwell time, because sophisticated malware does not always announce itself on day one. When risk is elevated and maintenance windows are limited, apply compensating controls, such as tightening IT-OT firewall policies or enforcing allow-list execution, to reduce exposure until changes can be implemented safely. The essential step is moving from periodic, manual hygiene to continuous, evidence-based risk management that recognises the constraints of production environments.
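
A minimal sketch of the 'risk, not volume' idea follows, assuming a simple scoring model in which severity is amplified by exposure at the IT-OT boundary and by how long an asset must wait for a safe maintenance window. The asset names, CVE placeholders, and thresholds are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str              # discovered OT asset (e.g. an operator station)
    cve: str                # vulnerability identifier (placeholder values below)
    severity: float         # 0..10 base severity score
    exposed: bool           # reachable from the IT/OT boundary?
    patch_window_days: int  # days until the next safe maintenance window

def risk(f: Finding) -> float:
    """Prioritise by risk, not volume: severity amplified by exposure and
    by how long the asset must wait for a safe patch window."""
    exposure = 2.0 if f.exposed else 1.0
    wait = min(f.patch_window_days / 30.0, 3.0)   # cap the waiting-time factor
    return f.severity * exposure * (1.0 + wait)

def plan(findings, compensate_above=25.0):
    """Return findings sorted by risk, flagging those that need a
    compensating control (e.g. tighter firewall rules, allow-listing)
    until the patch can be applied safely."""
    ranked = sorted(findings, key=risk, reverse=True)
    return [(f, risk(f), risk(f) > compensate_above) for f in ranked]

findings = [
    Finding("historian-01", "CVE-XXXX-0001", 9.8, exposed=True, patch_window_days=90),
    Finding("eng-station-02", "CVE-XXXX-0002", 6.5, exposed=False, patch_window_days=14),
]
for f, score, needs_control in plan(findings):
    print(f"{f.asset:15s} {f.cve}  risk={score:5.1f}  compensating control: {needs_control}")
```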

There is a further human factor. New digital cognition capabilities require more data, and more data implies more connections. The defence must mature in step with that reality. That means decision-makers should consider cyber as a core design constraint for every connectivity and analytics decision, not a compliance afterthought. Frameworks and standards provide a baseline, but the operating discipline is what will determine resilience under pressure.

Evolving legacy without ripping and replacing

Modernisation is often where ambition collides with reality. Control systems last for decades, and the accumulated logic, displays, and wiring embody lifetimes of engineering. The cost and risk of wholesale replacement are why many programmes stall before they start. An architectural approach that treats evolution as a capability rather than a project changes the equation.

“We set out to remove the cliff-edge from system upgrades,” Urso explains. “The approach preserves configurations, graphics, procedures, and field wiring by wrapping legacy elements and emulating the old environment on modern, more capable hardware. It is akin to how long-lived spreadsheets run on today’s office suites without rework. You get new capability without discarding proven knowledge, and you create a path where each future generation continues to carry the prior one forward.”
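
One generic way to picture that wrapping is an adapter layer that keeps the legacy programming interface intact while translating calls onto a modern runtime. The sketch below is a simplified illustration of that pattern, with invented tag names and point paths; it is not a description of Honeywell's implementation.

```python
class LegacyBlockAPI:
    """Interface the decades-old control configuration was written against."""
    def read_pv(self, tag: str) -> float: ...
    def write_op(self, tag: str, value: float) -> None: ...

class ModernRuntime:
    """New, more capable execution environment with its own native API."""
    def __init__(self):
        self._points = {}
    def get(self, path: str) -> float:
        return self._points.get(path, 0.0)
    def set(self, path: str, value: float) -> None:
        self._points[path] = value

class LegacyEmulationLayer(LegacyBlockAPI):
    """Adapter that lets unchanged legacy logic run on the modern runtime,
    preserving tag names, displays, and wiring assumptions."""
    def __init__(self, runtime: ModernRuntime, tag_map: dict):
        self.runtime = runtime
        self.tag_map = tag_map           # legacy tag -> modern point path
    def read_pv(self, tag: str) -> float:
        return self.runtime.get(self.tag_map[tag])
    def write_op(self, tag: str, value: float) -> None:
        self.runtime.set(self.tag_map[tag], value)

# Legacy strategy code keeps calling read_pv/write_op exactly as before.
layer = LegacyEmulationLayer(
    ModernRuntime(),
    {"FIC101.PV": "/plant/area1/fic101/pv", "FIC101.OP": "/plant/area1/fic101/op"},
)
layer.write_op("FIC101.OP", 42.0)
print(layer.read_pv("FIC101.PV"))   # 0.0 until the runtime updates the PV
```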

That continuity does more than remove one-time cost. It preserves the behavioural consistency operators rely on under stress, while allowing advanced functions to sit alongside familiar views. It also buffers plants against the accumulation of technical debt that results from deferring upgrades to avoid disruption. An upgrade continuum becomes a core property of the platform rather than a heroic undertaking every twenty years. For leadership teams, this enables a more rational capital plan and reduces the operational risk associated with unfamiliar interfaces and rebuilt logic during critical windows.

From expectation to execution

Executives are rightly ambitious about what learning systems can deliver. The gap between expectation and execution narrows when the work is framed in the grain of plant reality. That grain includes imperfect data, skills variability, and the unforgiving nature of process safety. It also includes an advantage: decades of recorded events and a deep modelling tradition that other AI domains envy.

The practical steps start with data discipline. Too much plant history is not usable because it was never collected to answer the questions digital cognition must ask. Service records that state ‘problem resolved’ without the ‘how’ are a lesson in missed opportunity. Instrument tags that flatline or spike to implausible values should not slip through to model training. Investing in quality, context, and lineage is not a side activity. It is the foundation that lets probabilistic recommendations be trusted and refined.
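
A screening step of that kind can be very simple. The sketch below, using invented engineering limits, flags a tag whose history flatlines or strays outside the physically possible range before it reaches model training.

```python
import statistics

def screen_tag(values, lo, hi, min_std=1e-6):
    """Flag a tag's history before it is used for model training.

    Returns a list of issues: 'flatline' if the signal never moves,
    'implausible' if samples fall outside the physically possible range.
    `lo`/`hi` are the engineering limits for the instrument.
    """
    issues = []
    if len(values) > 1 and statistics.pstdev(values) < min_std:
        issues.append("flatline")
    if any(v < lo or v > hi for v in values):
        issues.append("implausible")
    return issues

# Example: a temperature tag that froze at one value, and one that spiked.
print(screen_tag([250.0] * 500, lo=0.0, hi=400.0))           # ['flatline']
print(screen_tag([250.0, 251.2, 9999.0], lo=0.0, hi=400.0))  # ['implausible']
```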

Training and culture matter just as much. Guidance engines improve fastest when operators rate the usefulness of recommendations, and when those signals feed back into the model update. The habit of checking advice against an online process model, and the confidence that the model reflects plant reality, help prevent blind acceptance. These are teachable practices. They turn decision support into a living system that learns with the organisation rather than at it.
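
One lightweight way to fold operator scoring into the guidance, shown below as an assumption rather than a description of any particular product, is to blend each new rating into a running usefulness score so that a recommendation's ranking drifts with accumulated experience rather than swinging on a single shift's opinion.

```python
def update_usefulness(current: float, operator_rating: float, alpha: float = 0.1) -> float:
    """Blend a new operator rating (0..1) into a recommendation's running
    usefulness score with an exponential moving average."""
    return (1.0 - alpha) * current + alpha * operator_rating

score = 0.50
for rating in (1.0, 1.0, 0.0, 1.0):   # ratings collected across shifts
    score = update_usefulness(score, rating)
print(round(score, 3))
```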

The governance that surrounds cyber and remote operations must be explicit. Clear responsibility boundaries between vendor and operator, documented change control, and transparent observability into agent behaviour and interfaces reduce the risk that new capabilities become new black boxes. The aim is to preserve agency and accountability while raising the floor of performance.

What the next step looks like

For senior teams deciding where to invest, the signal is consistent across domains. The next advantage will come from closing the distance between human cognition and machine control without compromising safety or resilience. Digital cognition offers a way to industrialise expertise, to shrink outcome variance across shifts and sites, and to raise the baseline of performance as experienced staff retire.

It is not a technology that lives in isolation. It depends on models that accurately reflect the plant’s physics. It depends on clean, contextual data and the means to collect it without eroding sovereignty or security. It depends on robust, observable connectivity. It depends on a modernisation path that avoids the dead weight of legacy rewrites. When those pieces are designed together, the productivity discussion stops being abstract. It becomes measurable in faster start-ups, fewer quality excursions, shorter mean time to resolution, and a shrinking gap between the best day and the worst day of the month.

“Autonomy is a destination, but progress arrives in usable increments,” Urso concludes. “We can improve human performance now by pairing deterministic science with probabilistic methods and by instrumenting decisions that used to live only in a notebook or a memory. We can connect plants so that expertise travels faster than problems, and we can defend that connectivity with active, layered security. We can evolve systems without discarding what works. If we focus on those truths, the path from expectation to execution gets shorter and a great deal clearer.”

The path ahead will belong to enterprises that treat cognition as an engineered capability rather than a happy accident of staffing. The technology is ready to help people make better choices at the moments that matter, and the operating model is prepared to reward stability and foresight instead of heroics. The result is not a plant without people. It is a plant where people operate with amplified judgement, supported by systems that learn, remember, and advise with the speed and consistency that modern manufacturing demands.
