Connected Technology Solutions spoke to Bernhard Eschermann, CTO of Process Automation at ABB, to gain his insight into the role that edge computing will play in manufacturing’s future.

Can you tell us the role that edge computing is currently playing in the manufacturing sector?

The interesting thing is that with the advent of cloud computing, a lot of the computing work that has been carried out on site for the last 30 years became ‘edge computing’, or ‘fog computing’, or whatever fancy terminology you want to use for it. So there is certainly an issue in defining exactly what you mean when you talk about edge computing.

Typically, what we have is the usual automation system, called a Critical Control System (CCS) in the industry. You have got the inputs, outputs, controllers, servers – all the touch points that do normal automation. And obviously, that is also on premises.

But now, what you do is extract data on to something that is more easily accessible from the outside. Due to cybersecurity concerns, you do not want to expose too much of what is going on in your automation system to the outside world. And that is what you typically call the edge in an automation system today, basically extracting data from all the processes, equipment, and assets, and making this data available. You might have edges that are simple gateways, just taking data and forwarding it, but you might also have edges that are running applications. And then, if you do not want to, or cannot deal with, the data onsite, you might forward it outside to a digital platform – to some servers in a central place, or into the cloud. And you might also run applications there too.
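The gateway pattern described above – exposing only a deliberately limited subset of control-system data to the outside – can be sketched as follows. This is a minimal, hypothetical illustration, not an ABB API; the class and tag names are invented for the example.

```python
# Minimal sketch of the "edge as gateway" pattern: the gateway reads raw
# samples from the (simulated) automation system, keeps only an allow-listed
# subset of tags, and queues that subset for forwarding to an outside platform.
# Everything else stays inside the plant, addressing the cybersecurity concern.
from typing import Dict, List


class EdgeGateway:
    def __init__(self, allowed_tags: List[str]):
        # Only these tags are made visible outside the automation system.
        self.allowed_tags = set(allowed_tags)
        # Stands in for the link to a digital platform or cloud.
        self.outbound: List[Dict[str, float]] = []

    def ingest(self, raw_sample: Dict[str, float]) -> None:
        # Filter control-system data before anything leaves the plant.
        filtered = {t: v for t, v in raw_sample.items() if t in self.allowed_tags}
        if filtered:
            self.outbound.append(filtered)


gw = EdgeGateway(allowed_tags=["vessel1.temperature", "vessel1.level"])
gw.ingest({"vessel1.temperature": 21.0,
           "vessel1.level": 0.62,
           "controller.setpoint": 55.0})  # the setpoint never leaves the plant
print(gw.outbound)
```

A real gateway would forward `outbound` over a protocol such as OPC UA or MQTT rather than hold it in memory, but the filtering step is the essential point.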

Whether you run these applications yourself on the edge, or somewhere in the cloud or in a data centre, depends very much on what data you need and what the application requirements are.

Can you tell me how you deliver your edge applications?

We have got our own version of this. Our edge has the brand name of ‘Edgenius’ and our system for doing all of the industrial analytics and AI is called ‘Genix’. And then we have got our automation offerings, and other applications on the edge that extend the functionality of the control system. To give you an example of how they work: traditionally, if people walking around a plant wanted to know what was going on in a particular vessel – what the fill level was, or what the temperature was inside – and there were no local gauges, the operator would pick up the phone and call the control room to ask. Nowadays, the edge has a server providing mobile access to all that information, so that people can take their tablet or mobile phone and check whatever information the people in the control room have.

The next level of applications that you might hear about is called IOM applications, or Integrated Operations Management. These are applications such as manufacturing execution systems (MES). They might run on the edge, even though they have access to certain data from the enterprise resource planning system, such as daily production targets and scheduling. The system then creates the schedule and provides the right inputs to the underlying control system to execute it.

So that covers some of the applications that you can run on the edge. But of course, you can also run analytics applications, and AI applications on the edge which give you the benefit of being able to analyse whatever data comes through, for example, from certain assets to find out whether there is any maintenance needed.

I think we have outlined some of the benefits of managing data at the edge – could you reiterate some of those points for me?

You need to look at what is easier to do on the edge and what the benefits are. If you are streaming time series data at millisecond frequency, and you can create value out of it locally, then there is no reason to send loads of data to a remote place, because the cloud is not free. So, if you have millisecond data streams going to the cloud, that becomes expensive over time.

If you have to deal with operational data in real time, that is something the applications on the edge are typically much better suited for. And if you do want to do further analysis in the cloud, you typically send compressed data instead. So, getting from ‘okay, every millisecond, the temperature is still 21 degrees’ to ‘well, the temperature is generally 21 degrees, but we have this and this exception’ means only the summary ends up outside in the cloud. That compression is, of course, also intelligence that you have on the edge.
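The exception-based compression described here can be sketched as a simple deadband filter: the edge only reports the first sample and any sample that deviates from the last reported value by more than a threshold. This is a generic illustration of the idea, not a specific ABB algorithm, and the threshold value is an arbitrary assumption.

```python
# Deadband ("report by exception") compression: instead of streaming
# "still 21 degrees" every millisecond, forward only samples that move
# by more than `deadband` from the last value that was reported.
def compress(samples, deadband=0.5):
    """samples is a list of (timestamp, value); returns the exceptions."""
    reported = []
    last = None
    for t, value in samples:
        if last is None or abs(value - last) > deadband:
            reported.append((t, value))
            last = value
    return reported


# A steady 21-degree signal with one brief excursion to 23 degrees:
stream = [(0, 21.0), (1, 21.0), (2, 21.1), (3, 23.0), (4, 21.0), (5, 21.0)]
print(compress(stream))  # → [(0, 21.0), (3, 23.0), (4, 21.0)]
```

Only three of the six samples leave the edge: the baseline, the excursion, and the return to normal. Production historians use more sophisticated variants (e.g. swinging-door compression), but the cost saving works the same way.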

And the other part is, if you want to directly close a feedback loop – if you want to get the control signal back to the motor directly – then of course it is typically easier to do that within the plant, rather than going through all the infrastructure of sending the data to a data centre somewhere, processing it there, and getting the result back. So if the latency of data is relevant to you, then you typically tend to do things on the edge.

However, if you want to use data from the past ten years, or if you want to analyse data from multiple systems distributed across a country, then you can do that in a remote place anyway.

Does the proximity to the process improve the data analytics capabilities?

Since you might use higher frequency data, which is richer, you might have situations where you can get more out of this data than if you consolidate it into summary information before sending it further away from the process. That is certainly one advantage. The other advantage of proximity, as discussed before, is lower latency: the feedback loop from the process to the edge and back is much shorter. And you might be able to intervene much earlier by doing things on the edge.

How close do you think we are to having an intelligent edge, where decisions can be made and then deployed there?

I think we are already there. The problem we see more often is, if we go back to the very start of our discussion, that not all factories or plants have access to all the data. I guess that is typically more of an issue than having the data analysed and having intelligent applications installed on a computer close to the process.

What is next for the edge and manufacturing? Where do you see it moving forward in the future?

This, I think, is more of a development in that there will be more flexibility in terms of where you put functionality. If you look back, say, 20 years, it was clear that if you wanted to have any impact on the process, you had to place the functionality next to the process. Now there are a lot more options. You might have some things in an embedded controller, you might have some functionality on the edge, and you might have some supporting functionality in a central data centre as well.

If you think about things like 5G, where you will be able to have guarantees on deterministic data transfer over radio connections, then, of course, the option of moving more functionality into a central data centre that is then connected to the cloud through 5G becomes bigger.

There will be more deployment options for where you put your functionality. And one of the important challenges to resolve, I guess, will be coordination: how do you coordinate all these different things that are no longer in one place, but are distributed? Depending on the cycle times, quality of service, latency, and all kinds of other considerations you might have, how do you coordinate all of that so that you have a consistent system that works reliably?
