It is early days for the use of artificial intelligence in asset management but there are already promising signs that it may hold the key to improved performance
Artificial intelligence (AI) in the asset management industry is still in its infancy as firms seek out best practices and use cases for their own strategic applications. Indeed, 53 per cent of firms say they have, or are planning, a limited number of AI initiatives, according to research from Sapient.
For starters, AI can give asset managers the ability to access deeper insights, increase productivity and drive higher revenue at a time when most firms are adapting to rapidly changing customer expectations, sluggish returns, outflows from active to passive and tighter regulations.
Leaning on machine learning
Advanced artificial intelligence systems have a crucial role to play for organisations aiming to thrive in Industry 4.0. This is particularly true in the manufacturing sector, where operating at the highest efficiency at all times is essential. “Collecting and analysing the vast volumes of data required to manage a company’s assets using a traditional human operative approach can result in avoidable costs rapidly accumulating,” says Ian Dowd, CMO at SSG Insight. “Adopting the technological advancements made in AI and machine learning is the key to maintaining a competitive edge.”
At the forefront of AI technology, machine learning uses algorithms and statistical techniques to provide real-time data analytics, with the capability to completely revolutionise a business’s operating standards. Through faster feedback loops, AI can continuously and consistently monitor a company’s asset health and reduce unexpected downtime and time spent on maintenance. “Instead of relying on human inspection of a company’s assets to ensure that they’re operating to the optimum levels, AI technology automatically turns sensory readings into actionable insight to analyse and resolve defects immediately and identify possible causes of failure,” Dowd adds. “Once the root cause of the problem has been identified, the most suitable staff member can be assigned to assess the situation before costly delays accumulate.
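The condition monitoring Dowd describes can be illustrated with a very small sketch: readings from a sensor are compared against a recent baseline and flagged when they deviate sharply. The vibration figures, window size and threshold below are invented for illustration and are not taken from SSG Insight's product.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=40, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > z_threshold * sigma:
            alerts.append((i, readings[i]))  # (sample index, suspicious reading)
    return alerts

# Hypothetical vibration readings (mm/s) from a motor-mounted sensor.
vibration = [2.1, 2.0, 2.2, 2.1] * 20 + [6.5]  # sudden spike at the end
print(detect_anomalies(vibration))  # -> [(80, 6.5)]
```

In practice the alert would feed a maintenance workflow, so the flagged reading can be routed to the most suitable engineer before the fault escalates.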
“Another major benefit for companies that adopt AI and machine learning for their asset management is that these intelligent systems now exceed human capabilities. Adopting the technology also helps to mitigate the impact of high employee turnover, as key knowledge is retained in the business and time is saved on training new employees.”
Introducing AI and machine learning technologies also helps those in the manufacturing industry to flourish in Industry 4.0 by enabling employees to make full use of their time. “Due to extensive amounts of data being analysed in real time, companies now have access to an increased amount of valuable insights,” Dowd concludes. “Because these are displayed through clear summary reports that provide problem-solving advice, employees can focus their time on strategic decision-making that will result in a high return on investment for the business, rather than collating and analysing the information manually.”
The problem with data
When it comes to making use of the copious amounts of data generated for asset management, one of the common challenges is the quality of the data itself. “There are two sides to the coin,” explains Jeff Erhardt of GE Digital. “I would start by saying that the data in the customer space is not necessarily cleaner. We are a technology provider into companies as diverse as healthcare, payroll and gaming. Those companies are varied in the types of inquiries that they handle.
“In industry the data is both noisier and dirtier but, more importantly, if you think about what a lot of industrial applications are trying to do, they are designed to predict and prevent failures in machines and in equipment. For many of the machines in the industries that we serve, such as healthcare, aviation and oil and gas, serious problems are rare; you don’t hear about plane crashes every day.
“Not only do you have noisy data, because these pieces of equipment are out in the field and subjected to environmental conditions, but you also have very rare events that you’re trying to predict.
“The upshot of that is, from an AI or a machine learning standpoint, it is not possible to simply say, ‘I’m going to pull an off-the-shelf tool from Microsoft or Google and apply that tool to this problem and expect to get a good result where people’s lives are at stake’. The flip side of this coin is that the problems are very hard and they’re very high-stakes issues that take a lot of care to construct properly. This leads into our approach of embedding AI and machine learning within our applications to make the end results better, as opposed to trying to allow and teach other people to do it themselves on a one-off basis.”
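Erhardt's point about rare events is easy to make concrete. When failures are this scarce, a naive off-the-shelf model can look impressive on a standard accuracy metric while catching nothing; the figures below are invented purely to illustrate why.

```python
# Illustrative only: with rare failures, raw accuracy tells you almost nothing.
total_windows = 100_000          # hypothetical monitoring windows
failures = 100                   # 0.1 per cent of them contain a real failure

# A naive "model" that always predicts "healthy" never catches a failure...
accuracy = (total_windows - failures) / total_windows
print(f"Accuracy of always predicting 'healthy': {accuracy:.1%}")  # 99.9%
# ...so for rare, high-cost events, recall and precision matter, not accuracy.
```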
Supervised or unsupervised
Erhardt explains that machine learning can be broken down into two categories: one called unsupervised, the other supervised. “The term does not mean that a human is supervising them; rather, it means: do you have examples of known failures or not?” Erhardt explains. “Take things such as anomaly detection: is it possible to do those in a so-called unsupervised way?
“That is where you’re looking at a signal coming off a piece of equipment and asking, ‘Is there something in this signal, in this time period, that is different from what I’ve seen in the past?’” Erhardt adds. “That’s interesting, but the problem is that it is just not good enough to know that something is different; you also need to know that it is a problem and what you should do about it, because just raising false alarms is not good enough. That is why our lens on the world is really focused on the supervised space, where we have examples of where this has happened in the past, which we can identify patterns from and use to predict when it is likely to happen again.”
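As a rough illustration of the distinction Erhardt draws, the sketch below contrasts an unsupervised anomaly detector, which only says that something looks different, with a supervised classifier trained on labelled failure examples, which can name the known failure mode. The data, model choices and thresholds are synthetic and illustrative, not GE Digital's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 3))  # healthy sensor vectors
faulty = rng.normal(loc=60.0, scale=5.0, size=(10, 3))    # rare failure signatures

# Unsupervised: no labels, just "is this different from what I've seen before?"
detector = IsolationForest(random_state=0).fit(normal)
print(detector.predict(faulty))          # -1 marks anomalies, but says nothing about why

# Supervised: labelled failure examples let the model name the failure mode.
X = np.vstack([normal, faulty])
y = np.array([0] * len(normal) + [1] * len(faulty))        # 1 = known failure
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict_proba(faulty[:3]))     # probability that the known failure is occurring
```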
The problem is that in very low failure-rate regimes, if you look at one piece of equipment at a time, or at one customer at a time, it may be impossible to predict with a high enough signal-to-noise ratio. “That is one of the key areas or key benefits that we have,” Erhardt continues. “As part of this GE ecosystem, part of this giant conglomerate, with our customer base and our history, we have the ability to look across many, many customers over time. We can understand what their history is and pick up many failures, even if each occurred at only one customer or on one engine at a time. When you start to look at it in aggregate, you can get better predictability and higher signal-to-noise than anybody else can looking at one at a time.”
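A back-of-the-envelope calculation shows why pooling a fleet matters; the failure rates, fleet sizes and history below are hypothetical.

```python
# Back-of-the-envelope: why fleet-wide history beats single-asset history.
failures_per_asset_per_year = 0.2   # hypothetical rare-failure rate
years_of_history = 5
assets_per_customer = 4
customers_in_fleet = 500

one_customer = failures_per_asset_per_year * years_of_history * assets_per_customer
whole_fleet = one_customer * customers_in_fleet
print(f"Labelled failures from one customer: {one_customer:.0f}")   # ~4: too few to learn from
print(f"Labelled failures across the fleet:  {whole_fleet:.0f}")    # ~2,000: enough for a model
```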
Still early days
According to Erhardt, predictive analytics for assets is one of those technologies that is still in its infancy, much like big data, AI and machine learning. “They all go through the same cycles,” he says. “They start out as something real, they get hyped, everybody talks about it, the words get confused, nobody knows what it means and then everybody’s disappointed. It’s like the media version of the Gartner Hype Cycle and the trough of disillusionment.
“First and foremost, what I would say is that the time is extremely early, and very few people, especially in our space, who talk about doing machine learning or AI are actually doing it. It is an incredibly hard technology to structure problems for, it’s incredibly hard to implement and it’s incredibly hard to maintain.
“The worst thing about machine learning is that it is easy to get any problem 80 per cent correct. What I mean by that is that the last 20 per cent is hard, and it is critical, especially when you’re dealing with multi-million-dollar decisions. That is unlike image classification, where you are classifying dog versus cat or trying to recognise a person, and the cost of being wrong is not necessarily that high. If you think about the use cases in this sector, the cost of being wrong in either direction is catastrophic.
“If you have a false negative, if you miss a potential weakness, there is a potential that it could blow up. That happened here in California near San Francisco ten years ago. PG&E (Pacific Gas and Electric) had a natural gas pipeline that exploded because they did not inspect it properly. People died and there are still commercials on the television with PG&E apologising and talking about the corrective action they’re taking.
“We have a lot of work to do on the basics, but we have to do it in a way that is thoughtful about how we bring aggregated good to our customer base, in a way that is sustainable and recognises the high risk and cost of what we’re doing.”
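Erhardt's point about the asymmetric cost of being wrong can be made concrete with a simple expected-cost comparison. The probabilities and cost figures below are invented for illustration and are not drawn from GE or PG&E data.

```python
# Illustrative expected-cost comparison for acting on (or ignoring) an alert.
def expected_costs(p_failure, cost_missed_failure, cost_unneeded_inspection):
    """Return (cost of ignoring the alert, cost of acting on it)."""
    cost_if_ignored = p_failure * cost_missed_failure
    cost_if_acted = (1 - p_failure) * cost_unneeded_inspection
    return cost_if_ignored, cost_if_acted

# Hypothetical figures: a missed pipeline defect vs an unnecessary inspection.
ignore, act = expected_costs(p_failure=0.02,
                             cost_missed_failure=50_000_000,
                             cost_unneeded_inspection=20_000)
print(f"Expected cost of ignoring: {ignore:,.0f}")   # 1,000,000
print(f"Expected cost of acting:   {act:,.0f}")      # 19,600
# Even at a 2 per cent failure probability, inspecting is by far the cheaper call.
```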
Gaining operational insight
AI and machine learning play an important role in gaining valuable operational insights, but according to Jane Ren, CEO and founder of Atomiton, data alone is not enough. “Applying AI to understand the relevance and context of individual data streams in combination (the impact of humidity or changes in weather on steam generation and heating processes), combined with AI’s learning capabilities – synthesising the information into knowledge (how temperature impacts yield or throughput) – and used for forecasting (optimal scheduling), provides predictive insights that can be used to orchestrate and automate operations,” she explains.
“AI and machine learning are good at deriving insights and predictions from datasets that have been structured and contextualised for a specifically defined problem. But they need prerequisites to add value: define the specific domain problem and integrate and process (multiple sources of) data to add structure and context, based on the problem.”
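A minimal sketch of what Ren describes is to combine a process stream with ambient context, learn the relationship, and use it to forecast. The variables, readings and model choice below are hypothetical and are not Atomiton's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hourly readings: boiler temperature (deg C), ambient humidity (%), steam output (t/h)
temperature = np.array([180, 182, 181, 185, 184, 183, 186, 188])
humidity    = np.array([ 40,  45,  55,  60,  52,  48,  65,  70])
steam_out   = np.array([9.8, 9.7, 9.3, 9.1, 9.4, 9.6, 8.9, 8.7])

# Synthesise the two streams into knowledge: how temperature and humidity
# jointly drive steam output.
X = np.column_stack([temperature, humidity])
model = LinearRegression().fit(X, steam_out)

# Forecast: expected output for tomorrow's planned setpoint and weather forecast,
# which can then feed an optimal-scheduling decision.
print(model.predict([[184, 75]]))
```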
There are two views of AI for asset management. Ren explains that the first is the inside view, where AI is used exclusively to analyse the machine’s physical health and predict potential failures or maintenance needs. By monitoring (for example, the temperature and vibration of motors), analysing and learning, models can be created that indicate when an asset needs maintenance (does it need calibration?) or when it might need to be replaced.
“Then there is the operational view, where the location, status and functions of the asset are closely linked to the operational activities,” she adds. For the operational view, parameters such as location, utilisation and productivity are key. “There’s good progress and adoption in both areas,” Ren continues. “However, we believe the second aspect will bring greater long-term productivity gains from asset management.”
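To illustrate the operational view, the sketch below derives a simple utilisation figure from an asset's status events. The event log, timestamps and shift length are hypothetical.

```python
from datetime import datetime

# Hypothetical status events for one asset: (status, timestamp)
events = [
    ("running", datetime(2018, 5, 1, 6, 0)),
    ("idle",    datetime(2018, 5, 1, 10, 30)),
    ("running", datetime(2018, 5, 1, 12, 0)),
    ("idle",    datetime(2018, 5, 1, 18, 0)),
]

shift_hours = 12.0
running_hours = 0.0
for (status, start), (_, end) in zip(events, events[1:]):
    if status == "running":
        running_hours += (end - start).total_seconds() / 3600

print(f"Utilisation this shift: {running_hours / shift_hours:.0%}")  # roughly 88%
```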
She concludes that organisations are seeing tangible ROI in deployments of industrial IoT software, such as Atomiton’s IoT operating stack (aka A-Stack). “For instance, one of our tank terminal customers has been able to realise a 25 per cent reduction in peak energy use and ten per cent reduction in overall energy use, leveraging the intelligence and predictive models from our solution to optimise their operations,” she says. “There’s a great opportunity for many other organisations to benefit from connecting their diverse operational systems, combining real-time data into digital operational models that deliver relevant business insights, and predictive analysis to optimise operations and automate actions.”
Cybersecurity threat to legacy systems
Ransomware cybersecurity attacks present a very real threat to manufacturers, as evidenced by the recent high-profile WannaCry epidemic, which impacted businesses in more than 150 countries. Yet many UK manufacturers are still running legacy systems which are extremely vulnerable to attack.
With regulations requiring organisations to protect their data, and with the UK’s manufacturing industry more competitive than ever, the importance of having a robust disaster resilience provision in place has never been more critical.
The UK manufacturing industry is fiercely competitive, with manufacturers having to fight harder than ever to reduce costs, increase profitability and establish a competitive edge.
Tony Mannion, sales development manager at SolutionsPT, explains that one of the most effective ways of doing this is by ensuring they are using advanced industrial systems. “However, despite this, a significant number of manufacturers are still operating extremely insecure legacy control systems which are liable to leave their systems exposed to disruptive cyberattacks,” he says.
Manufacturing is now the industry most frequently targeted by cyberattackers – there was a 24 per cent increase in attacks globally from the first to the second quarter of 2017. The risk to manufacturers has never been higher. So how can they, and particularly those still running legacy systems, ensure their operations are safe from the threat of a ransomware attack? And what can they do to negate the impact if one does take place?
“With high-profile ransomware attacks such as the WannaCry and Petya epidemics, which affected critical infrastructure such as airports, banks and government departments across the world, and with malware’s ability to spread quickly and force unscheduled downtime, manufacturers can no longer afford to ignore the threat it poses,” Mannion continues. “Indeed, if unplanned downtime does take place, manufacturers risk reductions in both productivity and profitability, as well as a loss of reputation and, potentially, a loss of clients.”
Many ransomware attacks are not targeted, meaning all systems, including unpatched systems, Windows systems and legacy systems, are vulnerable to infection. And once ransomware has infected a network that suffers from a lack of visibility, knowing what the malware is targeting and what damage it is doing is almost impossible.
Perhaps the biggest threat to manufacturers is the loss of data. This is a huge issue. As well as being enormously disruptive to operations, the loss of key data often carries with it legal implications, as some industries are required to provide information to government agencies such as the Environment Agency, and failure to do so will result in substantial fines. Likewise, loss of data can be catastrophic for manufacturers in regulated industries such as pharmaceutical, as they cannot sell their products into certain markets unless they have a complete set of production data.
“To protect against cyberattacks, manufacturers need to develop an architecture that is inherently secure by design and have a plan in place to protect against multiple types of cyberattack,” Mannion concludes. “This is a cultural issue, and the biggest victory a company can achieve against cybercriminals is a shift in mindset around the OT environment.
“It may be impossible to prevent an attack from occurring in the first place, but a disaster resilience provision will ensure your operations can continue to function in the face of it. Disaster resilience provision should therefore be the cornerstone of every manufacturer’s cybersecurity strategy.”
[Box Out]
MAKING BETTER USE OF DATA
Jane Ren, CEO and founder of Atomiton, highlights three areas where organisations have opportunities to make better use of their data:
Timeliness – the ability to extract and utilise data in real time, so it can be used to inform current and future operations, rather than pulling data from individual devices, sending it to a cloud and waiting a day, a week or even a month to review and analyse it, by which point any insight can no longer impact operations in real time. Example: You have consumable resources such as fuel used as part of a fabrication process. You need to know in real time (or even ahead of time) whether you have enough fuel for the day’s operations so that you don’t inadvertently create a project interruption or delay. With edge computing, you can gain the real-time data necessary to support these kinds of operational needs.
Context – bringing intelligence to the data within operations by leveraging additional context from other data (data from other systems). Example: In a chemical processing plant where steam-generated energy is an important part of operations, you may be monitoring temperature for heating. However, weather conditions such as humidity may impact steam generation, so combining and analysing multiple sources of this real-time data can bring intelligence to how you manage operations. Another example: you’re running multiple lines simultaneously, which increases your energy usage, which in turn impacts your peak energy usage and may then set higher demand rates for the future. Having the context across the multiple lines allows you to make intelligent operational decisions.
Future – predictive analysis. Software algorithms built to understand the multiple factors and variables that are part of operations can be used for predictive analysis. Example: you’re trying to optimise the efficiency of a downstream terminal loading operation, so you increase loading capability. There are numerous variables that make operations complex – which products in which pumps, truck arrivals, queueing, loading times, loading bay assignments, schedule data, demand forecasting and so on. Building software models that put these variables, assets and processes into context and analyse the information enables loading-time prediction, queue monitoring and prediction, and truck-arrival prediction.
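As a rough sketch of that last point, the snippet below trains a simple regressor to predict loading times from a handful of operational variables, so queues and bay assignments can be planned ahead. The feature set, historical records and model choice are hypothetical, not Atomiton's software.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Historical loading records: [volume (m3), bays open, trucks queuing] -> minutes taken
X_hist = [[30, 2, 1], [45, 2, 4], [30, 3, 0], [60, 1, 6], [45, 3, 2]]
y_hist = [22, 48, 18, 75, 35]

model = GradientBoostingRegressor(random_state=0).fit(X_hist, y_hist)

# Predict the loading time for an inbound truck under today's conditions,
# then feed the estimate into queue planning and bay assignment.
print(model.predict([[45, 2, 3]]))
```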