At the core of any net-zero strategy is understanding how you are performing and what actions you can take to improve that performance across the value chain. Jane Ren, CEO of Atomiton, explains why the ability to access and analyse data in a holistic, company-wide manner is at the heart of any net-zero strategy.
For many organisations, the first port of call is to look at how artificial intelligence (AI) can help. We would like to believe that AI applies to any complex problem, but that is not the case: AI should only be used where it is the best tool for the task. Many of the gaps in today’s data surrounding the net-zero journey stem from data silos and from data architecture and data management practices that will not benefit from applying AI from the outset. Nevertheless, there are areas where AI can help, such as automating data flows, auditing data, tracking data quality, spotting errors or omissions, and automating data transformation.
Three roles for data in a net-zero strategy
Carbon Emission Inventory. Firstly, to begin developing any strategy, you must establish a baseline to understand where most of the impact lies. The impact may come from many sources, such as electricity, transportation, manufacturing processes or your supply chain. This baseline is essential and requires a broad range of data to understand how the various business activities generate emissions. Unfortunately, this data traditionally resides in silos that are often challenging to access.
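The baseline described above is, in essence, activity data multiplied by emission factors and aggregated by category. A minimal sketch, assuming placeholder factor values and invented activity names (none of these figures are authoritative, and this is not Atomiton's implementation):

```python
# Illustrative baseline carbon inventory: activity quantities multiplied
# by emission factors, then aggregated by activity type.
# The factor values below are assumptions for the sketch only.

EMISSION_FACTORS = {          # kg CO2e per unit of activity (assumed)
    "electricity_kwh": 0.4,   # per kWh of grid electricity
    "road_freight_km": 0.9,   # per km of truck transport
    "natural_gas_m3": 2.0,    # per cubic metre burned
}

def baseline_inventory(activities):
    """Sum emissions per activity type from (type, quantity) records."""
    totals = {}
    for activity_type, quantity in activities:
        factor = EMISSION_FACTORS[activity_type]
        totals[activity_type] = totals.get(activity_type, 0.0) + quantity * factor
    return totals

records = [
    ("electricity_kwh", 120_000),
    ("road_freight_km", 50_000),
    ("electricity_kwh", 30_000),
    ("natural_gas_m3", 8_000),
]
print(baseline_inventory(records))
# electricity: 150,000 kWh * 0.4 = 60,000 kg CO2e, and so on per category
```

Even this toy version shows why silos hurt: each entry in `records` typically lives in a different business system.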
Carbon Performance Metrics. Secondly, a net-zero strategy requires an organisational transformation. It is not the responsibility of one person; it requires mobilising different tiers of the organisation to take on responsibilities and track their KPIs and metrics so that it becomes an operationally oriented initiative. For example, many external groups and standards organisations examine whether corporations are taking tangible actions to achieve net zero rather than just talking about it; to do that, they need KPIs that can be measured across the corporation to validate what has been completed.
Carbon Impact Analytics. The third piece of the data jigsaw is the role that data plays in understanding the costs, benefits, and feasibility of different carbon reduction strategies. An example is evaluating a switch from fuel to electricity as the source of heat used in production. Although electrification allows the adoption of greener sources, bringing down the carbon footprint per unit of energy used, such a transition also carries capital costs and operational constraints. It is not feasible in all cases, especially in heat-intensive processes such as metal production. Several sources of data are needed for these analyses. What is the demand? What is the peak load required, and how steep is the demand curve (how quickly do you need to ramp energy intensity up or down)? What are the capital and operational cost implications? How would that affect the organisation financially and operationally? What is the carbon implication?
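The fuel-versus-electricity evaluation above can be sketched as back-of-envelope arithmetic. All figures here (demand, emission factors, prices, capital cost) are assumed for illustration, not real benchmarks:

```python
# Hedged comparison of two heat sources: annual emissions and annual cost,
# with an annualised capital charge for the electrification option.
# Every number below is an assumption for the sketch.

def annual_profile(demand_mwh, factor_kg_per_mwh, price_per_mwh, annualised_capex=0.0):
    """Return (tonnes CO2e per year, total annual cost) for one heat source."""
    emissions_t = demand_mwh * factor_kg_per_mwh / 1000.0
    cost = demand_mwh * price_per_mwh + annualised_capex
    return emissions_t, cost

DEMAND_MWH = 10_000  # assumed annual process-heat demand

fuel = annual_profile(DEMAND_MWH, factor_kg_per_mwh=250.0, price_per_mwh=40.0)
electric = annual_profile(DEMAND_MWH, factor_kg_per_mwh=120.0,
                          price_per_mwh=60.0, annualised_capex=150_000.0)

saved_t = fuel[0] - electric[0]        # tonnes CO2e avoided per year
extra_cost = electric[1] - fuel[1]     # additional annual spend
print(f"abatement cost: {extra_cost / saved_t:.0f} per tonne CO2e avoided")
```

The resulting cost per tonne avoided is exactly the kind of figure that makes strategies comparable, but it depends on demand-curve, cost and carbon data from several systems.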
Similarly, if you consider changing your product packaging process and materials to be more circular, it may impact your suppliers, production sites, and distributors. You need data to understand the impact across the entire value chain before you can make an informed decision. To evaluate the costs and benefits of such strategies objectively, some organisations even put an internal carbon price in place to aid those analyses.
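An internal carbon price works by turning emissions into a shadow cost so that options can be compared on one number. A minimal sketch, with an assumed price and invented packaging figures:

```python
# Sketch of an internal carbon price applied to two strategies.
# The price, costs, and emissions below are illustrative assumptions.

CARBON_PRICE = 80.0  # internal price per tonne CO2e (assumed)

def carbon_adjusted_cost(annual_cost, annual_emissions_t, carbon_price=CARBON_PRICE):
    """Operating cost plus a shadow cost for annual emissions."""
    return annual_cost + annual_emissions_t * carbon_price

current_packaging = carbon_adjusted_cost(500_000, 1_200)   # -> 596000.0
circular_packaging = carbon_adjusted_cost(560_000, 400)    # -> 592000.0

# The circular option costs more to run but wins once carbon is priced in.
print(circular_packaging < current_packaging)  # True
```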
Facing the data challenges
The biggest challenge in using data is its inaccessibility: much of the data in an organisation today sits in silos. To calculate your baseline carbon emissions, you may have to account for factors such as employee business travel, and that data may reside in your expense management system. You will also need data on the supply chain, which lives in the enterprise resource planning (ERP) system. These are separate data silos that are hard to access, and the information in these systems is not prepared in the format needed to calculate emissions. For example, it may tell you how much you have spent on fuel for the business fleet but will not track the quantity of diesel each vehicle uses because, traditionally, that was not important to you. Yet that is precisely what you need to calculate emissions.
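The fleet-fuel example above can be made concrete. Quantity-based calculation is exact; when only spend is recorded, the quantity must be back-estimated from an assumed price, which adds uncertainty. The emission factor and price here are assumptions for the sketch:

```python
# Illustrates the gap described above: expense systems record fuel *spend*,
# but emissions are calculated from fuel *quantity*.
# The diesel factor and price below are assumed, not official figures.

DIESEL_KG_CO2E_PER_LITRE = 2.7   # assumed emission factor

def emissions_from_quantity(litres):
    """Exact calculation when litres per vehicle are captured at source."""
    return litres * DIESEL_KG_CO2E_PER_LITRE

def estimate_litres_from_spend(spend, assumed_price_per_litre):
    """Fallback when only spend is recorded: accuracy hinges on the price."""
    return spend / assumed_price_per_litre

print(emissions_from_quantity(1_000))   # 2700.0 kg CO2e, exact
# From spend alone, the answer shifts with every change in the assumed price:
print(emissions_from_quantity(estimate_litres_from_spend(1_500.0, 1.5)))
```

This is why capturing quantity at source, rather than deriving it later, matters for audit-grade emission figures.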
Data inaccessibility is often compounded by a lack of good practice in ensuring quality, traceability, and auditability. The data requires several transformation steps to extract the carbon emission information, and those transformations are not traceable if they are conducted as a series of one-off processes instead of on a consistent tool and platform. This problem is becoming more acute as organisations face growing scrutiny over their emission reduction claims.
Another emerging problem area is the shareability of data across the value chain and the lack of quality assurance around it. Scope 3 assessment requires companies to share their emission data with customers or suppliers. Because one company’s Scope 1 and 2 emissions are another’s Scope 3, this is rapidly becoming a complex question that will have to be solved over time with a combination of technology and organisational changes.
How to break down data silos
The solution we build into our platform for breaking down these silos is a set of canonical models that describe the standard data models and data structures required. A canonical data model (CDM) aims to present data entities and relationships in the simplest possible form so that processes can be integrated across various systems and databases. A CDM is also known as a standard data model because the aim is a common language for managing data.
When you establish models, you can develop a standard transformation from the source data to the target data you need. Everything becomes more consistent and less manual over time. Establishing these models will also set the expectation for different parts of the organisation that they will need to start collecting this kind of data.
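The canonical-model idea above can be sketched in a few lines: each source system keeps its own record shape, and a small per-source mapping produces one standard record type. The field names and source layouts here are invented for illustration and are not Atomiton's actual models:

```python
# Minimal sketch of a canonical data model (CDM) with standard
# per-source transformations. All field names are assumptions.

from dataclasses import dataclass

@dataclass
class EmissionActivity:          # the canonical ("standard") data model
    source_system: str
    activity_type: str
    quantity: float
    unit: str

def from_expense_row(row):
    """Map an expense-system record into the canonical model."""
    return EmissionActivity("expenses", "business_travel",
                            float(row["distance"]), row["distance_unit"])

def from_erp_row(row):
    """Map an ERP purchase record into the same canonical model."""
    return EmissionActivity("erp", row["material"],
                            float(row["qty"]), row["uom"])

a = from_expense_row({"distance": "420", "distance_unit": "km"})
b = from_erp_row({"material": "diesel", "qty": 300, "uom": "litre"})
print(a.quantity, b.activity_type)
```

Because every mapping targets the same type, downstream emission calculations are written once against the canonical form rather than once per source system.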
The tools for drilling down and visualising the data
Data within an enterprise is multi-dimensional, so when you drill down, you would like to see the emissions from different geographical parts of the organisation. But that is not enough. You may also want to view emissions by product lines, which may cross geographies or even look at supply chain emissions.
These dimensions of data cut across each other, so it is essential that when you start collecting data, the relevant architectures are built in so you can look at it from different perspectives and draw the conclusions you need rather than being limited to a single picture. These insights come from both physical and logical ways of visualising the data; the physical distribution across different locations or facilities is one essential part.
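The multi-dimensional view described above amounts to aggregating the same emission records along different dimensions. A small sketch, with assumed record fields:

```python
# Sketch of slicing one set of emission records along different
# dimensions (region vs product line). Field names are assumptions.

from collections import defaultdict

records = [
    {"region": "EU", "product": "A", "kg_co2e": 100.0},
    {"region": "EU", "product": "B", "kg_co2e": 250.0},
    {"region": "US", "product": "A", "kg_co2e": 180.0},
]

def rollup(rows, dimension):
    """Aggregate emissions along one dimension of the data."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row["kg_co2e"]
    return dict(totals)

print(rollup(records, "region"))   # {'EU': 350.0, 'US': 180.0}
print(rollup(records, "product"))  # {'A': 280.0, 'B': 250.0}
```

The point is architectural: because each record carries every dimension, any cut of the data is available later, which is exactly what a single pre-aggregated picture loses.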
One tool that Atomiton supplies is a logical tracker where you can click and drill down into the data. It is an interactive visualisation that delivers an increasingly granular view as you dig into the lower data tiers. We can also provide comparative and trend analysis across time and across an organisation. With the time trend, you can spot spikes and dig down to find the cause. All these visualisations help operators understand the dynamics of a complex picture.
Any journey towards net zero will be challenging. It requires a business transformation, including in how you collect, transform, store, and interrogate data. However, once the building blocks of a well-defined data architecture are in place, the business insights derived from it can smooth the way towards meeting sustainability goals.