Digital transformation and IT modernisation initiatives deliver innovations that create a competitive edge and drive business growth. But they have also created increasingly complex environments that must be managed by teams strapped for time and resources.

Research has shown that the problem with getting people to use the right technology at work is not always a lack of the right tools and software. More often it is the overwhelming number of available tools and technologies, which employees never fully master. Instead, they are used in an ad hoc capacity and never reach their full potential.

“IT organisations are good at taking on new tools but really bad at retiring older ones, resulting in a mass of tools, some of which cannot do the job as well as their newer counterparts,” says David Cumberworth, MD EMEA at Virtana.

But how do you know which tools are critical to keep, where there is overlap, and what is no longer needed? Part of the promise of AIOps is to rationalise tools so you end up with a single-pane-of-glass view of the IT environment. This, however, is unrealistic: most analysts agree that you need a number of different tools to manage everything. “The challenge, therefore, is to reduce the number of legacy tools and replace them with a platform that does the collective work better,” Cumberworth adds. “The final tool selection should operate together in a fully integrated manner.”

Managing layers

The IT infrastructure is made up of servers and their related VMs and hypervisors, the network and related switches, and a storage layer (SAN or NAS). “A good starting point is to evaluate what you are using to manage these layers,” Cumberworth continues. “Then look at the infrastructure from an application point of view – do the legacy tools give you an application view or do they just show their particular silo? You need a view of how the applications using the infrastructure are performing so you can create a baseline. Once established, you can then look at pinch points and capacities to optimise the system. This new application-centric approach also gives you valuable insight you can share with the business – after all, they are only interested in how the applications are performing and not what technology they are running on.

“The next stage is to look at the applications themselves and the customer experience they provide. For this you will need an Application Performance Monitoring (APM) tool that shows the end-user experience, the code, and all IT components outside the data centre.”

A good example of this is AppDynamics from Cisco. Application Performance Monitoring tools like AppDynamics monitor business transactions as they move through a custom application written in Java or .NET and establish dynamic baselines. AppDynamics can track every line of code and initiate deep diagnostics if performance wavers. It can also monitor servers (CPU and disk utilisation, memory consumption) and databases (performance metrics such as resource consumption, database objects and schema statistics). Using APM, business owners can focus on revenue and conversion rates rather than on application performance issues alone.
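The dynamic-baselining idea can be sketched in a few lines. The example below is illustrative Python, not AppDynamics code; the class name, window size and threshold are assumptions. It keeps a rolling window of transaction response times and flags any sample that strays several standard deviations from the recent mean:

```python
from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    """Rolling baseline of a metric; flags samples that deviate more
    than `threshold` standard deviations from the recent mean."""

    def __init__(self, window=50, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.samples) >= 10:  # need enough history to baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)  # the baseline adapts over time
        return anomalous

baseline = DynamicBaseline()
for t in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101]:
    baseline.observe(t)           # normal response times (ms) build the baseline
print(baseline.observe(450))      # a 450 ms spike breaks the baseline -> True
```

Because the window slides, the baseline adapts to gradual shifts in normal behaviour while still catching sudden deviations, which is the essence of baselining rather than fixed thresholds.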

“However, if you have 1,000 applications you might license and instrument APM to monitor 10 per cent of them – only the business-critical applications,” Cumberworth explains. “For budgetary reasons, you would not have APM instrumented for the remaining 90 per cent, which may be tier-two and tier-three applications and may include commercial off-the-shelf software such as backup or authentication, where code-level visibility is of little value. While APM monitors applications, virtual servers and databases, it simply cannot see the underlying SAN or storage infrastructure. Yet this shared SAN infrastructure could be used by a noisy-neighbour application that adversely impacts the SLA of a tier 0/1 application.”

Platform and infrastructure 

An infrastructure monitoring platform monitors the entire stack – from the host or VM down to the HBA port, SAN fabric and ports on your networked storage array. The platform should be able to connect to an AppDynamics controller and import known applications. Since the platform has already discovered the hosts and underlying storage infrastructure, after importing applications from the controller you can see each application and its related infrastructure in the platform’s GUI.
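Conceptually, the import step is a join: the applications and their hosts come from the APM side, while the host-to-storage topology is already known to the platform. A minimal Python sketch, with all names and the data model invented for illustration:

```python
# Applications imported from the APM controller (illustrative names):
apm_applications = {
    "order-service": ["host-a", "host-b"],
    "billing":       ["host-c"],
}

# Topology already discovered by the infrastructure platform:
host_to_storage = {
    "host-a": {"hba": "hba-1", "fabric": "fab-A", "array": "array-1"},
    "host-b": {"hba": "hba-2", "fabric": "fab-A", "array": "array-1"},
    "host-c": {"hba": "hba-3", "fabric": "fab-B", "array": "array-2"},
}

def map_app_to_infrastructure(apps, topology):
    """Join imported applications onto discovered storage topology so each
    application can be viewed end to end, from host down to array."""
    return {app: {h: topology.get(h, {}) for h in hosts}
            for app, hosts in apps.items()}

view = map_app_to_infrastructure(apm_applications, host_to_storage)
print(view["order-service"]["host-a"]["array"])  # -> array-1
```

Once the join exists, a latency problem reported against an application can be traced straight down to the fabric and array it actually depends on.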

“You now have application and infrastructure views, with an integration interface so analytics can be viewed holistically rather than per platform,” Cumberworth says. “This enables you to report to the business how their applications are running now and how they have performed since the last review, transforming IT from an overhead into a source of competitive advantage and business value. The relationship between IT and the business becomes stronger. IT can show how the business-critical applications are performing and spot potential problems in real time before they affect end users. IT can also demonstrate its value and prove that its decisions on where applications are hosted, and their related costs, are sound.”

Streamlining tools and software – retiring older ones and replacing them with a platform that does the collective work better – ensures systems are used to their full potential rather than in an ad hoc capacity. One of the most effective ways to realise these practices is to deploy a monitoring and management platform. Its technology gives innovative organisations the clarity they need to take control of their infrastructure, transform their cloud operations, and deliver a superior brand performance.
