The path to digitalisation, however rewarding the goal, has proven a difficult one for many enterprises. One thing is certain, however: visibility into application performance is vital not only to an effective technology strategy but to business success. The case study that follows outlines a series of events experienced by many companies and sets down some guideposts to help you ensure that you know what kind of digital service your customers are getting, that you can respond rapidly to outages and brownouts, and that you can, as a result, minimise the impact on revenue and brand.
A European energy concern had deployed a second-generation application performance monitoring (APM) solution in 2015, having chosen a vendor that, at the time, was a Gartner® Magic Quadrant™ leader. Initially, it was a good fit. The number of corporate applications written in Java according to the three-tier architectural standards of the day was growing, and it was anticipated that the greater part of the application portfolio would eventually be Java-based. Another major requirement was the ability to rapidly map a single end-to-end business transaction to the succession of underlying software component state changes, and the vendor in question featured that very functionality as a differentiator.
In the first years after the initial APM deployment, things went well, but in that same period ideas about best-practice application architectures and the application development process itself underwent a revolution in response to digitalisation-driven changes in the energy market. Applications needed to be built in such a way that functionality could be added, modified, or removed frequently and almost instantly, with only minimal impact on the surrounding environment. To accommodate this demand, architectures became more modular and components more ephemeral, while the application development process sped up to support a continuous stream of changes flowing into production. A side effect of this revolution was that application developers, heretofore unconcerned with what happened to their code once it entered production, now had to take responsibility for it, at least in the early days after its initial release.
With the change in application architecture and what turned out to be an order-of-magnitude acceleration in the rate at which changes were delivered, the deployed APM technology began to fail at its mission. As a second-generation platform, it relied heavily on sampling, pre-defined application topology models, and byte-code-instrumentation-based deep-dive analysis of behavioural anomalies. Unfortunately, sampling was too coarse to capture all of the important change-driven events in the environment; the pre-defined models were out of date almost immediately; and the focus on byte-code-based analysis distorted the understanding of transaction flows, since most of the transaction processing took place in the passing of messages between containerised components of Java code. Finally, the inadequacies of observation and analysis at the software-system state-change level greatly reduced the value of any end-to-end business transaction view. In short, when it came to the newer applications, the energy concern was unable to effectively understand the impact of the thousands of changes being made each month.
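To make the contrast concrete, modern observability platforms, Splunk Observability among them, generally favour explicit, per-transaction tracing through the OpenTelemetry API over sampled, byte-code-level snapshots. The sketch below is purely illustrative rather than the company's actual code: it assumes a hypothetical containerised Java service ("meter-readings") that wraps each incoming message in a span, so every hop of an end-to-end transaction is recorded instead of a sampled subset.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class MeterReadingConsumer {

    // Tracer obtained from the globally configured OpenTelemetry SDK or agent.
    private static final Tracer TRACER =
        GlobalOpenTelemetry.getTracer("meter-readings");

    // Hypothetical message handler: one span per consumed message, carrying
    // business-level context alongside the technical trace data.
    public void onMessage(String meterId, String payload) {
        Span span = TRACER.spanBuilder("process-meter-reading")
            .setSpanKind(SpanKind.CONSUMER)
            .setAttribute("meter.id", meterId)
            .startSpan();
        try (Scope ignored = span.makeCurrent()) {
            persistReading(payload); // downstream calls inherit the trace context
        } catch (RuntimeException e) {
            span.recordException(e);
            span.setStatus(StatusCode.ERROR);
            throw e;
        } finally {
            span.end();
        }
    }

    private void persistReading(String payload) {
        // Placeholder for the real persistence logic.
    }
}
```

Because each message produces its own span rather than relying on a sampled snapshot, change-driven anomalies show up in the trace data as soon as a new version of a component ships, and the spans can still be stitched into an end-to-end business transaction view.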
In response to the deteriorating situation, the APM solution was supplemented by a growing number of tools. Application logs, collected in Splunk’s log management system, proved to be a particularly rich source of information, confirming and enhancing the output of the second-generation technology. Costs were increasing, however, as was the toil of maintaining and choreographing a portfolio of unintegrated technologies. Most problematic of all was the loss of any possibility of assembling an end-to-end view of digital system behaviour that would be directly relevant to business, as opposed to technology, decision makers.
The decision was made to review what the market had to offer, not with an eye to replacing the existing APM technology across the board, but to putting in place a division of labour in which a new platform would handle the newer generation of modular, ephemeral applications, leaving the legacy portion of the portfolio (still over 50%) to the incumbent vendor. There might be a gradual migration away from that vendor over time, but there was no need for a radical, across-the-board replacement in the near term.
Splunk’s Observability platform quickly rose to the top among the vendors being considered for four reasons:
The implementation was a success, and the Splunk Observability platform has now become a key enabler of the company’s IT operations and development. In the future, Splunk’s footprint is expected to grow in tandem with the deployment of new-generation applications; beyond that, more attention will go to using the AI functionality provided by Splunk IT Service Intelligence (ITSI) to understand, and even predict, the impact of system-level changes on business process execution.
The successes achieved by this European company were not the result of unique circumstances. Any company with a focus on digitalisation and a strategic approach to data can get there with an assist from the Splunk technology portfolio. Unfortunately, until now, the reach of Splunk’s observability functionality into the UK and German marketplaces has been somewhat limited by the lack of a sovereign realm that satisfies regulatory requirements. Hence, Splunk was not able to support UK and German companies as they coped with the accelerating pace of digital environment change, even when Splunk was already the strategic choice for on-premises IT operations log management and security information and event management. Now, with the new realms in London and Frankfurt, our UK and German customers are in a great position to build out a unified, AI-enabled approach to development, operational, and security data management.
Want more details on the Splunk realms and our solutions? Don't hesitate to get in touch.
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company with over 7,500 employees, more than 1,020 patents to date, and availability in 21 regions around the world. It offers an open, extensible data platform that supports shared data across any environment so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.