Whether in the cloud or on-premises, visibility into the inner workings of our IT services and infrastructure is an essential ingredient of a well-functioning IT system.
The drive for digital transformation as a core strategic objective for most modern enterprises means that ensuring IT systems work well, stay secure, and deliver value for money is a critical endeavor. Monitoring IT status and performance is crucial for keeping services functional, performant, and secure.
According to The Uptime Institute's Annual Outage Analysis, more than two-thirds (67%) of all outages cost organizations more than $100,000. The takeaway? The ability to quickly detect and address system anomalies is a capability you need.
In this article, we will review what is monitored, the process of monitoring, and future trends.
Splunk IT Service Intelligence (ITSI) is an AIOps, analytics and IT management solution that helps teams predict incidents before they impact customers.
Using AI and machine learning, ITSI correlates data collected from monitoring sources and delivers a single live view of relevant IT and business services, reducing alert noise and proactively preventing outages.
Put simply, the term “IT monitoring” refers to any processes and tools you use to determine whether your organization’s IT equipment and digital services are working properly. Monitoring helps you detect and resolve problems of all sorts.
Today, monitoring is complicated. That’s because our systems and architecture are complicated — the IT systems we use are distributed. (Just like the people we work with are, too.)
Let’s look at a couple of official definitions.
Google’s SRE book defines monitoring as the “collecting, processing, aggregating, and displaying real-time quantitative data about your system”. This data can include query counts and types, error counts and types, processing times, and server lifetimes.
In ITIL® 4, information about service health and performance falls under the “Monitoring and Event Management” practice, which defines monitoring as the capability that enables organizations to systematically observe services and service components, and to record and report selected changes of state identified as events.
Monitoring is closely linked with many of the IT service management (ITSM) practices including incident management, problem management, availability management, capacity and performance management, information security management, service continuity management, configuration management, deployment management, and change enablement.
Monitoring can have various “flavors”. Though this article is about IT systems monitoring writ large, we can also categorize more specific subsets of monitoring, like infrastructure monitoring, application performance monitoring (APM), network monitoring, and security monitoring.
(Splunk can help with all of this. We also offer vendor-specific monitoring: AWS, SAP, GCP and more.)
(Image: Splunk Infrastructure Monitoring showing an AWS services dashboard. The EC2 dashboard displays out-of-the-box metrics and indicates critical disk space issues.)
IT systems monitoring is about answering two fundamental questions: what is happening, and why it is happening.
To answer these questions, you need to continuously monitor system elements for anomalies, issues, and maintenance alerts, ensuring that services operate and can be consumed at the agreed performance levels.
Metrics are the sources of raw measurement data that monitoring systems collect, aggregate, and analyze. IT system metrics span multiple layers, from low-level infrastructure measurements (CPU, memory, disk and network utilization) up to application-level measurements (request rates, error rates, response times).
In Google’s terminology, monitoring a system from the outside, testing externally visible behavior as a user would see it, is known as “black-box monitoring”. Monitoring based on metrics exposed by the system’s internals is “white-box monitoring”. Infrastructure-level monitoring is generally the preserve of system administrators and DevOps engineers, while application-level monitoring is usually the work of developers and application support engineers.
IT system monitoring metrics are usually sourced from native monitoring features designed and built into the IT components being observed. Beyond that, some IT monitoring systems deploy custom-built instrumentation (such as lightweight software agents) that can extract more advanced service-level metrics.
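For illustration, here’s a minimal sketch of what such a lightweight agent might look like in Python, using the third-party psutil library to sample a few infrastructure metrics. The sampling interval and the print-instead-of-forward behavior are simplifications, not a real agent design:

```python
import time

import psutil  # third-party library for cross-platform system metrics


def collect_system_metrics() -> dict:
    """Sample a few low-level infrastructure metrics."""
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),    # CPU usage over a 1s window
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }


if __name__ == "__main__":
    while True:
        sample = collect_system_metrics()
        print(sample)   # a real agent would forward this to a collector endpoint
        time.sleep(30)  # hypothetical sampling interval
```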
According to Google, four golden signals should be the focus for IT systems monitoring: latency, traffic, errors, and saturation.
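As a rough illustration of how these signals could be derived from raw request data, here’s a Python sketch. The Request record and the capacity_rps parameter are hypothetical stand-ins for whatever your telemetry actually provides:

```python
from dataclasses import dataclass
from statistics import quantiles


@dataclass
class Request:
    latency_ms: float
    is_error: bool


def golden_signals(requests: list[Request], window_s: float, capacity_rps: float) -> dict:
    """Derive the four golden signals from one window of request records.

    capacity_rps is an assumed sustainable request rate, used only to
    approximate saturation as a fraction of capacity.
    """
    count = len(requests)
    traffic = count / window_s                                # requests per second
    error_rate = sum(r.is_error for r in requests) / max(count, 1)
    latencies = sorted(r.latency_ms for r in requests)
    if count >= 2:
        p99 = quantiles(latencies, n=100)[98]                 # 99th-percentile latency
    else:
        p99 = latencies[0] if latencies else 0.0
    return {
        "traffic_rps": traffic,
        "error_rate": error_rate,
        "latency_p99_ms": p99,
        "saturation": traffic / capacity_rps,
    }
```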
As system administrators set up monitoring systems to capture more data, they run the risk of being overwhelmed by alert noise, false positives, and sheer data volume.
It is a good practice to set up simple, predictable, and reliable rules that catch real issues more often than not.
In addition, regularly reviewing threshold settings (informational vs. warning vs. exceptional) and effectively configuring automated correlation engines, such as those enabled by AIOps, can help prevent over-alerting.
(Learn about adaptive thresholding, which enables smarter monitoring.)
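To make the “simple, predictable rules” idea concrete, here’s a small sketch of static threshold classification in Python. The metric names and threshold values are hypothetical; in practice they come out of your planning phase (or from adaptive thresholding):

```python
# Hypothetical static thresholds; real values come from your planning phase.
THRESHOLDS = {
    "cpu_percent":  {"warning": 75.0, "exceptional": 90.0},
    "disk_percent": {"warning": 80.0, "exceptional": 95.0},
}


def classify(metric: str, value: float) -> str:
    """Map a metric reading to informational/warning/exceptional."""
    levels = THRESHOLDS.get(metric)
    if levels is None:
        return "informational"  # unmonitored metrics never page anyone
    if value >= levels["exceptional"]:
        return "exceptional"
    if value >= levels["warning"]:
        return "warning"
    return "informational"


assert classify("cpu_percent", 93.0) == "exceptional"
assert classify("disk_percent", 82.0) == "warning"
```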
Now, with the context set, let’s look at the six main activities in IT systems monitoring: planning, detection, filtering and correlation, classification, notification and response, and review.
When selecting an IT system to monitor, you’ll need to do several planning activities, including:
- Defining its priority
- Choosing features to monitor
- Establishing metrics and thresholds for event classification
- Defining a service “health model” (end-to-end events)
- Defining event correlations and rule sets
- Mapping events to the action plans and teams responsible
Key outputs from planning include the service health model, the metric thresholds and classification scheme, the correlation rule sets, and the action plan and responsibility matrix that map events to the teams responsible for them.
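One way to picture these outputs is as a single structured artifact. The sketch below is illustrative only; the field names are hypothetical, not a Splunk or ITIL schema:

```python
from dataclasses import dataclass


@dataclass
class MonitoringPlan:
    """Illustrative container for planning outputs; all field names are hypothetical."""
    service: str
    priority: str                               # e.g. "business-critical"
    metrics: list[str]                          # features chosen for monitoring
    thresholds: dict[str, dict[str, float]]     # metric -> severity level -> limit
    correlation_rules: list[str]                # identifiers of event correlation rules
    responsibility_matrix: dict[str, str]       # event class -> owning team


plan = MonitoringPlan(
    service="checkout-api",
    priority="business-critical",
    metrics=["latency_p99_ms", "error_rate", "cpu_percent"],
    thresholds={"error_rate": {"warning": 0.01, "exceptional": 0.05}},
    correlation_rules=["same-host-within-5m", "downstream-dependency"],
    responsibility_matrix={"security": "SOC", "performance": "SRE on-call"},
)
```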
Detection is the first stage of event handling: IT system alerts are raised when the set thresholds and criteria are breached. Alerts are captured by an IT monitoring system, where they can be displayed, aggregated, and analyzed.
Based on the rules you’ve set, the monitoring system filters and correlates the received alerts. Filtering can be based on criteria such as alert source, type, and severity.
Correlation then checks for patterns among related alerts to determine anomaly sources and potential impacts.
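Here’s a simplified sketch of what filtering and correlation might look like in code, assuming alerts are plain dictionaries with timestamp, host, and severity fields; real correlation engines use far richer rules:

```python
from collections import defaultdict


def filter_alerts(alerts: list[dict], min_severity: int = 2) -> list[dict]:
    """Drop alerts below the configured severity (1=info, 2=warning, 3=exceptional)."""
    return [a for a in alerts if a["severity"] >= min_severity]


def correlate(alerts: list[dict], window_s: float = 300.0) -> list[list[dict]]:
    """Group alerts from the same host that fire within a short window;
    a burst of alerts on one host often shares a single root cause."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        by_host[alert["host"]].append(alert)

    groups = []
    for host_alerts in by_host.values():
        group = [host_alerts[0]]
        for alert in host_alerts[1:]:
            if alert["timestamp"] - group[-1]["timestamp"] <= window_s:
                group.append(alert)       # same incident, most likely
            else:
                groups.append(group)      # gap too large: start a new group
                group = [alert]
        groups.append(group)
    return groups
```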
In the classification phase, events are grouped according to set criteria (such as type and priority) to inform the right response. For example, alerts related to intrusion or ransomware would be classified as security events, which tells the SOC team to act on them.
Based on the action plan and responsibility matrix you defined during planning, the relevant team is paged via email, text, online collaboration systems, or other agreed channels.
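A minimal sketch of classification-driven notification might look like this; the routing table and channels are hypothetical examples of a responsibility matrix, and the print call stands in for a real paging, email, or chat integration:

```python
# Hypothetical responsibility matrix produced during the planning phase.
ROUTES = {
    "security":    {"team": "SOC",         "channel": "pagerduty"},
    "performance": {"team": "SRE on-call", "channel": "slack"},
    "capacity":    {"team": "Platform",    "channel": "email"},
}


def notify(event: dict) -> None:
    """Page the team that owns this event class over its agreed channel."""
    route = ROUTES.get(event["class"], {"team": "service desk", "channel": "email"})
    # Stand-in for a real integration (paging API, email gateway, chat webhook).
    print(f"Paging {route['team']} via {route['channel']}: {event['summary']}")


notify({"class": "security", "summary": "Possible ransomware activity on host web-01"})
```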
In some IT environments, the event response can be automated, meaning that action is taken without human intervention, such as rebooting instances or failing over traffic.
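As a sketch of automated response, the snippet below maps hypothetical event types to remediation actions; the functions stand in for real cloud or orchestration API calls:

```python
def reboot_instance(instance_id: str) -> None:
    print(f"rebooting {instance_id}")             # stand-in for a cloud API call


def failover_traffic(service: str) -> None:
    print(f"failing over traffic for {service}")  # stand-in for a load-balancer call


# Hypothetical runbook: event type -> automated remediation.
RUNBOOK = {
    "instance_unresponsive": lambda e: reboot_instance(e["instance_id"]),
    "zone_degraded":         lambda e: failover_traffic(e["service"]),
}


def auto_respond(event: dict) -> bool:
    """Execute the mapped action; return False to escalate to a human instead."""
    action = RUNBOOK.get(event["type"])
    if action is None:
        return False
    action(event)
    return True
```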
Based on how events were handled and the resulting effect on IT system quality, you should regularly review the monitoring plan to ensure that the metrics and thresholds you set still meet your requirements. The review should also confirm that classification criteria, correlation rules, and response plans remain effective.
As IT systems grow in complexity, organizations will need to invest in IT systems monitoring tools that can keep up with technological evolution and the volume of changes being made.
In a survey from 451 Research, 39% of respondents had invested in between 11 and 30 monitoring tools for their application, infrastructure, and cloud environments. Wow! This tool sprawl quickly results in fragmented visibility, duplicated effort, and rising costs.
Tools that can span the entire technology landscape and consolidate events across myriad systems and environments will inevitably be more attractive for organizations looking for value for money.
From our work with clients over the last several years, along with annual research, two primary trends emerge.
The impact of AI/ML on IT systems monitoring will continue to grow, especially given the rising capability of large language models (LLMs). Modern tools with integrated AI can now handle the entire process lifecycle from detection to response, especially for analyzing large volumes of event data, and can take over tedious activities such as event correlation and log analysis across distributed systems.
With appropriate training, these tools are perfectly suited to sort through alert “noise” and “false positives/negatives” faster and more effectively than any human team. However, this does not mean the total elimination of people from IT systems monitoring — instead, their focus will shift to building better orchestration and automation tools to respond to alerts and resolve them.
The other trend that impacts IT systems monitoring is the advent of unified observability. The rise of platforms that provide a single view — across infrastructure, applications, and user experience — by analyzing logs, metrics and traces means there’s a valuable magnifying glass available to you: more thorough analysis of alerts to pinpoint the exact issues that users are facing across complex environments.
(Splunk is the first platform that unifies full observability with cybersecurity. See how.)
For businesses of all sizes, IT systems monitoring is a critical way to guarantee the functionality, performance, and security of IT services. The field of IT systems monitoring will continue to evolve to meet new challenges and offer more benefits as technology continues to advance.
The significance of continual improvement cannot be overstated. Only by embracing a proactive, data-driven approach to IT systems monitoring can organizations guarantee that their services provide value.