If IT security is top-of-mind for you and your organization, asset and application discovery is critical — you need to know all of the assets you have in order to identify any areas of vulnerability.
Asset and application discovery is the process of identifying and cataloging all software and hardware systems running on the network.
Specifically, discovery is the use of automation and tools to detect application components and services (i.e., assets). These assets may be either:
Integrated permanently within the IT environment.
Spun up ephemerally to perform temporary computing tasks.
IT assets include hardware, software, and cloud services. Anything that may be a part of IT infrastructure, including virtualization options, can be included in the broad category of IT assets.
Hardware assets include:
Laptops and desktop computers
Servers
Routers
Printers and scanners
Software assets include:
Licenses
Operating systems
Applications
Cloud assets include:
Virtual machines (VMs)
Storage resources
Today, cloud computing, the containerization of IT, and increasing concerns around information security converge, revealing an important fact: enterprise technology environments are chaotic and heterogeneous.
So how can you cut through the noise?
An obvious goal here — for business executives, ITOps, and cybersecurity teams — is to discover, track, and monitor all IT assets and their operational states. This is a basis for automating and controlling ITOps processes, such as resource provisioning or managing security risks, by understanding how external cloud-based services access sensitive user data.
Let’s look at some common discovery techniques.
With agent-based discovery, an agent is installed on the target server to capture discovery information: details and logs on system performance, configurations, processes, network communications, and traffic between the target server and host systems.
A monitoring station sends periodic requests to the agent, which responds with real-time information on the systems, apps, and services running on that server. A monitoring tool then analyzes this information to:
Determine application dependencies.
Identify any topological change in communications between third-party services.
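The poll-and-respond loop above can be sketched in a few lines. This is a minimal, in-process illustration, not a real agent: the `DiscoveryAgent` and `MonitoringStation` names, and the fields the agent reports, are hypothetical.

```python
import json
import platform
import socket
import time

class DiscoveryAgent:
    """Hypothetical agent running on a target server (illustrative only)."""

    def collect(self):
        # A real agent would also capture processes, configurations,
        # and traffic data between the server and host systems.
        return {
            "hostname": socket.gethostname(),
            "os": platform.system(),
            "timestamp": time.time(),
        }

class MonitoringStation:
    """Polls each agent on request and records the latest response."""

    def __init__(self, agents):
        self.agents = agents
        self.inventory = {}

    def poll_once(self):
        for agent in self.agents:
            report = agent.collect()
            self.inventory[report["hostname"]] = report
        return self.inventory

station = MonitoringStation([DiscoveryAgent()])
inventory = station.poll_once()
print(json.dumps(inventory, indent=2))
```

In practice the station would call `poll_once` on a schedule and diff successive inventories to detect dependency or topology changes.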
(Splunk can help with all your monitoring and full-stack observability needs.)
Agentless discovery is the traditional approach to application discovery: a monitoring tool interacts directly with the target service. One example is the "sweep and poll" technique.
This involves pinging target IP addresses and identifying the responding services. The monitoring tool analyzes information such as ping rate and device group logs, which can be captured from an individual network node.
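A sweep of this kind can be sketched as follows. Real tools typically use ICMP ping and SNMP polls; this sketch substitutes TCP connection attempts so it stays dependency-free, and the `sweep` function and its parameters are illustrative assumptions, not a real tool's API.

```python
import socket

def sweep(hosts, ports=(22, 80, 443), timeout=0.2):
    """Hypothetical sweep: attempt TCP connects to each host/port pair
    and record which services respond within the timeout."""
    discovered = {}
    for host in hosts:
        open_ports = []
        for port in ports:
            try:
                # A successful connect means something is listening.
                with socket.create_connection((host, port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                continue  # closed, filtered, or unreachable
        if open_ports:
            discovered[host] = open_ports
    return discovered

# Sweep only the loopback address here, to avoid touching a real network.
print(sweep(["127.0.0.1"]))
```

Against a /16 subnet this loop already illustrates the scaling problem discussed next: tens of thousands of sequential connection attempts take a long time.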
This is a simple and lightweight technique for asset and application discovery, but it has its limitations.
Discovering assets in a large, distributed network can take a long time. The ephemeral nature of cloud-based and containerized applications, plus frequent changes in dependencies, means that critical assets may go undiscovered. And with third-party cloud services, users may have limited visibility into, and access to, external data centers.
Following the enormous adoption of SaaS, the cloud industry is responding to the growing business need for accurate and real-time discovery of IT assets.
The Discovery as a Service (DaaS) offering is a cloud-based service that typically works by:
Capturing network monitoring and orchestration-level data from the host systems.
Running advanced AI tools on the backend to identify patterns corresponding to specific IT assets.
The DaaS model itself is not fundamentally different from traditional SaaS monitoring tools with embedded AI capabilities for discovering application and asset relationships.
These services may also rely on conventional agent-based and agentless monitoring capabilities. In this case, it’s important to consider the limitations of both methods:
Agent-based tools require installation on every server to maximize visibility.
Agentless systems can only register assets visible during the monitored periodic intervals.
This is where advanced analytics and monitoring capabilities become crucial: they rely on data patterns and real-time information flow between interacting apps and services.
Data-driven capabilities used in DaaS are scalable, but they also expose users to privacy risks: by closely analyzing network logs, monitoring tools can reverse-engineer network usage patterns without ever accessing the information exchanged between the target apps and end users. These patterns can reveal details about app usage and end users that would otherwise be considered private and confidential.
The modern enterprise IT network is complex. The architecture may be software-defined and running application components within containerized or virtual environments. The computing resources are allocated dynamically based on:
Changing traffic demands
Performance expectations
The network generates a deluge of information that must be analyzed in real time. In the context of application and asset discovery, real-time, packet-level information is analyzed. The network communication packets contain information on:
The network source
IP flow attributes
Source and destination nodes
Security identifiers
Other information that is crucial for the correct and secure interaction between applications, services, and machines
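The kind of aggregation a discovery tool performs over these packet attributes can be sketched with simplified flow records. The `FlowRecord` fields and the `summarize` helper below are illustrative assumptions, loosely modeled on NetFlow/IPFIX-style 5-tuple records rather than any specific product's schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    """Simplified IP flow attributes; field names are illustrative."""
    src: str       # source node
    dst: str       # destination node
    protocol: str  # transport protocol
    dst_port: int  # destination service port

def summarize(flows):
    """Count flows per (destination node, port) pair -- the kind of
    aggregation done before mapping traffic to application components."""
    return Counter((f.dst, f.dst_port) for f in flows)

flows = [
    FlowRecord("10.0.0.5", "10.0.0.20", "tcp", 443),
    FlowRecord("10.0.0.6", "10.0.0.20", "tcp", 443),
    FlowRecord("10.0.0.5", "10.0.0.30", "tcp", 5432),
]
print(summarize(flows).most_common(1))  # busiest (node, port) pair
```

A repeated concentration of flows toward one (node, port) pair is exactly the kind of signal that lets a tool hypothesize "this node hosts a service" before any agent confirms it.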
Monitoring tools with advanced AI capabilities can infer patterns of information flow between network nodes and map them to specific application components and asset instances.
In contrast to deterministic discovery techniques such as agent-based monitoring and the agentless sweep-and-poll approach, AI tools can discover assets using probabilistic models of network and asset behavior within a complex IT environment.
(Read our complete guide to network monitoring.)
For improved security, cost optimization, and visibility, asset and application discovery is crucial.
See an error or have a suggestion? Please let us know by emailing ssg-blogs@splunk.com.
This posting does not necessarily represent Splunk's position, strategies or opinion.