Software development life cycle (SDLC) value streams aren’t difficult to understand: you correlate all activity in the development process, from planning to production, and relate that activity to key objectives such as impact on revenue, application quality and user satisfaction. Unfortunately, most organizations approach value streams on an ad-hoc, manual basis. This approach requires significant effort, is prone to error and creates huge opportunity costs. Building real-time visibility into your SDLC, with consistency across the entire development life cycle, allows organizations to empower their developers with data.
Value streams help organizations understand the impact of their delivery chain and application velocity in many ways.
What’s consistent across these uses is that they span every phase of development and relate that activity to other key business objectives. Normal development activity and visibility are limited to individual teams, and information is seldom shared. A true value stream, however, provides insights to anyone in the organization, no matter their function.
What many organizations are realizing, as they automate software delivery and build more efficient delivery chains, is that the delivery chain itself is a product. The delivery chain has features; it’s scripted and iterated upon constantly. Iterations include building more automation, adding features and building tools. Just as with any other application, the delivery chain has a service level agreement (SLA) for the developers who use it and objectives it is intended to meet. After all, developers can’t release any features if the processes and pipeline that ship their code are broken.
Also, like any application, the infrastructure and code that run the delivery chain produce a lot of telemetry data. This data can easily be collected and visualized so that organizations can, at a glance, see the stability of their delivery chain, spot anomalies and identify areas for improvement.
Collecting this telemetry data is the first key step in building out a value stream, because this data, in aggregate, gives everyone a big-picture view of how their increased application velocity maps to key objectives.
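To make that first step concrete, here’s a minimal sketch of shipping a delivery-chain event to Splunk’s HTTP Event Collector (HEC). The host, token, index and field names below are hypothetical placeholders, not part of any particular integration:

    # Minimal sketch: send a delivery-chain event to Splunk's HTTP Event Collector.
    # The host, HEC token, "sdlc" index and event fields are hypothetical placeholders.
    import requests

    SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # replace with a real HEC token

    event = {
        "event": {
            "stage": "deploy",           # which phase of the delivery chain emitted this
            "pipeline": "payments-svc",  # hypothetical pipeline name
            "status": "success",
            "duration_sec": 214,
        },
        "sourcetype": "sdlc:deploy",
        "index": "sdlc",
    }

    resp = requests.post(
        SPLUNK_HEC_URL,
        json=event,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

Emitting an event like this from each stage of the pipeline is what turns isolated tool logs into a single, searchable record of the delivery chain.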
As you already know, Splunk has the collection and visualization of data down to a science. These capabilities make Splunk a valuable tool for visualizing value streams. But the hardest parts of implementing a value stream are knowing what to collect, getting consensus between teams and driving adoption of the value stream.
Splunk addresses this with Splunkbase and a broad collection of partnerships and integrations that collect telemetry data from the most popular software development tools, like Azure DevOps, Vault, and JFrog.
Stephen Chin, Senior Director of JFrog Developer Relations, explains:
"To add to the visualization of the data, leading DevOps teams rely on the JFrog platform to enable Splunk to receive the unified log data for their value streams. This end to end solution accelerates software delivery from building, managing, and securing binaries in self-hosted, cloud, and hybrid environments."
Looking specifically at the recent JFrog integration, we find a great opportunity to leap (excuse the pun) across each stage of the development lifecycle, creating visibility into the stability and movement of artifacts in your software development process. Stability and predictable artifact movement are key attributes of any healthy delivery chain.
The JFrog integration provides out-of-the-box log ingestion from Artifactory and Xray, along with pre-built dashboards, allowing organizations to quickly gain insights into both the administration of Artifactory and the activity data from Artifactory and Xray.
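Once those logs are indexed, even a simple search starts to answer value stream questions. The sketch below runs a daily download count through Splunk’s REST search export endpoint; the index and sourcetype names are illustrative assumptions rather than the integration’s documented defaults, so substitute the ones your JFrog app actually writes to:

    # Sketch: query indexed Artifactory request logs via Splunk's REST search API.
    # The "jfrog" index and "artifactory:request" sourcetype are assumed names;
    # verify them against your own deployment.
    import requests

    SPLUNK_API = "https://splunk.example.com:8089/services/search/jobs/export"
    SEARCH = (
        'search index=jfrog sourcetype="artifactory:request" '
        "| timechart span=1d count AS downloads"
    )

    resp = requests.post(
        SPLUNK_API,
        auth=("admin", "changeme"),  # use a real account or token
        data={"search": SEARCH, "output_mode": "json"},
        verify=False,                # self-signed certs are common on port 8089
        timeout=60,
    )
    for line in resp.iter_lines():
        if line:
            print(line.decode())     # one JSON result per line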
"The integration between JFrog and Splunk makes all of the rich log data on: data transfers, repository access, audit actions, and service errors from the JFrog Platform, available for analysis within Splunk Enterprise for visibility across the entire value stream." — Stephen Chin
Observability across IT Value Streams with Splunk and JFrog
On August 26th, join JFrog’s Stephen Chin and me as we explore this specific use case. And don’t forget to see JFrog at the next Splunk DevOps virtual event.
Automated software delivery chains are meant to release software faster and at higher quality. However, organizations without full visibility into their delivery chain are often unable to answer how they’re actually doing against those goals. Building a value stream with Splunk, the Data-to-Everything Platform, can help bridge the gap between objectives and understanding the value of going fast.
With integrations like JFrog’s, organizations now have application-level observability from plan to code and package to production, giving them a critical tool in their development workflow, while DevOps teams gain insights into the movement and integrity of artifacts and containers.
By connecting JFrog’s integration to CI/CD tools like Spinnaker, Jenkins or JFrog Pipelines, testing tools like Selenium and, finally, production observability, organizations can correlate data across their entire toolchain and build a value stream that crosses silos.
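As a rough sketch of what that correlation can look like, the search string below ties hypothetical CI, artifact and production indexes together on a shared build ID; every index, sourcetype and field name here is an assumption for illustration, and you can run it the same way as the export-API example above:

    # Sketch: correlate pipeline, artifact and production events on a shared
    # build ID. All index, sourcetype and field names below are hypothetical.
    CORRELATION_SEARCH = (
        "search (index=ci OR index=jfrog OR index=prod) build_id=* "
        "| stats values(sourcetype) AS stages "
        "min(_time) AS first_seen max(_time) AS last_seen BY build_id "
        "| eval lead_time_hours=round((last_seen - first_seen) / 3600, 1)"
    )
    # Per build, this shows which stages of the toolchain it touched and its
    # end-to-end lead time, which is the kind of cross-silo view a value
    # stream is meant to provide.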
----------------------------------------------------
Thanks!
Chris Riley
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company with over 7,500 employees, more than 1,020 patents to date and availability in 21 regions around the world. Splunk offers an open, extensible data platform that supports shared data across any environment so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.