Note: This feature is now available for Splunk Enterprise customers in the Spring 2022 Beta.
For years, customers have leveraged the power of Splunk configuration files to customize their environments with flexibility and precision. And for years, we’ve enabled admins to customize things like system settings, deployment configurations, knowledge objects and saved searches to their hearts’ content.
Unfortunately, a side effect of this flexibility was that multiple team members could change underlying .conf files and then forget those changes ever occurred. Multiply that by the many configuration changes that can happen every day, and an environment can end up behaving very differently than anyone expects.
These changes have never been natively tracked within Splunk, leading to confused team members and befuddled customer support reps. Don’t you wish there was a way to track .conf file changes?
In the Splunk Enterprise Spring 2022 Beta (interested customers can apply here), users have access to a new internal index for configuration file changes called “_configtracker”. Its events come from configuration_change.log and capture the creation, modification, and deletion of .conf files in the monitored file paths.
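For reference, each event in “_configtracker” is a JSON object whose structure mirrors the field paths used in the query below. The abridged event here is purely illustrative; the values are invented, and real events carry additional metadata:
{
  "data": {
    "path": "/opt/splunk/etc/system/local/server.conf",
    "modtime": "Mon Apr 4 10:15:02 2022",
    "changes": [
      {
        "stanza": "general",
        "properties": [
          {
            "name": "serverName",
            "new_value": "splunk-idx-02",
            "old_value": "splunk-idx-01"
          }
        ]
      }
    ]
  }
}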
A simple table view built with the following query can provide a fast way for users to understand which file paths, stanzas, and properties are changing within an environment:
index=_configtracker sourcetype="splunk_configuration_change" data.path=*server.conf
| spath output=modtime data.modtime
| spath output=path data.path
| spath output=stanza data.changes{}.stanza
| spath output=prop_name data.changes{}.properties{}.name
| spath output=new_value data.changes{}.properties{}.new_value
| spath output=old_value data.changes{}.properties{}.old_value
| table modtime path stanza prop_name new_value old_value
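For a quicker bird’s-eye view, a small variation of the same search (a sketch, not part of the original example) drops the server.conf filter and tallies changes per configuration file:
index=_configtracker sourcetype="splunk_configuration_change"
| spath output=path data.path
| stats count by path
| sort - count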
Below, you can see an example of how local configuration changes made in the UI are seamlessly translated to the underlying configuration files. Thus, a user changing the configuration settings of an existing alert can find those changes logged in the “_configtracker” index.
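To zero in on alert-related edits like the one above, the same search pattern can be pointed at savedsearches.conf, where alert settings are stored. This is a sketch following the query structure shown earlier:
index=_configtracker sourcetype="splunk_configuration_change" data.path=*savedsearches.conf
| spath output=stanza data.changes{}.stanza
| spath output=prop_name data.changes{}.properties{}.name
| spath output=new_value data.changes{}.properties{}.new_value
| spath output=old_value data.changes{}.properties{}.old_value
| table _time stanza prop_name new_value old_value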
Lastly, this new feature can be used to revisit previous troubleshooting sessions. For example, a common troubleshooting tactic in the case of a blocked queue is to increase the queue size under indexes.conf. Although this may relieve the symptom in the short term, the actual root cause of the problem may still be lurking in the background. When the larger issue manifests via new symptoms later on, a deeper investigation usually takes place. At that point, it’s important for the admin or support representative to know which settings were previously tinkered with. With Splunk’s new config change tracker feature, it’s easy for admins or support reps to look back and see whether queue size settings were previously manipulated, and better yet, which queue size values were specifically attempted.
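As a sketch of that lookback, the search below narrows the same fields to indexes.conf; the wildcarded property filter is illustrative, since the exact setting name depends on which queue parameter was actually tuned:
index=_configtracker sourcetype="splunk_configuration_change" data.path=*indexes.conf
| spath output=prop_name data.changes{}.properties{}.name
| spath output=old_value data.changes{}.properties{}.old_value
| spath output=new_value data.changes{}.properties{}.new_value
| search prop_name="*size*"
| table _time prop_name old_value new_value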
This same use case extends to a whole host of other configuration values, like timeouts and concurrency limits, to name just a few.
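One way to spot those recurring tweaks across an environment is to rank settings by how often they change. The sketch below is a starting point rather than part of the feature itself; mvexpand flattens events that change several properties at once:
index=_configtracker sourcetype="splunk_configuration_change"
| spath output=path data.path
| spath output=prop_name data.changes{}.properties{}.name
| mvexpand prop_name
| stats count as changes latest(_time) as last_changed by path prop_name
| convert ctime(last_changed)
| sort - changes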
That’s all folks! We can’t wait for our customers to start leveraging the configuration change tracker feature today. Please do leave any feedback or suggestions under “Enterprise Administration - Internal Logs” in the Splunk Ideas Portal.