As I visit customer sites that send security device data to Splunk, I find that vulnerability scanners seem to get a lower priority than IDS, IPS, and similar tools; some places do not use a scanner at all. I find that slightly odd, as discovering a security issue yourself, with the right tools used the right way, is much better than someone else finding your security holes. Moreover, as a former colleague once pointed out, many of the appliances brought into a data center, including security collection appliances, may have been built on unpatched, older operating systems, making them prime first targets for a vulnerability assessment. Scanning for vulnerabilities is the first step; making sense of the returned results efficiently is the next major hurdle. Naturally, Splunk comes to mind for this effort. Before going into more detail, I want to provide some background: this article starts with the development of early scanners before explaining why Splunk is an important complementary solution to a vulnerability scanner.
Back in the day, when the internet was a Wild West of open gateways and unfenced machines connected to the greater Net, it was quite easy to find a vulnerability on a network or machine and simply log in. These were the days before much consideration was given to multi-layer firewalls or testing for backdoor accounts. Yours truly was given the task of using software engineering techniques to create one of the first commercially sold TCP/IP vulnerability scanners. I inherited the group's first attempt, which turned out to be a collection of shell scripts and unrelated C programs used as a services offering for security assessments. That approach would not scale to the wider market of customers who needed quick-to-market, real-time tools to conduct their own assessments on a regular basis.
Development
The first thing I did was redesign the whole approach, using object-oriented programming techniques to create repeatable vulnerability-testing code in C++. (Please excuse some of the programming jargon; as I mentioned, I am going to go over development first before getting to the point of using Splunk.) I created a base class with two methods, test() and report(), that were meant to be overridden by implementation classes. Each implementation class conducted a specific vulnerability test given a set or range of input IP addresses or DNS names. As you might guess, each first called its test() method to perform the scan and then called its report() method to log its results to the file system. The hierarchical approach can be summarized with this diagram.
The non-exhaustive list of tests included vulnerabilities found using telnet, ftp, finger, rlogin, rsh, ping, and so on. Anyone could implement a future test by inheriting from the base class and registering it with the main calling program, which started off multi-process but could easily have been rewritten to be multi-threaded. I also implemented a fat-client GUI to initiate the vulnerability scanning and to view the results.
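To make that concrete, here is a minimal C++ sketch of the pattern. Only the base class with its test() and report() methods comes from the original design; the class names, the sample finding, and the driver are my own illustration.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Base class: every vulnerability test overrides test() and report().
class VulnTest {
public:
    explicit VulnTest(const std::string& target) : target_(target) {}
    virtual ~VulnTest() = default;
    virtual void test() = 0;    // perform the scan against target_
    virtual void report() = 0;  // log the results to the file system
protected:
    std::string target_;                 // IP address or DNS name
    std::vector<std::string> findings_;  // filled in by test()
};

// One concrete class per protocol test, e.g. rlogin.
class RloginTest : public VulnTest {
public:
    using VulnTest::VulnTest;
    void test() override {
        // Illustration only: a real test would connect to TCP port 513
        // and probe for trust misconfigurations and default accounts.
        findings_.push_back("guest account enabled");
    }
    void report() override {
        for (const auto& f : findings_)
            std::printf("test=rlogin %s %s\n", target_.c_str(), f.c_str());
    }
};

int main() {
    // The driver holds one object per registered test and target,
    // then invokes test() followed by report() on each.
    std::vector<VulnTest*> tests{new RloginTest("10.4.23.34")};
    for (VulnTest* t : tests) {
        t->test();
        t->report();
        delete t;
    }
}
```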
Reporting
This is where it gets interesting. Each test would return a series of results in a log file format such as this:
Feb 20 12:10:11 test=rlogin 10.4.23.34 root=unsuccessful
Feb 20 12:10:12 test=rlogin 10.4.23.34 guest account enabled
Feb 20 12:10:14 test=rlogin 10.4.23.34 vulnerability xxx found
...
To show this data in aggregated reports, I used scripts written in TCL, which came out to 30 to 50 lines of code per report. Why TCL? One reason was that I knew the language; another was that Perl, Python, Ruby, and the like had either not been invented yet or were in their infancy. These reports were not that fancy. An aggregate report may have looked like this:
| Test | Vulnerabilities Found |
|---|---|
| rlogin | 52 |
| rsh | 30 |
| finger | 5 |
| telnet | 21 |
| ftp | 2 |
| … | … |
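For a sense of what those 30-to-50-line TCL scripts did, here is an equivalent aggregation sketch, written in C++ rather than the original TCL. The log file name is hypothetical, and a real report would also filter out non-findings such as failed login attempts.

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Tally results per test from log lines such as:
//   Feb 20 12:10:12 test=rlogin 10.4.23.34 guest account enabled
int main() {
    std::ifstream log("scan.log");  // hypothetical results file
    std::map<std::string, int> counts;
    std::string line;
    while (std::getline(log, line)) {
        std::istringstream fields(line);
        std::string token;
        while (fields >> token) {
            if (token.rfind("test=", 0) == 0) {  // found the test=<name> field
                ++counts[token.substr(5)];
                break;
            }
        }
    }
    std::cout << "Test | Vulnerabilities Found\n";
    for (const auto& kv : counts)
        std::cout << kv.first << " | " << kv.second << "\n";
}
```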
Imagine how quickly and efficiently I could have created these reports with the power of Splunk’s search language and reporting capabilities. Not only could we have cranked out hundreds of reports rather quickly, but our customers would have been able to do the same to customize their own views.
Let’s head into more modern times and look at what Splunk can do with very little effort against sample vulnerability scan data. I’ve used Nessus as the scanner in the examples, but the same principles should apply to your own favorite scanner. A sample event may look like this:
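(A mock-up for illustration; your scanner’s exact format and field names will differ.)

Feb 20 12:10:11 nessus: dest=10.4.23.34 dest_port=513 severity=high signature="rlogin service detected" The remote host is running the rlogin service, which passes credentials in cleartext.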
It is mostly unstructured, but some of the fields will be extracted automatically by Splunk at search time, as they are in key=value format. My first report may be similar to the one described above, where I want a count of all signatures found (vulnerabilities are now called signatures). The Splunk search could be:
sourcetype=nessus|stats count by signature|sort - count
Now, let’s try something a little more graphical, where I would like to see these same signatures over time as a column chart. In the old days, TCL would not have been able to do this, and I would have had to spend some time learning and using the accompanying Tk toolkit for the chart. In Splunk, the search is as follows, using a custom time range:
sourcetype=nessus|timechart span=30s count by signature
The power of ad-hoc reporting here, along with the ability to put these reports on a dashboard and create alerts on notable events, makes developing these solutions addicting. Splunk would have done for my vulnerability scanner software what a potassium-laced drink would have done for a thirsty athlete: given it the flexibility and openness to quickly do something meaningful with data that would otherwise grow stale from lack of attention.
If you like these reports for your vulnerability scanner, then you may also want to consider Splunk’s ES 2.0 offering, which has vulnerability scanner reports and related correlation searches as a sub-domain of the network portion of its monitoring. These are much more professional reports than the ones I just showed you: they use summary indexing to scale aggregate reporting to massive amounts of data, and they correlate the data with events from other security devices to give you a more complete security posture. For the reports to work out of the box, you will need to alias your field names to the Splunk Common Information Model and create a few tags and eventtype definitions for your events. ES 2.0 already does this for the Nessus sourcetype.
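If your scanner is not covered out of the box, the mapping looks roughly like this. A minimal sketch, assuming a hypothetical sourcetype named myscanner whose events carry a raw dst_ip field; check the CIM documentation for the exact field and tag names your version expects:

```
# props.conf — alias raw field names to their CIM equivalents
[myscanner]
FIELDALIAS-cim_dest = dst_ip AS dest

# eventtypes.conf — define an eventtype covering the scanner's events
[myscanner_vuln]
search = sourcetype=myscanner

# tags.conf — tag the eventtype so the vulnerability searches pick it up
[eventtype=myscanner_vuln]
vulnerability = enabled
report = enabled
```

Here are some partial dashboard views: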
Vulnerability scanners have come a long way since the days of exploiting basic TCP/IP protocol weaknesses, and they now include a rich set of tests more in line with today’s platforms and concerns. Regardless of the era, I hope I have shown that Splunk can be a complementary tool for monitoring, reporting on, and investigating your infrastructure. You should consider using both.
You may ask: whatever happened to my original vulnerability scanner project? Well, I moved on to other things, and at Splunk I applied my knowledge to launch a real-time status concept, where the concern is the availability of your machines through various protocols rather than their vulnerability. The product I helped develop was decommissioned, as the arrival of open source scanners with names like SATAN, SAINT, and SARA made the commercial product somewhat superfluous. If nothing else, I helped contribute to the timeline along which the genre evolved.
P.S.
You may also be wondering what the name of the product was. Since the company sponsoring it has been sold twice and the name was trademarked, I did not want to mention it outright. The name started with PING and ended with WARE 2.0.