Over the last year, many of us have been introduced to the term “Software Supply Chain”. For better or worse, it is now part of our defense vernacular and won’t be going away any time soon. If anything, it has consumed us in many ways and has been the cause of many nights of lost sleep. Well, that could just be us on the SURGe team here at Splunk.
Last year, after the SolarWinds Orion supply chain attack, we decided to take a closer look at software supply chain attacks. We knew it would be a challenging subject to tackle, especially considering the wide swath of software, services, infrastructure, people, and all of the other bits and bobs that make up our computing life. It turns out that software supply chain attacks are far more common than we’d like to think. The European Union Agency for Cybersecurity (ENISA) recently published a report detailing about 25 known software supply chain attacks between January 2020 and July 2021 alone.
Over the last few months, we’ve embarked on some research to explore methods to potentially detect supply chain attacks and shed some light on this dark corner of cyber defense. We purposefully limited our scope to methodologies that had the potential to detect abnormal activity on critical servers, the kind that are terrifying to look at sideways, never mind patch. Why would we limit our scope so much? Well, it turns out detecting supply chain attacks isn’t easy. It’s actually pretty hard. We also wanted to ensure that whatever the end result was, it would be useful for the majority of our readers, not just the wizards, as Ryan Kovar, leader of the SURGe team, would say.
After much deliberation, we came to the conclusion that focusing on JA3 and JA3S hashes would be a fun and perhaps rewarding path forward. What is this magical alphanumeric string I speak of? I’m glad you asked. We won’t go into much detail here, but suffice it to say it is a fairly simple, yet ingenious, method to fingerprint TLS negotiations between a client and a server: JA3 hashes attributes of the client’s TLS Client Hello, while JA3S hashes the server’s response, so the pair fingerprints both sides of the negotiation. I know, you want to know more. Well, we recommend that you check out some amazing research by Lee Brotherston and even more from the fine folks over at Salesforce who open-sourced their JA3 code. After reading that, you should come away with a great understanding of TLS fingerprinting and how JA3 works.
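If you already have Zeek SSL logs in Splunk, a quick way to get a feel for these fingerprints is to count the JA3/JA3S pairs you see and the server names associated with them. This is just a sketch, and it assumes the same Zeek SSL sourcetype and field names (ja3, ja3s, server_name) used in the query later in this post:
sourcetype="bro:ssl:json" ja3="*" ja3s="*"
| stats count values(server_name) AS "Server Names" by ja3, ja3s
| sort - count
The common, high-volume pairs are your everyday clients and services; the rare pairs at the bottom of that list are exactly the kind of thing the anomaly detection approach below surfaces automatically.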
Now, let’s get to the useful information rather than just talking about our inspirational journey. What would a Splunk blog post be without some screenshots of Splunk? One of the queries we developed uses Splunk’s anomalydetection command. With action=annotate, the command adds fields such as log_event_prob and probable_cause, which we then summarize with stats. In this query, we’re simply going to try to find some anomalous TLS sessions in our Zeek logs coming from our critical server netblock, 192.168.70.0/24.
sourcetype="bro:ssl:json" ja3="*" ja3s="*" src_ip IN (192.168.70.0/24)
| anomalydetection method=histogram action=annotate pthresh=0.0001 src_ip, ja3, ja3s
| stats sparkline max(log_event_prob) AS "Max Prob", min(log_event_prob) AS "Min Prob", values(probable_cause) AS "Probable Causes", values(dest_ip) AS "Dest IPs", values(server_name) AS "Server Names", values(ja3) AS "JA3", values(src_ip) as "Source IPs" count by ja3s
| table "Server Names", "Probable Causes", "Max Prob", "Min Prob", "Dest IPs", ja3s, "JA3", "Source IPs", count
| sort "Min Prob" asc
Once we run this query, we’ll be able to see that there are some anomalies in the data we should probably take a look at. Namely, the TLS sessions that are reaching out to manic.imperial-stout.org and update.lunarstiiiness.com.
To make these results even more useful, we can also include an allow list in our query. This will help to ensure some of the more benign hosts that we would expect to see (such as legitimate software update sessions) aren’t included in our results.
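Here’s a sketch of what that could look like. The excluded server names below are placeholders, not real allow list entries, so substitute the hosts you actually expect to see in your environment:
sourcetype="bro:ssl:json" ja3="*" ja3s="*" src_ip IN (192.168.70.0/24) NOT server_name IN ("update.good-vendor.example", "telemetry.good-vendor.example")
| anomalydetection method=histogram action=annotate pthresh=0.0001 src_ip, ja3, ja3s
| stats values(server_name) AS "Server Names", values(probable_cause) AS "Probable Causes", min(log_event_prob) AS "Min Prob", values(src_ip) AS "Source IPs" count by ja3s
| sort + "Min Prob"
In practice, maintaining the allow list in a lookup and excluding it with a NOT [| inputlookup ...] subsearch scales better than hardcoding server names, since the list can be updated without touching the search itself.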
Now we’re getting somewhere. All of our suspicious sessions have bubbled their way right to the top of our list. Right where suspicious data belongs.
We spent a lot of time not only conducting our research but also writing a beautiful white paper that provides far more detail than this blog post. Not enough? No problem. Ryan Kovar and I are also presenting on this very topic at Splunk’s annual conference, .conf21. You can check out our talk Hunting the Known Unknown: Supply Chain Attacks (SEC1745C) to see us in all of our on-screen glory. Still want more? Did I mention that we’ve written a white paper? You can check out all of the queries we’ve developed, along with a few bits of code we wrote specifically for this research. One of them is a nifty way to generate anomalous TLS sessions that mimic Zeek SSL logs.
Do you want even more? Well then, you’ve come to the right place. You can also play along today, as in right now, with Boss of the SOC (BOTS). John Stoner designed and built a software supply chain scenario for BOTS, and it was integral to our research efforts.
If software supply chain in the DevSecOps realm is more your cup of tea, we’d highly suggest you take a look at Dave Herrald and Chris Riley’s talk from this year’s .conf21, Enabling DevSecOps and Securing the Software Factory with Splunk (SEC1108C). They put together a lot of terrific content on how Splunk can be used in your DevSecOps pipeline to better defend against and detect software supply chain attacks.
Software supply chain attacks are not going away. As our network defenses improve, adversaries must move up the chain to stay a step ahead of our defenses. This cat-and-mouse game is the nature of network defense. It’s why we lose sleep, working to think of new and novel ways to help everyone detect adversaries and stay safe. The research presented here sadly isn’t a silver bullet. It won’t solve the software supply chain problem today, or at all. What it will do, however, is provide everyone with another tool and means of staying ahead of adversaries. With any luck, it will also help start a conversation and inspire others to explore interesting avenues of detection, ensuring we can all sleep better at night.