Hello, everyone! Welcome to the Splunk staff picks blog. Each month, Splunk security experts curate a list of presentations, whitepapers, and customer case studies that we feel are worth a read.
Check out our previous staff security picks, and we hope you enjoy.
Exploiting ML models with pickle file attacks by Boyan Milanov
“A worthy read on how attackers exploit Python's pickle module to deploy malicious machine learning models. Attackers use pickling to alter legitimate pickle files and insert malicious code that runs when the file is loaded.”
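To see why this class of attack works, here is a minimal, self-contained sketch (not the specific technique from the post) of how a crafted pickle can run attacker-controlled code the moment it is deserialized; the class name and the echo command are illustrative stand-ins:

```python
import os
import pickle

# Minimal sketch of why loading untrusted pickle data is dangerous:
# an object can define __reduce__, and pickle will call the returned
# function with the returned arguments during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # A harmless stand-in for attacker-controlled code.
        return (os.system, ("echo payload executed at load time",))

blob = pickle.dumps(MaliciousPayload())   # what an attacker would ship inside a model file
pickle.loads(blob)                        # simply loading the blob runs the command
```

Because model files are routinely shared and loaded as pickles, this is why the post recommends treating any untrusted pickle as executable code rather than inert data.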
Mapping the Mind of a Large Language Model by Adly Templeton et al. for Anthropic
“Anthropic's blog covers, at a high level, how they managed to extract millions of features from one of their large language models (LLMs), Claude 3.0 Sonnet. The purpose of this research is to better understand the inner workings of the model, which in turn can help them make the models safer and potentially give greater operational visibility when they are running. For a more comprehensive read, Anthropic published their full paper on the study here, titled: ‘Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet.’”
Data Science & Exploratory Data Analysis: the Panda versus the Pony! by Alex Teixeira
“Exploratory data analysis is a task we often do when faced with a new dataset. I really enjoy the comparisons here between doing the data analysis outside of Splunk in Python AND with Splunk’s native SPL commands. It showcases how powerful Splunk is to have in your toolbox! Happy hunting!”
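For a flavor of that comparison, here is a minimal pandas sketch of the kind of first-pass exploration the article sets against SPL; the file name and the status column are assumptions for illustration:

```python
import pandas as pd

# Hypothetical web access log; the file and column names are assumptions.
df = pd.read_csv("access_logs.csv")

# Quick profile of every column: counts, uniques, basic statistics.
print(df.describe(include="all"))

# Events per HTTP status code, roughly equivalent to the SPL:
#   index=web | stats count BY status | sort - count
print(df.groupby("status").size().sort_values(ascending=False))
```

The article's point is that much of this first pass can be done either way; SPL keeps the analysis next to the data already in Splunk, while pandas gives you the wider Python ecosystem.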
Fake Google Chrome errors trick you into running malicious PowerShell scripts by Bill Toulas for Bleeping Computer
“We've often relied on indicators such as bad grammar, low-resolution images, and vague messaging as red flags, but it's getting more complicated. Here is one excerpt from the article: 'Although the attack chain requires significant user interaction to be successful, the social engineering is clever enough to present someone with what looks like a real problem and solution simultaneously, which may prompt a user to take action without considering the risk.’”
Teams of AI agents can exploit zero-day vulnerabilities by Pieter Arntz for ThreatDown
“Researchers at the University of Illinois tested a new way of using AI for hacking. In these experiments, instead of using a single LLM to discover zero-day vulnerabilities, the researchers used a hierarchy of AI agents. This method was 550% more effective at identifying zero-days than the single-LLM approach! This highlights AI's potential to quickly identify vulnerabilities so they can be addressed before the software is released. Great stuff!”
Audra Streetman
@audrastreetman / @audrastreetman@infosec.exchange
Malicious activities linked to the Nobelium intrusion set by CERT-FR
“This report from CERT-FR outlines several cyberattacks attributed to Nobelium, a threat group linked to Russia’s foreign intelligence service, SVR. The cyberattacks include phishing lures targeting government and diplomatic entities along with the IT industry, most likely for espionage purposes. The targeting of IT and cybersecurity entities for espionage could strengthen Nobelium’s offensive capabilities and inform future operations, according to the report. This is especially timely in the lead-up to the Paris Olympics.”