Most people have a good grasp on processes, but far fewer understand pipes. If that’s you, you’re not alone; I put my hand up here, too. There are many good primers out there. (This one from VerSprite is quite good, in my opinion, and includes links to some non-security pipe primers.)
So, let’s talk about pipes, particularly the threats inherent in them. In this article, we’ll cover detecting, hunting and investigating named pipes: how to capture pipe data from endpoints and the wire, how to search it in Splunk, and how to use it to find Cobalt Strike activity.
(This article is part of our Threat Hunting with Splunk series; we’ve updated it recently to maximize your value.)
Pipes are a form of inter-process communication (IPC), which can be used for abuse just like processes can.
Lucky for us, we can ingest a lot of pipe-related data into Splunk, both from endpoints as well as from network wire data.
In this article, I will only look at pipes from a Windows perspective. Before we get into the actual tracking activities, I’d like to cover how you can capture pipe data in your environment.
On top of the Windows Universal Forwarder, I’m using two Splunk add-ons to capture pipe events on Windows endpoints:
To capture pipe activity within wire data, I’d recommend using either Splunk Stream or Zeek / Corelight.
For my examples here I am using Zeek data, as that also provides protocol decodes for DCE/RPC events. I have a Zeek sensor capturing data (for this exercise I am mainly concerned with SMB and DCE/RPC traffic) across my test environment via a SPAN port. I am using a Splunk add-on to bring this Zeek data in under the bro:* sourcetypes you’ll see in the searches below.
For pipes, there are two event types to watch for on an endpoint: pipe creation (the server side) and pipe connection (the client side).
Don’t get confused with the client and server terminology here. Quite often, a client and a server in a pipe connection will reside on the same host. The client is just the process initiating the connection to the server process, which created the pipe.
To capture both the creation of pipes and the actual connections to them from endpoint data, we want to capture two different Sysmon event codes (I recommend using either the SwiftOnSecurity or Olaf Hartong Sysmon configs). These are Event Code 17 (Pipe Created) and Event Code 18 (Pipe Connected).
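To make the client/server distinction concrete, here’s a minimal sketch that labels each pipe event by its role. The index and sourcetype match my lab (and the searches later in this post), so adjust them for your own environment:

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| eval pipe_role=if(tonumber(EventCode)==17, "server (created the pipe)", "client (connected to the pipe)")
| stats values(process_name) as processes by PipeName, pipe_role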
There are two types of pipes — named and anonymous — with one big difference that interests us here: named pipes can be accessed over the network (and have a name we can hunt on), while anonymous pipes are limited to local processes.
Named pipe network traffic uses SMB or RPC protocols. Wire data is right up there with endpoint data in my list of favorite data sources. If you aren’t already capturing wire data, I’d ask your manager right now to release some funds to allow you to do so (the AI/ML-enabled next-gen firewall upgrade can wait a bit longer).
We can look for pipe creation and connection events in our environment using a simple search for EventCode 17 and EventCode 18 within Sysmon data.
index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| stats count by PipeName host EventCode EventDescription process_name
The search uses the stats command across this data to return a count of events grouped by PipeName, host, EventCode, EventDescription and process_name.
Searching for the process ID or process name that created the pipes can be powerful, but more on that later.
In my results, there are several crashpad-related pipe connections from chrome.exe, some other GoogleCrashServices pipe connections from GoogleUpdate.exe, and some TSVCPIPE creation and connection events from svchost.exe. On its own, that information isn’t useful for us right now. In my lab environment, I have far fewer events returned than you would see in a production environment.
The point: Just looking for pipe creation and connection events will leave you with lots of data to sift through when you’re hunting.
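One way to cut through that volume is to look at rarity: pipe names that appear only a handful of times, or on only a couple of hosts, tend to be more interesting than the Chrome and Google Update noise. Here’s a minimal sketch; the thresholds are purely illustrative and will need tuning for your environment:

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| stats count dc(host) as host_count values(process_name) as processes by PipeName
| where count < 5 OR host_count < 3
| sort count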
If we want to see named pipes in use over RPC from wire data in our environment, we can use this search that makes use of Zeek data.
index=main sourcetype=bro:dce_rpc:json
| stats count by named_pipe src_ip dest_ip dest_port endpoint operation
Again, this search uses the stats command to provide a count of named pipes, but now we’re looking at what’s in use on these named pipes: the source and destination IPs, the destination port, and the DCE/RPC endpoint and operation.
The named pipes will most likely look different compared to the ones captured on the hosts themselves. Looking at the endpoints and operations in use here, we can pretty quickly deduce what’s going on:
We can see several connections to a named pipe of 135 on port 135 (this is TCP here as we are searching on Zeek DCE/RPC events). These are connecting to the epmapper endpoint, and running an operation of ept_map. (For some fun bedtime reading, here’s a link to the Endpoint Mapper Interface Definition.)
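If that endpoint mapper chatter is drowning out the rest of your RPC traffic, you can carve it out while you hunt. This is purely a noise-reduction sketch; make sure you understand what you’re excluding before you filter anything out for good:

index=main sourcetype=bro:dce_rpc:json NOT (endpoint=epmapper operation=ept_map)
| stats count by named_pipe src_ip dest_ip dest_port endpoint operation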
You’re probably saying something like, ‘Great! That’s super useful too!’ in a sarcastic tone. But I’m just trying to set the scene on how to search for named pipes in your environment. If you don’t stick around, you won’t get to see the exciting stuff later on.
The Splunk Threat Research Team (STRT) created several great detections for surfacing malicious pipe activity. For this exercise, I’m using their Cobalt Strike Named Pipes detection to find Cobalt Strike using named pipes in my test environment.
Cobalt Strike uses predefined pipe names. If the bad guys stick to those names, it’s quite easy to detect. Cobalt Strike also employs the concept of malleable profiles, which allows you to modify these names to try and avoid detection. (If you’re not familiar with Cobalt Strike, here’s a primer.)
Like a lot of lazy threat actors out there, I’m going to stick with the defaults because it’s easy.
I’m running a Windows 10 client, a Windows 2016 server, and a Windows 2016 Domain Controller in this test environment. I initially compromised the Windows 10 client, then moved laterally via PSExec to both 2016 servers, and then used named pipes over SMB for host-to-host communication. All C2 traffic is proxied back through my Windows 10 client.
I’m running the following detection from the link above:
`sysmon` (EventID=17 OR EventID=18) PipeName IN (\\msagent_*, \\wkssvc*, \\DserNamePipe*, \\srvsvc_*, \\mojo.*, \\postex_*, \\status_*, \\MSSE-*, \\spoolss_*, \\win_svc*, \\ntsvcs*, \\winsock*, \\UIA_PIPE*)
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, process_name, process_id, process_path, PipeName
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `cobalt_strike_named_pipes_filter`
There are several macros in this search (`sysmon`, `security_content_ctime` and `cobalt_strike_named_pipes_filter`); they ship alongside the detections in Splunk’s security content apps and GitHub repo.
In essence, this search looks for Sysmon event types 17 and 18 and then it looks for specific pipe names that typically show up with Cobalt Strike. Then, stats is run to calculate the number of occurrences across the various computers.
Ta-da — we’ve got Cobalt Strike! From the bottom, we see win-host-370.surge.local with 3 matched pipe names from the search, being run by 3 different processes.
Looking at the win-dc-483.surge.local host, we detect a few more pipes.
At the top, we can see a couple of similar pipes used on win-client-425.surge.local. The postex_e472 pipe was first used for reconnaissance (I ran Cobalt Strike’s net computers command to find the other hosts on the network), and used again for setting up the SMB beacons on the other hosts. Cobalt Strike appends a random four-character suffix to each postex_ pipe name, much like the random number it appends to the MSSE- ones.
Finally, we have the first pipe run on win-client-425.surge.local, which is MSSE-1630-server, being run by the lazy_beacon.exe process. This was the initial Cobalt Strike compromise in my environment.
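As an aside, those MSSE- and postex_ names follow a predictable shape, so you could also hunt on the pattern itself rather than just the prefix. The regex below is a rough sketch based on the samples in my lab (a four-character hex suffix for postex_ and a numeric ID for MSSE-); the exact formats can vary between Cobalt Strike versions, so treat it as a starting point:

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| regex PipeName="(?i)(postex_[0-9a-f]{4}|MSSE-\d+-server)$"
| stats count values(process_name) as processes by host, PipeName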
Back to malleable profiles. How do you do this? Well, it’s a rather trivial process, and this post by ZeroSec gives a good run-through.
On top of changing profiles, you also want to modify the default Cobalt Strike Beacon binary to avoid using the MSSE pipe names. This is done through Cobalt Strike’s Artifact Kit. (Read up on these if you want further detail of how this works.)
I’ve now become quite the Fancy Lad, having changed all of my pipe names to hopefully avoid the detection I ran earlier. With new pipe names (we’ll find these later), I ran through the same compromise and lateral movement steps described above.
And here’s the result from the same search I ran earlier:
Exactly 0 events.
What can we do to find this behavior when someone was not super lazy with their use of Cobalt Strike? Well, read on my friend, read on.
If you run other detections to pick up things like Mimikatz, you should have one or two threads to pull if something bad happens.
I ran the following search in my environment to detect Cobalt Strike Mimikatz activity. It actually picks up other malicious LSASS memory dumping activities as well, and comes from the Splunk Security Content repo:
`sysmon` EventCode=10 TargetImage=*lsass.exe (CallTrace=*dbgcore.dll* OR CallTrace=*dbghelp.dll* OR CallTrace=*ntdll.dll*)
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, TargetImage, TargetProcessId, SourceImage, SourceProcessId
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `access_lsass_memory_for_dump_creation_filter`
This search also uses Sysmon events, but now we are looking for EventCode 10 (Process Access) events. Black Hills Information Security provides a good article on the various Sysmon event codes here — just scroll down to Event Code 10 for more information.
After filtering for those events, we then look only for lsass.exe in the TargetImage field and for a few DLLs in the CallTrace field.
The search returned a single result on win-dc-483.surge.local showing dllhost.exe accessing lsass.exe. The dllhost.exe SourceProcessId is 1076 and was run at 11:47:10 on October 28, 2021. This gives us some good information to begin our hunt.
In Process Hunting with PSTree, I was only concerned about parent/child processes, and used the great PSTree app to help me sift through those quickly and accurately. Now, I want to see if any named pipes were used, which can help me locate other hosts in my environment where there is similar behavior.
Here is my first search:
index=main host="WIN-DC-483" source="xmlwineventlog:microsoft-windows-sysmon/operational" ProcessId=1076 EventCode!=7
| reverse
| table _time EventCode EventDescription Description Image PipeName process_name parent_process_name parent_process_id
I’m limiting my search to the domain controller to look for ProcessId=1076 to see what else it may have done around that same time (don’t do a crazy all-time search here). I’m leaving out EventCode 7 (Image Loading), which can get quite noisy and isn’t part of my initial hypothesis of named pipe hunting.
I want to see my events from start to finish, hence using reverse, and then I table out the fields I’m interested in, which gives me this:
First, we have a process creation event (Event Code 1) showing rundll32.exe (process ID of 3552) creating a dllhost.exe process. We then see the dllhost.exe process performing a pipe creation event (Event Code 17). The pipe created was named \Surgesock2\mrpipespostex-28b-0. No wonder our earlier search didn’t find this, as I was quite sneaky, huh?
Ok, so we see a pipe creation event at nearly the same time as our Mimikatz detection. Interesting. Let’s perform a wider search on this host to look for all EventCode 17 and 18 events to see the other pipes that have been created and connected to.
This can be a bit tricky. You don’t want to search over too long a period, as you could get lots of pipes used by applications like Chrome, but you don’t want to miss any low and slow techniques either. Creating an “allow list” of known “good” pipes and their associated processes would probably benefit you here (I’m feeling lazy, so I’m keeping mine open for now, but I’ll sketch the idea right after this search). Here’s my search:
index=main host="WIN-DC-483" source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| reverse
| table _time EventCode EventDescription process_path PipeName process_name process_id
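Before we look at the results, here’s what that allow-list idea could look like. This is a minimal sketch assuming a hypothetical known_good_pipes.csv lookup, with pipe_name and process_name columns, that you would build and maintain yourself; the lookup name and columns are mine, not something that ships with Splunk:

index=main host="WIN-DC-483" source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode IN (17,18)
| lookup known_good_pipes.csv pipe_name AS PipeName, process_name OUTPUT pipe_name AS known_pipe
| where isnull(known_pipe)
| table _time EventCode EventDescription process_path PipeName process_name process_id

Note that a plain CSV lookup is an exact match; if you want wildcard pipe names in your allow list, you’ll need a lookup definition with WILDCARD match_type configured.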
Several other pipes were created and connected to during my defined timeframe on my domain controller. Among the results, I see entries referencing UNC paths (things like ADMIN$) and a process named 68e5510.exe.
What uses UNC paths and things like ADMIN$ typically? Windows file shares? And what protocol is used when connecting to Windows file shares? Maybe SMB? Lots of questions, I know, but I’m in a Jeopardy frame of mind right now, so make sure you phrase your answer in the form of a question.
Now, let’s go crazy and change our data source, as I think I smell some network activity going on. I called out wire data earlier on, and I have Zeek running in this environment.
Let’s look for that 68e5510.exe process name within the Zeek SMB Files sourcetype with this search:
index=main sourcetype=bro:smb_files:json name=68e5510.exe
| table src_ip dest_ip dest_port name path action
Great, we see some matching SMB network traffic between 10.0.1.17 and 10.0.1.14 for that file. I know that 10.0.1.17 is my Windows 10 client and 10.0.1.14 is my domain controller.
I also see multiple FILE_OPEN and FILE_WRITE actions taking place for that file. This now gives me another host to investigate, so I’m now going to move over to my Windows 10 client to start hunting there.
We know that the Windows 10 client has copied that executable to the domain controller, and that appears to be the start of various pipe activities on the domain controller as well. How could the Windows 10 client possibly start up that process after copying the executable across?
PSExec-style lateral movement typically works by copying a binary over SMB and then remotely creating and starting a service via RPC. (This blog by F-Secure provides a really good explanation of how this is done.)
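If you want to eyeball that service-creation activity on the wire first, here’s a quick sketch against the same Zeek DCE/RPC data. I’m assuming svcctl is the name your Zeek deployment assigns to the Service Control Manager RPC interface (adjust if yours labels it differently); we’ll build a fuller version of this search at the end of the post:

index=main sourcetype=bro:dce_rpc:json endpoint=svcctl
| table _time src_ip dest_ip named_pipe operation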
I’m going to now move back to Sysmon data and search for any network connections (Event Code 3) via RPC’s standard port of 135 in that same time frame with this search:
index=main host="WIN-CLIENT-425" source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode=3 dest_ip=10.0.1.14 dest_port=135
| reverse
| table _time src_ip dest_ip dest_port Image ProcessId
My search returned a few results for that timeframe; the third one was a process named mr_pipes_surge.exe making connections to port 135.
That third process doesn’t sound normal to me. So, I’ll pull the thread a bit further here — let’s do a wide search on that process name:
index=main host="WIN-CLIENT-425" source="xmlwineventlog:microsoft-windows-sysmon/operational" process_name=mr_pipes_surge.exe
| reverse
| table _time User ProcessId EventCode EventDescription dest_ip dest_port PipeName query answer
Although I ran a rather open search, I want to narrow down the results by limiting what I put in my table. In order, I want to see the time, user, process ID, event code and description, any destination IPs and ports, any pipe names, and any DNS queries and answers.
Here are my results:
Remember earlier in this blog when we tried to run the Cobalt Strike named pipe detection search that returned 0 results? Could we use some of the pipes discovered after the Mimikatz detection to modify that search to see if we can find any other impacted hosts?
Here are the various pipes we’ve seen in our hunt; they all contain the string mrpipes and seem to follow Cobalt Strike’s standard of appending four random characters or digits to a base name (lazy me for not modifying that behavior).
It looks like we could just add a \\*mrpipes* to the PipeName clause in our detection for a final search of:
`sysmon` (EventID=17 OR EventID=18) PipeName IN (\\msagent_*, \\wkssvc*, \\DserNamePipe*, \\srvsvc_*, \\mojo.*, \\postex_*, \\status_*, \\MSSE-*, \\spoolss_*, \\win_svc*, \\ntsvcs*, \\winsock*, \\UIA_PIPE*, \\*mrpipes*)
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, process_name, process_id, process_path, PipeName
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `cobalt_strike_named_pipes_filter`
And just like that, we pick up a bunch more pipes in our environment, which match up to the various Cobalt Strike actions that I took earlier across all 3 hosts!
So, if this were an old children’s fable from long ago, what is the lesson? Pick an easier topic, check the use-by date before eating that cheese?
Both are great ideas, but I think the real lesson is not to rely solely on hard-coded values to keep you safe. They do have a place; based on various reports, plenty of people who use Cobalt Strike are lazy and stick with the default values.
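That said, if you want to move beyond hard-coded names, one option is to hunt on behavior instead, for example pipe creation by processes running out of user-writable paths. This is a rough sketch; the path list is purely illustrative and will absolutely catch legitimate software too, so treat the results as hunting leads rather than detections:

index=main source="xmlwineventlog:microsoft-windows-sysmon/operational" EventCode=17 (Image="*\\Users\\*" OR Image="*\\Temp\\*" OR Image="*\\ProgramData\\*")
| stats count values(PipeName) as pipes by host, Image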
So, I guess do whatever you want in the end, but I’ll give you a final parting gift. If you’re using Zeek or Corelight, you can run this search, which picks up the PSExec lateral movement behavior within Cobalt Strike (I honestly built this before I found the F-Secure blog referenced above, really!):
index=main sourcetype="bro:dce_rpc:json"
| transaction uid startswith="OpenSCManagerA" endswith="CloseServiceHandle" maxspan=5s
| table uid named_pipe endpoint operation
Hopefully, this blog helped you understand a bit more about named pipes. I know I learned more doing the research.
In the meantime, happy hunting!