The Know Your Customer (KYC) use case will always be at the forefront for any financial services industry (FSI) institution, as regulations in most countries demand its implementation to ensure controls, processes, and procedures are in place to identify bad actors and to protect legitimate customers. In the past, ex-Splunker Hash Basu-Choudhuri wrote about the topic, covering some interesting history and the reasons why any type of time series data, structured or unstructured, can help with this use case. Since Hash's explanation was high-level, this blog takes it a step further and lays out a prescriptive path for using Splunk products for KYC. We will stick to technical ideas that add to solutions which should already be in place, and we will not discuss the nuances of the different regulations surrounding the topic, as that is beyond our scope. Here we go.
The very first thing we need to do for KYC, before the customer even becomes a customer, is verify their identity. Usually this involves collecting various pieces of personally identifiable information (PII) from the customer and then running a series of checks against them. Today, various companies provide extra controls to identify synthetic identities or accounts and to track for possible money laundering, terrorist funding, and whatever else a bad actor may be trying to accomplish.
How can Splunk help? Assuming the synthetic identity checker, whether homegrown or bought off the shelf, logs events somewhere as it verifies identities, Splunk Enterprise and Splunk Cloud Platform can monitor those logs for errors, latency, and other troubleshooting issues with the application. To add another layer of monitoring, Splunk Infrastructure Monitoring (SIM) and Splunk APM can be used to make sure the infrastructure and the transactions behind those identity checks are running without issues.
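As a minimal sketch of that first layer, assuming the verification service writes logs to a hypothetical index named idv with status and response_time_ms fields (all of these names are assumptions, not a reference implementation), an alerting search could look something like this:

index=idv sourcetype=identity_verification earliest=-15m
| stats count(eval(status="error")) as errors avg(response_time_ms) as avg_latency by vendor
| where errors > 10 OR avg_latency > 2000

Saved as an alert on a short schedule, a search along these lines flags a verification vendor that starts failing or slowing down before it quietly blocks legitimate applicants.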
If the application process produces web server logs, regardless of channel, Splunk Enterprise and Splunk Cloud Platform can also be used to check whether the applicant's client IP is within the vicinity of their stated home address (assuming they are not on a foreign VPN) and whether they or other household members have recently tried to open accounts with the same FSI. You may conclude that these are simple rules the synthetic identity tracker will pick up anyway, but the real power of using Splunk here is creating rules on the fly, without writing application code, to dynamically adapt to a changing world. Thresholds can be adjusted and anomalies can be discovered as new indicators of identity theft and synthetic identities come into play.
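As a hedged illustration (the index, sourcetype, lookup, and field names below are all assumptions), the built-in iplocation command can geolocate the client IP while a lookup of the applicant's stated address supplies the comparison, using city and country as a rough proxy for vicinity:

index=web sourcetype=access_combined uri_path="/apply*"
| iplocation clientip
| lookup customer_applications applicant_id OUTPUT home_city home_country
| where Country!=home_country OR City!=home_city

A similar search grouping recent applications by address or device fingerprint could surface several household members applying in quick succession.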
Verification of the identity also comes with a regulatory term called due diligence, which goes beyond just knowing that the customer is who they claim to be. It involves background checks that establish the value of the account and the risk level of the customer. For instance, someone who owns a large business or is an elected government leader carries a different risk level than a small-value account held by an individual not in the public eye. Due diligence involves various steps, which means its workflow logs can again be monitored by Splunk Enterprise and Splunk Cloud Platform for analytics and troubleshooting. As the data is uncovered, it will most likely reside in a local database as the system of record for customer information. That data can be exposed to Splunk via lookup capabilities to enrich customer information in further investigations.
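For example, if the due diligence results are exported to a lookup (here a hypothetical CSV or KV store lookup called customer_due_diligence, keyed on customer_id with risk_tier, occupation, and pep_flag fields), any investigation search can be enriched along these lines:

index=transactions sourcetype=banking
| lookup customer_due_diligence customer_id OUTPUT risk_tier occupation pep_flag
| where risk_tier="high"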
This is the step of KYC where Splunk Enterprise and Splunk Cloud Platform can shine a direct light on the subject. Once a customer is acquired, the compliance requirements do not stop: the FSI must continuously monitor activity to make sure there is nothing nefarious in the customer's behavior. This is the point where we can get into greater detail on what can happen next.
One of the greatest apps on Splunkbase is Splunk Security Essentials. The introductory release of the app focused heavily on first-time-seen activity and outlier detection. First time seen could mean the first time a user tried to open an account, the first time they logged into an account, or even the first time they performed a transaction against the account. Each of these is a noteworthy event when taken in context. For instance, if someone is applying to create a new account, it is worth checking whether they have applied before and when that first application was; that context matters if they were rejected for KYC regulation reasons the first time. Another example would be the first withdrawal against an account that sat dormant after opening, where almost all the money is withdrawn 18 months later. The first time seen for a transaction is 18 months after opening, and it appears the customer decided to defund the account. Could this be an account takeover, or a holding position to further launder money? In either case, first time seen is an important part of KYC. Let's get into how the Splunk Search Processing Language (SPL) can be used to implement it.
Here's a rather simple use case with some SPL for the first time a customer is seen after they've opened an account. We treat the last_touched field as the time they opened the account and then find which customers took at least 6 months to perform any action at the bank. This is not necessarily bad behavior, but it adds to the customer's risk score.
index=transactions sourcetype=account_opening
| eval prev_epoch=strptime(last_touched, "%m/%d/%Y %H:%M:%S")
| sort - last_touched
| join customer [ search index=transactions sourcetype=banking ]
| where epoch>relative_time(prev_epoch, "+6mon")
| fields - prev_epoch, balance
| rename accountID as current_accountID action as current_action account_type as current_account_type
| eval current_balance=tostring(round(current_balance, 2),"commas"), other_balance=tostring(round(other_balance, 2),"commas")
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(epoch) AS current_time
| fields - epoch
This may look a little involved, so let's break it down. The first two lines gather all account opening transactions and convert the last_touched field, a human readable timestamp, into epoch time, which is the number of seconds since January 1, 1970; it is easier to do timestamp math with integers than with human readable text. Next, we join that data with current customer banking transactions. The where clause does the work for our outlier: it keeps only events whose current epoch time is more than 6 months after the account opening epoch time, which meets our criteria of "first time seen" for a customer who did not touch their account for at least 6 months after opening. The rest of the SPL is just formatting to make the output table prettier, turning the epoch time back into a human readable timestamp and formatting the amounts involved, so we will ignore it here. A sample output for this SPL is shown below.
There are many varieties of "first time seen" for monitoring a customer, but before we generalize it, let's move on to outliers, which are one of the hallmarks of continuous monitoring. We'll start with a simple approach that does not need machine learning and serves its purpose with plain statistics.
To begin with, SPL has a nice command called eventstats that computes statistics over all events in the search as a whole and adds them as fields to every result without filtering or replacing the original events. This is useful for finding the average and standard deviation of all events in a filtered dataset. Let's use it in a simple example.
index=payments
| stats avg(amount) as avg_account_amount by accountID
| eventstats avg(avg_account_amount) as avg_amount stdev(avg_account_amount) as stdev_amount
| where avg_account_amount>(3*stdev_amount + avg_amount)
In this search, we first calculate the average payment per account ID and then use eventstats to compute the overall average and standard deviation of those per-account averages across the selected time range (chosen by the user or fixed in a saved search, not shown here). The outlier is any account whose average payment is greater than the overall average plus 3 times the standard deviation. This is a rather simple way to find outliers in customer behavior; you can also use an average with a static multiplier, moving averages with standard deviation (sketched below), and a host of other statistical techniques. Fortunately, Splunk partner Discovered Intelligence provides a tutorial on these techniques in their Quick Guide to Outlier Detection in Splunk.
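As one hedged example of the moving-average variant (the window size, multiplier, and field names are arbitrary choices for illustration), streamstats can keep a trailing window per account and flag amounts that drift well above it:

index=payments
| sort 0 accountID _time
| streamstats window=30 global=f current=f avg(amount) as moving_avg stdev(amount) as moving_stdev by accountID
| where amount > moving_avg + (3*moving_stdev)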
For the first time seen and simple outlier detection examples above, it doesn't make sense to hard code numbers and field names into a search you'll run many times. That's where Splunk macros come in handy. I have created macros for the first time seen and outlier detection examples on Splunkbase in a bundle called TA For SplunkStart Basic Security Essentials, which you can download for free and extract from its macros.conf file.
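To give a feel for the shape of such a macro (this is an illustrative sketch, not the actual contents of that TA), a parameterized outlier macro in macros.conf could accept the value field, the grouping field, and the multiplier as arguments:

# macros.conf (illustrative sketch only)
[outlier_by_avg(3)]
args = value_field, group_field, multiplier
definition = stats avg($value_field$) as avg_value by $group_field$ \
| eventstats avg(avg_value) as overall_avg stdev(avg_value) as overall_stdev \
| where avg_value > (overall_avg + $multiplier$*overall_stdev)

It would then be invoked inline as index=payments | `outlier_by_avg(amount, accountID, 3)`, so the same logic can be reused across data sources and thresholds without copying SPL.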
The above example uses basic statistics. For more advanced ways to detect outliers, you can use the free Splunk Machine Learning Toolkit (MLTK) and its many ML approaches to outlier detection, including the Density Function, Local Outlier Factor, and One-Class SVM; this list is not exhaustive. The Splunk MLTK also provides a Smart Outlier Detection Assistant to make this process easier for the non data scientist.
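As a hedged sketch of what that can look like with MLTK installed (the index, field, threshold, and model name are assumptions), the DensityFunction algorithm can learn a distribution of payment amounts per group and flag low-probability values:

index=payments earliest=-90d
| fit DensityFunction amount by "accountID" threshold=0.005 into payment_density_model
| where 'IsOutlier(amount)'=1

A later scheduled search can then run | apply payment_density_model against new transactions to score them without refitting. Note that fitting one distribution per account only scales to a limited number of groups, so a coarser split field such as customer segment may be more practical than accountID.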
The astute reader will note that finding outliers with eventstats over the total population is not a good idea when that population has intrinsically different behaviors in how it performs routine transactions. For instance, one customer may routinely transfer 500 dollars per month via wire transfer, while another routinely transfers 50,000 dollars per month. Neither behavior is out of the ordinary with respect to what each customer normally does, but if you group the two customers together to find an average amount, the result is meaningless and heavily skewed towards the larger transfers. To get around this, it is a good idea to regularly collect transactional data per customer on a daily or weekly basis to build a baseline. Here's an example of what you may get from a scheduled saved search that uses the stats and collect commands to store the average amount transferred in a summary index (a sketch of the collection search itself follows the table).
| Timestamp | AccountID | amount |
|---|---|---|
| 11/2/2022 5:06:30 | 123 | 50 |
| 11/2/2022 7:16:30 | 456 | 6345 |
| 11/7/2022 4:36:30 | 123 | 53 |
| 11/7/2022 1:16:30 | 456 | 4353 |
| 11/14/2022 9:46:30 | 123 | 51 |
| 11/14/2022 15:16:30 | 456 | 5345 |
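A minimal sketch of the scheduled collection search behind a table like this, assuming the raw events live in index=payments and the summary index is named payments_summary, could be:

index=payments earliest=-7d@d latest=@d
| stats avg(amount) as amount by accountID
| collect index=payments_summary

Scheduled weekly, this keeps appending one baseline row per account to the summary index.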
In this small example, you can now use stats to find the average amount transferred for any account ID, giving a baseline of previous transactions to compare against the most recent amount and see if there is an outlier. This lets you perform continuous monitoring and compare recent transactions to the expected behavior of each customer. Every time the customer performs a transaction, a saved search can add the new value to the summary index and also compare the current amount to that customer's historical average to find an outlier. Here's a sample search for this situation: it appends the average and standard deviation from the summary index for an account ID to the current transactions and compares the current payment (the average of payments in the current time range) against the historical average plus a multiplier of the standard deviation.
index=payments accountID="456"
| append [ search index=payments_summary accountID="456" earliest=-1y
    | stats avg(amount) as avg_payment stdev(amount) as stdev_payment ]
| stats avg(amount) as current_payment values(avg_payment) as avg_payment values(stdev_payment) as stdev_payment values(accountID) as accountID
| where current_payment > avg_payment + (3*stdev_payment)
For efficiency, if you have millions of accounts, it may make more sense to store the per-day or per-week summary of transactions for each account ID in the Splunk App Key Value Store (KV Store). The KV Store is a general purpose store that ships with Splunk, supports create, modify, and delete operations, and can be used to enrich data in other searches.
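Sketching that out (assuming a KV store backed lookup definition named account_baselines with accountID, avg_payment, and stdev_payment fields has been configured), a scheduled search could maintain the baselines:

index=payments earliest=-30d@d latest=@d
| stats avg(amount) as avg_payment stdev(amount) as stdev_payment by accountID
| outputlookup account_baselines

and any detection search could read them back with a lookup instead of an append:

index=payments
| lookup account_baselines accountID OUTPUT avg_payment stdev_payment
| where amount > avg_payment + (3*stdev_payment)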
There are other ways to create baselines within Splunk. Splunker Josh Cowling wrote a blog called A Splunk Approach to Baselines, in which he describes a proportional approach: compare the ratio of each entity's count to the total count of all transactions, and then flag unusual ratios as outliers. This approach scales better when counts of appearances are involved, as opposed to averages and standard deviations. Either way, when you are continuously monitoring your customers for outliers that may become a security or fraud issue, remember that each customer is different and each needs their own baseline.
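As a hedged sketch of the proportional idea (not necessarily the exact method from that blog), you can compute each account's share of overall activity and flag shares that sit far from the norm:

index=payments
| stats count as account_count by accountID
| eventstats sum(account_count) as total_count
| eval ratio=account_count/total_count
| eventstats avg(ratio) as avg_ratio stdev(ratio) as stdev_ratio
| where ratio > avg_ratio + (3*stdev_ratio)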
Just because a customer does something unusual does not mean there is an issue. Each outlier should have a risk score associated with it that is further evaluated against all risk scores for the customer. Accumulating risk scores by account ID reduces false positives and gives more confidence that behavior really is nefarious. Remember the due diligence data alluded to above? That information can help calculate risk scores for searches that find outliers, for example by applying numerical weights to the initial risk score. One lookup can retrieve the due diligence data, which in turn feeds another lookup that returns a numerical weight to multiply against the initial risk score. As the search runs, it saves its results to a risk index and can kick off alerts if necessary.
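Pulling these pieces together, a hedged sketch of such a scoring search, reusing the hypothetical account_baselines lookup from above and assuming customer_due_diligence and risk_tier_weights lookups plus a risk index named risk (all of which are assumptions), might look like this:

index=payments
| stats avg(amount) as current_payment values(customer_id) as customer_id by accountID
| lookup account_baselines accountID OUTPUT avg_payment stdev_payment
| where current_payment > avg_payment + (3*stdev_payment)
| eval base_risk_score=25
| lookup customer_due_diligence customer_id OUTPUT risk_tier
| lookup risk_tier_weights risk_tier OUTPUT weight
| eval risk_score=base_risk_score*coalesce(weight, 1)
| collect index=risk

Scheduled regularly, the accumulated entries in the risk index can then be summed per account to decide when an alert or a SOAR playbook should fire.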
In the past, I have written about different ways to calculate risk scores for outliers in Splunk in this blog on detecting financial crime, which details different approaches to scoring behavior via saved searches. With this in mind, by continuously monitoring your customer for anomalies and outliers and adding risk scores to the results, you can automate this portion of KYC, react accordingly, and even utilize SOAR playbooks.
We also have an easy button. If you are, or are planning to be, a customer of Splunk Enterprise Security, Splunk has a free supported app called Splunk App for Fraud Analytics that can be used to discover account takeover and account abuse, two of the hallmarks of monitoring and protecting your customers. The app uses Splunk ES risk-based alerting (RBA) to accumulate risk scores in the spirit of this discussion. It does not solve every KYC concern about continuous monitoring, but it does provide out-of-the-box content that moves you towards some of the goals.
Nowhere in the KYC regulations does it say a bank has to monitor its customers in order to improve their experience or help with their investments. Since this blog was written to help with compliance, customer journey use cases will not be covered here, but it is interesting to note that some of the same data collected for continuous monitoring can be reused for customer journey use cases. For instance, if a deposit is one hundred times the normal average, not only is this an outlier, it may also be a legitimate moment to engage the customer about better returns on their investment, assuming the outlier is benign. Splunker Charles Adriaenssens provides further details on customer journeys in his blog.
This blog entry discussed the main tenets of KYC in terms of how Splunk products can be useful for compliance. By recording all customer interactions as time series events, baselining customer behavior, and then using statistical or machine learning approaches to discover anomalies and outliers, continuous monitoring can be enhanced in an efficient, scalable manner that can be updated as needed.