AI models are complex. They model a system's behavior mathematically, mapping inputs to outputs. And because real-world data can be highly complex, the statistical tools that model it, such as deep learning neural networks, are equally complex.
Today, the primary challenge for practitioners adopting AI technologies (particularly in heavily regulated industries) is understanding how AI models work. They need AI tools and technologies that are observable and transparent.
The challenge with this visibility and observability, though, is that many AI models are black boxes. In a "black box" system, whether AI or not, the inner workings are not visible and cannot be inferred simply by observing the relationship between the system's inputs and outputs.
In this context, two important characteristics that can influence AI-based decisions are explainability and interpretability. Put simply, explainability concerns why a model produced a given output, while interpretability concerns how the model's inner workings produced it.
The two terms are closely related, and both academia and the tech industry tend to use them interchangeably. In this article, we align with the standard convention, treating interpretability as a subset of explainability, with some overlap between the two definitions.
With that in mind, let's explore the differences between explainable and interpretable artificial intelligence.
Explainability is the ability to describe a system's behavior in terms humans can understand. It helps us understand what caused an AI system to reach a particular prediction.
Another way to think about it, according to Philipp Drieger, Principal Machine Learning Architect here at Splunk:
Let’s say you have built a machine learning model that performs well on your training and test data: How do you find out which samples and features offer the highest impact on your model’s output?
In this sense, a machine learning model with "high explainability" allows users to easily understand the cause-and-effect mapping between its inputs and outputs. Explainability is especially important for understanding complex, black-box machine learning models.
We may never fully understand how model weights correspond to a high-dimensional dataset that spans a vast feature space. Explainability, however, can help us discover meaningful connections between data attributes and the outputs of AI models.
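As a concrete illustration, here is a minimal sketch of one common post-hoc explainability technique, permutation importance, using Python with scikit-learn and a synthetic dataset (the model and data here are illustrative assumptions, not a specific Splunk workflow). It ranks which features have the highest impact on a black-box model's output without looking inside the model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real business data.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A "black box" model: accurate, but its internals are hard to read directly.
model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure how much the score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)

# Rank features by their impact on the model's output.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")
```

Because permutation importance only needs the model's predictions, it works for any black-box model, which is exactly what makes it an explainability technique rather than an interpretability one.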
Interpretability refers to visibility into, and understanding of, the inner logic and mechanics of an AI model. An AI model with high interpretability allows us to understand how its components (for example, the nodes and weights of a deep neural network) map a system's inputs to its outputs.
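By contrast, here is a minimal sketch of an inherently interpretable model, again assuming Python with scikit-learn and synthetic data: a shallow decision tree whose internal decision rules can be printed and read directly, rather than explained after the fact.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for real business data.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2, random_state=0)

# A shallow tree keeps the inner logic small enough for a human to follow.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model's internal mechanics, printed as readable if/else rules.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```

Every prediction can be traced through the printed rules, which is the kind of inner visibility interpretability refers to.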
Interpretability may not be a hard business requirement, but it is a strong technical consideration. AI practitioners may need to optimize model performance or build a model that generalizes sufficiently without bias or data security issues. To address these concerns, they may evaluate how quickly and how well the model converges when trained on different data distributions.
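As one illustrative (and simplified) way to run that kind of evaluation, the sketch below assumes scikit-learn 1.1 or later and two synthetic distributions, and tracks how quickly the same model's training loss converges on a well-separated distribution versus a noisier one.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

def loss_curve(X, y, epochs=10):
    """Train incrementally and record the training loss after each epoch."""
    clf = SGDClassifier(loss="log_loss", random_state=0)
    losses = []
    for _ in range(epochs):
        clf.partial_fit(X, y, classes=np.unique(y))
        losses.append(log_loss(y, clf.predict_proba(X)))
    return losses

# Two hypothetical data distributions: well-separated vs. noisy and overlapping.
X_easy, y_easy = make_classification(n_samples=2000, n_features=20, class_sep=2.0, random_state=0)
X_hard, y_hard = make_classification(n_samples=2000, n_features=20, class_sep=0.5, flip_y=0.1, random_state=0)

print("easy:", [round(v, 3) for v in loss_curve(X_easy, y_easy)])
print("hard:", [round(v, 3) for v in loss_curve(X_hard, y_hard)])
```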
(Related reading: AI frameworks, AI ethics & AI governance.)
Consider an example of an AI model used to improve employee health and productivity. The health of your employees may depend on factors such as age, demographics, department and business function, and past health conditions.
Now let’s say that the workplace human resources (HR) team wants to improve employee health — and they want to use an AI model to assist.
Because this is a workplace function, HR’s decisions can only apply to the office space, work culture, and business functions. Therefore, the HR team needs an AI model that is sensitive to parameters that are directly in their control, while giving a lower weight to external factors outside of their control.
Now, let’s apply the explainability and interpretability lenses:
For example, an explainable AI model would consistently show a positive mapping between decisions that improve work culture and better employee health. But what does that tell us about the inner workings of the model across all of its decisions? In our example, interpreting the model would mean examining how much weight it gives to workplace factors within HR's control versus external factors outside of it.
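To make this concrete, here is a minimal sketch with entirely hypothetical feature names and synthetic data (assuming Python with scikit-learn) of how the HR team might check whether the model leans on factors within their control or on external ones, by grouping a linear model's learned weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical features: the first three are within HR's control, the rest are not.
features = ["work_culture_score", "office_ergonomics", "workload_balance",
            "age", "past_health_conditions", "commute_time"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
# Synthetic target: healthy (1) vs. at risk (0), driven by a mix of all factors.
y = ((X @ np.array([0.8, 0.5, 0.6, -0.4, -0.9, -0.3]) + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

# Group the learned weights: how much does the model rely on factors HR can act on?
weights = dict(zip(features, np.abs(model.coef_[0])))
controllable = sum(weights[f] for f in features[:3])
external = sum(weights[f] for f in features[3:])
print(f"controllable factors: {controllable:.2f}, external factors: {external:.2f}")
```

If the external group dominates, the model may be accurate but of little practical use to HR, since the drivers of its predictions fall outside their remit.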
From a business perspective, interpretability — how the prediction or decision was made — may not be a hard requirement. In fact, AI models that are complex and perform well on challenging datasets that encompass a wide range of distributions are typically black box machine learning models.
Though a black box model offers limited or poor interpretability, it can often generalize and perform significantly better than simpler, inherently interpretable models (such as decision trees).
(Related reading: using explainability and observability to benefit the software development lifecycle.)
A key limitation of complex models has to do with transparency, trustworthiness, and risk management for AI. Let’s play out what this means:
The internal logic and mechanics of a complex machine learning model are not evident. This means that your AI project may undergo extensive audits and controls before you can deploy it to production environments, particularly if you're using the model in a highly regulated industry, like healthcare or finance.
On the other hand, complex machine learning models can provide key competitive differentiation: they can still be made explainable, they work well, and they are highly dependable. And if the market segment is not highly regulated, organizations may adopt exploitative strategies, training large and complex AI models on a growing pool of sensitive end-user information.
As long as the models are explainable, training can be guided to transform raw data into meaningful insights and business knowledge. In fact, organizations may even be model-agnostic, as long as the right business metrics are met.
Even in these less regulated cases, AI practitioners are motivated to develop large, complex and explainable models that are also interpretable. Higher interpretability helps them guide model performance while accounting for technical limitations such as model complexity, computing performance and storage requirements.