The advancement of generative AI technologies like GPT-3 and DALL·E has led to rapid growth in AI adoption worldwide. While companies adopt AI with the intention of being competitive in the market, they often overlook the security risks that come with AI — risks that can affect individuals, organizations, and the broader ecosystem.
In this article, we’ll introduce you to the concept of AI risk management and explore the technical and non-technical risks associated with AI systems. We’ll also show you how to create an AI risk management approach aligned with the NIST AI Risk Management Framework, which aims to help organizations build responsible AI systems.
Finally, we’ll conclude by discussing key challenges organizations will have to face when managing AI risks.
The increased use of AI within organizations has introduced several technical and non-technical risks. AI risk management, a specialized branch of risk management, focuses on identifying, evaluating, and managing the risks associated with deploying and using artificial intelligence.
This process includes developing strategies to address those risks, ensuring the responsible use of AI systems and protecting the organization, its clients, and its employees from the adverse impacts of AI initiatives.
Several AI risk management frameworks have been introduced for more effective risk management. For example, NIST’s AI Risk Management Framework provides a structured way to assess and mitigate AI risks. It includes guidelines and best practices for using AI.
When discussing AI risk management, it is important to understand the technical and non-technical risks that are associated with the use of AI.
Here are common technical risks for AI:
Data privacy risks. AI models, especially those trained on large datasets, can contain sensitive and personal information, such as Personally Identifiable Information (PII). These systems can inadvertently memorize and reveal sensitive information: this can result in an actual privacy breach and non-compliance with data protection regulations (like GDPR).
Bias in AI models. The data used to train AI models can itself contain biases, causing the model to produce inaccurate or discriminatory results. For example:
Bias in hiring & recruiting practices can result in hiring only certain candidates (or certain types of candidates).
Bias in the financial lending field can limit financial access to only certain groups.
Inaccurate results. A poorly trained AI model can produce incorrect outputs. Moreover, some models do not provide up-to-date information, leading the company or its staff to make the wrong decisions.
Overfitting. This phenomenon occurs when an AI model becomes too specialized to its training data. When new data comes in, the model can perform poorly, which undermines the reliability and accuracy of its outcomes.
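One practical way to check for overfitting is to compare a model’s accuracy on its training data against its accuracy on held-out validation data; a large gap suggests the model has memorized rather than generalized. Here is a minimal sketch using scikit-learn; the dataset, model choice, and the 10% gap threshold are illustrative assumptions, not a prescribed standard:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset and model -- swap in your own.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# A fully grown decision tree tends to memorize its training data.
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
gap = train_acc - val_acc

print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")

# The 0.10 threshold is an assumption for illustration; tune it to your own risk tolerance.
if gap > 0.10:
    print("Warning: large train/validation gap -- possible overfitting.")
```

In practice you would run this kind of check on every retrained model, not just once before launch.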
In contrast, let’s look at non-technical risks from the use of AI:
Ethical and social risks. The use of AI in workplaces raises several ethical concerns. For example, it can lead to job cuts within the organization, and some of its outputs may be racist or otherwise discriminatory. It may also collect data from individuals without their consent.
Loss of trust in your company. Some AI systems can produce harmful or biased outcomes, damaging the reputation of the company. Employees and internal stakeholders may lose trust in the AI system, and clients can lose trust in the company. This, of course, can impact the revenue of the company in the long term.
Regulatory risks. As AI technologies evolve rapidly, calls for new AI regulation are growing louder than ever. Existing regulatory frameworks are still being adapted to cover AI, and the gaps that remain can reduce accountability and raise concerns about the ethical use of AI.
Like many other types of risk, AI risks can be managed with a simple five-step approach: context definition, risk identification, risk assessment and prioritization, risk mitigation, and continuous review.
Identify the context of the AI system. For example:
What kind of environment will it operate in?
Who will use the AI? Who will be impacted by it?
What are the functionalities of the AI system?
As discussed earlier, identify the technical and non-technical risks associated with your AI systems. Start by thoroughly evaluating the system, whether it is already in operation or still being designed. You should explore other methods, too, such as discussions with the people involved and user reviews.
Assess each risk thoroughly, identifying its impact on the organization. Here, you can use techniques such as:
Estimating the severity and probability of occurrence of each risk
Risk matrices
Risk scoring
Risk prioritization determines which risks must be addressed first, so that resources for risk mitigation strategies can be allocated effectively. One simple approach is to rank risks by a severity-times-likelihood score, as sketched below.
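As a rough illustration of risk scoring, the sketch below multiplies an estimated severity by an estimated likelihood (both on a 1-to-5 scale, an assumption chosen for this example) and ranks the risks so the highest scores are handled first. The risk names and ratings are hypothetical:

```python
# Hypothetical risk register: (risk, severity 1-5, likelihood 1-5).
risks = [
    ("Training data contains PII", 5, 3),
    ("Model output is biased against a group", 4, 3),
    ("Model accuracy degrades on new data", 3, 4),
    ("Vendor model violates upcoming regulation", 4, 2),
]

# Score each risk as severity x likelihood, then rank highest first.
scored = sorted(
    ((name, severity * likelihood) for name, severity, likelihood in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in scored:
    print(f"{score:>2}  {name}")
```

In practice, you would refine these scores with a risk matrix and input from the stakeholders identified during context definition.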
Once you’ve identified and prioritized all the risks, you can implement comprehensive mitigation strategies. For example:
Apply robust security mechanisms like access controls to avoid unauthorized access.
Encrypt appropriate data to mitigate data breaches.
Identify and, where possible, remove or redact confidential data to mitigate privacy risks (a simple redaction sketch follows this list).
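One way to reduce the confidential data that reaches a model is to redact obvious identifiers before training or prompting. The sketch below uses simple regular expressions that only catch email addresses and US-style phone and Social Security numbers; it is a minimal illustration, not a substitute for a proper data loss prevention tool:

```python
import re

# Very rough patterns for common identifiers -- illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane at [EMAIL] or [PHONE].
```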
Additional corrective measures can also reduce the impact of AI risks as early as possible, including:
Robust incident management and incident response plans (IRPs)
Proactive approaches like real-time monitoring, alerting, and vulnerability scanning.
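Real-time monitoring can start with something as simple as comparing the distribution of an incoming feature against its training-time distribution and alerting when they drift apart. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic data and the 0.05 p-value threshold are assumptions for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference data seen at training time vs. (synthetically drifted) production data.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)

statistic, p_value = ks_2samp(training_feature, production_feature)

# A small p-value means the two samples likely come from different distributions.
if p_value < 0.05:  # illustrative threshold
    print(f"ALERT: possible data drift (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected.")
```

A check like this would typically run on a schedule, with the alert routed into the same incident management process mentioned above.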
To keep improving effectiveness, make a habit of regularly reviewing your AI risk management systems. For that, you can use techniques such as:
User feedback
Regular performance reviews
Finally, communicate the results to stakeholders effectively, and update your risk management strategies and systems based on their feedback.
NIST has introduced the AI Risk Management Framework (AI RMF) to help organizations create responsible AI systems. Let’s look at the key components of the NIST AI RMF.
(Learn about the best risk management frameworks.)
First, the AI RMF identifies the following key categories of harm that AI can cause.
Harm to people. This includes harm to the civil liberties, rights, and economic opportunities of individuals, as well as harm to communities and social harms, such as those affecting educational access. The RMF aims to protect individuals and communities from such harm.
Harm to an organization. This category includes harm to an organization's reputation and its business operations. It also includes data and monetary losses.
Harm to an ecosystem. This refers to harm to natural resources, interconnected systems, supply chains, and financial systems. NIST RMF aims to address and prevent this type of harm as well.
Understanding these possible harms motivates building AI systems that are not just effective but also safe and responsible. To support that, the NIST AI RMF outlines a list of important characteristics that AI systems need to have.
These qualities are key to making AI systems that organizations can rely on.
Valid and reliable. The NIST AI RMF ensures that AI systems can accurately perform the tasks they are designed for. Trustworthy AI systems are often validated through rigorous testing and continuous monitoring to ensure their reliability over time.
Safe. The framework emphasizes incorporating safety from the beginning stages of developing AI systems.
Secure and resilient. Under the guidance of this framework, AI systems are designed to face and tolerate adverse events and changes while also being protected against unauthorized access.
Accountable and transparent. How AI systems work and what they do must be clear and open for everyone to see. This means people can:
Easily find out why the AI made certain decisions.
Ask for explanations or responsibility for what it does.
Explainable and interpretable. The framework ensures that the AI system’s functions are easy to understand for people with different levels of technical knowledge. This helps users genuinely understand how the AI system works.
Privacy-enhanced. Protecting the privacy of users and securing their data is a core requirement for AI systems under this framework.
Fair. The framework includes steps to identify and address harmful biases, helping ensure the AI system’s outcomes are fair for everyone.
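A common starting point for checking fairness is demographic parity: comparing the rate of favorable outcomes a model produces for different groups. The sketch below computes that difference directly; the predictions, group labels, and the 0.1 disparity threshold are hypothetical:

```python
import numpy as np

# Hypothetical model decisions (1 = approved) and the group each applicant belongs to.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[groups == "A"].mean()
rate_b = predictions[groups == "B"].mean()
disparity = abs(rate_a - rate_b)

print(f"approval rate A: {rate_a:.2f}, approval rate B: {rate_b:.2f}")

# 0.1 is an illustrative threshold, not a regulatory standard.
if disparity > 0.1:
    print("Potential fairness issue: outcome rates differ across groups.")
```

Demographic parity is only one of several fairness definitions; which one applies depends on the use case and the applicable regulations.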
To turn these characteristics into reality, we need a clear set of actions and processes that we can implement. NIST defines four core functions, Govern, Map, Measure, and Manage, which provide a roadmap for implementing these qualities in AI systems. Let’s see what they involve.
The Govern function is crucial throughout all the other stages of AI risk management. It should be integrated into the AI system lifecycle by establishing a culture that recognizes the potential risks associated with AI.
The Govern step involves outlining and implementing processes and documentation to manage risks and assess their impact. Furthermore, the design and development of the AI system must adhere to organizational values.
The Map function establishes the context for using AI by understanding its intended purpose, organizational goals, business value, risk tolerances, and other interdependencies. It requires:
Categorizing the AI system and mapping out both its risks and benefits (a simple risk-register sketch follows this list).
Understanding the broad impacts of AI decisions and the interaction between different lifecycle stages.
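One lightweight way to capture the output of the Map function is a structured record per AI system that documents its purpose, users, affected parties, and known risks and benefits. The sketch below shows one possible shape for such a record using Python dataclasses; all field names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemMap:
    """Minimal record produced by the Map function for one AI system."""
    name: str
    intended_purpose: str
    users: list[str]
    affected_parties: list[str]
    risks: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)

resume_screener = AISystemMap(
    name="resume-screener",
    intended_purpose="Shortlist candidates for recruiter review",
    users=["recruiting team"],
    affected_parties=["job applicants"],
    risks=["bias against certain applicant groups", "PII in training data"],
    benefits=["faster initial screening"],
)

print(resume_screener)
```

Keeping this record alongside the system makes the later Measure and Manage functions easier, because the risks to test for are already written down.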
The Measure function establishes ways to analyze and evaluate the risks associated with AI, using quantitative tools, qualitative tools, or a combination of both. AI systems must undergo testing during both the development and production phases. Furthermore, these systems should be evaluated against the trustworthiness characteristics described previously.
Conduct comparative evaluations against performance benchmarks (a small benchmarking sketch follows this list), and review these evaluations independently in order to:
Minimize biases.
Enhance accuracy.
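A comparative evaluation can be as simple as benchmarking candidate models on the same data with the same cross-validation splits and recording both the mean and the spread of their scores. Here is a minimal sketch with scikit-learn; the dataset and models are chosen purely for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5_000),
    "random_forest": RandomForestClassifier(random_state=42),
}

# Benchmark each candidate with the same 5-fold cross-validation split.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```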
In the Manage function, you will allocate resources to address the AI risks that have been identified. This requires planning for risk response, recovery, and communication, drawing on insights gained from the Govern and Map functions.
Additionally, organizations can enhance their AI system risk management through systematic documentation, the assessment of emerging risks, and the implementation of continuous improvement processes.
AI risk management allows organizations to build and use responsible AI systems. However, it also poses several challenges that organizations will have to tackle. Here are the key ones.
There is a lack of reliable risk metrics due to institutional biases, oversimplification, and susceptibility to manipulation. Moreover, AI risks are often not well-defined or fully understood, which makes measuring their impacts quantitatively and qualitatively difficult.
These challenges get worse with the use of third-party software, hardware, and data, which may not align with the risk metrics of the original AI system.
(Related reading: third-party risk management.)
AI technologies are advancing at a rapid pace, introducing novel concepts and capabilities. This pace makes it difficult for regulators to keep existing policies up to date.
So, you’ll likely need to add more items to your compliance list — and also realize that regulatory compliance may change significantly in coming years.
Organizations may attempt to eliminate all the negative risks. This is, at best, a waste of time and, at worst, it’s counterproductive. It is not possible to eliminate all risks.
Instead, organizations must adopt a realistic perspective on risk. This allows them to allocate resources more efficiently and focus on the risks that matter most.
Understanding the risks associated with adopting AI is important: left unmanaged, those risks can harm individuals, the organization, and the broader ecosystem.
To handle these risks, it's important to continually assess and prioritize them, put mitigation strategies into action, and check how well those strategies work. In this context, the NIST AI RMF acts as a comprehensive guide that helps organizations manage AI risks more effectively.