At its essence, Artificial Intelligence is the capability to acquire, model, and utilize knowledge. Importantly, this knowledge may be gathered from or based on sensitive and proprietary information.
Because of this sensitivity, the steps you take to acquire and consume this knowledge may be subject to regulations with which your organization must comply.
Compounding this situation, downstream tasks and decisions based on AI have a direct impact on your end users: your employees and your customers. This can rightly raise concerns around topics like privacy, fairness, transparency, and accountability.
So, with that picture, how can you ensure that your AI systems are secure and dependable?
AI regulation and standardization remain highly debated and challenging, for a few reasons: the industry is evolving continuously, and knowledge transfer from academia and research institutions keeps enabling new AI use cases and innovations.
One thing we do know is that we need a systematic approach to managing AI systems in order to ensure the responsible and dependable use of AI technologies.
That’s where AIMS comes in.
An Artificial Intelligence Management System (AIMS) is a standardization framework that allows an organization to manage the risks and opportunities associated with AI.
ISO/IEC 42001 is the world’s first Artificial Intelligence Management System (AIMS) standard.
(Related reading: AI frameworks & secure AI system development.)
Among the newest ISO standards, ISO 42001 sets out requirements and guidance that help organizations establish, implement, maintain, and continually improve an AI management system.
Think of ISO 42001 as an umbrella framework that applies to all AI projects in a variety of contexts and use cases, across all industry verticals and organizations.
The main objective of ISO 42001 is to offer a comprehensive, standardized framework for the responsible and effective use of AI. It provides an integrated approach to AI risk assessment, mitigation, and management. The key themes of this framework include leadership, planning, support, operations, performance evaluation, and continual improvement.
By aligning with ISO 42001, your organization can expect outputs and outcomes like these:
Your organization can develop safe and trustworthy AI from an ethical and legal standpoint. This is important because AI models are typically black box systems. (“Black box” means that a system maps an input to an output without providing clear visibility into its internal behavior — we don’t always understand how it works, we just know it does.)
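To make the black-box idea concrete, here is a minimal sketch (assuming Python with scikit-learn; the model and dataset are hypothetical stand-ins). It treats a trained model purely as an input-to-output mapping and probes it from the outside using permutation importance, one common post-hoc technique:

```python
# A black box in practice: we only call predict(), never inspect internals.
# Minimal sketch using scikit-learn; the dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# We can observe the mapping: input in, prediction out...
print(model.predict(X[:3]))

# ...but to learn *why*, we must probe from the outside, e.g. by shuffling
# each feature and measuring how much the model's accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this only approximate the model’s behavior from the outside; they don’t open the box, which is exactly why governance frameworks emphasize documented oversight.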
From the perspective of building responsible AI systems, many issues inherent in or resulting from large-scale AI systems emerge only in the late stages of the AI model lifecycle.
For example, bias may be inherent in a large dataset used to train your AI systems, yet it may only become apparent after the model has been trained on all available datasets.
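As an illustration, here is a minimal post-training bias check (a sketch using NumPy; the predictions and sensitive attribute are simulated stand-ins for a real model’s outputs). It computes a demographic parity difference: the gap in positive-prediction rates between two groups:

```python
import numpy as np

# Hypothetical post-training audit: y_pred stands in for a trained model's
# predictions, group marks a sensitive attribute (0 or 1) per example.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
# Simulated predictions that favor group 1 (a stand-in for a real model).
y_pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)

# Demographic parity difference: gap in positive-prediction rates.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_1 - rate_0):.2f}")
```

A large gap flags a bias, inherited from the data or the training process, that wasn’t visible before the model was trained.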
You can also learn how to build trust and loyalty among the end users of your AI products and services, and develop processes that will improve your reputation as an AI company.
With AI governance in place, you’ll be prepared to comply with stringent regulations, which are changing rapidly in response to the fast pace of AI innovation. You can build processes grounded in trust and transparency, and establish controls that save time, effort, and investment on internal audits.
(Related reading: AI TRiSM: AI trust, risk & security management.)
Your organization faces risks that are unique to its circumstances, business processes, location, and target market. The ISO 42001 framework is positioned to serve as a guideline for organizations of all sizes and industry verticals, with guidance spanning areas such as AI risk assessment, AI system impact assessment, and management of the AI system lifecycle.
The comprehensive nature of the ISO 42001 guideline means that organizations can identify new challenges and opportunities through a structured, systematic approach.
In the context of Artificial Intelligence, trustworthiness encompasses risks related to ethics, technical performance, and dependability. Examples include bias in training data, opaque model behavior, and unreliable performance on unfamiliar data distributions.
A key characteristic of a trustworthy AI system is the generalizability of its models: whether an AI model can be generalized to other use cases. This characteristic depends heavily on the data distributions the model is exposed to during training.
AI models are trained continually on new data, but when they are exposed to new data distributions, they tend to catastrophically forget what they learned previously.
As a result, current practice is to train AI models on all data distributions together to enhance model generalization.
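Here is a minimal sketch of this effect (assuming Python with scikit-learn; the two synthetic datasets stand in for distinct data distributions). A model trained sequentially tends to lose accuracy on the first distribution, while a model trained on both together retains it:

```python
# A minimal sketch of catastrophic forgetting using scikit-learn.
# The two synthetic datasets stand in for distinct data distributions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Distribution A and a shifted, distinct distribution B.
X_a, y_a = make_classification(n_samples=2000, n_features=20, random_state=0)
X_b, y_b = make_classification(n_samples=2000, n_features=20, shift=3.0,
                               random_state=1)
classes = np.unique(y_a)

# Train incrementally on A, then continue on B only.
clf = SGDClassifier(random_state=0)
for _ in range(5):
    clf.partial_fit(X_a, y_a, classes=classes)
acc_before = clf.score(X_a, y_a)
for _ in range(5):
    clf.partial_fit(X_b, y_b, classes=classes)
acc_after = clf.score(X_a, y_a)  # accuracy on A typically drops sharply

# Current practice: train on all distributions together.
X_all = np.vstack([X_a, X_b])
y_all = np.concatenate([y_a, y_b])
joint = SGDClassifier(random_state=0).fit(X_all, y_all)

print(f"on A, after A only:    {acc_before:.2f}")
print(f"on A, after A then B:  {acc_after:.2f}")
print(f"on A, trained jointly: {joint.score(X_a, y_a):.2f}")
```

The exact numbers vary by model and data, but the pattern is the point: sequential exposure to a new distribution erodes earlier learning, while joint training preserves it.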
(Related reading: Executive Order 13960 requires trustworthy AI in the U.S. government.)
From a business perspective, building trustworthy AI requires a holistic data management process that spans every phase of the AI lifecycle, from data collection and preparation through model training, validation, and ongoing monitoring.
Finally, an AI system impact assessment allows organizations to ensure that all technologies, processes, systems, roles, and decisions collectively support the development of safe, responsible, and trustworthy AI management systems.