The rise of generative AI tools has unlocked immense productivity potential but has also given birth to a new challenge: Shadow AI. As employees increasingly turn to unsanctioned AI applications for convenience, businesses face significant risks in maintaining data security and adhering to IT governance protocols.
Let’s take a look.
Shadow AI refers to the unauthorized use of artificial intelligence tools in the workplace, outside the scope of internal IT governance protocols. Shadow AI typically involves generative AI tools that are easily accessible online and make for a simple productivity hack. According to recent research, around half of the workforce surveyed globally uses generative AI frequently, with one-third using it daily.
A common example of Shadow AI is the unauthorized use of OpenAI’s ChatGPT, which helps with tasks like editing writing, generating content, research, and data analysis. The tool boosts efficiency, but when it hasn’t been sanctioned by IT teams, employees may accidentally pose serious data security risks for their company and compromise the organization’s reputation.
A similar practice, Shadow IT, is already prevalent in enterprise environments. The term "shadow IT" refers to the use of IT devices, software, and services outside the ownership or control of an organization’s official IT department. Gartner estimates that up to 40% of the IT budget in large enterprises is spent on Shadow IT tools, and predicts that 75% of the workforce will employ Shadow IT practices by 2027.
But what makes Shadow AI different from Shadow IT?
Artificial intelligence is already embedded in most technologies deployed through the authorized channels of workplace IT governance frameworks. Most business functions are data-driven and inherently use AI to drive key business insights and decision-making processes.
Shadow AI is a subset of Shadow IT that specifically applies to generative AI tools.
Let’s discuss how generative artificial intelligence makes Shadow AI different from Shadow IT in terms of its scope and impact:
The motivation for using a general-purpose LLM at the workplace is simple: an intelligent agent that draws comprehensive knowledge from the internet and supports your daily job tasks.
Even before ChatGPT and other generative AI tools were released to the public, employees frequently used internet resources to do their jobs. For instance, engineers use Stack Overflow and GitHub, and marketers use online databases.
This knowledge has now been distilled into generative AI tools that reduce the task of searching and reading online resources to a single prompt and response.
The important difference in how this knowledge is consumed, and the main challenge, is that users prompt an external tool with sensitive business information. For example, an engineer might paste proprietary source code into ChatGPT to debug it, or an analyst might include customer records in a prompt for data analysis.
From a business perspective, the key challenge is the lack of control over the use of intelligent agents. Organizations can advise on security best practices, but a Shadow AI tool may not be able to process a user’s request without prompts that contain privacy-sensitive data. And since these tools are proprietary, organizations cannot identify or control how prompt data is used and protected against malicious intent.
As a result, organizations cannot enforce their own IT governance protocols to mitigate IT security and data privacy risks.
So how do you protect your organization from Shadow AI practices? The following best practices can help improve your security posture against Shadow AI while also allowing your employees to leverage generative AI as highly effective productivity tools:
Perhaps the most practical approach is to minimize the risk of exposing sensitive business information to generative AI tools. Employees should be aware of the risks involved and motivated to take precautionary measures, such as obfuscating code and anonymizing customer data before entering either into an LLM prompt. These extra steps do not degrade the output users can generate from an LLM, but they eliminate the risk of business impact in the event of a data leak incident.
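As a quick illustration, a minimal pre-prompt sanitization step might look like the following Python sketch. The patterns and the redact_prompt helper here are hypothetical examples, not a complete PII solution; a real deployment would rely on a vetted data-loss-prevention or PII-detection library.

```python
import re

# Illustrative (and deliberately simple) patterns for common PII.
# These are examples only; production systems need far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the text
    is sent to any external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft an apology email to jane.doe@example.com about ticket 4521."
print(redact_prompt(prompt))
# -> "Draft an apology email to [EMAIL] about ticket 4521."
```

The placeholder tokens preserve enough context for the model to produce a useful answer while keeping the actual customer data out of the third-party tool.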
Mistral AI, Meta Llama and Google Gemma models are open source in some capacity. These can be a starting point for building your own models: start from these pretrained open-source models and fine-tune them on your own proprietary datasets, then host them locally or on a private cloud network. Your workforce can enjoy the same freedom of integrating generative AI into their daily workflows without the security risks associated with proprietary third-party generative AI tools.
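As a rough sketch, serving an approved open-weights model inside your own network with the Hugging Face transformers library could look like the following. The model name is an illustrative placeholder; substitute whichever open model and hosting setup your organization approves. Fine-tuning on proprietary data would be a separate step (for example, via the Trainer API or parameter-efficient methods).

```python
# A minimal sketch of running an open-weights model locally, so prompts
# never leave your network. Assumes transformers, torch, and accelerate
# are installed and the model weights have been downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

prompt = "Summarize our Q3 incident postmortem in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model runs on infrastructure you control, prompts containing sensitive business data stay inside your governance boundary rather than being sent to a third-party service.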
Identify opportunities and challenges associated with generative AI adoption for various business functions. Security awareness can create intrinsic motivation among your workforce to take the necessary security measures. Internal open-source AI tools can serve as valuable productivity tools.
However, third-party tools may still be necessary for some workflows, and they can inadvertently expose users to unforeseen security and privacy risks. Banning these tools outright will naturally lead to Shadow AI; providing well-informed guidelines on their use instead can help your employees adhere to your IT governance standards.