
A CISO’s Guide to Generative AI Policy

Generative AI adoption is soaring, revealing a crucial gap in policies and highlighting the need for CISOs to create adaptive guidelines that balance innovation with security.


Generative AI adoption is similar to the early years of the California gold rush in the mid-1800s: excitement is high, but guardrails are harder to come by. 


Findings from Splunk’s recent report, State of Security 2024: The Race to Harness AI, underscore this gap: 91% of respondents said that security teams within their organizations were using public generative AI tools, but alarmingly, 34% said they didn’t have an acceptable use policy for generative AI yet.


As AI adoption skyrockets, CISOs can no longer take a “wait-and-see” approach to AI policy; they must grasp the nettle before they face the consequences. 

The importance of an AI policy

The CISO is not the only party responsible for creating an AI policy at an organization, but it pays to lead its development. After all, it’s the security team that will likely face the consequences of a loose or nonexistent AI policy. Data leakage is perhaps the most prevalent risk: 77% of respondents agreed that it will increase with generative AI usage, while only 49% are actively prioritizing data leakage protection.


Compliance is another driver. While the U.S.’s AI Bill of Rights and the E.U.’s AI Act aren’t enforceable laws at the time of this writing, stricter AI regulation is on the horizon. Organizations on the leading edge will get ahead of government efforts and focus on internal compliance controls so that the boxes are already checked once mandates take effect — and more importantly, so that the risk is mitigated early. This may include requiring training for employees who have access to generative AI tools, vetting vendors for responsible AI usage, and reporting and managing the third-party supply chain risks that generative AI introduces — all of which should be covered in a policy.


“If we learned anything from cloud or IoT adoption, a lack of process and planning could come back to haunt security teams. The push from the business to haphazardly follow these trends resulted in undesirable consequences, such as non-compliant clouds paid on personal credit cards, or unsecured IoT devices rife with software vulnerabilities.” – State of Security 2024

Crafting an AI policy 

Widespread adoption is already occurring across business teams (93%) and security teams (91%), and organizations that resist generative AI will be left behind. Similarly, black-and-white thinking doesn’t bode well for an area as complex as generative AI. An outright ban — Samsung, for example, banned public generative AI after discovering that employees had uploaded sensitive code to ChatGPT — closes a door on innovation while simultaneously opening one for shadow AI. Your end users will always find a way to use the tools that make their jobs easier. A ban could also hamper end-user productivity, with 92% of respondents saying that generative AI tools drive a material improvement.
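Visibility helps here: knowing where shadow AI is already happening lets you steer users toward sanctioned tools rather than simply punishing them. Below is a minimal sketch of what that detection could look like, assuming a CSV export of web proxy logs with user and url columns; the domain lists and file name are illustrative assumptions, not a vetted inventory of generative AI services.

```python
import csv
from urllib.parse import urlparse

# Hypothetical allowlist: the only generative AI service this organization approves.
APPROVED_AI_DOMAINS = {"approved-llm.example.com"}

# Illustrative (not exhaustive) public generative AI endpoints to watch for.
PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}


def find_shadow_ai(proxy_log_path):
    """Scan a CSV proxy log (columns: user, url) for unapproved generative AI use."""
    findings = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).hostname or ""
            if host in PUBLIC_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                findings.append((row["user"], host))
    return findings


if __name__ == "__main__":
    for user, host in find_shadow_ai("proxy_log.csv"):
        print(f"Unapproved generative AI use: {user} -> {host}")
```

In practice, this logic would live in a SIEM rule or secure web gateway policy rather than a standalone script, but the principle is the same: measure shadow AI before deciding how strictly to police it.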

CISOs should take a nuanced approach to this nascent area of cybersecurity policy to blend innovation with security.

Align your stakeholders

No policy should be created in a vacuum, but this is especially true for generative AI due to its widespread use cases across nearly every department. 


Start by figuring out who should be involved. Building a team with members from multiple business units, including privacy, legal, human resources, go-to-market, marketing, and product, ensures that the policy will consider each department’s unique use cases, needs and experiences. Involving each department in the early stages of policy creation also helps with business alignment, which is often one of the biggest challenges a CISO faces when introducing any policy.

Map policies to priorities 

The complexity of generative AI means that the possibilities are nearly endless when it comes to deciding what to guardrail. First, determine your organization’s use cases and priorities for generative AI to avoid getting sidetracked.

Our State of Security 2024 report asked about seven key areas:
[Chart: Top areas of generative AI policy enforcement]

  • Data leakage protection/rights management: 49% 
  • Regulatory compliance: 45% 
  • Risk modeling/management: 42% 
  • User awareness training specific to generative AI: 38% 
  • Data governance: 37% 
  • Enforcement of approved/disapproved generative AI applications: 34%
  • User behavior monitoring: 24% 

Policies and priorities should be inextricably linked. An organization that prioritizes data leakage protection — 49% of our respondents do — should focus its policy on disallowing employees from putting personal data or sensitive company information like financials or internal documentation into a public large language model (LLM).
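To make a clause like that enforceable rather than aspirational, some organizations pair it with a technical control that screens prompts before they leave the network. Here is a minimal sketch of such a pre-submission screen; the patterns, function names, and error handling are illustrative assumptions, not a production data leakage protection implementation.

```python
import re

# Illustrative patterns only; real DLP tooling is far more sophisticated.
SENSITIVE_PATTERNS = {
    "U.S. Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Internal classification marking": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}


def screen_prompt(prompt):
    """Return the names of the sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_to_public_llm(prompt):
    violations = screen_prompt(prompt)
    if violations:
        # Block the request and tell the user which policy rule was triggered.
        raise PermissionError(f"Prompt blocked by AI policy: {', '.join(violations)}")
    # ...otherwise, forward the prompt to the approved LLM endpoint here...


try:
    submit_to_public_llm("Summarize this CONFIDENTIAL revenue forecast for me.")
except PermissionError as err:
    print(err)  # -> Prompt blocked by AI policy: Internal classification marking
```

Pattern matching like this only catches the obvious cases; a real control would layer in data classification labels, context, and human review.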

Stay fluid to encourage innovation

Even once an initial policy is created, the process doesn’t end — especially with a technology evolving as rapidly as generative AI. Our respondents recognized this need for fluidity: 92% say they need to continuously revisit policies on permissive usage of generative AI across the business over time. During policy development, CISOs should decide how often those reviews will occur.


As the saying goes, ‘Change is the only constant.’ LLMs will improve, regulations will mature, AI-powered threats will evolve and new employee use cases will emerge. CISOs should keep their ears to the ground for developments, both in the broader market and within their own organizations and teams, and update their policies accordingly to keep them relevant.

Empower your users

An educated user is an empowered user. However, our report revealed that 65% of respondents admit to lacking education about the implications of generative AI. Our respondents were exclusively security and IT leaders, so if confusion is high among those roles, imagine how confused line-of-business employees must be. And confusion is a surefire way for policies to be ignored or misinterpreted. Empowered users, on the other hand, are helpful stakeholders in creating your policy and can be local champions in its adoption.


Education comes from the top down, so executives — especially CISOs — should remain curious and open about generative AI. Executives and practitioners alike should understand the basics of how an AI model works and how it can be used for both good and bad.


A multidisciplinary governance board can also be instrumental at this stage by communicating the importance of responsible AI and championing education around it.

For more insights about how security leaders can curb the risks of generative AI, download the full copy of State of Security 2024: The Race to Harness AI.
