

The Divide Between AI Anxiety and Reality

AI may be the boogeyman in cybersecurity, but the real monsters are the old threats we already know—just with a new AI twist


Pteromerhanophobia — ever heard of it? More than 25 million adults across the United States suffer from this condition.  


In layman’s terms, pteromerhanophobia simply means fear of flying. Yet research shows that commercial airplanes are one of the safest forms of travel. What contributes to this very common disconnect between perception and reality? Fear of the unknown is one possible explanation.


Similarly, cybersecurity defenders are concerned about the fast rise of generative AI and the uncertainty it creates. In our report State of Security 2024: The Race to Harness AI, we sought to understand what kept cybersecurity leaders up at night and how those anxieties compared to what they actually experienced. 


AI-powered attacks are the most feared incidents


In May 2024, the FBI warned that cybercriminals are already “leveraging publicly available and custom-made AI tools to orchestrate highly targeted phishing campaigns,” a warning that probably surprised no one in cybersecurity. Cybersecurity is one of the few industries in which the excitement around AI is met with pragmatism. AI is touted across manufacturing, marketing, retail, and a laundry list of other industries as game-changing and a surefire way to boost productivity. While concerns about AI making jobs redundant or deprioritized may loom large in those fields, it’s primarily cybersecurity defenders who must contend with external adversaries using AI for malicious purposes.


Although 46% of security teams say that generative AI will be “game-changing,” nearly the same share (45%) predict that adversaries will benefit most from it. Respondents also cited AI-powered attacks as the most concerning type of cyberattack. That AI anxiety may be dissipating, however: eight months before the State of Security data was collected, only 17% of respondents in our CISO report said generative AI would give defenders the advantage.


When it comes to cybersecurity, fear of the unknown is valid. You can’t protect what you can’t see; that’s why it’s so important to have visibility over your assets and the vulnerabilities associated with them. 


But it’s even harder to protect against what you don’t know. While most organizations have processes to defend against well-known attacks like data breaches, they don’t yet know what will stop AI-powered attacks, how those attacks will occur, or even what form they may take. Adversaries and defenders alike are intrigued by the art of the possible, with one distinction: many adversaries are untethered by policies, corporate red tape, or ethics. And when attackers armed with AI prey on an organization’s most vulnerable vector, its end users, tricking them into divulging sensitive information through highly targeted voice and video cloning, defending against the art of the possible becomes an overwhelming prospect.


How AI will amplify the threats you know (and don’t love) 


AI-powered attacks were the most concerning attack type, yet they didn’t even make the top ten incidents that respondents actually experienced. More common incidents included data breaches, which over half (52%) of respondents experienced in the past two years, along with business email compromise (49%) and system compromise (49%).


The impact of generative AI is more likely to be an amplification of existing threats than a sudden surge of novel attacks. Our respondents agree: 32% believe it will make existing attacks more effective (the top use case), and 28% anticipate an increase in the volume of existing attacks. This underscores the need for cybersecurity professionals to focus on fortifying their defenses against known threats.


Another familiar — and arguably the most concerning — risk is data leakage, which 77% of respondents believe will increase with the use of generative AI. A rogue end user putting sensitive company data into an LLM can undo years of hard work from the cybersecurity team. Yet, only 49% say that blocking data leakage is a top priority for generative AI governance. Currently, there are limited options available that control the flow of data in and out of generative AI tools, which is a critical gap. 
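
To make that data-flow gap concrete, here is a minimal sketch of the kind of outbound control an organization could place in front of a public LLM: a filter that redacts obviously sensitive values from a prompt before it leaves the company boundary and records which patterns fired so the governance board can review them. The pattern names, regexes, and standalone script framing are illustrative assumptions rather than a reference to any specific product or to the report’s recommendations; a real deployment would rely on a vetted DLP ruleset and an enterprise gateway.

```python
import re

# Hypothetical examples of data an organization may not want leaving its
# boundary inside a prompt. A production filter would use a vetted DLP
# ruleset, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of known sensitive patterns with placeholders.

    Returns the redacted prompt plus the list of pattern names that fired,
    which can be logged for governance review.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize this: customer jane.doe@example.com, SSN 123-45-6789."
    clean, hits = redact_prompt(raw)
    print(clean)  # placeholders replace the raw values
    print(hits)   # ['email', 'ssn'] -- events worth surfacing to governance
```

A gateway-style check like this doesn’t replace policy and education, but it gives the security team a technical backstop and an audit trail for the governance priorities discussed below.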


If there’s one area where AI concern should be directed, it’s policy and education: both are good strategies to help mitigate the risks associated with its use. A staggering 93% of respondents use generative AI across their businesses, yet 34% don’t have a comprehensive usage policy, and 65% admit to a lack of education about its implications.


Proper education can encourage end users to interact with public LLMs responsibly and avoid costly mistakes. Thoughtful policies, backed by a cross-functional governance board, will go a long way toward minimizing data leakage and the other risks that generative AI introduces. Implementing these measures gives cybersecurity professionals far better control over how generative AI is used and helps mitigate its risks effectively.


Dive into the full report for more insights and recommendations on the impact of generative AI in 2024 as both threat actors and defenders race to harness its advantages. The report also explores the changing threat landscape, the consequences of tightening compliance requirements, and AI’s role in the talent shortage.
