

AI Knows Best (But Won’t Tell You Why): Cybersecurity’s New Dilemma

What happens when your best cyber defender can’t explain its moves? Navigating AI’s brilliance and blind spots.

AI is rapidly reshaping cybersecurity operations, promising faster responses and better threat detection. But with great power comes great responsibility — or in this case, a critical challenge: understanding how AI decides what’s “bad” and requires action. The problem isn’t just about trust; it’s about transparency. If we don’t know how cyber defense algorithms are reaching their conclusions, defenders could be left scrambling to mitigate threats they don’t fully understand.


First, consider how AI is transforming cyber defenses. It can detect threats in real time, reducing response times from hours to seconds, and analyze vast amounts of data to uncover vulnerabilities that might otherwise go unnoticed. This isn’t just about speed; it’s about creating more intelligent and adaptive defenses. AI has the potential to enhance productivity, improve compliance, and proactively manage risks, offering a glimpse into a more secure future. But with these advancements come challenges that can’t be ignored.

Lessons from the past: What algorithms have taught us

Together, these capabilities should reduce the burden on cybersecurity operators and dramatically improve security. Yet Yuval Noah Harari’s Nexus offers some instructive examples of algorithms being applied when the reasoning behind a decision is unknown or can’t be determined. Two cases are worth discussing.

Case 1: Sentencing with COMPAS risk assessment 

The first case involves a drive-by shooting in Wisconsin in 2013. Police spotted the car involved in the shooting and arrested the driver, Eric Loomis. Loomis denied involvement in the shooting but pleaded guilty to two less severe charges: “attempting to flee a police officer” and “operating a motor vehicle without the owner’s consent.” At sentencing, the judge consulted an algorithm called COMPAS, which assessed Loomis as a “high-risk” individual. That assessment influenced the judge to impose a steep sentence of six years in prison.

Loomis appealed the decision to the Wisconsin Supreme Court, arguing that the judge had violated his right to due process. The judge admitted he did not understand how the COMPAS algorithm arrived at its evaluation, and the company that developed it argued that the algorithm was a “trade secret.” Harari notes that by the early 2020s, “citizens in numerous countries routinely get sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.”

 


Case 2: AlphaGo’s “Move 37”

The second case involves a “Go” match between Lee Sedol, a world-renowned champion of the game, and AlphaGo, a program developed by DeepMind, the AI lab co-founded by Mustafa Suleyman. AlphaGo was built to play Go, a two-player strategy board game invented in ancient China and considered far more complex than chess. In March 2016, AlphaGo defeated Sedol, “a moment that redefined AI and is now recognized in many academic and government circles as a crucial turning point in history.” During the match, AlphaGo made move number 37, an unexpected choice that was initially assumed to be a mistake. But as the game unfolded, the move proved a turning point, and AlphaGo went on to win. The moment highlighted the alien nature of AI: the AlphaGo team could not explain how the computer decided to make the move.

 

 

Guardrails for navigating the AI unknown 

 

As AI continues to integrate into cybersecurity, we’ll likely see situations where decisions are impactful but difficult to explain. When AI is used defensively, we may be unable to explain why it is taking a particular action, much as with “Move 37.” That may be acceptable in some cases and deeply problematic in others. A defensive system might, for example, shut down systems without warning to “protect” against an attack, an action that can be hard to justify or reverse. For non-critical systems, this may be an acceptable risk; for operations involving critical infrastructure, the outcome could be catastrophic.

 

Establishing AI governance that mitigates risks and makes the system’s decision-making more predictable will prove pivotal in cybersecurity. Classifying the infrastructure to be protected, and the AI agents deployed to protect it, will be critical. Risk mitigation strategies are available, but unknowns remain. Possible strategies include:

 

  • Establish clear objectives and constraints: Algorithms are literal, so goals and constraints should be well defined. This foundation gives AI a clear set of directions and minimizes unexpected decisions.
  • Enhance explainability (XAI): Build or use systems that can explain why certain actions are taken, for example a tool that alerts an admin to unusual traffic patterns or login behavior before acting.
  • Keep a human in the loop: Make sure there’s always a person involved when new or unusual defense strategies come up. Having someone approve actions in these situations keeps things safe and grounded and avoids unwelcome surprises (a minimal sketch of this follows the list).
  • Continuously review and refine algorithms: Regularly evaluate AI performance to ensure it matches your security goals and risk tolerance, and fine-tune as threats or business needs change.
  • Multi-layered defense strategy: AI should be part of a larger plan with multiple layers of defense, where everything the AI does is double-checked by other systems and measures, such as human reviews and backup controls. That way, AI isn’t operating without a safety net, and extra layers of oversight keep everything secure.
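
To make the explainability and human-in-the-loop ideas above concrete, here is a minimal Python sketch of how a response pipeline might surface a model’s reasons and require analyst approval before containing a critical system. The Detection record and the require_approval, block_host, and respond functions are hypothetical placeholders rather than any vendor’s API, and the threshold and actions are assumptions for illustration only.

# Minimal, illustrative sketch: surface the model's "reasons" and require
# analyst approval before containment actions on critical systems.
# Detection, require_approval, and block_host are hypothetical names,
# not part of any real product or library.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Detection:
    host: str
    score: float                  # anomaly score from the model, 0.0 to 1.0
    reasons: List[str] = field(default_factory=list)  # human-readable evidence
    critical: bool = False        # does this host belong to critical infrastructure?

def require_approval(detection: Detection) -> bool:
    """Ask an operator to approve the action; swap in a ticketing or chat-ops flow."""
    print(f"[APPROVAL NEEDED] {detection.host} (score={detection.score:.2f})")
    for reason in detection.reasons:
        print(f"  - {reason}")
    return input("Block this host? [y/N] ").strip().lower() == "y"

def block_host(host: str) -> None:
    """Stub for the actual containment step (firewall rule, EDR isolation, etc.)."""
    print(f"Blocking {host}")

def respond(detection: Detection, auto_block_threshold: float = 0.95) -> None:
    # Non-critical hosts with very high-confidence detections may be blocked
    # automatically; anything critical, or anything ambiguous, goes to a human.
    if detection.critical or detection.score < auto_block_threshold:
        if require_approval(detection):
            block_host(detection.host)
        else:
            print(f"Action on {detection.host} deferred to analyst review")
    else:
        block_host(detection.host)

respond(Detection(
    host="10.0.4.17",
    score=0.82,
    reasons=["Login from a new geography", "Outbound traffic 40x above baseline"],
    critical=True,
))

The point of the sketch is the shape of the control, not the specific code: the model’s evidence travels with the alert so a person can evaluate it, and autonomy is constrained by how critical the asset is.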


AI’s strength in cybersecurity is how quickly and decisively it can act, often spotting threats before we even see them. But the way it makes these decisions can be a bit of a black box, making us wonder how much we can trust a system we don’t fully understand. Finding the right balance between AI’s power and transparency is crucial to using it responsibly.


To get more insights and expert analysis of the impacts of AI, subscribe to the Perspectives newsletter.
