

When to Choose GenAI, Agentic AI, or None of the Above

Not every use case calls for AI. Learn how to choose the right model, before it chooses for you.

The hype surrounding AI is real, and it’s showing up on nearly every roadmap. In Splunk’s State of Security 2024: The Race to Harness AI report, nearly half of respondents (44%) named artificial intelligence as a top initiative for 2024. Framed as a way to expand a company’s capabilities without increasing headcount, AI has become the go-to initiative for organizations trying to do more with less.

 

So much so that threat actors are using it, and defenders rely on it to thwart attacks. 

 

Generative AI and Agentic AI are the two forms of artificial intelligence getting the most attention right now. As organizations learn more about their capabilities and implications, it’s clear these models represent different approaches, each with its own challenges and advantages. Understanding where they diverge is key to making informed, strategic decisions about how and when to apply them.

 


 

What GenAI can and can’t do

In simple terms, Generative AI (GenAI) makes sense of data. It generates content based on patterns learned from training data, whether text, images, code, audio, video, or a combination of these, and it cannot take action on its own.

 

Think back to researching information in a library. You’d check the card catalogue, find the location, and track down the book in hopes it had what you needed. Then the internet arrived, and suddenly everything from journals to encyclopedias was just a few clicks away. We still had the card catalogue, but our access to knowledge had expanded enormously.

 

With GenAI, we can combine the library’s information with the internet’s and quickly produce a result in plain language. GenAI also lets us create and manipulate that data in service of a desired result. At its core, it’s reactive: you ask it a question, and it provides an answer or outcome.

 

Public GenAI isn’t just on the radar—it’s already in use. The State of Security report details that 91% of respondents on cybersecurity teams have integrated it into their workflows.

 

Security teams can use this technology to assess risk. Assuming it has access to the right data, GenAI can evaluate systems, determine whether people or the business are at risk, and recommend a course of action. Organizations are also using it to accelerate incident response, parsing large volumes of logs and alerts to surface potential threats in a fraction of the time. It’s proving effective at spotting anomalies or deviations from expected behavior that might indicate an attack is underway. Finally, GenAI is helping teams scale their expertise by generating summaries, translating documentation, and drafting detection logic.

 

But these gains come with tradeoffs.

 

There are real concerns around data privacy. GenAI tools, especially public models, can inadvertently gather and leak sensitive information if guardrails aren’t in place to prevent it. Once internal information is used in a prompt, it is no longer fully under your control.

 

Shadow AI is another issue. Employees experimenting with unapproved tools introduce blind spots into your environment and can expose the organization to unauthorized or non-compliant movement of sensitive information. It’s a classic case of innovation outpacing oversight.

 

While GenAI is proving to be helpful to defenders, it’s also enabling attackers. Threat actors are using the same technologies to generate phishing emails, re-package harmful code, and automatically scan for system vulnerabilities.

 

Essentially, GenAI is lowering the barrier to entry for adversaries while increasing the speed of innovation on both sides. Understanding these risks, and managing them proactively, is crucial.

 

 

When AI starts acting on its own

In contrast, Agentic AI is a goal-driven, autonomous model built on datasets and parameters. It can plan, make decisions, and take dynamic actions without human intervention. Depending on its data or guidelines, it will interact with whatever dataset it deems necessary as it drives toward a given outcome. Quite simply, it’s a problem solver. The best analogy is the premise of the 1983 movie WarGames, in which an AI charged with running a nuclear war simulation kept playing until it understood that every path led to mutual destruction, and that the only winning move was not to play. (And it couldn’t reach that conclusion until a human intervened with a game of tic-tac-toe to wrench the AI from its single-minded, if we can say that, focus.)

 

Agentic AI excels in settings that require speed, coordination, and autonomy. At its best, it can assess changing conditions in real time, plan multi-step responses, and act independently to achieve a desired outcome. That’s invaluable during fast-moving events like cyberattacks or outages, when human response time may be too slow. But, as the WarGames example shows, that same autonomy brings a host of risks. Without guardrails, agentic systems may act without full context or make decisions beyond what was intended. Unlike GenAI, which waits for instructions, Agentic AI takes initiative, even when it can’t fully weigh the trade-offs or doesn’t know when to stop.

 

In most cases, GenAI will be sufficient for a team’s needs. If a system can afford human oversight, or if the environment is complex and unpredictable, it’s wisest to rely on GenAI’s ability to inform rather than to act. Agentic AI should be reserved for well-defined, high-urgency use cases where the risks are understood and the cost of delay is likely to outweigh the risk of autonomy.

 

For example, consider a ransomware attack on an energy company. The normal procedure would be to detect, identify, prevent further spread, and to some extent isolate the intrusion. This is the typical incident response process, but things are not always so simple in the cyber world. Say it’s the middle of winter, and this energy company provides power across several states. The ransomware is targeting only corporate systems, not operational systems, but Agentic AI does not know that as it reacts to the intrusion and tries to mitigate its effects in real time. In choosing the best response, a human must determine whether the risk to life or the risk to corporate assets is the dominant factor.
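To make that trade-off concrete, here is a minimal sketch of how such a guardrail might be written down in code. This is a hypothetical illustration, not a description of any Splunk product or real incident response platform: the Incident fields, the choose_response function, and the confidence threshold are all invented for the example. The point is simply that the escalation path is decided as policy, by humans, before the agent ever acts.

```python
# Hypothetical sketch of a human-in-the-loop guardrail for an autonomous response.
# All names here are invented for illustration; they encode the decision flow
# described above: act automatically only when the blast radius is well understood,
# and escalate to a human whenever life-safety or operational systems are in play.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ISOLATE_AUTOMATICALLY = auto()
    ESCALATE_TO_HUMAN = auto()


@dataclass
class Incident:
    affected_system: str         # e.g. "corporate-email" or "grid-control"
    is_operational_system: bool  # does it run physical infrastructure?
    life_safety_risk: bool       # could isolation endanger people (e.g. a winter outage)?
    confidence: float            # model confidence that this is real ransomware


def choose_response(incident: Incident, confidence_threshold: float = 0.9) -> Action:
    """Gate autonomous action behind explicit, human-authored guardrails."""
    # Never let the agent act alone when people or critical operations are at risk.
    if incident.is_operational_system or incident.life_safety_risk:
        return Action.ESCALATE_TO_HUMAN
    # Low-confidence detections also go to a human rather than triggering isolation.
    if incident.confidence < confidence_threshold:
        return Action.ESCALATE_TO_HUMAN
    # Well-defined, corporate-only, high-confidence case: autonomy is acceptable.
    return Action.ISOLATE_AUTOMATICALLY


if __name__ == "__main__":
    ransomware_on_corp_laptops = Incident("corporate-email", False, False, 0.97)
    ransomware_near_grid = Incident("grid-control", True, True, 0.97)
    print(choose_response(ransomware_on_corp_laptops))  # Action.ISOLATE_AUTOMATICALLY
    print(choose_response(ransomware_near_grid))        # Action.ESCALATE_TO_HUMAN
```

However such a check is implemented in practice, the design choice is the same: the conditions for autonomy are constraints set in advance by people, not inferences the agent makes in the heat of the moment.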

 

 

What leaders need to think about now

You don’t need a technical background to understand the implications. However, you do need to ask the right questions. For example, which of your existing tools could use AI to decide versus act? What thresholds trigger autonomy, and who approved them? Are humans still part of the escalation process, or have these workflows been automated?

 

Can your teams explain the conditions under which your AI systems take over? If something goes wrong, can you trace the decision logic? Can you or your teams override these decisions in real time, or is control lost the moment a signal crosses a threshold?

 

What types of decisions are your AI systems making today? Are they classifying events, recommending actions, or executing them? And do your internal policies account for these differences? For example, would the same response be triggered if a breach hits finance, or if it hits an energy utility during a snowstorm?

 

Most importantly, who is held responsible when an autonomous system makes the wrong call?

 

Too often, AI is lauded for its speed, scale, and precision, but without context those same traits can lead to decidedly poor outcomes. What matters isn’t how capable your systems are, but whether they’re working within clear parameters that reflect your intent and values, and whether they know when to pause.

 

AI doesn’t need to be feared. But it does need to be governed. That governance needs to begin now, before adoption outpaces oversight.

 

This isn’t speculation. Organizations are already deploying AI systems that can act on their own, sometimes without clear safeguards, and the consequences are beginning to show up in the real world.

 

Build your guardrails now. Set the rules. Decide what your AI system should protect, and what it must never endanger.

 

 

Resilience comes from more than uptime or rapid recovery. It requires systems that act with purpose, in line with your values and intent. That means designing AI that doesn’t just execute but reflects strategic, ethical, and organizational priorities. It means ensuring every automated decision includes constraints, context, and escalation paths. And it means owning those decisions at the top, not after a breach, but before the first action is taken on your behalf.

 

 

 

Get expert analyses, real-world applications, and actionable guidance on safe and responsible AI straight to your inbox with the Perspectives newsletter.
