

AI’s Potential is Limitless, But Are You Prepared for the Risks?

Discover when AI makes sense, when it doesn’t, and how to build a secure strategy.

With all its promise, AI is the newest shiny object every business seems to be chasing. But before rushing out of the gate, the biggest question for organizations might not be “can we do it?” but rather “should we do it?” Spoiler: Not every organization needs to jump on the AI train just because it’s leaving the station.


AI is moving forward at full speed, and many — if not most — organizations are jumping aboard. That said, not all AI implementations are created equal, nor are organizations at equal stages in the journey. Some organizations have already embedded AI in their infrastructure and are using it actively. Other organizations are planning and strategizing use cases for the near future. And for some, it’s merely aspirational.


Regardless of industry or size, the majority of businesses will likely have to make future investments in AI to stay competitive and relevant. But how do you decide when and where to go all in on AI and when other solutions may be more viable?


Here are three things to consider before embarking on an AI implementation.



Know the “why” behind AI

Whatever your current phase, plans, or practice for AI, don’t let the technology itself determine your business decisions. Instead, lead with the problem you want it to solve. AI may be a possible solution, but it may not be the best one, so stay open to the possibility that another approach is a better fit.


For example, if you’re attempting to clear a bottleneck, such as making it easier for customers to raise and resolve issues, adding AI might bombard and overwhelm your systems with data, exacerbating the problem.


If you are solving a known problem and simply want to do it faster or more efficiently, then automation, not AI, is likely your best bet. (I call it machine doing rather than machine learning.) Investigating a phishing attack is a classic example, because you probably follow the same few steps every single time. If your users want better access and easier logins to applications, AI isn’t the best means of solving that problem either; rather, you’ll need to change and streamline your authentication methods and processes. And if you struggle to articulate value, adding more data science won’t help. In fact, it might hinder your efforts by requiring a major investment only to yield little in return.


However, if you have gobs of unwieldy data on your hands and you need to untangle it to identify previously hidden trends, insights, or connections — you’ll want to consider AI to get the job done. If you wish to augment the productivity of an understaffed or underskilled team, that might be another reason to consider AI.


Remember, the old adage “to a hammer, everything looks like a nail” still rings true. When AI is your only tool, every problem starts to look like noise, datasets, and models. So, before you dive in, take your time and be intentional with your AI implementation.



Nothing is magic, not even AI

Few things in life are a magical fix, and AI is no exception. If your foundational systems and processes are broken, AI won’t solve those problems—it could even make things worse. It’s up to you to thoughtfully assess where AI can add value within your ecosystem. And let’s be clear: AI is not a one-size-fits-all solution. While it can address a range of challenges, you can’t deploy the same technology in every situation and expect consistent results.


So, what is AI good at? As mentioned, AI can help make sense of messy data and remove complexity. That said, adding any tool to your environment creates new layers of complexity, regardless of what it is. You have to train users, document processes, manage suppliers, understand the risk, and integrate the tools into older systems.


If your systems aren’t already operating smoothly and efficiently, AI won’t improve them. If anything, AI will likely shine a light on the worst elements of your broken processes. For example, if your FinOps practice is weak, adding AI’s compute and staffing costs will quickly expose just how much those processes are lacking.



Make a plan, put it in writing

When assessing how, and indeed whether, to leverage AI, it is not just the technology that requires scrutiny but also the creation and enforcement of clear policies. This applies even to AI skeptics and holdouts. After all, many vendors are embedding AI into their tools by default, so chances are you will still be affected even if you haven’t implemented it yourself.


Here are a few pieces foundational to any AI policy:


  • Know your AI surface: That means understanding and documenting where AI lives in your organization and related workflows, including all aspects of your supply chain (software, HR, legal, etc.). Know what AI is doing, what it’s allowed to do, and how it is secured, including what organizational data is exposed.
  • Mandate training to build corporate AI knowledge: With a more treacherous and complex threat landscape, businesses must protect their AI workloads and the employees using them. Educating users on the pitfalls of AI with organization-wide training is the best way to minimize successful social engineering schemes, deepfakes, and accidental data leakage. But beyond minimizing risk, training also helps build better business outcomes. If employees know how to navigate AI effectively, including what to avoid, they’ll leverage the technology in the best way, leading to more effective results.
  • Ensure security and observability basics are in place: AI development is moving at a dizzying speed, but compliance, security, and monitoring standards have neither kept pace nor been well defined for AI-enabled systems. While we can rely on standard security aids like the OWASP Top Ten and general secure development practices, there aren’t yet any widely used frameworks specific to AI. We still don’t know how to consistently secure AI at scale, so making sure you have the basics in place helps you minimize the risk.


Whether we embrace it or not, AI is here to stay. But as author and technology critic Neil Postman said, “No medium is excessively dangerous if its users understand what its dangers are.” Ultimately, it’s in everyone’s best interest to learn about AI, even if it’s not in your immediate future. That means being clear-eyed and realistic about its role in your environment and the problems it is best suited to solve. By setting expectations and laying a foundation now, you’ll be better equipped to address its challenges and surprises when they inevitably come down the road.



For more recommendations on how security leaders can prepare for the age of AI, download Splunk’s State of Security 2024: The Race to Harness AI.
