Through my career, I have been “fortunate” to work on some of the greatest national security challenges, ranging from weapons of mass destruction proliferation to terrorism to cyber insecurity. AI represents our newest challenge but will most comprehensively impact society.
The rollout of ChatGPT/Bard is creating anxiety among CISOs as we begin to face decision-making via “non-human” logic, which will accelerate the pace and pervasiveness of attacks, including attacks that could slowly bias sets.1
Anxiety extends beyond CISOs, as we have seen in the Future of Life’s open letter in the Financial Times from tech titans like Elon Musk and AI thought leaders like Max Tegmark at MIT. In addition, security luminaries like Dan Geer from In-Q-Tel highlight the challenges of AI with onsite/edge computing and the blend of software with data.
As the Asilomar AI Principles state, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
CISOs will undoubtedly encounter pressure from CIOs and CTOs to adopt AI to increase efficiency. As a result, CISOs’ jobs will become more complex as they address AI-driven attacks, automated vulnerability exploitation, battle data poisoning, or deep fakes that make current phishing tactics look quaint.
The concept of computer-driven automated attacks is not necessarily new or fictional.
The Peacemaker, a fascinating read by William Inboden about Ronald Reagan and the Cold War, recounts an exchange between President Reagan and Michael Gorbachev at their Geneva Summit in 1985 over abolishing the Strategic Defense Initiative (SDI). Gorbachev angrily reacted when Reagan said he would not cancel SDI. He made an ominous threat; the Kremlin would adopt “automation which would place important decisions in the hands of computers and political leaders [would] be in the bunkers with computers making the decisions. (emphasis added) This could unleash an uncontrollable process.” Gorbachev exposed that the Soviets were already working on a system called the “Dead Hand.” “Dead Hand would automatically launch all of the USSR’s ICBMs upon detecting an American attack—placing the fate of the world in the hands of machines rather than men.”2
Fortunately, Reagan and Gorbachev negotiated an agreement to dramatically reduce strategic nuclear weapons (START) and pulled the world from the brink of disaster.
Society’s challenges with AI are broader than the existential impact of nuclear weapons. Kissinger, Schmidt, and Huttenlucher detail in their book, The Age of AI and Our Human Future, that our perceptions of reality may change given AI-inspired insight.
CISOs will serve as the gatekeeper for AI, given AI’s capability to disrupt operations potentially.
A clear-headed, grounded collaborative framework is required to help CISOs traverse the accelerated adoption of AI.
A doomsday analysis is easy, but perhaps we start by leveraging Reagan's quotation: “Trust but verify” AI, particularly given recent reports of ChatGPT “hallucinating.”3
The 2017 Asilomar principles are starting point to flag research issues, ethics, values, and longer-term issues. For example, “AI research should be to create not undirected intelligence, but beneficial intelligence.” Or “Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Ethics and values cover issues of safety, failure transparency, and judicial transparency. Or, under long-term issues, like AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity, must be subject to strict safety and control measures. Finally, “superintelligence” should only be developed in the service of widely shared ethical ideals and for the benefit of humanity rather than one state or organization.
I find an eerie similarity between the pause proposed in the Future of Life Foundation’s Open Letter and the ethical and humanitarian issues scientists like Robert Oppenheimer experienced high on the plateau in Los Alamos as they raced to develop the atomic bomb. As written in American Prometheus by Kai Bird and Martin Sherman, most scientists believed the target was The Third Reich, only to learn the real focus in ‘45 was checking the Soviet Union by way of Japan. Once the Soviet Union conducted an atomic test in 1949, it was game on for the H-bomb or fusion-based weapon.
We can see this moment in AI time as Julius Caesar crossing the Rubicon, signaling the end of the Roman Republic, or maybe keeping it simple: “The proverbial cat is out of the bag.”
It is complicated, like Schroedinger’s cat: is the cat alive or dead? Generative AI can produce content indistinguishable from human output. Is it AI or not? How do we know?
AI’s ambiguity poses various concerns, from bias to security challenges.
Some good work is underway on “AI audits and detecting bias. Several organizations are addressing AI ethics, including the World Economic Forum, The Partnership on AI, and Equal AI. The Data and Trust Alliance, established in 2020, has many nontech employers among its members.4 However, implementation is unclear. Will it be voluntary or mandatory?
MIT’s technology review documents several ways AI chatbots are a “security disaster,” including jailbreaking, assisting scamming and phishing, and data poisoning.5
Now what?
According to Goldman Sachs, the latest breakthroughs in generative AI could automate a quarter of the work done in the US and the eurozone. AI could spark a productivity boom that would eventually raise annual gross GDP by 7% over ten years. At the same time, it would bring “significant disruption” to the labor market.6 In the US, 63% of the workforce could be affected, with 30% of those working physical or outdoor jobs unaffected. About 7% of workers in the US are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.
A paper published by OpenAI, the creator of Chat GPT-4, “found 80 percent of the US workforce could see at least 10 percent of their tasks performed by generative AI, based on analysis by human researchers and the company’s large language model.” Occupations with higher wages generally present higher exposure, contrary to similar evaluations of overall exposure to machine learning.7
I find a reason for optimism, but we must move forward carefully. Many of the companies here today are or will be contemplating leveraging AI.
The Wall Street Journal, on March 31, offered a compelling interview with Bill Braun, CIO of Chevron. He said, “Doing AI responsibly is critical.” When asked where you would like to see AI embedded that you haven’t seen yet, he answered: “Everywhere. It should be part of every workflow, every product stream. But anything that looks like the more routine part or, the less value-adding part…helping take those aspects out of every worker’s interaction with technology should be the goal.”8
So, where do you start? All In on AI offers valuable guidance, including use cases covering Toyota, Morgan Stanley, Airbus, Shell, Anthem, Kroger, and Progressive.
There are three archetypes associated with AI adoption:
I want to circle back to “Trust but Verify” to delineate three essential points:
First, the most critical step is for the CEO to designate the executive in charge of AI adoption. This executive should lead a process for reviewing all potential AI applications.
Second, adopt a basic framework to help with the process. For example, Deloitte’s “Trustworthy AI Framework” calls out six areas to help clients with their policy development:
Third, double down on a resilience strategy. McKinsey released “A technology survival guide for resilience.” The good news is that many companies are already pursuing resilient infrastructure. McKinsey underscores the importance of understanding “criticality.” Simply, what is most critical to business operations? As McKinsey says, “This requires a resilient infrastructure with heightened visibility and transparency across the technology stack to keep an organization functioning in the event of a cyber attack, data corruption, catastrophic system failure, or other types of incidents.”
McKinsey has also established a maturity model for resilience.
At Splunk, we want to unpack this more.
Digital resilience covers five areas: visibility, detection, investigation, response, and collaboration. In the context of AI:
How well teams can see across their technology environment, including quality and fidelity of data and completeness of coverage.
Application to AI: Given AI applications will extend across security, DevOps, and observability, visibility must encompass each area. This will require the integration of data workflows into dashboards.
How well organizations leverage data to identify potential issues, including detection coverage and alerting.
Application to AI: CISOs must leverage and integrate detection tools to address AI application security. Devices must detect data and algorithm poisoning. As new AI capabilities come out, each should be gated until tools can detect AI tampering.
How well organizations use data to search for potential issues and accelerate analysis, including enrichment, threat hunting, and searching logs, metrics, and traces.
Application to AI: Threat hunting among AI applications may require special tools, for example, “sandboxes,” to allow operators to understand how an AI application works.
How quickly security, IT, and DevOps teams respond to day-to-day issues or incidents.
Application to AI: As with existing security operations, detecting and responding to AI-related threats, disruptions, and vulnerabilities is critical.
How well teams and their tools facilitate working cross-functionality across security, IT and DevOps.
Application to AI: Collaboration will be critical across security, IT, and DevOps, each area leveraging automated sharing and pooling insights with peers inside companies and with others.
While I am optimistic, we also need to be realistic. Unlike the beginning of the Cold War, this time, the geopolitical stakes involve China, not the Soviet Union, in the context of the atomic age. In Fortune, Tom Siebel, founder, and CEO of C3.ai, said:
“These tensions between China and the United States, in both the geopolitical and military realm, are very real. Enterprise A.I. will be at the heart of the design of the next kill chain. Whether you’re dealing with hypersonics, whether you are dealing with swarms, whether you are dealing with sub-surface autonomous vehicles, whether you are dealing with space, A.I. is very much at the heart of that. So we are in, I would say, open hostile warfare with China, as it relates to A.I. right now. And whoever wins that battle will probably dominate the world.”
I wrestle with the implications of what is ahead of us as AI will inevitably grow.
We know this is a critical time. I seek not to hype the issue but to keep our feet on the ground and acknowledge while AI brings tremendous opportunity, much remains unknown; we must step carefully. Splunk is serious about working together. This forum is an excellent opportunity to link and think.
Thank you.
[1] See The Age of AI And Our Human Future, Henry Kissinger, Eric Schmidt, Daniel Huttenlocher, 2021
[2] The Peacemaker, Ronald Reagan, The Cold War and the World on the Brink, William Inboden, p.375, 2022
[3] Lets cast a critical eye over business ideas from ChatGPT, Financial Times, March 12, 2023
[4] All in on AI,Thomas Davenport and Nitin Mittal, 2023 p 118
[5] “Three ways AI chatbots are a security disaster,” Melissa Heikkila, MIT Technology Review, Apr 4, 2023
[6] Generative AI set to affect 300mn jobs across major economies, The Financial Times, March 27, 2023
[7] GPTs are GPTs: An Early Look at the Labor Market Impact Potenital of Large Language Models, Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, OpenAI, OpenResearch, University of Pennsylvania, Mar 27, 2023
[8] Download Extra, Chevron’s Bill Braun calls generative AI a ‘wake up call’ for traditional IT vendors, Wall Street Journal, Mar 31, 2023
[9] All in on AI, How Smart Companies Win Big with Artificial Intelligence, Thomas Davenport and Nitin Mittal. p. 48
[10] “A technology survival guide for resilience.” McKinsey & Company, March 20, 2023
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company — with over 7,500 employees, Splunkers have received over 1,020 patents to date and availability in 21 regions around the world — and offers an open, extensible data platform that supports shared data across any environment so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.