1. Vibe coding has issues.
AI is accelerating software development, but when “the vibes” take over, the resulting code is often riddled with security vulnerabilities. GitHub Copilot, for example, can streamline coding, yet a study by researchers from NYU and the University of Calgary found that it produces vulnerable code at an alarming rate: in an analysis of 1,689 AI-generated code suggestions across multiple languages, including C, Python, and Verilog, approximately 40% contained security flaws. These vulnerabilities range from insecure authentication mechanisms to improper handling of user inputs, issues that become serious security risks in production environments.
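To make that last category concrete, here is a minimal, hypothetical Python sketch of the kind of input-handling flaw the study flags: the first function builds a SQL query by string interpolation, a pattern an assistant can happily suggest and which is injectable; the second uses a parameterized query. The function names and schema are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern an assistant might suggest: building SQL by string interpolation.
    # A username like "' OR '1'='1" changes the query's meaning (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```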
Unlike experienced developers who apply security best practices, AI generates code by predicting patterns from existing data, which may include outdated or vulnerable methods. Without proper oversight, these flaws can slip through the cracks, creating weaknesses that attackers can exploit.
Rather than banning AI-assisted coding, enterprises can benefit from structured reviews that require documentation and rigorous peer validation. Labeling AI-generated code and integrating additional testing measures enhances security and transparency, ensuring that tools like Copilot remain the co-pilot, not the captain. AI can accelerate development, but rules, processes and human oversight are crucial for catching errors, improving code quality, and maintaining security.
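As one way to operationalise that labelling, a lightweight pre-commit check could surface files that carry an agreed marker so they are routed to extra testing and a second reviewer. The marker string, file handling, and exit-code convention below are assumptions, not an established standard; this is a minimal sketch:

```python
#!/usr/bin/env python3
"""Pre-commit sketch: surface AI-assisted files for extra review.

Assumes a (hypothetical) team convention of tagging AI-assisted code with a
marker comment such as "# origin: ai-assisted". Flagged files are listed so
the pipeline can route them to additional testing and a mandatory second
reviewer; nothing is judged or blocked automatically.
"""
import sys
from pathlib import Path

MARKER = "origin: ai-assisted"  # hypothetical team convention

def main(paths: list[str]) -> int:
    flagged = [p for p in paths if MARKER in Path(p).read_text(errors="ignore")]
    for p in flagged:
        print(f"AI-assisted code detected, route to enhanced review: {p}")
    # Non-zero exit tells the pipeline that extra checks are required.
    return 1 if flagged else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

A hook like this does not evaluate the code itself; it simply makes AI-assisted changes visible so existing review and static-analysis processes can do their job.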
2. AI can erode critical thinking.
AI is reshaping cognitive habits. Just as search engines changed how we access information, AI threatens to diminish problem-solving and decision-making skills. Instead of developing expertise, people may grow accustomed to outsourcing complex thinking to AI systems, shifting from a ‘learning to know’ mindset to one of ‘learning to rely.’
This shift can have serious consequences. Over-reliance on AI creates blind spots, as employees may trust flawed recommendations without question. If an AI system is compromised, team members may find they lack the experience or confidence to intervene effectively. Automation bias further exacerbates this, reinforcing unquestioned acceptance of AI-generated decisions.
Unchecked, this phenomenon can degrade human expertise across industries. Organisations that promote AI literacy and encourage employees to question AI outputs, rather than blindly accepting them, help preserve critical thinking. Teaching basic AI concepts like machine learning, algorithms, and data bias, along with ethical guidelines and prompt engineering, builds confidence in working with AI. Investing in upskilling and maintaining human oversight in AI decisions ensures that critical thinking and human judgment drive decision-making. This speaks to the value of experience, even in the era of AI.
3. AI challenges how we reward talent and human output.
AI is transforming how we assess and reward knowledge. As the line between human expertise and AI-assisted output blurs, the value of “real” expertise is shifting. How do we assess and reward human contribution when AI plays a growing role in shaping how work gets done? How do we recognize genuine expertise and originality?
AI is already impacting hiring. In virtual interviews, it is increasingly common for applicants to pass off responses from an LLM as their own. If AI is shaping candidates’ answers, what are we actually recruiting for? Originality, creativity, adaptability, or some other attribute?
The same applies to workplace performance. When employees use AI to generate reports, code, or develop strategy, how do we attribute ownership and reward human contribution? This challenge is especially relevant for security operations centres (SOCs) today. How will an analyst with 15 years of experience feel when a new hire with just six months on the job is delivering at the same level of productivity? How can you ensure fair recognition? Should performance be measured by raw knowledge, adaptability, or the ability to leverage AI effectively?
In confronting these massive changes, organisations must rethink how to evaluate talent and what constitutes high performance in the AI era. As automation accelerates, companies must ensure they’re recognizing and fostering uniquely human “soft” skills–intuition, judgement, creativity, and ethical reasoning–that AI can’t easily replicate.
4. AI can give attackers an advantage and ossify defender skills and practices.
AI-driven cyber threats and defences are locked in an escalating cycle. Attackers use AI to code up exploits, create deepfakes, and productionise phishing scams, while defenders deploy AI to detect and block these attack vectors. This is nothing new; whenever a new technology arrives, attackers often have the “R&D advantage,” as they can innovate without policy (or ethical) restrictions. But defenders usually catch up and, hopefully, overtake.
As both sides refine their tactics, defences need to stay agile rather than ossify into recognising only certain ‘well-known’ patterns. Attackers innovate, and defenders must, too. Taking an outcome-based approach to what your defensive capability needs to do helps you avoid the trap of “AI’ing all the things” and instead use the technology that actually solves the problem. For many organisations today, that means leveraging automation, not AI. But if you are using AI, adapting to future scenarios means blending AI-driven tools with human expertise and mandating feedback loops to avoid stale output.
Defences must also be understood: explainable AI (XAI) makes AI decisions more transparent, helping security teams see what actions were taken and why, and feed that insight back to fine-tune threat responses. Meanwhile, Splunk’s own David Bianco has created AI-powered honeypots to deceive attackers, waste their time, and gather valuable intelligence on their methods.
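As a rough illustration of the explainability idea (not Splunk’s implementation), the sketch below scores a synthetic alert with a linear model, where each feature’s contribution to the decision is directly readable as coefficient times feature value; dedicated XAI tooling such as SHAP generalises the same principle to more complex models. The feature names and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Hypothetical alert features: failed logins, KB sent outbound, off-hours activity.
feature_names = ["failed_logins", "kb_out", "off_hours"]
X = rng.normal(size=(500, 3))
# Synthetic labels: alerts driven mostly by failed logins and outbound volume.
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one alert: with a linear model, each feature's contribution to the
# decision score is simply coefficient * feature value.
alert = X[0]
contributions = model.coef_[0] * alert
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {value:+.3f}")
print(f"     intercept: {model.intercept_[0]:+.3f}")
```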
Ultimately, operationalisation of AI is crucial to its success in any domain. AI security models must be trained and tested regularly, with their efficacy assessed and the models fine-tuned on a deliberate cadence. By combining automation with human adaptability, security teams can shift from reactive to proactive defence, staying ahead of both known and emerging threats.
Start with the problem you’re trying to solve, and find the technology and process that will solve it the best.
5. AI models can collapse and homogenise output.
AI reflects the data it’s trained on, which can lead to certain norms dominating at the expense of diversity. This not only dilutes regional or cultural perspectives; it reduces uniqueness in general. Simply put, technology development still needs your creative spark. There’s even talk about leveraging neurodiverse talent to counterbalance AI’s tendency toward homogenization.
People on the autism spectrum, those with ADHD, and others with cognitive differences often excel in pattern recognition, creative problem-solving, and deep focus. Their unique abilities can help to develop unconventional security approaches and reinvent the way we practice security. Leveraging neurodiverse talent isn’t just about inclusivity—it’s a strategic advantage that enhances security with fresh perspectives and problem-solving approaches.
A big risk is model collapse, a degenerative process where models trained on AI-generated data lose their ability to produce diverse and accurate outputs over time. As rare data points disappear and errors snowball, AI-generated content becomes increasingly generic and lower in quality. Governments have published guidance on an emerging technology called Content Credentials, a way to counter the erosion of trust and trace the lineage of data, which is increasingly important for avoiding model collapse.
To prevent AI degradation, AI systems must be trained on diverse, high-quality datasets and continuously refined with real-world human input. Techniques like stratified sampling and data augmentation help ensure a well-balanced dataset, reducing bias and improving adaptability.
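As a minimal sketch of those two techniques, assuming an imbalanced, synthetic security dataset, the example below uses scikit-learn’s stratified split to preserve the rare class’s proportion and a naive noise-based augmentation to rebalance the training data; real pipelines might use SMOTE or domain-specific transforms instead.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical, imbalanced dataset: roughly 5% of samples are malicious.
X = rng.normal(size=(2000, 8))
y = (rng.random(2000) < 0.05).astype(int)

# Stratified split keeps the rare class at the same rate in train and test,
# so evaluation isn't skewed by an unlucky split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print("train positive rate:", y_train.mean(), "test positive rate:", y_test.mean())

# Naive augmentation sketch: jitter copies of the minority class with small
# Gaussian noise to rebalance the training set.
minority = X_train[y_train == 1]
augmented = minority + rng.normal(scale=0.05, size=minority.shape)
X_train_bal = np.vstack([X_train, augmented])
y_train_bal = np.concatenate([y_train, np.ones(len(augmented), dtype=int)])
```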
6. Look both ways: Calculate the risk of rushing into AI.
Embracing AI innovation is bold, but being a first mover isn’t always an advantage. Rapid adoption without a careful strategy can lead to soaring costs and low ROI. When expectations clash with reality, disillusionment sets in. This often forces organisations to roll back AI investments – if they manage to untangle the complex business logic.
So, yes. Be excited about AI. Be captivated by its potential. But let’s balance that enthusiasm with sound judgment, and never lose sight of what makes human intelligence valuable today: intuition, reasoning, creativity, and the ability to ask the right questions.
This may sound like “doom and gloom,” but we’ve seen this before. The *aaS boom of the past decade led to unchecked spending, with many companies rushing into subscriptions without understanding the true costs. When returns didn’t materialize, a wave of organisations switched back to on-premise solutions.
These risks are real. While AI holds enormous promise, we need to stay grounded. The challenge isn’t just keeping up with AI – it’s staying one step ahead, ensuring it remains a tool that enhances rather than undermines human intelligence.
Stay ahead of the evolving landscape of AI by subscribing to the Perspectives newsletter for the latest trends in harnessing the technology for success.