Black-hat AIs never sleep and are infinitely patient. AI-powered tools looking for vulnerabilities in your network will not get bored and switch to easier targets. They will keep going until they exhaust all possibilities so we have to be equally assiduous in locking everything down.
While malware creation tools have been around for a long time, AI-based tools are now simplifying the creation of viruses and similar, allowing relatively-unskilled hackers to create enterprise-specific attack tools. Custom attack code clearly is much more difficult to detect, meaning we need to raise our game to meet that.
Phishing is the ransomware operator’s weapon of choice for getting into a network, but they are often handicapped by poor language and design skills. Not anymore. Generative AI is particularly good at writing persuasive emails with perfect fluency in dozens of languages. No longer can we use bad grammar as an indicator of potentially harmful messages, and we need to train our colleagues about the threat of ever more plausible phishing and spam messages.
Talking of colleagues, they too can unintentionally become a threat. It’s all too easy to put confidential material into a public AI service such as ChatGPT without realizing that it may surface in a competitor’s generated text. Indeed, given that a competitor will be asking about the same topics, that is exactly where it will appear.
Accidentally training AI models with company data is only one side of the risk, however, as data poisoning is a nascent danger. Generative AI makes it easy to flood the internet with fake content, for example trashing your products. The AI scooping that up doesn’t know it is not true and will offer them up to users. Without proper protection, a public-facing chat bot may be turned, as Microsoft found to their cost with Tay. It’s essential to test your chat bots, both internal and external, regularly to see that they’re not developing bias or other undesirable behaviors.