AI security threats
The rise of AI presents cybersecurity challenges for all. Davey Winder explores the dangers – and how to defend against them!
The singularity is a – so far, as far as we know – fictional event in which technological advancement spirals out of control, with unforeseen consequences for human civilisation. A common ingredient is an artificial superintelligence, able to make autonomous decisions at a level far beyond human capability – which, for most of us, doesn’t stretch much further than whether we need more raw milk next Tuesday, but the point still stands.
As it is, AI is already here, even if by some measures it still costs around $10,000 in compute time for a model to match a human on everyday tasks. That hasn’t stopped generative AI – with its ability to mimic human voices and animate convincing facial expressions – from outfoxing real people right now.
In this everyday AI reality, what we’re seeing is a new wave of insidious and widespread AI-powered threats. Everyone needs to be aware of how these technologies make online exploits more dangerous than ever – and even open up entirely new vectors of attack.
Phishing and phakes
One of the most powerful applications of AI is in the arena of phishing. Hitherto, that term has mostly referred to dodgy emails that appear to come from a legitimate source – but once you add the ability to carry on wholly convincing human-like conversations, you’re into a whole new world of risk. In the past, phishing attempts largely relied on catching the recipient with their guard down, and getting them to immediately give up some item of valuable information. Now, all it takes is an API key for a mainstream LLM – which can easily be stolen – and criminals can use an AI to convincingly carry on an interactive conversation for as long as it takes to get what they want from the target.
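To appreciate how low the barrier now is, consider how little code it takes to turn a stolen key into a tireless conversationalist. What follows is a hypothetical sketch using the OpenAI Python client – the benign support-agent persona here stands in for the impersonation prompt a criminal would actually use, and the model name is merely an assumption:

```python
# Minimal sketch of an automated chat loop - hypothetical illustration only.
# Someone holding a stolen API key needs little more than this to keep a
# coherent, human-like conversation going for as long as the target replies.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # a stolen key slots in here

# A benign persona stands in for the impersonation prompt an attacker would use
history = [{"role": "system",
            "content": "You are a friendly customer-support agent."}]

while True:
    # Each reply from the target is appended to the running transcript
    target_msg = input("Target: ")
    history.append({"role": "user", "content": target_msg})

    # The full history is resent every turn, so the model never loses context
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```

The point isn’t the code, but the economics: one operator could run thousands of these loops at once, each patiently working a different target.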
In one sense, this doesn’t necessarily change the game all that much. You’ve probably heard of sophisticated spear-phishing attacks, where criminals directly impersonate senior staff to trick victims into sharing something they shouldn’t, or masquerade as bank officials to get them to confirm a fraudulent transaction. But such attacks have historically required a significant investment of manpower. AI changes that picture dramatically.
PRIVACY PROBLEMS
Adam Pilton used to work in law enforcement as a detective in charge of a cybercrime team, before moving to the private sector as a cybersecurity consultant. And in his view, the greatest threat from AI isn’t conventional cyberattacks, but rather the invasion of our privacy.
That’s because the amount of data collected through our everyday technology interactions is beyond vast. Just picture what Google or Facebook know about you – your activities, interests, associations and so forth. “What if an organisation holding such data was breached? Imagine collating all that information and asking an AI to profile a person,” Pilton warns. And what if cybercriminals used that profile for AI-powered social engineering? “It no longer seems far-fetched to picture a cybercriminal receiving a notification that a victim has been effectively socially engineered, and is ready to be extorted.”