Using artificial intelligence for cybersecurity
How AI can improve cybersecurity and be weaponised for cyberattacks
4 minute read
Artificial intelligence (AI) is changing the game for cybersecurity, analysing massive quantities of risk data to safeguard against malicious attacks at speeds no human analyst can match.
With 95 per cent of cybersecurity issues traced to human error, according to the World Economic Forum’s The Global Risks Report 2022, AI could be the ultimate cybersecurity tool.
CyberCX Cyber Intelligence Director Katherine Mansted said delegating data collection and analysis to AI enabled companies to reallocate security personnel to more highly skilled tasks.
“Humans are limited in how much we can process, and using AI tools can help us to reduce workload and free up the human to focus on the most interesting data input.”
“It can potentially help you to prioritise and focus human attention where it is needed the most,” she said.
“This is about humans using AI to let humans do what they do best.”
Mitigating risk through artificial intelligence
Using AI to strengthen your company’s cybersecurity means attacks can be monitored at a higher speed and at a larger scale, according to Ms Mansted.
“There are roles for AI to flag things that look dodgy at scale,” she said.
“We’re seeing organisations get better at using automation – for example, to flag emails that look like phishing or to flag anomalies in terms of what their network is doing.”
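To picture the kind of anomaly flagging Ms Mansted describes, below is a minimal sketch using scikit-learn’s IsolationForest on synthetic network-traffic features. The features, thresholds and data are illustrative assumptions, not anything CyberCX describes; the point is that the model only raises flags for a human analyst to review.

```python
# Minimal sketch: flagging anomalous network activity with an Isolation Forest.
# The features and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [bytes_sent_MB, connections_per_min, failed_logins]
normal = rng.normal(loc=[50, 20, 1], scale=[10, 5, 1], size=(1000, 3))

# A few hypothetical outliers: an exfiltration-like transfer, a connection
# burst, and a run of failed logins
outliers = np.array([
    [900, 15, 0],
    [60, 300, 2],
    [55, 25, 60],
])

X = np.vstack([normal, outliers])

# contamination = rough share of traffic we expect to be anomalous
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(X)

# predict() returns -1 for anomalies and 1 for normal points
flags = model.predict(X)
for row, flag in zip(X, flags):
    if flag == -1:
        print(f"Flag for human review: {np.round(row, 1)}")
```

The division of labour mirrors the article’s framing: the model only flags what looks unusual, and a human decides what the flag means.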
Ms Mansted said AI made analytics across a business more cost-efficient and potentially much faster.
“It means you can also keep up with what the threat actors are doing and there is already some promising work in terms of debugging code,” she said.
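The debugging point can be pictured as a simple review loop: send suspect code to a generative model and read its critique. The sketch below assumes the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, prompt and buggy snippet are all hypothetical choices, not a method the article endorses.

```python
# Minimal sketch of AI-assisted code review / debugging.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name and prompt are illustrative choices.
from openai import OpenAI

SUSPECT_CODE = '''
def average(values):
    return sum(values) / len(values)   # crashes on an empty list
'''

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer. "
                                      "List likely bugs and suggest fixes."},
        {"role": "user", "content": SUSPECT_CODE},
    ],
)

# A human still reviews the suggestion before any fix is applied.
print(response.choices[0].message.content)
```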
Hackers turn to artificial intelligence
Ms Mansted warned that AI was not a panacea – just as it could help defenders better protect their environments, the same tools were helping attackers to scale and automate their operations and improve their social engineering.
“The era of the Nigerian Prince scam with bad English and awkward phrasing is over – we’re entering into a much more realistic and complex era of manipulated and manufactured fraud,” she said.
“What is different now is that we are in the era of readily available, commercialised generative AI with more ability to generate realistic fake content – whether that is text, voice or video.
“At CyberCX, we have seen some phishing emails or other social engineering attempts where, suspiciously, the English and phrasing look a little bit better.
“That doesn’t mean it is aided by generative AI tools like ChatGPT – but that is a pretty reasonable hypothesis.”
Ms Mansted said the security community was also seeing generative AI content in information operations and disinformation.
“Just earlier this year, Chinese state-aligned hackers were found to be using AI-generated news presenters to distribute anti-US propaganda,” she said.
“What is disturbing is that not only is it propaganda but this was also using what appeared to be a commercial video generation platform, which anyone can pay to use online.
“This shows that once these tools are in the cyber bloodstream, the cost of using them falls and they become more accessible to bad actors, from nation states to criminal scammers. We’ll have bad content produced faster, potentially more realistic and tailored to individuals – that is the challenge we must grapple with.”
Adapting to cyber threats
Unfortunately, AI software is not foolproof and neither are humans, so AI and human intervention should go hand in hand.
“It comes back to the human factor, and the processes we embed around this technology and how we educate people,” Ms Mansted said.
“It’s about adding in a little bit of scepticism to communications – making sure we check the source of information we receive and that we are able to corroborate it.
“If you receive an unexpected message or text, find another way to validate where it is from.
“We’re getting better at checking threats and we’ll only have to do it more often as generative AI becomes more available at lower cost to more bad actors.”
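Part of that validation can be automated, at least for email, by inspecting a message’s Authentication-Results header for SPF, DKIM and DMARC outcomes. The sketch below uses Python’s standard email module; the header format and failure strings vary by mail provider, so treat it as an illustration rather than a complete verification.

```python
# Minimal sketch: one automated check behind "validate where it is from".
# Parses a raw email and inspects its Authentication-Results header.
# Header formats vary by provider, so this is illustrative only.
from email import message_from_string
from email.message import Message

RAW_EMAIL = """\
From: "IT Support" <support@example.com>
To: you@example.com
Subject: Urgent: password reset required
Authentication-Results: mx.example.net; spf=fail smtp.mailfrom=example.com; dkim=none; dmarc=fail

Click here to reset your password immediately.
"""

def auth_failures(msg: Message) -> list[str]:
    """Return any failed sender-authentication checks found in the header."""
    results = msg.get("Authentication-Results", "").lower()
    return [check for check in ("spf=fail", "dkim=none", "dmarc=fail")
            if check in results]

msg = message_from_string(RAW_EMAIL)
failures = auth_failures(msg)
if failures:
    print(f"Treat with scepticism ({', '.join(failures)}): "
          f"verify via another channel before acting.")
```

As with the anomaly flagging above, the check does not decide anything on its own – it tells a person which messages deserve a second look.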
In addition to cybersecurity training, Ms Mansted said workplaces needed policies and training on how to engage with generative AI.
“A lot of companies are thinking about how they can introduce generative AI into their workflow, which is going to be important for productivity and other gains, but there is an additional risk that comes with it,” she said.
“Be really careful not to share sensitive information with the tool – there are examples where employees have copied and pasted sensitive information into the prompt of a commercial generative AI tool. Once that information has left your environment, you no longer have control over it or own it.
“It’s essential we think about how we keep control of our information.
“It’s all about adapting to new threats that may be using generative AI, so you can be ahead of the criminal curve.”
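The “keep control of your information” policy can be partly enforced in code: a simple filter that redacts obviously sensitive patterns before a prompt leaves the environment. The sketch below is a minimal illustration with a few assumed patterns; real data-loss-prevention rules are far broader than this.

```python
# Minimal sketch of a guardrail for generative AI prompts: scrub obvious
# sensitive patterns before text is sent to an external tool.
# The patterns below are illustrative assumptions only.
import re

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the environment."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarise this: customer jane@corp.com paid with 4111 1111 1111 1111."
print(scrub_prompt(prompt))
# -> Summarise this: customer [REDACTED EMAIL] paid with [REDACTED CREDIT_CARD].
```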