
Cybercrime costs the global economy trillions – here’s how AI can help

As the threat of cyberattacks intensifies in today’s rapidly digitising world, a new generation of artificial intelligence technology is giving business leaders the ability to defend themselves against rogue nation-states, e-criminals and hackers.

The technology boom of the past 20 years has allowed companies to communicate and conduct business with an ease never before experienced in human history. But just as the internet has facilitated huge economic opportunity and expansion across the planet for billions of people, unforeseen risks have also developed.

One of the biggest of these is cybercrime. Unheard of pre-internet, cybercrime has exploded in recent times as nation-states, e-criminals and hacktivists find ways to steal financial information, personal data and intellectual-property secrets from governments, companies and consumers.

While there has been a flurry of recent high-profile attacks, some of the most startling over the past year include a data breach of major US government site HealthCare.gov as well as successful personal data intrusions into Cathay Pacific and British Airways.

That’s on top of the infamous attack on Facebook, which saw the tech giant admit that the accounts of about 50 million users had been compromised in the security breach.

Indeed, so significant is the impact of cybercrime that it’s predicted it will cost the global economy a staggering US$6 trillion (€5.2 trillion) per year by 2021, up from US$3 trillion (€2.6 trillion) in 2015.

The rise of AI cybersecurity

Cutting-edge cyber defence

With the threat of cybercrime accelerating, CrowdStrike’s Michael Sentonas says firms, large and small, need to look for new ways to protect themselves. This is where, according to him, machine-learning software can be a game changer.

In his view, replacing traditional antivirus tech with AI and behavioural analytics is the way forward due to AI’s ability to better identify malicious files.

“The promise of AI and machine learning is that you can automatically detect a never-before-seen threat without the involvement of a human,” Sentonas, Vice President of Technology Strategy at the US cybersecurity outfit, tells The CEO Magazine.

“The idea is that you don’t rely on things like signatures; you look at the techniques and types of attacks used in the past and then use a map of these to predict, with high levels of confidence, a new malicious attack based on mass algorithms.”

Sentonas says the advantages of AI-based cybersecurity systems over traditional ones have become more pronounced as internet-based “adversaries” use increasingly cutting-edge tactics to infiltrate IT set-ups.

He notes especially the damaging impact of malware. This includes ransomware, which locks computers until a fee is paid and was responsible for infecting thousands of PCs globally in 2017 via the notorious strains WannaCry and NotPetya.

“Machine learning is all about building a model to predict attacks. You get massive amounts of data on good files and bad files and you build your model to be able to recognise what’s a bad file and what’s a good file,” he says.

“Every time you see a file that you’ve never seen before, you break it down into small parts to see whether it resembles more what a bad file looks like historically or a good file. Advances in data storage mean we can now make those decisions very quickly and we can do it at a cost level that’s acceptable.”
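The good-file/bad-file modelling Sentonas describes can be sketched in miniature. The sketch below is an illustrative assumption, not CrowdStrike’s actual pipeline: it uses two toy features (byte entropy and log-scaled size) and a simple nearest-centroid model, where real systems use thousands of features and far more sophisticated classifiers.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution (packed or encrypted
    malware tends to score higher than plain executables)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def features(data: bytes) -> tuple:
    # Two toy features per file: entropy and log-scaled size.
    return (byte_entropy(data), math.log1p(len(data)))

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

def train(good_samples, bad_samples):
    """'Build the model': one centroid per class in feature space,
    learned from labelled examples of good and bad files."""
    return {
        "good": centroid([features(s) for s in good_samples]),
        "bad": centroid([features(s) for s in bad_samples]),
    }

def classify(model, data: bytes) -> str:
    """Label a never-before-seen file by its nearest class centroid."""
    f = features(data)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy training data: repetitive "good" files vs high-entropy "bad" ones.
good = [b"MZ" + b"\x00" * 400, b"#!/bin/sh\necho ok\n" * 20]
bad = [bytes(range(256)) * 4, bytes((i * 37) % 256 for i in range(1000))]
model = train(good, bad)

print(classify(model, b"MZ" + b"\x00" * 300))   # low entropy -> "good"
print(classify(model, bytes(range(256)) * 2))   # high entropy -> "bad"
```

The point of the sketch is the workflow Sentonas outlines, not the specific features: label historical files, summarise each as a feature vector, fit a model, then score each new, never-before-seen file against what “bad” has looked like historically.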


Cybercrime breaches rampant

In the Asia–Pacific region, over 50% of companies have experienced a cybersecurity incident, while 27% don’t know if they’ve been breached because of a lack of data assessment.

Bad guys using tech

Cybersecurity expert Roger Smith nominates another reason to jump into AI-based cybersecurity – the ‘bad guys’ are already using it. Australia-based Smith, of R & I ICT Consulting Services, says the mind-boggling amount of money to be made over the next decade from digital forms of theft means that e-criminals are at the forefront of developments in machine learning.

In his opinion, that makes adopting an AI-based cybersecurity system simply a case of “keeping up” with those on the wrong side of the law. “Once one person has created an AI solution other people re-engineer it, which means the bad guys are now looking at how they can use AI to get malware into a system,” he tells The CEO Magazine.


Smith’s view is backed up by recent research warning that AI-based tech could enable new forms of cybercrime, political instability and even real-world attacks by 2022. In the report, authored by 26 experts from across Europe, AI is described as a “dual use” tech with mostly beneficial applications but also potentially malicious military and civilian applications.

According to the report, these include expanding current cybercrimes such as “spear phishing”, where personalised messages are sent to vulnerable people to extract money, as well as new uses such as mimicking human voices or controlling synchronised drone attacks.


Cybercrime costing companies big

In-house cybersecurity teams are estimated to waste as much as US$1.3 million (A$1.8 million) per year by responding to inaccurate and erroneous intelligence or chasing false alerts on cybercrime.

No silver bullet

While conceding the growth of AI in cybersecurity is inevitable, University of Queensland’s Micheal Axelsen believes humans will have a big role to play for some time yet.

Axelsen, of UQ’s business information systems department, describes AI-centric cybersecurity as “absolutely not a silver bullet” to increased risks from hostile entities. Rather than being the centrepiece, the academic says machine-learning software should be part of a “portfolio” used to protect “particularly valuable data”.

“The big thing with AI is that you think it’s a silver bullet but it definitely won’t be. We’ve been looking for silver bullets for 25 years now and haven’t found one.”

He urges companies to look at alternative solutions like conducting a thorough “data audit”. According to Axelsen, organisations are “fantastic at hoovering up loads of data, storing it some place and then forgetting about it”.


For this reason, he says a cutting-edge cyber-defence system may actually not be needed. “When we take data from consumers or employees, we should know what we’ve got, know where it’s at and know what its value is,” he tells The CEO Magazine. “If that’s not happening, I would put it to a failure of corporate systems.”

Irrespective of the approach taken to cybersecurity, it’s important that all firms exposed to e-crime conduct a proper risk assessment to gauge their exposure.

That’s the message from Griffith University’s Vallipuram Muthukkumarasamy, who urges companies to answer two important questions before making any big decisions in the area: What are you trying to protect? And who are your adversaries? “You need to, before anything else, establish your vulnerability, your attack surface and where your exposures are,” he says.
