Among the top headlines in Google News’s Technology section today was criminal hackers’ use of AI (Artificial Intelligence) and its subset, ML (Machine Learning).1 Opening the article, I found a synopsis of a TechRepublic report, “Cybersecurity: Let’s Get Tactical,” in which the authors give ten ways cybercriminals are attacking with AI,2 including:
- phishing attacks, in which, upon gaining credentialed access, automatic scripts can wreak havoc, including draining bank accounts
- credential stuffing and brute force attacks, in which AI systems try passwords — and password possibilities — on many websites
- bulletproof hosting services that use automation to hide the tracks of malicious websites, so they can’t be shut down by law enforcement or, in many cases, even flagged by network-scanning tools
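Credential stuffing in particular leaves a telltale trail: one source trying logins against many different accounts. As a minimal sketch (not any vendor’s product, and the threshold and log format are assumptions), a defender can flag that pattern from failed-login records:

```python
from collections import defaultdict

def flag_credential_stuffing(failed_logins, account_threshold=5):
    """Flag source IPs whose failed logins span many DISTINCT accounts --
    a hallmark of automated credential stuffing, as opposed to one
    forgetful user retrying their own password."""
    accounts_per_ip = defaultdict(set)
    for ip, account in failed_logins:
        accounts_per_ip[ip].add(account)
    return {ip for ip, accounts in accounts_per_ip.items()
            if len(accounts) >= account_threshold}

# Illustrative log: one IP spraying eight accounts, one user mistyping.
events = [("203.0.113.9", f"user{i}") for i in range(8)]
events += [("198.51.100.4", "alice")] * 3
print(flag_credential_stuffing(events))  # -> {'203.0.113.9'}
```

A real deployment would add time windows and rate limits, but the core signal is the same: breadth of accounts, not volume of attempts.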
The fact is, it’s an arms race. Both malware and criminal sites would otherwise be quickly and easily identified on a network by the nature of their activity. So the criminals disguise their malware in benign code and their sites in bulletproof hosting schemes, and they keep the ruse going with machine learning that adapts to changing circumstances.
The Good Side
The most dangerous cyber threats to organizations and individuals hide within everyday network traffic, cleverly disguised to avoid detection. Faced with a near-constant stream of potential threat warnings, actual infections, and information on network activity, organizations of all sizes may struggle to uncover threats. The heart of the issue is that humans are incapable of handling such an enormous amount of data and data analysis. That’s where we have to rely on the strength of our machines, which have the processing power to comb through and analyze all of the noise, then identify the items that truly need attention. Advanced machine learning can classify the enormous volume of available data that is overwhelming threat researchers and traditional defenses; it can reduce false positives and false negatives, as well as the workload for human analysts, enabling an organization’s staff to focus on the actual threats.4
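To make the triage idea concrete, here is a toy sketch (not Webroot’s or anyone’s actual model; the indicator names and weights are invented for illustration). Each alert is scored by the indicators it carries, and only high-scoring alerts reach a human analyst, while in a real system an ML model would learn the weights from labeled data:

```python
def triage(alerts, weights, threshold=2.0):
    """Score each alert by summing the weights of its indicators;
    escalate alerts at or above the threshold, suppress the rest."""
    escalated, suppressed = [], []
    for alert in alerts:
        score = sum(weights.get(ind, 0.0) for ind in alert["indicators"])
        (escalated if score >= threshold else suppressed).append(alert["id"])
    return escalated, suppressed

# Hypothetical indicator weights -- a trained model would learn these.
weights = {"beaconing": 1.5, "new_domain": 1.0,
           "odd_hours": 0.5, "large_upload": 1.5}
alerts = [
    {"id": "a1", "indicators": ["beaconing", "new_domain"]},    # 2.5 -> escalate
    {"id": "a2", "indicators": ["odd_hours"]},                  # 0.5 -> suppress
    {"id": "a3", "indicators": ["large_upload", "beaconing"]},  # 3.0 -> escalate
]
escalated, suppressed = triage(alerts, weights)
print(escalated)   # ['a1', 'a3']
print(suppressed)  # ['a2']
```

Even this crude version shows the payoff: analysts see two alerts instead of three, and the suppressed one is the likely false positive.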
Webroot’s AI phishing solution works in real time: if an employee clicks a phishing link, the AI system recognizes that this is not normal behavior and flags it as potentially malicious activity. The AI opens the URL, evaluates the quality of the content, and can block it. It provides an intelligent response to unusual behavior.
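As a rough illustration of the kind of URL inspection such a system performs (this is a handful of hand-written heuristics, not Webroot’s actual engine, which also analyzes page content), a checker might flag structural red flags in the link itself:

```python
import re
from urllib.parse import urlparse

def looks_suspicious(url):
    """Return a list of reasons a URL looks like phishing -- crude
    stand-ins for the richer analysis a real-time engine would do."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    reasons = []
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        reasons.append("raw IP address instead of a domain name")
    if "@" in parsed.netloc:
        reasons.append("'@' in the URL (real host hidden after it)")
    if host.count("-") >= 3 or host.count(".") >= 4:
        reasons.append("unusually convoluted hostname")
    if parsed.scheme != "https":
        reasons.append("no TLS")
    return reasons

print(looks_suspicious("http://192.0.2.7/login"))
# ['raw IP address instead of a domain name', 'no TLS']
print(looks_suspicious(
    "https://paypal.com.secure-verify-account-update.example/login"))
# ['unusually convoluted hostname']
```

Static rules like these are exactly what attackers’ ML learns to evade, which is why the real systems keep retraining on fresh traffic.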
So in this case AI prevents the damage from a malicious click: triggered malware, lost credentials, or criminals gaining access to a network.
Spam Wars, Chapter 42: Nothing but Spam Wars
A long time ago came the original spam wars,5 a who’s-got-the-better-weapons battle of good guys versus bad guys. Today it’s really only the tools that are more sophisticated. The good guys’ AI implementations can solve some of the problems (Webroot claims to stop 98 percent of spam,6 which still means 2 percent of malicious emails get through). But the bad guys are better financed: in Dr. Michael McGuire’s April 2018 study, “Into the Web of Profit,” middle-earning cybercriminals make more than $75,000 a month, and the top earners make more than $166,000 per month.7
Because it’s an ever-escalating war, training all employees to recognize bad emails and websites is still the number one defense: 90 percent of cyberattacks succeed through human error.8 And the training needs to be repeated periodically, because the tactics keep evolving. Per the TechRepublic report: even if your organization has implemented the latest and greatest security, it won’t matter if your employees are uninformed.
Cover these topics in your training:
- How to recognize fraudulent emails
- How to know whether you can click a link with complete certainty
- What to do if you question the authenticity of an email
- What to do if you click a malicious link
- What to do when an email or link from an email asks for your credentials
Of course, let’s continue to move forward with technologies to protect us. But just as football helmets have been shown to give players a false sense that they can lead with their heads, don’t be lulled into thinking AI is going to keep your business safe from email attacks. AI can’t fix human decision-making.
1 Artificial Intelligence means incorporating human-like reason and learning into machines: a computer draws conclusions from data. Machine Learning means programming into computers the ability to learn: machines use the provided and accumulating data to make accurate predictions.
3 A botnet is a collection of computer systems that have been illicitly taken over so an attacker can use them as robots to run its software
5 as MIT’s “Technology Review” dubbed the situation in its July/August 2003 issue