Among the top headlines in Google News’s Technology section today was criminal hackers’ use of AI (Artificial Intelligence) and its subset, ML (Machine Learning).1 Opening the article, I found a synopsis of a TechRepublic report, “Cybersecurity: Let’s Get Tactical,” in which the authors list ten ways cybercriminals are attacking with AI,2 including:
- phishing attacks, in which automated scripts, upon gaining credentialed access, can wreak havoc, including draining bank accounts
- credential stuffing and brute-force attacks, in which AI systems try stolen passwords, and likely password variants, across many websites
- bulletproof hosting services that use automation to hide the tracks of malicious websites, so they evade law enforcement and often escape flagging by network scanning tools
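To make the credential-stuffing item above concrete from the defender’s side, here is a minimal sketch of the telltale pattern: one source address failing logins against many *different* accounts, which distinguishes stuffing from a single user mistyping a password. All names, addresses, and the threshold are hypothetical, chosen for illustration.

```python
from collections import defaultdict

# Hypothetical failed-login events as (source_ip, username) pairs,
# e.g. parsed from an authentication log. Values are made up.
failed_logins = [
    ("203.0.113.7", "alice"),
    ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"),
    ("203.0.113.7", "dave"),
    ("198.51.100.2", "alice"),  # one user retrying their own password
]

def flag_stuffing(events, min_distinct_users=3):
    """Flag source IPs that fail logins against many distinct accounts --
    the signature of credential stuffing, as opposed to one user
    repeatedly getting their own password wrong."""
    users_per_ip = defaultdict(set)
    for ip, user in events:
        users_per_ip[ip].add(user)
    return [ip for ip, users in users_per_ip.items()
            if len(users) >= min_distinct_users]

print(flag_stuffing(failed_logins))  # ['203.0.113.7']
```

Real defenses layer on rate limiting, breached-password checks, and multi-factor authentication; this only shows why the many-accounts-per-source signal matters.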
The fact is, it’s an arms race. Both malware and criminal sites would be pretty quickly and easily identified on a network by the nature of their activity. So the criminals try to disguise their malware in benign code and their sites in bulletproof hosting schemes. They keep the ruse going by using machine learning to adapt to changing circumstances.
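As a rough illustration of what “identified by the nature of their activity” means, here is a minimal anomaly sketch: flag hosts whose activity sits far from the fleet baseline. The host names, counts, and threshold are all hypothetical; production systems use far more robust statistics, and this is exactly the kind of signal adaptive malware tries to blend into.

```python
import statistics

# Hypothetical per-host outbound connection counts for one hour,
# e.g. aggregated from network flow logs. Values are made up.
conns = {
    "host-a": 40, "host-b": 55, "host-c": 38,
    "host-d": 47, "host-e": 900,  # beaconing far outside the baseline
}

def flag_outliers(counts, z_threshold=1.5):
    """Flag hosts whose activity is far above the fleet average,
    measured in standard deviations (a simple z-score heuristic)."""
    mean = statistics.mean(counts.values())
    stdev = statistics.stdev(counts.values())
    return [host for host, c in counts.items()
            if (c - mean) / stdev > z_threshold]

print(flag_outliers(conns))  # ['host-e']
```

The arms-race point is that once attackers learn which activity features a detector keys on, ML on their side can shape traffic to stay under thresholds like this one, forcing defenders to find new signals.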