Wolves multiplied
AI gives criminals the power to scale their operations. For example, AI can scrape a freely available resource, like public podcast feeds, to collect voice samples paired with the names of small-business owners. Those samples can then be processed into cloned voices made to say ‘send money to [a criminally-controlled bank account].’

Cybercrime magnified

Business Email Compromise on AI

Every day brings what can feel like destabilizing changes, including Business Email Compromise (BEC). It’s not new, but its financial consequences have ballooned – damages are now of a size that can easily threaten the survival of a small organization. And Bryley has observed a rise in BEC attacks showing signs that criminals may be exploiting LLM tools like ChatGPT to craft personalized, convincing emails that mimic trusted people.

Here are some examples of how BEC attacks unfold for smaller businesses:

  1. A law firm received a Nov. 16 email with instructions to wire $68,403 in payoff funds to a mortgage company. That money, instead, went to one of the accounts that [Dwayne] King had opened in Atlanta, authorities said. The next day, [King] withdrew $3,800 in cash from the account at a bank branch, prosecutors said.1
  2. Attackers compromised a Microsoft 365 account and set up malicious email-forwarding inbox rules to intercept communications and gather a small business’s vendor and banking information.2 (A defensive sketch for auditing such rules follows this list.)
  3. A marketing agency invoiced a small business for $103,000, payable via ACH. After the payment was initiated, the business received an email, supposedly from the agency, claiming suspicious account activity and requesting a change of banking details. Thanks to their training, the accounting department verified the request by phone – and the call revealed the email was a scam.3
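The forwarding-rule tactic in the second example leaves an auditable trail. Below is a minimal sketch of how a defender might list a mailbox’s inbox rules through the Microsoft Graph API and flag any that forward mail outside the company domain. The token, user address and domain are illustrative placeholders, and the flagging criterion is an assumption to adapt; Graph access requires an app registration with the MailboxSettings.Read permission.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # placeholder: obtain via your app registration

    def external_forwarding_rules(user: str, internal_domain: str) -> list:
        """List inbox rules that forward or redirect mail outside internal_domain."""
        resp = requests.get(
            f"{GRAPH}/users/{user}/mailFolders/inbox/messageRules",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        suspicious = []
        for rule in resp.json().get("value", []):
            actions = rule.get("actions") or {}
            # Forwarding and redirecting actions both move mail to another address
            targets = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
            for target in targets:
                address = target.get("emailAddress", {}).get("address", "")
                if not address.lower().endswith("@" + internal_domain):
                    suspicious.append((rule.get("displayName"), address))
        return suspicious

    for name, address in external_forwarding_rules("finance@example.com", "example.com"):
        print(f"Review rule {name!r}: it forwards mail to {address}")

Run on a schedule, a check like this would have surfaced the malicious forwarding rules in the Microsoft 365 compromise above.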

Emails so convincing employees can’t tell them from the real thing

The Apple-fication of the web

LLMs fool us. By design, they give us cues that they are, in the words of ChatGPT, “clear, concise, and confident.” Yet their content is nothing new – in fact it’s old (some now incorporate an updated search engine’s cache of web content). LLMs are pattern finders trained on a lot of data (the web and books), and they are statistical predictors built on those previously seen patterns.

LLMs are like the web with great UI/UX (User Interface/User Experience) design – continuing a trajectory Steve Jobs embraced after his 1979 visit to Xerox PARC, where he saw a computer interface mimicking real life with documents, folders and a desktop. When Jobs returned to lead Apple again in 1997, he doubled down on this idea through skeuomorphism – designing digital elements to resemble real-world counterparts, from page-turning books to metallic window frames. And Apple rose from the dead. Microsoft made a lot of similar interface choices in Windows. Google brought it to Android. Digital design has evolved, but real-world analogies remain; making the digital feel real has long been a human goal. LLMs are a logical and powerful continuation of the evolution of this human-computer interface pursuit.

[Image: a pixelated magnifying glass – symbolizing how AI can magnify cybercrime and exploit our trust in digital interfaces]

We’ve been trained to see these pixels as a magnifier – one that even seems to float above the page thanks to its fake shadow. With effort, the illusion fades. But we’ve also been trained to expect digital interactions to work easily, the way they’re supposed to. AI-generated BEC attacks exploit this: in mimicking trusted people, criminals play on our expectations of digital predictability. We’ve had years of being primed to believe illusions that confirm our expectations.

UI in the emails

LLMs collect data, analyze it and statistically predict the next logical thing.
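To make “statistically predict the next logical thing” concrete, here is a toy sketch. Real LLMs are neural networks that weigh far more context, but the core idea – predict the next word from patterns seen in training data – can be shown with a simple bigram counter. The corpus below is an invented stand-in for harvested email text:

    from collections import Counter, defaultdict

    # Invented stand-in for text an attacker might harvest from a mailbox
    corpus = (
        "please wire the payment today "
        "please wire the funds to the new account "
        "please send the invoice to accounting"
    ).split()

    # Count which word follows which: a bigram model, the simplest statistical predictor
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent next word seen in the training text."""
        return following[word].most_common(1)[0][0]

    print(predict_next("wire"))  # -> "the" (it always followed "wire" in training)

Scale that counting up to billions of parameters and documents, and you have the predictive engine behind a convincing fake email.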

A criminal sets an AI to collecting your data – squatting as a persistent presence on an employee laptop, silently harvesting information over time. It learns communication patterns, internal contacts and financial workflows. An LLM then analyzes that data to statistically predict how a specific manager would ask a specific buyer to release confidential data or send money to a criminal account.

Emails have become so convincing that employees can’t tell them from the real thing. And when the fraud blends seamlessly into the natural workflow – like the hiding wolf – that’s when the criminals strike.

Wolves in the grass

AI also lets cybercriminals scale BEC attacks like never before. LLMs can analyze publicly available data, identify decision-makers and generate personalized emails in seconds – turning once time-consuming scams into mass, automated operations. With AI handling the research, the analysis and the crafting of the emails, even small and mid-sized businesses are sitting ducks.

And traditional security measures were not built for this. AI-fueled attacks can bypass rules-based defenses by avoiding the usual giveaways – they can slip past traditional antivirus and anti-malware undetected. Cybercriminals can move broadly, with speed and precision – hitting businesses before the victims even realize there’s a threat.

Deep mindset

Just as organizations are reassessing operations in light of LLMs’ possible benefits, defending against LLMs’ malicious use requires rethinking security. So consider:

  • Implementing verification standards: Establish verification protocols for all colleagues, vendors and sensitive requests – especially financial transactions. Familiar communication patterns can be exploited in AI-enhanced attacks. (A minimal illustration follows this list.)
  • Cultivating AI security awareness training: Equip your team with up-to-date knowledge of AI’s dual role – as both a threat and a defense – and its ability to amplify cybercrime, and provide ongoing training on evolving AI attack techniques.
  • Using AI-powered, adaptive defenses: Deploy intelligent, layered security that can respond dynamically to intelligent attacks.
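As a minimal illustration of the first point, a payment workflow can refuse to act on banking-detail changes until an out-of-band callback succeeds. The trigger conditions and threshold below are assumptions to tune to your own workflows, not a standard; the key design choice is that the phone number comes from your own vendor records, never from the email requesting the change.

    from dataclasses import dataclass

    # Phone numbers from your own vendor master file - never from the requesting email
    KNOWN_GOOD_PHONE = {"Acme Marketing": "+1-555-0100"}  # illustrative entry

    @dataclass
    class PaymentRequest:
        vendor: str
        amount: float
        banking_details_changed: bool

    def requires_callback(req: PaymentRequest, threshold: float = 10_000.0) -> bool:
        """Flag any banking-detail change, and any large payment, for phone verification."""
        return req.banking_details_changed or req.amount >= threshold

    req = PaymentRequest("Acme Marketing", 103_000.0, banking_details_changed=True)
    if requires_callback(req):
        print(f"Hold payment: call {KNOWN_GOOD_PHONE[req.vendor]} to verify first.")

This is essentially the check that saved the business in the third example above: a call to a known-good number before acting on the banking change.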

AI has altered how people interact with computers – including criminals, who are probing it for malicious uses. AI opens new vulnerabilities, and that invites a reassessment of your security strategy.

AI’s broad accessibility makes small businesses increasingly vulnerable to efficient, AI-driven attacks. In our next report, we’ll explore the specific vulnerabilities that make smaller organizations particularly susceptible and introduce the principles for building stronger defenses. This is part of a three-section guide designed to empower you to build resilience and safeguard your business in our age of AI.

Subscribe to Up Times by Bryley, the monthly tech newsletter for New Englanders by New Englanders.

1 The Patch, Money Laundering Email Scam Lands Atlanta Man In Federal Prison
2 Huntress
3 Huntress


Business Email Compromise (BEC) + AI means the risk of devastating financial losses has never been greater:

  • AI-driven automation lets criminals target organizations of any size efficiently
  • AI-powered impersonation creates emails designed to appear identical to genuine communications
  • Advanced AI-driven psychological manipulation exploits human vulnerabilities, increasing the success rate of attacks
  • AI’s ability to learn and adapt allows it to intelligently bypass traditional security defenses