Why Bryley has integrated AI into defenses
The need for speed

Above, a packaging robot demo. Whether we're aware of it or not, we've all been affected by the speed of mechanization. And since the 2022 launch of ChatGPT, the use cases of mechanized language generation and coding have exploded. These developments have not been lost on criminals, who use them to steal.
It's hard not to be astounded by the fluency of text generators – for anyone who's wrestled with the right way to say a thing, they come across like a magic trick. And while OpenAI dazzles us with each new announcement, criminal organizations have been figuring out how these technologies can make their operations faster and more efficient.
But let's take a detour back through the mists of time to 2018. Before commonplace chatbots and diffusion image generators, bad guys had already unleashed morphing malware to evade antivirus and anti-malware programs. Polymorphic malware was programmed to shape-shift; more advanced morphing types followed that could also recognize a system's defensive software and tailor their evasion strategies to what they found.
In this Microsoft case study, the observed malware implanted itself in networks and siphoned off processing power for bitcoin mining. This type of malware remains an issue, its prevalence fluctuating with cryptocurrency prices. There are sometimes tells, such as lagging system performance and overheating, but this malware can remain a hidden resource burden for years, stealing computer processing and electricity.
These are the sorts of attacks (along with others, like fileless or hard-drive-avoiding malware and insider threats) that led to the development of AI or ML (Machine Learning) defenses. The basic idea: the clue that systems are under siege is the activity itself, not a software signature. (Signatures, recognizable malicious code sitting on a hard drive, are the basis of antivirus and anti-malware.)
Before ML was integrated, we had logs of activity from devices and software on a network, and rules that would trigger alerts (e.g., 'notify after five failed login attempts on an account'). But for a person to track down and analyze anomalous behaviors takes time – sometimes hours or days. ML brought systems automated, real-time awareness of unusual activity that could shut down an attack in seconds.
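As a toy illustration of that difference (the names, numbers and thresholds below are invented for this sketch, not Bryley's actual tooling): a rule fires only when its exact hard-coded condition is met, while a baseline-driven check flags any count that's far outside what's normal for that account.

```python
from statistics import mean, stdev

# --- Rule-based: fire only when a hard-coded condition is met. ---
def rule_alert(failed_logins: int, threshold: int = 5) -> bool:
    return failed_logins >= threshold

# --- Baseline-based: flag counts far outside this account's normal range. ---
# Hypothetical history of daily failed-login counts for one account.
history = [0, 1, 0, 2, 1, 0, 1, 0, 1, 1]

def anomaly_alert(todays_count: int, history: list[int]) -> bool:
    mu, sigma = mean(history), stdev(history)
    return todays_count > mu + 3 * sigma  # more than 3 std. devs above normal

print(rule_alert(4))              # False: below the rule's threshold, no alert
print(anomaly_alert(4, history))  # True: 4 is far outside this account's baseline
```

The same count of four failures slips under the static rule but trips the baseline check – which is the gap ML-based monitoring was built to close.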
Brute-Force on AI
A magnifier of throwing everything at the wall
Brute-force-type attacks have been around for ages: criminals automatically unleash stolen credential pairs on websites to try to take over accounts, a technique known as credential stuffing. Bots try different credential combinations until they successfully breach an account. So an employee's compromised personal password could end up unlocking your organization's M365 if they reuse credentials. Once attackers gain access to one legitimate account, they may steal data, inject malware, escalate their user privileges or lock out other users.
And AI makes this already lousy situation worse in two ways: it can make better probability-based guesses at credentials, and it gets a big bump in speed from distributed GPU architecture, which translates into far more attempts per second.
AI-powered Identity Threat Detection and Response (ITDR) provides an automated defense that responds fast enough to deal with AI-enhanced bot attacks – ITDR can halt threats in seconds. Conventional approaches check individual factors against rules (like 'block after five failed logins') and generate alerts that need human review; ITDR automatically analyzes dozens of signals simultaneously – things like location, device, time patterns and behavior – to quickly detect sophisticated attacks that rules may miss. Most importantly, ITDR is able to limit access and contain an attacker in seconds.
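A simplified sketch of that multi-signal idea (the signals, weights and thresholds here are all invented for illustration; a real ITDR product weighs far more signals with learned models, not hand-set numbers): each signal alone is weak, but combined they can cross a containment threshold automatically, with no human in the loop.

```python
# Toy ITDR-style risk scoring: combine several weak signals into one score,
# then act on the score automatically. All weights are hypothetical.
def risk_score(signal: dict) -> float:
    score = 0.0
    if signal["country"] != signal["usual_country"]:
        score += 0.4  # unfamiliar location
    if not signal["known_device"]:
        score += 0.3  # unrecognized device
    if signal["hour"] < 6 or signal["hour"] > 22:
        score += 0.2  # outside normal working hours
    if signal["failed_logins_last_hour"] >= 3:
        score += 0.3  # recent failed attempts
    return min(score, 1.0)

def respond(signal: dict) -> str:
    s = risk_score(signal)
    if s >= 0.7:
        return "contain: revoke sessions, require password reset"
    if s >= 0.4:
        return "step-up: require MFA re-verification"
    return "allow"

# A 3 a.m. login from a new country on an unknown device after failed attempts:
login = {"country": "RO", "usual_country": "US", "known_device": False,
         "hour": 3, "failed_logins_last_hour": 4}
print(respond(login))  # every signal contributes; together they trip containment
```

No single rule here would have blocked the login on its own – the automated response comes from the combination.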
Increase in targeted email attacks
AI means more volume and better impersonations
The FBI's 2024 figures put targeted email attacks, which it groups as Business Email Compromise (BEC), at over 21,000 complaints with nearly three billion dollars in losses. For the first quarter of 2025, the Anti-Phishing Working Group observed 1,130,393 phishing attacks, up from 1,003,924. And the average amount requested in wire-transfer BEC attacks in Q2 2025 was $83,099, a 97 percent increase over the prior quarter, while the total number of wire-transfer BEC attacks observed in Q2 2025 rose 27 percent from Q1 2025.
According to Huntress Labs, spear phishing is among the forms of highly targeted email attacks in which "… threat actors craft personalized attacks aimed at specific people or organizations. Unlike general phishing campaigns that cast a wide net, spear phishing focuses on a specific victim, using information gathered from social media, company websites, or other public sources to create a highly convincing message …" Here's how a threat actor may do it: research their target, [learn] about their job, colleagues, interests, and even personal relationships; then use this information to create a seemingly legitimate email or message that appears to come from a trusted source, like a coworker, business partner, or even a friend. The message might contain a malicious link.
AI tools now automate the research that once made spear-phishing attacks expensive to pull off, and therefore rare. Attackers prompt AI systems to do the gathering or scraping, then prompt the AI to generate emails impersonating, for example, the CFO requesting urgent wire transfers to the attacker's bank account.
Bryley uses ML-powered email threat detection that can identify spear phishing through relationship and behavioral analysis. The ML builds a baseline of normal communication patterns: who emails whom? How do they speak? What requests are normal?
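A minimal sketch of just the who-emails-whom part of such a baseline (the addresses are invented, and a real system would also model tone, timing and request types, not just sender–recipient pairs): count the pairs seen historically, then flag any message from a pair never seen before.

```python
# Toy communication baseline: which sender->recipient pairs are normal?
from collections import Counter

# Hypothetical message history harvested from past mail flow.
history = [
    ("cfo@example.com", "controller@example.com"),
    ("cfo@example.com", "controller@example.com"),
    ("hr@example.com", "all@example.com"),
]

baseline = Counter(history)  # missing pairs simply count as zero

def is_unusual(sender: str, recipient: str) -> bool:
    return baseline[(sender, recipient)] == 0

print(is_unusual("cfo@example.com", "controller@example.com"))   # False: a normal pair
print(is_unusual("cfo@exarnple.com", "controller@example.com"))  # True: look-alike domain
```

Note the second check: "exarnple.com" (with "rn" mimicking "m") has never emailed the controller before, so even a pixel-perfect impersonation stands out against the baseline.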
And then ITDR adds another layer of defense by monitoring for account compromise: if attackers breach a real employee's account to send a spear-phishing email from inside the organization, ITDR detects uncharacteristic behavior, like messages sent after hours or from an unusual location.
Bottom line
We're being attacked by criminals deploying AI systems that speed up and enlarge their previous attack methods. These are better versions of what came before. Better grammar. Better impersonations. More persistence. Faster.
AI is a terrifically fast pattern-matcher. From a defensive point of view, that means if attackers make a move that deviates from the activity normally seen on your network, the AI (in the form of ML-empowered email software, ITDR, or other types not discussed here, like EDR and XDR) can slam the door on the attacker, halting their actions and limiting any damage.
To speak with Bryley's Roy Pacitto about engaging Bryley's AI defenses, please complete the form below to schedule a no-obligation, no-cost call. Or you can email Roy at RPacitto@Bryley.com or reach him by phone at 978.562.6077 x217.


