Attackers Are Using AI Now Too. Here's What That Looks Like.

New phishing kits are integrating AI assistants to help criminals draft convincing emails and run entire campaigns from a single dashboard. Here's what changed and what it means for your firm.

Most of the AI conversation in cybersecurity has been about defense. AI that spots threats. AI that monitors your network. That’s the version vendors sell at conferences.

Nobody was confused about what would happen next. The attackers figured it out too.

What a phishing kit looks like in 2026

In late April, security researchers at Varonis got inside a phishing kit called Bluekit and published what they found. Phishing kits aren’t new. Criminals have been buying and selling them for years. But Bluekit is a different animal.

It ships with over 40 phishing page templates targeting Outlook, Gmail, iCloud, GitHub, and other services. It handles domain registration, campaign setup, anti-detection, and victim session monitoring from a single dashboard. When someone enters their credentials on a fake page, the operator gets a Telegram notification in real time.

Here’s what’s new: Bluekit has a built-in AI assistant. It supports multiple models (GPT-4.1, Claude, Gemini, DeepSeek) and helps operators draft phishing emails and structure campaigns. The kind of convincing, personalized messaging that used to require a skilled social engineer? There’s now a text box for that.

Varonis tested the AI component and found it still rough. It produced campaign outlines with placeholder content rather than finished, ready-to-send emails. But the kit is under active development, shipping updates and new templates at a steady clip. The trajectory is obvious.

It’s not just email anymore

Separately, security firm Abnormal Security documented a platform called ATHR that uses AI voice agents to conduct social engineering calls. The AI impersonates IT support, walks victims through fake “verification steps,” and extracts credentials or gets them to install remote access tools.

If that sounds familiar, it should. That’s the same attack the Silent Ransom Group (Luna Moth) has been running against law firms for two years, as we covered in April. The difference is that Luna Moth uses human callers. ATHR automates the human out of the loop entirely.

What this breaks

Think about how you trained your staff to spot phishing. Look for typos. Look for weird grammar. Look for generic greetings. Look for manufactured urgency.

AI-generated phishing doesn’t have typos. The grammar is fine. It can pull your firm name, your employees’ names, and your practice areas off your website. It can reference a real case or a real client. It can generate hundreds of slightly different versions so no two people at your firm get the same email. And the person running the campaign doesn’t need to speak English or have any technical skill beyond clicking “generate.”

That changes things:

Volume goes up. When writing a good phishing email takes seconds, attackers send more of them. You’ll see more attempts, not fewer.

Quality goes up. The easy-to-spot stuff gets harder to spot. The gap between “obviously fake” and “looks real” gets smaller.

Targeting gets cheaper. Personalized spear-phishing used to be expensive. AI makes it trivial. A 30-person firm is absolutely worth targeting now.

Voice phishing scales. The callback attacks that Luna Moth runs with human operators? AI voice agents can do that 24 hours a day without a salary.

What actually works

MFA everywhere, email filtering, staff awareness. That baseline still matters (though MFA alone won’t stop every phishing technique). But the “awareness” piece needs updating. Training people to look for bad grammar doesn’t help when the grammar is perfect.

What matters more now:

  • Verify out-of-band. If someone calls claiming to be IT support, hang up and call your IT provider’s actual number. Same for any email demanding immediate action. This one habit stops both phishing and callback attacks cold.
  • Restrict remote access tools. If your firm doesn’t use AnyDesk or Splashtop, block them at the policy level. If a “helpdesk” call asks you to install one, that’s your signal.
  • Conditional Access policies. Require compliant devices, trusted locations, or phishing-resistant MFA (FIDO2 keys, passkeys) for anything sensitive. Most attacks start with stolen credentials, but those credentials become useless if the attacker can’t meet the device requirement.
  • Make reporting easy. No consequences for clicking something suspicious and raising the flag. One early report from one person can shut down an attack that would have hit the whole firm.
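On the remote-access-tool point: if your IT provider manages your machines, they can report on installed software. As a rough illustration of what that check does, here is a minimal sketch that flags blocklisted remote access tools in a software inventory. The blocklist mixes tools named in this article (AnyDesk, Splashtop) with a few common alternatives added as examples; the actual list should come from your firm's policy.

```python
# Sketch: flag remote-access tools in a list of installed program names.
# AnyDesk and Splashtop come from the article; the rest are illustrative
# additions. Tune the blocklist to whatever your firm actually sanctions.
BLOCKED_TOOLS = {"anydesk", "splashtop", "teamviewer", "screenconnect"}

def flag_remote_access_tools(installed_programs):
    """Return the installed program names that match the blocklist."""
    flagged = []
    for name in installed_programs:
        lowered = name.lower()
        # Substring match, so "AnyDesk 8.0" still trips on "anydesk".
        if any(tool in lowered for tool in BLOCKED_TOOLS):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    inventory = ["Microsoft Word", "AnyDesk 8.0", "Adobe Reader"]
    print(flag_remote_access_tools(inventory))  # ['AnyDesk 8.0']
```

A report like this is a detection aid, not a control: the actual blocking belongs in application-control policy (AppLocker, Intune, or whatever your IT provider manages), so a "helpdesk" caller can't talk someone into installing the tool in the first place.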

So now what

AI didn’t invent phishing. It just made it easy. The same way AI writing tools let anyone produce decent marketing copy, AI phishing tools let anyone produce decent phishing campaigns. People who were already good at this got faster. People who weren’t good at it can now play.

Your defenses should assume that the next phishing email hitting your inbox will be well-written, personalized, and difficult to distinguish from a legitimate message from opposing counsel or a vendor. Plan accordingly.


Artech Solutions provides managed IT and cybersecurity services for law firms and professional services firms in Iowa. If you want to know how your firm would hold up against these newer attack techniques, reach out.