AI Cybersecurity Threats: What Small Business Owners Need to Know
By George Papazian | Galyx.com | February 2026
Estimated reading time: 9 minutes

A client of mine runs a small company in Pennsylvania. The business is small but has built a solid reputation for its innovative product. Early last year, her office manager got an email from what appeared to be one of their vendors requesting payment via wire transfer. The email referenced a specific invoice number, included details that seemed legitimate, and matched the vendor's usual writing style almost perfectly.
It was fake. All of it. Generated by AI.
The office manager didn't catch it in time. She sent the payment, then mentioned to the founder later that day, "Oh, I paid one of the vendors because the account was overdue." My client replied, "We don't have any overdue invoices from them," and had a sinking feeling. By the time they looked into it, the money was gone: several thousand dollars they'll never see again. These are sophisticated businesspeople who have warned others about phishing emails for years, but in a moment of distraction they stepped into the very trap they helped others avoid. The sentence I remember most from the story: "This one actually looked like it came from someone who knew us."
They're not alone. The numbers tell a story that every small business owner needs to hear. According to the Identity Theft Resource Center, 81% of small businesses experienced cyberattacks in 2025, and 41% of those attacks were AI-driven. The FBI's 2025 IC3 report logged a 37% increase in AI-assisted business email compromise. And a Chainalysis report found that AI-powered scams are producing 4.5 times more profit for criminals than traditional methods.
This isn't a distant, enterprise-level problem. It's hitting companies with five employees and companies with five hundred. And the tools making it possible are the same ones you and I use every day.
The Problem Is Bigger Than Most People Realize
OpenAI published a detailed threat report in February 2026 titled Disrupting Malicious Uses of Our Models that should be required reading for anyone running a business. Credit is due to OpenAI for the transparency here: they're naming specific operations, showing how criminals use their platform, and explaining what they're doing to stop it. Most companies wouldn't publish this kind of information voluntarily. The 37-page report is what prompted me to research this topic further.
What the report reveals is unsettling. Threat actors aren't building some exotic new technology. They're taking the same AI tools available to everyone and bolting them onto old criminal playbooks. Phishing emails that used to be riddled with spelling errors now read like they were written by your actual vendor. Romance scams that used to fall apart after two messages can now sustain emotionally manipulative conversations for weeks. Influence operations that used to require armies of human operators can now run with a handful of people and thousands of AI-generated posts.
The OpenAI report describes what they call the "ping, zing, sting" pattern in scam operations. The ping is the cold contact: AI generates a message designed to grab your attention. The zing triggers an emotional response: excitement about a deal, fear of missing out, attraction to a fake person. The sting extracts money. Each stage is now more convincing, faster, and cheaper to execute than ever before.
One case study in the report, codenamed "Date Bait," described a semi-automated romance scam operation likely based in Cambodia. The criminals used ChatGPT to generate promotional content for a fake dating service, ran targeted social media ads aimed at young men in Indonesia, and then handed off conversations to a mix of human operators and AI chatbots. Their internal reports (which the scammers themselves ran through ChatGPT) tracked hundreds of active targets and calculated a "kill value" for each victim: the maximum amount they expected to extract before blocking them.
That's not a hypothetical scenario. That's a real operation that was running at scale until OpenAI shut down the accounts.

What AI Cybersecurity Threats Actually Look Like in 2026
Let me break down the specific categories of AI cybersecurity threats that are hitting businesses right now, because "cybersecurity" is one of those words that makes people's eyes glaze over until it happens to them.

AI-Generated Phishing and Business Email Compromise
This is the biggest threat for most small businesses by a wide margin. Over 82% of phishing emails now use AI in some form, up more than 50% from the prior year. The click-through rate on AI-generated phishing is four times higher than manually crafted messages. And the cost for attackers to launch these campaigns has dropped by about 95% thanks to large language models automating the entire process.
Here's what makes this different from the phishing emails you're used to ignoring. AI-generated messages don't have the telltale signs anymore. No broken English. No generic greetings. They reference real projects, real people, real invoice numbers. They match the tone of actual business correspondence because the AI has been trained on millions of examples of exactly that.
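Since the prose itself no longer gives the game away, technical signals matter more than ever. One signal most business email providers already attach to every inbound message is the Authentication-Results header, which records whether the sending domain passed SPF, DKIM, and DMARC checks. The Python sketch below is a deliberate simplification (real parsing follows RFC 8601 and is messier), but it shows the idea: flag any message where those checks didn't all pass.

```python
import email
from email import policy


def auth_failures(raw_message: bytes) -> list:
    """Return the authentication checks (spf/dkim/dmarc) that did not pass.

    Simplified sketch: providers record these verdicts in the
    Authentication-Results header after verifying the sending domain.
    A missing or failing check is a reason to slow down and verify,
    not proof of fraud on its own.
    """
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    results = msg.get_all("Authentication-Results") or []
    failures = []
    for check in ("spf", "dkim", "dmarc"):
        # Collect the semicolon-separated clauses for this check.
        verdicts = [part.strip() for hdr in results
                    for part in str(hdr).split(";")
                    if part.strip().startswith(check + "=")]
        if not any("=pass" in v.replace(" ", "") for v in verdicts):
            failures.append(check)
    return failures
```

A message that fails all three checks (or carries no results header at all, as a locally forged one might) is exactly the kind of email your verification policy should catch before any money moves.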
Deepfake Voice and Video Scams
Deepfake scams have crossed what security researchers call the "indistinguishable threshold." Human listeners can no longer reliably tell cloned voices from real ones. A well-publicized case in 2024 saw an engineering firm in Hong Kong lose $25 million after employees were fooled by a deepfake video call that impersonated their CFO. Deepfake-related fraud has surged over 2,100% since 2022.
For a small business, this might look like a phone call from your "bank" that sounds exactly like your actual account manager. Or a video message from a "vendor" confirming a change in payment instructions. The voice cloning technology is now good enough that a few seconds of audio from a public LinkedIn video or podcast appearance is sufficient to create a convincing clone.
State-Sponsored Influence Operations
This one might seem like it only affects governments and large corporations, but sadly, no. The OpenAI report documented a Chinese law enforcement operation they called "Cyber Special Operations" that ran across more than 300 social media platforms with thousands of fake accounts, hundreds of operators, and locally deployed AI models including DeepSeek and Qwen. Their tactics ranged from mass posting to forging documents to impersonating U.S. officials.
A separate Russian-linked operation called "Fish Food" used ChatGPT as a content farm, generating batches of social media comments in multiple languages that were then posted by a network of seemingly unrelated accounts. One batch of seven AI-generated tweets, posted by different accounts, received anywhere from 57 views to over 150,000 views, depending entirely on the follower count of the account doing the posting.
Why does this matter for your business? Because these same techniques get adapted for commercial purposes. Fake reviews. Manufactured controversy about competitors. Disinformation campaigns targeting specific industries. The playbook is public, and the tools are available to anyone.
AI-Assisted Ransomware and Malware
Anthropic's August 2025 threat intelligence report revealed that their Claude AI had been used by a threat actor to compromise at least 17 organizations across healthcare, government, and religious sectors. The attacker used AI to scan networks, harvest credentials, steal files, and generate customized ransom notes based on each victim's financial data. Demands went as high as $500,000.
What's particularly alarming is that the attacker appeared to have only basic coding skills. Without AI assistance, they couldn't have built the malware components on their own. AI didn't just make an experienced criminal faster. It turned a novice into a functional threat.
Ransomware accounts for roughly 51% of cyberattack costs for small and medium businesses, according to recent industry data. And 60% of affected small businesses close within six months of a successful attack.
What the Major AI Companies Are Doing About It

Credit where it's due: the major AI companies are not ignoring this problem. Here's what each of them is doing, based on their public reports and stated policies.
| Company | Key Actions |
| --- | --- |
| OpenAI | Published detailed threat reports since February 2024. Disrupted 40+ malicious networks. Bans accounts and shares intelligence with law enforcement and industry partners. |
| Anthropic | Released threat intelligence reports (August 2025) documenting real-world misuse of Claude. Reports a 99.2% refusal rate on harmful requests with safety protections enabled. |
| Google | Threat Intelligence Group (GTIG) publishes AI threat tracking reports. Documents adversary use of AI for social engineering, jailbreaking, and model extraction. |
| Microsoft | Documented 200+ instances of AI-generated fake content in July 2025 alone. Integrates AI-powered threat detection across Azure and Microsoft 365 security. |
| Meta | Bans coordinated inauthentic behavior networks. Collaborated with OpenAI on cross-platform threat detection. Publishes transparency reports. |
The 2026 International AI Safety Report, a collaborative effort involving Anthropic, Google DeepMind, IBM, Meta, Microsoft, OpenAI, and others, noted that "more evidence has emerged of AI systems being used in real-world cyberattacks" and that "reliable pre-deployment safety testing has become harder to conduct." That last part is important. Some AI models can now tell the difference between a test environment and real-world deployment, which means they might pass safety evaluations and still behave differently once they're in the wild.
My take at the moment is that AI companies are doing more than most people give them credit for, but they're also playing a game of whack-a-mole. Every time they shut down one set of malicious accounts, new ones pop up. The fundamental challenge is that these are general-purpose tools, and you can't make a useful tool for writing business proposals without also making it useful for writing convincing phishing emails.
Why Small Businesses Are Especially Vulnerable
I hear this a lot: "We're too small to be a target." That was maybe true ten years ago. It's not true now.
AI has fundamentally changed the economics of cybercrime. When crafting a phishing email required hours of research and manual work, criminals focused on big targets with big payoffs. When AI can generate thousands of personalized phishing emails in minutes at almost zero cost, the calculus changes. Why spend a week targeting a Fortune 500 company when you can target five hundred small businesses in the same afternoon?
Verizon's 2025 Data Breach Investigations Report found that 43% of all cyberattacks target small businesses. The Guardz Mid-Year 2025 SMB Threat Report showed that small businesses faced nearly twice as many weekly cyber incidents in the first half of 2025 compared to the same period the year before. And here's the stat that should concern every business owner: 83% of small and medium businesses are not financially prepared to recover from a successful cyberattack.
The reasons are predictable. Most small businesses don't have dedicated IT security staff. They don't have sophisticated email filtering systems. They don't run regular security training. And they often rely on the same password across multiple accounts because, let's be honest, nobody wants to manage forty different passwords.
All of that makes small businesses not just viable targets, but preferred ones.
What You Can Do to Protect Your Business

There's no magic that makes this go away, and even if you take all the right steps, you can still be successfully targeted. But there are practical steps that will meaningfully reduce your risk without requiring a six-figure security budget. I've organized these by difficulty and impact.
Do This Today (30 Minutes or Less)
• Turn on multi-factor authentication everywhere. MFA adoption among small businesses actually dropped to 27% in 2025. That's alarming. If you're not using MFA on your email, your bank accounts, and your critical business software, you're leaving the front door unlocked. This is the single highest-impact thing you can do.
• Establish a verbal verification policy for financial transactions. Any request to change payment details, wire money, or update banking information should require a phone call to a known number (not the number in the email) for confirmation. This one policy would have saved my client from that payment I mentioned earlier.
• Check your email security settings. Most business email providers (Google Workspace, Microsoft 365) have built-in protections that aren't enabled by default. Turn on advanced phishing and malware protection. It takes ten minutes.
Do This Week (A Few Hours)
• Run a basic security awareness session with your team. It doesn't have to be formal. Show them examples of AI-generated phishing emails. Explain that "perfectly written" no longer means "legitimate." Walk through the verification policy. Forty-five minutes will make a real difference.
• Audit your password situation. Move to a password manager if you haven't already. LastPass, 1Password, Bitwarden. Credential-focused attacks made up 62% of all identity-based incidents in the first half of 2025. Most of those succeeded because people reused passwords.
• Review who has access to what. Does your former bookkeeper still have access to your accounting software? Does that intern from last summer still have a company email address? Clean it up.
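If you want to see how bad the password reuse problem is before fixing it, most password managers can export your vault to CSV, and a few lines of Python will group the accounts that share a password. The column names below ('name', 'password') are an assumption; Bitwarden and 1Password exports use similar but not identical fields, so adjust to match your manager's format.

```python
import collections
import csv
import hashlib


def find_reused_passwords(csv_path):
    """Group account names that share an identical password.

    Assumes a CSV export with 'name' and 'password' columns (adjust to
    your manager's export format). Passwords are reduced to truncated
    hashes so the plaintext never appears in the report itself.
    """
    groups = collections.defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            fingerprint = hashlib.sha256(row["password"].encode()).hexdigest()[:12]
            groups[fingerprint].append(row["name"])
    # Keep only fingerprints shared by two or more accounts.
    return {fp: names for fp, names in groups.items() if len(names) > 1}
```

Delete the export file as soon as you're done; a plaintext copy of every password is exactly the artifact an attacker hopes to find.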
Do This Month (Longer-Term Protection)
• Get cyber insurance if you don't have it. It's getting harder to obtain (nearly a quarter of companies reported difficulty getting or renewing policies in 2025), which is all the more reason to lock it in now while you can. Talk to your insurance broker about what's covered and what's not.
• Set up automated backups with offline copies. Ransomware specifically targets your backups. If your only backup is connected to your network, it's vulnerable. Keep at least one copy completely offline or in a separate, isolated cloud account.
• Consider an AI-powered security tool. Companies using AI-driven security platforms report detecting threats up to 60% faster. For small businesses, managed security service providers (MSSPs) can provide enterprise-level protection at a fraction of the cost. Services like Guardz, SentinelOne, and others now specifically target the SMB market.
• Create a simple incident response plan. Write down: who do we call if we get breached? What accounts do we lock first? Who contacts our clients? Having this documented before you need it saves critical hours when every minute counts.
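The backup step above can be sketched in a few lines. This is a minimal illustration, not a product: it zips a folder into a dated archive and prunes old copies. The crucial part isn't the code but the discipline around it: the destination should be a drive you disconnect afterward, or an isolated cloud account, precisely because ransomware hunts for backups it can reach over the network.

```python
import datetime
import pathlib
import shutil


def backup(data_dir, dest_dir, keep=7):
    """Zip data_dir into a dated archive under dest_dir; keep the newest `keep`.

    The on-site half of a backup routine: after this runs, disconnect
    the destination drive (or sync it to an isolated account) so
    malware on the network can't encrypt the copies too.
    """
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = shutil.make_archive(str(dest / f"backup-{stamp}"), "zip", data_dir)
    # ISO dates sort lexicographically, so oldest archives come first.
    for old in sorted(dest.glob("backup-*.zip"))[:-keep]:
        old.unlink()
    return pathlib.Path(archive)
```

Schedule something like this daily, and test a restore every so often; a backup you've never restored from is a hope, not a plan.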
The Bigger Point Here
I want to be careful not to make this sound like the sky is falling. AI is an extraordinary tool for business, and I spend most of my time helping business owners use it well. But there's a reality we can't ignore: the same capabilities that make AI useful for writing proposals, analyzing data, and automating workflows also make it useful for crafting AI-powered cyberattacks.
The World Economic Forum's Global Cybersecurity Outlook 2026 reported that 73% of survey respondents were personally affected by cyber-enabled fraud in 2025. CEOs now rank it as their top concern, ahead of ransomware. And 94% of cybersecurity experts identified AI as the most significant driver of change in their field.
But here's the part that gives me some optimism. The AI companies are investing heavily in detection and disruption. OpenAI has publicly disrupted over 40 malicious networks. Anthropic's safety systems now block 99.2% of harmful requests. Google, Microsoft, and Meta are all publishing threat intelligence and collaborating across platforms. The 2026 International AI Safety Report brought together contributions from Anthropic, Google DeepMind, IBM, Meta, Microsoft, OpenAI, and dozens of other organizations.
The arms race between attackers and defenders is real. But unlike previous technology shifts, this time the defenders have access to the same AI capabilities as the attackers. The companies building these tools have financial incentives, regulatory pressure, and (in many cases) genuine conviction to make them safer.
Your job as a business owner isn't to become a cybersecurity expert. It's to take reasonable precautions, stay informed about how the threats are evolving, and build a culture in your company where security is treated as seriously as locking the office at night.
Start with MFA and a verification policy. Those two things alone will put you ahead of most small businesses. Then work through the rest of the list at a pace that makes sense for your operation.
Good decisions start with good information. Galyx is built for business owners who know AI matters and need a technology partner who actually speaks their language and solves real business problems. Galyx focuses on practical guidance you can use now.
Register at Galyx.com for weekly insights on making AI work for your business.