If you run security for an organisation, you've probably sat through more "AI-powered cybersecurity" vendor pitches than you care to count. This post is an attempt to cut through that — to tell you what AI is actually worth buying, what's mostly marketing, and what you can build yourselves.
I'm writing this for a working practitioner, not a board member. We can skip the threat-landscape slides.
Where AI earns its budget
Alert triage. Every SOC I've worked with is drowning. Most alerts are false positives, and the real ones get buried. A second-stage model that re-ranks alerts based on likelihood of being a real threat is often the single highest-ROI defensive AI investment you can make. You keep your existing detection stack; you just prioritise better.
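To make the re-ranking idea concrete, here is a minimal sketch of a second-stage triage model. Everything in it is illustrative: the feature names, the tiny hand-labelled training set, and the choice of a gradient-boosted classifier are assumptions, not any vendor's actual method.

```python
# Hypothetical second-stage triage: score alerts by estimated
# probability of being a true positive, then re-rank the queue.
# Features and labels below are invented for illustration.
from sklearn.ensemble import GradientBoostingClassifier

# Each alert as a feature vector:
# [severity, asset_criticality, hits_on_user_last_7d, is_off_hours]
X_train = [
    [3, 1, 0, 0],  # low severity, quiet account
    [5, 3, 4, 1],  # high severity, noisy account, off-hours
    [2, 1, 1, 0],
    [4, 3, 3, 1],
    [1, 1, 0, 0],
    [5, 2, 5, 1],
]
y_train = [0, 1, 0, 1, 0, 1]  # analyst-confirmed outcomes

model = GradientBoostingClassifier().fit(X_train, y_train)

# Re-rank today's queue: highest estimated risk first.
queue = [[2, 1, 0, 0], [5, 3, 4, 1], [3, 2, 2, 1]]
scores = model.predict_proba(queue)[:, 1]
for score, alert in sorted(zip(scores, queue), reverse=True):
    print(f"{score:.2f}  {alert}")
```

The point is the shape of the system, not the model: your detection stack stays as-is, and the analyst works the queue top-down instead of chronologically.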
The good vendors in this space include Exabeam, Splunk's UBA, and Microsoft Sentinel's Fusion. The model quality is similar across them; the integration story is what differentiates them.
Phishing detection. Modern language models are surprisingly good at detecting social engineering, even when the grammar and branding look right. This shows up in both commercial products (Proofpoint, Abnormal, Microsoft Defender) and open source tools. Worth deploying if you aren't already.
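If you want a feel for what automated phishing scoring looks like before reaching for a commercial product, here is a deliberately simple baseline: TF-IDF plus logistic regression over message text. This is not a language model and nowhere near production quality; the training messages are invented and the corpus is six examples, purely to show the workflow.

```python
# Toy baseline for phishing scoring: TF-IDF + logistic regression.
# Training data is invented; a real system needs a labelled corpus
# (or a modern language model doing the classification).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice is attached, see you at the meeting",
    "Quarterly report draft for review",
    "Urgent: verify your account now or it will be suspended",
    "Your mailbox is full, click here to confirm your password",
    "Lunch on Thursday?",
    "Immediate action required: wire transfer approval needed today",
]
labels = [0, 0, 1, 1, 0, 1]  # 1 = phishing

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

tests = [
    "Urgent: confirm your password to avoid suspension",
    "Lunch on Friday works for me",
]
scores = clf.predict_proba(tests)[:, 1]
for text, score in zip(tests, scores):
    print(f"{score:.2f}  {text}")
```

What LLM-based products add over a baseline like this is exactly what the paragraph above describes: they catch messages where the grammar and branding look right and only the intent is off.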
Endpoint behavioural analysis. EDR tools that use ML to spot anomalous process trees, unusual registry access, or suspicious network connections have become genuinely good. CrowdStrike, SentinelOne, Microsoft Defender — all have capable offerings. The differentiator is less the AI and more the operational maturity (response actions, rollback, integration with your SIEM).
Log analytics. Anomaly detection on logs — spotting the unusual login, the off-hours database dump, the slow-and-low lateral movement — is now well within reach of in-house teams using tools like Elastic Security or a notebook and scikit-learn. You rarely need a $500k product to do basic UEBA; at very large scale you might, but most organisations don't.
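The notebook-and-scikit-learn approach really is this small to start. A hedged sketch, with made-up feature names and data: fit an isolation forest on normal login behaviour, then flag events that sit far outside it.

```python
# Minimal in-house log anomaly detection with scikit-learn.
# Features per login event are illustrative:
# [hour_of_day, megabytes_out, distinct_hosts_touched]
from sklearn.ensemble import IsolationForest

normal_logins = [
    [9, 2, 1], [10, 1, 1], [11, 3, 2], [14, 2, 1],
    [15, 1, 1], [16, 4, 2], [9, 2, 2], [13, 3, 1],
]
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login moving 500 MB across 12 hosts should stand out.
events = [[10, 2, 1], [3, 500, 12]]
print(model.predict(events))  # -1 marks an anomaly
```

A real pipeline needs feature engineering, per-user baselines, and a feedback loop for analyst verdicts, but the core technique is no more exotic than this.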
Code analysis. AI code review that flags injection vulnerabilities, hard-coded secrets, and common OWASP mistakes is a good use of AI. GitHub Advanced Security, Semgrep, Snyk — all have AI-assisted checks now. Not a substitute for human pen-testing; a useful layer before it.
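For intuition about one slice of this — hard-coded secrets — here is a toy scanner. The patterns are simplistic and invented for illustration; real tools such as Semgrep or GitHub's secret scanning use far richer rule sets and validation.

```python
# Toy hard-coded-secret scanner: the kind of check AI-assisted
# code review layers on top of. Patterns are illustrative only.
import re

SECRET_PATTERNS = [
    # variable names that suggest credentials, assigned a long literal
    re.compile(r"""(?i)(api[_-]?key|secret|passwd|password)\s*=\s*["'][^"']{8,}["']"""),
    # the shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_secrets(source: str) -> list[str]:
    """Return lines that look like hard-coded credentials."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

code = '''
db_password = "hunter2hunter2"
timeout = 30
key = "AKIAIOSFODNN7EXAMPLE"
'''
print(find_secrets(code))
```

The gap between this and a commercial product is precision at scale — fewer false positives across millions of lines — which is exactly where the AI-assisted checks earn their keep.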
Where I'd be more careful
Autonomous response. "AI automatically contains breaches" is a feature I'd deploy cautiously. Automated containment that gets it wrong can itself be catastrophic — taking down production systems, locking out legitimate users, triggering cascading outages. Run these tools in "suggest" mode for a long time before promoting them to "act" mode.
Predictive threat intelligence. Tools that claim to predict tomorrow's novel attacks are mostly selling aggregated feeds with a thin AI wrapper. Useful, but not miraculous. Don't pay miracle prices.
"AI-generated detection rules." Some vendors claim their AI writes custom Sigma or YARA rules tuned to your environment. In practice, these rules are often mediocre and generate noise. A human analyst writing the rules is still usually better.
Deepfake detection for video. The arms race here is moving too fast for any commercial product to keep up consistently. Useful for forensic analysis after the fact; don't rely on it for real-time decisions.
What AI is doing for attackers
Honest note, because it matters.
Attackers are using AI to write more convincing phishing emails and business email compromise pretexts, at scale, in any language. They're using voice cloning for vishing attacks. They're using code-generation tools to accelerate malware development. These are real.
But the defensive fundamentals haven't changed. If your organisation has:
- MFA everywhere (especially on admin accounts)
- Reasonable patching cadence
- Sensible access controls
- A culture where people feel safe reporting "this email looks weird"
... you're substantially protected from all of this. AI makes the attacker's life easier in a few ways. It doesn't suddenly make your controls less effective.
What I'd prioritise if I were starting over
If you're a mid-sized company building a security posture from a modest starting point, here's my suggested order:
1. MFA for all users, phishing-resistant for admins. Do this first, before anything AI-flavoured.
2. A managed EDR. CrowdStrike Falcon, Microsoft Defender for Business, SentinelOne. Whatever fits your budget.
3. Email security with AI-assisted phishing detection. Abnormal, Microsoft Defender for Office, Proofpoint.
4. A SIEM you actually use, with AI-assisted alert triage. Whatever you can operationalise.
5. Regular security training and phishing drills. Humans are still your best sensor.
Notice that only items 3 and 4 involve paying for AI specifically. Items 1, 2, and 5 are fundamentals that are worth far more than any "AI cybersecurity platform."
A small, unglamorous truth
The best security AI I've seen isn't the one that does something dramatic. It's the one that lets an overworked analyst go home on time, because the alert queue is manageable and the signals they're looking at are mostly real.
That's the standard I measure defensive AI against. "Does it help my humans?" If yes, keep it. If no, skip it.
If you'd like a second opinion on where AI fits in your security stack — we're happy to help. Honest answers, even when they're "you don't need that."