Lawyers are, rightly, one of the most sceptical audiences for any new technology. The profession has seen too many transformational promises fizzle — remember when blockchain was going to revolutionise contracts? — and the stakes of getting it wrong are high. A buggy tool in a marketing department is embarrassing. A buggy tool in a deposition is a malpractice claim.
So when I talk to lawyers about AI, I try to start from respect for that scepticism. The ones using these tools thoughtfully are finding real value. The ones using them thoughtlessly are already generating cautionary tales.
Let me share what I've learned from sitting with both camps.
Where it's helping
Document review at scale. The old model: a small army of associates reading through discovery boxes for months. The new model: an AI tool (technology-assisted review, in e-discovery parlance) that ranks documents by relevance, flags the interesting ones, and leaves the associates to focus on the real analysis. Major firms have been using this for nearly a decade now. It's boring. It works.
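For the technically curious, here's a toy sketch of the idea underneath relevance ranking. It's nothing like a production e-discovery platform (those use trained classifiers, sampling protocols, and defensibility workflows); it just shows the shape of the trick: score every document against a description of the issues, then review in score order instead of box order. The documents and issue description are made up for illustration.

```python
# Toy relevance ranking with TF-IDF, for illustration only.
# Production e-discovery tools are far more sophisticated, but the
# core move is the same: score documents against the issues in the
# case, then review the highest-scoring ones first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Email chain discussing late deliveries under the Q3 supply agreement.",
    "Catering invoice for the office holiday party.",
    "Memo on indemnification exposure under the 2021 master agreement.",
]
issues = "late delivery and indemnification under the supply agreement"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
issue_vector = vectorizer.transform([issues])

scores = cosine_similarity(issue_vector, doc_vectors).flatten()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```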
Contract review. First-pass analysis of routine commercial contracts — flagging unusual clauses, comparing against playbook provisions, highlighting missing terms — is something modern LLMs do surprisingly well. A senior associate can now review in an afternoon what used to take a week. The cognitive relief this provides is substantial.
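If you're curious what "first-pass analysis" looks like in code, here's a minimal sketch against a generic LLM API. Everything in it is a placeholder: the model name, the prompt, the playbook summary. Assume an enterprise deployment with proper data-handling terms (more on that below), and treat the output as a starting point for the lawyer, never the finish line.

```python
# Minimal sketch of first-pass contract review via an LLM API.
# Placeholder model name and prompt; assumes an enterprise deployment
# with appropriate data-handling terms. A lawyer reviews everything
# the model flags, and everything it doesn't.
from openai import OpenAI

client = OpenAI()  # API key read from the OPENAI_API_KEY environment variable

PLAYBOOK = (
    "Standard positions: 12-month term, mutual indemnification, "
    "liability capped at fees paid, New York governing law."
)

def flag_unusual_clauses(contract_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use your firm's approved model
        messages=[
            {
                "role": "system",
                "content": "You review commercial contracts against a playbook. "
                           "List clauses that deviate from it, look unusual, "
                           "or are missing entirely.",
            },
            {
                "role": "user",
                "content": f"Playbook:\n{PLAYBOOK}\n\nContract:\n{contract_text}",
            },
        ],
    )
    return response.choices[0].message.content
```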
Research. Legal research tools with good AI behind them (Thomson Reuters, LexisNexis, and a few independents) are genuinely better at finding relevant cases than the previous generation of keyword search. The trick is knowing they still make mistakes: the lawyer still has to read the cases, not just trust the summary.
Drafting routine documents. Templated letters, first drafts of motions, summary memos. AI is a competent junior who produces a first draft you then edit into the real thing. This is useful — provided nobody forgets to edit.
Where it's causing trouble
Some of this you've read about in the news. It's worth understanding why these failures happen, because the patterns repeat.
Hallucinated citations. In 2023, a lawyer filed a brief citing six cases that did not exist (the widely reported Mata v. Avianca matter). He'd leaned on ChatGPT for the research, and the model confabulated plausibly-named authorities. This is the canonical AI-in-law failure mode. It will keep happening unless lawyers internalise: generative AI will confidently make things up, and you must personally verify every citation.
Privilege and confidentiality. Many lawyers don't realise that typing a client's information into a consumer AI tool may waive privilege, leak confidential data into training sets, and possibly breach professional conduct rules. The fix: use enterprise versions with clear data-handling agreements, or self-hosted models for sensitive matters.
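To make one small piece of that concrete: before any text leaves the firm, identifying details should be stripped, or the matter kept on infrastructure you control. Here's a deliberately crude sketch of the first idea. The client name and patterns are hypothetical, and real confidentiality controls need far more than regexes (enterprise agreements, access logging, matter-level approvals), but the principle is that sensitive identifiers never reach an external tool.

```python
# Crude redaction sketch, illustration only. The client name and
# patterns are hypothetical; real redaction pipelines need named-entity
# recognition, human review steps, and audit trails, not just regexes.
import re

REDACTIONS = {
    r"\bAcme Holdings\b": "[CLIENT]",       # hypothetical client name
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",      # US social security numbers
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",  # email addresses
}

def redact(text: str) -> str:
    for pattern, replacement in REDACTIONS.items():
        text = re.sub(pattern, replacement, text)
    return text

print(redact("Contact j.doe@acme.com re: Acme Holdings, SSN 123-45-6789."))
# -> Contact [EMAIL] re: [CLIENT], SSN [SSN].
```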
Over-reliance on tools for judgment. AI can flag clauses that look unusual. It can't tell you whether they matter for this deal, this client, this risk tolerance. The lawyering is still the lawyering. Tools are tools.
Bias in predictive tools. Some vendors market AI that predicts litigation outcomes, sentencing ranges, or parole decisions. These tools inherit the biases of the historical data they're trained on: if past sentencing was skewed against a group, the predictions will be skewed the same way. Jurisdictions are appropriately nervous about this, and so should you be.
The ethical baseline
A few things that show up in the professional conduct rules of most jurisdictions, and that bear repeating:
- Competence. Lawyers are obligated to be competent in the technology they use. If you're using an AI tool, you're obligated to understand its limitations.
- Confidentiality. Client information goes into AI tools only under carefully designed conditions.
- Supervision. If an associate uses AI to draft, a partner is still professionally responsible for the final product.
- Candour. Judges are increasingly asking lawyers whether AI was used. Be ready to answer honestly.
What I'd recommend to a firm starting out
- Pick one workflow, not ten. Contract review is a good starting point. It's bounded, low-risk (the lawyer still reads everything), and the ROI is visible.
- Pay for enterprise tools. The consumer tools are tempting because they're free, but the data-handling terms are not suitable for client work.
- Train the team — and include the sceptics. The sceptics will find the failure modes faster, which is exactly what you want.
- Write a policy. A short, clear, firm-wide policy on how AI is used, what data can be entered, what must be disclosed to clients. Review annually.
A quiet optimism
The juniors I know in law are, on the whole, more enthusiastic about AI than the partners. Partly because they're closer to the tools. Partly because they see how much of their time it could free up for genuinely interesting work — the thinking work, the arguing work, the talking-to-humans work that made them want to be lawyers in the first place.
That's the version of AI in law that excites me. Not robot judges. Lawyers with more time to be lawyers.
If you're at a firm thinking about this, we'd love to chat. We've helped a few legal teams sort out their AI strategy, and we'll be honest about what we've seen work.