My friend Meera is a radiologist. She's been at it for twenty years, and she's very good at her job — so good, in fact, that until recently she was quietly suspicious of the AI tools her hospital kept trying to introduce.
"It's not that I don't think they work," she told me over coffee last winter. "It's that most of them are trying to replace me when what I actually need is a second pair of eyes on the last scan of a long shift."
That line has stuck with me, because it's the kindest summary I've heard of where AI fits in healthcare. Not as a replacement for the clinician. As a companion. As the colleague who's still fresh when you're tired.
What it actually looks like
Most of the useful AI in healthcare today is unglamorous. It's not making diagnoses. It's flagging a spot on an X-ray that sits in an awkward place behind the heart shadow. It's noticing that a patient's overnight vitals are drifting in a way that, on their own, would be easy to miss. It's scheduling appointments in an order that means fewer people wait three weeks to be told they're fine.
I visited a community clinic in rural Georgia last spring that had one of these systems installed. A nurse walked me through how they used it to screen for diabetic retinopathy — a preventable cause of blindness that usually gets caught too late in underserved communities, because there aren't enough specialist ophthalmologists to go round.
The system takes a photograph of the back of the eye, runs it through a trained model, and flags anyone who needs to see a specialist. In six months, it had picked up over 200 cases of early-stage disease that would otherwise have been found only after vision loss started.
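That photograph-to-referral flow is simple enough to sketch. Everything below is illustrative: `score_retinopathy` stands in for whatever validated model a real deployment ships, and the threshold is a made-up number, not a clinical cut-off.

```python
# Hypothetical sketch of a retinopathy screening triage step.
# score_retinopathy stands in for a real trained classifier;
# the threshold and patient records are made up for illustration.

REFERRAL_THRESHOLD = 0.5  # illustrative, not a clinical value

def score_retinopathy(image_pixels):
    """Placeholder for a trained model returning a risk score in [0, 1]."""
    # A real system would run a validated model here; we derive a
    # fake score from pixel brightness so the sketch is runnable.
    return sum(image_pixels) / (255 * len(image_pixels))

def triage(patients):
    """Flag patients whose fundus photo scores above the referral threshold."""
    referrals = []
    for patient_id, image_pixels in patients:
        score = score_retinopathy(image_pixels)
        if score >= REFERRAL_THRESHOLD:
            referrals.append((patient_id, round(score, 2)))
    return referrals

patients = [("p-001", [30, 40, 50]), ("p-002", [200, 220, 240])]
print(triage(patients))  # only the high-scoring patient is referred
```

The important design point is that the output is a referral list for a specialist, not a diagnosis: the model narrows the queue, and a human makes the call.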
There's no drama in that story. Nobody's being disrupted. But two hundred people are going to keep their sight.
The bits that are harder
I'll be honest about the bits that worry me, because they worry the clinicians I trust, too.
Bias in the training data. If the data used to train a diagnostic model is mostly from white, male, American patients, the model will quietly work worse for everyone else. This isn't a hypothetical — it's happened, repeatedly, and the fix is patient, careful work on what data we collect and from whom.
Automation without accountability. The worst version of AI in healthcare is the one where a clinician signs off on a machine's recommendation without really engaging with it, because the caseload is punishing and the tool is convenient. Every good deployment I've seen treats the AI as a suggestion, never a verdict.
Privacy, properly done. Medical data is some of the most sensitive information about a person that exists. If you're looking at AI in a clinical setting, the privacy conversation isn't a checkbox — it's the starting point.
What we've learned, working with healthcare teams
A few things that show up again and again when we help hospitals and clinics think about AI:
- Start with administration, not diagnosis. Appointment scheduling, prior authorisation, documentation. These aren't glamorous, but they're where the fastest, least-risky wins live, and the team will thank you for them.
- Pilot in parallel, not in place of. Keep the current process running while the AI runs alongside it. Compare notes for a few months. Then decide.
- Let clinicians shape the tool. The hospitals that love their AI are the ones where a senior nurse spent three afternoons with the product team, arguing about the alert UI. That argument is the whole secret.
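The "pilot in parallel" advice amounts to logging both tracks on the same cases and comparing afterwards. A minimal sketch of that comparison, with made-up case records and field names:

```python
# Minimal sketch of a parallel pilot: the existing process and the AI
# both review the same cases, and we tally where their flags agree
# or differ. Records and field names are invented for illustration.

def compare_tracks(cases):
    """Tally agreement and disagreement between the AI and the current process."""
    tallies = {"both_flagged": 0, "ai_only": 0, "process_only": 0, "neither": 0}
    for case in cases:
        ai, proc = case["ai_flagged"], case["process_flagged"]
        if ai and proc:
            tallies["both_flagged"] += 1
        elif ai:
            tallies["ai_only"] += 1
        elif proc:
            tallies["process_only"] += 1
        else:
            tallies["neither"] += 1
    return tallies

pilot_log = [
    {"case": 1, "ai_flagged": True,  "process_flagged": True},
    {"case": 2, "ai_flagged": True,  "process_flagged": False},
    {"case": 3, "ai_flagged": False, "process_flagged": False},
]
print(compare_tracks(pilot_log))
```

The "ai_only" and "process_only" buckets are where the real conversation happens: each one is a case for clinicians to review before anyone decides whether the tool earns a permanent place.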
What I keep coming back to
Meera, my radiologist friend, eventually did start using one of the AI tools. It catches a couple of things a year that she thinks she'd have missed — not because she's bad at her job, but because nobody's perfect at 11pm after twelve hours of scans.
She still catches things the AI misses, too, of course. That's the point. It's not about one replacing the other. It's about both of them having each other's backs.
That's the version of AI in healthcare that excites me. Not the flashy demo. The quieter one, where people are better-rested, fewer things get missed, and nobody has to lose their sight to a missed screening.
If you're a clinician thinking about where AI might help, or a health-tech team trying to ship something genuinely useful, come and talk to us. We'd love to think it through with you.