My favourite description of what AI is doing in science right now came from a biologist I met at a conference last year. She'd been using an AI system to help her analyse protein structures, and she put it like this:
"It's like having a postdoc who's read every paper ever written, doesn't need sleep, doesn't need coffee, and isn't competing with me for lab space. I still have to ask the right questions. But the time between asking and knowing is now measured in minutes instead of months."
That's the version of AI in science I want to tell you about — the patient, tireless, deeply well-read collaborator. Not the one that "discovers" things on its own. The one that accelerates the humans doing the work.
What's genuinely changing
Protein structure prediction. This is probably the most celebrated story in AI-in-science, and deservedly so. DeepMind's AlphaFold, open-sourced in 2021, effectively solved a fifty-year-old problem: given a protein's amino acid sequence, predict its three-dimensional structure. Biologists who used to spend years on a single structure now get predictions in minutes. The whole field is working differently.
Materials discovery. New materials — battery electrolytes, solar cell coatings, catalysts for green chemistry — have historically been found through laborious trial and error. AI-assisted screening lets researchers propose candidate materials computationally and only test the most promising ones. Google DeepMind's GNoME, released in 2023, proposed 2.2 million new crystal structures, of which ~380,000 look stable and potentially useful. Those are being tested now.
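The screening idea is simple enough to sketch. Here's a toy illustration in Python (this is not GNoME's actual pipeline; the candidate features and the surrogate score are made up): generate a large candidate pool, score everything cheaply with a surrogate model, and send only the top slice to the lab.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate pool: each row is a feature vector describing a
# candidate material (composition descriptors, lattice parameters, etc.).
candidates = rng.normal(size=(10_000, 8))

def surrogate_stability(x):
    """Stand-in for a trained ML property predictor (lower = more stable).
    A toy convex score, NOT a real physics model."""
    return (x ** 2).sum(axis=1)

scores = surrogate_stability(candidates)

# Keep only the most promising 1% for expensive lab validation.
top_k = 100
shortlist = np.argsort(scores)[:top_k]

print(f"screened {len(candidates)} candidates, shortlisted {top_k}")
```

The point is the ratio: ten thousand candidates scored in milliseconds, a hundred sent to the bench. The surrogate doesn't have to be perfect, only good enough that the shortlist is far richer in winners than a random sample would be.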
Literature review. The research literature has become too large for any individual to read. Tools like Elicit, Consensus, and SciSpace help researchers navigate the literature, identify relevant papers, and surface contradictions. They're not replacing careful reading; they're making sure the careful reading starts in the right place.
Experimental design. Some experimental questions have vast design spaces — which combinations of reagents, which temperatures, which order? Bayesian optimisation and related AI techniques help scientists run fewer, more informative experiments. Labs that used to need a thousand tries are finishing in fifty.
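To make that concrete, here's a minimal Bayesian-optimisation loop in Python: a small Gaussian-process surrogate plus an expected-improvement rule chooses each next "experiment". The objective function is a made-up stand-in for a real experiment, and the kernel length scale and iteration count are illustrative, not tuned.

```python
import numpy as np
from scipy.stats import norm

def objective(x):
    """Hypothetical expensive experiment (e.g. yield vs. temperature); minimise."""
    return np.sin(3 * x) + 0.5 * x

X_grid = np.linspace(0.0, 2.0, 200)  # candidate experimental settings

def gp_posterior(X_obs, y_obs, X_q, length=0.3, noise=1e-6):
    """Zero-mean GP regression with a squared-exponential kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = k(X_q, X_obs)
    mu = K_s @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI for minimisation: how much below the current best we expect to land."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Two cheap initial measurements, then eight adaptively chosen ones.
X_obs = np.array([0.2, 1.8])
y_obs = objective(X_obs)
for _ in range(8):
    mu, sigma = gp_posterior(X_obs, y_obs, X_grid)
    x_next = X_grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    X_obs = np.append(X_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = X_obs[np.argmin(y_obs)]
print(f"best setting ~{best_x:.2f}, best observed value {y_obs.min():.3f}")
```

Ten "experiments" instead of two hundred: the acquisition function trades off exploring uncertain regions against exploiting promising ones, which is exactly the fewer-but-more-informative-experiments idea.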
Drug discovery. Candidate compounds can be proposed by generative models, screened computationally for predicted safety and efficacy, and prioritised for wet-lab testing. Insilico Medicine's AI-discovered candidate for idiopathic pulmonary fibrosis reached Phase II clinical trials, going from program start to human testing in roughly 30 months, a pace almost unheard of in pharma.
What AI in science is not doing
Replacing scientists. The scientific method — hypothesising, designing experiments, interpreting results, arguing about them — is deeply human. AI is a tool that accelerates parts of it. It doesn't and can't replace the judgment, taste, and curiosity that great scientists bring.
Producing original conceptual breakthroughs on its own. AI systems can find patterns in data and propose hypotheses, but the really creative leaps — noticing that something that looked like noise was actually signal, connecting two apparently unrelated fields — still come from humans. This might change. It hasn't yet.
Fixing bad science. An AI system trained on flawed data will produce flawed predictions. An AI literature-review tool that over-weights popular papers will just reinforce popularity bias. These tools make scientists more productive; they don't make poor research practices okay.
A few honest challenges
Reproducibility. AI-assisted research papers are sometimes irreproducible because the model version, training data, or random seeds weren't documented. The field is working on this, but it's a real issue. Treat AI predictions in papers with appropriate scepticism.
Access. The most capable AI models require GPUs and datasets that only wealthy institutions can afford. This is rapidly creating a new "have" and "have-not" divide in science. Open-source alternatives (AlphaFold, Llama, etc.) help, but don't fully close the gap.
Over-claiming. Press releases announcing that "AI makes breakthrough discovery" often oversimplify what actually happened. Usually: humans asked a good question, AI helped narrow the search space, and humans did the careful experimental work that produced the result. Read the papers, not the press.
What I'd want to tell a research lab considering AI
- Start with your literature review. Not flashy. High ROI. Tools like Elicit can save a PhD student months.
- Pair AI with skilled humans. The research groups making the biggest gains are the ones where the computational and experimental sides eat lunch together.
- Don't skip the experimental validation. AI predictions are hypotheses, not conclusions. The lab bench still has the final word.
- Document your pipelines. Future you, and future reviewers, will need to reproduce what you did. Save everything.
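On that last point, the documentation habit can be as small as writing a run manifest next to every result. A minimal sketch in Python (the `run_manifest` helper and its fields are just one possible layout, not a standard):

```python
import hashlib
import json
import platform
import sys

def run_manifest(seed, data_path=None, extra=None):
    """Collect the minimum a future reviewer needs to rerun this analysis."""
    manifest = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": seed,
        "argv": sys.argv,
    }
    if data_path is not None:
        # Hash the input data so "same file" is checkable, not a promise.
        with open(data_path, "rb") as f:
            manifest["data_sha256"] = hashlib.sha256(f.read()).hexdigest()
    if extra:
        manifest.update(extra)  # model version, prompt, hyperparameters, ...
    return manifest

manifest = run_manifest(seed=42, extra={"model": "example-model-v1"})
print(json.dumps(manifest, indent=2))
```

Write one of these JSON files alongside every figure and table. It costs seconds per run and is exactly the record that's missing when an AI-assisted result can't be reproduced.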
A personal bet
If I were twenty-five and deciding what kind of scientist to be, I'd be spending time learning to work alongside AI tools — not because I think AI will replace scientists, but because scientists who collaborate well with AI are going to be vastly more productive than those who don't. And being productive, in science, means being able to pursue more of the questions you're curious about.
That's a wonderful trade.
If you're thinking about how AI fits into your lab or research programme, we'd love to think it through with you. Some of our favourite projects have been with scientists.