Researchers and university leaders are warning that AI narrows scientific research in ways that may not show up in headline productivity gains. In a new Financial Times opinion piece, Geraint Rees, vice-provost of research, innovation, and global engagement at University College London, argues that artificial intelligence is helping scientists work faster while also pushing inquiry toward problems that already have abundant, well-structured data.
That concern lands at a moment when AI has become more visible across labs, universities, and journals. The same FT piece points to recent scientific momentum in AI, including major advances in protein structure prediction, while arguing that the scientific community should not remain silent about the technology’s unintended effects on the kinds of questions asked in the first place.
Data-Rich Fields Gain While Harder Problems Risk Neglect
The central warning is not that AI weakens science across the board. It is that AI often performs best in areas where researchers already have large, clean, standardized datasets. That creates a practical incentive to focus on fields where machine learning can deliver faster results, clearer benchmarks, and more publishable outcomes. According to the piece, that shift risks contracting the scope of inquiry as research drifts toward data-rich problems and away from harder or less documented ones.
The article cites research from Tsinghua University suggesting that AI use can reduce research diversity, including a reported 5% decline in topic breadth and weaker interdisciplinary interaction. Rees argues that this matters because science does not advance only by refining well-mapped areas. It also depends on people choosing uncertain, messy, or poorly measured problems that may resist current AI methods but still hold major scientific value.
Why Scientists Are Being Urged to Speak Up
Rees’s argument goes beyond technical efficiency and into research policy. His core message is that silence from scientists could allow funding systems, institutions, and publishers to reward only the areas where AI already works well. If that happens, the long-term danger is not just concentration of effort, but a narrowing of ambition across the research system.
Other recent commentary on AI and science echoes that concern from different angles. An Undark analysis republished by Gavi argues that AI may reshape not only how research is produced but also how it is reviewed and communicated. A January essay in Social Science Space contends that current systems still struggle with the parts of science that involve judgment, interpretation, and deciding which questions matter most. Together, these critiques point to a common theme: AI can accelerate parts of science without replacing the human role in setting direction and meaning.
The Debate Is Shifting Toward What Science Should Prioritize
Rees argues that one response is to invest more aggressively in data collection in underdeveloped fields, especially where weak datasets currently block discovery. In the FT piece, he points to areas such as blood-brain barrier research as examples where richer data infrastructure could widen what AI can help explore, rather than letting the technology reinforce existing imbalances. He also argues that AI itself could help identify knowledge gaps and guide future data-gathering efforts if researchers use it strategically.
That makes the debate less about whether AI belongs in science and more about how institutions shape its influence. If AI narrows scientific research, the answer may not be less AI, but better incentives, broader data-building efforts, and louder public engagement from scientists about what could be lost when speed and convenience start to determine the frontier of discovery.