
A new whitepaper from Frontiers shows that AI has rapidly become part of everyday peer review, with 53 percent of reviewers now using AI tools. The findings in “Unlocking AI’s untapped potential: responsible innovation in research and publishing” point to a pivotal moment for research publishing.
Drawing on insights from 1,645 active researchers worldwide, the whitepaper identifies a global community eager to use AI confidently and responsibly. While many reviewers currently rely on AI for drafting reports or summarizing findings, the report highlights significant untapped potential for AI to support rigor, reproducibility, and deeper methodological insight.
The survey, conducted in May and June 2025, is the first large-scale study to examine AI adoption, trust, training, and governance within authoring, reviewing, and editorial workflows.
Kamila Markram, Chief Executive Officer and Co-founder of Frontiers, said, “AI is transforming how science is written and reviewed, opening new possibilities for quality, collaboration, and global participation. This whitepaper is a call to action for the whole research ecosystem to embrace that potential. With aligned policies and responsible governance, AI will strengthen the integrity of science and accelerate discovery”.
However, the transformation reveals both promise and limitations. Most usage remains limited in scope, with reviewers relying on AI primarily to draft reports, improve clarity, and summarize manuscripts. Only about 19 percent use AI to evaluate methodology, statistical validity, or experimental design, areas traditionally considered the intellectual core of peer review.
The study shows broad enthusiasm for using AI more effectively, especially among early-career researchers, 87 percent of whom reported use, and within rapidly growing research regions, including China at 77 percent and Africa at 66 percent.
Elena Vicario, Director of Research Integrity at Frontiers, said, “AI is already improving efficiency and clarity in peer review, but its greatest value lies ahead. With the right governance, transparency and training, AI can become a powerful partner in strengthening research quality and increasing trust in the scientific record”.
The research reveals what experts call a “trust paradox.” While many scientists agree that AI can improve manuscript quality, 57 percent say they would be unhappy if a reviewer used AI to write peer review reports on their own manuscripts. That number drops to 42 percent when AI is used merely to augment reports.
Additionally, 72 percent of respondents believe they could accurately detect an AI-written peer review report on a manuscript they had authored, though research suggests this confidence may be misplaced.
When responses are analyzed by career stage, junior researchers tend to view the impact of generative AI more positively than their senior colleagues. In total, 48 percent of junior researchers thought AI would have a positive impact on peer review, compared with 34 percent of senior researchers.
In a foreword to the paper, Markram notes that AI is often used in peer review for surface-level tasks, such as polishing language, drafting text, or handling administration, rather than for the deeper analytical and methodological work where it could truly elevate rigor, reproducibility, and scientific discovery.
The report calls for coordinated action across the research ecosystem, urging publishers to embed transparency, disclosure, and human oversight into editorial workflows. Universities and research institutions are encouraged to integrate AI literacy into formal training, while funders and policymakers are asked to harmonize standards internationally.
Frontiers’ position is that clear boundaries, human accountability, and well-governed, secure tools are more effective than blanket prohibitions in protecting and strengthening research integrity. The company notes that the greater risk to peer review quality comes from unregulated, opaque, or undisclosed AI use, which is already occurring across the research ecosystem.
The quiet revolution inside peer review is already reshaping how science is evaluated, paper by paper, reviewer by reviewer. Whether it strengthens scientific integrity or weakens public trust will depend on whether the global research community can govern AI with the same rigor it demands of evidence itself.