Research Summary: GAMER improves transparency in AI-assisted medical research

4 December 2025

With the rapid development of generative artificial intelligence (AI), its application in medical research has become increasingly widespread. However, the lack of standardized reporting guidelines has raised concerns regarding transparency, reliability, and academic integrity. In response, an international research team, including Dr. Xufei LUO and Prof. Zhaoxiang BIAN from the Chinese EQUATOR Centre, developed a reporting guideline specifically for the use of generative AI tools in medical research—the GAMER (Generative Artificial Intelligence tools in Medical Research) checklist. This guideline was published in BMJ Evidence-Based Medicine in December 2025.

The development of this reporting guideline strictly followed internationally recommended methodologies and consensus processes. The research team first conducted a scoping review, followed by two rounds of Delphi surveys. A total of 51 experts from 26 countries, including specialists in medicine, artificial intelligence, epidemiology, ethics, and journal editing, participated in multiple online consensus meetings.

The final checklist consists of nine core reporting items, covering key aspects such as AI usage statements, tool specifications, prompt methodologies, verification of generated content, data privacy protection, and the impact of AI on research conclusions.

The study highlights that generative AI tools, including large language models and image generation models, can significantly improve the efficiency of tasks such as manuscript writing, data extraction, and data analysis. However, improper use or inadequate reporting may lead to problems with plagiarism, the credibility of results, and research ethics. The GAMER checklist aims to ensure that the use of AI tools is reported clearly and comprehensively, enabling reviewers, editors, and readers to better assess the quality and reliability of research findings.

Importantly, the checklist is not limited to a specific study design and can be applied across various types of medical research, including clinical trials, systematic reviews, and observational studies. The authors recommend that academic journals consider adopting GAMER as a minimum reporting standard for AI-assisted medical research, similar to existing guidelines such as CONSORT-AI and STARD-AI, to promote the standardized, responsible, and trustworthy use of AI in scientific research.

The research team noted that the release of the GAMER checklist fills a critical gap in the reporting standards for generative AI in research. It is expected to play a key role in future AI-assisted scientific work. Moving forward, the team plans to promote the guideline globally through international conferences, journal collaborations, and multilingual translations, with the aim of enhancing the rigor and credibility of medical research in the era of artificial intelligence.


About Dr. Xufei LUO

Dr. Xufei LUO graduated from Lanzhou University and is currently undertaking postdoctoral research at the School of Chinese Medicine, Hong Kong Baptist University. Dr. LUO has long been engaged in methodological research in evidence-based medicine, with a focus on clinical practice guidelines, reporting standards, and evidence synthesis. His work particularly emphasizes the integration of generative artificial intelligence into systematic reviews, guideline development, and evidence-based decision-making, with the aim of enhancing the transparency, standardization, and reproducibility of medical research. He has led and contributed to the development of multiple international reporting guidelines and clinical practice guidelines, and actively promotes the standardized and responsible use of generative artificial intelligence in medical research.