New Paper Makes the Case for AI-Specific Guidance in Research Oversight

March 2, 2026

Estimated reading time: 3 minutes

AI is becoming a routine part of how research proposals are prepared and reviewed, from drafting applications to supporting administrative tasks. Used thoughtfully, these tools have real potential to improve operational efficiency and consistency across research workflows. As AI adoption grows, clarity around how it should be used in research oversight becomes increasingly important.

In traditional research governance, committees such as Institutional Review Boards (IRBs), Institutional Biosafety Committees (IBCs), and Institutional Animal Care and Use Committees (IACUCs) play a central role in protecting people, animals, and the environment. Their work depends on nuanced ethical judgment, from safeguarding human participants to ensuring humane animal care. Yet despite AI’s growing presence in research operations, guidance on its responsible use within oversight processes remains limited.

That gap is the focus of a timely peer-reviewed article, “Guidelines Needed for the Use of AI in the Preparation or Review of IRB, IBC, and IACUC Applications,” published in Accountability in Research. A multidisciplinary group of experts in research ethics and oversight co-authored the paper, including Mohammad Hosseini, MA, PhD, Assistant Professor, Northwestern University Feinberg School of Medicine, Department of Preventive Medicine; Nichelle Cobb, PhD, CIP, Senior Advisor for Strategic Initiatives, Association for the Accreditation of Human Research Protection Programs; and Advarra’s Daniel Eisenman, PhD, RBP, SM(NRCM), CBSP, Executive Director of Biosafety Services, and James Riddle, MCSE, CIP, CPIA, CRQM, Senior Vice President, Global Review Operations. Together, the authors explain that IRB, IBC, and IACUC workflows already present a realistic setting for AI application, yet lack clear, AI-specific guidance to support responsible review.

Hosseini, the paper’s first author, emphasizes the importance of setting boundaries as AI becomes more embedded in oversight workflows. “Advances in AI offer real opportunities to reduce administrative burden and streamline aspects of research oversight,” he says, “but ethical review depends on human judgment and accountability. Those value-laden decisions can’t be delegated to algorithms.”

“AI can help with efficiency,” Eisenman notes, “but its limitations—especially in reasoning and ethical judgment—make it critical that humans always remain the final decision-makers in research oversight. Without clear standards, we risk undermining both the quality and integrity of research governance.”

The paper highlights several key considerations for integrating AI into research oversight:

  • AI can generate convincing but incorrect or incomplete content that may mislead research oversight decisions.

  • AI systems may reflect biased data or flawed assumptions, particularly in reviews involving vulnerable populations or complex ethical contexts.

  • Oversight decisions rely on value-laden, context-sensitive reasoning that cannot be delegated to algorithms.

  • Oversight submissions include sensitive human, animal, and biosafety information that may be exposed or misused when AI tools are involved.

  • Errors, over-reliance, and weakened accountability can still occur if AI outputs are treated as authoritative.

The authors argue that targeted, practical guidance will be essential as AI becomes more integrated into research workflows. Hosseini, Cobb, Eisenman, and Riddle will discuss this need and how to address it during a panel session at the 2026 THREE I’s Biosecurity and Research Integrity Conference on Tuesday, April 28.

Learn more and register here.
