
Advarra’s Perspective on FDA’s Draft AI Guidance: Advancing Responsible Innovation

August 12, 2025

The U.S. Food and Drug Administration’s (FDA’s) recent draft guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” marks an important step forward in modernizing regulatory science to keep pace with artificial intelligence (AI) innovation [1]. As AI becomes more embedded in clinical research, the FDA’s willingness to engage with the nuances of AI governance, model credibility, and data integrity is both timely and necessary.

At Advarra, we welcome this guidance and applaud the Agency’s efforts to foster responsible innovation. As part of the FDA’s public comment process, Advarra submitted formal feedback to help shape the final guidance based on our real-world experience [2]. However, as organizations begin operationalizing AI in regulatory and clinical contexts, clarity and practical direction are essential. Our perspective, grounded in implementation experience and scientific rigor, suggests four key areas where the FDA’s guidance can evolve.

Strengthening Risk-Based Decision-Making  

The FDA’s emphasis on a risk-based credibility framework is critical. It aligns with growing consensus that AI models should be assessed not in a vacuum but based on the context and consequences of their use. However, without practical examples or clear expectations, sponsors may struggle to implement these principles consistently. 

For example, when evaluating an AI model designed to support clinical monitoring decisions, what level of recall is “good enough”? How should sponsors document tradeoffs between model sensitivity and specificity? Providing illustrative scenarios and endorsing the use of governance tools, such as multidisciplinary model oversight committees, can help ensure risk-benefit decisions are both transparent and reproducible. 
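To make that documentation concrete, here is a minimal Python sketch of how a sponsor might record the sensitivity/specificity tradeoff for a hypothetical monitoring classifier at two candidate decision thresholds. The counts and thresholds are purely illustrative, not a validated method or an FDA expectation:

```python
# Illustrative only: documenting the sensitivity/specificity tradeoff for a
# hypothetical clinical-monitoring classifier at two candidate thresholds.

def sensitivity(tp: int, fn: int) -> float:
    """Recall/sensitivity: fraction of true events the model flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-events the model correctly leaves unflagged."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for the same model at two thresholds.
candidates = {
    "threshold=0.30": {"tp": 95, "fn": 5, "tn": 700, "fp": 200},  # favors recall
    "threshold=0.70": {"tp": 80, "fn": 20, "tn": 870, "fp": 30},  # favors specificity
}

for name, c in candidates.items():
    print(f"{name}: sensitivity={sensitivity(c['tp'], c['fn']):.2f}, "
          f"specificity={specificity(c['tn'], c['fp']):.2f}")
```

Recording this kind of comparison, together with the rationale for the threshold ultimately chosen, is one way a model oversight committee could make the risk-benefit decision reproducible.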

Incorporating MLOps, LLMOps, and Lifecycle Accountability 

AI is not static—models evolve, data shifts, and performance can degrade over time. That’s why machine learning operations (MLOps) and large language model operations (LLMOps) are essential to ensure model integrity as part of robust lifecycle maintenance.

MLOps refers to the engineering and process discipline that ensures reliable deployment and monitoring of ML models. LLMOps extends these principles to large language models, which have distinct requirements due to their size, architecture, and use patterns. These are no longer emerging best practices, but rather foundational for maintaining credibility.  
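As one illustration of what lifecycle monitoring can look like in practice, the sketch below flags when a deployed model’s performance drifts beyond an agreed tolerance from its validated baseline. The metric, baseline, and tolerance are our assumptions for the example, not regulatory requirements:

```python
# Illustrative sketch of an MLOps-style degradation check: compare live
# monitoring windows against the AUC recorded at validation time and alert
# when the drop exceeds a pre-agreed tolerance (values are hypothetical).

BASELINE_AUC = 0.88           # performance recorded at model validation
DEGRADATION_TOLERANCE = 0.05  # review/retrain trigger set by governance policy

monitoring_windows = [
    {"window": "2025-Q1", "auc": 0.87},
    {"window": "2025-Q2", "auc": 0.85},
    {"window": "2025-Q3", "auc": 0.81},  # drifted below tolerance
]

for w in monitoring_windows:
    drop = BASELINE_AUC - w["auc"]
    status = "ALERT: route to review/retraining" if drop > DEGRADATION_TOLERANCE else "ok"
    print(f"{w['window']}: auc={w['auc']:.2f} (drop={drop:.2f}) -> {status}")
```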

FDA guidance that explicitly references these domains can reinforce the importance of end-to-end oversight as well as help sponsors align AI practices with broader regulatory expectations for reproducibility and auditability. 

Data and Model Integrity 

The performance of any AI system depends on the quality of the data it learns from. The draft guidance touches on “fit-for-use” data but lacks detail on the following (see the sketch after this list):

  • Addressing class imbalance or sparse data categories 
  • Handling missing values or outliers 
  • The role of synthetic data when real-world data is incomplete or unavailable 
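As a sketch of what such fit-for-use checks might look like before modeling, the example below screens a dataset for class imbalance and missingness. The column names and thresholds are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative pre-modeling "fit-for-use" checks: class imbalance and
# missingness screening on a toy dataset (column names and thresholds
# are hypothetical).
import pandas as pd

df = pd.DataFrame({
    "adverse_event": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],  # sparse positive class
    "lab_value":     [5.1, None, 4.8, 5.0, None, 5.3, 4.9, 5.2, None, 6.0],
})

# Class imbalance: share of the minority class.
class_share = df["adverse_event"].value_counts(normalize=True)
minority_share = class_share.min()
if minority_share < 0.15:
    print(f"Class imbalance: minority class is {minority_share:.0%} of records")

# Missingness: per-column rate of missing values.
for col, rate in df.isna().mean().items():
    if rate > 0.20:
        print(f"High missingness in '{col}': {rate:.0%}")
```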

Additionally, the guidance should refine its approach to methodological transparency by clearly distinguishing between explainability (“the ability to describe the behavior of a system in understandable language to humans”) and interpretability (“the visibility and understanding of the inner logic and mechanics of the AI model”) [3]. Providing clarification on these concepts could help sponsors select appropriate approaches based on the model’s complexity, use case, and risks.
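The distinction can be made concrete with a small synthetic example (ours, not the guidance’s): a logistic regression is interpretable because its coefficients expose the model’s inner logic directly, whereas a gradient-boosted model must be explained post hoc, for instance with permutation importance:

```python
# Illustrative contrast between an interpretable model (inner logic visible)
# and a black-box model explained post hoc. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Interpretable: the coefficients *are* the model's decision logic.
lr = LogisticRegression().fit(X, y)
print("interpretable (coefficients):", lr.coef_.round(2))

# Explainable: describe the black box's behavior from the outside.
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(gbm, X, y, n_repeats=5, random_state=0)
print("explained (permutation importance):", imp.importances_mean.round(2))
```

Which approach is appropriate depends on the model’s complexity, use case, and risk profile—exactly the clarification the guidance could provide.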

LLM-Specific Considerations 

While the draft guidance briefly acknowledges the evolving AI landscape, it does not yet fully address the governance and oversight challenges posed by large language models (LLMs). These models are already being used in regulatory writing, literature analysis, and trial protocol development—yet they carry risks of hallucination, prompt sensitivity, and limited explainability.

These models warrant focused attention in a dedicated future document. By acknowledging the unique risks and proposing guardrails for LLM use in regulated settings, the FDA can help the industry navigate with greater confidence and promote safe innovation. 

Looking Ahead: Partnering for Safe, Ethical AI in Clinical Research 

There is enormous potential for AI in drug development and clinical research, but realizing that potential requires thoughtful, collaborative guidance. The FDA’s draft represents an important foundation. By expanding its focus on operational rigor, data standards, and LLM-specific guardrails, the Agency can further accelerate responsible innovation.

At Advarra, we’re committed to working with regulators, sites, sponsors, CROs, and technology leaders to ensure AI systems and AI-enabled workflows support scientific progress while protecting research integrity. In fact, we recently launched The Council for Responsible Use of AI in Clinical Trials specifically to align key stakeholders on responsible AI practices, drive innovation, and accelerate progress in clinical research [4]. Advancing the safe, effective, and ethical use of AI throughout the clinical trial lifecycle is a core tenet of our work at Advarra and the guiding principle for the development of Advarra’s AI and data engine, Braid.

Let’s Discuss More  
To learn more about how Advarra is weaving intelligence into clinical trials, read about the launch of our AI and data engine, Braid.   

To discuss how we can use AI to support your research goals, reach out to our team.

References: 

  1. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological
  2. https://www.regulations.gov/comment/FDA-2024-D-4689-0086
  3. https://www.splunk.com/en_us/blog/learn/explainability-vs-interpretability.html
  4. https://www.advarra.com/council-for-responsible-use-of-ai-in-clinical-trials/

Laura Russell

SVP, Head of Data and AI Product Development

Laura Russell is a visionary leader in the life sciences and technology sectors, with expertise in product development and operational excellence. As SVP, Head of Data and AI Product Development at Advarra, she defines and oversees the company’s business transformation through the responsibly guided integration of AI, delivering advanced analytics solutions and novel applications of AI across Advarra’s portfolio of services and technologies.

