The US Food and Drug Administration (FDA) has released new draft guidance that provides a risk-based framework for assessing the credibility of artificial intelligence (AI) models used across the drug product lifecycle.
AI, which uses machine learning to simulate aspects of human reasoning by analysing data and making predictions, has become a larger part of clinical trials and, according to the FDA, can ‘support regulatory decision-making regarding safety, effectiveness, or quality for drugs.’ However, there is a growing effort to ensure that AI is used accurately, safely and with due consideration of its limitations.
The FDA draft guidance, ‘Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products’, outlines some of the challenges of using AI, such as the reliability, quality and size of the data used in AI models. It provides a framework for assessing the credibility of an AI model’s output, that is, whether the output is accurate and trustworthy for a given use, taking into account the model’s limitations and parameters.
The framework sets out a seven-step process, illustrated in the sketch after this list:
- Defining the question of interest
- Defining the context of use for the AI model
- Assessing the AI model risk
- Developing a plan to establish the credibility of the model
- Executing the plan
- Documenting the results
- Determining the adequacy of the model
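For teams that track such processes in software, the following is a minimal Python sketch of the seven steps as a simple checklist. It is illustrative only: the class, function and variable names are hypothetical and do not come from the FDA guidance.

```python
from __future__ import annotations

from dataclasses import dataclass, field

# Hypothetical illustration: these identifiers are not part of the FDA
# guidance; the list simply mirrors the seven steps described above.
STEPS = [
    "Define the question of interest",
    "Define the context of use for the AI model",
    "Assess the AI model risk",
    "Develop a plan to establish the credibility of the model",
    "Execute the plan",
    "Document the results",
    "Determine the adequacy of the model",
]


@dataclass
class CredibilityAssessment:
    """Tracks progress through the seven-step credibility framework."""

    model_name: str
    completed: list[str] = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Record a finished step, rejecting anything outside the framework.
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.append(step)

    def next_step(self) -> str | None:
        # Return the earliest step not yet completed, or None when done.
        for step in STEPS:
            if step not in self.completed:
                return step
        return None


assessment = CredibilityAssessment(model_name="example-model")
assessment.complete("Define the question of interest")
print(assessment.next_step())
# -> "Define the context of use for the AI model"
```

The sequential structure reflects the framework’s intent: each step builds on the previous one, so a plan cannot sensibly be executed before the question of interest, context of use and model risk have been defined.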
It also details how to store, control and monitor data effectively, offering useful direction for regulatory affairs professionals.
Professionals can submit electronic or written comments on the draft guidance until 7 April 2025. Detailed instructions on how to submit comments can be found on the FDA website.