The FDA on AI and ML: New Discussion Paper Available

As with any new technology that has wide-ranging applications, artificial intelligence (AI) and machine learning (ML) are being evaluated and discussed by regulatory agencies around the world.

The Center for Drug Evaluation and Research (CDER), a division of the U.S. Food and Drug Administration (FDA), recently released a discussion paper focused on the use and regulation of AI and ML in the development of drugs and biologics.

(Note: CDER’s mission is to ensure that safe, effective drugs are available to improve the health of people in the U.S. It regulates both over-the-counter and prescription drugs, including biological therapeutics and generic drugs.)

In this blog post, we’ll discuss some of the important issues raised in this paper that affect the life sciences and pharmaceutical industries.

The ramifications of powerful technologies

The paper, titled “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products,” is both a discussion paper and a request for feedback from a range of stakeholders. This format reflects the “work in progress” nature of efforts to monitor and regulate the use of AI.

In addition to an overview of potential future uses for AI in therapeutic development, the paper raises four overarching concerns:

  • The importance of human involvement
    People will always be necessary for oversight and real-world decision-making, even given the impressive capabilities of AI technologies. Saama (and our in-house team of AI experts) has always focused on keeping a “human in the loop” as part of our products and solutions. (See our blog for more on our philosophy.)

  • Adopting a risk-based approach for evaluation and management
    This type of approach is especially important for creating an environment that supports innovation while also protecting patient health and safety. At Saama, we make sure that our AI and ML tools never jeopardize patient safety; rather, they support it.

  • Possible risks regarding data
    Inaccuracies, incompleteness, and biases in the data used to train ML algorithms are just a few risks with using AI. At Saama, we’ve trained our models using 300 million-plus data points, and we continually re-train our models based on user inputs. (We can even train them based on a sponsor’s past study data.)

  • Monitoring model performance
    Ongoing performance monitoring is essential to ensure reliability, relevance, and consistency over time. At Saama, we continually re-test and re-train our models, based on new data and user inputs, to ensure accuracy and strong performance.

“Our team at Saama greatly appreciates that both utilizing AI and understanding its impact are part of an ongoing process,” says Malaikannan Sankarasubbu, VP of AI Research at Saama.

“We’re happy to be part of the community that’s working with regulatory agencies around the world to ensure safety and accuracy when it comes to AI, through both existing policies and the development of new frameworks.”

Recommended Reading