We are finally beginning to see the transformative effect of artificial intelligence (AI) and machine learning (ML) on drug discovery, clinical development, and manufacturing. Indeed, earlier this year, U.S. Food and Drug Administration (FDA) Commissioner Robert Califf noted that “AI has the potential to enable major advances in the development of more effective, less risky medical products.”
However, this is a highly regulated field, and the rapid evolution of AI and ML technologies demands equally rapid development of robust regulatory frameworks, raising complex regulatory challenges that must be navigated carefully. Manufacturers, therefore, need to look carefully at their use of AI and ML in good manufacturing practice (GMP) settings and ensure that they are in a position to demonstrate compliance.
The use of AI and ML in drug manufacturing and the importance of GMP compliance
Regulators have overseen AI and ML for decades, historically most often in the medical device space, for example in software as a medical device. Today, regulators are also recognizing the potential benefits of AI and ML in drug manufacturing, while acknowledging that generative AI in particular will present regulatory challenges.
The ability of AI and ML technologies to analyze vast amounts of data quickly and accurately is of increasing value in maintaining quality standards in manufacturing. For example, in aseptic processing, AI systems can scrutinize every step of the manual vial-filling process, analyzing each segment to detect potential contamination risks. In environmental monitoring, traditionally reliant on the manual counting of microbial colonies, AI introduces a new level of precision through image recognition technologies (a minimal example is sketched below). In quality control testing, AI is now being used to automate the integration of chromatographic peaks and the visual inspection of injectable drugs.
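By way of illustration, the colony-counting step mentioned above can be sketched in a few lines using the OpenCV image library: threshold a photograph of the agar plate and count the connected blobs above a minimum size. This is a minimal sketch only; the file name, preprocessing choices, and minimum colony area are illustrative assumptions, not a validated GMP method.

```python
# Minimal sketch: automated colony counting on an agar-plate image.
# Illustrative only -- the file name, threshold strategy, and minimum
# colony area are assumptions, not a validated method.
import cv2

MIN_COLONY_AREA_PX = 20  # assumed minimum blob size to count as a colony

def count_colonies(image_path: str) -> int:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Smooth sensor noise, then separate colonies from the agar background
    # with Otsu's automatic threshold.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(
        blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    # Label connected blobs; component 0 is the background, so skip it.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]
    return int((areas >= MIN_COLONY_AREA_PX).sum())

if __name__ == "__main__":
    print(count_colonies("agar_plate.png"))  # hypothetical input image
```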
AI’s impact extends to the manufacturing of advanced therapy medicinal products, such as gene and cell therapies. Its ability to automate critical steps in the manufacturing process improves the scalability and reliability of treatments and is especially beneficial in hospital settings, making personalized medicine more accessible and practical.
The adoption of AI and ML tools in GMP manufacturing and quality control settings means such tools are likely to be closely examined during regulatory inspections. It is therefore imperative that drug manufacturers be able to demonstrate that AI- and ML-enhanced processes are reliable and GMP-compliant.
Regulators are already looking to gather experience. For example, the FDA has published a white paper describing how the Agency’s various centers, including CBER, CDER, CDRH, and OCP, are working together “to safeguard public health while fostering responsible and ethical innovation.” The European Medicines Agency (EMA) has established a Quality Innovation Group (QIG) that aims, among other things, to support the translation of innovative approaches to the design, manufacture, and quality control of medicines. Regulators are also updating existing guidelines, or creating new ones, to make them fit for a digital age: the FDA has issued a draft guidance, “Computer Software Assurance for Production and Quality System Software,” and the EMA has issued a concept paper for its planned revision of GMP Annex 11 (computerized systems). The FDA has also issued a discussion paper outlining considerations for the use of AI in drug manufacturing.
These guidelines are, however, emerging only slowly, and the existing regulatory framework has, by and large, not kept pace with the rapidly advancing field of AI and ML applications for GMP uses, especially in the pharmaceutical context. Drug manufacturers, therefore, need to keep a close eye on this evolving field.
Emerging standards and guidelines for the use of AI in drug manufacturing
Both the EU and the U.S. have recently taken steps to regulate AI. The EU AI Act, adopted in March, follows an industry-agnostic approach: it applies across all sectors and does not take into account the specificities and risks of AI applications in pharmaceuticals. The Act introduces a risk-based approach to AI systems at a horizontal level: the higher the risk posed by an AI system, the stricter the rules. “High-risk” AI systems under the EU AI Act include software as a medical device, while many other AI systems used in the life sciences ecosystem, such as wellness apps and chatbots for triage, are not considered high-risk. In a drug manufacturing context, however, few applications, if any, will be classified as high-risk (or even limited or minimal risk) under the AI Act. Instead, these AI (and ML) systems must comply with current GMP and other GxP standards, creating a need to interpret those regimes in light of the unique challenges posed by AI, in particular generative AI.
With a view to providing more specific guidance to the life sciences sector, the EMA has recently issued, for public consultation, a draft reflection paper on the use of AI across the drug life cycle, with firm guidelines expected in the second half of 2024. With respect to manufacturing, the EMA paper states that model development and performance assessment should follow quality risk management principles and that the principles of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guidelines Q8, Q9, and Q10 should be considered pending the revision of the current GMPs, a revision the EMA evidently anticipates. More generally, the paper states that it is the responsibility of the marketing authorization holder to ensure that the algorithms, models, and data sets used are in line with good practice standards. Where an AI system may affect a drug’s benefit-risk balance, the EMA recommends early regulatory interaction and the seeking of scientific advice.
In the U.S., the Biden administration, the FDA, and the U.S. Congress are heavily focused on AI. Among other initiatives, the Biden administration recently released an executive order requiring federal agencies to adopt plans and safeguards concerning the use of AI. By December 2024, agencies will need to develop and implement specific safeguards for a broad range of AI applications, including in healthcare. Regarding pharmaceuticals, the FDA recently opened a dialogue with stakeholders by publishing two discussion papers on the use of AI in drug development and manufacturing. The FDA papers do not yet provide guidance, but they identify areas on which stakeholder feedback is sought, including the need for clarity on whether and how the application of AI in drug manufacturing is subject to regulatory oversight, and the need for standards for developing and validating AI models used for process control and to support release testing.
Five key factors that drug manufacturers should consider when using AI
Drug manufacturers wanting to utilize AI or ML in the drug manufacturing process must be able to ensure compliance with evolving GMP standards. The considerations set out below should help such companies harness the full potential of AI and ML to innovate and improve manufacturing processes.
- Integrity and Security of Data: Both the FDA and the EMA recognize that AI is intrinsically dependent on data and that this dependence may introduce unintended biases into models. Regulators expect data to be reliable and accurate. Manufacturers should, therefore, establish careful procedures for data collection and selection. To ensure data integrity, manufacturers should also consider improving and adapting traditional tools, such as encryption, access controls, and audit trails, to cover potentially larger and more complex data sets, and should ensure adequate data retention and archiving policies (a minimal audit-trail sketch follows this list). Companies also need to reinforce their data security defenses by investing in advanced cybersecurity measures and establishing rigorous data governance protocols.
- Explainability and Transparency of AI Systems: The opaque nature of many AI models, especially those based on deep learning, makes it challenging for regulators to assess and verify decision-making processes. The EMA has stated that transparent AI systems should be preferred. In anticipation of regulatory changes demanding greater transparency, companies should focus on developing interpretable AI models. Investing in explainable AI techniques (one simple approach is sketched after this list) and keeping detailed documentation of AI decision-making processes will be critical.
- Validation Protocols and Acceptance Testing: Manufacturers must control critical aspects of their operations through validation over the product and process life cycle. AI/ML-specific features, such as continuous learning, pose significant challenges because AI systems evolve continuously based on new data. We anticipate that regulators may update their frameworks to encompass continuous monitoring and revalidation protocols for AI systems. Companies should, therefore, implement robust change control systems to manage updates to AI algorithms and consider developing validation protocols that define the objectives, scope, and acceptance criteria for validating AI systems (see the validation sketch after this list). These may include performance testing, comparison against reference methods, evaluation of algorithm robustness, and maintenance of detailed records of all algorithmic changes and performance metrics.
- Management of AI-Related Risks: The use of AI brings new and unfamiliar types of risk. For example, if algorithms are not tested for potential errors, there is a risk of unfair or unreliable results, such as false positives and false negatives, which could lead to data integrity issues. Manufacturers should therefore put in place thorough risk-assessment procedures. This may require adopting specific controls, enhancing cybersecurity measures, and establishing contingency plans for system failures. Because AI is a constantly evolving area, continuous monitoring and review are necessary to ensure that mitigation measures remain effective (a simple monitoring sketch follows this list).
- In-House Resources, Training, and Competence: The use of AI still relies heavily on human oversight, so it is imperative that adequate resources be devoted to developing training programs that ensure personnel using or overseeing AI-based applications are competent to carry out their tasks effectively. Beyond the basics of quality management systems and GMP, existing and newly recruited personnel should receive adequate training on AI fundamentals and on any specific AI system deployed.
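To make the audit-trail point above concrete, below is a minimal sketch of a tamper-evident audit trail: each entry is chained to its predecessor by a SHA-256 hash, so any retroactive edit breaks verification. The field names, users, and actions are hypothetical; a real GMP system would of course sit behind validated infrastructure with access controls.

```python
# Minimal sketch of a tamper-evident audit trail. Each entry embeds the
# hash of the previous entry, so editing any past record invalidates the
# chain. Field names and sample actions are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def _digest(entry: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is reproducible.
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

def append_entry(trail: list, user: str, action: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "prev_hash": trail[-1]["hash"] if trail else "GENESIS",
    }
    entry["hash"] = _digest({k: v for k, v in entry.items() if k != "hash"})
    trail.append(entry)

def verify_trail(trail: list) -> bool:
    """Recompute every hash and check the chain is unbroken."""
    prev = "GENESIS"
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev or _digest(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail: list = []
append_entry(trail, "analyst_01", "re-integrated chromatogram, batch 42")
append_entry(trail, "qa_reviewer", "approved re-integration")
assert verify_trail(trail)
```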
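On explainability, one simple and widely used technique is permutation importance, which measures how much a model's accuracy degrades when each input feature is shuffled; large degradation means the model relies on that feature. The sketch below uses scikit-learn on synthetic data; the process-variable names and the model choice are illustrative assumptions, not a method prescribed by any regulator.

```python
# Minimal sketch: documenting which inputs drive a process model's
# predictions via permutation importance. Feature names and the
# synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["fill_speed", "chamber_temp", "humidity", "operator_shift"]

# Synthetic stand-in for historical process data: yield depends mainly
# on fill speed and chamber temperature.
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test-set score.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:15s} importance = {score:.3f}")
```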
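For validation, one useful pattern is to encode the acceptance criteria explicitly in the test harness and emit a timestamped record for every run, so AI outputs can be compared against a reference method and the evidence retained. The criteria values, record format, and model version below are hypothetical.

```python
# Minimal sketch of a validation check: compare an AI system's outputs
# against a reference method under predefined acceptance criteria.
# The criteria values and record format are illustrative assumptions.
import json
from datetime import datetime, timezone

ACCEPTANCE_CRITERIA = {
    "max_mean_abs_error": 0.5,    # assumed limit vs. reference method
    "max_single_deviation": 1.5,  # assumed worst-case limit
}

def validate(ai_results: list, reference_results: list,
             model_version: str) -> dict:
    deviations = [abs(a - r) for a, r in zip(ai_results, reference_results)]
    mean_abs_error = sum(deviations) / len(deviations)
    # Timestamped record suitable for retention as validation evidence.
    return {
        "model_version": model_version,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "mean_abs_error": round(mean_abs_error, 4),
        "max_deviation": round(max(deviations), 4),
        "criteria": ACCEPTANCE_CRITERIA,
        "passed": (
            mean_abs_error <= ACCEPTANCE_CRITERIA["max_mean_abs_error"]
            and max(deviations) <= ACCEPTANCE_CRITERIA["max_single_deviation"]
        ),
    }

# Hypothetical run: AI peak-integration results vs. a manual reference.
report = validate([10.1, 9.8, 10.4], [10.0, 10.0, 10.2], "peak-int-v2.3")
print(json.dumps(report, indent=2))
```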
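Finally, ongoing risk management can be supported by routinely comparing a deployed model's outputs against human-confirmed samples and raising an alert when error rates exceed predefined limits. The alert thresholds and sample data in this last sketch are illustrative assumptions.

```python
# Minimal sketch of ongoing risk monitoring: track false-positive and
# false-negative rates of a deployed inspection model against periodic
# human-confirmed samples, flagging breaches of assumed alert limits.
ALERT_LIMITS = {"false_positive_rate": 0.05,
                "false_negative_rate": 0.01}  # illustrative thresholds

def monitor(predictions: list, confirmed: list) -> dict:
    """predictions/confirmed: booleans, True = 'defect detected'."""
    fp = sum(p and not c for p, c in zip(predictions, confirmed))
    fn = sum(c and not p for p, c in zip(predictions, confirmed))
    negatives = sum(not c for c in confirmed)
    positives = sum(confirmed)
    rates = {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }
    alerts = [m for m, rate in rates.items() if rate > ALERT_LIMITS[m]]
    return {"rates": rates, "alerts": alerts}

# Hypothetical weekly sample of model calls vs. confirmed outcomes.
result = monitor(
    predictions=[True, False, True, False, False, True],
    confirmed=[True, False, False, False, False, True],
)
print(result)  # one false positive among four true negatives -> alert
```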