On December 8, 2023, the European Parliament (Parliament) reached a political agreement with the Council of the European Union (Council) on the Artificial Intelligence Act (AI Act). Once applicable, the AI Act will introduce a range of new requirements for AI systems falling within its scope. In this Sidley Update, we consider the key areas that will impact Life Sciences companies and what they can do to prepare.
Timeline
The AI Act is expected to enter into force in 2024, and the majority of its obligations will apply two years later, in 2026. The exception is the ban on prohibited AI practices, which will apply six months after entry into force. Moreover, and of particular relevance to Life Sciences companies, the rules on general-purpose AI systems, high-risk AI systems, conformity assessment bodies, and governance structures will apply just 12 months after entry into force, likely as early as 2025. The rules applicable to medical devices that fall within the scope of the AI Act (discussed below) are expected to apply 36 months after entry into force.
How will Life Sciences companies be impacted?
The AI Act, together with the European Medicines Agency's (EMA) emerging guidelines on the use of AI, will have a significant impact on the Life Sciences industry, as AI systems are increasingly used to enhance processes across the medicinal product lifecycle.
For Life Sciences companies, the AI Act will have the following key impacts:
- Medical devices: The AI Act classifies as “high-risk” those AI systems that are themselves products required to undergo a third-party conformity assessment under the medical devices regulation (MDR) or the in vitro diagnostic medical devices regulation (IVDR). As such, medical devices and IVDs (devices) requiring third-party conformity assessment (for medical devices, those of class IIa and above) will be subject to the AI Act requirements applying to “high-risk” AI systems. The final text will reveal in detail the extent to which the obligations of the AI Act and the MDR/IVDR overlap. It is understood, for example, that where requirements of the AI Act are considered equivalent to requirements in the MDR/IVDR, the AI component of a device will be deemed to comply with the AI Act to the extent those requirements are met under the MDR/IVDR. Moreover, it has been agreed that the AI Act will not require a separate conformity assessment for in-scope devices; instead, the AI Act conformity assessment will be embedded in the MDR/IVDR assessment. Once placed on the EU market, in-scope devices will be subject to risk management obligations, periodic testing and training requirements, data governance rules, technical documentation requirements, audits, and other monitoring obligations, all of which will call for a rigorous compliance system.
- Digital companion diagnostics (CDx): In addition to being classified as devices, digital CDx that utilize AI systems will also be categorized as “high-risk” under the AI Act and will be subject to the applicable requirements. For drug companies that rely on CDx (for example, to identify the appropriate patient population for a particular drug or to monitor patient responses to treatment), the CDx may be the only element of their product offering covered by the AI Act. Such companies will need to ensure that in-scope CDx comply with the AI Act as well as the applicable device rules, whether they design their own AI systems or procure them from third parties.
- Clinical trials: Software relying on AI is increasingly used as an aid at multiple stages of clinical trials, for example to identify new drug candidates, optimize molecular screening, predict drug efficacy and safety, enhance trial design, and conduct real-world data analysis. As outlined above, software used in clinical trials that is classified as a medical device or IVD (for example, clinical decision support software) and that utilizes AI will be subject to the rules for high-risk AI systems.
- Non-device AI systems: AI systems used in the medicinal product lifecycle that are not devices may not be categorized as “high-risk” under the AI Act, though careful analysis of the final text will be required in each case. In light of this, the EMA, together with the Heads of Medicines Agencies (HMA), is seeking to put in place certain guardrails for the use of AI in the medicinal product lifecycle, described in the AI Workplan 2023-2028 (Workplan). As an initial step, in July 2023 the EMA and HMA published a draft reflection paper covering topics including the use of AI in drug discovery, precision medicines, clinical trials, and pharmacovigilance (discussed in a previous Sidley Update). The public consultation on the reflection paper closes on December 31, 2023, and impacted companies are encouraged to submit comments; stakeholder responses will inform the paper's finalization and the development of future AI guidance. According to the Workplan, guidance on the use of large language models for the European Medicines Regulatory Network is expected to be published in Q1 2024, and further guidance on AI in the medicinal product lifecycle will be developed in the second half of 2024, alongside the establishment of an AI Observatory to monitor AI capability and its impact on the Life Sciences sector.
What can companies do to prepare?
Life Sciences companies can prepare for these regulatory developments, including the requirements set out in the AI Act, which will soon become law, by taking several proactive steps:
- Consider whether your company may be affected by the extraterritorial scope of the AI Act, which covers companies that place AI systems on the market or put them into service in the EU (irrespective of where those companies are established), as well as providers and users outside the EU where the output produced by the AI system is used in the EU.
- Conduct a comprehensive review of existing AI systems and applications to identify areas that might be affected by the new regulations. This includes assessing the level of risk associated with each AI application under the AI Act's risk-based categorization of AI systems.
- Invest in building a robust AI governance framework. This framework should include clear policies and procedures for AI development and deployment, emphasizing ethical considerations and compliance with regulatory standards. Implementing continuous monitoring will be crucial to ensuring ongoing compliance with the AI Act.
- Educate internal teams on the potential risks under the AI Act: the maximum fine for use of prohibited AI is up to €35 million or 7% of global annual turnover, whichever is higher.
Sidley Austin LLP provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers.
Attorney Advertising—Sidley Austin LLP, One South Dearborn, Chicago, IL 60603. +1 312 853 7000. Sidley and Sidley Austin refer to Sidley Austin LLP and affiliated partnerships, as explained at www.sidley.com/disclaimer.
© Sidley Austin LLP