Industry · pharma · drug-development · clinical-trials

EU AI Act Compliance for Pharmaceutical Companies

How the EU AI Act impacts AI in drug development, clinical trials, pharmacovigilance, and manufacturing — classification requirements and GxP considerations.

April 8, 2025 · 12 min read

The pharmaceutical industry is rapidly integrating artificial intelligence across its value chain. AI now plays a role in target identification, molecule design, clinical trial optimisation, pharmacovigilance signal detection, manufacturing quality control, and regulatory submissions. The EU AI Act (Regulation 2024/1689) introduces a new regulatory dimension for pharmaceutical companies that use or develop AI, layering on top of an already complex regulatory environment governed by GxP requirements, EMA guidelines, and the medical device regulations (MDR/IVDR).

This article examines how the EU AI Act applies to AI in the pharmaceutical sector, how it interacts with existing pharma regulation, and what compliance strategies pharmaceutical companies should adopt.

How the EU AI Act Classifies Pharmaceutical AI

The EU AI Act does not include a dedicated category for "pharmaceutical AI" in its Annex III high-risk list. However, pharmaceutical AI systems may be classified as high-risk through several pathways, depending on their function and the regulatory frameworks that apply.

Pathway 1: Medical Device Classification (Annex I)

AI systems that qualify as medical devices under the MDR (Regulation 2017/745) or in vitro diagnostic medical devices under the IVDR (Regulation 2017/746) are classified as high-risk under the EU AI Act if they require third-party conformity assessment. This pathway is relevant for pharmaceutical companies that develop:

  • Companion diagnostic AI — AI systems that analyse patient data to determine eligibility for specific therapies, particularly in precision medicine and oncology
  • Software as a Medical Device (SaMD) — AI-powered clinical decision support tools that are marketed as medical devices alongside pharmaceutical products
  • AI-based in vitro diagnostics — systems that analyse laboratory data to guide treatment decisions

Pathway 2: Annex III Classifications

Several Annex III categories, along with the closely related Annex I safety-component route, may apply to pharmaceutical AI:

Access to healthcare services (Annex III, point 5(a)): AI systems that evaluate eligibility for healthcare services or that are used in healthcare resource allocation could capture pharmaceutical AI used in patient access programmes, compassionate use assessments, or treatment allocation.

Employment decisions (Annex III, point 4): AI systems used in recruitment, performance evaluation, or task allocation in pharmaceutical companies fall within this category, though this applies equally to all industries.

Safety components of regulated products (Annex I, Section A): AI systems that function as safety components of products regulated under EU harmonisation legislation — including pharmaceutical products subject to GMP requirements — may be classified as high-risk.

Many pharmaceutical AI applications — including AI used for drug discovery, molecule optimisation, and pre-clinical research — are unlikely to be classified as high-risk under the EU AI Act, as they do not directly affect natural persons and are not covered by the Annex III categories. However, as these AI systems move closer to clinical application and regulatory submission, their classification may change. Pharmaceutical companies should reassess classification at each stage of the development lifecycle.
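
The pathways above can be sketched as a simple triage routine. The schema and labels below are illustrative assumptions, not an official test; actual classification requires case-by-case legal analysis of each system:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Attributes relevant to EU AI Act classification (hypothetical schema)."""
    is_mdr_ivdr_device: bool          # qualifies as a device under MDR/IVDR
    needs_third_party_ca: bool        # requires notified-body conformity assessment
    is_safety_component: bool         # safety component of an EU-harmonised product
    annex_iii_categories: list[str]   # e.g. ["healthcare_access", "employment"]

def classify(profile: AISystemProfile) -> str:
    # Pathway 1: MDR/IVDR device requiring third-party conformity assessment
    if profile.is_mdr_ivdr_device and profile.needs_third_party_ca:
        return "high-risk (Annex I, medical device pathway)"
    # Safety components of other products under EU harmonisation legislation
    if profile.is_safety_component:
        return "high-risk (Annex I, safety component)"
    # Pathway 2: a listed Annex III use case applies
    if profile.annex_iii_categories:
        return "high-risk (Annex III)"
    # Default: most discovery and pre-clinical AI lands here
    return "not high-risk (reassess at each lifecycle stage)"

discovery_model = AISystemProfile(False, False, False, [])
print(classify(discovery_model))  # not high-risk (reassess at each lifecycle stage)
```

Re-running such a check at each development-stage gate makes the "reassess classification" advice operational rather than a one-off exercise.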

AI Across the Pharmaceutical Value Chain

Drug Discovery and Pre-Clinical Research

AI is transforming early-stage drug development. Machine learning models identify potential drug targets, predict molecular properties, generate novel molecular structures, and optimise lead compounds. Generative AI is being used to design molecules with desired pharmacological profiles, and AI-driven simulations reduce the need for certain categories of laboratory experiments.

Under the EU AI Act, most drug discovery AI operates at the minimal risk level. These systems do not directly affect natural persons and are not listed in Annex III. However, pharmaceutical companies should consider:

Data governance best practices: Even where not legally required by the AI Act, maintaining rigorous data governance for drug discovery AI — including documentation of training data sources, validation methodologies, and model performance metrics — supports both scientific integrity and regulatory defensibility when presenting AI-derived evidence to regulators.

Documentation for regulatory submissions: When AI plays a material role in the discovery or characterisation of a drug candidate, regulatory agencies (including the EMA) increasingly expect transparency about the AI methods used. Good documentation practices established for EU AI Act compliance can serve this purpose.

Clinical Trial Design and Optimisation

AI is increasingly used to optimise clinical trials, including patient recruitment and selection, site selection, protocol design, adaptive trial designs, and endpoint analysis. These applications raise more significant compliance considerations than drug discovery AI.

Patient selection AI: AI systems that select or exclude patients from clinical trials make decisions that directly affect individuals' access to potentially life-saving treatments. While clinical trial participation is not strictly a "service" under Annex III, point 5(a), the ethical implications are substantial. Pharmaceutical companies should apply high-risk-equivalent safeguards to patient selection AI as a matter of best practice, including:

  • Ensuring selection criteria do not embed biases that systematically exclude underrepresented populations
  • Maintaining transparency about how AI-driven selection criteria are derived
  • Documenting the AI's role in patient selection for inclusion in clinical trial regulatory submissions
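
As a sketch of the first safeguard, comparing selection rates across patient subgroups is one simple check for systematic exclusion. The function names and the disparate-impact ratio are illustrative assumptions, not an AI Act requirement:

```python
from collections import Counter

def selection_rates(records):
    """Compute per-subgroup selection rates from (subgroup, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest subgroup selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

records = [("A", True), ("A", True), ("A", False), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(records)   # A: 0.5, B: 0.25
print(disparate_impact(rates))     # 0.5
```

A low ratio does not prove unlawful bias, but it flags selection criteria that warrant the documented review described above.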

Adaptive trial designs: AI systems that modify trial protocols in real-time based on incoming data require particular attention to accuracy, robustness, and documentation. Regulatory agencies expect full traceability of how AI-driven adaptations were implemented and how they affected trial outcomes.

The EMA and national competent authorities are actively developing their approaches to AI in clinical trials. EMA's reflection paper on AI in the medicinal product lifecycle and the ICH's work on AI/ML-related topics are shaping expectations that go beyond the EU AI Act's formal requirements. Pharmaceutical companies should monitor these regulatory developments alongside their EU AI Act compliance efforts.

Pharmacovigilance

AI is increasingly used in pharmacovigilance (drug safety monitoring) for signal detection, adverse event report processing, and benefit-risk assessment. These applications have direct implications for patient safety and are subject to extensive existing regulation under GVP (Good Pharmacovigilance Practices) modules.

Key compliance considerations for pharmacovigilance AI include:

Accuracy requirements: AI systems used for safety signal detection must achieve high accuracy to avoid both false positives (which waste resources) and false negatives (which pose patient safety risks). Article 15 of the EU AI Act requires "appropriate levels of accuracy" with declared metrics — a standard that should be interpreted strictly in the context of drug safety.
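
A hedged sketch of the kind of declared metrics a validation run might produce. The function and the example counts are hypothetical; Article 15 does not prescribe specific metrics, so the choice and thresholds belong in the system's technical documentation:

```python
def signal_detection_metrics(tp, fp, tn, fn):
    """Accuracy metrics from a labelled validation set (illustrative only).

    Sensitivity guards against missed safety signals (false negatives);
    precision guards against alert fatigue from false positives.
    """
    sensitivity = tp / (tp + fn)   # share of true signals detected
    specificity = tn / (tn + fp)   # share of non-signals correctly ignored
    precision = tp / (tp + fp)     # share of raised alerts that were real
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "precision": precision}

# Hypothetical validation run: 45 true signals caught, 5 missed, 10 false alerts
metrics = signal_detection_metrics(tp=45, fp=10, tn=940, fn=5)
print(metrics["sensitivity"])  # 0.9
```

In a drug-safety context the asymmetry matters: a missed signal (false negative) is typically weighted far more heavily than a spurious alert, which argues for declaring and monitoring sensitivity explicitly.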

Human oversight: Pharmacovigilance decisions must remain under human control. AI systems can assist with data processing and pattern recognition, but qualified safety personnel must review and validate all safety signals and make final determinations about regulatory action. This aligns with both Article 14 of the EU AI Act and existing GVP requirements.

Record-keeping and traceability: GVP already requires extensive documentation of pharmacovigilance processes. The EU AI Act's Article 12 adds specific requirements for automated logging of AI system operations, which pharmaceutical companies must integrate into their existing pharmacovigilance documentation systems.
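
One possible shape for such an integrated audit record, with a content hash supporting data integrity. The schema and field names are illustrative assumptions, not a prescribed Article 12 or GVP format:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_event(system_id, model_version, input_ref, output, reviewer=None):
    """Build an append-style audit record for an AI system operation (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_reference": input_ref,  # pointer to source data, not the data itself
        "output": output,
        "human_reviewer": reviewer,    # filled in when a qualified assessor signs off
    }
    # Hash the serialised record so later tampering is detectable
    payload = json.dumps(record, sort_keys=True)
    record["integrity_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

entry = log_ai_event("pv-signal-detector", "2.3.1", "icsr-batch-0425",
                     {"signal": "hepatotoxicity", "score": 0.91})
print(entry["integrity_hash"][:8])
```

Keeping a pointer to the source data rather than the data itself lets the AI log sit alongside, rather than duplicate, the existing pharmacovigilance case records.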

Manufacturing and Quality Control

AI is used in pharmaceutical manufacturing for process optimisation, predictive maintenance, visual inspection of products, and quality control. These applications are governed by GMP (Good Manufacturing Practice) requirements and are subject to inspection by national competent authorities and the EMA.

AI in quality-critical processes: Where AI systems influence quality-critical manufacturing decisions — such as batch release, deviation assessment, or in-process control — they function as components of a GMP-regulated process. The EU AI Act's requirements for accuracy, robustness, and documentation complement GMP requirements, and pharmaceutical companies should address both frameworks in an integrated manner.

Computer system validation (CSV): The pharmaceutical industry has well-established practices for computer system validation under GxP. These practices provide a strong foundation for EU AI Act compliance, particularly for documentation (Article 11), record-keeping (Article 12), and accuracy testing (Article 15). However, traditional CSV approaches may need to be adapted for AI systems, which are inherently more dynamic and less deterministic than traditional software.

GxP and the EU AI Act: Integration Challenges

The pharmaceutical industry operates under a comprehensive set of Good Practice (GxP) requirements — GMP, GLP (Good Laboratory Practice), GCP (Good Clinical Practice), and GVP. These frameworks share several principles with the EU AI Act, but there are also important differences and potential tensions.

Shared Principles

Both GxP and the EU AI Act emphasise:

  • Risk-based approaches: GxP uses quality risk management (ICH Q9), while the EU AI Act mandates risk management under Article 9. Both frameworks require systematic identification, assessment, and mitigation of risks.
  • Documentation and traceability: GxP requires extensive documentation of processes, decisions, and changes. The EU AI Act requires technical documentation (Article 11) and record-keeping (Article 12). These requirements are complementary.
  • Data integrity: GxP's ALCOA+ principles (Attributable, Legible, Contemporaneous, Original, Accurate, plus Complete, Consistent, Enduring, and Available) align with the EU AI Act's data governance requirements under Article 10.
  • Change management: GxP requires controlled change management for validated systems. The EU AI Act requires documentation of "pre-determined changes" and monitoring for substantial modifications.

Key Differences

Despite these commonalities, several differences require attention:

Scope of risk assessment: GxP risk management focuses primarily on product quality and patient safety. The EU AI Act's Article 9 requires risk assessment that also covers fundamental rights impacts, including discrimination and bias — dimensions that are not traditionally part of GxP risk management.

Bias and fairness: GxP frameworks do not explicitly address algorithmic bias or fairness in the way that the EU AI Act does. Pharmaceutical companies must extend their risk assessment practices to include the potential for AI systems to produce discriminatory outcomes, particularly in patient-facing applications.

Transparency to affected individuals: GxP's transparency requirements are primarily directed at regulatory authorities. The EU AI Act's Article 13 requires transparency to deployers and, through them, to affected individuals. For pharmaceutical AI that affects patients, this represents a new dimension of transparency.

Continuous learning systems: GxP frameworks were designed for deterministic software systems. AI systems that learn and adapt from new data challenge traditional validation approaches. The EU AI Act requires ongoing monitoring and risk management throughout the system's lifecycle, which aligns with the need for new validation approaches for AI in GxP environments.

The PIC/S (Pharmaceutical Inspection Co-operation Scheme) and individual regulatory authorities are developing guidance on the use of AI/ML in GxP environments. Pharmaceutical companies should track these developments and integrate the emerging guidance with their EU AI Act compliance programmes.

Compliance Strategy for Pharmaceutical Companies

Phase 1: Inventory and Classification

Map all AI systems: Conduct a comprehensive inventory of AI systems across the value chain, from discovery through manufacturing and post-market surveillance. For each system, determine:

  • Whether it qualifies as a medical device, in vitro diagnostic, or safety component of a regulated product
  • Whether it falls within an Annex III high-risk category
  • Whether it is subject to GxP requirements
  • Whether the company acts as the system's provider, its deployer, or both

Prioritise by risk: Focus initial compliance efforts on AI systems that are clearly high-risk (medical device AI, patient-facing systems) and those in GxP-regulated processes.
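
A minimal sketch of this triage step, assuming a hypothetical three-tier labelling of the inventory (the tier names and example systems are illustrative):

```python
# Order systems so clearly high-risk and GxP-regulated AI gets attention first
PRIORITY = {"high-risk": 0, "gxp-only": 1, "minimal": 2}

inventory = [
    {"name": "lead-optimisation model", "tier": "minimal"},
    {"name": "batch-release vision QC", "tier": "gxp-only"},
    {"name": "companion diagnostic", "tier": "high-risk"},
]

triaged = sorted(inventory, key=lambda s: PRIORITY[s["tier"]])
print([s["name"] for s in triaged])
# ['companion diagnostic', 'batch-release vision QC', 'lead-optimisation model']
```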

Phase 2: Framework Integration

Build an integrated compliance approach: Rather than creating parallel compliance programmes for GxP and the EU AI Act, develop an integrated framework that addresses both sets of requirements. This might include:

  • Extending existing quality management systems to incorporate EU AI Act requirements
  • Adapting computer system validation practices for AI/ML systems
  • Integrating bias and fairness assessment into existing risk management processes
  • Expanding documentation practices to cover EU AI Act Annex IV requirements

Phase 3: Governance and Capability Building

Establish AI governance: Create a cross-functional AI governance structure that includes representation from R&D, clinical, manufacturing, quality, regulatory affairs, data science, and legal functions.

Invest in AI literacy: Train staff across the organisation in AI fundamentals and EU AI Act requirements. Article 4's AI literacy obligation applies from 2 February 2025.

Develop internal expertise: Build or acquire expertise in AI risk assessment, bias detection, and explainability. These capabilities are essential for ongoing compliance and will become increasingly important as AI becomes more deeply embedded in pharmaceutical operations.

Phase 4: Regulatory Engagement

Engage with regulators proactively: Discuss AI strategies with the EMA, national competent authorities, and notified bodies early in the development process. Regulatory agencies are developing their own capabilities and expectations around AI, and early engagement helps avoid surprises during regulatory review.

Contribute to standards development: Participate in the development of harmonised standards for AI in healthcare and pharmaceuticals. These standards will shape how the EU AI Act's requirements are interpreted and applied in practice.

Timeline and Priorities

The EU AI Act's timeline creates a layered set of deadlines for the pharmaceutical sector:

  • 2 February 2025: AI literacy requirements (Article 4) and prohibited practices (Article 5) apply
  • 2 August 2025: Obligations for GPAI models (Article 53) apply
  • 2 August 2026: High-risk obligations for Annex III systems apply
  • 2 August 2027: High-risk obligations for Annex I systems (including medical devices) apply
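
The dates above can be encoded in a small lookup that flags each inventoried system's applicability date; the classification labels are illustrative assumptions:

```python
from datetime import date

# Applicability dates from the EU AI Act's phased timeline (as listed above)
DEADLINES = {
    "ai_literacy_and_prohibitions": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_high_risk": date(2027, 8, 2),
}

def applicable_deadline(classification: str) -> date:
    """Map an (illustrative) classification label to its compliance date."""
    if classification == "annex_i":
        return DEADLINES["annex_i_high_risk"]
    if classification == "annex_iii":
        return DEADLINES["annex_iii_high_risk"]
    # Literacy and prohibition duties apply to everyone from the earliest date
    return DEADLINES["ai_literacy_and_prohibitions"]

print(applicable_deadline("annex_i"))  # 2027-08-02
```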

For pharmaceutical companies, the 2027 deadline for Annex I systems is particularly relevant, as many pharmaceutical AI applications are classified through the medical device pathway. However, the complexity of integrating EU AI Act requirements with GxP and MDR/IVDR compliance means that preparation should begin well in advance of these deadlines.

Conclusion

The EU AI Act adds a new regulatory layer to the pharmaceutical industry's already complex compliance landscape. While many pharmaceutical AI applications fall outside the high-risk categories, the regulation's requirements have broad implications for how pharmaceutical companies develop, validate, deploy, and monitor AI systems.

The pharmaceutical industry's strong culture of regulatory compliance, quality management, and documentation provides a solid foundation for EU AI Act compliance. The key challenge is integration — extending existing GxP frameworks to address the AI Act's specific requirements for bias assessment, transparency, and fundamental rights protection, while adapting traditional validation approaches for the unique characteristics of AI systems.

Companies that start early and take an integrated approach will find that EU AI Act compliance reinforces rather than conflicts with their existing commitment to product quality, patient safety, and scientific rigour. Those that defer action risk facing a compressed timeline and the challenge of retrofitting compliance into systems and processes that were not designed with these requirements in mind.

Make Your AI Auditable and Compliant

Ctrl AI provides expert-verified reasoning units with full execution traces — the infrastructure you need for EU AI Act compliance.

Explore Ctrl AI
