Decoding the EU AI Act’s Impact on Biotech

Right now, the European Union's Artificial Intelligence Act (EU AI Act) presents both challenges and opportunities for biotech companies. By introducing a risk-based classification system for AI applications, ranging from minimal-risk to high-risk categories, the Act imposes obligations that directly affect how biotech companies develop and deploy AI. In this article, we decode these regulations and outline how to implement them successfully.

The European Union's Artificial Intelligence Act (EU AI Act) is the first comprehensive regulatory framework for artificial intelligence systems globally. It introduces a risk-based classification system for AI applications, ranging from minimal-risk to high-risk categories, and imposes corresponding obligations. For biotech companies that use AI in research and development (R&D), as well as in clinical or commercial applications, the Act has far-reaching implications.

 

The EU AI Act

The EU AI Act, a groundbreaking piece of legislation, represents the world's first comprehensive legal framework for regulating artificial intelligence. Developed over several years, the Act aims to ensure AI systems within the European Union are safe, transparent, accountable, and ethically used. Its journey began with initial frameworks in 2018, followed by a white paper in 2020 and an official proposal in 2021. After intensive negotiations and approvals, the Act entered into force on August 1, 2024.

At its core, the legislation adopts a risk-based approach, categorizing AI systems into four levels: unacceptable, high, limited, and minimal risk. This allows for proportionate regulation based on potential harm. The Act applies horizontally across all sectors and addresses challenges posed by modern AI technologies, including generative AI and large language models. It sets out specific obligations for the various stakeholders involved in AI development and deployment, with stringent requirements for high-risk systems. Notably, it includes provisions for general-purpose AI models, recognizing the rapid advancements in this field.

With its comprehensive scope and forward-looking approach, the EU AI Act is positioned to set a global standard for AI regulation. It complements existing EU legislation such as the GDPR and aims to foster trustworthy AI while supporting innovation and investment across Europe. The Act's phased implementation allows for gradual adaptation, with deadlines for various provisions ranging from 6 to 36 months after entry into force. As the first of its kind worldwide, the Act is expected to inspire and shape global standards for AI governance in the years to come.

 

Risk Categorization  

Risk categorization is the Act's foundational concept: the category an AI application falls into determines how it is treated and which obligations apply. The EU AI Act classifies AI systems into four categories based on risk:

Minimal-Risk:

These systems face no specific regulatory obligations.

Use Cases in Biotech: AI tools for administrative work, such as data annotation and labelling tools that help organize and categorize data. These systems help in streamlining processes without directly impacting patient care or safety.

Specific Transparency Risk:  

Systems must disclose that users are interacting with AI.

Use Cases in Biotech: Applications such as medical wearables (e.g., fitness trackers or non-invasive health monitors) and AI-powered tools that assist recovery in non-critical situations. Research assistants for reading and analyzing scientific literature also fall into this category.

High-Risk:  

High-risk systems include applications that affect safety or fundamental rights and must meet strict obligations.

Use Cases in Biotech: This category includes AI-driven medical devices (e.g., diagnostic tools, imaging systems), clinical decision-making tools, patient data analysis for personalized medicine, drug development, and robotic surgery assistance.

Unacceptable Risk:  

AI applications that could harm fundamental rights or manipulate users are outright banned.

Use Cases in Biotech: DNA-based policing, social scoring systems based on health data, and other applications that violate privacy and human rights.

 

The four categories of AI risk

Biotech companies developing AI systems will often fall under the high-risk category. However, the regulations do not apply to AI systems developed exclusively for R&D purposes; compliance becomes mandatory only once a system transitions to commercial or clinical use. This allows biotech companies to innovate freely during the R&D phase.

 

Compliance Requirements Under the EU AI Act

For biotech companies, complying with the EU AI Act is essential to avoid penalties and to ensure that AI systems are safe and reliable. The Act mandates strict compliance obligations, especially for high-risk AI systems. Companies that fail to comply face severe fines, ranging from €7.5 million (or 1.5% of global turnover, whichever is higher) to €35 million (or 7% of global turnover, whichever is higher), depending on the severity of the violation.

 

1. Risk Management and Data Governance 

Companies developing AI systems must establish a risk management system that spans the system's entire lifecycle, with automatic tracking of events and substantial modifications. Additionally, the company must ensure that training, validation, and testing datasets are relevant, representative, and free from errors or bias. For biotech applications, particularly those using patient data or other sensitive information, data integrity and fairness are crucial.
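As an illustration of what such a data-governance check might look like in practice, the sketch below flags under-represented subgroups in a training set. The attribute name, record format, and tolerance threshold are assumptions for the example, not values prescribed by the Act.

```python
from collections import Counter

def subgroup_balance(records, attribute, tolerance=0.5):
    """Flag subgroups of a sensitive attribute (e.g., sex or age band)
    that are under-represented relative to the largest subgroup.
    Returns {subgroup: ratio} for every subgroup whose count falls
    below `tolerance` times the largest subgroup's count."""
    counts = Counter(r[attribute] for r in records)
    largest = max(counts.values())
    return {group: n / largest for group, n in counts.items()
            if n / largest < tolerance}

# Hypothetical training records: 8 female, 2 male participants.
records = [{"sex": "F"}] * 8 + [{"sex": "M"}] * 2
flagged = subgroup_balance(records, "sex")  # {"M": 0.25}
```

A check like this would typically run as part of dataset validation before each training cycle, with the flagged subgroups feeding into the documented risk management process.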

 

2. System Performance and Continuous Monitoring

AI systems placed on the market must adhere to high standards of accuracy, robustness, and cybersecurity. To achieve this, companies should implement a quality management system designed to continually monitor and ensure compliance with regulatory standards. This includes rigorous testing to validate that AI predictions—such as identifying the most effective treatments for patients—are accurate and adaptable to diverse patient scenarios. Moreover, the system must be designed to guarantee human oversight, allowing deployers to intervene when necessary.
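As a minimal sketch of what continuous monitoring with a human-oversight hook could look like (the accuracy threshold and window size are illustrative assumptions, not figures from the Act), a deployed system could track its accuracy on labelled outcomes and flag itself for review:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceMonitor:
    """Toy continuous-monitoring check: track prediction accuracy over a
    rolling window of labelled outcomes and flag the system for human
    review when accuracy drifts below an agreed threshold."""
    threshold: float = 0.90
    window: int = 100
    _results: list = field(default_factory=list)

    def record(self, prediction, outcome) -> None:
        # Store whether the prediction matched the observed outcome,
        # keeping only the most recent `window` results.
        self._results.append(prediction == outcome)
        self._results = self._results[-self.window:]

    @property
    def accuracy(self) -> float:
        return sum(self._results) / len(self._results) if self._results else 1.0

    def needs_human_review(self) -> bool:
        # Human-oversight hook: deployers intervene when accuracy drifts.
        return self.accuracy < self.threshold
```

In a real quality management system, a flag like `needs_human_review` would trigger a documented escalation path rather than a silent log entry.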

 

3. Technical Documentation and Transparency 

Transparency is a central requirement of the EU AI Act. Companies must provide detailed documentation that demonstrates how their AI systems comply with the legal and safety requirements. This includes offering transparent insights on their algorithms’ decisions to both medical professionals and regulators. For example, if an AI model predicts which patients are likely to respond well to a new treatment, it should provide an explanation for why it made a particular recommendation, such as a breakdown of biomarkers or patient health data that contributed to the system’s output.
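To make this concrete, here is a minimal sketch of such a breakdown for a simple linear scoring model. The biomarker names, weights, and values are hypothetical, chosen only for illustration; a real system would more likely rely on established attribution methods (e.g., SHAP-style explanations) for non-linear models.

```python
def explain_linear_prediction(weights, features, baseline=0.0):
    """Return the overall score and per-feature contributions for a
    linear model: score = baseline + sum(weights[f] * features[f]).
    The contribution list, ranked by magnitude, is the kind of
    biomarker breakdown the transparency requirement points toward."""
    contributions = {f: weights[f] * v for f, v in features.items()}
    score = baseline + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical biomarkers and learned weights, for illustration only.
weights = {"biomarker_a": 0.8, "biomarker_b": -0.5, "age_normalized": 0.1}
features = {"biomarker_a": 1.2, "biomarker_b": 0.4, "age_normalized": 0.5}
score, ranked = explain_linear_prediction(weights, features)
# ranked[0] names the feature that contributed most to the score.
```

The point is not the arithmetic but the output contract: alongside every recommendation, the system reports which inputs drove it, in a form a clinician or regulator can inspect.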

 

Navigating Compliance Challenges: EU AI Act, GDPR, and Other Regulations in Biotech  

 


Biotech companies operating in Europe also need to comply with the General Data Protection Regulation (GDPR), which governs personal data protection. While both the GDPR and the EU AI Act share some common goals—such as promoting transparency—they often impose conflicting requirements on data governance.  

The EU AI Act mandates the use of extensive datasets to avoid bias and ensure representativeness. This can often conflict with certain aspects of the GDPR:

1. Data Minimization

GDPR requires companies to collect only the minimum amount of personal data necessary for a specific purpose. This can fall short of the breadth of data the AI Act demands for representative, bias-free training.

2. Consent

GDPR requires explicit consent from individuals for processing sensitive personal data. If an individual withdraws consent, their data must be deleted or anonymized, leading to potential gaps that can compromise the accuracy of AI models.

3. Data Retention Periods

GDPR requires that personal data be deleted once it is no longer needed for its original purpose. High-risk AI systems, however, may need to store large datasets for extended periods (for model training, retraining, or auditing), which runs counter to GDPR's storage-limitation principle. Biotech companies must navigate these opposing requirements when managing patient or clinical trial data.

4. Right to Erasure

GDPR gives individuals the right to request the deletion of their personal data. This can be particularly complex when personal data is used to train AI models, as the model itself may not be able to “forget” specific data points once they are integrated into the system.  

 

Navigating the complex landscape of regulations, such as the GDPR, the MDR (Medical Device Regulation), the IVDR (In Vitro Diagnostic Regulation), and now the EU AI Act, presents a significant challenge. It requires investment in legal expertise, advanced data management systems, and ongoing monitoring processes. The regulations may conflict, and notified bodies responsible for assessing compliance often lack specialized knowledge of advanced machine learning techniques used in biotech applications, creating a skill gap that can delay approvals and increase uncertainty. To stay competitive, biotech companies must also integrate ethical principles—such as fairness, accountability, and human oversight—into their operations, all while continuing to innovate.

 

Best Practices for Compliance

  1. Develop a comprehensive AI governance strategy that includes risk assessments, documentation, and clearly defined responsibilities for each AI system within the organization.
  2. Implement a system for regular testing to ensure continuous compliance with both the AI Act and the GDPR. To satisfy both regulations, invest in robust anonymization techniques and synthetic data generation that allow the use of extensive datasets while meeting privacy requirements.
  3. Invest in regular staff training to ensure that employees understand the requirements of the EU AI Act, the GDPR, and other regulations. When needed, engage with regulators to clarify expectations and use AI regulatory sandboxes, which allow testing in a controlled environment.
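The privacy techniques in point 2 above can be illustrated with a minimal pseudonymization sketch. One caveat worth stating plainly: pseudonymized data is still personal data under the GDPR, so this reduces re-identification risk but does not by itself achieve anonymization. The field names and salt below are assumptions for the example.

```python
import hashlib

def pseudonymize(record, salt, direct_identifiers=("name", "email")):
    """Replace the patient ID with a salted hash and drop direct
    identifiers. The salt must be stored separately and access-controlled,
    since anyone holding it can recompute the mapping."""
    out = {k: v for k, v in record.items() if k not in direct_identifiers}
    token = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()[:16]
    out["patient_id"] = token
    return out

record = {"patient_id": 123, "name": "Jane Doe",
          "email": "jane@example.org", "biomarker": 4.2}
safe = pseudonymize(record, salt="project-secret")
# `safe` keeps the biomarker value but no direct identifiers.
```

Because the same salt always yields the same token, records for one patient can still be linked across datasets for training, while the clear-text identity stays out of the modeling pipeline.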

 

Seize the AI Revolution in Biotech with Confidence

Now is the time to embrace the opportunities of the EU AI Act and approach its challenges with your head held high. To succeed in navigating this complex regulatory landscape, you need a proactive and strategic mindset. 
By embracing best practices in AI governance, data management, and ethical considerations, you can: 
•    Ensure compliance
•    Foster innovation
•    Maintain a competitive edge
Remember: Don't let regulatory uncertainty hinder your progress in leveraging AI's transformative power. 
  

Ready to unlock the full potential of AI in your biotech endeavors while ensuring seamless compliance?

Contact Tenthpin Management Consultants today for expert guidance and tailored solutions. Our team of experienced consultants understands the intricacies of the EU AI Act, GDPR, and other relevant regulations. We can help you develop a robust AI strategy, implement effective risk management systems, and navigate the evolving regulatory landscape with confidence.

Partner with Tenthpin and transform compliance into a catalyst for innovation. 


written by

Bart Reijs

Director

Adriana Monteiro

AI/ML Engineer

