Artificial Intelligence and the Changing Landscape of Regulations in the Life Sciences

Gaugarin Oliver
Feb 12, 2024 · 5 min read


AI has become an integral part of the life sciences landscape, contributing to advances in diagnosis, treatment, and overall healthcare delivery. However, many worry that without proper government oversight, the use of AI and machine learning (ML) algorithms could cause unintended and harmful consequences, including critical errors and amplified bias.

That’s why many international regulatory bodies have recently turned their attention to AI in several industries, including the life sciences.

What Are the Benefits of AI in Life Sciences?

AI and big data analytics are already an important part of the life sciences: AI is used in software for medical devices, and large language models (LLMs) and generative AI can help alleviate staff shortages and other pressures hospitals face.

The World Health Organization (WHO) says AI is useful in several life sciences-related applications, including:

  • Responding to written patient queries
  • Other clerical tasks such as summarizing patient visits
  • Symptom/treatment investigation and diagnostics
  • Powering lifelike simulations for healthcare staff training
  • Analyzing data to discover new compounds and risks for drug discovery and development

The WHO says AI tools have the potential to “transform the health sector” by strengthening clinical trials, improving diagnostics and personalized healthcare, and supplementing healthcare providers’ knowledge and skills.

AI can also benefit organizations involved in pharmacovigilance, streamline pharmaceutical supply chains, improve patient satisfaction, and support advanced diagnostics and personalized medical interventions.

AI Regulations Affecting Life Sciences

But AI also carries plenty of risk, especially in life sciences and healthcare scenarios where patients’ lives are on the line. Errors, bias, and the possibility of data breaches as healthcare providers generate, receive, and store massive amounts of sensitive data are just a few of the concerns.

Regardless, regulators’ increased awareness of AI mirrors the already-intense interest in AI within the life sciences industry: regulatory submissions involving AI increased nearly 10x between 2020 and 2021.

Life sciences news outlet BioBuzz says most of these submissions were Investigational New Drug (IND) applications.

This flurry of activity has led several regulatory bodies to enact new compliance frameworks and guidelines for the use of AI in the life sciences, aiming to ensure healthcare products are safe and effective.

Here’s a look at some of the most impactful AI regulations for life sciences from around the world:

United States

The U.S. Food and Drug Administration (FDA) recently published several documents on the use of AI in life sciences, including the Center for Drug Evaluation and Research’s Artificial Intelligence in Drug Manufacturing paper and another discussion paper exploring the use of AI and ML for developing drugs and other biological products.

The FDA has also reviewed and authorized a number of medical devices with AI/ML (marketed via 510(k) clearance, granted De Novo request, or premarket approval) in the U.S. However, as of late 2023, the body has not authorized any devices using generative AI or powered by LLMs.

FDA-authorized AI/ML devices span the following practice areas:

  • Radiology
  • Cardiovascular
  • Neurology
  • Hematology
  • Gastroenterology/urology
  • Ophthalmic
  • Ear, nose, and throat

Interest in AI regulations for life sciences goes beyond borders, however. Another recent FDA document, produced in tandem with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA), outlines Good Machine Learning Practice (GMLP) guiding principles for medical devices.

European Union

The E.U. recently agreed on legislation to regulate AI (the AI Act), which is expected to enter into force later this year. The act will affect life sciences companies in several ways, but the headline change is that most AI-enabled medical devices will be classified as “high-risk” AI systems in the E.U.:

  • Class IIa and above medical devices and in vitro diagnostic (IVD) tests using AI: These will be subject to the requirements for high-risk AI systems, including tighter risk management obligations, testing, data governance rules, and documentation requirements.
  • Digital companion diagnostics (CDx): These products will also be classified as high-risk AI systems and subject to the same requirements.
  • Clinical trials: Clinical software that relies on AI to perform tasks such as optimizing molecular screening or predicting drug efficacy, and that is classified as a medical device or IVD, is also considered a high-risk AI system.
  • Non-device AI systems: AI systems used elsewhere in the medical product lifecycle are generally not categorized as high-risk, although the act says each system should be evaluated on a case-by-case basis. (A rough triage sketch of this breakdown follows below.)
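To make the breakdown above concrete, here is a minimal Python sketch of how a company might triage its own portfolio against these categories. The AISystem fields, the likely_high_risk logic, and the device-class shorthand are simplified assumptions for illustration only, not the AI Act’s actual legal tests.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    """Simplified description of one AI-enabled product in a portfolio (illustrative only)."""
    name: str
    uses_ai: bool
    is_medical_device_or_ivd: bool          # regulated as a medical device or IVD test
    device_class: Optional[str] = None      # e.g. "I", "IIa", "IIb", "III"
    is_companion_diagnostic: bool = False

# Assumed shorthand for "Class IIa and above" from the breakdown above.
HIGH_RISK_DEVICE_CLASSES = {"IIa", "IIb", "III"}

def likely_high_risk(system: AISystem) -> bool:
    """Rough first-pass triage mirroring the bullet points above.
    Not legal advice: actual classification under the AI Act is case-by-case."""
    if not system.uses_ai:
        return False
    if system.is_companion_diagnostic:
        return True
    if system.is_medical_device_or_ivd and system.device_class in HIGH_RISK_DEVICE_CLASSES:
        return True
    # Non-device AI systems are generally not high-risk, but still warrant review.
    return False

# Example: an AI-driven Class IIa screening IVD would be flagged for the high-risk workstream.
screening_ivd = AISystem("molecular screening model", True, True, "IIa")
print(likely_high_risk(screening_ivd))  # True
```

In practice such a triage would only tell a company which products to route into a formal conformity assessment, not the outcome of that assessment.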

WHO

While not a regulatory body, the WHO has also published its own regulatory considerations on the safety and effectiveness of AI in the life sciences. Specifically, the WHO report highlights six main considerations for AI regulation:

  1. Regulation should foster trust through transparency and documentation of the entire product lifecycle.
  2. Training models should be kept as simple as possible, and issues such as security threats should be addressed.
  3. Data should be externally validated, and the intended use of AI models should be made clear, to help ensure safety.
  4. Companies should commit to rigorous data quality mechanisms to ensure systems don’t amplify biases or errors.
  5. Models should respect data privacy regulations such as the E.U.’s GDPR and HIPAA in the U.S.
  6. Regulatory bodies, healthcare professionals, governments, industry representatives, and patients should collaborate to ensure products stay compliant throughout their lifecycle.

In early 2024, the WHO released an additional set of guidelines on the use of generative AI in healthcare settings. These guidelines urged governments to:

  • Invest in not-for-profit or public infrastructure related to generative AI, and require companies that access that infrastructure to adhere to ethical principles.
  • Respect patients’ dignity, autonomy, and privacy with policies related to patient rights.
  • If possible, create a new regulatory agency (or assign these tasks to an existing agency) within the government to approve LLMs.
  • Impose mandatory auditing and impact assessments by third parties on any large-scale use of generative AI in healthcare.

How Life Science Companies Can Prepare

Not everyone is completely happy with the current regulatory landscape, however. The relatively slow pace of regulation sometimes discourages innovation, according to one source from Cambridge Design Partnership, a U.K.-based company with expertise in medical device quality and regulatory affairs.

“The FDA is still focused on locking the algorithm down. It is not a continuous learning exercise. It is a preapproved model that has gone through testing and validation and then gets locked,” Tim Murdoch, the company’s business development lead, told Medical Device Network.

He added that if regulatory bodies continue with this approach, “we’ll cut out a lot of potential innovation.”

Life sciences companies can prepare for new regulations such as the E.U.’s AI Act by being proactive, according to law firm Sidley Austin LLP. That includes reviewing their AI systems to identify at-risk elements, educating internal teams about the risks of non-compliance, and building a robust AI governance framework.
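As one way to start that review, here is a minimal Python sketch of an AI-system inventory that flags under-documented high-risk entries. The GovernanceRecord fields, the risk tiers, and the follow-up actions are illustrative assumptions, not requirements drawn from the AI Act or from Sidley Austin’s guidance.

```python
from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    """One row in a hypothetical AI-system inventory kept for compliance review."""
    system_name: str
    intended_use: str
    risk_tier: str                          # assumed tiers: "high", "limited", "minimal"
    has_risk_assessment: bool = False
    has_technical_documentation: bool = False
    owner: str = "unassigned"

def open_compliance_actions(inventory: list) -> list:
    """Flag follow-up work for high-risk entries that look under-documented.
    Illustrative only; real obligations depend on the applicable regulation."""
    actions = []
    for rec in inventory:
        if rec.risk_tier == "high" and not rec.has_risk_assessment:
            actions.append(f"{rec.system_name}: complete a risk assessment")
        if rec.risk_tier == "high" and not rec.has_technical_documentation:
            actions.append(f"{rec.system_name}: prepare technical documentation")
    return actions

inventory = [
    GovernanceRecord("triage chatbot", "patient intake", "high", has_risk_assessment=True),
    GovernanceRecord("inventory forecaster", "supply-chain planning", "minimal"),
]
print(open_compliance_actions(inventory))  # ['triage chatbot: prepare technical documentation']
```

Even a simple register like this makes it easier to assign owners, track documentation gaps, and show regulators that a governance process exists.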

Conclusion

AI holds tremendous promise for life sciences and healthcare, but it also brings risks that could lead to negative health outcomes if not properly managed. That’s why regulatory bodies in the U.S., E.U., and elsewhere have recently issued guidance or passed legislation addressing the issue.

Written by Gaugarin Oliver

Chairman & CEO at CapeStart (www.capestart.com), an AI solutions provider offering end-to-end data annotation, machine learning, and software development.