AI isn’t coming to life sciences—it’s already here.
It’s being used to screen ingredients, draft labels, monitor claims, summarize safety data, assemble submissions, and support regulatory decision-making across dietary supplements, Natural Health Products (NHPs), pharmaceuticals (OTC drugs), biotherapeutics, cosmetics, and food products.
The upside is obvious: faster timelines, more efficient workflows, and better use of data.
The downside is less obvious—but much more important.
AI is now influencing decisions that regulators ultimately hold companies accountable for. And in many cases, it’s being used without formal validation, governance, or traceability.
As we mentioned in our ISO/IEC 42001 article, AI governance isn’t something you apply at the end—it needs to exist across the entire lifecycle, from initial use to ongoing monitoring.
So the real question isn’t whether to use AI.
It’s how regulators are thinking about it—and where the risks actually sit.
This isn’t about AI as a standalone product.
This is about AI being used inside regulated processes across:
- dietary supplements and Natural Health Products (NHPs)
- pharmaceuticals (OTC drugs) and biotherapeutics
- cosmetics and food products
AI is already embedded across the full lifecycle—from planning and submission strategy through formulation, regulatory review, labeling, submission management, and post-market monitoring.
That breadth is what makes AI particularly complex from a regulatory perspective.
The frameworks governing these products were designed around human decision-making. They assume traceability, accountability, and a clear chain of responsibility. AI does not remove those expectations—it introduces new ways for them to break down if not properly controlled.
At a high level, there are three distinct approaches emerging globally.
The EU has taken the most structured approach, implementing a comprehensive, risk-based legal framework through the AI Act. This framework classifies AI systems based on their level of risk and applies corresponding obligations, from transparency requirements to strict controls for high-risk systems.
The United States, led by FDA, has taken a more operational and sector-specific approach. Rather than relying on a single overarching AI law, FDA is integrating AI into internal review processes while also developing targeted expectations for areas such as drug development, regulatory decision-making, and AI-enabled medical devices.
Health Canada remains principles-based and evolving, with no formal AI-specific framework yet in place. The proposed Artificial Intelligence and Data Act (AIDA) represents a move toward a formal framework, but Canada currently relies heavily on existing legislation and voluntary guidance.
Across all three, there is a clear convergence:
- accountability for AI-influenced decisions stays with the company
- transparency about where and how AI is used
- controls, validation, and human oversight proportionate to risk
This reflects broader global regulatory trends, where governments are balancing innovation with safety, transparency, and accountability.
The differences between regions become much clearer when you look at how AI is treated across each stage of the regulatory lifecycle. While the underlying principles are starting to align globally, the way each regulator operationalizes those principles still varies.
AI across the regulatory lifecycle
| Lifecycle Stage | FDA (U.S.) | Health Canada | EU | Where AI Creates Risk |
|---|---|---|---|---|
| Planning & Strategy | Moving toward structured, AI-supported decision frameworks tied to context of use | No formal AI framework; relies on existing regulatory expectations | Risk-based classification applied early in lifecycle | Choosing the wrong regulatory pathway or building strategy on unverified outputs |
| Formulation & Development | AI acceptable if scientifically credible and validated | Full responsibility remains with sponsor | Strong emphasis on data governance and transparency | Bias in training data, non-reproducible outputs, flawed assumptions |
| Regulatory Review | Increasing use of AI-assisted tools to review submissions and safety data | No comparable public AI deployment | EMA developing internal AI capabilities | Submissions not structured for AI-assisted review; inconsistencies exposed |
| Market Authorization & Changes | Expectation for structured, consistent submissions | Governed through existing frameworks (e.g., NPNs, DINs) | Lifecycle-based oversight rather than static approval | Version drift, undocumented AI-generated updates |
| Labeling & Claims | AI used for label comparison and consistency checks | Governed under existing labeling regulations | Transparency obligations tied to AI use | Non-compliant claims, fabricated or unsupported substantiation |
| Submission Management | Shift toward structured, machine-readable data | No specific AI guidance; internal governance expected | Strong documentation and traceability expectations | Data leakage, lack of audit trail, unclear provenance of outputs |
| Lifecycle / QA / Safety Monitoring | AI used for adverse event summarization and signal detection | Emphasis on monitoring under existing frameworks | Increasing focus on AI-enabled pharmacovigilance | Missed safety signals, automation bias, inadequate escalation |
The EU’s approach is the most formalized. The AI Act introduces a clear, risk-based model where obligations scale based on the level of risk posed by the system. High-risk systems must meet strict requirements related to data governance, transparency, and oversight.
The FDA’s approach is more practical and iterative. Rather than waiting for a comprehensive legal framework, it is actively integrating AI into regulatory processes. This means companies are increasingly being evaluated in environments where AI plays a role in review and decision-making.
Health Canada’s approach is more flexible, but also more ambiguous. There is currently no comprehensive AI-specific regulatory framework. While AIDA proposes requirements around identifying and mitigating risks of harm and bias, much of its implementation remains pending. The level of AI-specific control also varies by product category: Health Canada has more concrete AI/ML guidance in the medical device space, including guidance for machine-learning-enabled medical devices and joint Good Machine Learning Practice (GMLP), transparency, and Predetermined Change Control Plan (PCCP) principles developed with FDA and the UK's MHRA.
This creates a unique dynamic:
AI is not always explicitly regulated—but its outputs are still fully regulated.
Across all regions, the biggest issue isn’t AI itself.
It’s how AI is being used inside regulated decisions.
AI systems can generate outputs that appear credible but are factually incorrect. They can introduce bias through skewed datasets. They can summarize complex data in ways that omit critical context. And they can create a false sense of confidence in outputs that haven’t been properly validated.
These risks are not theoretical.
Research shows that AI-enabled systems in therapeutics can accelerate development and regulatory processes, but also introduce risks related to hallucinations, bias, and lack of transparency—each of which can directly impact safety, efficacy, and compliance.
There are several consistent challenges across jurisdictions that companies need to account for when integrating AI into regulated workflows: unvalidated or non-reproducible outputs, bias in underlying data, missing audit trails and unclear provenance, and automation bias in safety monitoring. A minimal illustration of catching one of these follows below.
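To make one of these challenges concrete, the sketch below shows how a company might catch unsupported, AI-drafted label claims before they reach a submission by checking them against a pre-approved claims register. This is a minimal sketch, not a production control; the function, the register, and the example claims are all hypothetical.

```python
# Hypothetical guardrail: flag AI-drafted label claims that are not in a
# company's pre-approved claims register. Illustrative only; a real control
# would use far more robust matching plus documented human sign-off.

APPROVED_CLAIMS = {
    "helps support immune function",
    "source of vitamin c",
}

def flag_unsupported_claims(drafted_claims: list[str]) -> list[str]:
    """Return drafted claims that require human review because they do not
    exactly match an entry in the approved register."""
    return [
        claim for claim in drafted_claims
        if claim.strip().lower() not in APPROVED_CLAIMS
    ]

if __name__ == "__main__":
    draft = [
        "Helps support immune function",
        "Clinically proven to cure colds",  # unsupported: must be flagged
    ]
    for claim in flag_unsupported_claims(draft):
        print(f"REQUIRES HUMAN REVIEW: {claim}")
```

Even a check this simple forces the question regulators care about: who verified the output, and against what evidence?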
For companies operating in regulated life sciences environments, the implications are clear.
AI is becoming embedded in regulatory workflows, but it does not reduce regulatory burden—it changes it.
Organizations must now manage not only submissions and compliance outputs, but also the systems that generate those outputs. That includes ensuring data integrity, validating outputs, maintaining traceability, and implementing governance frameworks.
Companies that treat AI as a simple efficiency tool are more likely to introduce risk. Those that integrate it into their regulatory and quality systems are more likely to create long-term value.
The most effective approach is to treat AI as part of the regulatory and quality system.
That includes:
- validating AI outputs before they inform regulated decisions
- maintaining data integrity and a documented audit trail for AI-generated content
- defining clear human accountability for anything AI produces
- implementing a governance framework that covers the full AI lifecycle
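As one way to picture the traceability piece, the sketch below records who and what stood behind an AI-assisted output. The record structure and field names are assumptions for illustration; a real implementation would align them with the organization's quality system and its ISO/IEC 42001 controls.

```python
# Minimal sketch of an audit-trail record for an AI-assisted regulatory
# output. All field names and values here are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

def digest(text: str) -> str:
    """Fingerprint the exact inputs so the output can be reproduced later."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass
class AIOutputRecord:
    document_id: str       # the regulated output, e.g. a draft label
    model_identifier: str  # which AI system and version produced it
    input_digest: str      # hash of the inputs, for reproducibility
    human_reviewer: str    # the named person accountable for the output
    approved: bool         # not used in a submission until signed off
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = AIOutputRecord(
    document_id="LBL-0417-draft",
    model_identifier="internal-llm-v3.2",
    input_digest=digest("formulation sheet + labeling template"),
    human_reviewer="J. Smith (Regulatory Affairs)",
    approved=False,  # pending validation under quality procedures
)
print(record)
```

The point is not the code itself but the discipline it represents: every AI-generated output should be traceable to its inputs, its system version, and a named human reviewer.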
As highlighted in our ISO/IEC 42001 article, governance frameworks are critical—not just for compliance, but for maintaining control as AI systems evolve.
AI is already embedded in how regulatory work is performed across life sciences. From early-stage planning through labeling and post-market monitoring, it is influencing decisions that directly impact compliance, safety, and market access.
What differs across jurisdictions is not the direction—but the level of maturity. The European Union has taken a more structured and prescriptive approach through a formal risk-based framework. The FDA is advancing more operationally, integrating AI into its internal processes and expectations. Health Canada continues to evolve, relying on existing regulatory structures while broader AI-specific legislation develops.
Despite these differences, a consistent expectation is emerging. Regulators are not restricting the use of AI—but they are reinforcing the need for control, transparency, and accountability in how it is used.
For companies, this means AI cannot be treated as a standalone tool or a simple efficiency gain. It must be integrated into regulatory and quality systems in a way that ensures outputs are traceable, validated, and defensible.
Ultimately, the risk is not the adoption of AI itself, but the lack of governance around it. Organizations that recognize this—and build appropriate controls into their processes—will be better positioned to leverage AI while maintaining compliance across global markets.
At dicentra, we work at the intersection of regulatory affairs, quality systems, and clinical strategy—where the impact of AI is already being felt across the product lifecycle.
As AI becomes more embedded in regulatory workflows, the challenge is no longer whether to use it, but how to ensure it is implemented in a way that remains compliant, defensible, and aligned with evolving regulatory expectations.
We support companies by:
- assessing where and how AI is used across regulatory and quality workflows
- building governance, validation, and traceability controls around AI-generated outputs
- aligning AI use with evolving expectations from FDA, Health Canada, and EU regulators
Our role is to help organizations adopt AI in a way that strengthens—not compromises—their regulatory position.
Contact dicentra for support with artificial intelligence in regulatory affairs.