
“The risk isn’t the incident. The risk is whether you can prove what happened when the clock starts.”

For the Pharmacovigilance and Patient Safety Lead, every signal carries weight. Safety decisions — whether a case is escalated, classified, or linked to a broader pattern — must be defensible, reproducible, and traceable long after the original decision was made.

As AI-assisted systems increasingly support signal detection, case triage, and classification across large pharmacovigilance datasets, the core risk shifts from processing volume to evidencing judgement. When models evolve or data snapshots change, safety leaders must still be able to demonstrate how a decision was reached, under which rules, and with which version of the system.

PARCIS enables that evidentiary backbone. By preserving replayable decision traces, version lineage, and governance context, it allows PV teams to reconstruct how AI-assisted triage or classification decisions were made at any point in time. The result is patient safety oversight that remains transparent, reproducible, and defensible under regulatory or clinical scrutiny.

Pharma / Patient Safety Empathy Quadrant

Says:

“Two sentences can still be serious. Don’t let the queue bury this.”

“What did the model triage it as, and why?”

“Did anything change overnight? Which version was running?”

“I need proof for inspection: what happened, when, under what policy, on what model version.”

“Escalate this to human review now, then check for cohort impact.”

“Export the pack. I’m not doing screenshot archaeology.”

Thinks:

The real risk isn’t one misclassification; it’s a silent severity reshuffle that hides a signal.

“Benign patch” is still change, and change without evidence becomes a finding (or worse).

If the triage route isn’t replay-capable, we’ll fail the hardest question later: “prove what it did at the time.”

Evidence must be decision-time and version-anchored, or we’ll argue about timestamps and lose.

Tiering is essential: minimal everywhere else, Tier 1 where patient safety depends on it, Tier 2 only when the incident demands it.

Feels:

Immediate gravity: this is a patient, not a data point.

Pressure from the clock and the knowledge that “day zero” starts before anyone feels ready.

A cold jolt when she sees the overnight patch, because she knows how inspections turn on moments like this.

Relief when the case becomes a bounded, provable record instead of a memory exercise.

Determination to protect patients and the programme’s credibility at the same time.

Does:

Pulls the case’s QiTraceID and verifies model/version, gate behaviour, governance fingerprint, and timestamps.

Compares behaviour across the version boundary; spots the “moderate vs serious” divergence.

Escalates the case to a human reviewer immediately; then audits the affected window for similar shifts.

Uses Tiering deliberately: the PV triage pathway runs Tier 1 for documentary replay; enables Tier 2 only for a scoped incident window if escalation is needed.

Exports signed, immutable evidence packs (per QiTraceID and cohort) to support inspection, audit, and future defensibility without rebuilding the story each time.

Two Sentences and a Clock: A Pharmacovigilance & Patient Safety Lead’s Story

Assumed deployment posture (Pharma PV story): tenant platform fee with Tier 1 enabled; prod PED (PV triage route) at Tier 1; Tier 2 (forensics) on demand for scoped incident windows (the incident month); other low-risk analytics surfaces at Tier 0.

It starts with a case narrative that doesn’t read like data. It reads like a person.

At 06:41, Leila opens the overnight queue and sees a free-text report that’s only two sentences long. No neat fields. No structured coding. Just a clinician in a regional hospital describing a sudden deterioration in a patient three days post-exposure, and a family member asking whether they should be terrified.

Two sentences.

Behind them, a human being who took a medicine that was supposed to help them.

The Reality of Pharmacovigilance

This is the part outsiders miss about pharmacovigilance: you don’t just manage volume. You manage consequence.

Leila’s team processes thousands of case reports a week—spontaneous reports, clinical trial events, literature signals, patient support programme data—and increasingly, AI-assisted triage helps classify severity, detect duplicates, flag potential signals, and prioritise what gets a human reviewer’s eyes first.

The AI isn’t making the safety decision. But it’s shaping which decisions get made quickly and which wait in the queue.

In pharmacovigilance, the queue is where people get hurt.

The Regulatory Clock

And the clock is already running.

If this is a serious suspected adverse reaction, regulators expect rapid submission—fifteen calendar days for post-authorisation ICSRs in most jurisdictions, seven days for fatal or life-threatening SUSARs in clinical trials.

Day zero isn’t when someone opens the email.

Day zero is when the organisation becomes aware.

The AI just triaged this case as moderate. If that classification is wrong, the clock has already started, and nobody knows it.
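The arithmetic of that clock is simple but unforgiving. A minimal sketch, assuming day zero is the awareness date and using the typical windows named above (actual rules vary by jurisdiction and report type, so the categories and day counts here are illustrative):

```python
from datetime import date, timedelta

# Illustrative only: reporting windows are simplified and vary by
# jurisdiction and report type. "Day zero" is the date the organisation
# became aware, not the date someone opened the email.
REPORTING_WINDOWS = {
    "post_authorisation_icsr_serious": 15,  # calendar days (typical)
    "susar_fatal_or_life_threatening": 7,   # calendar days (typical)
}

def submission_due(day_zero: date, report_type: str) -> date:
    """Return the latest submission date for a given awareness date."""
    return day_zero + timedelta(days=REPORTING_WINDOWS[report_type])

# If awareness began on 1 March, a serious post-authorisation ICSR is
# due by 16 March; a fatal or life-threatening SUSAR by 8 March.
print(submission_due(date(2024, 3, 1), "post_authorisation_icsr_serious"))
print(submission_due(date(2024, 3, 1), "susar_fatal_or_life_threatening"))
```

The point the sketch makes is the one Leila is living: a wrong "moderate" doesn't pause the clock, it just hides it.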

The Overnight Change

Then comes the detail that turns a busy morning into a genuinely dangerous one.

Leila discovers that a vendor patch landed overnight. A configuration bundle changed. The triage model was updated “to improve accuracy.”

Nobody did anything malicious. But Leila has been in safety long enough to know that in regulated pharmacovigilance, benign change is still change, and change is where defensibility lives or dies.

Because in three weeks—or three years, when a litigation team is reviewing trial data—someone will ask the question that makes PV teams go quiet: “Why did you triage it that way, on that day, under those conditions, and can you prove it?”

The Old Investigation

She’s lived the old version of this question.

The scramble to find which model version was running on which date. The vendor’s release notes that describe features, not behaviour. The validation records from initial deployment that no longer match what’s in production.

Someone in IT saying “we can check the logs” and coming back with timestamps that don’t align with the case management system.

A quality lead asking whether the change went through the deviation process. It didn’t, because nobody flagged an automatic patch as a change.

Three weeks later, the inspection finding writes itself: “unable to demonstrate validated state of the AI-assisted triage system at the time the classification decision was made.”

But this isn’t the old version of this morning.

Evidence at the Decision Boundary

Leila opens PARCIS XAI-Lite. It wraps the triage system at the decision boundary without touching the model—no access to weights, no retraining, no vendor IP required.

Every governed decision emits a QiTraceID, a cryptographic receipt minted at the moment the classification was made, backed by a tamper-evident audit spine.

The governance view is derived from the same integration hooks and decision context as the underlying AI.
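The mechanics behind a tamper-evident spine can be illustrated with a hash chain: each receipt commits to the decision context and to the previous receipt, so editing any historical entry breaks every later link. This is a generic sketch, not PARCIS's actual QiTraceID format, and all field names are hypothetical:

```python
import hashlib
import json

# Hypothetical receipt chain illustrating tamper evidence; the real
# QiTraceID and audit-spine formats are not public.
def mint_receipt(prev_hash: str, decision: dict) -> dict:
    """Mint a receipt whose hash commits to the decision and its predecessor."""
    body = {
        "prev": prev_hash,
        "model_version": decision["model_version"],
        "classification": decision["classification"],
        "timestamp": decision["timestamp"],
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "receipt_hash": digest}

def verify_chain(receipts: list) -> bool:
    """Recompute every hash; any edit to an earlier receipt fails all later links."""
    prev = "genesis"
    for r in receipts:
        body = {k: r[k] for k in ("prev", "model_version", "classification", "timestamp")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or recomputed != r["receipt_hash"]:
            return False
        prev = r["receipt_hash"]
    return True
```

Because each receipt is minted at decision time, the chain answers "what did the system do, and when" without relying on anyone's memory or on logs assembled after the fact.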

Tracing the Classification

She pulls the case’s QiTraceID and sees instantly: which model and version was in force, what the Ethics Gate did at the boundary, the governance fingerprint before and after the classification, and the timestamps needed to evidence awareness and handling—without asking twelve people to remember.

She can see whether the overnight patch was active when this case was triaged. It was.

She can see what the previous version would have classified it as by comparing governance fingerprints across the version boundary.

The previous version flagged it as serious. The new version scored it moderate.

Detecting Signal Drift

Now she asks the question that matters for a safety function trying to protect patients, not just process cases: did the update change behaviour in a way that could hide a signal? Because pharmacovigilance isn’t just case handling. It’s signal detection.

And the most dangerous thing an AI triage system can do isn’t misclassifying a single case—it’s quietly reshuffling the severity distribution after an update so that a safety signal that should have surfaced disappears into the noise of the moderate queue.

XAI-Lite can correlate decisions under the QiTraceID spine and surface drift across the version change—where severity classifications shifted, by how much, and for which patient profiles—without adding latency to the operational workflow.
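The underlying comparison is straightforward to picture: take the severity mix of decisions on each side of the version boundary and flag classes whose share moved. A minimal sketch of that idea (illustrative only, not PARCIS's actual drift analytics):

```python
from collections import Counter

# Illustrative drift check: compare the severity mix before and after a
# model-version boundary. Shares that move more than a tolerance suggest
# a reshuffle worth human review.
def severity_drift(before: list, after: list) -> dict:
    """Return per-class change in share of the queue across the boundary."""
    classes = set(before) | set(after)
    b, a = Counter(before), Counter(after)
    return {
        c: round(a[c] / len(after) - b[c] / len(before), 3)
        for c in sorted(classes)
    }

before = ["serious"] * 30 + ["moderate"] * 50 + ["mild"] * 20
after = ["serious"] * 18 + ["moderate"] * 62 + ["mild"] * 20
shift = severity_drift(before, after)
# "serious" fell by 12 points while "moderate" rose by 12: exactly the
# kind of silent reshuffle that buries a signal in the moderate queue.
print(shift)  # {'mild': 0.0, 'moderate': 0.12, 'serious': -0.12}
```

In a real deployment the comparison would also slice by patient profile, since a shift concentrated in one cohort can vanish in the aggregate numbers.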

Evidence Architecture by Design

Leila escalates the original case to a human reviewer immediately. Then she checks the evidence posture—and this is where the architecture decision the organisation made six months ago pays for itself.

Not every AI surface in the business runs at the same evidence depth. The tier is a policy decision, set prospectively per decision surface, and it determines what’s captured at the moment the decision is made. You can’t retroactively conjure replay data for decisions that were only captured as receipts.

If the payload wasn’t captured at the time, it doesn’t exist later. So the organisation made the call early: low-risk internal analytics run Tier 0—governance-minimal receipts, no raw patient data retained.

But the safety-critical triage pathway—the one that classifies severity and determines whether a case hits the expedited queue or waits—runs Tier 1 by default: the encrypted payload vault sufficient for documentary replay, with strong separation from the governance store.

Because when a regulator, an inspector, or a litigation team asks to see exactly what the AI did to a specific case three years ago, “we only kept the receipt” is not an answer a patient safety function can live with.

And if an incident escalates, Tier 2 adds time-bounded forensic capture for a defensible incident timeline—richer artefacts under an explicit incident basis, scoped and time-limited. No permanent over-collection. No raw PII persisted beyond what the tier requires.

In pharmacovigilance, where case narratives contain some of the most sensitive personal health data imaginable, that discipline matters.
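The tiering logic described above amounts to a small, prospectively set policy table: what gets captured is a function of the decision surface, with forensic depth only reachable as a scoped escalation. A sketch under assumed names (the tier numbers follow the story; the surface names and fields are hypothetical, not a real PARCIS schema):

```python
# Hypothetical tier policy, set prospectively per decision surface.
# Tier numbers follow the story; field and surface names are illustrative.
TIER_CAPTURE = {
    0: {"receipt": True, "payload_vault": False, "forensics": False},
    1: {"receipt": True, "payload_vault": True, "forensics": False},
    2: {"receipt": True, "payload_vault": True, "forensics": True},
}

SURFACE_POLICY = {
    "internal_analytics": 0,  # governance-minimal receipts, no raw patient data
    "pv_triage": 1,           # documentary replay by default
}

def capture_for(surface: str, incident_window: bool = False) -> dict:
    """Tier 2 is only reachable as a scoped, time-bounded incident escalation."""
    tier = SURFACE_POLICY[surface]
    if incident_window and surface == "pv_triage":
        tier = 2
    return TIER_CAPTURE[tier]

print(capture_for("pv_triage"))                        # payload vault on, forensics off
print(capture_for("pv_triage", incident_window=True))  # forensics on, time-bounded
```

The design point is the one the story makes: the tier must be decided before the decision happens, because capture that didn't occur at decision time cannot be conjured later.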

The Evidence Pack

She exports the evidence pack: signed, immutable bundles per QiTraceID, stored with WORM retention, anchored back into the QiLedger so a third party—an inspector, an auditor, a litigation team reviewing trial data years later—can verify the pack independently.

Inside: a case-level timeline tied to a stable ID; model and version lineage; policy and governance context at the moment of classification; gate behaviour; integrity hashes; and replay bounds.

The pack can be shaped to meet GVP inspection requirements, clinical trial audit needs, or EU AI Act technical documentation expectations for high-risk AI—without rebuilding the story from scratch each time.
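Independent verification of such a pack can be pictured as a digest manifest: each artefact's hash is recorded at export, so a third party can confirm nothing changed without trusting the exporter. This is a generic sketch, not the real PARCIS pack format, and the artefact names are hypothetical:

```python
import hashlib

# Illustrative third-party check: a pack ships artefacts plus a manifest
# of their SHA-256 digests; verification recomputes and compares them.
def manifest_for(artefacts: dict) -> dict:
    """Record a SHA-256 digest for every artefact in the pack."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in artefacts.items()}

def verify_pack(artefacts: dict, manifest: dict) -> bool:
    """Every artefact must be present, and none may differ from its digest."""
    return manifest_for(artefacts) == manifest

pack = {
    "timeline.json": b'{"qi_trace_id": "example", "events": []}',
    "policy_context.json": b'{"regime": "example"}',
}
manifest = manifest_for(pack)
assert verify_pack(pack, manifest)

# One changed byte in any artefact fails verification.
tampered = dict(pack, **{"timeline.json": pack["timeline.json"] + b" "})
assert not verify_pack(tampered, manifest)
```

In practice the manifest itself would be signed and anchored (the story's QiLedger role), so the verifier trusts a signature and a ledger entry rather than the party handing over the files.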

Correcting the Divergence

By the end of the day, Leila has done something that changes the emotional temperature of the entire function.

The original case has been reclassified by a human reviewer and expedited. The version change has been flagged through the deviation process with a full evidence trail.

The affected window has been audited against the QiTraceID spine, and three other cases have been re-reviewed.

And Leila can look an inspector, a safety committee, or a courtroom in the eye and say: “We can show what the AI-assisted triage did, when it did it, under which model version and policy regime, and how we detected and corrected the divergence. Here are the receipts. Here is the replay.”

Catching the Signal

Here’s what Leila knows: pharmacovigilance doesn’t fail because safety professionals aren’t vigilant. It fails because the systems they depend on change faster than the evidence architecture can track.

Every automatic patch, every model improvement, every “benign” update is a moment when the triage logic could silently shift—reclassifying cases, reshuffling queues, burying signals in the moderate pile.

And when someone asks years later why a signal wasn’t detected sooner, the answer can’t be “we didn’t know the model changed.”

Fix the evidence—make it decision-time, version-anchored, tamper-evident, and replayable—and you don’t just survive the inspection. You catch the signal. That’s not a compliance outcome. That’s a patient safety outcome.

Get in touch now for more information
