
“The risk isn’t the incident. The risk is whether you can prove what happened when the clock starts.”

For the General Counsel, the critical question behind every automated decision is simple: if this ends up in a dispute, can we prove what happened?

As AI-driven decisions increasingly shape customer outcomes, operational actions, and regulatory reporting, legal risk no longer sits only in the policy that governs those systems. It sits in the evidentiary gap between the organisation’s explanation of a decision and the proof that that explanation can withstand legal scrutiny.

PARCIS closes that gap. It preserves decision-level provenance — showing what occurred, when it occurred, and under which policies and controls — with tamper-evident integrity. Instead of reconstructing events during litigation or disclosure requests, legal teams can retrieve verifiable records of automated decisions, reducing investigation friction and strengthening defensibility when it matters most.

GC Empathy Quadrant

Says:

“We have a preservation demand. Do not delete anything relevant.”

“Show me the event set, not ‘the conversation we think it was’.”

“What model/version and policy regime were in force at that time?”

“We are scoping the matter, not panic-preserving the universe.”

“I need chain-of-custody I can explain to a judge.”

“We’ll disclose what’s required, and nothing we don’t have to.”

Thinks:

The exposure is not “AI was wrong”; it’s the inability to prove what happened with integrity.

If evidence depends on interviews and log archaeology, discovery becomes slow, costly, and contestable.

Silent updates + variable retention are legal landmines.

Tier 1-by-design on this endpoint is a deliberate legal-readiness choice: the preservation was done before the letter arrived.

Selective disclosure is the needle: prove provenance without unnecessary payload, corpus leakage, or PII overshare.

Feels:

Split focus (family moment vs adversarial clock starting).

Controlled urgency: fast action, zero drama, no unforced errors.

Cold jolt at “spoliation” and “pre-action”, because the process is now adversarial.

Relief when the incident becomes a bounded, indexed set (QiTraceIDs) rather than a memory exercise.

Confidence returning once chain-of-custody is provable end-to-end.

Does:

Establishes immediate legal hold and scopes the relevant window (endpoint + time range + criteria).

Uses XAI-Lite to enumerate the QiTraceID event set and pull governance-critical facts per record (timestamps, model/tool identifiers and versions, policy set/version, gate outcome, integrity anchors).

Does not “turn on” Tier 1: relies on the endpoint’s Tier 1 default (vault capture already running) and applies a matter-scoped hold to the relevant QiTraceIDs.

Locks evidence bundles under retention/immutability controls and prepares a disclosure-ready pack with replay bounds.

Produces two outputs: a matter-scoped AI artefact index and replayable proof capsules per QiTraceID for independent verification.

Briefs the board with verifiable statements (“event set scoped; evidence preserved; pack prepared”), not reassurance.

The Phone That Buzzed at the School Play – A General Counsel’s Story

Assumed deployment posture: Tenant Platform Fee: Tier 1 enabled. Prod PED (AI concierge / customer-facing endpoint): Tier 1 (Replay).

It’s Saturday. Rachel is doing the rare thing: sitting still. Her daughter is third from the left in the second row, wearing a crown made of tinfoil and taking the role extremely seriously. Rachel’s phone is face-down on her lap because she promised herself she’d be a parent for one hour. Just one.

It buzzes. She ignores it. It buzzes again. And again. She turns it over. Four messages. The subject line from outside counsel: “URGENT: pre-action letter + preservation demand.”

What happened is oddly left-field. The company runs a customer-facing AI concierge—it answers questions, helps people navigate a service, pulls from approved internal knowledge. Last night, at a high-profile industry event with cameras and press in the room, it served up a sentence that should never have left the system. A defamatory statement about an identifiable person. Someone screenshotted it. The screenshot is already travelling faster than the comms team can type.

The letter alleges reputational harm. It demands the full conversation history, model and version details, and governance records, and warns about spoliation. And because these things never arrive alone, a privacy lead pings Rachel separately: a data access request has also landed.

Thinking Like a Lawyer

Rachel watches the rest of the play. She claps in the right places. She takes the photo. Then she gets in the car and starts thinking like a lawyer.

This is the legal crisis nobody trains for: being asked to produce truth with timestamps while your engineering reality is probabilistic, distributed, and full of silent updates.

The danger isn’t that the company used AI. The danger is that the company can’t prove what the AI did, when it did it, and under which policy regime. That gap is where litigation gets expensive, discovery gets chaotic, and board confidence evaporates.

The Old Weekend

She’s lived the old version of this weekend. She messages engineering: “Preserve everything relevant. We need prompts, outputs, model version, guardrails, and logs.” Engineering’s honest answer: “We have some logs. Retention varies. We shipped a provider update on Thursday. We can’t promise we can reconstruct it precisely.”

In the old version, the next six weeks are a discovery nightmare. Interviews with engineers who weren’t there. Forensic consultants billing by the hour to reconstruct what a system did from fragments. Outside counsel running up a tab that dwarfs the claim. And at the end, a disclosure pack that Rachel knows opposing counsel will pick apart because the chain of custody has gaps she can’t close.

But this isn’t the old version of this weekend.

Evidence Written at Decision Time

Rachel opens PARCIS XAI-Lite from her laptop at the kitchen table while her daughter eats cereal and describes every scene of the play in forensic detail.

XAI-Lite wraps the AI stack at the decision boundary—models, tools, agents—without touching the model itself.

Enforcement lives on the synchronous path. Every governed decision already has a QiTraceID, a cryptographic receipt minted at decision time and backed by a tamper-evident audit spine.

The evidence was written when the decision was made, not reconstructed after the letter arrived.
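The mechanics of a decision-time receipt can be sketched in a few lines. This is a minimal illustration, not the PARCIS implementation: the function names, field names, and the "genesis" anchor are all assumptions, and a real audit spine would use signed receipts and an external ledger rather than a local list.

```python
import hashlib
import json

def mint_receipt(prev_hash: str, decision: dict) -> dict:
    """Seal a decision record at decision time by chaining its hash to the
    previous receipt, so later edits to any record become evident."""
    body = {"prev": prev_hash, "decision": decision}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "receipt_hash": digest}

def verify_chain(receipts: list) -> bool:
    """Recompute every hash in order; any altered record breaks the chain."""
    prev = "genesis"
    for r in receipts:
        body = {"prev": r["prev"], "decision": r["decision"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if r["prev"] != prev or digest != r["receipt_hash"]:
            return False
        prev = digest
    return True

# Mint two decision receipts, then simulate a silent after-the-fact edit.
r1 = mint_receipt("genesis", {"trace_id": "QT-001",
                              "model": "concierge-v4", "policy": "p-12"})
r2 = mint_receipt(r1["receipt_hash"], {"trace_id": "QT-002",
                                       "model": "concierge-v4", "policy": "p-12"})
assert verify_chain([r1, r2])
r1["decision"]["model"] = "concierge-v5"   # the "silent update" landmine
assert not verify_chain([r1, r2])
```

The point of the sketch is the ordering: the hash is computed when the decision is made, so integrity never depends on anyone's later reconstruction.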

Scoping the Event Set

Her first question to the system is the one lawyers secretly crave the answer to: “Show me the event set.” Not an anecdote. Not “the conversation we think it was.” The set.

She filters by endpoint and time window, and the system gives her a stable index: every QiTraceID for every relevant interaction.

For each one, she can see the governance-critical facts without rummaging through brittle logs—timestamps, model and tool identifiers and versions, policy set and version, Ethics Gate outcome, and integrity anchors.
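A scoping query like Rachel's can be pictured as a simple filter over the governance store. The record shapes and endpoint names below are illustrative assumptions, not the actual PARCIS query interface.

```python
# Illustrative receipts; in practice these would come from the governance store.
receipts = [
    {"trace_id": "QT-101", "endpoint": "concierge", "ts": "2025-03-14T19:02:11Z"},
    {"trace_id": "QT-102", "endpoint": "billing",   "ts": "2025-03-14T19:05:40Z"},
    {"trace_id": "QT-103", "endpoint": "concierge", "ts": "2025-03-14T21:47:03Z"},
]

def event_set(records, endpoint, start, end):
    """Return a stable, sorted index of trace IDs for one endpoint within
    a time window. ISO-8601 UTC timestamps compare correctly as strings."""
    return sorted(r["trace_id"] for r in records
                  if r["endpoint"] == endpoint and start <= r["ts"] <= end)

ids = event_set(receipts, "concierge",
                "2025-03-14T18:00:00Z", "2025-03-14T22:00:00Z")
# ids is the bounded event set — ["QT-101", "QT-103"] — not an anecdote
```

Because the result is a deterministic index rather than a memory exercise, the same query yields the same set for opposing counsel, the court, or a third-party verifier.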

Preservation Before the Demand

Now the second move: preservation. And this is where the architecture decision pays for itself.

The AI concierge is customer-facing, handles internal knowledge, and operates in high-profile settings—exactly the kind of system where, if something goes wrong, you need to show what it said, not just that it said something. So when the concierge was deployed, the firm made the call: Tier 1. The encrypted payload vault has been capturing from day one—documentary replay as a standing capability, with strong separation between the vault and the governance store.

Rachel doesn’t need to “enable” anything. The preservation was done before the preservation demand arrived. She applies a matter-scoped litigation hold on the relevant QiTraceIDs, and the evidence bundles—already sealed, already in object storage with versioning and WORM retention, already hash-anchored into QiLedger—are locked under legal hold.

That’s chain-of-custody she can explain to a judge without hesitation.
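The matter-scoped hold is the key control here: it blocks deletion for the scoped records only, while everything else keeps its normal retention. A toy sketch of that semantics, with class and method names invented for illustration:

```python
class EvidenceVault:
    """Sketch of matter-scoped legal holds over already-sealed bundles.
    A hold blocks deletion for the scoped trace IDs only; unscoped
    bundles keep their ordinary retention lifecycle."""

    def __init__(self, bundles):
        self._bundles = dict(bundles)   # trace_id -> sealed evidence bundle
        self._holds = {}                # trace_id -> matter reference

    def apply_hold(self, matter_ref, trace_ids):
        """Scope the hold to the matter, not the universe."""
        for tid in trace_ids:
            if tid in self._bundles:
                self._holds[tid] = matter_ref

    def delete(self, trace_id):
        """Normal retention deletion, refused while a hold is in place."""
        if trace_id in self._holds:
            raise PermissionError(
                f"{trace_id} is under legal hold ({self._holds[trace_id]})")
        self._bundles.pop(trace_id, None)

vault = EvidenceVault({"QT-101": b"sealed", "QT-102": b"sealed"})
vault.apply_hold("MATTER-2025-044", ["QT-101"])
vault.delete("QT-102")          # unscoped: normal retention still applies
try:
    vault.delete("QT-101")      # scoped: blocked by the matter hold
except PermissionError:
    pass
```

In a production deployment the same effect would come from object-storage legal-hold and WORM retention features rather than application code; the sketch only shows why "scoping the matter" and "panic-preserving the universe" are different operations.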

The Disclosure

Then the third move, the one that usually turns disputes into wars: disclosure. Opposing counsel wants everything. The privacy lead wants minimisation. Engineering wants to avoid leaking the internal corpus. Rachel needs to thread a needle—prove integrity and provenance without handing over unnecessary payload.

Because the Tier 1 vault already holds the documentary replay data, Rachel can choose what to disclose and at what depth. She generates an evidence pack that demonstrates what happened and under which governance regime, backed by QiTraceID, ledger anchors, and policy snapshots, without exposing model weights, training data, or raw PII.

Where disclosure thresholds require the underlying conversation, the vault allows controlled replay under policy—but only for the scoped QiTraceIDs, only under access controls, and only with the separation between payload and governance store intact.

She’s not deciding whether the evidence exists. She’s deciding how much of it to show.
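The needle-threading works because the payload hash can be disclosed without the payload. A minimal sketch of tiered disclosure depth, with hypothetical record and function names:

```python
import hashlib
import json

def seal(payload: str) -> str:
    """Hash of the raw conversation payload, anchored at capture time."""
    return hashlib.sha256(payload.encode()).hexdigest()

def disclosure_pack(record: dict, depth: str) -> dict:
    """'metadata' proves provenance via the payload hash without revealing
    the conversation; 'full' adds the payload itself for compelled replay."""
    pack = {"trace_id": record["trace_id"],
            "model_version": record["model_version"],
            "policy_version": record["policy_version"],
            "payload_hash": seal(record["payload"])}
    if depth == "full":
        pack["payload"] = record["payload"]
    return pack

record = {"trace_id": "QT-101", "model_version": "concierge-v4",
          "policy_version": "p-12", "payload": "transcript text"}

meta = disclosure_pack(record, "metadata")
assert "payload" not in meta                       # no PII overshare
# If the conversation is later compelled, it can be checked against
# the hash already disclosed — no opportunity to substitute a payload:
assert seal(record["payload"]) == meta["payload_hash"]
```

This is why selective disclosure is defensible rather than evasive: the metadata pack commits to the exact payload in advance, so widening disclosure later cannot change the story.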

Changing the Tone of the Dispute

By Sunday evening, Rachel has two things that change the tone of the dispute before it starts.

First, a matter-scoped AI artefact index—which models were in production during the window, what artefacts exist, where they reside, and how they’re preserved.

Second, replayable proof capsules per QiTraceID: header metadata, model and version lineage, policy and governance context, gate status, integrity hashes and ledger anchors, and replay bounds. A third party can verify these independently.

The evidence doesn’t depend on anyone’s memory of what happened.

The Board Brief

On Monday morning, Rachel briefs the board. She doesn’t say “we’re looking into it.” She says: “We’ve scoped the event set, preserved the evidence under matter hold with cryptographic chain-of-custody, and prepared a disclosure-ready pack that demonstrates what happened, when, and under which policy. Outside counsel has reviewed it.”

The CEO asks: “How long would this have taken before?” Rachel doesn’t need to exaggerate. “Weeks. And the evidence would have been weaker.”

The Litigation Budget Reality

Here’s what Rachel knows: legal risk from AI doesn’t come from the AI being wrong.

AI will be wrong sometimes. Legal risk comes from not being able to prove what happened with integrity when someone asks.

Every day that evidence depends on engineering interviews, log archaeology, and forensic consultants billing for reconstruction is a day the exposure grows. Fix the evidence architecture—make it decision-time, tamper-evident, matter-scopeable, and independently verifiable—and you don’t eliminate disputes.

You make them shorter, cheaper, and defensible. That’s not a technology story. That’s a litigation budget story.

Get in touch now for more information