Reference engagement
LLM Audit Trail for a Regulated Financial Workflow
A financial services firm implemented an immutable audit log for an AI-assisted analysis workflow, making every generated output fully reconstructable and attributable for compliance purposes.
Delivery pattern
This page describes a representative engagement of this shape — how the system is scoped, built, and handed over. Specific figures reflect typical outcomes of the pattern when delivered with the operational discipline described on the About page. Named customer engagements are shared under NDA on request.
Engagement shape
Typical outcomes
- ✓ Any AI-generated output reconstructable within 2 minutes from the audit log
- ✓ Compliance audit passed without findings on AI record-keeping
- ✓ Six-month audit log retained and queryable, meeting internal records policy
Stack
- Append-only Postgres audit log with row-level hash verification
- Git-backed versioned prompt registry
- Explicit model version pinning in LLM client config
- Internal FastAPI reconstruction endpoint
Typical timeline
3–4 weeks from kick-off to handover
Risks & guardrails
- Log storage growth at high request volumes — requires retention policy
- Prompt registry adoption: teams must not bypass versioning
- Hash-only input storage may limit forensic depth if inputs aren't retained separately
Challenge
A financial services firm had deployed an LLM-assisted workflow for summarising analyst research notes and flagging compliance-relevant language. The system was producing useful output, but the compliance team raised a concern: if a regulatory reviewer asked why a specific note had been summarised in a particular way, there was no mechanism to reconstruct the exact inputs, prompt, and model version used at the time of generation.
The firm's record-keeping obligations required that AI-generated outputs used in a regulated process be reproducible and attributable. That audit trail did not exist.
Approach
Structured audit log schema: Designed an append-only log table capturing: request_id, timestamp, user_id, input_payload_hash (SHA-256 of the raw input), prompt_id and prompt_version, model_id and model_version, output_hash, and a metadata JSON column for downstream context. The log is immutable — rows are inserted, never updated or deleted.
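A minimal sketch of the write path, assuming a Postgres table named llm_audit_log reached via psycopg2; the table name, column names, and connection handling are illustrative, not the production schema:

```python
# Illustrative write path for the append-only audit log.
# Assumed: a Postgres table llm_audit_log with the columns below,
# and UPDATE/DELETE revoked from the application role.
import hashlib
import json
import uuid
from datetime import datetime, timezone

import psycopg2


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def log_generation(conn, *, user_id: str, input_payload: str,
                   prompt_id: str, prompt_version: str,
                   model_id: str, model_version: str,
                   output: str, metadata: dict) -> str:
    """Insert one immutable audit row and return its request_id."""
    request_id = str(uuid.uuid4())
    # ts is stored as ISO-8601 text so hash recomputation is byte-stable
    ts = datetime.now(timezone.utc).isoformat()
    input_hash = sha256_hex(input_payload.encode("utf-8"))
    output_hash = sha256_hex(output.encode("utf-8"))

    # Row-level hash: a digest over the canonical field tuple, so any
    # post-insert tampering is detectable when the log is re-verified.
    row_hash = sha256_hex("|".join((
        request_id, ts, user_id, input_hash, prompt_id, prompt_version,
        model_id, model_version, output_hash,
    )).encode("utf-8"))

    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO llm_audit_log
              (request_id, ts, user_id, input_payload_hash,
               prompt_id, prompt_version, model_id, model_version,
               output_hash, metadata, row_hash)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
            """,
            (request_id, ts, user_id, input_hash, prompt_id,
             prompt_version, model_id, model_version, output_hash,
             json.dumps(metadata), row_hash),
        )
    conn.commit()
    return request_id
```

In Postgres, the insert-only guarantee is typically enforced at the database level as well, by revoking UPDATE and DELETE on the table from the application role.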
Prompt registry: All prompt templates moved to a versioned registry. Each template is identified by a prompt_id and version tag. Deploys increment the version; the previous version remains queryable. No production prompt change is possible without creating a new version entry.
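A sketch of the version-pinned lookup, assuming prompts live in a Git-tracked directory of YAML files, one file per prompt_id with one entry per version; the layout and field names are assumptions for illustration:

```python
# Illustrative lookup against a Git-backed YAML prompt registry.
# Assumed file layout: prompts/<prompt_id>.yaml containing
#   versions:
#     - version: v1
#       template: "..."
#     - version: v2
#       template: "..."
from pathlib import Path

import yaml

PROMPT_DIR = Path("prompts")  # checked into Git; deploys append versions


def load_prompt(prompt_id: str, version: str) -> str:
    """Return the exact template text for (prompt_id, version)."""
    doc = yaml.safe_load((PROMPT_DIR / f"{prompt_id}.yaml").read_text())
    templates = {v["version"]: v["template"] for v in doc["versions"]}
    if version not in templates:
        # Fail loudly rather than falling back to an unversioned template.
        raise KeyError(f"{prompt_id}@{version} is not registered")
    return templates[version]
```

Because a deploy appends a new versions entry instead of editing an existing one, every historical (prompt_id, version) pair recorded in the audit log stays resolvable.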
Model version pinning: The LLM call configuration was updated to specify the exact model version string rather than an alias, and the version is logged at request time. Rolling out a new model version requires an explicit configuration change, which surfaces in the audit log as a new model_version value.
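A sketch of the startup check; the alias denylist and version format are assumptions for illustration, but the intent matches the description above: an alias fails fast before any request is served.

```python
# Illustrative startup validation for a pinned model version.
# The alias set and version format are assumptions; the point is that
# the process refuses to start unless an exact version is configured.
import re

PINNED_MODEL_VERSION = "2024-01-15"  # exact version string, never an alias

FORBIDDEN_ALIASES = {"latest", "stable", "default"}


def validate_model_pin(model_version: str) -> None:
    if model_version.lower() in FORBIDDEN_ALIASES:
        raise RuntimeError(
            f"model version {model_version!r} is an alias; pin an exact "
            "version string so logged requests stay reproducible"
        )
    if not re.fullmatch(r"[\w.\-]+", model_version):
        raise RuntimeError(f"malformed model version: {model_version!r}")


validate_model_pin(PINNED_MODEL_VERSION)  # runs once at process startup
```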
Reconstruction tooling: A lightweight internal tool lets the compliance team retrieve, for any logged request_id, the input payload hash (and the payload itself where it is retained separately), the exact prompt template version, and the pinned model version, enabling full reproduction of the generation context. The tool was demonstrated in a tabletop compliance exercise before sign-off.
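A sketch of the reconstruction endpoint, assuming the llm_audit_log table from the write-path sketch; the DSN and route are illustrative, and the compliance-only access control described in the stack below is elided:

```python
# Illustrative reconstruction endpoint. Given a request_id, return
# everything needed to reproduce the generation context; the caller
# resolves prompt_id/prompt_version through the prompt registry.
from fastapi import FastAPI, HTTPException
import psycopg2
import psycopg2.extras

app = FastAPI()
DSN = "dbname=audit"  # illustrative connection string


def fetch_audit_row(request_id: str) -> dict | None:
    with psycopg2.connect(DSN) as conn, conn.cursor(
        cursor_factory=psycopg2.extras.RealDictCursor
    ) as cur:
        cur.execute(
            "SELECT * FROM llm_audit_log WHERE request_id = %s",
            (request_id,),
        )
        return cur.fetchone()


@app.get("/reconstruct/{request_id}")
def reconstruct(request_id: str) -> dict:
    row = fetch_audit_row(request_id)
    if row is None:
        raise HTTPException(status_code=404, detail="unknown request_id")
    return {
        "request_id": row["request_id"],
        "timestamp": row["ts"],
        "prompt_id": row["prompt_id"],
        "prompt_version": row["prompt_version"],
        "model_id": row["model_id"],
        "model_version": row["model_version"],
        "input_payload_hash": row["input_payload_hash"],
        "output_hash": row["output_hash"],
    }
```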
Typical outcomes
Outcomes observed in engagements of this shape, not guarantees for every deployment:
- Any AI-generated output in the regulated workflow reconstructable within 2 minutes from the audit log
- Compliance audit passed without findings related to AI tooling record-keeping
- Six-month audit log retained and queryable, meeting the firm's internal records policy
Technical stack
- Audit log: append-only Postgres table with row-level hash verification (sketched below)
- Prompt registry: Git-backed versioned YAML store with migration tooling
- Model pinning: explicit version string in LLM client configuration, validated at startup
- Reconstruction tool: internal FastAPI endpoint with compliance-team access only
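The row-level hash verification mentioned above can be re-run on demand; a minimal sketch, assuming the llm_audit_log table and row_hash scheme from the write-path sketch:

```python
# Illustrative integrity pass over the audit log. Recomputes each
# row's digest from the stored fields and flags mismatches, which
# would indicate post-insert tampering. Assumes ts was stored as the
# ISO-8601 text written at insert time.
import hashlib

import psycopg2

FIELDS = ("request_id", "ts", "user_id", "input_payload_hash",
          "prompt_id", "prompt_version", "model_id", "model_version",
          "output_hash")


def verify_audit_log(dsn: str) -> list[str]:
    """Return the request_ids of rows whose row_hash no longer matches."""
    tampered = []
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            f"SELECT {', '.join(FIELDS)}, row_hash FROM llm_audit_log"
        )
        for row in cur:
            *values, stored_hash = row
            recomputed = hashlib.sha256(
                "|".join(str(v) for v in values).encode("utf-8")
            ).hexdigest()
            if recomputed != stored_hash:
                tampered.append(values[0])  # request_id
    return tampered
```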
Scope a similar engagement
Does this pattern fit your situation?
Tell me the system you're trying to integrate and the outcome you're measured on. You'll get a clear next step — a readiness audit, a prototype plan, or a delivery proposal.