G|AI Works

Use Case

Automated Financial Report Generation

An asset manager reduced monthly reporting time by around 70% by deploying a validated LLM pipeline that drafts variance commentary directly from ERP data.



At a glance

Outcomes

  • Around 70% reduction in analyst time per reporting cycle
  • No data errors logged in generated outputs across 6 months
  • Consistent terminology across all reporting periods

Stack

  • Python + SQLAlchemy (ERP extraction)
  • Claude with structured output (Anthropic)
  • Pydantic (input + output validation)
  • Astro + server actions (human review interface)

Typical timeline

3–4 weeks (kick-off to handover)

Risks & guardrails

  • Model drift between periods — pin model version and gate updates with a golden test set
  • Compliance sign-off delay — engage compliance team in sprint 1, not sprint 3
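The golden-test-set gate can be sketched as a simple acceptance check: a candidate model version is only allowed to replace the pinned one if it reproduces every required phrase on every golden case. The case shape and `generate` callable below are illustrative stand-ins, not the actual harness.

```python
def passes_golden_set(generate, cases):
    """Gate a model-version bump: the candidate generator must reproduce
    every required phrase on every golden case before it replaces the
    pinned model version."""
    return all(
        all(phrase in generate(case["input"]) for phrase in case["must_contain"])
        for case in cases
    )

# Toy golden case and a stand-in candidate generator (both hypothetical):
golden_cases = [
    {"input": {"period": "2024-05"}, "must_contain": ["outperformed", "benchmark"]},
]
candidate = lambda data: "In 2024-05 the fund outperformed its benchmark."
update_allowed = passes_golden_set(candidate, golden_cases)
```

In practice the golden set would also diff numeric figures and section structure, but a phrase-level gate is already enough to block silent drift between periods.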

Challenge

A mid-sized asset manager produced monthly investor reports manually. Analysts spent three to four days each month extracting figures from the ERP, writing variance commentary, and formatting output for distribution. The process was bottlenecked by a single analyst, creating delivery risk.

Approach

G|AI Works designed a three-stage pipeline:

  1. Data extraction layer: A scheduled job pulled portfolio performance metrics, benchmark comparisons, and attribution data into a validated JSON schema. Validation rules enforced completeness and value ranges before any LLM call was made.

  2. Narrative generation: A versioned prompt template — reviewed and signed off by the client's compliance team — injected the validated data into a structured generation request. Output was constrained to specific narrative sections: performance summary, attribution commentary, and risk indicators.

  3. Human review interface: A lightweight web interface presented draft reports side-by-side with the source data, allowing the reviewing analyst to approve, edit, or reject individual sections before final assembly.
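The validation gate in stage 1 can be sketched with Pydantic; field names and ranges here are illustrative, not the client's actual schema:

```python
from pydantic import BaseModel, Field

class PortfolioMetrics(BaseModel):
    """Validated ERP extract handed to the generation stage (illustrative fields)."""
    period: str
    portfolio_return_pct: float = Field(ge=-100, le=100)  # value-range rule
    benchmark_return_pct: float = Field(ge=-100, le=100)

def validate_extract(raw: dict) -> PortfolioMetrics:
    """Enforce completeness and value ranges before any LLM call is made;
    raises pydantic.ValidationError on missing or out-of-range data."""
    return PortfolioMetrics(**raw)

metrics = validate_extract({
    "period": "2024-05",
    "portfolio_return_pct": 2.4,
    "benchmark_return_pct": 1.9,
})
```

Because validation raises before any generation request is built, malformed ERP data never reaches the model, which is what makes the downstream grounding guarantees tractable.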

Results

  • Around 70% reduction in analyst time per reporting cycle (from 3.5 days to under 1 day)
  • No data errors logged in generated reports across the first six months of operation (grounding validation catches mismatches before output)
  • Consistent terminology across all reporting periods, eliminating manual style review
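The grounding validation mentioned above can be approximated by checking that every numeric figure cited in a draft also appears in the validated source extract. This is a minimal sketch, not the production check:

```python
import re

def grounding_errors(commentary: str, source_values: set) -> list:
    """Return any decimal figure cited in the draft that is absent from
    the source data; an empty list means the draft is fully grounded."""
    cited = {float(m) for m in re.findall(r"-?\d+\.\d+", commentary)}
    return sorted(cited - source_values)

draft = "The portfolio returned 2.4% versus a benchmark of 1.9%."
errors = grounding_errors(draft, {2.4, 1.9})
```

A real implementation would also normalise percentages, basis points, and rounded figures, but even this naive check catches a commentary that cites a number the ERP extract never produced.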

Technical stack

  • Data extraction: Python + SQLAlchemy against existing ERP
  • LLM: claude-sonnet-4-6 with structured output
  • Validation: Pydantic schema enforcement on both input and output
  • Review interface: lightweight Astro + server actions

Ready to scope this?

Let's talk about your project.

Tell us what you're building. We'll respond with a clear next step: an audit, a prototype plan, or a delivery proposal.

Start a project →