Reference engagements
Representative delivery patterns.
These are the system shapes that come up repeatedly in applied AI work — written up as engagement patterns, not as individual customer stories. Figures reflect typical outcomes when the pattern is delivered end-to-end with the operational discipline described on the About page.
Specific, named case studies are kept under NDA and shared in scoping conversations.
// Featured cases · real engagements
All selected work →
Two specific engagements, disclosed at the level the context allows. Below them, delivery patterns describe shapes of work that recur — not named cases.
// Internal · full disclosure
The gruenig.com content pipeline
Multi-agent editorial system behind this website. Every referenced artefact is a file in the repository. No NDA.
Read the case →
// Client · redacted
Vertical B2B comparison portal
Operator-bias problem solved in scoring architecture, not copy. Names redacted; architecture and governance open.
Read the case →
Delivery patterns
10 patterns
Security & governance
2–3 weeks
AI Attack Surface & Threat Modeling
Identify weak points in AI-enabled systems and design defenses that hold up in production.
Cross-industry · Read pattern →
LLMOps & reliability
2–4 weeks
Evaluation Harness & Regression Gates
Keep quality stable: golden sets, automated evals, and release gates for prompt/model changes.
Cross-industry · Read pattern →
LLMOps & reliability
2–3 weeks
LLM Cost Tracking & Budget Policies
Control spend without killing quality: per-request cost tracking, routing, caching, and budget gates.
Cross-industry · Read pattern →
Security & governance
2–4 weeks
Prompt Injection Defense & Tool Authorization
Make agentic systems safe: strict tool boundaries, least privilege, and robust input handling.
Cross-industry · Read pattern →
Security & governance
2–3 weeks
Secrets & PII Leakage Prevention
Prevent sensitive data leaks through logging, prompts, retrieval, and tool outputs.
Cross-industry · Read pattern →
Security & governance
4 weeks
Prompt Injection Defense for a Customer-Facing AI Assistant
A SaaS company hardened a customer-facing LLM assistant against prompt injection attacks before public launch, adding layered input validation, output sandboxing, and pre-deployment red-teaming.
software · Read pattern →
Security & governance
3–4 weeks
Per-User Data Access Governance for an Internal LLM API
A professional services firm built a permission-aware LLM API that enforces document-level access controls, ensuring users can only retrieve and reason over data they are authorised to see.
professional-services · Read pattern →
Security & governance
3–4 weeks
LLM Audit Trail for a Regulated Financial Workflow
A financial services firm implemented an immutable audit log for an AI-assisted analysis workflow, making every generated output fully reconstructable and attributable for compliance purposes.
financial-services · Read pattern →
Integration & workflow
3–4 weeks
Automated Financial Report Generation
An asset manager reduced monthly reporting time by around 70% by deploying a validated LLM pipeline that drafts variance commentary directly from ERP data.
financial-services · Read pattern →
Integration & workflow
4 weeks
AI-Powered Customer Segmentation for Targeted Campaigns
A B2B software company cut campaign cost-per-qualified-lead by 40% after replacing manual segmentation with an ML-based clustering pipeline.
software · Read pattern →
// How to read these
Pattern, not promise
Each page describes how a system of this shape is scoped, built, and handed over — not a specific customer deployment.
Numbers anchor mechanism
Outcome ranges show what the pattern typically produces when the operational discipline around it is actually applied.
Real cases on request
Named engagements are shared in scoping calls under NDA.
Start a scoping conversation →