G|AI Works

Selected work · featured case

The gruenig.com content pipeline.

A real, inspectable example of what an owner-operated applied AI system looks like when it is built the way I build them for clients — versioned prompts, explicit quality gates, a staging directory, and a handover-ready operating contract.

// Engagement shape

Client
G|AI Works — internal (this site)
Role
Sole owner, operator, and engineer
Status
Live in production
Disclosure
Fully inspectable in the repository
Boundaries
No NDA; no customer data involved

Context

A studio that publishes claims must be able to operate like the systems it recommends.

G|AI Works publishes technical content across six applied-AI pillars in two languages. A site like that is a working test of the studio's own discipline: if the stack cannot be operated cleanly by one owner, the promise to clients is rhetorical.

The initial problem was concrete. Drafting with a single long prompt produced output that read fluently but drifted — occasional unsourced statistics, inconsistent SEO structure, security-relevant detail leaking into otherwise safe articles, and divergence between the EN and DE variants that should have carried identical meaning.

A content operation that cannot be trusted at the single-article level cannot be scaled to many articles. The fix had to come from the pipeline, not from post-hoc editing.

Intervention

Four decisions that shaped the system.

  • Replace ad-hoc drafting with a role-separated pipeline.

    Writing, fact-checking, SEO, security review, and editorial gating became distinct agents with versioned prompts and explicit handoffs. No single-call "write me an article" shortcut.

  • Make every draft pass four hard gates before publication.

    SEO, Security, Sources, Clarity. Each gate has concrete pass criteria recorded in the runbook. A single gate failure blocks publication.

  • Route drafts through a staging collection, never directly into the live site.

    An _incoming/ content directory holds every draft with draft: true. Only the Lead Editor can move files out. The sitemap and routing ignore _incoming/.

  • Keep EN and DE in aligned variants with identical meaning.

    EN is authored first. The DE variant is produced in the same batch, gated separately, and re-checked whenever EN is revised post-submission.
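The four decisions above can be sketched as a small orchestration loop. This is a minimal illustration, not the production code: the gate names follow the case, but the pass criteria shown here (heading check, credential pattern, source count) are stand-in assumptions for the real runbook checklists.

```typescript
// Minimal sketch of a role-separated pipeline with hard gates.
// Gate names follow the case; each gate's pass criterion here is
// an illustrative placeholder for the runbook's real checklist.

type Draft = { slug: string; body: string; version: number };
type GateResult = { gate: string; pass: boolean; notes: string };
type Gate = (draft: Draft) => GateResult;

// Each gate is a distinct reviewer with its own pass criteria.
const gates: Gate[] = [
  (d) => ({ gate: "SEO",      pass: d.body.includes("<h1"),                          notes: "structure check" }),
  (d) => ({ gate: "Security", pass: !/BEGIN RSA PRIVATE KEY/.test(d.body),           notes: "no real credentials" }),
  (d) => ({ gate: "Sources",  pass: (d.body.match(/https?:\/\//g) ?? []).length >= 3, notes: "minimum source count" }),
  (d) => ({ gate: "Clarity",  pass: d.body.length > 0,                               notes: "editorial read" }),
];

// A single gate failure blocks publication; the draft returns for revision.
function review(draft: Draft): { published: boolean; failures: GateResult[] } {
  const results = gates.map((g) => g(draft));
  const failures = results.filter((r) => !r.pass);
  return { published: failures.length === 0, failures };
}
```

The design point the sketch carries: gates are data, not prose. Adding a fifth gate or tightening a criterion is a one-line change with repo history, which is what makes the operating contract auditable.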

What was built

Six concrete artefacts, each verifiable in the repo.

Every item names the location in the repository where the artefact lives. This is deliberate: the value of a case study is proportional to how much of it the reader can check.

// Artefacts

  • 01

    Nine-agent roster

    prompts/agents/ · 9 files

    Trend Scout, Tech Writer, Marketing Strategist, Finance Specialist, Engineering Specialist, SEO Architect, Fact & Source Checker, Security Reviewer, Lead Editor — each as a standalone prompt with a model assignment (Sonnet for specialists, Opus for the editor).

  • 02

    Four quality gates

    docs/content-system/runbook.md · §4

    Gate 1 SEO, Gate 2 Security, Gate 3 Sources, Gate 4 Clarity. Each gate names an owner, a checklist, and a failure action. Target first-pass rates are tracked.

  • 03

    Staging directory + routing rules

    app/src/content/_incoming/

    All drafts land here with draft: true. On PASS, Lead Editor moves the file to blog/news/guides/case-studies and sets draft: false. On FAIL, revision returns as -v2.

  • 04

    Bilingual content collections

    app/src/content/*.md and *.de.md

Pillars (6), use-cases, posts — every entry in EN and DE. Astro 5's ID normalization strips dots from filenames; the templates derive slugs accordingly.

  • 05

    Publishing substrate

    Astro 5 · Tailwind · static output

    Static builds with hreflang alternates, canonical URLs, JSON-LD for Organization and WebSite, and sitemap generation that skips staging content.

  • 06

    Editorial documentation

    docs/content-system/ · 4 files

    content-cadence.md, content-schema.md, editorial-guidelines.md, runbook.md. The runbook is the operating contract the agents are held to.
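The staging and slug rules from artefacts 03 and 04 can be expressed as two small predicates. This is plain TypeScript for illustration — the `Entry` shape and helper names are assumptions; in the repo these rules are enforced through the Astro content collection filter and the sitemap configuration.

```typescript
// Illustrative encoding of the staging and bilingual slug rules.

type Entry = { path: string; draft: boolean };

// Publishable = moved out of _incoming/ AND draft flag flipped to false.
function isPublishable(e: Entry): boolean {
  return !e.path.startsWith("_incoming/") && !e.draft;
}

// Sitemap and routing only ever see publishable entries.
function sitemapPaths(entries: Entry[]): string[] {
  return entries.filter(isPublishable).map((e) => e.path);
}

// Derive slug and language from a filename pair like post.md / post.de.md,
// mirroring how templates compensate for Astro 5's dot-stripping IDs.
function slugAndLang(file: string): { slug: string; lang: "en" | "de" } {
  const de = file.match(/^(.+)\.de\.md$/);
  return de
    ? { slug: de[1], lang: "de" }
    : { slug: file.replace(/\.md$/, ""), lang: "en" };
}
```

Keeping both rules as pure functions means the "drafts never leak" property is testable at build time rather than being a convention the operator has to remember.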

Delivery & safeguards

How the system is operated, not just built.

  • Versioned prompts under review.

    Every agent prompt lives as a file in prompts/agents/ and changes with the repo history. The assigned model is named in the filename suffix (_opus / _sonnet).

  • Explicit revision loop.

    Drafts that fail a gate come back as -v2, -v3 in _incoming/. A ceiling of three revision cycles is defined before escalation to human review.

  • Safe-default security posture.

    The Security gate blocks step-by-step exploit reproduction, weaponisable PoCs, real credentials, and undisclosed vulnerabilities. Placeholder credentials and hardened defaults are the rule.

  • Source policy with measurable integrity.

    Trend and analysis pieces require 3–6 sources, Tier 1 or Tier 2 credibility, within 24 months, in a fixed citation format. Fabricated URLs fail the gate outright.
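The source policy is the most mechanically checkable of these safeguards. A sketch of how it could be encoded as a gate check — the `Source` shape, the tier labels, and the month arithmetic are assumptions; the runbook remains the authoritative definition.

```typescript
// Illustrative gate check for the source policy:
// 3–6 sources, each Tier 1 or 2, each no older than 24 months.

type Tier = 1 | 2 | 3;
type Source = { url: string; tier: Tier; published: Date };

// Whole-month difference between two dates (day-of-month ignored).
function monthsBetween(a: Date, b: Date): number {
  return (b.getFullYear() - a.getFullYear()) * 12 + (b.getMonth() - a.getMonth());
}

function sourcesPass(sources: Source[], now: Date): boolean {
  if (sources.length < 3 || sources.length > 6) return false;
  return sources.every(
    (s) => s.tier <= 2 && monthsBetween(s.published, now) <= 24
  );
}
```

URL fabrication, by contrast, cannot be caught by shape alone — that part of the gate needs an actual fetch or a human check, which is why the policy pairs the measurable criteria with a named gate owner.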

Outcome

Reliability as the product, not throughput.

The case does not claim a traffic lift or a cost saving. It claims an operational property: the content system is legible to one owner and behaves predictably under revision.

Boundaries

What this case is not.

Stating the limits is part of the case. A reader should know exactly what inferences this example supports and where the material ends.

  • No customer data, client systems, or third-party infrastructure integrations are involved in this case.
  • Traffic, engagement, and conversion numbers are not disclosed on this page — the case is about architecture and delivery discipline, not marketing performance.
  • Internal cost and token-spend figures are not disclosed. They inform the budget for client engagements; they are not used as claim material here.
  • This is one worked example. It is not a substitute for a named customer case — those exist under NDA and are available on request.

Stack & artefacts

The concrete surface.

// Publishing

  • Astro 5
  • Tailwind CSS
  • Static output
  • JSON-LD
  • Hreflang

// Content system

  • 9 versioned agent prompts
  • 4 quality gates
  • _incoming/ staging
  • Bilingual collections

// Models

  • Claude Opus (editor)
  • Claude Sonnet (specialists)

// Artefacts

  • docs/content-system/runbook.md
  • prompts/agents/
  • app/src/content/_incoming/README.md

Next step

A system like this, applied to your content, data, or workflow.

The architectural pattern above — role separation, versioned prompts, explicit quality gates, staged publication — is the same one I apply to client engagements. Different subject, same operational discipline.