The Constraint That Designed My AI Content System

I started with a time constraint, not a spec. Here is the product thinking that shaped my AI content workflow — and what it actually delivers.

An AI content workflow works when it’s designed as a product problem — with explicit constraints, defined governance, and a quality standard that’s reproducible without constant intervention. Starting with a one-hour-per-week time budget forced every subsequent design decision, producing an operation that runs consistently without adding coordination overhead.

One hour a week. That was the constraint I started with — not a specification document, not a discovery phase, not a prompting guide. A time budget and a clear goal: quality content published consistently, without it sounding like it was written by AI.

That constraint forced every subsequent decision.

What the problem actually was

douli.com is my personal blog. I run it alongside a full-time Product Owner role. The problem I needed to solve wasn’t “how do I use AI for content.” It was: how do I build a repeatable content operation that matches my quality bar, fits a constrained time budget, and doesn’t create new coordination work every time I touch it?

That’s a product problem. So I applied product thinking to it.

Concretely: Claude is connected to my WordPress installation via an MCP integration. Every post goes through a defined brief template, follows a documented tone guide, and is published as a draft for my review. I brief, Claude researches and writes, I review and publish. The full cycle runs in under an hour a week.
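The brief template itself isn't published anywhere, but the shape of it can be sketched. This is a hypothetical reconstruction, not the actual template — the field names and validation rules are invented to illustrate the "defined brief, draft-only output" pattern:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a post brief. Field names are illustrative,
# not the author's actual template.
@dataclass
class PostBrief:
    topic: str
    audience: str
    key_points: list[str]
    target_keyword: str = ""
    status_on_publish: str = "draft"  # governance: never auto-publish live

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the brief is complete."""
        problems = []
        if not self.topic.strip():
            problems.append("topic is required")
        if not self.key_points:
            problems.append("at least one key point is required")
        if self.status_on_publish != "draft":
            problems.append("posts must be created as drafts")
        return problems
```

The point of a structure like this is that the one-hour budget is spent on briefing and review, not on re-explaining context every cycle.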

What made this work wasn’t the integration. It was the thinking that preceded it.

The decisions before the first word

Before any content was written, I established a set of operating rules. Not in a formal document — through structured conversation, the same way I’d work through a product requirement with an engineer. Each decision was deliberate.

I defined what “good” meant specifically. Tone of voice rules with named prohibitions. A fixed list of credentials that can and cannot be referenced — real results only, never invented. Required elements for every post. This isn’t editorial preference; it’s a quality spec that makes consistency reproducible.
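A quality spec of this kind can be checked mechanically before human review even starts. The sketch below is illustrative only — the prohibited phrases and allowed credentials are invented stand-ins for whatever the actual tone guide names:

```python
# Illustrative sketch of a "quality spec" check: named prohibitions and an
# allow-list of credentials, verified mechanically before human review.
# Both sets below are invented examples, not the author's real rules.
PROHIBITED_PHRASES = {"game-changer", "in today's fast-paced world"}
ALLOWED_CREDENTIALS = {"20 years in product", "CRO background"}

def check_draft(text: str, claimed_credentials: list[str]) -> list[str]:
    """Return rule violations found in a draft."""
    violations = []
    lowered = text.lower()
    for phrase in PROHIBITED_PHRASES:
        if phrase in lowered:
            violations.append(f"prohibited phrase: {phrase!r}")
    for cred in claimed_credentials:
        if cred not in ALLOWED_CREDENTIALS:
            violations.append(f"unapproved credential: {cred!r}")
    return violations
```

Encoding the prohibitions as data rather than prose is what makes the consistency reproducible: the same check runs on every draft without relying on memory.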

I used the time constraint as a scoping tool. One hour a week isn’t a preference — it’s a forcing function. It meant the workflow had to be sustainable from day one. Any process requiring more time than that was, by definition, the wrong process.

Constraints don’t limit good design; they drive it.

I built governance before anything went live. Draft-only publishing. No modifications to already-published content. No fabricated results or invented experience. These aren’t safeguards I added after a problem — they’re the kind of upstream risk decisions that prevent the problem from occurring.
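"Structurally impossible rather than dependent on vigilance" can be made concrete with a thin guard layer in front of whatever call actually writes to WordPress. This is a sketch under assumptions — the function names are hypothetical and the WordPress call itself is out of scope:

```python
# Hypothetical enforcement layer for the governance rules above:
# draft-only publishing and no modification of already-published content.
PUBLISHED = "publish"

def guard_create(payload: dict) -> dict:
    """Force every new post into draft status, regardless of what was requested."""
    safe = dict(payload)
    safe["status"] = "draft"
    return safe

def guard_update(existing_status: str, payload: dict) -> dict:
    """Refuse any change to content that is already live."""
    if existing_status == PUBLISHED:
        raise PermissionError("published posts are read-only in this workflow")
    return guard_create(payload)
```

With a guard like this, an off-brand post going live under the author's name requires a deliberate human action, not merely a lapse in attention.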

I designed the system for its own failure modes. The most likely failure: an AI publishing something inaccurate or off-brand under my name. The design response: make those outcomes structurally impossible rather than dependent on vigilance. That’s a different category of decision from “I’ll check it carefully.”

What changed — and what didn’t

The workflow delivers what it was designed to deliver. I now have a content operation with real governance — brand rules, defined workflow, SEO requirements, publishing controls — built in roughly the time it would have taken to write two posts from scratch.

The less obvious outcome: I have something I can improve. A system has edges you can push against. A chat window doesn’t.

What didn’t change: the quality bar. Every post still requires my judgment before it goes live. The system doesn’t replace that — it compresses the time required to reach the point where my judgment is useful.

The product thinking that transfers to any AI tool

The teams I see getting the most from AI tools aren’t the ones with the most prompting experience. They’re the ones who do the upstream work first: what problem are we actually solving, what does a good outcome look like, what are the failure modes, what should never happen without a human decision. That’s not AI expertise. It’s product thinking applied to a new class of tool.

The constraint didn’t limit what I built. It defined it.

TL;DR

  • A one-hour-per-week budget was the design constraint that shaped every decision — scope, workflow, governance, and failure mode prevention
  • What made it work wasn’t the AI integration — it was the product thinking applied before the first word was written: defining “good,” setting governance rules, and designing for failure modes
  • The quality bar didn’t change — every post still requires human judgment before publishing; the system compresses the time to reach that point
  • The teams getting the most from AI aren’t the best prompters — they’re the ones who do the upstream problem definition work first

Delphine Ragazzi is a Product Owner with 20 years of experience across digital analytics, CRO, and product delivery. She writes about product decisions, data, and AI at douli.com.
