$ man how-to/how-to-build-voice-system


How to Build a Voice System for AI Content

Teach your AI to write like you - not like everyone else


Why Voice Matters for AI Content

AI writes in a default voice. It is competent, polished, and completely generic. Every AI-generated blog post, email, and social media post sounds the same because the models optimize for the average of their training data.

A voice system overrides the default. Instead of generic AI output, you get content that sounds like a specific person with specific opinions, specific vocabulary, and specific patterns. Your audience cannot tell whether you wrote it or your AI wrote it because the AI has been trained on your voice.

This is not prompt engineering. A prompt says "write in a casual tone." A voice system provides 50 specific patterns, 29 anti-patterns, platform-specific rules, and a quality gate process that catches deviations. The difference is the gap between "be casual" and a comprehensive style guide.
PATTERN

Layer 1: Core Voice DNA

The foundation is a voice DNA file - a document that captures your writing patterns. Not abstract descriptions ("warm and friendly") but specific, concrete patterns:

- Sentence structure: "Use short sentences. 8-12 words average. Break complex ideas into multiple short statements. Never compound sentences with three or more clauses."
- Vocabulary preferences: "Say 'ship' not 'deliver.' Say 'build' not 'develop.' Say 'works' not 'functions.' Say 'breaks' not 'encounters errors.'"
- Opening patterns: "Start with a statement, not a question. No rhetorical questions. Lead with the most specific claim, not the broadest context."
- Punctuation habits: "Use dashes for asides - like this. No semicolons. No parenthetical statements. No em dashes in customer-facing copy - use spaced hyphens instead."
- Abstraction level: "Always include a specific example within 2 sentences of any claim. No abstract statements without concrete proof."

Build this by analyzing 20+ pieces of your own writing. Find the patterns. Write them as explicit rules the AI can follow.
FORMULA
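To make "explicit rules the AI can follow" concrete, here is a minimal sketch of what a few DNA rules look like when encoded as data with a mechanical checker. The rule names, thresholds, and function names are illustrative, not part of any fixed spec:

```python
import re

# Illustrative voice DNA rules encoded as data (values are examples only).
VOICE_DNA = {
    "avg_sentence_words": (8, 12),   # target average words per sentence
    "preferred_terms": {"deliver": "ship", "develop": "build",
                        "functions": "works"},
    "no_semicolons": True,
}

def avg_sentence_length(text: str) -> float:
    """Average words per sentence, splitting on . ! ?"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def check_dna(text: str) -> list[str]:
    """Return a list of DNA rule violations found in a draft."""
    violations = []
    lo, hi = VOICE_DNA["avg_sentence_words"]
    avg = avg_sentence_length(text)
    if not lo <= avg <= hi:
        violations.append(f"avg sentence length {avg:.1f} outside {lo}-{hi}")
    for banned, preferred in VOICE_DNA["preferred_terms"].items():
        if re.search(rf"\b{banned}\b", text, re.IGNORECASE):
            violations.append(f"use '{preferred}' not '{banned}'")
    if VOICE_DNA["no_semicolons"] and ";" in text:
        violations.append("no semicolons")
    return violations
```

A draft like "We will develop the feature; it functions well after we deliver it." trips the vocabulary and semicolon rules; a draft that follows the DNA returns an empty list.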

Layer 2: Anti-Slop Filters

The anti-slop layer catches AI patterns that do not match your voice. Every AI model has default behaviors that leak through if unchecked:

- Banned phrases: maintain a list of phrases you never use that AI loves to generate. "In today's fast-paced world." "It is worth noting." "At the end of the day." "Let us dive in." "Leverage." "Utilize." "Harness." Each banned phrase is a red flag that the AI has fallen back to its default voice.
- Structural anti-patterns: "Never start 3+ consecutive paragraphs with the same word." "Never use a list where a paragraph works better." "Never use transition words like Furthermore, Additionally, Moreover."
- Substance requirements: "Every claim needs at least 2 of these 5: specific example, technical detail, reasoning shown, concrete result, or lesson learned." This prevents the AI from making vague assertions.
- The 3-flag rule: if a draft triggers 3 or more anti-slop flags, rewrite it from scratch. Do not patch. Patching creates Frankenstein content that reads as half-human, half-AI.

Maintain the anti-slop list as a living document. Every time you spot a new AI-ism in your drafts, add it to the list. After 3 months, you have a comprehensive filter that catches almost everything.
PATTERN
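The anti-slop pass above is simple enough to automate. This is a sketch, assuming the phrase list from this section; the helper names and the exact flag format are placeholders for your own implementation:

```python
import re

# Banned phrases and transitions drawn from the list above (abridged).
BANNED_PHRASES = [
    "in today's fast-paced world",
    "it is worth noting",
    "at the end of the day",
    "let us dive in",
    "leverage",
    "utilize",
    "harness",
]
BANNED_TRANSITIONS = ["furthermore", "additionally", "moreover"]

def slop_flags(draft: str) -> list[str]:
    """Return one flag per anti-slop rule the draft triggers."""
    text = draft.lower()
    flags = [f"banned phrase: '{p}'" for p in BANNED_PHRASES if p in text]
    flags += [f"banned transition: '{t}'" for t in BANNED_TRANSITIONS
              if re.search(rf"\b{t}\b", text)]
    # Structural check: 3+ consecutive paragraphs opening with the same word.
    openers = [p.split()[0] for p in text.split("\n\n") if p.split()]
    for i in range(len(openers) - 2):
        if openers[i] == openers[i + 1] == openers[i + 2]:
            flags.append(f"3+ paragraphs start with '{openers[i]}'")
            break
    return flags

def verdict(draft: str) -> str:
    """The 3-flag rule: rewrite from scratch at 3+ flags. Never patch."""
    return "rewrite" if len(slop_flags(draft)) >= 3 else "pass"
```

Run it on every draft before human review; anything that comes back "rewrite" goes back to generation, not to editing.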

Layer 3: Platform Playbooks

Your voice adapts per platform. You write differently on LinkedIn than on Reddit. Capture those adaptations in platform-specific playbooks:

- LinkedIn playbook: professional tone, insight-first structure, business-outcomes framing, no jargon without explanation, CTA as a question.
- X playbook: punchy, opinionated, conviction over nuance, one idea per tweet, no hedging language.
- Reddit playbook: humble, specific, helpful, community-aware, no self-promotion framing, technical depth welcome.
- Blog playbook: comprehensive, structured with clear headings, SEO-aware keyword integration, personal narrative woven through technical content.
- Email playbook: conversational, direct, first-person, include the context the reader needs to act.

Each playbook is a one-page document that the AI reads before generating content for that platform. The core voice stays consistent. The presentation adapts.
PRO TIP
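One way to wire playbooks into generation is to keep each one-page playbook as a string keyed by platform and layer it under the core DNA when building the prompt. A minimal sketch, with abridged playbook text and an illustrative `build_prompt` helper:

```python
# Playbook contents abridged from the section above; keys are illustrative.
PLAYBOOKS = {
    "linkedin": "Professional tone. Insight first. Frame business outcomes. "
                "Explain jargon. End with a question as the CTA.",
    "x": "Punchy and opinionated. One idea per tweet. No hedging language.",
    "reddit": "Humble, specific, helpful. No self-promotion framing.",
    "blog": "Comprehensive. Clear headings. SEO-aware. Personal narrative.",
    "email": "Conversational, direct, first-person. Give context to act.",
}

def build_prompt(platform: str, core_dna: str, topic: str) -> str:
    """Layer the core voice first, then the platform adaptation."""
    playbook = PLAYBOOKS[platform]  # KeyError means an unsupported platform
    return f"{core_dna}\n\nPlatform rules:\n{playbook}\n\nWrite about: {topic}"
```

The core DNA always comes first, so the platform layer adapts presentation without replacing the voice.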

Putting It All Together

The full voice system loads in sequence:

1. Core voice DNA file loads first (foundation patterns)
2. Anti-slop filters load second (quality gates)
3. Platform playbook loads third (adaptation rules)
4. Content is generated
5. Quality gate process runs: slop check, specificity check, depth check, voice check

Store these files in your repo so every AI session has access. In Claude Code, reference them from your CLAUDE.md or load them as skills. In other tools, include them in the system prompt or project context.

The compound effect: after 3 months of using the voice system, your AI-generated content is indistinguishable from your human-written content. Readers cannot tell. Engagement rates are the same. But you are producing 5x the volume at 20% of the time investment.

The maintenance: review the voice system monthly. Your voice evolves. New patterns emerge. Old patterns fade. Update the DNA file to match your current voice, not the voice you had 6 months ago.

Start simple: a 50-line voice DNA file, a 30-item anti-slop list, and one platform playbook. Expand from there as you notice patterns the system misses.
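The load-then-gate sequence can be sketched in a few lines. This is an assumed shape, not a prescribed API: the three layers arrive as strings (however you store them), and the quality gate is a named set of pass/fail checks run over the generated draft:

```python
def assemble_context(dna: str, anti_slop: str, playbook: str) -> str:
    """Layers load in order: foundation, quality gates, adaptation rules."""
    return "\n\n".join([dna, anti_slop, playbook])

def quality_gate(draft: str, checks: dict) -> dict:
    """Run each named check over the draft; return {check_name: passed}."""
    return {name: check(draft) for name, check in checks.items()}

def publishable(draft: str, checks: dict) -> bool:
    """A draft ships only if every gate passes."""
    return all(quality_gate(draft, checks).values())
```

Usage follows the numbered sequence: assemble the context, generate, then gate. For example, a toy gate with a slop check and a specificity check (must contain a concrete number) passes "We cut build time by 40%." and rejects "We leverage synergies."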

related entries
- AI Content vs Human Content
- How to Repurpose One Piece of Content Across 5 Platforms
- MCP for the Content Stack