Content strategy is infrastructure, not a publishing calendar.
Most content problems aren’t about writing more. They’re about organizing what you already have so search can make sense of it, people can find what they need, and the whole thing doesn’t collapse as you grow.
I’ve spent eight years on this problem: first in enterprise SEO managing complex, multi-site organizations, then building four live reference sites from zero using AI as an execution layer for systems I already knew how to design. The sites are the proof. Not demos or mockups. Indexed, maintained, and built from primary source data. They exist because architecture came before execution, every time.
This page explains how I think, shows the work, and links to the full build documentation and case studies behind each site.
Content strategy touches all three layers of my system (Get Found, Get Understood, Get Chosen), but it lives primarily in Get Understood. If search can’t tell what your pages are about and how they connect, nothing downstream works.
The Distinction That Matters
Publishing-calendar thinking:
- “We need 10 blog posts this month”
- Keyword-stuffed pages that compete with each other
- Publishing calendar drives decisions
- More pages = more traffic (supposedly)

Architecture thinking:
- “What pages should exist, and why?”
- Each page has a clear job that no other page does
- Search behavior drives decisions
- Better structure = compounding visibility
I’ve seen sites with 2,000 pages outranked by sites with 40. The difference is almost never “more content.” It’s better architecture.
How I Think About Content Architecture
Every content decision I make comes back to three questions. If you can’t answer them clearly for a page, that page probably shouldn’t exist yet.
What is this page for?
Every page needs a single, clear purpose. Not a topic. A job. “This page helps someone who searched [X] decide [Y].” If you can’t finish that sentence, the page needs work.
How does it connect?
Isolated pages don’t build authority. Pages that link to and from related content create context that search systems use to understand your site’s expertise. The internal link graph is the architecture.
Does it earn its spot?
If two pages overlap, one of them is diluting the other. If a page exists because a keyword tool said so, it probably doesn’t add real information gain. Every URL should justify being there.
Intent Is the Foundation
The most common content strategy mistake I see is building pages around topics instead of around how people actually search. Different intent types need different page structures, different depth, and different next steps.
When intent and page structure don’t line up, even ready-to-buy visitors leave. And when intent transitions are misaligned across a user journey, the whole path to conversion breaks down.
This is more urgent than it sounds.
AI search systems are compressing top-of-funnel traffic right now. The “what is” and “how does” queries that used to drive awareness clicks are getting answered in AI Overviews and ChatGPT before anyone visits your site. Content strategy is the thing that determines whether your content survives that compression or gets summarized away. Pages with clear intent alignment, original data, and explicit structure get cited. Pages that repeat what everyone else says get compressed into nothing. The sites I work on, from multi-location dealer networks to data-driven reference tools, all face this same challenge: build content architecture that machines can parse and trust, or become invisible on the surfaces where people are increasingly searching.
Where You Can See This Working
These aren’t mockups or theoretical examples. Every site below is live, indexed, and built from zero. Go look at them.
CheckMyTap
This is probably the best example of content strategy as infrastructure I’ve built. 1,000+ city pages, all generated programmatically from public water quality data.
The architecture
Hub/spoke structure: /water-quality/ branches into states, states branch into cities. Every city page surfaces hardness, PFAS levels, lead data, and treatment recommendations specific to that city’s actual water.
But it goes way beyond city pages. There’s a problem layer (/problems/hard-water/, /problems/pfas/) that explains what’s wrong. A solutions layer (/solutions/salt-based-softeners/, /solutions/reverse-osmosis/) that explains the fix. A guides layer (/guides/softener-vs-filter/) that helps people compare. And a tools layer with a quiz, product comparison, and system finder. Each layer serves a different intent type.
The scale
The whole thing scales because the template was designed right. One structural fix improves every city page simultaneously. And it runs in five languages: the same architecture, localized for Spanish, Vietnamese, Chinese, Korean, and Tagalog speakers.
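The template-plus-data pattern can be sketched in a few lines. This is a minimal illustration, not the actual CheckMyTap pipeline: the city data, field names, and template are hypothetical stand-ins for the real public water quality dataset.

```python
# Minimal sketch of template-driven page generation. All data and field
# names here are illustrative, not real CheckMyTap values.
from string import Template

PAGE_TEMPLATE = Template(
    "<h1>Water Quality in $city, $state</h1>\n"
    "<p>Hardness: $hardness_gpg gpg. PFAS detected: $pfas.</p>"
)

CITIES = [
    {"city": "Phoenix", "state": "AZ", "hardness_gpg": 14, "pfas": "yes"},
    {"city": "Las Vegas", "state": "NV", "hardness_gpg": 16, "pfas": "no"},
]

def render_city_page(row: dict) -> str:
    # Every page shares one template: a structural fix here propagates
    # to every city page on the next build.
    return PAGE_TEMPLATE.substitute(row)

pages = {
    f"/water-quality/{r['state'].lower()}/{r['city'].lower().replace(' ', '-')}/":
    render_city_page(r)
    for r in CITIES
}
```

The point of the pattern: pages are a pure function of (template, data), so uniqueness comes from the data and consistency comes from the template.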
How the system shows up here
Get Found: City pages target geo-intent queries like “Phoenix water quality” and “is Las Vegas water safe.” The state/city hierarchy gives Google clean crawl paths to every page.
Get Understood: The layered content tells search this isn’t just water data. It’s data connected to treatment recommendations connected to product comparisons. That’s topical depth.
Get Chosen: The quiz and city-specific softener guides are decision-support content. Someone searching “do I need a water softener” is ready to act. They just need confidence.
UpOrbit
UpOrbit launched with 290+ articles across 8 topic hubs, and the architecture decision was the most important one I made.
The hub decision
Instead of dumping everything into a flat /blog/ directory, every article lives under a topic hub: /topics/focus/, /topics/time/, /topics/emotions/, and so on. Eight hubs, each representing a core challenge that someone with ADHD actually faces.
This matters because it teaches search what the site covers at the cluster level. Google doesn’t see 290 random articles. It sees comprehensive coverage of focus, time management, emotional regulation, work, relationships, health, understanding ADHD, and life skills. Each hub is a topical authority signal.
The entry point
The homepage has a “What’s your biggest ADHD struggle?” section that routes people to the right hub based on their self-identified problem. That’s intent matching at the entry-point level: not “browse our articles,” but “tell me what’s hard and I’ll take you there.”
How the system shows up here
Get Found: 290+ articles across 8 hubs give Google a massive surface of indexable, internally linked content. Each hub page acts as a crawl entry point.
Get Understood: The hub structure is the entire play. Search can classify what UpOrbit covers because the architecture declares it. No ambiguity.
Get Chosen: The struggle-based routing and free app integration are what make someone stay. The content doesn’t just inform, it connects to a tool that helps.
WireRef
WireRef shows how content strategy works in a purely reference-based niche. Electrical wire sizing, ampacity tables, appliance wiring, breaker sizing, GFCI requirements, state-by-state codes. Every page answers one specific question a homeowner or electrician would actually Google.
The page types
The architecture has 6 distinct page types: ampacity pages (/ampacity/6-awg-thhn-copper/), wire sizing (/wire-size/35-amp-circuit/), comparisons (/compare/8-awg-vs-6-awg/), appliance guides, state code pages (/states/maryland/), and interactive tools. Each type serves a different intent. Ampacity is informational reference. Comparisons are commercial intent. State pages add local context no other wire sizing site includes.
The 50-state layer
This is the part I think about most. Every state page includes the adopted NEC edition, licensing requirements, permit info, GFCI rules, and a compliance score. That’s 50+ pages of genuinely unique content because nobody else has compiled it this way.
How the system shows up here
Get Found: Each page type targets a specific query pattern. “6 AWG wire ampacity” hits the ampacity page. “8 vs 6 AWG” hits the comparison. Clean URLs, clean crawl paths.
Get Understood: The page type segmentation tells Google this site covers wire sizing, comparisons, appliance wiring, code requirements, and state rules. Topical depth in a niche most sites barely scratch.
Get Chosen: The calculators and inspection checklists turn a reference visit into a tool visit. Someone who came for a spec stays to run a voltage drop calculation.
GageRef
GageRef is a welding electrode and hydraulic fitting reference built for the job site. Every value is sourced directly from AWS A5.1, A5.18, and SAE J514. Primary source attribution on every page, because a wrong number on a reference site is not an inconvenience, it’s a weld failure.
The sourcing architecture
Most welding reference content online is secondary at best — values repeated from site to site without anyone checking against the original standard. The architecture decision was to go primary source first: AWS and SAE documents as ground truth, with AI used to cross-examine values rather than generate them. A wrong-within-range error (a value that passes sanity checks but doesn’t match the actual spec) is the hardest failure to catch. The only fix is having the primary source open.
The data structure
Welding specs and hydraulic fitting data live in structured JSON files. Templates read from those files. When a value gets updated against a newer standard revision, the correction propagates correctly across every page that references it. The alternative, values hardcoded into templates, turns a data correction into a surgical operation with breakage risk. GageRef was built with that separation from day one.
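The data/template separation described above can be sketched like this. The schema, electrode codes, and amperage values are hypothetical placeholders; real GageRef values come from the AWS standards themselves.

```python
# Sketch of the data/template separation. Schema and values are
# illustrative placeholders, not actual AWS A5.1 figures.
import json

# In production this would be a versioned JSON file per standard,
# not an inline string.
SPECS_JSON = """
{
  "E6011": {"amperage_range": [75, 125], "standard": "AWS A5.1"},
  "E6013": {"amperage_range": [90, 140], "standard": "AWS A5.1"}
}
"""
specs = json.loads(SPECS_JSON)

def render_electrode_row(code: str) -> str:
    # Templates only read from the data layer. Correcting a value in
    # the JSON file updates every page that references this electrode;
    # nothing is hardcoded into the template itself.
    s = specs[code]
    lo, hi = s["amperage_range"]
    return f"{code}: {lo}-{hi} A (source: {s['standard']})"
```

Because templates never hold values, a correction against a newer standard revision is a one-line data change rather than a search through markup.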
How the system shows up here
Get Found: Electrode comparison and fitting spec pages target specific query patterns welders and engineers actually use: “E6011 vs E6013 amperage,” “SAE J514 fitting dimensions,” “ER70S-6 welding parameters.”
Get Understood: The methodology page names the standards explicitly. Search and AI systems can evaluate the sourcing chain without inferring it.
Get Chosen: Data above the fold. Someone on a job site pulling up a spec from their phone doesn’t need to read an article first.
What the Work Looks Like
Content strategy isn’t one deliverable. It’s a set of decisions that shape everything else. Here’s what I typically work through, whether it’s one of my own sites or someone else’s.
Audit what exists
Map every page, what it targets, how it’s connected, and whether it’s earning its spot. Most sites have overlap they don’t realize. Some have pages cannibalizing each other. Some have gaps where the demand is obvious but no page exists. I use Search Console data heavily here.
Define the architecture
Decide what pages should exist, what each one is for, and how they connect. This is the most important step and the one most teams skip. Technical structure follows content architecture, not the other way around.
Map intent to pages
Every page gets mapped to a specific intent type. Informational pages get different structures than transactional ones. Commercial pages need comparison frameworks. Navigational pages need to get out of the way. Mixing these up is one of the fastest ways to lose rankings.
Design the internal link system
Links aren’t decoration. They’re the primary way search understands relationships between pages. Hub pages link to supporting content. Supporting content links back. Related topics cross-reference. The link graph is the architecture made visible.
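Two of the most common link-graph failures, orphan pages and spokes that never link back to their hub, are mechanical to detect once you treat the site as a graph. A toy sketch with hypothetical URLs:

```python
# Toy internal-link graph check (hypothetical URLs). Keys are pages,
# values are the internal links each page contains.
links = {
    "/topics/focus/": ["/topics/focus/body-doubling/", "/topics/focus/deep-work/"],
    "/topics/focus/body-doubling/": ["/topics/focus/"],
    "/topics/focus/deep-work/": [],       # spoke missing its link back to the hub
    "/topics/focus/orphan-draft/": [],    # nothing links here at all
}

all_pages = set(links)
linked_to = {dst for dsts in links.values() for dst in dsts}

# Orphans: pages no other page links to, which crawlers struggle to reach.
orphans = sorted(all_pages - linked_to)

# Spokes that don't reinforce the hierarchy by linking back to the hub.
hub = "/topics/focus/"
missing_backlink = [p for p in links
                    if p != hub and hub not in links[p]]
```

The same check scales to a real crawl export: the graph is just bigger, and the two sets it produces are the pages to fix first.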
Build templates, not pages
When content needs to scale, templates are the answer. CheckMyTap has 1,000+ city pages not because someone wrote 1,000 pages, but because the template was designed right and the data pipeline feeds it. One structural fix improves every page at once.
Measure and refine
Track performance by topic and intent, not just by page. Measurement should tell you which topic clusters are growing, which pages aren’t earning their spot, and where the next opportunity sits. Then iterate.
Applied Work
Beyond my own projects, I apply the same thinking to complex client environments. Details are anonymized but the patterns are real.
Enterprise platform content architecture
Defined content templates, page hierarchy, and internal linking patterns across five enterprise website launches for a heavy equipment manufacturer. Each operating company had different product lines and service areas, but the content architecture needed to be consistent enough for search engines to understand the relationships between sites while flexible enough for each business.
This site’s content architecture
Built the blog hub system (8 topic hubs, 50+ articles), portfolio pillar structure, and internal link graph for haydenschuster.com. Documented every content architecture decision publicly, including what was consolidated, what was separated, and why. The site structure article is the full walkthrough.
Detailed reasoning behind many of these decisions is documented in my SEO decision log, where I track what changed, why, and what happened next.
Common Questions
What’s the difference between content strategy and content marketing?
Content marketing is about producing and distributing content to attract an audience. Content strategy is the layer underneath: deciding what should exist, what each page is for, how pages relate to each other, and how the whole system scales. You can do content marketing without content strategy, but you’ll end up with pages that compete with each other and confuse search engines. Strategy is the architecture, marketing is what you build on top of it.
How does site architecture affect SEO performance?
Site architecture determines how search engines discover, crawl, and interpret your content. A clear hierarchy with well-connected hub pages tells Google what your site is about and where authority should concentrate. Poor architecture leads to orphan pages, keyword cannibalization, and crawl waste. The technical foundation and content architecture need to work together. I think of it as plumbing and blueprints: the blueprint decides what goes where, the plumbing makes sure everything flows.
What is search intent and why does it matter for content?
Search intent is the reason behind a query. Someone searching “what is a water softener” wants to learn. Someone searching “best water softener for hard water” wants to compare. Someone searching “buy Fleck 5600SXT” is ready to purchase. Each needs a fundamentally different page. When page structure doesn’t match intent, people bounce and search systems learn not to rank you. I break intent into four types: informational, transactional, commercial, and navigational.
How do you scale content without creating thin pages?
Templates and data. If every page in a set needs unique, locally relevant information, you build a template that pulls from a structured data source. CheckMyTap has 1,000+ city pages, but each one surfaces that city’s specific water hardness, PFAS levels, lead data, and treatment recommendations. The template ensures structural consistency while the data provides uniqueness. If you can’t point to what makes each page genuinely different from the next, you’re building thin pages and search will compress them.
What does content strategy look like for AI search and answer engines?
AI search systems extract and summarize content rather than just linking to it. Your content needs to be clear enough that a machine can pull the right answer from the right page. Clean headings, direct answers near the top, consistent terminology, and well-structured hierarchies all help. The same principles that make content scannable for humans make it interpretable for AI systems. I wrote about this in more detail: what AI search systems need to understand your content.
What is keyword cannibalization and how do you fix it?
Cannibalization happens when multiple pages on your site target the same or very similar keywords, so Google can’t decide which one to rank. The result is neither page ranks well. Fixing it usually means consolidating overlapping pages, clarifying the distinct purpose of each URL, and making sure your internal links point to the right canonical version. I’ve seen sites recover significant traffic just by merging three mediocre pages into one strong one and redirecting the old URLs.
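Cannibalization candidates fall out of query-level data almost for free: any query where two or more URLs collect clicks is worth a look. A minimal sketch with made-up rows shaped like a Search Console export (the queries and URLs here are hypothetical):

```python
# Flag possible cannibalization from Search-Console-style rows.
# Data is hypothetical; a real export comes from the GSC API or UI.
from collections import defaultdict

rows = [  # (query, page, clicks)
    ("water softener vs filter", "/guides/softener-vs-filter/", 120),
    ("water softener vs filter", "/blog/softener-or-filter/", 45),
    ("pfas in tap water", "/problems/pfas/", 300),
]

pages_by_query = defaultdict(dict)
for query, page, clicks in rows:
    pages_by_query[query][page] = clicks

# A query served by 2+ URLs is a candidate for consolidation.
cannibalized = {q: p for q, p in pages_by_query.items() if len(p) > 1}
```

This only surfaces candidates; whether to merge, differentiate, or redirect still depends on what each page is actually for.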
How do internal links affect content strategy?
Internal links are how you declare relationships between pages. They tell search engines which pages are important, which topics are related, and how authority should flow across your site. A hub page that links to 15 supporting articles builds topical depth. A supporting article that links back to its hub reinforces the hierarchy. Without intentional internal linking, you end up with orphan pages that search can’t find and authority that pools in random places. The link graph is the architecture made visible. I cover the technical side of this as well.
What is a hub and spoke content model?
Hub and spoke is an architecture pattern where one central page (the hub) covers a broad topic, and multiple supporting pages (the spokes) go deep on subtopics. The hub links to every spoke, and every spoke links back to the hub. This creates a tight cluster that search engines can recognize as comprehensive coverage of a subject. UpOrbit uses this pattern: 8 topic hubs with 290+ articles distributed across them. My own blog uses it too, with 8 hub pages connecting to individual articles.
How do you decide what pages a site actually needs?
I start with demand: what are people actually searching for in this space, and what do they need when they get there? Then I map that against what already exists on the site. The overlap shows me what’s working. The gaps show me what’s missing. And the duplicates show me what needs consolidation. Every page should have a clear job that no other page does. If two pages serve the same purpose, one needs to go. Search Console is one of the best tools for this because it shows you exactly what Google thinks each page is about.
Why do some pages rank well and then lose visibility over time?
Usually one of three things. First, the page was ranking for a query whose intent shifted, like Google deciding a query is transactional when your page is informational. Second, new competitors published better content that provides more information gain. Third, your own site created cannibalization by publishing newer pages that overlap with the original. The fix is different for each situation, but it always starts with understanding what changed. That’s why I keep an ongoing decision log that tracks this kind of thing over time.
Go Deeper
Posts where I dig into the ideas behind this work.