
Technical SEO Work

๐Ÿ•ธ๏ธ Technical SEO

Technical SEO is plumbing. If it’s broken, nothing else works.

You can write the best content in the world and build the smartest architecture on paper. But if Google can’t crawl it, can’t render it, can’t figure out which version is canonical, or if the page takes 8 seconds to load on mobile, none of that matters.

Technical SEO is the infrastructure layer. It’s not glamorous. Most of the time, when it’s working, nobody notices. But when it breaks, everything downstream breaks with it. This is the work that makes the other work possible.

Where this fits in my SEO system

Technical SEO is primarily the Get Found layer. If search engines can’t crawl, render, and index your pages correctly, the rest of the system has nothing to work with. But it touches all three: rendering affects understanding, performance affects whether people stay.

The Foundation Layer

Everything in technical SEO comes down to four things. Not a 47-point audit checklist. Four things.

Crawl

Can search engines find and access your pages? This covers robots.txt, sitemap architecture, internal link paths, crawl budget on large sites, and whether you’re accidentally blocking important content. On a 50-page site this is trivial. On a 1,000+ page site like CheckMyTap, crawl management becomes the entire game.
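
A quick way to sanity-check crawl access is to run your important URLs through your robots.txt rules before Googlebot does. A minimal sketch using Python’s standard library; the rules and paths below are hypothetical examples, not from a real site:

```python
# Verify that robots.txt rules allow Googlebot to reach key pages.
# The rules and paths here are illustrative, not from a real site.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: *
Disallow: /admin/
Disallow: /staging/

User-agent: Googlebot
Disallow: /admin/
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

important_paths = ["/water-quality/arizona/phoenix/", "/admin/dashboard"]
for path in important_paths:
    allowed = parser.can_fetch("Googlebot", path)
    print(f"{path}: {'allowed' if allowed else 'BLOCKED'}")
```

Running this against your real robots.txt and a list of money pages catches accidental blocks before they cost you weeks of missed crawls.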

Index

Does Google actually store your pages, and the right versions of them? Canonical handling, duplicate management, noindex directives, parameter handling, pagination. The gap between “crawled” and “indexed” is where a lot of sites lose pages without realizing it. Search Console is the primary diagnostic tool here.
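
One common culprit in the crawled-but-not-indexed gap is a stray noindex directive. A minimal sketch of scanning page HTML for robots meta directives, using only the standard library; the HTML snippet is a hypothetical example:

```python
# Scan page HTML for meta robots directives that keep a crawled
# page out of the index. The HTML snippet is a fabricated example.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in a.get("content", "").split(","))

html = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
p = RobotsMetaParser()
p.feed(html)
print("noindex" in p.directives)  # True: crawled, but will never be stored
```

Run over a full crawl export, this turns “why isn’t this page indexed?” from guesswork into a list.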

Render

Can search engines see what users see? JavaScript-heavy sites, client-side rendering, lazy-loaded content, dynamically injected elements. If your content only appears after JavaScript executes, Google might not see it at all, or might see a different version than your users do. This matters more every year as sites get more complex.

Perform

Is the experience fast enough that people stay and search engines trust it? Core Web Vitals, mobile responsiveness, server response times, image optimization, layout shift. Performance isn’t just a ranking factor. It’s a user experience factor that compounds. A slow site loses visitors before your content strategy ever gets a chance to work.
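
The Core Web Vitals pass/fail lines are public: LCP at or under 2.5 s, INP at or under 200 ms, CLS at or under 0.1, measured at the 75th percentile of real users. A minimal sketch of the check; the sample metric values are hypothetical:

```python
# Classify field metrics against Google's published "good" thresholds:
# LCP <= 2.5 s, INP <= 200 ms, CLS <= 0.1. Sample values are hypothetical.
THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}

def passes_core_web_vitals(metrics: dict) -> dict:
    """Return a pass/fail verdict per metric at the 75th percentile."""
    return {name: metrics[name] <= limit for name, limit in THRESHOLDS.items()}

verdict = passes_core_web_vitals({"lcp_s": 3.1, "inp_ms": 180, "cls": 0.05})
print(verdict)  # {'lcp_s': False, 'inp_ms': True, 'cls': True}
```

Feed it 75th-percentile CrUX numbers, not single lab runs, since real-user data is what Google evaluates.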

I’ve audited sites with thousands of pages where fewer than half were actually indexed. Sites where a staging environment was live and cannibalizing production. Sites where a single redirect chain added 3 seconds to every page load. The fix is almost never “publish more content.” It’s “fix the plumbing so the content you have can actually reach people.” This is also where AI search systems struggle: if your content isn’t crawlable and renderable by traditional search, it’s invisible to AI answer engines too.

When Technical Debt Compounds

Technical SEO problems don’t announce themselves. They accumulate quietly until something visible breaks. These are the patterns I see most often.

The crawl budget drain

Faceted navigation, infinite scroll pagination, URL parameters, and session IDs generating thousands of crawlable URLs that shouldn’t be indexed. Google spends its budget crawling garbage while your important pages go stale.

The canonicalization mess

HTTP vs HTTPS, www vs non-www, trailing slashes, case sensitivity, UTM parameters. When Google sees 4 versions of the same page and your canonical tags disagree with your internal links, it picks one on its own. Often the wrong one.
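
The fix is to pick one normalization policy and enforce it everywhere. A minimal sketch of such a normalizer, assuming an HTTPS, non-www, trailing-slash policy and a tracking-parameter blocklist (all illustrative choices, not a universal standard):

```python
# Collapse common duplicate-URL variants (scheme, www, tracking
# parameters, trailing slash) to one canonical form. Rules are illustrative.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PREFIXES = ("utm_", "gclid", "fbclid")

def canonicalize(url: str) -> str:
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    # Drop tracking parameters; keep the rest in a stable sorted order.
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query)
        if not k.lower().startswith(TRACKING_PREFIXES)))
    path = parts.path if parts.path.endswith("/") else parts.path + "/"
    return urlunsplit(("https", host, path, query, ""))

print(canonicalize("http://WWW.Example.com/blog?utm_source=x&page=2"))
# https://example.com/blog/?page=2
```

Whatever policy you choose, the canonical tags, internal links, sitemaps, and redirects all have to agree with it, or Google starts making its own choices.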

The redirect chain

Page A redirects to B, B redirects to C, C redirects to D. Every hop adds latency and dilutes link equity. After 3 or 4 years of site changes, migrations, and URL restructuring, you end up with chains 5 hops deep and nobody knows how they got there.
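
Cleaning this up means flattening the map so every old URL points directly at its final destination in a single hop. A minimal sketch under the assumption that redirects live in a simple old-to-new mapping; the URLs are hypothetical:

```python
# Flatten a redirect map so every old URL points straight at its
# final destination, and surface loops. The URLs are hypothetical.
def flatten_redirects(redirects: dict[str, str]) -> dict[str, str]:
    flat = {}
    for start in redirects:
        seen, current = {start}, start
        while current in redirects:
            current = redirects[current]
            if current in seen:          # A -> B -> A: a redirect loop
                raise ValueError(f"redirect loop involving {start}")
            seen.add(current)
        flat[start] = current
    return flat

chain = {"/a": "/b", "/b": "/c", "/c": "/d"}
print(flatten_redirects(chain))  # every hop collapses to '/d'
```

Run periodically, this keeps three years of migrations from quietly becoming five-hop chains.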

The render gap

Your site looks great in a browser but Google’s renderer sees something different. Content behind tabs, accordions, or JavaScript events might not get indexed. Critical internal links injected by JavaScript might not get followed. This gap also affects how search systems understand your page’s purpose, because they’re making decisions based on what they can actually see in the rendered DOM, not what you intended.

The migration that never finished

Old URLs returning 404s instead of redirecting. Redirect maps that covered 80% of pages but missed the 20% that had the most backlinks. Internal links still pointing to old URLs. Site migrations are where I see the most damage, because the problems don’t show up until weeks later in Search Console.

The performance spiral

One team adds a chat widget. Another adds analytics. Marketing adds a tracking pixel. Dev adds a font. Each one is “only 50kb.” After a year, you’re loading 2MB of JavaScript before the page even renders and nobody can point to a single decision that caused it.

Technical foundations matter more now, not less.

There’s a misconception that AI search makes technical SEO less important. The opposite is true. AI systems are pickier about what they extract. A page with render issues, broken canonicals, or JavaScript-dependent content doesn’t just rank poorly. It’s completely invisible to AI crawlers that need clean, rendered HTML on first request. The same search behavior shifts driving people toward AI-powered answers make the technical layer underneath even more critical. If the plumbing is broken, it doesn’t matter how good your content is or how well it’s structured for modern search extraction.

Where You Can See This Working

The technical decisions behind these sites are visible in the source code, URL structures, and crawl behavior. Nothing here is theoretical.

CheckMyTap

checkmytap.com

1,000+ programmatic city pages built from a data pipeline. The technical challenge isn’t building the pages. It’s making sure Google can efficiently crawl and correctly index all of them.

The data pipeline

Every city page is generated from public water quality records that get pulled, cleaned, validated, and merged into structured templates. The build process runs server-side, so each page exists as fully rendered HTML before any browser or bot touches it. No client-side data fetching, no JavaScript-dependent content. Googlebot sees exactly what users see on first request. That decision alone avoids an entire class of rendering and indexing problems.

Crawl management at scale

With 1,000+ city pages plus problem pages, solution pages, guide pages, and tool pages, crawl efficiency matters. Sitemaps are segmented by state so Google can process them in logical chunks rather than one massive file. Robots.txt blocks admin paths, utility URLs, and staging routes. Internal links create a hierarchical tree: hub pages link to state pages, state pages link to city pages. Every page is reachable within 3 clicks of the root. No page depends solely on the sitemap for discovery.
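
The state-segmentation step above is simple enough to sketch. A minimal illustration of grouping city URLs into per-state sitemap buckets, assuming the /water-quality/<state>/<city>/ path scheme described here; the city URLs are examples:

```python
# Segment a large URL set into per-state sitemap files, mirroring the
# state-segmented approach described above. URLs are examples.
from collections import defaultdict

def segment_by_state(urls: list[str]) -> dict[str, list[str]]:
    """Group /water-quality/<state>/<city>/ URLs into one sitemap per state."""
    buckets = defaultdict(list)
    for url in urls:
        state = url.split("/")[2]   # ['', 'water-quality', state, city, '']
        buckets[f"sitemap-{state}.xml"].append(url)
    return dict(buckets)

urls = ["/water-quality/arizona/phoenix/", "/water-quality/arizona/tucson/",
        "/water-quality/texas/austin/"]
print(segment_by_state(urls))
```

Each bucket becomes one sitemap file referenced from a sitemap index, which also makes per-state indexation rates visible in Search Console.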

Canonical discipline

Self-referencing canonicals on every page. No trailing slash variations. No parameter-based duplicates. The URL structure is clean and predictable: /water-quality/arizona/phoenix/. When you have 1,000+ pages, one canonical mistake doesn’t create one problem. It creates 1,000 problems. The template enforces consistency so individual pages can’t drift.

How the system shows up here

Get Found: Server-side rendering, segmented sitemaps, hierarchical internal links, consistent canonicals. Crawl efficiency by design, not by cleanup.

Get Understood: Structured data on every city page. Consistent template means consistent signals across the entire index footprint.

Get Chosen: Pages load fast because there’s no client JavaScript blocking render. The template is lightweight by default.

1,000+ page crawl management · Server-side rendering · Segmented sitemaps · Data pipeline to HTML · Template-enforced canonicals
Read the full case study

WireRef

wireref.com

A reference site with calculators, data tables, and 50-state code pages. The technical challenge is making interactive tools indexable and keeping dense specification data fast on mobile.

URL architecture as crawl signal

6 distinct URL patterns, each mapping to a query type: /ampacity/6-awg-thhn-copper/ for spec lookups, /wire-size/35-amp-circuit/ for sizing decisions, /compare/8-awg-vs-6-awg/ for head-to-head comparisons, /states/maryland/ for local code context. The URL itself declares what type of content Google will find before it even crawls the page. No query parameters, no dynamic routing. Clean, descriptive, permanent URLs.
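
Because each pattern maps one-to-one to a query type, the mapping can be expressed as a handful of rules. A minimal sketch using the path scheme described above; the pattern-matching code itself is illustrative, not the site’s implementation:

```python
# Map each URL pattern to the query type it serves, so routing and
# content type stay in lockstep. Patterns follow the described scheme.
import re

URL_PATTERNS = [
    (r"^/ampacity/[\w-]+/$", "spec lookup"),
    (r"^/wire-size/[\w-]+/$", "sizing decision"),
    (r"^/compare/[\w-]+-vs-[\w-]+/$", "comparison"),
    (r"^/states/[\w-]+/$", "local code context"),
]

def classify(url: str) -> str:
    for pattern, query_type in URL_PATTERNS:
        if re.match(pattern, url):
            return query_type
    return "unknown"

print(classify("/compare/8-awg-vs-6-awg/"))  # comparison
```

Anything that classifies as “unknown” is either a new content type or a URL that shouldn’t exist, which makes this a useful crawl-audit filter too.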

Progressive enhancement for calculators

The voltage drop calculator and panel load calculator both render meaningful content server-side before any JavaScript loads. Default inputs, a worked example, and the NEC reference values are all in the initial HTML. JavaScript adds interactivity on top. This means Google indexes a complete, useful page on first crawl. The calculator enhances the experience but doesn’t gate it. That’s the difference between a page that ranks and a blank shell that doesn’t.

Cross-type internal linking

Every ampacity page links to the relevant wire sizing page. Every comparison links to both specs being compared. State pages link to the most common wire types used in that state. This creates a dense internal link graph where related content reinforces itself. From a crawl perspective, it means Google can traverse the full topical depth of the site through links alone, not just sitemaps. From a search behavior perspective, it means someone who lands on an ampacity spec can naturally move to a sizing decision without going back to search.

How the system shows up here

Get Found: Descriptive URL patterns per page type. Dense cross-type internal linking. Progressive enhancement means every page is crawlable without JS execution.

Get Understood: Proper HTML table markup for specification data. Structured data for technical values. Each page type has a consistent heading hierarchy that AI systems can parse for direct answers.

Get Chosen: Calculators load fast. Tables reflow on mobile without horizontal scrolling. Zero layout shift because dimensions are defined before content renders.

6 URL pattern types · Progressive enhancement · Cross-type internal linking · Semantic table markup · Zero CLS on mobile
Read the full case study

GageRef

gageref.com

Industrial reference tool for welding electrodes, hydraulic fittings, and metal specifications. Small site, but the technical fundamentals matter because every page is a direct-answer page that search and AI systems pull from.

Data table optimization

Specification tables with dozens of rows across multiple columns. On desktop this is straightforward. On mobile, where most reference lookups happen on a job site, it breaks fast. The approach: semantic HTML tables (not div grids pretending to be tables), responsive overflow patterns that let users scroll the table without scrolling the page, and explicit width/height attributes on every cell to prevent layout shift. Google can parse the table structure and extract individual values for featured snippet answers.

Crawl simplicity by design

Fewer than 100 pages, but every one is a high-value reference target. No pagination, no faceted navigation, no parameter URLs. Every page is 1-2 clicks from root. The internal link graph is tight: electrode spec pages link to related fittings, fittings link to relevant gauges. Google can index the entire site in a single crawl session. The technical strategy here is restraint. When your content is genuinely useful reference material, the technical job is to add zero friction between the user’s query and the answer.
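
The “1-2 clicks from root” claim is checkable with a breadth-first search over the internal link graph. A minimal sketch; the miniature site graph below is hypothetical:

```python
# Measure click depth from the homepage with a breadth-first search
# over the internal link graph. The graph is a hypothetical mini-site.
from collections import deque

def click_depths(links: dict[str, list[str]], root: str = "/") -> dict[str, int]:
    depths = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

site = {"/": ["/electrodes/", "/fittings/"],
        "/electrodes/": ["/electrodes/e7018/"],
        "/fittings/": ["/fittings/jic-37/"]}
print(max(click_depths(site).values()))  # deepest page is 2 clicks from root
```

Pages missing from the result are orphans: reachable only via the sitemap, if at all.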

How the system shows up here

Get Found: Flat, friction-free site structure. Complete crawl and index in minimal crawl budget. No wasted requests.

Get Understood: Semantic HTML tables that both Google and AI answer engines can parse for structured extraction. Proper heading hierarchy on every page.

Get Chosen: Fast on mobile. Tables render without layout shift. Someone looking up an E7018 electrode spec on a welding shop floor gets the answer in under 2 seconds.

Semantic HTML tables · Mobile-first data layout · Zero layout shift · Complete index in minimal crawl
Read the full case study

What the Work Looks Like

Technical SEO isn’t a one-time audit. It’s ongoing infrastructure maintenance with periodic deep inspections. Here’s what I typically work through.

Crawl and index audit

Map what Google can actually find and what it’s choosing to index. Identify crawl traps, orphan pages, blocked resources, and the gap between submitted and indexed URLs. Search Console is the starting point, but log file analysis tells the full story on larger sites.
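
The log-file side of this analysis reduces to counting which URLs Googlebot actually requests. A minimal sketch for combined-format access logs; the log lines are fabricated examples, and a production version would also verify Googlebot IPs rather than trust the user-agent string:

```python
# Tally Googlebot requests per URL from a combined-format access log,
# to see where crawl budget actually goes. Log lines are fabricated.
from collections import Counter
import re

LOG_LINE = re.compile(r'"GET (?P<path>\S+) HTTP[^"]*" (?P<status>\d{3})')

def googlebot_hits(log_lines: list[str]) -> Counter:
    hits = Counter()
    for line in log_lines:
        if "Googlebot" not in line:   # naive filter; verify IPs in production
            continue
        m = LOG_LINE.search(line)
        if m:
            hits[m.group("path")] += 1
    return hits

log = [
    '66.249.66.1 - - [01/Jan/2025] "GET /water-quality/texas/austin/ HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Jan/2025] "GET /search?page=491 HTTP/1.1" 200 "-" "Googlebot/2.1"',
    '203.0.113.5 - - [01/Jan/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0"',
]
print(googlebot_hits(log).most_common())
```

When parameter junk like /search?page=491 dominates the tally while key pages barely appear, you’ve found the crawl budget drain.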

Canonical and duplicate resolution

Audit canonical tags, redirect chains, parameter handling, and alternate versions. Make sure Google is consolidating signals to the right URLs and not splitting authority across duplicates.

Render verification

Compare what users see versus what Googlebot sees. Check for content hidden behind JavaScript, critical links that aren’t in the initial HTML, and elements that only appear after interaction. This matters especially on React, Vue, and Angular sites.

Site speed and Core Web Vitals

Diagnose and fix performance issues: render-blocking resources, unoptimized images, layout shifts, slow server response. The goal is passing Core Web Vitals on real user data (CrUX), not just lab scores.

Migration planning and execution

URL mapping, redirect implementation, pre-launch validation, post-launch monitoring. Site migrations are where technical SEO earns its keep. One missed redirect map can undo years of authority.

Structured data implementation

Schema markup that’s accurate, relevant, and actually matches visible page content. Not just adding JSON-LD for rich results. Building structured data that helps search and AI systems understand what each page is about and how pages relate to each other.
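
One way to keep markup and visible content from drifting apart is to generate both from the same source data. A minimal sketch generating FAQPage JSON-LD from the same Q&A pairs a template would render; the field values are hypothetical:

```python
# Generate JSON-LD from the same data the page renders, so markup and
# visible content can't drift apart. The Q&A values are hypothetical.
import json

def faq_schema(questions: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from the Q&A pairs rendered on the page."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in questions],
    })

qa = [("What is crawl budget?", "The number of pages Google crawls per period.")]
markup = faq_schema(qa)
print(markup)
```

Because the template loops over the same list to produce the visible FAQ, the markup can never describe content users don’t see.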

Applied Work

Beyond the live sites above.

Five enterprise website launches

Led technical SEO across five enterprise site launches for a heavy equipment manufacturer during a multi-site platform rollout. Standardized site architecture, URL structure, indexing rules, internal linking patterns, and metadata templates before launch. The approach shifted SEO from post-launch remediation to pre-launch system design, reducing duplicate and structural issues across all five sites.

This site as a technical proof of concept

Built haydenschuster.com on WordPress with custom HTML portfolio pages, schema markup per template type, intentional internal link graph, and sitemap structure matching the site hierarchy. Documented every architectural and technical decision publicly. The site itself is the case study.

Detailed reasoning behind decisions like these is in my SEO decision log, including what changed, what the expected outcome was, and what actually happened.

Common Questions

What is technical SEO and why does it matter?

Technical SEO covers the infrastructure that allows search engines to crawl, render, index, and serve your pages. It includes things like site speed, crawlability, canonical handling, structured data, mobile responsiveness, and rendering behavior. It matters because none of your content strategy or link building works if Google can’t access and process your pages correctly in the first place.

How do I know if Google is crawling my site correctly?

Google Search Console is the starting point. The Coverage report shows you which pages are indexed, which are excluded, and why. The Crawl Stats report shows how often Google visits and how much it downloads. For deeper analysis, server log files show exactly which URLs Googlebot requests, how often, and what response codes it gets. The gap between what you want indexed and what Google actually indexes is where most technical problems live.

What is crawl budget and when should I worry about it?

Crawl budget is the number of pages Google will crawl on your site in a given period. For most sites under a few hundred pages, it’s not a real concern. Google will crawl everything. It becomes critical on large sites (1,000+ pages) where faceted navigation, parameter URLs, or infinite pagination can generate thousands of crawlable URLs that waste budget. If your important pages are taking weeks to get re-crawled after updates, crawl budget is probably the bottleneck.

What are canonical tags and why do they matter?

A canonical tag tells Google which version of a page is the “official” one when multiple URLs serve the same or very similar content. This happens more than most people realize: HTTP vs HTTPS, www vs non-www, trailing slash variations, UTM parameters. Without proper canonicalization, Google might split ranking signals across duplicates instead of consolidating them. The canonical tag is a signal, not a directive. Google can and does override it if your internal links, sitemaps, and redirects disagree.

How does JavaScript rendering affect SEO?

Google can render JavaScript, but it’s a two-phase process. First it crawls the initial HTML. Then it queues the page for rendering, which can take hours or days. If your content, links, or metadata only exist after JavaScript runs, there’s a delay before Google can see them, and a risk they won’t render correctly. The safest approach is server-side rendering or hybrid rendering where critical content is in the initial HTML response. Client-side rendering for interactivity is fine, but critical SEO content should never depend on it.

What should I check before and after a site migration?

Before: complete URL mapping (old to new), redirect plan (301s, not 302s), baseline traffic and index data, backlink audit to prioritize redirects for pages with external links. After: validate every redirect works, monitor Search Console for crawl errors and index drops, check that no old URLs return 404s, compare indexed page counts weekly for at least 6 weeks. I wrote a more detailed framework for repeatable migration processes.

What are Core Web Vitals and do they actually affect rankings?

Core Web Vitals measure loading performance (LCP), interactivity (INP), and visual stability (CLS). They are a confirmed ranking signal, but a relatively light one compared to content relevance and links. Where they matter most is as a tiebreaker and as a user experience factor. A slow site loses visitors before your content ever gets a chance. The important thing is measuring against real user data (Chrome User Experience Report), not just lab tools like Lighthouse, because real-world performance on real devices is what Google uses.

How does structured data help with SEO and AI search?

Structured data (JSON-LD schema markup) gives search engines explicit information about what your content means, not just what it says. It can enable rich results in Google (FAQ dropdowns, product ratings, how-to steps) and helps AI search systems extract and attribute information more accurately. The key rule: structured data must match visible page content. Marking up content that users can’t see violates Google’s guidelines and will get you penalized.

What is the difference between a 301 and 302 redirect?

A 301 is a permanent redirect. It tells search engines that the old URL has permanently moved and to transfer ranking signals to the new URL. A 302 is a temporary redirect, meaning the old URL might come back. Google handles them differently: 301s consolidate link equity to the new URL, while 302s keep the original URL in the index. For site migrations and URL changes, you almost always want 301s. I see 302s used accidentally more often than intentionally, and it’s one of the easiest things to catch in an audit.

How do you prioritize technical SEO fixes?

I work in this order: indexation and duplication risks first (are the right pages indexed, are duplicates consolidated), then crawl and internal link structure (can Google find everything, is authority flowing correctly), then site architecture (does the hierarchy make sense), then metadata and template consistency, then performance, and finally enhancements like schema. The logic is simple: fix the things that prevent pages from appearing in search before optimizing the things that make them rank better. No point polishing a page Google can’t even find.

Go Deeper

Articles where I go deeper on specific technical problems and diagnostic approaches.