GageRef: Welding Reference Built for the Mid-Task User

Case Study

Built for someone who needs the answer before the puddle cools.

GageRef is a free welding and hydraulic fitting reference with 30 electrodes, 67 comparisons, and 7 interactive tools. It is built for a user who is mid-task, not mid-research. That constraint shaped every decision on the site.

The user is not browsing

Every site I have built serves a different type of user at a different moment. CheckMyTap serves a homeowner researching water quality at their kitchen table. UpOrbit serves someone exploring productivity systems over multiple sessions. WireRef serves an electrician planning a job or checking code compliance mid-project.

GageRef serves someone standing in front of a welding machine who needs to know the amperage range for 1/8-inch E7018 right now. Or a mechanic lying under a piece of equipment trying to figure out which hydraulic fitting standard they are looking at. The session is short. The need is immediate. They are not comparing options or reading background material. They need a specific answer to a specific question, and they need it fast enough that looking it up does not interrupt their work.

That is a fundamentally different design problem than any of my other projects. It changes how pages are structured, what goes above the fold, why tools exist instead of articles, and why comparisons are split into dedicated pages instead of bundled. Every decision on GageRef traces back to this: the user is mid-task.

gageref.com – live site

30 electrodes · 67 comparisons · 7 tools · 7 fitting standards · 25 guides

The SERP was built for the wrong moment

I studied what was ranking for electrode queries and the pattern was consistent. The content was written for someone learning about welding, not for someone doing it. Long editorial posts that explain what E7018 is in 2,000 words before showing the amperage chart. Comparison articles that bury the answer under paragraphs of context the user already has. Manufacturer pages designed for procurement teams, not for the person holding the stinger.

The gap was not informational. There is no shortage of welding content online. The gap was that nobody had built for the mid-task moment. Nobody had asked: what does this person need if they have 10 seconds, not 10 minutes?

Built for researchers

The existing SERP assumes the user wants to learn. It serves educational content to someone who already knows the fundamentals and just needs a specific data point.

  • Amperage ranges buried in paragraphs, not in tables
  • Comparison posts covering five rods in one article with no depth on any pair
  • No way to input your situation and get a recommendation
  • No defect troubleshooting by symptom
  • Hydraulic fitting data locked in manufacturer PDFs

Built for the mid-task user

GageRef assumes the user already knows what welding is. They need the spec, the setting, or the answer. Everything above the fold on every page serves that need first.

  • Full amperage chart visible immediately on every electrode page
  • One comparison per page, one pair per URL
  • Rod selector: 4 questions, ranked output with machine settings
  • Defect troubleshooter: pick the symptom, get causes and fixes
  • Thread identifier: measure, answer 3 questions, know the standard

How mid-task intent shaped every layer

Once you accept that your user is mid-task, every design decision has a clear answer. Should the page lead with context or with the data? Data. Should comparisons be bundled or split? Split, because the user searched for one specific pair. Should tools ask for a lot of input or a little? As little as possible. Should content be long or short? As short as it can be while still being complete.

Reference Pages

Answer first, context second

Every electrode page opens with the full amperage chart, polarity, and position data. The explanatory content exists below for users who want it, but the person who just needs “90 to 160 amps on 1/8-inch E7018” gets that without scrolling. This is the opposite of how most content sites work, where the answer is the reward for reading the article.

Tools over Articles

Input and output, not read and decide

A welder searching “what rod do I need” does not want to read a comparison of eight rods and figure it out themselves. They want to answer four questions and get a recommendation. The rod selector, setup calculator, defect troubleshooter, amperage lookup, brand cross-reference, thread identifier, and adapter finder all follow the same pattern: take an input, return a specific output.
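The input-and-output pattern described above can be sketched in a few lines. This is a hypothetical illustration of how a rod selector might score candidates against four answers; the rod data, scoring weights, and function names are my assumptions for the sketch, not GageRef's actual dataset or code.

```typescript
// Hypothetical rod-selector sketch: four answers in, ranked recommendations out.
// Rod data and scoring weights are illustrative only.

type Answers = {
  material: "mild-steel" | "stainless" | "aluminum";
  thickness: "thin" | "medium" | "thick";
  position: "flat" | "vertical" | "overhead";
  condition: "clean" | "rusty";
};

type Rod = {
  name: string;
  materials: Answers["material"][];
  thicknesses: Answers["thickness"][];
  positions: Answers["position"][];
  handlesRust: boolean;
  amperage: string; // suggested machine setting for 1/8" diameter
};

const RODS: Rod[] = [
  { name: "E6010", materials: ["mild-steel"], thicknesses: ["medium", "thick"],
    positions: ["flat", "vertical", "overhead"], handlesRust: true, amperage: "75–125 A" },
  { name: "E6013", materials: ["mild-steel"], thicknesses: ["thin", "medium"],
    positions: ["flat", "vertical"], handlesRust: false, amperage: "80–130 A" },
  { name: "E7018", materials: ["mild-steel"], thicknesses: ["medium", "thick"],
    positions: ["flat", "vertical", "overhead"], handlesRust: false, amperage: "90–160 A" },
];

// Score each rod against the answers; higher is better. Rods that cannot
// weld the material at all are filtered out before scoring.
function recommendRods(a: Answers): { rod: Rod; score: number }[] {
  return RODS
    .filter(r => r.materials.includes(a.material))
    .map(r => {
      let score = 0;
      if (r.thicknesses.includes(a.thickness)) score += 2;
      if (r.positions.includes(a.position)) score += 2;
      if (a.condition === "rusty" && r.handlesRust) score += 3;
      return { rod: r, score };
    })
    .sort((x, y) => y.score - x.score);
}
```

The point of the pattern is that the user never reads a comparison: the filter-score-sort pipeline does the deciding, and the output carries the machine setting along with the recommendation.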

One Pair, One Page

67 comparisons, each at its own URL

A user searching “E6010 vs E6011” has a different question than someone searching “E6013 vs E7018.” Bundling five comparisons into one article means neither user gets the depth they need. Every meaningful electrode pair and fitting pair has its own page with the spec difference, the practical trade-off, and the common mistake. That structure exists because the mid-task user searched for one specific pair, not a roundup.

Consistent Structure

Same layout, every page

When a welder lands on any electrode page, the amperage chart is in the same place. When they land on any comparison page, the spec difference is in the same place. Rigid structure is a feature for reference material. The returning user already knows where the answer is. They do not have to relearn the page every time.

Job-Specific Guides

Start from the task, not the product

The 12 “Which Rod for My Job?” guides are organized by what the user is doing: trailer frame repair, structural steel, rusty metal, thin sheet metal. Not by electrode classification. A welder does not think “I need a cellulose-coated fast-freeze electrode.” They think “I need to weld a trailer frame.” The guides start where the user starts.

Brand Cross-Reference

Welders search by what is on the box

AWS classifications are how the industry organizes electrodes. Brand names are how welders actually search. “Lincoln Excalibur 7018” and “ESAB Atom Arc 7018” are the same product class, but a welder searching by brand name needs the cross-reference to know that. Building for mid-task means building for how people actually look things up, not how the data is officially categorized.
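Mechanically, a cross-reference like this can be little more than a normalized lookup from brand names to AWS classifications. A minimal sketch, assuming a flat table and a normalization step (the entries and key format are illustrative, not GageRef's actual data):

```typescript
// Hypothetical brand → AWS classification lookup. Entries are illustrative.
const BRAND_TO_AWS: Record<string, string> = {
  "lincoln excalibur 7018": "E7018",
  "esab atom arc 7018": "E7018",
  "lincoln fleetweld 5p+": "E6010",
};

// Normalize the query the way a welder types it: trim, lowercase, and
// collapse whitespace, so "Lincoln  Excalibur 7018" still matches.
function lookupBrand(query: string): string | undefined {
  const key = query.trim().toLowerCase().replace(/\s+/g, " ");
  return BRAND_TO_AWS[key];
}
```

The design choice worth noting is that normalization lives in the lookup, not the data: the table stays canonical while the function absorbs how people actually type.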

What this taught me about intent

I already understood search intent as a concept from CheckMyTap. But GageRef forced me to think about intent at a more granular level. It is not just informational versus transactional. It is about the user’s physical context. Are they at a desk or in a shop? Do they have two minutes or ten seconds? Can they scroll or are they wearing gloves? Those constraints do not show up in keyword research. You have to think about who is actually on the other end of the query.

Two trades, one reference, same user

GageRef covers welding electrodes and hydraulic fittings under one domain. That decision was based on audience overlap. The people who weld are often the same people who work with hydraulic systems: fabrication shops, mobile equipment mechanics, maintenance crews, pipeline workers. These are not two separate audiences. They are the same person on different days, with the same mid-task need for fast, structured reference.

The URL taxonomy keeps them cleanly separated: /welding/ and /hydraulic-fittings/ are independent branches with their own navigation, comparisons, tools, and guides. They share a domain and a design system, not content. That separation was planned before any content was written, so expanding the fittings section later did not require restructuring anything on the welding side.

Welding

30 electrodes, 7 processes, 46 comparisons

Stick, MIG, flux-core, TIG, stainless stick, stainless wire, and aluminum. Each electrode has a dedicated page with the full spec table. 12 job-specific guides. 13 educational guides. Rod selector, setup calculator, defect troubleshooter, amperage lookup, and brand cross-reference.

/welding/stick-electrodes/e7018

/welding/compare/e6010-vs-e6011

/welding/for/welding-rod-for-trailer-frame

Hydraulic Fittings

7 standards, 21 comparisons, 2 tools

JIC 37-degree, NPT, BSP, ORFS, DIN, JIS, and SAE ORB. Dimensional data, thread specs, and cross-reference for each standard. Thread identifier and adapter finder tools for field identification. 21 comparison pages for the pairs most commonly confused.

/hydraulic-fittings/jic

/hydraulic-fittings/compare/jic-vs-npt

/hydraulic-fittings/identify

My Setups: memory for returning users

A mid-task user who finds a setting that works does not want to look it up again next time. My Setups lets users save their working configurations: rod, diameter, amperage, polarity, material, and position. It persists in the browser with no login required. The next time they open GageRef, their saved setups are still there.

This feature exists because of how welders actually work. You find the sweet spot on your machine for a specific rod and material combination, and that setting is worth remembering. Most reference sites treat every visit like a first visit. My Setups treats the returning user as someone with history, which is how a reference tool should work if the audience comes back regularly.
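Login-free persistence of this kind is typically a browser storage round-trip. A sketch of the pattern under stated assumptions: the Setup fields mirror the ones listed above, but the storage key and function names are hypothetical. The storage backend is injected so the same code can run against `window.localStorage` in the browser or an in-memory map elsewhere:

```typescript
// Minimal "My Setups" persistence sketch. Field names mirror the feature
// description; the storage key is hypothetical.
type Setup = {
  rod: string;
  diameter: string;
  amperage: number;
  polarity: "DCEP" | "DCEN" | "AC";
  material: string;
  position: string;
};

// Anything with getItem/setItem works: window.localStorage in the browser,
// an in-memory map in tests.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const KEY = "gageref:my-setups"; // hypothetical storage key

function loadSetups(store: StorageLike): Setup[] {
  const raw = store.getItem(KEY);
  if (!raw) return [];
  try {
    return JSON.parse(raw) as Setup[];
  } catch {
    return []; // corrupted data: fail open with an empty list
  }
}

function saveSetup(store: StorageLike, setup: Setup): void {
  const all = loadSetups(store);
  all.push(setup);
  store.setItem(KEY, JSON.stringify(all));
}
```

Because localStorage persists per origin with no expiry, the saved setups survive across sessions with zero server involvement, which is exactly the no-login behavior described above.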

The concept later expanded into WireRef’s My Jobs feature, where electricians save project-level configurations across multiple wire runs. GageRef tested the idea at the single-setting level. The retention mechanism is the same: give people a reason to come back that is not new content.

Retrospective

GageRef is the project that taught me to think about intent beyond keyword classification. Informational, navigational, commercial, transactional. Those categories are useful but they do not capture the physical context of the user. Someone searching “E7018 amperage” at a desk and someone searching the same query in a fabrication shop have the same keyword intent but completely different needs in terms of page structure, information density, and how fast the answer has to be accessible.

That realization now runs through everything I build. On WireRef, I designed for electricians mid-project. On CheckMyTap, I designed for homeowners mid-research. The intent classification is useful, but the real question is: where is this person and what are they doing right now? That determines how the page should be built.

GageRef is not done. There are electrode categories that need more depth, fitting standards that could use better visual identification guides, and tools that could be improved. But the architecture supports expansion without restructuring, and the intent-driven structure still holds as the site grows. The decision log tracks what is changing and why.

What went wrong

The mid-task framing helped me build the right pages. It did not prevent me from building them in the wrong order or at the wrong depth.

Hydraulic fittings launched too thin

I launched the fitting section with basic standard pages and comparison templates but not enough depth on visual identification guidance. The thread identifier tool helps, but the supporting content around “how to tell these apart by looking at them” was underdeveloped. I was excited about having two verticals live and rushed the second one.

Guides came after they should have

The 25 guides on topics like welding polarity, rod storage, and process comparisons should have launched alongside the reference pages. Instead they came later as I realized the reference pages alone were not capturing educational queries. A mid-task user might not need these, but the user who is learning before their first job does. I missed that audience at launch.

Job guides were reactive, not planned

The “Which Rod for My Job?” section fills a real gap by organizing recommendations around the task instead of the product. But I built them after seeing the search demand, not as part of the initial content plan. The URL structure accommodated them because the taxonomy was flexible, but the content sequencing was wrong.

Underestimated brand name search behavior

Welders search by brand name more than I expected. “Lincoln 7018” and “ESAB 7018” are real queries. The brand cross-reference tool addresses this now, but individual electrode pages did not originally include brand-specific content. I had to retrofit brand names and manufacturer equivalents into existing pages after seeing the data. Building for mid-task means building for how people actually search, and I missed that pattern initially.

The pattern across projects

CheckMyTap’s failures were about planning the wrong things. UpOrbit’s were about executing too fast. GageRef’s are about sequencing and underestimating how the audience actually behaves. The architecture holds up. The assumptions about user behavior needed correction. That is probably always going to be the case.

Intent is not just a keyword category.

GageRef taught me that where the user is and what they are doing matters as much as what they searched. That principle now shapes every project I build.
