Evergreen monthly benchmark — 2026-03

Running Shoes

Nike leads in total AI mentions, but ASICS owns the top ranking position across 3 of 4 models when asked to compare brands by performance. The gap between raw mentions and ranking authority is where the real opportunity lives.

Top Brands by AI Mention Frequency

Rank  Brand         Badge            Mentions
1     Nike          Overall Leader   11
2     Brooks        High Consensus   10
3     ASICS                          10
4     Adidas        High Consensus   9
5     Saucony                        9
6     New Balance                    9
7     HOKA          Top in Claude    6
8     Puma                           2
9     On Running                     2
10    Altra                          2
11    Mizuno                         2
12    Salomon                        1
13    NNormal                        1
14    Skechers                       1

Why These Brands Win

  • Top-recommended brands share deep product lines across multiple use cases (daily training, racing, stability).
  • They draw extensive editorial review coverage from running-specific publications.
  • They have clear price-tier positioning that AI models can cite with specific dollar amounts, and a strong presence in specialty running retail channels.

Cross-Agent Comparison

How 4 AI models rank the same category.

Agent        #1        #2         #3
chatgpt      Adidas    Brooks     HOKA
claude       HOKA      ASICS      Adidas
gemini       ASICS     Adidas     Brooks
perplexity   Brooks    Saucony    Altra
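The divergence above can be inspected programmatically. A minimal sketch in Python, where the `rankings` dict transcribes the table above and the `trace` helper is a hypothetical name for the "trace a brand across models" operation:

```python
# Per-model top-3 rankings, transcribed from the cross-agent table above.
rankings = {
    "chatgpt": ["Adidas", "Brooks", "HOKA"],
    "claude": ["HOKA", "ASICS", "Adidas"],
    "gemini": ["ASICS", "Adidas", "Brooks"],
    "perplexity": ["Brooks", "Saucony", "Altra"],
}

def trace(brand: str) -> dict:
    """Return the 1-based rank of `brand` in each model's list (None if absent)."""
    return {
        model: ranks.index(brand) + 1 if brand in ranks else None
        for model, ranks in rankings.items()
    }

print(trace("Adidas"))
# {'chatgpt': 1, 'claude': 3, 'gemini': 2, 'perplexity': None}
```

A brand that traces to `None` in most models, like Altra here, is exactly the "invisible outside one model" pattern discussed below.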

What This Means

  • The independent AI model (Claude) ranked ASICS first overall and flagged Puma as a dark horse, a call no other model made.
  • Claude also provided the most specific performance data (energy return percentages, weight in ounces).
  • The commerce-influenced models (ChatGPT, Gemini) both placed Nike higher in their rankings for race-day performance.
  • The search-grounded model (Perplexity) surfaced niche brands like Altra, Skechers, and Mizuno that other models largely ignored, suggesting its web retrieval pulls from a wider range of specialty sources.
  • ChatGPT heavily favored Brooks in its general recommendations (4 of 6 slots), which no other model did.

Analysis

Gaps in the market

The $100-$150 beginner segment is crowded with safe picks but lacks differentiation. AI models default to the same 5 brands. Trail running is underserved in AI recommendations, with only Gemini giving it meaningful coverage (Salomon, NNormal). Puma appears in only 2 of 12 responses despite strong recent performance reviews. On Running gets mentioned but tagged as more lifestyle than performance, an opportunity for the brand to shift its AI narrative. Mizuno is nearly invisible despite being a legacy performance brand.

How we measure this

Each benchmark runs the same standardized prompts across multiple leading AI systems, including ChatGPT, Claude, Gemini, and Perplexity. We use consistent, category-specific questions designed to surface genuine product recommendations — not sponsored results.

Responses are parsed to extract brand mentions, rank position, and frequency. We then analyze cross-model agreement, identify which brands consistently appear in top positions, and flag where AI outputs diverge from marketplace trends.
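As a rough illustration of the parsing and tallying step, here is a sketch under simplifying assumptions: a fixed tracked-brand list, first-mention position as a crude rank proxy, and an illustrative sample response (none of these reflect the actual benchmark prompts or parser).

```python
import re
from collections import Counter

# Illustrative tracked-brand list; the real benchmark covers more brands.
BRANDS = ["Nike", "Brooks", "ASICS", "Adidas", "Saucony", "HOKA"]

def extract_mentions(text: str) -> list[str]:
    """Brands found in `text`, ordered by first mention (a crude rank proxy)."""
    hits = []
    for brand in BRANDS:
        m = re.search(re.escape(brand), text, flags=re.IGNORECASE)
        if m:
            hits.append((m.start(), brand))
    return [brand for _, brand in sorted(hits)]

def tally(responses: list[str]) -> Counter:
    """Mention frequency across a batch of model responses."""
    counts = Counter()
    for text in responses:
        counts.update(extract_mentions(text))
    return counts

sample = "For daily training, Brooks and ASICS lead; Nike wins on race day."
print(extract_mentions(sample))  # ['Brooks', 'ASICS', 'Nike']
```

Rank position, mention frequency, and cross-model agreement can then be computed from these per-response orderings.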

Evergreen categories are benchmarked monthly. Results reflect organic AI behavior at the time of testing. Read the full methodology for details.

Your competitors are showing up in AI results. Are you?

Running shoe brands outside the consensus top 5 are invisible to AI product discovery. A GEO audit reveals the specific content gaps that keep brands out of AI recommendations.

Request Your Free Audit