How Do You Measure AI Search Visibility?

“We need to improve our brand visibility in AI search.” How many times have you heard that sentence in the last six months?

I’ve heard it dozens of times. In client meetings, at conferences, on LinkedIn.

And everyone saying it is right. ChatGPT, Perplexity, and Google’s AI Overviews are already recommending brands, surfacing reviews, and shaping decisions — before the user ever visits your website. AI search has become a critical touchpoint in the buying journey.

But here’s the real question: How many of you are actually measuring that visibility? And of those who are — are you confident you’re tracking the metrics that actually matter?

Tomek Rudzki from Peec AI published a comprehensive guide this week on the KPIs that genuinely work for AI search visibility measurement. Aleyda Solis and Ethan Smith, CEO of Graphite, contributed their perspectives. When I read it, I thought: this is what I’ve been waiting for. Because it gives concrete answers to the “how do we measure this?” question for everyone who already understands why GEO matters.

Here are the 5 core KPIs you should be tracking — what each one means, why it matters, and where to start today.

Visibility %: Is AI Even Seeing You?

The first and most fundamental question: Is your brand appearing in AI search responses? And if so, how often?

This is where you look at Visibility % — the percentage of relevant AI responses that include your brand. Peec AI’s US data shows Chime has exactly double Revolut’s visibility: 66% vs. 33%. Same metric, two very different stories.

But a single visibility score doesn’t tell the whole story. You need to segment your prompts: by topic, by funnel stage, by customer segment. Are you appearing at the awareness stage, or only at decision? The difference between those two scenarios rewrites your entire strategy.

Tracking individual prompts will always be unreliable — LLMs are non-deterministic by nature. But when you group prompts into categories, patterns emerge. Your results become measurable.
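Once prompts are grouped, the calculation itself is simple. Here’s a minimal Python sketch, assuming you’ve already collected the answer text for each tracked prompt; the data shape and category labels are illustrative, not taken from any particular tool:

```python
from collections import defaultdict

def visibility_by_category(responses, brand):
    """Visibility %: share of AI answers per category that mention the brand."""
    seen = defaultdict(int)
    total = defaultdict(int)
    for item in responses:
        total[item["category"]] += 1
        if brand.lower() in item["answer"].lower():
            seen[item["category"]] += 1
    return {cat: round(100 * seen[cat] / total[cat], 1) for cat in total}

# Illustrative data: collected answers tagged by funnel stage
sample = [
    {"category": "awareness", "answer": "Top fintech apps include Chime and Revolut."},
    {"category": "awareness", "answer": "Popular choices are Monzo and N26."},
    {"category": "decision", "answer": "Chime offers fee-free overdraft protection."},
]
print(visibility_by_category(sample, "Chime"))
# {'awareness': 50.0, 'decision': 100.0}
```

A plain substring match will miss paraphrases and alternate spellings; in practice you’d normalize brand aliases before counting.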

Position: Where Do You Land in the List?

Familiar territory for SEO professionals. Higher-ranked results get more attention, and that logic hasn’t changed in AI search. Brands mentioned earlier in an AI response attract significantly more attention than those buried at the end.

Think about it: when someone asks ChatGPT for “the best CRM software” and your brand is 10th on the list, you’re capturing a fraction of the attention the first or second mention receives.

LLMs don’t rank brands randomly. Two factors drive position: your brand’s prominence in training data, and the real-time sources the model retrieves. Both are parameters you can influence.

Peec AI shared a striking example: AI Mode placed the Skoda Elroq at the top of its “best electric cars” list. For that brand, that’s a massive visibility advantage — one that directly affects purchase decisions. Rank it slightly lower and the advantage evaporates.

One practical tip: track position across multiple prompts and aggregate weekly. Daily results fluctuate significantly; weekly averages reveal the real trend.
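That weekly aggregation takes only a few lines. A minimal sketch, assuming you log one (date, position) observation per prompt run; the observations below are illustrative:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

def weekly_avg_position(observations):
    """Average the brand's position (1 = mentioned first) per ISO week."""
    weeks = defaultdict(list)
    for day, pos in observations:
        year, week, _ = day.isocalendar()
        weeks[(year, week)].append(pos)
    return {wk: round(mean(p), 2) for wk, p in sorted(weeks.items())}

# Illustrative daily observations for one prompt category
obs = [
    (date(2025, 1, 6), 3), (date(2025, 1, 8), 5), (date(2025, 1, 10), 4),
    (date(2025, 1, 13), 2), (date(2025, 1, 15), 2),
]
print(weekly_avg_position(obs))
# {(2025, 2): 4.0, (2025, 3): 2.0}
```

Daily positions of 3, 5, and 4 look noisy in isolation; the weekly averages (4.0, then 2.0) show a clear improvement.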

Brand Sentiment: What Is AI Actually Saying About You?

Visibility and position metrics tell you that you’re in the room. Brand sentiment tells you what’s being said about you while you’re there.

This is one of the most undervalued — yet most actionable — areas of AI search optimization. Training data changes slowly, but the sources shaping your brand’s sentiment can often be corrected quickly. For established brands especially, this is where I’d start.

Consider: when a prospect close to buying asks “Is HubSpot easy to use?” or “Is HubSpot’s customer support good?” — that’s a purchase decision. And it depends entirely on what the AI says next.

Peec AI’s finding while auditing Revolut’s AI presence was striking: LLMs were consistently citing a site stuffed with fake reviews (Sitejabber) as a source. When they dug in, 75% of the negative reviews came from single-review accounts, most of them promoting unrelated services. A single email from the legal team could likely resolve this. It’s a three-way win: readers get accurate information, brand sentiment in LLMs improves, and that negative content gets weighted down in future training data.

This isn’t a PR problem. It’s a strategic issue with a direct revenue impact.

Conversion & Revenue: How Do You Measure Business Coming from AI?

Measuring AI’s impact on revenue is possible. The most practical approach right now is self-reported attribution — asking your customers directly what percentage of your business comes through LLMs.

Where you ask matters. Some businesses collect this during demo calls or onboarding, where it fits the natural flow of conversation. Others ask during registration. Tally already does this: when a user selects “AI,” Tally follows up by asking which queries they used to find the product. This lets them calculate the conversion rate from AI platforms — and learn the exact prompts that drove discovery.

Once you know which customers arrived via LLMs, you can track the revenue they generate over time. If Google Analytics is already under-reporting your SEO impact, this approach gives you a much stronger argument.
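Once the channel answer is stored on the customer record, the rollup itself is trivial. A minimal sketch with an illustrative data shape (the field names are assumptions, not from any specific CRM):

```python
def revenue_by_channel(customers):
    """Sum revenue per self-reported acquisition channel."""
    totals = {}
    for c in customers:
        totals[c["channel"]] = totals.get(c["channel"], 0) + c["revenue"]
    return totals

# Illustrative CRM export rows with a self-reported "How did you find us?" field
crm = [
    {"channel": "ai_assistant", "revenue": 1200},
    {"channel": "organic", "revenue": 800},
    {"channel": "ai_assistant", "revenue": 400},
]
print(revenue_by_channel(crm))
# {'ai_assistant': 1600, 'organic': 800}
```

The hard part isn’t the math; it’s asking the question consistently at signup or demo so the channel field is actually populated.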

Traffic: Familiar, But Not Sufficient Alone

Traffic is the metric we naturally gravitate toward — it’s familiar and easy to report. But AI search users rarely click. When ChatGPT recommends a CRM, there’s often no link in the response at all. The user either searches on Google (and GA4 codes it as “organic”) or types the URL directly (and it shows up as “direct”). Either way, the AI engine gets zero credit.

Eight Oh Two’s 2026 research supports this: 37% of consumers now start their searches with AI instead of Google, but 85% still cross-check in traditional search before converting. One journey, two channels — and attribution models typically only capture the second.

Aleyda Solis puts it plainly: “We need to stop using traffic as a reliable KPI. We need to shift to a mix of branding and performance KPIs: AI visibility, sentiment, purchases, and revenue.”

Ethan Smith emphasizes the statistical angle: “AI responds randomly, but that randomness is predictable. Even a 10-response sample gives you a reliable quick estimate.”
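Smith’s sampling point can be made concrete with a standard binomial estimate. A sketch using the normal approximation; the numbers are illustrative:

```python
import math

def visibility_estimate(hits, n, z=1.96):
    """Point estimate and ~95% normal-approximation interval for Visibility %."""
    p = hits / n
    half = z * math.sqrt(p * (1 - p) / n)
    low, high = max(0.0, p - half), min(1.0, p + half)
    return round(100 * p, 1), (round(100 * low, 1), round(100 * high, 1))

# 6 of 10 sampled responses mentioned the brand
print(visibility_estimate(6, 10))
# (60.0, (29.6, 90.4))
```

The wide interval is honest about what 10 samples buy you: a quick directional read, which tightens fast as you add responses; at n=100 the same hit rate narrows to roughly ±9.6 points.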

Start Today: 3 Concrete Steps

The KPIs in this piece give concrete answers to the “how do we measure this?” question for anyone who already understands why GEO matters. At Stradiji, we’ve started adding an AI Visibility Score to our client reports. Here’s how you can begin today.

  1. Open up your pricing page. ChatGPT’s premium model GPT-5.4 is actively crawling pricing pages and has cited them 138 times. A “contact us” wall may mean premium model users never see you at all.
  2. Add self-reported attribution. Put a “How did you find us?” question on your registration or demo form. Make sure “AI assistant (ChatGPT, Gemini, Perplexity)” is one of the options.
  3. Run a brand sentiment audit. Query your brand in ChatGPT, Gemini, and Perplexity. If any responses are drawing from negative or inaccurate sources, correcting the source benefits both your users and your LLM perception.
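For step 3, even a crude keyword tally over the collected answers gives you a baseline to track over time. A minimal sketch; the keyword lists are illustrative placeholders, and a real audit would use a proper sentiment classifier:

```python
NEGATIVE = ("scam", "hidden fees", "poor support", "fake reviews")
POSITIVE = ("easy to use", "reliable", "great support")

def sentiment_tally(answers):
    """Crude keyword-based sentiment counts over collected AI answers."""
    score = {"positive": 0, "negative": 0, "neutral": 0}
    for answer in answers:
        text = answer.lower()
        neg = any(k in text for k in NEGATIVE)
        pos = any(k in text for k in POSITIVE)
        if neg and not pos:
            score["negative"] += 1
        elif pos and not neg:
            score["positive"] += 1
        else:
            score["neutral"] += 1
    return score

# Illustrative answers collected from ChatGPT, Gemini, and Perplexity
answers = [
    "HubSpot is easy to use and has a generous free tier.",
    "Some review sites describe it as a scam, citing hidden fees.",
    "It is a CRM platform for marketing and sales teams.",
]
print(sentiment_tally(answers))
# {'positive': 1, 'negative': 1, 'neutral': 1}
```

The value is in the trend line: if the negative count drops after you get a bad source corrected, you can show the fix worked.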

Every week you go without measuring AI search visibility is a week you can’t explain to clients, leadership, or yourself why any of this matters.

A month from now, you’ll wish you’d started sooner.
