AI Search teams: don’t conflate ‘visibility’ and ‘citations’

The Jetpack team has attended a few conferences recently and we’ve noticed a worrying trend: speakers conflating visibility and citations as though they’re the same metric. It’s worth restating because these are distinct signals that point to different problems with different solutions. If you mix them up, you risk misdiagnosing your optimisation route entirely – and burning a quarter of work in the wrong direction.

This is the first of two posts on citations. In this one, we tackle the conceptual confusion between visibility and citation. In the follow-up, we’ll go a layer deeper — into why “citation rate” itself can mislead you when it’s measured as a flat number across engines that handle citations very differently.

Let’s get the definitions straight:

VISIBILITY SCORES – the rate (usually a percentage of answers) at which the brand is mentioned by name within the AI answer.

CITATION SCORES – the rate (again, usually a percentage of answers) at which your URL is surfaced as a source link for the answer, whether or not your brand is mentioned in the body copy.

These are different metrics. You can be visible without being cited, and you can be cited without being visible. Let’s unpick how:

Scenario 1: Visible, not cited

Your brand, product or service is well established and disambiguated. You have rich offsite information in Wikipedia and Wikidata, with proper sameAs links pointing back to your site. The AI knows who you are. However, your pages are JavaScript-rendered (like a typical SPA), and most AI crawlers don’t execute JavaScript, so they can’t extract the text.

In this scenario, you’ll be highly visible but barely cited. The risk: the AI ‘fills in the blanks’ with third-party content, or worse, hallucinates details about your products, pricing or positioning. You’re famous, but you’re not in control of the narrative – and that’s a brand integrity problem disguised as a search problem.

Scenario 2: Cited, not visible

You have excellent content – chunked into short paragraphs, tables and FAQs, with complete Schema markup. AI crawlers love your pages and pull from them happily. However, your knowledge graph footprint is spotty: no Wikipedia entry, no Wikidata item, a thin Google Knowledge Graph presence.

In this scenario, AIs may use your content but be reluctant to attribute it to you in the answer. You’re feeding the machine without getting credit – your URL appears as a source, but your brand name doesn’t make it into the response. Great for traffic in theory; useless for brand recall in practice.

Why the distinction matters

These two failure modes look superficially similar on a monitoring dashboard – both manifest as “we’re underperforming in AI search” – but they require completely different remediation paths:

If you’re visible but not cited, the work is technical and on-site. You need to audit your rendering setup (server-side rendering, dynamic rendering, or a fallback for bots), tighten your schema, restructure content into AI-friendly chunks, and make sure crawlable HTML mirrors what users see. No amount of off-site entity work will fix a page that bots can’t read.
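The first step of that rendering audit is mechanical: fetch a page the way a non-JavaScript crawler would and check whether your key copy is actually in the served HTML. A minimal sketch using only the standard library – the URL, user agent string and phrases are placeholders, not a reference to any particular crawler:

```python
import urllib.request

def fetch_raw_html(url, user_agent="Mozilla/5.0 (compatible; render-audit)"):
    """Fetch a page without executing JavaScript, as most AI crawlers do."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def phrases_in_html(html, phrases):
    """Report which key phrases appear in the served (pre-render) HTML."""
    lowered = html.lower()
    return {phrase: phrase.lower() in lowered for phrase in phrases}

# Hypothetical usage:
#   html = fetch_raw_html("https://www.example.com/pricing")
#   phrases_in_html(html, ["per month", "14-day trial"])
# An SPA shell like this fails the check even though a browser shows the copy:
spa_shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
print(phrases_in_html(spa_shell, ["per month"]))
```

If phrases that are visible in a browser come back False here, the content is being injected client-side and bots that don’t render JavaScript never see it.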

If you’re cited but not visible, the work is largely off-site and entity-focused. You need a Wikipedia entry where editorially appropriate, a Wikidata item, structured Organisation schema with sameAs links, and a coherent entity footprint across reputable third-party sources. No amount of on-page content optimisation will earn you brand attribution if the AI doesn’t have a confident entity to attach your content to.
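The on-site end of that entity work is simple to sketch. Below is a minimal Organisation JSON-LD block with sameAs links – all names, URLs and the Wikidata ID are placeholders – emitted from Python for clarity, but in practice it’s just a script tag of type application/ld+json in your page head:

```python
import json

# Placeholder organisation details - swap in your own entity data.
organisation_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Ltd",
    "url": "https://www.acme.example/",
    "logo": "https://www.acme.example/logo.png",
    # sameAs ties the on-site entity to its off-site knowledge graph footprint
    "sameAs": [
        "https://en.wikipedia.org/wiki/Acme_Ltd",
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/acme-ltd",
    ],
}

json_ld = json.dumps(organisation_schema, indent=2)
print(f'<script type="application/ld+json">\n{json_ld}\n</script>')
```

The sameAs array is doing the heavy lifting: it gives the engine a set of corroborating identifiers to resolve your content against a single, confident entity.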

Confuse these two and you might spend a few months optimising in the wrong direction – tweaking schema when you needed to build entity authority, or chasing Wikipedia entries when your real problem was that bots couldn’t render your homepage.

The third state nobody talks about

There’s also a third diagnostic state worth flagging: low visibility AND low citation. This is the one most brands actually find themselves in when they first audit, and it’s the most useful starting point because it tells you to do both jobs in parallel – on-page rendering and content structure on one track, entity and knowledge graph work on the other. The mistake we see most often is brands picking one track because they read a single conference talk that emphasised it, then wondering why the needle barely moves.

How to measure both, properly

Visibility is measured by sampling a representative set of prompts in your category and counting brand mentions in the answer text. Citation is measured by counting source URLs in the same answers and checking how many resolve to your domain — though as we’ll cover in the next post, what counts as a citation varies meaningfully across engines, and this is where most monitoring tools fall down. The two need to be tracked independently, side by side – the gap between them is the diagnostic signal, not either number on its own.
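The measurement above can be sketched in a few lines, assuming you’ve already collected a sample of answers (body text plus cited source URLs) from your prompt set – the brand name, domain and sample data here are hypothetical:

```python
from urllib.parse import urlparse

def visibility_and_citation_rates(answers, brand_name, domain):
    """Compute visibility and citation rates over a sample of AI answers.

    answers: list of dicts with 'text' (answer body) and 'sources' (cited URLs).
    Returns (visibility_rate, citation_rate) as fractions of the sample.
    """
    visible = cited = 0
    for answer in answers:
        # Visibility: the brand is mentioned by name in the answer text
        if brand_name.lower() in answer["text"].lower():
            visible += 1
        # Citation: any cited source URL resolves to your domain
        hosts = {urlparse(url).netloc.lower() for url in answer["sources"]}
        if any(h == domain or h.endswith("." + domain) for h in hosts):
            cited += 1
    n = len(answers)
    return visible / n, cited / n

# Hypothetical sample: one visible-not-cited answer, one cited-not-visible answer
sample = [
    {"text": "Acme is a leading vendor...", "sources": ["https://review-site.example/acme"]},
    {"text": "One option for this workflow...", "sources": ["https://www.acme.example/docs"]},
]
vis, cit = visibility_and_citation_rates(sample, "Acme", "acme.example")
```

Naive substring matching and flat URL counting are deliberate simplifications – real monitoring needs entity disambiguation and per-engine citation handling – but the point stands: track the two rates side by side and read the gap.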

This is the bit many monitoring tools don’t surface. They’ll show you a visibility number and a citation number, but they won’t tell you whether the gap is a rendering problem, a schema problem, or an entity authority problem — the three diagnostic axes that actually drive remediation. That requires auditing the underlying causes – rendering, schema, entity footprint, knowledge graph presence – not just measuring the symptoms. It’s the gap we built GEO Jetpack to close.

If you take one thing from this post: before you commission GEO work, make sure whoever’s leading it can articulate which of these scenarios you’re actually in. If they can’t, they’re guessing – and you’ll be paying for the guess.

Next post: why “citation rate” is itself a conflation — how Claude, Perplexity, ChatGPT, Google AI Mode, and Copilot all handle citations differently, and what that means for how you should actually be measuring AI search performance.
