SEO and AI: How to Stay Visible When the Clicks Disappear

Ian Duncan • 16th Sep 2025

SEO and AI have collided at the top of Google, and AI Overviews now dominate where websites once stood.

The fallout?

Website owners report less organic traffic, even for pages that rank well.

Meanwhile, more discovery journeys are starting inside chatbots, never making it out to the open web.

Some question the ongoing value of SEO, proclaiming it outdated and promoting new acronyms like GEO (Generative Engine Optimisation).

But the reality is not an obituary for search.

LLMs are new territory for organic visibility, and the process for staking a claim to that territory is — it turns out — remarkably similar to doing good SEO.

If you care about being visible in this AI-dominated landscape, this guide is here to help.

We’ll cover:

  • Where the clicks went
  • How AI is changing search behaviour
  • Balancing GEO and SEO
  • How LLMs actually work
  • Using AI to accelerate SEO
  • Why brand strength matters now more than ever

Let’s dive in.

Where did all the clicks go?

In recent years, a recurring theme has been news of falling organic traffic as more AI features were rolled out in Google. 

One factor in the reported decline is “zero-click search”.

These are queries that end without a click on any of the results presented.

Zero-click isn’t new. There have always been searches where the results page itself answered the query (think dates for key historical events — Google has been using Featured Snippets for this purpose since 2014).

What has changed is the scale and scope: AI Overviews now apply the zero-click dynamic to far broader intents, including commercial and consideration queries, not just trivia.

A large 2024 clickstream study found that around 59% of US and EU searches ended in zero-click.

And a separate 2025 study suggested that when an AI Overview appears, nearly 80% of those searches now finish without a click-through.

When AI Overviews appear, the only readily visible links are pushed to the right-hand side.

Some of you may recall that the right-rail is where Google used to display paid ads.

According to Matt Lawson, formerly Google VP of Ads Marketing, Google stopped putting ads there because “users didn’t click on them as much”.

Beyond the fact that the AI Overview itself will answer many queries on its own, other compounding reasons for reduced click-through rates include:

  • Eye-movement patterns — People tend to scan the top-left area of pages, often in an F-shaped sweep that de-prioritises the right rail.
  • Banner blindness — most of us are conditioned to ignore elements on the right because historically they contained ads.

All of which adds up to the perfect conditions for link ignorability.

Rising clicks to Google-owned properties

A growing number of outbound clicks stay inside Google’s own ecosystem: YouTube, Shopping, Maps, Flights, Hotels and other Google properties.

In Q1 2025, SparkToro/Datos estimated that about 14.3% of US Google searches resulted in a click on a Google-owned property, up from 12.1% the year before.

According to an Ahrefs study, YouTube gets the second-highest mention share in AI Overviews, behind Wikipedia.

While this isn’t “zero-click” in the strict sense, it contributes to the relative decline in visits to non-Google websites.

AI Overviews under fire

Google’s AI Overviews have attracted heavy criticism.

Much of the web’s moral indignation centres on the broken value exchange: creators allow Google to index their work for free in return for traffic.

AI Overviews reduce that traffic by summarising the answer on Google’s page, an answer only possible because of those creators.

Unsurprisingly, Google presents this as a user benefit — less friction, faster answers — and has argued (unconvincingly) that less but “better” traffic is a good outcome. Their position sounds a lot like doublethink.

Critics also highlight quality issues. Housefresh showed how AI Overviews would recommend any product you showed interest in — even if the product didn’t exist. More broadly, the overall experience of search has degraded: results are more monetised, more cluttered, and more intent on keeping users inside Google’s ecosystem.

These criticisms mirror themes in The Man Who Killed Google Search, which describes how under shareholder pressure for growth, the Googlers in charge of paid search won out over those in charge of preserving the best overall search experience for the user.

As far as SEO is concerned, the salient point is that AI Overviews are part of Google’s growth story to investors. As such, they look set to remain a cornerstone of Google’s strategy. Your SEO strategy must evolve accordingly.

Search discovery beginning in chatbots, not search engines

At the same time as AI Overviews siphon clicks, we're also seeing discovery start increasingly inside AI chatbots.

Two scenarios are playing out against the backdrop of a broader Big Tech battle for AI dominance:

  • The replacement scenario — chatbots like ChatGPT, Claude or Gemini replace search engines as the starting point for discovery. A single conversation can condense what used to be several distinct Google searches, and the answer arrives fully composed without the user visiting another site.
  • The integration scenario — Google integrates the chatbot experience directly into its search engine via AI Mode (powered by Gemini). Google’s hope — clearly — is that users “choose” Gemini rather than migrate to ChatGPT or Claude.

In practice, user behaviour will likely be hybrid, with chatbots and search engines coexisting depending on task type. Either way, LLMs move to the centre of the experience.

From an SEO perspective, the balance of focus shifts toward visibility inside AI answers, not just blue-link clicks.

The big question is: how much does SEO need to adapt? And is "SEO" still the right term for the job?

Search Generative Engine Optimisation?

Depending on who you read, SEO is dead (again) — long live GEO.

GEO stands for Generative Engine Optimisation. If you prefer a different acronym, there’s AIO, AEO or LLMO.

Or feel free to invent your own; there's probably still time to make it stick.

The lists of GEO tasks usually look like this:

  • Structure content to be machine-readable
  • Make content summarisation-friendly
  • Earn brand mentions and citations in authoritative spaces
  • Build semantic authority across a topic cluster
  • Invest in high-quality, differentiated content

Looking at a list like that, many SEOs have made the case that GEO is just SEO.

So… is GEO just SEO? 

No, it isn’t. Like so many things in life, the reality is nuanced.

Think of it like this:

SEO is the umbrella. It’s always been about earning visibility wherever people search, and that now includes AI answer engines. Not to mention that Google has literally shoehorned an AI answer engine into the world’s most popular search engine.

GEO is the emphasis. It’s a useful label for a shift in context: we’re no longer just focussing on web search results, we’re optimising for inclusion in generative answers. If you’re “working on GEO” I immediately understand your goal is mentions or citations in LLM-based systems. By contrast, if you’re “working on SEO” there’s some ambiguity. You might be working on LLM visibility or your goal might be a specific SERP ranking, or both.

Tactics overlap. The practical workstreams (content quality, entities, citations, authority, UX) remain largely the same.

Measurement differs. GEO specifically targets mentions and citations in AI generated answers, rather than clicks from SERPs.

Silos don’t really make sense here. GEO is best understood not as a separate discipline, but as a new branch of SEO — one that recognises the reality of AI as a search layer.

For example, consider structured data.

The goal with structured data is to make your content, and the associations within it, machine-readable. You do this by adding schema types (Organisation, Person, Product, Article, FAQ/HowTo etc.) and connecting them with links to official profiles.

Once the work is done, guess what: you’ve boosted both GEO and SEO.

The GEO boost: LLMs and retrieval systems favour clean, machine-readable facts tied to the right entity.

The SEO boost: schema has long powered rich results and better understanding in Google.
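As a concrete sketch, here is a minimal Organisation schema expressed as JSON-LD, built in Python for clarity. The brand name, URL, Wikidata ID and profile links are all placeholders, not real identifiers:

```python
import json

# Minimal JSON-LD for an Organisation entity (all values are placeholders).
# "sameAs" links the entity to official profiles for disambiguation.
organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q000000",  # hypothetical Wikidata ID
        "https://www.linkedin.com/company/example-co",
    ],
}

# Embed the output in a page inside a <script type="application/ld+json"> tag.
print(json.dumps(organisation, indent=2))
```

The `sameAs` array is doing the heavy lifting here: it ties your on-site entity to the same entity elsewhere on the web, which is exactly the kind of unambiguous association both retrieval systems and Google reward.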

An evolved, GEO-aware SEO process

When you model out what an evolved SEO process (that is GEO-aware) looks like, the fundamentals of the process largely remain the same.

The GEO/SEO balance

However, if we want to dig deeper into the factors that drive GEO specifically, it helps to understand how LLMs work and how LLM-based systems pick what to surface.

How LLMs work

At a high level, language models — like the ones powering ChatGPT — are trained to predict the most statistically plausible output (their answer) to the input they receive (your prompt).

This process of generating an answer from learned patterns is called inference.

LLMs have a huge number of parameters (billions to hundreds of billions) that capture subtle correlations learned during training.

During training the model sees billions of tokenised word-pieces. Each token is mapped to an embedding vector — coordinates placing the token in a high-dimensional space.

Those coordinates are tuned so that tokens appearing in similar contexts drift closer together. That’s how it learns that “NYC”, “New York” and “Big Apple” are related: their vectors cluster.

What the model does not do is “think” or “know” facts like a human.

It produces the most plausible continuation, which often corresponds to true or sensible statements, but not always. Given the right conditions and prompt, it will hallucinate. (Sidenote: if you ever wondered why LLMs hallucinate, it’s because they are inadvertently incentivised to guess when they don’t know the answer, because every now and then they will guess right.)

How is all this relevant to SEO? Associations matter.

If you want LLMs to correlate your brand with the keywords, topics and themes you care about, create content that makes those connections unambiguous.

Your goal is to nudge the LLM’s understanding of where your brand sits in that semantic space.

System prompts

Every time you send a prompt to an AI chatbot, a system prompt gets added as well. You don't see it; it gets injected automatically ahead of whatever you wrote.

Ever wondered how ChatGPT knows it’s called ChatGPT, or how Claude knows it’s called Claude? It’s because the system prompt literally says so.

The very first line of a leaked system prompt for Claude reads:

“The assistant is Claude, created by Anthropic.”

The full system prompt from that leak is a whopping 9741 words. It would take an average human around 39 minutes to read! And they add it in front of every prompt you write.

There’s a lot we can learn about GEO from the wording of the system prompt. Here are a few highlights and key takeaways based on the Claude leak.

High-quality original content matters

Claude System Prompt:

“Favor original sources (e.g. company blogs, peer-reviewed papers, gov sites, SEC) over aggregators. Find highest-quality original sources. Skip low-quality sources like forums unless specifically relevant.”

Key GEO takeaway: The web has always rewarded high quality original content, and Claude has been explicitly instructed to do the same.

Fresh content gets priority

Claude System Prompt:

“Prioritize 1-3 month old sources for evolving topics.” “For topics that change frequently (daily/monthly) OR query has temporal indicators.”

Key GEO takeaway: Publish often, and update your best content frequently. If you don’t already, add a “Last updated” timestamp to your content to explicitly call out its freshness.

Craft short, quotable lines

Claude System Prompt:

“NEVER reproduce copyrighted material.” “Include only a maximum of ONE very short quote from original sources per response, where that quote (if present) MUST be fewer than 15 words long.”

Key GEO takeaway: Take the time to craft short, quotable lines near the start and end of your content to give yourself the best chance of being quoted directly.
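A quick way to audit this is to check whether your opening and closing paragraphs contain any sentence short enough to quote. This sketch uses a naive sentence split and the 15-word limit from the leaked prompt:

```python
import re

def quotable_sentences(text, max_words=15):
    """Return sentences short enough to be quoted verbatim (naive split)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if 0 < len(s.split()) < max_words]

intro = (
    "Zero-click search is reshaping organic visibility. "
    "This long opening sentence rambles on at considerable length and would "
    "never survive a fifteen word quoting limit imposed by an assistant."
)

print(quotable_sentences(intro))  # only the first, 6-word sentence qualifies
```

A real audit would use a proper sentence tokeniser, but even this crude check flags intros where every sentence is too long to be quoted.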

More complex queries get more searches

Claude System Prompt:

“Scale tool calls by difficulty: 2-4 for simple comparisons, 5-9 for multi-source analysis, 10+ for reports.” “Complex queries using terms like ‘deep dive,’ ‘comprehensive,’ ‘analyze’ require AT LEAST 5 tool calls.”

Key GEO takeaway: Complex queries trigger more searches, giving your content more chances to be retrieved. Make sure to create in-depth content targeting terms like “deep dive”, “report” or “analysis”.

Many answers come from the model’s prior knowledge

Claude System Prompt:

“Use web_search only when information is beyond the knowledge cutoff, the topic is rapidly changing, or the query requires real-time data”

Key GEO takeaway: Many answers won’t trigger a live web fetch, so you need your brand/entity well-represented in the model’s prior knowledge too.

Multi-query, multi-facet relevance

When LLM-based systems access the live web, they need a strategy for deciding what keywords to search.

Google’s AI features use something dubbed “query fan-out”: the system issues multiple related sub-searches across subtopics to assemble a response.

Query fan-out example

For SEO, this means that appearing topically relevant requires breadth as well as depth. 

It helps to have content that answers a cluster of related sub-queries, not just one head term.

In practice, that looks like producing pages that cover facets — definitions, comparisons, pros/cons, pricing, implementation steps — and linking them together so both users and machines can explore the cluster.

You can ask your LLM of choice for help with a prompt like:

“Using this question — ‘[insert your question]’ — list the query fan-out facets and sub-queries an LLM would likely pursue.”

This has interesting implications for keyword research.

Low-volume phrases may be strategically valuable if they map to sub-queries an LLM is likely to investigate during fan-out.
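Under the assumption that fan-out sub-queries follow predictable facet patterns (definitions, comparisons, pricing, implementation and so on), you can sketch a seed list for keyword research like this. The facet templates below are illustrative; Google's actual fan-out queries are not public:

```python
# Illustrative facet templates; Google's real fan-out queries are not public.
FACET_TEMPLATES = [
    "what is {term}",
    "{term} vs alternatives",
    "{term} pros and cons",
    "{term} pricing",
    "how to implement {term}",
    "best {term} for small business",
]

def fan_out(head_term):
    """Expand a head term into likely sub-queries for content planning."""
    return [template.format(term=head_term) for template in FACET_TEMPLATES]

for query in fan_out("headless CMS"):
    print(query)
```

Cross-referencing a list like this against your existing pages quickly shows which facets of a head term your cluster still leaves uncovered.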

Knowledge graphs — reducing hallucinations

A knowledge graph is a structured “map” of real-world things (entities) and how they connect (relationships).

They are used to help stop LLMs improvising facts.

Each entity in a knowledge graph has a canonical ID, known aliases, and a set of encoded relationships. So for example, for a company this would include relationships like “founded by,” “headquartered in,” or “is a subsidiary of.”

This structure gives LLM-based systems a grounding layer so the language they produce can be checked against reality.

A commonly referenced public knowledge graph is Wikidata, which helps power Wikipedia.

What does this mean for GEO/SEO? The two pertinent checklist items are:

  1. Define the entities related to your brand using structured data.
  2. Make sure you exist accurately in public knowledge graphs (e.g., Wikidata).

Doing so improves disambiguation and increases the chance that your brand is the entity retrieved when an LLM or search engine looks for facts.
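In miniature, a knowledge-graph entry looks something like the record below (the IDs, names and relationships are made up for illustration). Alias resolution is what lets a system map name variants back to one canonical entity:

```python
# A made-up knowledge-graph record: canonical ID, aliases, relationships.
ENTITIES = {
    "Q_EXAMPLECO": {
        "label": "Example Co",
        "aliases": ["ExampleCo", "Example Company"],
        "relationships": {
            "founded_by": "Q_JANE_DOE",
            "headquartered_in": "Q_LONDON",
        },
    },
}

def resolve(name):
    """Map a label or alias to its canonical entity ID (None if unknown)."""
    for entity_id, record in ENTITIES.items():
        if name == record["label"] or name in record["aliases"]:
            return entity_id
    return None

print(resolve("ExampleCo"))   # resolves to the canonical ID
print(resolve("Acme Corp"))   # unknown entity
```

Checklist items 1 and 2 above are, in effect, you supplying the label, aliases and relationships for a record like this, so that retrieval lands on your entity rather than a similarly named one.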

How to leverage AI for your SEO (and GEO)

AI opens practical doors across the SEO workflow, helping you accelerate your SEO activity.

As the competition for LLM mentions and citations intensifies, your ability to leverage the compute power, speed and scalability offered by AI is only going to become more important.

Topic mapping

Reasoning from first principles: if LLMs learn relationships between entities and topics, our goal is to ensure strong associations between the topics we care about and our organisational entity.

To do that, we must first build a model of the topic space.

One simple, pragmatic method is to query multiple LLMs with a prompt like:

“What are the most closely aligned entities, concepts and subtopics surrounding ‘[central_category]’ for a buyer researching this space?”

Once we have a model of the topic space, we need to establish how well our website covers the topic, and ultimately create authoritative content for each facet of each topic we want to be known for.

Think of this like overlaying a model of your website’s topic coverage on top of the LLM’s model of the topic space. You want the two to match semantically as much as possible.

Overlay your website topic coverage on a model for the topic space to identify the gaps

Sidenote: model snapshots can lag 3–9 months behind the live web; in other words, while you might significantly expand your content, LLMs may not fully recognise those efforts until their next refresh, which is a good reason to publish early and update often.
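The overlay itself can start as something as simple as a set difference between the topic space the LLMs describe and the topics your site already covers. Both lists here are placeholders you would source from the prompt above and your own content inventory:

```python
# Placeholder data: in practice, source these from LLM responses and a site crawl.
llm_topic_space = {
    "headless cms", "content modelling", "api-first architecture",
    "static site generators", "cms migration", "omnichannel publishing",
}
site_coverage = {
    "headless cms", "content modelling", "cms migration",
}

# Topics the LLMs associate with the space that your site doesn't yet cover.
gaps = llm_topic_space - site_coverage
for topic in sorted(gaps):
    print(topic)
```

A real version would match topics semantically (via embeddings) rather than on exact strings, but the gap-finding logic is the same.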

Content scoring — for originality, trust and completeness

Let’s start with the caveat: LLMs can make mistakes, so any LLM evaluations should be used as a first-pass filter, and validated through human review.

That said, there are various dimensions across which LLMs can help as a directional gauge for content quality. Give an LLM your content and ask it to evaluate whether a draft:

  • Surfaces anything genuinely original (data, examples, methods) relative to the top-ranking or most-cited sources on the topic
  • Demonstrates trust signals (clear authorship, citations, conflict-of-interest disclosures, review processes)
  • Achieves topical completeness across the core facets the model expects
  • Explains ideas in a way likely to be shared by the intended audience
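One way to make these checks repeatable is to generate the evaluation prompt from a fixed rubric, so every draft is scored against the same dimensions. The wording below is a sketch, not a proven prompt:

```python
# Rubric dimensions mirroring the checklist above.
RUBRIC = [
    "originality: does the draft surface genuinely new data, examples or methods?",
    "trust: are authorship, citations and review processes clearly signalled?",
    "completeness: are the core facets of the topic all covered?",
    "shareability: is it explained in a way the intended audience would share?",
]

def build_evaluation_prompt(draft):
    """Assemble a first-pass scoring prompt; results still need human review."""
    criteria = "\n".join(f"- {item}" for item in RUBRIC)
    return (
        "Score the following draft from 1 to 5 on each criterion, "
        "with a one-sentence justification per score.\n\n"
        f"Criteria:\n{criteria}\n\nDraft:\n{draft}"
    )

print(build_evaluation_prompt("Our survey of 400 SEO teams found..."))
```

Keeping the rubric in code (or a shared config) also makes it easy to version and refine the criteria as you learn which scores correlate with real performance.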

Content interrogation

LLMs can help improve your thinking and writing clarity.

Ask ChatGPT or Claude to be a logic critic.

If you can handle a bruising teardown of your content, try this prompt:

“Imagine you are totally skeptical of the author of this piece. Imagine you want to find every error you can. Please read this article and write the most obnoxious but intelligent thing you can.”

Or for a more reasonable critique, try:

“Analyse this content and list out any logical flaws in the arguments, including suggestions for how to resolve these.”

AI-powered tactical workflows

There’s a range of ways you can use AI to assist in your workflows, including:

  • Creating page titles and meta descriptions at scale (either using a combination of prompts and page crawl data or using deep research)
  • Using vector embeddings to identify the best candidates for internal links (by comparing the content of the current page against every other non-linked page on the site)
  • Using vector embeddings to avoid semantic drift from your target keyword (by comparing the semantic similarity of your target keyword with your page content).
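As a sketch of the internal-linking idea: embed each page (here with dummy vectors; in practice you would generate them with an embedding model), then rank the non-linked pages by similarity to the current page:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Dummy embeddings; a real pipeline would generate these from page content.
page_vectors = {
    "/guide-to-headless-cms":   [0.9, 0.1, 0.2],
    "/cms-migration-checklist": [0.8, 0.2, 0.3],
    "/office-dog-photos":       [0.1, 0.9, 0.1],
}

def link_candidates(current_page, already_linked):
    """Rank non-linked pages by semantic similarity to the current page."""
    current = page_vectors[current_page]
    scored = [
        (cosine(current, vec), path)
        for path, vec in page_vectors.items()
        if path != current_page and path not in already_linked
    ]
    return [path for _, path in sorted(scored, reverse=True)]

print(link_candidates("/guide-to-headless-cms", already_linked=set()))
```

The semantic-drift check in the last bullet is the same calculation in reverse: embed your target keyword and your draft, and flag the page when the similarity drops below a threshold you choose.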

Brand strength is your moat in the age of AI

It’s worth remembering that even if you “win” at GEO, you’re still playing inside someone else’s ecosystem. Most of the trust and recognition accrues to an AI intermediary, not to you. 

Put another way, optimising for AI answers matters, but it isn’t the endgame. The endgame is having a brand that is strong enough to be recognised, recalled, and sought out directly.

If people already know you, they don’t need an AI broker to recommend you. They’ll type your brand directly, and bypass AI intermediaries altogether.

Brand strength + great content = users bypassing AI to seek you out.

Final thoughts

Despite the seismic impact of AI on search, the core mechanics of SEO remain constant — structured content, authority, relevance. What’s new is the weighting of those mechanics in an AI-shaped ecosystem.

Creating informational content remains critical. Even if Google rewards it with fewer clicks today, it forms the foundation of topical expertise, earns links and mentions, and increases your odds of being included in, or cited by, LLM-based systems.

It does appear perfectly possible to design content that can achieve visibility in AI answers and strong rankings in SERPs.

A layered structure where the top-level summary is GEO-friendly (short, factual, quotable), but the body content is SEO-friendly (in-depth, keyword diverse, and with a unique point of view) is a sensible default to start with.


Looking for support on how to adapt SEO to AI? Check out our SEO services or get in touch.
