Generative Engine Optimization: The Evidence Base

What the Research Actually Shows — And What's Hype

Gartner projects a 25% decline in traditional search by 2026. One in three Australians already use AI assistants regularly. AI referral traffic grew 357% year-over-year. The shift from link-based retrieval to AI-synthesised answers is not a prediction — it is measured behaviour. This analysis covers the academic foundations, platform mechanics, measurement frameworks, and verified evidence for what works in Generative Engine Optimization. Written for enterprise and marketing leaders who need research, not rhetoric.

  • 25% projected decline in traditional search volume
  • 40% visibility lift from GEO (academic)
  • 357% YoY growth in AI referral traffic
  • 1 in 3 Australians using AI assistants regularly

By Gregory McKenzie · Registered Patent Attorney & Systems Architect · NETEVO

The Shift: From Search Results to Synthesised Answers

The most significant structural change in digital discovery since the commercialisation of the internet is now measurable. Traditional search is not dying, but it is fragmenting. Gartner predicts traditional search engine volume will decline 25% by 2026 due to AI chatbots and virtual agents. This is not a speculative claim — it is grounded in observable consumer behaviour that accelerated through 2025.

Pew Research Center's analysis of Google search behaviour in March 2025 found that 18% of Google searches produced an AI summary. Of those, 88% cited three or more sources. Users clicked a traditional result on just 8% of visits with an AI summary, compared to 15% without one. They clicked a link inside the AI summary on only 1% of such visits. Sessions ended on 26% of pages with an AI summary versus 16% without. The click is not disappearing entirely — but it is being redirected and compressed.

Similarweb's June 2025 data quantifies the other side of this equation. AI platforms generated over 1.1 billion referral visits to the top 1,000 websites, up 357% year-over-year. GenAI monthly visits grew 76% year-over-year. App downloads rose 319%. And AI referrals to transactional sites converted at about 7% — demonstrating that AI-referred users are not just browsing. They arrive pre-qualified.

Adobe's retail traffic data tells the same story from the commerce side. AI-driven retail traffic was 35 times higher than in July 2024. AI visitors had a 27% lower bounce rate, spent 38% longer on site, and viewed 10% more pages, and the conversion gap versus non-AI traffic narrowed from 91% lower to just 22% lower in under a year. Bain's survey found 80% of consumers now rely on AI or zero-click results for at least 40% of their searches, and approximately 60% of searches end without the user progressing to another destination site.

The zero-click phenomenon is particularly stark. Similarweb measured the median zero-click rate at roughly 60% for searches without AI Overviews and roughly 80% for searches with them; the average for AI Overview searches was 83%. The implication for enterprises is direct: for a growing class of queries, the AI summary is the destination. If your brand is not part of the synthesis, you are not part of the answer.

What the Observed Data Shows (2025)

  • 18% of Google searches produce AI summaries; 88% cite 3+ sources (Pew Research Center)
  • 1.1 billion AI referral visits to top 1,000 sites, up 357% YoY (Similarweb)
  • AI-driven retail traffic 35x higher than July 2024 (Adobe)
  • 80% of consumers rely on AI/zero-click results for 40%+ of searches (Bain)
  • Zero-click rate: ~60% without AI Overviews, ~80% with (Similarweb)
  • AI visitors: 27% lower bounce rate, 38% longer sessions, 10% more pages (Adobe)
  • Traditional search volume projected to decline 25% by 2026 (Gartner)

The Academic Foundations: What the Research Proves

GEO has a real academic base. The evidence base beyond the foundational paper is still maturing, but the strongest reproducible findings are clear.

The Foundational Paper: Aggarwal et al. (KDD 2024)

The term enters the academic record with Pranjal Aggarwal, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik R. Narasimhan, and Ameet Deshpande — researchers from Princeton, Georgia Tech, Allen Institute for AI, and IIT Delhi. Published at KDD 2024, the paper introduced GEO-Bench and reported that optimisation could raise visibility in generative engine responses by up to 40%. The paper tested nine strategies across over 10,000 queries, establishing the first systematic framework for AI search visibility.

What Works: Evidence-Based Strategies

The strongest interventions were adding citations and sources, adding quotations, adding statistics, and improving fluency and readability. The best single methods lifted baseline visibility by 30–40% on quantitative metrics. For lower-ranked pages (rank 5), the gains were dramatic: Cite Sources produced a 115.1% gain, Quotation Addition 99.7%, and Statistics Addition 97.9%. Combining Fluency Optimisation with Statistics Addition added more than 5.5% over single methods. On validation against Perplexity, reported gains were 22% and 37% on two evaluation dimensions.
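The GEO-Bench visibility metrics described above are built on word count and position within the generated answer. The sketch below is an illustrative simplification, not the paper's exact formulation: the linear position weighting and the function name `visibility_share` are assumptions of this example.

```python
# Simplified sketch of a position-weighted visibility ("impression") metric,
# in the spirit of GEO-Bench's word-count-based metrics. The weighting
# scheme here (linear decay by sentence position) is an illustrative
# assumption, not the paper's exact formula.

def visibility_share(answer_sentences, source_of):
    """answer_sentences: list of sentence strings in the AI answer.
    source_of: dict mapping sentence index -> source id (or absent).
    Returns each source's share of position-weighted word count."""
    n = len(answer_sentences)
    totals = {}
    grand = 0.0
    for i, sent in enumerate(answer_sentences):
        # Earlier sentences carry more weight: n/n down to 1/n.
        weight = (n - i) / n
        contribution = weight * len(sent.split())
        grand += contribution
        src = source_of.get(i)
        if src is not None:
            totals[src] = totals.get(src, 0.0) + contribution
    return {src: val / grand for src, val in totals.items()}


answer = ["Brand A offers X.", "Brand B also offers X.", "Pricing varies."]
shares = visibility_share(answer, {0: "brand-a.com", 1: "brand-b.com"})
```

In this toy answer, brand-a.com earns the larger share despite contributing fewer words, because its sentence leads the response — the same position effect that makes answer-first writing valuable.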

What Doesn't Have Evidence

The least-supported claims in the market are that llms.txt, Markdown mirrors, or 'special AI schema' are proven citation levers. As of March 2026, primary platform documentation and the better public studies do not support that view. The best public study of llms.txt — roughly 300,000 domains — found 10.13% adoption and no measurable relationship to AI citation frequency. Google has explicitly said it does not support llms.txt. No major AI company has publicly confirmed using it in citation systems.

Subsequent Academic Work

The academic field is still thin but growing. C-SEO Bench (Puerto et al., NeurIPS 2025) argues many conversational SEO methods are weak and that gains shrink as more actors optimise. Beyond Keywords (Chen et al., 2025) introduces a content-centric framework evaluated across exposure, credit, and trustworthiness. AutoGEO (Wu et al., 2025) proposes learning engine preferences cooperatively. Chen et al. (2025) reports cross-engine differences and argues AI search favours earned media over brand-owned content.

The Core Finding

The pattern that survives the current evidence base is extractability, evidence density, and citation-oriented writing — not platform-specific gimmicks. Content that is answer-first, text-rich, passage-quotable, with explicit numbers, sourced claims, clear entity identity, and strong authorship performs measurably better across AI platforms. This is not a temporary hack. It is a structural advantage aligned with how retrieval-augmented generation works.
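The "evidence density" idea above can be made concrete with a toy passage scorer. Everything here — the signal set, the regexes, and the weights — is an illustrative assumption, not a documented ranking formula from any AI platform; the point is only that stat-rich, cited passages are mechanically distinguishable from vague ones.

```python
import re

# Toy "evidence density" scorer: counts extractable signals per word.
# Signal choices and weights are illustrative assumptions, not a real
# platform's retrieval formula.

def evidence_density(passage: str) -> float:
    words = passage.split()
    if not words:
        return 0.0
    numbers = len(re.findall(r"\d[\d.,%]*", passage))  # stats like 357%, 1.1
    quoted = passage.count('"') // 2                   # quoted spans
    citations = len(re.findall(r"\[\d+\]", passage))   # [4]-style markers
    return (numbers + 2 * quoted + 2 * citations) / len(words)


dense = 'Referrals grew 357% YoY to 1.1 billion visits [4].'
vague = 'Referrals grew substantially over the period, reaching new highs.'
```

The first passage scores well above the second: same claim, but one version gives a retrieval system numbers and a citation marker to quote.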

How AI Platforms Actually Work: Crawlers, Citations, and Controls

Each major AI platform operates distinct web agents with different roles, capabilities, and citation behaviours. Understanding the mechanics matters more than chasing platform-specific hacks.

OpenAI / ChatGPT
Three distinct agents: OAI-SearchBot (ChatGPT search results), GPTBot (model training), ChatGPT-User (user-initiated fetches). Responses include inline citations and a Sources panel. Commerce feeds and product metadata are documented for shopping. OpenAI does not publicly document llms.txt support or a Markdown-over-HTML preference for search visibility. (OpenAI Platform [9])

Anthropic / Claude
ClaudeBot (training crawl), Claude-SearchBot (search indexing), Claude-User (user-initiated fetches). Web search responses include citations by default. The noindex directive prevents content from appearing in search-powered outputs. Anthropic has not documented llms.txt or schema-specific ranking signals. (Anthropic [10])

Google / AI Overviews
Google's guidance is explicit: to appear in AI Overviews or AI Mode, a page must be indexed and eligible to appear with a snippet in Google Search. There are no additional technical requirements. Important content should be in text form, and structured data should match visible text. You do not need special AI files or special schema. Google-Extended is a robots.txt token for Gemini training, not a separate crawler. (Google for Developers [8])

Perplexity
PerplexityBot respects robots.txt. If blocked, Perplexity may retain domain, headline, and a brief factual summary, but not full text. Perplexity does not build foundation models, so allowing the bot does not imply pre-training use. Citations are available by default in API responses. (Perplexity [11])

Microsoft / Copilot
Copilot shows a Sources button with the exact Bing queries used. Bing supports data-nosnippet for excluding content from AI answers while keeping pages discoverable. Bing Webmaster Tools launched AI Performance in February 2026: Total Citations, Average Cited Pages, Grounding Queries, and Visibility trends. (Bing Webmaster Team [12])

Apple / Siri
Applebot powers Spotlight, Siri, and Safari search. Applebot-Extended is the opt-out for generative AI training. Apple's public attribution and citation behaviour remains less transparent than the other platforms'. (Apple Support)
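The crawler tokens above can all be controlled from a standard robots.txt. The sketch below shows one illustrative policy — allow answer-time and search-indexing agents while opting out of model-training crawls. The agent tokens are the ones documented by each vendor; the Allow/Disallow choices are an example policy, not a recommendation, and vendor behaviour should be checked against current documentation before deploying.

```txt
# Illustrative robots.txt: permit search/answer agents, opt out of
# training crawls. Example policy only — verify against each vendor's
# current documentation.

User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Allow: /

# Google-Extended governs Gemini training use, not Search crawling.
User-agent: Google-Extended
Disallow: /

# Applebot-Extended is Apple's generative-AI training opt-out.
User-agent: Applebot-Extended
Disallow: /
```

Note the asymmetry this policy expresses: blocking a training crawler does not remove a site from AI answers, and blocking a search agent does not opt a site out of training.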

What Actually Works — Evidence vs Hype


Evidence-Based (Strong Support)

30-115% visibility gains (academic)

Adding citations and sources, quotations from recognised authorities, statistics and quantitative data, improving fluency and readability, and writing answer-first with clear structure. These strategies are supported by the Aggarwal et al. paper and survive cross-platform validation. Entity clarity through Organisation, Article, and ProfilePage schema is supported by Google's documentation. The Ahrefs brand-mention study supports off-site corroboration as a major signal.
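Entity clarity via Organisation schema, mentioned above, is implemented as JSON-LD (the format Google's structured data policies recommend). A minimal example follows — all values are placeholders, and the `sameAs` entries illustrate the disambiguation role described in Google's Organization documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Pty Ltd",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://en.wikipedia.org/wiki/Example"
  ]
}
```

Embedded in a `<script type="application/ld+json">` tag, this gives retrieval systems an unambiguous statement of who the publishing entity is — the structured data must match what the page visibly says about the organisation.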


Not Yet Evidence-Based (Weak or No Support)

0% measured citation impact

llms.txt (10.13% adoption, no measurable citation relationship, Google explicitly does not support it). Markdown-over-HTML preference (no primary-source evidence from major AI crawlers). 'Special AI schema' types (Google explicitly states none are required for AI features). FAQPage and HowTo as AI citation levers (Google limits FAQ rich results to authoritative government/health sites; HowTo deprecated). DefinedTerm as citation multiplier (no credible controlled study found).


GEO vs SEO: Not Replacement — Extension

The evidence does not support 'SEO is dead.' It supports a layered model where SEO remains the substrate and GEO adds AI-surface visibility.

Phase 1

SEO: The Eligibility Layer

Foundation

  • Crawlability, indexability, and snippet eligibility — still required for all AI features
  • Google's AI Overviews explicitly rely on standard SEO eligibility rules
  • 76% of URLs cited in AI Overviews also rank in Google's traditional top 10
  • Technical SEO (speed, security, structured HTML) remains the hygiene layer every AI crawler depends on
Deliverable: Without SEO eligibility, your content is invisible to AI systems. SEO is necessary but no longer sufficient.
Phase 2

GEO: The Retrieval and Citation Layer

Extension

  • Citation visibility, AI share of voice, and representation accuracy across AI platforms
  • New KPIs: citation rate, grounding queries, cited URL inventory, AI referral traffic
  • Content architecture optimised for passage extraction and evidence density
  • Entity clarity and authorship signals that help AI systems attribute claims correctly
Deliverable: GEO adds the operating layer that determines whether AI platforms cite you, mention you, or recommend you — not just whether they can find you.
Phase 3

Authority and Corroboration

Amplification

  • Ahrefs' 75K-brand study: branded web mentions had the strongest correlation with AI Overview visibility (Spearman 0.664), far above backlinks (0.218)
  • Off-site corroboration matters more for AI visibility than classic backlink volume
  • Chen et al. (2025): AI search favours earned media over brand-owned and social content
  • Moz AI Mode study: 88% of citations did not match the organic top 10 for the same query
Deliverable: The best current evidence says brand authority across the wider web matters materially more for AI visibility than on-site optimisation alone.
Phase 4

Measurement

Verification

  • Citation rate and AI share of voice across defined prompt sets
  • Grounding queries — what retrieval phrases trigger inclusion
  • AI referral traffic, conversion quality, and session behaviour
  • Accuracy and sentiment monitoring — is your representation correct and favourable?
Deliverable: Current practice is a mix of prompt-set testing, citation extraction, share-of-voice scoring, and referral analytics. No cross-vendor standard exists yet.
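Since no cross-vendor standard exists, prompt-set share-of-voice scoring is typically hand-rolled. The sketch below assumes you have already run each prompt through an AI platform and extracted the cited or mentioned domains per response — collection is out of scope, and the function name `share_of_voice` is this example's own.

```python
from collections import Counter

# Minimal share-of-voice scorer over a prompt set. Input collection
# (running prompts, parsing citations) is assumed to happen upstream.

def share_of_voice(responses, brands):
    """responses: list of sets of cited/mentioned domains, one per prompt.
    brands: iterable of domains to track.
    Returns {brand: fraction of prompts where the brand appeared}."""
    counts = Counter()
    for cited in responses:
        for brand in brands:
            if brand in cited:
                counts[brand] += 1
    n = len(responses)
    return {b: counts[b] / n for b in brands}


# Three prompts, two tracked brands: appearance per prompt, not per citation.
responses = [{"you.com.au", "rival.com"}, {"rival.com"}, {"rival.com"}]
sov = share_of_voice(responses, ["you.com.au", "rival.com"])
```

Run on a fixed prompt set at regular intervals, the same scorer yields the citation-rate and share-of-voice trend lines described above; the prompt set must stay stable for the trend to be meaningful.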


GEO: Frequently Asked Questions

What is Generative Engine Optimization (GEO)?

Generative Engine Optimization (GEO) is the practice of optimising content and digital presence for visibility inside AI-generated answers, rather than just traditional search engine result pages. The term was formalised by Aggarwal et al. in their KDD 2024 paper, which demonstrated that specific optimisation strategies — adding citations, statistics, quotations, and improving fluency — could raise visibility in generative engine responses by up to 40%. GEO differs from traditional SEO in that the primary KPIs shift from rankings and clicks to citations, mentions, share of voice, and representation accuracy across AI platforms like ChatGPT, Gemini, Copilot, Perplexity, and Claude.

Is GEO replacing SEO?

No. The evidence does not support GEO replacing SEO. Google's own documentation states that AI features rely on the same foundational SEO eligibility rules and require no special AI-only technical setup. GEO is best understood as a distinct operating layer on top of SEO. SEO remains the eligibility layer — crawlability, indexability, entity clarity, authoritative content, and trust signals. GEO adds the retrieval and citation layer — visibility inside AI-synthesised answers, citation monitoring, and share-of-model measurement. Industry authorities including Search Engine Land, Gartner, Moz, and Ahrefs converge on this complementary model.

What evidence exists that GEO strategies actually work?

The strongest evidence comes from Aggarwal et al. (Princeton, Georgia Tech, Allen Institute for AI — published at KDD 2024), which tested nine optimisation strategies across 10,000+ queries. The most effective single methods were: adding citations and sources (up to 115.1% visibility gain for lower-ranked pages), quotation addition (99.7% gain), and statistics addition (97.9% gain). Combining fluency optimisation with statistics addition added more than 5.5% above single methods. On validation against Perplexity specifically, reported gains were 22% and 37% on two evaluation dimensions. What lacks strong evidence: llms.txt as a ranking signal, Markdown preference over HTML, and 'special AI schema' types.

How do AI platforms actually decide what to cite?

Each major AI platform operates distinct web agents. OpenAI uses OAI-SearchBot for ChatGPT search results, GPTBot for model training, and ChatGPT-User for user-initiated fetches. Anthropic uses ClaudeBot for training and Claude-SearchBot for search indexing. Google requires standard indexability and snippet eligibility — no additional technical requirements for AI Overviews. Perplexity uses PerplexityBot which respects robots.txt. The verified common denominator across all platforms is crawlable text/HTML with clear entity identity and evidence-dense content. Platform-specific hacks are not supported by primary documentation.

How should Australian businesses approach GEO?

Australian businesses are in a strong position for GEO adoption. Adobe Australia reports one in three Australians are regular AI assistant users (up 7 points since March 2025), and LoopMe data shows Australia leads AI adoption at 37% versus 28% UK and 26% US. ANZ enterprises formally deploying GenAI rose from 14% to 29% in one year. The recommended approach is a four-layer model: (1) Eligibility and control — crawler access, indexability, robots/meta controls; (2) Extractability — answer-first, passage-quotable content with clear structure; (3) Authority and corroboration — authorship, entity clarity, off-site mentions; (4) Measurement — citations, share of voice, grounding queries, AI referrals.

How do you measure GEO performance?

The measurement stack is converging around six core metrics: (1) Citation rate — does your brand appear in AI responses? (2) AI Share of Voice — how often do you appear versus competitors across a defined prompt set? (3) Cited URL inventory — which of your pages get cited and how often? (4) Grounding queries — what retrieval phrases trigger inclusion? (5) Accuracy and sentiment — is the representation correct and favourable? (6) AI referral traffic and conversion quality — are citations generating qualified sessions? Official tools include Bing Webmaster Tools AI Performance (launched February 2026), Moz AI Visibility, Similarweb Gen AI Intelligence Toolkit, and Yext Scout. There is no cross-vendor measurement standard yet.

The Australian and ANZ Context

  • AI assistant regular use. Australia/ANZ: 1 in 3 Australians (Adobe, July 2025), up 7 points since March 2025. Global: ChatGPT usage at 43% AU, 34% UK, 29% US (LoopMe).
  • AI adoption rate. Australia/ANZ: 37% in Australia (LoopMe); search enhancement is the top use case. Global: 28% UK, 26% US; Australia leads adoption.
  • Enterprise GenAI deployment. Australia/ANZ: formally deploying or evaluating rose from 14% to 29% in one year. Global: fastest growth in APAC (Adobe); 63% organisational GenAI usage (SAS, fourth globally).
  • NZ AI assistant usage. Australia/ANZ: 62% have used AI assistants more than once; 33% use AI to search the web. Global: 53% of Gen Z/Millennial early adopters use AI to research brands/products.
  • Consumer intent. Australia/ANZ: two in three Australians want to use AI more; replacing search is a key use case. Global: 33% of NZ early adopters already replace traditional search.
  • Enterprise GEO adoption. Australia/ANZ: 68% of organisations actively changing strategy for AI search (BrightEdge); only 9% of marketing budget allocated to GEO (Search Engine Land). Global: 63% of marketers not yet investing significant time/budget in GEO; 41% expect resources to increase within a year.

Market behaviour is changing faster than enterprise operating models

One in three Australians are regular AI assistant users and two in three want to use AI more. But 63% of marketers have not yet invested significant time, budget, or staff in GEO. The gap between consumer adoption and enterprise readiness is the defining challenge — and opportunity — for Australian businesses.

Australia leads AI adoption in the English-speaking world

LoopMe data shows AI adoption at 37% in Australia versus 28% in the UK and 26% in the US, with ChatGPT usage at 43% (AU), 34% (UK), and 29% (US). Enterprise GenAI deployment rose from 14% to 29% in one year — the fastest growth in APAC. SAS ranks Australia fourth globally in organisational AI usage at 63%.

First-mover advantage is real and measurable

ANZ enterprises formally deploying GenAI doubled in one year. Sydney SEO Conference 2026 treats AI SEO and GEO as mainstream subject matter. Practitioners like Prosperity Media, Dan Petrovic (DEJAN), and Luminary are already positioning for this market. Organisations that establish AI citation authority now build a structural advantage as the market matures.

From Research to Implementation

AI Search Visibility (GEO)

Entity authority, AI citation strategy, and structured data engineering.

This whitepaper explains the evidence. The AI Search Visibility service delivers the implementation — entity authority building, AI citation monitoring, RAG-optimised content architecture, and crawler access configuration.

Share of Voice Framework

Measuring competitive visibility across search and AI.

Share of Model is the GEO-era equivalent of Share of Voice. This framework covers rank-weighted SOV, AI Share of Model measurement, ESOV growth theory, and the patent-backed scoring methodology.

SEO Visibility

Technical SEO, schema engineering, and revenue attribution.

SEO remains the eligibility layer for AI visibility. Without crawlability, indexability, and structured data foundations, GEO efforts have nothing to build on.


Sources

  1. GEO: Generative Engine Optimization — Aggarwal, Murahari, Rajpurohit, Kalyan, Narasimhan, Deshpande — Princeton / Georgia Tech / Allen Institute / IIT Delhi. Foundational GEO paper. KDD 2024. Introduced GEO-Bench and nine optimisation strategies. Reported up to 40% visibility gains.
    https://huggingface.co/papers/2311.09735
  2. Gartner Predicts Search Engine Volume Will Drop 25% by 2026 — Gartner. February 2024 prediction that traditional search volume will decline 25% by 2026 due to AI chatbots and virtual agents.
    https://www.gartner.com/en/newsroom/press-releases/2024-02-19-gartner-predicts-search-engine-volume-will-drop-25-percent-by-2026-due-to-ai-chatbots-and-other-virtual-agents
  3. Google Users Are Less Likely to Click When AI Summaries Appear — Pew Research Center. March 2025 study. 18% of Google searches produced AI summaries. Click rates dropped to 8% with AI summaries vs 15% without.
    https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/
  4. AI Discovery Surges: 2025 Generative AI Report — Similarweb. 1.1 billion AI referral visits, 357% YoY growth. GenAI monthly visits up 76%. AI referrals converting at 7%.
    https://ir.similarweb.com/news-events/press-releases/detail/138/ai-discovery-surges-similarwebs-2025-generative-ai-report-says
  5. Zero-Click Searches — Similarweb. Median zero-click rate ~60% without AI Overviews, ~80% with. Average 83% with AI Overviews.
    https://www.similarweb.com/blog/marketing/seo/zero-click-searches/
  6. AI-Driven Traffic Surges Ahead in Q2 — Adobe Business. May 2025 retail data. AI traffic 35x higher than July 2024. 27% lower bounce rate, 38% longer sessions.
    https://business.adobe.com/blog/ai-driven-traffic-surges-ahead-in-q2
  7. Consumer Reliance on AI Search Results — Bain & Company. 80% of consumers rely on AI/zero-click results for 40%+ of searches. 60% of searches end without another destination.
    https://www.bain.com/about/media-center/press-releases/20252/consumer-reliance-on-ai-search-results-signals-new-era-of-marketing--bain--company-about-80-of-search-users-rely-on-ai-summaries-at-least-40-of-the-time-on-traditional-search-engines-about-60-of-searches-now-end-without-the-user-progressing-to-a/
  8. AI Features in Google Search — Google for Developers. Official guidance: no additional technical requirements for AI Overviews beyond standard SEO eligibility. No special AI files or schema required.
    https://developers.google.com/search/docs/appearance/ai-features
  9. OpenAI Platform — Bots — OpenAI. Documentation for OAI-SearchBot, GPTBot, and ChatGPT-User agents. Crawler roles and robots.txt controls.
    https://platform.openai.com/docs/bots
  10. Anthropic — Web Crawling and Data Collection — Anthropic. Documentation for ClaudeBot, Claude-SearchBot, and Claude-User. Citation defaults and noindex behaviour.
    https://support.anthropic.com/en/articles/8896518
  11. Perplexity — How Perplexity Follows Robots.txt — Perplexity. PerplexityBot behaviour, domain-level retention when blocked, no pre-training use.
    https://www.perplexity.ai/help-center/en/articles/10354969-how-does-perplexity-follow-robots-txt
  12. Introducing AI Performance in Bing Webmaster Tools — Microsoft — Bing Webmaster Team. February 2026 launch. Total Citations, Average Cited Pages, Grounding Queries, Visibility trends.
    https://blogs.bing.com/webmaster/February-2026/Introducing-AI-Performance-in-Bing-Webmaster-Tools-Public-Preview
  13. AI Overview Brand Correlation Study — Ahrefs. 75K-brand study. Branded web mentions: Spearman 0.664 correlation with AI Overview visibility. Backlinks: 0.218.
    https://ahrefs.com/blog/ai-overview-brand-correlation/
  14. Australia Agentic AI Usage is Accelerating Fast — Adobe Newsroom — APAC. One in three Australians are regular AI assistant users (July 2025). Up 7 points since March 2025.
    https://news.adobe.com/en/apac/news/2025/07/australia-agentic-ai-usage-is-accelerating-fast
  15. LoopMe Study: AI Adoption Has Soared — LoopMe. AI adoption: 37% Australia, 28% UK, 26% US. ChatGPT usage: 43% AU, 34% UK, 29% US.
    https://loopme.com/press_releases/loopme-study-ai-adoption-has-soared-to-30-per-cent/
  16. ANZ Brands Accelerate AI But Face Data Challenges — Adobe Newsroom — APAC. ANZ enterprises deploying/evaluating GenAI rose from 14% to 29% in one year. Fastest growth in APAC.
    https://news.adobe.com/en/apac/news/2025/05/brands-in-australia-and-new-zealand-accelerate-ai-but-face-data-challenges
  17. What is GEO — Guide — Search Engine Land. Industry definition convergence. GEO as positioning for AI citation, mention, and recommendation across AI platforms.
    https://searchengineland.com/guide/what-is-geo
  18. Marketers Aren't Ready for GEO — Search Engine Land / Centerfield. 63% of marketers not yet investing in GEO. Average 9% budget allocation. 41% expect resources to increase.
    https://searchengineland.com/marketers-arent-ready-for-geo-survey-461919
  19. BrightEdge Survey: 68% Embracing AI Search Shift — BrightEdge. 68% of organisations actively changing strategy for AI search.
    https://www.brightedge.com/news/press-releases/brightedge-survey-reveals-68-marketers-are-embracing-ai-search-shift
  20. Structured Data Policies — Google for Developers. JSON-LD recommended. Microdata and RDFa also supported. No special AI schema types required.
    https://developers.google.com/search/docs/appearance/structured-data/sd-policies
  21. llms.txt Specification — Jeremy Howard. Proposal published September 2024. Intended for LLM inference-time use. 10.13% adoption, no measured citation impact.
    https://llmstxt.org/
  22. Organization Structured Data — Google for Developers. Official schema guidance for entity identity and disambiguation.
    https://developers.google.com/search/docs/appearance/structured-data/organization
  23. ProfilePage Structured Data — Google for Developers. Schema guidance for authorship, creator identity, and first-hand perspectives.
    https://developers.google.com/search/docs/appearance/structured-data/profile-page
  24. Article Structured Data — Google for Developers. Schema guidance for articles with author, date, canonical, and sameAs fields.
    https://developers.google.com/search/docs/appearance/structured-data/article

Where Does Your Brand Stand in AI Search?

With one in three Australians already using AI assistants regularly and AI referral traffic growing 357% year-over-year, the question is not whether AI search matters — it is whether your brand is part of the answer. An AI visibility assessment maps your current citation footprint, identifies gaps, and builds the evidence-based strategy to close them.