SEO alone won’t reach every AI — the asymmetry EdgeShaping reveals

What edge logs told me

While reviewing access logs on a site, I noticed an asymmetry.

GPTBot was there. ClaudeBot was there. PerplexityBot was there — but Google, despite clearly referencing the site’s content in AI Overview, left no trace in the logs at all.

Googlebot was visiting on its regular crawl schedule. Yet when AI Overview cited information from the site, no new request reached the server.

Why? The answer lies in a structural asset that only Google possesses.

Google’s walled garden — a replica of the web

For over two decades, Google has been continuously crawling the web with Googlebot. But it doesn’t just crawl. Through its Web Rendering Service (WRS), built on headless Chrome, Google fully executes JavaScript and renders pages before storing the results in its index.

In other words, Google has built and maintains its own replica of the web — separate from the internet itself. This replica, this walled garden, serves as the foundation for Google Search and, at the same time, the information source for AI Overview.

Both AI Overview and Gemini generate answers from this walled garden. Gemini accesses it through its Search Grounding feature, but the essence is the same: when a user asks a question, neither goes out to fetch content from the live web. They pull from what’s already been accumulated.

That’s why no request shows up in the access logs. This is the structural reason behind the observation at the opening of this article.

Other AI engines run on a different architecture

ChatGPT, Perplexity, and Claude operate on a fundamentally different architecture from Google.

These AIs are “fetch-based.” When a user asks a question, they go out to the web in real time to retrieve information. The user’s query itself triggers the crawl. That’s why User-Agents like GPTBot, ClaudeBot, and PerplexityBot appear in server logs.

On top of that, fetch-based AIs cannot render JavaScript. Sites built with client-side rendering (CSR) frameworks, such as React SPAs or Figma CMS, don't include their content in the initial HTML source. These AIs structurally cannot see what's inside.
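The gap is easy to demonstrate offline. Below is a minimal sketch (the two HTML snippets are invented for illustration) comparing what a non-rendering fetcher can extract from a CSR shell versus a server-rendered page:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text the way a non-rendering fetcher would."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Typical CSR shell: content arrives only after JavaScript executes.
csr_shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'

# Server-rendered page: content is present in the HTML source itself.
ssr_page = '<html><body><article><h1>Pricing</h1><p>Plans start at $9.</p></article></body></html>'

print(repr(visible_text(csr_shell)))  # -> '' (nothing for a fetch-based AI to read)
print(repr(visible_text(ssr_page)))   # -> 'Pricing Plans start at $9.'
```

A fetch-based crawler sees exactly what this extractor sees: an empty shell in the CSR case, full content in the SSR case.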

Three distinct layers emerge from this.

| Category | Crawl trigger | Server logs | JS support | Freshness |
|---|---|---|---|---|
| Google (AI Overview) | Googlebot's scheduled crawl (unrelated to user queries) | Recorded as Googlebot (no trace during AI reference) | Fully rendered | Pipeline-dependent, latency present |
| Gemini | Accesses walled garden via Search Grounding (no new fetch triggered by user queries) | Not recorded (indistinguishable from Googlebot crawls) | Rendered (via walled garden) | Pipeline-dependent, latency present |
| ChatGPT / Perplexity / Claude | User query is the trigger | Recorded in real time by bot name | Not supported | Real-time |

Research by mare interno LLC

Even though all of these AIs use web content to generate answers, the paths they take to obtain it — and the visibility each path offers — are entirely different.

The fate of the walled garden — the freshness bottleneck

Google’s walled garden holds an overwhelming advantage in the depth of its accumulated data. But the very process of accumulation becomes a bottleneck for freshness.

The pipeline of crawl → render → index always introduces a time lag. Content published today must wait for the indexing cycle before it can appear in AI Overview.

Fetch-based AIs don’t have this constraint. Content written today can be read by an AI today. The user’s question triggers a direct fetch. This is one of the reasons Perplexity has been able to differentiate itself from Google — the immediacy of fetching “now.”

That said, this time lag is not necessarily on the scale of days or weeks. Since the Caffeine update in 2010, Google has built a system that updates its index continuously. For high-authority sites, reflection can happen within hours. The walled garden is slower, but Google’s long-accumulated investment in freshness should not be underestimated.

Even so, it remains structurally different from the immediacy of fetch-based AIs, where the user’s question triggers retrieval at that very moment.

The view from edge logs — EdgeShaping

Back to the observation at the start.

Fetch-based AIs leave traces in server logs. Which AI bot visited, when, and which page it fetched — all of this is recorded in edge (CDN or server) access logs.
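As a sketch of what reading those logs looks like, here is a minimal classifier over combined-format access log lines. The sample lines are invented, and the User-Agent token list is illustrative, not exhaustive; real UA strings vary and should be checked against each vendor's published documentation.

```python
import re
from collections import Counter

# AI crawler User-Agent tokens to look for (illustrative, not exhaustive).
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Googlebot")

# Combined log format: "METHOD PATH PROTO" status size "referer" "user-agent"
LOG_RE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

def classify(line: str):
    """Return (bot_name, path) if the line is an AI crawler hit, else None."""
    m = LOG_RE.search(line)
    if not m:
        return None
    for bot in AI_BOTS:
        if bot in m.group("ua"):
            return bot, m.group("path")
    return None

sample = [
    '1.2.3.4 - - [01/May/2025:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 5120 "-" "Mozilla/5.0; compatible; GPTBot/1.0"',
    '5.6.7.8 - - [01/May/2025:10:01:00 +0000] "GET /blog HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '9.9.9.9 - - [01/May/2025:10:02:00 +0000] "GET /blog HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (real browser)"',
]

hits = Counter()
for line in sample:
    entry = classify(line)
    if entry:
        hits[entry[0]] += 1

print(hits)  # which AI bots visited, and how often
```

The key point the table above makes is visible here: GPTBot and ClaudeBot hits can be counted per page in real time, while a walled-garden reference never produces a line to classify.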

References from Google’s walled garden, on the other hand, leave no trace. AI Overview citations can only be observed indirectly through Search Console.

Starting from this asymmetry, EdgeShaping visualizes AI crawler behavior from edge access logs and enables you to understand which AIs can see your content and which cannot.

The person who reads logs is the first to notice this asymmetry. And from that observation, the strategy for AI-era visibility begins.

Conclusion — SEO works for AIO, but it’s not enough

Given the structure outlined above, one affirmation and one limitation become clear.

The affirmation: SEO works for AIO. Google’s walled garden is composed of content that has been properly indexed through SEO. Since both AI Overview and Gemini generate answers from this garden, content that performs well in SEO can appear in AI Overview. This is fact.

The limitation: that only applies inside Google’s walled garden. ChatGPT, Perplexity, and Claude are fetch-based and do not reference Google’s index. They cannot render JavaScript either. SEO alone cannot secure visibility with these AI engines.

To achieve full AI visibility, the following are needed in addition to SEO:

  • Content delivery via SSR or static HTML (formats that fetch-based AIs can read)
  • Structured data to make meaning explicit
  • Edge log observation to understand visibility per AI engine
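For the structured-data point, one common approach is embedding schema.org markup as a JSON-LD block in the page head. A minimal sketch follows; all field values are placeholders to be replaced with real page metadata:

```python
import json

# Placeholder metadata for an Article; swap in real values per page.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2025-05-01",
    "author": {"@type": "Organization", "name": "Example Co."},
}

# Embedded in the HTML <head>, this states the page's meaning explicitly
# rather than leaving it to be inferred from rendered layout.
snippet = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(article, indent=2)
print(snippet)
```

Because JSON-LD travels inside the static HTML source, it remains visible even to fetch-based crawlers that never execute JavaScript.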

SEO is a necessary condition for AIO — but not a sufficient one. The world of AI extends beyond Google’s walled garden. The starting point for seeing that full picture lies in the edge logs.