[Figure: 6-layer AEO (Agentic Engine Optimization) framework for optimizing content for AI agents]
⚡ TL;DR — Direct Answer
  • On April 15, 2026, Addy Osmani (Engineering Director, Google Cloud AI) published the AEO (Agentic Engine Optimization) framework — the SEO equivalent for AI agents.
  • AI agents compress 4–8 browsing steps into 1–2 HTTP requests. Your analytics will never see them.
  • Documents exceeding 25,000 tokens overflow most agents' context windows — leading to silent truncation or hallucinations.
  • 6 layers to implement: robots.txt, llms.txt, skill.md, Markdown formatting, token surfacing, and a "Copy for AI" button.

On April 15, 2026, Addy Osmani, Engineering Director at Google Cloud AI, published a comprehensive guide on Agentic Engine Optimization (AEO) — a discipline parallel to SEO, specifically designed so that AI agents can discover, read, and act on your content. The original guide is available at addyosmani.com, with coverage from Search Engine Land on the same day.

The core argument is direct: AI agents don't read your website the way humans do. They don't scroll, don't click navigation, don't interact with UI elements. They fire 1–2 HTTP requests, extract what they need, and move on. If your content isn't structured for that consumption pattern, it simply gets skipped — or worse, fabricated.

As AI agent traffic continues to surge, the relevant question is no longer "can AI see my site?" — it's "can AI agents actually use my site?" AEO is the framework that answers it.

The Token Crisis: Why Your Page May Be Unreadable to AI

Osmani identifies the token limit as the central bottleneck. AI agents operate within a finite context window. When a page exceeds that window, the agent silently truncates the content — or hallucinates to fill in the gaps.

The guide illustrates this with a real example: Cisco Secure Firewall documentation sits at 193,217 tokens — well above the usable context limit of virtually every popular AI agent.

  • Quick-start page: under 15K tokens
  • Individual API reference: under 25K tokens
  • Conceptual guide: under 20K tokens

These aren't soft guidelines — they're the operational thresholds below which most AI agents process content cleanly, without truncation or hallucination.
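A quick way to check your own pages against these thresholds is a rough token estimate. The sketch below assumes the common ~4 characters/token heuristic for English text; exact counts require the target model's tokenizer, and the page types and threshold values simply restate Osmani's numbers above.

```python
# Rough token-count check against the AEO thresholds above.
# NOTE: ~4 characters per token is a heuristic for English text,
# not an exact count — a real audit would use the model's tokenizer.

THRESHOLDS = {
    "quick-start": 15_000,
    "api-reference": 25_000,
    "conceptual-guide": 20_000,
}

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return len(text) // 4

def within_budget(text: str, page_type: str) -> bool:
    """True if the page fits the AEO threshold for its type."""
    return estimate_tokens(text) <= THRESHOLDS[page_type]

page = "word " * 10_000              # ~50,000 characters of sample content
print(estimate_tokens(page))         # → 12500
print(within_budget(page, "quick-start"))
```

Running this over your documentation tree surfaces the pages most at risk of silent truncation.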

Does your content exceed these limits? A quick audit identifies high-risk pages and gives you a concrete restructuring plan.

The AEO Stack: 6 Layers to Implement

Osmani's framework is a six-layer architecture for AI-agent-readiness, applied in sequence:

Layer 1 — Access Control (robots.txt)

First thing to check: are you accidentally blocking AI crawlers? A broad Disallow rule can silently exclude Claude Code, Perplexity Bot, GPTBot, and other legitimate agents. This is part of any solid technical SEO audit — but few cover AI crawler-specific rules yet.
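As a sketch, an AI-friendly robots.txt might look like the fragment below. The crawler tokens shown (GPTBot, PerplexityBot, ClaudeBot) are the ones the vendors publicly document, but they change over time, so verify against each vendor's current documentation before relying on them.

```
# Explicitly allow known AI crawlers
# (tokens current as of writing — check vendor docs)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Everything else follows your normal policy
User-agent: *
Disallow: /admin/
```

Note that a blanket `User-agent: * / Disallow: /` rule would override none of this for compliant crawlers only if the specific groups above are present; without them, it blocks AI agents along with everything else.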

Layer 2 — Discoverability (llms.txt)

llms.txt is the AI-agent equivalent of sitemap.xml: a Markdown file at your domain root that presents your site's structure, with descriptions and ideally token counts. Recommended size: under 5,000 tokens.
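A minimal llms.txt, following the shape of the community proposal (H1 title, blockquote summary, link sections), might look like this. The domain, paths, and token counts are illustrative placeholders:

```markdown
# Acme Docs

> Developer documentation for the Acme API: quick start, reference, guides.

## Docs

- [Quick start](https://acme.example/docs/quickstart.md): setup in 5 minutes (~3K tokens)
- [API reference](https://acme.example/docs/api.md): all endpoints (~18K tokens)

## Optional

- [Changelog](https://acme.example/changelog.md): release history
```

Including per-page token counts, as shown, lets an agent pick the smallest document that answers its question.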

Layer 3 — Capability Signaling (skill.md)

This file maps what your product or service does, required inputs, constraints, and documentation links. It lets an AI agent assess your content's relevance before consuming its limited context window.
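There is no single fixed schema for skill.md; one plausible shape, loosely following the frontmatter convention used by Anthropic's Agent Skills, is sketched below. The product, fields, and limits are hypothetical:

```markdown
---
name: acme-invoicing
description: Create and send invoices via the Acme API. Use when asked to bill a customer.
---

# Acme Invoicing

## Inputs
- `ACME_API_KEY` environment variable (required)
- Customer ID and line items

## Constraints
- Rate limit: 60 requests/minute
- Amounts are integers, in cents

## Docs
- Full reference: https://acme.example/docs/api.md (~18K tokens)
```

The `description` line does the heavy lifting: it is what an agent reads first to decide whether loading the rest is worth the context budget.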

Layer 4 — Content Formatting

Serve clean Markdown rather than bloated HTML. Structure for scanning with consistent headings. Eliminate navigation noise. Most critically: front-load essential information in the first 500 tokens. Agents read content as an inverted funnel — if the answer isn't at the top, it may never be found.

Layer 5 — Token Surfacing

Display token counts as visible metadata, HTTP headers, or page annotations. This enables agents to make informed decisions about which content to consume.
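One way to surface counts is a custom response header. The sketch below reuses the ~4 chars/token heuristic from earlier; `X-Token-Count` is not a standard header name, just an assumed convention for illustration, since the guide leaves the exact mechanism open.

```python
# Sketch: attach an estimated token count to a response as metadata.
# "X-Token-Count" is an ASSUMED header name, not a standard — pick one
# and document it in your llms.txt so agents know to look for it.

def token_headers(body: str) -> dict[str, str]:
    estimated = len(body) // 4              # ~4 chars/token heuristic
    return {
        "Content-Type": "text/markdown; charset=utf-8",
        "X-Token-Count": str(estimated),
    }

headers = token_headers("# Quick start\n" + "word " * 2_000)
print(headers["X-Token-Count"])
```

An agent that issues a cheap HEAD request can then skip any document whose advertised count exceeds its remaining context.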

Layer 6 — "Copy for AI" Button

A button that copies your content as clean Markdown to the clipboard, for direct sharing with AI assistants. Osmani describes this as the fastest adoption win — minimal dev effort, immediate utility for technical users.
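A minimal sketch of such a button is below. It assumes your site serves a clean Markdown twin of each page at the same path with a `.md` suffix; that convention, and the element IDs, are illustrative choices, not part of Osmani's guide.

```html
<!-- Minimal "Copy for AI" button. ASSUMES the page's Markdown source
     is served at <same-path>.md — adapt the fetch URL to your setup. -->
<button id="copy-for-ai">Copy for AI</button>
<script>
  document.getElementById("copy-for-ai").addEventListener("click", async () => {
    // Fetch the Markdown twin of the current page and copy it verbatim.
    const res = await fetch(window.location.pathname + ".md");
    await navigator.clipboard.writeText(await res.text());
  });
</script>
```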

How to Detect AI Agent Traffic Already Hitting Your Site

Osmani identifies distinct HTTP fingerprints for each major AI agent:

  • Claude Code: User-Agent axios/1.8.4
  • Cursor: got (sindresorhus/got)
  • Cline / Junie: curl/8.4.0
  • Aider / OpenCode: Headless Chromium via Playwright
  • Windsurf: colly

In GA4 or your server logs, also watch for referrers like labs.perplexity.ai, claude.ai, and chatgpt.com. Agent traffic is nearly invisible in standard analytics: a human session generates 15–20 events; an AI agent generates 1–2 HTTP requests and leaves. For context on how these agents work in practice, see our analysis of Claude Code Routines — Anthropic's automation-without-agents feature.

Our Take

AEO is not a buzzword. It's the formalization of a real problem that technical teams have been quietly dealing with since AI agents entered development workflows. What changes with Osmani's publication is institutional credibility: when a Google Cloud AI director publishes a six-layer playbook, it shifts from expert opinion to directional signal.

For businesses building their SEO strategy in 2026, the implication is straightforward: if your site isn't readable by AI agents, you're invisible in a discovery channel that's growing fast. GEO (cited in generative AI answers) and AEO (used by AI agents) are converging. Better to build for both now than scramble in twelve months.


Is Your Site Readable by AI Agents?

Free diagnostic: tokens, robots.txt, Markdown structure, AEO signals — we analyze and tell you what to fix.

Alexis Dollé
CEO & Founder

Growth and SEO content strategist, I founded Cicéro to help businesses build lasting organic visibility — on Google and in AI-generated answers alike. Every piece of content we produce is designed to convert, not just to exist.

LinkedIn