[Image: server log terminal showing Google-Agent HTTP requests, illustrating AI agent traffic identification]

On March 20, 2026, Google added a new user agent, Google-Agent, to its official crawler documentation and began a gradual rollout. This isn't just another crawler: it's the first official signal that Google's agentic search is in production, with AI agents navigating your website on behalf of real users.

What Google-Agent is — and what it isn't

Google-Agent is fundamentally different from Googlebot. Googlebot continuously crawls your site in the background to index your content. Google-Agent only intervenes when a user triggers an action through a Google AI tool — like Project Mariner.

Concretely: a user asks their Google AI assistant "find me this restaurant's hours and make a reservation." Google-Agent visits the restaurant's website, reads the hours, and submits the reservation form. All without the user opening a tab.

The declared user agent strings from Google are:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)
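
Both strings carry the literal token "Google-Agent", so a first-pass check in application code can be a plain substring match. A minimal sketch in Python; note that the header alone is trivially spoofable, so pair it with a check against Google's published IP ranges before trusting a request:

# Minimal sketch: flag requests whose User-Agent header declares Google-Agent.
# The token matches Google's published strings above. Headers are spoofable,
# so treat this as a pre-filter, not authentication.

def is_google_agent(user_agent: str) -> bool:
    """Return True if the User-Agent header declares Google-Agent."""
    return "Google-Agent" in user_agent

ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
      "Google-Agent; +https://developers.google.com/crawling/docs/"
      "crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36")
print(is_google_agent(ua))  # True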

Why this is a strategic signal

Google is formalizing something that already existed experimentally. By publishing official IP ranges and user agent strings, it acknowledges that its AI agents browse the web on behalf of real users — and that webmasters need to know about it.

This aligns with what we'd been observing in server logs for months on sites that had enabled agent access protocols. But now it's official. Google is saying: "Our agents visit your site. Here's how to recognize them."

Direct implication: if your WAF (web application firewall) or CDN blocks Google-Agent IP ranges, your pages will never appear in Google's agentic results. The damage is comparable to blocking Googlebot, except the cost shows up as lost agentic conversions rather than lost rankings.

And this is just the beginning. With AI agent traffic up 8,000% in 2025, Google is normalizing its agentic infrastructure. Other engines — Bing, Perplexity — will follow with their own declared user agents.

What you need to do right now

  1. Enable granular logging on your server or CDN and search for "Google-Agent" in incoming requests (see the log-counting sketch after this list). Establish a baseline today: current volumes are low, but they'll explode.
  2. Review your WAF and CDN rules — ensure Google-Agent IP ranges are not blocked. Check Google's official documentation for the IP ranges to allow.
  3. Test your critical forms — quote requests, registrations, reservations. An agent that can't complete your form won't convert. That's lost revenue.
  4. Structure your content clearly — AI agents parse clean HTML, structured data, and factual information well. Clear prices, opening hours, and contact details become essential signals for agentic indexing.
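
For step 1, here is a minimal log-counting sketch in Python, assuming a standard combined-format access log where the User-Agent appears in each line; the file name access.log is a placeholder for your own server or CDN export:

# Minimal sketch: count Google-Agent requests per day to establish a baseline.
import re
from collections import Counter
from datetime import datetime

LOG_PATH = "access.log"  # placeholder: your server or CDN log export
DATE_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4})")  # e.g. [20/Mar/2026:10:15:32

daily = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Google-Agent" not in line:  # cheap pre-filter on the UA token
            continue
        match = DATE_RE.search(line)
        if match:
            daily[match.group(1)] += 1

# Print the baseline in chronological order.
for day, hits in sorted(daily.items(),
                        key=lambda kv: datetime.strptime(kv[0], "%d/%b/%Y")):
    print(f"{day}: {hits} Google-Agent request(s)")

Run it daily, or wire the same filter into your log pipeline, and you have the baseline step 1 calls for.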

Googlebot vs Google-Agent: a comparison

Criterion         | Googlebot                    | Google-Agent
Trigger           | Automatic (continuous crawl) | User action (AI query)
Purpose           | Content indexing             | Complete a task for the user
Interactions      | Passive reading              | Navigation, forms, transactions
Frequency         | Regular, scheduled           | On demand, real-time
Impact if blocked | No Google indexing           | No visibility in agentic results

What is Project Mariner?

Google-Agent is primarily associated with Project Mariner, Google's AI agent capable of browsing the web to execute complex tasks. Unlike an assistant that answers questions, Mariner acts: it opens tabs, fills out forms, clicks buttons.

Use cases already tested in beta:

  • Searching and comparing flights, hotels, or products across multiple sites simultaneously
  • Filling in online quote forms on behalf of the user
  • Extracting specific information (prices, availability, hours) and synthesizing it
  • Submitting job applications or service requests

This isn't science fiction. It's in gradual rollout. And publishing the Google-Agent user agent is precisely the signal that this rollout is entering production phase.

robots.txt and AI agents: what you control

Good news: you have control. Google respects robots.txt directives for Google-Agent — just as it does for Googlebot. If you want to block Google-Agent from certain parts of your site (e.g., customer account area, checkout), you can do so explicitly:

User-agent: Google-Agent
Disallow: /account/
Disallow: /checkout/

Conversely, if you want to maximize your agentic visibility, make sure your key pages are not accidentally blocked by an overly restrictive robots.txt. Many sites ship a blanket Disallow under User-agent: * and forget that a specific AI agent needs its own explicit group to be let back in.
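
To audit this quickly, Python's standard library can replay your robots.txt rules from Google-Agent's point of view. A minimal sketch; example.com and the path list are placeholders for your own domain and key pages:

# Minimal sketch: check which paths robots.txt exposes to Google-Agent.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")  # placeholder domain
rp.read()  # fetches and parses the live file

for path in ("/", "/pricing/", "/account/", "/checkout/"):
    allowed = rp.can_fetch("Google-Agent", f"https://example.com{path}")
    print(f"{path}: {'allowed' if allowed else 'blocked'} for Google-Agent")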

Our take

Google-Agent isn't a technical curiosity — it's the official birth certificate of agentic search at Google. Within 18 months, a significant fraction of "visits" to your site will come from AI agents acting on behalf of humans. Preparing your site today means 18 months of competitive advantage over businesses that will wait for it to become mainstream.

The real question isn't "should I allow Google-Agent?" — the answer is almost always yes. The real question is: is my site optimized so that Google-Agent can actually accomplish something useful for a user? If the answer is no, you have work to do.

Alexis Dollé
CEO & Founder

A growth and SEO content strategist, I founded Cicéro to help businesses build lasting organic visibility, on Google and in AI-generated answers alike. Every piece of content we produce is designed to convert, not just to exist.


Is your site ready for AI agents?

Free audit: we analyze your AI crawler accessibility and your GEO positioning.