
On April 2, 2026, MIT published a study titled "Crashing Waves vs. Rising Tides", in which 41 AI models — including Claude, Gemini, and ChatGPT — were tested on more than 11,000 workplace tasks from the U.S. Department of Labor's O*NET database. The verdict: AI reaches "minimally sufficient" quality in 65% of cases — the equivalent of a disenchanted intern doing the bare minimum.

The numbers that matter

  • 65%: success rate at the "good enough" threshold (7/9)
  • < 50%: success rate at the "excellent" threshold (9/9)
  • +11 pts/yr: estimated annual improvement
  • 2029: projected year for 80-95% success ("good enough" threshold)

Researchers used a 1-to-9 scoring scale, where 7 means "minimally sufficient" — the work is usable as-is, no human editing required. At this level, AI passes two out of three tasks. But when excellence is demanded (score of 9), the success rate never exceeds 50%, regardless of how much time the model is given.

In other words: AI handles emails, summaries, and spreadsheets just fine. It fails when you need creativity, complex precision, or multi-step reasoning.

A rising tide, not a tsunami

The study debunks the "sudden mass replacement" narrative. AI automation is progressing steadily across a wide range of occupations — a rising tide, not a crashing wave. Legal and IT roles show lower success rates than construction or maintenance, where text-based tasks tend to be more standardized.

At the current pace of improvement (+11 percentage points per year), researchers estimate AI should reach 80-95% success on text-based tasks by 2029 — at the "good enough" threshold. Reaching reliably superior quality would take "several additional years."
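That projection is easy to sanity-check. A back-of-envelope sketch, assuming (our simplification, not the study's actual model) a 65% baseline in 2026 and a steady linear gain of 11 percentage points per year:

```python
import math

BASELINE_YEAR = 2026   # year of the MIT study
BASELINE_RATE = 65.0   # % success at the "good enough" threshold
ANNUAL_GAIN = 11.0     # estimated percentage-point improvement per year

def year_to_reach(target_rate: float) -> int:
    """First year the projected success rate meets target_rate,
    under simple linear improvement (a rough sketch, not the study's model)."""
    years_needed = math.ceil((target_rate - BASELINE_RATE) / ANNUAL_GAIN)
    return BASELINE_YEAR + max(years_needed, 0)

print(year_to_reach(80))  # 2028
print(year_to_reach(95))  # 2029
```

Under these simplified assumptions, the 80% mark lands around 2028 and 95% around 2029, consistent with the researchers' 80-95%-by-2029 range.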

What this means: AI can already produce "good enough" content. That's not enough to meet Google's E-E-A-T standards. The bar is rising for everyone.

What this changes for your content strategy

If you're publishing content on your website, and after 16 months of testing Google can already spot AI content, this study draws a clear line: "passable" content is becoming the AI default. What differentiates your brand is everything AI still can't do.

  1. Real-world expertise can't be automated. Client anecdotes, implementation feedback, internal data — no LLM can fabricate these. That's your E-E-A-T advantage.
  2. "Good enough" writing doesn't convert. 65% success at minimum quality means the web will flood with mediocre content. AI search engines pick sources that add something extra — your content needs to be among them.
  3. AI as a tool, not an author. Companies that succeed use AI to accelerate production, not replace thinking. An AI first draft + human expertise = content that performs. A 100% AI article = more noise.

Our take

This MIT study confirms what we've been seeing with our clients for a year: pure AI content is already insufficient to rank. Google, ChatGPT, and Perplexity favor sources that demonstrate genuine expertise. When AI reaches 90% by 2029, the only content that will survive is what no model can generate alone — your data, your experience, your point of view.

"Good enough" is the new baseline. Aim higher.

Sources

  • MIT FutureTech — "Crashing Waves vs. Rising Tides" study (April 2026)
  • Fortune — "MIT tested AI on thousands of workplace tasks" (April 3, 2026)
  • arXiv — Full paper (2604.01363)

Can your content survive the AI era?

Get a free visibility audit across Google and AI search engines.

Alexis Dollé
CEO & Founder

A growth and SEO content strategist, I founded Cicéro to help businesses build lasting organic visibility, on Google and in AI-generated answers alike. Every piece of content we produce is designed to convert, not just to exist.

LinkedIn