Why Google's Helpful Content Update Crushed AI-Written Websites: What Smart Brands Are Doing Instead
Google's Helpful Content Update didn't just prefer human writing over AI. It actively penalized content that felt like it was written for search engines rather than people. Here's what happened, who got hit, and what the winners did differently.
In September 2023, Google rolled out what SEOs are still calling the most consequential algorithm update in a decade. The Helpful Content Update (HCU), later folded into Google's core algorithm in March 2024, didn't target spam in the traditional sense. It targeted something subtler: content that exists primarily to rank, not to help.
The timing was not a coincidence. By mid-2023, AI content generation tools had made it trivially easy to produce thousands of technically correct, topically relevant articles in hours. The web was flooded with it. Google noticed, and responded.
What the HCU Actually Measures
Google has been unusually direct about what signals the HCU is looking for. Their own documentation lists a series of questions the algorithm essentially asks about your content:
Does the content provide original information, reporting, research, or analysis?
Does it provide a substantial, complete, or comprehensive description of the topic?
Does the content reflect first-hand expertise and a depth of knowledge, like what a specialist would have?
After reading, will a visitor feel they've learned enough to help achieve their goal?
Would you be comfortable presenting this content to an expert in the field?
Notice the pattern. These questions are about depth, originality, and expertise: qualities that, frankly, ChatGPT cannot reliably produce for specialized professional topics. It can produce content that looks like it has those qualities. That's different.
Which Industries Got Hit Hardest
The HCU hit certain sectors dramatically harder than others. The pattern aligns almost perfectly with the industries where AI-generated content proliferated fastest, and where Google's YMYL (Your Money or Your Life) standards apply most stringently.
Healthcare & Medical
Severe impact: AI-generated symptom guides and treatment overviews lost up to 90% of organic traffic. Google doubled down on E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness).
Legal
Severe impact: Generic AI-written "is X legal in your state" articles were decimated. Sites with actual attorney bylines and jurisdiction-specific expertise held their ground.
Finance
Severe impact: Cookie-cutter investing and credit card comparison content collapsed. Sites with certified financial contributors and original analysis largely survived.
B2B / SaaS
Moderate impact: Generic "what is X software" content took hits. Original case studies, proprietary research, and expert bylines continued to perform.
By the Numbers
65% of sites primarily using AI content saw ranking drops after the March 2024 core update
-76% average organic traffic decline for sites flagged as "unhelpful" in major HCU analyses
4.1x more E-E-A-T signals found in top-10 results post-HCU vs. pre-HCU
Sources: Semrush, Ahrefs, Search Engine Roundtable HCU tracking studies
What the Winners Did Differently
The sites that maintained and even grew organic traffic through the HCU period had several things in common. None of them are secrets, but they are things AI cannot do on its own.
First-person expertise and experience
The HCU specifically added "Experience" to Google's E-A-T framework, making it E-E-A-T. That first E is the new battleground. Content that reflects actual, first-hand experience with a topic (clinical experience, legal practice experience, lived experience) dramatically outperformed content that could only describe that experience secondhand.
A healthcare content writer who has read 1,000 medical studies can produce accurate content. A healthcare content specialist who has worked with actual physicians and patient-facing brands, understands clinical communication, and knows the HIPAA guardrails from experience produces something entirely different.
Original data, analysis, and point of view
Content that cited only the same widely-available studies got outranked by content that added original analysis, proprietary case studies, or a clear expert perspective on what the data means. AI can summarize information that already exists online. It cannot have an opinion grounded in professional judgment. That gap is exactly what Google is measuring.
Clear authorship signals
Author bio pages, byline links to LinkedIn profiles, author schema markup, and consistent publishing under a real professional's name all became more important post-HCU. Anonymous or branded-only content lost ground. Human names, with credentials and real professional histories, gained it.
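These authorship signals can also be made machine-readable with schema.org structured data, which is what "author schema markup" refers to above. A minimal sketch of Article markup with an author Person in JSON-LD; the name, credentials, and URLs here are hypothetical placeholders, not a prescription:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Managing Hypertension: A Clinician's Guide",
  "author": {
    "@type": "Person",
    "name": "Dr. Jane Smith",
    "jobTitle": "Board-Certified Internist",
    "url": "https://example.com/authors/jane-smith",
    "sameAs": [
      "https://www.linkedin.com/in/janesmith"
    ]
  }
}
</script>
```

The `sameAs` links to a LinkedIn profile and the `url` to an on-site author bio page are the machine-readable equivalents of the byline signals described above: they let search engines connect the content to a verifiable professional identity.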
Content that actually helps, not just informs
There's a meaningful difference between content that informs someone about a topic and content that actually helps them do something or decide something. The best-performing post-HCU content was specific, actionable, and written for a real person with a real problem, not for a keyword. If you want a practical breakdown of what that looks like in practice, the no-fluff guide to SEO content goes deep on the mechanics.
The "AI-Assisted, Human-Led" Debate
I want to be clear about something because it comes up constantly: the issue isn't whether AI was involved in the production of content. The issue is whether the final product demonstrates genuine expertise, genuine helpfulness, and a genuine human point of view.
If a physician reviews an AI draft, corrects the clinical details, adds real patient context from their practice, and publishes under their name with their credentials, that is probably fine. The experience, expertise, authoritativeness, and trustworthiness are real.
If a content farm uses AI to produce 200 articles per day with no human review, no subject matter expertise, and no editorial judgment, that's exactly what the HCU was designed to remove from search results. And it's working.
The brands that are winning organic search in 2025 aren't the ones who found a clever way to automate content production. They're the ones who invested in actual expertise and communicated it clearly.
And that same dynamic is now playing out in a new arena: AI search engines. ChatGPT, Perplexity, and Google AI Overviews are applying the same logic the HCU introduced, but at the citation level. They don't just rank content; they decide which sources are trustworthy enough to cite by name. If you want to understand how to position your content for that next wave, the AIEO framework breaks down exactly what it takes to get cited by AI search engines and why the same expertise signals that survived the HCU are the ones that win there too.
Why AI-Generated Content Fails the Expertise Test
To understand why the Helpful Content Update hit AI-generated sites so hard, it helps to look at what AI actually does when it writes. Large language models are prediction engines. They predict the next most likely word based on statistical patterns in their training data. That means they excel at producing content that sounds like what already exists. They are fundamentally designed to be average, to write the statistical center of what has already been published.
The problem is that expertise lives at the edges, not the center. A specialist's insight is valuable precisely because it is not what everyone else is saying. It is a hard-won conclusion from years of specific experience, a pattern observed across a particular set of cases, or a framework developed through trial and error. None of that can be statistically predicted from existing content on the internet. AI can summarize what experts have written. It cannot have the experience that made them experts in the first place.
This is why the HCU's emphasis on first-hand experience is such a devastating filter for AI content. When Google asks whether the content reflects "first-hand expertise and a depth of knowledge, like what a specialist would have," it is asking a question that AI fundamentally cannot answer. The content may be accurate, well-structured, and comprehensive. But if it lacks the specificity, the judgment, and the point of view that comes from lived professional experience, it will not survive a competitive search result page in a post-HCU world.
For brands in healthcare, legal, finance, and other YMYL sectors, this means that the competitive moat is no longer technical SEO tricks or content volume. It is expertise, demonstrated clearly and consistently, by real professionals. The brands that understand this shift and invest in human-authored, expert-driven content are the ones that will dominate search for the next decade.
Building a Human-First Content Strategy After the HCU
The practical implication of the Helpful Content Update is that every brand needs to reevaluate its content production model. If your strategy is built on publishing a high volume of keyword-optimized articles at the lowest possible cost, the HCU has rendered that approach obsolete. The new winning strategy is lower volume, higher expertise, and genuine authorship.
Start by auditing your existing content through the lens of the HCU questions. Which pieces demonstrate first-hand expertise? Which are clearly summaries of readily available information? Which have a real author's name, credentials, and professional history attached? The content that fails these tests is not just underperforming; it is a liability. It drags down your site's overall quality score and reduces the visibility of your genuinely strong content.
Then, invest in the expertise layer. That means hiring writers who actually understand your industry, not just writers who can research it quickly. It means giving your subject matter experts a platform and supporting them with editorial and production help. It means creating content that answers the questions your audience is actually asking, not just the keywords with the highest search volume. And it means being willing to publish fewer pieces if each one is demonstrably better than what your competitors are producing.
The brands that make this shift are already seeing results. Their organic traffic is more stable, their content is more defensible against future algorithm updates, and their audience trusts them more because the expertise is real. The HCU was not a penalty. It was a correction. And the brands that align with the correction are the ones that will win.
Want the full case for human-written content?
The side-by-side breakdown of human writing vs. AI, including the real costs of AI errors in healthcare and legal, the E-E-A-T argument in full, and why a content writer who doesn't use AI is worth the investment.
Read: Why Human Writing Beats AI Content

Related Reading
AI Search & Content Authority Series
3-part series · You're reading Part 1
Why Google's Helpful Content Update Crushed AI-Written Websites: What Smart Brands Are Doing Instead
How Google's HCU reshaped search rankings and why human expertise is now the only durable SEO advantage.
Part 2
How to Write SEO Content That Actually Resonates, Speaks to Pain Points, and Drives Real Business Results
A step-by-step framework for creating content that ranks, connects with real readers, and converts.
Part 3
How to Get Cited by AI Search Engines: The AIEO Playbook for Getting Your Content Into ChatGPT, Perplexity, and Google AI Overviews
The exact signals AI engines use to select cited sources, and how to engineer your content to qualify.
Each article in this series builds on the last. Start from Part 1 for the full picture.
Start from Part 1

If your site's organic traffic has taken a hit, or if you're building a content strategy that needs to hold up to Google's increasingly human-focused standards, book a free discovery call to see what expert-written content with documented outcomes looks like for your industry.


