
Search is anything but stable right now.
We’re seeing fresh signs of ranking volatility heating up again following the March 2026 Core Update, with tracking tools and early chatter pointing to continued movement across the SERPs.
At the same time, Google is doubling down on where search is headed—less tolerance for generic content, tighter spam handling, and deeper AI integration across the search experience.
This week’s updates highlight a clear direction:
Search is evolving fast, and what used to work is getting filtered out faster than ever.
At a recent Google Search Central Live event in Toronto, Danny Sullivan — Google’s Search Liaison who communicates how Search works and advises site owners on best practices — shared an important message about the future of content in search: commodity content is becoming easier for Google to ignore.
Speaking on behalf of Google’s Search team, Sullivan emphasized that as Search evolves — especially with AI-driven features — content that is generic, easily replicated, and lacking originality will struggle to stand out. The focus, instead, is on creating unique, authentic, and non-commodity content that provides real value to users.
What Google Means by Commodity Content
Commodity content refers to content that does not offer anything meaningfully different from what already exists across the web.
This typically includes:
The issue is not that this content is incorrect — it’s that it is interchangeable. If hundreds of sites can publish the same answer with little variation, there is nothing that makes one page stand out over another.
In today’s search environment, that lack of differentiation is becoming a bigger problem.
Why This Matters More in AI Search
Google’s push into AI-powered search experiences changes how content is surfaced.
When information is widely available and consistent across many sources, AI systems can summarize it directly. That reduces the need for users to click through to individual pages for basic answers.
This means commodity content faces a new challenge:
it is not just competing for rankings — it is competing with AI summaries.
If your content does not offer anything beyond what is already widely known, it becomes easier for Google to answer the query without sending traffic to your site.
What Google Considers Non-Commodity Content
Sullivan highlighted the importance of creating content that cannot be easily duplicated.
Non-commodity content includes:
This type of content is harder to scale, but it is also harder for competitors — and AI systems — to replicate.
SEO Fundamentals Still Matter
Another key takeaway from the Toronto event is that content quality does not replace SEO fundamentals.
Google still expects:
Non-commodity content works best when it is supported by solid SEO foundations. Even the most original content can struggle if Google cannot properly crawl or understand it.
The Shift Away From Volume-Driven Content
For years, many SEO strategies relied on publishing large volumes of content — creating pages for every keyword variation, location, or query format.
Sullivan’s message suggests that this approach is becoming less effective.
In an AI-driven search landscape, publishing more pages is not enough. What matters is whether those pages provide distinct value.
Instead of asking “How many pages can we publish?” the better question is:
“What does this page add that users can’t easily get elsewhere?”
How to Move Beyond Commodity Content
To align with Google’s direction, content strategies need to focus on differentiation.
That can mean:
These elements make content more useful — and more difficult to replicate.
The Bigger Picture
Google’s stance on commodity content reflects a broader shift in search.
As AI improves, it becomes better at handling basic, repetitive information. That raises the bar for what content needs to do in order to earn visibility and traffic.
The advantage is shifting toward content that is:

- Original and grounded in first-hand experience
- Difficult for competitors and AI systems to replicate
- Genuinely useful beyond what is already widely available
Key Takeaways
In short, Google is not saying to stop creating content — but it is making it clear that only content that truly stands out will continue to win in search.
In a recent interview, Google’s Liz Reid discussed one of the biggest questions facing search today: what happens to Google when more people get answers from AI tools instead of traditional search results?
The conversation covered AI Overviews, Gemini, AI Mode, search traffic, ads, user behavior, internet slop, and whether Google remains the main gateway to information in an AI-driven web. Reid’s main argument is clear: Google does not see AI as replacing search. Instead, it sees AI as expanding what people can ask, how they ask it, and how useful search can become.
AI Is Changing How People Search
One of the most important points Reid made is that AI is changing query behavior.
With AI Overviews and AI Mode, users are asking longer, more natural language questions. Instead of translating their needs into short keyword phrases, people are beginning to type what they actually mean.
She gave the example of restaurant searches. In the past, someone might search “restaurants in New York,” even though their real need was much more specific: a restaurant in a certain area, for five people, not too expensive, with vegan options, and kid-friendly. In the old keyword search model, users often simplified their actual question because they did not expect the computer to understand the full context.
With AI Mode, Reid says users are now more comfortable sharing the real problem. That allows Google to do more of the translation between human need and web information.
Google Does Not Show AI Overviews Just to Show AI
Reid also explained that Google does not aim to show AI Overviews for every query.
The goal is not to add AI for its own sake, but to determine when AI adds value. For example, a broad query like “Corgi” may return traditional results because the user might want images, breed information, Reddit discussions, or links. But a query like “What is a Corgi?” is more likely to trigger an AI Overview because the intent is more clearly informational.
Google uses user signals to decide when an AI Overview is helpful. If the AI answer improves the experience, Google shows it. If users are likely trying to reach a specific website or browse results directly, Google tries to get out of the way.
Search, AI Mode, and Gemini Serve Different Use Cases
A major part of the interview focused on how users move between Google Search, AI Mode, and the Gemini app.
According to Reid:
There is overlap, but Reid does not suggest that Google plans to collapse everything into one search box immediately. She compares the situation to Google Search, YouTube, Maps, Chrome, and the Google app. All include search-like behavior, but users still prefer different products for different contexts.
Google Sees AI as Expansionary, Not Zero-Sum
Reid repeatedly pushed back against the idea that AI simply replaces search traffic or search behavior.
Her view is that AI lowers the barrier to asking questions. People already have many questions they do not bother asking because the effort feels too high, the expected answer feels uncertain, or they assume search will not understand the full context.
AI changes that. If search becomes easier, more conversational, and more useful, users may ask more questions overall.
That is what Google means when it frames AI as an “expansionary” moment. It is not only about shifting existing searches from classic results to AI answers. It is about unlocking questions people previously left unasked.
What Happens to Website Traffic?
The interview also addressed one of the biggest concerns for publishers: if AI answers the question directly, will users still click through to websites?
Reid’s answer was nuanced. She said some “quick answer” clicks may decline, especially when users previously clicked a page only to grab one fact and immediately return. But she argues that users still click when they want depth, perspective, expertise, or a specific source.
In her view, AI can reduce low-quality bounce clicks while helping users find better pages when they do want to go deeper.
This is an important distinction. Google is not denying that behavior changes. But Reid frames the change as a shift in click quality rather than simply a collapse in traffic.
Ads Still Have a Role in AI Search
The hosts also asked how Google makes money if AI answers more questions directly.
Reid pointed out that many search queries never had ads in the first place. A factual query like asking whether tankers can be seen from the Strait of Malacca may not have been commercially valuable even before AI Overviews.
For commercial queries, the situation is different. If someone is shopping for shoes, an AI answer does not eliminate the need to buy the shoes. Users still need merchants, products, comparisons, and choices.
Reid also suggested that longer, more specific conversational queries could actually create better ad opportunities because they reveal more intent. In other words, AI may change the format of ads, but not necessarily eliminate the commercial opportunity.
Google as a Fact Checker for AI
Another interesting point was the idea that users may increasingly use Google to fact-check answers from other AI tools.
Reid acknowledged that people do use Google this way, but she noted that Google has always been used for fact-checking, even before LLMs. Users searched to verify things they heard from friends, news, social media, or other sources.
Still, this behavior is important. It suggests Google may retain a powerful role as a trust layer even when users start their journey in ChatGPT, Claude, Gemini, or other AI tools.
AI Slop Is New, But Slop Is Not
The interview also covered the rise of AI-generated slop.
Reid made an important distinction: AI did not invent low-quality web content. It simply made it easier to produce at scale. She pointed out that the web already had human-generated slop long before generative AI.
For Google, the challenge remains the same: crawl far more pages than it indexes, index far more than it surfaces, and keep low-quality spam out of results as much as possible.
The key issue is not whether content was generated by AI or humans. The real question is whether Google can surface trusted, useful, high-quality information above the noise.
The Future Interface May Not Be One Thing
When asked what the default entry point to the web might be in the future, Reid avoided predicting a single winner.
She does not believe everything necessarily collapses into one chatbot, one search box, or one personal agent. Instead, she expects more form factors and interfaces, not fewer.
People may use:

- Traditional search
- AI assistants and chat interfaces
- Apps and dedicated agents
- Voice and other device form factors
Different tasks may require different interfaces. A chat interface may be perfect for some tasks, but slow or awkward for others. The future, in Reid’s view, is more adaptive, personal, and ambient — not necessarily one-size-fits-all.
Key Takeaways
Google sees AI as expanding search, not replacing it. AI makes it easier for users to ask more natural, complex questions, which may increase overall search behavior.
AI Overviews are not meant to appear everywhere. Google says it shows them when they add value, based on user signals and query intent.
AI Mode and Gemini serve different purposes. AI Mode leans toward complex informational queries, while Gemini is more focused on productivity and creative tasks.
Search traffic may change, but Google argues not all clicks are equal. AI may reduce quick bounce clicks while still driving users to pages when they want depth, expertise, or perspective.
Ads are not going away. Commercial intent still exists, and longer AI-style queries may create new ad opportunities.
Google may remain important as a trust and fact-checking layer, even when users begin with other AI tools.
AI slop is a scale problem, not a brand-new problem. Google says its job is still to identify and surface useful, trusted content above low-quality material.
The future of search may involve many interfaces, not just one. Search, AI assistants, apps, agents, voice, and devices may all coexist depending on the task.
Final Thoughts
The biggest message from Liz Reid’s interview is that Google is not treating AI as a separate product category sitting outside search. It is treating AI as a way to make search more useful, more conversational, and more aligned with how people actually think.
For SEOs, publishers, and marketers, this matters because the shift is not simply from links to answers. It is from keyword-based searching to intent-rich interaction.
That means the content that wins will likely be content that provides real value beyond a basic fact — content with expertise, perspective, clarity, and usefulness that makes people want to go deeper.
Google has made another important update to its spam reporting system, this time focusing on privacy and how reports are processed.
After recently confirming that spam reports can lead to manual penalties, Google has now clarified a key condition:
If a spam report includes personally identifiable information (PII), it will not be processed at all.
This is the latest development in a series of rapid updates to how Google handles spam reports in April 2026.
A Quick Recap: Spam Reports Just Became More Powerful
Earlier this month, Google updated its documentation to confirm that spam reports can now directly contribute to manual actions. This marks a shift from their previous role as signals primarily used to improve algorithms over time.
As covered in our recent SEOIntel weekly roundup, this change elevated spam reports into a more active enforcement tool, with real consequences for sites that violate spam policies.
At the same time, Google introduced more transparency. The text submitted in a spam report may be shared with the reported site owner if a manual action is taken. Reports remain anonymous, but the content itself is no longer private.
New Update: Personal Information Will Invalidate Your Report
Following feedback from the SEO community, Google has now tightened its guidelines.
The updated documentation makes it clear that spam reports should not include personally identifying information. If such information is present, the report will not be processed or used.
This update is tied to the earlier change around transparency. Since report text may be shared with site owners, Google needs to ensure that no sensitive or personal data is passed along in the process.
Why Google Made This Change
This clarification appears to be a direct response to concerns raised after the earlier announcement about sharing report content.
When Google confirmed that submissions could be shared verbatim, it introduced potential risks related to privacy and compliance. Users might unintentionally include sensitive details in their reports, which could then be exposed.
By rejecting reports that contain personal data, Google is putting a clear safeguard in place while maintaining the integrity of its reporting system.
A Rapid Series of Changes in April 2026
This update is part of a broader sequence of changes to spam enforcement this month:

- Spam reports were confirmed to contribute directly to manual actions, not just algorithm improvements over time
- Google clarified that report text may be shared with the reported site owner if a manual action is taken
- Reports containing personally identifiable information will now be rejected outright
Taken together, these updates show Google refining both enforcement and transparency at the same time.
What This Means for SEOs
These changes affect both those submitting spam reports and those managing websites.
For those submitting reports, it is now more important to keep submissions focused on policy violations and avoid including any personal or sensitive details. Reports should be written clearly and professionally, with the assumption that the reported site may eventually see the content.
For site owners, spam reports now carry more weight than before. They may contribute to manual reviews and penalties, and in some cases, the context of the report may be visible.
The Bigger Picture
Google’s recent updates indicate a shift toward more direct and transparent spam enforcement.
Rather than relying solely on automated systems, Google is incorporating user-submitted reports into its enforcement workflow while setting clearer rules around how those reports are handled.
This reflects a broader direction in search, where accountability, user experience, and compliance are becoming increasingly important.
Final Thoughts
Google’s latest clarification on spam reports may seem like a small change, but it has meaningful implications.
Spam reports are now more actionable, but they also require more care. Submissions must be accurate, relevant, and free of personal data in order to be considered.
As Google continues to refine its systems, both enforcement and responsibility are becoming more immediate and more visible.
Google has updated its search documentation to include a new section on “Read more” deep links, providing guidance on how these links appear and how site owners can increase their chances of getting them.
While not a new feature, this update gives SEOs clearer insight into how Google surfaces section-level links directly within search snippets—and what can prevent them from showing.
What Are “Read More” Deep Links?
“Read more” deep links are clickable links within search result snippets that take users directly to a specific section of a page, rather than the top of the page.
These links typically appear when Google identifies a relevant section tied to the query, allowing users to jump straight to the most useful content.
They are part of a broader trend where Google is making search results more contextual and interactive, reducing the need for users to scan entire pages.
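Mechanically, a "Read more" link is just a result URL carrying a hash fragment that matches an element id on the page. Here is a minimal sketch of that mapping; the function name and URLs are illustrative, not anything from Google's documentation:

```typescript
// Given a search-result URL, return the id of the page section the browser
// will scroll to, or null when the URL carries no fragment.
// Uses only the standard WHATWG URL API (built into Node and browsers).
function fragmentTarget(resultUrl: string): string | null {
  const hash = new URL(resultUrl).hash; // e.g. "#installation"
  return hash ? decodeURIComponent(hash.slice(1)) : null;
}

// A deep link such as https://example.com/guide#installation lands on
// the element with id="installation" rather than the top of the page.
```

The browser resolves the fragment against the element whose `id` matches, which is why stable, descriptive ids on section headings matter for these links.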
Google Finally Documents Best Practices
As part of its April 2026 updates, Google officially added guidance on “Read more” deep links to its snippet documentation.
The documentation outlines key best practices that can improve the likelihood of these links appearing in search results.
According to Google, three factors are especially important:
1. Content Must Be Immediately Visible
Content should be accessible on page load and not hidden behind tabs, accordions, or expandable sections.
If key content is hidden, Google may be less likely to generate deep links pointing to those sections.
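One rough way to sanity-check this on your own pages is a crude server-side scan of the initial HTML: does an anchored heading sit inside a `<details>` element (collapsed by default), or in the visible flow? This is an illustrative heuristic of ours, not how Google actually evaluates visibility:

```typescript
// Rough check: is the element with this id outside any still-open <details>
// block in the raw HTML? (Real rendering is far more complex; this is a sketch.)
function isSectionVisible(html: string, id: string): boolean {
  const idx = html.indexOf(`id="${id}"`);
  if (idx === -1) return false;
  const before = html.slice(0, idx);
  const opens = (before.match(/<details\b/g) || []).length;
  const closes = (before.match(/<\/details>/g) || []).length;
  // An unmatched <details> before the id means the section starts collapsed.
  return opens === closes;
}
```

A section that passes a check like this is at least present and expanded in the initial payload, rather than waiting on a tab click or accordion toggle.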
2. Avoid Manipulating Scroll Behavior
Using JavaScript to force scroll positions—such as automatically jumping users to the top of the page—can interfere with how Google processes section links.
This can reduce the chances of “Read more” links appearing.
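A route-change handler that only resets scroll when no fragment is present is one way to stay out of the browser's way. This is a hypothetical sketch; the function name and return convention are ours, not a Google or browser API:

```typescript
// Decide what a hypothetical SPA router should do with scroll on navigation.
// Returning null means "do not touch scroll at all", letting the browser jump
// to the fragment target itself; 0 means "reset to top" for plain navigations.
function scrollTargetFor(url: string): number | null {
  const hash = new URL(url, "https://example.com").hash;
  return hash ? null : 0;
}
```

In a real app the router would call `window.scrollTo(0, target)` only when the result is not null. Blanket `scrollTo(0, 0)` calls on every navigation are exactly the pattern Google warns against here.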
3. Preserve URL Fragments
“Read more” links rely on anchor-based URLs (hash fragments) to direct users to specific sections.
If your site removes or rewrites these fragments—especially in single-page applications—Google may not be able to generate deep links properly.
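Fragments are easy to lose during URL rewrites. The sketch below strips the query string while leaving the hash untouched; the function name and the parameter being stripped are illustrative assumptions, not taken from any particular framework:

```typescript
// Rewrite a URL (here: dropping tracking parameters) without losing the
// fragment. Because url.hash is never touched, "#install" survives the rewrite.
function canonicalize(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.search = ""; // example rewrite: remove the query string
  return url.toString();
}
```

The same principle applies to SPA routers: however a URL is transformed, copy the incoming fragment through to the outgoing URL rather than rebuilding the path from scratch.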
Why This Matters for SEO
Although this is a documentation update rather than a new feature, it has meaningful implications.
“Read more” links can:
Google itself notes that these links are tied to anchor URLs and can even appear in Search Console performance reports.
This means they are not just cosmetic—they are measurable and can influence traffic patterns.
A Subtle but Important Shift
This update reflects a broader change in how Google presents search results.
Instead of just ranking pages, Google is increasingly:

- Surfacing specific sections of pages, not just whole pages
- Linking users directly to the content that answers their query
- Making results more contextual and interactive
This aligns with trends like featured snippets and AI-driven results, where precision and accessibility matter more than ever.
What SEOs Should Do Now
To align with this update, site owners should:

- Keep key content visible on page load instead of hiding it behind tabs or accordions
- Avoid JavaScript that forces or overrides scroll position
- Use stable anchor IDs and preserve URL fragments, especially in single-page applications
These steps not only improve eligibility for “Read more” links, but also enhance overall usability.
Final Thoughts
Google’s new guidance on “Read more” deep links may seem like a small update, but it highlights an important direction in search.
Visibility is no longer just about ranking—it’s about how your content is presented and accessed within the search results themselves.
As Google continues to surface more granular content, the advantage will go to pages that are structured clearly, accessible immediately, and easy for both users and search engines to navigate.
Taken together, this week’s updates reinforce a consistent theme:
Google is raising the bar while reshaping how search works.
With ranking volatility still active and AI continuing to influence how results are generated and consumed, the gap between commodity content and truly valuable content is only getting wider.
The takeaway isn’t just to “wait out” the volatility—it’s to adapt to what Google is clearly prioritizing next.
Because in this version of search, being different isn't optional; it's the requirement.