SEOIntel Weekly News Round-up (Fourth Week of December 2025)

Marie Aquino
December 26, 2025

This week’s roundup arrives in the middle of a noticeably active period in search. Google’s December 2025 Core Update is still rolling out, and early impacts are becoming harder to ignore as volatility intensifies across many sites and verticals.

At the same time, Google continues to push forward on its AI-first search vision—expanding Gemini 3, accelerating AI Mode with Gemini 3 Flash, and offering deeper insight into how search is being redesigned through official podcasts and updates.

Layer in legal action against large-scale scraping and the global expansion of Preferred Sources, and it’s clear this isn’t just a busy week—it’s one that reflects where search is heading next.

December 2025 Core Update: Volatility Intensifies as Early Impacts Emerge

Google’s December 2025 Core Update is still rolling out, and over the past several days, search volatility has noticeably increased across many sectors. While core updates often take weeks to fully settle, SEO tools, industry forums, and social media discussions suggest that the effects of this update are now being more widely felt.

According to tracking data and practitioner reports, ranking fluctuations have become more pronounced since the weekend, aligning with Google’s confirmation that the update remains in progress. As with previous core updates, Google has not provided granular details about specific ranking signals affected, emphasizing that the update involves broad adjustments to its core ranking systems.

What the Volatility Looks Like So Far

Multiple third-party tracking tools are reporting elevated volatility levels, particularly compared to the days immediately following the initial rollout announcement. These spikes suggest that Google may be actively recalibrating rankings in waves, rather than applying all changes at once.

Across SEO forums and social platforms, site owners are reporting a mix of outcomes:

  • Some publishers are seeing sudden visibility drops, especially on content-heavy and informational sites
  • Others report partial recoveries from earlier declines or gains that appeared mid-year
  • A subset of sites appears largely unaffected, reinforcing the uneven nature of core updates

Importantly, no single pattern has emerged that points to a specific tactic or industry being targeted, which is consistent with Google’s framing of core updates as systemic quality improvements.

Common Themes Emerging in Community Discussions

While it is too early to draw firm conclusions, several recurring themes are showing up in early discussions:

  • Content quality and depth: Some site owners speculate that thin or overly templated content may be more vulnerable during this update.
  • Authority and trust signals: Fluctuations appear more pronounced on sites where expertise, sourcing, or brand recognition may be weaker.
  • Volatility without penalties: Many SEOs note that affected pages were not deindexed or penalized, but simply repositioned—suggesting re-ranking rather than punitive action.

These observations align with Google’s long-standing guidance that core updates are about reassessing overall content quality relative to other pages on the web, not issuing manual actions.

How This Update Fits Into Google’s Broader Strategy

This heightened volatility also comes shortly after Google clarified that it rolls out smaller, unannounced core updates throughout the year. That context helps explain why some sites may have already experienced gradual changes earlier in 2025, with the December update amplifying or adjusting those shifts.

Rather than viewing this update in isolation, it’s increasingly clear that Google’s ranking systems are evolving continuously, with major core updates acting as larger recalibration points rather than standalone events.

What Site Owners Should Do Right Now

Given that the rollout is ongoing, Google continues to recommend not making reactive changes based on short-term ranking movements. Instead, site owners should:

  • Monitor performance trends over multiple days or weeks
  • Review affected pages for clarity, usefulness, and originality
  • Compare content against competitors that gained visibility
  • Avoid overcorrecting until the update has fully completed

As history has shown, rankings can continue to shift throughout the rollout and even after Google confirms completion.
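
For those who prefer to quantify that monitoring rather than eyeball charts, below is a minimal sketch of a before/after comparison using a Search Console performance export. The file name, column names, and rollout start date are placeholders for illustration, not part of Google’s guidance.

```python
# Minimal sketch: compare average daily clicks and positions before vs.
# during the core update rollout, using a Search Console performance export.
# The file name, column names, and rollout date below are placeholders --
# adjust them to match your own export and the announced rollout start.
import pandas as pd

ROLLOUT_START = pd.Timestamp("2025-12-10")  # placeholder rollout start date

df = pd.read_csv("search_console_performance.csv", parse_dates=["date"])

before = df[df["date"] < ROLLOUT_START]
during = df[df["date"] >= ROLLOUT_START]

summary = pd.DataFrame({
    "avg_daily_clicks": [before["clicks"].mean(), during["clicks"].mean()],
    "avg_position": [before["position"].mean(), during["position"].mean()],
}, index=["before_rollout", "during_rollout"])

print(summary)
print("Click change: {:+.1%}".format(
    during["clicks"].mean() / before["clicks"].mean() - 1))
```

Comparing averaged windows, rather than reacting to single-day swings, keeps the focus on the trend Google recommends watching.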

The December 2025 Core Update is shaping up to be one of the more noticeable search recalibrations in recent months, with volatility increasing as the rollout progresses. While early impacts are becoming clearer, the full picture won’t emerge until Google completes the update and rankings stabilize. For now, the safest approach remains patience, careful observation, and a continued focus on creating high-quality, trustworthy content that serves real user needs.


How Google Is Using Gemini 3 to Redesign Search

In a recent episode from the Google Search Central ecosystem, Google leaders shared an in-depth look at how Gemini 3 is reshaping the way Search works—both behind the scenes and in the user interface. The discussion, featuring Robby Stein, explores how search is moving beyond static blue links and summaries toward a more dynamic, AI-driven experience.

Rather than positioning AI as a separate layer on top of search, Google explains that Gemini 3 is becoming deeply embedded into how results are generated, reasoned through, and presented. This includes not only AI Overviews, but also AI Mode and new forms of interactive, generative interfaces inside Search.

Search Is Becoming a Generative Interface, Not Just a Results Page

One of the central themes of the podcast is the concept of Generative UI. Traditionally, Google Search relied on predefined templates—lists, cards, and panels that were populated with data. With Gemini 3, the model can now dynamically decide how information should be displayed based on the query itself.

For example, instead of returning a block of text, Search may generate:

  • Interactive charts or graphs
  • Step-by-step visual explanations
  • Simulations or diagrams for complex topics

This allows Search to better handle multi-layered questions, technical concepts, and educational queries that are difficult to answer clearly with text alone.

Gemini 3 Is Trained to “Think Like a Designer”

Google also discussed how Gemini 3 is trained using design systems and layout rules so that AI-generated results remain consistent, readable, and trustworthy. Rather than allowing the model to create arbitrary layouts, Google provides constraints—visual patterns, spacing rules, and usability standards—to ensure AI-generated pages still feel like Google Search.

This approach reflects a broader shift: AI models are no longer just responsible for generating answers, but also for assembling experiences that users can understand and interact with.

Speed, Scale, and Model Specialization Matter

The podcast highlights how different versions of Gemini work together:

  • Faster models handle real-time search needs where latency is critical
  • More advanced models power complex reasoning, data visualization, and simulations

This specialization allows Google to introduce richer AI features without sacrificing performance—a key requirement for Search, where speed remains essential.

AI Mode and AI Overviews Are Converging

Another major takeaway is that Google is experimenting with blending AI Mode into AI Overviews, rather than treating them as separate experiences. Instead of asking users to choose when to enter an AI interface, Google aims to make transitions more natural—surfacing deeper AI assistance only when the query calls for it.

This signals a future where conversational AI, traditional rankings, and interactive exploration coexist within a single search flow.

Key Takeaways From the Podcast

  • Google Search is evolving into a dynamic, AI-generated interface, not just a ranking engine
  • Gemini 3 enables Search to generate custom layouts, visuals, and simulations in real time
  • Design systems and visual constraints are critical to maintaining trust in AI results
  • AI Mode and AI Overviews are likely to merge into a more seamless experience
  • Search is being optimized for complex, multi-step, and exploratory queries, not just keyword lookups

Why This Matters

The podcast makes it clear that Google is no longer thinking about search purely in terms of links and rankings. Instead, Search is becoming a knowledge interface powered by reasoning models, where how information is structured and delivered is just as important as what information appears.

For site owners, publishers, and SEOs, this reinforces the importance of clarity, authority, and structured information—content that AI systems can understand, trust, and integrate into these emerging generative experiences.
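
One practical way to make content easier for machines to parse is structured data. The snippet below is a minimal, hypothetical sketch that generates schema.org Article markup as JSON-LD; the field values are placeholders, and structured data is a long-standing general practice rather than something the discussion names as a requirement for these generative experiences.

```python
# Minimal sketch: generate schema.org Article markup as JSON-LD.
# The field values are placeholders; adapt them to the actual page.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline that matches the on-page title",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2025-12-26",
    "description": "A concise, accurate summary of what the page covers.",
}

# Emit the tag that would be placed in the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```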

Watch the podcast here –


SEO & SEO for AI: Key Takeaways From Google’s Search Off the Record Podcast

In the first part of a new Search Off the Record discussion, John Mueller and Danny Sullivan address one of the biggest questions facing site owners today: whether SEO needs to fundamentally change in response to AI-driven search formats such as AI Overviews, AI Mode, and conversational search.

Rather than introducing new tactics or frameworks, the conversation repeatedly returns to a central message—the core principles of SEO remain the same, even as search interfaces evolve. The episode aims to reduce confusion caused by the growing number of acronyms (AEO, GEO, AIO, and others) and refocus attention on what Google says consistently matters: creating content that genuinely satisfies human users.

SEO Is Still SEO — AI Is a Format, Not a Replacement

One of the podcast’s strongest points is the clarification that AI-focused optimizations are not a separate discipline from SEO. Danny Sullivan explains that AI-driven experiences are best understood as new formats within search, not as a new search engine requiring a different strategy.

Just as local SEO or video SEO involves understanding specific formats without abandoning core principles, AI results operate within the same overarching goal: helping people find useful, relevant information. Attempting to optimize exclusively for AI systems, rather than users, risks drifting away from that goal—especially as AI systems continue to improve and change.

Human-Centered Content Remains the “North Star”

Both speakers emphasize that Google’s ranking systems—AI-driven or otherwise—are designed to reward content that people find genuinely helpful. Content written primarily to satisfy algorithms, language models, or speculative ranking factors is less likely to succeed over time.

The podcast reinforces Google’s position that if content is already written for people, it is naturally better positioned to appear across traditional search results, AI Overviews, and future search experiences. Conversely, chasing short-term tactics aimed at specific systems often leads to fragile results that don’t hold up as ranking systems evolve.

Originality and Authenticity Matter More Than Commodity Content

A recurring theme is the growing divide between commodity content and original, experience-based content. The speakers note that factual, easily sourced information—such as simple answers to common questions—is increasingly handled directly by search systems and AI summaries.

What remains valuable are perspectives that AI cannot easily replicate:

  • Firsthand experience
  • Unique analysis or opinion
  • Authentic storytelling
  • Original research or demonstrations

This applies across formats. Google encourages creators to think beyond text alone and consider images, video, and audio where they add value—not as an AI tactic, but as a way to better serve users.

Multimodal Search Is About Usefulness, Not Buzzwords

While the term “multimodal” is used reluctantly in the discussion, the concept is clear: people now search using combinations of text, images, video, and voice. Google’s systems are increasingly capable of understanding and connecting these inputs to helpful outputs.

For creators, this does not mean manufacturing content for every possible format. Instead, it means recognizing opportunities where visual or experiential content genuinely improves understanding—such as demonstrations, walkthroughs, or real-world examples.

Measuring Success Beyond Clicks

The podcast also touches on how success should be evaluated in a changing search environment. As AI reduces unnecessary back-and-forth searching, users may arrive at websites with clearer intent and higher engagement.

Rather than focusing solely on raw traffic or keyword-level clicks, site owners are encouraged to:

  • Define what a meaningful conversion looks like for their site
  • Track engagement quality, not just volume
  • Measure outcomes aligned with business or audience goals

Google notes that users arriving from newer AI-driven experiences often appear more engaged, suggesting that fewer but better-qualified visits may become more common.
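
As a rough illustration of tracking engagement quality rather than raw volume, the sketch below summarizes engagement and conversion rates per traffic source from a generic analytics export. The column names and the notion of an “engaged session” are assumptions for illustration; the podcast does not prescribe specific metrics.

```python
# Minimal sketch: summarize engagement quality per traffic source from a
# generic analytics export. The column names ("source", "sessions",
# "engaged_sessions", "conversions") are placeholders for illustration.
import pandas as pd

df = pd.read_csv("analytics_export.csv")

summary = (
    df.groupby("source")
      .agg(sessions=("sessions", "sum"),
           engaged_sessions=("engaged_sessions", "sum"),
           conversions=("conversions", "sum"))
)
summary["engagement_rate"] = summary["engaged_sessions"] / summary["sessions"]
summary["conversion_rate"] = summary["conversions"] / summary["sessions"]

# Sorting by conversion rate rather than raw sessions surfaces the
# "fewer but better-qualified" visits described above.
print(summary.sort_values("conversion_rate", ascending=False))
```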

Key Takeaways

  • SEO fundamentals have not changed because of AI; AI introduces new formats, not new rules
  • Optimizing for users remains the most reliable long-term strategy
  • Original, authentic content is more resilient than commodity information
  • Multimodal content should be used where it adds real value
  • Success should be measured by engagement and outcomes, not just clicks

This episode serves as a grounding reminder amid rapid changes in search presentation. While AI continues to reshape how people interact with information, Google’s message is consistent: creators who focus on clarity, originality, and genuine usefulness are best positioned to succeed—regardless of how search results are displayed in the future.

Watch the podcast below:


Google Sues SerpApi Over Search Scraping — What It Means for Data, AI & SEO

Google has filed a lawsuit against SerpApi, a Texas-based search results scraping company, accusing it of illegally harvesting and reselling copyrighted content from Google Search at scale. The federal complaint, filed on December 19, 2025, in the U.S. District Court for the Northern District of California, alleges SerpApi used hundreds of millions of fake search requests and technological evasion techniques to access and redistribute data it did not have permission to use.

According to Google, SerpApi’s bots bypassed its security protections — including a system called SearchGuard — by disguising request attributes, rotating identities, and blasting search endpoints with massive automated queries. Google claims these actions undermine the choices of website owners and content rightsholders, and violate federal law — including the Digital Millennium Copyright Act (DMCA).

What Google Is Alleging

In its complaint, Google says SerpApi:

  • Circumvented security measures designed to prevent unauthorized extraction of search results and embedded content.
  • Bypassed SearchGuard protections, which require requests to meet certain criteria before delivering copyrighted material like Knowledge Panel images and real-time product data.
  • Used deceptive tactics — including fake browsers and bot networks — to mask scraping activity as human traffic.
  • Harvested and resold copyrighted content from search results, including licensed materials, without authorization or compensation to rights holders.

Google is seeking both monetary damages and a court order to stop SerpApi’s alleged circumvention activities and destroy related technology.

Why This Case Matters

1. Copyright and Access Controls

The suit relies in part on DMCA anti-circumvention provisions, which prohibit bypassing technological protections “controlling access” to copyrighted works. Google’s lawyers argue SearchGuard is a qualifying access control that SerpApi deliberately defeated.

2. Implications for SEO Tools & APIs

Many SEO tools and competitive intelligence platforms rely on SERP data — sometimes obtained via third-party APIs. If Google prevails, automated access to search results could become much more restricted, forcing tool developers to find alternative methods or license data directly.

3. Data Ownership & AI Training Debate

This lawsuit adds to broader industry tensions over data ownership, web scraping, and training datasets for AI systems. Previously, Reddit sued SerpApi and others over similar data extraction methods tied to AI model development, highlighting the legal and ethical debate around web data access.

4. Competitive & Ethical Dimensions

SerpApi has defended its practices, saying it provides information that is publicly visible in a browser and will “vigorously defend itself” in court. It also frames the lawsuit as potentially anti-competitive, given how AI and search tools rely on broad web data access.

What This Means for Businesses & SEOs

  • Third-party SERP access could change: Tools that rely on scraping search results may need to adapt or secure licensed access.
  • Data usage policies matter: Respecting robots.txt and API terms is increasingly important as legal scrutiny tightens (a minimal robots.txt check is sketched after this list).
  • AI development impacts: The case highlights the intertwining of search engines, data rights, and AI model training — a space where legal risks and innovation often collide.
  • Watch for ripple effects: If Google wins broad injunctive relief, it could affect how many SEO and AI tools function in the future.
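
On the robots.txt point above, Python’s standard library already includes a parser that lets an automated client check whether it is allowed to fetch a URL before requesting it. The sketch below uses a placeholder site and user-agent string, and it only covers robots.txt, not API terms of service or copyright.

```python
# Minimal sketch: check robots.txt before fetching a URL.
# The site, path, and user-agent string are placeholders; honoring
# robots.txt is one piece of responsible data access, not legal advice.
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleResearchBot/1.0"  # hypothetical crawler name
TARGET_URL = "https://www.example.com/some-page"

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # download and parse the site's robots.txt

if parser.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt allows fetching", TARGET_URL)
else:
    print("robots.txt disallows fetching", TARGET_URL, "- skipping")
```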

Google’s lawsuit against SerpApi is more than a legal battle — it’s part of a wider clash over how web data can be accessed, used, and monetized in the age of AI and automated analytics. As courts parse complex issues around copyright, circumvention, and competition, the outcome could reshape not only scraping practices but also how AI developers and SEO platforms source and rely on search data.


Google Rolls Out Gemini 3 Flash to Supercharge AI Mode in Search

Google has begun a global rollout of Gemini 3 Flash, a new, speed-optimized version of its flagship Gemini 3 AI model family, tailored specifically for use in AI Mode across Google Search and other AI interfaces. This update positions Gemini 3 Flash as the default AI engine for everyday search queries that require fast, nuanced responses, helping users get smarter answers with lower latency and broader availability.

Gemini 3 Flash is part of the broader Gemini 3 model lineup introduced by Google, alongside the Pro and Deep Think variants, and is designed to blend high-quality reasoning with real-world speed, making complex question handling feel as fast as conventional search results.

What Is Gemini 3 Flash and How It Works

Built for Speed and Public Search Use

Gemini 3 Flash brings the reasoning capability of the advanced Gemini 3 architecture into a model optimized for speed and efficiency, rather than pushing every complex task to the highest-tier models. Google says this enables AI Mode to generate thoughtful, context-aware answers far more quickly than earlier baseline models.

By making Gemini 3 Flash the default in Google Search’s AI Mode, users worldwide now experience faster responses — especially for queries involving deeper reasoning or comparisons — while still being able to switch to Gemini 3 Pro for highly complex outputs like custom visuals or advanced analytical results.

Broader Integration Across Google Products

In addition to Search, Gemini 3 Flash is becoming the standard AI model in the Gemini app itself, replacing older variants such as Gemini 2.5 Flash, and delivering faster performance for conversational queries and everyday AI tasks.

This expansion is part of Google’s push to make its AI technology more responsive, widely accessible, and cost-efficient for both casual users and developers working with AI-powered systems.

Why This Matters

Faster and Smarter Answers

Users will notice that AI Mode’s output, from research-level questions to multi-part comparisons, loads more quickly and handles context more effectively, even when queries involve nuanced reasoning. This helps shrink the perceptual gap between traditional search speed and generative AI responsiveness.

Balancing Speed and Capability

While Gemini 3 Pro remains available for heavy-duty use cases like code generation, math reasoning, or interactive visual apps, Gemini 3 Flash serves as a practical baseline model that delivers high-quality results without the heavier compute overhead. This makes AI Mode more scalable and responsive at global scale.

Impacts for Developers and SEO

For developers and AI integrators, Gemini 3 Flash’s rollout signals a new baseline for deploying production-ready AI interfaces that are both capable and efficient. For SEO professionals, the change underscores that AI-generated search experiences are becoming faster and more complex-aware, which may influence how content is selected and presented in AI-driven responses.
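
For developers experimenting with that speed-versus-capability tradeoff in their own applications, the usual pattern is to route routine prompts to a Flash-class model and escalate to a Pro-class model only when the task demands it. The sketch below uses Google’s google-genai Python SDK with placeholder model identifiers; exact model names and availability depend on what the API exposes when you build.

```python
# Minimal sketch: route simple prompts to a fast "Flash"-class model and
# escalate complex ones to a "Pro"-class model. Uses the google-genai SDK;
# the model identifiers below are placeholders -- check the current model
# list, since exact names and availability change over time.
from google import genai

client = genai.Client()  # reads the API key from the environment

FAST_MODEL = "gemini-flash-latest"  # placeholder fast/default model id
DEEP_MODEL = "gemini-pro-latest"    # placeholder higher-capability model id

def answer(prompt: str, complex_task: bool = False) -> str:
    """Pick a model based on how demanding the task is, then generate."""
    model = DEEP_MODEL if complex_task else FAST_MODEL
    response = client.models.generate_content(model=model, contents=prompt)
    return response.text

print(answer("Summarize this week's search news in two sentences."))
```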

The launch of Gemini 3 Flash marks a significant inflection point in Google’s AI strategy: prioritizing both speed and intelligence at scale. By standardizing a powerful, resource-efficient model as the default for AI Mode in Search and the Gemini app, Google is pushing toward a future where generative AI feels as immediate and responsive as traditional web search, but with much richer reasoning under the hood. As AI continues to shape how search evolves, Gemini 3 Flash lays the groundwork for broader adoption of real-time AI assistance across everyday queries and complex workflows alike.


Google Expands Preferred Sources Globally — Users Can Now Customize Top Stories

Google is rolling out its Preferred Sources feature more broadly, allowing searchers around the world to choose the news outlets and websites they trust most and see more of their content in the Top Stories section of Search results. The global expansion builds on earlier tests in the United States and India and is now becoming available to English-language users worldwide, with additional languages following in early 2026. This change gives users more control over their news experience by letting them prioritize familiar or trusted sources in search results, from local blogs to global publications.

What Preferred Sources Is and How It Works

Preferred Sources lets users manually curate the publishers they want to see more often when searching for newsworthy topics. When a user selects one or more preferred sources, Google will show articles from those outlets more frequently in the Top Stories carousel — while still including other relevant results — whenever those sources have fresh and relevant content to display. Nearly 90,000 unique sources have already been selected by users during initial testing, spanning large news organizations as well as niche blogs and local publishers.

To activate Preferred Sources, users can search for a current topic that triggers a Top Stories section and click the star or settings icon next to the heading. From there, they can search for and select the outlets they want to prioritize, and refresh results to see content from those sources more prominently.

Why This Matters

The global rollout of Preferred Sources marks a shift in how Google personalizes news discovery by incorporating explicit user preference signals into search results. Rather than relying solely on algorithmic relevance and freshness, Google now lets individuals influence which publishers they see first for news topics — a first in terms of direct, user-controlled customization for Top Stories. Users who add preferred sources tend to click through to those sites about twice as often on average, highlighting how strong audience choice can impact engagement patterns.

This feature also reflects Google’s effort to balance personalized experiences with broad access to diverse perspectives, ensuring results still show other outlets while giving selected sources a visibility boost.

What’s Next

The rollout begins with English-language users worldwide, with support for additional languages coming soon. Google is also working on related features — such as highlighting links from users’ paid subscriptions more clearly in AI Mode and other surfaces — further connecting users with content they value.

For publishers and SEO professionals, Preferred Sources introduces a new dimension of audience loyalty and customization into the search ecosystem, potentially influencing who gets seen first during news searches when users explicitly choose their favorite outlets.

Takeaways

  • Google’s Preferred Sources feature is now rolling out globally for English-language users, letting people prioritize news outlets in Search results.
  • Users click through to their preferred sources roughly twice as often once selected, indicating strong engagement impact.
  • Selection happens via a star/settings icon in the Top Stories carousel, where users pick the publishers they trust.
  • Related features, like highlighting links from paid subscriptions and increased inline links in AI Mode, are part of Google’s broader push to connect users with trusted content.

As Google continues to give users more control over how they discover news, Preferred Sources highlights a broader shift toward personalization driven by explicit user choice rather than purely algorithmic signals. While the feature doesn’t replace traditional ranking factors, it introduces a new layer of visibility shaped by trust and familiarity. For publishers and site owners, this reinforces the long-term value of building recognizable brands and loyal audiences—especially as search results evolve to balance relevance, authority, and user preference.


What stands out this week is how interconnected these updates are: a core update reshaping rankings in real time, AI models redefining how results are generated and presented, and trust, data access, and source selection becoming more prominent themes.

As the December core update continues to roll out and its effects evolve, the broader takeaway remains consistent—search is moving faster, becoming more dynamic, and placing greater emphasis on quality, originality, and credibility.

For site owners and SEOs, this is a moment to observe patterns carefully, resist overreacting, and stay focused on building durable value as search continues to recalibrate.