SEO Lightning Round 2 is a 2-minute, 11-second video that Google recently published, in which about four questions are asked and answered.
At roughly the 1:22 mark, host John Mueller (Google Search Advocate) read and addressed a question:
“Jacco asks if having RSS feeds is problematic for crawling, since Googlebot fetches them so often.”
The full wording of Jacco’s question was:
“Looking at some log files, I see that around 25% of Google’s crawl budget goes to the RSS feed URLs that are in the head of every page. Is it helpful to delete the meta-rules from the head section?”
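For context, here is a sketch of the kind of head markup Jacco is describing: RSS auto-discovery links, which many CMSs (WordPress, for example) output on every page. The URLs and titles below are placeholders, not taken from Jacco’s site:

```html
<head>
  <!-- RSS auto-discovery links, emitted on every page by many CMSs -->
  <!-- (illustrative placeholders; Jacco's actual feed URLs aren't known) -->
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog » Feed"
        href="https://example.com/feed/" />
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog » Comments Feed"
        href="https://example.com/comments/feed/" />
</head>
```

Because Googlebot encounters those href URLs on every page it crawls, feed fetches can show up prominently in log files, which is presumably what Jacco observed.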
The short answer: no, these are not problematic. Mueller explained that Google’s systems balance crawling across a website automatically. Sometimes that means crawling some pages more often, but only after the important ones have already been crawled. So RSS feed URLs don’t usually eat up much of your crawl budget.
And that makes sense: RSS feeds have been a built-in component of websites for years, and by now Google has presumably figured out how to allocate crawl budget to them. Plus, an RSS feed is, in my opinion, essentially a list of a site’s latest blog posts, so it can serve as a sort of sitemap for search engines.
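To make that sitemap comparison concrete, here is a minimal sketch of an RSS 2.0 feed document (all titles, URLs, and dates are made up). It is essentially a machine-readable list of recent posts and their links, which a search engine can use to discover new content:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>Latest posts from Example Blog</description>
    <!-- Each <item> points to a recent post, much like a sitemap entry -->
    <item>
      <title>Post One</title>
      <link>https://example.com/post-one/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Post Two</title>
      <link>https://example.com/post-two/</link>
      <pubDate>Tue, 02 Jan 2024 00:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
```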