Google has announced a new crawler, GoogleOther, added to its list of crawlers. The reason? To take some strain off of Googlebot. Googlebot will now focus on internal crawl jobs for building the index used by Search, while GoogleOther takes over Googlebot's other jobs, like R&D crawls.
What is a crawler?
"Crawler" (sometimes also called a "robot" or "spider") is a generic term for any program that is used to automatically discover and scan websites by following links from one web page to another. According to the documentation on Google crawlers, GoogleOther is a generic crawler that may be used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development. The GoogleOther crawler always obeys robots.txt rules for its user agent token and the global user agent (*), and uses the same IP ranges as Googlebot.
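Because GoogleOther obeys robots.txt rules for both its own user agent token and the global (*) group, a site owner can target it separately from other bots. The sketch below uses hypothetical paths and Python's standard `urllib.robotparser` to show how a GoogleOther-specific group takes precedence over the global group for that agent:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block GoogleOther from /research/,
# while the global (*) group blocks everyone else from /private/.
rules = """\
User-agent: GoogleOther
Disallow: /research/

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# GoogleOther matches its own group, so only /research/ is off-limits to it.
print(parser.can_fetch("GoogleOther", "https://example.com/research/page"))  # False
print(parser.can_fetch("GoogleOther", "https://example.com/private/page"))   # True

# Other bots fall back to the global (*) group.
print(parser.can_fetch("SomeBot", "https://example.com/private/page"))       # False
```

Note the second check: once a crawler matches its own named group, the global (*) rules no longer apply to it, which is why GoogleOther may still fetch `/private/` in this example.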
The new crawler uses the same infrastructure as Googlebot, so it shares Googlebot's limitations and features: hostload limitations, robots.txt handling (though with a different user agent token), HTTP protocol version, fetch size, and so on. It's basically Googlebot under a different name.
Some have wondered whether the new crawler will be used for Bard; beyond the fact that it will handle functions other than building the index for Search, that remains unclear. Gary Illyes commented: "Please don't overthink it, it's really that boring."
Check out Gary Illyes’ post about the new crawler here.