Google Gives Us a Peek at How Google Maps Reviews Work

Do you wonder how Google Maps and its reviews system work? How millions of reviews are sorted and published? Google gives us a peek at how it all works.
Marie Aquino
February 4, 2022

Google has published a blog post that explains how Google Maps reviews work and how it handles and approves user-generated reviews.

A lot of people rely on reviews before deciding whether a place is worth visiting or a restaurant is worth trying.

Many of them use Google Maps to check out reviews of a place and find more details about it. The reviews come from users with first-hand experience of the place: what their visit was like, what they liked and didn't, and other details that can be very helpful to first-time visitors.

According to Google, millions of reviews are posted every day from people around the world, and they have a dedicated team that provides round-the-clock support to keep the information relevant and accurate and to also prevent inappropriate content from being published.

Google has created strict content policies to make sure that the reviews are based on real-world experiences and to keep irrelevant and offensive comments off of Google Business Profiles.

Google also ensures that these policies evolve as the world does. As an example, it added extra protections to remove reviews that criticize a business for health and safety policies enforced due to COVID-19 restrictions.

As for moderation: as soon as someone posts a review, it is sent to a moderation system that checks whether the review violates any of Google's review policies. Google compares the system to a security guard: instead of stopping unauthorized people from entering a building, it stops bad content from being published.

Due to the huge volume of reviews being added, the system makes use of both machines and humans to moderate the content. Both have different strengths when it comes to the moderation of content – machines enable moderation at scale, while humans provide the understanding that machines are unable to.

Machines are the first part of content moderation as they are good at identifying patterns and can work at scale. They are able to recognize patterns that determine if content is legitimate or not, and the majority of fake and fraudulent content is removed before anyone sees it.

With regards to how the machines scan through the reviews, they look at multiple angles such as:

  • The content of the review: Does it contain offensive or off-topic content?
  • The account that left the review: Does the Google account have any history of suspicious behavior?
  • The place itself: Has there been uncharacteristic activity — such as an abundance of reviews over a short period of time? Has it recently gotten attention in the news or on social media that would motivate people to leave fraudulent reviews?
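As a rough illustration of the multi-signal check described above, the logic might look something like the sketch below. All names, signals, and thresholds here are hypothetical; Google's actual system is far more sophisticated and uses machine-learned classifiers rather than keyword lists.

```python
# Hypothetical sketch of a multi-signal review check.
# Signals, names, and thresholds are illustrative only, not Google's system.

from dataclasses import dataclass

@dataclass
class Review:
    text: str
    account_flagged: bool      # account has a history of suspicious behavior
    place_review_spike: bool   # the place saw an unusual burst of reviews

# Stand-in for a real offensive/off-topic content classifier.
OFFENSIVE_TERMS = {"scam", "fraudster"}

def violates_policy(review: Review) -> bool:
    """Return True if any signal suggests the review should be held back."""
    content_bad = any(term in review.text.lower() for term in OFFENSIVE_TERMS)
    return content_bad or review.account_flagged or review.place_review_spike

ok = Review("Great coffee and friendly staff.", False, False)
bad = Review("Total scam, avoid!", False, False)
print(violates_policy(ok), violates_policy(bad))  # False True
```

The key idea matches the article: no single signal decides alone; the content, the account, and the place are each consulted before a review is published.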

On the other hand, human operators regularly run quality tests and complete additional training to remove bias from the machine learning models. By training the models on the different ways words and phrases are used, the models get better at catching policy-violating content and are less likely to unintentionally block legitimate reviews.

If the system does not detect any policy violations, then the review can be posted in just a matter of seconds.

However, moderation does not stop once a review goes live. The system continues to analyze contributed content and watch for questionable patterns, such as a group of people leaving reviews on the same cluster of Business Profiles, or a place receiving an unusually high number of 1- or 5-star reviews over a short period of time.
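One of the patterns mentioned, an unusually high share of extreme ratings in a short window, could be detected with a simple heuristic like the sketch below. The function name and thresholds are invented for illustration; a production system would use statistical baselines per place rather than fixed cutoffs.

```python
# Illustrative (not Google's) check for a burst of extreme ratings.
from collections import Counter

def suspicious_burst(ratings_last_24h, min_volume=20, extreme_share=0.9):
    """Flag a place if recent volume is high and most ratings are 1s or 5s."""
    if len(ratings_last_24h) < min_volume:
        return False  # too few ratings to call it a burst
    counts = Counter(ratings_last_24h)
    extreme = counts[1] + counts[5]
    return extreme / len(ratings_last_24h) >= extreme_share

print(suspicious_burst([5] * 25))        # True: 25 five-star ratings in a day
print(suspicious_burst([3, 4, 5] * 10))  # False: a normal mix of ratings
```

Flagging here would not mean automatic removal; as the article notes, questionable patterns trigger further review rather than immediate action.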

Aside from that, people and businesses can also flag policy-violating reviews, and Google's team of human operators then reviews the flagged content. Reviews found to violate policies are removed, and in some cases the user account is suspended or litigation is even pursued.

Aside from reviewing flagged content, their team also proactively identifies potential abuse risks, in order to reduce the likelihood of successful abuse attacks.

They constantly improve their system in order to protect local businesses from fraud and abuse and to keep the information helpful for users.

For more details on the Google Maps Review System, check out the blog post here and their video explainer:

Do you have a Google Business Profile? What do you think of the reviews system? Has it been effective, or is there still room for improvement?