
CUriosity: Should you trust 5-star reviews?

In CUriosity, experts across the CU Boulder campus answer pressing questions about humans, our planet and the universe beyond.

As the holiday shopping season ramps up, Nicholas Reinholtz, assistant professor of marketing in the Leeds School of Business, delves into the question: "Should you trust 5-star reviews?"

Photo of a hand swiping on a smartphone, with bubbles showing 5.0-, 4.9- and 4.6-star reviews

Credit: Adobe Stock

There's no overstating the influence of online reviews on consumers' purchase decisions. Nine out of 10 consumers say they consider reviews before making a purchase, and 45% simply won't purchase a product if it has no reviews available, according to consumer research firm PowerReviews.

But there are limitations to relying solely on user ratings and online reviews to evaluate product quality, according to Nicholas Reinholtz, whose research examines consumer decision-making, including product and price search.

Reinholtz sat down with CU Boulder Today to discuss how consumer expectations and other factors affect ratings, and how biases in rating systems can lead to inaccurate assessments and potentially bad purchases.

Should consumers be cautious and avoid getting too swept up in reviews?

My co-author Matt Meister, a former Leeds PhD student and current assistant professor of marketing at the University of San Francisco, and I have looked at Airbnb ratings, and we have a second paper that looks at ratings from REI. One thing both papers have in common is this idea that expectations can influence ratings.

So if you go on to Amazon and buy a $500 pair of headphones, and if there are any problems at all, you give it one star. You say, 'I can't believe I paid $500 for a pair of headphones and there is a crackle.' Unacceptable, right? Whereas if you paid $5 for the headphones, you give it five stars because they work.

That expectations should influence ratings does make sense. You can't have a five-point scale that encompasses the entire spectrum of human experience, right? Ratings are relative to the expectations you have going into the product purchase. There are multiple issues with that, and one of them is that when people are looking at products, they don't account for the fact that ratings reflect expectations.

With our research on Airbnb ratings, the point that we're trying to make is that it's totally fine and reasonable that people would give ratings that reflect their expectations. But it's problematic if future consumers don't recognize the role of those expectations and adjust for them accordingly.

Airbnb has this status symbol where they label certain hosts 'superhosts.' We look at Airbnbs that are superhosts in some time periods and not superhosts in other time periods, and we find that they get better ratings during the periods when they're not labeled a superhost. So presumably people are going into the experience saying, 'Oh, I'm staying at a superhost,' and the same experience is rated slightly worse against those expectations.

Are star ratings meaningless?

We should have a mantra: When you're on Amazon, more stars doesn't mean better. I don't think star ratings are useless, because they can, particularly when coupled with text reviews, identify truly problematic things, like when something gets a terrible rating.

Think about using ratings to compare, say, a product that looks better but has only 4.7 stars with another, similar product that maybe looks a little worse but has 4.9 stars. Those are the types of situations where I think we really need to exert caution for a variety of reasons instead of just blindly following the ratings.

Nicholas Reinholtz

If we rate experiences, it's really hard to disentangle contextual influences from intrinsic ones.

For example, we looked at ratings for winter jackets on REI and merged those ratings with weather data. It turns out that people rate winter jackets better on warmer days and worse on colder days. The reason we think that happens is that you go outside on a super cold day and you're cold, and when you rate the jacket you're wearing, you're like, 'Well, I'm cold, so this jacket must not be that great.' Whereas you go out on a warm day, and the jacket feels great, right? It's perfect. You're totally warm.

What's something surprising you've found in your consumer ratings research?

The thing that surprised me the most is how uncritically people accept reviews as a measure of quality. We had a thought experiment related to headphones. We asked study participants to imagine they are looking at two pairs of headphones online. One is a $500 pair of headphones that has a 4.6 rating. The other is a $5 pair of headphones with a 4.8 rating. We asked: Which of these two pairs of headphones do you think are higher quality? We were convinced that everyone would point to the $500 pair. It turns out only about 50% of people did. The other half endorsed the idea that the $5 headphones were higher quality.

As a researcher and expert on the topic, how do you personally use reviews?

We always like to think of ourselves as more savvy. There's a powerful draw of reviews, and I still catch myself looking at them and being like, 'I think I'll like it, but, you know, it's a 4.7. Maybe there's something wrong with it.' I was buying carabiners the other day, and I found myself looking for higher-rated carabiners, 4.7 versus 4.9. And then I had to be like, 'Come on, don't do this.'

It's a tough world out there for a consumer. And you don't have many people whose incentives are aligned with yours. These days I find myself gravitating more and more to brands, which is something I didn't do as a younger person, because I feel like you can build trust in brand quality, unlike picking a product on Amazon whose name you've never heard of and that sounds like alphabet soup.