Early last week we introduced a new system that rates businesses based on the products they create. One of the topics we touched on in that post was the weighting of reviews and how they factor into the overall rating.
Today we want to dive a bit deeper into the weighting system, explain how it works, and show you a preview of what it’s about to do to the juice ratings site-wide. If you’re interested, we’d appreciate your feedback or any thoughts you might have on the topic! Feel free to check out our preview of the test juice ratings and submit feedback on our thread on /r/ecr.
Over the last two years, we’ve talked a lot internally about how to use JuiceDB’s rating system in the most effective manner possible. It’s hard to pull real, useful information out of a crowd-sourced system based purely on opinions. It’s one of the most important things we can do, though, so we need to do it right and in small increments.
One of the things we found was that we sometimes had strange, even disingenuous, user behavior (surprise!). This kind of activity led to skewed ratings, drama, and disinformation. A few of the more “difficult” behaviors we identified were:
That’s not to say everyone who falls into these categories is nefarious or did anything wrong. We just can’t trust them as much as the person who has 100 well-written reviews across 25 different brands. Instead of trying to punish the strange behavior, we’re propping up the users we can truly identify as real.
The best solution we have heard so far is weighting user reviews based on user activity. The more positive and trustworthy behavior a user demonstrates, the more their review should count toward a product’s overall rating.
Our new experimental section on the Juices page is our first attempt at addressing this. We have developed a weighting mechanism for each of the situations shown above:
Currently, the only thing this weight applies to is the product’s overall rating. The more weight a user has behind their account, the more their individual rating affects the overall rating. This all happens in the background during the tally process and doesn’t affect the look and feel of an individual review at all.
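To make the tally process concrete, here is a minimal sketch of how a weighted overall rating could be computed. This is an illustrative assumption, not JuiceDB’s actual implementation; the data shapes, weight values, and function name are all hypothetical.

```python
def weighted_rating(reviews):
    """Compute a product's overall rating as a weighted average.

    Each review is a (score, user_weight) pair. A higher user_weight
    means the reviewer's activity marks them as more trustworthy, so
    their score pulls the overall rating further in their direction.
    """
    total_weight = sum(weight for _, weight in reviews)
    if total_weight == 0:
        return 0.0  # no trusted reviews yet
    return sum(score * weight for score, weight in reviews) / total_weight

# One trusted reviewer (weight 3.0) outweighs two low-weight accounts:
# (5.0*3.0 + 1.0*0.5 + 1.0*0.5) / (3.0 + 0.5 + 0.5) = 16.0 / 4.0
reviews = [(5.0, 3.0), (1.0, 0.5), (1.0, 0.5)]
print(weighted_rating(reviews))  # 4.0
```

With a plain (unweighted) average the same three reviews would score about 2.3, so the weighting is what lets one well-established reviewer counterbalance several untrusted accounts.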
We have some other experimental systems we’re toying with as well right now that would tie into weighting, but we’re not ready to dive into that yet.
Applying all of these new rules and weights to reviews has changed the landscape a lot. The “All Time” popular juices list you’re used to seeing feels a lot different in the new list. From what we’ve seen so far, these new lists are more true to the current trends observed by the community.
We would like to invite you to check out the new experimental ratings and compare them to our current “Most Popular” juices ratings. Our goal here is to listen to anyone with an opinion on this change, and we’re planning to do that via this reddit post on /r/ecr.
This change is already live on the business ratings and has refined those lists significantly. To us (and to many others), applying this change to the juices themselves will have an even larger impact on what we think of as “the best juices on JuiceDB”.
If you stumble across something that doesn’t work or doesn’t look right, we can’t make it better unless you tell us about it! Don’t hesitate to submit bugs and feedback to us through one of the following channels:
It helps when reported issues are as specific as possible, so be sure to include the following information and anything else you think might be relevant: