Hacker News

What I always thought was that there really should be some user-based weighting system. Like, if a user upvotes 90% of the things he sees, his upvotes are probably worth less than upvotes by someone who upvotes only 1% of the posts he sees.
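A rough sketch of that selectivity weighting in Python. The exact curve and the 0.1 floor are arbitrary choices, not anything a site actually uses:

```python
def vote_weight(upvotes_cast: int, posts_seen: int) -> float:
    """Weight a user's upvote by selectivity: the smaller the fraction
    of seen posts they upvote, the more each upvote counts.
    (Hypothetical scheme; the floor of 0.1 is an assumed tuning choice.)"""
    if posts_seen == 0:
        return 1.0  # no history: take the vote at face value
    rate = upvotes_cast / posts_seen
    # Clamp so even indiscriminate voters retain a small weight.
    return max(0.1, 1.0 - rate)

# A user who upvotes 90% of what they see vs. one who upvotes 1%:
print(vote_weight(90, 100))  # indiscriminate voter, weight floored at 0.1
print(vote_weight(1, 100))   # selective voter, near full weight
```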

Same thing applies to things like Yelp reviews. Maybe a user with close to a 5-star lifetime rating average should have his reviews "renormalized" to 3's because his standards are probably just lower than the guy with a 1-star average.
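One way to make that renormalization concrete is to z-score each rating against the reviewer's own history and map it back onto the site-wide scale. This is just an illustration of the idea; the site mean of 3.0 and the clamping are assumptions:

```python
from statistics import mean, stdev

def renormalize(user_ratings, new_rating, site_mean=3.0, site_std=1.0):
    """Map a rating onto the site-wide scale by z-scoring it against the
    user's own rating history (hypothetical illustration of the
    'renormalize 5-star averages toward 3' idea)."""
    if len(user_ratings) < 2:
        return new_rating  # not enough history to estimate a baseline
    mu, sigma = mean(user_ratings), stdev(user_ratings)
    if sigma == 0:
        sigma = 1.0  # a user who always gives the same score carries no signal
    z = (new_rating - mu) / sigma
    # Clamp to the 1-5 star range.
    return min(5.0, max(1.0, site_mean + z * site_std))

# An easy grader (lifetime average near 5) giving yet another 5
# lands close to the site mean of 3:
easy_grader = [5, 5, 4, 5, 5]
print(renormalize(easy_grader, 5))
```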

The problem is that there are so many other factors here (maybe the 5-star average person only visits (? or just reviews) really good places). Maybe the crazy upvoter just spends more time reading each page on Reddit. These are complicating factors that are hard to predict and if the simple case is working, why try? If there were a simple, clearly better way of voting/rating, it would be done.



Yes, Bayesian weighting. I feel that every kind of poll needs Bayesian correction to calculate how much information an opinion really carries.

The problem of people only visiting good places isn't really a problem. If a person only visits good places, he'll be perfectly able to differentiate those good places from each other, and can say one of them is bad. Likewise, somebody who visits both good and bad places has a say on which places are good and which are bad.
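The standard form of that Bayesian correction is a Bayesian average: shrink an item's raw average toward a site-wide prior, so items with few ratings carry little information. The prior mean and weight here are assumed tuning parameters:

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Shrink an item's average toward a site-wide prior; with few
    ratings the estimate stays near the prior, and only accumulated
    evidence moves it. (prior_mean/prior_weight are assumptions.)"""
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# Two 5-star ratings barely move the estimate; two hundred nearly saturate it:
print(bayesian_average([5, 5]))
print(bayesian_average([5] * 200))
```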


A hybrid variant: 1,000 points/month, 100/day (yes, the daily cap is higher than the monthly average). Exceeding either one starts reducing the weight of that user's votes.

The periods should probably be rolling averages and apply to the weights of CURRENT votes. Since early voting activity has undue influence, often in the first few minutes of a post's existence, retroactively deflating month-old ratings doesn't accomplish much.

The idea is to enable reasonable inputs for a time, then start washing them out.
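A minimal sketch of that budget scheme, using the 100/day and 1,000/month figures above. The proportional deflation curve is an assumed choice; any decreasing function of the overage would fit the idea:

```python
def deflated_weight(votes_this_day, votes_this_month,
                    day_budget=100, month_budget=1000):
    """Full weight while a voter stays under both rolling budgets;
    past either budget, weight shrinks in proportion to the overage.
    (Budgets mirror the 1,000/month and 100/day figures above;
    the deflation curve itself is an assumption.)"""
    day_factor = min(1.0, day_budget / max(votes_this_day, 1))
    month_factor = min(1.0, month_budget / max(votes_this_month, 1))
    return day_factor * month_factor

print(deflated_weight(50, 400))   # within both budgets: full weight
print(deflated_weight(200, 400))  # double the daily budget: half weight
```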

The deflation factor might be applied more broadly across other indicators (IP blocks, etc.).

Or ratings factored for conformance with stated site moderation goals. See my longer top-level comment.


> Same thing applies to things like Yelp reviews. Maybe a user with close to a 5-star lifetime rating average should have his reviews "renormalized" to 3's because his standards are probably just lower than the guy with a 1-star average.

Another problem is the perception of star ratings. It seems like 5 (maybe 4 also) is the only "positive" rating for many people. Anything less and it might as well be a 1.


Of course, there are many areas I didn't even mention. Another thing in the same vein that I sometimes think about is: what is the meaning of an upvote? Does it mean "I like this," or can it also mean "I think this submission should be higher"? Maybe I think too much, but I've refrained from upvoting posts I like because I didn't think they should be higher than their current position.


Could the design tell the user what an upvote means ("I like this" vs. "I think this submission should be higher")? The significance of an upvote could be either of the two, depending on the site. Then one could also sort accordingly.


Criticker does this well. But as you say, I try not to waste time watching movies I don't think I will like, so there is a good reason my ratings are biased towards the high end.


But this encourages downvoting, which is something you definitely do not want.


Why do you not want downvoting? Isn't that a desirable part of opinion gathering?



