Jan 03 2007
 

Yesterday, what with all the site issues, I forgot to provide a pointer to the excellent Appelcline & Allen article, Collective Choice: Experimenting with Ratings, over at Life With Alacrity. It’s a really, really good breakdown of how rating systems work and how to address common problems with them (such as the “Lake Wobegon problem,” where “everyone is above average”). It even includes algorithms.
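
For flavor, one common way rating systems fight that kind of inflation is to damp each item’s average toward a site-wide prior, so a couple of gushing votes can’t carry it to the top on their own. Here’s a toy sketch of that idea in Python, with made-up numbers of my own, not the article’s actual algorithm:

    def damped_mean(ratings, prior_mean=3.0, prior_weight=5):
        """Pull an item's average toward a site-wide prior mean.

        prior_mean and prior_weight are made-up illustrative values;
        with only a few ratings the item stays near the prior, and it
        takes many consistent votes to move it far from "average".
        """
        total = prior_mean * prior_weight + sum(ratings)
        count = prior_weight + len(ratings)
        return total / count

    # Two gushing five-star votes no longer beat a hundred steady fours.
    print(round(damped_mean([5, 5]), 2))      # 3.57
    print(round(damped_mean([4] * 100), 2))   # 3.95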

Rating systems aren’t much used in online worlds today, but they should be. Yet another area where games have something to learn from the web.

  6 Responses to “Life With Alacrity: Collective Choice: Experimenting with Ratings”

  1. Interesting read. I was only able to give it a quick once-over because of time constraints.

    It’s interesting because they seem to have inherited a set of legacy features (and issues) when they acquired the site/community.

    I’m not sure what to make of the complexity of their modeling; shades of granularity sometimes lead to more shades of grey, which leads to inconclusive outcomes. But it seems like they’re doing a lot of good things to get their accuracy rating up.

    I’m not sure I agree with a “trust-based rating” system (IMO it introduces unacceptable value-laden assumptions, or rather unpredictable, subjective variables), but it does complement Bayesian theories of derivation. I’m more of a frequentist (Venn) guy myself, but I tend to work with large-population analytics, which introduces objectivist biases of its own.

    The variance in their outcomes is really skewed in their four-column chart. I’m thinking they should try weights of 10 and 20, not 25 and 50, if I understood where they were going correctly (rough sketch of what I mean at the end of this comment). Also, if they are going to use a Trusted Rating system, loosening up the definitions, or rather widening the criteria for who is considered “trusted,” might increase accuracy. It seems like a strict criterion for what counts as trusted is causing the high variance, which only increases as the weight increases. This is usually caused by throwing out good data with bad data.

    Or rather throwing out the baby with the bathwater 🙂

    Anyhow, the statistical approach is always interesting. The approach is not as important as getting the outcome/analysis you need for the problem at hand; it just comes down to preferences. Some people like Neapolitan ice cream and some stick with vanilla, to the same intended effect. 🙂
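
    The rough sketch of the weighting I mentioned above, in Python, with toy numbers of my own (not theirs):

        def trusted_weighted_mean(ratings, trusted_weight=10):
            """Average ratings, counting each "trusted" rater's vote as
            trusted_weight ordinary votes (my toy value; they tried 25/50).

            ratings: list of (score, is_trusted) pairs.
            """
            total = weight_sum = 0.0
            for score, is_trusted in ratings:
                w = trusted_weight if is_trusted else 1
                total += w * score
                weight_sum += w
            return total / weight_sum

        votes = [(4, True), (2, False), (3, False), (5, False)]
        print(round(trusted_weighted_mean(votes, 10), 2))  # 3.85
        print(round(trusted_weighted_mean(votes, 50), 2))  # 3.96

    Crank the weight up and a handful of “trusted” raters simply owns the average, which is where I suspect the extra variance is coming from.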

  2. User-Created Trust Networks in Second Life…

    I’m a big fan of the idea of user-created trust and ratings networks, although most of them seem to not work very well. I recently came across two interesting examples in the virtual world of Second Life, though, which are worth pointing out here…

  3. […] [Oops, I’d meant to include in the original post a link to this extensive blog post on ratings systems, which Raph Koster linked to yesterday. Complete with algorithms, as Raph points out.] […]

  4. Seems to me that the trouble with ratings is subjectivity: I don’t really care what people overall think about things, because people overall tend to like things like Britney Spears… or, on the flip side, rate obscure things extremely highly because nobody normal would be interested in them.

    The article doesn’t really address this, and perhaps that’s fine, because the very nature of its reviews acts as a filtering mechanism: only people interested in RPGs in the first place are going to read them.

    It seems that what’s needed is a contextual rating system, something like the way last.fm works. I want to know how people who like the same things as me rated something. In other words, people whose ratings most match mine.

    I’m reasonably sure this fixes the problem of whose ratings to trust, because people trying to skew the results a certain way simply won’t match my likes and dislikes in the first place, although I’m sure it could be gamed in another way.
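
    A very rough sketch of the kind of thing I mean, using a toy agreement measure (nothing like last.fm’s actual algorithm): weight each person’s rating of an item by how closely their past ratings agree with mine.

        def similarity(mine, theirs):
            """Toy agreement score on a 1-5 scale: 1.0 means our shared
            ratings match exactly, 0.0 means maximal disagreement."""
            shared = set(mine) & set(theirs)
            if not shared:
                return 0.0
            avg_diff = sum(abs(mine[i] - theirs[i]) for i in shared) / len(shared)
            return max(0.0, 1.0 - avg_diff / 4.0)

        def contextual_rating(item, my_ratings, everyone):
            """Average other people's ratings of item, weighted by how
            closely their past ratings agree with mine."""
            total = weight_sum = 0.0
            for their_ratings in everyone:
                if item not in their_ratings:
                    continue
                w = similarity(my_ratings, their_ratings)
                total += w * their_ratings[item]
                weight_sum += w
            return total / weight_sum if weight_sum else None

        me    = {"book_a": 5, "book_b": 1}
        fan   = {"book_a": 5, "book_b": 1, "book_c": 4}
        troll = {"book_a": 1, "book_b": 5, "book_c": 1}
        print(contextual_rating("book_c", me, [fan, troll]))  # 4.0: the troll's vote counts for nothing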

  5. One of the things I’ve been blathering about for years (not yet on TinkerX… maybe this would be a good excuse for a post… another ‘Raph set me thinking’ one; it needs a category, apparently) is the need for a rating system where one of two things happens:

    1. Before any rating gets done (or taken into account), the participant rates him/herself; or,

    2. At the time of rating review by another participant, all the ratings that have been performed by other participants are somehow “regressed” onto the participants, in order to help scale/weight them in some way based on my own ratings or my self-rating.

    There are music review systems, for example, like Pandora, that help you find music you like based on algorithms, some of which involve voting from other users. One of the best uses of Del.icio.us involves finding new links from taggers who have tagged things in similar ways to you. “Regressive rating” is essentially “rating the rating.” Call it either that or “meta-rating” if you like.

    And it should be transparent and modifiable. There are days, for example, when I may want to know what people who have tagged themselves “Dad” have to say about various subjects, like “vacation spots.” There are other days when I might want ratings on the exact same subject, but from people who are tagged “doctors” or “travel agents” or “security professionals.” Or “kids,” eh? And how helpful would it be to be able to mix and match? And remove anyone who has also tagged themselves as “gamer” and “sci-fi lover”?

    Self-tagging, meta-tagging, regressive rating… call it whatever you want. But a system that allows for the manipulation and review not just of the review data, but of its context… ain’t that supposed to be the whole power of this “social web” thing? Overlapping, modular, amorphous groups, and how we can be one thing one time in one place and something else ten seconds later somewhere else? Hell, I might even want to review something two different ways… this is a great book for a 10-year-old, but way too scary for a 6-year-old. How do I do that if all I can do is give it 1-5 “stars”?
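
    A back-of-the-envelope sketch of that tag-filtered slicing, with made-up reviewers, tags, and contexts:

        # Each review carries the reviewer's self-tags and a context note,
        # so the same book can be rated differently for different readers.
        reviews = [
            {"tags": {"dad", "gamer"}, "context": "for a 10-year-old", "stars": 5},
            {"tags": {"dad"},          "context": "for a 6-year-old",  "stars": 2},
            {"tags": {"travel agent"}, "context": "for a 10-year-old", "stars": 4},
        ]

        def rating_for(reviews, want_tags=(), avoid_tags=(), context=None):
            """Average only the reviews whose authors match the tag mix
            asked for, optionally restricted to one context."""
            picked = [r["stars"] for r in reviews
                      if set(want_tags) <= r["tags"]
                      and not (set(avoid_tags) & r["tags"])
                      and (context is None or r["context"] == context)]
            return sum(picked) / len(picked) if picked else None

        # Dads only, no gamers, and only the "for a 6-year-old" take:
        print(rating_for(reviews, want_tags={"dad"}, avoid_tags={"gamer"},
                         context="for a 6-year-old"))  # 2.0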

  6. […] An article on experimenting with ratings from "Life with Alacrity," by way of Raph. […]
