Microsoft’s Kodu comes to the PC

 Posted by (Visited 6193 times)  Game talk
Jan 12, 2010

For those who haven’t seen Kodu, it’s a visual game development environment originally created for educators and running on the Xbox 360. It uses pie menus, a game controller, and a graphical language built around trigger events to allow even children to develop games.
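
Since the language is built around trigger events, every behavior amounts to a small pile of when/do rules. Here is a rough sketch of that style in ordinary code; it is just an illustration of trigger-plus-action rules, not Kodu’s actual syntax, and the apple behaviors are hypothetical examples:

```python
# A rough sketch of the trigger/action rule style (not Kodu's actual
# syntax, just an illustration): each rule pairs a "when" condition with
# a "do" action, and the engine re-checks every rule on every update tick.

rules = [
    # WHEN the character sees an apple   DO move toward it
    (lambda s: s.get("sees_apple"),   lambda s: s.setdefault("actions", []).append("move toward apple")),
    # WHEN the character bumps an apple  DO eat it
    (lambda s: s.get("bumped_apple"), lambda s: s.setdefault("actions", []).append("eat apple")),
]

def tick(state):
    """One update: fire the action of every rule whose trigger holds."""
    for when, do in rules:
        if when(state):
            do(state)
    return state

print(tick({"sees_apple": True}))
# {'sees_apple': True, 'actions': ['move toward apple']}
```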

Ars Technica is reporting that Kodu now runs on the PC as a beta. There are more details here at Microsoft’s news center.

The inspiration for Kodu came from MacLaurin’s daughter. A few years ago, MacLaurin noticed his daughter, then 3 years old, watching his wife browse her Facebook page. He flashed back to his early experiences with a computer, comparing the passive experience his daughter was having to the coding he did to interact with the machine. It was a sad realization, he said…

…MacLaurin and his Microsoft Research team set out to recapture that magic. Through the basics of programming, they wanted to teach youngsters how they could create new worlds from their imagination. Two years later, Kodu was a hit on Xbox LIVE and was being used in more than 60 educational institutions across the globe to introduce children to programming.

Oh, Avatar

 Posted by (Visited 10564 times)  Watching
Jan 10, 2010

So, I finally got around to watching Avatar yesterday. In 2D, not 3D, as it happens.

I enjoyed it a lot. It was fun, and yes, it even left me thinking. But it left me thinking in probably not the way the filmmakers intended, because the core problem with it, and the reason, I think, that so many people have a hard time with it, is that it is a great entertainment that is intellectually dishonest.

Spoilers below.

Continue reading »

The Psychology of Video Games blog

 Posted by (Visited 10724 times)  Game talk
Jan 7, 2010

There is a wonderful new blog up called The Psychology of Video Games, written by Jamie Madigan, who has a PhD in psychology. It basically looks at individual “brain hacks,” so to speak, and uses them to explain specific incidents in games (similar to how I’ve referenced these brain hacks in the second half of the Games Are Math talk, or even in A Theory of Fun)…

So far he has done posts on:

  • confirmation bias
  • the commitment fallacy
  • fundamental attribution error
  • the hot hand fallacy
  • arousal and decision making
  • loss aversion
  • variable reinforcement and dopamine
  • sunk costs
  • contrast effect

The articles are lucid and funny… love the use of footnotes for humor. 🙂 And it’s a great intro to the overall topic for those of you who have not dug into this stuff before. I can’t wait to read more. A taste, from the post on the hot hand fallacy and Modern Warfare 2 kill streaks:

To be sure, some players get lots of kill streaks because they are tiny, radiant gods of destruction whose skills at the game put every last member of the Boston Celtics to shame (who prefer Halo 3, after all). But skill aside, does the kill streak system in MW2 work in the sense that it gives players some momentum that propels them towards otherwise unreachable acts of virtual carnage? Is a player who has 10 kills in a row any more likely to get the 11th one needed to unlock a kill streak reward than he is to get the first kill?

Nope, says the science of psychology and basic probability theory. It’s all in their head because, splash damage and javelin glitch abuse aside, each shot is basically an independent event. For any given player, any perception of kills clustering together more than usual is just a product of the human brain’s tendency to see patterns where there are none – a phenomenon called “apophenia” by psychologists trying to win at Scrabble.
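
The independence point is easy to convince yourself of with a quick simulation. Here is a minimal sketch; the kill probability, streak length, and trial count are made-up illustration values (a shorter streak than ten just makes it converge faster, and the principle is the same):

```python
import random

# A quick check of the independence claim. If every kill attempt succeeds
# with the same probability p, a player on a streak is no more likely to
# get the next kill than anyone else.

def simulate(p=0.5, streak_len=5, trials=1_000_000):
    streak = 0
    overall_attempts = overall_kills = 0
    streak_attempts = streak_kills = 0
    for _ in range(trials):
        on_streak = streak >= streak_len
        kill = random.random() < p
        overall_attempts += 1
        overall_kills += kill
        if on_streak:
            streak_attempts += 1
            streak_kills += kill
        streak = streak + 1 if kill else 0
    return overall_kills / overall_attempts, streak_kills / streak_attempts

base, after_streak = simulate()
print(f"P(kill), overall:        {base:.3f}")
print(f"P(kill | on a streak):   {after_streak:.3f}")
# Both numbers hover around p: the streak carries no momentum.
```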

A smart use of Moore’s Law

 Posted by (Visited 7402 times)  Misc
Jan 6, 2010

In the past I have written and spoken about what I called “Moore’s Wall”: the notion that expanding computing capabilities keep raising the bar we have to reach, which results in higher costs and longer development times rather than actually better products.

Well, Toshiba just announced a TV at CES that circumvents this in a clever way. The TV has a Cell chip in it, which makes it outrageously powerful for a TV. So powerful, in fact, that it can do silly things with the extra processing power, such as interpolating frames or applying special video effects.

Or rendering the image twice at full speed, so that it can turn any signal into a 3D image.

In effect, this means the Moore’s Wall problem is circumvented to a degree; instead of upping the caliber of the content needed, the extra computing power is used to transform the content we already have.
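
As a toy illustration of that idea, the crudest possible frame interpolation is just a blend of two adjacent frames. Real sets use motion estimation rather than simple blending, but this is the shape of spending spare cycles on content you already have:

```python
# Synthesize an in-between frame by blending two existing frames.
# Frames are modeled as rows of grayscale pixel values.

def interpolate(frame_a, frame_b, t=0.5):
    """Return a synthetic frame t of the way from frame_a to frame_b."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

frame1 = [[0, 0, 0], [10, 10, 10]]
frame2 = [[10, 10, 10], [20, 20, 20]]
print(interpolate(frame1, frame2))
# [[5.0, 5.0, 5.0], [15.0, 15.0, 15.0]]
```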

I like this notion, in part because it has a lot in common with notions about standard formats and the like. But it also makes me propose a parlor game: what would <insert device here> be like with insane computing power but no changes to the rest of the technology? We have started to see glimmers of that with the way in which phones and iPods have been changing, of course, and the idea of networked fridges that detect spoiling food has been out there forever… but I am wondering about things like this, which seem to magically upgrade everything we already had.