Posts Tagged ‘netflix’

Looks like most other folks still messing around in the Netflix competition are in a similar situation to mine.

A new article in The New York Times gives some decent insight into the folks still working on it and the remaining challenges to winning the prize.

The article discusses one of the problems I’ve found too: movies that actually suck by most accounts, but that people hold very polarized opinions on (either love or hate).  This is very difficult for an algorithm working on sparse data to handle when MOST of the data is mean-reverting.

“Bertoni says it’s partly because of “Napoleon Dynamite,” an indie comedy from 2004 that achieved cult status and went on to become extremely popular on Netflix. It is, Bertoni and others have discovered, maddeningly hard to determine how much people will like it. When Bertoni runs his algorithms on regular hits like “Lethal Weapon” or “Miss Congeniality” and tries to predict how any given Netflix user will rate them, he’s usually within eight-tenths of a star. But with films like “Napoleon Dynamite,” he’s off by an average of 1.2 stars.

The reason, Bertoni says, is that “Napoleon Dynamite” is very weird and very polarizing. It contains a lot of arch, ironic humor, including a famously kooky dance performed by the titular teenage character to help his hapless friend win a student-council election. It’s the type of quirky entertainment that tends to be either loved or despised. The movie has been rated more than two million times in the Netflix database, and the ratings are disproportionately one or five stars.

Worse, close friends who normally share similar film aesthetics often heatedly disagree about whether “Napoleon Dynamite” is a masterpiece or an annoying bit of hipster self-indulgence. When Bertoni saw the movie himself with a group of friends, they argued for hours over it. “Half of them loved it, and half of them hated it,” he told me. “And they couldn’t really say why. It’s just a difficult movie.””

This is exactly the problem I’ve run into.  For the most part, this prize algorithm has been uncovered for most of the database of users and movies.  It’s probably to the point where, if this were an internal dev team working on the problem, they would have said “good enough” and moved on to other projects.
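For concreteness, here’s a minimal sketch (my own illustration, not from the article or from any contestant’s code) of one crude way to flag these polarizing titles: measure what fraction of a movie’s ratings sit at the 1- and 5-star extremes.

```python
from collections import Counter

def polarization_score(ratings):
    """Fraction of ratings at the extremes (1 or 5 stars).

    A crude flag for 'Napoleon Dynamite'-style titles whose ratings
    pile up at both ends instead of clustering around the mean.
    """
    if not ratings:
        return 0.0
    counts = Counter(ratings)
    return (counts[1] + counts[5]) / len(ratings)

# A mean-reverting title vs. a polarizing one:
mainstream = [3, 4, 3, 4, 4, 3, 5, 4, 3, 4]
polarizing = [1, 5, 1, 5, 5, 1, 1, 5, 5, 1]
print(polarization_score(mainstream))  # 0.1
print(polarization_score(polarizing))  # 1.0
```

A real system would want something sharper (e.g. a bimodality test), but even a score like this separates the “Miss Congeniality” crowd-pleasers from the love-it-or-hate-it cases.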

I’m glad I’m not alone in still working hard on this while barely getting any further.  Whew.


Well, I’m in the process of getting somewhere finally, let’s put it that way.  My initial efforts were piss poor, with only slightly interesting improvements from some ad hoc messing with means and brute-force approaches.  Nothing I did would get me onto the leaderboard.

Of course, I could just take what all the leaders have done and build on that.  Some teams have done that.  I think there’s something simpler to find, and finding that simpler thing is hard when you take others’ complicated models as your starting point (you pick up all their shortcomings too!).  So I’m starting with the most basic systems I can and searching through various simple tweaks.  Also, I want a system that will still make sense to me if I have to put this on the shelf for a week or two to get other work done.

What I’ve done:

I use the pyflix package to handle the basic dataset and algorithm framework.

I borrowed the basic weighted KNN clustering algorithm from Toby Segaran’s O’Reilly book “Programming Collective Intelligence”.
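As a rough sketch of that technique (reconstructed from memory, not copied from the book): distance-weighted KNN predicts a rating as the weighted average of the k nearest neighbors’ ratings, with a Gaussian falloff so near neighbors count more.

```python
import math

def gaussian_weight(dist, sigma=10.0):
    """Closer neighbors get weights near 1; distant ones decay toward 0."""
    return math.exp(-dist ** 2 / (2 * sigma ** 2))

def weighted_knn_predict(query, data, k=5):
    """Predict a rating as the weight-averaged rating of the k nearest
    neighbors.  `data` is a list of (feature_vector, rating) pairs."""
    neighbors = sorted(data, key=lambda item: math.dist(query, item[0]))[:k]
    num = den = 0.0
    for vec, rating in neighbors:
        w = gaussian_weight(math.dist(query, vec))
        num += w * rating
        den += w
    return num / den if den else 0.0
```

The sigma parameter controls how fast the influence of far-away neighbors dies off; tuning it (and k) is most of the game.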

I’ve connected the Pythonika package to Mathematica 6 so that I can use the Mathematica front end to work out algorithm variations and visualizations.  (Mathematica 6 has very good clustering algorithms and features that are far easier to play with than building your own or using some Python, C, or Java library.  And once speed is the key, it’s possible to do compilation and other speed-ups in Mathematica.  But I digress.)  Pythonika lets me run Python code, e.g. to retrieve the data from the pyflix data store.

I used the Python pickles from Ilya G.  He did the hard work of creating feature vectors (coded descriptions) from IMDb data for things like release date, genre, cast, and so forth.  This makes the KNN algorithms more interesting than just going off the movie title, rating, and year.
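A hedged sketch of how such metadata might be turned into numeric vectors for KNN (the field names here are illustrative; the actual schema of those pickles may well differ):

```python
def encode_movie(meta, all_genres):
    """Turn an IMDb-style metadata dict into a numeric feature vector:
    one-hot genre bits plus a roughly scaled release year.  The keys
    ('year', 'genres') are hypothetical, not the real pickle schema."""
    genre_bits = [1.0 if g in meta.get("genres", []) else 0.0
                  for g in all_genres]
    year = (meta.get("year", 1990) - 1990) / 10.0  # crude scaling
    return genre_bits + [year]

# e.g. a 2004 comedy against a two-genre vocabulary:
print(encode_movie({"year": 2004, "genres": ["Comedy"]},
                   ["Comedy", "Drama"]))  # [1.0, 0.0, 1.4]
```

How you scale each field matters a lot for distance-based methods; a raw year would drown out the genre bits entirely.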

My basic outline so far of the algorithm system:

Create clusters of the data based on Genre, release year, number of ratings, director (as proxy for many other factors).  Further polish the clusters by user.

Add in some meta features of users like frequent rater, avid rater and so on for clustering the users.

Do multiple runs of rating predictions based on just movie clusters, just user clusters, and combinations, with varying neighborhood sizes.
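The multi-run step above can be sketched as a simple blend of per-run predictions (a simplified, hypothetical helper, not my actual pipeline):

```python
def blend_predictions(runs, weights=None):
    """Combine per-(user, movie) predictions from several runs
    (e.g. movie-cluster KNN, user-cluster KNN) into one estimate.
    `runs` is a list of dicts mapping (user_id, movie_id) -> rating;
    all runs are assumed to cover the same keys."""
    weights = weights or [1.0] * len(runs)
    total = sum(weights)
    return {key: sum(w * run[key] for w, run in zip(weights, runs)) / total
            for key in runs[0]}

run_a = {(1, 10): 4.0}  # e.g. movie-cluster neighborhood, k=20
run_b = {(1, 10): 3.0}  # e.g. user-cluster neighborhood, k=50
print(blend_predictions([run_a, run_b]))  # {(1, 10): 3.5}
```

Even a flat average of diverse runs tends to beat any single run; the leading teams take this much further with learned blending weights.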

My particular challenges are:

Reducing the memory required to do trial runs on the data with the algorithms.

Reducing the code required to try new algorithms

Keeping track of all these different parts

I hope this is helpful or instructive.  It’s fun for me at the very least.


A couple of posts ago I made a mistake in my data assumptions about the number of movies available for recommendation.  IMDb gives a better idea… I was off by about 20x.  Hahaha.  Yikes.

The other day I was making a point to someone about how the blog medium promotes research laziness.  Here I prove my own point.  Painful.  Luckily, this same medium allows you to print retractions, updates, and improvements uber fast.





Check out the leaderboard.  The recent progress by the top four teams has been impressive.

BellKor should win this within two months.  They also showcase a key point on their blog: to achieve practical results, you don’t need a crazy model with a lot of predictors.
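That point is easy to illustrate: the classic bias baseline (global mean plus user and movie offsets) already gets you surprisingly far.  A toy sketch of that textbook baseline follows; to be clear, this is an illustration, not BellKor’s actual model.

```python
def baseline_predict(ratings, user, movie):
    """Bias baseline: global mean + user offset + movie offset.
    `ratings` is a list of (user_id, movie_id, stars) triples."""
    mu = sum(r for _, _, r in ratings) / len(ratings)
    user_rs = [r for u, _, r in ratings if u == user]
    movie_rs = [r for _, m, r in ratings if m == movie]
    b_u = sum(user_rs) / len(user_rs) - mu if user_rs else 0.0
    b_m = sum(movie_rs) / len(movie_rs) - mu if movie_rs else 0.0
    return mu + b_u + b_m

history = [(1, 1, 5), (1, 2, 3), (2, 1, 4), (2, 2, 2)]
print(baseline_predict(history, 1, 1))  # 5.0
```

A serious version would damp the offsets for users and movies with few ratings, but even two simple offsets explain a large share of the variance.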

I’ve yet to figure out why Netflix is spending $1,000,000 on this algorithm and/or the press/buzz it generates.  Their own algos and business rules already do almost as well.  But hey, if you have $1,000,000 lying around to give to some smart researchers, great!



In October of 2006, Netflix launched a $1,000,000 contest to improve its rating prediction/movie recommendation algorithms.  No one has won the prize yet (surprisingly).

I read the latest Wired (I know, I know), which featured a contestant.  I’m easily inspired to work on difficult challenges, and I figure this will be good learning AND good research into software collaboration (one of my favorite topics).  Yes, I’ve entered the fray as Team SocialMode.  Heck, I’ve been looking for a meaty project to pull a bunch of my thoughts together.  This prize is perfect for that, and it takes nothing more than access to cheap CPU cycles and a brain (both of which I have; which I have more of, I don’t know).

My approach will not be highly abstract.  I’ve built several pay-per-click engines/behavioral targeting systems, simple content recommendation engines, search algos, and a few collaborative filtering systems.  That experience leads me to believe an improved algorithm will come from practical analysis of rating behavior, user interface behavior, exposure to the movies being rated (cognitive dissonance type concepts), clustering of practical movie metadata (e.g. I like anything with George Clooney, 10 explosions or more, or dinosaurs), and normalizing simple “flags” (all people dislike the Star Wars with Jar Jar; you just need to adjust for an individual’s rating scale).
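The scale-adjustment idea at the end can be sketched as a per-user z-score (just an illustration of the concept, not a committed design):

```python
import statistics

def normalize_user(ratings):
    """Map one user's raw stars onto a personal z-scale so that a
    habitual 4-star rater and a habitual 2-star rater become comparable."""
    mu = statistics.mean(ratings)
    sd = statistics.pstdev(ratings) or 1.0  # guard against zero spread
    return [(r - mu) / sd for r in ratings]

# A generous rater and a harsh rater with the same relative preferences:
print(normalize_user([5, 4, 5, 4]))  # [1.0, -1.0, 1.0, -1.0]
print(normalize_user([3, 2, 3, 2]))  # [1.0, -1.0, 1.0, -1.0]
```

After normalization the two users look identical, which is exactly what you want before comparing them for collaborative filtering.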

Some assumptions of mine:

  • People rate things they haven’t seen
  • People rate in batches
  • People don’t rate as they watch
  • Viewing experience affects rating
  • Technical quality affects rating
  • There are gender and age differences
  • Every individual has a different 1–5 scale
  • It is cognitively easier to rate something as a 1 or a 5 (love or hate) than a 2–4
    • We deal with bits (all-or-nothing choices) better than in-between values
    • User interface widgets often make it harder to rate in-between values

I’ll have more to say on this.

My solution will use Python and the Orange library, and will utilize data from IMDb, BoxOfficeMojo, and RottenTomatoes.


Let’s roll!  All progress will be posted.

