The Netflix Prize was an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films.
The competition was held by Netflix, an online DVD-rental service, and was open to anyone who was neither connected with Netflix (current and former employees, agents, etc.) nor a resident of certain blocked countries (such as Cuba or North Korea). On September 21, 2009, the grand prize of US$1,000,000 was given to the BellKor’s Pragmatic Chaos team, which bested Netflix’s own algorithm for predicting ratings by 10.06%.
Problem and data sets
Netflix provided a training data set of 100,480,507 ratings that 480,189 users gave to 17,770 movies. Each training rating is a quadruplet of the form
<user, movie, date of grade, grade>. The user and movie fields are integer IDs, while grades are from 1 to 5 (integral) stars. 
The qualifying data set contains 2,817,131 triplets of the form
<user, movie, date of grade>, with grades known only to the jury. A participating team’s algorithm had to predict grades on the entire qualifying set, but the team was informed of the score for only half of that data: the quiz set of 1,408,342 ratings. The other half is the test set of 1,408,789 ratings, and performance on this was used by the jury to determine potential prize winners. Only the judges knew which ratings were in the quiz set and which were in the test set; this arrangement was intended to make it difficult to hill climb on the test set. Submitted predictions were scored against the true grades in terms of root mean squared error (RMSE), and the goal was to reduce this error as much as possible. Note that while the actual grades are in the range 1 to 5, submitted predictions need not be. Netflix also identified a probe subset of 1,408,395 ratings within the training data set. The probe, quiz, and test data sets were chosen to have similar statistical properties.
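The RMSE scoring described above is straightforward to compute. Below is a minimal Python sketch; the handful of <user, movie, date of grade, grade> quadruplets and the predictions are made up for illustration, only the metric itself follows the contest:

```python
import math

# Hypothetical sample of training quadruplets, following the
# <user, movie, date of grade, grade> format described above.
ratings = [
    (6, 1, "2005-09-06", 3),
    (97, 1, "2005-05-01", 5),
    (6, 2, "2005-10-12", 4),
]

def rmse(predictions, truths):
    """Root mean squared error, the contest's scoring metric."""
    assert len(predictions) == len(truths)
    se = sum((p - t) ** 2 for p, t in zip(predictions, truths))
    return math.sqrt(se / len(predictions))

# Predictions need not be integers in [1, 5]; real-valued guesses are allowed.
preds = [3.4, 4.6, 3.9]
truth = [g for (_, _, _, g) in ratings]
print(round(rmse(preds, truth), 4))  # → 0.3317
```

Lower is better; a constant prediction equal to the global mean would score far worse than the winning submissions' ~0.856.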
In summary, the data used in the Netflix Prize looks as follows:
- Training set (99,072,112 not including the probe set, 100,480,507 including the probe set)
- Probe set (1,408,395 ratings)
- Qualifying set (2,817,131 ratings) consisting of:
  - Test set (1,408,789 ratings), used to determine winners
  - Quiz set (1,408,342 ratings), used to calculate leaderboard scores
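As a quick sanity check, the published set sizes above are internally consistent, which can be verified directly:

```python
# Published Netflix Prize data-set sizes (numbers of ratings).
training_with_probe = 100_480_507
training_without_probe = 99_072_112
probe = 1_408_395
qualifying = 2_817_131
test = 1_408_789
quiz = 1_408_342

# The probe set is carved out of the full training data...
assert training_without_probe + probe == training_with_probe
# ...and the qualifying set splits into the test and quiz halves.
assert test + quiz == qualifying
print("set sizes consistent")
```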
For each movie, title and year of release are provided in a separate dataset. No information at all is provided about users. In order to protect the privacy of customers, “some of the rating data for some customers in the training and qualifying sets have been deliberately perturbed in one or more of the following ways: deleting ratings; inserting alternative ratings and dates; and modifying rating dates”.
The average user rated over 200 movies, and the average movie was rated by over 5,000 users. But there is great variance in the data: some movies in the training set have as few as 3 ratings, while one user rated over 17,000 movies.
There was some controversy as to the choice of RMSE as the defining metric. Would a reduction of the RMSE by 10% really benefit the users? Even a slight improvement in RMSE can result in a significant difference in the ranking of the “top-10” most recommended movies for a user.
Prizes were based on improvement over Netflix’s own algorithm, called Cinematch, or over the previous year’s score if a team had made improvement beyond a certain threshold. A trivial algorithm that predicts for each movie in the quiz set its average grade from the training data produces an RMSE of 1.0540. Cinematch uses “straightforward statistical linear models with a lot of data conditioning”.
Using only the training data, Cinematch scores an RMSE of 0.9514 on the quiz data, roughly a 10% improvement over the trivial algorithm. Cinematch has a similar performance on the test set, 0.9525. In order to win the grand prize of $1,000,000, a participating team had to improve this by another 10%, to achieve 0.8572 on the test set. Such an improvement on the quiz set corresponds to an RMSE of 0.8563.
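The trivial movie-average baseline mentioned above can be sketched in a few lines of Python. The toy ratings here are invented for illustration; on the real training data this approach scores an RMSE of about 1.0540:

```python
from collections import defaultdict

# Hypothetical toy training data: (user, movie, date, grade) quadruplets.
train = [
    (1, 10, "2004-01-01", 4),
    (2, 10, "2004-02-01", 2),
    (3, 10, "2004-03-01", 3),
    (1, 20, "2004-01-15", 5),
]

# Accumulate per-movie grade sums and counts.
totals = defaultdict(lambda: [0.0, 0])
for _, movie, _, grade in train:
    totals[movie][0] += grade
    totals[movie][1] += 1

global_mean = sum(g for *_, g in train) / len(train)

def predict(movie):
    """Predict a movie's average training grade; fall back to the
    global mean for movies never seen in training."""
    s, n = totals.get(movie, (0.0, 0))
    return s / n if n else global_mean

print(predict(10))  # mean of 4, 2, 3 → 3.0
print(predict(99))  # unseen movie → global mean 3.5
```

Cinematch's ~0.9514 and the winning ~0.8567 should be read against this 1.0540 floor: each 10% step down required substantially more modeling effort.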
As long as no team won the grand prize, a progress prize of $50,000 was awarded every year for the best result so far, provided it improved the RMSE on the quiz set by at least 1% over the previous progress prize winner (or over Cinematch, the first year). If no submission succeeded, the progress prize was not to be awarded for that year.
To win a prize, a winning team had to provide the source code and a description of the algorithm to the jury within one week after being contacted by them. Following verification, the winner also had to grant a non-exclusive license to Netflix. Netflix would publish only the description, not the source code, of the system. A team could choose not to claim a prize in order to keep its algorithm and source code secret. The jury also kept their predictions secret from other participants. A team could send as many attempts to predict grades as it wished. Originally submissions were limited to once a week. A team’s best submission so far counted as its current submission.
Once a team succeeded in improving the RMSE by 10% or more, the jury would issue a last call, giving all teams 30 days to send their submissions. Only then was the team with the best submission asked for the algorithm description, source code, and non-exclusive license and, after successful verification, declared a grand prize winner.
The contest would last until the grand prize winner was declared. Had no one received the grand prize, it would have lasted for at least five years (until October 2, 2011). After that date, the contest could have been terminated at any time at Netflix’s sole discretion.
Progress over the years
The competition began on October 2, 2006. By October 8, a team called WXYZConsulting had already beaten Cinematch’s results. 
By October 15, there were three teams who had beaten Cinematch, one of them by 1.06%, enough to qualify for the annual progress prize. By June 2007 over 20,000 teams from over 150 countries had registered for the competition; 2,000 teams had submitted over 13,000 prediction sets.
Over the first year of the competition, a handful of front-runners traded first place. The more prominent ones were: 
- WXYZConsulting, a team of Wei Xu and Yi Zhang. (A front runner during November–December 2006.)
- ML@UToronto A, a team from the University of Toronto led by Prof. Geoffrey Hinton. (A front runner during parts of October–December 2006.)
- Gravity, a team of four scientists from the Budapest University of Technology. (A front runner during January–May 2007.)
- BellKor, a group of scientists from AT&T Labs. (A front runner since May 2007.)
On August 12, 2007, many contestants gathered at the KDD Cup and Workshop 2007, held in San Jose, California. During the workshop all four of the top teams on the leaderboard at that time presented their techniques. The team from IBM Research – Yan Liu, Saharon Rosset, Claudia Perlich, and Zhenzhen Kou – won third place in Task 1.
Over the second year of the competition, only three teams reached the leading position:
- BellKor, a group of scientists from AT&T Labs. (Front runner during May 2007 – September 2008.)
- BigChaos, a team of Austrian scientists from commendo research & consulting. (Single-team front runner since October 2008.)
- BellKor in BigChaos, a joint team of the two leading teams. (A front runner since September 2008.)
2007 Progress Prize
On September 2, 2007, the competition entered the “last call” period for the 2007 Progress Prize. Over 40,000 teams from 186 different countries had entered the contest. They had thirty days to tender submissions for consideration. At the beginning of this period the leading team was BellKor, with an RMSE of 0.8728 (8.26% improvement), followed by Dinosaur Planet (RMSE = 0.8769; 7.83% improvement) and Gravity (RMSE = 0.8785; 7.66% improvement). In the last hour of the last call period, an entry by “KorBell” took first place. This turned out to be an alternate name for Team BellKor.
On November 13, 2007, team KorBell (formerly BellKor) was declared the winner of the $50,000 Progress Prize with an RMSE of 0.8712 (8.43% improvement). The team consisted of three researchers from AT&T Labs: Yehuda Koren, Robert Bell, and Chris Volinsky. As required, they published a description of their algorithm.
2008 Progress Prize
The 2008 Progress Prize was awarded to the BellKor team. Their submission, combined with that of a different team, BigChaos, achieved an RMSE of 0.8616 with 207 predictor sets. The joint team consisted of two researchers from commendo research & consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), and three researchers from AT&T Labs, Yehuda Koren, Robert Bell, and Chris Volinsky (originally team BellKor). As required, they published a description of their algorithm.
This was the final Progress Prize, because achieving the required 1% improvement over the 2008 Progress Prize result would be sufficient to qualify for the Grand Prize. The prize money was donated to charities chosen by the winners.
On June 26, 2009, the team “BellKor’s Pragmatic Chaos”, a merger of teams “BellKor in BigChaos” and “Pragmatic Theory”, achieved a 10.05% improvement over Cinematch (a quiz RMSE of 0.8558). With this, the Netflix Prize competition entered the “last call” period for the Grand Prize. In accordance with the rules, teams had thirty (30) days, until July 26, 2009 18:42:37 UTC, to make submissions that would be considered for this prize.
On July 25, 2009, the team “The Ensemble”, a merger of the teams “Grand Prize Team” and “Opera Solutions and Vandelay United”, achieved a 10.09% improvement over Cinematch (a quiz RMSE of 0.8554).
On July 26, 2009, Netflix stopped gathering submissions for the Netflix Prize contest. 
The final standing of the leaderboard showed two teams meeting the minimum requirements for the Grand Prize: “The Ensemble” with a 10.10% improvement over Cinematch on the qualifying set (a quiz RMSE of 0.8553), and “BellKor’s Pragmatic Chaos” with a 10.09% improvement over Cinematch on the qualifying set (a quiz RMSE of 0.8554). The Grand Prize winner was to be the best performer on the test set.
On September 18, 2009, Netflix announced team “BellKor’s Pragmatic Chaos” as the prize winner (a test RMSE of 0.8567), and the prize was awarded to the team in a ceremony on September 21, 2009. “The Ensemble” team had matched BellKor’s result, but since BellKor had submitted its results 20 minutes earlier, the rules awarded the prize to BellKor.
The joint team “BellKor’s Pragmatic Chaos” consisted of two Austrian researchers from commendo research & consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos); two researchers from AT&T Labs, Robert Bell and Chris Volinsky, and Yehuda Koren from Yahoo! (originally team BellKor); and two researchers from Pragmatic Theory, Martin Piotte and Martin Chabbert. As required, they published a description of their algorithm.
The team reported to have achieved the “dubious honors” (sic Netflix) of the worst RMSEs on the quiz and test data sets, among the 44,014 submissions made by 5,169 teams, was “Lanterne Rouge”, led by JM Linacre, who was also a member of “The Ensemble” team.
On March 12, 2010, Netflix announced that it would not pursue a second Prize competition. The decision was made in response to a lawsuit and to Federal Trade Commission privacy concerns.
Although the data sets were constructed to preserve customer privacy, the Prize has been criticized by privacy advocates. In 2007 two researchers from the University of Texas at Austin were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database.
On December 17, 2009, four Netflix users filed a class action lawsuit against Netflix, alleging that Netflix had violated US fair trade laws and the Video Privacy Protection Act by releasing the data sets. There was public debate about privacy for research participants. On March 19, 2010, Netflix reached a settlement with the plaintiffs, after which they voluntarily dismissed the lawsuit.
- Open innovation
- Innovation competition