
MaxDiff and March Madness


Last night, the Kansas Jayhawks made March Madness history with an epic comeback over the North Carolina Tar Heels to win the NCAA championship game.



Nate Silver’s team at @FiveThirtyEight not only makes predictions at the start of the tournament, but also shares live win probabilities that update as the score changes. Really fun to watch if you’re a stats AND basketball nerd like me.



But perhaps you don’t have all the predictive tools that the FiveThirtyEight team has. Another solution is to poll your closest, most basketball-savvy friends and ask them: which team is most likely to win? And of course, you can do this by creating a quick MaxDiff survey.


So, last week, we put the teams that had made it to the Sweet Sixteen into a MaxDiff survey, and we asked our network…



What is MaxDiff?


MaxDiff is an approach for measuring consumer preference across a list of items. Items could include messages, benefits, images, product names, claims, brands, features, packaging options, and more! In this case, our items are the NCAA men’s Sweet Sixteen teams.


In a MaxDiff exercise, respondents are typically shown 2-6 items at a time and asked to indicate which is best and which is worst: most appealing/least appealing, most likely to purchase/least likely to purchase, etc. Or here, most and least likely to win the tournament.


This task is repeated many times, showing a different set of items in each task.


Tip #1 - Ask respondents about 3 to 5 items at a time in a Best-Worst experiment to capture more information more quickly than asking about just 2 items.


Tip #2 - Try to show each respondent every team at least once. If your list of items is small, try to show each team to each respondent 3 times, balancing the length of the exercise against how much information you can gather to inform your model.
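To make those two tips concrete, here’s a minimal Python sketch of dealing 16 teams into screens of 4 so that each respondent sees every team exactly 3 times (12 screens per respondent). Real MaxDiff software uses carefully balanced designs that also control how often pairs of items appear together; this randomized version is just for illustration, and the team names are placeholders.

```python
import random

TEAMS = [f"Team {i}" for i in range(1, 17)]  # stand-ins for the 16 real teams
ITEMS_PER_TASK = 4    # tip #1: show 3-5 items per screen
APPEARANCES = 3       # tip #2: show each team 3 times per respondent
N_TASKS = len(TEAMS) * APPEARANCES // ITEMS_PER_TASK  # 16 * 3 / 4 = 12 screens

def make_design(seed=None):
    """Deal every team into tasks so each appears exactly APPEARANCES times,
    re-shuffling until no single screen repeats a team."""
    rng = random.Random(seed)
    while True:
        pool = TEAMS * APPEARANCES
        rng.shuffle(pool)
        tasks = [pool[i * ITEMS_PER_TASK:(i + 1) * ITEMS_PER_TASK]
                 for i in range(N_TASKS)]
        if all(len(set(task)) == ITEMS_PER_TASK for task in tasks):
            return tasks

for i, task in enumerate(make_design(seed=1), start=1):
    print(f"Task {i:2d}: {', '.join(task)}")
```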



The Results


Next, we can use the power of math (hierarchical Bayesian regression, to be exact) to build a model that lets us stack rank all 16 teams against each other.
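A full hierarchical Bayesian model is more than a blog snippet, but the core engine underneath it, a best-worst multinomial logit, can be sketched in a few lines of NumPy: the "best" pick is modeled as a choice from softmax(u) over the items shown, and the "worst" pick as a choice from softmax(-u). This is a simplified aggregate version for illustration, not the individual-level HB model we actually ran.

```python
import numpy as np

def fit_maxdiff_logit(tasks, n_items, lr=0.05, n_iter=2000):
    """Aggregate best-worst multinomial logit fit by gradient ascent.
    tasks: list of (shown, best, worst) tuples of item indices.
    Returns one mean-centered utility per item."""
    u = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for shown, best, worst in tasks:
            s = np.array(shown)
            p_best = np.exp(u[s]); p_best /= p_best.sum()      # P(best pick)
            grad[s] -= p_best
            grad[best] += 1.0
            p_worst = np.exp(-u[s]); p_worst /= p_worst.sum()  # P(worst pick)
            grad[s] += p_worst
            grad[worst] -= 1.0
        u += lr * grad / len(tasks)
        u -= u.mean()  # utilities are only identified up to a shift
    return u

# toy data: items 0 and 1 keep winning, item 3 keeps losing
toy = [([0, 1, 2, 3], 0, 3), ([0, 1, 2, 3], 1, 3), ([0, 2, 1, 3], 0, 3)]
print(fit_maxdiff_logit(toy, n_items=4).round(2))
```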


The resulting model offers a score for every team for every individual. Better yet, the results are ratio-scaled, so a score of 10 means that team was twice as likely to win as a team with a score of 5!
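One common way to produce that ratio scale (conventions vary by tool, so treat this as one possibility rather than the exact rescaling we used) is to exponentiate the raw logit utilities and normalize so the scores sum to 100. Ratios of the rescaled scores then match ratios of choice likelihood:

```python
import numpy as np

def ratio_scores(utilities):
    """Exponentiate logit utilities and normalize to sum to 100, so a team
    scoring 10 is twice as likely to be chosen as a team scoring 5."""
    p = np.exp(np.asarray(utilities, dtype=float))
    return 100 * p / p.sum()

print(ratio_scores([1.2, 0.5, 0.5, -0.3]).round(1))  # -> [45.1 22.4 22.4 10.1]
```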

[Chart: stack-ranked MaxDiff scores for the 16 teams]
According to our results, Duke was the most favored team to win, followed by Gonzaga and then Kansas. People were 2x more likely to pick Duke than Purdue, UCLA, or Houston, and 3x more likely to pick Duke than Iowa State, Providence, Miami, Michigan, or Saint Peter’s. But alas, Coach K and the Blue Devils went down in the Final Four round. (If you're reading this, Coach K - hats off to an incredible career!)


Many people are familiar with the stack rank shown above, but did you know that you can also simulate who would win among the Final Four teams? We can use the same model that gives us the stack rank to predict which team each individual would choose in every matchup, regardless of whether they actually saw that matchup in their experiment!
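A typical "first-choice" simulator (sketched below with made-up utilities; the function and numbers are illustrative, not our survey data) simply restricts each respondent's individual utilities to the teams left in the bracket and counts who they would pick:

```python
import numpy as np

def simulate_first_choice(ind_utils, finalists):
    """ind_utils: (n_respondents, n_teams) individual-level utilities from
    the model. finalists: column indices of the teams still in the bracket.
    Returns each finalist's share of first-choice 'votes'."""
    sub = ind_utils[:, finalists]               # restrict to the matchup
    picks = sub.argmax(axis=1)                  # each respondent's winner
    counts = np.bincount(picks, minlength=len(finalists))
    return counts / len(ind_utils)

# made-up utilities for 1,000 respondents x 4 Final Four teams
rng = np.random.default_rng(42)
utils = rng.normal(size=(1000, 4)) + [0.4, 0.0, -0.2, 0.1]
print(simulate_first_choice(utils, [0, 1, 2, 3]).round(2))
```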


For example, among the Final Four teams, 36% of voters thought Kansas would go on to win it all (they were correct), versus only 15% who thought Villanova would win.

[Chart: simulated share of preference among the Final Four teams]
Then, in the final matchup, 2 out of 3 respondents correctly predicted that Kansas would beat out UNC. But it was still pretty 'unpredictable' for those of us who watched the game!
[Chart: simulated Kansas vs. UNC final matchup]
TL;DR - MaxDiff is so useful it can be applied to almost anything!


Thinking of running a MaxDiff? Reach out to our team with questions at info@numerious.com
