An Evaluation of Eater's BruniBetting [charts]
The New York Times' Frank Bruni is a tough nut to crack. He often writes restaurant reviews that read like two- or even three-star reviews, but awards only one star. Or, conversely, he writes a somewhat harsh review but awards two stars. The nebulous, contradictory nature of his reviews makes it difficult to anticipate how many stars he's going to award.
Eater tries to predict the star ratings with their weekly BruniBetting, going so far as offering odds as if it's a horse race. But I always wondered: how accurate are Eater's predictions?
There was only one way to truly know, and that was to put some headphones on, plug all the data into a spreadsheet, and wield some Excel kung fu. Here's a publicly viewable Google Doc spreadsheet with all the raw data of Eater's BruniBetting vs. the actual New York Times star ratings, which I then imported into Excel to generate some fancy charts.
The results are alarming: overall, Eater's BruniBetting only has a 59% accuracy rate.
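The accuracy rate is just correct predictions divided by total reviews. Here's a minimal sketch of that calculation in Python, using made-up sample rows as stand-ins since the real data lives in the spreadsheet:

```python
# Illustrative (restaurant, Eater's predicted stars, Bruni's actual stars)
# rows -- NOT the real BruniBetting data, just stand-ins for the math.
rows = [
    ("Example A", 2, 2),
    ("Example B", 1, 1),
    ("Example C", 2, 1),
    ("Example D", 3, 3),
    ("Example E", 1, 2),
]

# A prediction counts as correct when predicted stars match actual stars.
correct = sum(1 for _, predicted, actual in rows if predicted == actual)
accuracy = correct / len(rows) * 100
print(f"{correct}/{len(rows)} correct = {accuracy:.0f}% accuracy")
```

The spreadsheet does the same thing with a COUNTIF-style tally over the prediction and rating columns.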
From casually glancing at the data, here are some basic observations and interesting facts:
- Eater's predictions are trending downward: a 59% success rate so far in 2008, versus a 64% success rate in 2007.
- The longest run of correct predictions was seven in a row, starting April 10, 2007, with E.U., and ending May 22, 2007, with Resto.
- The longest run of incorrect predictions was way back in 2006, with five in a row.
I fixed a couple of incorrect values in the Eater BruniBetting Matrix, but it didn't affect the percentage of correct predictions.
I also added a new column with the odds Eater's oddsmakers gave, and compared them to the NYT ratings.
Bad news: the oddsmakers nail it only 53% of the time, versus Eater's 59% correct prediction rate.
* Please note that all charts and figures are valid as of October 20, 2008. I'll be updating the spreadsheet as time goes on.