A little over a week ago, my friend Jeff Siegel published details of an analysis by Suneal Chaudhary, PhD, who examined over 64,000 wine scores, dating back to the '70s, from “major wine magazines.” The study’s aim was to ascertain whether red wines routinely receive higher point scores in reviews than white wines (other styles were presumably ignored).
Long-time 1WD readers know that I have become a big fan of statistically meaningful data, and the data in this case (including how they were handled) certainly qualify, in sample size, time span, and rigor of analysis.
It’s dangerous to draw too many conclusions, but in his original post on the subject, Jeff nicely summed up how the findings square with the common-sense experience of wine geeks everywhere:
“We don’t pretend that these results are conclusive, given the variables involved. Red wines may be inherently better than white wines (though that seems difficult to believe). They certainly cost more to make, and that might affect the findings. The review process itself may have influenced the study. Not every critic publishes every wine he or she reviews, and those that were published may have been more favorable to reds than whites. And, third, the scoring process, flawed as it is, may have skewed the results regardless of what the critics did.
Still, given the size of the database, and size matters here, Suneal’s math shows something is going on. And that’s not just our conclusion. I asked three wine academics to review our work, and each agreed the numbers say that what is happening is more than a coincidence. That’s the point of the chart that illustrates this post – 90 percent of the 2010 red wines that we had scores for got 90 points or more.”
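Jeff’s post doesn’t spell out the specifics of Suneal’s math, but for readers curious what “more than a coincidence” looks like in practice, here’s a minimal sketch of one common approach: a Mann-Whitney U test comparing two score distributions. To be clear, this is not the study’s actual method, and the scores below are synthetic, generated purely for illustration:

```python
# A hedged sketch of testing whether red scores run higher than white
# scores. The distributions here are made up for illustration; the
# actual study's data and method were not published with the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scores on the 100-point scale, with reds centered
# half a point higher than whites.
red_scores = rng.normal(loc=90.5, scale=2.0, size=40_000).round()
white_scores = rng.normal(loc=90.0, scale=2.0, size=24_000).round()

# Mann-Whitney U: are red scores stochastically greater than whites'?
u_stat, p_value = stats.mannwhitneyu(
    red_scores, white_scores, alternative="greater"
)

print(f"mean red:   {red_scores.mean():.2f}")
print(f"mean white: {white_scores.mean():.2f}")
print(f"Mann-Whitney U p-value: {p_value:.3g}")
# With tens of thousands of scores, even a half-point gap produces a
# vanishingly small p-value -- which is why, as Jeff says, size matters.
```

A nonparametric test like this one has the advantage of not assuming the scores follow a normal bell curve, which seems prudent given how tightly wine scores cluster at the top of the scale.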
What to make of all of this?
Personally, I think that we wine geeks ought to be a bit more flabbergasted at the discrepancy, considering that, in general, white wines are superior to reds aromatically…