I’ve always had a bit of a mixed reaction to the reports published by the Journal of Wine Economics. On the one hand, I love the fact that serious statistical attention is being given to topics like wine awards, in the hopes that scientific examination will help reveal more about how wine and consumers interact. BUT… I’ve also had to deconstruct their lead articles to highlight conclusions drawn from their analysis that I felt weren’t adequately supported by their data.
Well, now it seems that the American Association of Wine Economists has gone off the deep end.
The latest issue of the JWE (Volume 4, Issue 1, Spring 2009) contains a lead-off article by Robert T. Hodgson titled An Analysis of the Concordance Among 13 U.S. Wine Competitions. After reading the nine-page analysis, I’d go so far as to say that the AAWE’s release is bordering on being totally irresponsible. In my opinion, the science of how the statistics are applied is, at best, specious, and at worst might be downright deceitful.
Heady criticism, right? Let’s get deconstructin’!
The report examines data from 13 U.S. wine competitions in 2003. Here’s an excerpt from the article abstract (emphasis is mine):
“An analysis of the number of Gold medals received in multiple competitions indicates that the probability of winning a Gold medal at one competition is stochastically independent of the probability of receiving a Gold at another competition, indicating that winning a Gold medal is greatly influenced by chance alone.”
Stochastic independence is simply another way of saying that the events are not related. For example, if you roll a die twice, the outcome of the first roll has no effect on the outcome of the second; the two rolls are independent. In other words, a wine winning a medal in one competition doesn’t impact what it will or won’t win in another competition. Which is exactly what you’d expect from a different competition, with different judges, and competing against different wines. The problem is that none of those other conditions are detailed in the JWE report.
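For anyone who did doze through Stats class, independence has a precise meaning: the probability of both events happening equals the product of their individual probabilities. Here’s a quick Python sketch (my own illustration, nothing to do with the report’s data) that checks this for the two-dice example:

```python
import random

random.seed(42)
TRIALS = 100_000

# Roll two dice many times and check the defining property of
# stochastic independence:
#   P(first is 5 AND second is 5) ≈ P(first is 5) * P(second is 5)
first = second = both = 0
for _ in range(TRIALS):
    a = random.randint(1, 6)
    b = random.randint(1, 6)
    first += (a == 5)
    second += (b == 5)
    both += (a == 5 and b == 5)

p_first = first / TRIALS
p_second = second / TRIALS
p_both = both / TRIALS

# Both quantities should land near 1/36 ≈ 0.0278
print(p_both, p_first * p_second)
```

If the two rolls were somehow linked (say, a loaded second die that copied the first), the product rule would fail — which is exactly the test the JWE paper is running on gold medals across competitions.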
Ignoring the fact that 13 competitions might not be a statistically meaningful sample, failing to detail the other factors that would certainly impact the outcome of the wine competitions is a seriously glaring omission.
Things get worse…
Where the AAWE report drops the cork is when it makes the leap (based on analysis of partial data) to a conclusion that inappropriately challenges the validity of the wine competitions:
“An examination of the results of 13 U.S. wine competitions shows that (1) there is almost no consensus among the 13 wine competitions regarding wine quality, (2) for wines receiving a Gold medal in one or more competitions, it is very likely that the same wine received no award at another, (3) the likelihood of receiving a Gold medal can be statistically explained by chance alone.”
The report reaches this conclusion by analyzing data gathered on gold medals awarded to wines at a small number of competitions, held in one year in one country, without revealing any details on key elements that could significantly impact the outcome of the competitions:
- who the judges were
- what different wines were entered in one competition vs. another
- how many wines were tasted by each judge at each competition…
But they do graph the results against a binomial distribution, which of course sounds and looks official to anyone who didn’t pay attention in university Stats class.
“Examining the form of the distribution of Gold medals received by a particular wine entered in various competitions suggests a simple binomial probability distribution. This distribution mirrors what might be expected should a Gold medal be awarded by chance alone.”
The graph looks compelling, but there’s one problem: the conclusion is probably total bullsh*t.
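For reference, here’s what a pure-chance model actually claims, sketched in Python. The numbers are my own placeholders (a wine entering n = 13 competitions, each handing out Gold to 9% of entries at random — neither figure comes from the report): if Golds really were coin flips, the count of Golds per wine would follow a binomial distribution.

```python
import math
import random

random.seed(0)

# Hypothetical placeholders, NOT figures from the report:
# n competitions entered, each awarding Gold by chance with probability p.
n, p = 13, 0.09

def binom_pmf(k, n, p):
    """Probability of exactly k Golds out of n independent entries."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Simulate many "wines" whose Golds are decided by chance alone...
wines = 50_000
counts = [0] * (n + 1)
for _ in range(wines):
    golds = sum(random.random() < p for _ in range(n))
    counts[golds] += 1

# ...and compare the simulated frequencies against the binomial PMF.
for k in range(5):
    print(k, counts[k] / wines, round(binom_pmf(k, n, p), 4))
```

The catch, of course, is that a close match between the observed medal counts and a curve like this is consistent with chance — it doesn’t rule out the unreported factors (judges, fields of entrants, tasting loads) producing a similar-looking distribution.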
The problem with this pseudo-scientific view is that it’s a bit like saying that I am always going to be stronger than my friend Bob, because in 13 attempts I jumped an average of fifteen feet into the air, while my friend Bob jumped only four feet. Therefore, we can conclude statistically that I am stronger than my buddy Bob. Oh, but we left out little tidbits that might influence our conclusion – like the fact that I jumped from a trampoline on the surface of the moon, while poor Bob jumped from a standstill on a paved road in Iowa, while nursing a sprained left ankle.
You get the idea.
I’m not defending gold medal awards at wine competitions. Personally, I don’t pay any attention to them and I certainly don’t use them for recommending wines to others. The competitions may, in fact, be total crap, and the judging in those competitions may in fact border on random. But the latest AAWE report shouldn’t be used as a compass for navigating that kind of judgment.
The data is probably totally legit, but the analysis (as presented in the AAWE report, anyway) ignores far too many factors for the conclusions to be even close to scientific.