Well… whose ratings should a wine drinker pay attention to? Or, stated with a tad more grammatical correctness (warning: sounding-like-douche-bag-potential alert!), To Whose Ratings Should A Wine Drinker Pay More Attention?
An American Association of Wine Economists (AAWE) working paper with that title was just released, though, interestingly, it doesn’t actually answer the question. I will answer it, in a few minutes anyway, but not before torturing you with exposition and report dissection first. Because, well, I’m really just not that nice of a guy.
Despite the bait-and-switch title, the paper starts with a fascinating premise: given that ratings for the same wines vary between professional wine critics (called “experts” in the paper’s lingo), is there an established expert whose ratings correlate closely with those of the general wine-drinkin’ public?
Turns out, there is one – at least, there is one out of the three expert sources that the paper used.
The paper’s authors, Omer Gokcekus and Dennis Nottebaum (no, I do not know how to pronounce those), chose to examine ratings/scores of 120 Bordeaux wines from the 2005 vintage. The voice of the people was played by the scores for those wines as recorded in Cellar Tracker, subsets of which were then compared with the scores for the same wines as reported by three pro wine critic sources. Big-time influencer Robert Parker (via The Wine Advocate) was included, as well as Wine Spectator, so they covered the 1.5 most influential wine critics in the U.S. The third included was Stephen Tanzer’s International Wine Cellar, though to be honest I’ve no idea why they included that last one. Just kidding, Stephen!
Anyway… It’s important to note that the results were aggregated, and this makes them a tad misleading, because the same wines were not compared between all three pro critics and Cellar Tracker – only subsets of the wines were compared (CT to RP, CT to WS, and CT to ST). These were not the same wines (or the same number of wines) in each case, so while some wines in the group were compared against all four ratings sources, other wines were only compared between Cellar Tracker and one of the pro sources. Got it? Good!
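For the statistically curious, here’s a minimal sketch of what that subset-overlap comparison looks like in practice. The scores below are entirely made up (they are not from the paper or from Cellar Tracker), and the critic names are placeholders; the point is just that each critic’s correlation with the community is computed only over the wines both parties actually rated, so each pairwise number can rest on a different slice of wines.

```python
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings for six wines; None means the critic didn't score it.
cellar_tracker = [88, 90, 85, 92, 87, 91]
critic_a       = [93, 95, None, 96, 90, None]   # rated 4 of the 6 wines
critic_b       = [None, 92, 88, None, 89, 94]   # rated a *different* 4

def overlap(ct, critic):
    # Keep only the wines that appear in both score lists.
    pairs = [(c, p) for c, p in zip(ct, critic) if p is not None]
    return [c for c, _ in pairs], [p for _, p in pairs]

for name, critic in [("Critic A", critic_a), ("Critic B", critic_b)]:
    ct_sub, cr_sub = overlap(cellar_tracker, critic)
    print(f"{name}: r = {pearson(ct_sub, cr_sub):.3f} over {len(ct_sub)} wines")
```

Because Critic A and Critic B are each correlated against a different subset of the community scores, the two r values aren’t strictly apples-to-apples – which is exactly the cavil about the paper’s aggregated results.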
Overlooking that minor cavil, the results are pretty darn fascinating…
What does the paper reveal to us in terms of insights gleaned from the study?
- Cellar Tracker users are stern reviewers, giving lower ratings overall than the pros. This will come as a surprise to roughly 0% of all regular CT users.
- The experts agreed more with each other (had a higher score correlation) than they did with the general public. Anyone who has ever followed wine scores will be nodding their heads at that one, too.
- Surprisingly, wine prices correlated more strongly with the median community score than with expert scores. I say “surprisingly” not because I think that pro wine critics’ scores would be the more likely price drivers, but because this result is probably what you’d expect to see in a free market economy – and the wine sales business in the U.S. in particular is more like a totalitarian regime than a free market. Ok, sorry, couldn’t resist that…
- The pro whose ratings most closely correlated with the general public was… Stephen Tanzer! Damn, too bad nobody from the U.S. reads his stuff. Just kidding, Stephen!
Of course, one could make a serious argument that Cellar Tracker users don’t necessarily represent the average wine drinker and are skewed more towards hard-core consumers, but I think it’s as close an approximation as the AAWE is likely to get with a database of scores that can be compared with those of the pros. If you are a CT user, though, this paper suggests that Tanzer is your man; the paper also suggests that you might hate Robert Parker:
“…these regular wine drinkers resent Robert Parker’s influence ‐or shall we say hegemony over the wine community ‐ and systematically challenge his ratings by either giving higher scores to the wines with low RP ratings and lower scores to the wines with high RP ratings.”
Yeah, well, that’s one explanation for why there’s low correlation between RP’s ratings and the CT ratings for the same wines, anyway. But is it likely that there’s a huge RP-resentment conspiracy, or even a general negative vibe, infecting millions of CT users and subconsciously affecting their impressions of a given wine? Maybe, but it seems far more likely (and logical) that the average CT user just doesn’t like those wines as much as RP does. I know how I’d vote. Seemed like a stretch to me.
So… whose ratings should wine drinkers pay attention to? Aside from mine, I mean?
Now, wouldn’t that be something to write about?
Or did I just do that?