I’ve never been a big fan of wine ratings or wine scoring systems. Mostly because I don’t know anyone who speaks in ratings. Even sports fans (who, justifiably, love numbers, rankings, and comparisons) don’t really speak in ratings.
“Man, the Steelers offensive line was totally an 87 in last night’s game…”
I also find it odd that talk of wine ratings generates so much passion whenever the subject comes up. As cases in point, I offer two recent examples:
- Wine Enthusiast editor Steve Heimoff’s critique of Mutineer magazine’s critique of wine ratings (and Mutineer editor Alan Kropf’s response).
- A thread on the excellent wine social networking website OpenWineConsortium.org, titled “What are the faults with the 100 point [wine rating] system” which, as of this writing, has eleven pages of responses.
I shudder to think of the cross-talk that might ensue on the web in response to the granddaddy of wine rating lists, Wine Spectator’s Top 10 Wines of the year (only five of which I’ve actually sampled…).
Me, I’ve changed my tune slightly on wine ratings since I wrote two articles about the trouble with wine ratings (Part 1 and Part 2). That’s because I’ve come to realize something very important when it comes to wine ratings…
There is no trouble with wine ratings.
Think about it – there is no harm at all in rating a wine. In fact, wine ratings have played an integral part in wine criticism, which itself has played an integral part in furthering wine into the incredibly exciting state that it’s in today. There are over 7,000 wine brands available to U.S. wine consumers – somebody has to help consumers make sense of it all. As former wine writer and Ravenswood founder Joel Peterson told me recently over lunch (much more to come on that, by the way, in an upcoming post): “If we didn’t have wine critics, we’d have to invent them!”
The trouble comes in how the ratings are used.
“A rating system makes an assumption that there is an absolute,” said Joel. “We know that there are no absolutes. It’s more a measure of like than of absolute quality.”
To back up his observation, Joel told me a story about a tasting experiment that he performed with a group of experienced wine tasters: he took all of the Zinfandels that he could find that scored 90+ points in the big wine mags, and had them taste the wines blind. The result: all of the wines scored between 85 and 96 points.
Joel then took all of the 90+ scoring wines from that tasting and had them taste those wines again at a later time. The result: the wines scored between 85 and 96 points!
Scoring is relative, and it’s naturally tailored to the taster’s palate. The trouble is, people put too much faith in scores without reading the fine print.
Joel’s take: “Robert Parker was really the change-over point. A wine critic can make or break a wine in the same way that a music critic can make or break a live music performance. Scores are like your training wheels – hopefully you take them off at some point.”
Would you ride down the street proudly on your shiny Schwinn bicycle with banana seat, handlebar horn, and red sparkle paint job with training wheels still attached? All the while bragging to your friends about how you only ride bikes with training wheels on them?
Well, that’s pretty much what you’re doing if you decide to only buy wines from the Wine Spectator top 100 list, or if you insist that a sommelier only show you wines rated 94 points or above when dining at a restaurant.
Where you goin’, training-wheel boy??
Far better, I think, to discover your own palate.
And then ditch those training wheels.
(images: allposters.com, ehow.com)