Articles Tagged wine scores
There’s been a good bit of discussion lately on the Global Interwebs over a recent blog post by the wine-data-focused David Morrison (to which I was alerted by intrepid friend-of-1WD Bob Henry).
In that post, Morrison puts the scores of two of Wine Spectator’s then-critics-both-named-James, James Laube and James Suckling, through the data-analysis wringer, focusing on scores they gave to wines as part of WS’s “Cabernet Challenge” of 1996.
Generally speaking, Morrison’s blog post, while enviably thorough, can justifiably be criticized as much ado about nothing, considering that no one in their right mind could draw any statistically relevant conclusions from such a small data set. The summary version is that he found a high level of disagreement in the scores that the two Jameses gave to the same wines. Morrison draws out some interesting suggestions from this finding, though, primarily about the use of numbers when evaluating wine quality; to wit (emphasis is mine):
“The formal explanation for the degree of disagreement is this: the tasters are not using the same scoring scheme to make their assessments, even though they are expressing those assessments using the same scale. This is not just a minor semantic distinction, but is instead a fundamental and important property of anything expressed mathematically. As an example, it means that when two tasters produce a score of 85 it does not necessarily imply that they have a similar opinion about the wine; and if one produces 85 points and the other 90 then they do not necessarily differ in their opinion.”
So… where have we heard that before?
Oh, that’s right, we heard it right here on 1WD. Several times, actually…
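Morrison’s point is easy to demonstrate with a toy example. In the sketch below, the two critics’ score lists are entirely invented (they are not the Cabernet Challenge data): both critics use the same 100-point scale and even land on identical averages, yet their rankings of the wines disagree strongly.

```python
# Two hypothetical critics scoring the same ten wines on the 100-point
# scale. These scores are invented for illustration -- they are NOT the
# actual 1996 Cabernet Challenge data.
critic_a = [85, 88, 90, 86, 92, 84, 89, 91, 87, 93]
critic_b = [90, 85, 86, 92, 84, 91, 87, 93, 89, 88]

def mean(xs):
    return sum(xs) / len(xs)

def ranks(xs):
    """Rank positions (0 = lowest score) for each wine."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def pearson(x, y):
    """Plain Pearson correlation; applied to ranks it gives Spearman's rho."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Identical averages on the same scale...
print(mean(critic_a), mean(critic_b))          # prints: 88.5 88.5

# ...but the rank correlation between the two sets of scores is negative:
rho = pearson(ranks(critic_a), ranks(critic_b))
print(round(rho, 2))                           # prints: -0.43
```

In other words, the summary numbers can look interchangeable while the underlying assessments are nearly opposite, which is exactly the “same scale, different scoring scheme” trap Morrison describes.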
Read the rest of this stuff »
White wines get the review shaft (image: winecurmudgeon.com)
A little over a week ago, my friend Jeff Siegel published details of a study by Suneal Chaudhary, PhD, who analyzed over 64,000 wine scores, dating back to the ’70s, from “major wine magazines.” The study’s aim was to ascertain whether red wines routinely receive higher point-score reviews than white wines (other styles were presumably ignored).
Long-time 1WD readers know that I have become a big fan of statistically relevant data, and the data in this case (including how they were handled) qualify on all counts: sample size, time duration, and applied analysis.
It’s dangerous to draw too many conclusions, but Jeff summed up the congruence of the findings with the common sense experiences of wine geeks everywhere nicely in his original post on the subject:
“We don’t pretend that these results are conclusive, given the variables involved. Red wines may be inherently better than white wines (though that seems difficult to believe). They certainly cost more to make, and that might affect the findings. The review process itself may have influenced the study. Not every critic publishes every wine he or she reviews, and those that were published may have been more favorable to reds than whites. And, third, the scoring process, flawed as it is, may have skewed the results regardless of what the critics did.
Still, given the size of the database, and size matters here, Suneal’s math shows something is going on. And that’s just not our conclusion. I asked three wine academics to review our work, and each agreed the numbers say that what is happening is more than a coincidence. That’s the point of the chart that illustrates this post – 90 percent of the 2010 red wines that we had scores for got 90 points or more.”
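For readers curious what “more than a coincidence” means in testing terms, here is a minimal sketch of one such check, a permutation test on synthetic scores. Everything below is invented for illustration; it is not Suneal’s dataset or his method.

```python
import random

random.seed(42)

# Synthetic scores only -- invented for illustration, NOT the 64,000-score
# dataset discussed above. Reds are drawn slightly higher on average.
reds   = [round(random.gauss(90, 2)) for _ in range(500)]
whites = [round(random.gauss(88, 2)) for _ in range(500)]

observed = sum(reds) / len(reds) - sum(whites) / len(whites)

# Permutation test: if color were irrelevant to the score, shuffling the
# red/white labels should produce mean gaps as large as the observed one
# fairly often. Count how often that actually happens.
pooled = reds + whites
trials = 2000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:500]) / 500 - sum(pooled[500:]) / 500
    if diff >= observed:
        count += 1

p_value = count / trials
# A tiny p-value says a gap this size almost never arises by chance.
print(observed, p_value)
```

With a sample in the tens of thousands, even a modest average gap between reds and whites would survive this kind of check, which is presumably what the reviewing academics meant by “more than a coincidence.”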
What to make of all of this?
Personally, I think that we wine geeks ought to be a bit more flabbergasted at the discrepancy, considering that, in general, white wines are superior to reds aromatically…
Read the rest of this stuff »
This little meme-type-thingy that I generated several days ago got a bit of traction on The Book of Face, and so I thought that I’d elaborate a bit on the position behind it (friends of mine will enjoy the RDJ inclusion, since they are constantly telling me “dude, you are so RDJ as Tony Stark, except you are Tony Snark!!!”):
Seriously, people, can we just stop shoving wine scores into the faces of media types?
Here’s the deal:
I know that you’re justifiably proud of the scores that your wine received from [ insert major wine publication here ]. By all means, use them to help you sell wine: advertise them, put them on shelf talkers, teach your tasting room staff to wax philosophical in fake humility about them to your winery’s visitors.
I don’t want to know about them, for two (to me) very important reasons:
1) Putting my critic-type-guy hat on for a moment, I’d rather not know what other critic-type-people have said about your wine. I don’t want it to influence me, even if subconsciously, so I strongly feel it’s best to just not go there until I’ve had a chance to taste it (usually using a very different process than what’s employed at those publications) and have at least formulated the genesis of an opinion on my own. What you’re implying by continually mentioning the score is that if I disagree with it, then you will think that one of us is wrong (and I am pretty sure that I know which of us that will be).
2) As a knowledgeable wine consumer, I might find a score helpful, and I might not. It’s not that simple; it depends on who is giving the score, their history of such scores, etc., etc., etc. I know what I like, from a purely subjective point of view, and so I’ll just formulate my own opinion on how good your wine is for me (thankyouverymuch). By the way, I strongly suspect, given the fact that it’s easier than ever before for wine consumers to become knowledgeable, that I am not the only person buying wine for personal consumption who feels this way.
So… can we be done here, please?
During one of my many recent Left Coast jaunts, I had a rather disturbing conversation with a California winemaker over dinner. It’s a conversation that haunted me for weeks afterward, until I could catch up with that particular winemaker (who needs to remain anonymous for reasons that should become obvious very quickly) and get more detailed information on the topic that we’d discussed.
The short version of the story is that the winemaker with whom I spoke referenced a matrix he’d developed that set qualitative and quantitative targets – for color, extraction, sugar levels, tannin, taste profiles, etc. – for achieving specific scores from specific U.S. wine critics.
Not a wide range of scores, but in some cases targeted score ranges that were quite narrow (within three or four points).
While the matrices began innocuously enough, as reference points for achieving certain styles or quality levels of wines more quantitatively, in some cases they morphed into tools meant to target specific scores from certain critics for marketing purposes. And this winemaker hinted that such matrices / formulas were and are fairly common tools in fine winemaking in California.
Apparently, certain characteristics are almost guaranteed to get you a better chance at a particular score from particular critics. There are companies that will do similar analysis and – for a fee that is apparently not unsubstantial – will tell you when (harvest timing, etc.) and how (extraction, manipulations, etc.) to make your wine, based on that analysis, complete with target score ranges in major wine publications…
Read the rest of this stuff »