As of last week, the results of the 2013 California State Fair Commercial Wine Competition have been fully revealed, and July 4th seemed an auspicious time to recap the (all-American) Best of Show winners from the comp (itself a bit of an American institution, having been established in the 1800s), and to share my thoughts on my fave wine of the competition from the judge’s seat.
And now that I’ve completed my tour of the International wine judging circuit for 2013 (having lent my palate to the 2013 Argentina Wine Awards, the 2013 Wines of Portugal Challenge, the 2013 CA State Fair Commercial Wine Competition and the 2013 Critics Challenge), this also seems like a good opportunity to confirm or bust up several wine competition myths, since wine comps. in general are once again under attack in the media as “junk science” (can anyone, anywhere, name one single soul who has ever proffered wine competition judging as an actual scientific endeavor? Because I’d like to be first in line to kick that person in the gluteus max).
First, let’s tackle the wine comp. myths, because that will go a long way in explaining why some of the wines won Best of Show in the newly-revamped CA State Fair comp. (now headed up by my friends and long-time wine writers Mike Dunne and Rick Kushman, both of whom have done yeoman’s work in bringing new levels of both fun and professionalism to the event)…
Warning… 1800+ word screed ahead… you have been warned!…
Wine Competition Myth Number 1: Wines compete as the best-of-the-best
Verdict: BUSTED
Wine competitions are competitions between the wines that get entered. Period. Anyone who tells you differently is trying to sell you something (hint: that something is probably wine, in a tasting room). Generally speaking, the more prestigious the competition (due to the judges involved, the history of the event, the quality of the wines typically entered, etc.), the more likely better wines and top-tier producers are to enter it. The best of the best don’t need to enter comps., because they can sell their wines without medals. For many a wine brand, the medals offer a means of differentiating their wares from those who don’t have medals hanging around their bottle necks for whatever reason.
Wine Competition Myth Number 2: Bronze Medals in American wine comps. are basically meaningless.
Verdict: CONFIRMED
One of my biggest pet peeves with wine competitions in the Americas (both North and South) versus those in Europe is that Bronze medals are awarded to wines that are not flawed but otherwise aren’t showing much on the day they’re judged. Usually, this is because those wines suffer from an affliction that’s almost worse than being bad: they’re boring as hell. They’re the “C” grade wines in my (stupid) wine reviewing system. They’re wines that induce lassitude both from the effects of their alcohol content and their one-sided dullness. As a result, bronze-medal-winning wines in most American wine competitions can be safely ignored by the American public, but usually aren’t, because winning any medal fashioned after Olympic-styled accolades is viewed as having achieved something more than just showing up and not being bad. Thankfully, the Critics Challenge comp. doesn’t even bother with Bronze medals, I think for that very reason. I wish more American comps. would follow suit.
Wine Competition Myth Number 3: All wine competition judging is inherently bullsh*t.
Verdict: Part One – PARTIALLY-BUSTED
Every so often (which is to say, every twelve hours or so), wine rating and wine competition judging are attacked in the media as being the equivalent of bovine turd-iness. The latest attack comes via an article in The Guardian, in which David Derbyshire cites an experiment that retired professor turned Humboldt County wine proprietor Robert Hodgson conducted on judges from the CA State Wine Fair Comp.
The experiment, which dates back to 2005, is one that has been performed on me several times in blind tasting sensory evaluation panels, and in summary consists of serving the same wine blind a few times within the same tasting session, presumably to see how consistent the judges/evaluators are in scoring that same wine. According to the article, Hodgson found the results disturbing:
“Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year. Chance has a great deal to do with the awards that wines win… They say I’m full of bullshit but that’s OK. I’m proud of what I do. It’s part of my academic background to find the truth.”
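For the statistically curious, the shape of that experiment is simple to sketch. Here’s a minimal, hypothetical Python illustration of a triplicate-pour consistency check — the scoring scale, the noise model, and the 4-point tolerance here are my assumptions for illustration, not Hodgson’s actual methodology:

```python
import random

def judge_scores(true_quality, noise, n_pours=3, seed=None):
    """Score the same wine blind n_pours times; each pour gets fresh tasting noise."""
    rng = random.Random(seed)
    return [round(true_quality + rng.gauss(0, noise)) for _ in range(n_pours)]

def is_consistent(scores, tolerance=4):
    """Call a judge 'consistent' on a wine if all their scores fall within a narrow band."""
    return max(scores) - min(scores) <= tolerance

# A steadier palate vs. a noisier one, scoring the identical wine three times:
steady = judge_scores(true_quality=90, noise=1.0, seed=1)
erratic = judge_scores(true_quality=90, noise=6.0, seed=1)
print(steady, is_consistent(steady))
print(erratic, is_consistent(erratic))
```

The point of the sketch: even a judge whose “true” opinion of the wine never changes will produce a spread of scores once pour-to-pour noise enters the picture, and the width of that spread — not the judge’s opinion — is what a consistency study measures.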
I respect what Robert Hodgson is doing here (even if his website proclaims that his winery is “recognized for producing medal winning wines in both national and international wine competitions for over 30 years” – I’m guessing these are the same comps. he’s now trying to debunk; they picked up a Silver this year, by the way). And I deeply respect the fact that whatever reverts to mean over time must, by mathematical definition, be random in its result. This is true for actively managed mutual funds (seriously – the data on that are unequivocal, and if you have an adviser who has recommended any such funds to you, you should fire that person immediately). It’s also true for athletic performance, but only a fool would assume that the reasons for random results would be identical in those two cases.
And so it also stands to reason, I think, that Hodgson’s wine comp. study results are random, but not for the same reasons that athletes’ winning records and mutual fund returns are random. I’d posit that wine comps. have random results because humans are involved, and neither they nor the wines they are judging are ever static, with a cherry-on-top reminder that, unlike index funds in the mutual fund business, no superior alternative to human judging at wine comps. yet exists.
In other words, Derbyshire’s assessment that “Over the years [Hodgson] has shown again and again that even trained, professional palates are terrible at judging wine” is crap. Hodgson’s work has only shown – so far – that a wine’s performance in medal awarding by expert judges is inconsistent.
Well… no f*cking duh, dude.
Verdict: Part Two – PARTIALLY-CONFIRMED?
I’ve no idea how inconsistent I was at the CA State Fair comp. (and it’s not clear to me from the article if I was one of the guinea pigs or not). However, for the past year or so I’ve been involved in a sort-of-secret sensory evaluation group that has met periodically in the Finger Lakes. I’ve been told personally by the organizers of that group (not connected with the comps. I’ve mentioned in any way) that the data I provide in my tastings for them are consistent enough statistically to be used as “good” data for their purposes, and I’ve been invited back to every session that they’ve had to date (I’d tell you more on this, but I can’t as I’m under an NDA). Which suggests some consistency on my part, and which bothers the hell out of me.
It bothers me because fine wine is totally inconsistent. Fine wine should be changing in the bottle and in the glass. The wine I taste one minute should be different from the one I taste several minutes later, if the wine is any good. Wine changes, our tasting of it changes, and we’d probably need chaos theory levels of math to incorporate the vast number of variables influencing the outcomes of quality assessment. It could even be argued that if I – or any judge – were doing the job right, then our results ought to be inconsistent.
The trouble with drawing meaningful conclusions here, even from good data, is that what gets labeled as random noise in most other studies is actually an essential, part-and-parcel cause-and-effect for the results when it comes to wine. Put another way, do you know what could change a wine from a gold medal winner in one competition to a loser in another, even among the same judges? Anything. The barometric pressure, whether or not I had an argument with somebody, needed to take a dump, had a great song stuck in my head, ate a good breakfast, saw too much of the color red on billboard ads on the way into the judging hall that day, or got a pour into a glass that got polished with the wrong towel… You get the idea.
I’m not faulting Hodgson’s results, but am faulting the Guardian’s conclusions. It’s not that all competition judges suck at what they do, it’s that their task is handicapped into an artificial situation from the start. And if a competition compels judges to spend far more time sampling wine than any normal human ought to spend doing, well, anything, then we need to start asking ourselves if it’s our assumptions that are off before we start throwing stones at hard-working humans because they fail to act like machines.
To be fair, both Derbyshire and Hodgson hint at this in the Guardian article, particularly when Hodgson is quoted as saying “I think there are individual expert tasters with exceptional abilities sitting alone who have a good sense, but when you sit 100 wines in front of them the task is beyond human ability.” No disagreement there, provided that we add “to remain consistent in the face of such myriad ways in which it can get entirely F-ed up, even at the most well-run competitions” to the end of that quote (it’s implied, right?).
Whether the inconsistent results are down to the people, the wines, the environment, or all three, the moral of the story is this: if your assumption is that a wine should win the same medal every time in any given competition, then you’re just as much a fool as the high-fee, active mutual fund buyer. But if you also think that you shouldn’t tout a gold medal result (if you’re fortunate enough to win one) to help you market your wine, then you’re also a fool. The system of quickly evaluating a wine isn’t natural, isn’t perfect, and isn’t simple, and so if our assumptions are wrong (e.g., humans have robot-like quality assessment ability, wine is static, etc.) then our conclusions based on the results are bound to be off, too.
So here’s to off conclusions, as we salute the Best of Show winners:
Best of Show Dessert
2012 Navarro Vineyards Gewurztraminer Cluster Select Late Harvest (Mendocino, $28)
Best of Show Sparkling
NV Korbel Blanc de Noirs Méthode Champenoise (California, $12)
Best of Show White
NV Barefoot Cellars Moscato (California, $6)
And, finally, my personal fave of the wines I tasted at the comp.:
Best of Show Red
2010 Imagery Estate Winery Cabernet Franc (Sonoma Valley, $39)
Not my favorite style of Cab Franc, but I had a crap-ton of respect for this wine. It combines the lush, sweet, extracted fruit, silky tannins, and generous nature of the CA red style with the herbaceous, spicy, and vibrant nature of Cab Franc from the East Coast and Europe. Chinon or Virginia it’s not, and the marriage of stylistic components isn’t quite perfect, but the wine delivers big-time on the fronts of pleasure, complexity, and intellectual curiosity. For many Left Coasters, this could be the Cab Franc gateway drug; for Right Coasters, it could be the CA red gateway drug; for Midwesterners… it will definitely go with steak, so you’re covered, too.
Cheers – and have a safe & happy Independence Day!