
Blinding You With Wine Evaluation Science! (VineSleuth Data Shows That Expert Wine Tasters Are Actually Consistent)

Vinted on August 15, 2013 under commentary, wine news

Piling onto so-called expert wine evaluators has become all the rage lately. Remember when the California State Fair commercial wine competition judges got steamrolled (again) by data showing that blind tasting medals are awarded in a random distribution?

So expert wine evaluation is all just donkey-bong bunk, right?

Not so fast, Jerky.

According to data collected over the last several months by VineSleuth, it turns out that when we live by the wine evaluation data sword, we also die by the wine evaluation data sword. VineSleuth’s data shows that expert wine evaluators “are able to repeat their observations on individual wine samples about 90% of the time” when tasting wines blind.

Now, where I come from, 90% is a sh*t-ton better performance than can be explained by random chance. It suggests that the blind wine evaluation game isn’t so clearly flawed as some might make it out to be.

And before you start manically flailing away at your keyboards typing me flaming e-mails about how the experts chosen for VineSleuth’s analysis must not actually be experts, or that their (patent-pending and proprietary) methodology is somehow flawed, you should know that they ran it with the help of sensory scientists and numerical algorithms researchers/experts, and that they stocked their tasting panels with folks who make their livings tasting wine: winemakers, oenologists, sommeliers, writers… and little ol’ me.

And pretty soon, you’ll be able to test out my work for yourself…

I served as a “core evaluator” for VineSleuth’s upcoming wine app, wine4.me, as part of a process based on scoring wines by intensity. The results then went through some pretty rigorous and proprietary methods of analysis (statistical analysis that rejected any inconsistent, inaccurate, or imprecise data) to ensure that the data tested “clean” and repeatable from a scientific standpoint.
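(To be clear, what follows is NOT VineSleuth’s actual method, which is proprietary and under NDA; it’s just a back-of-the-napkin Python sketch of how a “repeatability” number like that 90% figure could be calculated, assuming an evaluator scores the same blind wine twice on a handful of intensity attributes, and two scores “agree” when they land within a point of each other.)

# Hypothetical illustration only; not VineSleuth's (proprietary, NDA'd) analysis.
# Count how often an evaluator's intensity scores "repeat" across two blind
# passes over the same wine, using a made-up 0-5 scale and a 1-point tolerance.

def repeatability(pass_one, pass_two, tolerance=1):
    """Fraction of attributes scored consistently across two blind passes."""
    attributes = pass_one.keys() & pass_two.keys()
    consistent = sum(
        1 for a in attributes
        if abs(pass_one[a] - pass_two[a]) <= tolerance
    )
    return consistent / len(attributes)

# Made-up scores for the same wine, tasted blind on two separate occasions:
first  = {"fruit": 4, "oak": 2, "acidity": 3, "tannin": 3, "body": 4}
second = {"fruit": 4, "oak": 4, "acidity": 3, "tannin": 3, "body": 4}

print(f"Repeatability: {repeatability(first, second):.0%}")  # -> 80% for this toy data

Again: toy numbers, toy math, purely for illustration.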

In other words, VineSleuth’s team knows exactly how consistent (or inconsistent) I am at blind wine sensory evaluation, and I’m somewhere in the realm of 90%, which I’ll gladly take [insert golf claps here]. I see that much less as an ego thing and much more as an “if you do something for 10,000 hours, you’ll probably get good at it” thing.

This kind of thing wouldn’t normally float my gloat boat, but it took on extra significance when I was told that I was (understandably, considering the tasters with whom I shared the experience) the dark horse candidate in all of this work. According to my friend and VineSleuth CEO/co-founder Amy Gross, the decision to include me in the process went down something like this (paraphrased version of events):

Super-smart scientists: “Amy, here is the list of evaluators we think that you should use for your project. These are experts who we strongly suspect will be excellent tasters.”

Amy: “Okay, great! I want to add this guy, too – Joe Roberts, from 1WineDude.com.”

Super-smart scientists (looking quite concerned): [Awkward pause] “Uhm… okaaaaay… but when he totally blows it and we have to reject ALL of his data, that’s on you.”

As for the VineSleuth data analysis process, there’s not much I’m allowed to tell you about it (due to an NDA), but I think I’m permitted to mention that the scientific rigor with which the tasting sessions were executed, and the strong focus on cleaning the resulting data, set this sort of wine evaluation quite far apart (as in, say, North-Pole-to-Antarctica-distance apart) from any wine competition at which I’ve judged so far. Not that the comps are somehow incompetent in their execution (far from it, in fact; see my take on the 2013 Critics Challenge as an example of how it’s done right); it’s just that they have an entirely different focus than VineSleuth’s work. That different focus (pinpoint sensory evaluation versus quick quality assessment) might account for why the VineSleuth evaluator results are so strikingly different from those being proffered by wine competition detractors lately.

The main point is that, while the two kinds of evaluation have somewhat different focuses, non-rigorous (in the scientific sense) wine competition data shows expert wine tasters to be inconsistent, while rigorous, clean scientific data shows that expert tasters can repeat their evaluations about 90% of the time. So which data set are you gonna go with (I know which one I’m picking)?

VineSleuth is about to release wine4.me, a smartphone app and (eventually) a web application. You can sign up to be a beta tester now at http://wine4.me and put all of my tasting evaluation work to, well, work.

Cheers!
