This is the last of three articles outlining the process of taking a taste panel from n00bs to rockstars. The first two articles covered setting up the tasting environment, training for flavor attribute recognition, attribute intensity training, and building a ballot lexicon. This article covers the actual flavor profiling, and how to manage the data and results.
Part vi: Flavor profiling!
So, your panelists have been trained to the point of being able to reliably detect and identify a few dozen beer flavors without having to consult a list, and they can consistently apply intensity ratings that fall in line with the other panelists’. They’ve also generated a solid list of flavor attributes that need attention in your beers. Good work! It took a significant amount of work, time, and resources for everyone to get to this point. Just be aware, as you move forward through this project, that the type of training your panelists went through should be continued at regular intervals so that they don’t drift from each other or become complacent and inattentive. Just because your panel correctly identified 90% of the spiked samples you presented last year doesn’t mean they’ll be able to do it again next year. Taste panel training is a lengthy process that never really ends – it takes continued effort to keep your panelists performing as the most comprehensive quality control instruments in your brewery. But now it’s time for the rubber to meet the road and generate some brand flavor profiles!
If possible, find some of what your panel considers to be “ideal” examples of these brands – you’ll need enough to taste in multiple replications, so don’t short yourself. If your process variation isn’t tight enough to find ideal examples of a brand, you can instead gather several “typical” examples and run a few extra replications. Present these ideal/typical brand examples to your panel in at least three separate sessions, along with a few other examples of the brand of varying quality to mix up the lineup (perhaps one with a spiked flavor, or some that are less fresh). Don’t present too many, as fatigue sets in quickly when doing brand profiling, particularly with hoppy or boozy brands – I like to stay under ten samples per session. Using the standard ballot your panel helped generate in Part V, have the panel rate each attribute as it is perceived. After the three or more replications, the values for each of the Core and Standard attributes on the ballot can be combined into a profile average. Negative attributes aren’t included in the official brand profile, since the profile should represent the ideal version of the brand, and ideally there are never any negative attributes.
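To make the averaging concrete, here's a minimal Python sketch of how replication sessions collapse into a profile. The attribute names, ratings, and 0–10 intensity scale are hypothetical stand-ins; the actual work in this article is done in spreadsheets.

```python
from statistics import mean

# Hypothetical panel-averaged intensity ratings (0-10 scale assumed)
# for the Core/Standard attributes, one dict per replication session.
replications = [
    {"malty": 5.0, "hop aroma": 6.5, "bitterness": 6.0},
    {"malty": 5.5, "hop aroma": 6.0, "bitterness": 6.5},
    {"malty": 4.5, "hop aroma": 6.2, "bitterness": 6.1},
]

# The brand profile is simply the per-attribute average across sessions.
profile = {
    attr: round(mean(rep[attr] for rep in replications), 2)
    for attr in replications[0]
}
# profile -> {"malty": 5.0, "hop aroma": 6.23, "bitterness": 6.2}
```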
Part vii: Reporting
Now that you have built a profile for one of your main brands, you have a reference point to compare against any future production of that brand. For each future sample, each attribute on the brand’s profiling ballot gets an averaged rating from the panel, which is compared against the intensity described by the brand profile. Results can be plotted and reported in a number of ways, some more sophisticated than others. At the higher end of sophistication is Principal Components Analysis (PCA), a pretty heady multivariate statistical technique that reduces the variation in the data to a few dimensions and reveals patterns and associations that are not necessarily evident otherwise. I won’t go into PCA today, and will instead focus on the sorts of reporting that Microsoft Excel can help with. The most basic, and fairly popular, method is the radar (or spider) plot. Some people have an aversion to these types of charts, but I don’t mind them so long as you’re not trying to plot too many attributes or samples at a time – there’s only so much room in the circle, and the chart gets cluttered quickly. I use them for showing the “shape” of the brand profile, since the Core and Standard attributes make for a simpler set of data to plot than one that includes all the Negative attributes. An example of one of our brands’ profiles is below, in radar plot form. [RADAR PLOT] Plotted in the chart are both the profile and a random sample of the brand, so you can easily see where the main attributes diverged from their intended intensities. The radar plot is decent at providing a snapshot of the important flavor characteristics of the brand, but it can do little beyond that. For a more comprehensive view of a sample’s profiling results, I use what I call a “difference chart”. This type of chart shows how different an attribute’s rating is relative to its intensity in the profile.
The farther from zero an attribute sits on the chart, the bigger the disparity between the measured value and the profile’s value; a sample that perfectly matches the brand profile would therefore be indicated by an empty chart. [DIFFERENCE PLOT] You may notice some extra attributes at the far right of the chart, and those are just that: extra attributes – flavors that a panelist found that aren’t included on the brand’s profiling ballot. Your Negative attribute list will often be insufficient to cover every issue that could arise in the flavor of your products, so I like to leave room for panelists to include their own attributes.
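The difference-chart values are just per-attribute subtractions. A small sketch, with hypothetical attribute names and ratings:

```python
# A difference chart plots (sample - profile) per attribute, so a
# perfect match is all zeros. Attributes and values here are made up.
profile = {"malty": 5.0, "hop aroma": 6.2, "bitterness": 6.2}
sample  = {"malty": 4.4, "hop aroma": 7.0, "bitterness": 6.2}

differences = {attr: round(sample[attr] - profile[attr], 2) for attr in profile}

# Extra attributes (found by a panelist but absent from the ballot) have
# an implied profile intensity of zero, so they plot at their full rating.
extras = {"diacetyl": 1.5}
differences.update(extras)
# differences -> {"malty": -0.6, "hop aroma": 0.8, "bitterness": 0.0, "diacetyl": 1.5}
```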
I bet there is someone in your organization who likes to distill data collected on several variables into a single number, like a percentage score – am I right? As much as I dislike taking this approach with such a broad set of data, I can see how some would find it more digestible and broadly comparable than using 30 numbers to describe a single sample. So, how would someone take all these attribute ratings and turn them into a single score? What I do is add up the distance of each attribute rating from its profile value (averaging results from each panelist), weight each distance by a factor, and then subtract the total from 100. For Core and Standard attributes I use a weight of 1x, and for Negative and Extra attributes I use a factor of 2.5x (each “unit” of intensity that a Negative/Extra attribute receives drops the score 2.5 times faster than each unit that a Core or Standard attribute is off from the profile). I chose these values after some trial and error, testing some truly “off” samples (like old or spiked ones) and seeing where their scores landed relative to “OK” samples. When your fresh, good samples are scoring in the 90s and your terrible samples are scoring below 50, I think you’re in the ballpark of where you need to be. For reference, our lowest performer so far was an unfiltered product packaged in late 2009. Tasted in early 2015, it received a score of 25/100.
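The weighted 100-point score described above can be sketched in a few lines of Python. The 1x and 2.5x weights come from the article; the attribute names, ratings, and scale are hypothetical:

```python
# Weights from the article: Core/Standard deviations count 1x,
# Negative/Extra intensities count 2.5x against the score.
CORE_STANDARD_WEIGHT = 1.0
NEGATIVE_EXTRA_WEIGHT = 2.5

def sample_score(profile, ratings, negatives):
    """profile/ratings: {attribute: intensity} for Core/Standard attributes;
    negatives: {attribute: intensity} for Negative/Extra attributes, whose
    target intensity in the profile is always zero."""
    penalty = CORE_STANDARD_WEIGHT * sum(
        abs(ratings[a] - profile[a]) for a in profile
    )
    penalty += NEGATIVE_EXTRA_WEIGHT * sum(negatives.values())
    return max(0.0, 100.0 - penalty)

score = sample_score(
    {"malty": 5.0, "bitterness": 6.0},   # brand profile
    {"malty": 4.0, "bitterness": 6.5},   # off by 1.0 + 0.5 = 1.5 points
    {"oxidized": 2.0},                   # 2.0 * 2.5 = 5.0 points
)
# score = 100 - 1.5 - 5.0 = 93.5
```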
Part viii: Data collection
So you may have noticed along the way that descriptive profiling generates a lot of data. How do you collect and process so much data without crossing your eyes or wanting to pull your hair out from all the repetitious copy/pasting? Breweries with large budgets might be willing to spring for Compusense, a software package that handles ballot generation and data handling/analysis for a multitude of sensory tests. But it can cost upwards of $15-20K, depending on how many add-ons you’d like to include. Even for a multi-brewery company with nationwide (and some export) distribution, that is a bridge too far. So I learned Visual Basic for Applications (VBA), specifically for Excel. It took a few years to get proficient enough that building a sensory utility doesn’t take me several weeks, and it was fairly frustrating at times, but hundreds of Google searches have helped me figure out how to do most of the things that Compusense does at basically no cost beyond the MS Office suite.
The basics of the system I’ve created go like this: 1) ballots are built with Google Forms, a free utility from Google for creating custom surveys; 2) the data from these surveys feeds automatically into Google Sheets documents, which I download to my workstation at the end of each tasting session; 3) the spreadsheets are opened in Excel, where my VBA macros re-format the data into a layout that can easily feed Pivot Tables and charts (built with more macros); 4) still more macros export the score reports and charts into a PowerPoint file for archiving. From the end of the last tasting to the point at which I have scores and charts, only a few moments and button-presses are needed. The turnaround is so quick that we have just installed a flat-panel display in our tasting room so we can look at the panel’s results almost instantly. This is very helpful for discussing the results with the panelists; it can accelerate the development of profiles and spur conversations that otherwise wouldn’t have a way to begin, since reporting would normally happen after the panel is over and panelists have returned to their normal duties.
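Step 3 above, re-shaping the raw survey export into something pivot-ready, is done with VBA in the article; the idea can be illustrated with a rough Python equivalent. The column names and the long "one row per response" layout are assumptions about what a Forms export might look like:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical raw export: one row per panelist/sample/attribute response.
raw = """panelist,sample,attribute,rating
A,IPA-01,bitterness,6.5
B,IPA-01,bitterness,6.0
A,IPA-01,hop aroma,7.0
B,IPA-01,hop aroma,6.0
"""

# Group individual ratings by (sample, attribute) key.
ratings = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    ratings[(row["sample"], row["attribute"])].append(float(row["rating"]))

# Pivot to panel-averaged ratings per sample/attribute, ready for charting.
panel_avg = {key: mean(values) for key, values in ratings.items()}
# panel_avg -> {("IPA-01", "bitterness"): 6.25, ("IPA-01", "hop aroma"): 6.5}
```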
I guess that about covers it. This is easily my longest article yet – something over 4,800 words and a few days of writing. If anyone has any questions regarding the details or procedures outlined here (especially the data collection/reporting), please feel free to reach out to me by email, which can be found on the About page. Sometimes I can be pretty bad about getting back to people in a timely manner, but I do still go back through that email account and try to help out the people who request things, if I am able. Cheers!