In my last post I defined what brand lift is, how it’s distinct from other forms of lift measurement, and why a brand should use this ad effectiveness method. In this installment – part two of four – I’m going to dive into some of the practical details of how brand lift studies are measured.
How Do You Measure Brand Lift?
Brand lift is all about listeners’ perceptions of a brand, and that’s best measured with surveys. We ask listeners standardized questions about metrics like brand awareness, favorability, and so on.
For instance, to measure favorability we may ask something like: “How would you describe your overall opinion about each of the following brands?” Respondents then choose an answer on a five-point scale, ranging from very favorable to neutral to very unfavorable.
Crucially, listeners take these surveys only after they’ve heard the ad campaign. Or, really, half of them do, as I’ll explain.
Keep Control
Let’s say we’re measuring a podcast campaign for a new coffee-flavored root beer (don’t knock it ’til you’ve tried it). The survey results show that 45% of listeners who heard the ad say they are likely to purchase the drink. The buyer seems to think this is good, but the brand manager feels like the score should be over 50%. How do we determine if it’s really good or just OK?
In order to measure lift, we need something to compare against. So we recruit a matching panel of podcast listeners who we know were not exposed to the brand’s ad campaign, and we survey them with the same questions. Then we subtract the scores of those listeners who didn’t hear an ad (our control group) from the scores of those who did hear an ad (our exposed group). Any positive difference is the lift.
Back to our coffee-flavored root beer campaign: the survey results for the control group show that only 30% of the listeners who didn’t hear the ad say they’re likely to purchase the root beer. That’s 15 percentage points lower than the exposed group’s 45%. We can say that the ad campaign successfully drove a lift in purchase intent – and that’s a good thing.
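To make the arithmetic concrete, here’s a minimal sketch in Python. The respondent counts are hypothetical, chosen only to reproduce the 45% and 30% figures above; a real study would work from actual panel sizes.

```python
# Hypothetical respondent counts for the exposed and control panels.
exposed_total, exposed_yes = 400, 180   # 45% of exposed listeners say they'd buy
control_total, control_yes = 400, 120   # 30% of control listeners say they'd buy

exposed_rate = exposed_yes / exposed_total   # 0.45
control_rate = control_yes / control_total   # 0.30

# Lift is simply the exposed rate minus the control rate, expressed in points.
lift_points = (exposed_rate - control_rate) * 100
print(f"Lift in purchase intent: {lift_points:.0f} percentage points")  # -> 15
```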
A Lift Looks Good, but Is It Significant?
We go back to the brand manager with the 15-point lift in purchase intent. They seem happier, but then they want to be assured that the result is significant. By “significant” they’re not asking whether it’s important, but whether it is statistically significant.
I’m not going to drop us into Statistics 101 here, but statistical significance is actually a pretty straightforward concept. It’s a test of how confident we are that our results are not due to chance – more precisely, how likely it is that we would see a similar lift if we repeated the survey. That confidence is expressed as something known as a confidence level, usually a percentage.
A common confidence level used in brand lift research is 90%. If we say that something is statistically significant at a 90% confidence level, that means we’re confident that if we repeated the survey 10 times, we would see a lift in that measure nine of those times – or, 90 times out of 100. That’s much better than a coin flip, which would only get it right five times out of 10.
Back to the root beer survey: we found that the 15-point lift in purchase intent was indeed statistically significant at the 90% level. We can tell the brand manager that if we repeated this survey 99 more times, we’re confident we’d see a lift in that measure about 90 times out of 100. Or, put simply, these results are not a fluke.
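For the statistically curious, here’s one way such a check might be run – a one-sided two-proportion z-test, sketched in Python with the same hypothetical respondent counts as before. It’s illustrative only; brand lift vendors may use different tests, weighting or adjustments.

```python
from math import sqrt, erfc

def lift_significance(exposed_yes, exposed_total, control_yes, control_total):
    """One-sided two-proportion z-test: is the exposed rate higher than the control rate?"""
    p_exposed = exposed_yes / exposed_total
    p_control = control_yes / control_total
    # Pooled proportion under the null hypothesis that the ad had no effect.
    p_pool = (exposed_yes + control_yes) / (exposed_total + control_total)
    se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_total + 1 / control_total))
    z = (p_exposed - p_control) / se
    # One-sided p-value from the upper tail of the standard normal distribution.
    p_value = 0.5 * erfc(z / sqrt(2))
    return (p_exposed - p_control) * 100, p_value

# Hypothetical counts matching the 45% vs. 30% purchase-intent example.
lift_points, p = lift_significance(180, 400, 120, 400)
print(f"Lift: {lift_points:.0f} points, p-value: {p:.2g}")
print("Significant at 90% confidence" if p < 0.10 else "Not significant at 90% confidence")
```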
Should Everything Be Significant?
It’s great when we see significant lifts in a brand lift study, but it’s important to keep our expectations in perspective. A campaign that achieves a significant lift in awareness may not drive a similar lift in recommendation intent. That isn’t necessarily a flaw in the campaign, show or channel. There’s only so much work that a 15-, 30- or 60-second ad can do, so it’s important to focus the ad creative on just one or two measures, or KPIs, and then look to those specific measures when you get the results.
Sometimes the lift in a measure can fall just shy of significance. In the root beer example we also saw a lift in message association, but at 10 points it wasn’t statistically significant. Even so, we don’t necessarily want to dismiss this result out of hand. It shows the ad had some effect on messaging, just not as strong an effect as on purchase intent. We usually call that a “directional lift” to contrast it with a “significant lift.” Put another way, it’s more likely than not that the campaign generated some lift in message association, just not enough to be considered “significant.”
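Continuing the sketch above, here’s how the message-association result might be labeled. The counts are hypothetical, picked only to illustrate a 10-point lift that misses the 90% bar, and the “more likely than not” cutoff for a directional lift is one reasonable reading of the description above, not an industry standard.

```python
from math import sqrt, erfc

# Hypothetical message-association counts: 40% exposed vs. 30% control on a smaller panel.
exposed_yes, exposed_total = 24, 60
control_yes, control_total = 18, 60

p_exposed = exposed_yes / exposed_total
p_control = control_yes / control_total
p_pool = (exposed_yes + control_yes) / (exposed_total + control_total)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_total + 1 / control_total))
p_value = 0.5 * erfc((p_exposed - p_control) / se / sqrt(2))

lift_points = (p_exposed - p_control) * 100
if p_value < 0.10:
    label = "significant lift"      # clears the 90% confidence bar
elif p_value < 0.50:
    label = "directional lift"      # more likely than not, but short of significance
else:
    label = "no lift"
print(f"Message association: {lift_points:.0f}-point lift, p = {p_value:.2f} -> {label}")
```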
Please note that the lifts I’m using here are examples and don’t represent any kind of standard. That is, a 15-point lift may be significant in one study but not in another. There are a number of variables at play here that are beyond the scope of this post, but suffice it to say the number of respondents in your survey is an important one.
Now that we have a better understanding of how brand lift works, you might be wondering, “What kind of audio campaign can I measure with brand lift? Can we measure baked-in or dynamically inserted (DAI) podcast ads? Streaming music or radio? Branded content?”
The short answer is, yes. But I’ll get more detailed in my next post.
For those looking for a top-notch summary of all things brand lift – look no further! Download our 3-page Quick Guide, which helps demystify how brand lift works and how it can help you.