While brand lift studies are an increasingly important way to assess a podcast ad campaign’s performance, it’s essential to go beyond the headlines to get the full benefit of the data. Let’s talk about why and how to do that.
As podcasting continues to grow and develop as a medium, so does the volume and sophistication of the advertising it attracts. This is obviously excellent news for the podcasting industry, but it brings with it greater scrutiny of how that advertising is working. Podcasts are competing for ad dollars with other media, in the audio space and beyond, and in those media marketers already expect to know whether their brand advertising is having an effect on people’s awareness, perceptions and behaviours. Marketers are increasingly asking the same questions about their podcast advertising, and they have a number of tools to help provide the answers.
Ad delivery reports can give you some understanding of things like audience targeting, reach and frequency. URL attribution studies can point to the campaign’s ability to deliver web visits and sales conversions. But for things like brand recognition and intent to buy – measures closer to a campaign’s impact than its delivery – there’s usually a need for some sort of custom research.
This is where a brand lift study comes in.
Brand lift studies allow marketers to measure how their campaign is affecting how people think and feel about the brand, and whether it’s moving the needle on awareness, favourability, consideration and the like. Here at Signal Hill Insights, we’re seeing a growing number of brands and podcast publishers commissioning this kind of study.
Brand lift studies provide validation – but also the opportunity to learn.
If you’ve seen a brand lift study – whether for a podcast campaign or in another medium – you’re probably familiar with how the debrief meetings tend to go: ‘among listeners exposed to the campaign, there was a 12% increase in brand awareness and an 8% increase in favourability, along with a significant increase in intent to purchase’. Big ticks all round, the campaign has clearly worked, everyone goes away happy.
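For readers who like to see what sits behind a headline number like that: brand lift is typically reported as the difference between respondents exposed to the campaign and a comparable unexposed (control) group, with a significance test separating genuine lift from survey noise. The sketch below is a minimal illustration of that idea; it assumes an exposed-versus-control design and invented figures, and isn’t the exact methodology of any particular study.

```python
# A minimal, illustrative lift calculation for a single brand metric,
# assuming an exposed-vs-control survey design (a common approach, not
# necessarily the methodology behind any given study). All figures are
# invented, not real campaign data.
from math import sqrt
from statistics import NormalDist

def lift_and_z(exposed_yes: int, exposed_n: int, control_yes: int, control_n: int):
    """Return (lift in percentage points, z statistic, two-sided p-value)
    for the difference between two proportions, using a pooled z-test."""
    p_exp = exposed_yes / exposed_n
    p_ctl = control_yes / control_n
    pooled = (exposed_yes + control_yes) / (exposed_n + control_n)
    se = sqrt(pooled * (1 - pooled) * (1 / exposed_n + 1 / control_n))
    z = (p_exp - p_ctl) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_exp - p_ctl) * 100, z, p_value

# Hypothetical awareness result: 52% aware among 400 exposed listeners
# vs. 40% aware among 400 unexposed (control) respondents.
lift_pp, z, p = lift_and_z(208, 400, 160, 400)
print(f"Awareness lift: {lift_pp:+.1f} pts (z = {z:.2f}, p = {p:.4f})")
```

With these invented figures, the 12-point lift in awareness comfortably clears significance – the kind of headline that makes for a happy debrief.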
Sometimes, though, the results aren’t quite as expected: maybe awareness hasn’t shifted as much as everyone had hoped, or favourability hasn’t increased at all. Nobody likes to see that. We sometimes see results like this in our ad studies, and clients’ first response is often that the campaign “didn’t work!” – a result like that certainly raises questions about whether the campaign was worth its investment.
Good news or otherwise, brand lift studies always result in insights.
As any good researcher will tell you, even if the headlines aren’t what you may have hoped for, there are always useful, sometimes fascinating, insights beneath them. Brand lift studies are no different, but finding those insights requires a shift in thinking: away from seeing the study as the answer to the question “did the campaign work?”, and towards seeing it as an opportunity to explore how the campaign worked. The number of campaigns that genuinely “didn’t work!” is vanishingly small; a campaign almost always did something, and we just have to delve into the data to see what it did, whom it really spoke to, and how it combined tone, relevance and memorability to do so.
What is there to learn when the headline’s not favourable?
So where do we start? First off, it’s always useful to have previous studies or benchmarks against which to compare the data; but even without those, there’s usually a wealth of insight about the campaign waiting to be uncovered.
- Start by understanding whether there are any audiences among whom the ads did perform well. Often, while a campaign may not have performed as well as expected across the listening population as a whole, we’ll see a rather better response from the campaign’s target audience – be that people with family abroad, Canadians looking to replace their credit cards, or another group (there’s a simple sketch of this kind of segment-level cut after this list). It isn’t a surprise, of course: we’re all exposed to so much advertising in our daily lives that we tend to filter out ads that aren’t speaking to an immediate concern or decision we’re facing.
- The data holds valuable insights about the creative. We typically assess ads across a range of key metrics, including recall, likeability and how informative listeners find them. Whether an ad performs well on being informative but less well on likeability, or really hits the spot on likeability without moving the needle on brand recognition, the data offers incredibly useful insight into how the campaign is resonating with listeners and how it could be enhanced.
- … and it also holds a wealth of information about the brand and the campaign. We’ve seen campaigns where the ads test really well, but recall isn’t as strong as expected. Again, on the surface it’s not a great headline; but it’s not unexpected if the brand or product is new, or one listeners aren’t yet familiar with. There may also be implications for the ad’s fit with the podcasts on which it’s appearing. Conversely, ads with excellent recall that don’t move the needle on favourability can sometimes be found in campaigns for more established brands (banks or mobile phone networks, for example).
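Picking up the first point in the list above, here’s a minimal sketch of what a segment-level cut of the same exposed-versus-control comparison might look like. The segments, counts and threshold are entirely hypothetical, and the pooled z-test helper from the earlier sketch is repeated so the block runs on its own; a real study would use its own segments, sample sizes and significance approach.

```python
# A hypothetical segment-level cut of an exposed-vs-control comparison.
# Segment names and counts are invented for illustration only; the pooled
# z-test helper is the same as in the earlier sketch, repeated so this
# block runs on its own.
from math import sqrt
from statistics import NormalDist

def lift_pp_and_p(exp_yes: int, exp_n: int, ctl_yes: int, ctl_n: int):
    """Lift in percentage points plus a two-sided p-value (pooled z-test)."""
    p_exp, p_ctl = exp_yes / exp_n, ctl_yes / ctl_n
    pooled = (exp_yes + ctl_yes) / (exp_n + ctl_n)
    se = sqrt(pooled * (1 - pooled) * (1 / exp_n + 1 / ctl_n))
    p_value = 2 * (1 - NormalDist().cdf(abs((p_exp - p_ctl) / se)))
    return (p_exp - p_ctl) * 100, p_value

segments = {
    # segment: (exposed_yes, exposed_n, control_yes, control_n)
    "All listeners":            (430, 1000, 400, 1000),
    "Campaign target audience": (150,  300, 110,  300),
    "Non-target listeners":     (280,  700, 290,  700),
}

for name, counts in segments.items():
    lift, p = lift_pp_and_p(*counts)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name:<26} lift = {lift:+.1f} pts ({verdict}, p = {p:.3f})")
```

In this invented example, the lift across all listeners doesn’t clear significance, but the target-audience segment does – exactly the pattern described in the first bullet above.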
Brand lift studies are often quite simple in approach and, in all honesty, can sometimes be a bit of an afterthought in the strategic planning around a campaign – in podcasts and in other media. Yet the results hold a huge amount of invaluable insight and learning beyond just “did my campaign work, Y/N?”. It’s important that brands, publishers and researchers understand the full value of these studies, because even as podcasts grow as an advertising medium, there’s still so much we can learn about how to really make the most of them and move the industry forward.