Is Your Attribution Platform Properly Informing Your Media Budget?
This is the second in a two-part series, “You don’t have to be a data scientist to answer difficult questions,” highlighting a recent case study I completed in Adobe Data Workbench. Be sure to read the first post, “How Can I Avoid Using Misleading Data?”
Today’s marketing tools are beginning to blur the lines between marketer, analyst, and data scientist. Technology is enabling marketers and analysts to answer their difficult questions without advanced data analysis skills. With the Data Workbench solution in Adobe Analytics Premium, I’ve seen firsthand how enhanced data can help you determine if your attribution platform is informing your budget allocations.
Let’s face it: all attribution results are meant to inform budget allocations. But anyone using a last-touch attribution model, for example, knows those allocations aren’t being properly guided. Evaluating viewable impressions alone against all served impressions requires a flexible attribution platform, one that lets you easily toggle between touch metrics and see which touches drive incremental purchase events relative to others. Not only did we enhance our data, we also enhanced the flexibility with which we evaluated it by loading it into Adobe Data Workbench and analyzing it algorithmically.
Think of it this way: you wouldn’t put a Formula One fuel mixture into a 1990s-era vehicle. The high-performance fuel would be wasted in a car that couldn’t use it to perform at a higher level. Likewise, enhanced data needs to be fed into a high-performance platform to take analytical advantage of the potential it contains. This didn’t require multiple SQL queries or joining tables together; it simply required building a couple of filters and metrics in Adobe Data Workbench to tease the out-of-view touches out of the data.
How we did it and what it showed:
First, we compared the touch metric of “all served impressions” under last-touch attribution vs. algorithmic attribution, ranking all media partners from best to worst through both lenses. Our hypothesis was that media partners would change rank relative to one another when we evaluated their performance with the algorithmic solution, and that’s exactly what happened: five of the eight media partners evaluated changed rank relative to each other. This not only reinforced the knowledge that last touch is a flawed model, it also uncovered attribution insights we’d never had before. For example, one of our best performers through a last-touch lens was the worst performer in the algorithmic model; according to algorithmic attribution, that partner was not contributing to incremental purchase orders relative to the other partners during the period analyzed.
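The rank comparison itself is simple once each model has assigned credit to each partner. Here is a minimal sketch in plain Python, with invented partner names and credit values (the real numbers would come from the attribution platform):

```python
# Hypothetical sketch: compare media-partner rankings under two
# attribution models. Partner names and credit values are invented
# for illustration only.

last_touch_credit = {
    "Partner A": 420, "Partner B": 310, "Partner C": 290,
    "Partner D": 180, "Partner E": 95,
}
algorithmic_credit = {
    "Partner A": 300, "Partner B": 350, "Partner C": 260,
    "Partner D": 210, "Partner E": 130,
}

def rank(credits):
    """Return partners ordered best to worst by attributed credit."""
    return sorted(credits, key=credits.get, reverse=True)

last_touch_rank = rank(last_touch_credit)
algo_rank = rank(algorithmic_credit)

# Partners whose position differs between the two models
changed = [p for p in last_touch_rank
           if last_touch_rank.index(p) != algo_rank.index(p)]

print("Last touch :", last_touch_rank)
print("Algorithmic:", algo_rank)
print("Changed rank:", changed)
```

In the illustration, Partners A and B swap places between the two lenses, which is the kind of reshuffling we saw across five of our eight partners.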
Then we took it a step further and changed the touch metric to “viewable impressions” only. This is the enhanced data that remains after removing the misleading, out-of-view data lurking below the surface. Again, our partners changed rank relative to one another just by changing the touch metric and removing out-of-view touches from consideration. This simple exercise revealed the portion of our media budget that was not contributing to incremental purchase orders, justifying consideration of a shift in media budget allocation.
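Conceptually, switching the touch metric is just a filter applied before any counting or attribution happens. A hypothetical sketch, with invented touch records and viewability flags (in the case study this filtering was done with Data Workbench filters, not code):

```python
# Hypothetical sketch: restrict touches to viewable impressions before
# evaluating partners. Records and viewability flags are invented.

touches = [
    {"partner": "Partner A", "viewable": True},
    {"partner": "Partner A", "viewable": False},
    {"partner": "Partner A", "viewable": False},
    {"partner": "Partner B", "viewable": True},
    {"partner": "Partner B", "viewable": True},
    {"partner": "Partner C", "viewable": False},
]

def served_counts(records):
    """Count impressions per partner."""
    counts = {}
    for t in records:
        counts[t["partner"]] = counts.get(t["partner"], 0) + 1
    return counts

all_served = served_counts(touches)
viewable_only = served_counts([t for t in touches if t["viewable"]])

print("All served impressions:", all_served)    # includes out-of-view touches
print("Viewable impressions  :", viewable_only)  # enhanced, in-view data only
```

Note how Partner C disappears entirely under the viewable-only metric: none of its served impressions were in view, which is exactly the kind of wasted spend this exercise surfaces.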
Without a flexible attribution solution or a team of dedicated data scientists, insights like these are extremely difficult to garner, especially on an ongoing basis.
Algorithmic attribution aided by enhanced data has the potential to change the way we look at digital marketing data.
Without a platform that lets you benefit from your enhanced data, there’s no point in enhancing your data in the first place. Why turn your gasoline into Formula One fuel if you’re just going to pour it into an old car that can’t realize its potential?
*This particular case study included one data source reflecting one media channel, but the tool allows for multiple cross-channel comparisons.