Most B2B marketers today understand the importance of testing. A quick change to a subject line, a piece of copy, or an offer can dramatically alter the performance of a campaign. At Eloqua, our own marketing team runs A/B tests on almost every piece that is sent out. However, the question we all face as marketers is what result we should be testing against. In B2C marketing this is often significantly easier, as campaigns are typically designed to drive explicit purchase behavior, and success can be measured against revenue results.
In B2B marketing, however, the revenue result is often much further away. Ideally, in testing a campaign one would determine which variation drove more actual buying behavior, but with most buying processes stretching out over months, it would be impossible to test and launch a campaign efficiently in this manner. Conversely, testing against the common results of opens or click-throughs tells only a small fraction of the story, as there is only a very loose tie between an email open or click-through and the final result of a purchase.
Luckily, the results we can test against span a spectrum. A careful selection of the end result to test against determines exactly what our tests will show. Viewed along the two dimensions of testing difficulty and accuracy, we have the following options:
Email Opens: The easiest to test against, but by far the least accurate. This is mainly an indicator of the quality of your subject line. While it can be tested very quickly, it suffers from technological differences between email platforms, as well as a very limited testing scope.
Email Click-throughs: Slightly more difficult to test against, as it requires tracking of links clicked, but this is common in most email platforms today. However, this also suffers from very limited accuracy, as a click-through does not indicate much about the recipient’s interest in purchasing.
Inquiries: Tracking which email drove more inquiries (landing page form submissions) is a significant bump in test accuracy. This now tests whether the subject line was compelling enough to get the email read, whether the content and offer were interesting enough to drive a click-through, and whether the landing page was optimized to maximize form submission rates. This is a very comprehensive test of your campaign, and you usually see results within one or two days of running an A/B test.
Qualified Inquiries: Even higher in testing accuracy is testing against qualified inquiries. If, in the A/B test, email A drove 100 inquiries and email B drove 80, but most of B’s inquiries are from the right target executives while most of A’s are from students and more junior staff members, email B is clearly the better option. Note that the dimension of lead scoring we are talking about here is explicit scoring, as we are just looking to see whether the right executives are the ones inquiring.
Opportunities: We do see an increase in accuracy as we move to opportunities as the result to test against, but the difficulty also increases significantly. There are usually many more factors involved in qualifying an opportunity as sales-ready than a single campaign, which greatly increases the complexity of the analysis.
Revenue: This is clearly the most accurate result to test against, but the length of a typical sales cycle makes it prohibitively difficult to work with, and the ideal timing for the campaign in question may have long passed by the time the test results are available. In general, it is nearly impossible to test the effectiveness of a single campaign on revenue in a meaningful way.
Each of these options has benefits and drawbacks, so it is important to consider what you are testing against when defining your A/B test. In my experience, in a B2B marketing situation, testing which variation produced more qualified inquiries often provides the optimal balance between ease of testing and accuracy of results.
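To make the qualified-inquiries comparison concrete, here is a minimal sketch of how the numbers from the example above might be compared. Only the 100 and 80 inquiry counts come from the example; the list sizes and qualified counts are hypothetical, and the two-proportion z-test shown is just one reasonable way to check whether a difference in qualified-inquiry rates is likely real rather than noise.

```python
from math import sqrt, erfc

def qualified_inquiry_rate_test(sent_a, qualified_a, sent_b, qualified_b):
    """Compare qualified-inquiry rates for two email variations
    with a simple two-sided, two-proportion z-test."""
    rate_a = qualified_a / sent_a
    rate_b = qualified_b / sent_b
    pooled = (qualified_a + qualified_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return rate_a, rate_b, z, p_value

# Hypothetical numbers: each variation sent to 5,000 recipients.
# Email A drew 100 inquiries but only 20 from target executives;
# email B drew 80 inquiries, 55 of them from target executives.
rate_a, rate_b, z, p = qualified_inquiry_rate_test(
    sent_a=5000, qualified_a=20,
    sent_b=5000, qualified_b=55,
)
print(f"A: {rate_a:.2%} qualified, B: {rate_b:.2%} qualified (z={z:.2f}, p={p:.4f})")
```

Even though A produced more raw inquiries, a comparison like this would show B winning on the metric that actually matters, which is the point of moving up the spectrum from click-throughs to qualified inquiries.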