
A/B Testing: What Result Are You Testing Against?

Most B2B marketers today understand the importance of testing. A quick change to a subject line, copy, or an offer can dramatically alter the performance of a campaign. At Eloqua, our own marketing team does A/B testing on almost every piece that is sent out. However, the question we all face as marketers is what result we should be testing against. In B2C marketing it is often significantly easier, as campaigns are often designed to drive explicit purchase behavior, and success can be measured against revenue results.



In B2B marketing, however, the revenue result is often significantly further away. Ideally, in testing a campaign, one would determine which version drove more actual buying behavior, but with most buying processes stretching out over months, it would be impossible to test and launch a campaign efficiently in this manner. Conversely, testing against the common results of opens or click-throughs tells only a small fraction of the story, as there is only a very loose tie between an email open or click-through and the final result of a purchase.

Luckily, our options for what results we test against span a spectrum. A careful selection of the end result to test against determines exactly what our tests will show. Viewed along the two dimensions of testing ease and accuracy, we have the following options to test against:



Email Opens: The easiest to test against, but by far the least accurate. This is mainly an indicator of the quality of your subject line. While it can be tested very quickly, it suffers from technological differences between email platforms, as well as a very limited testing scope.


Email Click-throughs: Slightly more difficult to test against, as it requires tracking of links clicked, but this is common in most email platforms today. However, it also suffers from very limited accuracy, as it does not indicate much about the recipient's interest in purchasing.


Inquiries: Tracking which email drove more inquiries (landing page form submissions) is a significant step up in test accuracy. This now tests whether the subject line was compelling enough for the email to be read, whether the content and offer were interesting enough to drive a click-through, and whether the landing page was optimized to maximize form submission rates. It is a very comprehensive test of your campaign, and usually yields results within one or two days of running an A/B test campaign.


Qualified Inquiries: Even higher in testing accuracy is testing against qualified inquiries. If, in the A/B test, email A drove 100 inquiries and email B drove 80, but most of B's inquiries are the right target executives while most of A's are students and more junior staff members, email B is clearly the better option. Note that the dimension of lead scoring we are talking about here is explicit scoring, as we are just looking to see whether the right executives are the ones inquiring.


Opportunities: Accuracy increases again as we move to opportunities as the result to test against, but so does the difficulty. There are often many more factors involved in qualifying an opportunity as sales-ready than a single campaign, which significantly increases the complexity of the analysis.


Revenue: This is clearly the most accurate result to test against, but the length of a sales cycle makes it prohibitively difficult to work with, and the ideal timing for the campaign in question may have long passed by the time the test results are available. In general, it is nearly impossible to test the effectiveness of a single campaign on revenue in a meaningful way.
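To make the qualified-inquiries comparison concrete, here is a minimal sketch in Python. The 100-versus-80 inquiry counts come from the example above; the send sizes and qualified-inquiry splits are invented for illustration, and the significance check is a standard two-proportion z-test rather than anything specific to one marketing platform.

```python
import math

# Hypothetical A/B results. The 100 vs. 80 inquiry totals echo the example
# in the post; the send sizes and "qualified" splits are made up.
sent_a, inquiries_a, qualified_a = 5000, 100, 30   # mostly students / junior staff
sent_b, inquiries_b, qualified_b = 5000, 80, 60    # mostly target executives

def qualified_rate(sent, qualified):
    """Qualified inquiries per recipient sent."""
    return qualified / sent

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-statistic for comparing qualified-inquiry rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

rate_a = qualified_rate(sent_a, qualified_a)   # 0.006 qualified per recipient
rate_b = qualified_rate(sent_b, qualified_b)   # 0.012 qualified per recipient
z = two_proportion_z(qualified_b, sent_b, qualified_a, sent_a)
print(f"A: {rate_a:.4f}, B: {rate_b:.4f}, z = {z:.2f}")
```

Even though A drove more raw inquiries, B wins on qualified-inquiry rate, and a z-statistic above roughly 1.96 suggests the difference is unlikely to be noise at the 95% confidence level.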



Each of these options has benefits and drawbacks, so it is important to consider what you are testing against when defining your A/B test. In my experience, in a B2B marketing situation, testing to determine which option produced more qualified inquiries often provides the optimal balance between ease of testing and accuracy of results.

Looking for an Outcome - Testing in B2B

I was watching Tamara Gielen's video of Jim Sterne from the EIS keynote (http://www.b2bemailmarketing.com/2008/12/eis-keynote-cro.html), and it got me thinking about the challenges of testing in B2B. It's a tough exercise because of one fundamental challenge: testing marketing involves defining an outcome you're looking for, so that you can say option A did a better job than option B of driving that outcome. The trouble with B2B marketing is that those outcomes make a quick jump from irrelevant to immeasurable.


What do I mean by that?


Well, in B2C marketing, you can often define the outcome of a marketing campaign as purchase revenue. Send an email, observe how many people buy and how much they spend. You can then test against that outcome to see which copy, creative, subject, or list drives more of it and that's your best marketing option.


In B2B marketing, your sales cycles are often much longer - months, if not quarters or years - and the sale is often concluded offline by a sales rep getting a signature on a contract. This means that any testing you might want to do against the ideal result - the driving of revenue - is both extremely difficult to tie together and requires more time to elapse than is practical (if you have to wait three months to see enough results to determine which marketing campaign to launch, you've likely missed your window).


Similarly, the things that can be easily measured in B2B are usually not significant enough to guide decisions: opens, click-throughs, and form submits are not great indicators of revenue. If you test which campaign drives more of these activities, you are likely to find that free iPod giveaways perform fairly well, but as we've all seen, they are not likely to turn into good leads for your sales team to follow up on.


Luckily, there is an interim outcome that you can test against, that does correspond to revenue potential, and that can be determined quickly: the qualified lead. With a definition in place of what a qualified lead is, you now have a measurement of the outcome your campaigns are aiming for.


Note that what we're talking about here is leads qualified based on their interest (implicit scoring) rather than who they are (explicit scoring), as your marketing campaigns are unlikely to change the titles or industries of your audience. (For a deeper discussion of the dimensions of scoring, see http://digitalbodylanguage.blogspot.com/2008/12/dimensions-of-lead-scoring.html)


The advantage of this approach is that it can test more than a single-point communication such as one email. In B2B marketing, we are often looking to test one sequence of communications against another (say, a 3-step program for post-webinar follow-up where we're testing two different messaging options, or an all-email version against a multi-channel version using email, direct mail, and voice). If you're doing that, you have even more need for an abstracted outcome like qualified leads, so you can look at the overall effect of one sequence versus another.
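A sequence-versus-sequence comparison like this can be sketched as a simple tally of qualified leads per variant. Everything below is hypothetical: the variant names ("email-only", "multi-channel") and the tiny result set are invented to show the shape of the analysis, not real campaign data.

```python
from collections import Counter

# Hypothetical follow-up outcomes: (sequence_variant, became_qualified_lead).
# In practice these records would come from your marketing database.
results = [
    ("email-only", True), ("email-only", False), ("email-only", False),
    ("multi-channel", True), ("multi-channel", True), ("multi-channel", False),
]

# Contacts assigned to each sequence, and how many became qualified leads.
assigned = Counter(variant for variant, _ in results)
qualified = Counter(variant for variant, ok in results if ok)

for variant in assigned:
    rate = qualified[variant] / assigned[variant]
    print(f"{variant}: {qualified[variant]}/{assigned[variant]} qualified ({rate:.0%})")
```

The point of the abstraction is that the same tally works whether the variant is one email, a 3-step nurture program, or a multi-channel sequence: the unit of comparison is the qualified-lead rate, not any single touch.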


I look forward to your comments on what has and has not worked in your testing efforts against longer sales cycles.