What is an A/B test?
A/B testing — also called split testing — is a method that compares two pieces of ad content against each other to determine which one is more effective.
As marketing has become more data-driven, A/B tests have become a popular way to decide which creatives, like ad copy, images, etc., produce the most engagement.
The most straightforward A/B test will split an audience and serve them two different versions of an advertisement. Then it will record which ad produced the most sales, leads, or other desired action.
Why is A/B testing useful?
Ad creatives are subjective. One person can love an ad’s tone, feeling, and wording, while the next person might hate it. While there are good general practices for ad creatives, there are no strict rules. As a result, it’s not easy to predict — with certainty — how an ad will perform.
A/B testing offers a way to evaluate creatives objectively. By running scientific experiments that compare various ad forms, you can choose which ads to run based on the most critical metric: which one converts!
How to run an A/B test
Now that you understand the concept and benefits of A/B testing, let’s look at how you can run one of these experiments.
#1. Pick an independent variable
You need to pick one variable to get the most reliable results. For example, you can use the exact same ad but test out two different call-to-action (CTA) buttons or two different pieces of copy.
If there are too many different variables (like different copy, images, headlines, etc.), it’s not easy to know which one is responsible for better conversions.
#2. Define your goal
Define which metric you want to test. It could be a higher click-through rate (CTR), conversions, cost per sale, etc.
Note down what you want to achieve before running your test.
#3. Create two versions
Next up, you need to create two versions of your ad. You need:
a) the control version
b) the version with a slightly altered variable
#4. Split your groups
Then you need to split your audience. To get accurate and reliable results, you need to divide your groups equally and randomly.
So 50% of your audience gets one ad, while the other 50% gets the other.
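In practice, the ad platform usually handles this split for you, but the idea is simple enough to sketch. Here is a minimal Python illustration of an even, random 50/50 split (the user IDs and the seeded shuffle are just for the example):

```python
import random

def split_audience(audience, seed=42):
    """Randomly split an audience into two equal groups, A and B."""
    shuffled = list(audience)
    random.Random(seed).shuffle(shuffled)  # seeded only so the example is reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Example with 10 hypothetical user IDs:
group_a, group_b = split_audience(range(10))
print(len(group_a), len(group_b))  # two equal halves
```

The random shuffle is what matters: assigning people by signup date, geography, or any other pattern would bias the groups before the test even starts.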
#5. Think about the sample size
You need an adequate sample size to ensure your results are meaningful. Small sample sizes are overly influenced by personal or individual preferences.
Larger sample sizes will produce more accurate results. However, no one has an unlimited ad budget that they can casually burn through running experiments.
Calculating your sample size is a complex and contentious issue beyond this article’s scope. However, you can use a sample size calculator to decide how many people to show each ad variant.
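If you’d rather see what those calculators do under the hood, a common approach is the standard two-proportion approximation: you pick a significance level, a power, your baseline conversion rate, and the smallest lift you care about detecting. This sketch uses only the Python standard library, and the 2% → 3% conversion rates are made-up numbers for illustration:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a 5% significance level
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p_baseline + p_expected) / 2          # average of the two rates
    effect = abs(p_expected - p_baseline)          # minimum detectable difference
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / effect ** 2
    return ceil(n)

# How many people per variant to detect a lift from 2% to 3% conversion?
print(sample_size_per_variant(0.02, 0.03))
```

Notice how the required size explodes as the effect you want to detect shrinks — which is exactly why small, low-traffic tests so often come back inconclusive.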
#6. Run your ad
If you have a Facebook Ads account, you can create your test in Ads Manager or Experiments. In Google Ads, it’s available through the Optimize Account setting.
Both platforms will track your audience’s choice and provide you with the analytics you need to decide which variant is superior.
#7. Test both variants at the same time
This point is really crucial. To get the most accurate results, you should test both ads at the same time. Timing plays a big role in people’s decisions, so if you want to test for the ad — and not for how each month affects your ads — you need to run your A/B test simultaneously.
For example, if you tested an ad in December and one in January, the risk is that people’s different commercial behavior will distort your results. Similarly, testing ads at different times of the day will skew the results of testing your chosen variable.
#8. Give your test enough time
If your ads get a lot of traffic, you can quickly get meaningful data. However, if your organization is small or you sell a very niche product, it might take weeks or months to know which variant is the true king.
So adjust this based on your impressions, traffic, and so on.
#9. Focus on your goal
Once you’ve completed your testing, it’s time to interpret the results and use them to optimize your campaign. A/B tests will produce a lot of data, like click-through rates, conversions, cost-per-sale, etc.
But the thing you need to concentrate on is the metric you chose in step 2 of this guide. Judge your ad based on that number, and focus less on the other information.
#10. Calculate the significance of each result
There are lots of free tools that allow you to test the statistical significance of each result. Plugging in the number of viewers and conversions into a statistical significance calculator will help you determine if your A/B test is valid.
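Those calculators typically run a two-proportion z-test behind the scenes. For the curious, here is a small sketch of that test in Python (the viewer and conversion counts are invented for the example):

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns the p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # conversion rate with both groups pooled
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Variant A: 120 conversions from 4,000 viewers; variant B: 90 from 4,000.
p = ab_significance(120, 4000, 90, 4000)
print(f"p-value: {p:.3f}")  # a p-value below 0.05 is the usual bar for significance
```

A p-value below the conventional 0.05 threshold suggests the difference is unlikely to be chance alone; anything above it means you can’t confidently call a winner yet.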
#11. Use your results to drive changes
Hopefully, by this stage, you’ll have some conclusive results. If one ad variant produces statistically stronger effects than the other, you know that you are on to a winner.
However, in some cases, a test won’t have a strong enough impact to warrant any changes. It might suggest a slight improvement, but not enough to justify choosing one ad over the other.
This information is still valuable. You’ll understand that whatever element you’ve altered isn’t necessarily what you need to change about the ad. So, if your results for one variant are inconclusive, go back to step 1 and pick a new independent variable to test.
Optimizing ads is a continuous process.
A/B tests allow marketing teams to leverage a more scientific approach to deciding on ad creatives. By testing elements like ad copy, images, and layout, you can gain a solid understanding of what factors drive audience engagement.
Running an A/B test is recommended whenever you start a new campaign. By creating different versions of an ad and running them against each other for a limited time, you reduce wasted ad spend and boost conversions, sales, lead generation, and more — because you end up running only the most effective versions of your ads.
And if A/B tests are not your thing at all, our AI advertising robot is very good at exactly that — repetitive, tedious tasks. So that you can spend time doing things that humans do better.