7 Rules for A/B Testing Your Designs
Conversion rate refers to the percentage of visitors to a site who complete a desired goal, such as signing up for a service or purchasing a product, out of the total number of visitors. Boosting this rate is the number one priority for many products. Fortunately, designers can use a simple method to perform experiments designed to see which design elements best drive conversions.
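The arithmetic behind the definition above is straightforward. A minimal sketch, using made-up numbers purely for illustration:

```python
# Conversion rate = completed goals / total visitors, as a percentage.
# The visitor and signup counts below are hypothetical.
visitors = 12_500
signups = 375

conversion_rate = signups / visitors * 100
print(f"Conversion rate: {conversion_rate:.1f}%")  # → 3.0%
```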
A/B testing, also known as split testing, is a technique that allows you to compare two proposed versions of a design and select the one that leads to the most conversions. Although A/B testing may seem like an easy exercise, it can be tricky to master this tool and get reliable results from using it.
Here are a few recommendations that can help you make the most of A/B testing in your work.
- Test the right page.
- Get the sample size right.
- Don’t make too many changes between versions.
- Schedule your tests correctly.
- Don’t make any changes in design during testing.
- Ask visitors to provide feedback.
- Conduct A/B testing on a regular basis.
1. Test the Right Page
First and foremost, you should conduct A/B testing on pages where users convert. If most conversions happen on the home page, for instance, then that’s where you need to conduct your testing. Tools like Google Analytics can help you find key lead generation pages. By leads, I mean new sign-ups or purchases.
Start by identifying the most visited pages on your site. You can get this info from Google Analytics. Go to Behavior > Site Content > All Pages to see the list of most visited pages. Next, analyze what type of content attracts users’ attention. Use a heatmap, available in Google Page Analytics or HotJar, to understand which areas of the page get the most traffic and which ones users ignore. This step will help you identify the content that is most aligned with visitors’ goals, and you can use this information to create better opt-in pages.
2. Get the Sample Size Right
A/B testing is a quantitative method. You select the winner based on the data, meaning the actual number of users who convert. Thus, choosing the proper number of users for testing is crucial. If you don’t perform testing on enough users, your results won’t be statistically reliable, making the test less useful.
The process of calculating sample size isn’t easy, so you should use special calculators for this task. Prior to using a calculator, however, you need to learn your baseline conversion rate. This number is your current conversion performance. You also need to define the statistical significance level, which indicates the likelihood that the difference in conversion between a design variation and the baseline is not due to chance. In other words, statistical significance tells us the probability of getting legitimate results from the tests. Generally, 95 percent is an accepted standard for statistical significance. Once you’ve established these numbers, here is a useful sample size calculator that will help you select the correct number of test participants.
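If you’d like to see what such a calculator does under the hood, here is a rough sketch of the standard two-proportion sample-size formula using only Python’s standard library. The baseline rate, detectable effect, and default values are illustrative assumptions, not recommendations:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion test.

    baseline -- current conversion rate (e.g. 0.05 for 5%)
    mde      -- minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    alpha    -- significance level (0.05 matches the usual 95% standard)
    power    -- probability of detecting a real effect of size `mde`
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# e.g. 5% baseline, hoping to detect a lift to 6%:
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000+ per variant
```

Note how quickly the required sample grows as the effect you want to detect shrinks; this is why small sites often struggle to run conclusive tests.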
Remember that you also need to target the right audience. All test participants should represent a specific set of users. For example, if you know that your ideal user is a middle-aged head-of-household with an annual income of $100,000, you should target precisely this category. You can segment your users (each segment will have a specific set of attributes such as age, gender, location, etc.) and filter the results of A/B testing based on this particular segment. You can still collect inputs from various groups of users, but focus your analysis on the behavior of the targeted group.
3. Don’t Make Too Many Changes Between Versions
The basic rule of A/B testing is that you change only one design element at a time. This one element is called the testing variable. If you make more than one change in a design, you won’t be able to isolate which element led to a change in conversions, rendering the test largely useless.
For example, if you want to conduct A/B testing of a landing page design, you could select any one of the following variables:
- Size and shape of the primary call to action button.
- Color of the primary call to action button.
- Placement of the primary call to action button.
- Key message (change title or body text).
- Imagery that supports the key message.
Obviously, you may have a lot of different variables to choose from. So, how do you know which ones to test? To select the right variable, you need to form a hypothesis on which factors most impact conversion rate. When forming a hypothesis, keep in mind that a small change can have a significant impact on the conversion rate.
For example, you might form a hypothesis that color plays a major role in conversion, and that a red primary button will generate more conversions than a green one. You can then A/B test each choice to validate this hypothesis.
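Once the test has run, validating the hypothesis means checking whether the difference between the two buttons is statistically significant. A minimal sketch of a two-sided, two-proportion z-test, with made-up result numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the difference in conversion rates real?

    conv_a / conv_b -- conversions observed in each variant
    n_a / n_b       -- visitors shown each variant
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: green button 500/10,000, red button 600/10,000.
p = two_proportion_p_value(500, 10_000, 600, 10_000)
print(f"p-value: {p:.4f}")  # below 0.05 → significant at the 95% level
```

A p-value below 0.05 supports the hypothesis that the color change, not chance, drove the difference.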
4. Schedule Your Tests Correctly
Timing plays a significant role in A/B testing. For representative results, you need to run each test for comparable periods. Measuring version A’s performance during the weekend against version B’s performance during the business week is a mistake. How would you know whether a version performed better because of design differences or changes in user behavior? After all, users are generally less active during weekends.
Ideally, you should test both versions simultaneously. For example, if you’re testing two versions of a landing page, you can split your site’s visitors into two groups and show version A to one group and version B to another group at the same time.
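One common way to split traffic for a simultaneous test is deterministic bucketing: hash each visitor’s ID so the same person always lands in the same group. A sketch, assuming string user IDs and a hypothetical experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-cta") -> str:
    """Deterministically split visitors 50/50 between variants A and B.

    Hashing the user ID (salted with the experiment name) means each
    visitor always sees the same version, and both versions run at
    the same time on the same traffic.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-42"))  # same user -> same variant every time
```

Salting with the experiment name keeps assignments independent across experiments, so a user who saw variant A in one test isn’t systematically funneled into A in the next.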
Setting the right testing duration, meaning the number of days over which you need to collect data, is also essential. Testing should last long enough to help you draw accurate conclusions about your results. Every project is different, so it’s impossible to provide general recommendations on test duration. You can likely find specific recommendations based on your product performance, however, such as an average number of daily visitors. Check out this A/B test duration calculator, which will help you calculate the number of days to run the test for your specific project.
5. Don’t Make Any Changes in Design During Testing
Introducing changes in design during active testing is one of the most critical yet widespread mistakes that many product teams make. When you see that something in your design doesn’t perform as well as expected, it can be tempting to fix the issue on the fly. But it’s better to avoid this temptation. Why? Because by making changes during a test, you introduce bias in your results. You won’t be able to tell whether or not your change affected the data. This uncertainty means the results will be unreliable.
6. Ask Visitors to Provide Feedback
As I mentioned above, A/B testing is a quantitative method. As such, the test can tell you which version (A or B) performs better, but it won’t tell you why. To complement the quantitative data, then, reach out to visitors for qualitative feedback to get a complete picture of user behavior. You can send an email asking users who participated in the test about their experience. For example, this email can include a link to a form with a single question. Something as simple as, “Please tell us what you think about our new design” gives users space to provide their feedback in an unstructured format.
7. Conduct A/B Testing on a Regular Basis
“Test early, and test often” is an integral rule of product design. You cannot create a successful product without proper testing. Always include A/B testing alongside usability testing in your strategy. When developing this larger strategy, you need to decide when to conduct tests. Typically, A/B testing happens during a product redesign or when a new feature is released. Ultimately, though, it’s up to you to define when to run testing to best serve your goals.
Upgrade Your Testing Process
A/B testing makes product creators confident in their design decisions. It simplifies the validation process for your design and allows you to see exactly what works best for your users. Further, pair A/B testing with qualitative techniques such as user interviews to build a holistic understanding of user behavior. This way, you will learn not only what your users do, but also why they do it. With a solid testing regimen in place, you can ensure that your products are poised to dominate your niche.