Four Reasons Why Your A/B Tests Fail

A/B testing can optimize your website for better performance and a better user experience by systematically testing small changes. The benefit of A/B testing is that companies don’t have to make a large investment in a website change without knowing whether that investment will pay off, and these tests are relatively inexpensive compared to other testing methods.

Here are four common reasons why an A/B test fails.

The Trackable Goal is Too Far Removed From the Test

When setting up A/B testing, many people will test a change high in the funnel and then watch overall conversion rates to see if it worked. The best tests form a hypothesis, make a single change, and then track the difference produced by that one change.

If a test is being conducted on search filters, the main metric to watch during the test is interaction with those search filters, not the overall conversion rate. The conversion rate should not solely determine whether a test succeeds or fails; a higher conversion rate is often not visible immediately after a test launches. It is really important to make the winning metric something measured on the page with the change, or at the very next step in the user’s experience.
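For illustration, here is a minimal Python sketch (standard library only) of scoring that kind of test on the nearby metric: it compares the filter-interaction rate between the control and the variant with a two-proportion z-test. The visitor and click counts are made-up numbers, not data from the article.

```python
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: visitors who interacted with the search filters.
control_users, control_filter_clicks = 10_000, 1_150   # 11.5% interaction rate
variant_users, variant_filter_clicks = 10_000, 1_280   # 12.8% interaction rate

p_a, p_b, z, p_value = two_proportion_z_test(
    control_filter_clicks, control_users,
    variant_filter_clicks, variant_users,
)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

The point of the sketch is simply that the metric being tested sits right next to the change; whether that lift eventually shows up in overall conversion is a separate question.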

Always aim for small improvements at each step in the process; they add up to bigger gains down the line.

Not Starting with Enough Data

Another common pitfall in A/B testing is when a company does not have data to guide its decision making.

For example, a client had a hypothesis that removing an internal banner on the site would increase engagement rates; the idea was that the page would have less clutter and allow customers to interact with the items for purchase. In fact, overall engagement on that page and conversion rates went down because heavy-use customers could not find the banner they relied on.

With a robust analytics suite feeding your team information, you can see where customers are engaging on your site, where they are dropping off and where potential new optimizations could be tested. That analytics data gives you a sound basis for forming educated hypotheses, which leads to more successful tests.

Going Off of Your Gut Instinct Alone

Another reason many A/B tests fail is that people test the things they personally dislike. Being able to distinguish between a personal preference and a potential optimization is critical to running a successful A/B test.

In the example above, one reason the test might have been run is that the person in charge of testing simply thought the banner was an eyesore. That personal preference, combined with not looking at the data (or not having data available), can lead to a failed A/B test.

Building a Hypothesis That Cannot Be Quantified

Finally, many tests are set up without a way to quantify numerically whether the change was successful. Every test should be set up so that the hypothesis can be tracked with numerical data.

A great example of a quantifiable goal: a button currently has a drop-off rate of 15%, and the hypothesis is that moving the button to a new location on the page will reduce that drop-off rate to 12%.
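As a rough illustration, a quantified hypothesis like that can also be turned into a test plan by estimating how many visitors each variant needs before the result can be called. The sketch below uses the standard two-proportion sample-size formula; the 5% significance level and 80% power are common defaults assumed here, not figures from the example.

```python
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a change
    from drop-off rate p1 to p2 with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2

# Hypothesis from the example: moving the button cuts drop-off from 15% to 12%.
n = sample_size_per_variant(0.15, 0.12)
print(f"~{n:,.0f} visitors per variant")   # roughly 2,000 per variant
```

Writing the goal down as numbers before the test starts is what makes it possible to say, afterward, whether the test succeeded.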

Many people do not put numerical targets into their testing goals, so there is no way to conclude whether the test was successful.

As you go about setting up A/B tests, be sure each test is set up for success. Your professional experience should not overrule the data, and keeping an eye on quantifiable goals will ultimately lead to a better optimized website.
