Tips for better conversion when doing A/B testing.

When conducting A/B tests, which we highly recommend if you have enough traffic, you’ll probably find yourself in a situation where you’ve run out of ideas for new hypotheses, or the ideas you have don’t fully convince you.

We’ve conducted a few experiments in our day, and here are some tips that will hopefully help you lift those KPIs.

1. Generate hypotheses based on actual data.

It’s very easy to have opinions about what should be done in a digital product to increase metrics, and each team member might have a different view. Adding structure and substance to your process will help you avoid those debates of opinion.

Data is the most robust backing for your hypotheses, whether it comes from analytics, a heuristic evaluation or UX audit, benchmarking data, qualitative research, or any other source.

For this, your team needs KPIs in place that everyone has agreed on. You need to be able to answer the question “What does success look like?” with quantifiable data.

A good format for a hypothesis can be:

“Changing ‘element A’ (the current design) to ‘element B’ (design challenger) will increase/decrease ‘specific metric’ by ‘estimated %’.”

A real example could sound like this:

“Moving our sign up form above the fold will increase our sign up conversion rate by 20%.”

Estimating can be hard at the beginning, but don’t worry, the more you do it, the more accurate you’ll get.
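Estimates like the 20% above also tell you how long a test needs to run. As a rough sketch, not tied to any particular testing tool, here’s how you could approximate the visitors needed per variant using Python’s standard library and a two-sided z-test; the baseline rate and lift are made-up numbers:

```python
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant (two-proportion z-test)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. a 5% baseline sign-up rate, hoping for a 20% relative lift
print(sample_size_per_variant(0.05, 0.20))
```

With these example numbers you’d need roughly eight thousand visitors per variant, which is why small lifts on low-traffic pages can take a long time to verify.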

2. If you’re selling a product, consider features vs benefits.

It’s very common, when a company is trying to sell a product, to show its value by displaying a list of features.

This is probably the easiest way to show the value of a product: features are very literal, and you can copy them straight from the product specification. But most of the time, features don’t do a great job of explaining why someone should buy.

What’s in it for me? Why should I buy it? What do all these features mean?

If your page is selling features, try translating them into benefits for your users; that page will make a great challenger for your current design in an A/B test.

That said, this won’t be the case in every situation.

Different products have different users, all at different points in the product lifecycle, and this can affect your A/B testing experiment.

For example, you might be at a point, relative to your competitors, where the release of a particular feature is the defining move in your strategy to acquire more customers.

That’s why it’s so important to work on an evolutionary redesign strategy through A/B testing.

3. Keep track of all past experiments.

This is something vital that most teams don’t do a great job at. Processes need to be as future-proof as possible: people on a team will come and go, but the learnings should remain.

It’s also vital to have a system for gathering the learnings from past experiments, so you don’t test the same hypothesis twice.

The learnings from A/B tests are also valuable for understanding how your users see your product. These insights can feed better-performing marketing strategies.

A spreadsheet with all the data from your experiments, from hypothesis to results and comments, should be hosted somewhere accessible to everyone on your team.

Go back and check the results of past experiments, not only when you’re working on a new hypothesis, but also when you have to make a design decision without much data. You’ll learn a lot about your users, what they do and how they react.
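A shared spreadsheet works fine; if your team prefers keeping the log next to code, a minimal sketch in Python, with hypothetical column names you’d adapt to your own process, could look like this:

```python
import csv
from datetime import date

# Hypothetical columns for an experiment log; adapt them to your process.
FIELDS = ["date", "hypothesis", "metric", "expected_lift",
          "observed_lift", "significant", "notes"]

def log_experiment(path, row):
    """Append one experiment record to a shared CSV file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:          # empty file: write the header first
            writer.writeheader()
        writer.writerow(row)

log_experiment("experiments.csv", {
    "date": date.today().isoformat(),
    "hypothesis": "Sign-up form above the fold",
    "metric": "sign-up conversion rate",
    "expected_lift": "20%",
    "observed_lift": "12%",
    "significant": "yes",
    "notes": "Mobile improved the most",
})
```

The exact storage doesn’t matter; what matters is that every hypothesis, result and comment ends up in one place the whole team can search.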

4. Always double check your analytics tracking.

This is the technical and boring bit, but unfortunately it had to make the list: too many times, the data we’ve seen in our analytics software wasn’t accurate enough.

Sometimes you might need a developer or analytics specialist for this, but it’s good to calibrate your tracking once in a while. If you have a sign up form and you’re tracking the submits, run some controlled tests beforehand, so you can fix recurring problems before your experiment goes live.

From a suspiciously low bounce rate caused by double triggering of analytics code, to a broken tracking event or a forgotten redirect, many technical problems can ruin your tests, and you need to be prepared for them.
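One simple calibration, sketched here with made-up numbers, is to compare the event count your analytics tool reports against a server-side source of truth, such as the sign-ups actually written to your database:

```python
# Hypothetical sanity check: compare sign-up events reported by your
# analytics tool against the rows actually recorded server-side.
def tracking_health(analytics_events, server_signups, tolerance=0.05):
    """Return True if the two counts agree within `tolerance` (e.g. 5%)."""
    if server_signups == 0:
        return analytics_events == 0
    drift = abs(analytics_events - server_signups) / server_signups
    return drift <= tolerance

# Double-firing analytics code typically shows up as ~2x the real count:
print(tracking_health(412, 405))   # healthy
print(tracking_health(810, 405))   # suspicious: likely double triggering
```

Run a check like this on a schedule, and a broken tracking event gets caught before it quietly invalidates weeks of test data.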

5. Consider where the traffic is coming from.

To some people this might sound obvious, but I’ve seen it way too many times: A/B testing experiments conducted without considering where the traffic was coming from.

I’ve seen traffic coming from marketing ads that were doing their own A/B testing experiments independently.

When users are browsing the web, they’re following scents. They might be googling something and see your site in the results. That link carries a message, just as a marketing ad does.

When users click on your message, they’re following that scent. If they land somewhere where the message isn’t aligned, they lose the scent, and they leave.

Align your marketing messages, learn from SEO what your users are searching for when they arrive at your site, and check your social media stats. Don’t treat your site like it stands on its own. It’s part of a digital universe where everything is connected.
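Before trusting a test result, it can also help to break conversion down by traffic source; here’s a minimal sketch with made-up session data:

```python
from collections import defaultdict

# Hypothetical session records: (traffic_source, converted)
sessions = [
    ("google_organic", True), ("google_organic", False),
    ("paid_ad_variant_a", False), ("paid_ad_variant_a", False),
    ("paid_ad_variant_b", True), ("newsletter", True),
]

totals = defaultdict(lambda: [0, 0])   # source -> [conversions, visits]
for source, converted in sessions:
    totals[source][0] += int(converted)
    totals[source][1] += 1

for source, (conv, visits) in sorted(totals.items()):
    print(f"{source}: {conv / visits:.0%} ({conv}/{visits})")
```

If one source converts wildly differently from the rest, or an ad campaign is running its own split test on top of yours, you’ll see it here before it skews your results.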

If you need a hand with your strategy for A/B testing, we’re here to help.

We hope this is helpful and thanks for reading.

