A/B Testing

Implementing A/B tests isn’t rocket science. I’ll show you how to implement one.

Mykhailo Glodian
3 min read · Dec 29, 2020

What is A/B Testing?

A/B testing (or split testing) is a process of showing two variants of the same web page to different segments of website visitors at the same time.

The main idea behind split testing is to compare which variant drives more conversions and thereby improve the conversion rate of the website. At the end of the day, with the same amount of traffic you’ll get more conversions, which leads to more money.

What can you test?

  • Headlines and Sub-headlines (your main USP)
  • Body of the website
  • Design
  • Website’s navigation
  • Forms
  • CTA
  • Social Proof, etc.

The list is not exhaustive.

“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

John Gall

So start with simple tests — change CTA on a button, change the color of that button, test another USP, etc.

Which tools should you choose for A/B testing?

There are a lot of tools on the market.

If you are new to A/B testing, I recommend Google Optimize: it’s free, easy to set up, and integrated with Google Analytics. You can add an anti-flicker script so your visitors will never know that you are running a split test.

How to start?

Step 1

Write all your hypotheses.

Prepare a document with hypotheses you want to test and with the potential outcomes you want to get.

Step 2

Prioritize them.

I usually use the PIE (Potential, Importance, Ease) prioritization framework.


Potential: assign a score of 1–10 for the potential increase in conversions the page can give.

Importance: assign a score of 1–10 for the value of the page’s traffic to your business and how important it is for ROI.

Ease: finally, talk to your developers and ask them to assign a score for how easy the fix will be to design and implement.
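The PIE scoring above is easy to automate once you’ve collected the three scores for each hypothesis. Here is a minimal sketch; the hypotheses and score values are made up purely for illustration:

```python
def pie_score(potential, importance, ease):
    """PIE score: the average of the three 1-10 scores."""
    return (potential + importance + ease) / 3

# Hypothetical hypotheses with (Potential, Importance, Ease) scores.
hypotheses = {
    "New CTA copy on pricing page": (8, 9, 7),
    "Add social proof to homepage": (6, 7, 9),
    "Redesign checkout form": (9, 8, 3),
}

# Rank hypotheses from highest to lowest PIE score.
ranked = sorted(hypotheses.items(), key=lambda kv: pie_score(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{pie_score(*scores):.1f}  {name}")
```

The hypothesis at the top of the ranked list is the one to test first.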

Step 3

Prepare a test.

Check everything. Are you sure that your A/B test looks okay on mobile?

Step 4

Run the A/B test.

While your test is running, make sure it meets every requirement for producing statistically significant results before you close it.


Step 5

Repeat.

It’s all about learning from your previous experiments.

What is statistical significance? How is it calculated?

A statistically significant result is one where, after rigorous testing, you reach a certain degree of confidence in the results. We call that degree of confidence the confidence level, and it expresses how sure we are that our data was not skewed by random chance.

The easiest way to calculate it is to use an online calculator.


Follow the steps and the calculator will show you whether your experiment is significant or not.
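If you prefer to see what such a calculator does under the hood, the standard approach for comparing two conversion rates is a two-proportion z-test. This is a sketch with made-up traffic and conversion numbers, not data from a real experiment:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 200 conversions out of 10,000 visitors (2.0%).
# Variant B: 260 conversions out of 10,000 visitors (2.6%).
z, p = z_test_two_proportions(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

If the p-value is below 0.05, the result is significant at the common 95% confidence level.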


Once you know your baseline conversion rate, you will be able to plan the test duration.
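Duration planning boils down to a sample-size estimate: given your baseline rate and the smallest lift you care about, how many visitors does each variant need? A rough sketch, using the common 95% confidence / 80% power defaults and made-up traffic figures:

```python
import math

def sample_size_per_variant(baseline, mde):
    """Approximate visitors needed per variant to detect a relative lift
    (minimum detectable effect, MDE) over the baseline conversion rate,
    at 95% confidence (two-sided) and 80% power."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    z_alpha = 1.96  # two-sided z for alpha = 0.05
    z_beta = 0.84   # z for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical: 2% baseline conversion, want to detect a 20% relative lift.
n = sample_size_per_variant(baseline=0.02, mde=0.20)
# Hypothetical traffic assumption: ~1,000 visitors/day split across two variants.
days = math.ceil(2 * n / 1_000)
print(f"{n} visitors per variant, roughly {days} days")
```

Plug in your own baseline rate and daily traffic; small baselines and small lifts both push the required sample size, and therefore the test duration, up sharply.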


