
A/B Testing: A Beginner's Guide

How to run A/B tests properly, calculate statistical significance, and avoid common mistakes, with examples from e-commerce.

Published February 23, 2026 · Time to read: 8 min

What is an A/B test?

An A/B test is an experiment in which you randomly split users into two comparable groups, show each group a different version of the page (A is the control, B is the variant), and measure which version performs better.

How does statistical significance work?

A/B test results are usually evaluated with a two-proportion z-test (or a chi-square test). The goal is to make sure the observed difference in conversion rate is not just random noise.

Rule: the confidence level (statistical significance) must be at least 95% before you make a decision.

CR(A) = Conversions_A / Visitors_A 
CR(B) = Conversions_B / Visitors_B 
Uplift = (CR(B) - CR(A)) / CR(A) × 100% 
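
As a rough illustration, here is a minimal Python sketch of the pooled two-proportion z-test described above. The function name ab_significance and the one-tailed confidence output are assumptions made for this example, not part of any specific tool.

```python
from statistics import NormalDist

def ab_significance(visitors_a, conv_a, visitors_b, conv_b):
    """Pooled two-proportion z-test: relative uplift of B over A
    and one-tailed confidence level (1 - p-value)."""
    cr_a = conv_a / visitors_a                      # CR(A)
    cr_b = conv_b / visitors_b                      # CR(B)
    uplift = (cr_b - cr_a) / cr_a * 100             # relative uplift, %
    # Pooled conversion rate under the null hypothesis "A and B are equal"
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (cr_b - cr_a) / se                          # z-score of the observed difference
    confidence = NormalDist().cdf(z) * 100          # one-tailed confidence, %
    return uplift, confidence
```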

Calculation example

- Option A: 1000 visitors, 80 conversions → CR = 8%

- Option B: 1000 visitors, 104 conversions → CR = 10.4%

- Uplift: +30%

- Confidence: 97% ✅ Accept B (see the check below)
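
Plugging the example numbers into the ab_significance sketch above reproduces these figures; note that 97% here corresponds to a one-tailed reading of the test.

```python
uplift, confidence = ab_significance(1000, 80, 1000, 104)
print(f"Uplift: {uplift:+.0f}%")         # Uplift: +30%
print(f"Confidence: {confidence:.0f}%")  # Confidence: 97%
```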

Common mistakes

1. Stopping the test too early - wait until the result is statistically significant and the planned sample size is reached

2. Testing during holidays - abnormal traffic distorts the results

3. Changing too many elements at once - you won't know which change drove the difference

How long to wait?

Use a sample size calculator to work out how many visitors you need BEFORE running the test (a simple sketch is shown below).
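
If you prefer to do the math yourself, the standard sample size formula for comparing two proportions can be sketched in a few lines of Python. The defaults of 95% confidence and 80% power, and the example figures, are assumptions for illustration, not a recommendation for your specific test.

```python
import math
from statistics import NormalDist

def sample_size_per_group(baseline_cr, min_relative_uplift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a given relative uplift
    over the baseline conversion rate (two-sided test)."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + min_relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)                             # round up to whole visitors

# Example: 8% baseline CR, smallest uplift worth detecting is +20% relative
print(sample_size_per_group(0.08, 0.20))            # 4918 visitors per variant
```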

Check your A/B test for free with our statistical calculator.
