Understanding Statistical Significance in A/B Testing
Statistical significance tells you how unlikely your observed A/B test result would be if there were truly no difference between the variants. A result is conventionally declared significant at p < 0.05, meaning that if the null hypothesis of no difference were true, a difference at least as large as the one observed would arise from sampling noise less than 5% of the time. (Note the p-value is not the probability that your result is due to chance; it is computed assuming chance alone is operating.)

For example, if your control converts at 3.2% and your variant at 4.1%, the absolute lift is +0.9 pp. Without enough traffic, this difference could easily be noise. A significance test (typically a two-proportion z-test or chi-squared test) measures how large the observed difference is relative to the sampling noise at your actual sample size; deciding in advance how much traffic you need to detect a given lift is a separate question, answered by a power analysis.
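The two-proportion z-test mentioned above can be sketched with Python's standard library alone. The per-arm traffic of 5,000 visitors is an assumption for illustration; 160/5000 and 205/5000 reproduce the 3.2% and 4.1% rates from the text.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test under the pooled null of equal rates."""
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # null standard error
    z = (successes_b / n_b - successes_a / n_a) / se
    # erfc(|z|/sqrt(2)) equals 2 * (1 - Phi(|z|)): the two-sided p-value
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical traffic: 5,000 visitors per arm at the rates from the text
z, p = two_proportion_ztest(160, 5000, 205, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

At this assumed traffic level the test comes out significant at the 0.05 threshold; with substantially fewer visitors per arm, the same +0.9 pp lift would not.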