Section 1 - Instruction

Remember hypothesis testing from our last session? A/B testing is that framework applied to real business experiments!

Instead of just analyzing existing data, you actively create two versions (A and B) to test which performs better.

Engagement Message

Can you describe how A/B testing turns hypotheses into experiments?

Section 2 - Instruction

Here's how A/B testing works: you split your audience randomly into two groups. Group A sees the original version (the control), and Group B sees your new version (the treatment).

For example: half your website visitors see the current checkout page, half see your redesigned version.
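Here's a minimal Python sketch of that random split; the visitor IDs and the 50/50 sizes are made up purely for illustration.

```python
import random

random.seed(42)                        # fixed seed so this example split is reproducible

visitor_ids = list(range(1000))        # hypothetical visitor IDs
random.shuffle(visitor_ids)            # randomize the order before splitting

group_a = set(visitor_ids[:500])       # control: current checkout page
group_b = set(visitor_ids[500:])       # treatment: redesigned checkout page

def page_for(visitor_id):
    """Return which checkout page this visitor should see."""
    return "current" if visitor_id in group_a else "redesigned"

print(page_for(7))
```

Because assignment is random, the two groups should look alike on average, so any difference in behavior can be attributed to the page change rather than to who happened to land in each group.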

Engagement Message

Why is random splitting crucial for reliable results?

Section 3 - Instruction

This directly applies your hypothesis testing knowledge. Your null hypothesis: "The new design doesn't improve conversion rates." Alternative hypothesis: "The new design DOES improve conversions."

You're testing these competing claims with real customer behavior, not just analyzing historical data.
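As a sketch of how that comparison could look in code, here is a one-sided two-proportion z-test in Python; the conversion counts are invented for illustration, not real data.

```python
# H0: the new design doesn't improve conversion rates
# H1: the new design DOES improve conversions
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 200, 5000   # control: conversions, visitors (made-up numbers)
conv_b, n_b = 240, 5000   # treatment: conversions, visitors (made-up numbers)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under H0

z = (p_b - p_a) / se
p_value = norm.sf(z)                                     # one-sided: is B better than A?

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

A small p-value would lead you to reject the null hypothesis in favor of the new design, exactly as in the hypothesis tests you've already run on historical data.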

Engagement Message

Which hypothesis would you hope to reject?

Section 4 - Instruction

Sample size is critical for A/B testing success. Too small, and you might miss real improvements. Too large, and you waste time and resources.

Generally, you need hundreds or thousands of users per group to detect meaningful business changes.
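To get a feel for the numbers, here's a rough Python sketch using the standard normal-approximation formula for comparing two proportions; the baseline and target conversion rates are assumptions chosen for illustration.

```python
# Rough per-group sample size to detect a lift from a 4% to a 5% conversion rate
from scipy.stats import norm

p1, p2 = 0.04, 0.05          # assumed baseline and hoped-for conversion rates
alpha, power = 0.05, 0.80    # one-sided significance level and desired power

z_alpha = norm.ppf(1 - alpha)
z_beta = norm.ppf(power)

# Normal-approximation formula for a two-proportion comparison
n_per_group = ((z_alpha + z_beta) ** 2 *
               (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2

print(f"Roughly {int(n_per_group) + 1} users needed per group")
```

For this small one-percentage-point lift, the formula calls for several thousand users per group, which is why 50 users per group is nowhere near enough.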

Engagement Message

Why might 50 users per group be insufficient for reliable results?

Section 5 - Instruction

Here's a common mistake: stopping a test early because you see promising results. This is the "peeking" problem - by checking repeatedly, you're more likely to stop during a random positive fluctuation.

Plan your test duration upfront and stick to it, regardless of early results.
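If you're curious why, here's a small, illustrative Python simulation (all parameters are made up): both groups share the same true conversion rate, yet checking significance after every batch and stopping at the first "win" flags an effect far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def peeking_false_positive_rate(n_trials=2000, batches=20, batch_size=500, p=0.05):
    """Fraction of A/A tests (no real difference) declared 'significant' when peeking."""
    false_positives = 0
    for _ in range(n_trials):
        conv_a = conv_b = n = 0
        for _ in range(batches):
            conv_a += rng.binomial(batch_size, p)   # both groups have the SAME true rate
            conv_b += rng.binomial(batch_size, p)
            n += batch_size
            p_pool = (conv_a + conv_b) / (2 * n)
            se = np.sqrt(p_pool * (1 - p_pool) * (2 / n))
            if se == 0:
                continue
            z = (conv_b / n - conv_a / n) / se
            if abs(z) > 1.96:            # looks "significant" at the 5% level
                false_positives += 1     # but there is no real difference
                break                    # the mistake: stopping the test early
    return false_positives / n_trials

print(f"False-positive rate with peeking: {peeking_false_positive_rate():.2%}")
```

Running this, the false-positive rate lands well above 5%, which is exactly why you commit to a duration before the test starts.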

Engagement Message

In one phrase, why can ending an A/B test early be misleading?
