Experimentation is at the heart of smart business decision-making. In this unit, you’ll learn how to turn strategic questions into simple, actionable experiments—especially A/B tests—so you can move from guessing to knowing what actually works. By mastering these basics, you’ll be able to test ideas quickly and make evidence-based choices that drive results.
The HBR Guide to Data Analytics Basics for Managers asserts that to design a strong business experiment, you must start by turning a broad question into a specific, testable hypothesis. This means moving from something vague, like “Will a new homepage help sales?”, to something you can actually measure, such as “Will changing the homepage headline increase completed purchases by at least 5% over two weeks?” This shift helps you focus your experiment and define what success looks like.
When it comes to testing, you'll rely on two main approaches: A/B testing and multivariate testing.
Most often, you’ll use A/B testing to answer these questions. A/B testing is a method for comparing two versions of something—like a webpage, email, or product feature—to see which one performs better. Users are randomly assigned to see either the original version (A) or a new variation (B), and you measure which group achieves your desired outcome more often. This random assignment helps ensure that any difference you observe is likely due to your change, not just random chance or outside factors.
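If you ever wire up an A/B test yourself rather than relying on a testing tool, the random assignment step can be very small. Here is a minimal Python sketch; the user ID value and the 50/50 split are illustrative assumptions, not part of any particular product:

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID (rather than flipping a coin on every page load)
    keeps the assignment stable: the same user always sees the same
    version for the life of the experiment.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100      # a number from 0 to 99
    return "A" if bucket < 50 else "B"  # 50/50 split

# Example: route a visitor to the original (A) or new (B) version
print(assign_variant("user-12345"))
```

Keeping each person in the same group for the whole experiment is part of what makes the comparison fair.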
Sometimes, your business question involves more than one variable, such as button size, color, and text. In these cases, you can use multivariate testing to test combinations of changes at once (for example, a large blue button vs. a small red button vs. a large red button). Multivariate testing helps you understand not just which individual change works best, but also how different changes interact with each other. This approach is more complex, but it can reveal insights that simple A/B tests might miss.
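To see why multivariate tests are more complex, it helps to count the combinations. A small Python sketch (the button attributes below are made up for illustration):

```python
from itertools import product

sizes = ["large", "small"]
colors = ["blue", "red"]
texts = ["Sign up", "Get started"]

# Every combination of size, color, and text becomes its own variant,
# so three attributes with two options each already means 8 versions.
variants = list(product(sizes, colors, texts))
for i, (size, color, text) in enumerate(variants, start=1):
    print(f"Variant {i}: {size} {color} button labeled '{text}'")
```

Each extra variable multiplies the number of versions, which is why multivariate tests need more traffic and more planning than a simple A/B test.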
A well-designed experiment always includes a treatment group (who experiences the change) and a control group (who does not). This setup allows you to isolate the effect of your change. For instance, in an A/B test, half your users might see a new email subject line, while the other half sees the original. Randomly assigning users to each group is critical—it helps ensure that other factors (like whether someone is on mobile or desktop) are evenly distributed, so your results reflect the impact of your change, not outside influences. In some cases, you might further divide users by key characteristics (like device type) before random assignment, a technique called “blocking,” to make your comparison even more fair.
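Blocking is easier to picture with a concrete example. The following Python sketch randomizes within each device-type block so that mobile and desktop users end up evenly split between control and treatment; the user list and device field are hypothetical:

```python
import random
from collections import defaultdict

users = [
    {"id": "u1", "device": "mobile"},
    {"id": "u2", "device": "desktop"},
    {"id": "u3", "device": "mobile"},
    {"id": "u4", "device": "desktop"},
]

# Group users by device type first, then randomize within each block,
# so device mix can't skew the control/treatment comparison.
blocks = defaultdict(list)
for user in users:
    blocks[user["device"]].append(user)

assignments = {}
for device, members in blocks.items():
    random.shuffle(members)
    half = len(members) // 2
    for user in members[:half]:
        assignments[user["id"]] = "control"
    for user in members[half:]:
        assignments[user["id"]] = "treatment"

print(assignments)
```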
It’s essential to define your primary outcome before you start. Choose one or two metrics that truly matter for your decision—such as "signup completion rate" for a form test, or "click-through rate" for an email subject line experiment. Also, agree on how long the experiment will run, or how many users you’ll include, to ensure your results are reliable.
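How many users is “enough” is usually answered with a rough sample-size estimate before the test starts. The sketch below applies the standard two-proportion sample-size formula; the 10% baseline rate, 12% target rate, and 95% confidence / 80% power settings are assumptions chosen for illustration, and most testing tools will run this calculation for you:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per group to detect a lift from p1 to p2
    with a two-sided test at significance `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative only: baseline 10% completion rate, hoping to reach 12%
print(sample_size_per_group(0.10, 0.12))  # about 3,800 users per group
```

Even rough numbers like these help you decide up front whether two weeks of traffic is actually enough to answer your question.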
A simple experiment plan might look like this:
- Hypothesis: changing the homepage headline will increase completed purchases by at least 5%.
- Change: new headline (treatment) vs. current headline (control), with visitors randomly assigned to each.
- Primary metric: completed purchase rate.
- Duration and sample size: two weeks, or until each group includes enough visitors to trust the result.
- Decision rule: roll out the new headline only if the lift meets the 5% threshold.
Here’s a realistic conversation between two colleagues, where one demonstrates how to focus an experiment and avoid common pitfalls:
- Jessica: Hey Ryan, I think we should overhaul the entire signup page—new layout, new colors, and maybe even a different flow. That should boost our signups, right?
- Ryan: I like the ambition, but if we change everything at once, we won’t know what actually made the difference. What if we start by testing just one change, like shortening the form from 8 fields to 4?
- Jessica: But what if the color or layout is the real problem?
- Ryan: Good point. Let’s test the form length first and measure the signup completion rate. If that moves the needle, we can test other changes next. We’ll run the test for two weeks or until we get 500 completions, so we have enough data to trust the results.
Even simple experiments can go off track if you’re not careful. One common mistake is changing too many things at once—if you update the headline, button color, and layout all together, you won’t know which change made the difference. Stick to one variable per test whenever possible, unless you’re intentionally running a multivariate test. Another pitfall is drawing conclusions from too small a sample; for instance, don’t declare success after just 10 signups. Finally, always decide what success looks like before you start, and resist the urge to “find a win” by looking at lots of different metrics after the fact. If your goal is more signups, don’t switch to measuring page views just because the signup rate didn’t change.
When you analyze your results, look for a meaningful difference in your primary metric, and consider the margin of error reported by your testing tool. A small lift may not be meaningful if it falls within the margin of error, so use your judgment and consider the costs and benefits before rolling out a change.
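If you want to sanity-check what your testing tool reports, you can compute the lift and a rough margin of error yourself. A minimal Python sketch using the normal approximation for the difference between two conversion rates (the signup counts are made up for illustration):

```python
from math import sqrt
from statistics import NormalDist

def difference_with_margin(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Return B's lift over A and a normal-approximation margin of error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return p_b - p_a, z * se

# Made-up example: 120 of 2,000 signups on A vs. 150 of 2,000 on B
lift, margin = difference_with_margin(120, 2000, 150, 2000)
print(f"Lift: {lift:.1%} +/- {margin:.1%}")
```

In this made-up example the margin of error is about as large as the lift itself, which is exactly the situation where you should hold off on declaring a winner.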
By keeping your experiments focused and well-defined, you’ll build confidence in your results and make smarter, faster business decisions. In the upcoming roleplay session, you’ll get hands-on practice designing a simple A/B test and navigating the real-world challenges that come with experimentation.
