You've learned a six-step analytics process that keeps teams on track from framing a question through learning from outcomes. But even the best process doesn't eliminate a fundamental tension every manager faces: when should you trust your gut, and when should you demand more data? The truth is that intuition and evidence aren't opposites—they're partners. Your experience gives you hunches worth exploring, and evidence helps you figure out which hunches are right. The skill lies in translating instincts into testable claims, defining what "enough" evidence looks like, and knowing when to act versus when to wait.
This balance matters especially for People Managers, whose decisions often involve human behavior that doesn't reduce neatly to numbers. Whether you're sensing that a team dynamic is off, believing a new hire process will improve quality, or feeling that a policy change will backfire, you bring valuable pattern recognition to the table. The goal isn't to silence that intuition but to make your beliefs explicit enough that you can check them against reality. Throughout this unit, you'll learn practical techniques for honoring your experience while staying honest about uncertainty.
According to the HBR Guide to Data Analytics Basics for Managers, a key facet of being data-driven is being able to integrate concrete data with your intuition as a manager. But we must remember: a gut feeling is a signal, not a conclusion. When you sense that something is true—"This candidate won't work out" or "Our engagement scores are about to drop"—you're drawing on patterns you've observed over time. That's valuable, but gut feelings are hard to evaluate or act on until you turn them into statements that can be tested.
The first step is making your belief explicit. Instead of a vague sense that "something's off with the team," articulate exactly what you believe is happening and why. A testable hypothesis might sound like "I believe our team's recent productivity decline is caused by unclear project priorities, not lack of effort." Notice how this version names a specific cause and distinguishes it from alternatives. It's no longer just a feeling; it's a claim you can investigate.
Good hypotheses share three essential characteristics:
- They must be specific enough that you could recognize evidence for or against them. Rather than saying "Morale is low," you might state "At least 40% of team members will cite unclear expectations as a top frustration in our next survey" (a sketch of checking a claim like this follows the list).
- Additionally, strong hypotheses are falsifiable, meaning you can imagine results that would prove them wrong. If no possible outcome would change your mind, you don't have a hypothesis but rather a conviction.
- Finally, effective hypotheses connect to observable behaviors or outcomes, not just internal states. You can't directly measure whether someone is "disengaged," but you can track whether they attend optional meetings, volunteer for projects, or respond to messages within a reasonable timeframe.
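To make the "specific and falsifiable" idea concrete, here is a minimal sketch in Python of checking a claim like the 40% example above. The survey responses and category labels are invented assumptions for illustration; the point is only that the hypothesis reduces to a check anyone on the team could run.

```python
# Hypothetical survey responses: each person's top frustration.
# The data and the 40% threshold are illustrative assumptions.
responses = [
    "unclear expectations", "workload", "unclear expectations",
    "tooling", "unclear expectations", "workload", "unclear expectations",
]

share = responses.count("unclear expectations") / len(responses)
threshold = 0.40  # the bar stated in the hypothesis

print(f"{share:.0%} cited unclear expectations (threshold: {threshold:.0%})")
if share >= threshold:
    print("Evidence is consistent with the hypothesis.")
else:
    print("Evidence falls short -- the hypothesis is not supported.")
```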
When someone on your team expresses a strong intuition, help them translate it using simple prompts. You might ask "What specifically do you think is happening?" followed by "How would we know if that's true?" and then "What would we see if it's not true?" These questions transform opinions into investigations. Here's an example of how this might unfold between two colleagues:
- Natalie: I just have this feeling that our remote employees aren't as engaged as the people in the office.
- Nova: That's interesting. What specifically makes you think that?
- Natalie: They hardly ever speak up in meetings, and a few have stopped volunteering for projects.
- Nova: Okay, so how would we know if that's true? We could compare meeting participation and project sign-ups between remote and in-office team members. And what would we see if it's not true?
Once you have a testable hypothesis, you need to decide how much evidence is enough. This is where many managers get stuck, either demanding perfect certainty before acting or accepting any scrap of data that confirms what they already believe. Neither extreme serves you well, so run your evidence through a checklist to make sure it is sufficient:
- Size the evidence bar: Start by asking "What happens if I'm wrong?" High-stakes decisions deserve more rigorous evidence than low-stakes, easily adjustable ones. For example, if you're considering whether to restructure your entire team, you should gather substantial evidence because the cost of being wrong is high. The question isn't "Do I have enough data?" but rather "Do I have enough data given what I'm risking?"
- Decide your "pass" criteria first: Before you look at results, define what would make you act. To avoid moving the goalposts later, set three bands: evidence that validates your choice, evidence that is unclear, and evidence that disconfirms it. For instance, if you believe a new onboarding approach will reduce turnover, you might decide: "If turnover drops by 15%, we validate. If it drops less than 5%, we revisit. Anything in between is unclear and we extend the pilot." (A sketch of this three-band rule follows the list.)
- Actively try to disprove yourself: To counter confirmation bias, the tendency to only see data that supports your hunch, ask "What evidence would change my mind?" and go looking for exactly that. If you're convinced an employee is underperforming, don't just look for mistakes; look for their recent successes. If you find disconfirming evidence, you must be willing to update your belief.
- Match certainty to commitment: Your evidence standard should match the level of commitment required. Hard-to-undo decisions (like a permanent hire) require high certainty. Reversible decisions (like hiring a contractor for a three-month trial) can be made sooner with less evidence, allowing you to refine your approach through learning.
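To make the pass-criteria idea concrete, here is a minimal sketch in Python of the three-band rule from the onboarding example. The 15% and 5% cutoffs come from that example; the function name and the turnover figures are invented for illustration.

```python
def evaluate_turnover_change(baseline: float, current: float) -> str:
    """Classify an observed turnover change against bands set in advance.

    Bands from the example: a drop of 15% or more validates, under 5%
    means revisit, and anything in between extends the pilot.
    """
    drop = (baseline - current) / baseline  # relative reduction in turnover
    if drop >= 0.15:
        return "validate: roll out the new onboarding approach"
    if drop < 0.05:
        return "revisit: the approach is not delivering"
    return "unclear: extend the pilot and keep measuring"

# Hypothetical annualized turnover before and during the pilot.
print(evaluate_turnover_change(baseline=0.20, current=0.16))  # 20% drop -> validate
```

Because the bands are encoded before results arrive, a borderline number can't be quietly reinterpreted as a win after the fact.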
Even with a clear hypothesis and defined evidence standards, you'll face moments where the data is ambiguous or incomplete. In these situations, you need a framework for deciding whether to act now or gather more information.
The key question is whether additional data would actually change your decision. Sometimes more analysis is genuinely useful—it might reveal a critical factor you hadn't considered or clarify a close call. However, requests for more data are often avoidance strategies dressed up as rigor. If you're honest with yourself and can't identify a realistic finding that would flip your choice, then "needing more data" is delay, not diligence. A useful test is to ask "If I got exactly the data I'm requesting, what would I do differently?" If the answer is "Probably nothing," you already have enough to decide.
Beyond the value of additional data, consider the cost of delay alongside the cost of a wrong decision. Every day you spend gathering more evidence is a day you're not capturing the benefits of acting. If your hypothesis is that a team member needs coaching to improve, waiting six more months for "definitive proof" means six more months of underperformance—plus potential damage to team morale and the individual's career trajectory.
Some situations call for structured experiments rather than all-or-nothing decisions. If you're uncertain whether a new flexible scheduling policy will help retention, you don't have to roll it out everywhere or nowhere. You could pilot it with one team for 90 days, define what success looks like, and then decide whether to expand based on actual results. This approach lets you move forward while managing risk—you're not waiting forever for data, but you're also not betting everything on untested intuition.
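As a sketch of that pilot logic, the snippet below walks through the same decision in Python. The retention figures and the 3-percentage-point success bar are invented assumptions; the key move is that the bar is written down before anyone looks at results.

```python
# Hypothetical 90-day pilot of flexible scheduling on one team.
# Success bar, defined BEFORE looking at results: retention at least
# 3 percentage points above comparison teams. All numbers are invented.
pilot_retention = 0.94
comparison_retention = 0.90
success_margin = 0.03

lift = pilot_retention - comparison_retention
if lift >= success_margin:
    decision = "expand the policy to more teams"
elif lift > 0:
    decision = "extend the pilot; the signal is positive but weak"
else:
    decision = "stop; the pilot shows no benefit"

print(f"Retention lift: {lift:.1%} -> {decision}")
```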
There's also wisdom in knowing when your intuition should carry extra weight. If you're a veteran manager who has seen similar situations play out dozens of times, your pattern recognition is genuinely informative. If you're dealing with a fast-moving situation where the window for action is closing, speed matters more than precision. And if the stakes are low enough that a wrong choice is easily corrected, going with your gut and learning from the outcome is often the smartest play. The goal is not to become paralyzed by uncertainty but to make uncertainty visible so you can manage it with purpose.
In the upcoming roleplay session, you'll practice these skills by working with a colleague who wants to make a significant decision based purely on instinct. Your job will be to honor that instinct while transforming it into a testable hypothesis, defining evidence standards, and proposing a practical path forward that balances speed with rigor.
