Imagine a scientist in a white lab coat, two beakers in hand. You ask what she’s doing, and she says, “I’m conducting an experiment!” When you ask what the experiment entails, she responds, “I don’t know! I’m going to combine these liquids and see what happens!”
Obviously, no self-respecting scientist would call this an experiment. Yet people in almost every other line of business conduct these "experiments" all the time: "We're going to build an open-concept work area and see what happens." "We're buying this new design tool. Let's see what happens." "We're going to cancel our monthly staff meeting and see what happens." Agility allows teams to pursue continuous improvement through experimentation, but only if we're actually experimenting. A real experiment needs a real hypothesis: a best guess about what will happen. "See what happens" is not a helpful hypothesis because you can seldom disprove it; something almost always happens! Keep generating those ideas, but ask yourself these questions:

What do I think is going to happen? It's not an experiment if you don't have a disprovable hypothesis; at the end, you must know whether your guess was right. When you implement this change, will productivity go up? Will the number of bugs discovered decrease? Be specific about the changes you expect.

How and when will I measure my results? Some metrics are easy to track (number of bugs, call hold time, customer clicks), while others are very difficult (employee stress, quality improvement, customer engagement). Make sure you're choosing metrics that actually prove or disprove your hypothesis, not just the metrics that are easiest to access.

Who is involved in this experiment? Most of the changes we make affect other people. Choose enough participants that you learn from several perspectives, but not so many that the effects are catastrophic if your brilliant idea goes badly. "Narrow the blast radius," as my coworker says.

How long am I going to run this experiment? Change is hard, and it's even harder when you don't know how long it will last. Choose an appropriate timeline for your experiment and communicate it. This will keep you from monitoring a never-ending experiment, and it will build trust with your participants.

What other factors might confuse my results? If you've launched a new design for your website and want to measure how many more visitors you're receiving, think about other factors that could affect that number. Are you running your experiment on a weekend, when web traffic is higher? Did you also put out a new web ad? Did you change your Google AdWords campaign? Limit those confounding variables when you can, and document them when you can't.

Keep in mind that your hypothesis is designed to be proven or disproven. If you thought your change would boost productivity and instead productivity plummeted, that's a successful experiment! You and your team know more than you did before: you can make more informed decisions, and you never have to lose productivity to that change again. The only failed experiment is the one you don't learn from.
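If your metric is a simple count, like sign-ups per visitor before and after a website redesign, a quick way to check whether the difference is real or just noise is a two-proportion z-test. Here is a minimal sketch using only the standard library; the visitor and sign-up numbers are hypothetical, invented purely for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic comparing two proportions, e.g. sign-up rate
    before (a) vs. after (b) a change. |z| > 1.96 suggests a
    real difference at roughly the 95% confidence level."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of "no difference"
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120 sign-ups from 2,000 visitors before the
# redesign, 165 sign-ups from 2,100 visitors after.
z = two_proportion_z(120, 2000, 165, 2100)
```

With these made-up numbers the statistic comes out above 1.96, so you could reject "see what happens" in favor of a concrete conclusion. Note that a test like this checks only statistical noise; it does nothing about the confounding variables above, which you still have to control or document yourself.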