In my work in both research and recruiting at Beyond, I do a lot of experimentation and testing. Especially in recruiting, we are constantly testing new ways of communicating vision, gathering potential candidates, and identifying high-potential prospects: people who are telling us they are serious about pursuing missions as a career. I've been reading a lot of business literature about designing experiments and trying to apply it practically. We have discovered what probably every good business major or entrepreneur knows, but which seems rare in missions: the key to a good experiment is to set a clear bar for both "success" and "failure" before running the experiment!
Unfortunately, in missions we often "do something as an experiment," and only afterward ask whether it was a success. "Well, we had a couple of people respond..." So, is that a success? Is it a good ROI? ("How do we measure the value of a soul? How do we know what kind of impact they will have on the field?") We end up shooting the arrow, then painting the target around it.
We've found it's a lot easier to set the win conditions first. In one of our recent small experiments, we defined a "win" at three levels: (1) at least 20 people show up, (2) people ask questions, (3) one to three people self-indicate they are "potential candidates." The "failure" bar was the inverse: (1) fewer than 20 show up, (2) few are interested (= ask questions), (3) no potential candidates come out of it. An "abandon" bar was (1) no one shows up, or (2) no questions are asked.
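The tiered bars above can be sketched as a simple decision rule. This is a hypothetical illustration: the thresholds mirror the example in the text, but the function and parameter names are my own.

```python
def evaluate_event(attendees, questions_asked, potential_candidates):
    """Classify an experiment's outcome against bars set BEFORE the event.

    Thresholds mirror the example in the text; names are illustrative.
    Returns "abandon", "failure", or "win".
    """
    # Abandon bar: no one shows up, or no questions are asked.
    if attendees == 0 or questions_asked == 0:
        return "abandon"
    # Win bar: all three levels met (attendance, engagement, candidates).
    if attendees >= 20 and potential_candidates >= 1:
        return "win"
    # Otherwise the failure bar applies: keep the idea, improve the execution.
    return "failure"
```

The point is not the code itself but the ordering: the function exists before the event runs, so the outcome is judged against a target that was fixed in advance rather than painted on afterward.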
If the experiment failed, we would then have clear questions to ask about how we performed it: Were there things we could do to improve the show-up and participation rate? Are we inviting the right people (i.e., people likely to be potential candidates)? Do we have the wrong discussion topics? Every experiment should give us feedback and teach us how to improve our success rate before the next iteration.
The challenge for a lot of missions - especially smaller ones - is that we don't know where the candidates are, how to find them, and how to mobilize them into mission. Experiments are a way to remove the "fog of uncertainty." But experiments need to be run with clear conditions to know whether they should be amplified, modified, or abandoned.