Are you an Insights leader trying to pivot to leaner, more reliable approaches? You’re not alone. We often hear confessions along these lines:
We learned the hard way that asking people for their opinion about the product we developed didn’t translate to purchase like we had hoped. After conducting many focus groups, we thought the product we developed would have traction. But we were asking if people liked it and thought they would buy it, instead of mocking up the product and putting it out into real-world contexts and seeing what happened. When the product got to market, nobody bought it – and we were shocked – because the opinions in the focus groups had been so favorable.
We hate hearing stories where opinion-based research led teams to false confidence in a product, service, or business model idea, only for the idea to cost the company hundreds of thousands of dollars because it lacked true product-market fit. This sort of misleading research is avoidable: pivot the research setup and test your riskiest assumptions – the startup-minded, scientific-method-meets-market-research way of behaviorally vetting product, service, or disruptive new business model prototypes.
Tools that fit into our Assumption Based Development toolbox must meet three criteria:
These three criteria ensure the misleading opinion-asking research setup described earlier doesn’t happen again, and push teams toward quicker, leaner testing overall (instead of an all-your-eggs-in-one-basket approach). With these three parameters in mind, The Garage Group typically leverages one of these three prototype testing vehicles when testing your riskiest assumptions:
We typically constrain this to roughly 150 participants, who each give 10-15 minutes of overnight quantitative feedback on prototypes. Prototype formats that work well for quick quantitative testing include explainer videos depicting a process or service, illustrated mock-up visuals, text descriptions, and more.
Output: Numeric data with minimal explanation of the ‘why’ behind customers’ decisions.
Benefit: “Behind the gate” testing where other brands aren’t going to publicly see your idea and steal it. It’s important to set up the test as transactionally as possible, though – steer clear of asking for opinions.
Example: Three different scenarios were tested using explainer videos. Participants were asked if they would be willing to do a certain behavior in response to each explainer video. This data helped fuel an Innovation Board pitch that got the team a “Persevere” decision and more funding for future testing and development.
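To illustrate the kind of numeric output this style of test produces, here’s a minimal sketch with entirely hypothetical response counts (the scenario names and numbers below are invented, not from the example above) that ranks scenarios by the share of participants who said they would actually take the target behavior:

```python
# Hypothetical overnight quant results: after watching each explainer video,
# did the participant say they would take the target behavior?
# All counts below are invented for illustration only.
responses = {
    "Scenario A": {"would_act": 62, "total": 150},
    "Scenario B": {"would_act": 41, "total": 150},
    "Scenario C": {"would_act": 89, "total": 150},
}

def willingness_rates(data):
    """Return each scenario's willingness-to-act rate, highest first."""
    rates = {name: r["would_act"] / r["total"] for name, r in data.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

for name, rate in willingness_rates(responses):
    print(f"{name}: {rate:.0%} said they would take the behavior")
```

Ranked behavioral rates like these are the sort of evidence that can anchor an Innovation Board pitch, in contrast to an average “liking” score from a focus group.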
Roughly 15 participants give 15-20 minutes of overnight qualitative feedback on prototypes. Prototype formats that work well for quick qualitative testing include explainer videos depicting a process or service, illustrated mock-up visuals, text descriptions, and more.
Output: In contrast to the quick quantitative test above, this test provides a deeper explanation of what’s working and what’s not working and how to improve on the prototype.
Benefit: Just like the quick quantitative test above, this test is “behind the gate,” where other brands aren’t going to publicly see your idea and steal it. It’s important to set up the test as transactionally as possible, though – steer clear of asking for opinions.
Example: Participants were pointed to a website and ran through a scenario, where the team could track where the participants navigated. After going to the website, participants answered several questions about what they gravitated towards and why.
Ads directing customers to a website or email sign-up page are strategically placed online over a 48-hour window, gathering click-through and sign-up data that indicate interest in prototypes and variants. Prototype formats that work well for in-the-wild ad testing include websites or prototype visuals that depict the product, service, or business model as if it were actually in-market today.
Output: Transactional data on which features, benefits, and variants drove the most interest from people interacting with the ads.
Benefit: Participants aren’t in a “research” mindset, and are going about their daily lives when they see the ad. It’s a more real-world view of how people would react in-market when the product/service/business model actually becomes available. Depending on the test, contacts garnered through the ad can be tapped into for further lean testing.
Example: Teams launched a series of Facebook/Instagram ads featuring different product variants to see which ad got the most interaction. They used this data to make a decision on which variant to move forward on building out.
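To sketch how “which ad got the most interaction” might be judged, here’s a minimal example with invented impression and click counts. The two-proportion z-test shown is one standard way to check whether a click-through-rate gap is more than noise; it is offered as an illustration, not necessarily the method the teams above used:

```python
from math import sqrt, erf

# Hypothetical 48-hour ad test: impressions and clicks per variant.
# All numbers are invented for illustration only.
ads = {
    "Variant A": {"impressions": 5000, "clicks": 110},
    "Variant B": {"impressions": 5000, "clicks": 165},
}

def ctr(ad):
    """Click-through rate: clicks divided by impressions."""
    return ad["clicks"] / ad["impressions"]

def two_proportion_z(a, b):
    """Two-proportion z-test: is the CTR difference bigger than chance?"""
    p1, n1 = ctr(a), a["impressions"]
    p2, n2 = ctr(b), b["impressions"]
    pooled = (a["clicks"] + b["clicks"]) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(ads["Variant A"], ads["Variant B"])
print(f"Variant A CTR: {ctr(ads['Variant A']):.2%}")
print(f"Variant B CTR: {ctr(ads['Variant B']):.2%}")
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented counts, the gap is large relative to the sample size, which is the kind of signal a team would want before committing to build out one variant.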
These three vehicles are our most commonly used (but we have lots of others up our sleeve, depending on the challenge at hand).
Insert your email below to download more examples of real BigCo experiments.
[pardot-form height="200" id="8013" title="Lean Research Experiment Examples"]
After tests are fielded, the results directly fuel a Pivot, Persevere, or Perish conversation with the leadership team or innovation board. Teams with Pivot or Persevere decisions then build out their idea further and put it through another round of testing using one or more of the three vehicles above. The teams we work with go through many rounds of this testing (not just one) to build-test-learn their way to a successful product, service, or business model launch.
If you’re in an Insights leadership role and are wondering how to put these tests into action, drop us a line. We’d love to talk. Contact email@example.com.