Bill Gates says his favorite business book is Business Adventures, a collection of articles by New Yorker writer John Brooks. To explain why, he cites the article “The Fate of the Edsel,” which covers the research, design, promotion, and spectacular failure of the Ford Edsel, intended in the 1950s to be the “car of the future.” Ford poured some $250 million (in 1950s dollars) into designing its new flagship vehicle, then lost an estimated $350 million when the car flopped. Why? Gates writes: 

“Although the Edsel was supposed to be advertised, and otherwise promoted, strictly on the basis of preferences expressed in polls, some old-fashioned snake-oil selling methods, intuitive rather than scientific, crept in.” 

Ford had the research and testing showing what people wanted and how to talk to them about it, but threw that work away, opting instead for what some internal folks felt on a “gut” level. The story is a case in point for the importance of a research method called concept testing, which can be adapted and adopted by businesses of all sizes.

Concept testing is a research method that, quite simply, asks customers questions about a concept or idea before it’s actually launched. In the product development lifecycle, this generally comes after ideation – so once an idea is settled on – but before actual development begins. It’s an incredibly powerful way to pause and systematically realign the development process around the target audience’s actual needs and desires. 

For UX designers and researchers, concept testing can be used in a variety of situations. Early on, it can gauge whether or not a given product is interesting to the target audience. Once a product is launched, it can gauge the desirability of new features. It can also capture how different audience segments react to specific UX and design elements – for example, a chatbot that persists between screens. Even in the crucial phase after a product is fully designed but before it launches, concept testing can help hone marketing materials and messaging, or get ahead of possible quality assurance issues. 


The benefits of concept testing are many. 

  • It saves money. According to Harvard Business Review, approximately 95% of new products fail. Concept testing allows a safe way to nip bad ideas in the bud, before you end up with a Ford Edsel situation.

  • It builds buy-in. On larger teams, a lot of ideas – about products, features, and use cases – may be advanced. Concept testing validates the best ones, generating hard proof and gaining support from different factions. 

  • It encourages creativity. Failure is okay when it’s de-risked by a smaller trial, which is exactly what a concept test accomplishes. 

  • It optimizes an already-good product. If a product is further along in its development cycle, concept testing can help solidify thinking around pricing, market competition, and brand fit. Catching problems at this stage is also far cheaper than fixing them later: IEEE reports that correcting a software error after launch costs 100X what it would cost if fixed in the development phase. 

  • It can be used at any stage. Even after launch, a concept test can help generate texture around QA issues and market response. 

Concept testing steps

The point of concept testing is to be methodical and intentional about answering a nebulous question: Is this a good idea? And since its whole purpose is to save time down the road, don’t skimp on giving the process the time it needs to be conducted properly. 


1. Set a goal for your test

Even though the core question – “is this a good idea?” – is pretty open-ended, you should start by narrowing that down. Are you testing your entire target audience or a possible slice of it? Is this meant to appeal to a new audience entirely? How would a good idea be quantified internally – via user retention, sales upticks, form completions, engagement time? What’s a specific number that could be pegged to this uptick that would merit broad buy-in across the company? 

You want to turn “is this a good idea?” into something more like “will this new feature increase likelihood to purchase by 20% among users in Canada?”. Making your question very specific upfront helps create a concept test that generates real, actionable data. 
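Once the goal is that specific, it’s also worth agreeing up front on exactly how the uplift will be calculated. The snippet below is a minimal sketch of that calculation – the ratings, group labels, and 20% threshold are all hypothetical placeholders, not a standard method:

```python
# Sketch: check whether a tested concept meets a pre-defined uplift goal.
# The ratings, group names, and 20% threshold are hypothetical assumptions.

control = [3, 4, 2, 5, 3, 4, 3]   # purchase-intent ratings (1-5) for the current design
variant = [4, 5, 4, 5, 3, 5, 4]   # ratings from participants shown the new concept

def mean(xs):
    return sum(xs) / len(xs)

uplift = (mean(variant) - mean(control)) / mean(control)
GOAL = 0.20  # "increase likelihood to purchase by 20%" from step 1

print(f"Uplift: {uplift:.1%} -> {'meets goal' if uplift >= GOAL else 'misses goal'}")
```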

2. Create your test

For an unmoderated test – that is, one conducted without direct supervision – you may be using a survey or form. For a moderated test, it may take the form of a presentation and a script to interview users in-person. Either way, keep things as brief as you can to retain your users’ attention. Try to come up with open-ended questions that don’t influence users one way or another. For example:

  • What were your first reactions? 

  • What did you like about the product? What did you dislike about the product? 

  • How would you use it? 

  • What is your likelihood to purchase it? 

If there’s a marketing component to the concept test, this is a great place to ask the user about their awareness of the brand, how they feel about its category more broadly, and their awareness of competitors. 

In this phase, including carefully chosen, representative images is essential so that every participant is reacting to the same concept. Additionally, a Likert scale – which asks users to rate their impressions from “strongly favorable” to “strongly unfavorable” – can add quantifiable data alongside the more qualitative questions listed above. 
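If you capture those Likert responses numerically, a few lines of code can turn them into a comparable score per concept. This is only a sketch with made-up ratings; the “top-two-box” convention (the share of respondents choosing one of the two most favorable points) is one common summary, not the only option:

```python
# Sketch: summarize five-point Likert responses as a mean and a top-two-box score.
# Ratings are hypothetical; 4 and 5 are treated as the favorable end of the scale.

responses = [5, 4, 4, 3, 2, 5, 4, 1, 3, 4]  # one concept's ratings, 1 = strongly unfavorable

mean_score = sum(responses) / len(responses)
top_two_box = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Mean rating: {mean_score:.2f} / 5")
print(f"Top-two-box: {top_two_box:.0%} of participants rated the concept 4 or 5")
```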


3. Recruit the right participants

To actually answer the question you set in step 1, make sure your test participants match the group you outlined there. An unmoderated test is going to be cheaper, allowing you to cast a wider net, whereas a moderated test is probably better for deeper insights from a much narrower slice. 

Either way, build some variation into how you recruit participants so you don’t introduce sampling bias. For example, if all of your participants come from an email blast, you may unintentionally capture only preexisting users, skewing your data about usability or pricing. 
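One lightweight way to catch this kind of skew is to record where each participant came from and check the mix before fielding the test. The sketch below assumes a hypothetical participant list with a made-up "source" field:

```python
# Sketch: check that participants aren't all drawn from one recruitment channel.
# Participant records and source labels are hypothetical.

from collections import Counter

participants = [
    {"id": 1, "source": "email_list"},
    {"id": 2, "source": "email_list"},
    {"id": 3, "source": "panel"},
    {"id": 4, "source": "social"},
    {"id": 5, "source": "email_list"},
]

mix = Counter(p["source"] for p in participants)
total = len(participants)

for source, count in mix.items():
    share = count / total
    flag = "  <-- consider rebalancing" if share > 0.5 else ""
    print(f"{source}: {count}/{total} ({share:.0%}){flag}")
```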

4. Review and interpret your results

Analyzing both the quantitative and qualitative data gives you a holistic understanding of how well your product concept resonated with participants. This is where all of the work of concept testing turns into hard data, fleshed out with juicy, context-adding quotes. 

Be sure to analyze data in aggregate to identify patterns and trends, examine responses to open-ended questions to get more detailed insights, and compare results to previous tests or benchmarks. By taking these steps, you can gain valuable insights that can help you make informed decisions about your product concept and improve user experience.
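As a sketch of the quantitative side, the snippet below aggregates ratings per concept and compares each to a benchmark. The concept names, ratings, and benchmark value are hypothetical placeholders:

```python
# Sketch: aggregate per-concept ratings and compare them against a benchmark.
# Concept names, ratings, and the benchmark value are hypothetical.

ratings = {
    "Concept A": [4, 5, 3, 4, 4, 5],
    "Concept B": [3, 2, 4, 3, 3, 2],
}

BENCHMARK = 3.5  # e.g. the average score from a previous test or launched feature

for concept, scores in ratings.items():
    avg = sum(scores) / len(scores)
    verdict = "above benchmark" if avg >= BENCHMARK else "below benchmark"
    print(f"{concept}: {avg:.2f} ({verdict})")
```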

Types of concept testing

While the overall steps to concept testing are pretty solid, there are a few different frameworks to think through and possibly implement. They vary pretty widely in terms of application and scale. 

Monadic testing

In monadic testing, participants are shown a single concept or idea and asked to provide feedback on it. This type of testing is useful for gathering detailed feedback on a specific concept, as participants are able to focus solely on that concept without being influenced by other ideas. However, because participants are only shown one concept, it can be difficult to compare the effectiveness of multiple concepts.

Sequential monadic testing

Similar to monadic testing, sequential monadic testing involves showing participants one concept at a time. However, in sequential monadic testing, participants are shown multiple concepts in a specific order. This allows designers and researchers to gather feedback on multiple concepts while also controlling for order effects. However, this type of testing can be time-consuming and may not be practical for large numbers of concepts.
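A common way to control for those order effects is to rotate the order in which concepts are shown, so that each concept appears in each position roughly equally often across participants. A minimal sketch, assuming three hypothetical concepts and a simple rotation scheme:

```python
# Sketch: rotate concept order across participants to balance order effects
# in a sequential monadic test. Concept names are hypothetical.

from itertools import permutations

concepts = ["Concept A", "Concept B", "Concept C"]

# All possible presentation orders; cycle through them as participants arrive.
orders = list(permutations(concepts))

def order_for(participant_index: int):
    """Assign a presentation order based on participant arrival index."""
    return orders[participant_index % len(orders)]

for i in range(4):  # first four participants
    print(f"Participant {i + 1}: {' -> '.join(order_for(i))}")
```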

Comparative testing

In comparative testing, or preference testing, participants are shown two or more concepts side by side and asked to provide feedback on each one. This type of testing is useful for comparing the effectiveness of different concepts, but it can be more complex to design and analyze, as participants may be influenced by the presence of the other concepts.

Protomonadic testing

Protomonadic testing combines the sequential monadic and comparative approaches: participants first evaluate each concept on its own, then see the concepts together and indicate which they prefer. This lets researchers check whether the standalone ratings hold up in a head-to-head comparison, but it makes for a longer test and a more involved analysis.

Concept testing examples

Concept testing is such a broad idea that it can seem a little abstract, but remember that at its core, it’s really just taking a quick pause in the product development process at the right time to get a gut-check from users. Some of the world’s biggest brands use this process. 

For example, Airbnb used concept testing to determine whether users preferred a horizontal or vertical layout for their search results page. Through testing, they found that users preferred the horizontal layout, and implemented it on their site. Microsoft conducted comparative testing to determine the effectiveness of different taskbar designs, and ultimately used the results to inform the design of the taskbar in Windows 10. And Amazon used concept testing to refine the design and even the voice of the Amazon Echo. 

On a smaller scale, concept testing can be implemented in a variety of ways. Maybe it means running some new homepage designs by a representative set of customers. It may mean designing a survey that gauges interest in a new product. Or it could be getting detailed feedback from a wide group around a few possible rebranding options. The magic of concept testing comes in taking the time to be intentional and methodical about the deliberations. 

A preference test created in Lyssna, which asks participants to choose which homepage design they prefer.

Common mistakes when concept testing

If you follow the steps above closely, you’re well on your way to an effective concept test. Still, it’s worth buzzing through a few common mistakes as a failsafe. 

Undefined research goals

Undefined research goals can lead to a lack of focus and direction, and can make it difficult to interpret the results. To avoid this, be sure you’ve scoped time at the beginning of your process to adequately ask these big questions.

Using the wrong participants

This can lead to inaccurate results that aren’t representative of the target audience. To avoid this, use appropriate screening questions to ensure that participants are a good fit, or seek help from an external agency to find users. 

Poorly designed concept tests

This may include using a design that’s not randomized, using too many concepts or questions, or using questions that are poorly worded or confusing. To avoid this, consider pilot testing with a very small group to identify any problems before launch. 

Using the wrong metrics

Using the wrong metrics can lead to inaccurate results that don’t accurately reflect the success of the concepts being tested. To avoid this, conduct stakeholder interviews with key decision-makers before testing, and be willing to ask multiple times (in multiple ways) what the most important key metrics are. 

Misinterpreting the results

This can happen when you don’t consider the context of the results, or when you don’t compare the performance of different concepts effectively. To avoid this, expose the raw data and results to a handful of interpreters so that everything isn’t filtered through one person’s POV (and possible biases).

Ready to get started with concept testing?

Incorporating concept testing into your product development process can greatly enhance your chances of success and prevent costly failures. 

Lyssna can help you gather valuable insights from your target audience with features like five second testing, first click testing, prototype testing, and preference testing, plus the ability to recruit from the participant panel.


