How a couple of hours of design research can save you from failure

Many design mistakes can be resolved quickly and easily by doing some initial design research. In this article, we show you how to test early and often.

Mistakes are unavoidable. We all make them. Instead of trying to be perfect, we recommend using design research to make it safer for you and your team to make mistakes. In fact, believing you’ve produced perfect work that doesn’t need testing deprives you of opportunities to learn through research.

Many problems can be fixed quickly if you catch them early enough, but the longer you leave them, the bigger they grow. Just as a dropped stitch ruins a sweater by becoming a gaping hole, a rogue assumption can ruin your project – and the earlier the assumption is introduced, the worse the effects can be.

Let’s start with a story

Let’s say you work as a marketing manager in a department store. One of your quarterly goals is to develop an outdoor campaign for new kitchen appliances. You quickly write a brief, contact the appliance manufacturer for some images, and send the brief away to your agency to book in media. The creative comes back with a few options to choose from. You immediately circulate these with your internal stakeholders for feedback (perhaps your line manager, your brand representative, sales, customer service, etc.).

You’ve got your favorite design that has a unique image with a cute pun. After the feedback rounds, one of your colleagues looks over your shoulder and says something that surprises you a bit – that the creative looks a little bit like a person. You shrug it off. It’s just one opinion. Nobody else has mentioned it.

Your agency finalizes the creative, making it look polished and beautiful. The design is really eye-catching, and you make sure your brand colors and fonts are on point. One of the designers makes an off-hand comment about not being sure what’s being advertised, but it’s not really their job to criticize the concept, so you tell the agency to send it out to media to publish. Your sales team needs the campaign out immediately to start hitting targets, and the media team needs the assets from the agency right now so as not to miss out on the premium spot by the highway.

The next day, you drive to work and catch a glimpse of something by the side of the road. There’s a billboard with your freshly printed ad. As you drive by, you feel your stomach fall through the floor.

You’ve accidentally printed a billboard that evokes the face of the most hated person in all of human history.

Why do these things happen?

Why do we see funny listicles published with the “top 10 marketing fails”?

How do oversights like this make it to production?

Why do projects sometimes go to primetime with “obvious” mistakes baked in?

Why don’t we catch them earlier?

The first step toward testing assumptions is recognizing them

There are a few reasons why this happens. Teams get lost in assumptions. In the story I told, two different colleagues pointed out problems with the design, but the marketing manager ignored their feedback and didn’t think to test at all.

There are lots of reasons why we fail to test and thus fail to discover these mistakes. We might run out of time or even get distracted by the highest paid person’s opinion. Or perhaps we just don’t know how to go about validating or invalidating a hypothesis. But that’s why we’re here to help.

As a designer, I know this one well. We’re all making assumptions all the time. It’s just how we operate as humans. Some of the big ones that drove this hypothetical campaign to this conclusion might have been …

  • That the concept (“bells and whistles”) and the pun would be recognizable to the target market
  • That a billboard is a great, highly visible placement
  • That the kettle is a familiar object and won’t cause any problems
  • That the brand is easily recognizable — who the advertiser is and what they’re advertising

Each of these statements is an assumption, and once you get to the stage where you can articulate them, you can also start thinking about how risky they are and whether you need to test them.

Cognitive biases and sneaky thinking errors

Cognitive biases and thinking errors can trick us into thinking we know what’s what, when really we should be more critical of ourselves. They’re quite tricky to navigate because they’re technically subconscious, happening in the background. This is where the behavioral psychology side of things comes in.

You might have had the experience of feeling like you’re in a “bubble” with your design. For me, once I’ve been working on a design for too long, I stop being able to really see it. I need to somehow see it with “fresh eyes”. When I show a design I’ve gone blind with to others, they often pick up something that I have completely missed. This high-level “blindness” can be broken down into three big biases at play (and probably many smaller ones).

The mere-exposure effect is a psychological phenomenon where people tend to develop a preference for things because they’re familiar with them. You tend to just like things that you've seen more of. This is why, when we test multiple different designs with people, we have to be aware that people will become attached to the designs they’ve already seen.

A team might also be affected by a cognitive bias called the IKEA effect, where we place a disproportionate value on products we’ve partially created. This came out of an experiment asking people to put a value on a piece of IKEA furniture they constructed, compared to a pre-assembled piece. The participants placed a higher value on the pieces they built. We tend to overvalue work we’ve been a part of, and it becomes harder for us to view it dispassionately.

Finally, escalation of commitment is a human behavior pattern where an individual or group facing increasingly negative outcomes from a decision, action, or investment nevertheless continues the behavior instead of altering course. Essentially, it’s a loss-avoidance behavior. For example, gamblers who keep upping their bets even though they’re losing are deeply affected by this thinking error.

Compounding and amplifying all of these is the pressure to deliver, which makes it even less likely that you’ll take your time and check things thoroughly. The closer you get to the pointy end of the deadline, the more interested you’ll be in delivering than in checking every possible assumption – unless you’ve already built these pauses and checkpoints into your process. One of the best ways to mitigate this is to find low-effort, low-friction tools for your testing so it doesn’t seem like such a big ask.

Test early and often

In the example above, the marketer could have tested many of the assumptions we listed and saved themselves a lot of time, money, and reputational damage, both internally and with their customers.

This is the first and most important lesson. Test early and often – much earlier than you think! Not after launch, not a day before launch. Test as early as concept stage for the absolute best results. In fact, the earlier you test, the more time you have to respond to the feedback and adjust course. It’s so common for even the most Agile teams to not leave enough time or budget to react to the feedback they’ve gathered.

Testing often also means testing at different levels of fidelity – at concept stage, when it’s mocked up, and then finally when it’s in production. At each increase of fidelity, assumptions trickle into the project from different collaborators, different constraints, and different decisions. The only way to catch those sneaky assumptions that continue to intrude on your project is regularly testing as a project evolves.

You also hit up against the sunk cost fallacy at the pointy end of those deadlines – another thinking error that makes us feel attached to something because of all the perceived or actual investment in it. You’ll often experience this as a fear of making changes to the creative because the team perceives it as rework and a waste of money. The key to mitigating the sunk cost fallacy is understanding that it’s never too late. It’s always better to learn about a mistake before it happens rather than after, especially when the risk of shipping something disastrous outweighs the cost of rework.

Research doesn’t have to be an ordeal

Driven by an already steady increase in interest in research ops and a trend toward more distributed organizations, the world of online research tooling has exploded.

Researchers from all walks of life (software, UX, marketing) are leveraging tools that allow research to happen even when we can’t be in the room with our subjects. The biggest misconception about research is that it’s unavoidably time-consuming: that you have to recruit hundreds of people for statistical significance and compensate them all handsomely, or that you have to work with a recruitment agency and the process takes months.

The good news is that’s not the case anymore. Research can happen in as little as an hour or so, at a very low cost, if you use the right tools and know what you’re doing.

UsabilityHub lets you run tests at a relatively low cost, with broad reach, in very little time. Our worldwide panel of users means that your results can come back in as little time as it takes to make a coffee.

First impressions of your design really matter

Studies indicate that individuals form an initial impression of an object within a short period of time: 3 seconds (Lindgaard et al., 2006); 4 seconds (Kaiser, 2001); 5 seconds (Perfetti, 2005); and 7 seconds (Ramsey, 2004) in human-to-human interaction.

In addition, recent studies indicate that this time span may be very brief (as short as 50 milliseconds; Hotchkiss, 2006) when applied to the online context. In short, first impressions count.

In a different study, researchers analyzed page visits from 205,000+ websites, each with more than 10,000 visits (in total, the aggregated data covered more than two billion individual site visit events). The results showed that the first ten seconds are critical in deciding whether a user stays or bounces.

So, in 50 milliseconds, first impressions set in. That’s the limbic system, or the lizard brain: in those first 50 milliseconds, gut instincts kick in. You then have around 10 critical seconds to grab someone’s attention; in those seconds, users look more carefully, engaging the prefrontal cortex, which is responsible for more complex reasoning. To gain several minutes of user attention, you must clearly communicate your value proposition within 10 seconds.

Another framing is Daniel Kahneman’s fast and slow thinking. Your fast, instinctive thinking and your slow, considered thinking both need to be optimized for when you’re designing. All this data about attention might be a little confronting, but the good news is that there are some amazing tools out there that help you test exactly this.

Let's test it!

I’ll share an extremely simple test that took only about an hour to create. First, I’m going to ask a user to view the image for five seconds and then I'll ask these simple questions:

  • What is this ad about?
  • Did you notice anything unusual about the image?
  • Have you seen this billboard before?

I’m then going to show them the same image again, but without a limit, giving them time to engage slow thinking. Then I'll ask:

  • Now that you’ve had a bit of time to examine the image, how would you describe it?

And that’s it – that’s the whole test. You’re welcome to take it yourself. It took me about an hour to create, and I sent it out to the broadest possible audience. It cost me 150 credits, which is $150 USD on our platform. That may seem like a lot compared to doing no research at all, but it’s very cheap compared to a big moderated research project. And the turnaround time was five minutes.

Here are some of the results. Note that my questions were not leading. I didn’t ask “Does this look like Hitler to you?” (as this study did). Instead, I left it very open, so I didn’t suggest the answer at all.

It should go without saying that if even one person comments that your design evokes the image of Hitler, you probably need to revise it. But as we’ve already gone over, there are many reasons why you might ignore just one or two comments.

However, getting those comments from totally unbiased people adds a certain amount of weight, making them much harder to ignore.

The three questions that I asked are also very versatile and reusable. I simply asked:

  • A comprehension question for the limbic system: What was the ad about?
  • A question looking for blind spots: Did you notice anything unusual?
  • A double check for the prefrontal cortex: Now that you’ve had time to think about it, would you describe it differently?
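This two-phase structure can even be captured as a small, reusable template. Here’s a minimal sketch in Python – the field names and `build_test` helper are purely illustrative (this is not a real UsabilityHub API), but it shows how the timed first-impression phase and the untimed follow-up phase fit together:

```python
# A reusable template for the two-phase five second test described above.
# Field names and the helper function are hypothetical, for illustration only.

FIVE_SECOND_TEST = {
    "phases": [
        {
            # Timed exposure: fast, instinctive "System 1" impressions
            "exposure_seconds": 5,
            "questions": [
                "What is this ad about?",
                "Did you notice anything unusual about the image?",
                "Have you seen this billboard before?",
            ],
        },
        {
            # Untimed exposure: slow, considered "System 2" review
            "exposure_seconds": None,
            "questions": [
                "Now that you've had a bit of time to examine the image, "
                "how would you describe it?",
            ],
        },
    ],
}


def build_test(image_url: str, template: dict = FIVE_SECOND_TEST) -> dict:
    """Attach a specific creative to the reusable question template."""
    return {"image": image_url, **template}


test = build_test("https://example.com/billboard-draft.png")
print(len(test["phases"]))  # prints 2: timed first impression, then untimed review
```

Because the questions live in a template rather than in any one test, the same three-question pattern can be dropped onto any new campaign creative with one function call.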

These are questions you can use on almost any campaign. This generalized testing methodology is simple yet very powerful, and as you can see, it can help reveal things that you or your team might miss due to bias and thinking errors, time constraints, and project pressure. In the case of our illustrative story, the team didn’t see the fascist teapot themselves, but if they had run this test, they may have been surprised by the comments coming through. They would have discovered this error through research before they printed billboards across the country.

4 things we learnt doing a quick design test

1. Beware the bubble

Remember that as you proceed with your project, you and everyone involved are adding your own assumptions along the way. Due to various biases and thinking errors, compounded by pressure to deliver, you’ll lose your ability to see which of your assumptions are risky when you spend a lot of time in the bubble of the project. To mitigate this: do some research.

2. You don’t have to be a sophisticated scientist to do research

Research is complex, but you can avoid catastrophic project failure without being a scientist. So many tools exist that help you test those assumptions easily, cheaply, and quickly. You can even make research fun – get your team involved! Getting feedback sounds scary, but my experience is that it gives you a little dopamine hit when you learn that you’ve hit the mark, or you find out something that you’ve missed.

3. Research is easier than you might think

It takes less than an hour and less than $200 to set up a quick test, and you should get the results back the same day, if not faster. The questions you ask don’t have to be complex, either.

4. Testing assumptions helps you decrease the project risk

Obviously you can use research to avoid mistakes and mitigate risk, but there are secondary benefits as well. You can use research to support more ambitious plans and gambits. By running research, you might get more buy-in from your team for an idea that’s a little more out there. With insight from an outside authority, you’ll have the confidence to pitch your wild ideas and make them sound a little less scary.

In summary: real world feedback is the best. It’s so useful, energizing, and important. It’s almost silly not to do it. You can view the results of the test we ran here.

If you’re looking to get started testing your ideas quickly, easily and affordably, you can sign up to UsabilityHub to get started.

Join the world’s leading brands

Over 280,000 designers, marketers, researchers, and UX professionals use UsabilityHub to take the guesswork out of design decisions.

No credit card required