How Many Qualitative User Interviews Is Enough For Customer Discovery?

Mike Chirokas
5 min readOct 7, 2020

I hate “it depends” answers, so let’s get right to the point: the answer is eight.

In The Lean Startup, Eric Ries talks about spending endless hours releasing a new product only to find out that, “After all the hours we had spent arguing about which features to include and which bugs to fix, our value proposition was so far off that customers weren’t getting far enough into the experience to figure out how bad our design choices were”.

Eric and his team couldn’t have solved every product issue with eight customer discovery interviews alone, but they probably would have landed on a minimum viable product that got users far enough to at least experience the functionality and provide useful feedback.

Why Eight Is the Magic Number

A dozen user interviews are better than eight, and one million user interviews are better than a dozen, but we have a product to build. Without a cutoff point, we’ll delay our time to a minimum viable product and, in turn, our time to feedback for the next iteration. We should conduct enough user interviews to get to the next level, but not so many that we never get there.

If you’ve conducted user interviews, either within the customer discovery process or later in a product lifecycle, you may have experienced an s-curve of confidence similar to the following:

Here’s my typical experience:

1. By the end of the first interview: I’ve learned which questions work well, which need refinement, and which should never have seen the light of day. If I started this interview with eight questions, two will be successful, two will need refinement, and four will be utter trash. I generally walk away from this interview with little customer knowledge, but a much better list of questions.

For example, during my last round of user interviews, I started by asking, “When you think about conducting user interviews, what do you expect them to be like?” My hypothesis was that I’d be able to get some solid details about the emotions of folks about to conduct user interviews. What actually happened was a complete and utter blank stare. I stumbled to clarify what I meant and failed. In the end, I replaced the question with a better one for the remainder of the interviews.

2. By the end of the second interview: I’m starting to learn a bit more about which users might be a better fit than others for these interviews. If I can, I’ll refine my segment and reduce the number of questions in favor of more well-thought-out follow-ups to the questions that work well.

For example, when conducting user interviews for a consumer-focused product, I found after interviewing one person aged 26–36 and another aged 40+ that the 40+ user had absolutely no interest in the product because of the social media platforms she and her friends use. Clearly not a statistical sample, but it helped drive me in the right direction as I recruited new user interviewees.

3. By the end of the third interview: I’ll have a finalized list of questions and follow-ups to provide some consistency for the following five customer discovery user interviews. I’m usually confident in the segment I’m seeking and will do the final refinement of the users I want to interview if I can.

For example, when conducting user interviews for The User Interview Exchange, I started to realize that product managers kept referring to the employees beneath them in the organization who find the users to interview; I refined my list from product managers to UX researchers.

4. By the end of the fourth interview: I’ll be buzzing with some confidence in this topic. I usually have an idea of what a response will be and which follow-up questions I can respond with. I’m also starting to get an idea of how folks are solving this problem today.

For example, by the fourth user interview I conducted for a travel/fintech app I created, I had uncovered that “travel hackers” maintain sophisticated spreadsheets that work well for them and enjoy the “game” of earning and using points to travel for nearly free. For them, the process was a hobby and a community, rather than a means to an end.

5. By the end of the fifth interview: I’m starting to get a really solid shape for who might use this, who might not, and the alternatives people use to solve the problem today. My speed has picked up by now, so I’ll usually have some time to add a forward-thinking question to start testing a hypothesis.

For example, I’ll ask, “On a scale of 1 (not at all likely) to 10 (very likely), how likely would you be to use the product if it did x? Why?” It isn’t statistically significant, but it forces the user to examine why they’d give a higher score if the product has x than if it doesn’t.

6. By the end of the sixth user interview: I’m starting to feel like an expert in this small scope of questions. I’ve got a solid collection of stories and usually walk away with a couple of hypotheses on how to address recurring comments, concerns, and problems that users say they have. I can better empathize with users at this point, which generally helps get more sincere responses.

7. By the end of the seventh interview: I’m starting to hear the same responses over and over. Every once in a while, I’ll get an answer that’s counter to what others have said, but most of the time I won’t.

8. By the end of the eighth interview: Responses sound pretty similar to those I’ve already heard. By now, I’ve developed a solid list of key takeaways and a list of reactions to features and functionality. We’ve got a product to build or iterate on, so let’s get going!

For more information, I suggest checking out the template I start with when I conduct a new round of user interviews.

Originally published at https://www.userinterviewexchange.com on October 7, 2020.


Mike is the founder of The User Interview Exchange. He believes that user interviews are a superpower and is frustrated that too few use them effectively.