DESIGN RESEARCHER

The misguided rivalry between qualitative and quantitative research

12/29/2022

Research is a critical part of developing products and services. It’s half of R&D, but you wouldn’t know it by looking at staffing numbers: corporations usually pay far more people to build and do things than to learn things. Interestingly, the researchers you see in movies usually combine and exaggerate these roles: scientists use their hyper-specialized knowledge to build their advanced contraptions or schemes on their own. Actual researchers in corporations usually don’t write code, design interfaces, create prototypes, or convince customers to buy something. Yet taking away research removes a deep understanding of customers, the ability to evaluate the size and impact of an opportunity, and the chance to gauge the potential outcome of ideas and decisions before they are made. It also makes it hard to know how things are going, to identify when there’s a problem, and to figure out how to fix it.

Researchers are not all the same
All of those things are not enabled by a single science-y skillset. There are as many types of researchers as there are designers or engineers. A computer engineer and a chemical engineer have overlapping job titles, and they may even have some of the same training and similar perspectives on systems and problem solving, but their work looks really different. In the same way, market researchers, design researchers, and data scientists are not the same, even though they’re often grouped into the same category. If you look at job listings, UX researcher roles are often split into quantitative and qualitative positions.

I want to emphasize this split, because most people don’t fully appreciate the differences between qualitative and quantitative research. Large tech companies usually have both, and they are often on different teams or serving different parts of the organization. People who have deep training in one type of research usually are not as experienced with the other type, unless they came from a program that melded the two. This divergence is probably the main cause of the strange feud between the two worlds.

A silly, yet unfortunately real conflict
Let me give you a glimpse into this nerdy rivalry, from the perspective of someone whose PhD program trained him exclusively in quantitative research. I was a numbers guy. What numbers did I crunch? Anything from surveys to reaction-time data, to the tiny electrical signals that came out of EEG equipment strapped onto research participants’ noggins (excuse the scientific terminology) as an indicator that their brains were doing something noteworthy. Social sciences like psychology sit in an interesting place, melding the strict scientific practices of observation, experimentation, and measurement with “wishy-washy” concepts like emotion and motivation. You get to see how people’s behavior changes with seemingly minor changes to their environment, and to quantify really abstract concepts. This training came with a feeling of intellectual superiority, specifically over people who were asking similar questions but with a radically different approach that didn’t include math: qualitative researchers.

I was aware of qualitative research as a concept, but I did not have deep knowledge of it, and I summarily dismissed it as inferior. How was this scientific? Sure, you ask good questions, but you’re letting people answer in their own words. There are so many problems with that. People might lie or not know the answer; there’s no way to standardize the data they give you, since it’s unstructured; you can’t run statistical analyses on it; and if someone says something interesting, that’s a single data point, basically an anecdote. Plus, your sample sizes are tiny; if you ran the same study again, you would get different data.

I will pause here so I don’t give you the wrong idea that I’m anti-qualitative research. Most (90-95%) of my day-to-day work these days is qualitative research, with only the occasional need to draw from my quant knowledge. What changed? I wish I could tell you that I came to my senses after some deep personal reflection on my own. Instead, I found myself in a UX research role after my doctorate, where most of the research I did was qualitative. This wasn’t a surprise; of course I studied and prepared for the role, and I was relatively comfortable doing the work. The principles were similar, though the methods were different. At the time, I was more thrown off by the night-and-day differences between the academic and corporate worlds. But in the back of my mind, the cogs were turning, reconciling the problems I had with qualitative research with the reality that I was doing it all the time and seeing the benefits. I realized I was thinking about quantitative vs. qualitative methods the wrong way. I was asking too broad a question:

“Which is more reliable for doing research, qualitative or quantitative data?”

You might have spotted one of my biases: I was asking about the best tool for my research at the time, but making a judgment call about the value of the tools for research in general. The questions I was trying to answer and the tools I was using in graduate school REQUIRED numbers. You can’t, after all, ask people to tell you how many microvolts of electricity were coming from a specific area of their skull. That is time no one is getting back. I ignored, and learned to work around, the weaknesses of a purely quantitative toolkit.

Where research falls short: a quantitative example
Pretend you want to figure out which of two images you should use for your website, so you ask the question: which image, A or B, evokes a stronger gut reaction from people and better captures their attention? Assuming you have some fancy equipment, you then narrow your question to: how does looking at image A vs. image B affect a person’s galvanic skin response (how much they start sweating, which tells you how strong their reaction is) and time spent focusing on each image (measured with an eye tracker, which tells you where their attention is)? After running the study, you might get a result like “the more emotionally charged image A tended to capture people’s attention to a greater extent than B, and created a bigger physiological reaction in them.”
Gripping stuff.

Now suppose this study also had a task at the end, which was to choose which of the two images they preferred. Imagine it turned out that the image that caused a less intense response and that people paid less attention to (B), was the one people actually preferred. Now what? The stats are clear, the results are not in question. But what story do you tell when something doesn’t add up? When you’re doing only quantitative research, the numbers rarely tell the whole story, and their interpretation requires the analyst to make some assumptions.

Now, let’s introduce one more question at the end, an open-ended question after people made their choice. “Why did you choose that image?”

It doesn’t matter what the answer is. Maybe people had fond associations with the more boring picture. Maybe the more exciting one reminded them of something else they didn’t care for. Maybe the images didn’t matter and they chose randomly. The benefit of qualitative research is that you can start answering "why" questions.

You should have noticed that by adding qualitative methods, the study went from “We’re sure this happened, but we have no idea why” to “this happened, and here are some possibilities for next steps.”

Turning the tables: Qualitative research only
I hope you didn’t take away that qualitative research is better overall than quantitative research. The criticisms I mentioned earlier against qualitative research were all valid, and I stand by them today. Let’s see what would have happened if you reversed the order of this example and began with qualitative research. You start with the same research questions and turn them into interview questions that you give to a handful of people as they look at your two images. Which image gives you a stronger reaction? Which one did you find yourself looking at more? Which one do you prefer? You get a mixed response on most of them, roughly 60/40 in either direction, and let’s say you end up with roughly the same answers as the quantitative research gave you: “6 out of 10 people told us they preferred image B, for various reasons. Some had fond associations with it, some disliked image A, and a few had no particular reason. Roughly the same number of participants also reported that image A gave them a stronger response (7/10 participants) and that they paid more attention to it (6/10 participants).”

If you read that and wondered “is 6 out of 10 participants statistically significant?”, I hope you can guess that the answer is an emphatic “no.” In fact, it’s so close to a 50/50 split that you might even struggle to give any recommendations. But honestly, even if 10 out of 10 participants told you they preferred one image over the other, a single 10-participant study will not convince your product partners to make that prototype live tomorrow.
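If you want to see the arithmetic behind that intuition, here is a minimal sketch (not from the original study; the function name is mine) that runs an exact two-sided binomial test against a 50/50 split:

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test p-value against a fair split.

    With p = 0.5 the distribution is symmetric, so the two-sided
    p-value is just double the larger tail's probability.
    """
    # P(X >= max(k, n - k)): the more extreme tail, counted exactly
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) * p**n
    return min(1.0, 2 * tail)

# 6 of 10 participants preferred image B:
print(round(binom_two_sided_p(6, 10), 3))  # 0.754 — nowhere near significance
```

Even a 10-of-10 result would only reach p ≈ 0.002, which is why a single small study, however lopsided, is evidence of direction rather than proof.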

Now, adding the quantitative research back in afterwards, what happens? You went from “we have some nuanced findings and a direction that might be promising” to “we are confident that this is the direction to go, based on a deep understanding of the customer.”

A pointless comparison
Clearly, each method has its strengths and weaknesses, which I’ll lay out in detail below. But trying to measure them against each other is a colossal waste of time because the two tools have different purposes. They are used to answer different questions, for different reasons, and each is the best tool to do what they do. It’s not apples and oranges, it’s apples and hammers. One will win if you’re hungry and the other will win if you have a nail sticking out of a floorboard you need to flatten. In a complex problem space, either tool used alone will end up lacking. But for the sake of clarity, let’s talk about where apples and hammers individually shine and stumble.

Strengths and weaknesses
Quantitative research answers the “what” and “how much” types of questions, measuring the size of things (quantities, trends, relationships, etc.) and using that to make inferences about larger groups and things that you didn’t necessarily measure. That means even if your survey only has a few hundred or thousand people, you can have reasonable confidence that your findings apply to a much larger group of people if you do it right. Here are a few types of questions that quant research is good at answering:
  • Measuring size or scale: How big? How strong? What duration?
  • Identifying similarities and differences: Are these groups the same? Which one is bigger/smaller?
  • Identifying trends and relationships: What predicts this phenomenon? What pattern does this follow?

The downside is that quantitative data assumes you know what each variable, relationship, and trend actually means. Numbers don’t tell you why they are the way they are, they just…exist. With only numbers, you rely on logic and reasonable interpretation to understand your results, leaving two big vulnerabilities:
  1. The numbers don’t make sense, and you have no idea why. You’re stuck.
  2. The numbers make sense to you intuitively, but your reasoning is wrong, so you make a wrong decision. Also you have no idea why, so you’re stuck.

Qualitative research answers the “why” and “how” questions, helping you understand phenomena, patterns, and processes that can be difficult to measure or quantify; sometimes a 5-point scale isn’t enough to capture the way you’re feeling, or the relationship you have with another person, for example. It also leaves room for learning about things you weren’t aware that you didn’t know. When people get to answer in their own words, they often give answers and ideas you would never have thought to put as a multiple choice question. Here are a few types of questions that qual methods are good at answering:
  • Identifying patterns and phenomena: What themes and ideas are there?
  • Finding meaning and understanding relationships and processes: Why does it happen this way?
  • Understanding motivations and causes: Why do people think and act the way they do?

The insidious problem with qualitative data is that it seems really easy to use: you’re probably human and have been interpreting words most of your life, without ever needing software or analytical frameworks. The problems with that are:
  1. The way we naturally process information is full of biases, so without some good practices you can interpret data incorrectly, even with the best of intentions, and never realize it. You’re stuck.
  2. You have small sample sizes, so you don’t know what applies to a larger population. You’re stuck.

Teamwork! Or at least open dialogue
If you’ve been reading along this whole time, you’ve probably already dissected my overall message (like a qualitative researcher would): most things worth doing require both qual and quant research. Really important decisions at a large company probably won’t be decided by single data points. As the example showed, even after the qualitative research produced results, you still need quantitative research to test, size, or validate them. Qualitative and quantitative researchers may work independently, but they need to communicate with each other and strategize about who is doing what, when, and why; otherwise it will, at best, take longer than necessary to deliver the research that informs those decisions.

Can qualitative and quantitative teams operate without ever talking? Sort of. As someone who has worked in both lanes, each method has ways to make up for its weaknesses in the absence of the other. Qualitative research can be expanded to a larger scale or done repetitively to increase its validity and make inferences more reliable. Quant research can be repeated with modified variables and updated hypotheses to triangulate the “truth.” But these are inefficient band-aids where an easy solution exists: work with people who are strong where you are weak. Bringing back the very stupid apple and hammer metaphor, if you have an apple, don’t try to use it to drive the nail back in; borrow a hammer. If you have a hammer and you’re hungry… I’ll let you fill in the blanks there.

Without covering for each other’s weaknesses, qualitative research tends to sit on a shelf without validation or sizing, or ends up driving investments that go to waste. Quantitative research finds opportunities and problems and sets priorities, but the problems often don’t get solved, and the opportunities are squandered, because they aren’t understood.

When to use each method
There isn’t really a universal order of operations; it depends on what you’re trying to do and your specific context. Here are a few scenarios, and how qual and quant researchers might work together:

  1. You know there’s a problem, but don’t know why. Maybe sales are lower than expected. Site traffic suddenly dropped this month. Social media mentions took a turn toward the negative. You probably already have numbers telling you there’s a problem, so quantitative researchers are likely on the case already. If not, they’re usually a good resource to pinpoint the problem area and its size or impact. But at this point, there might simply not be the right type of numbers that give you any idea of why things are happening. You need qualitative research to understand potential causes of the problem, after which you can try to create and test solutions (quantitatively) to see if they address the issue.
  2. You want to enter a new problem space. You want to create solutions for a new group of customers you don’t know anything about, or solve a different problem for your existing customers. There are two ways you could approach this. If you have an established product with one group of customers, your marketing department might be able to quickly test its viability with a different group using quantitative research, as long as you have identified exactly who they are (that’s a big IF!). If that fails, or if you are starting with a blank page, your best bet is some type of empathy research (qualitative) to understand who these people are and what problems they face, and to start figuring out what types of solutions might solve their problem.
  3. You think you understand the problem or have a solution, but you’re not sure. Let’s say you just did some qualitative research, and you have a small set of possible customer problems, or ideas that were proposed to solve those problems. At this point, you might expect me to say that you need large-scale quantitative research to validate them. You could, but I wouldn’t. The quant research could give you an idea of which way the wind is blowing, but at this point your data and ideas are so vague that it might not be that helpful, and small changes in wording (the nuances of which you don’t understand yet) can change the results by a lot. I would actually suggest finding a way to experiment, or “test and learn.” Come up with some hypotheses based on the data you have, use minimal resources to prototype, and test with qualitative research in a limited setting. It’s usually faster and more cost-effective than a large-scale quantitative study, and leaves you in a better place too. When you reach a point that the problems/opportunities are better defined and need to be sized or validated, that’s a good time to bring the numbers.
  4. You’re preparing to send a prototype to your engineers to build. Let’s assume you have a design and qualitative research team who have been working together to understand a customer problem and build + test concepts to a point where you’re confident it’s useful, usable, and delightful. It’s not real, though, just a mockup with no code and no substance. At this point, you really want some quantitative research to make sure the solution a) solves the problem better than whatever existed before, b) has a reasonable chance in the market, and c) is worth the cost of building. After building it, you want one final round of quantitative research before you open it up to the public. All the tests up until now were with a fake product, fake users, and an artificial environment. You want a small-scale realistic scenario where you can see how it actually performs without all the risk of a full launch. This is usually a limited release, a beta, an A/B test, etc. Although quantitative data is important here, there’s no harm in adding some qualitative questions too, if you can fit them in! Mixed methods can be very useful and efficient, just don’t overdo it and exhaust your customers with endless questions.

Tl;dr

If you are a qualitative or quantitative researcher, don't think of the "other side" as competition; they are your allies who can make your work even more valuable. Meet them, talk to them, and figure out which problems you share and struggle with!

If you work with researchers: Make sure you get all of them in the room! They each have different perspectives and tools to help you make data-driven decisions.