I’m going to tell you a secret: qualitative research intimidates me.
Quantitative studies, despite their statistical complexity, are comforting. The numbers are what they are, and every respondent’s opinions are accounted for. No single respondent’s data is arbitrarily more important than another’s. You can see the whole forest at once.
Qualitative research, specifically one-on-one interviews, is a different story. Conversations are free-flowing. You’ll speak to some talkative respondents and some taciturn ones. It’s too easy to succumb to recency bias or “charisma bias”—prioritizing comments from the most enthusiastic respondent. And your data is unstructured and voluminous. Being efficient and objective in your interview analysis and coming to robust insights is a huge challenge, and a lot of poor-quality research happens as a result.
I’m going to share how I solved this problem. Inspired by sociological and psychological research methods, I developed a process that helps me deal with these challenges and allows me to analyze interview data in a more objective and analytical way, using the researcher’s most beloved tool—the spreadsheet.
The process begins with the transcript. Whenever you conduct an interview, always ask for the respondent’s consent to record the interview. When the interview is complete, transcribe the recording directly into a spreadsheet (better yet, hire a transcription service to do this for you). Start with four columns: the name of the respondent, the line of the transcript, the speaker’s name, and the comment itself. Every interview transcript goes into a single spreadsheet, keeping all of the data structured and centralized.
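If you prefer to work programmatically, the same four-column structure can be sketched with pandas. Everything below—the respondent names, speakers, and comments—is invented for illustration; the only thing taken from the method itself is the column layout.

```python
import pandas as pd

# A minimal sketch of the transcript spreadsheet: one row per comment,
# with the respondent, the transcript line number, the speaker, and the
# comment itself. All interviews go into this one table.
transcripts = pd.DataFrame(
    [
        ("Respondent A", 1, "Interviewer", "What do you use the product for?"),
        ("Respondent A", 2, "Respondent A", "Mostly for weekly reporting."),
        ("Respondent B", 1, "Interviewer", "What do you use the product for?"),
        ("Respondent B", 2, "Respondent B", "I find the export feature confusing."),
    ],
    columns=["respondent", "line", "speaker", "comment"],
)

# Every new interview is appended to the same table, so the data stays
# structured and centralized.
print(transcripts.head())
```

Keeping one row per comment, rather than one cell per interview, is what makes the later coding and filtering steps possible.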
Once your transcripts are in the spreadsheet, create a set of codes that are aligned with the learning outcomes of the project. For example, if the project learning outcomes are to identify current use cases, customer pain points, and perceptions of the client brand, you might create three codes: “uses,” “pain points,” and “perception.” If you want to distinguish between positive and negative affect, you could expand the “perception” code to “perception pos” and “perception neg.” Try not to create more than 8-10 codes. Too few and you’ll miss nuances in the data; too many and you’ll make analysis a nightmare. Put your code list into a drop-down menu in the fifth column of the dataset.
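The drop-down menu can be approximated in pandas with a categorical column, a sketch using the example codes above. Any value typed outside the code list shows up as missing, which catches typos the way a spreadsheet drop-down would; the comments and the mistyped code are hypothetical.

```python
import pandas as pd

# The code list, aligned with the project's learning outcomes.
codes = ["uses", "pain points", "perception pos", "perception neg"]

comments = pd.DataFrame(
    {"comment": ["Mostly for weekly reporting.", "I find the export feature confusing."]}
)

# A categorical column restricts values to the code list, like a drop-down.
# "pain point" (singular) is not in the list, so it becomes NaN and is
# immediately visible as an uncoded row.
comments["code"] = pd.Categorical(["uses", "pain point"], categories=codes)

print(comments["code"].isna().sum())
```

Checking for missing values after each coding pass is a quick way to confirm every comment received a valid code.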
Once your data is structured, it’s time to start coding it. As soon as you can, read each transcript and classify each comment with the relevant code. After you complete the final interview, go back and read the entire dataset again to make sure you didn’t miss anything and to correct erroneous classifications. You may find you need to add to your code list to account for hypotheses you develop in the course of reading the transcripts.
Now you are ready to begin analyzing your data. Filter the dataset by each code so that you can see all of the comments within a code together and read what everyone had to say about that topic.
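The filtering step translates directly to a boolean filter on the coded table. This is a sketch with invented data; in practice you would filter the full transcript table built in the earlier steps.

```python
import pandas as pd

# A small coded dataset, invented for illustration.
data = pd.DataFrame(
    {
        "respondent": ["A", "B", "C"],
        "comment": [
            "Mostly for weekly reporting.",
            "I find the export feature confusing.",
            "We use it to track campaigns.",
        ],
        "code": ["uses", "pain points", "uses"],
    }
)

# Filter to one code to read everything said on that topic together.
uses = data[data["code"] == "uses"]
print(uses[["respondent", "comment"]])

# A quick count per code also shows how much evidence sits behind each theme.
print(data["code"].value_counts())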
There are a number of advantages to this approach over simply reading your notes or relying on your memory of the conversation:
- It’s more organized. Having all of your data in one structured place keeps it searchable and filterable.
- It’s more efficient. Not only does it speed up your analysis and reporting, it makes it easier to share the workload with teammates and increase project margins.
- Most of all, it more objective. Developing hypotheses before you begin the research, documenting your thought process, and mitigating recency and charisma bias makes your research more transparent, repeatable, and robust.
I hope you’ve found this method to be insightful and useful. There are number of commercial tools out there for interview analysis and text analytics, which are improving every day. However, in my opinion there is no substitute for having a solid understanding of your data. Bring a rigorous and analytical process to qualitative research can turn a project from one that evokes intimidation into one that inspires confidence.