Attacking Weaknesses in the Research Process
In many ways, AI-driven tools are what researchers have been waiting for. Agents to find and schedule your participants? Amazing. Quick organization, tagging, and summarization of interview data? Fantastic.
Desk Research
Before venturing to start, it’s important to assemble some initial level of context. Compiling a range of sources is highly useful in establishing a foundation for understanding the problem space.
The summarization and distillation abilities of ChatGPT can be useful for establishing that foundational context. This step is not without its risks, however. Hallucinations can sneak in, affecting your ability to orient yourself around the problem. I have overcome this by creating a custom GPT with strict guardrails for summarizing and synthesizing factual information. With these parameters in place, the information provided can be reliably verified.
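To make the idea concrete, here is a minimal sketch of what such guardrails might look like, assuming the common system/user chat-message format. The specific rules and field names are illustrative, not the exact instructions behind the custom GPT described above.

```python
# Illustrative guardrails for a summarization assistant. The rules below are
# hypothetical examples of the kind of constraints that keep output verifiable.
GUARDRAILS = """\
You are a desk-research summarizer. Follow these rules strictly:
1. Summarize ONLY the source text provided by the user.
2. Never add facts, figures, names, or dates not present in the source.
3. Attach a direct quotation from the source after every claim.
4. If the source does not answer a question, reply "Not covered in source."
"""

def build_summary_request(source_text: str, question: str) -> list[dict]:
    """Compose a chat request whose answers can be checked against the source."""
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": f"Source:\n{source_text}\n\nQuestion: {question}"},
    ]
```

Because every claim must carry a quotation from the supplied source, spot-checking the output against the original material becomes a quick mechanical step rather than a research project of its own.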
Generative Research
In user research, there are two general areas of practice: generative research and validation. In this post, I am primarily focused on generative research; validation will be discussed in a later post. Generative research is the practice of learning about people, their circumstances, and a problem they face in order to develop a strategy for solving that problem. It is the process of defining the context in which a product will operate.
Take a research framework like Learn More Faster, a lightweight way to conduct research for product ideas from Michael Margolis and the team at Google Ventures. Each of its steps offers ways to leverage AI-powered strategies for lightening the load and accelerating the process, particularly for researchers who find themselves working alone to lead it.
- Agree on goals: It cannot be stressed enough that these must come from the stakeholders.
- Define your bullseye customer: While sources of customer data can be wide-ranging, a simple way to start this process is prompting an LLM to summarize the findings and identify behavioral trends that could characterize your bullseye customer.
- Recruit five bullseye customers: This is where I am perhaps most excited. If AI-enabled tools, especially agents, could locate, contact, and schedule potential customer interviews, it would be a major win.
- Choose your value props and three prototypes: Producing value propositions can be a challenging writing exercise for anyone. Take a tip from professional writers: work in groups. At The Onion, individual writers pitch their hilarious headlines, but they refine them in groups. In Hollywood, screenwriters work in pairs (or more) to tag-team on scripts. You can develop value propositions by working in partnership with a GPT. Consider using a multi-shot prompt that provides significant customer context from the steps above, then supplies examples of well-structured value propositions. Additionally, whether it’s Figma’s new tools or vibe coding platforms like Lovable, rapid prototyping is getting more rapid by the day. Different versions or variations of a given idea can be rendered in just a few minutes.
- Draft your interview guide: More paired writing, working with a GPT as a writing partner. Margolis gives some good examples of how to frame questions so they don’t lead the participant but provide open-ended opportunities to dig into real issues.
- Learn more from every interview: Interviews must be conducted by humans with humans. Never develop synthetic personas from existing customer data as anything more than a thought exercise. Do use AI tools that record, transcribe, and summarize each interaction with participants. At Nooma, we have created ‘recipes’ that run analysis on a given participant transcript and provide insights into the research questions nearly instantly. Final results? No. Faster preliminary findings? Yes.
- Plan a watch party: This is more a matter of coordination and participation than AI generation and analysis, but I’m interested to see where AI tools could be a ‘watcher’ down the road. For instance, could one of the tools used in the corporate hiring process, which assesses candidates against the stated skills of a job description, assess an interview participant’s account against the research questions while catching facial expressions, tone, and other cues? It still feels kind of gross. But assistance from a service like that could eventually be refined.
- Analysis: Breaking down what you hear into components is commonly referred to as tagging. Can AI help automate this process? Some platforms, like Dovetail, are already doing just that. Without such a platform, you could use a prompt chain through a GPT to accelerate the process. For instance, prompt the GPT to break down the transcript into topics, assemble passages into a table, identify common themes, then apply those themes in the table alongside the quotations. The GPT would need to be trained to do this effectively on the first 3–5 transcripts, but the time savings could ultimately be very significant.
- Synthesis: Typically, a research campaign will also include a final presentation to decision makers where the ‘so what?’ ideas are shared. To achieve this, the trove of interview and research data needs to be synthesized into concise, valuable points that can serve as a foundation for clear-headed decision making. I have written prompts that compile multiple transcripts to find trends; while these are still in development, my approach reduces hallucinations and erroneous compilations of unrelated data.
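The multi-shot value-proposition prompt from the "choose your value props" step above can be sketched as a simple prompt builder. The example problem/proposition pairs here are invented placeholders, not real customer data, and the layout is one plausible structure rather than a prescribed format.

```python
# Hypothetical worked examples that teach the model the target structure.
EXAMPLES = [
    ("Freelancers lose billable hours to invoicing.",
     "Get paid without the paperwork: invoices that write themselves."),
    ("New managers get no feedback between annual reviews.",
     "Know how your team is doing every week, not once a year."),
]

def build_value_prop_prompt(customer_context: str, problem: str) -> str:
    """Assemble a few-shot prompt: context, worked examples, then the new problem."""
    shots = "\n".join(
        f"Problem: {p}\nValue proposition: {v}" for p, v in EXAMPLES
    )
    return (
        f"Customer context:\n{customer_context}\n\n"
        f"Here are examples of well-structured value propositions:\n{shots}\n\n"
        f"Problem: {problem}\nValue proposition:"
    )
```

Ending the prompt mid-pattern ("Value proposition:") nudges the model to complete the pattern the examples establish, which is the core of the multi-shot approach.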
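The tagging prompt chain from the analysis step above can likewise be sketched as a sequence of prompts where each step’s output feeds the next. The prompt wording is illustrative, and `ask` stands in for whatever LLM call you use; here it is left as a parameter so the chain’s structure is runnable on its own.

```python
from typing import Callable

# One illustrative prompt per step of the chain described in the analysis step:
# topics -> table of passages -> themes -> themes applied alongside quotations.
CHAIN = [
    "Break this transcript into distinct topics:\n{previous}",
    "For each topic, assemble the relevant passages into a table:\n{previous}",
    "Identify the common themes across this table:\n{previous}",
    "Apply those themes in the table alongside the quotations:\n{previous}",
]

def run_tagging_chain(transcript: str, ask: Callable[[str], str]) -> str:
    """Feed the transcript through each prompt in turn, chaining the outputs."""
    result = transcript
    for template in CHAIN:
        result = ask(template.format(previous=result))
    return result
```

In practice you would pass a real model call as `ask` and review the first few runs closely, matching the suggestion above that the chain needs tuning on the first 3–5 transcripts before the savings compound.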
Where does the research get put to use?
Prompt Engineering: Clearer understanding of users, their context, and their challenges provides the foundation for developing a prompt that will deliver the intended outcome.
Context Engineering: Context engineering is emerging as a powerful discipline for building and improving agents. For an agent to maximize its capabilities efficiently, it must have not only a broad foundation of data but an effective contextual framework for understanding and implementing that data, which is then translated into the most efficient implementation to make the agent as accurate and nimble as possible. Before doing so, clear research provides the framing for how to inform that context.
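As a rough illustration of "informing the context," research findings might be framed as a structured context block an agent receives. The section names and inputs below are hypothetical, chosen to mirror the research artifacts discussed earlier.

```python
def build_agent_context(persona: str, jobs_to_be_done: list[str], constraints: list[str]) -> str:
    """Frame research findings as a structured context block for an agent."""
    sections = [
        ("Who the user is", persona),
        ("Jobs to be done", "\n".join(f"- {j}" for j in jobs_to_be_done)),
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)
```

The point is less the exact layout than the dependency: without grounded research, there is nothing trustworthy to put in these sections.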
Workshops: Customer problems can be articulated as Jobs to Be Done, personas, or other tools that align teams. This gives the team the best ability to curate the workshop context and ensure the best input for its participants.
So what’s the impact?
I have been running the analysis and synthesis steps on some internal Nooma projects. I’m finding time savings of about 50–60% with a strong level of quality throughout. We’re still working to refine the process and output quality.
More important in consulting is the impact this can have on accelerating client engagements and establishing a shared understanding of the client’s business. Initial diagnostic research of any kind is the cornerstone of any consulting endeavor. We come in and make it clear that we don’t know it all, no matter how many times we have worked in that client’s industry, in their market, or with their same functions. Each consulting engagement relies on establishing a shared set of facts in order to move forward. Any way we can accelerate that path to understanding is critical.