GenAI for UX Research – a Proposal

Note: if you’d like to skip straight to the proposal, jump to the “Proposed Rules of Engagement” section at the bottom. Though this article is quite nice to read, if I do say so myself.

As GenAI floods every aspect of our lives, questions about how it can be used for UX research have naturally sprung up around the industry. While I don’t personally subscribe to the current AI crusade rampaging through the Tech countryside, I do believe there is a social contract to be drawn up between UX researchers and GenAI tools.

A Case Study

A few months ago, I was analyzing data from a focus group where I asked participants to fill out a questionnaire and then had a roundtable discussion. I wanted to use AI tools to analyze the data, so I separated the questionnaire and discussion data to work through them separately.

Analyzing the transcriptions from the discussions revealed decent, albeit surface-level, patterns in the data. Tinkering with several different prompts, breaking the transcriptions down by theme, and introducing supplemental context from other preliminary studies landed me on a concise set of bullet points outlining sentiments, pain points, and product requests that came up frequently in the discussion. In my typical, non-AI process, this would have been the first step of my analysis – a summary that directs me to specific topics in the data to dive into further. So while I would hardly describe the AI outputs as “insights”, they certainly provided a solid map of where to explore next in the data.

Analyzing the quantitative data from the questionnaire was an entirely different matter. After cleaning the data and formatting it into a spreadsheet, I tried to calculate an average rating from a Likert-scale question; the result was simply incorrect. After a few other prompts produced similar failures at basic arithmetic, I realized that GenAI would not be useful even as a first pass on quantitative data, the way it had been with the discussion transcripts. The failure to correctly calculate a simple average immediately validated my own suspicions about GenAI and its efficacy as a research tool.

However, GenAI has proven to be a valuable programming tool, with new tools and applications springing up every day. I decided to take a pass with Q, embedded in SageMaker Unified Studio – both AWS offerings aimed at helping developers write code. And rather than having Q analyze the quantitative data for me and hallucinate incorrect results, I arrived at a new workflow:

  1. Plan out the steps of the algorithm I wanted to program in Python
  2. Give the model an abstract prompt to produce code snippets in Python that performed each step of the algorithm (e.g. “How do I…”)
  3. Ask the model to add any further nuances or complexities based on what I was trying to do with my data
  4. Copy the code snippet and insert the specific variables and values I was using in my code

While this is certainly not the most efficient way to use GenAI as a programmer, it gave me a sense of control and awareness of my code. Ultimately, I knew that a) my code was doing what I wanted it to do and b) if something was wrong, I could fix it myself. Furthermore, the efficiency of having all of Python’s documentation at my fingertips was undeniable, and I managed to process the data in about a third of the time.
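As an illustration, the snippets this workflow produced looked something like the following – the column names and ratings here are hypothetical stand-ins, not my actual study data:

```python
from statistics import mean

# Hypothetical Likert-scale responses (1–5), keyed by questionnaire item.
# In the real workflow, these values came from the cleaned spreadsheet.
responses = {
    "ease_of_setup": [4, 5, 3, 4, 2],
    "documentation_quality": [2, 3, 2, 1, 3],
}

# The model produced the abstract pattern ("How do I average a list in
# Python?"); I then inserted my own variables and values (step 4 above).
averages = {item: round(mean(scores), 2) for item, scores in responses.items()}

for item, avg in averages.items():
    print(f"{item}: {avg}")
```

The point is less the code itself than the division of labor: the model supplies the generic pattern, and the researcher supplies – and can verify – the data and the arithmetic.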

Some Observations

Going through this exercise got my gears turning about what it means to be a UX researcher in a world that’s fundamentally changing its relationship with information and data.

First, there’s still a lot of “AI handholding” required. The initial data cleaning/massaging, oversight, fine-tuning, correction, and critical evaluation of outputs all came to the forefront of my thinking process while working with GenAI to analyze my data. And while it’s always necessary to interrogate your process and biases while analyzing data, this hyper-vigilance over what the AI was doing felt cumbersome and almost intrusive – like someone was sticking their nose into my work.

Second, ensuring the integrity of my analysis suddenly became a major concern. In the pre-GenAI era, I could trust that the conclusions I was arriving at were tethered to the data. I knew that, while I might miss something in the data, I wouldn’t hallucinate entire insights.

A Tangent into Computer Science

Working with GenAI and thinking about its role in the data analysis process reminded me of two different concepts from computer science. Together, these two concepts provide a framework for how AI can be a tool for research.

Amdahl’s Law

Amdahl’s law provides a way to think about the optimization and efficiency of computational processes, especially when the computation can be distributed across multiple CPUs. The law states that “the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used”. A key implication of Amdahl’s law that came to mind for me was that there is a limit to how much faster a task can be completed, due to the parts of the task that cannot be optimized.

A basic example: when baking a cake, you have two options for making the batter. The first option would be to 1) mix the dry ingredients, 2) cream the butter and sugar, and 3) combine all the ingredients together. OR you can parallelize steps 1 and 2 – Cream the butter and sugar in a stand mixer, and while that’s happening, mix the dry ingredients. Then combine it all together. In simple terms, Amdahl’s law says that, because steps 1 and 2 are parallelizable but 3 is not, you can only optimize 2/3 of the cake-making process.
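The law can also be stated numerically. Here is a minimal sketch, assuming (as the cake example implicitly does) that each step takes roughly equal time, so the parallelizable fraction is 2/3:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1 / ((1 - p) + p / s)

# Cake example: steps 1 and 2 (2/3 of the work) can run in parallel; step 3 cannot.
two_cooks = amdahl_speedup(p=2/3, s=2)      # parallelize across 2 "cooks" -> 1.5x
ceiling = amdahl_speedup(p=2/3, s=1e9)      # even an infinitely fast mixer caps near 3x

print(two_cooks, ceiling)
```

However fast the optimized portion gets, the un-optimizable step 3 puts a hard ceiling on the overall gain – here, 3x.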

To apply the basics of Amdahl’s law to the data analysis process, we can start with the assumption that some parts of the analysis process can be optimized by GenAI, while other parts are the sole responsibility of the researcher. The overall value GenAI tools can bring to research is a function of:

  • The unique efficiency gained from using GenAI, which would not exist if we didn’t use it
  • The cost and effort of using GenAI tools
  • The proportion of the research process we are willing to (and can) hand off to GenAI
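Those three factors can be combined into a back-of-the-envelope model. This is purely a toy calculation with made-up numbers – the function name and every figure below are hypothetical – but it makes the Amdahl-style trade-off concrete:

```python
def net_hours_saved(total_hours: float, handoff_fraction: float,
                    ai_speedup: float, overhead_hours: float) -> float:
    """Toy model: time saved on the portion handed off to GenAI, minus the
    cost of prompting, checking, and correcting it (all inputs hypothetical)."""
    ai_portion = total_hours * handoff_fraction
    saved = ai_portion - ai_portion / ai_speedup
    return saved - overhead_hours

# e.g. a 40-hour analysis, a third handed off at ~3x speed, 4 hours of oversight
print(net_hours_saved(40, 1/3, 3.0, 4.0))
```

If the overhead of “AI handholding” grows faster than the handed-off fraction, the net value can easily go negative – which matches the observations from the case study.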

These conclusions lead me to the second computer science concept: Problem Reduction.

Problem Reduction

Problem reduction hails from the world of algorithm design, and it’s a nifty way of reusing solutions to problems we’ve already solved. It’s the process of transforming one problem (or a part of it) into another type of problem that we already know how to solve.

Going back to our baking example, problem reduction allows us to use our knowledge of making cakes to make cupcakes (i.e. reduce the cupcake problem to the cake problem). There are some specific steps we need to take in order to do this: 1) correctly measure out the ingredients for cupcakes, rather than a single cake, 2) make the batter using the cake-making steps we already know, and 3) portion out the batter and bake it in a cupcake tin at the right temperature and for the right duration. In the reduction process, steps 1 and 3 are the transformations – the actions we need to take – that allow us to use prior knowledge of cake making to solve the new cupcake problem.
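In code form, a reduction is just a function that transforms its input, calls a solver we already have, and transforms the result back. A minimal sketch of the cupcake reduction (all function names and quantities are invented for illustration):

```python
def make_cake_batter(ingredients: dict) -> str:
    """The problem we already know how to solve."""
    return "batter from " + ", ".join(sorted(ingredients))

def make_cupcakes(n: int) -> str:
    """Reduce the cupcake problem to the cake problem."""
    # Step 1 (transform the input): measure for n cupcakes, not a single cake
    ingredients = {"flour": 10 * n, "sugar": 8 * n, "butter": 8 * n}
    # Step 2: reuse the known cake-making solution unchanged
    batter = make_cake_batter(ingredients)
    # Step 3 (transform the output): portion and bake in the cupcake tin
    return f"{n} cupcakes from {batter}"

print(make_cupcakes(12))
```

Notice that `make_cake_batter` is never modified – all the new work lives in the transformations on either side of it, which is exactly where the researcher’s effort sits when handing a sub-problem to a tool.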

In the research context, I personally do not believe that GenAI tools are mature enough to directly answer the complex questions we ask as UX researchers. The case study above shows that while AI can highlight initial patterns, it still struggles to extract deeper insights about the user motivations, needs, and opportunities that stakeholders want to know about. However, problem reduction gives us a framework to think about how we can (or want to) break down some of these meaty and complex questions about human users.

GenAI and UXR: Proposed Rules of Engagement

Putting together the observations from my case study, Amdahl’s Law, and problem reduction, I’ve arrived at a process to formalize when and how I might use GenAI in my own research process.

  1. Identify the portions of our process we are willing to/can hand off to GenAI. This might vary from researcher to researcher, but an intentional division of labor between the human researcher and AI can help us apply AI where we know it can perform well, and also more precisely characterize where AI is bringing unique efficiency.
  2. Reduce those portions of the research process by formulating high quality inputs and prompts for GenAI tools. While the research questions and data sets of human research are complex and messy, giving AI well-structured data and targeted prompts will optimally leverage it to answer specific, lower-level questions about the data.
  3. Interpret GenAI outputs in the context of our original research goals and questions. Like step 3 in our cupcake-making process, where we turn the cake solution back into a cupcake solution, we need to take our AI outputs and re-contextualize them in light of our original goals for the project, other insights from the data, and stakeholder needs.

This is the process I ended up following in the case study above. While stakeholders came to me with an overarching “what do customers struggle with?” question, I broke it down into several questions like “what is the typical user journey?” and “which step of the journey is ranked as the most challenging?” – questions I could prompt AI to help me answer about the data. Taking these answers and going back through the data myself helped me identify deeper causes and pain points that not only answered the original question by highlighting what customers struggle with, but pinpointed why they struggle and specific things our team could do to improve the user experience.


