Higher Ed Hot Takes
Issue #51

Discomfort Hacking in Usability Studies

Some eggs with funny faces drawn on them, expressing a wide range of emotions.

This is a hot take space, right?

I am sharing with you a take that goes against many common usability research approaches.

Back when I facilitated usability studies full-time, I always felt kind of uncomfortable with participant discomfort.

We would shepherd research participants into an observation room, recite a prepared script to them, get their consent, and then start them on a series of tasks. It was a sterile, validation-free environment. If a participant asked, “Am I doing this right?” my colleagues and I would respond with a counter-question like, “Does it seem to you like you’re doing this right?”

My internal empathy meter would be blaring. I wanted to rush back into the observation room and say something like, “Hey, your questions are reasonable, and as a study participant, you’re doing great; just keep going.” I wanted to relieve that tension for them.

But, the discomfort was meant to be part of the process. The logic behind it was straightforward: when a person uses a product for the first time, it feels a little uncomfortable. They don’t know how it all works. So, letting them sit in an uncomfortable situation in the observation room is closer to the experience of working their way through a product on their own for the first time than if they had a usability cheerleader behind them answering their questions or providing encouragement.

I totally get this. All qualitative research requires disciplined approaches so that our anticipated or even hoped-for outcomes don’t surface and become a to-do list for our participants. Our research subjects want to do well. They want to provide us with the information we’re looking for (my goodness - especially if they are students!). They are scanning our expressions and mannerisms looking for acceptance.

How many basketball passes did you count?

This is a link to a somewhat memorable TED Talk video. For the purposes of this Hot Take, I recommend watching it from 2 mins to about 4 mins in.

Did you watch it? Okay, spoilers starting in 5… 4… 3… 2…

For those allergic to TED Talks, here’s a quick summary of the two minutes you didn’t watch. Daniel Simons asks the audience to count the passes between players wearing white t-shirts in a video. After the video, he asks how many passes people counted, and then he asks, “Did you see the gorilla?”

Watching this TED Talk for the first time felt like seeing a magic trick. I am in the 50% of people who definitely did NOT see the gorilla the first time I watched it. I was an instant convert to Daniel Simons’ research. Whatever he had to say, I was buying it in full because I was so surprised by my own “inattentional blindness.”

And, more or less, that’s Simons’ core thesis: that we all suffer from inattentional blindness. In Simons’ view, we all have an intuition that our senses will detect and make conscious for us phenomena in our environment that are BIG and worthy of our notice. But, this intuition is false, and it can lull us into a sense of overconfidence.

The academic paper, “Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events,” by Daniel Simons and Christopher Chabris (1999), has over 4,000 citations in Google Scholar. They even generated a book from it: _The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us_ (2010). Daniel Kahneman described this study’s effect as demonstrating that we are “blind to the obvious, and that we also are blind to our blindness.” This notion of our lack of awareness of our own limitations is the foundation of behavioral economics.

Enter Teppo Felin.

Do you remember when John Kerry was accused of being a flip-flopper in the 2004 presidential election? I am about to make him look like the king of consistency.

A few years ago, a researcher named Teppo Felin published a paper criticizing the “Gorillas in our Midst” type of study. His rationale was simple. Before you watch the basketball video, you’re instructed to pay attention to a somewhat complicated task - the task of counting passes between players wearing white shirts.

If you watched the video without any instruction, he argues, you probably wouldn’t count the passes between players in white shirts. That would be a weird thing to do on your own. In fact, there are numerous obvious things to pay attention to. And the people who watch the video without Simons’ instructions always see the gorilla. Our intuition is right: as long as we’re not preoccupied with a complex task, our senses will pick up and bring to our attention the important stuff.

Teppo Felin calls this type of academic study “surprise hacking” and compares it to stage magic. I’ll be honest. Teppo Felin’s commentary also felt like a magic trick to me because it so utterly changed the way I saw so many research studies in behavioral economics. My senses weren’t dazzled, but my mind was. How could I have missed the fact that a diversion is not comparable to neutral viewing?

I didn’t mean to tell a long story so that I could draw similarities between myself and John Kerry. My purpose is to make an analogy: When we create uncomfortable situations for our usability participants, we are doing the equivalent of surprise hacking. Let’s call it “discomfort hacking.”

When Keeping It Weird Is Holding You Back

Imagine you are a usability study participant. You are instructed to use a product as though you were “out in the wild.” You talk out loud about your experiences using the product. And, you fumble around, making mistakes and discoveries in front of people.

The discomfort of diminished social feedback from the facilitator is supposed to serve two functions:

  • By keeping you off-kilter, it’s supposed to more closely resemble being alone and figuring a product out for yourself.
  • By not providing social feedback, we’re reducing the risk of contaminating the study by inadvertently signaling what our desired responses are.

I would argue that nothing about a usability study environment is akin to experimenting with and discovering how to use a product independently. A research study environment is innately artificial. And, participants are aware they are being observed.

Allowing a person to sit in discomfort doesn’t automatically and unquestionably protect the integrity of the study. First, observations are all interpreted, and findings are inferred. So, it is never possible to remove people’s preferences and interests from a study. Second, the discomfort is its own kind of bias.

Being watched by others makes us feel exposed. Uncomfortable social interactions compound our feelings of vulnerability. This can have multiple consequences for your study.

  1. It can make people feel self-conscious about expressing their difficulties.
  2. It can make people wish to appear smart or clever to counteract the feelings of vulnerability.
  3. It can prime people to prove their value to you.
  4. It can even cause people to confabulate a little. Looking for a way to release the pressure of their discomfort, they may project it onto the product or blame the product for their ill-at-ease feeling.

I love qualitative research and believe we can learn many things from qualitative research projects. So, please understand me when I say that qualitative research is a human process with a certain amount of bias built in. I’m not dismissing all qualitative research or suggesting that we can’t take reasonable steps to avoid “leading the witness.”

What I am trying to say is that discomfort hacking has its own vulnerabilities and drawbacks.

What we can do

  1. As in all facilitation efforts, we can build rapport with our participants so they feel more comfortable sharing their experiences and insights honestly and authentically.
  2. If you’re worried about transmitting your desire for certain outcomes, you can always, always, always hire a neutral facilitator.
  3. If you are questioning the validity of this process, you can always design secondary or complementary studies to triangulate the results. In fact, if you want to go meta, you could always run two different sessions - one with discomfort hacking and one without - and use your secondary methods to determine which way elicited better feedback in the usability sessions.

And, what does this have to do with Higher Education?

First, the student experience affects enrollment, retention, and student success. It can even affect student wellbeing. Improving the student experience is at the core of what we do. Validating our websites, online products, and services is a critical part of that work.

Second, many types of usability testing and user research can be done in-house. And institutions of higher education have every reason to run their own testing. It’s cheaper than hiring consultants, and it can be more efficient to work with team members who know and work on the products.

Third, our tech stacks and performance metrics in higher education are more layered and complicated than they used to be. Knowing what is intuitive and what is difficult can profoundly impact product adoption and product commitment. It can also affect our adherence to state and federal reporting requirements and our understanding of our academic competitors.

If you want to talk more about usability research methods and planning, that’s like my favorite thing! Hit me up anytime at inquiries@braverymedia.co.