It’s Complicated: Embracing the Beautiful Mess of Human Data
When companies bring me in to review or teach neuro-based research methods, I can usually predict where the conversation will go. We’ll talk about EEG. GSR. Maybe eye tracking. Often the Implicit Association Test (IAT). There’s usually excitement about the potential of these tools. And rightly so: they can provide valuable insights when used well.
But there's one message I find myself repeating in every session, like a drum I can’t stop beating: humans are complicated.
We all know that, of course. We experience it every day: in ourselves, in our consumers, in our colleagues. But somehow, when it comes to studying human behavior and emotion in a business context (and very often in the media), that complexity gets quietly swept aside in favor of clean answers and tidy dashboards.
In the rush for insights, we forget the messiness of being human.
The Emotional Paradox
I often reference psychologist and neuroscientist Lisa Feldman Barrett, who describes what she calls the emotional paradox:
"We experience emotions as if they are obvious and automatic. But scientifically, they are anything but."
Her research, and that of many others, shows that emotions aren’t universal reactions that can be clearly mapped to specific brain regions or facial expressions. They are constructed, context-dependent, and deeply shaped by individual experience.
Yet in industry, we continue to treat tools like EEG or GSR as if they’re emotion detectors, like they can tell us if a consumer “liked” a product just by picking up a few signals.
Spoiler: they can’t. At least, not without a lot of nuance, context, and careful design.
My own copy of How Emotions Are Made (https://lisafeldmanbarrett.com/books/how-emotions-are-made/).
The Tool Is Not the Answer
Tools like EEG, GSR/HRV, eye tracking, and IAT are often marketed (or misinterpreted) as silver bullets.
Want to know if someone liked your ad? Strap on an EEG headset.
Want to test product appeal? Run an IAT.
Want to measure engagement? Check their eye gaze.
But here’s the catch: the tool doesn’t give you an answer. It gives you data. And that data is only as meaningful as the research question behind it.
More often than not, I see teams using tools without a clearly defined, testable hypothesis. Instead, the goal is simply to “see what the tool says.” This leads to overinterpretation, disappointment, or, worse, insights that steer decisions in the wrong direction.
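What does “clearly defined” look like in practice? One option is simply to write the plan down before any data are collected, even in a few lines of code. The sketch below is a hypothetical example (every field name and number is invented for illustration), not a template I prescribe:

```python
# A hypothetical, pre-specified analysis plan written down before any recording starts.
# All values here are invented; the point is the habit, not the exact format.
analysis_plan = {
    "hypothesis": "Viewers show larger skin-conductance responses to ad A than to ad B",
    "measure": "baseline-corrected SCR amplitude in a pre-defined window after ad onset",
    "comparison": "within-participant: ad A vs. ad B",
    "planned_test": "paired t-test (Wilcoxon signed-rank if clearly non-normal)",
    "sample_size": 40,
    "locked_before_data_collection": True,
}

print(analysis_plan["hypothesis"])
```

If a team can’t fill in something like this before the study, the tool isn’t going to fill it in afterward.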
And let’s not forget reverse inference: assuming that if someone shows X brain signal, they must be feeling Y. It’s tempting, but dangerous. As I often remind teams, you can’t work backward from brain activity and confidently claim to know what someone felt. That’s like hearing a car engine and guessing the driver’s destination.
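To see why that reasoning is risky, here is a toy Bayes’ rule calculation. The numbers are invented purely for illustration: even a signal that appears in most moments when people feel a given emotion can leave you far from certain, because the same signal also appears for plenty of other reasons.

```python
# Toy reverse-inference check with made-up numbers (illustration only, not real rates).
# Question: if we observe signal X, how confident can we be that the viewer felt emotion Y?

p_signal_given_emotion = 0.80   # signal X appears in 80% of moments when people feel Y
p_signal_given_other   = 0.40   # ...but also in 40% of moments when they feel something else
p_emotion              = 0.20   # prior: emotion Y occurs in about 20% of moments in this context

# Total probability of observing signal X at all
p_signal = (p_signal_given_emotion * p_emotion
            + p_signal_given_other * (1 - p_emotion))

# Bayes' rule: P(emotion Y | signal X)
p_emotion_given_signal = p_signal_given_emotion * p_emotion / p_signal
print(f"P(emotion | signal) = {p_emotion_given_signal:.2f}")   # -> 0.33
```

In this made-up example, observing the signal only moves you from a 20% prior to roughly a one-in-three chance that the emotion was actually there. Hardly the certainty a dashboard headline implies.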
Not All Tools Are Created Equal
Another point of confusion? Not all versions of these tools are the same.
EEG headsets vary wildly in quality, resolution, and the algorithms used to “translate” signals into emotion or cognition metrics.
IATs differ in structure, timing, and word pairings, so not all results are comparable, even if they seem similar on the surface.
GSR and HRV depend heavily on context (temperature, movement, even breathing).
Eye tracking can tell you where someone looked, not what they thought.
Without a basic understanding of the underlying science, and without knowing the strengths and limits of each tool, it’s easy to draw conclusions that sound exciting but don’t actually hold up.
Not all EEG systems are created equal. Images from Pexels and the R-Net by Brain Vision (https://brainvision.com/products/rnet-eeg-net/).
Individual Differences Don’t Go Away—They Multiply
There’s also a misconception that these neuroscience tools offer more objective or consistent data than traditional surveys or interviews. But that’s not necessarily true.
In fact, neuro tools can magnify individual variability, not reduce it.
Humans aren’t just messy; they’re idiosyncratic. Two people can show very different physiological or neural responses to the same stimulus, even if they say they felt the same way. These differences are shaped by everything from past experiences to mood, culture, context, and baseline physiology.
In traditional research, we often average over these individual quirks. But in physiological or neural data, those quirks often get amplified, especially when sample sizes are small or data is interpreted without rigorous normalization.
So while it may feel like a brainwave or skin conductance measure is somehow more “true” than a self-report, it’s not inherently more valid. If anything, these methods require more thoughtful design, more statistical care, and more humility in interpretation.
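As a minimal sketch of what that statistical care can look like, here is one common normalization step: rescaling each participant’s responses against their own mean and spread before averaging across people. The data and column names below are hypothetical, and this is one option among many, not a universal fix:

```python
import pandas as pd

# Hypothetical skin-conductance responses (SCR): one row per participant per ad.
# Note how participant p2's raw values sit on a completely different scale.
df = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p3", "p3"],
    "stimulus":    ["ad_A", "ad_B", "ad_A", "ad_B", "ad_A", "ad_B"],
    "scr":         [0.12,   0.30,   2.10,   2.40,   0.55,   0.50],
})

# Averaging raw values lets the most reactive participant dominate the group mean.
raw_means = df.groupby("stimulus")["scr"].mean()

# Z-scoring within each participant puts everyone on the same scale first.
df["scr_z"] = df.groupby("participant")["scr"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)
normalized_means = df.groupby("stimulus")["scr_z"].mean()

print("Raw means:\n", raw_means)
print("Within-participant z-scored means:\n", normalized_means)
```

The point isn’t that the normalized number is somehow “the truth.” It’s that the choice of whether and how to normalize changes the story, and that choice should be made deliberately, before the data come in.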
From Monty Python’s Life of Brian (1979). https://m.imdb.com/title/tt0079470/
Lean Into the Complexity
It might seem like all this complexity is a problem. That the messiness of human variability, emotional ambiguity, and noisy neuro data makes research harder. And it does. But it also makes it more interesting.
When we stop trying to force clean answers and instead design studies that embrace variability, we open the door to deeper, richer insights. The average response can tell you what’s typical. But the margins are where the most exciting questions live: the outliers, the divergences, the contradictions.
Instead of smoothing out the differences, what if we asked why people responded differently? What does it mean that one group showed greater physiological arousal while another didn’t? What can we learn from the quiet signals that don’t quite fit?
This is where neuro and behavioral science tools really shine. Not as emotion thermometers, but as exploration tools. Tools that help you notice patterns you wouldn’t otherwise see, that raise new hypotheses, and that help you understand people not just as a collective, but as individuals.
There’s beauty in that. And real business value, too.
So What Can You Do?
This is exactly why I’m brought in.
In my sessions, I don’t just explain how these tools work. I help teams build the critical thinking skills needed to ask the right questions, choose the right tools, and interpret results responsibly. I stress the importance of grounding research in theory (yes, even the messy, unfinished kind) and not skipping past the “why” just to get to the “what.”
Because the truth is, understanding people is hard. And no tool, however sleek or high-tech, will change that. But with the right mindset, the right design, and a bit of nerdy know-how, we can get better at it.
If your team is using (or thinking of using) neuroscience tools in your research, I’d love to help. I offer workshops, educational sessions, and research design consultations that bridge the gap between science and strategy, so you can unlock insights without losing sight of the complexity of being human.
Let’s make consumer research smarter, not just sexier.