🔬 The Observation That Prompted This Rant
- We measure satisfaction, intention to use, overall liking — and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable
🧠 How Experience Actually Works — A Quick Neuroscience Detour
- Experience isn't one thing — it moves through layers: sensation → perception → judgment
- Sensation is the raw signal hitting your sense organs; perception is your brain integrating that signal into something meaningful; judgment is the conscious evaluation you report at the end
- Most UX research only captures the judgment — the tip of the iceberg — and skips everything underneath it
- Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change
🍷 The Sensory Evaluation Parallel
- My master's specialisation was in sensory evaluation: how do you recover what someone actually sensed from their overall impression?
- The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking
- We can and should do the same with software
📐 Hassenzahl's Model — The Framework I Keep Coming Back To
- Three levels: intended qualities (what the designer aims to produce) → perceived qualities (what the user actually experiences) → final judgment (satisfaction, purchase intent, etc.)
- The gap between level one and level two is where most products fail — you can intend a premium feel without ever checking whether users actually perceive it as premium
- Decompose until you can't decompose further: "premium" means nothing to an engineer — "high-pitched sound perceived as alarming rather than reassuring" does
💡 What I'm Actually Asking UX Researchers to Do
- When evaluating a product, go beyond overall satisfaction — ask about the attributes that compose the experience: reliability, accuracy, responsiveness, tone, whatever is relevant to your context
- Use rating scales so you can track change over time and compare across studies — even imperfect numbers beat no numbers
- If you don't have time or budget to do this with users, do it internally — train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language
⚠️ The Cost of Not Doing This
- You end up doing redundant research rounds because you never captured the full picture the first time
- Your feedback loop stays shallow — one round of iteration, and then the team doesn't know what to do next
- You're shooting in the dark, and the product improves slowly or not at all