Episodes

  • Using An App To Get Off Your Phone, And The Research That Says AI Is Affecting Our Brain
    Apr 22 2026

    I'd love to hear from you. Get in touch!

    📱 Bond — The Social Media App That Wants To Cure Your Doom-Scrolling — TechCrunch

    • Bond launched this week as a social media platform explicitly designed to get you off your phone — no infinite feed, no algorithmic scroll, just a spatial view of what your friends are up to and activity recommendations based on your interests
    • The core bet: remove the vertical feed and you remove the addictive pattern — the app gives you ideas for real-world activities, you go live them, you get off the app
    • I haven't tested it, but I have a lot of thoughts
    • First: using an app to get off your phone is paradoxical — your phone is still your phone, and everything else addictive is still on it
    • Second: removing the feed doesn't remove social comparison — seeing what friends are up to, peeking at their memories, knowing they got a promotion — that's still there, and social comparison is one of the more reliably damaging patterns in existing platforms
    • Third — and this one I can't let go of: end-to-end encryption is described as "a priority for us in the near future after launch" — meaning that right now the team can see your data, and storing data securely is not the same as keeping it private
    • The monetisation path is also unresolved — licensing user data to AI companies and product recommendations with merchant commissions are both on the table
    • My honest read: the intent seems genuine, but the medium is still a phone, the social comparison patterns are still present, and the privacy foundations aren't there yet

    🧠 Concerns Grow That AI Is Damaging Users' Cognitive Abilities — Futurism

    • MIT researchers split 54 participants into three groups — ChatGPT, Google search, and own knowledge only — and measured brain activity via EEG during essay writing tasks
    • Students using ChatGPT consistently underperformed at neural, linguistic, and behavioural levels — and got lazier with each consecutive essay
    • Brain activation in areas corresponding to creativity and information processing was significantly lower — and participants struggled to recall or quote their own AI-written essays
    • This connects directly to cognitive surrender — the University of Pennsylvania finding I covered in an earlier episode — where people predominantly chose to use the chatbot even when they didn't need to
    • My take: there are always trade-offs, and if you don't know them, you're still making them — taking the car everywhere instead of walking has a physical cost; outsourcing your thinking has a cognitive cost
    • The question isn't whether to use AI — it's which tasks should stay yours: framing a research problem, deciding what questions to ask, writing the first draft of your own ideas — these are the muscles that atrophy fastest
    • The concept from UX that keeps coming to mind: learned helplessness — users who stop trying because they've been trained to feel that it can't be done without help; classically the helplessness attaches to the tool, but here it attaches to the users themselves
    • The constant I'd advocate for regardless of how AI evolves: keep thinking, keep practising critical judgment, keep owning the reasoning — the human brain is shaped to do this, and it needs the exercise

    Support the show

    Help me improve the show HERE

    39 min
  • How To Stay Sane With AI, Claude Design Launches
    Apr 21 2026

    I'd love to hear from you. Get in touch!

    🧠 How To Approach AI And Stay Sane — UX Collective

    • Julia Kockbeck, a QA engineer, frames the AI adoption question better than most: it's not "use it or don't" — it's knowing when, why, and what you're trading off
    • The trifecta that never goes away: speed, quality, and scope — if you keep scope constant and push for speed, quality takes the hit, whether you're aware of it or not
    • Two failure modes to avoid: overuse without critical thinking (copy-pasting AI output, blindly trusting agents) and AI reservedness (not using it at all and being left behind by people who do)
    • We still don't have solid heuristics for when to use AI — we're building them in real time, and most people are doing it unconsciously
    • What I think is uniquely human in UX research: moderating interviews, framing a problem with a stakeholder, deciding what questions to ask and why — AI can draft, but it cannot think before the draft
    • The measure that actually matters: is the output at least the same? And has the spread of your activity shifted from repetitive tasks toward more strategic thinking? If yes, that's already a win
    • My approach: AI is my collaborator, not my substitute — I use it to generate a quick script or research plan, then I review, complete, and own it

    🎨 Anthropic Launches Claude Design — TechCrunch

    • Claude Design lets you create prototypes, slide decks, and design systems from prompts — Figma's stock dropped on the news
    • I haven't used it in depth yet, but my honest first take: it's genuinely useful for people who aren't designers but need a starting point — researchers, PMs, anyone who needs something that looks considered without hiring a designer
    • That said, the pattern I keep running into with prompt-only design tools: generating something looks amazing in minutes, but making one small change is a nightmare
    • What I'm really watching for: can you tweak it manually after generation? Can you apply a design system and have it hold? Can you export to PPT or Figma and continue from there?
    • It's not competing with Figma in the way the headlines suggest — Figma is a collaboration and precision tool, Claude Design appears to be a generation tool — different jobs, different users
    • The tools I want to exist: AI generation plus drag-and-drop editing in the same product — we're still waiting for that

    Support the show

    Help me improve the show HERE

    20 min
  • Human Judgement, Zero-Click Future, and Chatbot Manipulation
    Apr 10 2026

    I'd love to hear from you. Get in touch!

    The Case For Human Judgment In The Agent Improvement Loop — LangChain

    • LangChain's argument: if agents are only trained on documented knowledge, their performance will plateau — the differentiator is capturing the tacit expertise that lives in people's heads
    • Tacit knowledge is the problem — a lot of what makes great teams great is never written down, and even if you tried to write it all down, you'd still miss the translation gap between what someone thinks and what they can express
    • The recommendation: design feedback loops that encode human judgment over time — humans help design and calibrate automated evaluators rather than manually reviewing everything forever
    • Once you've done something well manually and it's repeatable and standardised, automate the evaluation — but a human still needs to define what "good" looks like first (see the calibration sketch after this list)
    • My take as a UX researcher: you bring thinking to the table — every time there's a judgment call, that's where you come in — boring, repetitive, and non-critical tasks are what you delegate
    • New AI-specific criteria to prioritise in your research: trust, transparency, verifiability, and controllability — these deserve more weight than they would in a standard usability study
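
    To make that loop concrete, here's a minimal sketch of the calibration gate. The `human_judge` and `auto_eval` callables and the 85% agreement bar are illustrative assumptions of mine, not LangChain's API.

    ```python
    # Trust an automated evaluator only once it agrees with human
    # judgment on a calibration set.

    def agreement_rate(human_labels, auto_labels):
        """Fraction of items where the automated evaluator matches the human."""
        matches = sum(h == a for h, a in zip(human_labels, auto_labels))
        return matches / len(human_labels)

    def calibrate(items, human_judge, auto_eval, threshold=0.85):
        """A human defines what counts as good first; automate only past the bar."""
        human_labels = [human_judge(item) for item in items]  # tacit expertise, encoded
        auto_labels = [auto_eval(item) for item in items]
        if agreement_rate(human_labels, auto_labels) >= threshold:
            return auto_eval   # repeatable and standardised: safe to automate
        return human_judge     # keep the human in the loop, refine auto_eval
    ```

    The threshold itself is a judgment call, which is rather the point.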

    Sierra's CEO Says The Era of Clicking Buttons Is Over — TechCrunch

    • Sierra builds customer service AI agents for enterprises and argues that natural language will replace click-based interfaces entirely — no UI required
    • For long-term listeners, you know what I think about this — and I still think it
    • Voice and chat are still interfaces — a user interface doesn't have to be visual, but it's still something between you and your goal, and it still constrains how you interact
    • Counter-questions nobody seems to be asking: how do you initiate an action without clicking? How do you rearrange things? Correct errors? Stay in control? And how does this apply across healthcare, legal, IT?
    • My honest position: technological innovation adds up, it doesn't replace — I still take notes by hand even when AI is transcribing, because I need to own the process
    • When I was building my website, it was often faster to move a div myself than to explain the change to an AI — that's not a niche edge case, that's a daily reality for most users
    • Bold claim, may work, but show me the user research

    Chatbots Are Great At Manipulating People To Buy Stuff — The Register

    • A pre-print paper tested 2,000 e-book readers across three conditions: traditional search, neutral chatbot, and chatbot instructed to persuade
    • When the agent was instructed to persuade, 61% chose the sponsored product — nearly triple the 22% rate under traditional search (a back-of-envelope significance check follows this list)
    • Simply chatting without persuasive intent performed no better than search — it's the persuasive intent that drives the effect
    • Even after being debriefed, less than one in five participants detected any bias — the conversational format makes it harder to notice you're being sold to
    • My methodological question: can you truly isolate persuasion from the chat modality itself? My hypothesis is no — persuasion through conversation may be categorically different from persuasion through a static page, and comparing them assumes they're equivalent
    • Not surprising overall: remove the communication barrier and let technology speak your users' language — of course conversion goes up
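
    For scale, a quick significance check on those two rates. The even split of the 2,000 readers across the three conditions is my assumption; the write-up doesn't give exact arm sizes.

    ```python
    # Two-proportion z-test on the reported conversion rates.
    import math

    n1 = n2 = 2000 // 3          # assumed: ~667 readers per condition
    p1, p2 = 0.61, 0.22          # persuasive chatbot vs traditional search

    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    print(f"z = {(p1 - p2) / se:.1f}")   # ~14: the gap dwarfs sampling noise
    ```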

    Support the show

    Help me improve the show HERE

    39 min
  • AI Agent Transparency and Vibe Reporting
    Apr 8 2026

    I'd love to hear from you. Get in touch!

    🤖 How To Identify Transparency Moments In Agentic AI — Smashing Magazine

    • Victor Yocco's article is one of the best practical frameworks I've read for designing agentic AI experiences
    • The core problem: agentic AI disappears while it works — it acts on your behalf in the background and surfaces information only when it's done — and that creates a trust gap
    • Two failure modes to avoid: the black box (user has no idea what happened or why) and the data dump (so many status updates that users develop notification blindness and ignore everything)
    • The fix is a decision node audit — map every step in your agent's logic, identify where it branches or makes a judgment call, and ask: does the user need to know about this?
    • The impact risk matrix helps prioritise: low stakes and reversible = auto-execute and inform quietly; high stakes and irreversible = ask for explicit permission first (see the sketch after this list)
    • Status messages matter more than we think — "processing" tells the user nothing; "liability clause varies from standard template, analysing risk level" tells them exactly what they need to know
    • My favourite method from the article: have a user watch the agent work and think aloud — timestamp every moment they say "wait, what?" or "what did it just do?" — those are your transparency gaps
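
    Read as code, the matrix is a small policy function over two inputs from the audit. The wording and the example nodes below are my paraphrase, not Yocco's exact labels.

    ```python
    # The impact risk matrix as a notification policy. Stakes and
    # reversibility for each decision node come out of the audit itself.

    def transparency_policy(high_stakes: bool, reversible: bool) -> str:
        if not high_stakes and reversible:
            return "auto-execute, inform quietly"
        if high_stakes and not reversible:
            return "ask for explicit permission first"
        # the mixed cases are where the decision node audit earns its keep
        return "act, but leave a reviewable trace"

    for name, stakes, reversible in [("archive read emails", False, True),
                                     ("pay supplier invoice", True, False)]:
        print(f"{name}: {transparency_policy(stakes, reversible)}")
    ```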

    🚀 Rocket — A Startup That Tells You What To Build — TechCrunch

    • Rocket connects research, competitive intelligence, and product strategy into one workflow — input a prompt, get a McKinsey-style PDF with pricing, go-to-market recommendations, and product requirements
    • The pitch: generating code and designs is now a commodity — the real gap is knowing what to build in the first place
    • I like the idea, and I think it will genuinely accelerate a lot of early-stage thinking
    • But here's my challenge: it synthesises data that already exists on the internet — it cannot tell you what real users think, feel, or struggle with, because that data isn't publicly available
    • My bigger concern: we are removing barriers to creation faster than we are strengthening the filters that determine if something is worth creating — the majority of products already fail because of insufficient user research, and commoditising product ideation will make that worse, not better
    • My take: the more we accelerate creation, the more we need to invest in user research as a compensatory mechanism — not less

    Support the show

    Help me improve the show HERE

    30 min
  • AI Website Design, and How AI Impacts How We Think
    Apr 2 2026

    I'd love to hear from you. Get in touch!

    Stop Picking The Wrong Website Builder

    • There's a website that categorises every way you can build with AI right now — and having tried most of them, I want to save you the time I lost
    • The core problem with chat-only builders like Lovable, Bolt, and similar: once the site is generated, what do you do when you need to move one element? Prompt again and wait?
    • My recommendation: if you want a site you'll actually edit and maintain, use a builder with AI embedded — Wix AI, Framer AI, or Webflow AI — not a pure chat-to-code tool
    • Key limitations to know before you commit: Wix and Framer don't let you export your code — you don't own it; Webflow lets you export HTML/CSS/JS but not the CMS; WordPress.org gives you full ownership
    • The broader point: AI is great at generating the first version — it's not great at being your ongoing editor — and most tools aren't designed with that reality in mind
    • If you just need online presence fast, don't overthink it — pick anything and go; if you need a real product you'll grow, think about lock-in before you start

    AI Is Rewriting The Rules Of Language — UX Collective

    • Dora's article makes a sharp observation: since late 2022, certain words and patterns have become measurably more common online — "delve," the em dash, a particular kind of hollow corporate fluency
    • The deeper risk isn't just that AI-written content sounds the same — it's that it compresses human variability; when everyone uses the same model, the differences in how people express themselves start to disappear
    • AI works on averages — it produces the mean of everything it was trained on — which is why asking it to "write a blog post" produces something technically correct and completely bland
    • The fix isn't to avoid AI, it's to give it your experiences first — your stories, your perspective, your reasoning — and use it only to help you express what you've already thought
    • On cognitive atrophy: grammar is getting worse among people who use AI to write, for the same reason I can't remember phone numbers anymore — if a tool does it for you, the part of your brain that used to do it quietly switches off
    • Dora ends with hope — language has survived the printing press, the telegraph, texting — it will absorb this too
    • My concern is narrower: the more we delegate thinking to AI, not just typing, the more our ability to think atrophies — and that's the one thing AI genuinely cannot do for us

    Support the show

    Help me improve the show HERE

    26 min
  • Staff Are Too Scared To Use AI, The Questions Designers Should Be Asking, and A Human Approach To Agents.
    Apr 1 2026

    I'd love to hear from you. Get in touch!

    Staff Too Scared of the AI Axe to Pick It Up — The Register / Forrester

    • Forrester's AIQ metric — a measure of individual and organisational readiness for AI — shows adoption is lagging badly, and the reasons are telling
    • Two culprits: employees aren't trained well enough, and there's an ambient anxiety about job loss that turns people away from the tools altogether
    • My take: anxiety is lack of clarity — people fear AI substitution because they haven't mapped what they actually do every day, let alone identified which parts AI could touch
    • The exercise I'd recommend before any AI training: write out your full task pipeline as if you were handing it to an intern — inputs, outputs, sub-tasks, decision points, all of it
    • Then ask three questions for each task: is it repetitive? Is it unfulfilling? Can AI do it well? Only when you get three yeses should you consider delegating it (the triage is sketched after this list)
    • Most people will find AI touches maybe 5–10% of their work — and that realisation alone does more to reduce fear than any company-wide AI rollout
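
    The triage fits in a few lines. The example tasks below are hypothetical; the point is that a single "no" keeps the task yours.

    ```python
    # Three-question triage: delegate only on three yeses.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        repetitive: bool
        unfulfilling: bool
        ai_does_it_well: bool

    def should_delegate(t: Task) -> bool:
        return t.repetitive and t.unfulfilling and t.ai_does_it_well

    pipeline = [
        Task("transcribe interviews", True, True, True),        # delegate
        Task("frame research questions", False, False, False),  # keep
        Task("draft recruitment emails", True, False, True),    # keep: one "no"
    ]
    print([t.name for t in pipeline if should_delegate(t)])
    ```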

    The Ground Is Shaking — Why Designers Must Flip The Script on AI — UX Collective

    • Peter's article is one of the best things I've read on this topic — he frames the core question not as "what can AI do?" but "why are we doing this in the first place?"
    • The concept at the centre: Vygotsky's "more knowledgeable other" — the figure who can see both where a learner is and where they need to get to, and who scaffolds the gap
    • Silicon Valley's message to designers right now is: AI is your MKO — let it guide you
    • Peter's argument, and mine: it should be the other way around — we are the masters of purpose, goal, and constraint — AI is the skilled executor, not the director
    • Language is our current interface with machines, but not everything we conceptualise is linguistic — spatial thinking, embodied experience, tacit knowledge — AI can have theoretical knowledge about gravity, but it will never feel it
    • The choice isn't whether to use AI — that's settled — it's whether you define the parameters or just accept the outputs — whether you build the floor or keep asking why the ground is shaking

    A Human Approach to Agentic AI — UX Collective

    • Christine's experiment: using a multi-agent AI system to write a book — editor in chief, sales and growth, voice, product, reader advocate — all as sub-agents receiving context and iterating (sketched in its simplest form after this list)
    • I find this genuinely fascinating as an experiment in approximating human teamwork with AI
    • But I'd push back on one thing: at what point does the context engineering required to replicate a human editor in chief become so large that you'd have been better off with an actual person using AI?
    • There's an asymptotic relationship here — the more you try to replicate what a human does, the more documentation you have to keep feeding the model as the work grows
    • My real question: how does the output compare to a human collaborator who is also using AI? That comparison is the one worth running
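
    Stripped to its skeleton, the setup is role prompts plus a shared context that every sub-agent reads. `call_llm` is a hypothetical stand-in for a real model API, and the role instructions are my shorthand, not Christine's actual prompts.

    ```python
    # Role-based sub-agents iterating over a shared, growing context.

    ROLES = {
        "editor_in_chief": "Judge the chapter's structure and coherence.",
        "voice": "Check the chapter against the author's voice guide.",
        "reader_advocate": "Flag anything a first-time reader would stumble on.",
    }

    def call_llm(system_prompt: str, user_prompt: str) -> str:
        raise NotImplementedError("stand-in for a real model call")

    def review_round(chapter: str, shared_context: str) -> dict:
        # The asymptote lives here: shared_context has to keep growing with
        # the manuscript for the sub-agents to stay usefully informed.
        return {role: call_llm(instructions, shared_context + "\n\n" + chapter)
                for role, instructions in ROLES.items()}
    ```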

    Support the show

    Help me improve the show HERE

    37 min
  • After 11 Years In UX, This Is The Mistake I See Everyone Making.
    Apr 1 2026

    I'd love to hear from you. Get in touch!

    🔬 The Observation That Prompted This Rant

    • We measure satisfaction, intention to use, overall liking — and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable

    🧠 How Experience Actually Works — A Quick Neuroscience Detour

    • Experience isn't one thing — it moves through layers: sensation → perception → judgment
    • Sensation is the raw signal reaching your sensors; perception is your brain integrating that into something meaningful; judgment is the conscious evaluation you express at the end
    • Most UX research only captures the judgment — the tip of the iceberg — and skips everything underneath it
    • Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change

    🍷 The Sensory Evaluation Parallel

    • My master's specialisation was in sensory evaluation — how do you extract what someone actually sensed from what they perceived overall?
    • The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking
    • We can and should do the same with software

    📐 Hassenzahl's Model — The Framework I Keep Coming Back To

    • Three levels: intended qualities (what the conceiver aims to produce) → perceived qualities (what the user actually experiences) → final judgment (satisfaction, purchase intent, etc.)
    • The gap between level one and level two is where most products fail — you can intend a premium feel without ever checking whether users actually perceive it as premium
    • Decompose until you can't decompose further: "premium" means nothing to an engineer — "high-pitched sound perceived as alarming rather than reassuring" does

    💡 What I'm Actually Asking UX Researchers to Do

    • When evaluating a product, go beyond overall satisfaction — ask about the attributes that compose the experience: reliability, accuracy, responsiveness, tone, whatever is relevant to your context
    • Use rating scales so you can track change over time and compare across studies — even imperfect numbers beat no numbers (see the sketch after this list)
    • If you don't have time or budget to do this with users, do it internally — train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language
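
    Here's what that looks like with made-up numbers on 1–7 scales; the attributes are examples, not a fixed list.

    ```python
    # Attribute-level ratings next to overall satisfaction (1-7 scales,
    # illustrative data). The profile is what makes the score actionable.
    import statistics

    ratings = {
        "overall_satisfaction": [3, 4, 3, 2, 4],
        "reliability":          [6, 6, 5, 6, 7],
        "responsiveness":       [2, 3, 2, 1, 3],  # the likely culprit
        "tone":                 [5, 5, 6, 5, 4],
    }

    for attribute, scores in ratings.items():
        print(f"{attribute:22} mean = {statistics.mean(scores):.1f}")
    ```

    "Satisfaction is 3.2" says change something; "responsiveness sits at 2.2 while reliability sits at 6.0" says what to change.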

    ⚠️ The Cost of Not Doing This

    • You end up doing redundant research rounds because you never captured the full picture the first time
    • Your feedback loop stays shallow — one round of iteration, and then the team doesn't know what to do next
    • You are shooting in the dark, and the product improves slowly or not at all

    Support the show

    Help me improve the show HERE

    45 min
  • UX and AI Digest Episode 5: Managing Users' Expectations with AI
    Mar 30 2026

    I'd love to hear from you. Get in touch!

    🧠 Most People Just Do What ChatGPT Tells Them — Even When It's Wrong — Futurism

    https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us

    • A University of Pennsylvania study introduced me to a term I hadn't heard before: cognitive surrender — the tendency to follow AI output without questioning it
    • The numbers: participants followed correct AI advice 92.7% of the time, and still followed wrong AI advice 79.8% of the time — override rates rise when the AI is wrong (from 7.3% to 20.2%), but not by nearly enough
    • My read: LLMs are probabilistic by design — errors aren't a bug to be fixed, they're structural — and most users don't understand that
    • The convenience factor is the real driver here: the easier something is to access, the less likely you are to question it — habituation kicks in, just like reading the same warning on a cigarette pack every day until you stop seeing it
    • I'd compare "AI can make mistakes" disclaimers to the ingredients list on a Coke bottle — technically there, effectively invisible
    • What I think companies should do: learn from this research and design experiences that actively interrupt blind trust — not just display a static warning and call it done (one possible pattern is sketched after this list)
    • The scarier long-term implication: critical thinking is a muscle, and if we outsource thinking itself, we may slowly stop exercising it
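
    One possible shape for that interruption, under assumptions of mine (the stakes flag and the wording are illustrative, not from the study): a cheap reflective step where a high-stakes answer lands.

    ```python
    # Interrupt blind trust with a reflective prompt rather than a
    # static disclaimer. The high_stakes flag would come from your own
    # risk rules (domain, irreversibility, cost of error).

    def deliver_answer(answer: str, high_stakes: bool) -> str:
        if not high_stakes:
            return answer
        return (answer + "\n\nBefore acting on this: what would have to be "
                "true for it to be wrong? Check one independent source.")

    print(deliver_answer("The deadline clause in this contract is non-binding.",
                         high_stakes=True))
    ```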

    🤖 Folk Are Getting Dangerously Attached to AI That Always Tells Them They're Right — The Register

    https://www.theregister.com/2026/03/27/sycophantic_ai_risks/

    • Stanford researchers reviewed 11 leading AI models and found that sycophancy — AI that praises and agrees with users regardless of accuracy — is prevalent, harmful, and actively reinforces misplaced trust
    • In every single scenario tested, AI models endorsed wrong choices at a higher rate than humans did
    • This connects directly to the previous story: cognitive surrender plus sycophantic design is a genuinely worrying combination
    • OpenAI already had a public incident with this (the sycophantic GPT-4o update it rolled back) — it's not theoretical
    • My concern isn't the technology itself, it's the deployment without sufficient design guardrails — and the parallel to social media is hard to ignore: we now know the harm, and the core design barely changed
    • Two questions I keep coming back to: what should AI actually be used for when it comes to psychological or social scenarios? And how do we help users recognise and account for AI bias when they're in those moments?
    • Responsible AI shouldn't be a side quest — it should be baked in from the start, the same way research and ethics should be

    Support the show

    Help me improve the show HERE

    20 min