
HCI Deep Dives


By: Kai Kunze

About this title

HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human Computer Interaction (HCI). AI-generated using the latest publications in the field, each episode dives into in-depth discussions on topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you're a researcher, practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.

Copyright 2024 All rights reserved. Science
  • AHs 2025 texTENG: Fabricating Wearable Textile-Based Triboelectric Nanogenerators
    Mar 6 2026

    What if your clothes could power your wearable devices? texTENG introduces a maker-friendly framework for creating textile-based triboelectric nanogenerators (TENGs)—fabrics that harvest energy from everyday human motion like walking, arm swings, or even breathing. While TENGs have shown promise for sustainable wearable power, adoption has been slow due to hard-to-find materials and complex fabrication. texTENG solves this with accessible techniques using common textiles and conductive threads that makers can replicate. The system demonstrates versatile applications: from self-powered touch sensors that detect gestures without batteries, to energy-harvesting patches that can charge small devices. Technical evaluations confirm the devices can reliably detect touch input and store harvested energy efficiently. This work opens the door for HCI researchers and makers to explore battery-free wearable computing.

    Ritik Batra, Narjes Pourjafarian, Samantha Chang, Margaret Tsai, Jacob Revelo, and Cindy Hsin-Liu Kao. 2025. texTENG: Fabricating Wearable Textile-Based Triboelectric Nanogenerators. In Augmented Humans International Conference 2025 (AHs '25). ACM, New York, NY, USA. https://doi.org/10.1145/3745900.3746071

    12 min
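The self-powered touch sensing described above can be sketched in miniature: a TENG patch produces a voltage spike when touched, so touch detection reduces to finding upward threshold crossings in the voltage trace. The sampling rate, threshold, and synthetic signal below are illustrative assumptions, not parameters from the texTENG paper.

```python
import numpy as np

# Hypothetical sketch of TENG touch detection: the sampling rate (1 kHz),
# threshold (0.5 V), and signal shape are assumed for illustration only.
THRESHOLD = 0.5  # volts; a real sensor would need per-fabric calibration

def detect_touches(voltage: np.ndarray, threshold: float = THRESHOLD) -> list[int]:
    """Return sample indices where the signal crosses the threshold upward."""
    above = voltage > threshold
    # A touch event begins where the trace rises from below to above threshold.
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return rising.tolist()

# Synthetic trace: baseline noise plus two touch-like voltage spikes.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.02, 2000)
trace[500:520] += 1.0    # first simulated touch
trace[1500:1520] += 0.8  # second simulated touch

events = detect_touches(trace)
print(len(events), events)  # two rising edges, one per simulated touch
```

A battery-free deployment would run this logic on whatever harvested charge the fabric accumulates, which is why the detection step is kept to a single pass over the signal.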
  • TAFFC 2025 Micro-Expressions: Could Micro-Expressions Be Quantified? Electromyography Gives Affirmative Evidence
    Jan 27 2026

    Micro-expressions are fleeting facial movements lasting just 40-200 milliseconds that are believed to reveal concealed emotions. But can these subtle expressions actually be measured objectively? This study provides the first direct electromyographic (EMG) evidence that micro-expressions are real, quantifiable muscle activations—not just visual artifacts. By placing electrodes on participants' faces while they attempted to suppress genuine emotions, researchers captured the electrical activity of facial muscles during both macro-expressions and micro-expressions. The results show that micro-expressions involve significantly smaller muscle contractions than regular expressions, explaining why they're so hard to detect visually. The findings also reveal that micro-expressions are truly involuntary "leakage"—participants couldn't fully suppress their emotional responses even when trying. This research has important implications for lie detection, clinical assessment, and understanding the fundamental nature of emotional expression.

Jingting Li, Shaoyuan Lu, Yan Wang, Zizhao Dong, Su-Jing Wang, and Xiaolan Fu. 2025. Could Micro-Expressions Be Quantified? Electromyography Gives Affirmative Evidence. IEEE Transactions on Affective Computing, vol. 16, no. 4, 2025. https://doi.org/10.1109/TAFFC.2025.3575127

    14 min
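The core comparison above, that micro-expressions involve measurably smaller muscle contractions, can be illustrated with a standard EMG intensity measure: root-mean-square amplitude over a burst. The synthetic signals and the assumed amplitude ratio below are stand-ins, not data from the study.

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): comparing facial EMG
# burst amplitude between a macro- and a micro-expression via RMS.
def rms(signal: np.ndarray) -> float:
    """Root-mean-square amplitude, a standard EMG intensity measure."""
    return float(np.sqrt(np.mean(np.square(signal))))

rng = np.random.default_rng(1)
# Synthetic EMG bursts; the 5:1 amplitude ratio is an assumption made here
# purely to illustrate the macro vs. micro contrast the study quantifies.
macro_burst = rng.normal(0.0, 0.50, 400)  # ~400 ms at an assumed 1 kHz
micro_burst = rng.normal(0.0, 0.10, 100)  # ~100 ms, within the 40-200 ms range

print(rms(macro_burst) > rms(micro_burst))  # smaller contraction, lower RMS
```

In practice the electrode placement and burst segmentation dominate the difficulty; the amplitude comparison itself is this simple.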
  • TAFFC 2025 Music Emotion: Are We There Yet? A Brief Survey of Music Emotion Prediction Datasets, Models and Outstanding Challenges
    Jan 20 2026

    Music has long been known to evoke powerful emotions, but can machines truly understand and predict these emotional responses? This survey paper takes stock of the field of music emotion recognition (MER), examining the datasets, computational models, and persistent challenges that shape this research area. The authors review how emotion is represented—from categorical labels to dimensional models like valence-arousal—and analyze the most widely used datasets including the Million Song Dataset and MediaEval benchmarks. They trace the evolution from traditional machine learning approaches using hand-crafted audio features to modern deep learning architectures. Despite significant progress, the paper identifies fundamental challenges: the subjective nature of emotional responses to music, the difficulty of obtaining reliable ground truth labels, and the gap between controlled laboratory studies and real-world listening contexts.

Jaeyong Kang and Dorien Herremans. 2025. Are We There Yet? A Brief Survey of Music Emotion Prediction Datasets, Models and Outstanding Challenges. IEEE Transactions on Affective Computing, vol. 16, no. 4, 2025. https://doi.org/10.1109/TAFFC.2025.3583505

    12 min
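The dimensional valence-arousal representation mentioned above frames MER as a two-target regression problem: map per-clip audio features to a (valence, arousal) pair. A minimal sketch with synthetic features follows; real systems use descriptors like MFCCs or learned embeddings rather than the random stand-ins here.

```python
import numpy as np

# Minimal dimensional-MER sketch: least-squares regression from audio
# features to (valence, arousal). All data below is synthetic.
rng = np.random.default_rng(2)
n_clips, n_feats = 200, 8
X = rng.normal(size=(n_clips, n_feats))        # per-clip audio features (stand-in)
W_true = rng.normal(size=(n_feats, 2))         # hidden feature-to-emotion mapping
Y = X @ W_true + rng.normal(0.0, 0.05, (n_clips, 2))  # noisy VA annotations

# Fit one weight column per emotion dimension via least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W_hat
mse = float(np.mean((pred - Y) ** 2))
print(W_hat.shape, round(mse, 4))
```

The survey's challenges show up even in this toy setup: the "ground truth" Y is noisy annotation, and a model fit on one listening context need not transfer to another.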