cixd creative interaction design lab.

Humanlikeness in AI Services

2020 - 2022

TBD

  • Understanding How Users Experience the Physiological Expression of Non-humanoid Voice-based Conversational Agent in Healthcare Services

    Jung, I., Kim, H., and Lim, Y., "Understanding How Users Experience the Physiological Expression of Non-humanoid Voice-based Conversational Agent in Healthcare Services," Proceedings of DIS 2021, ACM Press, (Virtual Conference, June 28-July 2), pp. 1433-1446.
    Abstract

    Interactions with voice-based conversational agents (VCAs) in non-humanoid forms are becoming increasingly pervasive, and research on non-humanoid VCAs that engage diverse human traits has been conducted. However, no studies have yet explored expressing a living body's physiological states solely through the voice of such VCAs. As the physiological expressions of such VCAs have the potential to manifest health-related issues in a human-like way, we selected healthcare scenarios as a case for exploring the novel user experiences they can induce. We conducted design workshops to identify design considerations and design opportunities for physiologically expressible VCAs in the healthcare service domain. Following these findings, we designed a new concept of physiologically expressible healthcare VCAs and conducted a Wizard-of-Oz user study. Finally, we summarize the unique user experiences with the physiologically expressible VCA's healthcare services and user perceptions of its physiological expressions, and discuss design implications for physiologically expressible VCAs.

  • Understanding the Negative Aspects of User Experience in Human-likeness of Voice-based Conversational Agents

    Kim, H., Jung, I., and Lim, Y., "Understanding the Negative Aspects of User Experience in Human-likeness of Voice-based Conversational Agents," Proceedings of DIS 2022, ACM Press, pp. 1418-1427.
    Abstract

    With advances in artificial intelligence technology, Voice-based Conversational Agents (VCAs) can now imitate human abilities, sometimes almost indistinguishably from humans. However, concerns have been raised that too much perceived similarity can trigger threats and fears among users. This raises a question: Should VCAs be able to imitate humans perfectly? To address this, we explored what influences the negative aspects of user experience in human-like VCAs. We conducted a qualitative exploratory study to elicit participants’ perceptions and feelings about human-like VCAs through comparable video prototypes of human–agent conversation and human–human conversation. We found that human-like dialogue falling outside the expressed purpose of a VCA, as well as expressions pretending to come from a human identity, could lead to negative experiences with VCAs. Based on our findings, we discuss design directions for overcoming potential issues of human imitation.