Recommender systems have become an integral part of everyday online content platforms such as YouTube, Spotify, and Instagram. Meanwhile, researchers have continuously reported user experiential issues that point to a lack of knowledge about user-centered design of user-recommender interactions (e.g., users’ weak sense of agency, users’ unawareness of a recommender’s inner workings). However, recent studies still focus on recommendation techniques and their performance, and discussion of designs that address these experiential issues is in its infancy. Moreover, designing user-recommender interactions requires attention to the contextual specificity of each platform domain. For example, although the expected user experience with recommendations in social network services differs from that on other platforms because of the unique role of social interaction, research on user-recommender interaction in the social network context remains insufficient. This project aims to understand user-centered design of user-recommender interactions and to propose tactics for designing them.
Self-explanatory features of intelligent agents have been emphasized as a way to help users build mental models. While previous studies have revealed what traits an agent’s explanations should have, how to design such explanations from a dialogue perspective (e.g., through a conversational UI) has not been investigated. The purpose of this work is therefore to explore the roles of explanations in recommender chatbots and to suggest design considerations for such chatbots’ explanations based on a better understanding of the user experience. For this study, we designed a recommender chatbot to act as a probe. Drawing on service customization in recommender systems, it provides explanations for its recommendations that reflect how it learns and evolves. Each participant used the recommender chatbot for five days and then took part in a semi-structured interview. We discovered three roles that explanations play in users’ mental model development. First, each user rapidly built a mental model of the chatbot and became more tolerant of unsatisfactory recommendations. Second, users willingly gave information to the chatbot and developed a sense of ownership toward it. Third, users reflected on their own habitual activities and came to rely on the chatbot. Based on these findings, we suggest three design considerations for chatbot explanations. First, explanations should be grounded in data from diverse channels. Second, explanations should be logical and should distinguish between personal and generic data. Third, explanations should gradually become more complete through inferences drawn from comprehensive information.