Exploring consumers' response to text-based chatbots in e-commerce: The
moderating role of task complexity and chatbot disclosure
- URL: http://arxiv.org/abs/2401.12247v1
- Date: Sat, 20 Jan 2024 15:17:50 GMT
- Title: Exploring consumers' response to text-based chatbots in e-commerce: The
moderating role of task complexity and chatbot disclosure
- Authors: Xusen Cheng, Ying Bao, Alex Zarifis, Wankun Gong and Jian Mou
- Abstract summary: This study aims to explore consumers' trust in and response to a text-based chatbot in e-commerce.
The consumers' perception of both the empathy and friendliness of the chatbot positively impacts their trust in it.
Disclosure of the text-based chatbot negatively moderates the relationship between empathy and consumers' trust, while it positively moderates the relationship between friendliness and consumers' trust.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial-intelligence-based chatbots have brought unprecedented
business potential. This study aims to explore consumers' trust in and response
to a text-based chatbot in e-commerce, involving the moderating effects of task
complexity and chatbot identity disclosure. A survey with 299 usable responses
was conducted in this research. This study adopted ordinary least squares
regression to test the hypotheses. First, the consumers' perception of both the
empathy and friendliness of the chatbot positively impacts their trust in it.
Second, task complexity negatively moderates the relationship between
friendliness and consumers' trust. Third, disclosure of the text-based chatbot
negatively moderates the relationship between empathy and consumers' trust,
while it positively moderates the relationship between friendliness and
consumers' trust. Fourth, consumers' trust in the chatbot increases their
reliance on the chatbot and decreases their resistance to it in future
interactions. Adopting the stimulus-organism-response framework, this study
provides important insights into consumers' perception of and response to the
text-based chatbot. The findings of this research also offer suggestions for
increasing consumers' positive responses to text-based chatbots. Extant
studies have investigated the effects of automated bots' attributes on
consumers' perceptions; however, the boundary conditions of these effects have
been largely ignored. This research is one of the first attempts to provide a
deep understanding of consumers' responses to a chatbot.
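The moderation hypotheses above can be tested with an OLS regression that includes interaction terms between the predictors (empathy, friendliness) and the moderator (disclosure). The following is a minimal sketch with synthetic data; all variable names and coefficient values are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Synthetic illustration of moderated OLS:
# trust ~ empathy + friendliness + disclosure + interaction terms.
rng = np.random.default_rng(0)
n = 299  # sample size matching the study's usable responses

empathy = rng.normal(size=n)
friendliness = rng.normal(size=n)
disclosure = rng.integers(0, 2, size=n).astype(float)  # 0/1 moderator

# Invented "true" effects mirroring the reported pattern: disclosure
# dampens empathy's effect (negative interaction) and strengthens
# friendliness's effect (positive interaction).
trust = (0.5 * empathy + 0.4 * friendliness
         - 0.3 * disclosure * empathy
         + 0.2 * disclosure * friendliness
         + rng.normal(scale=0.1, size=n))

# Design matrix: intercept, main effects, and interaction terms.
X = np.column_stack([
    np.ones(n), empathy, friendliness, disclosure,
    disclosure * empathy, disclosure * friendliness,
])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
names = ["intercept", "empathy", "friendliness", "disclosure",
         "disclosure*empathy", "disclosure*friendliness"]
for name, b in zip(names, beta):
    print(f"{name:>24s}: {b:+.3f}")
```

A negative coefficient on the `disclosure*empathy` term and a positive one on `disclosure*friendliness` would correspond to the moderation pattern the study reports; in practice one would also examine standard errors and p-values, e.g. via a statistics package.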
Related papers
- The Illusion of Empathy: How AI Chatbots Shape Conversation Perception [10.061399479158903]
GPT-based chatbots were perceived as less empathetic than human conversational partners.
Empathy ratings from GPT-4o annotations aligned with users' ratings, reinforcing the perception of lower empathy.
Empathy models trained on human-human conversations detected no significant differences in empathy language.
arXiv Detail & Related papers (2024-11-19T21:47:08Z) - Multi-Purpose NLP Chatbot : Design, Methodology & Conclusion [0.0]
This research paper provides a thorough analysis of the chatbot technology environment as it exists today.
It presents a flexible system that uses reinforcement learning strategies to improve user interactions and conversational experiences.
The complexity of chatbot technology development is also explored, along with the causes that have propelled these developments and their far-reaching effects on a range of sectors.
arXiv Detail & Related papers (2023-10-13T09:47:24Z) - Evaluating Chatbots to Promote Users' Trust -- Practices and Open
Problems [11.427175278545517]
This paper reviews current practices for testing chatbots.
It identifies gaps as open problems in pursuit of user trust.
It outlines a path forward to mitigate issues of trust related to service or product performance, user satisfaction and long-term unintended consequences for society.
arXiv Detail & Related papers (2023-09-09T22:40:30Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - Neural Generation Meets Real People: Building a Social, Informative
Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - A Deep Learning Approach to Integrate Human-Level Understanding in a
Chatbot [0.4632366780742501]
Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply in a fraction of a second.
We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence.
arXiv Detail & Related papers (2021-12-31T22:26:41Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - CheerBots: Chatbots toward Empathy and Emotionusing Reinforcement
Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Can You be More Social? Injecting Politeness and Positivity into
Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.