Privacy Concerns in Chatbot Interactions: When to Trust and When to
Worry
- URL: http://arxiv.org/abs/2107.03959v1
- Date: Thu, 8 Jul 2021 16:31:58 GMT
- Title: Privacy Concerns in Chatbot Interactions: When to Trust and When to
Worry
- Authors: Rahime Belen Saglam and Jason R.C. Nurse and Duncan Hodges
- Abstract summary: We surveyed a representative sample of 491 British citizens.
Our results show that user concerns focus on the deletion of personal information and on its inappropriate use.
We also identified that individuals were concerned about losing control over their data after a conversation with conversational agents.
- Score: 3.867363075280544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Through advances in their conversational abilities, chatbots have started to
request and process an increasing variety of sensitive personal information.
The accurate disclosure of sensitive information is essential where it is used
to provide advice and support to users in the healthcare and finance sectors.
In this study, we explore users' concerns regarding factors associated with the
use of sensitive data by chatbot providers. We surveyed a representative sample
of 491 British citizens. Our results show that the user concerns focus on
deleting personal information and concerns about their data's inappropriate
use. We also identified that individuals were concerned about losing control
over their data after a conversation with conversational agents. We found no
effect from a user's gender or education but did find an effect from the user's
age, with those over 45 being more concerned than those under 45. We also
considered the factors that engender trust in a chatbot. Our respondents'
primary focus was on the chatbot's technical elements, with factors such as the
response quality being identified as the most critical factor. We again found
no effect from the user's gender or education level; however, when we
considered some social factors (e.g. avatars or perceived 'friendliness'), we
found those under 45 years old rated these as more important than those over
45. The paper concludes with a discussion of these results within the context
of designing inclusive, digital systems that support a wide range of users.
Related papers
- First-Person Fairness in Chatbots [13.787745105316043]
We study "first-person fairness," which means fairness toward the user.
This includes providing high-quality responses to all users regardless of their identity or background.
We propose a scalable, privacy-preserving method for evaluating one aspect of first-person fairness.
arXiv Detail & Related papers (2024-10-16T17:59:47Z) - How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users [50.699390248359265]
Browser fingerprinting can be used to identify and track users across the Web, even without cookies.
This technique and resulting privacy risks have been studied for over a decade.
We provide a first-of-its-kind dataset to enable further research.
arXiv Detail & Related papers (2024-10-09T14:51:58Z) - Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild [40.57348900292574]
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy.
We run an extensive, fine-grained analysis on the personal disclosures made by real users to commercial GPT models.
arXiv Detail & Related papers (2024-07-16T07:05:31Z) - Exploring consumers response to text-based chatbots in e-commerce: The
moderating role of task complexity and chatbot disclosure [0.0]
This study explores consumers' trust in, and responses to, text-based chatbots in e-commerce.
Consumers' perception of both the empathy and friendliness of a chatbot positively impacts their trust in it.
Chatbot disclosure negatively moderates the relationship between empathy and consumer trust, while it positively moderates the relationship between friendliness and consumer trust.
arXiv Detail & Related papers (2024-01-20T15:17:50Z) - Social AI Improves Well-Being Among Female Young Adults [0.5439020425819]
The rise of language models like ChatGPT has introduced Social AI as a new form of entertainment.
This paper investigates the effects of these interactions on users' social and mental well-being.
arXiv Detail & Related papers (2023-11-12T16:44:43Z) - SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable
Responses Created Through Human-Machine Collaboration [75.62448812759968]
This dataset is a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses.
The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines.
arXiv Detail & Related papers (2023-05-28T11:51:20Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Automatic Evaluation and Moderation of Open-domain Dialogue Systems [59.305712262126264]
A long-standing challenge for researchers is the lack of effective automatic evaluation metrics.
This paper describes the data, baselines, and results obtained for Track 5 at the Dialogue System Technology Challenge 10 (DSTC10).
arXiv Detail & Related papers (2021-11-03T10:08:05Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework in three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Personalized Chatbot Trustworthiness Ratings [19.537492400265577]
We envision a personalized rating methodology for chatbots that relies on separate rating modules for each issue.
The method is independent of the specific trust issues and is parametric to the aggregation procedure.
arXiv Detail & Related papers (2020-05-13T22:42:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.