Designing Chatbots to Support Victims and Survivors of Domestic Abuse
- URL: http://arxiv.org/abs/2402.17393v1
- Date: Tue, 27 Feb 2024 10:40:15 GMT
- Title: Designing Chatbots to Support Victims and Survivors of Domestic Abuse
- Authors: Rahime Belen Saglam, Jason R. C. Nurse, Lisa Sugiura
- Abstract summary: We investigate the role that chatbots may play in supporting victims/survivors in situations such as these (e.g., during the COVID-19 pandemic) or where direct access to help is limited.
Interviews were conducted with experts working in domestic abuse support services and organizations.
Thematic content analysis was applied to assess and extract insights from the interview data and the content on victim-support websites.
- Score: 2.3020018305241337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Objective: Domestic abuse cases have risen significantly over the last four
years, in part due to the COVID-19 pandemic and the challenges for victims and
survivors in accessing support. In this study, we investigate the role that
chatbots - Artificial Intelligence (AI) and rule-based - may play in supporting
victims/survivors in situations such as these or where direct access to help is
limited. Methods: Interviews were conducted with experts working in domestic
abuse support services and organizations (e.g., charities, law enforcement) and
the content of websites of related support-service providers was collected.
Thematic content analysis was then applied to assess and extract insights from
the interview data and the content on victim-support websites. We also reviewed
pertinent chatbot literature to reflect on studies that may inform design
principles and interaction patterns for agents used to support
victims/survivors. Results: From our analysis, we outlined a set of design
considerations/practices for chatbots that consider potential use cases and
target groups, dialog structure, personality traits that might be useful for
chatbots to possess, and finally, safety and privacy issues that should be
addressed. Of particular note are situations where AI systems (e.g., ChatGPT,
CoPilot, Gemini) are not recommended for use, the value of conveying emotional
support, the importance of transparency, and the need for a safe and
confidential space. Conclusion: It is our hope that these
considerations/practices will stimulate debate among chatbot and AI developers
and service providers and - for situations where chatbots are deemed
appropriate for use - inspire efficient use of chatbots in the support of
survivors of domestic abuse.
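To make the design considerations above more concrete, here is a minimal, hypothetical Python sketch of a rule-based support flow. It is not the authors' design: the commands, messages, and signposting triggers are illustrative assumptions, intended only to show how transparency about the agent's nature, a quick-exit command, signposting to human-run services, and avoidance of data retention might be encoded in a simple rule-based chatbot.

```python
# Hypothetical sketch of a rule-based support chatbot flow. All commands,
# messages, and triggers below are illustrative assumptions, not the paper's
# specification.

EXIT_COMMANDS = {"exit", "quit", "leave"}

# Signpost to human-run services rather than offering advice directly.
SIGNPOSTS = {
    "immediate danger": "If you are in immediate danger, please contact the emergency services.",
    "talk to someone": "You can reach a trained human advisor through a local domestic abuse helpline.",
}


def greet() -> str:
    # Transparency: disclose that this is an automated agent, not a person,
    # and state the confidentiality/quick-exit behaviour up front.
    return ("Hi, I'm an automated assistant, not a human. Nothing you type here is "
            "stored. Type 'exit' at any time to end and clear this conversation.")


def respond(message: str) -> str:
    text = message.strip().lower()
    if text in EXIT_COMMANDS:
        # Safety: the quick-exit command ends the session without keeping a transcript.
        return "Conversation ended. No record of this chat has been kept."
    for trigger, signpost in SIGNPOSTS.items():
        if trigger in text:
            return signpost
    return "I'm here to listen. Would you like information about support services near you?"


if __name__ == "__main__":
    print(greet())
    print(respond("I need to talk to someone"))
    print(respond("exit"))
```

A rule-based flow of this kind trades flexibility for predictability; whether that trade-off is appropriate for a given use case is exactly the sort of question the design considerations above are meant to inform.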
Related papers
- On the Reliability of Large Language Models to Misinformed and Demographically-Informed Prompts [20.84000437261526]
We investigate and observe Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions with demographic information.
Quantitative analysis using True/False questions reveals that these chatbots can be relied on to give the right answers to these close-ended questions.
Qualitative insights gathered from domain experts, however, show that there are still concerns regarding privacy and ethical implications.
arXiv Detail & Related papers (2024-10-06T07:40:11Z) - The Typing Cure: Experiences with Large Language Model Chatbots for
Mental Health Support [35.61580610996628]
People experiencing severe distress increasingly use Large Language Model (LLM) chatbots as mental health support tools.
This study builds on interviews with 21 individuals from globally diverse backgrounds to analyze how users create unique support roles.
We introduce the concept of therapeutic alignment, or aligning AI with therapeutic values for mental health contexts.
arXiv Detail & Related papers (2024-01-25T18:08:53Z) - Deceptive AI Ecosystems: The Case of ChatGPT [8.128368463580715]
ChatGPT has gained popularity for its capability in generating human-like responses.
This paper investigates how ChatGPT operates in the real world where societal pressures influence its development and deployment.
We examine the ethical challenges stemming from ChatGPT's deceptive human-like interactions.
arXiv Detail & Related papers (2023-06-18T10:36:19Z) - ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text
Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z) - On the Robustness of ChatGPT: An Adversarial and Out-of-distribution
Perspective [67.98821225810204]
We evaluate the robustness of ChatGPT from the adversarial and out-of-distribution perspective.
Results show consistent advantages on most adversarial and OOD classification and translation tasks.
ChatGPT shows astounding performance in understanding dialogue-related texts.
arXiv Detail & Related papers (2023-02-22T11:01:20Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - Making the case for audience design in conversational AI: Rapport
expectations and language ideologies in a task-oriented chatbot [0.0]
This paper argues that insights into users' language ideologies and their rapport expectations can be used to inform the audience design of the bot's language and interaction patterns.
I will define audience design for conversational AI and discuss how user analyses of interactions and socio-linguistically informed theoretical approaches can be used to support audience design.
arXiv Detail & Related papers (2022-06-21T19:21:30Z) - Fragments of the Past: Curating Peer Support with Perpetrators of
Domestic Violence [88.37416552778178]
We report on a ten-month study where we worked with six support workers and eighteen perpetrators in the design and deployment of Fragments of the Past.
We share how crafting digitally-augmented artefacts - 'fragments' - of experiences of desisting from violence can translate messages for motivation and rapport between peers.
These insights provide the basis for practical considerations for future network design with challenging populations.
arXiv Detail & Related papers (2021-07-09T22:57:43Z) - Towards Emotional Support Dialog Systems [61.58828606097423]
We define the Emotional Support Conversation task and propose an ESC Framework, which is grounded on the Helping Skills Theory.
We construct an Emotion Support Conversation dataset (ESConv) with rich annotation (especially support strategy) in a help-seeker and supporter mode.
We evaluate state-of-the-art dialog models with respect to the ability to provide emotional support.
arXiv Detail & Related papers (2021-06-02T13:30:43Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Did Chatbots Miss Their 'Apollo Moment'? A Survey of the Potential, Gaps
and Lessons from Using Collaboration Assistants During COVID-19 [6.4126050820406]
We look at how AI in general, and collaboration assistants (CAs or chatbots for short) have been used during a true global exigency - the COVID-19 pandemic.
The key observation is that chatbots missed their "Apollo moment" when they could have really provided contextual, personalized, reliable decision support at scale that the state-of-the-art makes possible.
arXiv Detail & Related papers (2021-02-27T19:08:54Z)