ProsocialDialog: A Prosocial Backbone for Conversational Agents
- URL: http://arxiv.org/abs/2205.12688v1
- Date: Wed, 25 May 2022 11:48:47 GMT
- Title: ProsocialDialog: A Prosocial Backbone for Conversational Agents
- Authors: Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi,
Gunhee Kim, Yejin Choi, Maarten Sap
- Abstract summary: We introduce ProsocialDialog, the first large-scale dialogue dataset to teach conversational agents to respond to problematic content following social norms.
Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K RoTs, and 497K dialogue safety labels.
With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing dialogue systems fail to respond properly to potentially unsafe
user utterances by either ignoring or passively agreeing with them. To address
this issue, we introduce ProsocialDialog, the first large-scale multi-turn
dialogue dataset to teach conversational agents to respond to problematic
content following social norms. Covering diverse unethical, problematic,
biased, and toxic situations, ProsocialDialog contains responses that encourage
prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb,
RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists
of 58K dialogues, with 331K utterances, 160K RoTs, and 497K dialogue safety
labels accompanied by free-form rationales.
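The statistics above suggest a nested record structure: dialogues containing utterances, each annotated with a safety label, a free-form rationale, and grounding RoTs. As a rough illustration only (the field names below are hypothetical assumptions, not the dataset's actual schema), one record might be modeled as:

```python
# Hypothetical sketch of a ProsocialDialog-style record; field names are
# illustrative assumptions, not the dataset's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnnotatedTurn:
    speaker: str                  # "user" or "agent"
    text: str
    safety_label: str             # e.g. "casual" vs. a caution-level label
    rots: List[str] = field(default_factory=list)  # rules-of-thumb grounding the reply
    rationale: str = ""           # free-form explanation for the safety label

@dataclass
class Dialogue:
    dialogue_id: str
    turns: List[AnnotatedTurn]

dlg = Dialogue(
    dialogue_id="example-001",
    turns=[
        AnnotatedTurn("user", "I laughed at my friend when he failed his exam.",
                      "needs_caution",
                      rationale="Mocking a friend's failure is hurtful."),
        AnnotatedTurn("agent", "That may have hurt his feelings. Maybe check in with him?",
                      "casual",
                      rots=["It's wrong to laugh at someone's misfortune."]),
    ],
)
print(len(dlg.turns))  # 2
```

The point of the sketch is the annotation granularity: safety labels and rationales attach per utterance, while RoTs ground the prosocial response itself.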
With this dataset, we introduce a dialogue safety detection module, Canary,
capable of generating RoTs given conversational context, and a
socially-informed dialogue agent, Prost. Empirical results show that Prost
generates more socially acceptable dialogues compared to other state-of-the-art
language and dialogue models in both in-domain and out-of-domain settings.
Additionally, Canary effectively guides conversational agents and off-the-shelf
language models to generate significantly more prosocial responses. Our work
highlights the promise and importance of creating and steering conversational
AI to be socially responsible.
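Read as a pipeline, the abstract describes a two-stage design: the safety module (Canary) maps the conversational context to a safety label plus RoTs, and the dialogue agent (Prost) conditions its response on that guidance. A minimal sketch of the control flow, with stub functions standing in for the fine-tuned models (the function names, labels, and heuristics are assumptions for illustration, not the paper's API):

```python
# Sketch of the Canary -> Prost control flow described in the abstract.
# The stub functions below are placeholders, not the released models.
from typing import List, Tuple

def canary_stub(context: List[str]) -> Tuple[str, List[str]]:
    """Stand-in for Canary: return (safety_label, rules_of_thumb)."""
    last = context[-1].lower()
    if any(w in last for w in ("hate", "laughed at", "cheat")):
        return "needs_caution", ["It's wrong to be hurtful to others."]
    return "casual", []

def prost_stub(context: List[str], label: str, rots: List[str]) -> str:
    """Stand-in for Prost: generate a response grounded in the RoTs."""
    if label == "needs_caution" and rots:
        return f"I hear you, but consider: {rots[0]}"
    return "That sounds nice!"

context = ["I laughed at my friend when he failed his exam."]
label, rots = canary_stub(context)
response = prost_stub(context, label, rots)
print(label)  # needs_caution
print(response)
```

The same decomposition is what lets Canary also steer off-the-shelf language models: the generated RoTs can be prepended to any model's prompt, not just Prost's.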
Related papers
- Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation [55.043492250775294]
We introduce a novel Face-to-Face spoken dialogue model.
It processes audio-visual speech from user input and generates audio-visual speech as the response.
We also introduce MultiDialog, the first large-scale multimodal spoken dialogue corpus.
arXiv Detail & Related papers (2024-06-12T04:48:36Z)
- Improving Dialog Safety using Socially Aware Contrastive Learning [8.503001932363704]
We study prosociality in both adversarial and casual dialog contexts.
We propose a dual-step fine-tuning process to address these issues.
We train a base model that integrates prosocial behavior by leveraging datasets like Moral Integrity Corpus (MIC) and ProsocialDialog.
arXiv Detail & Related papers (2024-02-01T09:24:33Z)
- SocialDial: A Benchmark for Socially-Aware Dialogue Systems [45.3266270265532]
We present the first socially-aware dialogue corpus - SocialDial, based on Chinese social culture.
SocialDial consists of two parts: 1,563 multi-turn dialogues between two human speakers with fine-grained labels, and 4,870 synthetic conversations generated by ChatGPT.
The human corpus covers five categories of social norms, which have 14 sub-categories in total.
arXiv Detail & Related papers (2023-04-24T11:55:22Z)
- Grounding in social media: An approach to building a chit-chat dialogue model [9.247397520986999]
Building open-domain dialogue systems capable of rich human-like conversational ability is one of the fundamental challenges in language generation.
Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or searching a fact-based structured knowledge source such as Wikipedia.
Our method takes a broader and simpler approach, which aims to improve the raw conversation ability of the system by mimicking the human response behavior on social media.
arXiv Detail & Related papers (2022-06-12T09:01:57Z)
- Converse -- A Tree-Based Modular Task-Oriented Dialogue System [99.78110192324843]
Converse is a flexible tree-based modular task-oriented dialogue system.
Converse supports task dependency and task switching, which are unique features compared to other open-source dialogue frameworks.
arXiv Detail & Related papers (2022-03-23T04:19:05Z)
- UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues [59.499965460525694]
We propose a unified dialogue system (UniDS) with the two aforementioned skills.
We design a unified dialogue data schema compatible with both chit-chat and task-oriented dialogues.
We train UniDS with mixed dialogue data, starting from a pretrained chit-chat dialogue model.
arXiv Detail & Related papers (2021-10-15T11:56:47Z)
- Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries [3.593955557310285]
Most dialogue systems rely on hybrid approaches for generating a set of ranked responses.
We design a neural approach which generates responses that are contextually aware of the user query.
Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs.
arXiv Detail & Related papers (2020-12-03T12:34:22Z)
- Towards Conversational Recommendation over Multi-Type Dialogs [78.52354759386296]
We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog to a recommendation dialog.
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset, DuRecDial (about 10k dialogs, 156k utterances).
In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior.
arXiv Detail & Related papers (2020-05-08T11:01:21Z)
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, encourages dialogue agents to refrain from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
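The blurb above invokes the Rational Speech Acts (RSA) framework without unpacking it. As a toy sketch of the core idea (the lexicon and values below are illustrative assumptions, not the paper's setup): a pragmatic speaker prefers utterances in proportion to how well a literal listener would recover the intended state, which penalizes persona-inconsistent or uninformative utterances.

```python
# Minimal Rational Speech Acts (RSA) sketch: a pragmatic speaker picks
# utterances in proportion to how well a literal listener would recover
# the intended world state. Toy lexicon; values are illustrative only.
import math

# Literal semantics: utterance -> set of consistent world states.
lexicon = {
    "I love hiking": {"likes_hiking"},
    "I stay indoors": {"dislikes_hiking"},
    "whatever": {"likes_hiking", "dislikes_hiking"},
}
worlds = ["likes_hiking", "dislikes_hiking"]

def literal_listener(utterance):
    """L0: uniform over worlds consistent with the utterance."""
    consistent = lexicon[utterance]
    return {w: (1.0 / len(consistent) if w in consistent else 0.0)
            for w in worlds}

def pragmatic_speaker(world, alpha=4.0):
    """S1: P(u | w) proportional to exp(alpha * log L0(w | u))."""
    scores = {}
    for u in lexicon:
        p = literal_listener(u)[world]
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    z = sum(scores.values())
    return {u: s / z for u, s in scores.items()}

probs = pragmatic_speaker("likes_hiking")
print(max(probs, key=probs.get))  # I love hiking
```

With the persona as the "world state", the imaginary listener plays the role of L0: utterances that would make the listener infer a contradictory persona receive low probability, which is how self-consciousness suppresses contradictions.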
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.