Bridging Cultural Nuances in Dialogue Agents through Cultural Value Surveys
- URL: http://arxiv.org/abs/2401.10352v2
- Date: Fri, 2 Feb 2024 12:35:15 GMT
- Title: Bridging Cultural Nuances in Dialogue Agents through Cultural Value Surveys
- Authors: Yong Cao, Min Chen, Daniel Hershcovich
- Abstract summary: cuDialog is a first-of-its-kind benchmark for dialogue generation with a cultural lens.
We develop baseline models capable of extracting cultural attributes from dialogue exchanges.
We propose to incorporate cultural dimensions with dialogue encoding features.
- Score: 20.82269206759988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The cultural landscape of interactions with dialogue agents is a compelling
yet relatively unexplored territory. It's clear that various sociocultural
aspects -- from communication styles and beliefs to shared metaphors and
knowledge -- profoundly impact these interactions. To delve deeper into this
dynamic, we introduce cuDialog, a first-of-its-kind benchmark for dialogue
generation with a cultural lens. We also develop baseline models capable of
extracting cultural attributes from dialogue exchanges, with the goal of
enhancing the predictive accuracy and quality of dialogue agents. To
effectively co-learn cultural understanding and multi-turn dialogue
predictions, we propose to incorporate cultural dimensions with dialogue
encoding features. Our experimental findings highlight that incorporating
cultural value surveys boosts alignment with references and cultural markers,
demonstrating its considerable influence on personalization and dialogue
quality. To facilitate further exploration in this exciting domain, we make
our benchmark publicly available at https://github.com/yongcaoplus/cuDialog.
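The abstract proposes co-learning by incorporating cultural dimensions with dialogue encoding features. A minimal sketch of one common way to do this is to concatenate a dialogue encoding with survey-derived cultural-dimension scores and pass the result through a learned projection. All names, shapes, and values below are illustrative assumptions, not cuDialog's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(dialogue_enc: np.ndarray, cultural_dims: np.ndarray,
                  w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Concatenate dialogue and cultural features, then apply a
    linear projection with a tanh nonlinearity (hypothetical fusion step)."""
    fused = np.concatenate([dialogue_enc, cultural_dims])
    return np.tanh(w @ fused + b)

dialogue_enc = rng.normal(size=16)   # stand-in for a dialogue encoder output
# Six survey-style cultural-dimension scores, scaled to [0, 1] (made-up values)
cultural_dims = np.array([0.35, 0.80, 0.66, 0.30, 0.87, 0.68])
w = rng.normal(size=(8, 22)) * 0.1   # projection over the 16 + 6 fused features
b = np.zeros(8)

joint = fuse_features(dialogue_enc, cultural_dims, w, b)
print(joint.shape)  # (8,)
```

In a trained model the projection would be learned jointly with the dialogue encoder, so the cultural scores can condition multi-turn response prediction.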
Related papers
- CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts [45.77570690529597]
We introduce CROPE, a visual question answering benchmark designed to probe the knowledge of culture-specific concepts.
Our evaluation of several state-of-the-art open Vision and Language models shows large performance disparities between culture-specific and common concepts.
Experiments with contextual knowledge indicate that models struggle to effectively utilize multimodal information and bind culture-specific concepts to their depictions.
arXiv Detail & Related papers (2024-10-20T17:31:19Z)
- Extrinsic Evaluation of Cultural Competence in Large Language Models [53.626808086522985]
We focus on extrinsic evaluation of cultural competence in two text generation tasks.
We evaluate model outputs when an explicit cue of culture, specifically nationality, is perturbed in the prompts.
We find weak correlations between text similarity of outputs for different countries and the cultural values of these countries.
arXiv Detail & Related papers (2024-06-17T14:03:27Z)
- CulturePark: Boosting Cross-cultural Understanding in Large Language Models [63.452948673344395]
This paper introduces CulturePark, an LLM-powered multi-agent communication framework for cultural data collection.
It generates high-quality cross-cultural dialogues encapsulating human beliefs, norms, and customs.
We evaluate these models across three downstream tasks: content moderation, cultural alignment, and cultural education.
arXiv Detail & Related papers (2024-05-24T01:49:02Z)
- Cultural Commonsense Knowledge for Intercultural Dialogues [31.079990829088857]
This paper presents MANGO, a methodology for distilling high-accuracy, high-recall assertions of cultural knowledge.
Running the MANGO method with GPT-3.5 as the underlying LLM yields 167K high-accuracy assertions for 30K concepts and 11K cultures.
We find that adding knowledge from MANGO improves the overall quality, specificity, and cultural sensitivity of dialogue responses.
arXiv Detail & Related papers (2024-02-16T13:46:38Z)
- Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards deeper understanding and bridging the gaps of cultural disparities in AI.
arXiv Detail & Related papers (2024-02-14T18:16:54Z)
- Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study [9.919972416590124]
ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue.
We investigate the underlying cultural background of ChatGPT by analyzing its responses to questions designed to quantify human cultural differences.
arXiv Detail & Related papers (2023-03-30T15:43:39Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually by reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.