Fostering User Engagement in the Critical Reflection of Arguments
- URL: http://arxiv.org/abs/2308.09061v1
- Date: Thu, 17 Aug 2023 15:48:23 GMT
- Title: Fostering User Engagement in the Critical Reflection of Arguments
- Authors: Klaus Weber, Annalena Aicher, Wolfgang Minker, Stefan Ultes, Elisabeth André
- Abstract summary: We propose a system that engages in a deliberative dialogue with a human.
We enable the system to intervene if the user is too focused on their pre-existing opinion.
We report on a user study with 58 participants to test our model and the effect of the intervention mechanism.
- Score: 3.26297440422721
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A natural way to resolve different points of view and form opinions is
through exchanging arguments and knowledge. Facing the vast amount of available
information on the internet, people tend to focus on information consistent
with their beliefs. Especially when the issue is controversial, information is
often selected that does not challenge one's beliefs. To support a fair and
unbiased opinion-building process, we propose a chatbot system that engages in
a deliberative dialogue with a human. In contrast to persuasive systems, the
envisioned chatbot aims to provide a diverse and representative overview -
embedded in a conversation with the user. To account for a reflective and
unbiased exploration of the topic, we enable the system to intervene if the
user is too focused on their pre-existing opinion. Therefore, we propose a model
to estimate the users' reflective engagement (RUE), defined as their critical
thinking and open-mindedness. We report on a user study with 58 participants to
test our model and the effect of the intervention mechanism, discuss the
implications of the results, and present perspectives for future work. The
results show a significant effect on both user reflection and total user focus,
demonstrating the validity of our proposed approach.
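The paper does not publish its RUE model, but the intervention idea can be illustrated with a minimal, hypothetical sketch: track how a user's exploration is distributed across pro and con arguments and trigger an intervention when the focus becomes too one-sided. All names, scoring, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a focus-based intervention trigger.
# "Reflective engagement" is approximated here by how evenly a user
# explores both sides; the scoring rule and threshold are assumptions,
# not the paper's RUE model.

def focus_score(pro_visits: int, con_visits: int) -> float:
    """Return a one-sidedness score in [0, 1]; 1.0 = only one side explored."""
    total = pro_visits + con_visits
    if total == 0:
        return 0.0
    return abs(pro_visits - con_visits) / total

def should_intervene(pro_visits: int, con_visits: int,
                     threshold: float = 0.6) -> bool:
    """Intervene when exploration is too one-sided (hypothetical rule)."""
    return focus_score(pro_visits, con_visits) > threshold

# Example: a user who viewed 9 pro arguments and 1 con argument has
# focus |9 - 1| / 10 = 0.8, so the chatbot would suggest the other side.
```

In a real system the score would presumably be computed continuously over the dialogue history, with the chatbot's intervention phrased as a suggestion to explore counter-arguments rather than a hard block.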
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - ArguMentor: Augmenting User Experiences with Counter-Perspectives [4.84187718353576]
We designed ArguMentor, a human-AI collaboration system that highlights claims in opinion pieces.
It identifies counter-arguments for them using an LLM and generates a context-based summary based on current events.
Our evaluation shows that participants can generate more arguments and counter-arguments and, on average, hold more moderate views after engaging with the system.
arXiv Detail & Related papers (2024-06-04T21:43:56Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
It has been claimed that large language models (LLMs) can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv Detail & Related papers (2023-04-13T13:08:38Z) - Thread With Caution: Proactively Helping Users Assess and Deescalate
Tension in Their Online Discussions [13.455968033357065]
Incivility remains a major challenge for online discussion platforms.
Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users.
We propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in.
arXiv Detail & Related papers (2022-12-02T19:00:03Z) - Persua: A Visual Interactive System to Enhance the Persuasiveness of
Arguments in Online Discussion [52.49981085431061]
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z) - Linking the Dynamics of User Stance to the Structure of Online
Discussions [6.853826783413853]
We investigate whether users' stance concerning contentious subjects is influenced by the online discussions they are exposed to.
We set up a series of predictive exercises based on machine learning models.
We find that the most informative features relate to the stance composition of the discussion in which users prefer to engage.
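The one-line summary above can be made concrete with a toy illustration of the kind of feature it describes: summarizing each discussion a user engaged in by its stance composition. The function names, the neutral prior, and the example values below are hypothetical assumptions, not taken from the paper.

```python
# Hypothetical feature extraction for stance prediction:
# summarize each discussion a user engaged in by its stance composition
# (fraction of pro-stance comments). All values are made up.

from statistics import mean

def stance_composition(comment_stances: list[str]) -> float:
    """Fraction of 'pro' comments in one discussion."""
    if not comment_stances:
        return 0.5  # neutral prior for empty discussions (assumption)
    return sum(s == "pro" for s in comment_stances) / len(comment_stances)

def user_feature(discussions: list[list[str]]) -> float:
    """Average stance composition across the discussions a user joined."""
    return mean(stance_composition(d) for d in discussions)

# A user active in two mostly-pro discussions gets a high feature value,
# which a downstream classifier could use when predicting their stance.
example = [["pro", "pro", "con"], ["pro", "con", "pro", "pro"]]
```

A predictive model in the spirit of the paper would feed such per-user features into a standard classifier; the point of the finding is that this composition signal is more informative than many alternatives.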
arXiv Detail & Related papers (2021-01-25T02:08:54Z) - Helping users discover perspectives: Enhancing opinion mining with joint
topic models [5.2424255020469595]
This paper explores how opinion mining can be enhanced with joint topic modeling.
We evaluate four joint topic models (TAM, JST, VODUM, and LAM) in a user study assessing human understandability of the extracted perspectives.
arXiv Detail & Related papers (2020-10-23T16:13:06Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z) - What Changed Your Mind: The Roles of Dynamic Topics and Discourse in
Argumentation Process [78.4766663287415]
This paper presents a study that automatically analyzes the key factors in argument persuasiveness.
We propose a novel neural model that is able to track the changes of latent topics and discourse in argumentative conversations.
arXiv Detail & Related papers (2020-02-10T04:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.