Keeping Up Appearances: Computational Modeling of Face Acts in
Persuasion Oriented Discussions
- URL: http://arxiv.org/abs/2009.10815v2
- Date: Thu, 24 Sep 2020 02:10:44 GMT
- Title: Keeping Up Appearances: Computational Modeling of Face Acts in
Persuasion Oriented Discussions
- Authors: Ritam Dutt, Rishabh Joshi, Carolyn Penstein Rose
- Abstract summary: We propose a framework for modeling face acts in persuasion conversations.
The framework reveals insights about differences in face act utilization between asymmetric roles in persuasion conversations.
Using computational models, we are able to successfully identify face acts as well as predict a key conversational outcome.
- Score: 2.9628298226732612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of face refers to the public self-image of an individual that
emerges both from the individual's own actions as well as from the interaction
with others. Modeling face and understanding its state changes throughout a
conversation is critical to the study of maintenance of basic human needs in
and through interaction. Grounded in the politeness theory of Brown and
Levinson (1978), we propose a generalized framework for modeling face acts in
persuasion conversations, resulting in a reliable coding manual, an annotated
corpus, and computational models. The framework reveals insights about
differences in face act utilization between asymmetric roles in persuasion
conversations. Using computational models, we are able to successfully identify
face acts as well as predict a key conversational outcome (e.g. donation
success). Finally, we model a latent representation of the conversational state
to analyze the impact of predicted face acts on the probability of a positive
conversational outcome and observe several correlations that corroborate
previous findings.
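As a rough illustration of the face-act identification task described above, the sketch below labels utterances with face acts in a Brown and Levinson-style scheme. It is a toy rule-based stand-in, not the paper's approach: the paper trains statistical models on an annotated corpus, and the keyword rules and label names here (`hpos+`, `hneg-`, etc. for raising or threatening the hearer's positive or negative face) are hypothetical.

```python
# Toy rule-based sketch of face-act labeling (illustration only; the paper
# trains computational models on an annotated corpus, not rules like these).
# Hypothetical labels: which face (hearer positive/negative) an utterance
# raises (+) or threatens (-).
def label_face_act(utterance: str) -> str:
    text = utterance.lower()
    if any(w in text for w in ("wonderful", "great", "kind")):
        return "hpos+"   # compliments raise the hearer's positive face
    if any(w in text for w in ("terrible", "excuse", "wrong")):
        return "hpos-"   # criticism threatens the hearer's positive face
    if any(w in text for w in ("feel free", "whenever", "up to you")):
        return "hneg+"   # granting autonomy raises the hearer's negative face
    if any(w in text for w in ("must", "right now", "have to")):
        return "hneg-"   # demands impede autonomy, threatening negative face
    return "other"

print(label_face_act("You are doing a wonderful thing."))  # hpos+
print(label_face_act("You must donate right now."))        # hneg-
```

In the paper's persuasion setting, sequences of such labels per utterance are what feed the downstream outcome prediction (e.g. donation success).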
Related papers
- Persuasion in Online Conversations Is Associated with Alignment in Expressed Human Values [3.52359746858894]
We investigate how the expression and alignment of human values in back-and-forth online discussions relate to persuasion. Using data from Reddit's ChangeMyView subreddit, we analyze one-on-one exchanges and characterize participants' value expression.
arXiv Detail & Related papers (2026-01-19T03:08:25Z) - On the Fallacy of Global Token Perplexity in Spoken Language Model Evaluation [88.77441715819366]
Generative spoken language models pretrained on large-scale raw audio can continue a speech prompt with appropriate content. We propose a variety of likelihood- and generative-based evaluation methods that serve in place of naive global token perplexity.
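For reference, the "global token perplexity" that this paper argues against is simply the exponentiated mean negative log-probability over all tokens, as in this minimal sketch (illustrative only; the paper's proposed alternatives are likelihood- and generation-based evaluations, not this formula):

```python
# Global token perplexity: exp of the mean negative log-probability
# assigned to each token in a sequence.
import math

def token_perplexity(token_logprobs):
    """Perplexity over a sequence, given per-token natural log-probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# If every token gets probability 0.25, perplexity is 1/0.25 = 4.
ppl = token_perplexity([math.log(0.25)] * 4)
print(ppl)  # 4.0
```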
arXiv Detail & Related papers (2026-01-09T22:01:56Z) - Learning Time-Varying Turn-Taking Behavior in Group Conversations [45.44339759125884]
We propose a flexible probabilistic model for predicting turn-taking patterns in group conversations based solely on individual characteristics and past speaking behavior. Our results demonstrate that previous behavioral models may not always be realistic, motivating our data-driven yet theoretically grounded approach.
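A toy sketch of the general idea behind predicting turn-taking from past speaking behavior (not the paper's time-varying model) is to estimate each participant's probability of taking the next turn from their turn counts, with smoothing so silent participants retain nonzero probability:

```python
# Toy turn-taking predictor: P(next speaker) estimated from each
# participant's past turn counts, with add-one smoothing.
from collections import Counter

def next_speaker_probs(turn_history, participants):
    """Return a dict mapping each participant to P(they take the next turn)."""
    counts = Counter(turn_history)
    total = len(turn_history) + len(participants)  # add-one smoothing
    return {p: (counts[p] + 1) / total for p in participants}

probs = next_speaker_probs(["A", "B", "A", "A"], ["A", "B", "C"])
print(probs)  # A: 4/7, B: 2/7, C: 1/7
```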
arXiv Detail & Related papers (2025-10-21T13:58:43Z) - Conceptual Contrastive Edits in Textual and Vision-Language Retrieval [1.8591405259852054]
We employ post-hoc conceptual contrastive edits to expose noteworthy patterns and biases imprinted in representations of retrieval models.
We apply these edits to explain both linguistic and visiolinguistic pre-trained models in a black-box manner.
We also introduce a novel metric to assess the per-word impact of contrastive interventions on model outcomes.
arXiv Detail & Related papers (2025-03-01T10:14:28Z) - Intention and Face in Dialog [4.984601297028258]
We present an analysis of three computational systems trained for classifying both intention and politeness.
In politeness theory, agents attend to the desire to have their wants appreciated (positive face), and a complementary desire to act unimpeded and maintain freedom (negative face).
Similar to speech acts, utterances can perform so-called face acts which can either raise or threaten the positive or negative face of the speaker or hearer.
arXiv Detail & Related papers (2024-06-06T14:26:35Z) - When to generate hedges in peer-tutoring interactions [1.0466434989449724]
The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours.
Results show that embedding layers that capture the semantic information of the previous turns significantly improve the model's performance.
We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction.
arXiv Detail & Related papers (2023-07-28T14:29:19Z) - MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation [62.44907105496227]
MindDial is a novel conversational framework that can generate situated free-form responses with theory-of-mind modeling.
We introduce an explicit mind module that can track the speaker's belief and the speaker's prediction of the listener's belief.
Our framework is applied to both prompting and fine-tuning-based models, and is evaluated across scenarios involving both common ground alignment and negotiation.
arXiv Detail & Related papers (2023-06-27T07:24:32Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A
Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is support vector machines, the typical interaction environment is a meeting of 3-4 persons, and the dominant sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - A Probabilistic Model Of Interaction Dynamics for Dyadic Face-to-Face
Settings [1.9544213396776275]
We develop a probabilistic model to capture the interaction dynamics between pairs of participants in a face-to-face setting.
This interaction encoding is then used to influence the generation when predicting one agent's future dynamics.
We show that our model successfully delineates between the modes, based on their interacting dynamics.
arXiv Detail & Related papers (2022-07-10T23:31:27Z) - Facetron: Multi-speaker Face-to-Speech Model based on Cross-modal Latent
Representations [22.14238843571225]
We propose an effective method to synthesize speaker-specific speech waveforms by conditioning on videos of an individual's face.
The linguistic features are extracted from lip movements using a lip-reading model, and the speaker characteristic features are predicted from face images.
We show the superiority of our proposed model over conventional methods in terms of both objective and subjective evaluation results.
arXiv Detail & Related papers (2021-07-26T07:36:02Z) - Who Responded to Whom: The Joint Effects of Latent Topics and Discourse
in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the arts.
arXiv Detail & Related papers (2021-04-17T17:46:00Z) - I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z) - I Beg to Differ: A study of constructive disagreement in online
conversations [15.581515781839656]
We construct a corpus of 7,425 Wikipedia Talk page conversations that contain content disputes.
We define the task of predicting whether disagreements will be escalated to mediation by a moderator.
We develop a variety of neural models and show that taking into account the structure of the conversation improves predictive accuracy.
arXiv Detail & Related papers (2021-01-26T16:36:43Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.