Dimensions of Interpersonal Dynamics in Text: Group Membership and
Fine-grained Interpersonal Emotion
- URL: http://arxiv.org/abs/2209.06687v1
- Date: Wed, 14 Sep 2022 14:46:55 GMT
- Title: Dimensions of Interpersonal Dynamics in Text: Group Membership and
Fine-grained Interpersonal Emotion
- Authors: Venkata S Govindarajan, Katherine Atwell, Barea Sinno, Malihe
Alikhani, David I. Beaver, Junyi Jessy Li
- Abstract summary: This work aims to re-align the study of bias in NLP away from specific instances of bias and toward a framing that encapsulates the relationship between speaker, text, target, and social dynamics.
- Score: 27.125210491924243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability of language to perpetuate inequality is most evident when
individuals refer to, or talk about, other individuals in their utterances.
While current studies of bias in NLP rely mainly on identifying hate speech or
bias towards a specific group, we believe we can reach a more subtle and
nuanced understanding of the interaction between bias and language use by
modeling the speaker, the text, and the target in the text. In this paper, we
introduce a dataset of 3033 English tweets by US Congress members annotated for
interpersonal emotion, and 'found supervision' for interpersonal group
membership labels. We find that negative emotions such as anger and disgust are
used predominantly in out-group situations, and directed predominantly at
leaders of opposite parties. While humans can perform better than chance at
identifying interpersonal group membership given an utterance, neural models
perform much better; furthermore, a shared encoding between interpersonal group
membership and interpersonal perceived emotion enabled some performance gains
in the latter. This work aims to re-align the study of bias in NLP away from
specific instances of bias and toward a framing that encapsulates the relationship
between speaker, text, target, and social dynamics. Data and code for this paper are
available at https://github.com/venkatasg/Interpersonal-Dynamics
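
To make the shared-encoding idea above concrete, here is a minimal, hypothetical sketch: one transformer encoder feeding two heads, one for interpersonal group membership and one for multi-label interpersonal emotion. The encoder checkpoint, number of emotion labels, and unweighted joint loss are illustrative assumptions, not the authors' released configuration (see the repository above for that).

```python
# Hypothetical sketch of a shared encoder with two task heads. The checkpoint,
# label counts, and loss weighting are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SharedEncoderTagger(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_emotions=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.group_head = nn.Linear(hidden, 2)            # in-group vs. out-group
        self.emotion_head = nn.Linear(hidden, n_emotions) # multi-label emotions

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] token representation
        return self.group_head(cls), self.emotion_head(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SharedEncoderTagger()
batch = tokenizer(["Example tweet mentioning another member of Congress."],
                  return_tensors="pt", padding=True, truncation=True)
group_logits, emotion_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint objective: cross-entropy for group membership plus binary cross-entropy
# for the multi-label emotion head; gradients from both tasks flow into the
# shared encoder.
group_labels = torch.tensor([1])        # e.g., out-group
emotion_labels = torch.zeros(1, 8)      # e.g., no emotion present
loss = (nn.CrossEntropyLoss()(group_logits, group_labels)
        + nn.BCEWithLogitsLoss()(emotion_logits, emotion_labels))
```

Under a setup like this, gains on the emotion task can come from the group-membership signal shaping the shared representation, which is consistent with the improvement reported in the abstract.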
Related papers
- Persona Setting Pitfall: Persistent Outgroup Biases in Large Language Models Arising from Social Identity Adoption [10.35915254696156]
We show that outgroup bias manifests as strongly as ingroup favoritism.
Our findings highlight the potential to develop more equitable and balanced language models.
arXiv Detail & Related papers (2024-09-05T18:08:47Z)
- Large Language Models Can Infer Personality from Free-Form User Interactions [0.0]
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that the direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across different socio-demographic subgroups.
arXiv Detail & Related papers (2024-05-19T20:33:36Z)
- Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation [30.820334868031537]
Personality Recognition in Conversation (PRC) aims to identify the personality traits of speakers through textual dialogue content.
We propose Affective Natural Language Inference (Affective-NLI) for accurate and interpretable PRC.
arXiv Detail & Related papers (2024-04-03T09:14:24Z)
- Effect of Attention and Self-Supervised Speech Embeddings on Non-Semantic Speech Tasks [3.570593982494095]
We look at speech emotion understanding as a perception task, which is a more realistic setting.
We leverage the rich ComParE dataset of multilingual speakers, with a multi-label regression target of 'emotion share', i.e., the perceived share of each emotion.
Our results show that HuBERT-Large with a self-attention-based light-weight sequence model provides 4.6% improvement over the reported baseline.
arXiv Detail & Related papers (2023-08-28T07:11:27Z)
- Comparing Biases and the Impact of Multilingual Training across Multiple Languages [70.84047257764405]
We present a bias analysis across Italian, Chinese, English, Hebrew, and Spanish on the downstream sentiment analysis task.
We adapt existing sentiment bias templates in English to Italian, Chinese, Hebrew, and Spanish for four attributes: race, religion, nationality, and gender.
Our results reveal similarities in bias expression such as favoritism of groups that are dominant in each language's culture.
arXiv Detail & Related papers (2023-05-18T18:15:07Z)
- The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [50.24983453990065]
We use a deep-learning approach to process a sample of 220 YouTube videos of political leaders from 15 different countries.
We observe statistically significant differences in the average score of negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Mental Disorders on Online Social Media Through the Lens of Language and Behaviour: Analysis and Visualisation [7.133136338850781]
We study the factors that characterise and differentiate social media users affected by mental disorders.
Our findings reveal significant differences in the use of function words, such as adverbs and verb tense, and in topic-specific vocabulary.
We find evidence suggesting that language use on micro-blogging platforms is less distinguishable for users who have a mental disorder.
arXiv Detail & Related papers (2022-02-07T15:29:01Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present Vyaktitv, a novel peer-to-peer Hindi conversation dataset.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features for all the participants, such as income and cultural orientation, amongst several others.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
- Annotation of Emotion Carriers in Personal Narratives [69.07034604580214]
We are interested in the problem of understanding personal narratives (PN): spoken or written recollections of facts, events, and thoughts.
In PN, emotion carriers are the speech or text segments that best explain the emotional state of the user.
This work proposes and evaluates an annotation model for identifying emotion carriers in spoken personal narratives.
arXiv Detail & Related papers (2020-02-27T15:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.