"We Demand Justice!": Towards Social Context Grounding of Political
Texts
- URL: http://arxiv.org/abs/2311.09106v2
- Date: Mon, 26 Feb 2024 09:34:42 GMT
- Title: "We Demand Justice!": Towards Social Context Grounding of Political
Texts
- Authors: Rajkumar Pujari and Chengfei Wu and Dan Goldwasser
- Abstract summary: Social media discourse frequently consists of 'seemingly similar language used by opposing sides of the political spectrum'.
This paper defines the context required to fully understand such ambiguous statements in a computational setting.
We propose two challenging datasets that require an understanding of the real-world context of the text.
- Score: 22.016345507132808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media discourse frequently consists of 'seemingly similar language
used by opposing sides of the political spectrum', often translating to starkly
contrasting perspectives. E.g., 'thoughts and prayers' could express sympathy
for mass-shooting victims, or criticize the lack of legislative action on the
issue. This paper defines the context required to fully understand such
ambiguous statements in a computational setting and ground them in real-world
entities, actions, and attitudes. We propose two challenging datasets that
require an understanding of the real-world context of the text. We benchmark
these datasets against models built upon large pre-trained models, such as
RoBERTa and GPT-3. Additionally, we develop and benchmark more structured
models built upon the existing Discourse Contextualization Framework and
Political Actor Representation models. We analyze the datasets and the
predictions to obtain further insights into the pragmatic language
understanding challenges posed by the proposed social grounding tasks.
Related papers
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Transcending the Attention Paradigm: Representation Learning from Geospatial Social Media Data [1.8311821879979955]
This study challenges the paradigm of performance benchmarking by investigating social media data as a source of distributed patterns.
To properly represent these abstract relationships, this research dissects empirical social media corpora into their elemental components, analyzing over two billion tweets across population-dense locations.
arXiv Detail & Related papers (2023-10-09T03:27:05Z)
- The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models [18.16062736448993]
We address the concept of "alignment" in large language models (LLMs) through the lens of post-structuralist socio-political theory.
We propose a framework that demarcates 1) which dimensions of model behaviour are considered important, and 2) how meanings and definitions are ascribed to those dimensions.
We aim to foster a culture of transparency and critical evaluation, aiding the community in navigating the complexities of aligning LLMs with human populations.
arXiv Detail & Related papers (2023-10-03T22:02:17Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Optimizing text representations to capture (dis)similarity between political parties [1.2891210250935146]
We look at the problem of modeling pairwise similarities between political parties.
Our research question is what level of structural information is necessary to create robust text representations.
We evaluate our models on the manifestos of German parties for the 2021 federal election.
arXiv Detail & Related papers (2022-10-21T14:24:57Z)
- PAR: Political Actor Representation Learning with Social Context and Expert Knowledge [45.215862050840116]
We propose PAR, a Political Actor Representation learning framework.
We retrieve and extract factual statements about legislators to leverage social context information.
We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations.
arXiv Detail & Related papers (2022-10-15T19:28:06Z)
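The PAR entry above describes a concrete pipeline: build a heterogeneous information network from social context, then learn legislator representations with relational graph neural networks. As a rough, self-contained illustration of that idea (not the authors' implementation; the toy graph, relation names, and dimensions here are invented), a single relational message-passing layer can be sketched as:

```python
# Minimal sketch of relational message passing over a heterogeneous graph,
# in the spirit of the PAR description (illustrative only; the graph,
# relation types, and sizes below are assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous graph: 4 legislator nodes, 2 relation types
# (e.g. "co-sponsored-with", "factual-statement-link"); edges as (src, dst).
num_nodes, dim = 4, 8
edges = {
    "co_sponsor": [(0, 1), (1, 0), (2, 3), (3, 2)],
    "fact_link":  [(0, 2), (2, 0), (1, 3), (3, 1)],
}

# Initial node features, one weight matrix per relation type, plus a
# self-loop weight, as in a single R-GCN-style layer.
h = rng.normal(size=(num_nodes, dim))
W = {r: rng.normal(scale=0.1, size=(dim, dim)) for r in edges}
W_self = rng.normal(scale=0.1, size=(dim, dim))

def rgcn_layer(h, edges, W, W_self):
    """One relational message-passing step: each node aggregates
    relation-specific transforms of its neighbours plus a self-loop."""
    out = h @ W_self
    for rel, edge_list in edges.items():
        # In-degree per node under this relation, for mean aggregation.
        deg = np.zeros(len(h))
        for _, dst in edge_list:
            deg[dst] += 1
        for src, dst in edge_list:
            out[dst] += (h[src] @ W[rel]) / deg[dst]
    return np.tanh(out)  # nonlinearity keeps outputs in [-1, 1]

reps = rgcn_layer(h, edges, W, W_self)
print(reps.shape)  # one updated representation per legislator node
```

Stacking such layers (and training the weights against downstream labels such as vote prediction) is the usual way a relational GNN turns social-context structure into actor representations.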
This list is automatically generated from the titles and abstracts of the papers in this site.