Detecting Extreme Ideologies in Shifting Landscapes: an Automatic &
Context-Agnostic Approach
- URL: http://arxiv.org/abs/2208.04097v3
- Date: Wed, 29 Mar 2023 03:19:39 GMT
- Title: Detecting Extreme Ideologies in Shifting Landscapes: an Automatic &
Context-Agnostic Approach
- Authors: Rohit Ram, Emma Thomas, David Kernot and Marian-Andrei Rizoiu
- Abstract summary: This work presents an end-to-end ideology detection pipeline applicable to large-scale datasets.
We construct context-agnostic and automatic ideological signals from widely available media slant data.
We employ the pipeline for left-right ideology, and (the more concerning) detection of extreme ideologies.
- Score: 7.197469507060225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In democratic countries, the ideology landscape is foundational to individual
and collective political action; conversely, fringe ideology drives
Ideologically Motivated Violent Extremism (IMVE). Therefore, quantifying
ideology is a crucial first step to an ocean of downstream problems, such as:
understanding and countering IMVE, detecting and intervening in disinformation
campaigns, and broader empirical opinion dynamics modeling. However, online
ideology detection faces two significant hindrances. Firstly, the ground truth
that forms the basis for ideology detection is often prohibitively
labor-intensive for practitioners to collect, requires access to domain experts
and is specific to the context of its collection (i.e., time, location, and
platform). Secondly, to circumvent this expense, researchers generate ground
truth via other ideological signals (like hashtags used or politicians
followed). However, the bias this introduces has not been quantified and often
still requires expert intervention. This work presents an end-to-end ideology
detection pipeline applicable to large-scale datasets. We construct
context-agnostic and automatic ideological signals from widely available media
slant data; show the derived pipeline is performant, compared to pipelines of
common ideology signals and state-of-the-art baselines; employ the pipeline for
left-right ideology, and (the more concerning) detection of extreme ideologies;
generate psychosocial profiles of the inferred ideological groups; and,
generate insights into their morality and preoccupations.
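As a rough illustration of the media-slant idea, a user's ideological position could be scored by averaging the slant of the news domains they share. Everything below (the domain names, the slant scores, the averaging scheme) is a hypothetical sketch, not the paper's actual data or pipeline:

```python
# Hypothetical sketch: score a user's ideology from the slant of media
# domains they share. Slant values and domains are illustrative only.
from urllib.parse import urlparse

# Hypothetical media-slant table: -1.0 = far left, +1.0 = far right.
MEDIA_SLANT = {
    "leftnews.example": -0.8,
    "centrist.example": 0.0,
    "rightnews.example": 0.7,
}

def user_ideology(shared_urls):
    """Average the slant of known domains among a user's shared URLs.

    Returns None when no shared URL maps to a known domain.
    """
    slants = []
    for url in shared_urls:
        domain = urlparse(url).netloc
        if domain in MEDIA_SLANT:
            slants.append(MEDIA_SLANT[domain])
    return sum(slants) / len(slants) if slants else None

print(user_ideology([
    "https://leftnews.example/a",
    "https://centrist.example/b",
]))  # -0.4
```

A real pipeline would aggregate far more signal per user and calibrate against labeled data, but the shape of the computation (URL, domain, slant lookup, aggregation) is the same.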
Related papers
- Dissecting Subjectivity and the "Ground Truth" Illusion in Data Annotation [23.545262620377887]
In machine learning, "ground truth" refers to the assumed correct labels used to train and evaluate models. This systematic literature review analyzes research published between 2020 and 2025 across seven premier venues.
arXiv Detail & Related papers (2026-02-11T19:45:17Z) - When Large Language Models Do Not Work: Online Incivility Prediction through Graph Neural Networks [3.353377687171614]
We propose a Graph Neural Network framework for detecting three types of uncivil behavior within the English Wikipedia community. Our model represents each user comment as a node, with textual similarity between comments defining the edges. We also introduce a dynamically adjusted attention mechanism that adaptively balances nodal and topological features during information aggregation.
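The comments-as-nodes construction could be sketched as below. Bag-of-words cosine similarity and the 0.5 threshold are illustrative assumptions, not the paper's actual features or hyperparameters:

```python
# Sketch of a comment graph: each comment is a node; an edge links two
# comments whose bag-of-words cosine similarity exceeds a threshold.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity over lowercase word counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_edges(comments, threshold=0.5):
    """Return (i, j) index pairs of comment nodes to connect."""
    return [
        (i, j)
        for i in range(len(comments))
        for j in range(i + 1, len(comments))
        if cosine(comments[i], comments[j]) >= threshold
    ]

comments = [
    "please stop edit warring",
    "stop the edit warring now",
    "unrelated note",
]
print(build_edges(comments))  # [(0, 1)]
```

In practice the similarity would come from learned embeddings rather than raw word counts, but the resulting edge list feeds a GNN the same way.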
arXiv Detail & Related papers (2025-12-08T16:22:40Z) - Demystifying deep search: a holistic evaluation with hint-free multi-hop questions and factorised metrics [89.1999907891494]
We present WebDetective, a benchmark of hint-free multi-hop questions paired with a controlled Wikipedia sandbox. Our evaluation of 25 state-of-the-art models reveals systematic weaknesses across all architectures. We develop an agentic workflow, EvidenceLoop, that explicitly targets the challenges our benchmark identifies.
arXiv Detail & Related papers (2025-10-01T07:59:03Z) - RealUnify: Do Unified Models Truly Benefit from Unification? A Comprehensive Benchmark [71.3555284685426]
We introduce RealUnify, a benchmark designed to evaluate bidirectional capability synergy. RealUnify comprises 1,000 meticulously human-annotated instances spanning 10 categories and 32 subtasks. We find that current unified models still struggle to achieve effective synergy, indicating that architectural unification alone is insufficient.
arXiv Detail & Related papers (2025-09-29T15:07:28Z) - Thinking Before You Speak: A Proactive Test-time Scaling Approach [54.8205006555199]
We implement our idea as a reasoning framework named "Thinking Before You Speak" (TBYS). We design a pipeline for automatically collecting and filtering in-context examples for the generation of insights. Experiments on challenging mathematical datasets verify the effectiveness of TBYS.
arXiv Detail & Related papers (2025-08-26T03:43:32Z) - Explain Before You Answer: A Survey on Compositional Visual Reasoning [74.27548620675748]
Compositional visual reasoning has emerged as a key research frontier in multimodal AI. This survey systematically reviews 260+ papers from top venues (CVPR, ICCV, NeurIPS, ICML, ACL, etc.). We then catalog 60+ benchmarks and corresponding metrics that probe compositional visual reasoning along dimensions such as grounding accuracy, chain-of-thought faithfulness, and high-resolution perception.
arXiv Detail & Related papers (2025-08-24T11:01:51Z) - Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions. Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals. DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art baselines.
arXiv Detail & Related papers (2025-07-03T14:10:02Z) - Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models [72.89977583150748]
We propose a novel methodology to assess how Large Language Models align with broader geopolitical value systems. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin.
arXiv Detail & Related papers (2025-06-15T07:52:07Z) - Stereotype Detection in Natural Language Processing [47.91542090964054]
Stereotypes influence social perceptions and can escalate into discrimination and violence. This work presents a survey of existing research, analyzing definitions from psychology, sociology, and philosophy. Findings emphasize stereotype detection as a potential early-monitoring tool to prevent bias escalation and the rise of hate speech.
arXiv Detail & Related papers (2025-05-23T09:03:56Z) - Bridging Cognition and Emotion: Empathy-Driven Multimodal Misinformation Detection [56.644686934050576]
Social media has become a major conduit for information dissemination, yet it also facilitates the rapid spread of misinformation.
Traditional misinformation detection methods primarily focus on surface-level features, overlooking the crucial roles of human empathy in the propagation process.
We propose the Dual-Aspect Empathy Framework (DAE), which integrates cognitive and emotional empathy to analyze misinformation from both the creator and reader perspectives.
arXiv Detail & Related papers (2025-04-24T07:48:26Z) - Probing the Subtle Ideological Manipulation of Large Language Models [0.3745329282477067]
Large Language Models (LLMs) have transformed natural language processing, but concerns have emerged about their susceptibility to ideological manipulation.
We introduce a novel multi-task dataset designed to reflect diverse ideological positions through tasks such as ideological QA, statement ranking, manifesto cloze completion, and Congress bill comprehension.
Our findings indicate that fine-tuning significantly enhances nuanced ideological alignment, while explicit prompts provide only minor refinements.
arXiv Detail & Related papers (2025-04-19T13:11:50Z) - Talking Point based Ideological Discourse Analysis in News Events [62.18747509565779]
We propose a framework motivated by the theory of ideological discourse analysis to analyze news articles related to real-world events.
Our framework represents the news articles using a relational structure - talking points, which captures the interaction between entities, their roles, and media frames along with a topic of discussion.
We evaluate our framework's ability to generate these perspectives through automated tasks - ideology and partisan classification tasks, supplemented by human validation.
arXiv Detail & Related papers (2025-04-10T02:52:34Z) - A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z) - Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z) - MOTIV: Visual Exploration of Moral Framing in Social Media [9.314312944316962]
We present a visual computing framework for analyzing moral rhetoric on social media around controversial topics.
We propose a methodology for deconstructing and visualizing the "when", "where", and "who" behind each of these moral dimensions as expressed in microblog data.
Our results indicate that this visual approach supports rapid, collaborative hypothesis testing, and can help give insights into the underlying moral values behind controversial political issues.
arXiv Detail & Related papers (2024-03-15T16:11:58Z) - SADAS: A Dialogue Assistant System Towards Remediating Norm Violations
in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
arXiv Detail & Related papers (2024-01-29T08:54:21Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Down the Rabbit Hole: Detecting Online Extremism, Radicalisation, and
Politicised Hate Speech [1.0323063834827415]
This study provides the first cross-examination of textual, network, and visual approaches to detecting extremist content.
We identify consensus-driven ERH definitions and propose solutions, particularly given the lack of research in Oceania/Australasia.
We conclude with vital recommendations for ERH mining researchers and propose a roadmap with guidelines for researchers, industries, and governments to enable a safer cyberspace.
arXiv Detail & Related papers (2023-01-27T07:59:31Z) - Examining Political Rhetoric with Epistemic Stance Detection [13.829628375546568]
We develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling.
We demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books.
arXiv Detail & Related papers (2022-12-29T23:47:14Z) - Self-supervised Hypergraph Representation Learning for Sociological
Analysis [52.514283292498405]
We propose a fundamental methodology to support the further fusion of data mining techniques and sociological behavioral criteria.
First, we propose an effective hypergraph awareness and a fast line graph construction framework.
Second, we propose a novel hypergraph-based neural network to learn social influence flowing from users to users.
arXiv Detail & Related papers (2022-12-22T01:20:29Z) - Unsupervised Detection of Contextualized Embedding Bias with Application
to Ideology [20.81930455526026]
We propose a fully unsupervised method to detect bias in contextualized embeddings.
We show how it can be found by applying our method to online discussion forums, and present techniques to probe it.
Our experiments suggest that the ideological subspace encodes abstract evaluative semantics and reflects changes in the political left-right spectrum during the presidency of Donald Trump.
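One simple way to estimate such an ideological subspace, sketched here under assumed synthetic data rather than the paper's actual procedure, is to take the top principal component of differences between embeddings of matched terms drawn from two communities:

```python
# Sketch: recover a one-dimensional "ideological" direction as the top
# principal component of pairwise embedding differences. The planted
# axis and random data below are purely illustrative.
import numpy as np

def bias_direction(left_vecs, right_vecs):
    """Top principal component of matched embedding differences."""
    diffs = np.asarray(left_vecs) - np.asarray(right_vecs)
    diffs -= diffs.mean(axis=0)
    # First right-singular vector spans the main axis of variation.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def project(vec, direction):
    """Scalar position of an embedding along the recovered direction."""
    return float(np.dot(vec, direction))

rng = np.random.default_rng(0)
base = rng.normal(size=(5, 8))        # shared semantics of 5 terms
axis = np.zeros(8)
axis[0] = 1.0                         # planted ideological axis
shifts = rng.normal(size=(5, 1))      # per-term ideological offsets
left, right = base + shifts * axis, base - shifts * axis

d = bias_direction(left, right)
print(abs(d[0]))  # ~1.0: the planted axis is recovered
```

Projecting held-out embeddings onto `d` with `project` then orders them along the recovered left-right axis; the paper's unsupervised method is more involved, but this conveys the geometric intuition.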
arXiv Detail & Related papers (2022-12-14T23:31:14Z) - PAR: Political Actor Representation Learning with Social Context and
Expert Knowledge [45.215862050840116]
We propose PAR, a Political Actor Representation learning framework.
We retrieve and extract factual statements about legislators to leverage social context information.
We then construct a heterogeneous information network to incorporate social context and use relational graph neural networks to learn legislator representations.
arXiv Detail & Related papers (2022-10-15T19:28:06Z) - O-Dang! The Ontology of Dangerous Speech Messages [53.15616413153125]
We present O-Dang!: The Ontology of Dangerous Speech Messages, a systematic and interoperable Knowledge Graph (KG).
O-Dang! is designed to gather and organize Italian datasets into a structured KG, according to the principles shared within the Linguistic Linked Open Data community.
It provides a model for encoding both gold standard and single-annotator labels in the KG.
arXiv Detail & Related papers (2022-07-13T11:50:05Z) - Encoding Heterogeneous Social and Political Context for Entity Stance
Prediction [7.477393857078695]
We propose the novel task of entity stance prediction.
We retrieve facts from Wikipedia about social entities regarding contemporary U.S. politics.
We then annotate social entities' stances towards political ideologies with the help of domain experts.
arXiv Detail & Related papers (2021-08-09T08:59:43Z) - Political Ideology and Polarization of Policy Positions: A
Multi-dimensional Approach [19.435030285532854]
We study the ideology of the policy under discussion teasing apart the nuanced co-existence of stance and ideology.
Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct.
We showcase that this framework enables quantitative analysis of polarization, a temporal, multifaceted measure of ideological distance.
arXiv Detail & Related papers (2021-06-28T04:03:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.