Learning Ideological Embeddings from Information Cascades
- URL: http://arxiv.org/abs/2109.13589v1
- Date: Tue, 28 Sep 2021 09:58:02 GMT
- Title: Learning Ideological Embeddings from Information Cascades
- Authors: Corrado Monti, Giuseppe Manco, Cigdem Aslay, Francesco Bonchi
- Abstract summary: We propose a stochastic model to learn the ideological leaning of each user in a multidimensional ideological space; experiments confirm it recovers the political stance of social media users.
- Score: 11.898833102736255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling information cascades in a social network through the lens of the
ideological leaning of its users can help in understanding phenomena such as
misinformation propagation and confirmation bias, and in devising techniques
to mitigate their toxic effects.
In this paper we propose a stochastic model to learn the ideological leaning
of each user in a multidimensional ideological space, by analyzing the way
politically salient content propagates. In particular, our model assumes that
information propagates from one user to another if both users are interested in
the topic and ideologically aligned with each other. To infer the parameters of
our model, we devise a gradient-based optimization procedure maximizing the
likelihood of an observed set of information cascades. Our experiments on
real-world political discussions on Twitter and Reddit confirm that our model
is able to learn the political stance of social media users in a
multidimensional ideological space.
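To make the inference procedure concrete, here is a minimal sketch (in PyTorch) of the kind of likelihood maximization the abstract describes: a propagation probability that factorizes into a topic-interest term and an ideological-alignment term, fit by gradient-based optimization on observed repost edges. The specific functional forms (a per-user interest logit, alignment as a sigmoid of negative squared embedding distance) and all names are illustrative assumptions, not the paper's exact parameterization.

```python
# Hedged sketch, not the authors' code: learn 2-D ideological embeddings by
# maximizing the likelihood of observed propagation edges.
import torch

n_users, dim = 100, 2
# Hypothetical observations: (src, dst) user pairs, with label 1 if dst
# propagated src's content and 0 for a sampled non-propagation.
edges = torch.randint(0, n_users, (500, 2))
labels = torch.randint(0, 2, (500,)).float()

x = torch.randn(n_users, dim, requires_grad=True)    # ideological embeddings
interest = torch.zeros(n_users, requires_grad=True)  # per-user interest logits
bias = torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam([x, interest, bias], lr=0.05)
for _ in range(200):
    src, dst = edges[:, 0], edges[:, 1]
    # Assumed form: closer ideologies -> higher alignment logit.
    align = bias - ((x[src] - x[dst]) ** 2).sum(dim=1)
    p = torch.sigmoid(interest[dst]) * torch.sigmoid(align)  # interest * alignment
    nll = -(labels * (p + 1e-9).log() + (1 - labels) * (1 - p + 1e-9).log()).mean()
    opt.zero_grad(); nll.backward(); opt.step()
```

After training, the rows of `x` can be read as each user's position in the (here, two-dimensional) ideological space.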
Related papers
- Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification [5.550237524713089]
The study highlights the sensitivity of modern self-learning systems to unconstrained data ingestion.
Applying both models to left-leaning (CNN) and right-leaning (FOX) news articles, we assess their effectiveness on data beyond the original training and test sets.
We contrast the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model.
arXiv Detail & Related papers (2024-11-07T00:09:18Z)
- Political Leaning Inference through Plurinational Scenarios [4.899818550820576]
This work focuses on three diverse regions in Spain (Basque Country, Catalonia and Galicia) to explore various methods for multi-party categorization.
We use a two-step method involving unsupervised user representations obtained from the retweets and their subsequent use for political leaning detection; a toy sketch of such a two-step pipeline appears after this list.
arXiv Detail & Related papers (2024-06-12T07:42:12Z)
- Generative Active Learning for Image Synthesis Personalization [57.01364199734464]
This paper explores the application of active learning, traditionally studied in the context of discriminative models, to generative models.
The primary challenge in conducting active learning on generative models lies in the open-ended nature of querying.
We introduce the concept of anchor directions to transform the querying process into a semi-open problem.
arXiv Detail & Related papers (2024-03-22T06:45:45Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- InfoPattern: Unveiling Information Propagation Patterns in Social Media [59.67008841974645]
InfoPattern centers on the interplay between language and human ideology.
The demo is capable of: (1) red teaming to simulate adversary responses from opposite ideology communities; (2) stance detection to identify the underlying political sentiments in each message; (3) information propagation graph discovery to reveal the evolution of claims across various communities over time.
arXiv Detail & Related papers (2023-11-27T09:12:35Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology [20.81930455526026]
We propose a fully unsupervised method to detect bias in contextualized embeddings.
We show how such bias can be found by applying our method to online discussion forums, and present techniques to probe it.
Our experiments suggest that the ideological subspace encodes abstract evaluative semantics and reflects changes in the political left-right spectrum during the presidency of Donald Trump.
arXiv Detail & Related papers (2022-12-14T23:31:14Z)
- Fine-Grained Prediction of Political Leaning on Social Media with Unsupervised Deep Learning [0.9137554315375922]
We propose a novel unsupervised technique for learning fine-grained political leaning from social media posts.
Our results pave the way for the development of new and better unsupervised approaches for the detection of fine-grained political leaning.
arXiv Detail & Related papers (2022-02-23T09:18:13Z)
- Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders [26.640917190618612]
We develop an unsupervised algorithm for belief representation learning in polarized networks.
It learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.
The latent representation of users and content can then be used to quantify their ideological leaning and detect/predict their stances on issues.
arXiv Detail & Related papers (2021-10-01T04:35:01Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
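As referenced in the Plurinational Scenarios entry above, here is a toy sketch of a generic two-step pipeline of that kind: unsupervised user representations derived from retweet behavior, followed by clustering as a stand-in for multi-party leaning detection. The matrix construction, SVD dimensionality, and cluster count are assumptions for illustration, not details taken from that paper.

```python
# Hedged sketch of a two-step retweet-based pipeline (assumed, not the paper's).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
# Hypothetical data: rows = users, columns = accounts they retweet (counts).
retweet_counts = rng.poisson(0.1, size=(1000, 300))

# Step 1: unsupervised user representations from the retweet matrix.
user_vecs = TruncatedSVD(n_components=16, random_state=0).fit_transform(retweet_counts)

# Step 2: use the representations for multi-party leaning detection
# (clustering here; a supervised classifier would also fit the description).
party_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(user_vecs)
```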