The drivers of online polarization: fitting models to data
- URL: http://arxiv.org/abs/2205.15958v3
- Date: Fri, 12 May 2023 09:01:05 GMT
- Title: The drivers of online polarization: fitting models to data
- Authors: Carlo Michele Valensise, Matteo Cinelli, Walter Quattrociocchi
- Abstract summary: The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms.
Until now, studies have mainly used opinion dynamic models to explore the mechanisms behind the emergence of polarization and echo chambers.
We provide a method to numerically compare the opinion distributions obtained from simulations with those measured on social media.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users online tend to join polarized groups of like-minded peers around shared
narratives, forming echo chambers. The echo chamber effect and opinion
polarization may be driven by several factors including human biases in
information consumption and personalized recommendations produced by feed
algorithms. Until now, studies have mainly used opinion dynamic models to
explore the mechanisms behind the emergence of polarization and echo chambers.
The objective was to determine the key factors contributing to these phenomena
and identify their interplay. However, the validation of model predictions with
empirical data still displays two main drawbacks: lack of systematicity and
qualitative analysis. In our work, we bridge this gap by providing a method to
numerically compare the opinion distributions obtained from simulations with
those measured on social media. To validate this procedure, we develop an
opinion dynamic model that takes into account the interplay between human and
algorithmic factors. We subject our model to empirical testing with data from
diverse social media platforms and benchmark it against two state-of-the-art
models. To further enhance our understanding of social media platforms, we
provide a synthetic description of their characteristics in terms of the
model's parameter space. This representation has the potential to facilitate
the refinement of feed algorithms, thus mitigating the detrimental effects of
extreme polarization on online discourse.
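The core of the proposed validation procedure is a numerical comparison between simulated and empirical opinion distributions. As a rough illustration only (the paper's actual metric, model, and data are not reproduced here), such a comparison could use the 1-Wasserstein distance between an opinion sample produced by a model run and one measured on a platform:

```python
# Hypothetical sketch: numerically comparing a simulated opinion
# distribution with an empirical one. The metric and the synthetic data
# below are illustrative assumptions, not the paper's actual procedure.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for simulated opinions in [-1, 1] (e.g. one model run).
simulated = np.clip(rng.normal(loc=0.6, scale=0.2, size=5000), -1, 1)

# Stand-in for empirical opinions: a bimodal (polarized) sample.
empirical = np.clip(np.concatenate([
    rng.normal(loc=-0.7, scale=0.15, size=2500),
    rng.normal(loc=0.7, scale=0.15, size=2500),
]), -1, 1)

# For equal-sized 1-D samples, the 1-Wasserstein (earth mover's) distance
# reduces to the mean absolute difference between the sorted samples.
w1 = np.mean(np.abs(np.sort(simulated) - np.sort(empirical)))
print(f"1-Wasserstein distance (lower = better model fit): {w1:.3f}")
```

A distance like this turns "the simulated distribution looks similar to the data" into a single number that can be minimized over the model's parameter space, which is the kind of systematic, quantitative validation the abstract contrasts with earlier qualitative comparisons.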
Related papers
- From Efficiency to Equity: Measuring Fairness in Preference Learning [3.2132738637761027]
We evaluate fairness in preference learning models inspired by economic theories of inequality and Rawlsian justice.
We propose metrics adapted from the Gini Coefficient, Atkinson Index, and Kuznets Ratio to quantify fairness in these models.
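For reference, the Gini Coefficient named in this summary has a standard closed form for any non-negative score vector; the sketch below is a generic implementation of that textbook formula, not the paper's adapted fairness metric:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Textbook formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with i = 1..n over the sorted values.
    weighted = np.sum(np.arange(1, n + 1) * x)
    return 2.0 * weighted / (n * x.sum()) - (n + 1.0) / n

print(round(gini([1, 1, 1, 1]), 3))  # perfectly equal -> 0.0
print(round(gini([0, 0, 0, 4]), 3))  # fully concentrated -> 0.75
```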
arXiv Detail & Related papers (2024-10-24T15:25:56Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- Towards Explaining Demographic Bias through the Eyes of Face Recognition Models [6.889667606945215]
Biases inherent in both data and algorithms make the fairness of machine learning (ML)-based decision-making systems less than optimal.
We aim to provide a set of explainability tools that analyse the differences in face recognition models' behavior when processing different demographic groups.
We do that by leveraging higher-order statistical information based on activation maps to build explainability tools that link the FR models' behavior differences to certain facial regions.
arXiv Detail & Related papers (2022-08-29T07:23:06Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Cascade-based Echo Chamber Detection [16.35164446890934]
Echo chambers in social media have been under considerable scrutiny.
We propose a probabilistic generative model that explains social media footprints.
We show how our model can improve accuracy in auxiliary predictive tasks, such as stance detection and prediction of future propagations.
arXiv Detail & Related papers (2022-08-09T09:30:38Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds (ELBOs) for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large scale interaction-centric benchmark TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question of the impact of the backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.