Opinion Dynamics with Highly Oscillating Opinions
- URL: http://arxiv.org/abs/2506.20472v1
- Date: Wed, 25 Jun 2025 14:22:13 GMT
- Title: Opinion Dynamics with Highly Oscillating Opinions
- Authors: Víctor A. Vargas-Pérez, Jesús Giráldez-Cru, Oscar Cordón
- Abstract summary: We study the ability of several Opinion Dynamics (OD) models to reproduce highly oscillating dynamics. Our experiments show that the ATBCR, based on both rational and emotional mechanisms of opinion update, is the most accurate OD model for capturing highly oscillating opinions.
- Score: 2.5862278972437998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opinion Dynamics (OD) models are a particular case of Agent-Based Models in which the evolution of opinions within a population is studied. In most OD models, opinions evolve as a consequence of interactions between agents, and the opinion fusion rule defines how those opinions are updated. In consequence, despite being simplistic, OD models provide an explainable and interpretable mechanism for understanding the underlying dynamics of opinion evolution. Unfortunately, existing OD models mainly focus on explaining the evolution of (usually synthetic) opinions towards consensus, fragmentation, or polarization, but they usually fail to analyze scenarios of (real-world) highly oscillating opinions. This work overcomes this limitation by studying the ability of several OD models to reproduce highly oscillating dynamics. To this end, we formulate an optimization problem which is further solved using Evolutionary Algorithms, providing both quantitative results on the performance of the optimization and qualitative interpretations on the obtained results. Our experiments on a real-world opinion dataset about immigration from the monthly barometer of the Spanish Sociological Research Center show that the ATBCR, based on both rational and emotional mechanisms of opinion update, is the most accurate OD model for capturing highly oscillating opinions.
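The opinion fusion rule mentioned in the abstract can be illustrated with a minimal sketch of the classical Deffuant-Weisbuch bounded-confidence model. This is a generic example for intuition only, not the ATBCR model evaluated in the paper; the parameter names `epsilon` (confidence bound) and `mu` (convergence rate) follow the usual convention for this model:

```python
import random

def deffuant_step(opinions, epsilon=0.3, mu=0.5, rng=None):
    """One interaction of the Deffuant-Weisbuch bounded-confidence model.

    Two randomly chosen agents compare opinions; if they differ by less
    than the confidence bound `epsilon`, each moves a fraction `mu`
    toward the other. This is the opinion fusion rule of the model.
    """
    rng = rng or random.Random()
    i, j = rng.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        delta = mu * (opinions[j] - opinions[i])
        opinions[i] += delta  # agent i moves toward agent j
        opinions[j] -= delta  # agent j moves toward agent i
    return opinions
```

Iterating this step over a population typically produces consensus or fragmentation depending on `epsilon`; reproducing highly oscillating opinions, as the paper argues, requires richer update mechanisms.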
Related papers
- Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models [49.598776427454176]
Large Reasoning Models (LRMs) have gradually become a research hotspot due to their outstanding performance in handling complex tasks.<n>However, with the widespread application of these models, the problem of overthinking has gradually emerged.<n>Various efficient reasoning methods have been proposed, aiming to reduce the length of reasoning paths without compromising model performance and reasoning capability.
arXiv Detail & Related papers (2025-08-04T06:54:31Z)
- Consistent World Models via Foresight Diffusion [56.45012929930605]
We argue that a key bottleneck in learning consistent diffusion-based world models lies in suboptimal predictive ability. We propose Foresight Diffusion (ForeDiff), a diffusion-based world modeling framework that enhances consistency by decoupling condition understanding from target denoising.
arXiv Detail & Related papers (2025-05-22T10:01:59Z)
- VACT: A Video Automatic Causal Testing System and a Benchmark [55.53300306960048]
VACT is an automated framework for modeling, evaluating, and measuring the causal understanding of VGMs in real-world scenarios. We introduce multi-level causal evaluation metrics to provide a detailed analysis of the causal performance of VGMs.
arXiv Detail & Related papers (2025-03-08T10:54:42Z)
- UniGO: A Unified Graph Neural Network for Modeling Opinion Dynamics on Graphs [12.887980453980393]
This paper constructs a unified opinion dynamics model to integrate different opinion fusion rules and generates corresponding synthetic datasets. To fully leverage the advantages of unified opinion dynamics, we introduce UniGO, a framework for modeling opinion evolution on graphs. UniGO efficiently models opinion dynamics through a graph neural network, mitigating over-smoothing while preserving equilibrium phenomena.
arXiv Detail & Related papers (2025-02-17T07:40:32Z)
- Unfamiliar Finetuning Examples Control How Language Models Hallucinate [75.03210107477157]
Large language models are known to hallucinate when faced with unfamiliar queries.
We find that unfamiliar examples in the models' finetuning data are crucial in shaping these errors.
Our work further investigates RL finetuning strategies for improving the factuality of long-form model generations.
arXiv Detail & Related papers (2024-03-08T18:28:13Z)
- Simulating Opinion Dynamics with Networks of LLM-based Agents [7.697132934635411]
We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs).
Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality.
After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research.
arXiv Detail & Related papers (2023-11-16T07:01:48Z)
- Variational Causal Dynamics: Discovering Modular World Models from Interventions [25.084146613277973]
Latent world models allow agents to reason about complex environments with high-dimensional observations.
We present variational causal dynamics (VCD), a structured world model that exploits the invariance of causal mechanisms across environments.
arXiv Detail & Related papers (2022-06-22T14:28:40Z)
- The drivers of online polarization: fitting models to data [0.0]
The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms.
Until now, studies have mainly used opinion dynamic models to explore the mechanisms behind the emergence of polarization and echo chambers.
We provide a method to numerically compare the opinion distributions obtained from simulations with those measured on social media.
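Comparing simulated and measured opinion distributions requires a numerical distance between two empirical samples. The excerpt does not specify which metric the authors use; one common choice is the one-dimensional Wasserstein (earth mover's) distance, sketched here for equal-size samples:

```python
def wasserstein_1d(sample_a, sample_b):
    """Wasserstein-1 (earth mover's) distance between two 1-D samples
    of equal size: the mean absolute gap between sorted values."""
    if len(sample_a) != len(sample_b):
        raise ValueError("equal sample sizes assumed in this sketch")
    # For 1-D samples of equal size, the optimal transport plan simply
    # matches the k-th smallest value of one sample to the k-th smallest
    # value of the other.
    a, b = sorted(sample_a), sorted(sample_b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

A distance of zero means the two sorted samples coincide; larger values mean more "mass" must move to turn one distribution into the other.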
arXiv Detail & Related papers (2022-05-31T17:00:41Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question about the impact of backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.