Biased processing and opinion polarization: experimental refinement of
argument communication theory in the context of the energy debate
- URL: http://arxiv.org/abs/2212.10117v1
- Date: Tue, 20 Dec 2022 09:35:53 GMT
- Title: Biased processing and opinion polarization: experimental refinement of
argument communication theory in the context of the energy debate
- Authors: Sven Banisch and Hawal Shamon
- Abstract summary: We combine experimental research on biased argument processing with a computational theory of group deliberation.
The experiment reveals a strong tendency to consider arguments aligned with the current attitude more persuasive and to downgrade those speaking against it.
We derive a mathematical model that allows us to relate the strength of biased processing to expected attitude changes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In sociological research, the study of macro processes, such as opinion
polarization, faces a fundamental problem, the so-called micro-macro problem.
To overcome this problem, we combine empirical experimental research on biased
argument processing with a computational theory of group deliberation in order
to clarify the role of biased processing in debates around energy. The
experiment reveals a strong tendency to consider arguments aligned with the
current attitude more persuasive and to downgrade those speaking against it.
This is integrated into the framework of argument communication theory in which
agents exchange arguments about a certain topic and adapt opinions accordingly.
We derive a mathematical model that allows us to relate the strength of biased
processing to expected attitude changes given the specific experimental
conditions, and find a clear signature of moderate biased processing. We further
show that this model fits the experimentally observed attitude changes
significantly better than the neutral argument processing assumption made in
previous models. Our approach provides new insight into the relationship
between biased processing and opinion polarization. At the individual level our
analysis reveals a sharp qualitative transition from attitude moderation to
polarization. At the collective level we find (i.) that weak biased processing
significantly accelerates group decision processes whereas (ii.) strong biased
processing leads to a persistent conflictual state of subgroup polarization.
While this shows that biased processing alone is sufficient for the emergence
of polarization, we also demonstrate that homophily may lead to intra-group
conflict at significantly lower rates of biased processing.
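The argument-exchange mechanism described in the abstract can be sketched as a minimal agent-based simulation. The parameterization below (a logistic acceptance probability with bias strength `beta`, and the specific agent and argument counts) is a hypothetical illustration of the general mechanism, not the paper's exact model:

```python
import math
import random

def simulate(n_agents=50, n_args=8, beta=0.5, steps=20000, seed=0):
    """Minimal sketch of an argument-communication model with biased
    processing. Each agent holds beliefs on n_args pro- and n_args
    con-arguments (1 = believed, 0 = not believed); the attitude is
    (#pro - #con) / n_args, in [-1, 1]."""
    rng = random.Random(seed)
    # belief vectors: first n_args entries are pro-arguments, rest con
    agents = [[rng.randint(0, 1) for _ in range(2 * n_args)]
              for _ in range(n_agents)]

    def attitude(a):
        return (sum(a[:n_args]) - sum(a[n_args:])) / n_args

    for _ in range(steps):
        sender, receiver = rng.sample(range(n_agents), 2)
        k = rng.randrange(2 * n_args)        # argument to communicate
        if not agents[sender][k]:
            continue                         # sender can only voice held arguments
        polarity = 1 if k < n_args else -1   # pro (+1) or con (-1)
        # biased processing: arguments aligned with the receiver's current
        # attitude are more likely to be accepted; beta = 0 is neutral
        p = 1.0 / (1.0 + math.exp(-beta * attitude(agents[receiver]) * polarity))
        agents[receiver][k] = 1 if rng.random() < p else 0

    return [attitude(a) for a in agents]

opinions = simulate(beta=0.2)  # weak bias: opinions tend to moderate
```

Raising `beta` in this sketch reproduces the qualitative picture the abstract describes: weak bias speeds convergence, while strong bias lets subgroups lock into opposite attitude extremes.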
Related papers
- An Effective Theory of Bias Amplification [18.648588509429167]
Machine learning models may capture and amplify biases present in data, leading to disparate test performance across social groups.
We propose a precise analytical theory in the context of ridge regression, which models neural networks in a simplified regime.
Our theory offers a unified and rigorous explanation of machine learning bias, providing insights into phenomena such as bias amplification and minority-group bias.
arXiv Detail & Related papers (2024-10-07T08:43:22Z)
- How social reinforcement learning can lead to metastable polarisation and the voter model [0.0]
A recent simulation study shows that polarization is persistent when agents form their opinions using social reinforcement learning.
We show that the polarization observed in the model of the simulation study cannot persist indefinitely, and that the dynamics reach consensus with probability one.
By constructing a link between the reinforcement learning model and the voter model, we argue that the observed polarization is metastable.
arXiv Detail & Related papers (2024-06-12T08:38:47Z)
- Polarity Calibration for Opinion Summarization [46.83053173308394]
Polarity calibration aims to align the polarity of output summary with that of input text.
We evaluate our model on two types of opinion summarization tasks: summarizing product reviews and political opinion articles.
arXiv Detail & Related papers (2024-04-02T07:43:12Z)
- Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases [76.9127853906115]
Bridging the gap between diffusion models and human preferences is crucial for their integration into practical generative applications.
We propose Temporal Diffusion Policy Optimization with critic active neuron Reset (TDPO-R), a policy gradient algorithm that exploits the temporal inductive bias of diffusion models.
Empirical results demonstrate the superior efficacy of our methods in mitigating reward overoptimization.
arXiv Detail & Related papers (2024-02-13T15:55:41Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Causal Inference from Text: Unveiling Interactions between Variables [20.677407402398405]
Existing methods only account for confounding covariables that affect both treatment and outcome.
This bias arises from insufficient consideration of non-confounding covariables.
In this work, we aim to mitigate the bias by unveiling interactions between different variables.
arXiv Detail & Related papers (2023-11-09T11:29:44Z)
- Mitigating Framing Bias with Polarity Minimization Loss [56.24404488440295]
Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events.
We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias.
arXiv Detail & Related papers (2023-11-03T09:50:23Z)
- Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures [93.17009514112702]
Pruning, setting a significant subset of the parameters of a neural network to zero, is one of the most popular methods of model compression.
Despite existing evidence for this phenomenon, the relationship between neural network pruning and induced bias is not well-understood.
arXiv Detail & Related papers (2023-04-25T07:42:06Z)
- Validating argument-based opinion dynamics with survey experiments [0.0]
The empirical validation of models remains one of the most important challenges in opinion dynamics.
We show that the extended argument-based model provides a solid bridge from the micro processes of argument-induced attitude change to macro level opinion distributions.
arXiv Detail & Related papers (2022-12-20T10:21:30Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.