Neural embedding of beliefs reveals the role of relative dissonance in human decision-making
- URL: http://arxiv.org/abs/2408.07237v1
- Date: Tue, 13 Aug 2024 23:58:45 GMT
- Title: Neural embedding of beliefs reveals the role of relative dissonance in human decision-making
- Authors: Byunghwee Lee, Rachith Aiyappa, Yong-Yeol Ahn, Haewoon Kwak, Jisun An
- Abstract summary: We propose a method for extracting nuanced relations between thousands of beliefs by leveraging large-scale user participation data from an online debate platform.
This belief embedding space effectively encapsulates the interconnectedness of diverse beliefs as well as polarization across various social issues.
We find that the relative distance between one's existing beliefs and new beliefs can serve as a quantitative estimate of cognitive dissonance.
- Score: 6.558951808581431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Beliefs serve as the foundation for human cognition and decision-making. They guide individuals in deriving meaning from their lives, shaping their behaviors, and forming social connections. Therefore, a model that encapsulates beliefs and their interrelationships is crucial for quantitatively studying the influence of beliefs on our actions. Despite its importance, research on the interplay between human beliefs has often been limited to a small set of beliefs pertaining to specific issues, with a heavy reliance on surveys or experiments. Here, we propose a method for extracting nuanced relations between thousands of beliefs by leveraging large-scale user participation data from an online debate platform and mapping these beliefs to an embedding space using a fine-tuned large language model (LLM). This belief embedding space effectively encapsulates the interconnectedness of diverse beliefs as well as polarization across various social issues. We discover that the positions within this belief space predict new beliefs of individuals. Furthermore, we find that the relative distance between one's existing beliefs and new beliefs can serve as a quantitative estimate of cognitive dissonance, allowing us to predict new beliefs. Our study highlights how modern LLMs, when combined with collective online records of human beliefs, can offer insights into the fundamental principles that govern human belief formation and decision-making processes.
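The core mechanism, beliefs as points in an embedding space with distance serving as a dissonance proxy, can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' released code: a generic off-the-shelf sentence-embedding model ("all-MiniLM-L6-v2") stands in for the paper's fine-tuned LLM, and the mean cosine distance used as the dissonance score is an assumption of this sketch.
```python
# Hypothetical sketch of belief embedding and relative dissonance; the
# embedding model and the distance-based dissonance score are stand-ins,
# not the paper's actual fine-tuned LLM or pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_distances

model = SentenceTransformer("all-MiniLM-L6-v2")

existing_beliefs = [
    "Climate change is driven by human activity.",
    "Governments should fund renewable energy research.",
]
candidate = "Carbon emissions should be taxed."
counter_candidate = "Carbon emissions should never be taxed."

E = model.encode(existing_beliefs)          # one vector per held belief
b = model.encode([candidate])
c = model.encode([counter_candidate])

# Dissonance proxy: mean distance from the held beliefs to each candidate.
d_b = cosine_distances(E, b).mean()
d_c = cosine_distances(E, c).mean()

# Relative dissonance: the candidate closer to the existing belief set is
# the one this distance-as-dissonance framing predicts will be adopted.
print(f"d(candidate)={d_b:.3f}  d(counter)={d_c:.3f}  relative={d_b - d_c:.3f}")
```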
Related papers
- Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework, which enables an LLM to continuously optimize its value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit similar tendencies in understanding semantic size.
Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding.
Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z) - Belief in the Machine: Investigating Epistemological Blind Spots of Language Models [51.63547465454027]
Language models (LMs) are essential for reliable decision-making in fields like healthcare, law, and journalism.
This study systematically evaluates the capabilities of modern LMs, including GPT-4, Claude-3, and Llama-3, using a new dataset, KaBLE.
Our results reveal key limitations. First, while LMs achieve 86% accuracy on factual scenarios, their performance drops significantly with false scenarios.
Second, LMs struggle with recognizing and affirming personal beliefs, especially when those beliefs contradict factual data.
arXiv Detail & Related papers (2024-10-28T16:38:20Z) - A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z) - Learning mental states estimation through self-observation: a developmental synergy between intentions and beliefs representations in a deep-learning model of Theory of Mind [0.35154948148425685]
Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states to others.
We show a developmental synergy between learning to predict low-level mental states and attributing high-level ones.
We propose that our computational approach can inform the understanding of human social cognitive development.
arXiv Detail & Related papers (2024-07-25T13:15:25Z) - Explicit Modelling of Theory of Mind for Belief Prediction in Nonverbal Social Interactions [9.318796743761224]
We propose MToMnet - a Theory of Mind (ToM) neural network for predicting beliefs and their dynamics during human social interactions from multimodal input.
MToMnet encodes contextual cues and integrates them with person-specific cues (human gaze and body language) in a separate MindNet for each person.
Our results demonstrate that MToMnet surpasses existing methods by a large margin while at the same time requiring a significantly smaller number of parameters.
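A rough architectural sketch of the idea as summarized above: a shared context encoder whose output is fused with per-person cues in one MindNet per individual. All layer sizes and input dimensions below are invented for illustration and do not follow the paper.
```python
# Hedged sketch of an MToMnet-style layout: a shared context encoder plus a
# separate MindNet per person fusing contextual and person-specific cues
# (e.g. gaze, body language) into belief logits. Dimensions are illustrative.
import torch
import torch.nn as nn

class MindNet(nn.Module):
    def __init__(self, ctx_dim=64, cue_dim=16, n_beliefs=4):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(ctx_dim + cue_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_beliefs),       # logits over belief states
        )

    def forward(self, ctx, cues):
        return self.fuse(torch.cat([ctx, cues], dim=-1))

class MToMnetSketch(nn.Module):
    def __init__(self, obs_dim=32, n_people=2):
        super().__init__()
        self.context = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.minds = nn.ModuleList(MindNet() for _ in range(n_people))

    def forward(self, obs, person_cues):
        ctx = self.context(obs)             # scene encoding shared by all
        return [m(ctx, c) for m, c in zip(self.minds, person_cues)]

obs = torch.randn(1, 32)                            # contextual input
cues = [torch.randn(1, 16), torch.randn(1, 16)]     # per-person gaze/pose
print([b.shape for b in MToMnetSketch()(obs, cues)])  # one logit vector each
```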
arXiv Detail & Related papers (2024-07-09T11:15:51Z) - Belief sharing: a blessing or a curse [3.2614942160776823]
Communication can be cast as sharing beliefs between free-energy minimizing agents.
We demonstrate that naively sharing posterior beliefs can give rise to the negative social dynamics of echo chambers and self-doubt.
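The echo-chamber failure mode has a compact numerical intuition: if agents treat each other's posteriors as fresh independent evidence, one weak observation gets counted over and over. The toy simulation below is this sketch's own simplification, not the paper's free-energy formulation.
```python
# Toy illustration (not the paper's free-energy model): two agents naively
# re-share posteriors as if they were independent evidence, so a single
# weak observation compounds into echo-chamber overconfidence.
import numpy as np

def to_logodds(p):
    return np.log(p / (1 - p))

def from_logodds(l):
    return 1 / (1 + np.exp(-l))

evidence = to_logodds(0.6)   # one weak shared observation
a = b = evidence             # both agents update on it once

for _ in range(5):
    # Naive sharing: each adds the other's log-odds as "new" evidence.
    a, b = a + b, b + a

print(f"justified by the data alone: {from_logodds(evidence):.2f}")
print(f"after naive sharing:         {from_logodds(a):.6f}")  # ~1.0
```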
arXiv Detail & Related papers (2024-07-02T17:46:42Z) - Grounding Language about Belief in a Bayesian Theory-of-Mind [5.058204320571824]
We take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory-of-mind.
By modeling how humans jointly infer coherent sets of goals, beliefs, and plans, our framework provides a conceptual role semantics for belief.
We evaluate this framework by studying how humans attribute goals and beliefs while watching an agent solve a doors-and-keys gridworld puzzle.
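To make the inference step concrete, here is a toy Bayesian goal-inference loop in the spirit of the doors-and-keys setup; the gridworld, the two keys, and the Boltzmann-rational observation model are hypothetical simplifications, not the paper's model.
```python
# Toy Bayesian goal inference over an observed trajectory; the gridworld,
# goals, and Boltzmann-rational likelihood are hypothetical simplifications.
import numpy as np

goals = {"key_A": np.array([0, 4]), "key_B": np.array([4, 0])}
posterior = {g: 0.5 for g in goals}             # uniform prior

def step_likelihood(pos, nxt, goal, beta=2.0):
    """Steps that reduce Manhattan distance to a goal are exponentially
    more likely under that goal (Boltzmann-rational agent)."""
    progress = np.abs(pos - goal).sum() - np.abs(nxt - goal).sum()
    return np.exp(beta * progress)

# Observed trajectory: the agent keeps moving toward key_B's side.
trajectory = [np.array([0, 0]), np.array([1, 0]), np.array([2, 0])]

for pos, nxt in zip(trajectory, trajectory[1:]):
    for g, loc in goals.items():
        posterior[g] *= step_likelihood(pos, nxt, loc)
    z = sum(posterior.values())
    posterior = {g: p / z for g, p in posterior.items()}

print(posterior)   # probability mass shifts toward key_B with each step
```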
arXiv Detail & Related papers (2024-02-16T02:47:09Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z) - Leveraging Contextual Counterfactuals Toward Belief Calibration [1.418033127602866]
The meta-alignment problem is that human beliefs are diverse and not aligned across populations.
In high regret situations, we observe that contextual counterfactuals and recourse costs are important in updating a decision maker's beliefs and the strengths to which such beliefs are held.
We introduce the 'belief calibration cycle' framework to more holistically calibrate this diversity of beliefs with context-driven counterfactual reasoning.
arXiv Detail & Related papers (2023-07-13T01:22:18Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z) - Robot Learning Theory of Mind through Self-Observation: Exploiting the Intentions-Beliefs Synergy [0.0]
Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states in general to other agents.
We show the synergy between learning to predict low-level mental states, such as intentions and goals, and attributing high-level ones, such as beliefs.
We propose that our architectural approach can be relevant for the design of future adaptive social robots.
arXiv Detail & Related papers (2022-10-17T21:12:39Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are, respectively, speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras.
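Since the survey singles out speaking activity plus support vector machines as the dominant recipe, a minimal hedged sketch of that pipeline follows; the features and labels are synthetic stand-ins, not data from any surveyed study.
```python
# Hedged sketch of the survey's most common recipe: an SVM over per-person
# nonverbal-cue statistics. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: [speaking-time ratio, turn count, pitch variance]
X = rng.normal(size=(n, 3))
# Hypothetical label: "dominant participant" if they speak a lot and often.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```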
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs [76.6325846350907]
Dennett (1995) famously argues that even thermostats have beliefs, on the view that a belief is simply an informational state decoupled from any motivational state.
In this paper, we discuss approaches to detecting when models have beliefs about the world, and we improve on methods for updating model beliefs to be more truthful.
arXiv Detail & Related papers (2021-11-26T18:33:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.