Can a Hallucinating Model help in Reducing Human "Hallucination"?
- URL: http://arxiv.org/abs/2405.00843v1
- Date: Wed, 1 May 2024 20:10:44 GMT
- Title: Can a Hallucinating Model help in Reducing Human "Hallucination"?
- Authors: Sowmya S Sundaram, Balaji Alwar
- Abstract summary: This study explores the capabilities of large language models (LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls.
We propose methodologies for harnessing LLMs to counter misconceptions, drawing upon psychological models of persuasion.
- Score: 2.3633885460047774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevalence of unwarranted beliefs, spanning pseudoscience, logical fallacies, and conspiracy theories, presents substantial societal hurdles and the risk of disseminating misinformation. Utilizing established psychometric assessments, this study explores the capabilities of large language models (LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We undertake a philosophical inquiry, juxtaposing the rationality of humans against that of LLMs. Furthermore, we propose methodologies for harnessing LLMs to counter misconceptions, drawing upon psychological models of persuasion such as cognitive dissonance theory and elaboration likelihood theory. Through this endeavor, we highlight the potential of LLMs as personalized misinformation debunking agents.
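As a rough illustration of the kind of evaluation the abstract describes, the sketch below administers a handful of fallacy-detection items to a language model and scores its answers. This is a minimal sketch only: `query_llm` is a hypothetical stand-in for whatever chat/completions client is used, and the items are illustrative toy examples, not the validated psychometric instrument used in the paper.

```python
# Minimal sketch: psychometric-style scoring of an LLM's fallacy detection.
# Assumptions: `query_llm` is a hypothetical callable (prompt -> reply string);
# ITEMS are toy examples, not items from a validated scale.

from typing import Callable

ITEMS = [
    # (statement, is_fallacious)
    ("All birds can fly. Penguins are birds. Therefore penguins can fly.", True),
    ("If it rains, the street gets wet. It rained. Therefore the street is wet.", False),
    ("Thousands of people believe in astrology, so it must be accurate.", True),
]

PROMPT = (
    "Does the following argument contain a logical fallacy? "
    "Answer only YES or NO.\n\nArgument: {statement}"
)

def fallacy_detection_score(query_llm: Callable[[str], str]) -> float:
    """Return the fraction of items the model classifies correctly."""
    correct = 0
    for statement, is_fallacious in ITEMS:
        answer = query_llm(PROMPT.format(statement=statement)).strip().upper()
        predicted_fallacious = answer.startswith("YES")
        correct += int(predicted_fallacious == is_fallacious)
    return correct / len(ITEMS)

if __name__ == "__main__":
    # Dummy model that labels everything fallacious, just to show the interface.
    score = fallacy_detection_score(lambda prompt: "YES")
    print(f"Detection accuracy: {score:.2f}")
```

The same harness can be pointed at human respondents or at different models, which is what allows the human-versus-LLM comparison the study draws.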
Related papers
- Going Whole Hog: A Philosophical Defense of AI Cognition [0.0]
We argue against prevailing methodologies in AI philosophy, rejecting starting points based on low-level computational details.
We employ 'Holistic Network Assumptions' to argue for the full suite of cognitive states.
We conclude by speculating on the possibility of LLMs possessing 'alien' contents beyond human conceptual schemes.
arXiv Detail & Related papers (2025-04-18T11:36:25Z) - Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework, which enables LLM to continuously optimize its value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - Delusions of Large Language Models [62.43923767408462]
Large Language Models often generate factually incorrect but plausible outputs, known as hallucinations.
We identify a more insidious phenomenon, LLM delusion, defined as high-belief hallucinations: incorrect outputs held with abnormally high confidence, which makes them harder to detect and mitigate.
arXiv Detail & Related papers (2025-03-09T17:59:16Z) - How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit similar tendencies in understanding semantic size.
Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding.
Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z) - Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina [7.155982875107922]
Studies suggest large language models (LLMs) can exhibit human-like reasoning, aligning with human behavior in economic experiments, surveys, and political discourse.
This has led many to propose that LLMs can be used as surrogates or simulations for humans in social science research.
We assess the reasoning depth of LLMs using the 11-20 money request game.
arXiv Detail & Related papers (2024-10-25T14:46:07Z) - Cognitive phantoms in LLMs through the lens of latent variables [0.3441021278275805]
Large language models (LLMs) increasingly reach real-world applications, necessitating a better understanding of their behaviour.
Recent studies administering psychometric questionnaires to LLMs report human-like traits that may influence their behaviour.
This approach suffers from a validity problem: it presupposes that these traits exist in LLMs and that they are measurable with tools designed for humans.
This study investigates this problem by comparing latent structures of personality between humans and three LLMs using two validated personality questionnaires.
arXiv Detail & Related papers (2024-09-06T12:42:35Z) - Perceptions to Beliefs: Exploring Precursory Inferences for Theory of Mind in Large Language Models [51.91448005607405]
We evaluate key human ToM precursors by annotating characters' perceptions on ToMi and FANToM.
We present PercepToM, a novel ToM method leveraging LLMs' strong perception inference capability while supplementing their limited perception-to-belief inference.
arXiv Detail & Related papers (2024-07-08T14:58:29Z) - LLM Internal States Reveal Hallucination Risk Faced With a Query [62.29558761326031]
Humans have a self-awareness process that allows us to recognize what we don't know when faced with queries.
This paper investigates whether Large Language Models can estimate their own hallucination risk before response generation.
Using a probing estimator, we leverage LLM self-assessment, achieving an average hallucination estimation accuracy of 84.32% at run time.
arXiv Detail & Related papers (2024-07-03T17:08:52Z) - What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models [50.97705264224828]
We propose Counterfactual Inception, a novel method that implants counterfactual thinking into Large Multi-modal Models.
We aim for the models to engage with and generate responses that reflect a broader contextual understanding of the scene.
Comprehensive analyses across various LMMs, including both open-source and proprietary models, corroborate that counterfactual thinking significantly reduces hallucination.
arXiv Detail & Related papers (2024-03-20T11:27:20Z) - Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation [0.7826806223782052]
We propose a psychological taxonomy based on cognitive biases and other psychological phenomena.
By leveraging insights from how humans internally resolve similar challenges, we aim to develop strategies to mitigate hallucinations.
arXiv Detail & Related papers (2024-02-01T03:01:11Z) - MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z) - Violation of Expectation via Metacognitive Prompting Reduces Theory of Mind Prediction Error in Large Language Models [0.0]
Large Language Models (LLMs) exhibit a compelling level of proficiency in Theory of Mind (ToM) tasks.
This ability to impute unobservable mental states to others is vital to human social cognition and may prove equally important in principal-agent relations between humans and Artificial Intelligences (AIs).
arXiv Detail & Related papers (2023-10-10T20:05:13Z) - Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation [80.126717170151]
This study utilizes the intricate Avalon game as a testbed to explore LLMs' potential in deceptive environments.
We introduce a novel framework, Recursive Contemplation (ReCon), to enhance LLMs' ability to identify and counteract deceptive information.
arXiv Detail & Related papers (2023-10-02T16:27:36Z) - Siren's Song in the AI Ocean: A Survey on Hallucination in Large
Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z) - Deception Abilities Emerged in Large Language Models [0.0]
Large language models (LLMs) are currently at the forefront of intertwining artificial intelligence (AI) systems with human communication and everyday life.
This study reveals that deception strategies emerged in state-of-the-art LLMs, such as GPT-4, but were non-existent in earlier LLMs.
We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents.
arXiv Detail & Related papers (2023-07-31T09:27:01Z)