Boosting Theory-of-Mind Performance in Large Language Models via
Prompting
- URL: http://arxiv.org/abs/2304.11490v3
- Date: Wed, 26 Apr 2023 04:02:04 GMT
- Title: Boosting Theory-of-Mind Performance in Large Language Models via
Prompting
- Authors: Shima Rahimi Moghaddam, Christopher J. Honey
- Abstract summary: This study measures the ToM performance of GPT-4 and three GPT-3.5 variants.
We investigated the effectiveness of in-context learning in improving ToM comprehension.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) excel in many tasks in 2023, but they still face
challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require
understanding agents' beliefs, goals, and mental states, are essential for
common-sense reasoning involving humans, making it crucial to enhance LLM
performance in this area. This study measures the ToM performance of GPT-4 and
three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates
the effectiveness of in-context learning in improving their ToM comprehension.
We evaluated prompts featuring two-shot chain of thought reasoning and
step-by-step thinking instructions. We found that LLMs trained with
Reinforcement Learning from Human Feedback (RLHF) (all models excluding
Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed
best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell
short of the 87% human accuracy on the test set. However, when supplied with
prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM
accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate
prompting enhances LLM ToM reasoning, and they underscore the context-dependent
nature of LLM cognitive capacities.
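As an illustration of the prompt format the abstract describes, the sketch below builds a two-shot chain-of-thought prompt that ends in a step-by-step instruction. The vignettes and the helper name are written for this sketch and are not the paper's actual test items.

```python
# A minimal sketch of two-shot chain-of-thought ToM prompting, assuming
# generic false-belief vignettes (not the paper's test set).

FEW_SHOT_EXAMPLES = [
    {
        "scenario": ("Anna puts her keys in the drawer and leaves the room. "
                     "While she is away, Ben moves the keys to the shelf."),
        "question": "Where will Anna look for her keys first?",
        "reasoning": ("Anna last saw the keys in the drawer and did not see "
                      "Ben move them, so her belief about their location is "
                      "outdated."),
        "answer": "the drawer",
    },
    {
        "scenario": ("A bag labeled 'popcorn' is actually full of chocolate. "
                     "Sam finds the bag and cannot see inside it."),
        "question": "What does Sam believe the bag contains?",
        "reasoning": ("Sam can only rely on the label. The label says "
                      "popcorn, so Sam's belief follows the label, not the "
                      "contents."),
        "answer": "popcorn",
    },
]

def build_two_shot_cot_prompt(scenario: str, question: str) -> str:
    """Prepend two worked examples with rationales, then prompt step by step."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Scenario: {ex['scenario']}\n"
            f"Question: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Answer: {ex['answer']}\n"
        )
    parts.append(
        f"Scenario: {scenario}\n"
        f"Question: {question}\n"
        "Let's think step by step:"  # the step-by-step thinking instruction
    )
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_two_shot_cot_prompt(
        "Mia hides a coin under the red cup, then leaves. "
        "Leo moves it under the blue cup.",
        "Where will Mia look for the coin?",
    ))
```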
Related papers
- Exposing the Achilles' Heel: Evaluating LLMs Ability to Handle Mistakes in Mathematical Reasoning [11.63133816413199]
Large Language Models (LLMs) have been applied to Math Word Problems (MWPs).
We introduce MWP-MISTAKE, a novel dataset incorporating MWPs with both correct and incorrect reasoning steps generated through rule-based methods and smaller language models.
We highlight GPT-4o's superior performance in mistake detection and rectification, and the persistent challenges faced by smaller models.
arXiv Detail & Related papers (2024-06-16T08:06:05Z)
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based searching method for large language models.
It formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
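Since the summary frames reasoning as search, the following is a loose sketch of a beam search over partial reasoning paths, not the authors' algorithm; `propose_steps`, `score`, and `is_final` stand in for the model's step generator, a path-scoring reward, and a termination check.

```python
import heapq
from typing import Callable, List, Tuple

def search_reasoning_path(
    question: str,
    propose_steps: Callable[[str, List[str]], List[str]],  # LLM step proposals
    score: Callable[[str, List[str]], float],              # reward for a partial path
    is_final: Callable[[str], bool],                       # does this step answer?
    beam_width: int = 3,
    max_depth: int = 6,
) -> List[str]:
    """Beam search over partial reasoning paths: a generic sketch of
    treating reasoning as search, not MindStar's exact procedure."""
    beam: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(max_depth):
        candidates: List[Tuple[float, List[str]]] = []
        for s, path in beam:
            if path and is_final(path[-1]):
                candidates.append((s, path))  # keep finished paths as-is
                continue
            for step in propose_steps(question, path):
                new_path = path + [step]
                candidates.append((score(question, new_path), new_path))
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=lambda t: t[0])
        if all(path and is_final(path[-1]) for _, path in beam):
            break
    return max(beam, key=lambda t: t[0])[1]  # best-scoring path found
```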
arXiv Detail & Related papers (2024-05-25T15:07:33Z)
- Evaluating Students' Open-ended Written Responses with LLMs: Using the RAG Framework for GPT-3.5, GPT-4, Claude-3, and Mistral-Large [0.0]
Evaluating open-ended written responses from students is an essential yet time-intensive task for educators.
Recent developments in Large Language Models (LLMs) present a promising opportunity to balance the need for thorough evaluation with efficient use of educators' time.
arXiv Detail & Related papers (2024-05-08T22:23:58Z)
- Benchmarking and Analyzing In-context Learning, Fine-tuning and Supervised Learning for Biomedical Knowledge Curation: a focused study on chemical entities of biological interest [2.8216292452982668]
This study compares and analyzes three NLP paradigms for curation: in-context learning (ICL), fine-tuning (FT), and supervised machine learning (ML).
For ICL, three prompting strategies were employed with GPT-4, GPT-3.5, and BioGPT.
For ML, six embedding models were utilized for training Random Forest and Long Short-Term Memory (LSTM) models.
arXiv Detail & Related papers (2023-12-20T12:46:44Z)
- Mixed Distillation Helps Smaller Language Model Better Reasoning [27.934081882868902]
We introduce the Mixed Distillation (MD) framework, which capitalizes on the strengths of Program-of-Thought (PoT) and Chain-of-Thought (CoT) capabilities within large language models (LLMs).
Our experimental results show that MD significantly enhances the single-path and multi-path reasoning ability of smaller models in various tasks.
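As a rough sketch of the mixed-distillation idea (the record format and helper names are assumptions for illustration, not the paper's code): the teacher emits both a CoT rationale and a PoT program for each problem, and the smaller student model is fine-tuned on both target styles.

```python
# Assumed helpers: teacher_cot(problem) returns a natural-language rationale,
# teacher_pot(problem) returns executable code whose result is the answer.
# The "[CoT]"/"[PoT]" tags are one illustrative way to let the student
# switch between the two reasoning formats at inference time.

def build_mixed_distillation_records(problems, teacher_cot, teacher_pot):
    records = []
    for problem in problems:
        records.append({"input": f"[CoT] {problem}", "target": teacher_cot(problem)})
        records.append({"input": f"[PoT] {problem}", "target": teacher_pot(problem)})
    return records  # fine-tune the smaller model on both formats
```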
arXiv Detail & Related papers (2023-12-17T14:28:28Z)
- Branch-Solve-Merge Improves Large Language Model Evaluation and Generation [136.7876524839751]
Large Language Models (LLMs) are frequently used for multi-faceted language generation and evaluation tasks.
We propose Branch-Solve-Merge (BSM), a Large Language Model program (Schlag et al., 2023) for tackling such challenging natural language tasks.
BSM improves the evaluation correctness and consistency for each LLM by enhancing human-LLM agreement by up to 26%.
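The snippet below is a schematic of the branch-solve-merge pattern, with `llm` standing in for any prompt-in/text-out callable; the prompts are illustrative, not the paper's.

```python
def branch_solve_merge(llm, task: str) -> str:
    """Schematic branch-solve-merge: decompose, solve parts, then merge.
    `llm` is an assumed text-completion callable; prompts are illustrative."""
    plan = llm(f"List the independent sub-problems of this task, one per line:\n{task}")
    branches = [line.strip() for line in plan.splitlines() if line.strip()]
    solutions = [
        llm(f"Task: {task}\nSub-problem: {branch}\nSolve only this sub-problem:")
        for branch in branches
    ]
    return llm(
        "Merge these partial solutions into one final answer:\n" + "\n\n".join(solutions)
    )
```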
arXiv Detail & Related papers (2023-10-23T17:29:48Z)
- Assessing the Reliability of Large Language Model Knowledge [78.38870272050106]
Large language models (LLMs) have been treated as knowledge bases due to their strong performance in knowledge probing tasks.
How do we evaluate the capabilities of LLMs to consistently produce factually correct answers?
We propose MOdel kNowledge relIabiliTy scORe (MONITOR), a novel metric designed to directly measure LLMs' factual reliability.
arXiv Detail & Related papers (2023-10-15T12:40:30Z)
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z)
- Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs [60.61002524947733]
Previous confidence elicitation methods rely on white-box access to internal model information or model fine-tuning.
This leads to a growing need to explore the untapped area of black-box approaches for uncertainty estimation.
We define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency.
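The sketch below shows one simple instantiation of the framework's third component: aggregating sampled (answer, verbalized confidence) pairs into a consistency-weighted score. The equal hybrid weighting is an illustrative choice, not the paper's specific rule.

```python
from collections import Counter

def aggregate_confidence(samples: list[tuple[str, float]]) -> tuple[str, float]:
    """samples: (answer, verbalized confidence in [0, 1]) pairs from repeated
    black-box queries. Returns the majority answer and a hybrid confidence
    mixing cross-sample consistency with the model's stated confidences
    (an illustrative aggregation, not the paper's exact method)."""
    counts = Counter(answer for answer, _ in samples)
    top_answer, freq = counts.most_common(1)[0]
    consistency = freq / len(samples)  # how often the majority answer recurs
    stated = sum(c for a, c in samples if a == top_answer) / freq
    return top_answer, 0.5 * consistency + 0.5 * stated
```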
arXiv Detail & Related papers (2023-06-22T17:31:44Z)
- LIMA: Less Is More for Alignment [112.93890201395477]
We train LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses.
LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples.
In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases.
arXiv Detail & Related papers (2023-05-18T17:45:22Z)
- Evaluating Large Language Models in Theory of Mind Tasks [11.622327857276389]
Eleven Large Language Models (LLMs) were assessed using a custom-made battery of false-belief tasks.
The battery included 640 prompts spread across 40 diverse tasks, each built around a false-belief scenario.
To solve a single task, a model needed to correctly answer all 16 of its prompts, spanning the task's eight scenario variants.
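A small sketch of the all-or-nothing scoring described above: a task counts as solved only when every one of its 16 prompts is answered correctly. The `results` structure is an assumption for illustration.

```python
def battery_accuracy(results: dict[str, list[bool]]) -> float:
    """results maps each task to 16 booleans (one per prompt).
    A task is solved only if all 16 of its prompts are correct."""
    solved = sum(
        1 for prompt_outcomes in results.values()
        if len(prompt_outcomes) == 16 and all(prompt_outcomes)
    )
    return solved / len(results)
```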
arXiv Detail & Related papers (2023-02-04T03:50:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.