Thinking LLMs: General Instruction Following with Thought Generation
- URL: http://arxiv.org/abs/2410.10630v1
- Date: Mon, 14 Oct 2024 15:38:56 GMT
- Title: Thinking LLMs: General Instruction Following with Thought Generation
- Authors: Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
- Abstract summary: We propose a training method for equipping existing LLMs with such thinking abilities for general instruction following without the use of additional human data.
For each instruction, candidate thoughts are scored by a judge model that evaluates only the resulting responses; the thoughts are then optimized via preference optimization.
We show that this procedure leads to superior performance on AlpacaEval and Arena-Hard, and yields gains from thinking on non-reasoning categories such as marketing, health, and general knowledge.
- Score: 56.30755438254918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LLMs are typically trained to answer user questions or follow instructions the way human experts respond. However, in the standard alignment framework they lack the basic ability to think explicitly before answering. Thinking is important for complex questions that require reasoning and planning, but it can be applied to any task. We propose a training method for equipping existing LLMs with such thinking abilities for general instruction following without the use of additional human data. We achieve this through an iterative search and optimization procedure that explores the space of possible thought generations, allowing the model to learn how to think without direct supervision. For each instruction, candidate thoughts are scored by a judge model that evaluates only the resulting responses; the thoughts are then optimized via preference optimization. We show that this procedure leads to superior performance on AlpacaEval and Arena-Hard, and yields gains from thinking on non-reasoning categories such as marketing, health, and general knowledge, in addition to more traditional reasoning and problem-solving tasks.
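The procedure described above reduces to a simple loop: sample several thought+response candidates per instruction, judge only the responses, and turn the best- and worst-judged candidates into preference pairs, so the thought is shaped purely through its effect on the final answer. The sketch below is a minimal, hypothetical rendering of one such iteration; `generate`, `judge`, `optimize`, and the `<response>` marker are placeholders for this illustration, not the authors' code.

```python
def tpo_iteration(model, instructions, generate, judge, optimize, k=8):
    """One hypothetical iteration of the thought-preference loop.

    generate(model, prompt) samples a "<thought> ... <response> ..." string,
    judge(instruction, response) scores the response alone (thoughts hidden),
    and optimize(model, pairs) applies any preference objective (e.g. DPO).
    """
    preference_pairs = []
    for instruction in instructions:
        # Sample k candidate thought+response completions per instruction.
        candidates = [generate(model, instruction) for _ in range(k)]
        # Rank candidates by judging ONLY the user-visible response part.
        ranked = sorted(
            candidates,
            key=lambda c: judge(instruction, split_response(c)),
            reverse=True,
        )
        # Best- and worst-judged full completions form one preference pair.
        preference_pairs.append((instruction, ranked[0], ranked[-1]))
    return optimize(model, preference_pairs)

def split_response(completion, marker="<response>"):
    # Keep only the visible response; everything before the marker is
    # internal thought. The marker format is an assumption of this sketch.
    return completion.split(marker, 1)[-1]
```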
Related papers
- Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning [13.865037985388575]
We introduce a new "process prejudge" strategy in LLM reasoning.
We define a prejudge node in the rationale, which represents a reasoning step.
We present an automated reasoning framework with a dynamic tree-searching strategy.
arXiv Detail & Related papers (2025-04-18T06:42:30Z)
- Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs [86.79757571440082]
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks.
We identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts.
We propose a decoding strategy with a thought-switching penalty (TIP) that discourages premature transitions between thoughts.
arXiv Detail & Related papers (2025-01-30T18:58:18Z)
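As a rough illustration of the thought-switching penalty above: during decoding, down-weight tokens that begin a "switch" phrase while the current thought is still short. The dict-based logits interface, token ids, and parameter values below are invented for this example and are not the paper's implementation.

```python
# Placeholder ids for switch-phrase tokens such as "Alternatively", "Wait".
SWITCH_TOKEN_IDS = {17, 42}

def apply_tip(logits, steps_in_current_thought, alpha=3.0, min_steps=50):
    """Penalize premature transitions between thoughts (hypothetical sketch).

    logits is a mutable mapping token_id -> score. Only while the current
    thought is shorter than min_steps tokens do we discourage switching.
    """
    if steps_in_current_thought < min_steps:
        for tid in SWITCH_TOKEN_IDS:
            if tid in logits:
                logits[tid] -= alpha  # make switch tokens less likely
    return logits
```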
- Prompting Large Language Models with Rationale Heuristics for Knowledge-based Visual Question Answering [6.745948705869626]
We argue that prior methods do not sufficiently activate the capacities of Large Language Models (LLMs).
We propose a framework called PLRH that Prompts LLMs with Rationale Heuristics for knowledge-based VQA.
arXiv Detail & Related papers (2024-12-22T09:14:35Z)
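The rationale-heuristic idea can be pictured as two chained prompts: first elicit an intermediate rationale, then condition the answer on it. A minimal sketch follows, assuming a placeholder `llm` completion function and an image pre-converted to a text `caption`; both assumptions belong to this illustration, not the paper's pipeline.

```python
def plrh_style_answer(llm, question, caption):
    """Two-stage prompting in the spirit of rationale heuristics.

    llm(prompt) -> str is a placeholder text-completion function.
    """
    # Stage 1: elicit an intermediate rationale for the visual question.
    rationale = llm(
        f"Image description: {caption}\nQuestion: {question}\n"
        "Write a short rationale that could help answer the question."
    )
    # Stage 2: condition the final answer on the generated rationale.
    return llm(
        f"Image description: {caption}\nQuestion: {question}\n"
        f"Rationale: {rationale}\nAnswer the question concisely."
    )
```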
- RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement [85.08223786819532]
Existing large language models (LLMs) show exceptional problem-solving capabilities but might struggle with complex reasoning tasks.
We propose RAG-Star, a novel RAG approach that integrates retrieved information to guide the tree-based deliberative reasoning process.
Our experiments involving Llama-3.1-8B-Instruct and GPT-4o demonstrate that RAG-Star significantly outperforms previous RAG and reasoning methods.
arXiv Detail & Related papers (2024-12-17T13:05:36Z)
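One way to picture retrieval-guided deliberative reasoning is a beam search over reasoning steps in which each candidate step is scored against retrieved evidence. The sketch below is a simplification under that reading; `llm`, `retrieve`, and `verify` are hypothetical callables, and the paper's actual tree search and verification details may differ.

```python
def deliberate(llm, retrieve, verify, question, beam=3, depth=4):
    """Retrieval-verified beam search over reasoning steps (a simplification
    of tree-based deliberation; all callables are placeholders)."""
    frontier = [("", 0.0)]  # (partial reasoning trace, cumulative score)
    for _ in range(depth):
        expansions = []
        for trace, score in frontier:
            for _ in range(beam):
                step = llm(f"Question: {question}\nSo far: {trace}\nNext step:")
                # Reward steps that are consistent with retrieved passages.
                evidence = retrieve(question + " " + step)
                expansions.append((trace + step, score + verify(step, evidence)))
        # Keep only the best `beam` partial traces for the next level.
        frontier = sorted(expansions, key=lambda e: -e[1])[:beam]
    return frontier[0][0]
```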
- ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement [70.09541267910974]
Post-training Large Language Models (LLMs) with explicit reasoning trajectories can enhance their reasoning abilities.
Existing self-synthesizing methods suffer from poor generalization to out-of-domain (OOD) reasoning tasks.
We propose Reasoning Generalist via Self-Improvement (ReGenesis), a method to self-synthesize reasoning paths as post-training data.
arXiv Detail & Related papers (2024-10-03T00:09:15Z)
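Self-synthesizing reasoning paths for post-training generally follows a generate-then-filter pattern: sample reasoning paths, keep those that reach a verifiable answer, and fine-tune on the survivors. The sketch below shows that generic pattern, not the paper's exact recipe; `llm`, `extract_answer`, and the `Answer:` marker are assumptions.

```python
def self_synthesize(llm, tasks, n=8):
    """Collect self-generated reasoning paths as post-training data.

    Each task is (question, reference_answer); llm is a placeholder sampler.
    """
    dataset = []
    for question, reference in tasks:
        for _ in range(n):
            path = llm(f"Question: {question}\nReason step by step, then answer.")
            # Keep only paths whose final answer matches the reference, so the
            # synthesized data is grounded in verifiable outcomes.
            if extract_answer(path) == reference:
                dataset.append({"prompt": question, "completion": path})
    return dataset

def extract_answer(path):
    # Toy extractor: assume the answer follows a final "Answer:" marker.
    return path.rsplit("Answer:", 1)[-1].strip()
```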
- Ranking Generated Answers: On the Agreement of Retrieval Models with Humans on Consumer Health Questions [25.158868133182025]
We present a method for evaluating the output of generative large language models (LLMs).
Our scoring method correlates with the preferences of human experts.
We validate it by investigating the well-known fact that the quality of generated answers improves with the size of the model.
arXiv Detail & Related papers (2024-08-19T09:27:45Z)
- Recursive Introspection: Teaching Language Model Agents How to Self-Improve [30.086494067593268]
We develop RISE: Recursive IntroSpEction, an approach for fine-tuning large language models.
Our experiments show that RISE enables Llama2, Llama3, and Mistral models to improve themselves with more turns on math reasoning tasks.
arXiv Detail & Related papers (2024-07-25T17:35:59Z)
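Recursive introspection can be caricatured at inference time as an answer-critique-revise loop over multiple turns; RISE itself fine-tunes models on such multi-turn data, which this sketch does not show. All callables below are placeholders.

```python
def introspect(llm, question, turns=3):
    """Multi-turn answer-critique-revise loop (an inference-time
    simplification; llm is a placeholder completion function)."""
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(turns - 1):
        # Ask the model to critique and then revise its previous attempt.
        critique = llm(f"Question: {question}\nAttempt: {answer}\n"
                       "Point out any mistakes in the attempt.")
        answer = llm(f"Question: {question}\nAttempt: {answer}\n"
                     f"Critique: {critique}\nWrite an improved answer.")
    return answer
```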
- Leveraging LLM Reasoning Enhances Personalized Recommender Systems [25.765908301183188]
We show that applying Large Language Model (LLM) reasoning in recommender systems (RecSys) presents a distinct challenge.
Our study explores several aspects to better understand reasoning for RecSys and demonstrates how task quality improves.
arXiv Detail & Related papers (2024-07-22T20:18:50Z)
- Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns how to explain past stock movements through self-reasoning, while a PPO trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Self-Knowledge Guided Retrieval Augmentation for Large Language Models [59.771098292611846]
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
Retrieval-based methods can offer non-parametric world knowledge and improve the performance on tasks such as question answering.
Self-Knowledge guided Retrieval augmentation (SKR) is a simple yet effective method that lets LLMs refer to the questions they have previously encountered.
arXiv Detail & Related papers (2023-10-08T04:22:33Z)
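A minimal reading of self-knowledge guided retrieval: before retrieving, check whether the model previously answered similar questions correctly from its own knowledge, and skip retrieval when it did. The sketch below encodes that heuristic; `memory`, `sim`, `retrieve`, and the threshold are placeholder ingredients of this illustration, not the paper's exact method.

```python
def skr_answer(llm, retrieve, question, memory, sim, threshold=0.8):
    """Adaptive retrieval guided by self-knowledge (hypothetical sketch).

    memory maps past questions to whether the bare model answered them
    correctly; sim(a, b) is any text-similarity function in [0, 1].
    """
    # Find the most similar previously-encountered question, if any.
    nearest = max(memory, key=lambda q: sim(q, question), default=None)
    knew_it = (nearest is not None
               and sim(nearest, question) >= threshold
               and memory[nearest])
    if knew_it:
        # The model handled similar questions from parametric knowledge alone.
        return llm(f"Question: {question}\nAnswer:")
    # Otherwise fall back to retrieval-augmented answering.
    passages = retrieve(question)
    return llm(f"Context: {passages}\nQuestion: {question}\nAnswer:")
```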
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs, which is helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z)
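The debate framework can be sketched as two agents exchanging arguments for a fixed number of rounds, with a judge reading the transcript and producing the final solution. Below is a minimal, hypothetical rendering; the agent and judge callables and prompt wording are assumptions, not the paper's implementation.

```python
def multi_agent_debate(agent_a, agent_b, judge, question, rounds=3):
    """Two debaters argue "tit for tat"; a judge decides (placeholder sketch)."""
    transcript = []
    claim = agent_a(f"Question: {question}\nState your answer and argument.")
    transcript.append(("A", claim))
    for _ in range(rounds):
        # Agent B must disagree, forcing divergent lines of thought.
        rebuttal = agent_b(f"Question: {question}\nOpponent said: {claim}\n"
                           "Disagree and argue for a different view.")
        transcript.append(("B", rebuttal))
        claim = agent_a(f"Question: {question}\nOpponent said: {rebuttal}\n"
                        "Respond and defend or revise your answer.")
        transcript.append(("A", claim))
    # The judge reads the whole debate and extracts a final solution.
    debate_text = "\n".join(f"{who}: {text}" for who, text in transcript)
    return judge(f"Question: {question}\nDebate:\n{debate_text}\n"
                 "Give the final answer.")
```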