Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning
 - URL: http://arxiv.org/abs/2410.22304v1
 - Date: Tue, 29 Oct 2024 17:50:31 GMT
 - Title: Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning
 - Authors: Yihe Deng, Paul Mineiro
 - Abstract summary: This paper introduces a novel approach to producing high-quality reasoning traces for Large Language Model fine-tuning.
Our method employs an incremental output production Flow, where component LLMs collaboratively construct solutions.
We train the Flow using online Direct Preference Optimization (DPO) learning with rollouts, generating DPO pairs for each training example and updating models in real-time.
 - Score: 14.156753196673598
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract: Mathematical reasoning is a crucial capability for Large Language Models (LLMs), yet generating detailed and accurate reasoning traces remains a significant challenge. This paper introduces a novel approach to produce high-quality reasoning traces for LLM fine-tuning using online learning Flows. Our method employs an incremental output production Flow, where component LLMs collaboratively construct solutions through iterative communication. We train the Flow using online Direct Preference Optimization (DPO) learning with rollouts, generating DPO pairs for each training example and updating models in real-time. We directly compare the quality of reasoning traces generated by our method with those produced through direct model inference, demonstrating the effectiveness of our approach in improving LLM performance in mathematical reasoning tasks.
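To make the training loop concrete, below is a minimal sketch (not the authors' released code) of how DPO pairs might be harvested from Flow rollouts: at each incremental step, competing candidate chunks are scored by completing the partial trace several times and checking the final answer, and any two chunks with differing success rates form a (chosen, rejected) pair. All function names, the chunk proposer, the rollout count, and the "[DONE]" stop marker are illustrative assumptions.

```python
# Minimal sketch of online DPO pair generation from Flow rollouts.
# Names (propose_chunk, complete, is_correct, "[DONE]") are hypothetical;
# the paper's actual component models and chunking may differ.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class DPOPair:
    prompt: str    # problem statement plus the partial reasoning trace so far
    chosen: str    # chunk whose rollouts reached the correct answer more often
    rejected: str  # competing chunk with a lower rollout success rate

def rollout_success_rate(
    prompt: str,
    chunk: str,
    complete: Callable[[str], str],    # finishes a partial trace into a full solution
    is_correct: Callable[[str], bool], # checks the final answer against ground truth
    n_rollouts: int = 4,
) -> float:
    """Estimate a candidate chunk's value by finishing the trace
    n_rollouts times and scoring the final answer each time."""
    wins = sum(is_correct(complete(prompt + chunk)) for _ in range(n_rollouts))
    return wins / n_rollouts

def collect_dpo_pairs(
    question: str,
    propose_chunk: Callable[[str], str],  # incremental generation step of the answer LLM
    complete: Callable[[str], str],
    is_correct: Callable[[str], bool],
    max_steps: int = 8,
    candidates_per_step: int = 2,
) -> Tuple[str, List[DPOPair]]:
    """Build one reasoning trace chunk by chunk; at each step, compare
    candidate chunks by rollout success and record a (chosen, rejected)
    pair whenever the candidates disagree."""
    trace, pairs = question, []
    for _ in range(max_steps):
        candidates = [propose_chunk(trace) for _ in range(candidates_per_step)]
        scored = sorted(
            ((rollout_success_rate(trace, c, complete, is_correct), c) for c in candidates),
            reverse=True,
        )
        (best_score, best), (worst_score, worst) = scored[0], scored[-1]
        if best_score > worst_score:  # only keep informative pairs
            pairs.append(DPOPair(prompt=trace, chosen=best, rejected=worst))
        trace += best  # greedily extend the trace with the better chunk
        if trace.strip().endswith("[DONE]"):  # hypothetical stop signal
            break
    return trace, pairs
```

In this sketch, the two responses in each pair share a common prefix and differ only in the next chunk, which keeps the preference signal local to a single reasoning step rather than an entire solution.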
 
       
      
Related papers
        - Revisiting LLM Reasoning via Information Bottleneck [57.519119962528166]
Large language models (LLMs) have recently demonstrated remarkable progress in reasoning capabilities through reinforcement learning with verifiable rewards (RLVR).
We present a theoretical characterization of LLM reasoning grounded in the information bottleneck (IB) principle.
We propose IB-aware reasoning optimization (IBRO), a framework that encourages reasoning trajectories to be both informative about the final correct answer and generalizable.
arXiv Detail & Related papers (2025-07-24T13:14:25Z)
- Efficient Model Selection for Time Series Forecasting via LLMs [52.31535714387368]
We propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection.
Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs.
arXiv Detail & Related papers (2025-04-02T20:33:27Z)
- From Human Annotation to LLMs: SILICON Annotation Workflow for Management Research [13.818244562506138]
Large Language Models (LLMs) provide a cost-effective and efficient alternative to human annotation.
This paper introduces the "SILICON" (Systematic Inference with LLMs for Information Classification and Notation) workflow.
The workflow integrates established principles of human annotation with systematic prompt optimization and model selection.
arXiv Detail & Related papers (2024-12-19T02:21:41Z)
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z)
- Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification [76.14641982122696]
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control.
We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.
arXiv Detail & Related papers (2024-10-07T23:38:58Z)
- zsLLMCode: An Effective Approach for Functional Code Embedding via LLM with Zero-Shot Learning [6.976968804436321]
Large language models (LLMs) have the capability of zero-shot learning, which does not require training or fine-tuning.
We propose zsLLMCode, a novel approach that generates functional code embeddings using LLMs.
arXiv Detail & Related papers (2024-09-23T01:03:15Z)
- The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities [0.35998666903987897]
This report examines the fine-tuning of Large Language Models (LLMs).
It outlines the historical evolution of LLMs from traditional Natural Language Processing (NLP) models to their pivotal role in AI.
The report introduces a structured seven-stage pipeline for fine-tuning LLMs.
arXiv Detail & Related papers (2024-08-23T14:48:02Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding LLMs' decoding process with deliberative planning.
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- Aligning Language Models with Demonstrated Feedback [58.834937450242975]
Demonstration ITerated Task Optimization (DITTO) directly aligns language model outputs to a user's demonstrated behaviors.
We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts.
arXiv Detail & Related papers (2024-06-02T23:13:56Z)
- Online Cascade Learning for Efficient Inference over Streams [9.516197133796437]
Large Language Models (LLMs) have a natural role in answering complex queries about data streams.
We propose online cascade learning, the first approach to address this challenge.
We formulate the task of learning cascades online as an imitation-learning problem.
arXiv Detail & Related papers (2024-02-07T01:46:50Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
A Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements on code generation tasks.
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline [22.08897444328099]
Large language models (LLMs) have revolutionized the field of AI, demonstrating unprecedented capacity across various tasks.
In this paper, we propose an efficient LLM inference pipeline that harnesses the power of LLMs.
arXiv Detail & Related papers (2023-05-22T15:36:06Z)
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.