DSFT: Inspiring Diffusion Large Language Models to Comprehend Mathematical and Logical Patterns
- URL: http://arxiv.org/abs/2509.18164v1
- Date: Wed, 17 Sep 2025 06:46:51 GMT
- Title: DSFT: Inspiring Diffusion Large Language Models to Comprehend Mathematical and Logical Patterns
- Authors: Ranfei Chen, Ming Chen
- Abstract summary: Diffusion large language models (dLLMs) have emerged as a new architecture following autoregressive models. They present significant challenges in learning and understanding numerically sensitive mathematical and order-sensitive logical tasks. We propose DSFT, a simple yet effective Diffusion SFT strategy that adjusts the masking strategy and loss function.
- Score: 4.193537335690018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion large language models (dLLMs) have emerged as a new architecture following autoregressive models. Their denoising process offers a powerful generative advantage, but they face significant challenges in learning and understanding numerically sensitive mathematical and order-sensitive logical tasks. Current training methods, including pre-training, fine-tuning, and reinforcement learning, focus primarily on improving general knowledge retention and reasoning abilities, but do not instill a comprehensive understanding of mathematical and logical patterns. We propose DSFT, a simple yet effective Diffusion SFT strategy that adjusts the masking strategy and loss function to guide models toward understanding mathematical and logical patterns. This strategy can be flexibly combined with pre-training, reinforcement learning, and other training methods. Validated on models such as the LLaDA and Dream series, we show that DSFT on small-scale data achieves improvements of 5-10% on mathematical problems and approximately 2% on logical problems. This masking approach offers insights for future learning of specific patterns, and it can be easily and efficiently combined with other training methods and applied to various dLLMs. Our code is publicly available at https://anonymous.4open.science/r/DSFT-0FFB/
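To make the kind of objective concrete, the sketch below shows a generic masked-SFT training step for a dLLM (LLaDA-style: mask response tokens at a random ratio, predict them, and weight the cross-entropy by the inverse mask probability), with a hypothetical `pattern_weights` hook illustrating where a DSFT-style, pattern-aware masking bias could be plugged in. The mask-token id, the weighting heuristic, and the model interface are assumptions made for illustration, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch of a masked-SFT step for a diffusion LLM, with a hypothetical
# hook for pattern-aware masking. All names and constants here are assumptions.
import torch
import torch.nn.functional as F

MASK_ID = 126336  # placeholder: the actual [MASK] token id depends on the tokenizer


def pattern_weights(response_ids, sensitive_ids):
    # Hypothetical heuristic: upweight numerically/logically sensitive tokens
    # (digits, operators, connectives) so they are masked and trained on more often.
    w = torch.ones_like(response_ids, dtype=torch.float)
    w[torch.isin(response_ids, sensitive_ids)] = 2.0
    return w


def masked_sft_loss(model, prompt_ids, response_ids, sensitive_ids):
    # Sample a masking ratio t ~ U(0, 1] per sequence, as in masked-diffusion SFT.
    b, _ = response_ids.shape
    t = torch.rand(b, 1, device=response_ids.device).clamp(min=1e-3)
    # Bias per-token mask probabilities toward pattern-relevant tokens (assumption).
    p_mask = (t * pattern_weights(response_ids, sensitive_ids)).clamp(max=1.0)
    masked = torch.bernoulli(p_mask).bool()
    noisy = torch.where(masked, torch.full_like(response_ids, MASK_ID), response_ids)
    # Prompt tokens stay clean; the model only denoises masked response tokens.
    input_ids = torch.cat([prompt_ids, noisy], dim=1)
    logits = model(input_ids).logits[:, prompt_ids.shape[1]:, :]
    # Cross-entropy on masked positions only, importance-weighted by 1 / p_mask.
    ce = F.cross_entropy(logits.transpose(1, 2), response_ids, reduction="none")
    return (ce * masked * (1.0 / p_mask)).sum() / masked.sum().clamp(min=1)
```

In this sketch, the `pattern_weights` hook and the masked-position loss weighting are the two places where a DSFT-like method would intervene; everything else is the standard masked-diffusion SFT recipe.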
Related papers
- DéjàQ: Open-Ended Evolution of Diverse, Learnable and Verifiable Problems [19.381443841718596]
We introduce DéjàQ, a framework that evolves a diverse set of synthetic mathematical problems alongside model training. This evolutionary process adapts to the model's ability throughout training, optimising problems for learnability. We find that the model can generate novel and meaningful problems, and that these LLM-driven mutations improve RL training.
arXiv Detail & Related papers (2026-01-05T09:27:49Z) - Nested Learning: The Illusion of Deep Learning Architectures [57.41377373511876]
We present a new learning paradigm, called Nested Learning (NL), that coherently represents a machine learning model with a set of nested, multi-level, and/or parallel problems. We present three core contributions, showing that expressive architectures are in fact generalizations with deep memory and/or more powerful learning rules. We also present a new continuum for memory systems that generalizes the traditional viewpoint of long/short-term memory.
arXiv Detail & Related papers (2025-12-31T07:59:43Z) - Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models [49.911784762244814]
TraceRL is a trajectory-aware reinforcement learning framework for diffusion language models (DLMs). We derive a series of state-of-the-art diffusion language models, namely TraDo. TraDo-8B-Instruct achieves relative accuracy improvements of 6.1% over Qwen2.5-7B-Instruct and 51.3% over Llama3.1-8B-Instruct on mathematical reasoning benchmarks.
arXiv Detail & Related papers (2025-09-08T17:58:06Z) - SPaRFT: Self-Paced Reinforcement Fine-Tuning for Large Language Models [51.74498855100541]
Large language models (LLMs) have shown strong reasoning capabilities when fine-tuned with reinforcement learning (RL). We propose SPaRFT, a self-paced learning framework that enables efficient learning based on the capability of the model being trained.
arXiv Detail & Related papers (2025-08-07T03:50:48Z) - Reinforcement Fine-Tuning Enables MLLMs Learning Novel Tasks Stably [80.36077974826865]
Post-training algorithms such as Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT) are widely used to adapt multimodal large language models to downstream tasks. We study the behavior of SFT and RFT on an open-source multimodal model, Qwen2.5-VL. Our experiments reveal a sharp trade-off: SFT enables rapid task acquisition but leads to catastrophic forgetting, whereas RFT learns more slowly on novel tasks but maintains prior knowledge.
arXiv Detail & Related papers (2025-06-30T04:15:01Z) - Learn to Think: Bootstrapping LLM Reasoning Capability Through Graph Representation Learning [19.75678229122211]
Large Language Models (LLMs) have achieved remarkable success across various domains. They still face significant challenges, including high computational costs for training and limitations in solving complex reasoning problems. We propose a novel framework that leverages graph learning to enable more flexible and adaptive reasoning capabilities.
arXiv Detail & Related papers (2025-05-09T02:51:22Z) - SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning [49.29200323760457]
Large Language Models (LLMs) can transfer their reasoning skills to smaller models.
Smaller models are not expressive enough to fit the LLM's distribution across all strategies when distilled.
This reliance on one strategy poses a challenge for smaller models when attempting to solve reasoning tasks that may be difficult with their preferred strategy.
arXiv Detail & Related papers (2024-10-24T09:29:18Z) - Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z) - Model Sparsity Can Simplify Machine Unlearning [33.18951938708467]
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process.
Our study introduces a novel model-based perspective: model sparsification via weight pruning.
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner.
arXiv Detail & Related papers (2023-04-11T02:12:02Z)