A Theoretical Framework for LLM Fine-tuning Using Early Stopping for Non-random Initialization
- URL: http://arxiv.org/abs/2602.13942v1
- Date: Sun, 15 Feb 2026 00:43:21 GMT
- Title: A Theoretical Framework for LLM Fine-tuning Using Early Stopping for Non-random Initialization
- Authors: Zexuan Sun, Garvesh Raskutti
- Abstract summary: A central question is why only a few epochs of fine-tuning are typically sufficient to achieve strong performance on many different tasks. We develop a statistical framework, combining rigorous early stopping theory with the attention-based Neural Tangent Kernel (NTK) for large language models.
- Score: 2.635536317968963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of large language models (LLMs), fine-tuning pretrained models has become ubiquitous. Yet the theoretical underpinning remains an open question. A central question is why only a few epochs of fine-tuning are typically sufficient to achieve strong performance on many different tasks. In this work, we approach this question by developing a statistical framework, combining rigorous early stopping theory with the attention-based Neural Tangent Kernel (NTK) for LLMs, offering new theoretical insights on fine-tuning practices. Specifically, we formally extend classical NTK theory [Jacot et al., 2018] to non-random (i.e., pretrained) initializations and provide a convergence guarantee for attention-based fine-tuning. One key insight provided by the theory is that the convergence rate with respect to sample size is closely linked to the eigenvalue decay rate of the empirical kernel matrix induced by the NTK. We also demonstrate how the framework can be used to explain task vectors for multiple tasks in LLMs. Finally, experiments with modern language models on real-world datasets provide empirical evidence supporting our theoretical insights.
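The abstract's key insight, that the convergence rate of early-stopped training tracks the eigenvalue decay of the empirical kernel matrix induced by the NTK, can be illustrated with a minimal sketch. The snippet below is not the paper's construction: it substitutes an RBF kernel on synthetic 1-D data for the attention-based NTK and uses the closed-form gradient-flow solution of kernel least squares, in which the component of the labels along the i-th kernel eigenvector is learned at a rate proportional to the i-th eigenvalue, so faster eigenvalue decay means fewer directions are learned by any fixed stopping time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# RBF kernel as an illustrative stand-in for the empirical NTK matrix.
d2 = (X[:, None, :] - X[None, :, :]) ** 2
K = np.exp(-d2.sum(-1) / 0.1)

eigvals, eigvecs = np.linalg.eigh(K)

def f_at_time(t):
    # Gradient flow on kernel least squares has the closed form
    #   f_t = sum_i (1 - exp(-lam_i * t / n)) <v_i, y> v_i,
    # so directions with large eigenvalues lam_i are fitted first
    # and early stopping truncates the low-eigenvalue tail.
    shrink = 1.0 - np.exp(-eigvals * t / n)
    return eigvecs @ (shrink * (eigvecs.T @ y))

for t in (1.0, 10.0, 100.0):
    mse = np.mean((f_at_time(t) - y) ** 2)
    print(f"t={t:6.1f}  train MSE={mse:.4f}")
```

The printed training error shrinks monotonically in `t`; how quickly it shrinks, and how much noise is absorbed before a good stopping time, is governed by how fast `eigvals` decays.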
Related papers
- On Multi-Step Theorem Prediction via Non-Parametric Structural Priors [50.16583672681106]
In this work, we explore training-free theorem prediction through the lens of in-context learning (ICL). We propose Theorem Precedence Graphs, which encode temporal dependencies from historical solution traces as directed graphs, and impose explicit topological constraints that effectively prune the search space during inference. Experiments on the FormalGeo7k benchmark show that our method achieves 89.29% accuracy, substantially outperforming ICL baselines and matching state-of-the-art supervised models.
arXiv Detail & Related papers (2026-03-05T06:08:50Z)
- How and Why LLMs Generalize: A Fine-Grained Analysis of LLM Reasoning from Cognitive Behaviors to Low-Level Patterns [51.02752099869218]
Large Language Models (LLMs) display strikingly different generalization behaviors. We introduce a novel benchmark that decomposes reasoning into atomic core skills. We show that RL-tuned models maintain more stable behavioral profiles and resist collapse in reasoning skills, whereas SFT models exhibit sharper drift and overfit to surface patterns.
arXiv Detail & Related papers (2025-12-30T08:16:20Z)
- ReNF: Rethinking the Design Space of Neural Long-Term Time Series Forecasters [48.79331759671512]
We introduce a Multiple Neural Forecasting Theorem that provides a theoretical basis for our approach. We propose Boosted Direct Output (BDO), a novel forecasting strategy that combines the advantages of both Auto-Regressive (AR) and Direct Output (DO).
arXiv Detail & Related papers (2025-09-30T08:05:59Z)
- Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models [64.02612380298228]
Recent studies have explored the idea of continuing to train a model at test-time for a given task, known as test-time training (TTT). We propose a model in which TTT achieves a substantially smaller in-distribution test error than global training. We empirically validate our model's key assumptions by training a sparse autoencoder on ImageNet.
arXiv Detail & Related papers (2025-09-29T09:24:52Z)
- CoT-Space: A Theoretical Framework for Internal Slow-Thinking via Reinforcement Learning [14.337056020596465]
CoT-Space is a novel theoretical framework that recasts Large Language Model (LLM) reasoning from a discrete token-prediction task to an optimization process within a continuous, reasoning-level semantic space. We show that the convergence to an optimal CoT length is a natural consequence of the fundamental trade-off between underfitting and overfitting.
arXiv Detail & Related papers (2025-09-04T09:02:16Z)
- Near-Optimal Sample Complexity in Reward-Free Kernel-Based Reinforcement Learning [18.784248829596486]
We ask how many samples are required to design a near-optimal policy in kernel-based RL. Existing work addresses this question under restrictive assumptions about the class of kernel functions. We tackle this fundamental problem using a broad class of kernels and a simpler algorithm compared to prior work.
arXiv Detail & Related papers (2025-02-11T17:15:55Z)
- Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs [63.36637269634553]
We introduce a novel approach where LLMs are fine-tuned to generate a sequence of Diverse Chains of Thought (DCoT) within a single inference step. We show that fine-tuning on DCoT improves performance over the CoT baseline across model families and scales. Our work is also significant because both quantitative analyses and manual evaluations reveal that the observed gains stem from the models' ability to refine an initial reasoning chain.
arXiv Detail & Related papers (2024-07-03T15:01:18Z)
- A Kernel-Based View of Language Model Fine-Tuning [94.75146965041131]
We investigate whether the Neural Tangent Kernel (NTK) describes fine-tuning of pre-trained LMs.
We show that formulating the downstream task as a masked word prediction problem through prompting often induces kernel-based dynamics during fine-tuning.
arXiv Detail & Related papers (2022-10-11T17:34:32Z)
- The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks [1.6519302768772166]
We derive simple closed-form estimates for the test risk and other generalization metrics of kernel ridge regression.
We identify a sharp conservation law which limits the ability of KRR to learn any orthonormal basis of functions.
arXiv Detail & Related papers (2021-10-08T06:32:07Z)
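Several of the entries above (the kernel-based view of fine-tuning and the Eigenlearning framework) analyze learning through kernel ridge regression (KRR). A minimal sketch of the mechanism they study: the ridge solution retains the fraction lam / (lam + ridge) of each kernel eigenmode, so high-eigenvalue modes are learned nearly perfectly while low-eigenvalue modes are suppressed. The RBF kernel and synthetic data below are illustrative assumptions, not taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.uniform(-1, 1, (n, 1))
y = np.cos(2 * X[:, 0])

# RBF kernel matrix on the training inputs (illustrative choice).
d2 = (X[:, None, :] - X[None, :, :]) ** 2
K = np.exp(-d2.sum(-1) / 0.2)

ridge = 1e-2
# KRR dual coefficients and in-sample predictions.
alpha = np.linalg.solve(K + ridge * np.eye(n), y)
pred = K @ alpha

# Per-mode shrinkage: KRR keeps the fraction lam_i / (lam_i + ridge)
# of the i-th kernel eigenmode (eigh returns eigenvalues ascending),
# so the highest-eigenvalue modes are retained almost entirely.
lam, V = np.linalg.eigh(K)
shrink = lam / (lam + ridge)
print("largest-mode retention: ", shrink[-1])
print("smallest-mode retention:", shrink[0])
print("train MSE:", np.mean((pred - y) ** 2))
```

The retention fractions `shrink` play the same role as the gradient-flow factors in early-stopped training, which is why ridge regularization and early stopping are often analyzed interchangeably in this literature.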
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.