Learning Regularizers: Learning Optimizers that can Regularize
- URL: http://arxiv.org/abs/2510.08968v1
- Date: Fri, 10 Oct 2025 03:23:28 GMT
- Title: Learning Regularizers: Learning Optimizers that can Regularize
- Authors: Suraj Kumar Sahoo, Narayanan C Krishnan
- Abstract summary: We show that Learned Optimizers (LOs) can be trained to internalize the effects of traditional regularization techniques without explicitly applying them to the objective function. Regularized LOs consistently outperform their unregularized counterparts in terms of test accuracy and generalization. Our results suggest that LOs can inherently learn regularization properties, challenging the conventional necessity of explicit optimizee loss regularization.
- Score: 0.959678985347394
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learned Optimizers (LOs), a type of Meta-learning, have gained traction due to their ability to be parameterized and trained for efficient optimization. Traditional gradient-based methods incorporate explicit regularization techniques such as Sharpness-Aware Minimization (SAM), Gradient-norm Aware Minimization (GAM), and Gap-guided Sharpness-Aware Minimization (GSAM) to enhance generalization and convergence. In this work, we explore a fundamental question: Can regularizers be learned? We empirically demonstrate that LOs can be trained to learn and internalize the effects of traditional regularization techniques without explicitly applying them to the objective function. We validate this through extensive experiments on standard benchmarks (MNIST, FMNIST, and CIFAR) and neural network architectures (MLP, MLP-ReLU, and CNN), comparing LOs trained with and without access to explicit regularizers. Regularized LOs consistently outperform their unregularized counterparts in terms of test accuracy and generalization. Furthermore, we show that LOs retain and transfer these regularization effects to new optimization tasks by inherently seeking minima similar to those targeted by these regularizers. Our results suggest that LOs can inherently learn regularization properties, challenging the conventional necessity of explicit optimizee loss regularization.
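To make the regularizers mentioned above concrete, the following is a minimal sketch of the standard SAM update on a toy quadratic objective. The objective and all parameter values here are illustrative assumptions, not taken from the paper; SAM's two-step rule (ascend to an approximate worst-case point within a radius-rho ball, then descend with the gradient computed there) is the published algorithm.

```python
import numpy as np

# Toy quadratic objective (illustrative, not from the paper):
# L(w) = 0.5 * w^T A w, with gradient A w. The large eigenvalue makes
# one direction "sharp", the kind of curvature SAM is designed to penalize.
A = np.diag([10.0, 1.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

def sam_step(w, lr=0.05, rho=0.05):
    """One SAM update: take a normalized ascent step of radius rho to an
    approximate worst-case point, then descend using the gradient there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    return w - lr * grad(w + eps)                # descend from perturbed point

w = np.array([1.0, 1.0])
for _ in range(200):
    w = sam_step(w)
```

Because the perturbation radius rho is fixed, the iterates settle into a small neighborhood of the minimum rather than converging exactly, which is the expected behavior of plain SAM on this toy problem.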
Related papers
- Data Distribution as a Lever for Guiding Optimizers Toward Superior Generalization in LLMs [60.68927774057402]
We show, for the first time, that a lower simplicity bias induces better generalization. Motivated by this insight, we demonstrate that reshaping the training data distribution, by upsampling or augmenting examples learned later in training, similarly reduces SB and leads to improved generalization. Our strategy improves the performance of multiple language models, including Phi2-2.7B, Llama3.2-1B, Gemma3-1B-PT, and Qwen3-0.6B-Base, achieving relative accuracy gains of up to 18% when fine-tuned with AdamW and Muon.
arXiv Detail & Related papers (2026-01-31T07:40:36Z) - Beyond Sharpness: A Flatness Decomposition Framework for Efficient Continual Learning [27.583428955764774]
Continual Learning aims to enable models to sequentially learn multiple tasks without forgetting previous knowledge. Existing sharpness-aware methods for Continual Learning suffer from two key limitations. We propose FLAD, a novel optimization framework that decomposes perturbations into sharpness-aligned and gradient-noise components.
arXiv Detail & Related papers (2026-01-12T15:17:04Z) - Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning [51.07663354001582]
Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed Information Maximization (IM) regularizer, for memory-based continual learning methods.
arXiv Detail & Related papers (2025-12-01T15:56:00Z) - Frustratingly Easy Task-aware Pruning for Large Language Models [33.84349099489764]
We propose a simple yet effective pruning approach for large language models (LLMs). Our framework computes separate importance scores using both general and task-specific calibration data. Experiments on widely used benchmarks demonstrate that our approach is effective and consistently outperforms the baselines.
arXiv Detail & Related papers (2025-10-26T02:09:22Z) - SFT Doesn't Always Hurt General Capabilities: Revisiting Domain-Specific Fine-Tuning in LLMs [53.77646961962239]
Supervised Fine-Tuning (SFT) is a common approach to adapt Large Language Models (LLMs) to specialized tasks. We show that SFT does not always hurt: using a smaller learning rate can substantially mitigate general performance degradation.
arXiv Detail & Related papers (2025-09-25T05:28:22Z) - TopoSizing: An LLM-aided Framework of Topology-based Understanding and Sizing for AMS Circuits [7.615431299673158]
Traditional black-box optimization achieves sampling efficiency but lacks circuit understanding. We propose TopoSizing, an end-to-end framework that performs robust circuit understanding directly from raw netlists.
arXiv Detail & Related papers (2025-09-17T16:52:46Z) - Accelerated Learning with Linear Temporal Logic using Differentiable Simulation [21.84092672461171]
Traditional safety assurance approaches, such as state avoidance and constrained Markov decision processes, often inadequately capture trajectory requirements. We propose the first method that integrates LTL specifications with differentiable simulators, facilitating efficient gradient-based learning directly from specifications. Our approach introduces soft labeling to achieve differentiable rewards and states, effectively mitigating the sparse-reward issue intrinsic to LTL objectives without compromising objective correctness.
arXiv Detail & Related papers (2025-06-01T20:59:40Z) - Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification [53.727688136434345]
Graph Neural Networks (GNNs) have shown superior performance in node classification.
We present Fast Graph Sharpness-Aware Minimization (FGSAM) that integrates the rapid training of Multi-Layer Perceptrons with the superior performance of GNNs.
Our proposed algorithm outperforms the standard SAM with lower computational costs in FSNC tasks.
arXiv Detail & Related papers (2024-10-22T09:33:29Z) - Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain the stability in terms of zero-shot generalization of VLMs, dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
arXiv Detail & Related papers (2024-07-11T10:35:53Z) - Learning Continually by Spectral Regularization [45.55508032009977]
Continual learning algorithms seek to mitigate loss of plasticity by sustaining good performance while maintaining network trainability.
We develop a new technique for improving continual learning inspired by the observation that the singular values of the neural network parameters at initialization are an important factor for trainability during early phases of learning.
We present an experimental analysis that shows how the proposed spectral regularizer can sustain trainability and performance across a range of model architectures in continual supervised and reinforcement learning settings.
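The spectral regularizer described above can be sketched as a penalty on the drift of a weight matrix's singular-value spectrum from its values at initialization. The form below is one plausible instantiation under that reading of the abstract; the paper's exact regularizer, matrix shapes, and coefficient `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W0 = rng.normal(size=(8, 8)) / np.sqrt(8)    # weights at initialization
s0 = np.linalg.svd(W0, compute_uv=False)     # reference singular spectrum

def spectral_penalty(W, s_ref=s0, lam=0.1):
    """Penalize deviation of the current singular values from the
    initialization spectrum (illustrative form, not the paper's exact one).
    np.linalg.svd returns singular values in descending order, so the
    elementwise difference compares matching ranks."""
    s = np.linalg.svd(W, compute_uv=False)
    return lam * np.sum((s - s_ref) ** 2)
```

Adding such a term to the training loss would discourage the spectrum from collapsing or exploding during continual training, which matches the trainability motivation stated in the summary.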
arXiv Detail & Related papers (2024-06-10T21:34:43Z) - Regularization and Optimal Multiclass Learning [10.168670899305232]
This work characterizes the role of regularization in perhaps the simplest setting where empirical risk minimization fails: multiclass learning with arbitrary label sets.
Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles.
arXiv Detail & Related papers (2023-09-24T16:49:55Z) - Beyond Losses Reweighting: Empowering Multi-Task Learning via the Generalization Perspective [61.10883077161432]
Multi-task learning (MTL) trains deep neural networks to optimize several objectives simultaneously using a shared backbone. We introduce a novel MTL framework that leverages weight perturbation to regulate gradient norms, thus improving generalization. Our method significantly outperforms existing gradient-based MTL techniques in terms of task performance and overall model robustness.
arXiv Detail & Related papers (2022-11-24T17:19:30Z) - On Connections between Regularizations for Improving DNN Robustness [67.28077776415724]
This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs).
We study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional.
arXiv Detail & Related papers (2020-07-04T23:43:32Z)
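Of the methods listed in that last entry, input-gradient regularization is the simplest to illustrate: the training loss is augmented with a penalty on the squared norm of the loss gradient with respect to the *input*. The toy logistic-regression setup below is a hypothetical example (the weights, data, and coefficient `lam` are assumptions), chosen because the input gradient has a closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_loss(w, x, y, lam=0.1):
    """Logistic loss plus an input-gradient penalty lam * ||dL/dx||^2.
    For L(x) = log(1 + exp(-y * w.x)), the input gradient is
    dL/dx = -y * sigmoid(-y * w.x) * w, so ||dL/dx||^2 has a closed form.
    (Illustrative toy setup, not the surveyed papers' exact formulation.)"""
    m = y * (w @ x)                            # classification margin
    base = np.log1p(np.exp(-m))                # logistic loss
    grad_x_sq = sigmoid(-m) ** 2 * (w @ w)     # ||dL/dx||^2 in closed form
    return base + lam * grad_x_sq

w = np.array([1.0, 2.0])
x = np.array([0.5, -0.5])
y = 1.0
```

Flattening the loss surface with respect to the input in this way is what connects input-gradient regularization to the Jacobian, curvature, and cross-Lipschitz penalties the paper compares.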
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.