Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning
Optimization Landscape
- URL: http://arxiv.org/abs/2201.08025v1
- Date: Thu, 20 Jan 2022 07:13:04 GMT
- Title: Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning
Optimization Landscape
- Authors: Devansh Bisla, Jing Wang, Anna Choromanska
- Abstract summary: We show that LPF-SGD converges to a better optimal point with smaller generalization error than SGD.
We show that our algorithm achieves superior generalization performance compared to the common DL training strategies.
- Score: 15.362190838843915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the sharpness of a deep learning (DL) loss landscape
around local minima in order to reveal systematic mechanisms underlying the
generalization abilities of DL models. Our analysis is performed across varying
network and optimizer hyper-parameters, and involves a rich family of different
sharpness measures. We compare these measures and show that the low-pass
filter-based measure exhibits the highest correlation with the generalization
abilities of DL models, has high robustness to both data and label noise, and
furthermore can track the double descent behavior for neural networks. We next
derive an optimization algorithm, relying on a low-pass filter (LPF), that
actively searches for flat regions in the DL optimization landscape using an
SGD-like procedure. The update of the proposed algorithm, which we call LPF-SGD,
is determined by the gradient of the convolution of the filter kernel with the
loss function and can be efficiently computed using MC sampling. We empirically
show that our algorithm achieves superior generalization performance compared
to the common DL training strategies. On the theoretical front, we prove that
LPF-SGD converges to a better optimal point with smaller generalization error
than SGD.
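To make the LPF-SGD update concrete, the sketch below approximates the gradient of the Gaussian-smoothed loss, grad (K * L)(w), by MC sampling: gradients are averaged over randomly perturbed copies of the weights and then applied as an ordinary optimizer step. This is a minimal PyTorch-style illustration under assumed defaults (`sigma`, `mc_samples`, the helper name `lpf_sgd_step`); the paper's kernel schedule and implementation details may differ.

```python
import copy
import torch

def lpf_sgd_step(model, loss_fn, data, target, optimizer,
                 sigma=0.01, mc_samples=8):
    """One illustrative LPF-SGD step: estimate grad (K * L)(w), the gradient of
    the loss convolved with a Gaussian kernel K, by the MC average
    (1/M) * sum_i grad L(w + eps_i) with eps_i ~ N(0, sigma^2 I)."""
    avg_grads = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(mc_samples):
        noisy = copy.deepcopy(model)                 # copy of the current weights
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))  # sample w + eps
        loss_fn(noisy(data), target).backward()      # grad L(w + eps)
        for g, p in zip(avg_grads, noisy.parameters()):
            if p.grad is not None:
                g.add_(p.grad / mc_samples)          # running MC average
    optimizer.zero_grad()
    for p, g in zip(model.parameters(), avg_grads):
        p.grad = g                                   # smoothed gradient drives the update
    optimizer.step()
```

Because the smoothed gradient simply replaces the usual mini-batch gradient, any SGD-style optimizer (with or without momentum) can be plugged in unchanged.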
Related papers
- LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models [21.889177019111525]
Training large models with millions or even billions of parameters from scratch incurs substantial computational costs.
We use Low-Rank Adaptation (LoRA) to adapt only a reduced number of parameters to specific tasks with gradient-based optimization.
We propose robust approaches that work well across a vast range of well-established computer vision and language models.
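As a reference point for the low-rank adaptation idea mentioned above, here is a minimal LoRA-style layer; it illustrates only the generic LoRA parameterization (frozen base weight plus a trainable low-rank update), not LoKO's Kalman-filter update, and the class name and hyper-parameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA-style wrapper: the pretrained weight stays frozen and only a
    rank-r update B @ A is trained, i.e. r * (in + out) parameters per layer."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                               # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```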
arXiv Detail & Related papers (2024-10-15T12:41:31Z)
- SubZero: Random Subspace Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning [66.27334633749734]
As language models grow in size, memory demands for backpropagation increase.
Zeroth-order (ZO) optimization methods offer a memory-efficient alternative.
We show that SubZero enhances fine-tuning and achieves faster convergence than standard ZO approaches.
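For context, the sketch below shows the basic two-point zeroth-order gradient estimate that such methods build on: the loss is probed along random directions, so no backward pass (and no stored activations) is needed. This is a generic illustration, not SubZero's random-subspace scheme, and the `loss_at` helper that evaluates the loss at a flat parameter vector is an assumption.

```python
import torch

def zo_gradient(w, loss_at, mu=1e-3, samples=4):
    """Two-point zeroth-order estimate of grad L(w): for random directions u,
    (L(w + mu*u) - L(w - mu*u)) / (2*mu) approximates the directional derivative
    along u, and averaging u-weighted differences yields a gradient estimate."""
    grad = torch.zeros_like(w)
    for _ in range(samples):
        u = torch.randn_like(w)
        delta = loss_at(w + mu * u) - loss_at(w - mu * u)
        grad += (delta / (2 * mu)) * u / samples
    return grad
```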
arXiv Detail & Related papers (2024-10-11T17:01:43Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
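To give a feel for what blending logistic and exponential losses on a preference margin can look like, here is a hypothetical sketch; the sigmoid gate and coefficients are placeholders, not the published DiscoPOP objective.

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(margin, beta=0.05, tau=1.0):
    """Illustrative blend on the scaled preference margin (chosen minus rejected
    log-ratio): a sigmoid gate mixes a logistic (DPO-style) loss with an
    exponential loss on the same margin. Sketch of the blending idea only."""
    z = beta * margin
    logistic = -F.logsigmoid(z)        # logistic loss on the margin
    exponential = torch.exp(-z)        # exponential loss on the same margin
    gate = torch.sigmoid(z / tau)      # margin-dependent mixing weight (assumed form)
    return (gate * logistic + (1.0 - gate) * exponential).mean()
```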
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
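The "preference loss plus supervised loss" objective described above can be sketched as follows; the DPO-style preference term and the weighting coefficient are assumptions for illustration, not the paper's exact formulation.

```python
import torch.nn.functional as F

def preference_plus_sft_loss(pi_chosen_logp, pi_rejected_logp,
                             ref_chosen_logp, ref_rejected_logp,
                             beta=0.1, sft_weight=0.5):
    """Combine a DPO-style preference loss with a supervised (SFT) log-likelihood
    term on the chosen responses, so the SFT term acts as a regularizer that keeps
    the policy close to the supervised data while optimizing preferences."""
    margin = beta * ((pi_chosen_logp - ref_chosen_logp)
                     - (pi_rejected_logp - ref_rejected_logp))
    preference_loss = -F.logsigmoid(margin).mean()
    sft_loss = -pi_chosen_logp.mean()          # maximum-likelihood term on chosen data
    return preference_loss + sft_weight * sft_loss
```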
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants demonstrate performance on par with flat-minima optimizers such as SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
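Since the comparison above hinges on SAM requiring twice the gradient evaluations of SGD, a minimal SAM step is sketched below: gradient #1 at the current weights builds the worst-case perturbation, and gradient #2 at the perturbed weights drives the update. Per-layer scaling and closure handling are simplified.

```python
import torch

def sam_step(model, loss_fn, data, target, base_opt, rho=0.05):
    """Minimal SAM step showing why it costs two gradient evaluations per update:
    the first gradient (at w) defines the ascent perturbation e(w); the second
    gradient (at w + e(w)) is what the base optimizer actually applies."""
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()                      # gradient #1
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                                            # climb to worst-case nearby point
            eps.append(e)
            p.grad = None
    loss_fn(model(data), target).backward()                      # gradient #2
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                                        # restore the original weights
    base_opt.step()                                              # update uses gradient #2
```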
arXiv Detail & Related papers (2024-03-01T14:55:22Z)
- Low-Rank Representations Meets Deep Unfolding: A Generalized and Interpretable Network for Hyperspectral Anomaly Detection [41.50904949744355]
Current hyperspectral anomaly detection (HAD) benchmark datasets suffer from low resolution, simple background, and small size of the detection data.
These factors also limit the performance of the well-known low-rank representation (LRR) models in terms of robustness.
We build a new set of HAD benchmark datasets, AIR-HAD for short, to improve the robustness of HAD algorithms in complex scenarios.
arXiv Detail & Related papers (2024-02-23T14:15:58Z)
- SING: A Plug-and-Play DNN Learning Technique [25.563053353709627]
We propose SING (StabIlized and Normalized Gradient), a plug-and-play technique that improves the stability and robustness of Adam(W).
SING is straightforward to implement and has minimal computational overhead, requiring only a layer-wise standardization of the gradients fed to Adam(W).
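As a rough illustration of the layer-wise gradient standardization described above, the sketch below centers and rescales each parameter tensor's gradient before the Adam(W) step; which tensors are included and the exact normalization follow the paper, so treat this only as the general shape of the idea.

```python
import torch

def standardize_gradients(model, eps=1e-8):
    """Layer-wise standardization: before optimizer.step(), replace each parameter
    tensor's gradient with a zero-mean, unit-std version. Tensors with a single
    element are skipped to avoid a degenerate standard deviation."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None or p.grad.numel() < 2:
                continue
            g = p.grad
            p.grad = (g - g.mean()) / (g.std() + eps)
```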
arXiv Detail & Related papers (2023-05-25T12:39:45Z)
- Sample-efficient Iterative Lower Bound Optimization of Deep Reactive Policies for Planning in Continuous MDPs [27.41101006357176]
In this work, we take a minorization-maximization perspective to iteratively optimize the policy w.r.t. a locally tight lower-bounded objective.
This novel formulation of learning as iterative lower bound optimization (ILBO) is particularly appealing because each step is structurally easier to optimize than the overall objective.
Empirical evaluation confirms that ILBO is significantly more sample-efficient than the state-of-the-art planner.
arXiv Detail & Related papers (2022-03-23T19:06:16Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.