Accelerating Neural Network Training Along Sharp and Flat Directions
- URL: http://arxiv.org/abs/2505.11972v1
- Date: Sat, 17 May 2025 12:13:05 GMT
- Title: Accelerating Neural Network Training Along Sharp and Flat Directions
- Authors: Daniyar Zakarin, Sidak Pal Singh
- Abstract summary: We study Bulk-SGD, a variant of SGD that restricts updates to the orthogonal complement of the Dominant subspace. We show that updates along the Bulk subspace, corresponding to flatter directions in the loss landscape, can accelerate convergence but may compromise stability. Our findings suggest a principled approach to designing curvature-aware optimizers.
- Score: 6.576051895863941
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent work has highlighted a surprising alignment between gradients and the top eigenspace of the Hessian -- termed the Dominant subspace -- during neural network training. Concurrently, there has been growing interest in the distinct roles of sharp and flat directions in the Hessian spectrum. In this work, we study Bulk-SGD, a variant of SGD that restricts updates to the orthogonal complement of the Dominant subspace. Through ablation studies, we characterize the stability properties of Bulk-SGD and identify critical hyperparameters that govern its behavior. We show that updates along the Bulk subspace, corresponding to flatter directions in the loss landscape, can accelerate convergence but may compromise stability. To balance these effects, we introduce interpolated gradient methods that unify SGD, Dom-SGD, and Bulk-SGD. Finally, we empirically connect this subspace decomposition to the Generalized Gauss-Newton and Functional Hessian terms, showing that curvature energy is largely concentrated in the Dominant subspace. Our findings suggest a principled approach to designing curvature-aware optimizers.
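To make the decomposition concrete, here is a minimal NumPy sketch of the Bulk-SGD step on a toy quadratic loss (our illustration, not the authors' code). The Hessian is explicit here, so its top-k eigenspace (the Dominant subspace) can be computed exactly; in practice it would be estimated from Hessian-vector products, e.g. via Lanczos:

```python
import numpy as np

def bulk_sgd_step(theta, grad, hessian, k, lr):
    """One Bulk-SGD step: project out the gradient component lying in the
    top-k Hessian eigenspace (the Dominant subspace), then descend along
    the remaining Bulk (flat) directions."""
    eigvals, eigvecs = np.linalg.eigh(hessian)  # eigenvalues in ascending order
    V = eigvecs[:, -k:]                         # columns span the Dominant subspace
    grad_dom = V @ (V.T @ grad)                 # projection onto the Dominant subspace
    grad_bulk = grad - grad_dom                 # Bulk component of the gradient
    return theta - lr * grad_bulk

# Toy quadratic loss L(theta) = 0.5 * theta^T H theta with a sharp/flat split.
rng = np.random.default_rng(0)
H = np.diag([100.0, 90.0, 1.0, 0.5, 0.1])      # two sharp, three flat directions
theta = rng.normal(size=5)
for _ in range(50):
    grad = H @ theta                            # exact gradient of the quadratic
    theta = bulk_sgd_step(theta, grad, H, k=2, lr=0.5)
# Sharp coordinates are frozen; flat coordinates decay, and lr = 0.5 is far
# above the 2/100 threshold plain GD would tolerate on this loss.
print(theta)
```

On this reading, the interpolated methods mentioned in the abstract would take a form like $\alpha\, g_{\mathrm{Dom}} + \beta\, g_{\mathrm{Bulk}}$, recovering SGD ($\alpha=\beta=1$), Dom-SGD ($\beta=0$), and Bulk-SGD ($\alpha=0$); this is our paraphrase, not the paper's notation.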
Related papers
- Description of the Training Process of Neural Networks via Ergodic Theorem: Ghost nodes [3.637162892228131]
We present a unified framework for understanding and accelerating deep neural networks trained via stochastic gradient descent (SGD). We introduce a practical diagnostic, the running estimate of the largest Lyapunov exponent, which distinguishes genuine convergence toward stable minima. We propose a ghost category extension for standard classifiers that adds auxiliary ghost output nodes so the model gains extra descent directions.
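A hypothetical NumPy sketch of the ghost-node idea as summarized here (ours; the function and all shapes are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def ghost_xent_grad(W, x, y):
    """Sketch: W maps features to n_real + n_ghost logits. The label y
    always indexes a real class, but ghost logits enter the softmax, so
    their rows of W still receive gradient (p_ghost * x), giving the
    update extra descent directions beyond the standard head."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0             # softmax cross-entropy: dL/dlogits = p - onehot(y)
    return np.outer(p, x)   # gradient w.r.t. W, including the ghost rows

# Toy head: 10 real classes plus 3 ghost nodes over 32-dim features.
rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(10 + 3, 32))
x, y = rng.normal(size=32), 4
W -= 0.1 * ghost_xent_grad(W, x, y)  # one SGD step on the extended head
```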
arXiv Detail & Related papers (2025-07-01T17:54:35Z) - On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss.
arXiv Detail & Related papers (2024-10-10T15:34:10Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T)/T^{1-\frac{1}{\alpha}})$.
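For reference, the centralized diagonal-AdaGrad update that the over-the-air variant builds on is (notation ours):

```latex
% Diagonal AdaGrad: accumulate squared gradients per coordinate, then
% scale the step inversely by the accumulated magnitude.
G_t = G_{t-1} + g_t \odot g_t, \qquad
\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon} \odot g_t
```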
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Momentum Does Not Reduce Stochastic Noise in Stochastic Gradient Descent [0.6906005491572401]
In deep neural networks, stochastic gradient descent (SGD) with momentum is said to converge faster and generalize better than SGD without momentum; in particular, adding momentum is thought to reduce stochastic batch noise. We analyze the effect of search direction noise, defined as the error between the search direction and the steepest descent direction.
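For context, the heavy-ball update analyzed here, and the noise quantity being measured, can be written as (notation ours):

```latex
% Heavy-ball SGD: v_t is the momentum buffer, g_t a minibatch gradient.
v_{t+1} = \beta v_t + g_t, \qquad \theta_{t+1} = \theta_t - \eta\, v_{t+1}
% Search-direction noise: the gap between the search direction and the
% steepest-descent (full-gradient) direction.
\epsilon_t = v_{t+1} - \nabla L(\theta_t)
```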
arXiv Detail & Related papers (2024-02-04T02:48:28Z) - Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
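For reference, MFLD is typically written as the McKean–Vlasov SDE below (a standard formulation; notation ours):

```latex
% Mean-field Langevin dynamics: a McKean--Vlasov SDE whose drift depends
% on the law \mu_t of the process itself.
dX_t = -\nabla \frac{\delta F}{\delta \mu}(\mu_t)(X_t)\, dt
       + \sqrt{2\lambda}\, dW_t, \qquad \mu_t = \mathrm{Law}(X_t)
% Its stationary law minimizes the entropy-regularized objective
% F(\mu) + \lambda\,\mathrm{Ent}(\mu) over probability measures.
```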
arXiv Detail & Related papers (2023-06-12T16:28:11Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore the parallels between network training and physical systems.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
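A minimal NumPy sketch of the general idea (ours; a generic SGLD loop with without-replacement minibatching, not the paper's exact variant):

```python
import numpy as np

def sgld_epoch(theta, X, y, grad_fn, lr, temperature, batch_size, rng):
    """One SGLD epoch with *without-replacement* minibatching: the data
    are shuffled once and each example is visited exactly once."""
    idx = rng.permutation(len(X))            # without-replacement sampling
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        g = grad_fn(theta, X[batch], y[batch])
        noise = rng.normal(size=theta.shape)
        # Langevin update: gradient step plus temperature-scaled noise.
        theta = theta - lr * g + np.sqrt(2.0 * lr * temperature) * noise
    return theta

# Toy usage: linear regression with the gradient of the mean squared loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=256)
grad_fn = lambda th, Xb, yb: 2 * Xb.T @ (Xb @ th - yb) / len(Xb)
theta = np.zeros(3)
for _ in range(100):
    theta = sgld_epoch(theta, X, y, grad_fn, lr=1e-2, temperature=1e-4,
                       batch_size=32, rng=rng)
```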
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks [59.142826407441106]
We study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability.
We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds.
arXiv Detail & Related papers (2022-09-19T18:48:00Z) - Beyond the Edge of Stability via Two-step Gradient Updates [49.03389279816152]
Gradient Descent (GD) is a powerful workhorse of modern machine learning.
GD's ability to find local minimisers is only guaranteed for losses with Lipschitz gradients.
This work focuses on simple, yet representative, learning problems via analysis of two-step gradient updates.
arXiv Detail & Related papers (2022-06-08T21:32:50Z) - The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion [29.489737359897312]
We study the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD).
We show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space.
arXiv Detail & Related papers (2021-07-19T20:18:57Z) - Do Not Escape From the Manifold: Discovering the Local Coordinates on the Latent Space of GANs [7.443321740418409]
We propose a method to find local-geometry-aware traversal directions on the intermediate latent space of Generative Adversarial Networks (GANs).
Motivated by the intrinsic sparsity of the latent space, the basis is discovered by solving the low-rank approximation problem of the differential of the partial network.
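A minimal NumPy sketch of the underlying mechanism (ours; a toy random map stands in for a pretrained GAN, and a finite-difference Jacobian stands in for autograd): the top right-singular vectors of the local Jacobian serve as local traversal directions.

```python
import numpy as np

def local_latent_directions(f, z, k, eps=1e-4):
    """Estimate the top-k local traversal directions at latent point z
    via SVD of a finite-difference Jacobian of the (partial) network f:
    right singular vectors with the largest singular values are the
    directions along which the output changes most."""
    d = z.shape[0]
    f0 = f(z)
    J = np.zeros((f0.shape[0], d))
    for i in range(d):                       # finite-difference Jacobian
        e = np.zeros(d); e[i] = eps
        J[:, i] = (f(z + e) - f0) / eps
    _, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[:k], s[:k]                     # top-k directions and strengths

# Toy "generator": a fixed random two-layer map from latent to features.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 8)), rng.normal(size=(128, 64))
f = lambda z: W2 @ np.tanh(W1 @ z)
dirs, svals = local_latent_directions(f, rng.normal(size=8), k=3)
```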
arXiv Detail & Related papers (2021-06-13T10:29:42Z)