Taxonomizing local versus global structure in neural network loss
landscapes
- URL: http://arxiv.org/abs/2107.11228v1
- Date: Fri, 23 Jul 2021 13:37:14 GMT
- Authors: Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E.
Gonzalez, Kannan Ramchandran, Michael W. Mahoney
- Abstract summary: We show that the best test accuracy is obtained when the loss landscape is globally well-connected.
We also show that globally poorly-connected landscapes can arise when models are small or when they are trained to lower quality data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Viewing neural network models in terms of their loss landscapes has a long
history in the statistical mechanics approach to learning, and in recent years
it has received attention within machine learning proper. Among other things,
local metrics (such as the smoothness of the loss landscape) have been shown to
correlate with global properties of the model (such as good generalization).
Here, we perform a detailed empirical analysis of the loss landscape structure
of thousands of neural network models, systematically varying learning tasks,
model architectures, and/or quantity/quality of data. By considering a range of
metrics that attempt to capture different aspects of the loss landscape, we
demonstrate that the best test accuracy is obtained when: the loss landscape is
globally well-connected; ensembles of trained models are more similar to each
other; and models converge to locally smooth regions. We also show that
globally poorly-connected landscapes can arise when models are small or when
they are trained to lower quality data; and that, if the loss landscape is
globally poorly-connected, then training to zero loss can actually lead to
worse test accuracy. Based on these results, we develop a simple
one-dimensional model with load-like and temperature-like parameters, we
introduce the notion of an \emph{effective loss landscape} depending on these
parameters, and we interpret our results in terms of a \emph{rugged convexity}
of the loss landscape. When viewed through this lens, our detailed empirical
results shed light on phases of learning (and consequent double descent
behavior), fundamental versus incidental determinants of good generalization,
the role of load-like and temperature-like parameters in the learning process,
different influences on the loss landscape from model and data, and the
relationships between local and global metrics, all topics of recent interest.
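The abstract's notion of a "globally well-connected" loss landscape can be made concrete with a minimal connectivity probe: train two models from different random initializations and measure how much the loss rises along a straight line between the two solutions. The sketch below is a hypothetical illustration, not code from the paper; it assumes only NumPy and uses a toy convex logistic model, so the barrier comes out (near) zero by construction, whereas the paper studies non-convex deep-network landscapes with richer connectivity metrics.

```python
import numpy as np

# Hypothetical sketch (not the paper's method): probe connectivity by
# evaluating the loss along the straight line between two independently
# trained solutions. A small "barrier" along the path suggests the two
# minima sit in a well-connected region of the landscape.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def loss(w):
    # Mean cross-entropy of a logistic model; this loss is convex, so the
    # barrier below is zero -- deep networks need not behave this way.
    p = sigmoid(X @ w)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def train(seed, steps=500, lr=0.5):
    # Plain gradient descent from a seed-dependent random initialization.
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    for _ in range(steps):
        w -= lr * (X.T @ (sigmoid(X @ w) - y)) / len(y)
    return w

w_a, w_b = train(1), train(2)

# Loss along the linear path w(t) = (1 - t) * w_a + t * w_b.
ts = np.linspace(0.0, 1.0, 21)
path = np.array([loss((1 - t) * w_a + t * w_b) for t in ts])

# Barrier height: how far the path rises above the worse endpoint.
barrier = path.max() - max(path[0], path[-1])
print(f"endpoint losses: {path[0]:.4f}, {path[-1]:.4f}; barrier: {barrier:.4f}")
```

In a non-convex deep-network landscape the same probe can report a large barrier between independently trained models, which is one symptom of the globally poorly-connected regime the abstract associates with small models or lower-quality data.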
Related papers
- Unraveling the Hessian: A Key to Smooth Convergence in Loss Function Landscapes [0.0]
We theoretically analyze the convergence of the loss landscape in a fully connected neural network and derive upper bounds for the difference in loss function values when adding a new object to the sample.
Our empirical study confirms these results on various datasets, demonstrating the convergence of the loss function surface for image classification tasks.
arXiv Detail & Related papers (2024-09-18T14:04:15Z) - Understanding and Improving Model Averaging in Federated Learning on Heterogeneous Data [9.792805355704203]
We study the loss landscape of model averaging in federated learning (FL).
We decompose the expected loss of the global model into five factors related to the client models.
We propose utilizing iterative moving averaging (IMA) on the global model at the late training phase to reduce its deviation from the expected one.
arXiv Detail & Related papers (2023-05-13T06:19:55Z) - Training trajectories, mini-batch losses and the curious role of the
learning rate [13.848916053916618]
Stochastic gradient descent plays a fundamental role in nearly all applications of deep learning.
We propose a simple model and a geometric interpretation that allows to analyze the relationship between the gradients of mini-batches and the full batch.
In particular, a very low loss value can be reached in just one step of descent with a large enough learning rate.
arXiv Detail & Related papers (2023-01-05T21:58:46Z) - Are All Losses Created Equal: A Neural Collapse Perspective [36.0354919583995]
Cross entropy (CE) is the most commonly used loss to train deep neural networks for classification tasks.
We show through global solution and landscape analyses that a broad family of loss functions including commonly used label smoothing (LS) and focal loss (FL) exhibits Neural Collapse.
arXiv Detail & Related papers (2022-10-04T00:36:45Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - FuNNscope: Visual microscope for interactively exploring the loss
landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Extracting Global Dynamics of Loss Landscape in Deep Learning Models [0.0]
We present a toolkit for the Dynamical Organization Of Deep Learning Loss Landscapes, or DOODL3.
DOODL3 formulates the training of neural networks as a dynamical system, analyzes the learning process, and presents an interpretable global view of trajectories in the loss landscape.
arXiv Detail & Related papers (2021-06-14T18:07:05Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z) - Think Locally, Act Globally: Federated Learning with Local and Global
Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.