Learning Curves for SGD on Structured Features
- URL: http://arxiv.org/abs/2106.02713v1
- Date: Fri, 4 Jun 2021 20:48:20 GMT
- Title: Learning Curves for SGD on Structured Features
- Authors: Blake Bordelon and Cengiz Pehlevan
- Abstract summary: We show that modeling the geometry of the data in the induced feature space is crucial to accurately predicting the test error throughout learning.
- Score: 23.40229188549055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The generalization performance of a machine learning algorithm such as a
neural network depends in a non-trivial way on the structure of the data
distribution. Models of generalization in machine learning theory often ignore
the low-dimensional structure of natural signals, either by considering
data-agnostic bounds or by studying the performance of the algorithm when
trained on uncorrelated features. To analyze the influence of data structure on
test loss dynamics, we study an exactly solvable model of stochastic gradient
descent (SGD) which predicts test loss when training on features with arbitrary
covariance structure. We solve the theory exactly for both Gaussian features
and arbitrary features and we show that the simpler Gaussian model accurately
predicts test loss of nonlinear random-feature models and deep neural networks
trained with SGD on real datasets such as MNIST and CIFAR-10. We show that
modeling the geometry of the data in the induced feature space is indeed
crucial to accurately predict the test error throughout learning.
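To make the setup concrete, here is a minimal empirical sketch of the quantity the theory predicts (this simulation stands in for the paper's closed-form results; the power-law spectrum, learning rate, and dimensions below are illustrative assumptions, not the paper's settings): online SGD on a linear model with Gaussian features of a chosen covariance, with the test loss recorded during training.

```python
# Minimal empirical sketch (not the paper's analytical theory): online SGD on
# linear regression with Gaussian features whose covariance has an assumed
# power-law spectrum; the test loss is recorded to trace a learning curve.
import numpy as np

rng = np.random.default_rng(0)
d = 100                                   # feature dimension (assumed)
eigs = 1.0 / np.arange(1, d + 1) ** 1.5   # power-law covariance spectrum (assumed)
w_star = rng.normal(size=d)               # ground-truth "teacher" weights

def sample_features(n):
    # Gaussian features with covariance diag(eigs)
    return rng.normal(size=(n, d)) * np.sqrt(eigs)

X_test = sample_features(5000)
y_test = X_test @ w_star

w = np.zeros(d)
lr = 0.05
test_loss = []
for t in range(2000):
    x = sample_features(1)[0]             # one fresh sample per step (online SGD)
    y = x @ w_star
    grad = (w @ x - y) * x                # gradient of 0.5 * (w.x - y)^2
    w -= lr * grad
    if t % 100 == 0:
        test_loss.append(np.mean((X_test @ w - y_test) ** 2))

print(test_loss)  # decaying curve whose shape depends on the spectrum `eigs`
```

Sweeping the spectrum exponent changes the shape of the recorded curve, which is exactly the kind of structure dependence the paper's theory is built to capture.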
Related papers
- CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new knowledge-based and machine learning hybrid modeling approach, called conditional neural differential equation (CGNSDE), is developed.
In contrast to standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z)
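As a generic illustration of the neural-SDE idea behind hybrid models like the one summarized above (not the CGNSDE architecture itself; network size, noise level, and step size are all assumed), the sketch below integrates an SDE whose drift is a small neural network, so state increments are conditionally Gaussian.

```python
# Generic neural-SDE forward simulation (illustrative only): Euler-Maruyama
# integration with a tiny randomly initialized MLP as the drift and additive
# Gaussian noise as the diffusion term.
import numpy as np

rng = np.random.default_rng(1)
dim, hidden = 2, 16
W1 = rng.normal(scale=0.5, size=(hidden, dim))
W2 = rng.normal(scale=0.5, size=(dim, hidden))

def drift(x):
    return W2 @ np.tanh(W1 @ x)          # neural-network drift f_theta(x)

sigma, dt, steps = 0.1, 0.01, 1000
x = np.zeros(dim)
path = [x.copy()]
for _ in range(steps):
    # Euler-Maruyama: x_{t+dt} = x_t + f(x_t) dt + sigma * sqrt(dt) * xi
    x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=dim)
    path.append(x.copy())
print(np.array(path).shape)              # (1001, 2) simulated trajectory
```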
- Deep learning for full-field ultrasonic characterization [7.120879473925905]
This study takes advantage of recent advances in machine learning to establish a physics-based data analytic platform.
Two approaches, namely direct inversion and physics-informed neural networks (PINNs), are explored.
arXiv Detail & Related papers (2023-01-06T05:01:05Z)
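As a minimal illustration of the PINN logic mentioned above (not the paper's ultrasonic model; the equation, network, and hyperparameters are assumptions), the sketch below fits a small network to the ODE u' = -u with u(0) = 1 by penalizing the differential-equation residual at random collocation points.

```python
# Minimal PINN sketch: train u_theta(x) so that its autograd derivative
# satisfies u' + u = 0 on [0, 1] while matching the boundary value u(0) = 1.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)      # collocation points in [0, 1]
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du + u                              # residual of u' + u = 0
    x0 = torch.zeros(1, 1)
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.tensor([[1.0]]))))           # should approach exp(-1) ≈ 0.368
```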
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- A Causality-Based Learning Approach for Discovering the Underlying Dynamics of Complex Systems from Partial Observations with Stochastic Parameterization [1.2882319878552302]
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z)
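A toy version of such an alternating scheme (illustrative only; not the paper's algorithm) can be written as alternating least squares: with observations Y = A Z + noise and both the parameters A and the latent states Z unknown, solve for each in turn, mirroring the recover-variables / estimate-parameters cycle.

```python
# Illustrative alternating-estimation loop: alternate least-squares updates of
# the unknown parameters A and the unobserved latent states Z.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_lat, T = 5, 2, 200
A_true = rng.normal(size=(n_obs, n_lat))
Z_true = rng.normal(size=(n_lat, T))
Y = A_true @ Z_true + 0.01 * rng.normal(size=(n_obs, T))

A = rng.normal(size=(n_obs, n_lat))
for it in range(50):
    Z = np.linalg.lstsq(A, Y, rcond=None)[0]        # recover latent states given A
    A = np.linalg.lstsq(Z.T, Y.T, rcond=None)[0].T  # re-estimate parameters given Z
print(np.linalg.norm(Y - A @ Z))                    # reconstruction error shrinks
```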
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
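The amortization pattern can be illustrated in miniature (the simulator, the correlation statistic, and the classifier below are all assumptions, far simpler than the paper's variational model): train on many simulated (structure, data) pairs, then read off a structure prediction for new data in a single forward pass.

```python
# Toy amortized-inference sketch: simulate two-variable linear models with or
# without an edge, then train a 1-D logistic classifier to predict edge
# presence from a dataset-level summary statistic.
import numpy as np

rng = np.random.default_rng(3)

def simulate(n=200):
    has_edge = rng.random() < 0.5
    x = rng.normal(size=n)
    y = (1.0 * x if has_edge else 0.0) + rng.normal(size=n)
    feat = abs(np.corrcoef(x, y)[0, 1])    # summary statistic of the dataset
    return feat, float(has_edge)

data = [simulate() for _ in range(2000)]
F = np.array([f for f, _ in data]); L = np.array([l for _, l in data])

w, b = 0.0, 0.0                            # logistic regression, trained by GD
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * F + b)))
    w -= 0.1 * np.mean((p - L) * F)
    b -= 0.1 * np.mean(p - L)

f_new, _ = simulate()
print(1 / (1 + np.exp(-(w * f_new + b))))  # amortized edge probability, one pass
```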
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
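A generic sketch of such robust training (not the paper's graph-specific method; the data, model, and perturbation radius below are assumptions): minimize the worst-case loss over bounded feature perturbations, with the inner maximization approximated by a few projected-gradient-ascent steps.

```python
# Robust min-max training sketch: inner loop finds an adversarial perturbation
# delta within an L-infinity ball; outer loop updates the model on the
# perturbed inputs.
import torch

torch.manual_seed(0)
X = torch.randn(256, 10); y = (X[:, 0] > 0).float()
model = torch.nn.Sequential(torch.nn.Linear(10, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()
eps = 0.1

for step in range(200):
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(3):                                 # inner maximization (PGD)
        loss = loss_fn(model(X + delta).squeeze(), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + 0.05 * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    opt.zero_grad()
    loss_fn(model(X + delta).squeeze(), y).backward()  # outer minimization
    opt.step()

print(float(loss_fn(model(X).squeeze(), y)))           # clean loss after robust training
```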
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
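A bare-bones version of the feature-data-graph idea (illustrative; a single mean-aggregation step rather than the paper's learned GNN, with all sizes assumed): embed each feature by aggregating the embeddings of the data points where it occurs, so a feature never seen in training still receives an embedding at test time.

```python
# Feature embeddings by one round of mean aggregation over a binary
# feature-data incidence graph; new feature columns get embeddings the same way.
import numpy as np

rng = np.random.default_rng(4)
n, d_old = 100, 8
X = (rng.random((n, d_old)) < 0.3).astype(float)   # binary feature-data incidence
data_emb = rng.normal(size=(n, 16))                # stand-in for backbone embeddings

def feature_embedding(col):
    mask = col > 0
    return data_emb[mask].mean(axis=0)             # aggregate graph neighbors

old_embs = np.stack([feature_embedding(X[:, j]) for j in range(d_old)])

x_new = (rng.random(n) < 0.3).astype(float)        # a brand-new feature column
new_emb = feature_embedding(x_new)                 # extrapolated, no retraining
print(old_embs.shape, new_emb.shape)               # (8, 16) (16,)
```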
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
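Schematically, such results take the familiar dimension-over-samples shape sketched below; the exact notion of fractal dimension, the constants, and the technical conditions are those of the paper and are not reproduced here.

```latex
% Schematic shape only (assumptions and constants as in the paper):
% the generalization gap scales like the square root of an effective
% (fractal) dimension over the sample size.
\[
  \text{with probability } \ge 1-\delta:\qquad
  \sup_{w \in \mathcal{W}} \bigl|\hat{R}_n(w) - R(w)\bigr|
  \;\lesssim\; \sqrt{\frac{\dim_{\mathrm{frac}}(\mathcal{W}) + \log(1/\delta)}{n}} .
\]
```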
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
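A minimal CP-constrained unit in this spirit (illustrative; sizes and rank assumed): for a matrix input X, constrain the weight matrix to rank R, W = sum_r u_r v_r^T, so the pre-activation is sum_r u_r^T X v_r and the input is never vectorized.

```python
# Rank-R bilinear unit: R*(d1+d2) parameters instead of d1*d2, acting directly
# on the matrix-shaped input along each mode.
import numpy as np

rng = np.random.default_rng(5)
d1, d2, R = 8, 12, 3
U = rng.normal(size=(R, d1))      # CP factors along mode 1
V = rng.normal(size=(R, d2))      # CP factors along mode 2

def rank_r_unit(X):
    # <W, X> with W = sum_r outer(U[r], V[r])
    return sum(U[r] @ X @ V[r] for r in range(R))

X = rng.normal(size=(d1, d2))
print(np.tanh(rank_r_unit(X)))    # nonlinear hidden activation
```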
- Online Tensor-Based Learning for Multi-Way Data [1.0953917735844645]
A new efficient tensor-based feature extraction method, named NeSGD, is proposed for online CANDECOMP/PARAFAC (CP) decomposition.
Results show that the proposed methods significantly reduced classification error rates, were able to assimilate changes in the positive data distribution over time, and maintained high predictive accuracy in all case studies.
arXiv Detail & Related papers (2020-03-10T02:04:08Z)
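A simplified online-CP sketch (not NeSGD itself; the model, step size, and update rule below are assumptions): each incoming slice of a three-way stream is modeled as A diag(c_t) B^T, and the shared factor matrices are refined by one gradient step per slice.

```python
# Streaming CP factor updates: one SGD step on the squared reconstruction
# error of each arriving slice Y_t ≈ A @ diag(c_t) @ B.T.
import numpy as np

rng = np.random.default_rng(6)
I, J, R, lr = 6, 7, 2, 0.01
A = rng.normal(scale=0.1, size=(I, R))
B = rng.normal(scale=0.1, size=(J, R))
A_true, B_true = rng.normal(size=(I, R)), rng.normal(size=(J, R))

for t in range(3000):
    c = rng.random(R) + 0.5                      # this slice's temporal weights
    Y = A_true @ np.diag(c) @ B_true.T           # incoming data slice
    E = A @ np.diag(c) @ B.T - Y                 # residual on the current slice
    A -= lr * (E @ B @ np.diag(c))               # d/dA of 0.5 * ||E||_F^2
    B -= lr * (E.T @ A @ np.diag(c))             # d/dB of 0.5 * ||E||_F^2

print(np.linalg.norm(A @ np.diag(c) @ B.T - Y))  # residual shrinks over the stream
```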
This list is automatically generated from the titles and abstracts of the papers on this site.