One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive
Least-Squares
- URL: http://arxiv.org/abs/2207.13853v1
- Date: Thu, 28 Jul 2022 02:01:31 GMT
- Title: One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive
Least-Squares
- Authors: Youngjae Min, Kwangjun Ahn, Navid Azizan
- Abstract summary: We develop an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints.
Our algorithm uses memory efficiently by exploiting the structure of the streaming data via an incremental principal component analysis (IPCA).
Our experiments show the effectiveness of the proposed method compared to the baselines.
- Score: 8.443742714362521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While deep neural networks are capable of achieving state-of-the-art
performance in various domains, their training typically requires iterating for
many passes over the dataset. However, due to computational and memory
constraints and potential privacy concerns, storing and accessing all the data
is impractical in many real-world scenarios where the data arrives in a stream.
In this paper, we investigate the problem of one-pass learning, in which a
model is trained on sequentially arriving data without retraining on previous
datapoints. Motivated by the increasing use of overparameterized models, we
develop Orthogonal Recursive Fitting (ORFit), an algorithm for one-pass
learning which seeks to perfectly fit every new datapoint while changing the
parameters in a direction that causes the least change to the predictions on
previous datapoints. By doing so, we bridge two seemingly distinct algorithms
in adaptive filtering and machine learning, namely the recursive least-squares
(RLS) algorithm and orthogonal gradient descent (OGD). Our algorithm uses the
memory efficiently by exploiting the structure of the streaming data via an
incremental principal component analysis (IPCA). Further, we show that, for
overparameterized linear models, the parameter vector obtained by our algorithm
is what stochastic gradient descent (SGD) would converge to in the standard
multi-pass setting. Finally, we generalize the results to the nonlinear setting
for highly overparameterized models, relevant for deep learning. Our
experiments show the effectiveness of the proposed method compared to the
baselines.
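For the overparameterized linear case, the update described in the abstract has a compact geometric form: project the new feature vector onto the orthogonal complement of the directions spanned by previous datapoints, then take the unique step along that projection that interpolates the new label. The NumPy sketch below is only an illustration of this idea; it omits the IPCA-based memory compression and the nonlinear extension, and all names are chosen for exposition rather than taken from the paper's code.

```python
import numpy as np

def orthogonal_recursive_fit(stream, dim):
    """One-pass fitting sketch for a linear model f(w, x) = w @ x.

    Each parameter step is restricted to the orthogonal complement of all
    previously seen feature directions, so predictions on past datapoints
    are unchanged, and the step length is chosen so that the new datapoint
    is fit exactly.
    """
    w = np.zeros(dim)
    basis = []  # orthonormal basis of previously fitted feature directions

    for x, y in stream:
        # Component of x orthogonal to the span of past datapoints.
        x_perp = x - sum((u @ x) * u for u in basis)
        norm2 = x_perp @ x_perp
        if norm2 < 1e-12:
            # x lies (numerically) in the span of past data, so no
            # orthogonal step can change its prediction; skip it.
            continue
        # Step along x_perp that makes w @ x equal y exactly.
        w = w + ((y - w @ x) / norm2) * x_perp
        basis.append(x_perp / np.sqrt(norm2))
    return w

# Toy usage: 3 streamed datapoints, 10-dimensional (overparameterized) model.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 10))
y = rng.normal(size=3)
w = orthogonal_recursive_fit(zip(X, y), dim=10)
print(np.allclose(X @ w, y))  # True: every streamed point is interpolated
```

Because every step lies in the orthogonal complement of the past data, the sketch returns the minimum-norm interpolator of the stream, consistent with the abstract's claim that, for overparameterized linear models, the result coincides with the multi-pass SGD solution.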
Related papers
- Online Learning Under A Separable Stochastic Approximation Framework [20.26530917721778]
We propose an online learning algorithm for a class of machine learning models under a separable stochastic approximation framework.
We show that the proposed algorithm produces more robust models and better test performance than other popular learning algorithms.
arXiv Detail & Related papers (2023-05-12T13:53:03Z)
- Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy [74.34895342081407]
We propose an unsupervised algorithm to find a good initialization for the input data.
We first notice that each parameter configuration in the parameter space corresponds to one particular downstream task of d-way classification.
We then conjecture that the success of learning is directly related to how diverse downstream tasks are in the vicinity of the initial parameters.
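Maximum mean discrepancy (MMD), named in this entry's title, is a standard two-sample statistic. The snippet below is only a generic biased MMD estimator with an RBF kernel (the kernel and bandwidth are assumptions), included to make the quantity concrete; it is not the paper's initialization procedure.

```python
import numpy as np

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy,
    MMD^2(P, Q) = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)],
    from samples X ~ P and Y ~ Q with an RBF kernel k.
    """
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))

    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2.0 * rbf(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 5)), rng.normal(size=(200, 5)))
shifted = mmd2_rbf(rng.normal(size=(200, 5)), rng.normal(2.0, 1.0, size=(200, 5)))
print(same < shifted)  # True: shifted distributions have a larger MMD
```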
arXiv Detail & Related papers (2023-02-08T23:23:28Z)
- Efficient Parametric Approximations of Neural Network Function Space Distance [6.117371161379209]
It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset.
We consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks.
We propose a Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks.
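Taking the definition in this summary at face value, the FSD over a training set is just the average discrepancy between the outputs of two networks on that set. The brute-force reference computation below (with squared error as an assumed discrepancy measure) is the quantity the paper's parametric approximations aim to estimate without storing or iterating over the whole dataset.

```python
import numpy as np

def function_space_distance(f1, f2, X, discrepancy=None):
    """Brute-force Function Space Distance: the average output discrepancy
    between two models f1 and f2 over a set of inputs X. The squared-error
    default is an illustrative choice of discrepancy.
    """
    if discrepancy is None:
        discrepancy = lambda a, b: float(((a - b) ** 2).sum())
    return np.mean([discrepancy(f1(x), f2(x)) for x in X])

# Toy usage with two random one-hidden-layer ReLU networks.
rng = np.random.default_rng(0)

def make_relu_net(d_in, d_hidden, d_out, rng):
    W1 = rng.normal(size=(d_hidden, d_in))
    W2 = rng.normal(size=(d_out, d_hidden))
    return lambda x: W2 @ np.maximum(W1 @ x, 0.0)

f1 = make_relu_net(8, 16, 3, rng)
f2 = make_relu_net(8, 16, 3, rng)
X = rng.normal(size=(100, 8))
print(function_space_distance(f1, f2, X))
```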
arXiv Detail & Related papers (2023-02-07T15:09:23Z)
- A Hybrid Framework for Sequential Data Prediction with End-to-End Optimization [0.0]
We investigate nonlinear prediction in an online setting and introduce a hybrid model that avoids the issues of hand-designed features and manual model selection.
We employ a recurrent neural network (LSTM) for adaptive feature extraction from sequential data and a gradient boosting machinery (soft GBDT) for effective supervised regression.
We demonstrate the learning behavior of our algorithm on synthetic data and show significant performance improvements over conventional methods on various real-life datasets.
arXiv Detail & Related papers (2022-03-25T17:13:08Z)
- Implicit Parameter-free Online Learning with Truncated Linear Models [51.71216912089413]
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
We propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor.
Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties.
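For context on the "never overshoots" property: for a non-negative loss, the truncated linear model at the current iterate is max(loss + g @ (u - w), 0), and an implicit (proximal) step on that model has a closed form whose step length is capped exactly where the model reaches zero. The sketch below shows only this generic step, not the paper's parameter-free construction.

```python
import numpy as np

def truncated_implicit_step(w, loss_value, grad, eta):
    """Proximal (implicit) step on the truncated linear model of a
    non-negative loss at w:

        model(u) = max(loss_value + grad @ (u - w), 0)

    The minimizer of  model(u) + ||u - w||^2 / (2 * eta)  is  w - step * grad
    with step = min(eta, loss_value / ||grad||^2), so the update never moves
    past the point where the truncated model hits its minimum value of zero.
    """
    grad_sq = float(grad @ grad)
    if grad_sq == 0.0:
        return w
    step = min(eta, loss_value / grad_sq)
    return w - step * grad

# Toy usage with a hinge-type loss max(0, 1 - y * w @ x) and a large eta.
w = np.zeros(3)
x, y = np.array([1.0, 2.0, -1.0]), 1.0
loss = max(0.0, 1.0 - y * (w @ x))
grad = -y * x if loss > 0 else np.zeros_like(x)
w_new = truncated_implicit_step(w, loss, grad, eta=10.0)
print(max(0.0, 1.0 - y * (w_new @ x)))  # 0.0: the step stops at the model's minimum
```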
arXiv Detail & Related papers (2022-03-19T13:39:49Z)
- Transfer-Learning Across Datasets with Different Input Dimensions: An Algorithm and Analysis for the Linear Regression Case [7.674023644408741]
We propose a transfer learning algorithm that combines new and historical data with different input dimensions.
Our approach achieves state-of-the-art performance on 9 real-life datasets.
arXiv Detail & Related papers (2022-02-10T14:57:15Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
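To make the setting concrete, here is a generic online-gradient-descent sketch for pairwise learning in which each incoming example is paired with the immediately preceding one so that the per-step cost stays constant; the pairing rule and the ranking hinge loss are illustrative assumptions, not necessarily the paper's choices.

```python
import numpy as np

def pairwise_ogd(stream, dim, lr=0.1):
    """Online gradient descent for a pairwise (ranking hinge) loss.

    The loss is defined on pairs of instances; here each incoming example
    is paired only with the previous one, keeping the update cost O(dim).
    """
    w = np.zeros(dim)
    prev = None
    for x, y in stream:
        if prev is not None:
            xp, yp = prev
            if y != yp:
                s = 1.0 if y > yp else -1.0
                margin = s * (w @ (x - xp))
                if margin < 1.0:               # hinge max(0, 1 - margin) is active
                    w = w + lr * s * (x - xp)  # negative-gradient step
        prev = (x, y)
    return w

# Toy usage: rank class-1 points above class-0 points along a direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(float)
w = pairwise_ogd(zip(X, y), dim=5)
print(w[0] > 0)  # True: the learned direction follows the label-generating feature
```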
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require that all the data is stored in main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
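For reference, a standard batch kernel ridge regression fit is sketched below; it makes the memory requirement explicit, since both the n x n kernel matrix and all n training points must be kept to train and to predict. This is generic KRR, not StreaMRAK.

```python
import numpy as np

def krr_fit_predict(X, y, X_test, bandwidth=1.0, lam=1e-3):
    """Standard batch kernel ridge regression with an RBF kernel."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))

    K = rbf(X, X)                               # n x n: grows quadratically with n
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf(X_test, X) @ alpha               # prediction also needs all of X

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
X_test = np.linspace(-3.0, 3.0, 50)[:, None]
pred = krr_fit_predict(X, y, X_test)
print(float(np.max(np.abs(pred - np.sin(X_test[:, 0])))))  # small error on this smooth target
```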
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Exploiting Adam-like Optimization Algorithms to Improve the Performance of Convolutional Neural Networks [82.61182037130405]
Stochastic gradient descent (SGD) is the main approach for training deep networks.
In this work, we compare Adam-based variants that are built on the difference between the present and the past gradients.
We test ensembles of networks and their fusion with ResNet50 trained with SGD.
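A well-known member of this family is diffGrad, which scales the Adam step, elementwise, by a sigmoid of the absolute difference between the current and previous gradients. The NumPy sketch below follows that general recipe; it is not a reproduction of the paper's specific variants.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diffgrad_like_step(w, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-like update whose step is modulated, elementwise, by a
    sigmoid of |current gradient - previous gradient| (diffGrad-style).
    `state` holds the running moments, the previous gradient, and the step count.
    """
    m, v, g_prev, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    friction = sigmoid(np.abs(grad - g_prev))  # in [0.5, 1): damps steps where the gradient barely changes
    w = w - lr * friction * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, grad, t)

# Toy usage: minimize the quadratic f(w) = ||w||^2.
w = np.array([5.0, -3.0])
state = (np.zeros(2), np.zeros(2), np.zeros(2), 0)
for _ in range(3000):
    w, state = diffgrad_like_step(w, 2.0 * w, state, lr=0.01)
print(np.abs(w).max() < 0.2)  # True: the iterates settle near the minimum at zero
```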
arXiv Detail & Related papers (2021-03-26T18:55:08Z)
- Online Robust and Adaptive Learning from Data Streams [22.319483572757097]
In online learning, it is necessary to be robust to outliers and to adapt quickly to changes in the underlying data-generating mechanism.
In this paper, we refer to the former attribute of online learning algorithms as robustness and to the latter as adaptivity.
We propose a novel stochastic approximation-based robustness-adaptivity algorithm (SRA) to evaluate the tradeoff.
arXiv Detail & Related papers (2020-07-23T17:49:04Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of the evaluated downstream tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)