Distributed Continual Learning with CoCoA in High-dimensional Linear Regression
- URL: http://arxiv.org/abs/2312.01795v1
- Date: Mon, 4 Dec 2023 10:35:46 GMT
- Title: Distributed Continual Learning with CoCoA in High-dimensional Linear Regression
- Authors: Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
- Abstract summary: We consider estimation under scenarios where the signals of interest exhibit changes in their characteristics over time.
In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially.
We consider the well-established distributed learning algorithm COCOA, which distributes the model parameters and the corresponding features over the network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider estimation under scenarios where the signals of interest
exhibit changes in their characteristics over time. In particular, we consider the continual
learning problem where different tasks, e.g., data with different
distributions, arrive sequentially and the aim is to perform well on the newly
arrived task without performance degradation on the previously seen tasks. In
contrast to the continual learning literature focusing on the centralized
setting, we investigate the problem from a distributed estimation perspective.
We consider the well-established distributed learning algorithm COCOA, which
distributes the model parameters and the corresponding features over the
network. We provide exact analytical characterization for the generalization
error of COCOA under continual learning for linear regression in a range of
scenarios, where overparameterization is of particular interest. These
analytical results characterize how the generalization error depends on the
network structure, the task similarity and the number of tasks, and show how
these dependencies are intertwined. In particular, our results show that the
generalization error can be significantly reduced by adjusting the network
size, where the most favorable network size depends on task similarity and the
number of tasks. We present numerical results verifying the theoretical
analysis and illustrate the continual learning performance of COCOA with a
digit classification task.
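As a concrete illustration of the setting described above, the following is a minimal sketch (not the authors' implementation) of feature-partitioned, CoCoA-style least squares applied to sequentially arriving tasks: one shared parameter vector is split into blocks across network nodes, each node refines its own block against a shared residual, and the same model is carried over to the next task. The node count K, the number of local iterations, the 1/K averaging of local corrections, and the synthetic task generator are illustrative assumptions.

```python
# Minimal sketch of CoCoA-style, feature-partitioned continual least squares.
# All names (cocoa_continual, K, n_iters) and parameter choices are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def cocoa_continual(tasks, K, n_iters=100):
    """Fit one shared linear model to tasks arriving one at a time.

    tasks   : list of (A, y) pairs, A of shape (n, p), y of shape (n,)
    K       : number of network nodes; the p features are split into K blocks
    n_iters : CoCoA-style outer iterations spent on each task
    """
    p = tasks[0][0].shape[1]
    w = np.zeros(p)                           # shared model, partitioned by feature blocks
    blocks = np.array_split(np.arange(p), K)
    for A, y in tasks:                        # tasks arrive sequentially
        for _ in range(n_iters):
            r = y - A @ w                     # shared residual, broadcast to all nodes
            for idx in blocks:                # each node refines its own feature block
                A_k = A[:, idx]
                # exact local least-squares correction, scaled by 1/K so the
                # aggregate of all simultaneous node updates remains stable
                w[idx] += np.linalg.pinv(A_k) @ r / K
    return w

# Toy usage: two similar, overparameterized regression tasks (p > n).
rng = np.random.default_rng(0)
n, p = 40, 100
w_star = rng.normal(size=p)
tasks = []
for _ in range(2):
    A = rng.normal(size=(n, p))
    tasks.append((A, A @ (w_star + 0.1 * rng.normal(size=p))))
w_hat = cocoa_continual(tasks, K=4)
for t, (A, y) in enumerate(tasks):
    print(f"task {t}: residual norm {np.linalg.norm(y - A @ w_hat):.3f}")
```

Comparing the residual on the first task before and after training on the second gives a crude empirical view of the forgetting that the paper characterizes analytically as a function of network size, task similarity, and the number of tasks.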
Related papers
- Distributed Networked Multi-task Learning [3.10770247120758]
We consider a distributed multi-task learning scheme that accounts for multiple linear model estimation tasks.
We provide a finite-time characterization of convergence of the estimators and task relation.
arXiv Detail & Related papers (2024-10-04T13:10:31Z)
- An MRP Formulation for Supervised Learning: Generalized Temporal Difference Learning Models [20.314426291330278]
In traditional statistical learning, data points are usually assumed to be independently and identically distributed (i.i.d.).
This paper presents a contrasting viewpoint, perceiving data points as interconnected and employing a Markov reward process (MRP) for data modeling.
We reformulate the typical supervised learning as an on-policy policy evaluation problem within reinforcement learning (RL), introducing a generalized temporal difference (TD) learning algorithm as a resolution.
arXiv Detail & Related papers (2024-04-23T21:02:58Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets alongside the target dataset, UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Continual Learning with Distributed Optimization: Does CoCoA Forget? [0.0]
We focus on the continual learning problem where the tasks arrive sequentially.
The aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks.
We consider the well-established distributed learning algorithm COCOA.
arXiv Detail & Related papers (2022-11-30T13:49:43Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Optimization and Generalization of Regularization-Based Continual Learning: a Loss Approximation Viewpoint [35.5156045701898]
We provide a novel viewpoint of regularization-based continual learning by formulating it as a second-order Taylor approximation of the loss function of each task.
Based on this viewpoint, we study the optimization aspects (i.e., convergence) as well as generalization properties (i.e., finite-sample guarantees) of regularization-based continual learning; a minimal sketch of such a quadratic penalty is given after this list.
arXiv Detail & Related papers (2020-06-19T06:08:40Z)
- Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
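The sketch below, referenced from the regularization-based continual learning entry above, illustrates the general idea of a quadratic (second-order Taylor) penalty anchored at the previous tasks' solution, applied to linear regression. It is a generic illustration under assumed choices (diagonal curvature of the squared loss, penalty weight lam, plain gradient descent), not the cited paper's algorithm.

```python
# Generic sketch of regularization-based continual learning: each new task is
# trained with a quadratic penalty (a second-order Taylor surrogate of the old
# tasks' losses) anchored at the previous solution. Names and constants
# (train_task, lam, lr, steps) are illustrative assumptions.
import numpy as np

def train_task(A, y, w_prev, curv_prev, lam=1.0, lr=2e-2, steps=2000):
    """Minimize ||A w - y||^2 / n + (lam / 2) * sum(curv_prev * (w - w_prev)^2)."""
    n = A.shape[0]
    w = w_prev.copy()
    for _ in range(steps):
        grad = 2 * A.T @ (A @ w - y) / n + lam * curv_prev * (w - w_prev)
        w -= lr * grad
    # diagonal Hessian of this task's squared loss, kept as curvature for later tasks
    curv_new = 2 * np.diag(A.T @ A) / n
    return w, curv_prev + curv_new

# Toy usage: three sequentially arriving regression tasks.
rng = np.random.default_rng(0)
p = 20
w, curv = np.zeros(p), np.zeros(p)
for _ in range(3):
    A = rng.normal(size=(50, p))
    y = A @ rng.normal(size=p)
    w, curv = train_task(A, y, w, curv)
```

The penalty resists moving parameters to which the earlier tasks' losses were most sensitive, which is the mechanism the loss-approximation viewpoint makes explicit.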
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.