Multi-task learning on the edge: cost-efficiency and theoretical
optimality
- URL: http://arxiv.org/abs/2110.04639v1
- Date: Sat, 9 Oct 2021 19:59:02 GMT
- Title: Multi-task learning on the edge: cost-efficiency and theoretical
optimality
- Authors: Sami Fakhry (1 and 2) and Romain Couillet (1 and 2 and 3) and Malik
Tiomoko (1 and 2) ((1) GIPSA-Lab, (2) Grenoble-Alps University, (3) LIG-Lab)
- Abstract summary: This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA).
Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no performance loss.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article proposes a distributed multi-task learning (MTL) algorithm based
on supervised principal component analysis (SPCA) which is: (i) theoretically
optimal for Gaussian mixtures, (ii) computationally cheap and scalable.
Supporting experiments on synthetic and real benchmark data demonstrate that
significant energy gains can be obtained with no performance loss.
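As a rough illustration of the supervised-PCA mechanism underlying the proposal, here is a minimal single-task sketch in Python. It is not the authors' distributed MTL algorithm: the Barshan-style label-kernel construction, the toy Gaussian-mixture data, and all names are assumptions for illustration.

```python
import numpy as np

def supervised_pca(X, y, k=1):
    """Project onto the top-k eigenvectors of X^T L X with the label
    kernel L = y y^T (a Barshan-style supervised PCA; illustrative,
    not necessarily the paper's exact construction)."""
    M = X.T @ np.outer(y, y) @ X / len(y) ** 2
    _, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    return vecs[:, -k:]                  # (d, k) projection matrix

rng = np.random.default_rng(0)
d, n = 50, 200
mu = rng.normal(size=d) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.normal(size=(n, d))   # two-class Gaussian mixture

V = supervised_pca(X, y, k=1)
scores = (X @ V)[:, 0]                           # 1-D representation
sign = 1.0 if np.mean(scores * y) >= 0 else -1.0 # align projection with labels
acc = np.mean(np.sign(sign * scores) == y)
print(f"train accuracy in the 1-D SPCA space: {acc:.2f}")
```

In a distributed setting of the kind the paper targets, each edge device would only need to communicate low-dimensional statistics rather than raw data, which is where the energy savings come from.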
Related papers
- Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization [22.317176475276725]
We investigate two remarkable phenomena observed during the fine-tuning of Large Language Models (LLMs)
Fine-tuning only the $\mathbf{W}_q$ and $\mathbf{W}_v$ matrices significantly improves performance over optimizing the $\mathbf{W}_k$ matrix.
We propose a new strategy that improves fine-tuning efficiency in terms of both storage and time.
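A minimal PyTorch sketch of this selective fine-tuning idea, assuming a toy single-head attention module (the module and parameter names are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class TinyAttention(nn.Module):
    """Single-head self-attention with separate Q/K/V projections,
    so each matrix can be frozen or tuned independently."""
    def __init__(self, d):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)
        self.W_k = nn.Linear(d, d, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)

    def forward(self, x):                      # x: (batch, seq, d)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        att = torch.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        return att @ v

model = TinyAttention(d=64)
# Fine-tune only W_q and W_v; keep W_k frozen, as in the finding above.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("W_q", "W_v"))

print([n for n, p in model.named_parameters() if p.requires_grad])
# ['W_q.weight', 'W_v.weight']
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

Freezing $\mathbf{W}_k$ also cuts optimizer state by roughly a third for the attention projections, which is one source of the storage savings mentioned above.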
arXiv Detail & Related papers (2024-10-03T06:37:37Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Delegating Data Collection in Decentralized Machine Learning [67.0537668772372]
Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.
We design optimal and near-optimal contracts that deal with two fundamental information asymmetries.
We show that a principal can cope with such asymmetry via simple linear contracts that achieve a 1-1/e fraction of the optimal utility.
arXiv Detail & Related papers (2023-09-04T22:16:35Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
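A hedged sketch of the multi-step inverse-kinematics objective: an encoder is trained so that the action taken at time t is predictable from the encodings of observations k steps apart. Dimensions, names, and the random placeholder data are illustrative assumptions, not MusIK's actual architecture.

```python
import torch
import torch.nn as nn

obs_dim, n_actions, k, latent = 16, 4, 3, 8

encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, latent))
# Inverse-kinematics head: predict the action taken at time t
# from the encodings of x_t and x_{t+k}.
head = nn.Linear(2 * latent, n_actions)

def inverse_kinematics_loss(x_t, x_tk, a_t):
    z = torch.cat([encoder(x_t), encoder(x_tk)], dim=-1)
    return nn.functional.cross_entropy(head(z), a_t)

# toy batch (random placeholders standing in for trajectory data)
x_t  = torch.randn(32, obs_dim)
x_tk = torch.randn(32, obs_dim)
a_t  = torch.randint(0, n_actions, (32,))
print(inverse_kinematics_loss(x_t, x_tk, a_t).item())
```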
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined via the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Probabilistic Inverse Optimal Transport [11.425633112192521]
Optimal transport (OT) formalizes the problem of finding an optimal coupling between probability measures given a cost matrix.
The inverse problem of inferring the cost given a coupling is Inverse Optimal Transport (IOT)
We formalize and systematically analyze the properties of IOT using tools from the study of entropy-regularized OT.
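For context, the entropy-regularized (forward) OT problem referenced here is commonly solved with Sinkhorn iterations; a minimal numpy sketch, with arbitrary sizes and regularization strength:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iter=200):
    """Entropy-regularized OT: approximate the coupling P minimizing
    <P, C> - eps * H(P) subject to marginals a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]     # coupling matrix P

rng = np.random.default_rng(0)
C = rng.random((5, 5))                      # cost matrix
a = np.full(5, 0.2)                         # uniform source marginal
b = np.full(5, 0.2)                         # uniform target marginal
P = sinkhorn(C, a, b)
print(P.sum(axis=1))                        # ~a: marginal constraint holds
```

IOT runs this map in reverse: given an observed coupling P, infer a cost C consistent with it, which is what the entropy-regularized machinery above makes tractable to analyze.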
arXiv Detail & Related papers (2021-12-17T20:33:27Z) - PCA-based Multi Task Learning: a Random Matrix Approach [40.49988553835459]
The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes [Barshan et al., 2011; Bair et al., 2006].
arXiv Detail & Related papers (2021-11-01T13:13:38Z) - Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel
Manifold [41.58534159822546]
This paper defines fair principal component analysis (PCA) as minimizing the maximum mean discrepancy (MMD) between the dimensionality-reduced conditional distributions of different protected groups.
We provide optimality guarantees and explicitly show the theoretical effect in practical settings.
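A small sketch of the quantity being minimized: the squared MMD, under an RBF kernel, between two groups after projection by an orthonormal matrix V (a point on the Stiefel manifold). The toy data, kernel bandwidth, and names are illustrative assumptions, not the paper's optimization procedure.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 10))      # group 0
X1 = rng.normal(0.5, 1.0, size=(100, 10))      # group 1
V, _ = np.linalg.qr(rng.normal(size=(10, 2)))  # orthonormal: Stiefel point
# fair PCA searches over such V to make this discrepancy small
# while preserving variance
print(rbf_mmd2(X0 @ V, X1 @ V))
```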
arXiv Detail & Related papers (2021-09-23T08:06:02Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer, stocBiO, featuring a sample-efficient hypergradient estimator.
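A toy sketch of the hypergradient that such bilevel optimizers estimate, using a truncated Neumann series for the inverse Hessian-vector product (a trick used by stochastic estimators of this kind). The quadratic problem is an assumption for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))

# Toy bilevel problem:
#   inner: g(x, y) = 0.5 * ||y - A x||^2   =>  y*(x) = A x
#   outer: f(x, y) = 0.5 * ||y||^2 + 0.5 * ||x||^2

def hypergradient(x, n_neumann=100, alpha=0.5):
    """df(x, y*(x))/dx via implicit differentiation; the inverse
    Hessian-vector product [d2g/dy2]^{-1} grad_y f is approximated
    with a truncated Neumann series."""
    y = A @ x                     # inner solution (closed form here)
    gy_f = y                      # grad_y f
    H = np.eye(d)                 # d2g/dy2 (identity for this g)
    v, p = np.zeros(d), alpha * gy_f
    for _ in range(n_neumann):
        v, p = v + p, p - alpha * (H @ p)
    # d2g/dydx = -A, so the correction term is -(-A)^T v = A^T v
    return x + A.T @ v

x = rng.normal(size=d)
exact = x + A.T @ (A @ x)         # analytic df/dx for this toy problem
print(np.allclose(hypergradient(x), exact, atol=1e-6))   # True
```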
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Large Dimensional Analysis and Improvement of Multi Task Learning [38.86699890656948]
Multi Task Learning (MTL) efficiently leverages useful information contained in multiple related tasks to help improve the generalization performance of all tasks.
This article conducts a large dimensional analysis of a simple but, when carefully tuned, extremely powerful Least Squares Support Vector Machine (LSSVM) version of MTL.
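For reference, LSSVM training reduces to a single linear solve; a minimal single-task sketch with a linear kernel (illustrative only, not the paper's multi-task extension or its careful tuning):

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Least Squares SVM (regression form used for classification):
    solve [[K + I/gamma, 1], [1^T, 0]] [alpha; b] = [y; 0]."""
    n = len(y)
    K = X @ X.T                                   # linear kernel
    top = np.hstack([K + np.eye(n) / gamma, np.ones((n, 1))])
    bot = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    sol = np.linalg.solve(np.vstack([top, bot]), np.append(y, 0.0))
    return sol[:n], sol[n]                        # dual weights alpha, bias b

rng = np.random.default_rng(0)
n, d = 100, 10
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * 0.8 + rng.normal(size=(n, d))    # two separable classes
alpha, b = lssvm_train(X, y)
pred = np.sign(X @ X.T @ alpha + b)               # K alpha + b on train points
print(f"train accuracy: {np.mean(pred == y):.2f}")
```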
arXiv Detail & Related papers (2020-09-03T11:40:14Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
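A minimal sketch of such a thresholding scheme: take the dense top eigenvector of the covariance and keep only its k largest-magnitude entries (the paper's exact thresholding rule and guarantees may differ):

```python
import numpy as np

def thresholded_spca(A, k):
    """Thresholding heuristic for sparse PCA: compute the dense top
    principal direction, zero out all but its k largest-magnitude
    entries, and renormalize."""
    _, vecs = np.linalg.eigh(A)        # eigenvalues in ascending order
    v = vecs[:, -1].copy()             # dense top principal direction
    small = np.argsort(np.abs(v))[:-k] # indices of the d-k smallest entries
    v[small] = 0.0
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
A = np.cov(X, rowvar=False)
v = thresholded_spca(A, k=5)
print(np.count_nonzero(v), float(v @ A @ v))  # sparsity, explained variance
```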
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.