Exact Support Recovery in Federated Regression with One-shot Communication
- URL: http://arxiv.org/abs/2006.12583v1
- Date: Mon, 22 Jun 2020 19:48:39 GMT
- Title: Exact Support Recovery in Federated Regression with One-shot Communication
- Authors: Adarsh Barik, Jean Honorio
- Abstract summary: Federated learning provides a framework to address the challenges of distributed computing, data ownership and privacy.
We provide a simple communication-efficient algorithm which needs only one-shot communication with the centralized server to compute the exact support.
Our method does not require the clients to solve any optimization problem and thus can be run on devices with low computational capabilities.
- Score: 31.061339148448006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning provides a framework to address the challenges of
distributed computing, data ownership and privacy over a large number of
distributed clients with low computational and communication capabilities. In
this paper, we study the problem of learning the exact support of sparse linear
regression in the federated learning setup. We provide a simple
communication-efficient algorithm which needs only one-shot communication with
the centralized server to compute the exact support. Our method does not
require the clients to solve any optimization problem and thus can be run on
devices with low computational capabilities. Our method is naturally robust to the
problems of client failure, model poisoning and straggling clients. We formally
prove that our method requires a number of samples per client that is
polynomial with respect to the support size, but independent of the dimension
of the problem. We require the number of distributed clients to be logarithmic
in the dimension of the problem. If the predictor variables are mutually
independent, then the overall sample complexity matches the optimal sample
complexity of the non-federated centralized setting. Furthermore, our method is
easy to implement and has an overall polynomial time complexity.
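The abstract fixes the protocol's shape (one cheap local statistic per client, a single message to the server, no client-side optimization) but not the exact statistic. The sketch below is a minimal illustration under assumptions: each client thresholds per-feature correlations with its response and the server takes a majority vote over the clients' boolean votes, which is one natural way to obtain the claimed robustness to failed, straggling, or poisoned clients. The statistic, threshold, and voting rule are illustrative, not the paper's exact method.

```python
import numpy as np

def client_message(X, y, threshold):
    """One-shot client message: a boolean vote per coordinate, obtained by
    thresholding feature-response correlations. No optimization is solved."""
    corr = X.T @ y / X.shape[0]        # per-feature correlation statistic
    return np.abs(corr) > threshold    # length-d boolean vote vector

def server_support(messages):
    """Majority vote over client messages; a minority of failed, straggling,
    or poisoned clients cannot flip the outcome."""
    votes = np.sum(messages, axis=0)
    return np.flatnonzero(votes > len(messages) / 2)

# Toy run (all sizes and the threshold are illustrative assumptions).
rng = np.random.default_rng(0)
d, s, n, m = 1000, 5, 200, 20          # dimension, support size, samples/client, clients
w = np.zeros(d); w[:s] = 1.0           # true sparse parameter
msgs = []
for _ in range(m):
    X = rng.standard_normal((n, d))
    y = X @ w + 0.1 * rng.standard_normal(n)
    msgs.append(client_message(X, y, threshold=0.5))
print(server_support(np.array(msgs)))  # recovers coordinates 0..4
```

Note that each client sends only d bits and performs a single matrix-vector product, matching the abstract's one-shot, low-computation regime.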
Related papers
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to squeeze "more juice" out of each cohort than is possible in a single communication round.
Our approach leads to up to a 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z)
- On the Necessity of Collaboration for Online Model Selection with Decentralized Data [53.244188985271606]
We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients.
Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces.
arXiv Detail & Related papers (2024-04-15T06:32:28Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
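The GreedyFed entry above names the idea (biased, greedy selection of the most contributing clients) without estimator details. Below is a minimal sketch under assumptions: a running average of each client's observed marginal contribution stands in for its Shapley value, and the server greedily picks the top-k clients each round. The scoring rule is hypothetical, not GreedyFed's exact Shapley estimator.

```python
import numpy as np

class GreedySelector:
    """Biased client selection: greedily pick the k clients with the highest
    estimated contribution. The EMA update below is an illustrative stand-in
    for a Shapley-value estimate, not GreedyFed's exact estimator."""

    def __init__(self, num_clients, momentum=0.9):
        self.scores = np.zeros(num_clients)
        self.momentum = momentum

    def select(self, k):
        # Non-uniform (biased) sampling: the k highest-scoring clients.
        return np.argsort(self.scores)[-k:]

    def update(self, client_ids, marginal_gains):
        # Exponential moving average of observed marginal contributions
        # (e.g., each client's marginal improvement in validation loss).
        for cid, gain in zip(client_ids, marginal_gains):
            self.scores[cid] = (self.momentum * self.scores[cid]
                                + (1 - self.momentum) * gain)

# Usage per round: select, train on the chosen clients, then report back
# each client's measured contribution to refresh the scores.
selector = GreedySelector(num_clients=100)
chosen = selector.select(k=10)
selector.update(chosen, marginal_gains=np.zeros(10))  # placeholder gains
```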
- An Incentive Mechanism for Federated Learning Based on Multiple Resource Exchange [5.385462087305977]
Federated Learning (FL) is a distributed machine learning paradigm that addresses privacy concerns in machine learning.
We introduce a multi-user collaborative computing framework, categorizing users into two roles: model owners (MOs) and data owners (DOs).
We show that the proposed collaborative computing framework can achieve an accuracy of more than 95% while minimizing the overall time to complete an FL task.
arXiv Detail & Related papers (2023-12-13T12:28:37Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement over the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Gradient Matching Pursuit [17.695717854068715]
Traditional machine learning techniques require centralizing all training data on one server or data hub.
In contrast, federated learning (FL) provides a solution for learning a shared model while keeping training data at the local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity-constrained minimization problem in the FL setting.
arXiv Detail & Related papers (2023-02-20T16:26:29Z)
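For orientation, here is a minimal sketch of what a gradient-matching-pursuit loop can look like in the FL setting for least squares: the server aggregates client gradients, merges the largest gradient coordinates into the candidate support, solves a restricted least-squares step from pooled client statistics, and prunes back to s coordinates. Everything beyond the entry's one-line description, including the restricted solve, is an assumption about FedGradMP's details.

```python
import numpy as np

def fed_gradmp(clients, d, s, rounds=20):
    """Sketch of a federated gradient matching pursuit loop for least squares.
    Clients share only gradients and restricted second-moment statistics;
    raw data stays local. Details are illustrative, not FedGradMP's exact steps."""
    w = np.zeros(d)
    for _ in range(rounds):
        # Aggregate client gradients of the local least-squares losses.
        grad = np.mean([X.T @ (X @ w - y) / len(y) for X, y in clients], axis=0)
        # Merge the 2s largest gradient coordinates with the current support.
        T = np.union1d(np.argsort(np.abs(grad))[-2 * s:], np.flatnonzero(w)).astype(int)
        # Restricted least squares on pooled statistics for the columns in T.
        A = sum(X[:, T].T @ X[:, T] for X, _ in clients)
        b = sum(X[:, T].T @ y for X, y in clients)
        z = np.linalg.solve(A, b)
        # Prune back to the s largest entries in magnitude.
        keep = np.argsort(np.abs(z))[-s:]
        w = np.zeros(d)
        w[T[keep]] = z[keep]
    return w

# Toy run (sizes and noise level are illustrative).
rng = np.random.default_rng(1)
d, s = 50, 3
w_true = np.zeros(d); w_true[[3, 7, 11]] = [1.0, -2.0, 1.5]
clients = []
for _ in range(5):
    X = rng.standard_normal((40, d))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(40)))
print(np.flatnonzero(fed_gradmp(clients, d, s)))  # expected: [ 3  7 11]
```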
- Federated Learning with Regularized Client Participation [1.433758865948252]
Federated Learning (FL) is a distributed machine learning approach where multiple clients work together to solve a machine learning task.
One of the key challenges in FL is the issue of partial participation, which occurs when a large number of clients are involved in the training process.
We propose a new technique and design a novel regularized client participation scheme.
arXiv Detail & Related papers (2023-02-07T18:26:07Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles: straggling clients and the need for personalization.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
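The entry above describes a concrete architecture: a global representation learned from all clients' data plus a user-specific set of parameters per client. A minimal sketch of that split in the linear case follows; the alternating update (exact local head fit, averaged gradient step on the shared representation) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def personalized_fl(clients, d, k, rounds=50, lr=0.1):
    """Sketch of shared-representation personalization for linear models:
    a global d-by-k representation B plus a local k-dimensional head per
    client. Update rules here are illustrative assumptions."""
    rng = np.random.default_rng(0)
    B = rng.standard_normal((d, k)) / np.sqrt(d)
    heads = [np.zeros(k) for _ in clients]
    for _ in range(rounds):
        grads = []
        for i, (X, y) in enumerate(clients):
            Z = X @ B
            # Local step: exact least-squares fit of the client-specific head.
            heads[i] = np.linalg.lstsq(Z, y, rcond=None)[0]
            # Gradient of the local loss with respect to the shared B.
            resid = Z @ heads[i] - y
            grads.append(np.outer(X.T @ resid, heads[i]) / len(y))
        B -= lr * np.mean(grads, axis=0)  # server averages client updates
    return B, heads

# Each client keeps (X_i, y_i) local; only gradients w.r.t. B are shared.
rng = np.random.default_rng(1)
demo = [(rng.standard_normal((30, 8)), rng.standard_normal(30)) for _ in range(4)]
B, heads = personalized_fl(demo, d=8, k=2)
```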
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning [24.812767482563878]
Federated learning (FL) aims to minimize the communication complexity of training a model over heterogeneous data distributed across many clients.
We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of the number of communication rounds $R$.
arXiv Detail & Related papers (2021-08-16T02:57:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.