Resource-constrained Federated Edge Learning with Heterogeneous Data:
Formulation and Analysis
- URL: http://arxiv.org/abs/2110.07567v1
- Date: Thu, 14 Oct 2021 17:35:24 GMT
- Title: Resource-constrained Federated Edge Learning with Heterogeneous Data:
Formulation and Analysis
- Authors: Yi Liu, Yuanshao Zhu, James J.Q. Yu
- Abstract summary: We propose a distributed approximate Newton-type algorithm with fast convergence to alleviate communication-resource constraints, together with a training scheme, namely FedOVA, to solve the statistical challenge brought by heterogeneous data.
FedOVA decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning.
- Score: 8.863089484787835
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The efficient combination of collaborative machine learning and wireless
communication technology, forming Federated Edge Learning (FEEL), has spawned
a series of next-generation intelligent applications. However, due to the
openness of network connections, the FEEL framework generally involves hundreds
of remote devices (or clients), resulting in expensive communication costs,
which is unfriendly to resource-constrained FEEL. To address this issue, we
propose a distributed approximate Newton-type algorithm with fast convergence
speed to alleviate the communication-resource constraints of FEEL.
Specifically, the proposed algorithm builds on the distributed L-BFGS
algorithm and allows each client to approximate the high-cost Hessian matrix
by computing the low-cost Fisher matrix in a distributed manner to find a
"better" descent direction, thereby speeding up
convergence. We further prove that the proposed algorithm achieves linear
convergence in both strongly convex and non-convex cases and analyze its
computational and communication complexity. In addition, due to the heterogeneity
of the connected remote devices, FEEL faces the challenge of heterogeneous data
and non-IID (non-Independent and Identically Distributed) data. To this end, we
design a simple but elegant training scheme, namely FedOVA, to solve the
heterogeneous statistical challenge brought by heterogeneous data. In this way,
FedOVA first decomposes a multi-class classification problem into more
straightforward binary classification problems and then combines their
respective outputs using ensemble learning. In particular, the scheme can be
well integrated with our communication-efficient algorithm to serve FEEL.
Numerical results verify the effectiveness and superiority of the proposed
algorithm.
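The FedOVA scheme described above can be illustrated with a minimal one-vs-all sketch: a K-class problem is decomposed into K binary problems, a binary classifier is fit per class, and the ensemble step picks the class whose binary model scores highest. This is a hypothetical single-machine sketch for intuition only; all function names are invented, the binary learners are plain logistic regression trained by gradient descent, and none of the paper's federated aggregation or Fisher-matrix machinery is reproduced here.

```python
import numpy as np

def train_binary(X, y_bin, lr=0.5, epochs=200):
    """Fit one logistic binary classifier with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid scores
        grad = p - y_bin                        # logistic-loss gradient
        w -= lr * X.T @ grad / len(y_bin)
        b -= lr * grad.mean()
    return w, b

def ova_fit(X, y, num_classes):
    """One-vs-all decomposition: one binary model per class."""
    return [train_binary(X, (y == k).astype(float)) for k in range(num_classes)]

def ova_predict(models, X):
    """Ensemble step: pick the class whose binary model scores highest."""
    scores = np.stack([X @ w + b for w, b in models], axis=1)
    return scores.argmax(axis=1)

# Toy 3-class problem: points clustered around three well-separated centres.
rng = np.random.default_rng(0)
centres = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.concatenate([c + rng.normal(scale=0.3, size=(30, 2)) for c in centres])
y = np.repeat(np.arange(3), 30)

models = ova_fit(X, y, num_classes=3)
acc = (ova_predict(models, X) == y).mean()
```

In the federated setting, each binary learner would be trained collaboratively across clients (e.g., with the paper's communication-efficient Newton-type updates) rather than on a single pooled dataset as above.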
Related papers
- Joint Optimization of Resource Allocation and Data Selection for Fast and Cost-Efficient Federated Edge Learning [9.460012379815423]
Deploying learning at the wireless edge introduces federated edge learning (FEEL).
We propose an efficient federated system that jointly performs resource allocation and data selection.
The superiority of our proposed scheme of joint resource allocation and data selection is validated.
arXiv Detail & Related papers (2024-07-03T08:03:59Z) - Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach [27.86468387141422]
Federated Edge Learning (FEEL) emerges as a pioneering distributed machine learning paradigm for the 6G Hyper-Connectivity.
Current FEEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy.
We introduce a clustered data sharing framework, mitigating data heterogeneity by selectively sharing partial data from cluster heads to trusted associates.
Experiments show that the proposed framework facilitates FEEL on non-IID datasets with faster convergence rate and higher model accuracy in a limited communication environment.
arXiv Detail & Related papers (2024-06-14T07:22:39Z) - Analysis and Optimization of Wireless Federated Learning with Data
Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on momentum-based variance reduced technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - A Differentiable Approach to Combinatorial Optimization using Dataless
Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z) - Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
arXiv Detail & Related papers (2021-04-13T08:23:46Z) - Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with
Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Federated Matrix Factorization: Algorithm Design and Application to Data
Clustering [18.917444528804463]
Recent demands on data privacy have called for federated learning (FL) as a new distributed learning paradigm in massive and heterogeneous networks.
We propose two new FedMF algorithms, namely FedMAvg and FedMGS, based on the model averaging and gradient sharing principles.
arXiv Detail & Related papers (2020-02-12T11:48:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.