Graph Federated Learning for CIoT Devices in Smart Home Applications
- URL: http://arxiv.org/abs/2212.14395v1
- Date: Thu, 29 Dec 2022 17:57:19 GMT
- Title: Graph Federated Learning for CIoT Devices in Smart Home Applications
- Authors: Arash Rasti-Meymandi, Seyed Mohammad Sheikholeslami, Jamshid Abouei,
Konstantinos N. Plataniotis
- Abstract summary: We propose a novel Graph Signal Processing (GSP)-inspired aggregation rule based on graph filtering dubbed ``G-Fedfilt''.
The proposed aggregator enables a structured flow of information based on the graph's topology.
It is capable of yielding up to $2.41\%$ higher accuracy than FedAvg in the case of testing the generalization of the models.
- Score: 23.216140264163535
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper deals with the problem of statistical and system heterogeneity in
a cross-silo Federated Learning (FL) framework where there exist a limited
number of Consumer Internet of Things (CIoT) devices in a smart building. We
propose a novel Graph Signal Processing (GSP)-inspired aggregation rule based
on graph filtering dubbed ``G-Fedfilt''. The proposed aggregator enables a
structured flow of information based on the graph's topology. This behavior
allows capturing the interconnection of CIoT devices and training
domain-specific models. The embedded graph filter is equipped with a tunable
parameter which enables a continuous trade-off between domain-agnostic and
domain-specific FL. In the domain-agnostic case, it forces G-Fedfilt to act
similarly to the conventional Federated Averaging (FedAvg) aggregation rule. The
proposed G-Fedfilt also enables intrinsic smooth clustering based on the graph
connectivity without explicit specification, which further boosts the
personalization of the models in the framework. In addition, the proposed
scheme features communication-efficient time-scheduling to alleviate system
heterogeneity. This is accomplished by adaptively adjusting the number of
training data samples and sparsity of the models' gradients to reduce
communication desynchronization and latency. Simulation results show that the
proposed G-Fedfilt achieves up to $3.99\%$ better classification accuracy than
the conventional FedAvg in terms of model personalization on the
statistically heterogeneous local datasets, while it is capable of yielding up
to $2.41\%$ higher accuracy than FedAvg in the case of testing the
generalization of the models.
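Since the aggregation rule is only described at a high level here, the snippet below sketches what a graph-filter-based aggregation of this kind can look like. It is a minimal illustration, not the paper's exact G-Fedfilt construction: the heat-kernel response exp(-sigma * lambda), the combinatorial Laplacian, and the function name `graph_filter_aggregate` are assumptions, chosen so that the tunable parameter sigma interpolates between fully personalized models (sigma = 0) and FedAvg-style averaging (large sigma), mirroring the domain-specific/domain-agnostic trade-off described above.

```python
import numpy as np

def graph_filter_aggregate(client_params, adjacency, sigma=1.0):
    """Aggregate per-client parameter vectors with a spectral graph filter.

    Illustrative only (not the paper's exact G-Fedfilt filter): a heat-kernel
    response exp(-sigma * lambda) is applied in the graph Fourier domain of
    the client graph.  sigma = 0 is an all-pass filter, so every client keeps
    its own (fully personalized) parameters; a very large sigma keeps only the
    constant eigenvector of a connected graph, which reproduces FedAvg at
    every node.

    client_params : (N, d) array, one flattened model per client/node
    adjacency     : (N, N) symmetric adjacency matrix of the client graph
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                        # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)          # graph Fourier basis
    response = np.exp(-sigma * eigvals)                   # tunable spectral filter
    spectral = eigvecs.T @ client_params                  # GFT of each coordinate
    filtered = eigvecs @ (response[:, None] * spectral)   # filter + inverse GFT
    return filtered                                       # (N, d): one model per node
```

As a sanity check, on a connected client graph sigma = 0 returns the input models unchanged, while a large sigma (e.g., 50) suppresses every non-constant graph-frequency component and returns, up to numerical precision, the FedAvg mean at every node.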
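The abstract also attributes the communication savings to adaptively adjusting the number of training samples and the sparsity of the models' gradients. The paper's adaptive scheduling rule is not reproduced here; the snippet below is only a generic top-k gradient sparsifier, one common way to realize the sparsity part, and the name `sparsify_top_k` and the fixed `keep_ratio` are assumptions.

```python
import numpy as np

def sparsify_top_k(gradient, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a flattened gradient.

    Generic top-k sparsification (not the paper's adaptive rule): a client
    uploads only a `keep_ratio` fraction of its gradient entries, cutting
    payload size and hence upload latency.  Returns the indices and values
    that would actually be transmitted.
    """
    flat = np.asarray(gradient).ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k magnitudes
    return idx, flat[idx]
```

The receiving side can rebuild a sparse gradient of the original length by scattering the received values back into a zero vector at the transmitted indices.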
Related papers
- Degree Distribution based Spiking Graph Networks for Domain Adaptation [17.924123705983792]
Spiking Graph Networks (SGNs) have garnered significant attention from both researchers and industry due to their ability to address energy consumption challenges in graph classification.
We first formulate the domain adaptation problem in SGNs and introduce a novel framework named Degree-aware Spiking Graph Domain Adaptation (DeSGDA) for Classification.
The proposed DeSGDA addresses the spiking graph domain adaptation problem by three aspects: node degree-aware personalized spiking representation, adversarial feature distribution alignment, and pseudo-label distillation.
arXiv Detail & Related papers (2024-10-09T13:45:54Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes the node representation centers of demographic groups together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - ASWT-SGNN: Adaptive Spectral Wavelet Transform-based Self-Supervised
Graph Neural Network [20.924559944655392]
This paper proposes an Adaptive Spectral Wavelet Transform-based Self-Supervised Graph Neural Network (ASWT-SGNN).
ASWT-SGNN accurately approximates the filter function in high-density spectral regions, avoiding costly eigen-decomposition.
It achieves comparable performance to state-of-the-art models in node classification tasks.
arXiv Detail & Related papers (2023-12-10T03:07:42Z) - Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on Federated Learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmission in AirFedAvg may cause an aggregation error; a toy sketch of this noisy over-the-air aggregation appears after this list.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and its node-specific parameters as the node features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - GLASU: A Communication-Efficient Algorithm for Federated Learning with
Vertically Distributed Graph Data [44.02629656473639]
We propose a model splitting method that splits a backbone GNN across the clients and the server and a communication-efficient algorithm, GLASU, to train such a model.
We offer a theoretical analysis and conduct extensive numerical experiments on real-world datasets, showing that the proposed algorithm effectively trains a GNN model, whose performance matches that of the backbone GNN when trained in a centralized manner.
arXiv Detail & Related papers (2023-03-16T17:47:55Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works across node classification, link prediction, and graph classification tasks; a minimal sketch of this augmentation loop appears after this list.
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
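For the over-the-air entry above, the toy sketch below illustrates why analog superposition both implements FedAvg-style averaging and introduces an aggregation error: the receiver observes the sum of the simultaneously transmitted updates plus noise. Channel fading, power control, and the different local-update types analyzed in that paper are deliberately omitted; the function name and the additive-Gaussian noise model are assumptions, not the AirFedAvg algorithm itself.

```python
import numpy as np

def aircomp_fedavg_round(local_updates, noise_std=0.01, rng=None):
    """One idealized over-the-air (AirComp) aggregation round.

    Clients transmit their updates as analog signals at the same time; the
    channel superimposes them, so the receiver sees their sum plus receiver
    noise and divides by the number of clients to form a (noisy) FedAvg
    aggregate.  Illustrative model only, not the paper's AirFedAvg algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    stacked = np.stack([np.asarray(u, dtype=float) for u in local_updates])  # (N, d)
    received = stacked.sum(axis=0) + rng.normal(0.0, noise_std, stacked.shape[1])
    return received / len(local_updates)   # FedAvg estimate with aggregation error
```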
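For the FLAG entry, the sketch referenced in that item follows. It is an illustrative re-implementation of gradient-based adversarial feature augmentation, not the official FLAG code; the PyTorch-Geometric-style `model(x, edge_index)` signature, the sign-based ascent step, and the default hyperparameter values are assumptions.

```python
import torch
import torch.nn.functional as F

def flag_step(model, x, edge_index, y, optimizer, step_size=1e-3, ascent_steps=3):
    """One training step with FLAG-style adversarial feature augmentation.

    Illustrative re-implementation (not the official FLAG code): a random
    perturbation of the node features is refined by a few gradient-ascent
    steps on the loss, while the model gradients accumulated over those steps
    drive the usual descent update.  `model(x, edge_index)` is assumed to
    return per-node logits for full-batch node classification.
    """
    model.train()
    optimizer.zero_grad()
    # Random initial perturbation of the node features.
    perturb = torch.empty_like(x).uniform_(-step_size, step_size).requires_grad_()
    loss = F.cross_entropy(model(x + perturb, edge_index), y) / ascent_steps
    for _ in range(ascent_steps - 1):
        loss.backward()
        # Ascend on the perturbation to make the augmentation adversarial.
        perturb.data = perturb.data + step_size * torch.sign(perturb.grad.detach())
        perturb.grad.zero_()
        loss = F.cross_entropy(model(x + perturb, edge_index), y) / ascent_steps
    loss.backward()       # model gradients accumulated over all ascent steps
    optimizer.step()
    return loss.item()
```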