Robust Collaborative Inference with Vertically Split Data Over Dynamic Device Environments
- URL: http://arxiv.org/abs/2312.16638v3
- Date: Sat, 26 Apr 2025 00:06:51 GMT
- Title: Robust Collaborative Inference with Vertically Split Data Over Dynamic Device Environments
- Authors: Surojit Ganguli, Zeyu Zhou, Christopher G. Brinton, David I. Inouye
- Abstract summary: In safety-critical applications, collaborative inference must be robust to significant network failures caused by environmental disruptions or extreme weather. We first formalize the problem of robust collaborative inference over a dynamic network of devices that could experience significant network faults. Then, we develop a minimalistic yet impactful method called Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) that synthesizes simulated faults via dropout, replication, and gossiping to significantly improve robustness over baselines.
- Score: 15.757660512833006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When each edge device of a network only perceives a local part of the environment, collaborative inference across multiple devices is often needed to predict global properties of the environment. In safety-critical applications, collaborative inference must be robust to significant network failures caused by environmental disruptions or extreme weather. Existing collaborative learning approaches, such as privacy-focused Vertical Federated Learning (VFL), typically assume a centralized setup or that one device never fails. However, these assumptions make prior approaches susceptible to significant network failures. To address this problem, we first formalize the problem of robust collaborative inference over a dynamic network of devices that could experience significant network faults. Then, we develop a minimalistic yet impactful method called Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) that synthesizes simulated faults via dropout, replication, and gossiping to significantly improve robustness over baselines. We also theoretically analyze our proposed approach to explain why each component enhances robustness. Extensive empirical results validate that MAGS is robust across a range of fault rates, including extreme fault rates.
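The abstract names MAGS's ingredients but includes no code. Below is a minimal NumPy sketch, assuming per-device feature embeddings and a fixed ring topology, of how dropout-style simulated faults, gossip-based replication, and aggregation over surviving devices could fit together; every name (`simulate_faults`, `gossip_round`, `aggregate`) is hypothetical rather than the authors' implementation, and the sketch omits the paper's multiple-aggregator component.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_faults(embeddings, fault_rate):
    """Dropout-style fault simulation (assumed form): each device's
    embedding is zeroed out independently with probability fault_rate."""
    alive = rng.random(len(embeddings)) >= fault_rate
    faulty = [e if keep else np.zeros_like(e)
              for e, keep in zip(embeddings, alive)]
    return faulty, alive

def gossip_round(embeddings, alive, neighbors):
    """One gossip round: every surviving device averages its embedding
    with those of its surviving neighbors, replicating information so
    it can outlive later faults."""
    updated = []
    for i, e in enumerate(embeddings):
        if not alive[i]:
            updated.append(e)
            continue
        peers = [embeddings[j] for j in neighbors[i] if alive[j]]
        updated.append(np.mean([e] + peers, axis=0) if peers else e)
    return updated

def aggregate(embeddings, alive):
    """Fuse whatever survived the faults by averaging."""
    live = [e for e, keep in zip(embeddings, alive) if keep]
    return np.mean(live, axis=0) if live else np.zeros_like(embeddings[0])

# Toy run: 4 devices on a ring, 8-dim embeddings, 30% fault rate.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
embeddings = [rng.normal(size=8) for _ in range(4)]
faulty, alive = simulate_faults(embeddings, fault_rate=0.3)
for _ in range(2):                     # a couple of gossip rounds
    faulty = gossip_round(faulty, alive, neighbors)
print(aggregate(faulty, alive))        # fused representation for prediction
```

Training the per-device encoders under such simulated faults is what would push the model to tolerate real ones; the gossip rounds then give each embedding several chances to reach an aggregator at inference time.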
Related papers
- Collaborative Value Function Estimation Under Model Mismatch: A Federated Temporal Difference Analysis [55.13545823385091]
Federated reinforcement learning (FedRL) enables collaborative learning while preserving data privacy by preventing direct data exchange between agents.
In real-world applications, each agent may experience slightly different transition dynamics, leading to inherent model mismatches.
We show that even moderate levels of information sharing can significantly mitigate environment-specific errors.
arXiv Detail & Related papers (2025-03-21T18:06:28Z) - Vertical Federated Learning for Failure-Cause Identification in Disaggregated Microwave Networks [5.789459834052429]
We study the application of Federated Learning in disaggregated microwave networks for failure-cause identification.
Our experimental results show that VFL-based scenarios can achieve F1-Scores consistently within at most a 1% gap with respect to a centralized scenario.
arXiv Detail & Related papers (2025-02-05T04:09:15Z) - Failure-Resilient Distributed Inference with Model Compression over Heterogeneous Edge Devices [9.423705897088672]
We present RoCoIn, a robust cooperative inference mechanism for locally distributed execution of deep neural network-based inference tasks over heterogeneous edge devices.
It creates a set of independent and compact student models that are learned from a large model using knowledge distillation for distributed deployment.
In particular, the devices are strategically grouped to redundantly deploy and execute the same student model such that the inference process is resilient to any local failures.
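RoCoIn's replication idea lends itself to a short illustration. A minimal sketch, assuming random placement in place of the paper's strategic, heterogeneity-aware grouping; `run_student`, `is_alive`, and all other names are hypothetical:

```python
import random

random.seed(0)

def assign_groups(devices, num_students, replicas=2):
    """Illustrative redundant placement: each distilled student model
    is hosted on `replicas` devices (the paper groups devices
    strategically; random choice keeps the sketch minimal)."""
    return {s: random.sample(devices, replicas) for s in range(num_students)}

def resilient_inference(placement, is_alive, run_student):
    """Run each student on any surviving replica, so a single local
    failure does not lose that student's share of the computation."""
    outputs = {}
    for student, hosts in placement.items():
        survivors = [d for d in hosts if is_alive(d)]
        if survivors:
            outputs[student] = run_student(student, survivors[0])
    return outputs

# Toy run: 3 students over 4 devices, with device 2 failed.
placement = assign_groups(devices=[0, 1, 2, 3], num_students=3)
print(resilient_inference(placement,
                          is_alive=lambda d: d != 2,
                          run_student=lambda s, d: f"logits(s{s}@d{d})"))
```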
arXiv Detail & Related papers (2024-06-20T10:43:53Z) - AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution by offloading the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Federated Learning for Computationally-Constrained Heterogeneous Devices: A Survey [3.219812767529503]
Federated learning (FL) offers a privacy-preserving trade-off between communication overhead and model accuracy.
We outline the challenges FL has to overcome to be widely applicable in real-world applications.
arXiv Detail & Related papers (2023-07-18T12:05:36Z) - VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks [31.08004805380727]
This paper introduces two key factors affecting VFL performance - feature importance and feature correlation.
We also introduce a real VFL dataset to address the deficit in image-image VFL scenarios.
arXiv Detail & Related papers (2023-07-05T05:55:08Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z) - Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
arXiv Detail & Related papers (2022-07-29T12:10:36Z) - FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z) - SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z) - Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Applying FL to SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z) - Federated Learning Based on Dynamic Regularization [43.137064459520886]
We propose a novel federated learning method for distributed training of neural network models.
The server orchestrates cooperation between a subset of randomly chosen devices in each round.
arXiv Detail & Related papers (2021-11-08T03:58:28Z) - Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study Personalized Federated Learning (PFL) for saddle point problems, which encompass a broader class of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis of the smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z) - Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified BFL algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Neural Network Approximation of Graph Fourier Transforms for Sparse Sampling of Networked Flow Dynamics [13.538871180763156]
Infrastructure monitoring is critical for safe operations and sustainability. Water distribution networks (WDNs) are large-scale networked critical systems with complex cascade dynamics which are difficult to predict.
Existing approaches use multi-objective optimisation to find the minimum set of essential monitoring points, but lack performance guarantees and a theoretical framework.
Here, we first develop Graph Fourier Transform (GFT) operators to compress networked contamination spreading dynamics and identify the essential principal data collection points with inference performance guarantees.
We then build autoencoder (AE) inspired neural networks (NN) to generalize the GFT sampling process and under-sample further from the initial sampling set.
arXiv Detail & Related papers (2020-02-11T20:18:37Z)
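The last entry above (GFT-based sparse sampling) can also be illustrated. A minimal sketch of Graph Fourier Transform sensor selection, assuming the standard Laplacian-eigenvector basis and a common greedy max-min-singular-value heuristic; this is not the paper's operators, and it omits the autoencoder generalization step:

```python
import numpy as np

def gft_basis(adjacency, num_modes):
    """Graph Fourier basis: eigenvectors of the graph Laplacian,
    ordered by eigenvalue; the first num_modes columns span the
    smooth (low-frequency) signals we aim to reconstruct."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs[:, :num_modes]

def greedy_sampling(basis, num_sensors):
    """Pick monitoring nodes one at a time, each time adding the node
    whose basis row best conditions the sampled system (a standard
    graph-signal sampling heuristic, assumed here)."""
    chosen = []
    for _ in range(num_sensors):
        best, best_score = None, -np.inf
        for v in range(basis.shape[0]):
            if v in chosen:
                continue
            score = np.linalg.svd(basis[chosen + [v], :],
                                  compute_uv=False).min()
            if score > best_score:
                best, best_score = v, score
        chosen.append(best)
    return chosen

# Toy network: 5 nodes on a path (a stand-in for a water network).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
U = gft_basis(A, num_modes=2)
print(greedy_sampling(U, num_sensors=2))  # essential monitoring points
```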