RobustFSM: Submodular Maximization in Federated Setting with Malicious Clients
- URL: http://arxiv.org/abs/2511.02029v2
- Date: Sat, 08 Nov 2025 00:25:01 GMT
- Title: RobustFSM: Submodular Maximization in Federated Setting with Malicious Clients
- Authors: Duc A. Tran, Dung Truong, Duy Le
- Abstract summary: We propose RobustFSM, a federated submodular solution that is robust to various practical client attacks. The degree of its improvement over the conventional algorithm depends on the dataset and attack scenarios, and can be as high as 200%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Submodular maximization is an optimization problem benefiting many machine learning applications, where we seek a small subset that best represents an extremely large dataset. We focus on the federated setting, where the data are locally owned by decentralized clients who have their own definitions for the quality of representability. This setting requires repetitive aggregation of local information computed by the clients. While the main motivation is to respect the privacy and autonomy of the clients, the federated setting is vulnerable to client misbehavior: malicious clients might share fake information. An analogy is the backdoor attack in conventional federated learning, but our challenge differs fundamentally due to the unique characteristics of submodular maximization. We propose RobustFSM, a federated submodular maximization solution that is robust to various practical client attacks. Its performance is substantiated with an empirical evaluation study using real-world datasets. Numerical results show that the solution quality of RobustFSM substantially exceeds that of the conventional federated algorithm when attacks are severe. The degree of this improvement depends on the dataset and attack scenarios, and can be as high as 200%.
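The abstract leaves the protocol details to the paper, but the attack surface it describes is concrete: each round of a federated greedy loop aggregates marginal-gain reports from clients, and a malicious client can inflate or deflate its reports. The sketch below is a minimal illustration of that setting, not RobustFSM itself; the greedy loop, the choice of `np.median` as the robust aggregator, and all function names are my assumptions.

```python
import numpy as np

def marginal_gains(utility, selected, candidates):
    """One client's marginal gains f(S ∪ {v}) - f(S) under its local utility f."""
    base = utility(selected)
    return np.array([utility(selected | {v}) - base for v in candidates])

def federated_greedy(client_utilities, ground_set, k, aggregate=np.median):
    """Greedy subset selection with per-round aggregation of client reports.
    A robust aggregator (median) bounds the influence of fake reports;
    aggregate=np.mean mimics the conventional, attack-prone scheme."""
    selected = set()
    for _ in range(k):
        remaining = [v for v in ground_set if v not in selected]
        # One federated round: every client reports its local marginal gains.
        reports = np.stack([marginal_gains(u, selected, remaining)
                            for u in client_utilities])
        # Aggregate item-wise across clients (axis 0 indexes clients).
        scores = aggregate(reports, axis=0)
        selected.add(remaining[int(np.argmax(scores))])
    return selected
```

For intuition: with coverage-style utilities, a single adversary reporting a huge gain for a useless item shifts the mean-aggregated score arbitrarily, but not the median, as long as honest clients form a majority.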
Related papers
- Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future. We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs. MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z)
- Federated Multi-Task Clustering [44.73672172790804]
This paper proposes a novel framework named Federated Multi-Task Clustering (FMTC). It is composed of two main components: a client-side personalized clustering module and a server-side tensorial correlation module. We derive an efficient, privacy-preserving distributed algorithm based on the Alternating Direction Method of Multipliers.
arXiv Detail & Related papers (2025-12-28T12:02:32Z)
- Byzantine-Robust Federated Learning with Learnable Aggregation Weights [7.448890820711754]
Federated Learning (FL) enables clients to collaboratively train a global model without sharing their private data. The presence of malicious (Byzantine) clients poses significant challenges to the robustness of FL. We propose a novel Byzantine-robust FL optimization problem that incorporates adaptive weighting into the aggregation process.
arXiv Detail & Related papers (2025-11-05T15:02:21Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, which is a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Hybrid-Regularized Magnitude Pruning for Robust Federated Learning under Covariate Shift [2.298932494750101]
We show that inconsistencies in client-side training distributions substantially degrade the performance of federated learning models. We propose a novel FL framework using a combination of pruning and regularisation of clients' training to improve the sparsity, redundancy, and robustness of neural connections.
arXiv Detail & Related papers (2024-12-19T16:22:37Z)
- Optimizing Cross-Client Domain Coverage for Federated Instruction Tuning of Large Language Models [87.49293964617128]
Federated domain-specific instruction tuning (FedDIT) for large language models (LLMs) aims to enhance performance in specialized domains using distributed private and limited data. We empirically establish that cross-client domain coverage, rather than data heterogeneity, is the pivotal factor. We introduce FedDCA, an algorithm that explicitly maximizes this coverage through diversity-oriented client center selection and retrieval-based augmentation.
arXiv Detail & Related papers (2024-09-30T09:34:31Z)
- Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients [10.397502254316645]
Federated learning is widely employed to tackle distributed sensitive data.
Topology-aware Federated Learning (TFL) trains models that are robust to out-of-federation (OOF) data.
We formulate a novel optimization problem for TFL, consisting of two key modules: Client Topology Learning and Learning on Client Topology.
Empirical evaluation on a variety of real-world datasets verifies TFL's superior OOF robustness and scalability.
arXiv Detail & Related papers (2024-07-06T03:57:05Z)
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., achieving a 6.64% improvement over the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round (a minimal sketch of this weighting idea appears after this list).
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- Re-Weighted Softmax Cross-Entropy to Control Forgetting in Federated Learning [14.196701066823499]
In Federated Learning, a global model is learned by aggregating model updates computed at a set of independent client nodes.
We show that individual client models experience catastrophic forgetting with respect to data from other clients.
We propose an efficient approach that modifies the cross-entropy objective on a per-client basis by re-weighting the softmax logits prior to computing the loss (a sketch of this re-weighting also follows the list).
arXiv Detail & Related papers (2023-04-11T14:51:55Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method, FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server fewer times, reducing synchronization and communication bandwidth costs.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
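The Lorar entry above describes its mechanism concretely enough for a short sketch: weight each client's contribution by its per-round training-loss reduction. The function names and the exact normalization below are my assumptions, not the paper's.

```python
import numpy as np

def loss_reduction_weights(loss_before, loss_after):
    """Hypothetical Lorar-style weights: clients whose local training loss
    dropped more this round contribute more to the global update."""
    reduction = np.maximum(np.asarray(loss_before) - np.asarray(loss_after), 0.0)
    total = reduction.sum()
    if total == 0.0:
        # No client improved this round; fall back to uniform weighting.
        return np.full(len(reduction), 1.0 / len(reduction))
    return reduction / total

def weighted_global_update(client_updates, weights):
    """Weighted average of the clients' model updates (one row per client)."""
    return np.average(np.stack(client_updates), axis=0, weights=weights)
```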
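Similarly, the re-weighted softmax cross-entropy entry names a concrete operation: scale the softmax terms class-wise before computing the loss. One plausible reading uses the logit-adjustment identity, where multiplying each class's softmax numerator by a weight equals adding the log of that weight to its logit; the weighting rule and the `class_weights` tensor are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def reweighted_softmax_ce(logits, targets, class_weights):
    """Cross-entropy after class-wise re-weighting of the softmax terms.
    Adding log(class_weights) to the logits multiplies each class's softmax
    numerator by its weight, e.g. damping classes a client rarely sees so
    its local update forgets less about other clients' data (assumed rule)."""
    return F.cross_entropy(logits + torch.log(class_weights), targets)

# Example: a client that mostly sees class 0 down-weights it locally.
logits = torch.randn(8, 3)                      # batch of 8, 3 classes
targets = torch.randint(0, 3, (8,))
class_weights = torch.tensor([0.2, 1.0, 1.0])   # assumed per-class weights
loss = reweighted_softmax_ce(logits, targets, class_weights)
```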