DAMe: Personalized Federated Social Event Detection with Dual Aggregation Mechanism
- URL: http://arxiv.org/abs/2409.00614v1
- Date: Sun, 1 Sep 2024 04:56:41 GMT
- Title: DAMe: Personalized Federated Social Event Detection with Dual Aggregation Mechanism
- Authors: Xiaoyan Yu, Yifan Wei, Pu Li, Shuaishuai Zhou, Hao Peng, Li Sun, Liehuang Zhu, Philip S. Yu
- Abstract summary: This paper proposes a personalized federated learning framework with a dual aggregation mechanism for social event detection, namely DAMe.
We introduce a global aggregation strategy to provide clients with maximum external knowledge of their preferences.
In addition, we incorporate a global-local event-centric constraint to prevent local overfitting and "client-drift".
- Score: 55.45581907514175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training social event detection models through federated learning (FedSED) aims to improve participants' performance on the task. However, existing federated learning paradigms are inadequate for achieving FedSED's objective and exhibit limitations in handling the inherent heterogeneity in social data. This paper proposes a personalized federated learning framework with a dual aggregation mechanism for social event detection, namely DAMe. We present a novel local aggregation strategy utilizing Bayesian optimization to incorporate global knowledge while retaining local characteristics. Moreover, we introduce a global aggregation strategy to provide clients with maximum external knowledge of their preferences. In addition, we incorporate a global-local event-centric constraint to prevent local overfitting and "client-drift". Experiments within a realistic simulation of a natural federated setting, utilizing six social event datasets spanning six languages and two social media platforms, along with an ablation study, have demonstrated the effectiveness of the proposed framework. Further robustness analyses have shown that DAMe is resistant to injection attacks.
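The local aggregation strategy described in the abstract can be sketched as a per-client interpolation between local and global model parameters. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `local_aggregate`, the fixed mixing weight `alpha`, and the toy parameters are hypothetical; per the abstract, DAMe tunes the trade-off via Bayesian optimization rather than using a fixed weight.

```python
import numpy as np

def local_aggregate(local_params, global_params, alpha):
    """Interpolate local and global model parameters per client.

    alpha=0 keeps the purely local model; alpha=1 adopts the global
    model wholesale. In DAMe the mixing is reportedly chosen via
    Bayesian optimization; a fixed alpha is used here for clarity.
    """
    return {
        name: (1.0 - alpha) * local_params[name] + alpha * global_params[name]
        for name in local_params
    }

# Hypothetical two-parameter model on one client.
local_p = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
global_p = {"w": np.array([3.0, 4.0]), "b": np.array([1.5])}

mixed = local_aggregate(local_p, global_p, alpha=0.5)
print(mixed["w"])  # [2. 3.]
```

The interpolation keeps the client's parameter structure intact while injecting global knowledge, which is the general shape of personalized aggregation schemes; the paper's actual strategy additionally optimizes the trade-off per client.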
Related papers
- MH-pFLGB: Model Heterogeneous personalized Federated Learning via Global Bypass for Medical Image Analysis [14.298460846515969]
We introduce a novel approach, MH-pFLGB, which employs a global bypass strategy to mitigate the reliance on public datasets and navigate the complexities of non-IID data distributions.
Our method enhances traditional federated learning by integrating a global bypass model, which would share the information among the clients, but also serves as part of the network to enhance the performance on each client.
arXiv Detail & Related papers (2024-06-29T15:38:37Z)
- Worldwide Federated Training of Language Models [4.259910812836157]
We propose a Worldwide Federated Language Model Training(WorldLM) system based on federations of federations.
We show that WorldLM outperforms standard federations by up to $1.91\times$, approaches the personalized performance of fully local models, and maintains these advantages under privacy-enhancing techniques.
arXiv Detail & Related papers (2024-05-23T11:25:19Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents a novel FCCL+, federated correlation and similarity learning with non-target distillation.
For heterogeneous issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in local updating stage, FCCL+ introduces Federated Non Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Turning Privacy-preserving Mechanisms against Federated Learning [22.88443008209519]
We design an attack capable of deceiving state-of-the-art defenses for federated learning.
The proposed attack includes two operating modes: the first focuses on convergence inhibition (Adversarial Mode) and the second aims at building a deceptive rating injection on the global federated model (Backdoor Mode).
The experimental results show the effectiveness of our attack in both its modes, returning on average 60% performance detriment in all the tests on Adversarial Mode and fully effective backdoors in 93% of cases for the tests performed on Backdoor Mode.
arXiv Detail & Related papers (2023-05-09T11:43:31Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- FedSA: Accelerating Intrusion Detection in Collaborative Environments with Federated Simulated Annealing [2.7011265453906983]
Federated learning emerges as a solution to collaborative training for an Intrusion Detection System (IDS).
This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round in federated learning.
The proposal requires up to 50% fewer aggregation rounds than the conventional aggregation approach to achieve approximately 97% accuracy in attack detection.
arXiv Detail & Related papers (2022-05-23T14:27:56Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.