Privacy Preserved Federated Learning with Attention-Based Aggregation for Biometric Recognition
- URL: http://arxiv.org/abs/2510.01113v1
- Date: Wed, 01 Oct 2025 16:58:59 GMT
- Title: Privacy Preserved Federated Learning with Attention-Based Aggregation for Biometric Recognition
- Authors: Kassahun Azezew, Minyechil Alehegn, Tsega Asresa, Bitew Mekuria, Tizazu Bayh, Ayenew Kassie, Amsalu Tesema, Animut Embiyale
- Abstract summary: The A3-FL framework is evaluated using FVC2004 fingerprint data. The accuracy, convergence speed, and robustness of the A3-FL framework are superior to those of standard FL (FedAvg) and static baselines.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biometric recognition is essential for contemporary applications, but because biometric data is sensitive, centralized training poses a privacy risk. Federated learning (FL), which permits decentralized training, provides a privacy-preserving alternative. Conventional FL, however, struggles with interpretability and heterogeneous (non-IID) data. To handle non-IID biometric data, this framework adds an attention mechanism at the central server that weights local model updates according to their significance. Differential privacy and secure update protocols safeguard the data while preserving accuracy. The A3-FL framework is evaluated in this study using FVC2004 fingerprint data, with each client's features extracted by a Siamese Convolutional Neural Network (Siamese-CNN). By dynamically adjusting client contributions, the attention mechanism improves the accuracy of the global model. Experimental evaluations on the FVC2004 fingerprint data show that A3-FL is superior to standard FL (FedAvg) and static baselines in accuracy, convergence speed, and robustness. The attention-based approach reached an accuracy of 0.8413, compared with 0.8164, 0.7664, and 0.7997 for the FedAvg, local-only, and centralized approaches, respectively. Accuracy remained high at 0.8330 even with differential privacy. This work presents a scalable, privacy-sensitive biometric system for secure and effective recognition in dispersed environments.
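The abstract describes server-side attention that weights local model updates by their significance before aggregation. As a minimal sketch of the idea (not the paper's learned attention module): score each client update by cosine similarity to the mean update, softmax the scores into weights, and form the weighted average. The function name, the similarity-based scoring, and the `temperature` parameter are illustrative assumptions.

```python
import math

def attention_aggregate(client_updates, temperature=1.0):
    """Aggregate client parameter vectors with attention-style weights.

    Each client update is scored by cosine similarity to the mean update,
    and a softmax over the scores gives the aggregation weights, so clients
    whose updates agree with the consensus contribute more. This is a
    hand-rolled stand-in for a learned attention mechanism at the server.
    """
    n = len(client_updates)
    dim = len(client_updates[0])
    mean = [sum(u[j] for u in client_updates) / n for j in range(dim)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb + 1e-12)

    scores = [cosine(u, mean) for u in client_updates]
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted average of the client updates forms the new global model.
    return [sum(w * u[j] for w, u in zip(weights, client_updates))
            for j in range(dim)]
```

Unlike FedAvg's uniform weights, an outlying client (e.g. one holding badly non-IID data) receives a small softmax weight and pulls the global model less.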
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z) - Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z) - Tri-LLM Cooperative Federated Zero-Shot Intrusion Detection with Semantic Disagreement and Trust-Aware Aggregation [5.905949608791961]
This paper introduces a semantics-driven federated IDS framework that incorporates language-derived semantic supervision into federated optimization. The framework achieves over 80% zero-shot detection accuracy on unseen attack patterns, improving zero-day discrimination by more than 10% compared to similarity-based baselines.
arXiv Detail & Related papers (2026-01-30T16:38:05Z) - FLARE: Adaptive Multi-Dimensional Reputation for Robust Client Reliability in Federated Learning [0.6524460254566904]
Federated learning (FL) enables collaborative model training while preserving data privacy. It remains vulnerable to malicious clients who compromise model integrity through Byzantine attacks, data poisoning, or adaptive adversarial behaviors. We propose FLARE, an adaptive reputation-based framework that transforms client reliability assessment from binary decisions to a continuous, multi-dimensional trust evaluation.
arXiv Detail & Related papers (2025-11-18T17:57:40Z) - CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z) - Perfectly-Private Analog Secure Aggregation in Federated Learning [51.61616734974475]
In federated learning, multiple parties train models locally and share their parameters with a central server, which aggregates them to update a global model. In this paper, a novel secure parameter aggregation method is proposed that employs the torus rather than a finite field.
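The summary above mentions secure aggregation over the torus, i.e. real values taken modulo 1. The paper's exact scheme is not given here; as an illustrative sketch, the classic pairwise-masking construction carries over: each pair of clients shares a random mask that one adds and the other subtracts, so individual masked values look uniform on [0, 1) while the masks cancel in the server's sum. The function name and shared-seed setup are assumptions for the demo.

```python
import random

def secure_aggregate_torus(client_values, seed=0):
    """Pairwise-masked aggregation on the torus [0, 1).

    For each ordered client pair (i, j) with i < j, a shared random mask
    is added to client i's value and subtracted from client j's, with all
    arithmetic modulo 1. Each masked value alone reveals nothing, but the
    masks cancel in the sum, so the server recovers the true aggregate
    (mod 1). In a real deployment the pairwise masks would come from
    key agreement between clients, not a shared seed.
    """
    rng = random.Random(seed)
    n = len(client_values)
    masked = list(client_values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.random()
            masked[i] = (masked[i] + m) % 1.0
            masked[j] = (masked[j] - m) % 1.0
    # The server only ever sees the masked values.
    return sum(masked) % 1.0
```

Working modulo 1 over the reals avoids the quantization into a finite field that integer-based secure aggregation requires.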
arXiv Detail & Related papers (2025-09-10T15:22:40Z) - Blockchain-Enabled Privacy-Preserving Second-Order Federated Edge Learning in Personalized Healthcare [1.859970493489417]
Federated learning (FL) has attracted increasing attention due to the security and privacy challenges of traditional cloud-centric machine learning models. First-order FL approaches face several challenges in personalized model training due to heterogeneous non-independent and identically distributed (non-IID) data. Recent second-order FL approaches maintain the stability and consistency of non-IID datasets while improving personalized model training.
arXiv Detail & Related papers (2025-05-31T06:41:04Z) - A Federated Random Forest Solution for Secure Distributed Machine Learning [44.99833362998488]
This paper introduces a federated learning framework for Random Forest classifiers that preserves data privacy and provides robust performance in distributed settings. By leveraging PySyft for secure, privacy-aware computation, our method enables multiple institutions to collaboratively train Random Forest models on locally stored data. Experiments on two real-world healthcare benchmarks demonstrate that the federated approach maintains competitive accuracy, within a maximum 9% margin of centralized methods.
arXiv Detail & Related papers (2025-05-12T21:40:35Z) - Securing Genomic Data Against Inference Attacks in Federated Learning Environments [0.31570310818616687]
Federated Learning (FL) offers a promising framework for collaboratively training machine learning models across decentralized genomic datasets without direct data sharing. While this approach preserves data locality, it remains susceptible to sophisticated inference attacks that can compromise individual privacy. In this study, we simulate a federated learning setup using synthetic genomic data and assess its vulnerability to three key attack vectors.
arXiv Detail & Related papers (2025-05-12T02:36:50Z) - Privacy-Preserved Automated Scoring using Federated Learning for Educational Research [1.2556373621040728]
We propose a federated learning (FL) framework for automated scoring of educational assessments. We benchmark our model against two state-of-the-art FL methods and a centralized learning baseline. Results show that our model achieves the highest accuracy (94.5%) among FL approaches.
arXiv Detail & Related papers (2025-03-12T19:06:25Z) - Enabling Privacy-preserving Model Evaluation in Federated Learning via Fully Homomorphic Encryption [1.9662978733004604]
Federated learning has become increasingly widespread due to its ability to train models collaboratively without centralizing sensitive data. The evaluation phase presents significant privacy risks that have not been adequately addressed in the literature. We propose a novel evaluation method that leverages fully homomorphic encryption.
arXiv Detail & Related papers (2024-03-21T14:36:55Z) - Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
arXiv Detail & Related papers (2024-03-05T20:54:56Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
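The clipping paper above concerns the building block that also underlies the differentially private variant of A3-FL: clip each client update to a fixed L2 norm, average, and add noise calibrated to the clipping bound. The following is a minimal sketch of one such aggregation round; the function name, `noise_std` scaling, and shared `rng` are illustrative assumptions, not the cited papers' exact formulation.

```python
import math
import random

def clipped_dp_fedavg(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """One round of clipped, noised FedAvg-style aggregation.

    Each client update is clipped to an L2 norm of at most `clip_norm`,
    the clipped updates are averaged, and Gaussian noise proportional to
    clip_norm / n (the sensitivity of the clipped average to any single
    client) is added for client-level differential privacy.
    """
    rng = rng or random.Random(0)
    n = len(client_updates)
    dim = len(client_updates[0])

    def clip(u):
        norm = math.sqrt(sum(x * x for x in u))
        scale = min(1.0, clip_norm / (norm + 1e-12))
        return [x * scale for x in u]

    clipped = [clip(u) for u in client_updates]
    avg = [sum(u[j] for u in clipped) / n for j in range(dim)]
    # Noise scale tracks the per-client sensitivity of the average.
    sigma = noise_std * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

Clipping bounds how much any one client can move the global model, which is what makes the added noise meaningful as a privacy guarantee; the bias it introduces under heterogeneous updates is exactly what the cited analysis studies.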
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.