Physics-Driven Spectrum-Consistent Federated Learning for Palmprint
Verification
- URL: http://arxiv.org/abs/2308.00451v1
- Date: Tue, 1 Aug 2023 11:01:17 GMT
- Title: Physics-Driven Spectrum-Consistent Federated Learning for Palmprint
Verification
- Authors: Ziyuan Yang and Andrew Beng Jin Teoh and Bob Zhang and Lu Leng and Yi
Zhang
- Abstract summary: We propose a physics-driven spectrum-consistent federated learning method for palmprint verification, dubbed PSFed-Palm.
Our approach first partitions clients into short- and long-spectrum groups according to the wavelength range of their local spectrum images.
We impose constraints on the local models to ensure their consistency with the global model, effectively preventing model drift.
- Score: 47.35171881187345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Palmprint biometrics have recently gained increasing attention due
to their discriminative ability and robustness. However, existing methods
mainly improve palmprint verification within a single spectrum, and
verification across different spectra remains challenging. Additionally, in
distributed server-client deployments, palmprint verification systems
typically require clients to transmit private data for model training on a
centralized server, raising privacy concerns. To alleviate these issues, we
propose a physics-driven spectrum-consistent federated learning method for
palmprint verification, dubbed PSFed-Palm. PSFed-Palm draws upon an inherent
physical property of distinct wavelength spectra: images acquired under
similar wavelengths exhibit greater resemblance. Our approach first partitions
clients into short- and long-spectrum groups according to the wavelength range
of their local spectrum images. Subsequently, we introduce anchor models for
the short and long spectra, which constrain the optimization directions of the
local models associated with long- and short-spectrum images. Specifically, we
design a spectrum-consistent loss that enforces alignment of the model
parameters and feature representations with their corresponding anchor models.
Finally, we impose constraints on the local models to ensure their consistency
with the global model, effectively preventing model drift. This design
guarantees spectrum consistency while protecting data privacy, as no local
data need to be shared. Extensive experiments validate the efficacy of the
proposed PSFed-Palm, which demonstrates compelling performance despite a
limited amount of training data. The code will be released at
https://github.com/Zi-YuanYang/PSFed-Palm.
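The abstract's two core ideas, partitioning clients by wavelength and penalizing deviation from a spectrum anchor model, can be sketched in a few lines. This is a minimal illustration only: the group threshold, loss weights, and function names below are assumptions, not the authors' implementation (see their repository for the actual code).

```python
# Toy sketch of spectrum grouping and a spectrum-consistent regularizer
# in the spirit of PSFed-Palm. The 600 nm cut-off and the lambda weights
# are illustrative assumptions.

def assign_spectrum_group(wavelength_nm, threshold_nm=600.0):
    """Partition a client by the wavelength of its local images:
    below the (assumed) threshold -> 'short', otherwise 'long'."""
    return "short" if wavelength_nm < threshold_nm else "long"

def l2_distance_sq(a, b):
    """Squared L2 distance between two flat vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def spectrum_consistent_loss(local_params, anchor_params,
                             local_feats, anchor_feats,
                             lambda_param=0.1, lambda_feat=0.1):
    """Penalize deviation of a client's model parameters and feature
    representations from its spectrum group's anchor model, so local
    optimization stays consistent with the anchor."""
    param_term = l2_distance_sq(local_params, anchor_params)
    feat_term = l2_distance_sq(local_feats, anchor_feats)
    return lambda_param * param_term + lambda_feat * feat_term
```

In training, a term like this would be added to the usual verification loss on each client; the global-model consistency constraint mentioned in the abstract could take the same squared-distance form with the global parameters as the anchor.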
Related papers
- Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning [1.4226399196408985]
Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning.
We introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory.
We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters.
arXiv Detail & Related papers (2024-09-25T15:04:42Z)
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- Cross-Chirality Palmprint Verification: Left is Right for the Right Palmprint [11.388567430575783]
This paper introduces a novel Cross-Chirality Palmprint Verification (CCPV) framework that challenges the conventional wisdom in traditional palmprint verification systems.
Unlike existing methods that typically require storing both left and right palmprints, our approach enables verification using either palm while storing only one palmprint template.
arXiv Detail & Related papers (2024-09-19T19:10:21Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Joint Privacy Enhancement and Quantization in Federated Learning [23.36363480217293]
Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices.
We propose a method coined joint privacy enhancement and quantization (JoPEQ)
We show that JoPEQ simultaneously quantizes data according to a required bit-rate while holding a desired privacy level.
arXiv Detail & Related papers (2022-08-23T11:42:58Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
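Several of the entries above invoke the differential-privacy trade-off: noise added to released model updates protects training data at some cost in accuracy. As a generic illustration (not the mechanism of any specific paper above), the classic Laplace mechanism adds per-coordinate noise scaled to sensitivity/epsilon; the sensitivity and epsilon values here are illustrative.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize_update(update, sensitivity, epsilon, seed=0):
    """Add Laplace(0, sensitivity/epsilon) noise to each coordinate of a
    model update -- the textbook epsilon-DP mechanism. Smaller epsilon
    means stronger privacy but noisier (less accurate) updates."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [w + laplace_noise(scale, rng) for w in update]
```

Works like the Laplacian-smoothing and gradient-perturbation papers listed above only in spirit: each proposes a specific refinement of this basic noise-addition idea to recover accuracy at a given privacy level.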
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.