Physics-Driven Spectrum-Consistent Federated Learning for Palmprint
Verification
- URL: http://arxiv.org/abs/2308.00451v1
- Date: Tue, 1 Aug 2023 11:01:17 GMT
- Title: Physics-Driven Spectrum-Consistent Federated Learning for Palmprint
Verification
- Authors: Ziyuan Yang and Andrew Beng Jin Teoh and Bob Zhang and Lu Leng and Yi
Zhang
- Abstract summary: We propose a physics-driven spectrum-consistent federated learning method for palmprint verification, dubbed PSFed-Palm.
Our approach first partitions clients into short- and long-spectrum groups according to the wavelength range of their local spectrum images.
We impose constraints on the local models to ensure their consistency with the global model, effectively preventing model drift.
- Score: 47.35171881187345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Palmprint biometrics have gained increasing attention recently due to
their discriminative ability and robustness. However, existing methods mainly
improve palmprint verification within a single spectrum, making verification
across different spectra challenging. Additionally, in distributed
server-client deployments, palmprint verification systems typically require
clients to transmit private data for model training on a centralized server,
raising privacy concerns. To alleviate these issues, in this paper, we propose
a physics-driven spectrum-consistent federated learning method for palmprint
verification, dubbed PSFed-Palm. PSFed-Palm draws upon the inherent physical
properties of distinct wavelength spectra, whereby images acquired under
similar wavelengths display heightened resemblance. Our approach first
partitions clients into short- and long-spectrum groups according to the
wavelength range of their local spectrum images. Subsequently, we introduce
anchor models for the short and long spectra, which constrain the optimization
directions of the local models associated with long- and short-spectrum images.
Specifically, we design a spectrum-consistent loss that enforces alignment of
the model parameters and feature representations with their corresponding
anchor models. Finally, we impose constraints on the local models to ensure
their consistency with the global model, effectively preventing model drift.
This measure guarantees spectrum consistency while protecting data privacy, as
there is no need to share local data. Extensive experiments validate the
efficacy of the proposed PSFed-Palm, which demonstrates compelling performance
despite only a limited amount of training data. The code will be released at
https://github.com/Zi-YuanYang/PSFed-Palm.
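The spectrum-consistent objective described above can be sketched as a task loss plus penalties pulling the local model toward its spectrum-group anchor in both parameter space and feature space. This is a minimal illustration, not the paper's actual formulation; the helper names and the weights lam_p and lam_f are assumptions.

```python
# Hedged sketch of a spectrum-consistent objective in the spirit of
# PSFed-Palm: the local model is aligned with its anchor model in both
# parameter space and feature space.

def sq_dist(a, b):
    """Squared L2 distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def spectrum_consistent_loss(task_loss, local_params, anchor_params,
                             local_feats, anchor_feats,
                             lam_p=0.1, lam_f=0.1):
    """Task loss plus parameter- and feature-alignment penalties."""
    return (task_loss
            + lam_p * sq_dist(local_params, anchor_params)
            + lam_f * sq_dist(local_feats, anchor_feats))

# A local model exactly on its anchor incurs no alignment penalty.
loss = spectrum_consistent_loss(0.5, [1.0, 2.0], [1.0, 2.0], [0.3], [0.3])
# loss == 0.5
```

The two penalty terms discourage local updates from drifting away from the anchor of their spectrum group, which is the mechanism the abstract credits with keeping short- and long-spectrum models consistent.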
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
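The round structure these entries share (clients send updates, the parameter server aggregates and redistributes) can be sketched with FedAvg-style weighted averaging. This is a generic illustration of the aggregation step, not Fed-CVLC's compression scheme; the function name and equal-weight example are assumptions.

```python
# Hedged sketch of the server-side aggregation step common to the FL
# setups above: a FedAvg-style weighted average of client parameter
# vectors, with weights proportional to local dataset sizes.

def fed_avg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Two clients of equal size: the global model is the plain average.
global_params = fed_avg([[1.0, 0.0], [3.0, 2.0]], [1, 1])
# global_params == [2.0, 1.0]
```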
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Balancing Privacy Protection and Interpretability in Federated Learning [8.759803233734624]
Federated learning (FL) aims to collaboratively train the global model in a distributed manner by sharing the model parameters from local clients to a central server.
Recent studies have illustrated that FL still suffers from information leakage as adversaries try to recover the training data by analyzing shared parameters from local clients.
We propose a simple yet effective adaptive differential privacy (ADP) mechanism that selectively adds noisy perturbations to the gradients of client models in FL.
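The selective-noise idea in that summary can be sketched as Gaussian perturbation whose scale adapts per gradient entry. The adaptation rule below (more noise on larger-magnitude, more leakage-prone entries) and all names are illustrative assumptions, not the paper's actual ADP mechanism.

```python
import random

# Hedged sketch of an adaptive differential-privacy style mechanism:
# Gaussian noise is added to client gradients, with the noise scale
# adapted to each entry's magnitude.

def adaptive_noise(grads, base_sigma=0.1, rng=None):
    """Return gradients perturbed with magnitude-adaptive Gaussian noise."""
    rng = rng or random.Random(0)
    noisy = []
    for g in grads:
        # Larger gradients carry more information about the training
        # data, so they receive proportionally more noise.
        sigma = base_sigma * (1.0 + abs(g))
        noisy.append(g + rng.gauss(0.0, sigma))
    return noisy
```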
arXiv Detail & Related papers (2023-02-16T02:58:22Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
FedLAP-DP is a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Joint Privacy Enhancement and Quantization in Federated Learning [23.36363480217293]
Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices.
We propose a method coined joint privacy enhancement and quantization (JoPEQ)
We show that JoPEQ simultaneously quantizes data according to a required bit-rate while holding a desired privacy level.
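The idea of making one mechanism serve both goals can be sketched with dithered quantization, where the random dither that enables quantization to a fixed bit-rate simultaneously hides the exact input. This is a generic uniform dithered quantizer, not JoPEQ's actual scheme; all names and defaults are assumptions.

```python
import random

# Hedged sketch of joint quantization and privacy: a uniform quantizer
# whose random dither doubles as privacy noise, so the receiver only
# ever sees a randomized, bit-rate-limited value.

def dithered_quantize(x, bits=4, lo=-1.0, hi=1.0, rng=None):
    """Quantize x to 2**bits levels on [lo, hi] with uniform dither."""
    rng = rng or random.Random(0)
    levels = 2 ** bits
    step = (hi - lo) / levels
    # Dither is added before rounding; the exact input is never
    # recoverable from the quantized output.
    d = rng.uniform(-step / 2, step / 2)
    q = round((x + d - lo) / step)
    q = max(0, min(levels - 1, q))  # clamp to the bit-rate's range
    return lo + q * step
```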
arXiv Detail & Related papers (2022-08-23T11:42:58Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- FLAME: Differentially Private Federated Learning in the Shuffle Model [25.244726600260748]
Federated Learning (FL) is a promising machine learning paradigm that enables the analyzer to train a model without collecting users' raw data.
We propose an FL framework in the shuffle model and a simple protocol (SS-Simple) extended from existing work.
We find that SS-Simple provides an insufficient privacy amplification effect in FL since the dimension of the model parameters is quite large.
To boost utility when the model size is greater than the user population, we propose an advanced protocol (SS-Topk) with gradient sparsification techniques.
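The gradient sparsification mentioned for SS-Topk can be sketched as top-k selection: each client keeps only its k largest-magnitude gradient entries as index/value pairs, shrinking the effective dimension that must pass through the shuffler. The function name and output format are illustrative assumptions.

```python
# Hedged sketch of top-k gradient sparsification: keep the k entries
# with the largest magnitude and drop the rest.

def top_k_sparsify(grad, k):
    """Return the k largest-magnitude entries as (index, value) pairs."""
    ranked = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    return sorted((i, grad[i]) for i in ranked[:k])

sparse = top_k_sparsify([0.1, -3.0, 0.0, 2.5], k=2)
# sparse == [(1, -3.0), (3, 2.5)]
```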
arXiv Detail & Related papers (2020-09-17T04:44:27Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.