Prototype-Based Layered Federated Cross-Modal Hashing
- URL: http://arxiv.org/abs/2210.15678v1
- Date: Thu, 27 Oct 2022 15:11:12 GMT
- Title: Prototype-Based Layered Federated Cross-Modal Hashing
- Authors: Jiale Liu, Yu-Wei Zhan, Xin Luo, Zhen-Duo Chen, Yongxin Wang, Xin-Shun Xu
- Abstract summary: In this paper, we propose a novel method called prototype-based layered federated cross-modal hashing.
Specifically, a prototype is introduced to learn the similarity between instances and classes on the server.
To realize personalized federated learning, a hypernetwork is deployed on the server to dynamically update the weights of different layers of each local model.
- Score: 14.844848099134648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep cross-modal hashing has gained increasing attention. However,
in many practical cases, data are distributed across clients and cannot be
collected due to privacy concerns, which greatly reduces cross-modal hashing
performance on each client. Moreover, statistical heterogeneity, model
heterogeneity, and the requirement that every client accept the same
parameters make it difficult to apply federated learning to cross-modal hash
learning. In this paper, we propose a novel method called prototype-based
layered federated cross-modal hashing. Specifically, a prototype is introduced
to learn the similarity between instances and classes on the server, reducing
the impact of statistical heterogeneity (non-IID data) across clients. We also
monitor the distance between local and global prototypes to further improve
performance. To realize personalized federated learning, a hypernetwork is
deployed on the server to dynamically update the weights of different layers
of each local model. Experimental results on benchmark datasets show that our
method outperforms state-of-the-art methods.
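The two server-side ideas can be pictured with a short sketch. The NumPy code below illustrates prototype aggregation, the local-global prototype gap used as a monitor, and a toy hypernetwork emitting per-layer update weights; all function names, shapes, and the client descriptor are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code) of the server-side steps.
import numpy as np

def aggregate_prototypes(local_protos, counts):
    # local_protos: list of {class: vector} dicts, one per client
    # counts: list of {class: sample_count} dicts, aligned with local_protos
    classes = set().union(*(p.keys() for p in local_protos))
    global_protos = {}
    for c in classes:
        num = sum(p[c] * n[c] for p, n in zip(local_protos, counts) if c in p)
        den = sum(n[c] for n in counts if c in n)
        global_protos[c] = num / den
    return global_protos

def prototype_gap(local, global_):
    # Distance between a client's prototypes and the global ones; the paper
    # monitors this quantity to counter non-IID drift.
    shared = [c for c in local if c in global_]
    return float(np.mean([np.linalg.norm(local[c] - global_[c]) for c in shared]))

def layer_update_weights(client_descriptor, W):
    # Toy hypernetwork: map a client descriptor to a weight in (0, 1) per
    # layer, so different layers of the local model are updated differently.
    return 1.0 / (1.0 + np.exp(-(W @ client_descriptor)))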
Related papers
- FedHide: Federated Learning by Hiding in the Neighbors [12.71494268219787]
We propose a prototype-based federated learning method designed for embedding networks in classification or verification tasks.
Our approach generates proxy class prototypes by linearly combining each true class prototype with its nearest neighbors.
This technique conceals the true class prototype while enabling clients to learn discriminative embedding networks.
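A minimal sketch of that blending step, assuming prototypes are stored row-wise in a matrix and that `k` and `alpha` are tunable; the names are illustrative, not FedHide's API.

import numpy as np

def proxy_prototype(true_proto, all_protos, k=3, alpha=0.5):
    # Blend the true class prototype with its k nearest neighboring
    # prototypes so the server never observes it directly (sketch).
    # all_protos is assumed to contain true_proto as one of its rows.
    dists = np.linalg.norm(all_protos - true_proto, axis=1)
    neighbors = all_protos[np.argsort(dists)[1:k + 1]]  # index 0 is itself
    return alpha * true_proto + (1 - alpha) * neighbors.mean(axis=0)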
arXiv Detail & Related papers (2024-09-12T07:37:49Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Unimodal Training-Multimodal Prediction: Cross-modal Federated Learning with Hierarchical Aggregation [16.308470947384134]
HA-Fedformer is a novel transformer-based model that empowers unimodal training with only a unimodal dataset at the client.
We develop an uncertainty-aware aggregation method for the local encoders with layer-wise Markov Chain Monte Carlo sampling.
Our experiments on popular sentiment analysis benchmarks, CMU-MOSI and CMU-MOSEI, demonstrate that HA-Fedformer significantly outperforms state-of-the-art multimodal models.
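The aggregation idea can be approximated in a few lines. In the sketch below, the MCMC machinery is reduced to posterior weight samples per layer, and clients with higher sample variance are down-weighted; this is a simplified stand-in, not HA-Fedformer's actual procedure.

import numpy as np

def uncertainty_weighted_layer(samples_per_client, eps=1e-8):
    # samples_per_client: one array of shape (num_samples, *layer_shape)
    # per client, holding sampled weights for a single encoder layer.
    means = np.stack([s.mean(axis=0) for s in samples_per_client])
    var = np.array([s.var(axis=0).mean() for s in samples_per_client])
    w = 1.0 / (var + eps)   # less uncertain clients count more
    w /= w.sum()
    return np.tensordot(w, means, axes=1)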
arXiv Detail & Related papers (2023-03-27T07:07:33Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
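The core loop is easy to sketch: clients share per-class mean embeddings instead of gradients, the server averages them, and inference assigns a sample to the nearest global prototype. The names below are illustrative, not the paper's code.

import numpy as np

def local_prototypes(embeddings, labels):
    # Per-class mean embedding on one client.
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def merge_prototypes(proto_dicts):
    # Unweighted server-side average over clients (sketch).
    merged = {}
    for d in proto_dicts:
        for c, p in d.items():
            merged.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in merged.items()}

def nearest_prototype(x, global_protos):
    return min(global_protos, key=lambda c: np.linalg.norm(x - global_protos[c]))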
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or data properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone.
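The attack reduces to a linear probe. This sketch trains a logistic-regression classifier to predict a binary client property from flattened aggregated updates; the random arrays are placeholders for real update traces, and the setup is an assumption rather than the paper's pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
updates = rng.normal(size=(200, 1024))   # placeholder aggregated updates
prop = rng.integers(0, 2, size=200)      # placeholder client property labels

probe = LogisticRegression(max_iter=1000).fit(updates[:150], prop[:150])
print("probe accuracy:", probe.score(updates[150:], prop[150:]))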
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- One-shot Federated Learning without Server-side Training [42.59845771101823]
One-shot federated learning is gaining popularity as a way to reduce communication cost between clients and the server.
Most existing one-shot FL methods are based on knowledge distillation; however, distillation-based approaches require an extra training phase and depend on publicly available data sets or generated pseudo samples.
In this work, we consider a novel and challenging cross-silo setting: performing a single round of parameter aggregation on the local models without server-side training.
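In its simplest form, such a round is just data-weighted parameter averaging over the fully trained local models, as in this sketch; the paper's actual aggregation rule may differ.

import numpy as np

def one_shot_aggregate(state_dicts, counts):
    # Single communication round: average each parameter tensor across
    # clients, weighted by local dataset size; no server-side training.
    total = float(sum(counts))
    return {k: sum(n * sd[k] for sd, n in zip(state_dicts, counts)) / total
            for k in state_dicts[0]}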
arXiv Detail & Related papers (2022-04-26T01:45:37Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
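The flavor of the method can be sketched as follows: each worker maintains a momentum buffer driven by how much its locally averaged model moved between rounds, approximating a global momentum without extra communication. This is a hedged paraphrase, not the paper's exact update rule.

import numpy as np

def qg_momentum_step(x, x_prev_round, m, grad, lr=0.1, mu=0.9):
    # m tracks the (normalized) displacement of the consensus model across
    # rounds and stands in for an unavailable global momentum buffer.
    m = mu * m + (x_prev_round - x) / lr
    x_new = x - lr * (grad + mu * m)
    return x_new, m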
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
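The consistency part can be sketched with a relaxed (tanh) hash head: codes from two augmentations of the same image are pulled together while being pushed toward ±1. The encoder and loss weights are illustrative, not CIMON's exact objective.

import torch
import torch.nn.functional as F

def consistency_hash_loss(encoder, view1, view2, lam=0.1):
    h1 = torch.tanh(encoder(view1))              # relaxed hash codes in (-1, 1)
    h2 = torch.tanh(encoder(view2))
    invariance = F.mse_loss(h1, h2)              # disturb-invariant codes
    quantization = (h1.abs() - 1).pow(2).mean()  # push entries toward ±1
    return invariance + lam * quantization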
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
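The factorization can be pictured with a simplified sketch in which a client's layer weight is a binary combination of shared dictionary factors; the Indian Buffet Process prior is replaced by fixed Bernoulli draws for brevity, so this is an illustration of the structure, not WAFFLe itself.

import numpy as np

rng = np.random.default_rng(0)
K, d_in, d_out = 8, 32, 16
factors = rng.normal(size=(K, d_in, d_out))        # shared dictionary of factors
z = (rng.random(K) < 0.3).astype(float)            # per-client binary selections
client_weight = np.tensordot(z, factors, axes=1)   # this client's layer weight
print(client_weight.shape)  # (32, 16)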
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
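The alternating scheme resembles k-means over model parameters, as in this sketch; the distance metric and update rule are assumptions, not the paper's exact mechanism.

import numpy as np

def multi_center_round(client_models, centers):
    # client_models: (num_clients, dim) flattened models; centers: (K, dim).
    dists = np.linalg.norm(client_models[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)                 # match each user to a center
    new_centers = np.stack([
        client_models[assign == k].mean(axis=0) if (assign == k).any() else centers[k]
        for k in range(len(centers))])
    return new_centers, assign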
arXiv Detail & Related papers (2020-05-03T09:14:31Z)