Multi-Party Dual Learning
- URL: http://arxiv.org/abs/2104.06677v1
- Date: Wed, 14 Apr 2021 07:39:23 GMT
- Title: Multi-Party Dual Learning
- Authors: Maoguo Gong, Yuan Gao, Yu Xie, A. K. Qin, Ke Pan, and Yew-Soon Ong
- Abstract summary: We propose a multi-party dual learning (MPDL) framework to alleviate the problem of limited data with poor quality in an isolated party.
The MPDL framework achieves significant improvement compared with state-of-the-art multi-party learning methods.
- Score: 34.360153917562755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of machine learning algorithms heavily relies on the
availability of a large amount of training data. However, in reality, data
usually reside in distributed parties such as different institutions and may
not be directly gathered and integrated due to various data policy constraints.
As a result, some parties may suffer from insufficient data available for
training machine learning models. In this paper, we propose a multi-party dual
learning (MPDL) framework to alleviate the problem of limited data with poor
quality in an isolated party. Since the knowledge sharing processes for
multiple parties always emerge in dual forms, we show that dual learning is
naturally suitable to handle the challenge of missing data, and explicitly
exploits the probabilistic correlation and structural relationship between dual
tasks to regularize the training process. We introduce a feature-oriented
differential privacy with mathematical proof, in order to avoid possible
privacy leakage of raw features in the dual inference process. The approach
requires minimal modifications to the existing multi-party learning structure,
and each party can build flexible and powerful models separately, whose
accuracy is no less than that of non-distributed self-learning approaches. The
MPDL framework achieves significant improvement compared with state-of-the-art
multi-party learning methods, as we demonstrate through simulations on
real-world datasets.
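The two ingredients in the abstract lend themselves to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' code) of (a) a probabilistic duality regularizer of the kind used in dual supervised learning, which penalizes disagreement between P(x)P(y|x) and P(y)P(x|y), and (b) a feature-level noising step of the sort a feature-oriented differential-privacy mechanism might apply before features leave a party. The function names and the clip-then-Laplace mechanism are assumptions.
```python
# Hypothetical sketch, not the MPDL reference implementation.
import torch

def duality_regularizer(logp_x, logp_y_given_x, logp_y, logp_x_given_y):
    """Penalize the duality gap: ideally P(x)P(y|x) = P(y)P(x|y),
    i.e. the two log joint probabilities of each (x, y) pair agree."""
    gap = (logp_x + logp_y_given_x) - (logp_y + logp_x_given_y)
    return (gap ** 2).mean()

def privatize_features(features, epsilon=1.0, sensitivity=1.0):
    """Assumed feature-oriented DP step: clip each feature to bound its
    sensitivity, then add Laplace noise before it leaves the party."""
    clipped = features.clamp(-sensitivity, sensitivity)
    noise = torch.distributions.Laplace(0.0, sensitivity / epsilon)
    return clipped + noise.sample(clipped.shape)

# Per-party objective (sketch): both task losses plus the duality term.
# loss = loss_xy + loss_yx + lam * duality_regularizer(lpx, lpyx, lpy, lpxy)
```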
Related papers
- Cross-Modal Few-Shot Learning: a Generative Transfer Learning Framework [58.362064122489166]
This paper introduces the Cross-modal Few-Shot Learning task, which aims to recognize instances from multiple modalities when only a few labeled examples are available.
We propose a Generative Transfer Learning framework consisting of two stages: the first involves training on abundant unimodal data, and the second focuses on transfer learning to adapt to novel data.
Our findings demonstrate that GTL achieves superior performance compared to state-of-the-art methods across four distinct multi-modal datasets.
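As a rough illustration of the two-stage recipe in this entry, the sketch below assumes a stage-1 encoder already trained on abundant unimodal data, then freezes it and fits only a small head on the few labeled examples. The layer sizes, optimizer, and freeze-then-fit design are assumptions, not the paper's architecture.
```python
# Illustrative two-stage few-shot transfer sketch; all details are assumed.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU())  # stage 1: pretrained on abundant unimodal data
classifier = nn.Linear(256, 5)                           # stage 2: few-shot head for novel classes

def stage2_finetune(support_x, support_y, epochs=100, lr=1e-2):
    """Freeze the pretrained encoder and fit only the small head
    on the handful of labeled examples from the novel task."""
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(classifier(encoder(support_x)), support_y)
        loss.backward()
        opt.step()
```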
arXiv Detail & Related papers (2024-10-14T16:09:38Z)
- Encapsulating Knowledge in One Prompt [56.31088116526825]
KiOP encapsulates knowledge from various models into a solitary prompt without altering the original models or requiring access to the training data.
From a practical standpoint, this paradigm demonstrates the effectiveness of visual prompts in data-inaccessible contexts.
Experiments across various datasets and models demonstrate the efficacy of the proposed KiOP knowledge transfer paradigm.
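Reading the summary above loosely, one way to picture "encapsulating knowledge in one prompt" is a single learnable input perturbation distilled from frozen teacher models, with surrogate (non-training) inputs standing in for the inaccessible data. The sketch below is that guess in code; all names and the MSE-distillation objective are hypothetical, not KiOP's actual procedure.
```python
# Loose sketch under stated assumptions; KiOP's actual method differs.
import torch

prompt = torch.zeros(1, 3, 224, 224, requires_grad=True)  # the single prompt

def distill_into_prompt(host, teachers, surrogate_x, opt):
    """One step: only `prompt` is trainable; host and teachers stay
    frozen, and no original training data is touched."""
    opt.zero_grad()
    student_out = host(surrogate_x + prompt)              # prompt added to input
    with torch.no_grad():
        target = torch.stack([t(surrogate_x) for t in teachers]).mean(0)
    loss = torch.nn.functional.mse_loss(student_out, target)
    loss.backward()
    opt.step()
    return loss.item()

# opt = torch.optim.Adam([prompt], lr=1e-3)
```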
arXiv Detail & Related papers (2024-07-16T16:35:23Z)
- FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology [3.802258033231335]
Federated Multi-Modal (FedMM) is a learning framework that trains multiple single-modal feature extractors to enhance subsequent classification performance.
FedMM notably outperforms two baselines in accuracy and AUC metrics.
arXiv Detail & Related papers (2024-02-24T16:58:42Z)
- MultiDelete for Multimodal Machine Unlearning [14.755831733659699]
MultiDelete is designed to decouple associations between unimodal data points during unlearning.
It can maintain the multimodal and unimodal knowledge of the original model after unlearning.
It can provide better protection to unlearned data against adversarial attacks.
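A minimal sketch of the stated decoupling idea, assuming cosine-similarity embeddings: drive the unimodal embeddings of forgotten image-text pairs apart while keeping retained pairs aligned so general knowledge survives. The loss shapes are guesses, not the paper's objective.
```python
# Hedged sketch of "decoupling" unimodal embeddings during unlearning.
import torch
import torch.nn.functional as F

def unlearn_loss(img_emb_forget, txt_emb_forget, img_emb_retain, txt_emb_retain):
    # Decouple: push cosine similarity of forgotten pairs toward zero.
    decouple = F.cosine_similarity(img_emb_forget, txt_emb_forget).abs().mean()
    # Retain: keep remembered pairs aligned to preserve model knowledge.
    retain = (1 - F.cosine_similarity(img_emb_retain, txt_emb_retain)).mean()
    return decouple + retain
```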
arXiv Detail & Related papers (2023-11-18T08:30:38Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
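The simplest instance of merging in parameter space is (weighted) averaging of state dicts, shown below. The paper itself uses a more careful merging scheme, so treat this only as a baseline sketch of what "dataless" fusion means: no training data is involved at all.
```python
# Baseline parameter-space merge: weighted average of model weights.
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Merge same-architecture models by averaging each parameter tensor."""
    weights = weights or [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged
```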
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Partitioned Variational Inference: A Framework for Probabilistic Federated Learning [45.9225420256808]
We introduce partitioned variational inference (PVI), a framework for performing VI in the federated setting.
We develop new supporting theory for PVI, demonstrating a number of properties that make it an attractive choice for practitioners.
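For intuition, here is a toy 1-D Gaussian version of the partitioned idea: each party contributes an approximate likelihood factor, and combining the factors with the prior amounts to adding natural parameters (precision and precision-times-mean). This is illustrative only, not PVI's general algorithm.
```python
# Toy 1-D Gaussian factor combination (assumed, not the paper's algorithm).
def combine(prior, party_factors):
    """prior/factors are (precision, precision * mean) natural parameters."""
    lam = prior[0] + sum(f[0] for f in party_factors)   # total precision
    eta = prior[1] + sum(f[1] for f in party_factors)   # total precision*mean
    return eta / lam, 1.0 / lam                          # posterior mean, variance

mean, var = combine(prior=(1.0, 0.0), party_factors=[(4.0, 2.0), (2.0, -1.0)])
# mean = var = 1/7 with these made-up factors
```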
arXiv Detail & Related papers (2022-02-24T18:15:30Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Towards Explainable Multi-Party Learning: A Contrastive Knowledge Sharing Framework [23.475874929905192]
We propose a novel contrastive multi-party learning framework for knowledge refinement and sharing.
The proposed scheme achieves significant improvement in model performance in a variety of scenarios.
arXiv Detail & Related papers (2021-04-14T07:33:48Z)
- Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
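The "related vs. unrelated" contrast described above is commonly implemented with an InfoNCE-style loss over in-batch pairs, where paired multimodal samples are pulled together and all other in-batch combinations are pushed apart. The generic sketch below is that standard form, not necessarily the paper's exact objective.
```python
# Generic InfoNCE-style contrast between two modalities' embeddings.
import torch
import torch.nn.functional as F

def relatedness_contrastive_loss(z_a, z_b, temperature=0.1):
    """z_a[i] and z_b[i] are embeddings of a related (paired) sample."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature       # similarity of all cross pairs
    labels = torch.arange(z_a.size(0))         # diagonal = the related pairs
    return F.cross_entropy(logits, labels)
```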
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
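As background for the DRO objective mentioned here: one standard KL-penalized surrogate of the worst-case loss has the closed form lam * log(mean(exp(loss / lam))), which smoothly up-weights hard examples. The sketch below shows that generic form; it is not necessarily the specific method of the paper above.
```python
# Generic KL-penalized DRO surrogate, computed stably via logsumexp.
import math
import torch

def kl_dro_loss(per_sample_losses, lam=1.0):
    """Robust surrogate: lam * log( mean(exp(loss / lam)) )."""
    n = per_sample_losses.numel()
    return lam * (torch.logsumexp(per_sample_losses / lam, dim=0) - math.log(n))
```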
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.