Towards Explainable Multi-Party Learning: A Contrastive Knowledge
Sharing Framework
- URL: http://arxiv.org/abs/2104.06670v1
- Date: Wed, 14 Apr 2021 07:33:48 GMT
- Title: Towards Explainable Multi-Party Learning: A Contrastive Knowledge
Sharing Framework
- Authors: Yuan Gao, Jiawei Li, Maoguo Gong, Yu Xie and A. K. Qin
- Abstract summary: We propose a novel contrastive multi-party learning framework for knowledge refinement and sharing.
The proposed scheme achieves significant improvement in model performance in a variety of scenarios.
- Score: 23.475874929905192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-party learning provides solutions for training joint models with
decentralized data under legal and practical constraints. However, traditional
multi-party learning approaches are confronted with obstacles such as system
heterogeneity, statistical heterogeneity, and incentive design. How to deal
with these challenges and further improve the efficiency and performance of
multi-party learning has become an urgent problem to be solved. In this paper,
we propose a novel contrastive multi-party learning framework for knowledge
refinement and sharing with an accountable incentive mechanism. Since the
existing method of naive model parameter averaging is at odds with the
learning paradigm of neural networks, we simulate the process of human
cognition and communication, and frame multi-party learning as a many-to-one
knowledge sharing problem. The approach is capable of integrating the acquired
explicit knowledge of each client in a transparent manner without privacy
disclosure, and it reduces the dependence on data distribution and
communication environments. The proposed scheme achieves significant
improvement in model performance in a variety of scenarios, as we demonstrated
through experiments on several real-world datasets.
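The "naive model parameter averaging" that the abstract critiques refers to FedAvg-style weighted averaging of client weights. A minimal sketch of that baseline step (function name and values are illustrative, not from the paper):

```python
def federated_average(client_weights, client_sizes):
    """Weighted element-wise average of client parameter vectors.

    client_weights: one parameter vector (list of floats) per client
    client_sizes:   number of local samples per client (the averaging weights)
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += (size / total) * w
    return averaged

# Two clients with different local data volumes:
global_model = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
# global_model == [2.5, 3.5]
```

Because the averaging is purely element-wise over parameters, it ignores what each client's network has actually learned; this is the mismatch with the learning paradigm of neural networks that motivates the knowledge-sharing formulation.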
Related papers
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of using data similarity and clients sampling.
To address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method.
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey [2.769238399659845]
Federated learning (FL) offers a compelling framework for training large language models (LLMs)
We focus on machine unlearning, a crucial aspect for complying with privacy regulations like the Right to be Forgotten.
We explore various strategies that enable effective unlearning, such as perturbation techniques, model decomposition, and incremental learning.
arXiv Detail & Related papers (2024-06-14T08:40:58Z)
- Heterogeneous Contrastive Learning for Foundation Models and Beyond [73.74745053250619]
In the era of big data and Artificial Intelligence, an emerging paradigm is to utilize contrastive self-supervised learning to model large-scale heterogeneous data.
This survey critically evaluates the current landscape of heterogeneous contrastive learning for foundation models.
arXiv Detail & Related papers (2024-03-30T02:55:49Z)
- Personalized Federated Learning with Contextual Modulation and Meta-Learning [2.7716102039510564]
Federated learning has emerged as a promising approach for training machine learning models on decentralized data sources.
We propose a novel framework that combines federated learning with meta-learning techniques to enhance both efficiency and generalization capabilities.
arXiv Detail & Related papers (2023-12-23T08:18:22Z)
- UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNI achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z)
- Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved.
arXiv Detail & Related papers (2023-06-22T10:53:10Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while preserving their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis [18.4364234071951]
We propose a novel framework HyCon for hybrid contrastive learning of tri-modal representation.
Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning.
Our proposed method outperforms existing works.
arXiv Detail & Related papers (2021-09-04T06:04:21Z)
- Multi-Party Dual Learning [34.360153917562755]
We propose a multi-party dual learning (MPDL) framework to alleviate the problem of limited data with poor quality in an isolated party.
MPDL framework achieves significant improvement compared with state-of-the-art multi-party learning methods.
arXiv Detail & Related papers (2021-04-14T07:39:23Z)
- Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning [65.06445195580622]
Federated learning is a new paradigm that decouples data collection and model training via multi-party computation and model aggregation.
We conduct a focused survey of federated learning in conjunction with other learning algorithms.
arXiv Detail & Related papers (2021-02-25T15:18:13Z)
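The contrastive learning mechanism that recurs across these papers is typically an InfoNCE-style objective: pull a representation toward a matching "positive" and away from "negatives". A pure-Python sketch of that generic loss (all names and values are illustrative, not taken from any of the papers above):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.5):
    """-log( exp(sim(a,p)/t) / sum_k exp(sim(a,k)/t) ) over positive + negatives."""
    logits = [dot(anchor, positive) / temperature]
    logits += [dot(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max to stabilize the softmax
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]

# Loss is small when the anchor is close to the positive and far from negatives:
loss = info_nce(
    anchor=[1.0, 0.0],
    positive=[0.9, 0.1],
    negatives=[[-1.0, 0.0], [0.0, 1.0]],
)
```

In a knowledge-sharing setting, the anchor and positive would be representations of the same sample from a client model and the shared model, with other samples serving as negatives; the exact construction varies per paper.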
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.