Multi-network Contrastive Learning Based on Global and Local
Representations
- URL: http://arxiv.org/abs/2306.15930v2
- Date: Sun, 30 Jul 2023 11:00:29 GMT
- Title: Multi-network Contrastive Learning Based on Global and Local
Representations
- Authors: Weiquan Li, Xianzhong Long, Yun Li
- Abstract summary: This paper proposes a multi-network contrastive learning framework based on global and local representations.
We introduce global and local feature information for self-supervised contrastive learning through multiple networks.
The framework also expands the number of samples used for contrast and improves the training efficiency of the model.
- Score: 4.190134425277768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The popularity of self-supervised learning has made it possible to train
models without relying on labeled data, which saves expensive annotation costs.
However, most existing self-supervised contrastive learning methods often
overlook the combination of global and local feature information. This paper
proposes a multi-network contrastive learning framework based on global and
local representations. We introduce global and local feature information for
self-supervised contrastive learning through multiple networks. The model
learns feature information at different scales of an image by contrasting the
embedding pairs generated by multiple networks. The framework also expands the
number of samples used for contrast and improves the training efficiency of the
model. Linear evaluation results on three benchmark datasets show that our
method outperforms several existing classical self-supervised learning methods.
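The core mechanism described in the abstract, contrasting embedding pairs produced by multiple networks, can be illustrated with a symmetric InfoNCE loss. Below is a minimal NumPy sketch under stated assumptions: linear maps stand in for the global and local encoder networks, and additive Gaussian noise stands in for image augmentation; the paper's actual architectures, view construction, and loss details are not specified here.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(z_a, z_b, temperature=0.5):
    """Symmetric InfoNCE loss between two batches of embeddings.
    Row i of z_a and row i of z_b form a positive pair; all other
    rows in the batch serve as negatives."""
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature          # (N, N) cosine-similarity matrix
    labels = np.arange(len(z_a))

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean() # -log p(positive) on diagonal

    # average the two contrast directions (a->b and b->a)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
N, D, E = 8, 32, 16
# Hypothetical stand-ins for two encoder networks (plain linear maps here).
W_global = rng.normal(size=(D, E))
W_local  = rng.normal(size=(D, E))

x = rng.normal(size=(N, D))                        # a batch of (flattened) images
global_view = x + 0.05 * rng.normal(size=x.shape)  # stand-in "global" augmentation
local_view  = x + 0.05 * rng.normal(size=x.shape)  # stand-in "local" augmentation

z_g = global_view @ W_global   # embeddings from the global-feature network
z_l = local_view  @ W_local    # embeddings from the local-feature network

# Every cross-network pair (z_g[i], z_l[i]) is a positive, so adding networks
# expands the set of contrastive pairs relative to a single-network setup.
loss = info_nce(z_g, z_l)
print(float(loss))
```

With more than two networks, the same loss would be applied to each pair of network outputs, which is one way to read the claim that the framework "expands the number of samples used for contrast."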
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
A single global model, however, can fit heterogeneous client data poorly; personalized federated learning (PFL) seeks to address this by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs [19.130197923214123]
Learning an effective global model on private and decentralized datasets has become an increasingly important challenge of machine learning.
We propose a feature fusion approach that extracts local representations from local models and incorporates them into a global representation that improves the prediction performance.
This paper presents solutions to these problems and demonstrates them in real-world applications on time series data such as power grids and traffic networks.
arXiv Detail & Related papers (2023-06-02T02:24:27Z)
- Deep Learning Model with GA based Feature Selection and Context Integration [2.3472688456025756]
We propose a novel three-layered deep learning model that assimilates or independently learns global and local contextual information alongside visual features.
The novelty of the proposed model is that One-vs-All binary class-based learners are introduced to learn Genetic Algorithm (GA) optimized features in the visual layer.
Optimized visual features with global and local contextual information play a significant role in improving accuracy and producing stable predictions comparable to state-of-the-art deep CNN models.
arXiv Detail & Related papers (2022-04-13T06:28:41Z)
- Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning [53.90317574898643]
We propose a simple and general framework for self-supervised point cloud representation learning.
Inspired by how human beings understand the world, we utilize knowledge distillation to learn both global shape information and the relationship between global shape and local structures.
Our method achieves the state-of-the-art performance on linear classification and multiple other downstream tasks.
arXiv Detail & Related papers (2022-02-09T02:51:59Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.