Single-round Self-supervised Distributed Learning using Vision
Transformer
- URL: http://arxiv.org/abs/2301.02064v3
- Date: Sat, 15 Apr 2023 06:36:01 GMT
- Title: Single-round Self-supervised Distributed Learning using Vision
Transformer
- Authors: Sangjoon Park, Ik-Jae Lee, Jun Won Kim, Jong Chul Ye
- Abstract summary: We propose a self-supervised masked sampling distillation method for the vision transformer.
This method can be implemented without continuous communication and can enhance privacy by utilizing a vision transformer-specific encryption technique.
- Score: 34.76985278888513
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the recent success of deep learning in the field of medicine, the
issue of data scarcity is exacerbated by concerns about privacy and data
ownership. Distributed learning approaches, including federated learning, have
been investigated to address these issues. However, they are hindered by the
need for cumbersome communication overheads and weaknesses in privacy
protection. To tackle these challenges, we propose a self-supervised masked
sampling distillation method for the vision transformer. This method can be
implemented without continuous communication and can enhance privacy by
utilizing a vision transformer-specific encryption technique. We conducted
extensive experiments on two different tasks, which demonstrated the
effectiveness of our method. We achieved superior performance compared to the
existing distributed learning strategy as well as the fine-tuning-only
baseline. Furthermore, since the self-supervised model created using our
proposed method can achieve a general semantic understanding of the image, we
demonstrate its potential as a task-agnostic self-supervised foundation model
for various downstream tasks, thereby expanding its applicability in the
medical domain.
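The paper does not include code, but the core idea of masked sampling distillation can be sketched in plain Python: a teacher encodes the full image, a student encodes a randomly masked view, and the student is trained to match the teacher's patch representations. Everything below is illustrative, not the authors' implementation; the names (`patchify`, `mask_patches`, `encode`) are hypothetical, and the mean-pooling "encoder" is a toy stand-in for an actual vision transformer.

```python
import random

def patchify(image, patch_size):
    """Split a square 2-D image (list of rows) into flattened patches."""
    n = len(image)
    patches = []
    for r in range(0, n, patch_size):
        for c in range(0, n, patch_size):
            patches.append([image[r + i][c + j]
                            for i in range(patch_size)
                            for j in range(patch_size)])
    return patches

def mask_patches(patches, mask_ratio, rng):
    """Zero out a random subset of patches -- the student's masked view."""
    masked = [list(p) for p in patches]
    for idx in rng.sample(range(len(patches)), int(len(patches) * mask_ratio)):
        masked[idx] = [0.0] * len(masked[idx])
    return masked

def encode(patches):
    """Toy 'encoder': one scalar (the patch mean) per patch.
    A real system would run a vision transformer here."""
    return [sum(p) / len(p) for p in patches]

def distillation_loss(student_repr, teacher_repr):
    """Mean squared error between student and teacher representations."""
    return sum((s - t) ** 2
               for s, t in zip(student_repr, teacher_repr)) / len(student_repr)

rng = random.Random(0)
image = [[float(r * 4 + c) for c in range(4)] for r in range(4)]   # 4x4 toy image
teacher_repr = encode(patchify(image, 2))                          # teacher sees the full image
student_repr = encode(mask_patches(patchify(image, 2), 0.5, rng))  # student sees a masked view
loss = distillation_loss(student_repr, teacher_repr)               # minimized during training
```

In a real pipeline the loss would be backpropagated through the student network only; the single-round, privacy-enhancing aspects of the paper (no continuous communication, ViT-specific encryption) are orthogonal to this sketch.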
Related papers
- Unsupervised Meta-Learning via In-Context Learning [3.4165401459803335]
We propose a novel approach to unsupervised meta-learning that leverages the generalization abilities of in-context learning.
Our method reframes meta-learning as a sequence modeling problem, enabling the transformer encoder to learn task context from support images.
arXiv Detail & Related papers (2024-05-25T08:29:46Z)
- Stochastic Vision Transformers with Wasserstein Distance-Aware Attention [8.407731308079025]
Self-supervised learning is one of the most promising approaches to acquiring knowledge from limited labeled data.
We introduce a new vision transformer that integrates uncertainty and distance awareness into self-supervised learning pipelines.
Our proposed method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on a variety of datasets.
arXiv Detail & Related papers (2023-11-30T15:53:37Z)
- From Pretext to Purpose: Batch-Adaptive Self-Supervised Learning [32.18543787821028]
This paper proposes an adaptive technique of batch fusion for self-supervised contrastive learning.
It achieves state-of-the-art performance under equitable comparisons.
We suggest that the proposed method may contribute to the advancement of data-driven self-supervised learning research.
arXiv Detail & Related papers (2023-11-16T15:47:49Z)
- Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where [63.61248884015162]
We aim to ease the incorporation of masking operations into the contrastive-learning framework for convolutional neural networks.
We propose to explicitly incorporate a saliency constraint so that the masked regions are more evenly distributed between the foreground and background.
arXiv Detail & Related papers (2023-09-22T09:58:38Z)
- Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised Domain Adaptation [70.85686267987744]
Unsupervised domain adaptation transfers knowledge from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils transferred knowledge by matching image patches with the same semantics across the source and target domains.
arXiv Detail & Related papers (2023-03-04T03:02:12Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Semantics-Preserved Distortion for Personal Privacy Protection in Information Management [65.08939490413037]
This paper suggests a linguistically-grounded approach to distort texts while maintaining semantic integrity.
We present two distinct frameworks for semantic-preserving distortion: a generative approach and a substitutive approach.
We also explore privacy protection in a specific medical information management scenario, showing our method effectively limits sensitive data memorization.
arXiv Detail & Related papers (2022-01-04T04:01:05Z)
- Self-Supervised Representation Learning: Introduction, Advances and Challenges [125.38214493654534]
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets.
This article introduces this vibrant area including key concepts, the four main families of approach and associated state of the art, and how self-supervised methods are applied to diverse modalities of data.
arXiv Detail & Related papers (2021-10-18T13:51:22Z)
- Domain-Robust Visual Imitation Learning with Mutual Information Constraints [0.0]
We introduce a new algorithm called Disentangling Generative Adversarial Imitation Learning (DisentanGAIL).
Our algorithm enables autonomous agents to learn directly from high dimensional observations of an expert performing a task.
arXiv Detail & Related papers (2021-03-08T21:18:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.