Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers
- URL: http://arxiv.org/abs/2209.03745v1
- Date: Wed, 7 Sep 2022 02:30:36 GMT
- Title: Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers
- Authors: Kevin Miao, Akash Gokul, Raghav Singh, Suzanne Petryk, Joseph
Gonzalez, Kurt Keutzer, Trevor Darrell, Colorado Reed
- Abstract summary: We present spatial prior attention (SPAN), a framework that takes advantage of consistent spatial and semantic structure in unlabeled image datasets.
SPAN operates by regularizing attention masks from separate transformer heads to follow various priors over semantic regions.
We find that the resulting attention masks are more interpretable than those derived from domain-agnostic pretraining.
- Score: 79.60022233109397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent trends in self-supervised representation learning have focused on
removing inductive biases from training pipelines. However, inductive biases
can be useful in settings where limited data are available, or can provide additional
insight into the underlying data distribution. We present spatial prior
attention (SPAN), a framework that takes advantage of consistent spatial and
semantic structure in unlabeled image datasets to guide Vision Transformer
attention. SPAN operates by regularizing attention masks from separate
transformer heads to follow various priors over semantic regions. These priors
can be derived from data statistics or a single labeled sample provided by a
domain expert. We study SPAN through several detailed real-world scenarios,
including medical image analysis and visual quality assurance. We find that the
resulting attention masks are more interpretable than those derived from
domain-agnostic pretraining. SPAN produces a 58.7 mAP improvement for lung and
heart segmentation. We also find that our method yields a 2.2 mAUC improvement
compared to domain-agnostic pretraining when transferring the pretrained model
to a downstream chest disease classification task. Lastly, we show that SPAN
pretraining leads to higher downstream classification performance in low-data
regimes compared to domain-agnostic pretraining.
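No code accompanies this listing; the snippet below is only a minimal sketch of the stated mechanism, regularizing one head's [CLS] attention toward a spatial prior with a KL penalty. The function name `span_penalty`, the KL form, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch

def span_penalty(attn, prior, eps=1e-8):
    """KL(prior || attn) between a spatial prior and one head's attention.

    attn:  (batch, num_patches) softmax-normalized attention of the [CLS]
           token over image patches for a single transformer head.
    prior: (num_patches,) prior over a semantic region, summing to 1,
           e.g. derived from data statistics or one expert-labeled mask.
    """
    prior = prior.unsqueeze(0).expand_as(attn)
    kl = prior * (torch.log(prior + eps) - torch.log(attn + eps))
    return kl.sum(dim=-1).mean()

# Hypothetical use in a pretraining step: each head is pulled toward its
# own prior, on top of the usual self-supervised objective.
# loss = ssl_loss + lam * sum(span_penalty(a, p) for a, p in zip(head_attns, priors))
```

In the chest X-ray scenario from the abstract, separate heads could be assigned lung and heart region priors.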
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the interactions between downstream knowledge and pre-training guidance in the data.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator; a generic sketch of this kind of acquisition score follows this entry.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
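DOKT's exact indicators are not specified in this summary; the sketch below shows only a generic way to combine a diversity term and an uncertainty term into an active-learning acquisition score. The function name and the entropy/nearest-neighbor choices are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def acquisition_scores(probs, feats, labeled_feats, alpha=0.5):
    """Generic active-learning score mixing uncertainty and diversity.

    probs:         (N, C) predicted class probabilities for the unlabeled pool.
    feats:         (N, D) pool features.
    labeled_feats: (M, D) features of samples that are already labeled.
    """
    # Uncertainty: predictive entropy of each pool sample.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    # Diversity: distance to the nearest already-labeled sample.
    dists = torch.cdist(F.normalize(feats, dim=-1),
                        F.normalize(labeled_feats, dim=-1))
    diversity = dists.min(dim=-1).values
    # Query the samples with the highest combined score.
    return alpha * entropy + (1.0 - alpha) * diversity
```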
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that affine-transforms the feature space to mitigate the malignant effect of noise and improve generalization (see the sketch below).
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
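The summary above only says the feature space is affine-transformed; the following is a minimal sketch of that general idea (a learnable scale-and-shift over frozen features plus a linear head), with hypothetical names and without NMTune's actual objectives.

```python
import torch
import torch.nn as nn

class AffineFeatureTune(nn.Module):
    """Learnable scale-and-shift over frozen pre-trained features.

    Only the affine parameters and the linear head are trained, so the
    noisily pre-trained backbone itself is left untouched.
    """
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # freeze the pre-trained model
        self.scale = nn.Parameter(torch.ones(feat_dim))
        self.shift = nn.Parameter(torch.zeros(feat_dim))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            z = self.backbone(x)          # frozen features
        return self.head(self.scale * z + self.shift)
```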
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost (see the sketch below).
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
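As a rough illustration of the alignment stage just described, the sketch below scores target features against classifier-weight prototypes and minimizes an expected transport cost in both directions. The softmax transport plans and cosine cost are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def bidirectional_transport_cost(feats, prototypes, tau=0.1):
    """Expected transport cost between target features and source prototypes.

    feats:      (N, D) target-domain features.
    prototypes: (C, D) source prototypes, e.g. rows of the pre-trained
                pixel-wise classifier's weight matrix.
    """
    # Cosine cost between every feature and every prototype.
    cost = 1.0 - F.normalize(feats, dim=-1) @ F.normalize(prototypes, dim=-1).T
    # Feature -> prototype assignment and its expected cost.
    plan_fp = F.softmax(-cost / tau, dim=1)
    loss_fp = (plan_fp * cost).sum(dim=1).mean()
    # Prototype -> feature assignment (the reverse direction).
    plan_pf = F.softmax(-cost / tau, dim=0)
    loss_pf = (plan_pf * cost).sum(dim=0).mean()
    return loss_fp + loss_pf
```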
- In-Domain Self-Supervised Learning Improves Remote Sensing Image Scene Classification [5.323049242720532]
Self-supervised learning has emerged as a promising approach for remote sensing image classification.
We present a study of different self-supervised pre-training strategies and evaluate their effect across 14 downstream datasets.
arXiv Detail & Related papers (2023-07-04T10:57:52Z)
- Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical Image Segmentation [18.830738606514736]
This work proposes Curriculum-based Augmented Fourier Domain Adaptation (Curri-AFDA) for robust medical image segmentation; the underlying Fourier amplitude-mixing step is sketched below.
In particular, our curriculum learning strategy is based on the causal relationship of a model under different levels of data shift.
Experiments on two segmentation tasks of Retina and Nuclei collected from multiple sites and scanners suggest that our proposed method yields superior adaptation and generalization performance.
arXiv Detail & Related papers (2023-06-06T08:56:58Z)
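The summary does not spell out the Fourier step; below is a sketch of the classic Fourier domain adaptation operation (swapping low-frequency amplitude spectra) on which such methods build, with the curriculum assumed to grow the swap ratio `beta` over training. This is a generic illustration, not Curri-AFDA's exact augmentation.

```python
import numpy as np

def fourier_amplitude_mix(src, tgt, beta):
    """Swap the low-frequency amplitude of `src` with that of `tgt`.

    src, tgt: (H, W) float arrays (a single channel, for brevity).
    beta:     fraction of the centered spectrum to swap; a curriculum
              could grow this value as training progresses.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape
    bh, bw = int(h * beta / 2), int(w * beta / 2)
    ch, cw = h // 2, w // 2
    # Low frequencies sit at the center after fftshift.
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_tgt[ch - bh:ch + bh, cw - bw:cw + bw]

    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```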
- AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset [25.935496432142976]
It is a long-term vision of the Autonomous Driving (AD) community that perception models can learn from a large-scale point cloud dataset.
We formulate the point-cloud pre-training task as a semi-supervised problem, which leverages the few-shot labeled and massive unlabeled point-cloud data.
We achieve significant performance gains on a series of downstream perception benchmarks, including nuScenes and KITTI, under different baseline models.
arXiv Detail & Related papers (2023-06-01T12:32:52Z)
- Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation [12.080054869408213]
We develop a generative self-training framework for domain adaptive image translation with continuous value prediction and regression objectives; a generic self-training step of this kind is sketched below.
We evaluate our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
arXiv Detail & Related papers (2023-05-23T23:57:44Z)
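As a hedged sketch of a self-training step with continuous pseudo-labels, the snippet below has a teacher network predict regression targets on unlabeled target images and gates them with a crude two-pass agreement check. The agreement gating and the teacher/student split are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def self_training_step(student, teacher, x_target, threshold=0.1):
    """One self-training step with continuous (regression) pseudo-labels.

    A teacher network predicts continuous values on unlabeled target
    images and the student regresses onto them. Two stochastic teacher
    passes (e.g., with dropout active) give a crude agreement gate.
    """
    with torch.no_grad():
        y1, y2 = teacher(x_target), teacher(x_target)
        pseudo = 0.5 * (y1 + y2)
        # Keep only pixels where the two passes roughly agree.
        mask = ((y1 - y2).abs() < threshold).float()
    pred = student(x_target)
    return (mask * F.mse_loss(pred, pseudo, reduction="none")).mean()
```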
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data; the cluster-level pseudo-labelling idea is sketched below.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
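The title's cluster-level pseudo-labelling might look roughly like the sketch below: cluster the self-supervised target features, then give every member of a cluster the class the source classifier most favors within it. All details (k-means, one cluster per class, mean-probability voting) are assumptions for illustration.

```python
import torch
from sklearn.cluster import KMeans

def cluster_level_pseudo_labels(feats, probs, num_classes):
    """Assign one pseudo-label per cluster of target features.

    feats: (N, D) self-supervised target features (torch tensor).
    probs: (N, C) class probabilities from the source-trained classifier.
    Every member of a cluster receives the class with the highest mean
    probability inside that cluster.
    """
    ids = KMeans(n_clusters=num_classes, n_init=10).fit_predict(
        feats.detach().cpu().numpy())
    ids = torch.as_tensor(ids)

    labels = torch.empty(len(ids), dtype=torch.long)
    for k in range(num_classes):
        members = ids == k
        if members.any():
            labels[members] = probs[members].mean(dim=0).argmax()
    return labels
```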
- Consecutive Pretraining: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain [25.84756140221655]
ConSecutive PreTraining (CSPT) is proposed, based on the idea, borrowed from natural language processing (NLP), of not stopping pretraining.
CSPT can also release the huge potential of unlabeled data for task-aware model training.
The results show that with CSPT for task-aware model training, almost all downstream tasks in the remote sensing domain (RSD) outperform the previous supervised pretraining-then-fine-tuning approach.
arXiv Detail & Related papers (2022-07-08T12:32:09Z)
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.