A Practical Contrastive Learning Framework for Single-Image
Super-Resolution
- URL: http://arxiv.org/abs/2111.13924v2
- Date: Sun, 16 Jul 2023 16:00:40 GMT
- Title: A Practical Contrastive Learning Framework for Single-Image
Super-Resolution
- Authors: Gang Wu and Junjun Jiang and Xianming Liu
- Abstract summary: We investigate contrastive learning-based single image super-resolution from two perspectives.
We propose a practical contrastive learning framework for SISR, named PCL-SR.
Re-training existing benchmark methods with our proposed PCL-SR framework achieves superior performance.
- Score: 51.422185656787285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has achieved remarkable success on various high-level
tasks, but fewer contrastive learning-based methods have been proposed for
low-level tasks. It is challenging to directly adopt vanilla contrastive
learning technologies designed for high-level visual tasks to low-level image
restoration problems, because the acquired high-level global visual
representations are insufficient for low-level tasks that require rich texture
and context information. In this paper, we investigate contrastive
learning-based single-image super-resolution from two perspectives: positive
and negative sample construction and feature embedding. Existing methods
take naive sample construction approaches (e.g., treating the low-quality
input as a negative sample and the ground truth as a positive sample) and adopt
a prior model (e.g., a pre-trained VGG model) to obtain the feature embedding.
To address these limitations, we propose a practical contrastive learning
framework for SISR, named PCL-SR. We generate informative positive and hard
negative samples in frequency space. Instead of utilizing an additional
pre-trained network, we design a simple but effective embedding network,
inherited from the discriminator network, that is more task-friendly. Compared
with existing benchmark methods, re-training them with our proposed PCL-SR
framework yields superior performance. Extensive experiments and thorough
ablation studies demonstrate the effectiveness and technical contributions of
our proposed PCL-SR. The code and pre-trained models can be found
at https://github.com/Aitical/PCL-SISR.
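To make the recipe above concrete, here is a minimal, hypothetical PyTorch
sketch of a PCL-SR-style objective: hard negatives are synthesized by mixing
ground-truth and degraded spectra in frequency space, and an InfoNCE-style loss
pulls the super-resolved output toward the ground truth in an embedding space.
All layer sizes, the mixing rule, and the function names are assumptions for
illustration, not the authors' implementation; the embedding head merely stands
in for the paper's discriminator-inherited network.

```python
# Hedged sketch (not the authors' code): one plausible reading of the
# PCL-SR recipe; every design choice here is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

def frequency_mix(hr, lr_up, alpha=0.5):
    """Blend two images in frequency space (assumed rule for building
    hard negatives: mix ground-truth and degraded spectra)."""
    hr_f = torch.fft.fft2(hr)
    lr_f = torch.fft.fft2(lr_up)
    mixed = alpha * hr_f + (1 - alpha) * lr_f
    return torch.fft.ifft2(mixed).real

class EmbeddingNet(nn.Module):
    """Stand-in for the task-friendly embedding head the paper says is
    inherited from the discriminator; the layer sizes are invented."""
    def __init__(self, in_ch=3, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 3, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-norm embeddings

def contrastive_loss(embed, sr, hr, lr_up, alphas=(0.1, 0.3, 0.5), tau=0.1):
    """InfoNCE-style loss: pull the SR output toward the ground truth,
    push it away from frequency-mixed hard negatives."""
    anchor = embed(sr)                                     # (B, D)
    positive = embed(hr)                                   # (B, D)
    negatives = torch.stack(
        [embed(frequency_mix(hr, lr_up, a)) for a in alphas], dim=1)  # (B, K, D)
    pos_sim = (anchor * positive).sum(dim=1, keepdim=True) / tau      # (B, 1)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives) / tau     # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(sr.size(0), dtype=torch.long, device=sr.device)
    return F.cross_entropy(logits, labels)  # positive sits at index 0

# Usage with dummy tensors (lr_up = bicubic-upsampled low-quality input).
embed = EmbeddingNet()
sr, hr, lr_up = (torch.rand(2, 3, 48, 48) for _ in range(3))
loss = contrastive_loss(embed, sr, hr, lr_up)
```

In practice such a term would presumably be added, with a small weight, to the
SR backbone's usual pixel-wise reconstruction loss rather than used alone.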
Related papers
- Model-Based Transfer Learning for Contextual Reinforcement Learning [5.5597941107270215]
We show how to systematically select good tasks to train on, maximizing overall performance across a range of tasks.
The key idea behind our approach is to explicitly model the performance loss incurred by transferring a trained model.
We experimentally validate our methods using urban traffic and standard control benchmarks.
arXiv Detail & Related papers (2024-08-08T14:46:01Z) - InfRS: Incremental Few-Shot Object Detection in Remote Sensing Images [11.916941756499435]
In this paper, we explore the intricate task of incremental few-shot object detection in remote sensing images.
We introduce a pioneering fine-tuning-based technique, termed InfRS, designed to facilitate the incremental learning of novel classes.
We develop a prototypical calibration strategy based on the Wasserstein distance to mitigate the catastrophic forgetting problem (a minimal sketch of such a distance term appears after this list).
arXiv Detail & Related papers (2024-05-18T13:39:50Z) - A Lightweight Parallel Framework for Blind Image Quality Assessment [7.9562077122537875]
We propose a lightweight parallel framework (LPF) for blind image quality assessment (BIQA).
First, we extract visual features using a pre-trained feature extraction network; we then construct a simple yet effective feature embedding network (FEN) to transform the visual features.
We present two novel self-supervised subtasks: a sample-level category prediction task and a batch-level quality comparison task.
arXiv Detail & Related papers (2024-02-19T10:56:58Z) - Learning Deep Representations via Contrastive Learning for Instance
Retrieval [11.736450745549792]
This paper makes the first attempt to tackle the problem using instance-discrimination-based contrastive learning (CL).
In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models.
arXiv Detail & Related papers (2022-09-28T04:36:34Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - On Efficient Transformer and Image Pre-training for Low-level Vision [74.22436001426517]
Pre-training has produced numerous state-of-the-art results in high-level computer vision.
We present an in-depth study of image pre-training.
We find pre-training plays strikingly different roles in low-level tasks.
arXiv Detail & Related papers (2021-12-19T15:50:48Z) - Activation to Saliency: Forming High-Quality Labels for Unsupervised
Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z) - Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z) - Low-Resolution Face Recognition In Resource-Constrained Environments [34.13093606945265]
A non-parametric low-resolution face recognition model is proposed in this work.
It can be trained on a small number of labeled data samples, with low training complexity, and low-resolution input images.
The effectiveness of the proposed model is demonstrated by experiments on the LFW and the CMU Multi-PIE datasets.
arXiv Detail & Related papers (2020-11-23T19:14:02Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
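As referenced in the InfRS entry above, here is a minimal, hypothetical sketch
of a Wasserstein-distance term between class prototypes, assuming each
prototype is modeled as a diagonal Gaussian over features. For diagonal
Gaussians the squared 2-Wasserstein distance has the closed form
||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2. The exact calibration rule
used by InfRS is not specified in this summary, so everything below, including
all names, is illustrative only.

```python
# Hedged sketch, not the InfRS implementation: a Wasserstein penalty that
# discourages base-class prototypes from drifting during incremental learning.
import torch

def w2_diag_gaussian(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2 (closed form)."""
    return ((mu1 - mu2) ** 2).sum(-1) + ((var1.sqrt() - var2.sqrt()) ** 2).sum(-1)

def calibration_penalty(old_mu, old_var, new_mu, new_var):
    """Average prototype drift; adding this to the loss is one way to
    mitigate catastrophic forgetting of the base classes."""
    return w2_diag_gaussian(old_mu, old_var, new_mu, new_var).mean()

# Usage with dummy prototypes: 10 base classes, 256-dim features.
old_mu, old_var = torch.randn(10, 256), torch.rand(10, 256) + 1e-6
new_mu, new_var = old_mu + 0.05 * torch.randn(10, 256), old_var.clone()
print(calibration_penalty(old_mu, old_var, new_mu, new_var))
```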