Sparsity-Aware Optimal Transport for Unsupervised Restoration Learning
- URL: http://arxiv.org/abs/2305.00273v1
- Date: Sat, 29 Apr 2023 15:09:48 GMT
- Title: Sparsity-Aware Optimal Transport for Unsupervised Restoration Learning
- Authors: Fei Wen, Wei Wang and Wenxian Yu
- Abstract summary: In this paper, we exploit the sparsity of degradation in the unsupervised restoration learning framework to significantly boost its performance on complex restoration tasks.
Experiments on real-world super-resolution, deraining, and dehazing demonstrate that SOT can improve the PSNR of OT by about 2.6 dB, 2.7 dB and 1.3 dB, respectively.
- Score: 17.098664719423404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies show that, without any prior model, the unsupervised
restoration learning problem can be optimally formulated as an optimal
transport (OT) problem, which has shown promising performance on denoising
tasks to approach the performance of supervised methods. However, it still
significantly lags behind state-of-the-art supervised methods on complex
restoration tasks such as super-resolution, deraining, and dehazing. In this
paper, we exploit the sparsity of degradation in the OT framework to
significantly boost its performance on these tasks. First, we disclose an
observation that the degradation in these tasks is quite sparse in the
frequency domain, and then propose a sparsity-aware optimal transport (SOT)
criterion for unsupervised restoration learning. Further, we provide an
analytic example to illustrate that exploiting the sparsity helps to reduce the
ambiguity in finding an inverse map for restoration. Experiments on real-world
super-resolution, deraining, and dehazing demonstrate that SOT can improve the
PSNR of OT by about 2.6 dB, 2.7 dB and 1.3 dB, respectively, while achieving
the best perception scores among the compared supervised and unsupervised
methods. Particularly, on the three tasks, SOT significantly outperforms
existing unsupervised methods and approaches the performance of
state-of-the-art supervised methods.
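The abstract's key observation is that the degradation in tasks like deraining is sparse in the frequency domain. The snippet below is a minimal illustrative sketch of that observation, not the authors' code or the SOT criterion itself: it compares an assumed rain-like periodic degradation against dense Gaussian noise and measures what fraction of the residual's Fourier coefficients are significant (lower means sparser).

```python
import numpy as np

def frequency_sparsity(clean, degraded):
    """Fraction of Fourier coefficients of the degradation residual
    whose magnitude exceeds 1% of the peak magnitude (lower = sparser)."""
    residual = degraded - clean
    spectrum = np.abs(np.fft.fft2(residual))
    threshold = 0.01 * spectrum.max()
    return np.mean(spectrum > threshold)

rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 64))

# A periodic, streak-like degradation activates only a few frequencies,
# while dense Gaussian noise spreads energy across the whole spectrum.
x = np.arange(64)
streak = 0.5 * np.sin(2 * np.pi * 8 * x / 64)[None, :]
noise = 0.5 * rng.standard_normal((64, 64))

sparse_frac = frequency_sparsity(clean, clean + streak)
dense_frac = frequency_sparsity(clean, clean + noise)
```

Under these assumptions `sparse_frac` is tiny while `dense_frac` is close to 1, which is the kind of structure a sparsity-aware criterion can exploit to narrow down the inverse map.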
Related papers
- Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE)
arXiv Detail & Related papers (2024-10-16T08:07:18Z) - ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z) - Variational Delayed Policy Optimization [25.668512485348952]
In environments with delayed observation, state augmentation by including actions within the delay window is adopted to recover the Markov property and enable reinforcement learning (RL)
State-of-the-art (SOTA) RL techniques with Temporal-Difference (TD) learning frameworks often suffer from learning inefficiency, due to the significant expansion of the augmented state space with the delay.
This work introduces a novel framework called Variational Delayed Policy Optimization (VDPO), which reformulates delayed RL as a variational inference problem.
arXiv Detail & Related papers (2024-05-23T06:57:04Z) - Adaptive trajectory-constrained exploration strategy for deep
reinforcement learning [6.589742080994319]
Deep reinforcement learning (DRL) faces significant challenges in addressing the hard-exploration problems in tasks with sparse or deceptive rewards and large state spaces.
We propose an efficient adaptive trajectory-constrained exploration strategy for DRL.
We conduct experiments on two large 2D grid world mazes and several MuJoCo tasks.
arXiv Detail & Related papers (2023-12-27T07:57:15Z) - Gradient constrained sharpness-aware prompt learning for vision-language
models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLM)
By analyzing the loss landscapes of the state-of-the-art method and vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates to both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted as Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp)
arXiv Detail & Related papers (2023-09-14T17:13:54Z) - Majorization-Minimization for sparse SVMs [46.99165837639182]
Support Vector Machines (SVMs) were introduced several decades ago for performing binary classification tasks under a supervised framework.
They often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena.
In this work, we investigate the training of SVMs through a smooth sparse-promoting-regularized squared hinge loss minimization.
arXiv Detail & Related papers (2023-08-31T17:03:16Z) - Efficient Deep Reinforcement Learning Requires Regulating Overfitting [91.88004732618381]
We show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms.
We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks.
arXiv Detail & Related papers (2023-04-20T17:11:05Z) - Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision [22.904752492573504]
We propose a weighted contrastive learning method by leveraging the supervised data to estimate the reliability of pre-training instances.
Experimental results on three supervised datasets demonstrate the advantages of our proposed weighted contrastive learning approach.
arXiv Detail & Related papers (2022-05-18T07:45:59Z) - Digging into Uncertainty in Self-supervised Multi-view Stereo [57.04768354383339]
We propose a novel Uncertainty reduction Multi-view Stereo (UMVS) framework for self-supervised learning.
Our framework achieves the best performance among unsupervised MVS methods, and is competitive with its supervised counterparts.
arXiv Detail & Related papers (2021-08-30T02:53:08Z) - WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD)
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z) - A Comprehensive Approach to Unsupervised Embedding Learning based on AND Algorithm [18.670975246545208]
Unsupervised embedding learning aims to extract good representation from data without the need for any manual labels.
This paper proposes a new unsupervised embedding approach, called Super-AND, which extends the current state-of-the-art model.
Super-AND outperforms all existing approaches and achieves an accuracy of 89.2% on the image classification task for CIFAR-10.
arXiv Detail & Related papers (2020-02-26T13:22:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.