Self-supervised regression learning using domain knowledge: Applications
to improving self-supervised denoising in imaging
- URL: http://arxiv.org/abs/2205.04821v1
- Date: Tue, 10 May 2022 11:46:10 GMT
- Title: Self-supervised regression learning using domain knowledge: Applications
to improving self-supervised denoising in imaging
- Authors: Il Yong Chun, Dongwon Park, Xuehang Zheng, Se Young Chun, Yong Long
- Abstract summary: This paper proposes a general self-supervised regression learning (SSRL) framework that enables learning regression neural networks with only input data.
Numerical experiments for low-dose computed tomography denoising and camera image denoising demonstrate that the proposed SSRL significantly improves denoising quality.
- Score: 27.34785258514146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regression that predicts continuous quantities is a central part of
applications using computational imaging and computer vision technologies. Yet,
studying and understanding self-supervised learning for regression tasks -
except for one particular regression task, image denoising - have lagged behind.
This paper proposes a general self-supervised regression learning (SSRL)
framework that enables learning regression neural networks with only input data
(but without ground-truth target data), by using a designable pseudo-predictor
that encapsulates domain knowledge of a specific application. The paper
underlines the importance of using domain knowledge by showing that, under
different settings, a better pseudo-predictor can bring the properties of SSRL
closer to those of ordinary supervised learning. Numerical experiments on
low-dose computed tomography denoising and camera image denoising
demonstrate that the proposed SSRL significantly improves denoising quality
over several existing self-supervised denoising methods.
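The core SSRL idea - train a regression network against targets produced by a domain-knowledge pseudo-predictor rather than unavailable ground truth - can be illustrated with a deliberately tiny sketch. Everything below is hypothetical (a 1-D signal, a moving-average pseudo-predictor standing in for domain knowledge, and an affine map standing in for the network); it is not the paper's method, only the training pattern the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Only noisy measurements are available; the clean signal is used solely
# to evaluate the result, never for training.
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.5 * rng.standard_normal(256)

def pseudo_predictor(x):
    """Designable pseudo-predictor encoding domain knowledge (here, that the
    underlying signal is smooth): a 5-tap moving average. In SSRL this
    supplies training targets in place of unavailable ground truth."""
    kernel = np.ones(5) / 5
    return np.convolve(x, kernel, mode="same")

# Toy "regression network": an affine map y = a*x + b, fit by gradient
# descent on the MSE between its output and the pseudo-predictor's output.
a, b = 1.0, 0.0
target = pseudo_predictor(noisy)
lr = 0.5
for _ in range(300):
    err = a * noisy + b - target
    a -= lr * np.mean(err * noisy)  # dMSE/da
    b -= lr * np.mean(err)          # dMSE/db

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((a * noisy + b - clean) ** 2))
```

Even this crude pseudo-predictor pulls the fitted map toward a shrinkage solution (a < 1), which is why the denoised output sits closer to the clean signal than the raw measurements do.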
Related papers
- Differential Privacy Mechanisms in Neural Tangent Kernel Regression [29.187250620950927]
We study differential privacy (DP) in the Neural Tangent Kernel (NTK) regression setting.
We show provable guarantees for both differential privacy and test accuracy of our NTK regression.
To our knowledge, this is the first work to provide a DP guarantee for NTK regression.
arXiv Detail & Related papers (2024-07-18T15:57:55Z)
- USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution [47.38982697349244]
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc.
We propose incorporating active learning into dense regression models to address this problem.
Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance.
arXiv Detail & Related papers (2023-05-27T16:33:43Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Supervision by Denoising for Medical Image Segmentation [17.131944478890293]
We propose "supervision by denoising" (SUD), a framework that enables us to supervise models using their own soft labels.
SUD unifies averaging and spatial denoising techniques under a denoising framework and alternates denoising and model weight update steps.
As example applications, we apply SUD to two problems arising from biomedical imaging.
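The alternation SUD describes - denoise the model's own soft labels, then take a weight-update step toward them - can be sketched with a toy 1-D example. The setup below is entirely assumed (a per-position soft-label "model", a moving-average spatial denoiser), not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "segmentation": a binary mask observed through noisy soft labels.
x = np.linspace(0, 1, 128)
clean = (np.sin(2 * np.pi * x) > 0).astype(float)
noisy_labels = np.clip(clean + 0.4 * rng.standard_normal(128), 0.0, 1.0)

def spatial_denoise(y, k=5):
    # SUD-style spatial denoising step applied to soft labels
    # (a moving average here, as a stand-in for the paper's denoisers).
    kernel = np.ones(k) / k
    return np.convolve(y, kernel, mode="same")

# Stand-in "model": one soft-label value per position, initialized from the
# noisy labels. Each round alternates (1) denoising the model's own soft
# labels and (2) a weight-update step that moves predictions toward them.
pred = noisy_labels.copy()
for _ in range(5):
    target = spatial_denoise(pred)       # denoising step
    pred = pred + 0.5 * (target - pred)  # weight-update step (toy gradient step)

mse_before = float(np.mean((noisy_labels - clean) ** 2))
mse_after = float(np.mean((pred - clean) ** 2))
```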
arXiv Detail & Related papers (2022-02-07T05:29:16Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing MPA systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive-based methods are able to match and exceed SoTA performance for MPA regression tasks.
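A weighted contrastive loss for regression can be sketched as follows; the specific form below (pair weights decaying with label distance, plus a margin-based repulsion term) is an assumption for illustration, not the loss defined in the paper.

```python
import numpy as np

def weighted_contrastive_loss(emb, labels, tau=0.5):
    """Toy weighted contrastive loss for regression (assumed form): pairs
    with closer labels get larger weights, pulling their embeddings
    together; distant-label pairs are pushed apart up to a margin of 1."""
    n = len(labels)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = np.exp(-abs(labels[i] - labels[j]) / tau)  # label-similarity weight
            d2 = np.sum((emb[i] - emb[j]) ** 2)            # squared embedding distance
            # attract similar-label pairs, repel dissimilar ones (margin 1.0)
            loss += w * d2 + (1 - w) * max(0.0, 1.0 - np.sqrt(d2)) ** 2
    return loss / (n * (n - 1) / 2)

# Example: embeddings that respect the label structure incur a lower loss.
labels = np.array([0.0, 0.1, 2.0])
close_pairs = np.array([[0.0], [0.05], [2.0]])  # matches label ordering
mixed_pairs = np.array([[0.0], [2.0], [0.05]])  # violates it
```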
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- Self-Adaptive Training: Bridging the Supervised and Self-Supervised Learning [16.765461276790944]
Self-adaptive training is a unified training algorithm that dynamically calibrates and enhances training process by model predictions without incurring extra computational cost.
We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples.
Our analysis shows that model predictions are able to magnify useful underlying information in the data, and this phenomenon occurs broadly even in the absence of any label information.
arXiv Detail & Related papers (2021-01-21T17:17:30Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
- Supervised Learning of Sparsity-Promoting Regularizers for Denoising [13.203765985718205]
We present a method for supervised learning of sparsity-promoting regularizers for image denoising.
Our experiments show that the proposed method can learn an operator that outperforms well-known regularizers.
arXiv Detail & Related papers (2020-06-09T21:38:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.