Classification-Denoising Networks
- URL: http://arxiv.org/abs/2410.03505v1
- Date: Fri, 4 Oct 2024 15:20:57 GMT
- Title: Classification-Denoising Networks
- Authors: Louis Thiry, Florentin Guth
- Abstract summary: Image classification and denoising suffer from complementary issues: classifiers lack robustness, while denoisers partially ignore conditioning information.
We argue that they can be alleviated by unifying both tasks through a model of the joint probability of (noisy) images and class labels.
Numerical experiments on CIFAR-10 and ImageNet show competitive classification and denoising performance.
- Score: 6.783232060611113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image classification and denoising suffer from complementary issues: classifiers lack robustness, while denoisers partially ignore conditioning information. We argue that both can be alleviated by unifying the two tasks through a model of the joint probability of (noisy) images and class labels. Classification is performed with a forward pass followed by conditioning. Using the Tweedie-Miyasawa formula, we evaluate the denoising function with the score, which can be computed by marginalization and back-propagation. The training objective is then a combination of cross-entropy loss and denoising score matching loss integrated over noise levels. Numerical experiments on CIFAR-10 and ImageNet show competitive classification and denoising performance compared to reference deep convolutional classifiers/denoisers, with significantly improved efficiency compared to previous joint approaches. Our model shows increased robustness to adversarial perturbations compared to a standard discriminative classifier, and allows for a novel interpretation of adversarial gradients as a difference of denoisers.
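The pipeline the abstract describes (model the joint log-probability, condition on the input for classification, marginalize over classes and differentiate to get the score, then denoise via Tweedie-Miyasawa, E[x|y] = y + sigma^2 * d/dy log p(y)) can be illustrated in closed form with a toy model. The sketch below is a minimal, assumed 1-D illustration: a Gaussian-mixture "joint" model over (x, c) stands in for the network, so the score and the posterior class weights are analytic rather than obtained by back-propagation; all names and the toy prior are illustrative choices, not the paper's implementation.

```python
import math

def gauss_logpdf(y, mu, var):
    # log N(y; mu, var)
    return -0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

def joint_denoise(y, sigma, components):
    """Toy joint model p(x, c): a 1-D Gaussian-mixture prior over clean x,
    components = [(pi_c, mu_c, s_c), ...] with weight, mean, variance.
    Noisy observation: y = x + noise, noise ~ N(0, sigma^2)."""
    # Marginal of noisy data: p(y) = sum_c pi_c N(y; mu_c, s_c + sigma^2).
    logps = [math.log(pi) + gauss_logpdf(y, mu, s + sigma ** 2)
             for (pi, mu, s) in components]
    # "Classification": posterior class weights p(c | y), via softmax of
    # per-class log-joints (log-sum-exp trick for stability).
    m = max(logps)
    ws = [math.exp(lp - m) for lp in logps]
    Z = sum(ws)
    ws = [w / Z for w in ws]
    # Score of the marginal: d/dy log p(y) = sum_c w_c (mu_c - y)/(s_c + sigma^2).
    score = sum(w * (mu - y) / (s + sigma ** 2)
                for w, (pi, mu, s) in zip(ws, components))
    # Tweedie-Miyasawa: E[x | y] = y + sigma^2 * score.
    return y + sigma ** 2 * score, ws

# Two well-separated classes; denoise a sample near the positive mode.
components = [(0.5, -2.0, 0.3), (0.5, 2.0, 0.3)]
y, sigma = 1.5, 1.0
x_hat, probs = joint_denoise(y, sigma, components)
# Cross-check: Tweedie output equals the direct posterior mean
# E[x | y] = sum_c p(c|y) * (s_c * y + sigma^2 * mu_c) / (s_c + sigma^2).
direct = sum(w * (s * y + sigma ** 2 * mu) / (s + sigma ** 2)
             for w, (pi, mu, s) in zip(probs, components))
```

In this toy setting the Tweedie estimate agrees exactly with the posterior mean computed directly, which is the identity the paper exploits: a single model of the joint probability yields both the class posterior (classification) and, through its score, the minimum-mean-squared-error denoiser.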
Related papers
- Unsupervised Image Denoising with Score Function [18.814785792844738]
Current unsupervised learning methods for single image denoising usually have constraints in applications.
We propose a new approach which is more general and applicable to complicated noise models.
Our method is comparable when the noise model is simple, and has good performance in complicated cases where other methods are not applicable or perform poorly.
arXiv Detail & Related papers (2023-04-17T15:52:43Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks, then it implicitly matches the probabilistic distribution of clear images at the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- Synergy Between Semantic Segmentation and Image Denoising via Alternate Boosting [102.19116213923614]
We propose a boosting network to perform denoising and segmentation alternately.
We observe that not only denoising helps combat the drop of segmentation accuracy due to noise, but also pixel-wise semantic information boosts the capability of denoising.
Experimental results show that the denoised image quality is improved substantially and the segmentation accuracy is improved to close to that of clean images.
arXiv Detail & Related papers (2021-02-24T06:48:45Z)
- Distribution Conditional Denoising: A Flexible Discriminative Image Denoiser [0.0]
A flexible discriminative image denoiser is introduced in which multi-task learning methods are applied to a denoising FCN based on U-Net.
It has been shown that this conditional training method can generalise a fixed noise level U-Net denoiser to a variety of noise levels.
arXiv Detail & Related papers (2020-11-24T21:27:18Z)
- Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising [54.730707387866076]
We introduce Noise2Same, a novel self-supervised denoising framework.
In particular, Noise2Same requires neither J-invariance nor extra information about the noise model.
Our results show that Noise2Same remarkably outperforms previous self-supervised denoising methods.
arXiv Detail & Related papers (2020-10-22T18:12:26Z)
- Unpaired Learning of Deep Image Denoising [80.34135728841382]
This paper presents a two-stage scheme by incorporating self-supervised learning and knowledge distillation.
For self-supervised learning, we suggest a dilated blind-spot network (D-BSN) to learn denoising solely from real noisy images.
Experiments show that our unpaired learning method performs favorably on both synthetic noisy images and real-world noisy photographs.
arXiv Detail & Related papers (2020-08-31T16:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.