Random Weights Networks Work as Loss Prior Constraint for Image Restoration
- URL: http://arxiv.org/abs/2303.16438v1
- Date: Wed, 29 Mar 2023 03:43:51 GMT
- Title: Random Weights Networks Work as Loss Prior Constraint for Image Restoration
- Authors: Man Zhou, Naishan Zheng, Jie Huang, Xiangyu Rui, Chunle Guo, Deyu Meng, Chongyi Li, Jinwei Gu
- Abstract summary: We present our belief ``Random Weights Networks Can Act as a Loss Prior Constraint for Image Restoration''.
Our belief can be directly inserted into existing networks without any additional training or testing computational cost.
To emphasize, our main focus is to spark interest in the realm of loss functions and rescue them from their currently neglected status.
- Score: 50.80507007507757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, orthogonal to the existing data and model studies, we
instead devote our efforts to investigating the potential of the loss function
from a new perspective, and present our belief ``Random Weights Networks Can
Act as a Loss Prior Constraint for Image Restoration''. Inspired by functional
theory, we provide several alternative solutions to implement our belief on
strict mathematical manifolds, including Taylor's Unfolding Network,
Invertible Neural Network, Central Difference Convolution and Zero-order
Filtering, as ``random weights network prototypes'', with respect to the
following four levels: 1) the different random weights strategies; 2) the
different network architectures, e.g., pure convolution layers or
transformers; 3) the different network architecture depths; 4) the different
numbers of random weights networks in combination. Furthermore, to enlarge the
capability of the randomly initialized manifolds, we devise the random weights
in the following two variants: 1) the weights are randomly initialized only
once, before the whole training procedure; 2) the weights are randomly
re-initialized at each training epoch. Our proposed belief can be directly
inserted into existing networks without any additional training or testing
computational cost. Extensive experiments across multiple image restoration
tasks, including image denoising, low-light image enhancement, and guided
image super-resolution, demonstrate the consistent performance gains obtained
by introducing our belief. To emphasize, our main focus is to spark interest
in the realm of loss functions and rescue them from their currently neglected
status. Code will be publicly available.
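As a concrete illustration of the belief, here is a minimal sketch (not the authors' released code) of how a random-weights loss prior could be attached to an existing restoration network in PyTorch: a frozen, randomly initialized convolutional stack plays the role of a ``random weights network prototype'', and the L1 distance between its responses to the restored and ground-truth images is added to the usual pixel loss. The class name `RandomWeightsLoss` and all hyper-parameters are illustrative; the paper's actual prototypes (Taylor's Unfolding Network, Invertible Neural Network, Central Difference Convolution, Zero-order Filtering) are more elaborate.

```python
# Minimal sketch (illustrative, not the authors' code): a frozen, randomly
# initialized CNN acts as a fixed feature extractor; the L1 distance between
# its responses to the restored and ground-truth images serves as an extra
# loss prior with no test-time cost.
import torch
import torch.nn as nn

class RandomWeightsLoss(nn.Module):
    """Hypothetical loss prior built on a random-weights network."""

    def __init__(self, depth: int = 4, channels: int = 64,
                 reinit_each_step: bool = False):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(depth):                    # level 3): vary the depth
            layers += [nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*layers)
        self.net.requires_grad_(False)            # the network is never trained
        self.reinit_each_step = reinit_each_step  # variant 2) of the paper

    def _reinit(self):
        for m in self.net.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)  # level 1): init strategy
                nn.init.zeros_(m.bias)

    def forward(self, restored, target):
        if self.reinit_each_step:                 # re-draw weights per step
            self._reinit()
        with torch.no_grad():
            feat_gt = self.net(target)
        feat_pred = self.net(restored)            # gradients flow to `restored`
        return nn.functional.l1_loss(feat_pred, feat_gt)

# usage sketch: total_loss = pixel_loss + lambda_prior * prior(restored, gt)
```

Because the extractor is never trained and is dropped at test time, such a term adds no trainable parameters and no inference cost, which is consistent with the paper's claim of zero training and testing overhead.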
Related papers
- Efficient Training with Denoised Neural Weights [65.14892033932895]
This work takes a novel step towards building a weight generator to synthesize the neural weights for initialization.
We use the image-to-image translation task with generative adversarial networks (GANs) as an example due to the ease of collecting model weights.
By initializing the image translation model with the denoised weights predicted by our diffusion model, the training requires only 43.3 seconds.
arXiv Detail & Related papers (2024-07-16T17:59:42Z)
- IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning aims to incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigate how to design an Innately Forgetting-Free Network (IF2Net).
IF2Net allows a single network to inherently learn unlimited mapping rules without telling task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z)
- A Perturbation Resistant Transformation and Classification System for Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived by deliberately designed perturbations.
In this paper, we design a multi-pronged training, unbounded input transformation, and image ensemble system that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z)
- Signing the Supermask: Keep, Hide, Invert [0.9475039534437331]
We present a novel approach that either drops a neural network's initial weights or inverts their respective sign.
We achieve a pruning rate of up to 99%, while still matching or exceeding the performance of various baseline and previous models.
arXiv Detail & Related papers (2022-01-31T17:17:37Z)
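Reading the summary literally, the core idea can be sketched as follows: the initial weights are frozen, and each weight is elementwise kept (+1), hidden (0), or sign-inverted (-1) by a mask. This hypothetical sketch samples the mask at random for brevity; the actual method learns the mask (for example via straight-through gradient estimates), which is omitted here.

```python
# Illustrative sketch of a "signed supermask": the network's initial weights
# are never updated; each weight is only kept (+1), hidden (0), or
# sign-inverted (-1) by an elementwise mask.
import torch
import torch.nn as nn

class SignedSupermaskLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        w = torch.empty(out_features, in_features)
        nn.init.kaiming_uniform_(w)
        self.weight = nn.Parameter(w, requires_grad=False)  # frozen init
        # Real methods learn scores and discretize them; here the mask is
        # simply sampled uniformly from {-1, 0, +1} for illustration.
        self.mask = nn.Parameter(
            torch.randint(-1, 2, w.shape).float(), requires_grad=False)

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask)
```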
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- ZerO Initialization: Initializing Residual Networks with only Zeros and Ones [44.66636787050788]
Deep neural networks are usually initialized with random weights, with an adequately selected initial variance to ensure stable signal propagation during training.
There is no consensus on how to select the variance, and this becomes challenging as the number of layers grows.
In this work, we replace the widely used random weight initialization with a fully deterministic initialization scheme, ZerO, which initializes residual networks with only zeros and ones.
Surprisingly, we find that ZerO achieves state-of-the-art performance over various image classification datasets, including ImageNet.
arXiv Detail & Related papers (2021-10-25T06:17:33Z)
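A simplified, hypothetical sketch of a deterministic zeros-and-ones initialization in the spirit of ZerO: square layers start as the identity and each residual branch ends in a zeroed layer, so every block is initially an identity map. The actual ZerO scheme also handles non-square layers (via Hadamard matrices in the paper); that part is omitted here.

```python
# Simplified sketch of a zeros-and-ones initialization: square layers start
# as the identity and the residual branch's last layer starts at zero, so
# each residual block is initially an exact identity map.
import torch
import torch.nn as nn

def zero_one_init(block: nn.Sequential) -> None:
    linears = [m for m in block if isinstance(m, nn.Linear)]
    for m in linears:
        nn.init.eye_(m.weight)          # identity: only zeros and ones
        nn.init.zeros_(m.bias)
    nn.init.zeros_(linears[-1].weight)  # zero the branch's output layer

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))
        zero_one_init(self.body)

    def forward(self, x):
        return x + self.body(x)         # returns exactly x at initialization
```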
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
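To make such a loss concrete, here is a hedged sketch of the plain-SURE special case for Gaussian denoising, with the divergence term estimated by a single Monte Carlo probe. The paper's projected GSURE additionally handles general degradation operators by projecting onto the range of the measurement matrix, which this simplification omits; the noise level `sigma` is assumed known.

```python
# Simplified sketch: a Monte Carlo SURE loss for Gaussian denoising.
# SURE = (1/n)||f(y) - y||^2 - sigma^2 + (2 sigma^2 / n) * div f(y),
# with the divergence estimated by one random probe b.
import torch

def mc_sure_loss(net, y, sigma, eps=1e-3):
    """y: noisy image batch; sigma: known Gaussian noise std (assumed)."""
    f_y = net(y)
    b = torch.randn_like(y)
    # divergence estimate: b^T (net(y + eps*b) - net(y)) / eps
    div = (b * (net(y + eps * b) - f_y)).sum() / eps
    n = y.numel()
    return ((f_y - y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div
```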
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
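The recovery procedure the summary describes can be sketched as follows, assuming a toy convolutional generator rather than the paper's architecture: gradient descent fits the un-trained network so that its output is consistent with a small number of random Gaussian measurements, with no training data involved.

```python
# Illustrative sketch: compressive sensing with an un-trained network.
# Gradient descent fits a small convolutional generator so that its output
# agrees with m << n random measurements of the unknown signal.
import torch
import torch.nn as nn

def recover(A, y, shape=(1, 1, 32, 32), steps=2000, lr=1e-3):
    """A: (m, n) measurement matrix; y: (m,) measurements. Toy decoder."""
    net = nn.Sequential(                      # un-trained generator g_theta
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))
    z = torch.randn(shape)                    # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = net(z).reshape(-1)            # candidate signal of length n
        loss = ((A @ x_hat - y) ** 2).mean()  # measurement consistency only
        loss.backward()
        opt.step()
    return net(z).detach()
```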