Online Exemplar Fine-Tuning for Image-to-Image Translation
- URL: http://arxiv.org/abs/2011.09330v1
- Date: Wed, 18 Nov 2020 15:13:16 GMT
- Title: Online Exemplar Fine-Tuning for Image-to-Image Translation
- Authors: Taewon Kang, Soohyun Kim, Sunwoo Kim, Seungryong Kim
- Abstract summary: Existing techniques to solve exemplar-based image-to-image translation within deep convolutional neural networks (CNNs) generally require a training phase to optimize the network parameters.
We propose a novel framework that, for the first time, solves exemplar-based translation through online optimization given an input image pair.
Our framework does not require the offline training phase that has been the main challenge of existing methods; it needs only pre-trained networks to enable online optimization.
- Score: 32.556050882376965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing techniques to solve exemplar-based image-to-image translation within
deep convolutional neural networks (CNNs) generally require a training phase to
optimize the network parameters on domain-specific and task-specific
benchmarks, thus having limited applicability and generalization ability. In
this paper, we propose a novel framework, called online exemplar fine-tuning
(OEFT), that for the first time solves exemplar-based translation through
online optimization given an input image pair, fine-tuning off-the-shelf,
general-purpose networks to the input image pair itself.
We design two sub-networks, namely correspondence fine-tuning and multiple GAN
inversion, and optimize these network parameters and latent codes, starting
from the pre-trained ones, with well-defined loss functions. Our framework does
not require the offline training phase that has been the main challenge of
existing methods; it needs only pre-trained networks to enable online
optimization. Experimental results show that our framework generalizes well to
unseen image pairs and even clearly outperforms state-of-the-art methods that
require an intensive training phase.
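The online optimization described in the abstract can be pictured as a small test-time training loop. The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation; the module names (`corr_net`, `generator`), the single L1 objective, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of online (test-time) fine-tuning for one input/exemplar
# pair. `corr_net` stands in for the correspondence network and `generator`
# for a pre-trained GAN decoded from `latent`; all are placeholders.

def online_exemplar_finetune(corr_net, generator, latent, x_input, x_exemplar,
                             steps=200, lr=1e-4):
    latent = latent.clone().requires_grad_(True)   # start from pre-trained code
    opt = torch.optim.Adam(list(corr_net.parameters()) + [latent], lr=lr)
    for _ in range(steps):
        warped = corr_net(x_input, x_exemplar)     # exemplar warped to the input
        output = generator(latent)                 # image decoded from the latent
        # Hypothetical objective: match the warped exemplar's appearance.
        # The paper also uses structure-preserving terms; omitted here.
        loss = F.l1_loss(output, warped)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(latent).detach()
```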
Related papers
- Rotation Equivariant Proximal Operator for Deep Unfolding Methods in Image Restoration [62.41329042683779]
We propose a high-accuracy rotation equivariant proximal network that effectively embeds rotation symmetry priors into the deep unfolding framework.
arXiv Detail & Related papers (2023-12-25T11:53:06Z) - Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z) - D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive
- D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive Sensing [9.014593915305069]
Deep unfolding networks (DUNs) have achieved impressive success in compressive sensing (CS).
By unfolding the proposed framework into deep neural networks, we further design a novel Dual-Domain Deep Convolutional Coding Network (D3C2-Net).
Experiments on natural and MR images demonstrate that our D3C2-Net achieves higher performance and better accuracy-complexity trade-offs than other state-of-the-art methods.
arXiv Detail & Related papers (2022-07-27T14:52:32Z) - Low-light Image Enhancement by Retinex Based Algorithm Unrolling and
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrate the effectiveness of the proposed method, both quantitatively and visually, compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z) - Deep Translation Prior: Test-time Training for Photorealistic Style
- Deep Translation Prior: Test-time Training for Photorealistic Style Transfer [36.82737412912885]
Recent techniques to solve photorealistic style transfer within deep convolutional neural networks (CNNs) generally require intensive training from large-scale datasets.
We propose a novel framework, dubbed Deep Translation Prior (DTP), to accomplish photorealistic style transfer through test-time training on a given input image pair with untrained networks.
arXiv Detail & Related papers (2021-12-12T04:54:27Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
Deep equilibrium models are a class of models that forgo traditional network depth and instead compute the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - Stochastic Primal-Dual Deep Unrolling Networks for Imaging Inverse
- Stochastic Primal-Dual Deep Unrolling Networks for Imaging Inverse Problems [3.7819322027528113]
We present a new type of efficient deep-unrolling network for solving imaging inverse problems.
In our unrolling network, we use only a subset of the forward and adjoint operators.
Our numerical results demonstrate the effectiveness of our approach on an X-ray CT imaging task.
arXiv Detail & Related papers (2021-10-19T16:46:03Z) - Smoother Network Tuning and Interpolation for Continuous-level Image
- Smoother Network Tuning and Interpolation for Continuous-level Image Processing [7.730087303035803]
The Filter Transition Network (FTN) is a structurally smoother module for continuous-level learning.
FTN generalizes well across various tasks and networks and causes fewer undesirable side effects.
For stable learning of FTN, we additionally propose a method to initialize non-linear neural network layers with identity mappings.
arXiv Detail & Related papers (2020-10-05T18:29:52Z) - A Flexible Framework for Designing Trainable Priors with Adaptive
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - Dynamic Hierarchical Mimicking Towards Consistent Optimization
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z) - Regularized Adaptation for Stable and Efficient Continuous-Level
- Regularized Adaptation for Stable and Efficient Continuous-Level Learning on Image Processing Networks [7.730087303035803]
We propose a novel continuous-level learning framework using a Filter Transition Network (FTN).
FTN is a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
Extensive results for various image processing tasks indicate that the performance of FTN is stable in terms of adaptation and interpolation.
arXiv Detail & Related papers (2020-03-11T07:46:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.