Smoother Network Tuning and Interpolation for Continuous-level Image
Processing
- URL: http://arxiv.org/abs/2010.02270v1
- Date: Mon, 5 Oct 2020 18:29:52 GMT
- Title: Smoother Network Tuning and Interpolation for Continuous-level Image
Processing
- Authors: Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon,
Sangyoun Lee
- Abstract summary: Filter Transition Network (FTN) is a structurally smoother module for continuous-level learning.
FTN generalizes well across various tasks and networks and causes fewer undesirable side effects.
For stable learning of FTN, we additionally propose a method to initialize non-linear neural network layers with identity mappings.
- Score: 7.730087303035803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Convolutional Neural Network (CNN) based image processing, most studies
propose networks that are optimized for a single level (or single objective);
thus, they underperform on other levels and must be retrained to deliver
optimal performance. Using multiple models to cover multiple levels involves
very high computational costs. To solve these problems, recent approaches train
networks on two different levels and propose their own interpolation methods to
enable arbitrary intermediate levels. However, many of them fail to generalize
or have certain side effects in practical usage. In this paper, we define these
frameworks as network tuning and interpolation and propose a novel module for
continuous-level learning, called Filter Transition Network (FTN). This module
is a structurally smoother module than existing ones. Therefore, the frameworks
with FTN generalize well across various tasks and networks and cause fewer
undesirable side effects. For stable learning of FTN, we additionally propose a
method to initialize non-linear neural network layers with identity mappings.
Extensive results for various image processing tasks indicate that the
performance of FTN is comparable in multiple continuous levels, and is
significantly smoother and lighter than that of other frameworks.
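To make the network tuning and interpolation framework more concrete, below is a minimal PyTorch sketch of the general idea the abstract describes: a small transition module transforms the filters of a network trained for one level toward a second level, the module is initialized as an identity mapping for stable learning, and an interpolation coefficient blends the two filter sets at inference time to reach arbitrary intermediate levels. All class and parameter names (FilterTransition, ContinuousConv, alpha) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FilterTransition(nn.Module):
    """Toy filter-transition module (illustrative, not the paper's code).

    A 1x1 convolution mixes the output channels of a filter bank; its
    weights start as an identity matrix so that, before any tuning, the
    transformed filters equal the original ones (identity initialization
    for stable learning).
    """

    def __init__(self, out_channels):
        super().__init__()
        self.mix = nn.Conv2d(out_channels, out_channels, kernel_size=1, bias=False)
        with torch.no_grad():
            self.mix.weight.copy_(
                torch.eye(out_channels).view(out_channels, out_channels, 1, 1))

    def forward(self, filters):
        # filters: (out_ch, in_ch, kh, kw); fold the last three dims into a
        # spatial map so the 1x1 conv mixes only the out_ch dimension.
        out_ch, in_ch, kh, kw = filters.shape
        x = filters.reshape(1, out_ch, in_ch * kh, kw)
        return self.mix(x).reshape(out_ch, in_ch, kh, kw)


class ContinuousConv(nn.Module):
    """Convolution whose effective filters are a blend of the level-A
    filters and their transformed (level-B) counterparts, set by alpha."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.base = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        self.ftn = FilterTransition(out_channels)

    def forward(self, x, alpha):
        w_a = self.base.weight             # filters tuned for level A
        w_b = self.ftn(w_a)                # filters adapted toward level B
        w = (1.0 - alpha) * w_a + alpha * w_b
        return F.conv2d(x, w, self.base.bias, padding=self.base.padding)


# Typical two-stage recipe (sketch): train `base` at level A with alpha = 0,
# then freeze `base` and train `ftn` at level B with alpha = 1; any alpha in
# [0, 1] then yields an intermediate level at test time.
layer = ContinuousConv(3, 16)
img = torch.randn(1, 3, 32, 32)
out_a = layer(img, alpha=0.0)    # reproduces the level-A behavior
out_mid = layer(img, alpha=0.5)  # an intermediate, continuous level
```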
Related papers
- Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable
Spiking Neural Network on FPGA [0.31498833540989407]
ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients.
This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware.
arXiv Detail & Related papers (2023-05-31T00:34:15Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to the magnitude scale on-the-fly.
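As a rough illustration of the soft-shrinkage idea in this summary (shrinking the currently unimportant weights by a small amount proportional to their magnitude rather than hard-zeroing them), here is a simplified Python sketch; the function name, the threshold rule, and the shrink rate are assumptions, not the ISS-P implementation.

```python
import torch


def iterative_soft_shrink(weight: torch.Tensor, prune_ratio: float,
                          shrink: float = 0.01) -> torch.Tensor:
    """Simplified soft-shrinkage step (not the official ISS-P code).

    The weights currently judged unimportant (smallest magnitudes) are
    shrunk by a small fraction of their own value instead of being set to
    zero, so they may recover in later iterations.
    """
    flat = weight.abs().flatten()
    k = int(prune_ratio * flat.numel())
    if k == 0:
        return weight
    threshold = flat.kthvalue(k).values        # magnitude cutoff
    mask = weight.abs() <= threshold           # "unimportant" weights
    return torch.where(mask, weight * (1.0 - shrink), weight)


# Applied on-the-fly after each optimizer step, e.g.:
# for p in model.parameters():
#     p.data = iterative_soft_shrink(p.data, prune_ratio=0.5)
```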
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- All at Once Network Quantization via Collaborative Knowledge Transfer [56.95849086170461]
We develop a novel collaborative knowledge transfer approach for efficiently training the all-at-once quantization network.
Specifically, we propose an adaptive selection strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision student.
To effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.
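A minimal sketch of the block-swapping idea described above, assuming the student and teacher networks have already been split into corresponding lists of blocks; the function and argument names are hypothetical and the swap rule is simplified.

```python
import random
import torch.nn as nn


def swapped_forward(student_blocks: nn.ModuleList,
                    teacher_blocks: nn.ModuleList,
                    x, swap_prob: float = 0.5):
    """Randomly route each stage through either the low-precision student
    block or the corresponding high-precision teacher block, so teacher
    features guide the student during training (simplified sketch)."""
    for s_block, t_block in zip(student_blocks, teacher_blocks):
        block = t_block if random.random() < swap_prob else s_block
        x = block(x)
    return x


# In practice the teacher blocks would be kept frozen (requires_grad=False)
# so that only the student parameters receive gradient updates.
```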
arXiv Detail & Related papers (2021-03-02T03:09:03Z)
- Online Exemplar Fine-Tuning for Image-to-Image Translation [32.556050882376965]
Existing techniques to solve exemplar-based image-to-image translation within deep convolutional neural networks (CNNs) generally require a training phase to optimize the network parameters.
We propose a novel framework, for the first time, to solve exemplar-based translation through an online optimization given an input image pair.
Our framework does not require the off-line training phase, which has been the main challenge of existing methods, but instead relies on pre-trained networks to enable online optimization.
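The following is a minimal sketch of the general "online optimization with frozen pre-trained networks" recipe this summary describes: the output image itself is the optimization variable, a frozen feature extractor supplies the losses, and no offline training phase is needed. The losses, hyper-parameters, and choice of feature extractor are illustrative assumptions, not the paper's actual objective.

```python
import torch


def online_translate(content, exemplar, feature_net, steps=200, lr=0.05):
    """Per-pair online optimization sketch (not the paper's method).

    `feature_net` is any frozen, pre-trained feature extractor, e.g.
    torchvision.models.vgg16(...).features.eval(); the losses below are
    deliberately simple stand-ins: match the content image's features
    while pulling channel statistics toward the exemplar's.
    """
    for p in feature_net.parameters():
        p.requires_grad_(False)

    out = content.clone().requires_grad_(True)   # the image is the variable
    opt = torch.optim.Adam([out], lr=lr)
    with torch.no_grad():
        f_content = feature_net(content)
        f_exemplar = feature_net(exemplar)

    for _ in range(steps):
        opt.zero_grad()
        f_out = feature_net(out)
        content_loss = (f_out - f_content).pow(2).mean()
        style_loss = (f_out.mean(dim=(2, 3)) -
                      f_exemplar.mean(dim=(2, 3))).pow(2).mean()
        (content_loss + style_loss).backward()
        opt.step()
    return out.detach()
```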
arXiv Detail & Related papers (2020-11-18T15:13:16Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aims to introduce the same structural capability into deep learning.
Most of the work in the context of feed-forward networks combining top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Sparse Coding Driven Deep Decision Tree Ensembles for Nuclear Segmentation in Digital Pathology Images [15.236873250912062]
We propose an easily trained yet powerful representation learning approach with performance highly competitive to deep neural networks in a digital pathology image segmentation task.
The method, called sparse coding driven deep decision tree ensembles that we abbreviate as ScD2TE, provides a new perspective on representation learning.
arXiv Detail & Related papers (2020-08-13T02:59:31Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight on conventional SISR algorithm, and proposes a substantially different approach relying on the iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- Evolving Normalization-Activation Layers [100.82879448303805]
We develop efficient rejection protocols to quickly filter out candidate layers that do not work well.
Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising structures.
Our experiments show that EvoNorms work well on image classification models including ResNets, MobileNets and EfficientNets.
arXiv Detail & Related papers (2020-04-06T19:52:48Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
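As a toy illustration of the "side branches from intermediate layers" idea, the sketch below attaches an auxiliary head to an early stage and adds a mimicking (KL) term that pushes the branch's predictions toward the deeper main head; the architecture and loss weighting are simplified assumptions, not the paper's exact branch design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NetWithSideBranch(nn.Module):
    """Toy network with one side branch forked from an intermediate layer."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.main_head = nn.Linear(32, num_classes)
        self.side_head = nn.Linear(16, num_classes)   # branch from stage1

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        main_logits = self.main_head(self.pool(f2).flatten(1))
        side_logits = self.side_head(self.pool(f1).flatten(1))
        return main_logits, side_logits


def mimicking_loss(main_logits, side_logits, target, mimic_weight=1.0):
    """Cross-entropy on both heads plus a KL term that pushes the shallow
    branch's predictions toward the deeper main branch's predictions."""
    ce = F.cross_entropy(main_logits, target) + F.cross_entropy(side_logits, target)
    mimic = F.kl_div(F.log_softmax(side_logits, dim=1),
                     F.softmax(main_logits.detach(), dim=1),
                     reduction="batchmean")
    return ce + mimic_weight * mimic
```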
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
- Exemplar Normalization for Learning Deep Representation [34.42934843556172]
This work investigates a novel dynamic learning-to-normalize (L2N) problem by proposing Exemplar Normalization (EN).
EN is able to learn different normalization methods for different convolutional layers and image samples of a deep network.
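A simplified sketch of the learning-to-normalize idea described above: each input sample gets its own softmax weights over a small set of candidate normalizers, so different layers and different images can favor different normalization statistics. The candidate set, the gating network, and all names here are illustrative assumptions rather than the official EN formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleExemplarNorm(nn.Module):
    """Per-sample mixture over batch / instance / layer normalization
    (simplified illustration, not the official EN layer)."""

    def __init__(self, num_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.inorm = nn.InstanceNorm2d(num_channels, affine=False)
        self.gate = nn.Linear(num_channels, 3)   # per-sample mixing weights
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):
        # Layer norm over (C, H, W), computed manually to stay shape-agnostic.
        ln = (x - x.mean(dim=(1, 2, 3), keepdim=True)) / (
            x.var(dim=(1, 2, 3), keepdim=True, unbiased=False) + 1e-5).sqrt()
        candidates = torch.stack([self.bn(x), self.inorm(x), ln], dim=1)  # (N,3,C,H,W)
        w = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)               # (N,3)
        out = (w.view(x.size(0), 3, 1, 1, 1) * candidates).sum(dim=1)
        return out * self.gamma + self.beta
```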
arXiv Detail & Related papers (2020-03-19T13:23:40Z)
- Regularized Adaptation for Stable and Efficient Continuous-Level Learning on Image Processing Networks [7.730087303035803]
We propose a novel continuous-level learning framework using a Filter Transition Network (FTN).
FTN is a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
Extensive results for various image processing tasks indicate that the performance of FTN is stable in terms of adaptation and interpolation.
arXiv Detail & Related papers (2020-03-11T07:46:57Z)