Regularized Adaptation for Stable and Efficient Continuous-Level
Learning on Image Processing Networks
- URL: http://arxiv.org/abs/2003.05145v2
- Date: Thu, 12 Mar 2020 03:52:48 GMT
- Title: Regularized Adaptation for Stable and Efficient Continuous-Level
Learning on Image Processing Networks
- Authors: Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon,
Sangyoun Lee
- Abstract summary: We propose a novel continuous-level learning framework using a Filter Transition Network (FTN)
FTN is a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
Extensive results for various image processing tasks indicate that the performance of FTN is stable in terms of both adaptation and interpolation.
- Score: 7.730087303035803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Convolutional Neural Network (CNN) based image processing, most of the
studies propose networks that are optimized for a single-level (or a
single-objective); thus, they underperform on other levels and must be
retrained for delivery of optimal performance. Using multiple models to cover
multiple levels involves very high computational costs. To solve these
problems, recent approaches train the networks on two different levels and
propose their own interpolation methods to enable arbitrary intermediate
levels. However, many of them fail to adapt to hard tasks or to interpolate
smoothly, while the others still require large memory and computational costs.
In this paper, we propose a novel continuous-level learning framework using a
Filter Transition Network (FTN), a non-linear module that easily adapts to new
levels and is regularized to prevent undesirable side effects. Additionally,
for stable learning of the FTN, we propose a new method to initialize
non-linear CNNs with identity mappings. Furthermore, the FTN is an extremely
lightweight module because it is data-independent, which means it is not
affected by the spatial resolution of the inputs. Extensive results for
various image
processing tasks indicate that the performance of FTN is stable in terms of
adaptation and interpolation, and comparable to that of the other heavy
frameworks.
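As a concrete illustration of the framework described above, here is a minimal PyTorch sketch (not the authors' released code): a hypothetical FilterTransition module transforms a convolution layer's weights rather than its feature maps, so its cost is independent of input resolution; its residual branch is zero-initialized so the module starts as an identity mapping, and a coefficient alpha blends the original and adapted filters to realize intermediate levels. The layer sizes and the specific transition architecture are illustrative assumptions.

```python
# Minimal sketch of a filter-transition style module (illustrative, not the
# paper's implementation): it adapts a conv layer's weights, never the image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FilterTransition(nn.Module):
    """Hypothetical non-linear module acting on convolution filters."""

    def __init__(self, in_ch: int):
        super().__init__()
        # 1x1 convs over the filter bank (out_ch acts as the batch dimension).
        self.t1 = nn.Conv2d(in_ch, in_ch, kernel_size=1)
        self.t2 = nn.Conv2d(in_ch, in_ch, kernel_size=1)
        # Identity-style initialization: the residual branch starts at zero,
        # so the initial output equals the unmodified (level-0) filters.
        nn.init.zeros_(self.t2.weight)
        nn.init.zeros_(self.t2.bias)

    def forward(self, weight: torch.Tensor, alpha: float) -> torch.Tensor:
        # weight: (out_ch, in_ch, k, k); data-independent, resolution-free.
        delta = self.t2(F.relu(self.t1(weight)))
        # alpha = 0 keeps the original level, alpha = 1 is the adapted level,
        # intermediate alpha gives a continuous transition between the two.
        return weight + alpha * delta


# Usage sketch: adapt one conv layer of an image-processing network.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)
ftn = FilterTransition(in_ch=64)
x = torch.randn(1, 64, 32, 32)
adapted_weight = ftn(conv.weight, alpha=0.5)
y = F.conv2d(x, adapted_weight, conv.bias, padding=1)
```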
Related papers
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
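As a rough illustration only (this is not the LeRF formulation), the sketch below predicts a normalized per-pixel kernel with a tiny network and uses it as a spatially varying resampling function; LeRF instead parameterizes continuous function shapes and evaluates them at arbitrary target coordinates, so the same-resolution filtering here is a simplification.

```python
# Hedged sketch: a small network predicts a spatially varying resampling
# kernel for every pixel (simplified to a normalized 3x3 kernel; LeRF itself
# predicts the shapes of continuous resampling functions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerPixelResampler(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.k = k
        # Tiny predictor: k*k kernel weights per input pixel (grayscale input).
        self.predict = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, k * k, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        kernels = torch.softmax(self.predict(x), dim=1)      # (b, k*k, h, w)
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (b, k*k, h*w)
        patches = patches.view(b, self.k * self.k, h, w)
        # Each output pixel is its own learned weighted average of neighbours.
        return (kernels * patches).sum(dim=1, keepdim=True)


out = PerPixelResampler()(torch.randn(2, 1, 32, 32))
```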
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
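The summary does not spell out the attention mechanism, so the following is only one plausible reading of "distance-based weighting", sketched for illustration: attention logits between image tokens are biased by their pairwise spatial distance, so nearby regions contribute more when filling a missing patch. The bias form and the gamma parameter are assumptions.

```python
# Hedged sketch of distance-weighted self-attention (an illustrative reading
# of the method name, not the paper's exact formulation).
import torch
import torch.nn.functional as F


def distance_weighted_attention(q, k, v, coords, gamma: float = 0.1):
    """q, k, v: (n_tokens, d); coords: (n_tokens, 2) token positions."""
    logits = q @ k.t() / q.shape[-1] ** 0.5          # standard attention logits
    dist = torch.cdist(coords, coords)               # pairwise spatial distance
    attn = F.softmax(logits - gamma * dist, dim=-1)  # distance-based bias
    return attn @ v


n, d = 64, 32
coords = torch.stack(torch.meshgrid(
    torch.arange(8.0), torch.arange(8.0), indexing="ij"), dim=-1).reshape(-1, 2)
out = distance_weighted_attention(torch.randn(n, d), torch.randn(n, d),
                                  torch.randn(n, d), coords)
```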
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs)
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
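A minimal sketch of the shared-backbone, multi-head ensemble structure described above (illustrative only; the input features, head count, and averaging rule are assumptions, and the real model targets offloading decisions):

```python
# Illustrative shared backbone with several prediction heads whose outputs
# are ensembled at inference time (not the MEMTL implementation).
import torch
import torch.nn as nn


class MultiHeadEnsemble(nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int, n_heads: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, out_dim) for _ in range(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)
        # Ensemble: average the heads' predictions.
        return torch.stack([h(z) for h in self.heads], dim=0).mean(dim=0)


model = MultiHeadEnsemble(in_dim=8, hidden=32, out_dim=4)
pred = model(torch.randn(16, 8))   # (16, 4)
```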
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL)
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut-layer and other parameters of an AI-model, and thus transform the training latency minimization problem (TLMP) into a continuous problem.
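The two quantitative points above can be illustrated with a small numeric sketch (made-up latency numbers, not from the paper): per-round latency under synchronized updates is the maximum of the clients' local latencies, and fitting a simple regression over measured (cut-layer, latency) pairs yields a continuous function that can be searched.

```python
# Illustrative sketch with hypothetical numbers, not the paper's data.
import numpy as np

# Hypothetical measurements: client-side training latency vs. cut layer.
cut_layers = np.array([1, 2, 3, 4, 5, 6], dtype=float)
client_a_lat = np.array([0.8, 1.1, 1.6, 2.4, 3.5, 5.0])
client_b_lat = np.array([1.2, 1.5, 1.9, 2.5, 3.2, 4.1])

# Fit a continuous latency model per client (quadratic, for illustration).
fit_a = np.poly1d(np.polyfit(cut_layers, client_a_lat, deg=2))
fit_b = np.poly1d(np.polyfit(cut_layers, client_b_lat, deg=2))

def round_latency(cut: float) -> float:
    # Synchronized global update: the round waits for the slowest client.
    return max(fit_a(cut), fit_b(cut))

# The relaxed problem can now be searched over a continuous cut position.
best_cut = min(np.linspace(1, 6, 101), key=round_latency)
print(best_cut, round_latency(best_cut))
```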
arXiv Detail & Related papers (2023-07-21T12:26:42Z)
- Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA [0.31498833540989407]
ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients.
This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware.
arXiv Detail & Related papers (2023-05-31T00:34:15Z)
- NL-CS Net: Deep Learning with Non-Local Prior for Image Compressive Sensing [7.600617428107161]
Deep learning has been applied to compressive sensing (CS) of images successfully in recent years.
This paper proposes a novel CS method using non-local prior which combines the interpretability of the traditional optimization methods with the speed of network-based methods, called NL-CS Net.
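As background for how such hybrid methods are usually built, here is a generic deep-unrolling sketch (not NL-CS Net itself): each stage alternates a data-consistency gradient step with a learned prior step, where a plain residual block stands in for the paper's non-local prior module.

```python
# Generic unrolled compressive-sensing solver (illustrative sketch only).
import torch
import torch.nn as nn


class UnrolledCS(nn.Module):
    def __init__(self, n: int, m: int, stages: int = 5):
        super().__init__()
        self.A = nn.Parameter(torch.randn(m, n) / m ** 0.5)    # sampling matrix
        self.step = nn.Parameter(torch.full((stages,), 0.1))   # learned step sizes
        self.prior = nn.ModuleList(
            nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))
            for _ in range(stages))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = y @ self.A                                   # crude init: A^T y
        for t, prox in enumerate(self.prior):
            grad = (x @ self.A.t() - y) @ self.A         # grad of ||Ax - y||^2 / 2
            x = x - self.step[t] * grad                  # data-consistency step
            x = x + prox(x)                              # learned prior (residual)
        return x


net = UnrolledCS(n=256, m=64)
x_hat = net(torch.randn(8, 64))   # recover 256-d signals from 64 measurements
```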
arXiv Detail & Related papers (2023-05-06T02:34:28Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
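A minimal sketch of the soft-shrinkage idea (illustrative; the percentage and shrink factor are assumptions): instead of hard-zeroing, the lowest-magnitude fraction of weights is scaled down slightly at every iteration, so "pruned" weights can still recover during training.

```python
# Illustrative soft shrinkage of low-magnitude weights (not the exact ISS-P schedule).
import torch

def iterative_soft_shrink(weight: torch.Tensor, percent: float = 0.1,
                          shrink: float = 0.9) -> torch.Tensor:
    """Scale the lowest-magnitude `percent` of entries by `shrink` (< 1)."""
    flat = weight.abs().flatten()
    k = max(1, int(percent * flat.numel()))
    threshold = flat.kthvalue(k).values          # magnitude of the k-th smallest
    mask = weight.abs() <= threshold
    return torch.where(mask, weight * shrink, weight)

w = torch.randn(64, 64, 3, 3)
for _ in range(100):                             # applied on-the-fly each step
    w = iterative_soft_shrink(w)
```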
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Smoother Network Tuning and Interpolation for Continuous-level Image Processing [7.730087303035803]
Filter Transition Network (FTN) is a structurally smoother module for continuous-level learning.
FTN generalizes well across various tasks and networks and causes fewer undesirable side effects.
For stable learning of FTN, we additionally propose a method to initialize non-linear neural network layers with identity mappings.
arXiv Detail & Related papers (2020-10-05T18:29:52Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not have a predictable recognition behavior with respect to the input resolution change.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
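A minimal sketch of the weight-generation idea (illustrative; the hypernetwork size and conditioning are assumptions, and the on-the-fly distillation part is omitted): a small meta learner maps the input scale to a full set of convolution filters, so one model can serve several input resolutions.

```python
# Illustrative meta learner that generates conv weights from the input scale.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleConditionedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        n_weights = out_ch * in_ch * k * k
        # Meta learner: maps a scalar scale factor to a full set of filters.
        self.meta = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                  nn.Linear(64, n_weights))

    def forward(self, x: torch.Tensor, scale: float) -> torch.Tensor:
        s = torch.tensor([[scale]], dtype=x.dtype, device=x.device)
        w = self.meta(s).view(self.shape)
        return F.conv2d(x, w, padding=self.shape[-1] // 2)


layer = ScaleConditionedConv(3, 16)
y_small = layer(torch.randn(1, 3, 112, 112), scale=0.5)
y_large = layer(torch.randn(1, 3, 224, 224), scale=1.0)
```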
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
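A minimal DSN-style sketch of the side-branch idea (illustrative; the branch placement and loss weighting are assumptions): auxiliary heads are forked from intermediate layers and trained jointly with the main head.

```python
# Illustrative backbone with an auxiliary side branch (not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchedNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.side_head = nn.Linear(16, n_classes)    # forked from block1
        self.main_head = nn.Linear(32, n_classes)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        side = self.side_head(f1.mean(dim=(2, 3)))   # global average pooling
        main = self.main_head(f2.mean(dim=(2, 3)))
        return main, side


net = BranchedNet()
main, side = net(torch.randn(4, 3, 32, 32))
labels = torch.randint(0, 10, (4,))
# Joint objective: main loss plus a weighted auxiliary side-branch loss.
loss = F.cross_entropy(main, labels) + 0.3 * F.cross_entropy(side, labels)
```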
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
- Exemplar Normalization for Learning Deep Representation [34.42934843556172]
This work investigates a novel dynamic learning-to-normalize (L2N) problem by proposing Exemplar Normalization (EN)
EN is able to learn different normalization methods for different convolutional layers and image samples of a deep network.
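A hedged sketch of the learning-to-normalize idea (illustrative, not the paper's EN layer): each sample receives its own softmax weights over a small set of normalizers, predicted from that sample's pooled features.

```python
# Illustrative per-sample mixture of normalizers (not the EN implementation).
import torch
import torch.nn as nn


class SampleAdaptiveNorm(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.norms = nn.ModuleList([
            nn.BatchNorm2d(channels),
            nn.InstanceNorm2d(channels, affine=True),
            nn.GroupNorm(1, channels),               # layer-norm-like
        ])
        self.gate = nn.Linear(channels, len(self.norms))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-sample importance ratios from global-average-pooled features.
        w = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=-1)  # (B, 3)
        outs = torch.stack([n(x) for n in self.norms], dim=1)     # (B, 3, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(dim=1)


layer = SampleAdaptiveNorm(16)
y = layer(torch.randn(4, 16, 8, 8))
```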
arXiv Detail & Related papers (2020-03-19T13:23:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.