Concurrently Extrapolating and Interpolating Networks for Continuous
Model Generation
- URL: http://arxiv.org/abs/2001.03847v1
- Date: Sun, 12 Jan 2020 04:44:44 GMT
- Title: Concurrently Extrapolating and Interpolating Networks for Continuous
Model Generation
- Authors: Lijun Zhao, Jinjing Zhang, Fan Zhang, Anhong Wang, Huihui Bai, Yao
Zhao
- Abstract summary: We propose a simple yet effective model generation strategy that forms a sequence of models and requires only a set of specific-effect label images.
We show that the proposed method produces a series of continuous models and outperforms several state-of-the-art methods for image smoothing.
- Score: 34.72650269503811
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Most deep image smoothing operators must be trained repeatedly whenever
different explicit structure-texture pairs are employed as label images for
each algorithm configured with different parameters. This kind of training
strategy takes a long time and consumes equipment resources at considerable
cost. To address this challenging issue, we generalize continuous network
interpolation into a more powerful model generation tool, and then propose a
simple yet effective model generation strategy to form a sequence of models
that requires only a set of specific-effect label images. To precisely learn
image smoothing operators, we present a double-state aggregation (DSA) module,
which can be easily inserted into most current network architectures. Based
on this module, we design a double-state aggregation neural network structure
with a local feature aggregation block and a nonlocal feature aggregation block
to obtain operators with large expressive capacity. Through the evaluation of
many objective and visual experimental results, we show that the proposed
method is capable of producing a series of continuous models and achieves
better performance than several state-of-the-art methods for image smoothing.
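The core idea above, generating a continuum of smoothing operators by blending the parameters of networks trained for specific effects, can be sketched as weight-space interpolation. The following is a minimal illustration under a simple linear blend, not the authors' implementation; the toy layer names and scalar weights are assumptions for brevity:

```python
def interpolate_weights(weights_a, weights_b, alpha):
    """Linearly blend two weight dictionaries; alpha in [0, 1].

    alpha = 0 reproduces model A, alpha = 1 reproduces model B,
    and intermediate values yield models with intermediate effects.
    """
    assert weights_a.keys() == weights_b.keys()
    return {name: (1.0 - alpha) * wa + alpha * weights_b[name]
            for name, wa in weights_a.items()}

def model_sequence(weights_a, weights_b, steps):
    """Yield (alpha, weights) pairs forming a sequence of models from A to B."""
    for i in range(steps):
        alpha = i / (steps - 1)
        yield alpha, interpolate_weights(weights_a, weights_b, alpha)

# Toy example: two "models" whose layers hold scalar weights.
A = {"conv1": 0.0, "conv2": 2.0}
B = {"conv1": 1.0, "conv2": 4.0}
for alpha, w in model_sequence(A, B, 3):
    print(alpha, w)
```

In practice the dictionaries would hold full tensors per layer (e.g. a network's state dict), and each interpolated model would be evaluated as a distinct smoothing operator; only one set of specific-effect label images is needed to anchor the endpoints.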
Related papers
- Detecting and Approximating Redundant Computational Blocks in Neural Networks [25.436785396394804]
Intra-network similarities present new opportunities for designing more efficient neural networks.
We introduce a simple metric, Block Redundancy, to detect redundant blocks, and propose Redundant Blocks Approximation (RBA) to approximate redundant blocks.
RBA reduces model parameters and time complexity while maintaining good performance.
arXiv Detail & Related papers (2024-10-07T11:35:24Z) - Few-Shot Medical Image Segmentation with High-Fidelity Prototypes [38.073371773707514]
We propose a novel Detail Self-refined Prototype Network (DSPNet) to construct high-fidelity prototypes representing the object foreground and the background more comprehensively.
To construct global semantics while maintaining the captured detail semantics, we learn the foreground prototypes by modelling the multi-modal structures with clustering and then fusing each in a channel-wise manner.
arXiv Detail & Related papers (2024-06-26T05:06:14Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - A Model-data-driven Network Embedding Multidimensional Features for
Tomographic SAR Imaging [5.489791364472879]
We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to enhance multi-dimensional features of the imaging scene effectively.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
arXiv Detail & Related papers (2022-11-28T02:01:43Z) - Lightweight Long-Range Generative Adversarial Networks [58.16484259508973]
We introduce a novel lightweight generative adversarial network that effectively captures long-range dependencies in the image generation process.
The proposed long-range module can highlight negative relations between pixels, working as a regularization to stabilize training.
Our novel long-range module introduces only a few additional parameters and is easily inserted into existing models to capture long-range dependencies.
arXiv Detail & Related papers (2022-09-08T13:05:01Z) - CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware
Training [112.96224800952724]
We propose cascaded modulation GAN (CM-GAN) to generate plausible image structures when dealing with large holes in complex images.
In each decoder block, global modulation is first applied to synthesize coarse, semantically aware structure; spatial modulation is then applied to the output of global modulation to further adjust the feature map in a spatially adaptive fashion.
In addition, we design an object-aware training scheme to prevent the network from hallucinating new objects inside holes, fulfilling the needs of object removal tasks in real-world scenarios.
arXiv Detail & Related papers (2022-03-22T16:13:27Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - MOGAN: Morphologic-structure-aware Generative Learning from a Single
Image [59.59698650663925]
Recently proposed generative models are trained on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z) - TSIT: A Simple and Versatile Framework for Image-to-Image Translation [103.92203013154403]
We introduce a simple and versatile framework for image-to-image translation.
We provide a carefully designed two-stream generative model with newly proposed feature transformations.
This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network.
A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.
arXiv Detail & Related papers (2020-07-23T15:34:06Z) - Contextual Encoder-Decoder Network for Visual Saliency Prediction [42.047816176307066]
We propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task.
We combine the resulting representations with global scene information for accurately predicting visual saliency.
Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone.
arXiv Detail & Related papers (2019-02-18T16:15:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.