An Energy-Efficient Edge Computing Paradigm for Convolution-based Image Upsampling
- URL: http://arxiv.org/abs/2107.07647v1
- Date: Thu, 15 Jul 2021 23:49:37 GMT
- Title: An Energy-Efficient Edge Computing Paradigm for Convolution-based Image Upsampling
- Authors: Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das
- Abstract summary: A novel energy-efficient edge computing paradigm is proposed for real-time deep learning-based image upsampling applications.
We transform learned convolution kernels into deconvolution kernels before deployment, so that inference runs as a functionally equivalent deconvolution.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel energy-efficient edge computing paradigm is proposed for real-time
deep learning-based image upsampling applications. State-of-the-art deep
learning solutions for image upsampling are currently trained using either
resize or sub-pixel convolution to learn kernels that generate high fidelity
images with minimal artifacts. However, performing inference with these learned
convolution kernels requires memory-intensive feature map transformations that
dominate time and energy costs in real-time applications. To alleviate this
pressure on memory bandwidth, we confine the use of resize or sub-pixel
convolution to training in the cloud by transforming learned convolution
kernels to deconvolution kernels before deploying them for inference as a
functionally equivalent deconvolution. These kernel transformations, intended
as a one-time cost when shifting from training to inference, enable a systems
designer to use each algorithm in their optimal context by preserving the image
fidelity learned when training in the cloud while minimizing data transfer
penalties during inference at the edge. We also explore existing variants of
deconvolution inference algorithms and introduce a novel variant for
consideration. We analyze and compare the inference properties of
convolution-based upsampling algorithms using a quantitative model of incurred
time and energy costs and show that using deconvolution for inference at the
edge improves both system latency and energy efficiency compared to its
sub-pixel and resize convolution counterparts.
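To make the kernel transformation concrete, the sketch below rearranges the weights of a sub-pixel convolution (a Conv2d producing C_out * r^2 channels followed by PixelShuffle(r)) into the weights of a functionally equivalent ConvTranspose2d with stride r, kernel size k*r, and padding (k//2)*r. This is a minimal PyTorch illustration written for this summary, assuming an odd kernel size with "same" padding and no bias; it is not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F


def subpixel_to_deconv(conv_weight: torch.Tensor, r: int) -> torch.Tensor:
    """Rearrange a sub-pixel convolution kernel (Conv2d + PixelShuffle(r))
    into an equivalent transposed-convolution (deconvolution) kernel.

    conv_weight: (C_out * r^2, C_in, k, k) weight of the convolution that
                 precedes PixelShuffle(r), used with "same" padding k // 2.
    returns:     (C_in, C_out, k*r, k*r) weight for a ConvTranspose2d with
                 stride r and padding (k // 2) * r.
    """
    out_r2, c_in, k, _ = conv_weight.shape
    c_out = out_r2 // (r * r)
    deconv_weight = conv_weight.new_zeros(c_in, c_out, k * r, k * r)
    for c in range(c_out):
        for i in range(r):            # row phase of the PixelShuffle output
            for j in range(r):        # column phase of the PixelShuffle output
                sub = conv_weight[c * r * r + i * r + j]        # (C_in, k, k)
                # ConvTranspose2d applies the kernel mirrored relative to
                # Conv2d, so flip the k x k sub-kernel before interleaving it
                # at phase (i, j) of the larger deconvolution kernel.
                deconv_weight[:, c, i::r, j::r] = torch.flip(sub, dims=(-2, -1))
    return deconv_weight


if __name__ == "__main__":
    r, k, c_in, c_out = 2, 3, 8, 3
    conv = nn.Conv2d(c_in, c_out * r * r, k, padding=k // 2, bias=False)
    x = torch.randn(1, c_in, 16, 16)
    reference = F.pixel_shuffle(conv(x), r)            # sub-pixel convolution

    deconv = nn.ConvTranspose2d(c_in, c_out, kernel_size=k * r, stride=r,
                                padding=(k // 2) * r, bias=False)
    with torch.no_grad():
        deconv.weight.copy_(subpixel_to_deconv(conv.weight, r))

    # The deconvolution reproduces the sub-pixel output without materializing
    # the intermediate (C_out * r^2)-channel feature map.
    print(torch.allclose(reference, deconv(x), atol=1e-5))   # expected: True

The abstract describes an analogous transformation for resize convolution; in either case the rearrangement is a one-time cost paid when moving a trained model from cloud training to edge inference.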
Related papers
- Image-GS: Content-Adaptive Image Representation via 2D Gaussians [55.15950594752051]
We propose Image-GS, a content-adaptive image representation.
Using anisotropic 2D Gaussians as the basis, Image-GS shows high memory efficiency, supports fast random access, and offers a natural level of detail stack.
The efficiency and fidelity of Image-GS are validated against several recent neural image representations and industry-standard texture compressors.
We hope this research offers insights for developing new applications that require adaptive quality and resource control, such as machine perception, asset streaming, and content generation.
arXiv Detail & Related papers (2024-07-02T00:45:21Z)
- Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
arXiv Detail & Related papers (2023-11-29T15:49:31Z)
- Self-Supervised Single-Image Deconvolution with Siamese Neural Networks [6.138671548064356]
Inverse problems in image reconstruction are fundamentally complicated by unknown noise properties.
Deep learning methods allow for flexible parametrization of the noise and learning its properties directly from the data.
We tackle this problem with Fast Fourier Transform convolutions that provide training speed-up in 3D deconvolution tasks.
arXiv Detail & Related papers (2023-08-18T09:51:11Z)
- DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior [0.22940141855172028]
We present a model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application.
We build our network based on the iterative Landweber deconvolution algorithm, which is integrated with trainable convolutional layers to enhance the recovered image structures and details.
arXiv Detail & Related papers (2022-09-30T11:15:03Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Wiener Guided DIP for Unsupervised Blind Image Deconvolution [10.440495513371747]
Blind deconvolution is an ill-posed problem arising in various fields ranging from microscopy to astronomy.
Deep learning architectures can serve as an image generation prior during unsupervised blind deconvolution optimization.
We propose to use Wiener-deconvolution to guide the image generator during optimization by providing it a sharpened version of the blurry image.
arXiv Detail & Related papers (2021-12-19T22:19:13Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Shared Prior Learning of Energy-Based Models for Image Reconstruction [69.72364451042922]
We propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data.
In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional.
In shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer.
arXiv Detail & Related papers (2020-11-12T17:56:05Z)
- Learning Context-Based Non-local Entropy Modeling for Image Compression [140.64888994506313]
In this paper, we propose a non-local operation for context modeling by employing the global similarity within the context.
The entropy model is further adopted as the rate loss in a joint rate-distortion optimization.
Considering that the width of the transforms is essential in training low distortion models, we introduce a U-Net block in the transforms to increase the width with manageable memory consumption and time complexity.
arXiv Detail & Related papers (2020-05-10T13:28:18Z)
- Adaptive Fractional Dilated Convolution Network for Image Aesthetics Assessment [33.945579916184364]
An adaptive fractional dilated convolution (AFDC) is developed to tackle this issue at the convolutional kernel level.
We provide a concise formulation for mini-batch training and utilize a grouping strategy to reduce computational overhead.
Our experimental results demonstrate that our proposed method achieves state-of-the-art performance on image aesthetics assessment over the AVA dataset.
arXiv Detail & Related papers (2020-04-06T21:56:29Z)