Local Texture Estimator for Implicit Representation Function
- URL: http://arxiv.org/abs/2111.08918v1
- Date: Wed, 17 Nov 2021 06:01:17 GMT
- Title: Local Texture Estimator for Implicit Representation Function
- Authors: Jaewon Lee and Kyong Hwan Jin
- Abstract summary: Local Texture Estimator (LTE) is a dominant-frequency estimator for natural images.
LTE is capable of characterizing image textures in 2D Fourier space.
We show that an LTE-based neural function outperforms existing deep SR methods in the arbitrary-scale setting.
- Score: 10.165529175855712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works with an implicit neural function shed light on representing
images in arbitrary resolution. However, a standalone multi-layer perceptron
(MLP) shows limited performance in learning high-frequency components. In this
paper, we propose a Local Texture Estimator (LTE), a dominant-frequency
estimator for natural images, enabling an implicit function to capture fine
details while reconstructing images in a continuous manner. When jointly
trained with a deep super-resolution (SR) architecture, LTE is capable of
characterizing image textures in 2D Fourier space. We show that an LTE-based
neural function outperforms existing deep SR methods in the arbitrary-scale
setting for all datasets and all scale factors. Furthermore, we demonstrate that
our implementation has the shortest running time among previous works.
Source code will be released.
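The abstract outlines the mechanism only at a high level: an estimator conditioned on encoder features predicts amplitudes, frequencies, and phases; the query coordinate is projected onto those frequencies; and an MLP decodes the resulting sinusoidal features into RGB. Below is a minimal, non-authoritative sketch of that idea in PyTorch; module names, channel sizes, and input conventions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LTESketch(nn.Module):
    """Sketch of an LTE-style decoder (all sizes/names are assumptions)."""

    def __init__(self, feat_dim=64, n_freq=128, hidden=256):
        super().__init__()
        self.amplitude = nn.Conv2d(feat_dim, 2 * n_freq, 1)  # sin/cos gains
        self.frequency = nn.Conv2d(feat_dim, 2 * n_freq, 1)  # K 2-D frequencies
        self.phase = nn.Linear(2, n_freq)                    # phase from cell size
        self.mlp = nn.Sequential(nn.Linear(2 * n_freq, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))       # Fourier feats -> RGB

    def forward(self, feat, rel_coord, cell):
        # feat: (B, C, H, W); rel_coord, cell: (B, 2, H, W)
        b, _, h, w = feat.shape
        amp = self.amplitude(feat)                           # (B, 2K, H, W)
        freq = self.frequency(feat).view(b, -1, 2, h, w)     # (B, K, 2, H, W)
        # Project the local offset onto the estimated dominant frequencies.
        proj = (freq * rel_coord.unsqueeze(1)).sum(dim=2)    # (B, K, H, W)
        proj = proj + self.phase(cell.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        basis = torch.cat((torch.sin(torch.pi * proj),
                           torch.cos(torch.pi * proj)), dim=1)
        x = (amp * basis).permute(0, 2, 3, 1)                # (B, H, W, 2K)
        return self.mlp(x).permute(0, 3, 1, 2)               # (B, 3, H, W)

dec = LTESketch()
feat = torch.randn(1, 64, 48, 48)           # hypothetical encoder output
rel = torch.rand(1, 2, 48, 48) * 2 - 1      # offsets of queries from cell centres
cell = torch.full((1, 2, 48, 48), 2 / 96)   # target cell size for x2 upscaling
print(dec(feat, rel, cell).shape)           # torch.Size([1, 3, 48, 48])
```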
Related papers
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method, Learning Resampling Function (LeRF), which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
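As a rough illustration of the spatially varying resampling described above, the sketch below weights a neighbourhood of source pixels with a per-query anisotropic Gaussian; in a LeRF-style model the shape parameters would be predicted by a network, whereas here they are plain inputs. The kernel form and radius are our assumptions.

```python
import torch

def steered_gaussian_resample(img, coords, log_sigma, radius=2):
    """Resample img at continuous coords with one Gaussian shape per query.

    img:       (1, C, H, W) source image
    coords:    (N, 2) target positions in pixel units (x, y)
    log_sigma: (N, 2) per-query log std-devs along x/y (network-predicted
               in a LeRF-style model; supplied directly in this sketch)
    """
    C, H, W = img.shape[1:]
    dy, dx = torch.meshgrid(torch.arange(-radius, radius + 1),
                            torch.arange(-radius, radius + 1), indexing="ij")
    dx, dy = dx.flatten()[None], dy.flatten()[None]      # (1, T) tap offsets
    base = coords.floor()                                # integer anchors
    ox = dx - (coords - base)[:, :1]                     # taps relative to query
    oy = dy - (coords - base)[:, 1:]
    sx, sy = log_sigma.exp().split(1, dim=1)
    w = torch.exp(-0.5 * ((ox / sx) ** 2 + (oy / sy) ** 2))
    w = w / w.sum(dim=1, keepdim=True)                   # normalized taps
    xs = (base[:, :1] + dx).clamp(0, W - 1).long()       # clamp at borders
    ys = (base[:, 1:] + dy).clamp(0, H - 1).long()
    taps = img[0][:, ys, xs]                             # (C, N, T)
    return (taps * w[None]).sum(dim=-1)                  # (C, N)

img = torch.rand(1, 3, 64, 64)
out = steered_gaussian_resample(img, torch.rand(10, 2) * 63, torch.zeros(10, 2))
print(out.shape)  # torch.Size([3, 10])
```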
- Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection [13.840950434728533]
State-of-the-art Synthetic Image Detection (SID) research has led to strong evidence on the advantages of feature extraction from foundation models.
We leverage the image representations extracted by intermediate Transformer blocks of CLIP's image-encoder via a lightweight network.
Our method is compared against the state-of-the-art by evaluating it on 20 test datasets and exhibits an average +10.6% absolute performance improvement.
arXiv Detail & Related papers (2024-02-29T12:18:43Z)
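A hedged sketch of the intermediate-representation idea above: forward hooks collect the CLS token from every Transformer block, and a lightweight learned head mixes them. A randomly initialized torchvision ViT stands in for CLIP's image encoder here (load pretrained weights in practice), and the mixing head is a generic assumption rather than the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class IntermediateFeatureHead(nn.Module):
    """Classify from CLS tokens of all encoder blocks of a frozen ViT."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = vit_b_16(weights=None).eval()    # stand-in for CLIP
        for p in self.backbone.parameters():
            p.requires_grad_(False)                      # frozen foundation model
        self._feats = []
        for block in self.backbone.encoder.layers:       # hook every block
            block.register_forward_hook(
                lambda _m, _i, out: self._feats.append(out[:, 0]))  # CLS token
        dim = 768                                        # vit_b_16 hidden size
        self.mix = nn.Parameter(torch.zeros(len(self.backbone.encoder.layers)))
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_classes))

    def forward(self, x):
        self._feats.clear()
        with torch.no_grad():
            self.backbone(x)                             # hooks fill self._feats
        stack = torch.stack(self._feats)                 # (L, B, D)
        pooled = (self.mix.softmax(0)[:, None, None] * stack).sum(0)
        return self.head(pooled)

logits = IntermediateFeatureHead()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```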
- Dynamic Implicit Image Function for Efficient Arbitrary-Scale Image Representation [24.429100808481394]
We propose Dynamic Implicit Image Function (DIIF), a fast and efficient method for representing images at arbitrary resolution.
We propose a coordinate grouping and slicing strategy, which enables the neural network to perform decoding from coordinate slices to pixel value slices.
With dynamic coordinate slicing, DIIF significantly reduces the computational cost when encountering arbitrary-scale SR.
arXiv Detail & Related papers (2023-06-21T15:04:34Z)
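The coordinate slicing above can be shown with a generic chunked-decoding helper: queries are split into slices so the decoder never materializes all coordinates at once, which is the memory trick DIIF's dynamic coordinate slicing builds on. The helper and the toy decoder are assumptions for illustration; DIIF's actual dynamic MLP is more involved.

```python
import torch

def decode_in_slices(decoder, feats, coords, slice_size=65536):
    """Decode pixel values slice-by-slice instead of all queries at once.

    coords: (B, N, 2) query coordinates; returns (B, N, 3) pixel values.
    """
    out = [decoder(feats, chunk) for chunk in coords.split(slice_size, dim=1)]
    return torch.cat(out, dim=1)

# Toy stand-in decoder (a real one would also look up local features).
W = torch.randn(2, 3)
decoder = lambda feats, c: torch.tanh(c @ W)
rgb = decode_in_slices(decoder, None, torch.rand(1, 300_000, 2))
print(rgb.shape)  # torch.Size([1, 300000, 3])
```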
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF encodes scale information with a multiscale representation that models rays as conical frustums.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
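A speculative sketch of a multiscale voxel-grid lookup in the spirit of the entry above: one feature grid per mip level, trilinear interpolation within a level, and linear blending between the two nearest levels. Resolutions, channel counts, and the fractional-level interface are our assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

class MipVoxelGridSketch(torch.nn.Module):
    """Pyramid of learnable voxel grids queried at a fractional mip level."""

    def __init__(self, channels=8, base_res=64, n_levels=4):
        super().__init__()
        self.grids = torch.nn.ParameterList(
            torch.nn.Parameter(0.01 * torch.randn(1, channels, r, r, r))
            for r in (base_res // 2 ** l for l in range(n_levels)))

    def sample(self, pts, level):
        """pts: (N, 3) in [-1, 1]; level: (N,) fractional mip level."""
        last = len(self.grids) - 1
        lo = level.floor().long().clamp(0, last)
        t = (level - lo.float()).clamp(0, 1).unsqueeze(1)
        grid = pts.view(1, -1, 1, 1, 3)              # grid_sample query layout
        feats = torch.stack([                        # trilinear per level
            F.grid_sample(g, grid, mode="bilinear", align_corners=True)
             .view(g.shape[1], -1).t()               # -> (N, C)
            for g in self.grids])                    # (L, N, C); simple, not fast
        idx = torch.arange(pts.shape[0])
        return (1 - t) * feats[lo, idx] + t * feats[(lo + 1).clamp(max=last), idx]

g = MipVoxelGridSketch()
out = g.sample(torch.rand(100, 3) * 2 - 1, torch.rand(100) * 3)
print(out.shape)  # torch.Size([100, 8])
```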
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- Combining Attention Module and Pixel Shuffle for License Plate Super-Resolution [3.8831062015253055]
This work focuses on license plate (LP) reconstruction in low-resolution and low-quality images.
We present a Single-Image Super-Resolution (SISR) approach that extends the attention/transformer module concept.
In our experiments, the proposed method outperformed the baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-10-30T13:05:07Z)
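The combination named above can be shown generically: a channel-attention block reweights features before a convolution + PixelShuffle upsampler. The squeeze-and-excitation attention here is a simple stand-in, not the paper's exact attention/transformer module.

```python
import torch
import torch.nn as nn

class AttentionPixelShuffleSR(nn.Module):
    """Generic channel attention followed by pixel-shuffle upsampling."""

    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        # Squeeze-and-excitation style channel attention (stand-in).
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch // 8, 1), nn.ReLU(),
                                  nn.Conv2d(ch // 8, ch, 1), nn.Sigmoid())
        # PixelShuffle rearranges (C*s^2, H, W) -> (C, H*s, W*s).
        self.up = nn.Sequential(nn.Conv2d(ch, 3 * scale ** 2, 3, padding=1),
                                nn.PixelShuffle(scale))

    def forward(self, x):
        f = self.body(x)
        return self.up(f * self.attn(f))

sr = AttentionPixelShuffleSR(scale=2)
print(sr(torch.randn(1, 3, 24, 48)).shape)  # torch.Size([1, 3, 48, 96])
```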
- Adaptive Local Implicit Image Function for Arbitrary-scale Super-resolution [61.95533972380704]
Local implicit image function (LIIF) represents an image as a continuous function where pixel values are predicted by using the corresponding coordinates as inputs.
LIIF can be adopted for arbitrary-scale image super-resolution tasks, resulting in a single effective and efficient model for various up-scaling factors.
We propose a novel adaptive local implicit image function (A-LIIF) to alleviate the limitations of a single shared implicit function.
arXiv Detail & Related papers (2022-08-07T11:23:23Z)
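For context, the base LIIF read-out that A-LIIF extends looks roughly like this: each continuous query is decoded from its nearest latent code plus the relative offset to that code. Nearest-neighbour lookup (instead of LIIF's local ensemble), and the MLP sizes, are simplifying assumptions.

```python
import torch
import torch.nn as nn

def liif_query(feat, coords, mlp):
    """Decode RGB at continuous coords from a feature map.

    feat: (1, C, H, W) latent codes; coords: (N, 2) in [-1, 1] as (x, y).
    """
    _, C, H, W = feat.shape
    ix = ((coords[:, 0] + 1) / 2 * (W - 1)).round().long().clamp(0, W - 1)
    iy = ((coords[:, 1] + 1) / 2 * (H - 1)).round().long().clamp(0, H - 1)
    z = feat[0][:, iy, ix].t()                       # (N, C) nearest codes
    cx = ix.float() / (W - 1) * 2 - 1                # cell-centre coordinates
    cy = iy.float() / (H - 1) * 2 - 1
    rel = coords - torch.stack((cx, cy), dim=1)      # offset from the code
    return mlp(torch.cat((z, rel), dim=1))           # (N, 3)

mlp = nn.Sequential(nn.Linear(64 + 2, 256), nn.ReLU(), nn.Linear(256, 3))
rgb = liif_query(torch.randn(1, 64, 32, 32), torch.rand(100, 2) * 2 - 1, mlp)
print(rgb.shape)  # torch.Size([100, 3])
```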
- UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution [74.82282301089994]
In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions.
We show that spatial encoding is indeed a missing key towards the next-stage high-accuracy implicit image function.
Our UltraSR sets new state-of-the-art performance on the DIV2K benchmark under all super-resolution scales.
arXiv Detail & Related papers (2021-03-23T17:36:42Z)
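Spatial encoding in the sense above can be illustrated with a generic periodic (NeRF-style) coordinate encoding; UltraSR's exact encoding, and where it is injected into the network, differ in detail.

```python
import torch

def spatial_encoding(coords, n_freqs=8):
    """Append sin/cos features at octave frequencies to raw coordinates.

    coords: (N, 2) -> (N, 2 + 4 * n_freqs)
    """
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi        # octave frequencies
    ang = coords.unsqueeze(-1) * freqs                     # (N, 2, F)
    enc = torch.cat((torch.sin(ang), torch.cos(ang)), -1)  # (N, 2, 2F)
    return torch.cat((coords, enc.flatten(1)), dim=1)

x = spatial_encoding(torch.rand(5, 2) * 2 - 1)
print(x.shape)  # torch.Size([5, 34])
```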
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
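A toy rendition of the zero-centric residual idea above: the predicted residual has its spatial mean removed, so it carries only high-frequency detail on top of an interpolated base. PZRes-Net itself progressively fuses a low-resolution hyperspectral cube with a high-resolution image; the single-input model below is an assumption for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroCentricResidualSR(nn.Module):
    """Predict a zero-mean (zero-centric) residual over a bicubic base."""

    def __init__(self, bands=31, ch=64, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(nn.Conv2d(bands, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, bands, 3, padding=1))

    def forward(self, lr):
        base = F.interpolate(lr, scale_factor=self.scale,
                             mode="bicubic", align_corners=False)
        res = self.net(base)
        res = res - res.mean(dim=(2, 3), keepdim=True)  # zero-centric residual
        return base + res

sr = ZeroCentricResidualSR()
print(sr(torch.randn(1, 31, 16, 16)).shape)  # torch.Size([1, 31, 64, 64])
```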