Entropy Transformer Networks: A Learning Approach via Tangent Bundle Data Manifold
- URL: http://arxiv.org/abs/2307.12517v1
- Date: Mon, 24 Jul 2023 04:21:51 GMT
- Title: Entropy Transformer Networks: A Learning Approach via Tangent Bundle Data Manifold
- Authors: Pourya Shamsolmoali, Masoumeh Zareapoor
- Abstract summary: This paper focuses on an accurate and fast approach for image transformation employed in the design of CNN architectures.
A novel Entropy STN (ESTN) is proposed that interpolates on the data manifold distributions.
Experiments on challenging benchmarks show that the proposed ESTN can improve predictive accuracy over a range of computer vision tasks.
- Score: 8.893886200299228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on an accurate and fast interpolation approach for image transformation employed in the design of CNN architectures. Standard Spatial Transformer Networks (STNs) use bilinear or linear interpolation as their sampling scheme, with unrealistic assumptions about the underlying data distributions, which leads to poor performance under scale variations. Moreover, STNs do not preserve the norm of gradients during propagation because they depend on sparse neighboring pixels. To address this problem, a novel Entropy STN (ESTN) is proposed that interpolates on the data manifold distributions. In particular, random samples are generated for each pixel in the tangent space of the data manifold, and a linear approximation of their intensity values is constructed with an entropy regularizer to compute the transformer parameters. A simple yet effective technique is also proposed to normalize the non-zero values of the convolution operation, fine-tuning the layers to regularize the gradients' norm during training. Experiments on challenging benchmarks show that the proposed ESTN improves predictive accuracy over a range of computer vision tasks, including image reconstruction and classification, while reducing the computational cost.
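As a rough illustration of entropy-regularized interpolation, the sketch below replaces bilinear sampling with a weighted average over randomly drawn neighbors, where a softmax temperature plays the role of the entropy regularizer. This is a hypothetical simplification, not the paper's algorithm: the function name, the rounded random offsets standing in for true tangent-space samples, and the `temperature` parameter are all assumptions made here for illustration.

```python
import numpy as np

def entropy_weighted_sample(image, y, x, n_samples=16, radius=1.0,
                            temperature=0.5, rng=None):
    """Illustrative entropy-regularized interpolation at a fractional
    location (y, x): random neighbors stand in for tangent-space
    samples, and a softmax temperature controls the entropy of the
    weight distribution (high temperature -> near-uniform weights,
    low -> close to nearest-neighbor sampling)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Draw random sample points around the query location.
    offsets = rng.uniform(-radius, radius, size=(n_samples, 2))
    ys = np.clip(np.round(y + offsets[:, 0]).astype(int), 0, h - 1)
    xs = np.clip(np.round(x + offsets[:, 1]).astype(int), 0, w - 1)
    values = image[ys, xs].astype(float)
    # Soft weights: closer samples receive larger weight.
    d2 = (ys - y) ** 2 + (xs - x) ** 2
    logits = -d2 / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return float(np.dot(weights, values))

# Smooth ramp image: img[i, j] = i + j, so neighboring intensities vary linearly.
img = np.add.outer(np.arange(8.0), np.arange(8.0))
val = entropy_weighted_sample(img, 3.5, 3.5, rng=np.random.default_rng(0))
print(val)  # a weighted average of nearby pixel values
```

On a linear ramp the result stays close to what bilinear interpolation would give; the difference from standard STN sampling is that the weights come from a smooth distribution over many samples rather than from exactly four neighbors.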
Related papers
- Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors [16.04850782310842]
We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms.
A normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers.
arXiv Detail & Related papers (2024-06-06T14:01:28Z)
- Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation [0.0]
A neural network architecture is presented to solve high-dimensional parameter-dependent partial differential equations (pPDEs)
It is constructed to map parameters of the model data to corresponding finite element solutions.
It outputs a coarse grid solution and a series of corrections as produced in an adaptive finite element method (AFEM)
arXiv Detail & Related papers (2024-03-19T11:34:40Z)
- ASWT-SGNN: Adaptive Spectral Wavelet Transform-based Self-Supervised Graph Neural Network [20.924559944655392]
This paper proposes an Adaptive Spectral Wavelet Transform-based Self-Supervised Graph Neural Network (ASWT-SGNN)
ASWT-SGNN accurately approximates the filter function in high-density spectral regions, avoiding costly eigen-decomposition.
It achieves comparable performance to state-of-the-art models in node classification tasks.
arXiv Detail & Related papers (2023-12-10T03:07:42Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
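The summary above builds on multidimensional scaling (MDS) to obtain distance-preserving initial representations. The following sketch shows textbook classical MDS, not the paper's modified variant: because the embedding is derived only from the pairwise distance matrix, rigid transformations of the input point cloud leave the embedded geometry unchanged, which is the invariance property being exploited.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed n points so that
    their pairwise Euclidean distances approximate the given n x n
    distance matrix D. (Generic textbook MDS, shown only to
    illustrate distance-preserving representations.)"""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:dim]        # top eigenpairs
    scale = np.sqrt(np.clip(evals[idx], 0, None))
    return evecs[:, idx] * scale               # n x dim coordinates

# Rigidly transform a point cloud; the distance matrix, and hence the
# MDS embedding's geometry, is unchanged.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Q = P @ R.T + np.array([3.0, -1.0])            # rotated + translated copy
dist = lambda X: np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X1, X2 = classical_mds(dist(P)), classical_mds(dist(Q))
print(np.allclose(dist(X1), dist(X2)))         # True: same pairwise geometry
```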
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Patch Based Transformation for Minimum Variance Beamformer Image Approximation Using Delay and Sum Pipeline [0.0]
In this work, a patch level U-Net based neural network is proposed, where the delay compensated radio frequency (RF) patch for a fixed region in space is transformed through a U-Net architecture.
The proposed approach treats the non-linear transformation of the RF data space that can account for the data driven weight adaptation done by the MVDR approach in the parameters of the network.
arXiv Detail & Related papers (2021-10-19T19:36:59Z)
- Understanding when spatial transformer networks do not support invariance, and what to do about it [0.0]
Spatial transformer networks (STNs) were designed to enable convolutional neural networks (CNNs) to learn invariance to image transformations.
We show that STNs do not have the ability to align the feature maps of a transformed image with those of its original.
We investigate alternative STN architectures that make use of complex features.
arXiv Detail & Related papers (2020-04-24T12:20:35Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
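The compute-sparsely / reconstruct-densely idea in the last entry can be sketched in a toy form: evaluate an expensive per-pixel feature only on a subset of locations, then fill in the dense map by interpolation. The fixed-stride grid and nearest-neighbor fill below are simplifying assumptions; the paper itself uses stochastic sampling and a learned reconstruction.

```python
import numpy as np

def sparse_then_dense(image, stride=4, feature=np.abs):
    """Toy sparse feature computation: evaluate `feature` only on a
    strided grid, then reconstruct a dense map by nearest-neighbor
    filling. Only 1/stride**2 of the locations actually run the
    (notionally expensive) feature function."""
    h, w = image.shape
    sparse = feature(image[::stride, ::stride])    # features at few points
    # Nearest-neighbor reconstruction back to full resolution.
    dense = np.repeat(np.repeat(sparse, stride, axis=0), stride, axis=1)
    return dense[:h, :w]

img = np.random.default_rng(1).normal(size=(16, 16))
out = sparse_then_dense(img)
print(out.shape)  # (16, 16)
```

The trade-off is the usual one: the smoother the true feature map, the less accuracy is lost by computing it sparsely.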
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.