Addressing a fundamental limitation in deep vision models: lack of spatial attention
- URL: http://arxiv.org/abs/2407.01782v4
- Date: Fri, 22 Nov 2024 05:56:30 GMT
- Title: Addressing a fundamental limitation in deep vision models: lack of spatial attention
- Authors: Ali Borji
- Abstract summary: The aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models.
Unlike human vision, which efficiently selects only the essential visual areas for further processing, deep vision models process the entire image.
We propose two solutions that could pave the way for the next generation of more efficient vision models.
- Score: 43.37813040320147
- Abstract: The primary aim of this manuscript is to underscore a significant limitation in current deep learning models, particularly vision models. Unlike human vision, which efficiently selects only the essential visual areas for further processing, leading to high speed and low energy consumption, deep vision models process the entire image. In this work, we examine this issue from a broader perspective and propose two solutions that could pave the way for the next generation of more efficient vision models. In the first solution, convolution and pooling operations are selectively applied to altered regions, with a change map sent to subsequent layers. This map indicates which computations need to be repeated. In the second solution, only the modified regions are processed by a semantic segmentation model, and the resulting segments are inserted into the corresponding areas of the previous output map. The code is available at https://github.com/aliborji/spatial_attention.
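The first solution described above (selectively re-running convolution on altered regions and forwarding a change map) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation; the threshold, the naive single-channel convolution, and the function names are assumptions for the sketch.

```python
import numpy as np

def conv2d(x, k):
    """Naive valid-mode 2D convolution (cross-correlation) for reference."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def selective_conv(curr, prev, cached_out, k, thresh=1e-6):
    """Recompute the convolution only where the input changed.

    Compares the current input against the previous one, recomputes
    output positions whose receptive field overlaps a changed pixel,
    and reuses the cached output everywhere else. Returns the updated
    output plus an output-level change map for the next layer.
    """
    kh, kw = k.shape
    change = np.abs(curr - prev) > thresh            # input-level change map
    out = cached_out.copy()
    out_change = np.zeros_like(cached_out, dtype=bool)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            if change[i:i + kh, j:j + kw].any():     # receptive field touched?
                out[i, j] = np.sum(curr[i:i + kh, j:j + kw] * k)
                out_change[i, j] = True
    return out, out_change
```

When only a small region of the input changes, only the output positions whose receptive fields overlap that region are recomputed, which is where the speed and energy savings come from.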
Related papers
- Vision Eagle Attention: A New Lens for Advancing Image Classification [0.8158530638728501]
I introduce Vision Eagle Attention, a novel attention mechanism that enhances visual feature extraction using convolutional spatial attention.
The model applies convolution to capture local spatial features and generates an attention map that selectively emphasizes the most informative regions of the image.
I have integrated Vision Eagle Attention into a lightweight ResNet-18 architecture, demonstrating that this combination results in an efficient and powerful model.
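The convolutional spatial attention described in this summary can be sketched as a convolution that reduces the feature channels to a single logit per location, followed by a sigmoid gate that reweights every channel. This is a generic illustration under assumed shapes, not the actual Vision Eagle Attention module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(feat, w):
    """Conv-based spatial attention: one 2D map reweights all channels.

    feat: (C, H, W) feature tensor.
    w:    (C, kh, kw) conv weights mapping all channels to a single
          attention logit per spatial location.
    Uses 'same' padding so the attention map matches the feature size.
    """
    C, H, W = feat.shape
    _, kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feat, ((0, 0), (ph, ph), (pw, pw)))
    logits = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            logits[i, j] = np.sum(padded[:, i:i + kh, j:j + kw] * w)
    attn = sigmoid(logits)                  # values in (0, 1)
    return feat * attn[None, :, :], attn    # broadcast over channels
```

Locations with high logits pass features through nearly unchanged, while low-logit locations are suppressed, emphasizing the most informative regions of the image.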
arXiv Detail & Related papers (2024-11-15T20:21:59Z)
- SIGMA: Sinkhorn-Guided Masked Video Modeling [69.31715194419091]
Sinkhorn-Guided Masked Video Modeling (SIGMA) is a novel video pretraining method.
We distribute features of space-time tubes evenly across a limited number of learnable clusters.
Experimental results on ten datasets validate the effectiveness of SIGMA in learning more performant, temporally-aware, and robust video representations.
arXiv Detail & Related papers (2024-07-22T08:04:09Z)
- Learning 1D Causal Visual Representation with De-focus Attention Networks [108.72931590504406]
This paper explores the feasibility of representing images using 1D causal modeling.
We propose De-focus Attention Networks, which employ learnable bandpass filters to create varied attention patterns.
arXiv Detail & Related papers (2024-06-06T17:59:56Z)
- Rethinking Range View Representation for LiDAR Segmentation [66.73116059734788]
"Many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments to effective learning from range view projections.
We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing.
We show that, for the first time, a range view method is able to surpass its point-, voxel-, and multi-view-fusion counterparts on competitive LiDAR semantic and panoptic segmentation benchmarks.
arXiv Detail & Related papers (2023-03-09T16:13:27Z)
- Estimating Appearance Models for Image Segmentation via Tensor Factorization [0.0]
We propose a new approach to directly estimate appearance models from the image without prior information on the underlying segmentation.
Our method uses local high-order color statistics from the image as input to a tensor factorization-based estimator for latent variable models.
This approach is able to estimate models in multi-region images and automatically output the region proportions without prior user interaction.
arXiv Detail & Related papers (2022-08-16T17:21:00Z)
- Unsupervised Deep Learning Meets Chan-Vese Model [77.24463525356566]
We propose an unsupervised image segmentation approach that integrates the Chan-Vese (CV) model with deep neural networks.
Our basic idea is to apply a deep neural network that maps the image into a latent space to alleviate the violation of the piecewise constant assumption in image space.
arXiv Detail & Related papers (2022-04-14T13:23:57Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims to explain model predictions by estimating the model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations [31.72311989250957]
Saliency maps are widely used local explanation tools.
We present iGOS++, a framework to generate saliency maps optimized for altering the output of the black-box system.
arXiv Detail & Related papers (2020-12-31T18:04:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.