LOOPE: Learnable Optimal Patch Order in Positional Embeddings for Vision Transformers
- URL: http://arxiv.org/abs/2504.14386v1
- Date: Sat, 19 Apr 2025 19:20:47 GMT
- Title: LOOPE: Learnable Optimal Patch Order in Positional Embeddings for Vision Transformers
- Authors: Md Abtahi Majeed Chowdhury, Md Rifat Ur Rahman, Akil Ahmad Taki
- Abstract summary: Positional embeddings play a crucial role in Vision Transformers (ViTs) by providing spatial information otherwise lost due to the permutation-invariant nature of self-attention. Existing methods have mostly overlooked or never explored the impact of patch ordering in positional embeddings. We propose LOOPE, a learnable patch-ordering method that optimizes spatial representation for a given set of frequencies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positional embeddings (PE) play a crucial role in Vision Transformers (ViTs) by providing spatial information otherwise lost due to the permutation-invariant nature of self-attention. While absolute positional embeddings (APE) have shown theoretical advantages over relative positional embeddings (RPE), particularly due to the ability of sinusoidal functions to preserve spatial inductive biases like monotonicity and shift invariance, a fundamental challenge arises when mapping a 2D grid to a 1D sequence. Existing methods have mostly overlooked or never explored the impact of patch ordering in positional embeddings. To address this, we propose LOOPE, a learnable patch-ordering method that optimizes spatial representation for a given set of frequencies, providing a principled approach to patch order optimization. Empirical results show that our PE significantly improves classification accuracy across various ViT architectures. To rigorously evaluate the effectiveness of positional embeddings, we introduce the "Three Cell Experiment", a novel benchmarking framework that assesses the ability of PEs to retain relative and absolute positional information across different ViT architectures. Unlike standard evaluations, which typically report a performance gap of 4 to 6% between models with and without PE, our method reveals a striking 30 to 35% difference, offering a more sensitive diagnostic tool to measure the efficacy of PEs. Our experimental analysis confirms that the proposed LOOPE demonstrates enhanced effectiveness in retaining both relative and absolute positional information.
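The 2D-grid-to-1D-sequence problem the abstract describes can be illustrated with a short sketch. This is not the LOOPE method itself, only a minimal demonstration, assuming standard 1D sinusoidal embeddings, a 14x14 patch grid (as in ViT-B/16 on 224px inputs), and the default row-major (raster) flattening; the function and variable names are illustrative, not from the paper. Because sinusoidal PE similarity depends only on 1D index distance, the patch directly to the right of a patch (index offset 1) and the patch directly below it (index offset 14 under raster order) receive very different similarities despite being spatially equidistant:

```python
import numpy as np

def sinusoidal_pe(n_positions, dim=64):
    """Standard 1D sinusoidal positional embedding (Vaswani et al., 2017)."""
    pos = np.arange(n_positions)[:, None]
    # Geometric frequency schedule: 10000^(-2i/dim) for each sin/cos pair.
    div = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    pe = np.zeros((n_positions, dim))
    pe[:, 0::2] = np.sin(pos * div)
    pe[:, 1::2] = np.cos(pos * div)
    return pe

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

w = 14                      # 14x14 patch grid, flattened row-major
pe = sinusoidal_pe(w * w)

# Patch 0's right neighbor is sequence index 1; its neighbor directly
# below is sequence index w. Same spatial distance, different indices.
right = cos_sim(pe[0], pe[1])
below = cos_sim(pe[0], pe[w])
print(f"right neighbor: {right:.3f}, below neighbor: {below:.3f}")
```

The similarity gap between the two equidistant neighbors is an artifact of the flattening order, which is the degree of freedom LOOPE treats as learnable.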
Related papers
- A Lightweight 3D Anomaly Detection Method with Rotationally Invariant Features [60.76577388438418]
3D anomaly detection (AD) is a crucial task in computer vision, aiming to identify anomalous points or regions from point cloud data. Existing methods may encounter challenges when handling point clouds with changes in orientation and position because the resulting features may vary significantly. We propose a novel Rotationally Invariant Features (RIF) framework for 3D AD, which maps each point into a rotationally invariant space to maintain consistency of representation.
arXiv Detail & Related papers (2025-11-17T08:16:05Z) - Through the Lens of Doubt: Robust and Efficient Uncertainty Estimation for Visual Place Recognition [11.33609434801822]
Visual Place Recognition enables robots to identify previously visited locations by matching current observations against a database of known places. Failure-critical VPR applications, such as loop closure detection in simultaneous localization and mapping pipelines, require robust estimation of place matching uncertainty. We propose three training-free uncertainty metrics that estimate prediction confidence by analyzing inherent statistical patterns in similarity scores from any existing VPR method.
arXiv Detail & Related papers (2025-10-15T12:12:55Z) - BEVUDA++: Geometric-aware Unsupervised Domain Adaptation for Multi-View 3D Object Detection [56.477525075806966]
Vision-centric Bird's Eye View (BEV) perception holds considerable promise for autonomous driving. Recent studies have prioritized efficiency or accuracy enhancements, yet the issue of domain shift has been overlooked. We introduce an innovative geometric-aware teacher-student framework, BEVUDA++, to diminish this issue.
arXiv Detail & Related papers (2025-09-17T16:31:40Z) - Beyond flattening: a geometrically principled positional encoding for vision transformers with Weierstrass elliptic functions [2.8199098530835127]
Vision Transformers have demonstrated remarkable success in computer vision tasks. Traditional positional encoding approaches fail to establish monotonic correspondence between Euclidean spatial distances and sequential index distances. We propose WEF-PE, a mathematically principled approach that directly addresses embedding two-dimensional coordinates through natural complex domain representation.
arXiv Detail & Related papers (2025-08-26T16:14:59Z) - PAID: Pairwise Angular-Invariant Decomposition for Continual Test-Time Adaptation [70.98107766265636]
This paper takes the geometric attributes of pre-trained weights as a starting point, systematically analyzing three key components: magnitude, absolute angle, and pairwise angular structure. We find that the pairwise angular structure remains stable across diverse corrupted domains and encodes domain-invariant semantic information, suggesting it should be preserved during adaptation.
arXiv Detail & Related papers (2025-06-03T05:18:15Z) - Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability [53.21677928601684]
Layer-wise relevance propagation is one of the most promising approaches to explainability in deep learning. We propose specialized theoretically-grounded LRP rules designed to propagate attributions across various positional encoding methods. Our method significantly outperforms the state-of-the-art in both vision and NLP explainability tasks.
arXiv Detail & Related papers (2025-06-02T18:07:55Z) - Unpacking Positional Encoding in Transformers: A Spectral Analysis of Content-Position Coupling [10.931433906211534]
Positional encoding (PE) is essential for enabling Transformers to model sequential structure. We present a unified framework that analyzes PE through the spectral properties of Toeplitz and related matrices. We establish explicit content-relative mixing with relative-position Toeplitz signals as a key principle for effective PE design.
arXiv Detail & Related papers (2025-05-19T12:11:13Z) - Toward Relative Positional Encoding in Spiking Transformers [52.62008099390541]
Spiking neural networks (SNNs) are bio-inspired networks that model how neurons in the brain communicate through discrete spikes. In this paper, we introduce an approximate method for relative positional encoding (RPE) in Spiking Transformers.
arXiv Detail & Related papers (2025-01-28T06:42:37Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts. Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion [80.79938369319152]
We design a new pipeline coined PCF-Lift based on our Probabilistic Contrastive Fusion (PCF).
Our PCF-Lift significantly outperforms the state-of-the-art methods on widely used benchmarks, including the ScanNet dataset and the Messy Room dataset (4.4% improvement in scene-level PQ).
arXiv Detail & Related papers (2024-10-14T16:06:59Z) - TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training [21.56675189346088]
We introduce Transformation-Invariant Local (TraIL) features and the associated TraIL-Det architecture.
TraIL features exhibit rigid transformation invariance and effectively adapt to variations in point density.
They utilize the inherent isotropic radiation of LiDAR to enhance local representation.
Our method outperforms contemporary self-supervised 3D object detection approaches in terms of mAP on KITTI.
arXiv Detail & Related papers (2024-08-25T17:59:17Z) - Progress and Perspectives on Weak-value Amplification [9.675150350961202]
Weak-value amplification (WVA) is a metrological protocol that effectively amplifies ultra-small physical effects.
WVA provides new perspectives for recognizing the important role of post-selection in precision metrology.
arXiv Detail & Related papers (2024-07-14T05:26:53Z) - 2-D SSM: A General Spatial Layer for Visual Transformers [79.4957965474334]
A central objective in computer vision is to design models with appropriate 2-D inductive bias.
We leverage an expressive variation of the multidimensional State Space Model.
Our approach introduces efficient parameterization, accelerated computation, and a suitable normalization scheme.
arXiv Detail & Related papers (2023-06-11T09:41:37Z) - Parameter-Efficient Transformer with Hybrid Axial-Attention for Medical Image Segmentation [10.441315305453504]
We propose a parameter-efficient transformer to explore intrinsic inductive bias via position information for medical image segmentation.
We further present a novel Hybrid Axial-Attention (HAA) that incorporates spatial pixel-wise information and relative position information as inductive bias.
arXiv Detail & Related papers (2022-11-17T13:54:55Z) - ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention [48.697458429460184]
Two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.
This paper proposes a well-designed model named ERNIE-Sparse.
It consists of two distinctive parts: (i) Hierarchical Sparse Transformer (HST) to sequentially unify local and global information, and (ii) Self-Attention Regularization (SAR) to minimize the distance for transformers with different attention topologies.
arXiv Detail & Related papers (2022-03-23T08:47:01Z) - Short Range Correlation Transformer for Occluded Person Re-Identification [4.339510167603376]
We propose a partial feature transformer-based person re-identification framework named PFT.
The proposed PFT utilizes three modules to enhance the efficiency of the vision transformer.
Experimental results over occluded and holistic re-identification datasets demonstrate that the proposed PFT network achieves superior performance consistently.
arXiv Detail & Related papers (2022-01-04T11:12:39Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.