Multiscale Video Transformers for Class Agnostic Segmentation in Autonomous Driving
- URL: http://arxiv.org/abs/2508.14729v1
- Date: Wed, 20 Aug 2025 14:23:11 GMT
- Title: Multiscale Video Transformers for Class Agnostic Segmentation in Autonomous Driving
- Authors: Leila Cheshmi, Mennatullah Siam
- Abstract summary: We develop multiscale video transformers capable of detecting unknown objects using only motion cues.
Video semantic and panoptic segmentation often relies on known classes seen during training, overlooking novel categories.
We propose an efficient video transformer trained end-to-end for class-agnostic segmentation without optical flow.
- Score: 3.138395828947902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring safety in autonomous driving is a complex challenge requiring handling unknown objects and unforeseen driving scenarios. We develop multiscale video transformers capable of detecting unknown objects using only motion cues. Video semantic and panoptic segmentation often relies on known classes seen during training, overlooking novel categories. Recent visual grounding with large language models is computationally expensive, especially for pixel-level output. We propose an efficient video transformer trained end-to-end for class-agnostic segmentation without optical flow. Our method uses multi-stage multiscale query-memory decoding and a scale-specific random drop-token to ensure efficiency and accuracy, maintaining detailed spatiotemporal features with a shared, learnable memory module. Unlike conventional decoders that compress features, our memory-centric design preserves high-resolution information at multiple scales. We evaluate on DAVIS'16, KITTI, and Cityscapes. Our method consistently outperforms multiscale baselines while being efficient in GPU memory and run-time, demonstrating a promising direction for real-time, robust dense prediction in safety-critical robotics.
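A minimal sketch (not the authors' code) of two ideas named in the abstract: a scale-specific random drop-token step and query-memory decoding with a shared, learnable memory attended against each scale's tokens. Layer sizes, drop rates, and the update rule are illustrative assumptions.

```python
import torch
import torch.nn as nn


class QueryMemoryDecoder(nn.Module):
    def __init__(self, dim=256, num_memory=64, num_heads=8, drop_rates=(0.5, 0.3, 0.0)):
        super().__init__()
        # Shared, learnable memory reused across scales (assumption: one bank for all scales).
        self.memory = nn.Parameter(torch.randn(num_memory, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.drop_rates = drop_rates  # one token-drop rate per scale (coarse -> fine)

    def drop_tokens(self, tokens, rate):
        """Randomly keep a subset of tokens; applied only during training."""
        if not self.training or rate <= 0.0:
            return tokens
        b, n, d = tokens.shape
        keep = max(1, int(n * (1.0 - rate)))
        idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :keep]
        return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))

    def forward(self, multiscale_feats):
        """multiscale_feats: list of (B, C, H_i, W_i) maps, coarse to fine."""
        b = multiscale_feats[0].shape[0]
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        for feat, rate in zip(multiscale_feats, self.drop_rates):
            tokens = feat.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens at this scale
            tokens = self.drop_tokens(tokens, rate)       # scale-specific random drop-token
            out, _ = self.attn(query=mem, key=tokens, value=tokens)
            mem = self.norm(mem + out)                    # memory is updated; features stay uncompressed
        return mem


if __name__ == "__main__":
    feats = [torch.randn(2, 256, 16, 32), torch.randn(2, 256, 32, 64), torch.randn(2, 256, 64, 128)]
    dec = QueryMemoryDecoder().train()
    print(dec(feats).shape)  # torch.Size([2, 64, 256])
```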
Related papers
- TrajTok: Learning Trajectory Tokens enables better Video Understanding [63.1260672430712]
Tokenization in video models, typically through patchification, generates an excessive and redundant number of tokens.
We propose TrajTok, an end-to-end video tokenizer module that is fully integrated and co-trained with video models for a downstream objective.
We show that it can be seamlessly integrated as either a probing head for pretrained visual features (TrajAdapter) or an alignment connector in vision-language models (TrajVLM) with especially strong performance in long-video reasoning.
arXiv Detail & Related papers (2026-02-26T09:15:34Z)
- Towards Efficient and Effective Multi-Camera Encoding for End-to-End Driving [54.85072592658933]
We present Flex, an efficient and effective scene encoder that addresses the computational bottleneck of processing high-volume multi-camera data in autonomous driving.
By design, our approach is geometry-agnostic, learning a compact scene representation directly from data without relying on explicit 3D inductive biases.
Our findings challenge the prevailing assumption that 3D priors are necessary, demonstrating that a data-driven, joint encoding strategy offers a more scalable, efficient and effective path for future autonomous driving systems.
arXiv Detail & Related papers (2025-12-11T18:59:46Z)
- Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics [42.41787036246253]
We introduce the Multipole Attention Neural Operator (MANO), which computes attention in a distance-based multiscale fashion.
We show that MANO rivals state-of-the-art models such as ViT and Swin Transformer, while reducing runtime and peak memory usage by orders of magnitude.
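One simplified reading of "distance-based multiscale attention" (an illustration, not the MANO implementation): tokens attend exactly to a local neighbourhood and attend to pooled, coarser tokens for the rest of the image. Window size and pooling factor are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalPlusCoarseAttention(nn.Module):
    def __init__(self, dim=128, heads=4, window=8, pool=4):
        super().__init__()
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.coarse = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window, self.pool = window, pool

    def forward(self, x):
        """x: (B, C, H, W) feature map; H and W divisible by `window`."""
        b, c, h, w = x.shape
        win = self.window
        # Near field: exact attention inside non-overlapping windows.
        xl = x.view(b, c, h // win, win, w // win, win).permute(0, 2, 4, 3, 5, 1)
        xl = xl.reshape(b * (h // win) * (w // win), win * win, c)
        local, _ = self.local(xl, xl, xl)
        local = local.reshape(b, h // win, w // win, win, win, c).permute(0, 5, 1, 3, 2, 4)
        local = local.reshape(b, c, h, w)
        # Far field: attention to average-pooled (coarse) tokens for global context.
        coarse_kv = F.avg_pool2d(x, self.pool).flatten(2).transpose(1, 2)  # (B, HW/p^2, C)
        q = x.flatten(2).transpose(1, 2)                                   # (B, HW, C)
        glob, _ = self.coarse(q, coarse_kv, coarse_kv)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return local + glob


if __name__ == "__main__":
    attn = LocalPlusCoarseAttention()
    print(attn(torch.randn(2, 128, 32, 32)).shape)  # torch.Size([2, 128, 32, 32])
```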
arXiv Detail & Related papers (2025-07-03T16:05:26Z)
- Learning Motion and Temporal Cues for Unsupervised Video Object Segmentation [49.113131249753714]
We propose an efficient algorithm, termed MTNet, which concurrently exploits motion and temporal cues.
MTNet is devised by effectively merging appearance and motion features during the feature extraction process within encoders.
We employ a cascade of decoders across all feature levels to optimally exploit the derived features.
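An illustrative sketch (not MTNet itself) of the two ideas in this summary: appearance and motion features are merged at each encoder level, and a cascade of decoders refines the prediction across all levels. Channel sizes and the fusion operator (1x1 convolution over concatenated features) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FuseAndDecode(nn.Module):
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # Per-level fusion of appearance and motion features.
        self.fuse = nn.ModuleList([nn.Conv2d(2 * c, c, 1) for c in channels])
        # Per-level decoders; each takes the fused map plus the upsampled coarser decode.
        self.decode = nn.ModuleList([
            nn.Conv2d(c + (channels[i + 1] if i + 1 < len(channels) else 0), c, 3, padding=1)
            for i, c in enumerate(channels)
        ])
        self.head = nn.Conv2d(channels[0], 1, 1)

    def forward(self, appearance, motion):
        """appearance/motion: lists of (B, C_i, H_i, W_i) maps, fine to coarse."""
        fused = [f(torch.cat([a, m], dim=1)) for f, a, m in zip(self.fuse, appearance, motion)]
        x = None
        for i in reversed(range(len(fused))):  # cascade: coarsest level first, refine upwards
            inp = fused[i] if x is None else torch.cat(
                [fused[i], F.interpolate(x, size=fused[i].shape[-2:], mode="bilinear",
                                         align_corners=False)], dim=1)
            x = self.decode[i](inp)
        return torch.sigmoid(self.head(x))  # class-agnostic object mask


if __name__ == "__main__":
    app = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32), torch.randn(1, 256, 16, 16)]
    mot = [torch.randn_like(a) for a in app]
    print(FuseAndDecode()(app, mot).shape)  # torch.Size([1, 1, 64, 64])
```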
arXiv Detail & Related papers (2025-01-14T03:15:46Z)
- StreamMOS: Streaming Moving Object Segmentation with Multi-View Perception and Dual-Span Memory [21.300636683882338]
We propose a streaming network with a memory mechanism, called StreamMOS, to build the association of features and predictions among multiple inferences.
Specifically, we utilize a short-term memory to convey historical features, which can be regarded as a spatial prior for moving objects.
We also present a multi-view encoder with projection and asymmetric convolution to extract motion features of objects in different representations.
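A hedged sketch (not StreamMOS itself) of two ingredients named above: an asymmetric-convolution block (kx1 and 1xk branches alongside kxk) and a short-term memory that carries features from the previous inference as a spatial prior. The kernel size and the memory update rule are assumptions.

```python
import torch
import torch.nn as nn


class AsymmetricConvBlock(nn.Module):
    def __init__(self, channels=64, k=3):
        super().__init__()
        self.square = nn.Conv2d(channels, channels, k, padding=k // 2)
        self.horiz = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vert = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Sum of square, horizontal, and vertical convolution branches.
        return self.act(self.square(x) + self.horiz(x) + self.vert(x))


class StreamingSegmenter(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.encode = AsymmetricConvBlock(channels)
        self.merge = nn.Conv2d(2 * channels, channels, 1)
        self.head = nn.Conv2d(channels, 2, 1)  # moving vs. static logits
        self.memory = None                     # short-term memory of the previous inference

    def forward(self, feat):
        """feat: (B, C, H, W) features of the current frame/scan."""
        x = self.encode(feat)
        if self.memory is not None:
            x = self.merge(torch.cat([x, self.memory], dim=1))  # fuse historical features
        self.memory = x.detach()  # carry features to the next call
        return self.head(x)


if __name__ == "__main__":
    model = StreamingSegmenter()
    for _ in range(3):  # streaming: each call reuses the previous features
        print(model(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 2, 32, 32])
```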
arXiv Detail & Related papers (2024-07-25T09:51:09Z)
- TAM-VT: Transformation-Aware Multi-scale Video Transformer for Segmentation and Tracking [33.75267864844047]
Video Object Segmentation (VOS) has emerged as an increasingly important problem with the availability of larger datasets and more complex and realistic settings.
We propose a novel, clip-based DETR-style encoder-decoder architecture, which focuses on systematically analyzing and addressing the aforementioned challenges.
Specifically, we propose a novel transformation-aware loss that focuses learning on portions of the video where an object undergoes significant deformations.
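A hedged sketch of a "transformation-aware" reweighting in the spirit of the summary above (not the paper's exact loss): the per-frame segmentation loss is upweighted where the ground-truth mask changes strongly between consecutive frames, measured here by 1 - IoU. The weighting formula is an assumption.

```python
import torch
import torch.nn.functional as F


def transformation_aware_loss(logits, masks, base_weight=1.0, deform_weight=2.0):
    """logits, masks: (B, T, H, W); masks are binary ground-truth masks (float)."""
    b, t, h, w = masks.shape
    # Deformation proxy: 1 - IoU between consecutive ground-truth masks.
    inter = (masks[:, 1:] * masks[:, :-1]).flatten(2).sum(-1)
    union = ((masks[:, 1:] + masks[:, :-1]) > 0).float().flatten(2).sum(-1)
    deform = 1.0 - inter / union.clamp(min=1.0)                  # (B, T-1), high = large change
    weight = torch.ones(b, t, device=masks.device) * base_weight
    weight[:, 1:] += deform_weight * deform                      # emphasize strongly deforming frames
    per_frame = F.binary_cross_entropy_with_logits(
        logits, masks, reduction="none").flatten(2).mean(-1)     # (B, T)
    return (weight * per_frame).mean()


if __name__ == "__main__":
    logits = torch.randn(2, 5, 64, 64)
    masks = (torch.rand(2, 5, 64, 64) > 0.5).float()
    print(transformation_aware_loss(logits, masks))
```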
arXiv Detail & Related papers (2023-12-13T21:02:03Z)
- A Simple Recipe for Contrastively Pre-training Video-First Encoders Beyond 16 Frames [57.758863967770594]
We build on the common paradigm of transferring large-scale, image-text models to video via shallow temporal fusion.
We expose two limitations of the approach: (1) decreased spatial capabilities, likely due to poor video-language alignment in standard video datasets, and (2) higher memory consumption, bottlenecking the number of frames that can be processed.
arXiv Detail & Related papers (2023-12-12T16:10:19Z)
- Ring Attention with Blockwise Transformers for Near-Infinite Context [88.61687950039662]
We present a novel approach, Ring Attention with Blockwise Transformers (Ring Attention), which leverages blockwise computation of self-attention and feedforward to distribute long sequences across multiple devices.
Our approach enables training and inference of sequences that are up to device count times longer than those achievable by prior memory-efficient Transformers.
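A single-device sketch of the blockwise attention that Ring Attention builds on (an illustration of the idea, not the distributed implementation): keys/values are processed one block at a time with a running softmax, so the full attention matrix is never materialized. In Ring Attention these KV blocks would be rotated between devices; here they simply live in a Python list.

```python
import math
import torch


def blockwise_attention(q, kv_blocks):
    """q: (B, Nq, D); kv_blocks: list of (k, v) pairs, each of shape (B, Nb, D)."""
    scale = 1.0 / math.sqrt(q.shape[-1])
    acc = torch.zeros_like(q)                                        # running weighted sum of values
    denom = torch.zeros(*q.shape[:2], 1, device=q.device)            # running softmax normalizer
    running_max = torch.full((*q.shape[:2], 1), float("-inf"), device=q.device)
    for k, v in kv_blocks:                                           # one KV block at a time
        scores = q @ k.transpose(-2, -1) * scale                     # (B, Nq, Nb)
        block_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(running_max, block_max)
        correction = torch.exp(running_max - new_max)                # rescale previous partial sums
        p = torch.exp(scores - new_max)
        acc = acc * correction + p @ v
        denom = denom * correction + p.sum(dim=-1, keepdim=True)
        running_max = new_max
    return acc / denom


if __name__ == "__main__":
    q, k, v = torch.randn(2, 16, 32), torch.randn(2, 64, 32), torch.randn(2, 64, 32)
    blocks = [(k[:, i:i + 16], v[:, i:i + 16]) for i in range(0, 64, 16)]
    exact = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(32), dim=-1) @ v
    print(torch.allclose(blockwise_attention(q, blocks), exact, atol=1e-5))  # True
```

The rescaling by `correction` is the standard online-softmax trick: it keeps the partial sums numerically consistent as new blocks arrive, which is what lets the sequence be split across blocks (or devices) in the first place.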
arXiv Detail & Related papers (2023-10-03T08:44:50Z)
- Multiscale Memory Comparator Transformer for Few-Shot Video Segmentation [8.16038976420041]
We present a meta-learned Multiscale Memory Comparator (MMC) for few-shot video segmentation.
Unlike previous work, we instead preserve the detailed feature maps during cross-scale information exchange.
Our approach outperforms the baseline and yields state-of-the-art performance.
arXiv Detail & Related papers (2023-07-15T14:21:58Z)
- Learning Trajectory-Aware Transformer for Video Super-Resolution [50.49396123016185]
Video super-resolution aims to restore a sequence of high-resolution (HR) frames from their low-resolution (LR) counterparts.
Existing approaches usually align and aggregate video frames from limited adjacent frames.
We propose a novel Trajectory-aware Transformer for Video Super-Resolution (TTVSR).
arXiv Detail & Related papers (2022-04-08T03:37:39Z)
- Multiscale Vision Transformers [79.76412415996892]
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models.
We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks.
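A hedged sketch of the core MViT idea as described above: attention stages whose queries/keys/values are spatially pooled, so resolution shrinks while channel capacity grows from stage to stage. The stage widths, pooling operator, and strides here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn


class PoolingAttentionStage(nn.Module):
    def __init__(self, in_dim, out_dim, heads, q_stride):
        super().__init__()
        self.proj = nn.Conv2d(in_dim, out_dim, 1)            # channel expansion at the stage boundary
        self.pool_q = nn.MaxPool2d(q_stride) if q_stride > 1 else nn.Identity()
        self.pool_kv = nn.MaxPool2d(2)                        # keys/values pooled for efficiency
        self.attn = nn.MultiheadAttention(out_dim, heads, batch_first=True)

    def forward(self, x):
        """x: (B, C_in, H, W) -> (B, C_out, H/q_stride, W/q_stride)."""
        x = self.proj(x)
        q, kv = self.pool_q(x), self.pool_kv(x)
        b, c, h, w = q.shape
        q_tok = q.flatten(2).transpose(1, 2)
        kv_tok = kv.flatten(2).transpose(1, 2)
        out, _ = self.attn(q_tok, kv_tok, kv_tok)
        return (q_tok + out).transpose(1, 2).reshape(b, c, h, w)  # residual on pooled queries


if __name__ == "__main__":
    # Resolution halves while channels double across stages (illustrative schedule).
    stages = nn.Sequential(
        PoolingAttentionStage(96, 192, heads=4, q_stride=2),
        PoolingAttentionStage(192, 384, heads=8, q_stride=2),
    )
    print(stages(torch.randn(2, 96, 56, 56)).shape)  # torch.Size([2, 384, 14, 14])
```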
arXiv Detail & Related papers (2021-04-22T17:59:45Z)