DeepInteraction: 3D Object Detection via Modality Interaction
- URL: http://arxiv.org/abs/2208.11112v2
- Date: Wed, 24 Aug 2022 16:09:17 GMT
- Title: DeepInteraction: 3D Object Detection via Modality Interaction
- Authors: Zeyu Yang, Jiaqi Chen, Zhenwei Miao, Wei Li, Xiatian Zhu, Li Zhang
- Abstract summary: We introduce a novel modality interaction strategy for top-performing 3D object detectors.
Our method ranks first on the highly competitive nuScenes object detection leaderboard.
- Score: 37.85057350887215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing top-performing 3D object detectors typically rely on a
multi-modal fusion strategy. This design, however, is fundamentally restricted:
it overlooks useful modality-specific information, ultimately hampering model
performance. To address this limitation, in this work we introduce a novel
modality interaction strategy in which individual per-modality representations
are learned and maintained throughout, enabling their unique characteristics to
be exploited during object detection. To realize this strategy, we design the
DeepInteraction architecture, characterized by a multi-modal representational
interaction encoder and a multi-modal predictive interaction decoder.
Experiments on the large-scale nuScenes dataset show that our proposed method
surpasses all prior art, often by a large margin. Crucially, our method ranks
first on the highly competitive nuScenes object detection leaderboard.
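To make the encoder/decoder split concrete, here is a minimal PyTorch sketch of the idea as we read it from the abstract; it is not the authors' implementation. The class names (RepresentationalInteractionEncoder, PredictiveInteractionDecoder), the query count, and the 10-dimensional box head are our assumptions, and plain multi-head attention stands in for the paper's specialized interaction operators.

```python
import torch
import torch.nn as nn

class RepresentationalInteractionEncoder(nn.Module):
    """Keeps separate LiDAR and camera streams alive, exchanging information
    via bidirectional cross-attention instead of collapsing them into one
    fused feature map (hypothetical stand-in for the paper's encoder)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.img_to_pts = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pts_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, pts_feat, img_feat):
        # Each stream queries the other, then keeps its own representation.
        pts_out, _ = self.img_to_pts(pts_feat, img_feat, img_feat)
        img_out, _ = self.pts_to_img(img_feat, pts_feat, pts_feat)
        return pts_feat + pts_out, img_feat + img_out

class PredictiveInteractionDecoder(nn.Module):
    """Object queries attend to each modality in turn, so predictions are
    refined per modality rather than from a single fused feature."""
    def __init__(self, dim: int, num_queries: int = 200, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.attn_pts = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(dim, 10)  # e.g. box center/size/yaw/velocity

    def forward(self, pts_feat, img_feat):
        q = self.queries.weight.unsqueeze(0).expand(pts_feat.size(0), -1, -1)
        q, _ = self.attn_pts(q, pts_feat, pts_feat)  # refine against LiDAR
        q, _ = self.attn_img(q, img_feat, img_feat)  # refine against images
        return self.head(q)  # (B, num_queries, box_params)

# Toy usage with flattened BEV / image tokens of channel width 256.
pts = torch.randn(2, 1000, 256)   # (B, N_lidar_tokens, C)
img = torch.randn(2, 1500, 256)   # (B, N_image_tokens, C)
enc = RepresentationalInteractionEncoder(256)
dec = PredictiveInteractionDecoder(256)
boxes = dec(*enc(pts, img))       # (2, 200, 10)
```

The design point the sketch tries to capture is that neither modality is reduced to an auxiliary input: both representations survive the encoder and both are consulted by the decoder.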
Related papers
- DeepInteraction++: Multi-Modality Interaction for Autonomous Driving [80.8837864849534]
We introduce a novel modality interaction strategy that allows individual per-modality representations to be learned and maintained throughout.
DeepInteraction++ is a multi-modal interaction framework characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder.
Experiments demonstrate the superior performance of the proposed framework on both 3D object detection and end-to-end autonomous driving tasks.
arXiv Detail & Related papers (2024-08-09T14:04:21Z) - Simultaneous Detection and Interaction Reasoning for Object-Centric Action Recognition [21.655278000690686]
We propose an end-to-end object-centric action recognition framework.
It simultaneously performs detection and interaction reasoning in a single stage.
We conduct experiments on two datasets, Something-Else and Ikea-Assembly.
arXiv Detail & Related papers (2024-04-18T05:06:12Z) - Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z) - Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
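As a rough illustration of how such a query might adaptively aggregate global context within one modality, the sketch below uses learnable queries cross-attending to that modality's tokens. The class name ImplicitManipulationQuery and all shapes are hypothetical stand-ins inferred from the one-sentence summary, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ImplicitManipulationQuery(nn.Module):
    # Hypothetical sketch: a small set of learnable queries pools global
    # contextual cues from one modality's token sequence via cross-attention.
    def __init__(self, dim: int, num_queries: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim) for a single modality
        q = self.queries.weight.unsqueeze(0).expand(tokens.size(0), -1, -1)
        ctx, _ = self.attn(q, tokens, tokens)  # adaptive global aggregation
        return ctx  # (B, num_queries, dim)

pooled = ImplicitManipulationQuery(256)(torch.randn(2, 196, 256))
```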
arXiv Detail & Related papers (2023-09-22T06:55:41Z) - DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z) - Towards Accurate Camouflaged Object Detection with Mixture Convolution and Interactive Fusion [45.45231015502287]
We propose a novel deep-learning-based COD approach that integrates a large receptive field and effective feature fusion into a unified framework.
Our method detects camouflaged objects with an effective fusion strategy that aggregates rich context information from a large receptive field.
arXiv Detail & Related papers (2021-01-14T16:06:08Z) - Self-supervised Human Detection and Segmentation via Multi-view Consensus [116.92405645348185]
We propose a multi-camera framework in which geometric constraints are embedded in the form of multi-view consistency during training.
We show that our approach outperforms state-of-the-art self-supervised person detection and segmentation techniques on images that visually depart from those of standard benchmarks.
arXiv Detail & Related papers (2020-12-09T15:47:21Z) - A Deep Learning Approach to Object Affordance Segmentation [31.221897360610114]
We design an autoencoder that infers pixel-wise affordance labels in both videos and static images.
Our model removes the need for object labels and bounding boxes by using a soft-attention mechanism.
We show that our model achieves competitive results compared to strongly supervised methods on SOR3D-AFF.
arXiv Detail & Related papers (2020-04-18T15:34:41Z)