EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
- URL: http://arxiv.org/abs/2104.14682v1
- Date: Thu, 29 Apr 2021 22:30:29 GMT
- Title: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion
- Authors: Aleksandr Kim, Aljoša Ošep, Laura Leal-Taixé
- Abstract summary: Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-object tracking (MOT) enables mobile robots to perform well-informed
motion planning and navigation by localizing surrounding objects in 3D space
and time. Existing methods rely on depth sensors (e.g., LiDAR) to detect and
track targets in 3D space, but only up to a limited sensing range due to the
sparsity of the signal. On the other hand, cameras provide a dense and rich
visual signal that helps to localize even distant objects, but only in the
image domain. In this paper, we propose EagerMOT, a simple tracking formulation
that eagerly integrates all available object observations from both sensor
modalities to obtain a well-informed interpretation of the scene dynamics.
Using images, we can identify distant incoming objects, while depth estimates
allow for precise trajectory localization as soon as objects are within the
depth-sensing range. With EagerMOT, we achieve state-of-the-art results across
several MOT tasks on the KITTI and NuScenes datasets. Our code is available at
https://github.com/aleksandrkim61/EagerMOT.
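The eager two-stage idea in the abstract, associating tracks with 3D detections first and falling back to 2D image-plane association for tracks the depth sensor misses, can be sketched roughly as follows. This is an illustrative simplification, not the authors' implementation; all function names, data layouts, and thresholds are hypothetical.

```python
import math

# Hedged sketch of two-stage camera-LiDAR association (hypothetical names
# and thresholds; not the EagerMOT reference implementation).

def iou_2d(a, b):
    """IoU of two image-plane boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_match(track_ids, det_ids, score, minimize, threshold):
    """Greedy one-to-one matching: repeatedly take the best remaining pair."""
    pairs = sorted(((score(t, d), t, d) for t in track_ids for d in det_ids),
                   key=lambda p: p[0], reverse=not minimize)
    matches, used_t, used_d = [], set(), set()
    for s, t, d in pairs:
        within_gate = s <= threshold if minimize else s >= threshold
        if within_gate and t not in used_t and d not in used_d:
            matches.append((t, d))
            used_t.add(t)
            used_d.add(d)
    return matches, set(track_ids) - used_t

def associate(tracks, dets_3d, dets_2d, max_dist=2.0, min_iou=0.3):
    """Stage 1: tracks vs. LiDAR detections by 3D center distance.
    Stage 2: leftover tracks vs. camera detections by 2D IoU, so objects
    beyond depth-sensing range can still be tracked in the image."""
    m3d, leftover = greedy_match(
        list(tracks), list(dets_3d),
        lambda t, d: math.dist(tracks[t]["center"], dets_3d[d]),
        minimize=True, threshold=max_dist)
    m2d, _ = greedy_match(
        sorted(leftover), list(dets_2d),
        lambda t, d: iou_2d(tracks[t]["box2d"], dets_2d[d]),
        minimize=False, threshold=min_iou)
    return m3d, m2d

# Tiny example: a nearby car is matched via LiDAR, a distant one via camera.
tracks = {
    "car_near": {"center": (5.0, 0.0, 1.0), "box2d": (100, 100, 200, 200)},
    "car_far": {"center": (90.0, 0.0, 1.0), "box2d": (400, 120, 430, 150)},
}
dets_3d = {"d0": (5.5, 0.2, 1.0)}       # only the near car is in LiDAR range
dets_2d = {"i0": (398, 118, 432, 152)}  # the far car is visible in the image
m3d, m2d = associate(tracks, dets_3d, dets_2d)
```

In this toy scene the distant car produces no usable LiDAR return, yet its track survives on image evidence alone and can be upgraded to a precise 3D trajectory once it enters depth-sensing range.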
Related papers
- RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud
We introduce RaTrack, an innovative solution tailored for radar-based tracking.
Our method focuses on motion segmentation and clustering, enriched by a motion estimation module.
RaTrack showcases superior tracking precision of moving objects, largely surpassing the performance of the state of the art.
arXiv Detail & Related papers (2023-09-18T13:02:29Z)
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking
We find that the motion cue of objects along different time frames is critical in 3D multi-object tracking.
We propose MoMA-M3T, a framework that mainly consists of three motion-aware components.
We conduct extensive experiments on the nuScenes and KITTI datasets to demonstrate that MoMA-M3T achieves competitive performance against state-of-the-art methods.
arXiv Detail & Related papers (2023-08-22T17:53:58Z)
- SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving
3D scene understanding plays a vital role in vision-based autonomous driving.
We propose SurroundOcc, a method to predict 3D occupancy from multi-camera images.
arXiv Detail & Related papers (2023-03-16T17:59:08Z)
- TripletTrack: 3D Object Tracking using Triplet Embeddings and LSTM
3D object tracking is a critical task in autonomous driving systems.
In this paper we investigate the use of triplet embeddings in combination with motion representations for 3D object tracking.
arXiv Detail & Related papers (2022-10-28T15:23:50Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association
This paper proposes a robust camera-LiDAR fusion-based MOT method that achieves a good trade-off between accuracy and speed.
Our proposed method shows clear advantages over state-of-the-art MOT methods in terms of both tracking accuracy and processing speed.
arXiv Detail & Related papers (2022-02-24T13:36:29Z)
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object Tracking
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
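A greedy association step of the kind this summary mentions can be sketched as follows. This is an illustrative toy, not the CFTrack implementation: repeatedly take the globally lowest-cost (track, detection) pair whose cost is below a gating threshold.

```python
def greedy_assign(cost, gate):
    """Greedy association on a cost matrix (rows: tracks, cols: detections):
    repeatedly take the globally lowest remaining cost below `gate`."""
    pairs = sorted((c, i, j)
                   for i, row in enumerate(cost)
                   for j, c in enumerate(row))
    used_rows, used_cols, out = set(), set(), []
    for c, i, j in pairs:
        if c >= gate:
            break  # costs are sorted ascending, so nothing later can qualify
        if i not in used_rows and j not in used_cols:
            out.append((i, j))
            used_rows.add(i)
            used_cols.add(j)
    return out

# Two tracks, three detections; entries are (say) center distances in meters.
cost = [[0.4, 3.0, 9.0],
        [5.0, 0.7, 8.0]]
matches = greedy_assign(cost, gate=2.0)  # [(0, 0), (1, 1)]
```

Greedy matching trades the global optimality of Hungarian-style assignment for O(nm log nm) simplicity, which is usually an acceptable trade when the cost gate already prunes ambiguous pairs.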
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
- Monocular Quasi-Dense 3D Object Tracking
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- Deep Continuous Fusion for Multi-Sensor 3D Object Detection
We propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization.
We design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution.
arXiv Detail & Related papers (2020-12-20T18:43:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.