FlowTrack: Point-level Flow Network for 3D Single Object Tracking
- URL: http://arxiv.org/abs/2407.01959v1
- Date: Tue, 2 Jul 2024 05:31:34 GMT
- Title: FlowTrack: Point-level Flow Network for 3D Single Object Tracking
- Authors: Shuo Li, Yubo Cui, Zhiheng Li, Zheng Fang
- Abstract summary: 3D single object tracking (SOT) is a crucial task in the fields of mobile robotics and autonomous driving.
Traditional motion-based approaches achieve target tracking by estimating the relative movement of the target between two consecutive frames.
We propose a point-level flow method with multi-frame information for the 3D SOT task, called FlowTrack.
- Score: 24.93453336062018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D single object tracking (SOT) is a crucial task in the fields of mobile robotics and autonomous driving. Traditional motion-based approaches achieve target tracking by estimating the relative movement of the target between two consecutive frames. However, they usually overlook local motion information of the target and fail to exploit historical frame information effectively. To overcome these limitations, we propose a point-level flow method with multi-frame information for the 3D SOT task, called FlowTrack. Specifically, by estimating the flow for each point in the target, our method can capture the local motion details of the target, thereby improving tracking performance. At the same time, to handle scenes with sparse points, we present a learnable target feature as a bridge to efficiently integrate target information from past frames. Moreover, we design a novel Instance Flow Head to transform dense point-level flow into instance-level motion, effectively aggregating local motion information to obtain global target motion. Finally, our method achieves competitive performance with improvements of 5.9% on the KITTI dataset and 2.9% on NuScenes. The code will be made publicly available soon.
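The abstract's core idea, turning dense per-point flow into a single instance-level motion, can be illustrated with a classical closed-form stand-in. The sketch below fits one rigid BEV motion (yaw, tx, ty) to per-point flow vectors by least squares (2D Kabsch alignment); the paper's Instance Flow Head *learns* this aggregation inside the network, so this is only a minimal geometric analogue, and the function name and interface are assumptions, not the paper's API.

```python
import math

def instance_motion_from_flow(points, flow):
    """Aggregate dense per-point BEV flow into one instance-level rigid
    motion (yaw, tx, ty) via a closed-form least-squares (2D Kabsch) fit.
    Illustrative stand-in for a learned Instance Flow Head, not the
    paper's actual method.

    points: list of (x, y) target points in the previous frame
    flow:   list of (dx, dy) per-point flow vectors
    """
    n = len(points)
    # Flowed (current-frame) positions of each point.
    dst = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(points, flow)]
    # Centroids of source and destination point sets.
    cx_s = sum(x for x, _ in points) / n
    cy_s = sum(y for _, y in points) / n
    cx_d = sum(x for x, _ in dst) / n
    cy_d = sum(y for _, y in dst) / n
    # Closed-form optimal planar rotation from centered correspondences.
    s_sin = s_cos = 0.0
    for (x, y), (u, v) in zip(points, dst):
        ax, ay = x - cx_s, y - cy_s
        bx, by = u - cx_d, v - cy_d
        s_sin += ax * by - ay * bx  # cross terms -> sin component
        s_cos += ax * bx + ay * by  # dot terms   -> cos component
    yaw = math.atan2(s_sin, s_cos)
    # Translation maps the rotated source centroid onto the destination.
    c, s = math.cos(yaw), math.sin(yaw)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return yaw, tx, ty
```

For noiseless rigid motion, the fit recovers the true yaw and translation exactly; with noisy per-point flow it returns the least-squares rigid motion, which is precisely the kind of local-to-global aggregation the Instance Flow Head is described as performing.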
Related papers
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z) - SeMoLi: What Moves Together Belongs Together [51.72754014130369]
We tackle semi-supervised object detection based on motion cues.
Recent results suggest that motion-based clustering methods can be used to pseudo-label instances of moving objects.
We re-think this approach and suggest that both object detection and motion-inspired pseudo-labeling can be tackled in a data-driven manner.
arXiv Detail & Related papers (2024-02-29T18:54:53Z) - STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking [11.901758708579642]
3D single object tracking with point clouds is a critical task in 3D computer vision.
Previous methods usually input the last two frames, using the template point cloud from the previous frame and the search-area point cloud from the current frame, respectively.
arXiv Detail & Related papers (2023-06-30T07:25:11Z) - An Effective Motion-Centric Paradigm for 3D Single Object Tracking in
Point Clouds [50.19288542498838]
3D single object tracking in LiDAR point clouds (LiDAR SOT) plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle LiDAR SOT from a new perspective.
arXiv Detail & Related papers (2023-03-21T17:28:44Z) - Target-Aware Tracking with Long-term Context Attention [8.20858704675519]
Long-term context attention (LCA) module can perform extensive information fusion on the target and its context from long-term frames.
LCA uses the target state from the previous frame to exclude the interference of similar objects and complex backgrounds.
Our tracker achieves state-of-the-art performance on multiple benchmarks, with 71.1% AUC, 89.3% NP, and 73.0% AO on LaSOT, TrackingNet, and GOT-10k, respectively.
arXiv Detail & Related papers (2023-02-27T14:40:58Z) - CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z) - Learning Dynamic Compact Memory Embedding for Deformable Visual Object
Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms strong segmentation-based trackers, namely D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z) - Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z) - Tracking from Patterns: Learning Corresponding Patterns in Point Clouds
for 3D Object Tracking [34.40019455462043]
We propose to learn 3D object correspondences from temporal point cloud data and infer the motion information from correspondence patterns.
Our method outperforms existing 3D tracking methods on both the KITTI and the larger-scale nuScenes datasets.
arXiv Detail & Related papers (2020-10-20T06:07:20Z) - Know Your Surroundings: Exploiting Scene Information for Object Tracking [181.1750279330811]
Current state-of-the-art trackers rely only on a target appearance model to localize the object in each frame.
We propose a novel tracking architecture which can utilize scene information for tracking.
arXiv Detail & Related papers (2020-03-24T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.