Propagate And Calibrate: Real-time Passive Non-line-of-sight Tracking
- URL: http://arxiv.org/abs/2303.11791v2
- Date: Mon, 27 Mar 2023 10:11:31 GMT
- Title: Propagate And Calibrate: Real-time Passive Non-line-of-sight Tracking
- Authors: Yihao Wang, Zhigang Wang, Bin Zhao, Dong Wang, Mulin Chen, Xuelong Li
- Abstract summary: We propose a purely passive method to track a person walking in an invisible room by only observing a relay wall.
To extract imperceptible changes in videos of the relay wall, we introduce difference frames as an essential carrier of temporal-local motion information.
To evaluate the proposed method, we build and publish the first dynamic passive NLOS tracking dataset, NLOS-Track.
- Score: 84.38335117043907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Non-line-of-sight (NLOS) tracking has drawn increasing attention in recent
years due to its ability to detect object motion out of sight. Most previous
works on NLOS tracking rely on active illumination, e.g., lasers, and suffer
from high cost and elaborate experimental conditions. Moreover, these techniques
remain far from practical application because of their oversimplified settings. In
contrast, we propose a purely passive method to track a person walking in an
invisible room by only observing a relay wall, which is more in line with real
application scenarios, e.g., security. To extract imperceptible changes in
videos of the relay wall, we introduce difference frames as an essential
carrier of temporal-local motion information. In addition, we propose PAC-Net,
which alternates propagation and calibration steps, making it capable of
leveraging both dynamic and static information at frame-level granularity. To
evaluate the proposed method, we build and publish the first dynamic passive
NLOS tracking dataset, NLOS-Track, which fills the vacuum of realistic NLOS
datasets. NLOS-Track contains thousands of NLOS video clips and corresponding
trajectories; both real-shot and synthetic data are included. Our code and
dataset are available at https://againstentropy.github.io/NLOS-Track/.
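As an illustration of the propagate-and-calibrate loop described in the abstract, here is a minimal sketch in PyTorch. It is a toy construction under stated assumptions (the `PACCell` name, the flatten-and-linear encoders, and the GRU-based updates are ours), not the authors' released PAC-Net:

```python
import torch
import torch.nn as nn

class PACCell(nn.Module):
    """Alternate a propagation update (difference frame -> dynamic cues) with a
    calibration update (raw frame -> static cues) on one shared hidden state."""
    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Toy encoders; a real network would use convolutional features.
        self.diff_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.frame_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.propagate = nn.GRUCell(feat_dim, hidden_dim)  # dynamic (motion) update
        self.calibrate = nn.GRUCell(feat_dim, hidden_dim)  # static (appearance) update
        self.head = nn.Linear(hidden_dim, 2)               # predicted (x, y) position

    def forward(self, frames):                  # frames: (T, C, H, W) wall video
        h = frames.new_zeros(1, self.hidden_dim)
        coords = []
        for t in range(1, frames.shape[0]):
            diff = (frames[t] - frames[t - 1]).unsqueeze(0)  # difference frame
            h = self.propagate(self.diff_enc(diff), h)       # propagate motion cues
            h = self.calibrate(self.frame_enc(frames[t].unsqueeze(0)), h)  # calibrate
            coords.append(self.head(h))
        return torch.stack(coords, dim=1)       # (1, T-1, 2) estimated trajectory

# Example: an 8-frame, 64x64 grayscale clip yields a 7-point trajectory.
traj = PACCell()(torch.rand(8, 1, 64, 64))
```

In this reading, the propagation step integrates motion cues from the difference frame, and the calibration step re-anchors the state with the static content of the raw frame, matching the frame-level alternation the abstract describes.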
Related papers
- PathFinder: Attention-Driven Dynamic Non-Line-of-Sight Tracking with a Mobile Robot [3.387892563308912]
We introduce a novel approach to process a sequence of dynamic successive frames in a line-of-sight (LOS) video using an attention-based neural network.
We validate the approach on in-the-wild scenes using a drone for video capture, thus demonstrating low-cost NLOS imaging in dynamic capture environments.
arXiv Detail & Related papers (2024-04-07T17:31:53Z)
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- BEVTrack: A Simple and Strong Baseline for 3D Single Object Tracking in Bird's-Eye View [56.77287041917277]
3D Single Object Tracking (SOT) is a fundamental task of computer vision, proving essential for applications like autonomous driving.
In this paper, we propose BEVTrack, a simple yet effective baseline method.
By estimating the target motion in Bird's-Eye View (BEV) to perform tracking, BEVTrack demonstrates surprising simplicity in its network design, training objectives, and tracking pipeline, while achieving superior performance.
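As a rough illustration of regressing target motion in BEV (a toy construction of ours, not BEVTrack's actual head; `BEVMotionHead` and its dimensions are assumptions):

```python
import torch
import torch.nn as nn

class BEVMotionHead(nn.Module):
    """Toy head: concatenate BEV feature maps from two frames, regress (dx, dy)."""
    def __init__(self, bev_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * bev_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),                   # target translation in BEV
        )

    def forward(self, bev_prev, bev_curr):      # each: (B, C, H, W)
        return self.net(torch.cat([bev_prev, bev_curr], dim=1))

# Example: two 64-channel 128x128 BEV maps -> a (1, 2) motion estimate.
motion = BEVMotionHead()(torch.rand(1, 64, 128, 128), torch.rand(1, 64, 128, 128))
```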
arXiv Detail & Related papers (2023-09-05T12:42:26Z)
- Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports [26.33239898091364]
We propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU) for sports scenarios.
Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios.
Our proposed method demonstrates remarkable effectiveness in tracking objects with irregular motion, achieving a score of 77.2% on the SportsMOT dataset and 85.4% on the SoccerNet-Tracking dataset.
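The box-expansion trick at the core of ExpansionIoU can be sketched as follows (a minimal illustration; the `scale` value and the iteration policy are assumptions, not the paper's exact settings):

```python
def expansion_iou(a, b, scale=0.7):
    """IoU after expanding both (x1, y1, x2, y2) boxes by `scale` per side,
    so that fast or irregular motion still produces overlap for matching."""
    def expand(box):
        x1, y1, x2, y2 = box
        dw, dh = (x2 - x1) * scale / 2, (y2 - y1) * scale / 2
        return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = expand(a), expand(b)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / (union + 1e-9)

# Two disjoint boxes still get a usable matching score once expanded.
print(expansion_iou((0, 0, 10, 10), (12, 0, 22, 10)))
```

Presumably, the "iterative scale-up" in the title refers to retrying association with progressively larger expansion scales for tracks left unmatched, though the exact schedule is not stated in the summary.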
arXiv Detail & Related papers (2023-06-22T17:47:08Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
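A generic learnable motion predictor of the kind the summary describes might look like the following (purely illustrative; MotionTrack's actual predictor differs, and `history`, `hidden`, and the MLP design are our assumptions):

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy predictor: map the last `history` box states to the next box offset."""
    def __init__(self, history=5, box_dim=4, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(history * box_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, box_dim),         # predicted (dx, dy, dw, dh)
        )

    def forward(self, boxes):                   # boxes: (B, history, box_dim)
        return self.mlp(boxes.flatten(1))

# Example: predict the next offset for one track from its last 5 boxes.
offset = MotionPredictor()(torch.rand(1, 5, 4))
```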
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z)
- Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking [45.524183249765244]
Vehicle tracking is an essential task in the multi-object tracking (MOT) field.
In this paper, we try to explore the significance of motion patterns for vehicle tracking without appearance information.
We propose a novel approach that tackles the association problem in long-term tracking by fully exploiting motion information alone.
arXiv Detail & Related papers (2021-08-13T02:27:09Z)
- Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences [17.351635242415703]
We study a novel setting of this problem: model-free single object tracking (SOT).
SOT takes the object state in the first frame as input, and jointly solves state estimation and tracking in subsequent frames.
We then propose an optimization-based algorithm called SOTracker based on point cloud registration, vehicle shapes, and motion priors.
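To illustrate what such an optimization-based tracker can look like, here is a toy 2D pose fit combining a registration term with a motion prior (our construction only; SOTracker's real objective also incorporates vehicle shape priors and operates on 3D LiDAR points):

```python
import torch

def track_step(obj_pts, scene_pts, prev_motion, iters=100, w_prior=0.1):
    """Fit a 2D rigid motion (dx, dy, yaw) moving obj_pts onto scene_pts."""
    pose = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([pose], lr=0.05)
    for _ in range(iters):
        c, s = torch.cos(pose[2]), torch.sin(pose[2])
        R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
        moved = obj_pts @ R.T + pose[:2]
        # Registration term: mean distance from each moved object point to
        # its nearest neighbour among the current-frame points.
        reg = torch.cdist(moved, scene_pts).min(dim=1).values.mean()
        # Motion prior: stay close to the previous frame-to-frame motion.
        loss = reg + w_prior * (pose - prev_motion).square().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return pose.detach()

# Example: approximately recover a pure translation of (0.5, 0.0).
pts = torch.rand(50, 2)
pose = track_step(pts, pts + torch.tensor([0.5, 0.0]), torch.zeros(3))
```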
arXiv Detail & Related papers (2021-03-10T13:01:26Z)
- Robust Visual Object Tracking with Two-Stream Residual Convolutional Networks [62.836429958476735]
We propose a Two-Stream Residual Convolutional Network (TS-RCN) for visual tracking.
Our TS-RCN can be integrated with existing deep learning based visual trackers.
To further improve the tracking performance, we adopt the "wider" residual network ResNeXt as the feature-extraction backbone.
arXiv Detail & Related papers (2020-05-13T19:05:42Z)
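A minimal two-stream sketch, assuming an appearance stream on the raw frame and a motion stream on a residual input (e.g., a frame difference), both built on torchvision's ResNeXt backbone (the fusion layer and dimensions are our assumptions, not TS-RCN's released design):

```python
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d

class TwoStream(nn.Module):
    """Appearance stream on the raw frame, motion stream on a residual input."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.appearance = resnext50_32x4d(weights=None)
        self.motion = resnext50_32x4d(weights=None)
        self.appearance.fc = nn.Identity()      # expose 2048-d pooled features
        self.motion.fc = nn.Identity()
        self.fuse = nn.Linear(2 * 2048, out_dim)

    def forward(self, frame, residual):         # residual: e.g. frame difference
        feats = torch.cat([self.appearance(frame), self.motion(residual)], dim=1)
        return self.fuse(feats)

# Example: fuse one RGB frame with its difference image into a 256-d feature.
f = TwoStream()(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```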
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.