Vision-Based Guidance for Tracking Dynamic Objects
- URL: http://arxiv.org/abs/2104.09301v1
- Date: Mon, 19 Apr 2021 13:45:56 GMT
- Title: Vision-Based Guidance for Tracking Dynamic Objects
- Authors: Pritam Karmokar, Kashish Dhal, William J. Beksi, Animesh Chakravarthy
- Abstract summary: We present a vision-based framework for tracking dynamic objects using guidance laws based on a rendezvous cone approach.
These guidance laws enable an unmanned aircraft system equipped with a monocular camera to continuously follow a moving object within the sensor's field of view.
- Score: 3.7590550630861443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a novel vision-based framework for tracking dynamic
objects using guidance laws based on a rendezvous cone approach. These guidance
laws enable an unmanned aircraft system equipped with a monocular camera to
continuously follow a moving object within the sensor's field of view. We
identify and classify feature point estimators dedicated to managing occlusions
that occur during the tracking process. Furthermore, we
develop an open-source simulation environment and perform a series of
simulations to show the efficacy of our methods.
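For intuition only: the guidance laws themselves are derived in the paper, but the pursuit geometry they build on (line-of-sight vector, closing speed, line-of-sight rate) can be sketched in a few lines of Python. The snippet below is a generic proportional-navigation-style command under assumed planar kinematics, not the authors' rendezvous-cone laws; the function name, inputs, and gain are hypothetical.

```python
import numpy as np

def pursuit_accel(p_uas, v_uas, p_obj, v_obj, nav_gain=3.0):
    """Generic 2D pursuit sketch: lateral acceleration from LOS geometry.

    p_uas, v_uas: UAS position and velocity, shape (2,)
    p_obj, v_obj: object position and velocity, shape (2,)
    nav_gain: navigation constant (assumed value, not from the paper)
    """
    r = p_obj - p_uas                    # line-of-sight (LOS) vector
    v = v_obj - v_uas                    # relative velocity
    rho = np.linalg.norm(r)
    # LOS angular rate in 2D: (r x v) / |r|^2
    los_rate = (r[0] * v[1] - r[1] * v[0]) / rho**2
    closing_speed = -np.dot(r, v) / rho  # rate at which range decreases
    # Command applied perpendicular to the LOS, PN-style: a = N * Vc * lambda_dot
    los_perp = np.array([-r[1], r[0]]) / rho
    return nav_gain * closing_speed * los_rate * los_perp

# Example: UAS at the origin moving right, object ahead and drifting upward.
a_cmd = pursuit_accel(np.zeros(2), np.array([5.0, 0.0]),
                      np.array([50.0, 10.0]), np.array([0.0, 2.0]))
```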
Related papers
- Benchmarking Vision-Based Object Tracking for USVs in Complex Maritime Environments [0.8796261172196743]
Vision-based target tracking is crucial for unmanned surface vehicles.
Real-time tracking in maritime environments is challenging due to dynamic camera movement, low visibility, and scale variation.
This study proposes a vision-guided object-tracking framework for USVs.
arXiv Detail & Related papers (2024-12-10T10:35:17Z)
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
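R-VAT's actual reinforcement learning formulation is given in the paper above; purely as a hypothetical illustration of how drone visual active tracking is commonly cast as RL, here is a toy shaping reward that favors keeping the target centered and near a desired apparent size. Every name and constant here is an assumption, not taken from the paper.

```python
import numpy as np

def tracking_reward(bbox, frame_w, frame_h, target_area_frac=0.05):
    """Toy reward for visual active tracking (not R-VAT's reward).

    bbox: (x, y, w, h) of the tracked target in pixels.
    Returns a value near 1 when the target is centered at the
    desired size, decreasing as it drifts off-center or changes scale.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    # Normalized distance of the target center from the image center.
    center_err = np.hypot((cx - frame_w / 2.0) / frame_w,
                          (cy - frame_h / 2.0) / frame_h)
    # Deviation of the target's area fraction from the desired fraction.
    scale_err = abs(w * h / float(frame_w * frame_h) - target_area_frac)
    return 1.0 - center_err - scale_err
```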
- Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion [13.69678622755871]
We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) pipeline for autonomous space robots.
Our method, built on a state-of-the-art factor graph optimization pipeline, extends Structure from Small Motion to robustly initialize a monocular agent on spacecraft inspection trajectories.
We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness.
arXiv Detail & Related papers (2024-09-24T21:33:14Z)
- A Robotics-Inspired Scanpath Model Reveals the Importance of Uncertainty and Semantic Object Cues for Gaze Guidance in Dynamic Scenes [8.64158103104882]
We present a computational model that simulates object segmentation and gaze behavior in an interconnected manner.
We show how our model's modular design allows for extensions, such as incorporating saccadic momentum or pre-saccadic attention.
arXiv Detail & Related papers (2024-08-02T15:20:34Z)
- Track Anything Rapter(TAR) [0.0]
Track Anything Rapter (TAR) is designed to detect, segment, and track objects of interest based on user-provided multimodal queries.
TAR utilizes cutting-edge pre-trained models like DINO, CLIP, and SAM to estimate the relative pose of the queried object.
We showcase how the integration of these foundational models with a custom high-level control algorithm results in a highly stable and precise tracking system.
arXiv Detail & Related papers (2024-05-19T19:51:41Z)
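TAR's full pipeline combines DINO, CLIP, and SAM with a high-level controller, which a single snippet cannot reproduce; the multimodal-query step alone, though, can be sketched with CLIP's public API by scoring candidate crops against a text query. This is an illustrative sketch, not the authors' code; `crops` is assumed to come from an upstream detector or segmenter.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def best_matching_crop(crops, query):
    """Return the index of the PIL image crop most similar to `query`."""
    images = torch.stack([preprocess(c) for c in crops]).to(device)
    text = clip.tokenize([query]).to(device)
    with torch.no_grad():
        img_feats = model.encode_image(images)
        txt_feats = model.encode_text(text)
        # Cosine similarity between each crop and the text query.
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        txt_feats = txt_feats / txt_feats.norm(dim=-1, keepdim=True)
        sims = (img_feats @ txt_feats.T).squeeze(1)
    return int(sims.argmax())
```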
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- S.T.A.R.-Track: Latent Motion Models for End-to-End 3D Object Tracking with Adaptive Spatio-Temporal Appearance Representations [10.46571824050325]
Following the tracking-by-attention paradigm, this paper introduces an object-centric, transformer-based framework for tracking in 3D.
Building on this paradigm, we propose S.T.A.R.-Track, which uses a novel latent motion model (LMM) to adjust object queries, accounting for changes in viewing direction and lighting conditions directly in the latent space.
arXiv Detail & Related papers (2023-06-30T12:22:41Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Self-supervised Video Object Segmentation by Motion Grouping [79.13206959575228]
We develop a computer vision system able to segment objects by exploiting motion cues.
We introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background.
We evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59).
arXiv Detail & Related papers (2021-04-15T17:59:32Z)
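The motion-grouping paper above segments flow with a Transformer variant; as a deliberately crude stand-in that conveys the core idea, one can cluster per-pixel flow vectors into two groups and take the faster cluster as the primary object. A minimal sketch, assuming a dense flow field of shape (H, W, 2):

```python
import numpy as np
from sklearn.cluster import KMeans

def group_by_motion(flow):
    """Split a dense optical-flow field into object/background masks.

    flow: array of shape (H, W, 2) with per-pixel (dx, dy).
    Returns a boolean mask for the cluster with the larger mean
    motion magnitude, a crude proxy for the primary moving object.
    """
    h, w, _ = flow.shape
    vecs = flow.reshape(-1, 2)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vecs)
    mags = np.linalg.norm(vecs, axis=1)
    moving = int(mags[labels == 1].mean() > mags[labels == 0].mean())
    return (labels == moving).reshape(h, w)
```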
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Self-supervised Object Tracking with Cycle-consistent Siamese Networks [55.040249900677225]
We exploit an end-to-end Siamese network in a cycle-consistent self-supervised framework for object tracking.
We propose to integrate a Siamese region proposal and mask regression network in our tracking framework so that a fast and more accurate tracker can be learned without the annotation of each frame.
arXiv Detail & Related papers (2020-08-03T04:10:38Z)
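The cycle-consistency idea in the entry above is easy to state in code: track forward through a clip, then backward, and penalize the drift between the starting box and the cycled-back box. A minimal sketch in which the hypothetical `track_step` stands in for the Siamese network:

```python
import numpy as np

def track_step(frame_a, frame_b, box):
    """Placeholder one-step tracker: locate `box` from frame_a in frame_b.

    A real system would run the Siamese network here; this stub just
    returns the box unchanged so the sketch stays runnable.
    """
    return box

def cycle_consistency_loss(frames, box0):
    """Forward-backward tracking drift, usable as a self-supervised loss."""
    box = box0
    for a, b in zip(frames[:-1], frames[1:]):   # forward pass
        box = track_step(a, b, box)
    rev = frames[::-1]
    for a, b in zip(rev[:-1], rev[1:]):         # backward pass
        box = track_step(a, b, box)
    # Distance between the original box and the cycled-back box.
    return float(np.linalg.norm(np.asarray(box0) - np.asarray(box)))
```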
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.