Vision-Based Guidance for Tracking Dynamic Objects
- URL: http://arxiv.org/abs/2104.09301v1
- Date: Mon, 19 Apr 2021 13:45:56 GMT
- Title: Vision-Based Guidance for Tracking Dynamic Objects
- Authors: Pritam Karmokar, Kashish Dhal, William J. Beksi, Animesh Chakravarthy
- Abstract summary: We present a vision-based framework for tracking dynamic objects using guidance laws based on a rendezvous cone approach.
These guidance laws enable an unmanned aircraft system equipped with a monocular camera to continuously follow a moving object within the sensor's field of view.
- Score: 3.7590550630861443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a novel vision-based framework for tracking dynamic
objects using guidance laws based on a rendezvous cone approach. These guidance
laws enable an unmanned aircraft system equipped with a monocular camera to
continuously follow a moving object within the sensor's field of view. We
identify and classify feature point estimators, used in a mutually exclusive
manner, to handle occlusions that arise during the tracking process. Furthermore, we
develop an open-source simulation environment and perform a series of
simulations to show the efficacy of our methods.
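For intuition, the core geometric test behind a rendezvous (collision) cone approach can be written in a few lines: surround the moving object with a capture circle and check whether the relative velocity points inside the cone of directions that intersect it. The planar sketch below is a minimal illustration of that membership test, with an illustrative `capture_radius` parameter; it is not the guidance law derived in the paper.

```python
import numpy as np

def inside_rendezvous_cone(p_own, v_own, p_obj, v_obj, capture_radius):
    """Toy planar test: does the current relative velocity carry the vehicle
    into a circle of radius `capture_radius` around the moving object?"""
    r = p_obj - p_own                       # line-of-sight vector
    v = v_own - v_obj                       # velocity of vehicle relative to object
    rho = np.linalg.norm(r)
    if rho <= capture_radius:
        return True                         # already inside the capture circle
    # Half-angle of the cone subtended by the capture circle at the vehicle.
    half_angle = np.arcsin(capture_radius / rho)
    speed = np.linalg.norm(v)
    if speed < 1e-9:
        return False                        # no relative motion at all
    # Angle between the relative velocity and the line of sight.
    cos_off = np.dot(v, r) / (speed * rho)
    offset = np.arccos(np.clip(cos_off, -1.0, 1.0))
    return offset <= half_angle

# Illustrative numbers: a UAS closing on a ground vehicle.
p_uav, v_uav = np.array([0.0, 0.0]), np.array([6.0, 1.0])
p_car, v_car = np.array([40.0, 10.0]), np.array([2.0, 0.0])
print(inside_rendezvous_cone(p_uav, v_uav, p_car, v_car, capture_radius=5.0))
```

A guidance law of this family would steer so that the relative velocity stays inside (or on the boundary of) this cone; the paper derives that law, which the sketch above deliberately omits.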
Related papers
- VOVTrack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking [61.56592503861093]
Open-vocabulary multi-object tracking (OVMOT) amalgamates the complexities of open-vocabulary object detection (OVD) and multi-object tracking (MOT).
Existing approaches to OVMOT often merge OVD and MOT methodologies as separate modules, predominantly focusing on the problem through an image-centric lens.
We propose VOVTrack, a novel method that integrates object states relevant to MOT and video-centric training to address this challenge from a video object tracking standpoint.
arXiv Detail & Related papers (2024-10-11T05:01:49Z)
- Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion [13.69678622755871]
We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) pipeline for autonomous space robots.
Our method extends Structure from Small Motion with a state-of-the-art factor graph optimization pipeline to robustly initialize a monocular agent on spacecraft inspection trajectories.
We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness.
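At the heart of any structure-from-small-motion initializer is triangulating landmarks from short-baseline correspondences. The following is a generic linear (DLT) two-view triangulation sketch, not the paper's factor-graph pipeline; the intrinsics and the 5 cm baseline in the usage lines are made up.

```python
import numpy as np

def triangulate_dlt(P0, P1, x0, x1):
    """Linear (DLT) triangulation of one point from two views.

    P0, P1 : 3x4 camera projection matrices.
    x0, x1 : (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point in the common world frame.
    """
    A = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative setup: identity rotation, tiny (5 cm) translation between views.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.05], [0.0], [0.0]])])
X_true = np.array([1.0, 0.5, 4.0, 1.0])
x0 = P0 @ X_true; x0 = x0[:2] / x0[2]
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
print(triangulate_dlt(P0, P1, x0, x1))   # ~ [1.0, 0.5, 4.0]
```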
arXiv Detail & Related papers (2024-09-24T21:33:14Z)
- Track Anything Rapter (TAR) [0.0]
Track Anything Rapter (TAR) is designed to detect, segment, and track objects of interest based on user-provided multimodal queries.
TAR utilizes cutting-edge pre-trained models like DINO, CLIP, and SAM to estimate the relative pose of the queried object.
We showcase how the integration of these foundational models with a custom high-level control algorithm results in a highly stable and precise tracking system.
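To illustrate how detections from such foundation models might feed a tracking controller (TAR's actual control algorithm is not specified here), consider a toy PD loop that converts the tracked box's offset from the image center into yaw and climb commands. The gains and the box format are invented for the example.

```python
import numpy as np

class BoxFollower:
    """Toy PD controller that keeps a tracked bounding box centered.

    Box format: (cx, cy) center in normalized image coordinates [0, 1].
    Returns (yaw_rate, climb_rate); all gains are illustrative.
    """
    def __init__(self, kp=1.2, kd=0.3):
        self.kp, self.kd = kp, kd
        self.prev_err = np.zeros(2)

    def step(self, box_center, dt):
        err = np.array(box_center) - 0.5            # offset from image center
        derr = (err - self.prev_err) / dt
        self.prev_err = err
        yaw_rate = -(self.kp * err[0] + self.kd * derr[0])  # steer horizontally
        climb = -(self.kp * err[1] + self.kd * derr[1])     # correct vertically
        return yaw_rate, climb

follower = BoxFollower()
print(follower.step((0.62, 0.45), dt=0.05))
```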
arXiv Detail & Related papers (2024-05-19T19:51:41Z)
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
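A crude stand-in for dynamic-track estimation is to flag point tracks whose motion deviates from the dominant, camera-induced motion. The median/MAD heuristic below only illustrates that idea; it is not LEAP's learned combination of visual, inter-track, and temporal cues.

```python
import numpy as np

def flag_dynamic_tracks(tracks, threshold=3.0):
    """Flag likely-dynamic point tracks by deviation from the median motion.

    tracks : (N, T, 2) array of N point tracks over T frames.
    Returns a boolean mask, True where a track likely lies on a moving object.
    """
    disp = tracks[:, -1] - tracks[:, 0]            # net displacement per track
    med = np.median(disp, axis=0)                  # dominant (camera) motion
    mad = np.median(np.abs(disp - med), axis=0) + 1e-6
    dev = np.linalg.norm((disp - med) / mad, axis=1)
    return dev > threshold
```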
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- Visual Forecasting as a Mid-level Representation for Avoidance [8.712750753534532]
The challenge of navigation in environments with dynamic objects continues to be a central issue in the study of autonomous agents.
While predictive methods hold promise, their reliance on precise state information makes them less practical for real-world implementation.
This study presents visual forecasting as an innovative alternative.
arXiv Detail & Related papers (2023-09-17T13:32:03Z)
- S.T.A.R.-Track: Latent Motion Models for End-to-End 3D Object Tracking with Adaptive Spatio-Temporal Appearance Representations [10.46571824050325]
Following the tracking-by-attention paradigm, this paper introduces an object-centric, transformer-based framework for tracking in 3D.
Building on this paradigm, we propose S.T.A.R.-Track, which uses a novel latent motion model (LMM) to adjust object queries for changes in viewing direction and lighting conditions directly in the latent space.
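As a toy rendition of the latent-motion idea (the dimensions, pose encoding, and MLP are all invented for illustration, not S.T.A.R.-Track's LMM), one can nudge each object query with a small network conditioned on the ego-motion between frames:

```python
import torch
import torch.nn as nn

class ToyLatentMotionModel(nn.Module):
    """Adjusts object queries for a viewpoint change, entirely in latent space."""
    def __init__(self, query_dim=256, pose_dim=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(query_dim + pose_dim, query_dim),
            nn.ReLU(),
            nn.Linear(query_dim, query_dim),
        )

    def forward(self, queries, ego_motion):
        # queries: (num_objects, query_dim); ego_motion: (pose_dim,)
        motion = ego_motion.expand(queries.shape[0], -1)
        # Residual update: each query is shifted by a motion-conditioned offset.
        return queries + self.mlp(torch.cat([queries, motion], dim=-1))
```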
arXiv Detail & Related papers (2023-06-30T12:22:41Z)
- FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects [14.034256001448574]
We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects.
We deploy an analytical motion planner based on this vector field to achieve a policy that yields maximum articulation.
Results show that our system achieves state-of-the-art performance in both simulated and real-world experiments.
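The "analytical motion planner based on this vector field" admits a very small sketch: choose the point whose predicted articulation flow is largest and command the end effector along that direction. The `points`/`flow` arrays and the step size below are illustrative assumptions.

```python
import numpy as np

def max_flow_action(points, flow, step=0.01):
    """Select an action from a predicted 3D articulation flow field.

    points : (N, 3) point cloud of the articulated part.
    flow   : (N, 3) predicted per-point motion (articulation flow).
    Returns a contact point and a unit direction scaled by `step`.
    """
    mag = np.linalg.norm(flow, axis=1)
    i = int(np.argmax(mag))                    # the point that moves the most
    direction = flow[i] / (mag[i] + 1e-9)      # normalized flow at that point
    return points[i], step * direction
```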
arXiv Detail & Related papers (2022-05-09T15:35:33Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
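Frameworks of this kind are usually trained with a view-synthesis objective: warp the source frame into the target view and penalize the photometric difference. The sketch below shows that loss with bilinear sampling; for brevity it takes precomputed sampling locations, whereas a full pipeline would derive them from predicted depth, camera pose, and object motion fields.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, grid):
    """Self-supervised view-synthesis loss.

    target, source : (B, 3, H, W) image tensors.
    grid           : (B, H, W, 2) sampling locations in [-1, 1], i.e. where each
                     target pixel lands in the source view (assumed given here).
    """
    warped = F.grid_sample(source, grid, mode="bilinear",
                           padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()      # L1 photometric error
```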
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Self-supervised Video Object Segmentation by Motion Grouping [79.13206959575228]
We develop a computer vision system able to segment objects by exploiting motion cues.
We introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background.
We evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59).
arXiv Detail & Related papers (2021-04-15T17:59:32Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Self-supervised Object Tracking with Cycle-consistent Siamese Networks [55.040249900677225]
We exploit an end-to-end Siamese network in a cycle-consistent self-supervised framework for object tracking.
We propose to integrate a Siamese region proposal and mask regression network in our tracking framework so that a fast and more accurate tracker can be learned without the annotation of each frame.
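The cycle-consistency idea itself is compact: track forward through time via soft feature affinities, track back, and require every point to return to its start. The sketch below chains affinity matrices as a stand-in for the paper's Siamese region-proposal and mask machinery; the temperature and feature format are illustrative.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(feats):
    """Forward-backward tracking loss over a feature sequence.

    feats : list of (N, D) L2-normalized feature tensors, one per frame,
            with N spatial locations each. Tracking is a soft assignment
            through affinity matrices; a perfect cycle is the identity.
    """
    def affinity(a, b, tau=0.07):
        return F.softmax(a @ b.t() / tau, dim=1)   # rows: matches from a to b

    n = feats[0].shape[0]
    chain = torch.eye(n, device=feats[0].device)
    for t in range(len(feats) - 1):                # walk forward in time
        chain = chain @ affinity(feats[t], feats[t + 1])
    for t in range(len(feats) - 1, 0, -1):         # and back again
        chain = chain @ affinity(feats[t], feats[t - 1])
    # Each row should place its probability mass back on its starting location.
    labels = torch.arange(n, device=chain.device)
    return F.nll_loss(torch.log(chain + 1e-8), labels)
```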
arXiv Detail & Related papers (2020-08-03T04:10:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.