Towards Agile Swarming in Real World: Onboard Relative Localization with Fast Tracking of Active Blinking Markers
- URL: http://arxiv.org/abs/2502.01172v1
- Date: Mon, 03 Feb 2025 09:05:00 GMT
- Title: Towards Agile Swarming in Real World: Onboard Relative Localization with Fast Tracking of Active Blinking Markers
- Authors: Tim Felix Lakemann, Daniel Bonilla Licea, Viktor Walter, Tomáš Báča, Martin Saska
- Abstract summary: We introduce a novel onboard tracking approach enabling vision-based relative localization and communication using Active blinking Marker Tracking (AMT). Because blinking markers appear only intermittently in camera frames, AMT uses weighted polynomial regression to predict their future appearance while accounting for uncertainty in the prediction. In outdoor experiments, the AMT approach outperformed state-of-the-art methods in tracking density, accuracy, and complexity.
- Score: 4.651174536068167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel onboard tracking approach enabling vision-based relative localization and communication using Active blinking Marker Tracking (AMT) is introduced in this article. Active blinking markers on multi-robot team members improve the robustness of relative localization for aerial vehicles in tightly coupled swarms during real-world deployments, while also serving as a resilient communication channel. Traditional tracking algorithms struggle to track fast-moving blinking markers due to their intermittent appearance in the camera frames. AMT addresses this by using weighted polynomial regression to predict the future appearance of active blinking markers while accounting for uncertainty in the prediction. In outdoor experiments, the AMT approach outperformed state-of-the-art methods in tracking density, accuracy, and complexity. The experimental validation of this tracking approach for relative localization involved testing motion patterns motivated by our research on agile multi-robot deployment.
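The abstract pins down the core mechanism: fit a weighted polynomial to the marker's recent image positions, extrapolate to the next frame, and keep an uncertainty estimate for the prediction. The sketch below illustrates that idea only; it is not the authors' implementation, and the degree-2 polynomial, exponential weighting, and residual-based uncertainty are assumptions made for illustration.

```python
# Illustrative sketch of the AMT prediction step described above. This is not
# the authors' code: the degree-2 polynomial, exponential weighting, and
# residual-based uncertainty are assumptions made for illustration.
import numpy as np

def predict_marker(times, xs, ys, t_next, degree=2, decay=0.8):
    """Predict the (x, y) image position of a blinking marker at t_next.

    The marker is off in some frames, so `times` need not be equally
    spaced; fitting against timestamps handles the gaps naturally.
    Returns the predicted position and a crude 1-sigma uncertainty per axis.
    """
    times = np.asarray(times, dtype=float)
    weights = decay ** (times.max() - times)   # down-weight older samples

    preds, sigmas = [], []
    for vals in (np.asarray(xs, float), np.asarray(ys, float)):
        coeffs = np.polyfit(times, vals, deg=degree, w=weights)
        residuals = vals - np.polyval(coeffs, times)
        # Weighted residual spread, inflated by how far we extrapolate.
        sigma = np.sqrt(np.average(residuals ** 2, weights=weights))
        sigma *= 1.0 + (t_next - times.max()) / (times.max() - times.min())
        preds.append(np.polyval(coeffs, t_next))
        sigmas.append(sigma)
    return (preds[0], preds[1]), (sigmas[0], sigmas[1])

# Frames 2 and 5 are missing: the marker was in its "off" blink phase.
t = [0, 1, 3, 4, 6]
x = [10.0, 12.1, 16.2, 18.0, 22.1]
y = [5.0, 5.4, 6.3, 6.9, 7.8]
(px, py), (sx, sy) = predict_marker(t, x, y, t_next=7)
print(f"predicted ({px:.1f}, {py:.1f}) +/- ({sx:.2f}, {sy:.2f}) px")
```

Fitting against timestamps rather than frame indices is what lets the predictor tolerate the frames in which a blinking marker is simply off.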
Related papers
- Multi-tracklet Tracking for Generic Targets with Adaptive Detection Clustering [8.637143090635396]
This article proposes a tracklet tracker called Multi-Tracklet Tracking (MTT) that integrates flexible tracklet generation into a multi-tracklet association framework. Experiments on the benchmark for generic multiple object tracking demonstrate the competitiveness of the proposed framework.
arXiv Detail & Related papers (2025-08-07T09:05:27Z)
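The summary above gives no detail on how tracklets are generated or associated. As a generic illustration of the overall idea of stitching short tracklets into longer tracks, here is a minimal endpoint-proximity sketch; the `max_gap` and `max_dist` parameters and the greedy chaining are hypothetical, not MTT's actual framework.

```python
# Generic tracklet-chaining sketch (assumption: MTT's actual generation and
# association steps differ; this only illustrates stitching short tracklets).
import numpy as np

def link_tracklets(tracklets, max_gap=5, max_dist=30.0):
    """Greedily chain tracklets: each tracklet is a list of (t, x, y).
    A tracklet may be appended to a track if it starts shortly after the
    track ends and close to the track's last position."""
    tracklets = sorted(tracklets, key=lambda tr: tr[0][0])  # by start time
    tracks = []
    for tr in tracklets:
        t0, x0, y0 = tr[0]
        best = None
        for track in tracks:
            t1, x1, y1 = track[-1]
            gap = t0 - t1
            dist = np.hypot(x0 - x1, y0 - y1)
            if 0 < gap <= max_gap and dist <= max_dist:
                if best is None or dist < best[1]:
                    best = (track, dist)
        if best:
            best[0].extend(tr)
        else:
            tracks.append(list(tr))
    return tracks

a = [(0, 0, 0), (1, 5, 0)]          # tracklet ending at t=1
b = [(3, 14, 1), (4, 20, 1)]        # starts near where `a` ended
print(len(link_tracklets([a, b])))  # -> 1: the two tracklets were chained
```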
- Tracking the Unstable: Appearance-Guided Motion Modeling for Robust Multi-Object Tracking in UAV-Captured Videos [58.156141601478794]
Multi-object tracking in UAV-captured videos (UAVT) aims to track multiple objects while maintaining consistent identities across frames. Existing methods typically model motion cues and appearance cues separately, overlooking their interplay and resulting in suboptimal tracking performance. We propose AMOT, which exploits appearance and motion cues through two key components: an Appearance-Motion Consistency (AMC) matrix and a Motion-aware Track Continuation (MTC) module.
arXiv Detail & Related papers (2025-08-03T12:06:47Z)
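The AMC matrix is described only as a joint appearance-motion cue. One minimal reading, sketched below under that assumption, is an elementwise fusion of an appearance-similarity matrix with a motion-overlap (IoU) matrix, so that a track-detection pair scores highly only when both cues agree; AMOT's actual formulation may differ.

```python
# Hedged sketch of an appearance-motion consistency matrix in the spirit of
# AMOT's AMC: fuse appearance similarity with motion overlap so a match must
# agree on both cues. The paper's exact fusion is not given in the summary.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def amc_matrix(track_feats, det_feats, track_boxes, det_boxes):
    # Cosine similarity between track and detection appearance embeddings.
    tf = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    df = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    appearance = tf @ df.T                      # (n_tracks, n_dets)
    motion = np.array([[iou(t, d) for d in det_boxes] for t in track_boxes])
    return appearance * motion                  # high only if both cues agree

track_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
det_feats = np.array([[0.9, 0.1], [0.1, 0.9]])
track_boxes = [(0, 0, 10, 10), (50, 50, 60, 60)]
det_boxes = [(1, 1, 11, 11), (51, 51, 61, 61)]
print(amc_matrix(track_feats, det_feats, track_boxes, det_boxes).round(2))
```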
- Real-Time Moving Flock Detection in Pedestrian Trajectories Using Sequential Deep Learning Models [1.2289361708127877]
This paper investigates the use of sequential deep learning models, including Recurrent Neural Networks (RNNs), for real-time flock detection in multi-pedestrian trajectories.
We validate our method using real-world group movement datasets, demonstrating its robustness across varying sequence lengths and diverse movement patterns.
We extend our approach to identify other forms of collective motion, such as convoys and swarms, paving the way for more comprehensive multi-agent behavior analysis.
arXiv Detail & Related papers (2025-02-21T07:04:34Z)
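The summary does not specify the model input, so the sketch below assumes one common setup: classify each pedestrian pair as flock/non-flock from a short time series of relative-motion features with an LSTM. The architecture, feature choices, and sizes are illustrative, not the paper's.

```python
# Minimal sequential flock-detection sketch. Assumption (not stated in the
# summary): each pedestrian pair is classified from a time series of
# relative features. Architecture and features are illustrative only.
import torch
import torch.nn as nn

class FlockPairClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # P(pair belongs to same flock)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

# Hypothetical per-step features for a pair: relative distance, speed
# difference, heading difference.
model = FlockPairClassifier()
pair_sequence = torch.randn(4, 20, 3)     # 4 pairs, 20 time steps
print(model(pair_sequence).shape)         # -> torch.Size([4])
```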
- Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency [58.719310295870024]
This paper presents an event-based framework for tracking any point.
It tackles the challenges posed by spatial sparsity and motion sensitivity in events.
It achieves 150% faster processing while keeping a competitive parameter count.
arXiv Detail & Related papers (2024-12-02T09:13:29Z)
- Trajectory Anomaly Detection with Language Models [21.401931052512595]
This paper presents a novel approach for trajectory anomaly detection using an autoregressive causal-attention model, termed LM-TAD.
By treating trajectories as sequences of tokens, our model learns the probability distributions over trajectories, enabling the identification of anomalous locations with high precision.
Our experiments demonstrate the effectiveness of LM-TAD on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-09-18T17:33:31Z)
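The key idea is illustrable without the full model: score each location token by its likelihood under an autoregressive model of normal trajectories, and flag low-probability tokens as anomalies. In the sketch below a smoothed bigram model stands in for the paper's causal-attention transformer; the scoring principle is the same, but the model and threshold are assumptions.

```python
# Sketch of the LM-TAD idea: trajectories as token sequences, with anomalous
# locations flagged by low per-token likelihood. A bigram model stands in
# here for the paper's causal-attention transformer.
from collections import Counter, defaultdict
import math

def train_bigram(trajectories):
    counts, totals = defaultdict(Counter), Counter()
    for traj in trajectories:
        for prev, cur in zip(traj, traj[1:]):
            counts[prev][cur] += 1
            totals[prev] += 1
    vocab = {tok for traj in trajectories for tok in traj}
    def logp(prev, cur):  # add-one smoothing over the vocabulary
        return math.log((counts[prev][cur] + 1) / (totals[prev] + len(vocab)))
    return logp

# Grid cells visited by normal commutes (toy data).
normal = [["A", "B", "C", "D"]] * 50 + [["A", "B", "C", "E"]] * 50
logp = train_bigram(normal)

test = ["A", "B", "Z", "D"]  # "Z" is an unusual location
for prev, cur in zip(test, test[1:]):
    flag = " <- anomalous" if logp(prev, cur) < math.log(0.01) else ""
    print(f"P({cur}|{prev}) = {math.exp(logp(prev, cur)):.3f}{flag}")
```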
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- Multi-Object Tracking by Iteratively Associating Detections with Uniform Appearance for Trawl-Based Fishing Bycatch Monitoring [22.228127377617028]
The aim of in-trawl catch monitoring for use in fishing operations is to detect, track and classify fish targets in real-time from video footage.
We propose a novel MOT method, built upon an existing observation-centric tracking algorithm, by adopting a new iterative association step.
Our method offers improved performance in tracking targets with uniform appearance and outperforms state-of-the-art techniques on our underwater fish datasets as well as the MOT17 dataset.
arXiv Detail & Related papers (2023-04-10T18:55:10Z)
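When targets share a near-uniform appearance, a single association pass with one threshold is brittle. A plausible minimal version of an iterative association step, matching conservatively first and then relaxing the threshold for the remaining pool, is sketched below; the thresholds and greedy matcher are assumptions, not the paper's exact procedure.

```python
# Iterative association sketch: match strictly first, then relax. The paper
# builds this into an observation-centric tracker; this is a toy version.
import numpy as np

def greedy_match(cost, threshold):
    """Greedily pair rows (tracks) and columns (detections) whose cost is
    below threshold; returns a list of (row, col)."""
    matches = []
    cost = cost.copy()
    while True:
        r, c = np.unravel_index(np.argmin(cost), cost.shape)
        if cost[r, c] > threshold:
            break
        matches.append((r, c))
        cost[r, :] = np.inf
        cost[:, c] = np.inf
    return matches

def iterative_associate(cost, thresholds=(0.3, 0.5, 0.7)):
    all_matches = []
    for th in thresholds:                 # strict -> relaxed passes
        matched = greedy_match(cost, th)
        all_matches.extend(matched)
        for r, c in matched:
            cost[r, :] = np.inf           # matched pairs leave the pool
            cost[:, c] = np.inf
    return all_matches

cost = np.array([[0.1, 0.9], [0.8, 0.6]])  # e.g. 1 - IoU
print(iterative_associate(cost.copy()))    # -> [(0, 0), (1, 1)]
```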
- MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking [56.92165669843006]
We propose MotionTrack, which learns robust short-term and long-term motions in a unified framework to associate trajectories from a short to long range.
For dense crowds, we design a novel Interaction Module to learn interaction-aware motions from short-term trajectories, which can estimate the complex movement of each target.
For extreme occlusions, we build a novel Refind Module to learn reliable long-term motions from the target's history trajectory, which can link the interrupted trajectory with its corresponding detection.
arXiv Detail & Related papers (2023-03-18T12:38:33Z)
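The Refind module's job, re-linking an interrupted trajectory to its detection, can be illustrated with a much simpler stand-in: extrapolate the lost target from its history and claim the nearest unmatched detection. MotionTrack learns this long-term motion; the constant-velocity fit below is only a toy substitute.

```python
# Toy "refind" illustration: a least-squares constant-velocity fit stands in
# for MotionTrack's learned long-term motion model.
import numpy as np

def refind(history, detections, t_now, max_dist=15.0):
    """history: [(t, x, y), ...] of the lost track; detections: [(x, y), ...].
    Returns the index of the re-linked detection, or None."""
    ts = np.array([h[0] for h in history], dtype=float)
    xy = np.array([[h[1], h[2]] for h in history], dtype=float)
    # Fit position as a linear function of time, per axis.
    vx, x0 = np.polyfit(ts, xy[:, 0], 1)
    vy, y0 = np.polyfit(ts, xy[:, 1], 1)
    pred = np.array([vx * t_now + x0, vy * t_now + y0])
    dists = np.linalg.norm(np.asarray(detections, float) - pred, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= max_dist else None

history = [(0, 0.0, 0.0), (1, 2.0, 1.0), (2, 4.0, 2.0)]   # lost at t=2
detections = [(40.0, 5.0), (10.1, 5.2)]                   # candidates at t=5
print(refind(history, detections, t_now=5))               # -> 1
```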
- Modeling Continuous Motion for 3D Point Cloud Object Tracking [54.48716096286417]
This paper presents a novel approach that views each tracklet as a continuous stream.
At each timestamp, only the current frame is fed into the network to interact with multi-frame historical features stored in a memory bank.
To enhance the utilization of multi-frame features for robust tracking, a contrastive sequence enhancement strategy is proposed.
arXiv Detail & Related papers (2023-03-14T02:58:27Z)
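The streaming design described above, where only the current frame is encoded and it consults stored history, maps naturally onto a FIFO memory bank plus cross-attention. The sketch below shows that pattern with a single unlearned attention head; the dimensions, capacity, and residual fusion are illustrative assumptions, and the contrastive enhancement is omitted.

```python
# Memory-bank sketch: the current frame cross-attends to features of past
# frames stored in a capped FIFO bank. Illustrative, not the paper's model.
from collections import deque
import numpy as np

class MemoryBankTracker:
    def __init__(self, dim=16, capacity=8):
        self.bank = deque(maxlen=capacity)   # oldest frame features drop out
        self.dim = dim

    def step(self, frame_feats):             # frame_feats: (n_points, dim)
        if self.bank:
            memory = np.concatenate(list(self.bank), axis=0)
            # Single-head cross-attention: current frame queries the memory.
            scores = frame_feats @ memory.T / np.sqrt(self.dim)
            attn = np.exp(scores - scores.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)
            fused = frame_feats + attn @ memory   # residual fusion
        else:
            fused = frame_feats
        self.bank.append(frame_feats)         # store raw current features
        return fused

tracker = MemoryBankTracker()
for _ in range(10):                           # stream of point-cloud frames
    out = tracker.step(np.random.randn(32, 16))
print(out.shape, len(tracker.bank))           # (32, 16) 8: bank is capped
```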
- AiATrack: Attention in Attention for Transformer Visual Tracking [89.94386868729332]
Transformer trackers have achieved impressive advancements recently, where the attention mechanism plays an important role.
We propose an attention in attention (AiA) module, which enhances appropriate correlations and suppresses erroneous ones by seeking consensus among all correlation vectors.
Our AiA module can be readily applied to both self-attention blocks and cross-attention blocks to facilitate feature aggregation and information propagation for visual tracking.
arXiv Detail & Related papers (2022-07-20T00:44:03Z)
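Structurally, attention-in-attention refines the raw query-key correlation map with a second attention computed among the correlation vectors themselves, so correlations supported by their neighbors are reinforced and spurious ones are suppressed. The sketch below shows that two-level structure with single-head, projection-free attention; AiATrack's learned module is more involved.

```python
# Rough sketch of attention-in-attention: an inner attention among the rows
# of the correlation map lets agreeing correlation vectors reinforce each
# other ("consensus") before the outer attention uses the map as weights.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_in_attention(q, k, v):
    d = q.shape[-1]
    corr = q @ k.T / np.sqrt(d)            # raw correlation map (nq, nk)
    # Inner attention: each correlation vector (row) attends to all rows.
    inner = softmax(corr @ corr.T / np.sqrt(corr.shape[-1]))
    refined = corr + inner @ corr          # residual refinement
    return softmax(refined) @ v            # outer attention uses refined map

q = np.random.randn(4, 8)                  # 4 query features
k = np.random.randn(6, 8)                  # 6 key features
v = np.random.randn(6, 8)
print(attention_in_attention(q, k, v).shape)   # -> (4, 8)
```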
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from a simulator to a real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)