Multi-Target Active Object Tracking with Monte Carlo Tree Search and
Target Motion Modeling
- URL: http://arxiv.org/abs/2205.03555v1
- Date: Sat, 7 May 2022 05:08:15 GMT
- Title: Multi-Target Active Object Tracking with Monte Carlo Tree Search and
Target Motion Modeling
- Authors: Zheng Chen, Jian Zhao, Mingyu Yang, Wengang Zhou, Houqiang Li
- Abstract summary: In this work, we are dedicated to multi-target active object tracking (AOT), where there are multiple targets as well as multiple cameras in the environment.
The goal is to maximize the overall target coverage of all cameras.
We establish a multi-target 2D environment to simulate sports games, and experimental results demonstrate that our method can effectively improve the target coverage.
- Score: 126.26121580486289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we are dedicated to multi-target active object tracking (AOT),
where there are multiple targets as well as multiple cameras in the
environment. The goal is to maximize the overall target coverage of all cameras.
Previous work makes a strong assumption that each camera is fixed in a location
and only allowed to rotate, which limits its application. In this work, we
relax the setting by allowing all cameras to both move along the boundary lines
and rotate. In our setting, the action space becomes much larger, which leads
to much higher computational complexity in identifying the optimal action. To this
end, we propose to leverage the action selection from multi-agent reinforcement
learning (MARL) network to prune the search tree of Monte Carlo Tree Search
(MCTS) method, so as to find the optimal action more efficiently. Besides, we
model the motion of the targets to predict the future position of the targets,
which makes a better estimation of the future environment state in the MCTS
process. We establish a multi-target 2D environment to simulate sports
games, and experimental results demonstrate that our method can effectively
improve the target coverage.
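The abstract describes three ingredients: predicting target positions with a motion model, pruning the joint action space of MCTS using per-agent action preferences from a MARL policy, and scoring camera configurations by target coverage. The paper's exact formulation is not given here, so the following is only a rough sketch under assumed interfaces: the function names, the linear motion model, the top-k pruning rule, the one-step lookahead standing in for a full MCTS, and the field-of-view coverage test are all illustrative choices, not the authors' implementation.

```python
import itertools
import math

def predict_targets(positions, velocities, dt=1.0):
    """Linear motion model (an assumption): extrapolate each target one step ahead."""
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(positions, velocities)]

def prune_joint_actions(policy_probs, k=2):
    """Keep each agent's top-k actions under its policy distribution, then form
    the (much smaller) joint action space. policy_probs is a list over agents
    of dicts mapping action -> probability, as a stand-in for a MARL network."""
    per_agent = [sorted(p, key=p.get, reverse=True)[:k] for p in policy_probs]
    return list(itertools.product(*per_agent))

def coverage(cam_states, targets, fov=math.pi / 3, rng=40.0):
    """Fraction of targets inside at least one camera's field of view.
    Each camera state is (x, y, heading); fov and rng are illustrative."""
    def sees(cam, tgt):
        cx, cy, heading = cam
        dx, dy = tgt[0] - cx, tgt[1] - cy
        dist = math.hypot(dx, dy)
        # Smallest signed angular offset between camera heading and target bearing.
        ang = abs((math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi)
        return dist <= rng and ang <= fov / 2
    covered = sum(any(sees(c, t) for c in cam_states) for t in targets)
    return covered / len(targets)

def best_joint_action(cam_states, apply_action, joint_actions, pred_targets):
    """One-step lookahead over the pruned joint actions, scoring each by the
    coverage of the predicted target positions. A full MCTS would expand a
    tree over these pruned actions instead of a single step."""
    return max(joint_actions,
               key=lambda ja: coverage(
                   [apply_action(c, a) for c, a in zip(cam_states, ja)],
                   pred_targets))
```

With two agents and three candidate actions each, top-2 pruning shrinks the joint space from 9 combinations to 4, and the lookahead then only has to evaluate those 4 against the predicted target positions; this is the computational saving the abstract attributes to combining MARL action selection with MCTS.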
Related papers
- MONA: Moving Object Detection from Videos Shot by Dynamic Camera [20.190677328673836]
We introduce MONA, a framework for robust moving object detection and segmentation from videos shot by dynamic cameras.
MONA comprises two key modules: Dynamic Points Extraction, which leverages optical flow and point tracking to identify dynamic points, and Moving Object Detection, which employs adaptive bounding box filtering.
We validate MONA by integrating with the camera trajectory estimation method LEAP-VO, and it achieves state-of-the-art results on the MPI Sintel dataset.
arXiv Detail & Related papers (2025-01-22T19:30:28Z) - A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z) - Toward Global Sensing Quality Maximization: A Configuration Optimization
Scheme for Camera Networks [15.795407587722924]
We investigate the reconfiguration strategy for the parameterized camera network model.
We form a single quantity that measures the sensing quality of the targets by the camera network.
We verify the effectiveness of our approach through extensive simulations and experiments.
arXiv Detail & Related papers (2022-11-28T09:21:47Z) - A Simple Baseline for Multi-Camera 3D Object Detection [94.63944826540491]
3D object detection with surrounding cameras has been a promising direction for autonomous driving.
We present SimMOD, a Simple baseline for Multi-camera Object Detection.
We conduct extensive experiments on the 3D object detection benchmark of nuScenes to demonstrate the effectiveness of SimMOD.
arXiv Detail & Related papers (2022-08-22T03:38:01Z) - RADNet: A Deep Neural Network Model for Robust Perception in Moving
Autonomous Systems [8.706086688708014]
We develop a novel ranking method to rank videos based on the degree of global camera motion.
For the high ranking camera videos we show that the accuracy of action detection is decreased.
We propose an action detection pipeline that is robust to the camera motion effect and verify it empirically.
arXiv Detail & Related papers (2022-04-30T23:14:08Z) - Coordinate-Aligned Multi-Camera Collaboration for Active Multi-Object
Tracking [114.16306938870055]
We propose a coordinate-aligned multi-camera collaboration system for AMOT.
In our approach, we regard each camera as an agent and address AMOT with a multi-agent reinforcement learning solution.
Our system achieves a coverage of 71.88%, outperforming the baseline method by 8.9%.
arXiv Detail & Related papers (2022-02-22T13:28:40Z) - Know Your Surroundings: Panoramic Multi-Object Tracking by Multimodality
Collaboration [56.01625477187448]
We propose a MultiModality PAnoramic multi-object Tracking framework (MMPAT)
It takes both 2D panorama images and 3D point clouds as input and then infers target trajectories using the multimodality data.
We evaluate the proposed method on the JRDB dataset, where the MMPAT achieves the top performance in both the detection and tracking tasks.
arXiv Detail & Related papers (2021-05-31T03:16:38Z) - Dynamic Attention guided Multi-Trajectory Analysis for Single Object
Tracking [62.13213518417047]
We propose to introduce more dynamics by devising a dynamic attention-guided multi-trajectory tracking strategy.
In particular, we construct dynamic appearance model that contains multiple target templates, each of which provides its own attention for locating the target in the new frame.
After spanning the whole sequence, we introduce a multi-trajectory selection network to find the best trajectory that delivers improved tracking performance.
arXiv Detail & Related papers (2021-03-30T05:36:31Z) - Pose-Assisted Multi-Camera Collaboration for Active Object Tracking [42.57706021569103]
Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance.
In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion.
We propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking.
arXiv Detail & Related papers (2020-01-15T07:49:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.