Pose-Assisted Multi-Camera Collaboration for Active Object Tracking
- URL: http://arxiv.org/abs/2001.05161v1
- Date: Wed, 15 Jan 2020 07:49:49 GMT
- Title: Pose-Assisted Multi-Camera Collaboration for Active Object Tracking
- Authors: Jing Li and Jing Xu and Fangwei Zhong and Xiangyu Kong and Yu Qiao and
Yizhou Wang
- Abstract summary: Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots, intelligent surveillance.
In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion.
We propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking.
- Score: 42.57706021569103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Object Tracking (AOT) is crucial to many vision-based
applications, e.g., mobile robots, intelligent surveillance. However, there are
a number of challenges when deploying active tracking in complex scenarios,
e.g., the target is frequently occluded by obstacles. In this paper, we extend
single-camera AOT to a multi-camera setting, where cameras track a target in a
collaborative fashion. To achieve effective collaboration among cameras, we
propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables
a camera to cooperate with the others by sharing camera poses for active object
tracking. In the system, each camera is equipped with two controllers and a
switcher: The vision-based controller tracks targets based on observed images.
The pose-based controller moves the camera in accordance with the poses of the
other cameras. At each step, the switcher decides which action to take from the
two controllers according to the visibility of the target. The experimental
results demonstrate that our system outperforms all the baselines and is
capable of generalizing to unseen environments. The code and demo videos are
available on our website
https://sites.google.com/view/pose-assistedcollaboration.
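The two-controller-plus-switcher design described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the class names, the proportional-control rules, and the peer-heading average are all assumptions, and the switching criterion is simplified to "is the target currently visible to this camera".

```python
class VisionController:
    """Tracks the target from the observed image. Here the observation is
    reduced to a normalized horizontal offset of the target in the frame
    (an illustrative simplification of the paper's vision-based controller)."""
    def act(self, target_offset):
        # Rotate toward the target to keep it centered (proportional control).
        return -0.5 * target_offset

class PoseController:
    """Moves the camera according to poses shared by the other cameras."""
    def act(self, own_yaw, peer_yaws):
        # Assumed rule: turn toward the mean heading reported by peers.
        mean_peer = sum(peer_yaws) / len(peer_yaws)
        return 0.5 * (mean_peer - own_yaw)

class Switcher:
    """At each step, picks which controller's action to execute,
    based on the visibility of the target to this camera."""
    def __init__(self):
        self.vision = VisionController()
        self.pose = PoseController()

    def step(self, target_offset, own_yaw, peer_yaws):
        if target_offset is not None:
            # Target visible: trust the camera's own observation.
            return self.vision.act(target_offset)
        # Target occluded: fall back to the poses shared by other cameras.
        return self.pose.act(own_yaw, peer_yaws)

camera = Switcher()
camera.step(0.4, own_yaw=0.0, peer_yaws=[1.0, 0.8])   # vision branch
camera.step(None, own_yaw=0.0, peer_yaws=[1.0, 0.8])  # pose branch
```

In the paper both controllers are learned policies and the switcher itself is trained; the hand-written rules above only illustrate the control flow of the collaboration.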
Related papers
- Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion [34.404342332033636]
We introduce Direct-a-Video, a system that allows users to independently specify motions for multiple objects as well as camera's pan and zoom movements.
For camera movement, we introduce new temporal cross-attention layers to interpret quantitative camera movement parameters.
Both components operate independently, allowing individual or combined control, and can generalize to open-domain scenarios.
arXiv Detail & Related papers (2024-02-05T16:30:57Z)
- Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras [7.609628915907225]
We present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras.
We identify multi-camera, multi-target tracking as the primary task in multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy tasks.
Argus reduces the number of object identifications and end-to-end latency by up to 7.13x and 2.19x compared to the state-of-the-art.
arXiv Detail & Related papers (2024-01-25T12:27:03Z)
- Towards Effective Multi-Moving-Camera Tracking: A New Dataset and Lightweight Link Model [4.581852145863394]
Multi-target multi-camera (MTMC) tracking systems are composed of two modules: single-camera tracking (SCT) and inter-camera tracking (ICT).
MTMC tracking is already a complicated task, and tracking across multiple moving cameras makes it even more challenging.
Linker is proposed to mitigate the identity switch by associating two disjoint tracklets of the same target into a complete trajectory within the same camera.
arXiv Detail & Related papers (2023-12-18T09:11:28Z)
- Learning Active Camera for Multi-Object Navigation [94.89618442412247]
Getting robots to navigate to multiple objects autonomously is essential yet difficult in robot applications.
Existing navigation methods mainly focus on fixed cameras and few attempts have been made to navigate with active cameras.
In this paper, we consider navigating to multiple objects more efficiently with active cameras.
arXiv Detail & Related papers (2022-10-14T04:17:30Z)
- Multi-Target Active Object Tracking with Monte Carlo Tree Search and Target Motion Modeling [126.26121580486289]
In this work, we are dedicated to multi-target active object tracking (AOT), where there are multiple targets as well as multiple cameras in the environment.
The goal is to maximize the overall target coverage of all cameras.
We establish a multi-target 2D environment to simulate the sports games, and experimental results demonstrate that our method can effectively improve the target coverage.
arXiv Detail & Related papers (2022-05-07T05:08:15Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Coordinate-Aligned Multi-Camera Collaboration for Active Multi-Object Tracking [114.16306938870055]
We propose a coordinate-aligned multi-camera collaboration system for AMOT.
In our approach, we regard each camera as an agent and address AMOT with a multi-agent reinforcement learning solution.
Our system achieves a coverage of 71.88%, outperforming the baseline method by 8.9%.
arXiv Detail & Related papers (2022-02-22T13:28:40Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.