Robot Person Following in Uniform Crowd Environment
- URL: http://arxiv.org/abs/2205.10553v1
- Date: Sat, 21 May 2022 10:20:14 GMT
- Title: Robot Person Following in Uniform Crowd Environment
- Authors: Adarsh Ghimire, Xiaoxiong Zhang, Sajid Javed, Jorge Dias, Naoufel
Werghi
- Abstract summary: Person-tracking robots have many applications, such as in security, elderly care, and socializing robots.
In this work, we focus on improving the perceptivity of a robot for a person-following task by developing a robust, real-time object tracker.
We present a new robot person tracking system with a new RGB-D tracker, Deep Tracking with RGB-D (DTRD), which is resilient to the challenges introduced by uniform crowd environments.
- Score: 13.708992331117281
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Person-tracking robots have many applications, such as in
security, elderly care, and socializing robots. The task is particularly
challenging when the person is moving in a uniform crowd. Moreover, despite
the significant progress of trackers reported in the literature,
state-of-the-art trackers have hardly addressed person following in such
scenarios. In this work, we focus on improving the perceptivity of a robot for
a person-following task by developing a robust, real-time object tracker. We
present a new robot person tracking system with a new RGB-D tracker, Deep
Tracking with RGB-D (DTRD), which is resilient to the challenges introduced by
the uniform crowd environment. Our tracker utilizes a transformer
encoder-decoder architecture with RGB and depth information to discriminate
the target person from similar distractors. Extensive experiments demonstrate
that our tracker achieves higher performance on two quantitative evaluation
metrics and confirm its superiority over other SOTA trackers.
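As an illustrative aside, the sketch below shows one way a transformer encoder-decoder could fuse RGB and depth tokens and query them with a target template to regress a box, which is the general idea the abstract describes. All dimensions, module names, and the fusion scheme are assumptions for exposition, not the authors' DTRD implementation.

```python
import torch
import torch.nn as nn

class RGBDFusionTracker(nn.Module):
    """Toy transformer encoder-decoder that fuses RGB and depth tokens and
    queries them with a target template. Not the paper's DTRD model."""

    def __init__(self, feat_dim=512, dim=256, heads=8, layers=2):
        super().__init__()
        # Project per-modality backbone features into a shared embedding space.
        self.rgb_proj = nn.Linear(feat_dim, dim)
        self.depth_proj = nn.Linear(feat_dim, dim)
        self.transformer = nn.Transformer(
            d_model=dim, nhead=heads,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True,
        )
        # Regress a normalized (cx, cy, w, h) box for the target person.
        self.box_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 4), nn.Sigmoid(),
        )

    def forward(self, rgb_feats, depth_feats, template_feats):
        # rgb_feats, depth_feats: (B, N, feat_dim) search-region tokens.
        # template_feats: (B, M, feat_dim) tokens describing the target.
        search = self.rgb_proj(rgb_feats) + self.depth_proj(depth_feats)
        template = self.rgb_proj(template_feats)
        # Encoder attends over the fused search region; the decoder queries it
        # with the template, letting similar distractors be suppressed by
        # cross-attention against target-specific appearance and depth cues.
        decoded = self.transformer(src=search, tgt=template)
        return self.box_head(decoded.mean(dim=1))

# Smoke test with random stand-in features:
model = RGBDFusionTracker()
box = model(torch.randn(1, 64, 512), torch.randn(1, 64, 512),
            torch.randn(1, 16, 512))
print(box.shape)  # torch.Size([1, 4])
```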
Related papers
- What Matters to You? Towards Visual Representation Alignment for Robot
Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- Polybot: Training One Policy Across Robots While Embracing Variability [70.74462430582163]
We propose a set of key design decisions to train a single policy for deployment on multiple robotic platforms.
Our framework first aligns the observation and action spaces of our policy across embodiments by using wrist cameras.
We evaluate our method on a dataset collected over 60 hours spanning 6 tasks and 3 robots with varying joint configurations and sizes.
arXiv Detail & Related papers (2023-07-07T17:21:16Z)
- EXOT: Exit-aware Object Tracker for Safe Robotic Manipulation of Moving Object [18.17924341716236]
We propose the EXit-aware Object Tracker (EXOT) on a robot hand camera that recognizes an object's absence during manipulation.
The robot decides whether to proceed by examining whether the tracker's bounding-box output still contains the target object (a toy version of this gating logic is sketched after this list).
Our tracker shows 38% higher exit-aware performance than a baseline method.
arXiv Detail & Related papers (2023-06-08T15:03:47Z)
- Person Monitoring by Full Body Tracking in Uniform Crowd Environment [10.71804432329509]
In the Middle East, uniform crowd environments are the norm, which challenges state-of-the-art trackers.
In this work, we develop an annotated dataset with one specific target per video in a uniform crowd environment.
The dataset was used in evaluating and fine-tuning a state-of-the-art tracker.
arXiv Detail & Related papers (2022-09-02T21:21:47Z)
- RGBD Object Tracking: An In-depth Review [89.96221353160831]
We first review RGBD object trackers from different perspectives, including RGBD fusion, depth usage, and tracking framework.
We benchmark a representative set of RGBD trackers, and give detailed analyses based on their performances.
arXiv Detail & Related papers (2022-03-26T18:53:51Z)
- Global Instance Tracking: Locating Target More Like Humans [47.99395323689126]
Target tracking, an essential ability of the human visual system, has been simulated by computer vision tasks.
Existing trackers perform well in austere experimental environments but fail in challenges like occlusion and fast motion.
We propose the global instance tracking (GIT) task, which requires searching for an arbitrary user-specified instance in a video.
arXiv Detail & Related papers (2022-02-26T06:16:34Z)
- Cross-Modal Analysis of Human Detection for Robotics: An Industrial Case Study [7.844709223688293]
We conduct a systematic cross-modal analysis of sensor-algorithm combinations typically used in robotics.
We compare the performance of state-of-the-art person detectors for 2D range data, 3D lidar, and RGB-D data.
We extend a strong image-based RGB-D detector to provide cross-modal supervision for lidar detectors in the form of weak 3D bounding box labels (a toy version of this label lifting is sketched after this list).
arXiv Detail & Related papers (2021-08-03T13:33:37Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
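As referenced in the exit-aware tracker entry above, the following is a minimal, hypothetical sketch of the kind of gating a robot might apply to a tracker's output before proceeding with manipulation. The confidence threshold, view test, and field names are illustrative assumptions, not EXOT's actual interface.

```python
from dataclasses import dataclass

@dataclass
class TrackResult:
    box: tuple          # (x, y, w, h) in image coordinates
    confidence: float   # tracker's score that the target is present

def should_proceed(result: TrackResult, view: tuple,
                   min_confidence: float = 0.5) -> bool:
    """Return True if manipulation may continue, or False if the target
    appears to have exited the camera view (hypothetical policy)."""
    if result.confidence < min_confidence:
        return False  # target likely absent: stop and trigger re-detection
    x, y, w, h = result.box
    vx, vy, vw, vh = view
    # Require the box center to remain inside the visible region.
    cx, cy = x + w / 2, y + h / 2
    return vx <= cx <= vx + vw and vy <= cy <= vy + vh

# Example: a confident detection well inside a 640x480 view.
print(should_proceed(TrackResult((300, 200, 40, 80), 0.9), (0, 0, 640, 480)))
```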
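Similarly, for the cross-modal supervision entry above, here is a toy sketch of lifting a 2D detection plus depth into a weak 3D bounding box under a pinhole camera model. The intrinsics, median-depth heuristic, and nominal person extents are assumptions for illustration, not that paper's label pipeline.

```python
import numpy as np

def weak_3d_box(box2d, depth, fx, fy, cx, cy, person_size=(0.6, 0.6, 1.8)):
    """box2d: (x1, y1, x2, y2) in pixels; depth: HxW depth image in meters.
    Returns a weak axis-aligned (x, y, z, w, l, h) box in camera coordinates."""
    x1, y1, x2, y2 = [int(v) for v in box2d]
    patch = depth[y1:y2, x1:x2]
    z = float(np.median(patch[patch > 0]))   # robust range estimate
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # 2D box center in pixels
    # Back-project the pixel center through a pinhole camera model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    w, l, h = person_size  # nominal extents for a standing person (assumed)
    return (x, y, z, w, l, h)

# Example with a synthetic 3 m-deep detection on a 480x640 depth image:
depth = np.full((480, 640), 3.0)
print(weak_3d_box((300, 100, 360, 300), depth, 525.0, 525.0, 320.0, 240.0))
```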
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.