HabitatDyn Dataset: Dynamic Object Detection to Kinematics Estimation
- URL: http://arxiv.org/abs/2304.10854v1
- Date: Fri, 21 Apr 2023 09:57:35 GMT
- Title: HabitatDyn Dataset: Dynamic Object Detection to Kinematics Estimation
- Authors: Zhengcheng Shen, Yi Gao, Linh Kästner, Jens Lambrecht
- Abstract summary: We propose the dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and depth information, as well as kinetics information.
HabitatDyn was created from the perspective of a mobile robot with a moving camera, and contains 30 scenes featuring six different types of moving objects with varying velocities.
- Score: 16.36110033895749
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The advancement of computer vision and machine learning has made datasets a
crucial element for further research and applications. However, the creation
and development of robots with advanced recognition capabilities are hindered
by the lack of appropriate datasets. Existing image or video processing
datasets are unable to accurately depict observations from a moving robot, and
they do not contain the kinematics information necessary for robotic tasks.
Synthetic data, on the other hand, are cost-effective to create and offer
greater flexibility for adapting to various applications. Hence, they are
widely utilized in both research and industry. In this paper, we propose the
dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and
depth information, as well as kinetics information. HabitatDyn was created from
the perspective of a mobile robot with a moving camera and contains 30 scenes
featuring six different types of moving objects with varying velocities.
To demonstrate the usability of our dataset, two existing segmentation
algorithms are evaluated on it, and an approach that estimates the distance
between each object and the camera is implemented on top of these segmentation
methods and evaluated on the dataset. With the availability of this dataset, we aspire to foster further
advancements in the field of mobile robotics, leading to more capable and
intelligent robots that can navigate and interact with their environments more
effectively. The code is publicly available at
https://github.com/ignc-research/HabitatDyn.
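The object-to-camera distance estimation described above combines a segmentation mask with the dataset's depth channel. A minimal sketch of this idea, assuming a per-frame depth map and binary object mask (this is a hypothetical illustration, not the paper's actual implementation):

```python
import numpy as np

def object_distance(depth_map: np.ndarray, mask: np.ndarray) -> float:
    """Estimate the object-to-camera distance as the median depth value
    inside a segmentation mask (hypothetical sketch, not the paper's code)."""
    depths = depth_map[mask.astype(bool)]
    depths = depths[depths > 0]  # discard invalid / zero depth readings
    if depths.size == 0:
        raise ValueError("mask covers no valid depth pixels")
    return float(np.median(depths))

# Toy example: a 4x4 depth map with a 2x2 object region at ~2.0 m
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 2.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(object_distance(depth, mask))  # 2.0
```

Using the median rather than the mean makes the estimate robust to stray background pixels leaking into the mask.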
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z) - Tiny Robotics Dataset and Benchmark for Continual Object Detection [6.4036245876073234]
This work introduces a novel benchmark to evaluate the continual learning capabilities of object detection systems in tiny robotic platforms.
Our contributions include: (i) Tiny Robotics Object Detection (TiROD), a comprehensive dataset collected using a small mobile robot, designed to test the adaptability of object detectors across various domains and classes; (ii) an evaluation of state-of-the-art real-time object detectors combined with different continual learning strategies on this dataset; and (iii) a public release of the data and code to replicate our results, fostering continuous advancements in this field.
arXiv Detail & Related papers (2024-09-24T16:21:27Z) - Polaris: Open-ended Interactive Robotic Manipulation via Syn2Real Visual Grounding and Large Language Models [53.22792173053473]
We introduce an interactive robotic manipulation framework called Polaris.
Polaris integrates perception and interaction by utilizing GPT-4 alongside grounded vision models.
We propose a novel Synthetic-to-Real (Syn2Real) pose estimation pipeline.
arXiv Detail & Related papers (2024-08-15T06:40:38Z) - Multimodal Anomaly Detection based on Deep Auto-Encoder for Object Slip Perception of Mobile Manipulation Robots [22.63980025871784]
The proposed framework integrates heterogeneous data streams collected from various robot sensors, including RGB and depth cameras, a microphone, and a force-torque sensor.
The integrated data is used to train a deep autoencoder to construct latent representations of the multisensory data that indicate the normal status.
Anomalies can then be identified by error scores, measured as the difference between the latent values produced by the trained encoder and the latent values of the reconstructed input data.
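The reconstruction-error idea behind this anomaly detector can be sketched with a toy linear autoencoder; the fixed projection below stands in for weights that would actually be learned on normal-operation sensor data (a hypothetical illustration, not the paper's model):

```python
import numpy as np

# Pretend these weights were trained on normal multisensory feature vectors;
# here we simply project 6-d sensor features onto a fixed 2-d latent subspace.
W_enc = np.eye(6)[:2]   # encoder: 6-d features -> 2-d latent
W_dec = W_enc.T         # decoder: 2-d latent  -> 6-d reconstruction

def anomaly_score(x: np.ndarray) -> float:
    z = W_enc @ x                             # latent representation
    x_hat = W_dec @ z                         # reconstruction
    return float(np.linalg.norm(x - x_hat))   # reconstruction-error score

normal = np.array([1.0, -0.5, 0.0, 0.0, 0.0, 0.0])  # lies in the latent subspace
slip   = np.array([1.0, -0.5, 0.0, 3.0, 0.0, 0.0])  # force spike off-subspace
print(anomaly_score(normal) < anomaly_score(slip))  # True
```

Inputs resembling the normal training distribution reconstruct well (low score), while anomalous readings such as a force-torque spike fall outside the learned subspace and yield a high score.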
arXiv Detail & Related papers (2024-03-06T09:15:53Z) - Action-conditioned Deep Visual Prediction with RoAM, a new Indoor Human Motion Dataset for Autonomous Robots [1.7778609937758327]
We introduce the Robot Autonomous Motion (RoAM) video dataset.
It is collected with a custom-made TurtleBot3 Burger robot in a variety of indoor environments, recording various human motions from the robot's ego-vision.
The dataset also includes synchronized records of the LiDAR scan and all control actions taken by the robot as it navigates around static and moving human agents.
arXiv Detail & Related papers (2023-06-28T00:58:44Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z) - Few-Shot Visual Grounding for Natural Human-Robot Interaction [0.0]
We propose a software architecture that segments a target object from a crowded scene, indicated verbally by a human user.
At the core of our system, we employ a multi-modal deep neural network for visual grounding.
We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets.
arXiv Detail & Related papers (2021-03-17T15:24:02Z) - Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tune the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.