World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for
Room Tidying with Mobile Manipulator
- URL: http://arxiv.org/abs/2207.10106v2
- Date: Fri, 22 Jul 2022 01:44:49 GMT
- Title: World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for
Room Tidying with Mobile Manipulator
- Authors: Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki
Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi
Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo
- Abstract summary: The Partner Robot Challenge in World Robot Challenge (WRC) 2020 benchmarked tidying tasks in real home environments.
We developed an entire household service robot system, which leverages a data-driven approach to adapt to numerous edge cases.
Our robot system won the second prize, verifying the effectiveness and potential of data-driven robot systems for mobile manipulation in home environments.
- Score: 19.048572580336188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tidying up a household environment using a mobile manipulator poses
various challenges in robotics, such as adaptation to large real-world
environmental variations, and safe and robust deployment in the presence of
humans. The Partner Robot Challenge in World Robot Challenge (WRC) 2020, a
global competition held in September 2021, benchmarked tidying tasks in real
home environments and, importantly, tested full system performance. For this
challenge, we developed an entire household service robot system that
leverages a data-driven approach to adapt to the numerous edge cases that occur
during execution, instead of classical manually pre-programmed solutions. In
this paper, we describe the core ingredients of the proposed robot system,
including visual recognition, object manipulation, and motion planning. Our
robot system won the second prize, verifying the effectiveness and potential of
data-driven robot systems for mobile manipulation in home environments.
Related papers
- Multi-Task Interactive Robot Fleet Learning with Visual World Models [25.001148860168477]
Sirius-Fleet is a multi-task interactive robot fleet learning framework.
It monitors robot performance during deployment and involves humans to correct the robot's actions when necessary.
As the robot autonomy improves, anomaly predictors automatically adapt their prediction criteria.
arXiv Detail & Related papers (2024-10-30T04:49:39Z)
- Language-guided Robust Navigation for Mobile Robots in Dynamically-changing Environments [26.209402619114353]
We develop an embodied AI system for human-in-the-loop navigation with a wheeled mobile robot.
We propose a method of monitoring the robot's current plan to detect changes in the environment that impact the intended trajectory of the robot.
This work can support applications like precision agriculture and construction, where persistent monitoring of the environment provides a human with information about the environment state.
arXiv Detail & Related papers (2024-09-28T21:30:23Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Socially Pertinent Robots in Gerontological Healthcare [78.35311825198136]
This paper is an attempt to partially answer the question, via two waves of experiments with patients and companions in a day-care gerontological facility in Paris with a full-sized humanoid robot endowed with social and conversational interaction capabilities.
Overall, the users are receptive to this technology, especially when the robot perception and action skills are robust to environmental clutter and flexible to handle a plethora of different interactions.
arXiv Detail & Related papers (2024-04-11T08:43:37Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, in sim-to-sim transfer, and in sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Semantic-Aware Environment Perception for Mobile Human-Robot Interaction [2.309914459672557]
We present a vision-based system that enables mobile robots to perceive their environment semantically without additional a-priori knowledge.
We deploy our system on a mobile humanoid robot that enables us to test our methods in real-world applications.
arXiv Detail & Related papers (2022-11-07T08:49:45Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
We propose a fully-autonomous aerial robot for high-speed object grasping.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots [5.739787445246959]
We propose a novel planning method that can efficiently estimate the order and positions of the objects to be tidied up by learning the parameters of a probabilistic generative model.
The model allows a robot to learn the co-occurrence probability distributions of objects and the places to tidy them up, using multimodal sensor information collected in a tidied environment.
We evaluate the effectiveness of the proposed method in an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.
arXiv Detail & Related papers (2020-02-10T11:49:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.