Towards a Robust Soft Baby Robot With Rich Interaction Ability for Advanced Machine Learning Algorithms
- URL: http://arxiv.org/abs/2404.08093v1
- Date: Thu, 11 Apr 2024 19:15:45 GMT
- Title: Towards a Robust Soft Baby Robot With Rich Interaction Ability for Advanced Machine Learning Algorithms
- Authors: Mohannad Alhakami, Dylan R. Ashley, Joel Dunham, Francesco Faccio, Eric Feron, Jürgen Schmidhuber
- Abstract summary: We build a robust, partially soft robotic limb with a large action space and rich sensory data stream from multiple cameras.
As a proof of concept, we train two contemporary machine learning algorithms to perform a simple target-finding task.
- Score: 25.85850890842541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence has made great strides in many areas lately, yet it has had comparatively little success in general-use robotics. We believe one of the reasons for this is the disconnect between traditional robotic design and the properties needed for open-ended, creativity-based AI systems. To that end, we, taking selective inspiration from nature, build a robust, partially soft robotic limb with a large action space, rich sensory data stream from multiple cameras, and the ability to connect with others to enhance the action space and data stream. As a proof of concept, we train two contemporary machine learning algorithms to perform a simple target-finding task. Altogether, we believe that this design serves as a first step to building a robot tailor-made for achieving artificial general intelligence.
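As a rough illustration of the kind of target-finding task the abstract describes, here is a minimal sketch: a planar two-joint limb optimized toward a target with the cross-entropy method. The limb geometry, reward, and optimizer are hypothetical stand-ins, not the authors' actual robot or learning algorithms.

```python
# Hypothetical sketch of a simple target-finding task; not the paper's setup.
import numpy as np

LINK_LENGTHS = np.array([0.3, 0.25])  # metres, assumed limb geometry

def fingertip(angles):
    """Forward kinematics of a planar two-joint limb."""
    a1, a2 = angles
    x = LINK_LENGTHS[0] * np.cos(a1) + LINK_LENGTHS[1] * np.cos(a1 + a2)
    y = LINK_LENGTHS[0] * np.sin(a1) + LINK_LENGTHS[1] * np.sin(a1 + a2)
    return np.array([x, y])

def reward(angles, target):
    return -np.linalg.norm(fingertip(angles) - target)

# Cross-entropy method: sample joint configurations, keep the elite ones.
rng = np.random.default_rng(0)
target = np.array([0.4, 0.2])
mean, std = np.zeros(2), np.ones(2)
for _ in range(50):
    samples = rng.normal(mean, std, size=(64, 2))
    elites = samples[np.argsort([reward(s, target) for s in samples])[-8:]]
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
print("distance to target:", -reward(mean, target))
```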
Related papers
- Large Language Models as Natural Selector for Embodied Soft Robot Design [5.023206838671049]
This paper introduces RoboCrafter-QA, a novel benchmark to evaluate whether Large Language Models can learn representations of soft robot designs.
Our experiments reveal that while these models exhibit promising capabilities in learning design representations, they struggle with fine-grained distinctions between designs with subtle performance differences.
arXiv Detail & Related papers (2025-03-04T03:55:10Z) - Deep Learning based Quasi-consciousness Training for Robot Intelligent Model [0.6916967405051087]
This paper explores a deep-learning-based robot intelligent model that enables robots to learn and reason about complex tasks.
The authors posit that every robot must undergo at least 13 years of special schooling to train anthropomorphic behaviour patterns.
arXiv Detail & Related papers (2025-01-31T08:27:32Z) - A Retrospective on the Robot Air Hockey Challenge: Benchmarking Robust, Reliable, and Safe Learning Techniques for Real-world Robotics [53.33976793493801]
We organized the Robot Air Hockey Challenge at the NeurIPS 2023 conference.
We focus on practical challenges in robotics, such as the sim-to-real gap, low-level control issues, safety problems, real-time requirements, and the limited availability of real-world data.
Results show that solutions combining learning-based approaches with prior knowledge outperform those relying solely on data when real-world deployment is challenging.
arXiv Detail & Related papers (2024-11-08T17:20:47Z) - $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
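For readers unfamiliar with flow matching, the following is a minimal sketch of the general technique: a network is trained to predict the velocity field along straight-line paths from noise to actions, then integrated at inference time. The tiny MLP, dimensions, and synthetic data are assumptions for illustration; the real model conditions on a pre-trained VLM, not a random feature vector.

```python
# Hypothetical sketch of conditional flow matching for action generation.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 16, 4
net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM + 1, 64), nn.Tanh(),
                    nn.Linear(64, ACT_DIM))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    obs = torch.randn(128, OBS_DIM)          # stand-in for VLM features
    act = torch.tanh(obs[:, :ACT_DIM])       # stand-in expert actions
    z = torch.randn_like(act)                # noise sample
    t = torch.rand(128, 1)
    x_t = (1 - t) * z + t * act              # linear interpolation path
    v_target = act - z                       # target velocity field
    v = net(torch.cat([obs, x_t, t], dim=-1))
    loss = ((v - v_target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: Euler-integrate the learned field from noise to an action.
with torch.no_grad():
    obs = torch.randn(1, OBS_DIM)
    x = torch.randn(1, ACT_DIM)
    for k in range(10):
        t = torch.full((1, 1), k / 10)
        x = x + 0.1 * net(torch.cat([obs, x, t], dim=-1))
```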
arXiv Detail & Related papers (2024-10-31T17:22:30Z) - Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z) - Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
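A minimal sketch of the Jacobian-based closed-loop control law such an approach would feed: motor commands are updated via the pseudoinverse of the Jacobian of the visual observation with respect to the commands. Here the Jacobian comes from finite differences on a toy forward model rather than a learned neural field.

```python
# Hypothetical sketch of visual-Jacobian closed-loop control.
import numpy as np

def observe(cmd):
    """Toy stand-in for 'what the camera sees' given motor commands."""
    return np.array([np.sin(cmd[0]) + 0.5 * cmd[1], np.cos(cmd[0]) * cmd[1]])

def jacobian(cmd, eps=1e-4):
    """Finite-difference Jacobian of the observation w.r.t. the commands."""
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        J[:, i] = (observe(cmd + d) - observe(cmd - d)) / (2 * eps)
    return J

cmd, goal = np.zeros(2), np.array([0.8, 0.3])
for _ in range(30):                            # closed-loop servoing
    err = goal - observe(cmd)
    cmd = cmd + 0.5 * np.linalg.pinv(jacobian(cmd)) @ err
print("residual error:", np.linalg.norm(goal - observe(cmd)))
```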
arXiv Detail & Related papers (2024-07-11T17:55:49Z) - Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
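A minimal sketch of the general VLM-in-the-loop pattern: at each step the robot queries a model for a skill given its camera frame and interaction history, executes it, and records the outcome. The skill set and the stubbed choose_skill() are invented for illustration; VLM-PC's actual prompting, history handling, and planning differ.

```python
# Hypothetical sketch of VLM-based on-the-fly behavior selection.
SKILLS = {"walk": lambda: "walking", "crouch": lambda: "crouching",
          "back_up": lambda: "backing up"}

def choose_skill(image, history):
    """Stand-in for a vision-language-model query returning a skill name."""
    return "back_up" if history and history[-1] == "stuck" else "walk"

history = []
for step in range(5):
    image = None                    # placeholder for the robot's camera frame
    skill = choose_skill(image, history)
    outcome = SKILLS[skill]()
    history.append("stuck" if step == 2 else outcome)  # toy feedback signal
    print(step, skill, outcome)
```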
arXiv Detail & Related papers (2024-07-02T21:00:30Z) - HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis [102.1876259853457]
We propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX.
RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units that carry physical preferences such as affordances and safety constraints.
To further enhance the capability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training and an iterative self-updating methodology is introduced for supervised fine-tuning.
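A minimal sketch of what a tree of object-centric manipulation units might look like; the field names and the command stub are illustrative assumptions, not RoboCodeX's actual schema.

```python
# Hypothetical sketch of a tree-structured manipulation-unit decomposition.
from dataclasses import dataclass, field

@dataclass
class ManipulationUnit:
    target_object: str
    affordance: str                  # e.g. preferred grasp or contact region
    safety_constraints: list
    children: list = field(default_factory=list)

task = ManipulationUnit(
    target_object="mug",
    affordance="grasp handle",
    safety_constraints=["keep upright", "avoid collision with laptop"],
    children=[
        ManipulationUnit("cabinet", "pull handle", ["limit opening force"]),
        ManipulationUnit("mug", "place on shelf", ["gentle set-down"]),
    ],
)

def to_commands(unit, depth=0):
    """Walk the tree, emitting one control-command stub per unit."""
    print("  " * depth + f"do({unit.affordance!r}, on={unit.target_object!r})")
    for child in unit.children:
        to_commands(child, depth + 1)

to_commands(task)
```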
arXiv Detail & Related papers (2024-02-25T15:31:43Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Exploring the effects of robotic design on learning and neural control [0.0]
This dissertation focuses on the development of robotic bodies, rather than neural controllers.
I have discovered that robots can be designed such that they overcome many of the current pitfalls encountered by neural controllers in multitask settings.
arXiv Detail & Related papers (2023-06-06T15:17:34Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
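A minimal sketch of the do/undo practice loop described above: a forward policy drives toward goal states, a backward policy resets, and reward is scored against demonstration end-states. All components are toy stand-ins for MEDAL++'s learned policies and discriminator.

```python
# Hypothetical sketch of autonomous do/undo practice with an inferred reward.
import numpy as np

rng = np.random.default_rng(0)
demo_goals = rng.normal(loc=1.0, scale=0.05, size=(32, 2))  # demo end-states

def inferred_reward(state):
    """Proximity to demonstration states, standing in for a learned classifier."""
    return -np.min(np.linalg.norm(demo_goals - state, axis=1))

state = np.zeros(2)
for episode in range(6):
    policy = "forward" if episode % 2 == 0 else "backward"
    target = demo_goals.mean(axis=0) if policy == "forward" else np.zeros(2)
    for _ in range(20):                     # crude proportional "policy"
        state = state + 0.2 * (target - state) + rng.normal(0, 0.01, 2)
    print(policy, "reward:", round(inferred_reward(state), 3))
```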
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots [29.02903745467536]
We propose Evolution Gym, the first large-scale benchmark for co-optimizing the design and control of soft robots.
Our benchmark environments span a wide range of tasks, including locomotion on various types of terrains and manipulation.
We develop several robot co-evolution algorithms by combining state-of-the-art design optimization methods and deep reinforcement learning techniques.
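A minimal sketch of design/control co-optimization in this spirit: an outer evolutionary loop over robot designs, with an inner loop that fits a controller for each candidate. The genome and fitness function are synthetic, not an Evolution Gym task.

```python
# Hypothetical sketch of co-evolving a design (outer) and controller (inner).
import numpy as np

rng = np.random.default_rng(0)

def train_controller(design):
    """Inner loop: return the score of the best of a few random controllers."""
    gains = rng.normal(size=(16, design.size))
    return max(-np.sum((g - design) ** 2) for g in gains)  # toy task fitness

population = [rng.normal(size=4) for _ in range(8)]        # design genomes
for generation in range(10):
    fitness = [train_controller(d) for d in population]
    elite = population[int(np.argmax(fitness))]
    population = [elite] + [elite + rng.normal(0, 0.1, 4) for _ in range(7)]
    print(f"gen {generation}: best fitness {max(fitness):.3f}")
```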
arXiv Detail & Related papers (2022-01-24T18:39:22Z) - Learning Perceptual Concepts by Bootstrapping from Human Queries [41.07749131023931]
We propose a new approach whereby the robot learns a low-dimensional variant of the concept and uses it to generate a larger data set for learning the concept in the high-dimensional space.
This lets it take advantage of semantically meaningful privileged information only accessible at training time, like object poses and bounding boxes, that allows for richer human interaction to speed up learning.
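A minimal sketch of this bootstrapping recipe under invented data: learn the concept on low-dimensional privileged features (object poses), auto-label a much larger set, then fit the high-dimensional model on those labels. The concept, fake camera, and linear probe are illustrative assumptions.

```python
# Hypothetical sketch of bootstrapping a concept from privileged features.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a few human-labeled examples in privileged pose space.
poses = rng.uniform(0, 1, size=(20, 3))        # (x, y, height) per object
labels = (poses[:, 2] > 0.5).astype(int)       # invented concept: "held up high"
threshold = poses[labels == 1, 2].min()        # trivial low-dim concept model

# Stage 2: auto-label a much larger set where poses are known at train time.
big_poses = rng.uniform(0, 1, size=(5000, 3))
W_true = rng.normal(size=(3, 64))              # fake camera: poses -> "pixels"
big_pixels = big_poses @ W_true + rng.normal(0, 0.1, size=(5000, 64))
auto_labels = (big_poses[:, 2] > threshold).astype(float)

# Stage 3: fit the high-dimensional model on the bootstrapped labels
# (a least-squares linear probe standing in for a deep network).
w, *_ = np.linalg.lstsq(big_pixels, auto_labels, rcond=None)
accuracy = np.mean((big_pixels @ w > 0.5) == (auto_labels > 0.5))
print("probe accuracy on auto-labels:", round(accuracy, 3))
```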
arXiv Detail & Related papers (2021-11-09T16:43:46Z) - From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence [113.06484656032978]
This article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - Curiosity Based Reinforcement Learning on Robot Manufacturing Cell [0.0]
Reinforcement learning has proved to be highly successful in domains like robotics and scheduling.
This paper introduces a novel combination of scheduling control on a flexible robot manufacturing cell with curiosity based reinforcement learning.
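A minimal sketch of a curiosity bonus of this general kind: the intrinsic reward is the prediction error of an online forward model, added to the extrinsic objective. The dynamics and linear model are toy stand-ins for the manufacturing cell.

```python
# Hypothetical sketch of curiosity-driven reward shaping.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((3, 2))      # online forward model: (state, action) -> next state

def env_step(state, action):
    """Toy stand-in dynamics for the manufacturing cell."""
    return 0.9 * state + np.array([0.1, -0.05]) * action + rng.normal(0, 0.01, 2)

state = np.zeros(2)
for t in range(200):
    action = rng.choice([-1.0, 1.0])               # random exploratory action
    next_state = env_step(state, action)
    x = np.append(state, action)
    curiosity = np.sum((next_state - x @ W) ** 2)  # model prediction error
    extrinsic = -np.abs(next_state[0] - 0.5)       # toy scheduling objective
    total = extrinsic + 0.1 * curiosity   # would drive the policy update
    W += 0.1 * np.outer(x, next_state - x @ W)     # online model fit
    state = next_state
    if t % 50 == 0:
        print(f"t={t} curiosity={curiosity:.4f} total_reward={total:.3f}")
```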
arXiv Detail & Related papers (2020-11-17T16:19:47Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
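A minimal sketch of task-aware compression in this spirit: the compressed code is optimized so a frozen downstream task model still performs well, rather than for pixel-perfect reconstruction. The dimensions, data, and linear encoder are illustrative assumptions.

```python
# Hypothetical sketch of compressing features against a frozen task objective.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))          # raw sensory features
w_task = rng.normal(size=64)             # frozen perception model (linear)
y = X @ w_task                           # task outputs we must preserve

k = 6                                    # compressed code size (~11x smaller)
E = rng.normal(size=(64, k))             # encoder, randomly initialized
for _ in range(200):
    # Alternate: solve for the best readout, then take a gradient step on E.
    r, *_ = np.linalg.lstsq(X @ E, y, rcond=None)
    residual = y - X @ E @ r
    grad = -2 * (X.T @ residual)[:, None] * r[None, :] / len(X)
    E -= 0.01 * grad
r, *_ = np.linalg.lstsq(X @ E, y, rcond=None)
rel_err = np.mean((X @ E @ r - y) ** 2) / np.mean(y ** 2)
print("relative task error after 64 -> 6 compression:", round(rel_err, 4))
```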
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - Learning Locomotion Skills in Evolvable Robots [10.167123492952694]
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z) - SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.