OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint
Deep Learning for Robotics
- URL: http://arxiv.org/abs/2203.00403v1
- Date: Tue, 1 Mar 2022 12:59:59 GMT
- Title: OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint
Deep Learning for Robotics
- Authors: N. Passalis, S. Pedrazzi, R. Babuska, W. Burgard, D. Dias, F. Ferro,
M. Gabbouj, O. Green, A. Iosifidis, E. Kayacan, J. Kober, O. Michel, N.
Nikolaidis, P. Nousi, R. Pieters, M. Tzelepi, A. Valada, and A. Tefas
- Abstract summary: We present the Open Deep Learning Toolkit for Robotics (OpenDR).
OpenDR aims at developing an open, non-proprietary, efficient, and modular toolkit that can be easily used by robotics companies and research institutions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Deep Learning (DL) frameworks typically do not provide ready-to-use
solutions for robotics, where very specific learning, reasoning, and embodiment
problems exist. Their relatively steep learning curve and the different
methodologies employed by DL compared to traditional approaches, along with the
high complexity of DL models, which often leads to the need for
specialized hardware accelerators, further increase the effort and cost needed
to employ DL models in robotics. Also, most of the existing DL methods follow a
static inference paradigm, as inherited from traditional computer vision
pipelines, ignoring active perception, which can be employed to actively
interact with the environment in order to increase perception accuracy. In this
paper, we present the Open Deep Learning Toolkit for Robotics (OpenDR). OpenDR
aims at developing an open, non-proprietary, efficient, and modular toolkit
that can be easily used by robotics companies and research institutions to
efficiently develop and deploy AI and cognition technologies to robotics
applications, providing a solid step towards addressing the aforementioned
challenges. We also detail the design choices, along with an abstract interface
created to overcome these challenges. This interface can describe various
robotic tasks, spanning beyond the traditional DL cognition and inference known
from existing frameworks, and incorporates openness, homogeneity, and
robotics-oriented perception (e.g., through active perception) as its core
design principles.
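To make the idea of a single task-agnostic interface more concrete, here is a minimal, hypothetical Python sketch in the spirit described above; the class and method names are illustrative assumptions and do not reproduce OpenDR's actual API.

```python
# Hypothetical sketch of a task-agnostic learner interface; names are
# illustrative and not OpenDR's actual API.
from abc import ABC, abstractmethod


class Learner(ABC):
    """Uniform wrapper around training, evaluation, inference, and deployment."""

    @abstractmethod
    def fit(self, dataset, val_dataset=None):
        """Train the underlying model."""

    @abstractmethod
    def eval(self, dataset):
        """Return evaluation metrics on held-out data."""

    @abstractmethod
    def infer(self, data):
        """Run inference on a single sample or a batch."""

    def optimize(self, target="embedded"):
        """Optionally convert the model to a lighter runtime (e.g. ONNX) for
        low-footprint deployment; concrete learners may override this."""
        raise NotImplementedError

    def save(self, path):
        raise NotImplementedError

    def load(self, path):
        raise NotImplementedError


class ActivePerceptionLearner(Learner):
    """Extension point for robotics-oriented (active) perception: besides a
    prediction, the learner can propose the next sensing action."""

    @abstractmethod
    def next_view(self, data):
        """Suggest a sensing/robot action expected to improve perception."""
```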
Related papers
- RAMPA: Robotic Augmented Reality for Machine Programming and Automation [4.963604518596734]
This paper introduces Robotic Augmented Reality for Machine Programming (RAMPA).
RAMPA is a system that utilizes the capabilities of state-of-the-art and commercially available AR headsets, e.g., Meta Quest 3.
Our approach enables in-situ data recording, visualization, and fine-tuning of skill demonstrations directly within the user's physical environment.
arXiv Detail & Related papers (2024-10-17T10:21:28Z) - Robotic Control via Embodied Chain-of-Thought Reasoning [86.6680905262442]
A key limitation of learned robot control policies is their inability to generalize outside their training data.
Recent work on vision-language-action models (VLAs) has shown that building on large, internet-pretrained vision-language models can substantially improve the robustness and generalization of the resulting policies.
We introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features before predicting the robot action.
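As a rough illustration of the embodied chain-of-thought idea, the sketch below shows how intermediate reasoning steps could be serialized ahead of the action tokens during training; the field names and example values are purely hypothetical, not the paper's actual schema.

```python
# Hypothetical serialization of reasoning steps followed by the action,
# as one training target for a VLA; the schema is illustrative only.
def build_training_target(plan, subtask, motion, grounding, action):
    """Serialize reasoning steps followed by the action into one token stream."""
    return (
        f"PLAN: {plan}\n"
        f"SUBTASK: {subtask}\n"
        f"MOVE: {motion}\n"
        f"GROUNDING: {grounding}\n"
        f"ACTION: {action}"
    )


# Example target string for a single demonstration frame.
target = build_training_target(
    plan="pick up the mug and place it on the shelf",
    subtask="reach the mug handle",
    motion="move gripper left and down",
    grounding="mug at pixel box (212, 148, 260, 201)",
    action="[0.02, -0.03, -0.05, 0.0, 0.0, 0.0, 1.0]",
)
print(target)
```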
arXiv Detail & Related papers (2024-07-11T17:31:01Z) - SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
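The loop below is a deliberately generic sketch of how such ingredients (an off-policy agent, a learned reward, and environment resets) typically fit together; the object names and method signatures are illustrative stand-ins, not SERL's actual API.

```python
def train_loop(env, agent, reward_model, replay_buffer, num_episodes=100):
    """Off-policy RL driven by a learned success classifier instead of a
    hand-engineered reward; `env`, `agent`, `reward_model`, and
    `replay_buffer` are duck-typed placeholders."""
    for _ in range(num_episodes):
        obs = env.reset()                             # scripted reset between attempts
        done = False
        while not done:
            action = agent.act(obs)
            next_obs, _, done, info = env.step(action)  # environment reward ignored
            reward = reward_model(next_obs)           # e.g. image-based success classifier
            replay_buffer.add(obs, action, reward, next_obs, done)
            agent.update(replay_buffer.sample())      # gradient step on replayed data
            obs = next_obs
```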
arXiv Detail & Related papers (2024-01-29T10:01:10Z) - Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [82.59451639072073]
General-purpose robots are envisioned to operate seamlessly in any environment, with any object, using various skills to complete diverse tasks.
As a community, we have been constraining most robotic systems by designing them for specific tasks, training them on specific datasets, and deploying them within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to general-purpose robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
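A hypothetical sketch of how "programming-free" task definition via image examples might be represented; the data structures and the reward rule below are assumptions for illustration, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class SubTask:
    name: str
    goal_images: Sequence   # user-provided example images of the completed step
    success_fn: Callable    # e.g. a classifier fitted on the goal images


def reward(observation, subtasks: List[SubTask], current: int) -> float:
    """Sparse reward: 1.0 once the current sub-task's success classifier fires."""
    return 1.0 if subtasks[current].success_fn(observation) else 0.0
```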
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - From Machine Learning to Robotics: Challenges and Opportunities for
Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z) - Human-Robot Collaboration and Machine Learning: A Systematic Review of
Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z) - Modular approach to data preprocessing in ALOHA and application to a
smart industry use case [0.0]
The paper addresses a modular approach, integrated into the ALOHA tool flow, to support the data preprocessing and transformation pipeline.
To demonstrate the effectiveness of the approach, we present some experimental results related to a keyword spotting use case.
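As a rough illustration of such a modular preprocessing pipeline, the sketch below chains independent transformation steps; the specific steps (normalization, audio framing) are assumptions chosen for a keyword-spotting-like input, not the actual ALOHA modules.

```python
import numpy as np


class Normalize:
    """Zero-mean, unit-variance scaling of the raw signal."""
    def __call__(self, x):
        return (x - x.mean()) / (x.std() + 1e-8)


class FrameAudio:
    """Split a 1-D signal into overlapping frames for later feature extraction."""
    def __init__(self, frame_len=400, hop=160):
        self.frame_len, self.hop = frame_len, hop

    def __call__(self, x):
        n = 1 + max(0, len(x) - self.frame_len) // self.hop
        return np.stack([x[i * self.hop:i * self.hop + self.frame_len]
                         for i in range(n)])


class Pipeline:
    """Chain independent preprocessing modules into one transformation."""
    def __init__(self, *steps):
        self.steps = steps

    def __call__(self, x):
        for step in self.steps:
            x = step(x)
        return x


# Example: one second of 16 kHz audio pushed through the pipeline.
pipeline = Pipeline(Normalize(), FrameAudio())
frames = pipeline(np.random.randn(16000))
print(frames.shape)   # (98, 400): framed input ready for feature extraction
```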
arXiv Detail & Related papers (2021-02-02T06:48:51Z) - Towards open and expandable cognitive AI architectures for large-scale
multi-agent human-robot collaborative learning [5.478764356647437]
A novel cognitive architecture for multi-agent LfD robotic learning is introduced, aiming to enable the reliable deployment of open, scalable and expandable robotic systems.
The conceptualization relies on employing multiple AI-empowered cognitive processes that operate at the edge nodes of a network of robotic platforms.
The applicability of the proposed framework is explained using an example of a real-world industrial case study.
arXiv Detail & Related papers (2020-12-15T09:49:22Z) - Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep
Reinforcement Learning Approach [4.045850174820418]
We propose a learning-based method to solve peg-in-hole tasks with position uncertainty of the hole.
Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks.
arXiv Detail & Related papers (2020-08-24T06:53:19Z)
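For the variable-compliance idea in the last entry above, here is a hypothetical sketch of an action space that mixes positional corrections with per-axis stiffness; the admittance-style rule and all values are illustrative assumptions, not the paper's controller.

```python
import numpy as np


def apply_action(action, target_pose, measured_force):
    """Map a policy action to a compliant position command.

    action: [dx, dy, dz, kx, ky, kz] -- positional offsets plus per-axis stiffness.
    """
    delta = np.asarray(action[:3])
    stiffness = np.asarray(action[3:])              # chosen by the policy at every step
    # Simple admittance-style rule: lower stiffness lets contact forces
    # push the commanded pose away, absorbing position uncertainty of the hole.
    compliance_shift = -measured_force / np.maximum(stiffness, 1e-3)
    return target_pose + delta + compliance_shift


command = apply_action(
    action=[0.0, 0.001, -0.002, 500.0, 500.0, 200.0],
    target_pose=np.array([0.4, 0.0, 0.1]),
    measured_force=np.array([0.0, 0.0, 5.0]),       # 5 N of contact force along z
)
print(command)   # e.g. [0.4, 0.001, 0.073]
```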