Emergent Hand Morphology and Control from Optimizing Robust Grasps of
Diverse Objects
- URL: http://arxiv.org/abs/2012.12209v1
- Date: Tue, 22 Dec 2020 17:52:29 GMT
- Title: Emergent Hand Morphology and Control from Optimizing Robust Grasps of
Diverse Objects
- Authors: Xinlei Pan, Animesh Garg, Animashree Anandkumar, Yuke Zhu
- Abstract summary: We introduce a data-driven approach where effective hand designs naturally emerge for the purpose of grasping diverse objects.
We develop a novel Bayesian Optimization algorithm that efficiently co-designs the morphology and grasping skills jointly.
We demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.
- Score: 63.89096733478149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolution in nature illustrates that creatures' biological structures and
their sensorimotor skills adapt to environmental changes for survival.
Likewise, the ability to morph and acquire new skills can facilitate an
embodied agent to solve tasks of varying complexities. In this work, we
introduce a data-driven approach where effective hand designs naturally emerge
for the purpose of grasping diverse objects. Jointly optimizing morphology and
control imposes computational challenges since it requires constant evaluation
of a black-box function that measures the performance of a combination of
embodiment and behavior. We develop a novel Bayesian Optimization algorithm
that efficiently co-designs the morphology and grasping skills jointly through
learned latent-space representations. We design the grasping tasks based on a
taxonomy of three human grasp types: power grasp, pinch grasp, and lateral
grasp. Through experimentation and comparative study, we demonstrate the
effectiveness of our approach in discovering robust and cost-efficient hand
morphologies for grasping novel objects.
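The abstract describes Bayesian optimization over a black-box objective that scores a morphology/control combination. Below is a minimal sketch of that optimization pattern, not the paper's algorithm: the paper co-designs through learned latent-space representations, whereas this toy replaces them with a plain 2-D design vector, and `grasp_score`, the kernel length-scale, and all other names are illustrative assumptions.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def grasp_score(x):
    """Hypothetical black-box objective: higher means a more robust grasp."""
    return -float(np.sum((x - 0.6) ** 2))

def rbf(A, B, length=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * np.maximum(d2, 0.0) / length**2)

def gp_posterior(X, y, Xq, jitter=1e-4):
    """GP posterior mean/std at query points Xq given observations (X, y)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xq)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0)  # rbf(x, x) == 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition: expected gain over the best score seen so far."""
    z = (mu - best) / sigma
    cdf = np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])
    pdf = np.array([exp(-zi * zi / 2.0) / sqrt(2.0 * pi) for zi in z])
    return (mu - best) * cdf + sigma * pdf

rng = np.random.default_rng(0)
dim = 2
X = rng.uniform(0.0, 1.0, (5, dim))            # initial random designs
y = np.array([grasp_score(x) for x in X])
for _ in range(15):                            # BO loop: fit GP, maximize EI
    cand = rng.uniform(0.0, 1.0, (256, dim))   # random candidate designs
    mu, sigma = gp_posterior(X, y, cand)
    ei = expected_improvement(mu, sigma, y.max())
    x_next = cand[np.argmax(ei)]               # evaluate only the EI maximizer
    X = np.vstack([X, x_next])
    y = np.append(y, grasp_score(x_next))

best_design = X[np.argmax(y)]
```

Each expensive black-box evaluation (here one call to `grasp_score`, in the paper a grasp trial for one embodiment/behavior pair) is spent only on the candidate the acquisition function rates most promising, which is what makes the joint search tractable.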
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models on both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Understanding fitness landscapes in morpho-evolution via local optima networks [0.1843404256219181]
Morpho-evolution (ME) refers to the simultaneous optimisation of a robot's design and controller to maximise performance given a task and environment.
Previous research has provided empirical comparisons between encodings in terms of their performance with respect to an objective function and the diversity of designs that are evaluated; however, there has been no attempt to explain the observed findings.
We investigate the structure of the fitness landscapes induced by three different encodings when evolving a robot for a locomotion task, shedding new light on the ease by which different fitness landscapes can be traversed by a search process.
arXiv Detail & Related papers (2024-02-12T17:26:35Z)
- Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training [0.850206009406913]
MViTac is a novel methodology that leverages contrastive learning to integrate vision and touch sensations in a self-supervised fashion.
By drawing on both sensory inputs, MViTac combines intra- and inter-modality contrastive losses to learn representations, yielding improved material-property classification and grasp prediction.
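The intra- and inter-modality objective described above can be sketched with a standard InfoNCE-style contrastive loss. This is an illustrative assumption about MViTac's loss shape, not its actual implementation: the encoders are replaced by random embeddings, and all names (`info_nce`, the temperature, the batch shapes) are hypothetical.

```python
import numpy as np

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss: row i of `anchor` should match row i of `positive`."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # matched pairs on the diagonal

# Hypothetical encoder outputs: two views each of vision (v1, v2) and
# touch (t1, t2) for the same N samples, here just random stand-ins.
rng = np.random.default_rng(0)
N, D = 8, 16
v1, v2, t1, t2 = (rng.normal(size=(N, D)) for _ in range(4))

intra = info_nce(v1, v2) + info_nce(t1, t2)  # within-modality terms
inter = info_nce(v1, t1)                     # cross-modality term
loss = intra + inter
```

The intra-modality terms pull two views of the same sample together within each sense, while the inter-modality term aligns the visual and tactile embeddings of the same sample across senses.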
arXiv Detail & Related papers (2024-01-22T15:11:57Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Co-Imitation: Learning Design and Behaviour by Imitation [10.40773958250192]
Co-adaptation of robots aims to adapt both body and behaviour of a system for a given task.
This paper introduces a new viewpoint on the co-adaptation problem, which we call co-imitation.
We propose a co-imitation methodology for adapting behaviour and morphology by matching state distributions of the demonstrator.
arXiv Detail & Related papers (2022-09-02T17:57:32Z)
- Understanding Physical Effects for Effective Tool-use [91.55810923916454]
We present a robot learning and planning framework that produces an effective tool-use strategy with minimal joint effort.
We use a Finite Element Method (FEM)-based simulator that reproduces fine-grained, continuous visual and physical effects given observed tool-use events.
In simulation, we demonstrate that the proposed framework can produce more effective tool-use strategies, drastically different from the observed ones in two tasks.
arXiv Detail & Related papers (2022-06-30T03:13:38Z)
- What Robot do I Need? Fast Co-Adaptation of Morphology and Control using Graph Neural Networks [7.261920381796185]
A major challenge for the application of co-adaptation methods to the real world is the simulation-to-reality gap.
This paper presents a new approach combining classic high-frequency deep neural networks with computationally expensive Graph Neural Networks for the data-efficient co-adaptation of agents.
arXiv Detail & Related papers (2021-11-03T17:41:38Z)
- Generative Adversarial Transformers [13.633811200719627]
We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling.
The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency.
We show it achieves state-of-the-art results in terms of image quality and diversity, while enjoying fast learning and better data-efficiency.
arXiv Detail & Related papers (2021-03-01T18:54:04Z)
- Task-Agnostic Morphology Evolution [94.97384298872286]
Current approaches that co-adapt morphology and behavior use a specific task's reward as a signal for morphology optimization.
This often requires expensive policy optimization and results in task-dependent morphologies that are not built to generalize.
We propose a new approach, Task-Agnostic Morphology Evolution (TAME), to alleviate both of these issues.
arXiv Detail & Related papers (2021-02-25T18:59:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.