Bayesian optimization for robust robotic grasping using a sensorized compliant hand
- URL: http://arxiv.org/abs/2410.18237v1
- Date: Wed, 23 Oct 2024 19:33:14 GMT
- Title: Bayesian optimization for robust robotic grasping using a sensorized compliant hand
- Authors: Juan G. Lechuz-Sierra, Ana Elvira H. Martin, Ashok M. Sundaram, Ruben Martinez-Cantin, Máximo A. Roa
- Abstract summary: We analyze different grasp metrics to provide realistic grasp optimization in a real system including tactile sensors.
An experimental evaluation in the robotic system shows the usefulness of the method for performing unknown object grasping.
- Score: 6.693397171872655
- License:
- Abstract: One of the first tasks we learn as children is to grasp objects based on our tactile perception. Incorporating such a skill in robots will enable multiple applications, such as increasing flexibility in industrial processes or providing assistance to people with physical disabilities. However, the difficulty lies in adapting the grasping strategies to a large variety of tasks and objects, which can often be unknown. The brute-force solution is to learn new grasps by trial and error, which is inefficient and ineffective. In contrast, Bayesian optimization applies active learning by adding information to the approximation of an optimal grasp. This paper proposes the use of Bayesian optimization techniques to safely perform robotic grasping. We analyze different grasp metrics to provide realistic grasp optimization in a real system that includes tactile sensors. An experimental evaluation on the robotic system shows the usefulness of the method for grasping unknown objects even in the presence of the noise and uncertainty inherent to a real-world environment.
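As a rough illustration of the active-learning loop described above, the sketch below runs Bayesian optimization over a single grasp parameter (say, a wrist approach angle) with a Gaussian process surrogate and an expected-improvement acquisition. The grasp-quality function, parameter bounds, and library choices are illustrative assumptions, not the setup used in the paper.

```python
# Minimal Bayesian optimization sketch for a 1-D grasp parameter.
# Assumptions: the grasp metric, bounds, and noise level are illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def grasp_quality(theta):
    """Placeholder for a real (noisy) grasp metric measured on the robot."""
    return float(np.exp(-(theta - 0.7) ** 2) + 0.05 * np.random.randn())

rng = np.random.default_rng(0)
bounds = (0.0, np.pi / 2)                      # e.g., wrist approach angle in radians
X = rng.uniform(*bounds, size=(3, 1))          # a few initial trial grasps
y = np.array([grasp_quality(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2, normalize_y=True)
candidates = np.linspace(*bounds, 200).reshape(-1, 1)

for _ in range(15):                            # active-learning iterations
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    imp = mu - best
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]
    y_next = grasp_quality(x_next[0])              # execute the grasp, read sensors
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best grasp parameter:", X[np.argmax(y)][0], "quality:", y.max())
```

On a real system, the placeholder metric would be replaced by executing the grasp and scoring it from the tactile readings, and the acquisition can be tuned to stay conservative so that exploration remains safe.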
Related papers
- Robotic warehousing operations: a learn-then-optimize approach to large-scale neighborhood search [84.39855372157616]
This paper supports robotic parts-to-picker operations in warehousing by optimizing order-workstation assignments, item-pod assignments and the schedule of order fulfillment at workstations.
We solve it via large-scale neighborhood search, with a novel learn-then-optimize approach to subproblem generation.
In collaboration with Amazon Robotics, we show that our model and algorithm generate much stronger solutions for practical problems than state-of-the-art approaches.
arXiv Detail & Related papers (2024-08-29T20:22:22Z) - Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics [69.6158232150048]
DiffVL is a method that enables non-expert users to communicate soft-body manipulation tasks through vision and natural language.
We leverage large language models to translate task descriptions into machine-interpretable optimization objectives.
arXiv Detail & Related papers (2023-12-11T14:29:25Z) - A model-free approach to fingertip slip and disturbance detection for grasp stability inference [0.0]
We propose a method for assessing grasp stability using tactile sensing.
We use highly sensitive uSkin tactile sensors mounted on an Allegro hand to test and validate our method.
arXiv Detail & Related papers (2023-11-22T09:04:26Z) - Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition [10.072992621244042]
We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL).
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves significantly higher training efficiency in non-prehensile object pushing tasks.
arXiv Detail & Related papers (2023-11-19T10:19:22Z) - Design Optimizer for Planar Soft-Growing Robot Manipulators [1.1888144645004388]
This work presents a novel approach for design optimization of soft-growing robots.
I optimize the kinematic chain of a soft manipulator to reach targets while avoiding unnecessary use of material and resources.
I tested the proposed method on different tasks to assess its optimality, and it showed strong performance in solving the problem.
arXiv Detail & Related papers (2023-10-05T08:23:17Z) - Learning to Detect Slip through Tactile Estimation of the Contact Force Field and its Entropy [6.739132519488627]
We introduce a physics-informed, data-driven approach to detect slip continuously in real time (a toy sketch of the force-field entropy feature appears after this list).
We employ the GelSight Mini, an optical tactile sensor, attached to custom-designed grippers to gather tactile data.
Our results show that the best classification algorithm achieves a high average accuracy of 95.61%.
arXiv Detail & Related papers (2023-03-02T03:16:21Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not yield a fair trade-off between robustness and accuracy.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate policy adaptation as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z) - Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
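The "entropy of the contact force field" referenced in the slip-detection entry above can be pictured with a small toy sketch: treat the per-taxel force magnitudes of a tactile array as a probability distribution and compute its Shannon entropy, which drops as contact concentrates on a few taxels. The array size, readings, and normalization below are assumptions for illustration and may differ from the paper's actual feature extraction.

```python
# Toy illustration: Shannon entropy of a normalized contact force field.
# The 4x4 tactile array and force readings are made up for the example.
import numpy as np

def force_field_entropy(forces, eps=1e-12):
    """Entropy of the force distribution over the taxels of a tactile array."""
    f = np.asarray(forces, dtype=float).ravel()
    p = f / max(f.sum(), eps)          # normalize magnitudes into a distribution
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

# Uniform contact (high entropy) vs. force concentrated on one taxel (low entropy),
# e.g., as an object starts to rotate or slip within the grasp.
stable = np.full((4, 4), 0.5)
slipping = np.zeros((4, 4)); slipping[0, 0] = 8.0
print(force_field_entropy(stable), force_field_entropy(slipping))
```

A falling entropy over consecutive frames would then be one simple indicator that the contact is becoming unstable.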