A Self-supervised Contrastive Learning Method for Grasp Outcomes
Prediction
- URL: http://arxiv.org/abs/2306.14437v2
- Date: Thu, 21 Sep 2023 06:54:22 GMT
- Title: A Self-supervised Contrastive Learning Method for Grasp Outcomes
Prediction
- Authors: Chengliang Liu, Binhua Huang, Yiwen Liu, Yuanzhe Su, Ke Mai, Yupo
Zhang, Zhengkun Yi, Xinyu Wu
- Abstract summary: We show that contrastive learning methods perform well on the task of grasp outcomes prediction.
Our results reveal the potential of contrastive learning methods for applications in the field of robot grasping.
- Score: 9.865029065814236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the effectiveness of contrastive learning
methods for predicting grasp outcomes in an unsupervised manner. By utilizing a
publicly available dataset, we demonstrate that contrastive learning methods
perform well on the task of grasp outcomes prediction. Specifically, the
dynamic-dictionary-based method with the momentum updating technique achieves a
satisfactory accuracy of 81.83% using data from one single tactile sensor,
outperforming other unsupervised methods. Our results reveal the potential of
contrastive learning methods for applications in the field of robot grasping
and highlight the importance of accurate grasp prediction for achieving stable
grasps.
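The dynamic-dictionary method with momentum updating that the abstract credits with 81.83% accuracy follows the MoCo recipe: a key encoder updated as an exponential moving average of the query encoder, and an InfoNCE loss over a queue of negative keys. A minimal NumPy sketch of those two pieces — all names, shapes, and the temperature value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def momentum_update(theta_q, theta_k, m=0.999):
    """EMA update of the key encoder's parameters from the query encoder's.

    theta_q, theta_k: dicts mapping parameter name -> ndarray.
    """
    return {name: m * theta_k[name] + (1.0 - m) * theta_q[name]
            for name in theta_k}

def info_nce(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss for one query against its positive key and a
    dictionary (queue) of negative keys. Vectors are L2-normalized first."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    l_pos = q @ k_pos            # one positive logit
    l_neg = queue @ q            # one negative logit per queue entry
    logits = np.concatenate(([l_pos], l_neg)) / temperature
    # cross-entropy with the positive at index 0
    return -logits[0] + np.log(np.sum(np.exp(logits)))
```

In the full method, each tactile reading would be augmented twice, encoded by the query and key networks, and the key pushed onto the queue after the loss step; the sketch above only shows the update rule and the loss.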
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
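Uncertainty Sampling as summarized above can be sketched with predictive entropy as the uncertainty score; entropy is one common choice and an assumption here, not necessarily the estimator the paper benchmarks:

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of a matrix of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uncertainty_sampling(probs, k):
    """Return indices of the k unlabeled examples whose predictive
    distribution has the highest entropy (the most uncertain ones)."""
    return np.argsort(-entropy(probs))[:k]
```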
arXiv Detail & Related papers (2024-05-02T16:50:47Z)
- Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Sample-efficient Adversarial Imitation Learning [45.400080101596956]
We propose a self-supervised representation-based adversarial imitation learning method to learn state and action representations.
We show a 39% relative improvement over existing adversarial imitation learning methods on MuJoCo in a setting limited to 100 expert state-action pairs.
arXiv Detail & Related papers (2023-03-14T12:36:01Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
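The ATC summary above can be sketched in a few lines; the quantile-based threshold fit and the function names are an illustrative reading of the summary, not the authors' code:

```python
import numpy as np

def atc_threshold(src_conf, src_correct):
    """Fit a confidence threshold t on labeled source data so that the
    fraction of source examples with confidence above t matches the
    observed source accuracy."""
    src_acc = np.mean(src_correct)
    # the (1 - acc)-quantile of source confidences makes
    # P(conf > t) approximately equal to acc on the source distribution
    return np.quantile(src_conf, 1.0 - src_acc)

def atc_predict(tgt_conf, t):
    """Predicted target-domain accuracy: the fraction of unlabeled target
    examples whose confidence exceeds the learned threshold."""
    return np.mean(tgt_conf > t)
```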
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Graph-based Ensemble Machine Learning for Student Performance Prediction [0.7874708385247353]
We propose a graph-based ensemble machine learning method to improve the stability of single machine learning methods.
Our model outperforms the best traditional machine learning algorithms by up to 14.8% in prediction accuracy.
arXiv Detail & Related papers (2021-12-15T05:19:46Z)
- Automated Deepfake Detection [19.17617301462919]
We propose to utilize Automated Machine Learning to automatically search architecture for deepfake detection.
Experiments show that our proposed method not only outperforms previous non-deep-learning methods but also achieves comparable or even better prediction accuracy.
arXiv Detail & Related papers (2021-06-20T14:48:50Z)
- Stochastic Action Prediction for Imitation Learning [1.6385815610837169]
Imitation learning is a data-driven approach to acquiring skills that relies on expert demonstrations to learn a policy that maps observations to actions.
We demonstrate inherent stochasticity in demonstrations collected for tasks including line following with a remote-controlled car.
We find that accounting for stochasticity in the expert data leads to substantial improvement in the success rate of task completion.
arXiv Detail & Related papers (2020-12-26T08:02:33Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.