Sim-MEES: Modular End-Effector System Grasping Dataset for Mobile
Manipulators in Cluttered Environments
- URL: http://arxiv.org/abs/2305.10580v1
- Date: Wed, 17 May 2023 21:40:26 GMT
- Title: Sim-MEES: Modular End-Effector System Grasping Dataset for Mobile
Manipulators in Cluttered Environments
- Authors: Juncheng Li, David J. Cappelleri
- Abstract summary: We present a large-scale synthetic dataset that contains 1,550 objects with varying difficulty levels and physics properties, as well as 11 million grasp labels for mobile manipulators to plan grasps using different gripper modalities in cluttered environments.
Our dataset generation process combines analytic models and dynamic simulations of the entire cluttered environment to provide accurate grasp labels.
- Score: 10.414347878456852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present Sim-MEES: a large-scale synthetic dataset that
contains 1,550 objects with varying difficulty levels and physics properties,
as well as 11 million grasp labels for mobile manipulators to plan grasps using
different gripper modalities in cluttered environments. Our dataset generation
process combines analytic models and dynamic simulations of the entire
cluttered environment to provide accurate grasp labels. We provide a detailed
study of our proposed labeling process for both parallel jaw grippers and
suction cup grippers, comparing them with state-of-the-art methods to
demonstrate how Sim-MEES can provide precise grasp labels in cluttered
environments.
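The analytic stage of such a labeling process can be illustrated with a standard antipodal friction-cone test for parallel-jaw grasps. The sketch below is a generic illustration of that idea, not the actual Sim-MEES pipeline; the function names and the friction coefficient are assumptions:

```python
import numpy as np

def within_friction_cone(contact_normal, grasp_axis, mu):
    """Check whether the grasp axis lies inside the friction cone at a
    contact, i.e. the angle to the surface normal satisfies tan(theta) <= mu."""
    n = contact_normal / np.linalg.norm(contact_normal)
    a = grasp_axis / np.linalg.norm(grasp_axis)
    cos_theta = np.clip(abs(np.dot(n, a)), 0.0, 1.0)
    return np.tan(np.arccos(cos_theta)) <= mu

def antipodal_grasp_label(normal_a, normal_b, grasp_axis, mu=0.5):
    """Analytic label for a parallel-jaw grasp: force closure requires the
    grasp axis to lie inside the friction cone at both contact points."""
    return (within_friction_cone(normal_a, grasp_axis, mu)
            and within_friction_cone(normal_b, grasp_axis, mu))
```

An analytic check of this kind only reasons about an isolated object; the dynamic-simulation stage the abstract describes is what separates grasps that hold in isolation from grasps that survive contact with neighboring clutter.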
Related papers
- HD-GEN: A High-Performance Software System for Human Mobility Data Generation Based on Patterns of Life [1.9739979974462676]
We introduce a comprehensive software pipeline for calibrating, generating, processing, and visualizing large-scale individual-level human mobility datasets.
A data generation engine constructs geographically grounded simulations using OpenStreetMap data.
A genetic algorithm-based calibration module fine-tunes simulation parameters to align with real-world mobility characteristics.
A data processing suite transforms raw simulation logs into structured formats suitable for downstream applications.
arXiv Detail & Related papers (2026-01-03T16:01:00Z)
- Simulating Environments with Reasoning Models for Agent Training [55.98861707136674]
Building bespoke environments for training is heavy, brittle, and limits progress.
We propose two frameworks: Simia-SFT and Simia-RL.
Simia-SFT and Simia-RL enable scalable agent training without environment engineering.
arXiv Detail & Related papers (2025-11-03T18:29:57Z)
- FastUMI-100K: Advancing Data-driven Robotic Manipulation with a Large-scale UMI-style Dataset [55.66606167502093]
We present FastUMI-100K, a large-scale UMI-style multimodal demonstration dataset.
FastUMI-100K offers a more scalable, flexible, and adaptable solution to fulfill the diverse requirements of real-world robot demonstration data.
Our dataset integrates multimodal streams, including end-effector states, multi-view wrist-mounted fisheye images and textual annotations.
arXiv Detail & Related papers (2025-10-09T09:57:25Z)
- Trajectory World Models for Heterogeneous Environments [67.27233466954814]
Heterogeneity in sensors and actuators across environments poses a significant challenge to building large-scale pre-trained world models.
We introduce UniTraj, a unified dataset comprising over one million trajectories from 80 environments, designed to scale data while preserving critical diversity.
We propose TrajWorld, a novel architecture capable of flexibly handling varying sensor and actuator information and capturing environment dynamics in-context.
arXiv Detail & Related papers (2025-02-03T13:59:08Z)
- SynthmanticLiDAR: A Synthetic Dataset for Semantic Segmentation on LiDAR Imaging [8.193070135759717]
We present a modified CARLA simulator designed with LiDAR semantic segmentation in mind.
We have generated SynthmanticLiDAR, a synthetic dataset for semantic segmentation on LiDAR imaging.
Our results show that incorporating SynthmanticLiDAR into the training process improves the overall performance of tested algorithms.
arXiv Detail & Related papers (2025-01-31T11:09:10Z)
- GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects [55.02281855589641]
GausSim is a novel neural network-based simulator designed to capture the dynamic behaviors of real-world elastic objects represented through Gaussian kernels.
We leverage continuum mechanics and treat each kernel as a Center of Mass System (CMS) that represents a continuous piece of matter.
In addition, GausSim incorporates explicit physics constraints, such as mass and momentum conservation, ensuring interpretable results and robust, physically plausible simulations.
arXiv Detail & Related papers (2024-12-23T18:58:17Z)
- The Well: a Large-Scale Collection of Diverse Physics Simulations for Machine Learning [4.812580392361432]
The Well is a large-scale collection of numerical simulations of a wide variety of physical systems.
These datasets can be used individually or as part of a broader benchmark suite.
We provide a unified PyTorch interface for training and evaluating models.
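A unified interface of this kind typically follows PyTorch's map-style dataset protocol, which only requires `__len__` and `__getitem__`. The sketch below is a hypothetical wrapper for illustration, not The Well's actual API; the class name and data layout are assumptions:

```python
import numpy as np

class SimulationDataset:
    """Minimal map-style dataset over simulation snapshots. Any object
    implementing __len__ and __getitem__ can be consumed directly by
    torch.utils.data.DataLoader."""

    def __init__(self, fields, targets):
        # fields: (N, H, W) array of input states; targets: the next states.
        assert len(fields) == len(targets)
        self.fields = fields
        self.targets = targets

    def __len__(self):
        return len(self.fields)

    def __getitem__(self, idx):
        return self.fields[idx], self.targets[idx]

# Toy usage with synthetic arrays standing in for a real simulation dump:
# consecutive snapshots form (state, next-state) training pairs.
states = np.random.rand(8, 64, 64).astype(np.float32)
dataset = SimulationDataset(states[:-1], states[1:])
```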
arXiv Detail & Related papers (2024-11-30T19:42:14Z)
- SOOD++: Leveraging Unlabeled Data to Boost Oriented Object Detection [59.868772767818975]
We propose a simple yet effective Semi-supervised Oriented Object Detection method termed SOOD++.
Specifically, we observe that objects in aerial images usually have arbitrary orientations and small scales, and tend to aggregate.
Extensive experiments conducted on various multi-oriented object datasets under various labeled settings demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T07:03:51Z)
- AGILE: Approach-based Grasp Inference Learned from Element Decomposition [2.812395851874055]
Humans can grasp objects by taking into account hand-object positioning information.
This work proposes a method that enables a robot manipulator to learn the same skill, grasping objects in an optimal way.
arXiv Detail & Related papers (2024-02-02T10:47:08Z)
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so that distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
- Sim-Suction: Learning a Suction Grasp Policy for Cluttered Environments Using a Synthetic Benchmark [8.025760743074066]
Sim-Suction is a robust object-aware suction grasp policy for mobile manipulation platforms with dynamic camera viewpoints.
Sim-Suction-Dataset comprises 500 cluttered environments with 3.2 million annotated suction grasp poses.
Sim-Suction-Pointnet generates robust 6D suction grasp poses by learning point-wise affordances from the Sim-Suction-Dataset.
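Point-wise suction affordances of this kind are often grounded in a geometric seal heuristic: a contact patch is suctionable only if its surface normals align with the cup's approach axis. The sketch below is a generic illustration with assumed names and thresholds, not the actual Sim-Suction labeling model:

```python
import numpy as np

def suction_seal_score(patch_normals, approach_axis, max_angle_deg=15.0):
    """Fraction of surface normals in the contact patch that deviate from
    the suction-cup approach axis by less than a tolerance; a crude proxy
    for whether the cup can form an airtight seal on the patch."""
    axis = approach_axis / np.linalg.norm(approach_axis)
    normals = patch_normals / np.linalg.norm(patch_normals, axis=1, keepdims=True)
    cos = np.clip(normals @ axis, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos))
    return float(np.mean(angles <= max_angle_deg))
```

A flat patch facing the cup scores 1.0, while a patch perpendicular to the approach axis scores 0.0; intermediate values flag curved or noisy surfaces where a seal is unlikely.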
arXiv Detail & Related papers (2023-05-25T15:31:08Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulties and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- BayesSimIG: Scalable Parameter Inference for Adaptive Domain Randomization with IsaacGym [59.53949960353792]
BayesSimIG is a library that provides an implementation of BayesSim integrated with the recently released NVIDIA IsaacGym.
BayesSimIG provides an integration with TensorBoard to easily visualize slices of high-dimensional posteriors.
arXiv Detail & Related papers (2021-07-09T16:21:31Z)
- Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis [9.531148049378672]
We propose a novel framework consisting of a generative label-to-image synthesis model together with different transferability measures.
We validate our approach empirically on a semantic segmentation task on driving scenes.
Although the latter can distinguish between real-life and synthetic tests, in the former we observe surprisingly strong correlations of 0.7 for both cars and pedestrians.
arXiv Detail & Related papers (2021-06-10T07:23:58Z)
- Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation [88.04759848307687]
In Meta-Sim2, we aim to learn the scene structure in addition to parameters, which is a challenging problem due to its discrete nature.
We use Reinforcement Learning to train our model, and design a feature space divergence between our synthesized and target images that is key to successful training.
We also show that this leads to downstream improvement in the performance of an object detector trained on our generated dataset as opposed to other baseline simulation methods.
arXiv Detail & Related papers (2020-08-20T17:28:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.