A Dataset And Benchmark Of Underwater Object Detection For Robot Picking
- URL: http://arxiv.org/abs/2106.05681v1
- Date: Thu, 10 Jun 2021 11:56:19 GMT
- Title: A Dataset And Benchmark Of Underwater Object Detection For Robot Picking
- Authors: Chongwei Liu, Haojie Li, Shuchang Wang, Ming Zhu, Dong Wang, Xin Fan
and Zhihui Wang
- Abstract summary: We introduce a dataset, Detecting Underwater Objects (DUO), and a corresponding benchmark, based on the collection and re-annotation of all relevant datasets.
DUO contains a collection of diverse underwater images with more rational annotations.
The corresponding benchmark provides indicators of both efficiency and accuracy of SOTAs for academic research and industrial applications.
- Score: 28.971646640023284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Underwater object detection for robot picking has attracted a lot
of interest. However, it remains an unsolved problem due to several challenges,
and we take steps towards making it more realistic by addressing them. Firstly,
the currently available datasets largely lack test set annotations, forcing
researchers to compare their methods against other SOTAs on a self-divided test
set (split from the training set). Retraining the other methods increases the
workload, and because different researchers split the datasets differently,
there is no unified benchmark for comparing the performance of different
algorithms. Secondly, these datasets also have other shortcomings, e.g., too
many similar images or incomplete labels. To address these challenges we
introduce a dataset, Detecting Underwater Objects (DUO), and a corresponding
benchmark, based on the collection and re-annotation of all relevant datasets.
DUO contains a collection of diverse underwater images with more rational
annotations. The corresponding benchmark provides indicators of both the
efficiency and accuracy of SOTAs (under the MMDetection framework) for academic
research and industrial applications, where a Jetson AGX Xavier is used to
assess detector speed to simulate the robot-embedded environment.
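The speed side of the benchmark (detector latency/FPS measured on a Jetson AGX Xavier) follows the usual warmup-then-measure timing loop. Below is a minimal, framework-agnostic sketch of such a harness; the `detector` callable and the dummy workload are illustrative placeholders, not the paper's released MMDetection benchmark code.

```python
import time
from statistics import mean

def benchmark_detector(detector, image, warmup=10, runs=50):
    """Time a detector callable and report mean latency (ms) and FPS.

    `detector` stands in for any model's inference call; the actual
    benchmark runs SOTA detectors under MMDetection on a Jetson AGX
    Xavier, but the timing logic is the same on any device.
    """
    # Warmup iterations let caches, lazy initialization, etc. settle
    # before timing starts.
    for _ in range(warmup):
        detector(image)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        detector(image)
        latencies.append(time.perf_counter() - start)
    mean_s = mean(latencies)
    return {"latency_ms": mean_s * 1000.0, "fps": 1.0 / mean_s}

if __name__ == "__main__":
    # Dummy "detector" that just sleeps ~5 ms per image.
    stats = benchmark_detector(lambda img: time.sleep(0.005), image=None)
    print(stats)  # mean latency in milliseconds and frames per second
```

On embedded hardware the measured FPS also depends on the device's power mode, so a fixed configuration should be used across all detectors being compared.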
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Comparing Importance Sampling Based Methods for Mitigating the Effect of Class Imbalance [0.0]
We compare three techniques that derive from importance sampling: loss reweighting, undersampling, and oversampling.
We find that up-weighting the loss and undersampling have a negligible effect on the performance on underrepresented classes.
Our findings also indicate that there may exist some redundancy in data in the Planet dataset.
arXiv Detail & Related papers (2024-02-28T22:52:27Z)
- SimMining-3D: Altitude-Aware 3D Object Detection in Complex Mining Environments: A Novel Dataset and ROS-Based Automatic Annotation Pipeline [0.9790236766474201]
We introduce a synthetic dataset SimMining 3D specifically designed for 3D object detection in mining environments.
The dataset captures objects and sensors positioned at various heights within mine benches, accurately reflecting authentic mining scenarios.
We propose evaluation metrics accounting for sensor-to-object height variations and point cloud density, enabling accurate model assessment.
arXiv Detail & Related papers (2023-12-11T04:33:45Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulties and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- Egocentric Human-Object Interaction Detection Exploiting Synthetic Data [19.220651860718892]
We consider the problem of detecting Egocentric Human-Object Interactions (EHOIs) in industrial contexts.
We propose a pipeline and a tool to generate photo-realistic synthetic First Person Vision (FPV) images automatically labeled for EHOI detection.
arXiv Detail & Related papers (2022-04-14T15:59:15Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- Are we ready for beyond-application high-volume data? The Reeds robot perception benchmark dataset [3.781421673607643]
This paper presents a dataset, called Reeds, for research on robot perception algorithms.
The dataset aims to provide demanding benchmark opportunities for algorithms, rather than providing an environment for testing application-specific solutions.
arXiv Detail & Related papers (2021-09-16T23:21:42Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space, guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.