Fast and Accurate Unknown Object Instance Segmentation through Error-Informed Refinement
- URL: http://arxiv.org/abs/2306.16132v2
- Date: Tue, 30 Apr 2024 14:37:59 GMT
- Title: Fast and Accurate Unknown Object Instance Segmentation through Error-Informed Refinement
- Authors: Seunghyeok Back, Sangbeom Lee, Kangmin Kim, Joosoon Lee, Sungho Shin, Jemo Maeng, Kyoobin Lee
- Abstract summary: INSTA-BEER is a fast and accurate model-agnostic refinement method that enhances the performance of unknown object instance segmentation.
We introduce the quad-metric boundary error, which quantifies pixel-wise true positives, true negatives, false positives, and false negatives at the boundaries of object instances.
In comprehensive evaluations conducted on three widely used benchmark datasets, INSTA-BEER outperformed state-of-the-art models in both accuracy and inference time.
- Score: 7.297340899783621
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate perception of unknown objects is essential for autonomous robots, particularly when manipulating novel items in unstructured environments. However, existing unknown object instance segmentation (UOIS) methods often have over-segmentation and under-segmentation problems, resulting in inaccurate instance boundaries and failures in subsequent robotic tasks such as grasping and placement. To address this challenge, this article introduces INSTA-BEER, a fast and accurate model-agnostic refinement method that enhances the UOIS performance. The model adopts an error-informed refinement approach, which first predicts pixel-wise errors in the initial segmentation and then refines the segmentation guided by these error estimates. We introduce the quad-metric boundary error, which quantifies pixel-wise true positives, true negatives, false positives, and false negatives at the boundaries of object instances, effectively capturing both fine-grained and instance-level segmentation errors. Additionally, the Error Guidance Fusion (EGF) module explicitly integrates error information into the refinement process, further improving segmentation quality. In comprehensive evaluations conducted on three widely used benchmark datasets, INSTA-BEER outperformed state-of-the-art models in both accuracy and inference time. Moreover, a real-world robotic experiment demonstrated the practical applicability of our method in improving the performance of target object grasping tasks in cluttered environments.
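As a rough illustration of the quad-metric boundary error described above, the following NumPy/SciPy sketch labels each pixel in a narrow band around the ground-truth boundary as a true positive, true negative, false positive, or false negative. The band width, the helper names, and the choice of drawing the band around the ground-truth mask are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def boundary_band(mask: np.ndarray, width: int = 2) -> np.ndarray:
    """Pixels within `width` pixels of the mask boundary (dilation minus erosion)."""
    dilated = ndimage.binary_dilation(mask, iterations=width)
    eroded = ndimage.binary_erosion(mask, iterations=width)
    return dilated & ~eroded

def quad_metric_boundary_error(pred: np.ndarray, gt: np.ndarray,
                               width: int = 2) -> np.ndarray:
    """Per-pixel TP/TN/FP/FN labels restricted to a band around the GT boundary.

    pred, gt: HxW boolean instance masks.
    Returns an HxW uint8 map: 0 = outside band, 1 = TP, 2 = TN, 3 = FP, 4 = FN.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    band = boundary_band(gt, width)
    error_map = np.zeros(gt.shape, dtype=np.uint8)
    error_map[band & pred & gt] = 1    # true positive at the boundary
    error_map[band & ~pred & ~gt] = 2  # true negative at the boundary
    error_map[band & pred & ~gt] = 3   # false positive at the boundary
    error_map[band & ~pred & gt] = 4   # false negative at the boundary
    return error_map
```

Per the abstract, INSTA-BEER predicts such pixel-wise errors from the initial segmentation and feeds them to the refinement stage through the EGF module; a map like the one above could serve as the training target for that error prediction, though the exact formulation is not spelled out here.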
Related papers
- Approximate Size Targets Are Sufficient for Accurate Semantic Segmentation [52.239136918460616]
Extending binary class tags to approximate relative object-size distributions allows off-the-shelf architectures to solve the segmentation problem.
A straightforward zero-avoiding KL-divergence loss for average predictions produces segmentation accuracy comparable to the standard pixel-precise supervision.
Our ideas are validated on PASCAL VOC using our new human annotations of approximate object sizes.
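As a hedged sketch of the size-target idea, the snippet below implements one plausible reading of a zero-avoiding KL-divergence loss between an approximate class-size distribution and the network's average per-pixel prediction. The loss direction, tensor shapes, and function name are assumptions, not the authors' code.

```python
import torch

def size_target_kl_loss(logits: torch.Tensor, size_targets: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    """KL(size_targets || mean per-pixel prediction), averaged over the batch.

    logits:       (B, C, H, W) raw network outputs.
    size_targets: (B, C) approximate relative object sizes, rows summing to 1.
    """
    probs = torch.softmax(logits, dim=1)                # per-pixel class probabilities
    mean_pred = probs.mean(dim=(2, 3)).clamp_min(eps)   # (B, C) average prediction
    size_targets = size_targets.clamp_min(eps)
    # Forward KL is "zero-avoiding": it penalizes a mean prediction that vanishes
    # on any class to which the size target assigns mass.
    kl = (size_targets * (size_targets.log() - mean_pred.log())).sum(dim=1)
    return kl.mean()
```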
arXiv Detail & Related papers (2025-03-10T06:02:13Z)
- Embodied Uncertainty-Aware Object Segmentation [38.52448300879023]
We introduce uncertainty-aware object instance segmentation (UncOS) and demonstrate its usefulness for embodied interactive segmentation.
We obtain a set of region-factored segmentation hypotheses together with confidence estimates by making multiple queries of large pre-trained models.
The output can also serve as input to a belief-driven process for selecting robot actions to perturb the scene to reduce ambiguity.
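A minimal sketch of how region-factored hypotheses with confidences could be gathered from repeated queries of a promptable pre-trained segmenter: masks from random point prompts are grouped by IoU, and each group's confidence is its recurrence rate. `segment_fn`, the prompt strategy, and the grouping threshold are hypothetical stand-ins, not the UncOS procedure.

```python
import numpy as np
from typing import Callable, List, Tuple

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def hypotheses_from_repeated_queries(
    image: np.ndarray,
    segment_fn: Callable[[np.ndarray, Tuple[int, int]], np.ndarray],  # (image, point) -> HxW mask
    num_queries: int = 20,
    iou_thresh: float = 0.7,
    seed: int = 0,
) -> List[Tuple[np.ndarray, float]]:
    """Group masks returned by repeated point-prompted queries.

    Confidence of a hypothesis = fraction of queries that reproduced it.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    groups: List[List[np.ndarray]] = []
    for _ in range(num_queries):
        prompt = (int(rng.integers(h)), int(rng.integers(w)))   # random point prompt
        mask = segment_fn(image, prompt).astype(bool)
        for group in groups:
            if mask_iou(mask, group[0]) >= iou_thresh:
                group.append(mask)
                break
        else:
            groups.append([mask])
    # Majority-vote mask per group, with its recurrence rate as confidence.
    return [(np.mean(g, axis=0) >= 0.5, len(g) / num_queries) for g in groups]
```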
arXiv Detail & Related papers (2024-08-08T21:29:22Z)
- RISeg: Robot Interactive Object Segmentation via Body Frame-Invariant Features [6.358423536732677]
We introduce a novel approach to correct inaccurate segmentation by using robot interaction and a designed body frame-invariant feature.
We demonstrate the effectiveness of our proposed interactive perception pipeline in accurately segmenting cluttered scenes, achieving an average object segmentation accuracy of 80.7%.
arXiv Detail & Related papers (2024-03-04T05:03:24Z)
- For A More Comprehensive Evaluation of 6DoF Object Pose Tracking [22.696375341994035]
We contribute a unified benchmark to address the above problems.
For more accurate annotation of YCBV, we propose a multi-view multi-object global pose refinement method.
In experiments, we validate the precision and reliability of the proposed global pose refinement method with a realistic semi-synthesized dataset.
arXiv Detail & Related papers (2023-09-14T15:35:08Z)
- Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
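The confidence-mask idea can be illustrated with a short sketch: sample several latent codes, decode a mask from each, and keep only the pixels on which nearly all samples agree. `decode_mask`, the latent dimensionality, and the agreement threshold are hypothetical, not the Latent-MaskRCNN interface.

```python
import numpy as np
from typing import Callable

def confidence_mask(
    image: np.ndarray,
    decode_mask: Callable[[np.ndarray, np.ndarray], np.ndarray],  # (image, z) -> HxW bool mask
    latent_dim: int = 32,
    num_samples: int = 16,
    agreement: float = 0.9,
    seed: int = 0,
) -> np.ndarray:
    """Keep only pixels that nearly all latent samples agree belong to the instance."""
    rng = np.random.default_rng(seed)
    samples = [
        decode_mask(image, rng.standard_normal(latent_dim)).astype(np.float32)
        for _ in range(num_samples)
    ]
    vote = np.mean(samples, axis=0)   # per-pixel agreement in [0, 1]
    return vote >= agreement          # high-precision (possibly smaller) mask
```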
arXiv Detail & Related papers (2023-05-03T05:57:29Z)
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475]
We propose a novel weakly supervised method RWSeg that only requires labeling one object with one point.
With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
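The sentence above does not specify the CRW algorithm, so the sketch below is only loosely related: a plain seeded random-walk label propagation over a point affinity graph, where instances compete for each point through a final argmax. It is a generic stand-in, not the paper's cross-graph competing formulation.

```python
import numpy as np
from typing import Dict

def seeded_random_walk(affinity: np.ndarray, seeds: Dict[int, int],
                       num_instances: int, steps: int = 50) -> np.ndarray:
    """Propagate instance scores over an affinity graph from one seed point per instance.

    affinity: (N, N) non-negative point-to-point similarities.
    seeds:    {point_index: instance_id}, one labeled point per instance.
    Returns the per-point instance assignment (argmax over propagated scores).
    """
    # Row-normalize to get random-walk transition probabilities.
    trans = affinity / affinity.sum(axis=1, keepdims=True).clip(min=1e-9)
    scores = np.zeros((affinity.shape[0], num_instances))
    for idx, inst in seeds.items():
        scores[idx, inst] = 1.0
    clamp = scores.copy()
    for _ in range(steps):
        scores = trans @ scores
        scores[list(seeds)] = clamp[list(seeds)]  # keep seed labels fixed
    return scores.argmax(axis=1)  # instances "compete" for each point via argmax
```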
arXiv Detail & Related papers (2022-08-10T02:14:39Z)
- Self-Supervised Interactive Object Segmentation Through a Singulation-and-Grasping Approach [9.029861710944704]
We propose a robot learning approach to interact with novel objects and collect each object's training label.
The Singulation-and-Grasping (SaG) policy is trained through end-to-end reinforcement learning.
Our system achieves a 70% singulation success rate in simulated cluttered scenes.
arXiv Detail & Related papers (2022-07-19T15:01:36Z)
- PointInst3D: Segmenting 3D Instances by Points [136.7261709896713]
We propose a fully-convolutional 3D point cloud instance segmentation method that works in a per-point prediction fashion.
We find the key to its success is assigning a suitable target to each sampled point.
Our approach achieves promising results on both ScanNet and S3DIS benchmarks.
arXiv Detail & Related papers (2022-04-25T02:41:46Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Target-Aware Object Discovery and Association for Unsupervised Video Multi-Object Segmentation [79.6596425920849]
This paper addresses the task of unsupervised video multi-object segmentation.
We introduce a novel approach for more accurate and efficient spatio-temporal segmentation.
We evaluate the proposed approach on DAVIS 2017 and YouTube-VIS, and the results demonstrate that it outperforms state-of-the-art methods in both segmentation accuracy and inference speed.
arXiv Detail & Related papers (2021-04-10T14:39:44Z)
- Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition [102.75463782627791]
We take steps toward answering the question by exposing failures of existing semantic segmentation methods in the open visual world.
Inspired by previous research on model falsification, we start from an arbitrarily large image set, and automatically sample a small image set by MAximizing the Discrepancy (MAD) between two segmentation methods.
The selected images have the greatest potential in falsifying either (or both) of the two methods.
A segmentation method whose failures are harder to expose in the MAD competition is considered better.
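A minimal sketch of the MAD-style selection step under a simplifying assumption: discrepancy between two segmenters is measured as the fraction of pixels on which their label maps disagree, and the most-disagreed-upon images are kept. The real MAD competition uses a more careful discrepancy measure; function names here are illustrative.

```python
import numpy as np
from typing import Callable, List

def mad_select(images: List[np.ndarray],
               seg_a: Callable[[np.ndarray], np.ndarray],
               seg_b: Callable[[np.ndarray], np.ndarray],
               k: int = 10) -> List[int]:
    """Return indices of the k images on which two segmenters disagree the most."""
    discrepancy = []
    for img in images:
        la, lb = seg_a(img), seg_b(img)        # per-pixel label maps
        discrepancy.append(np.mean(la != lb))  # fraction of disagreeing pixels
    order = np.argsort(discrepancy)[::-1]      # most discrepant first
    return [int(i) for i in order[:k]]
```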
arXiv Detail & Related papers (2021-02-27T16:06:25Z)
- Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2 to 20 points on current benchmarks.
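As a hedged sketch of the "fine-tune only the last layer" recipe using a stock torchvision detector: all parameters are frozen except a freshly sized box-prediction head, and only that head is passed to the optimizer. This mirrors the general idea only; the paper's exact head design and training schedule are not reproduced here.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_last_layer_finetune_model(num_classes: int) -> torch.nn.Module:
    """Freeze a pretrained detector and train only its final box-prediction layer."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    for p in model.parameters():
        p.requires_grad = False                      # freeze backbone, RPN, ROI heads
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # New, trainable classifier/regressor head sized for base + novel classes.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# Only the new head's parameters are handed to the optimizer.
model = build_last_layer_finetune_model(num_classes=21)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```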
arXiv Detail & Related papers (2020-03-16T00:29:14Z)