Unseen Object Amodal Instance Segmentation via Hierarchical Occlusion
Modeling
- URL: http://arxiv.org/abs/2109.11103v1
- Date: Thu, 23 Sep 2021 01:55:42 GMT
- Title: Unseen Object Amodal Instance Segmentation via Hierarchical Occlusion
Modeling
- Authors: Seunghyeok Back, Joosoon Lee, Taewon Kim, Sangjun Noh, Raeyoung Kang,
Seongho Bak, Kyoobin Lee
- Abstract summary: Instance-aware segmentation of unseen objects is essential for a robotic system in an unstructured environment.
This paper addresses Unseen Object Amodal Instance Segmentation (UOAIS) to detect 1) visible masks, 2) amodal masks, and 3) occlusions on unseen object instances.
We evaluated our method on three benchmarks (tabletop, indoor, and bin environments) and achieved state-of-the-art (SOTA) performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance-aware segmentation of unseen objects is essential for a robotic
system in an unstructured environment. Although previous works achieved
encouraging results, they were limited to segmenting only the visible regions
of unseen objects. For robotic manipulation in a cluttered scene, amodal
perception is required to handle the occluded objects behind others. This paper
addresses Unseen Object Amodal Instance Segmentation (UOAIS) to detect 1)
visible masks, 2) amodal masks, and 3) occlusions on unseen object instances.
For this, we propose a Hierarchical Occlusion Modeling (HOM) scheme designed to
reason about the occlusion by assigning a hierarchy to a feature fusion and
prediction order. We evaluated our method on three benchmarks (tabletop,
indoor, and bin environments) and achieved state-of-the-art (SOTA)
performance. Robot demos for picking up occluded objects, code, and datasets
are available at https://sites.google.com/view/uoais
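Since an amodal mask is, by definition, the union of an instance's visible region and its hidden region, the three UOAIS outputs are directly related: the occluded region is the amodal mask minus the visible mask. A minimal NumPy sketch of that relation (illustrative names; not the authors' code):

```python
import numpy as np

def occlusion_from_masks(visible, amodal):
    """Recover the occluded region and an occlusion flag for one instance.

    visible, amodal: boolean HxW arrays; the amodal mask is assumed to
    contain the visible mask (amodal = visible + hidden region).
    """
    occluded = amodal & ~visible        # hidden part of the object
    is_occluded = bool(occluded.any())  # binary occlusion classification
    return occluded, is_occluded

# Toy 1x6 "image": columns 0-3 are the full object; columns 2-3 are
# hidden behind another object.
visible = np.array([[1, 1, 0, 0, 0, 0]], dtype=bool)
amodal = np.array([[1, 1, 1, 1, 0, 0]], dtype=bool)
occluded, is_occluded = occlusion_from_masks(visible, amodal)
```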
Related papers
- Sequential Amodal Segmentation via Cumulative Occlusion Learning [15.729212571002906]
A visual system must be able to segment both the visible and occluded regions of objects, while discerning their occlusion order.
We introduce a diffusion model with cumulative occlusion learning designed for sequential amodal segmentation of objects with uncertain categories.
This model iteratively refines the prediction using the cumulative mask strategy during diffusion, effectively capturing the uncertainty of invisible regions.
It is akin to the human capability for amodal perception, i.e., to decipher the spatial ordering among objects and accurately predict complete contours for occluded objects in densely layered visual scenes.
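The cumulative mask strategy can be sketched as a simple loop: each step predicts the amodal mask of the next object layer, conditioned on the union of all masks predicted so far. A hedged sketch in NumPy, where `predict_layer` is a hypothetical stand-in for the diffusion model's per-layer prediction:

```python
import numpy as np

def sequential_amodal_sketch(predict_layer, image, num_layers):
    """Illustrative loop for sequential amodal segmentation with a
    cumulative occlusion mask.

    predict_layer(image, cumulative) -> boolean amodal mask for the
    next (front-most unexplained) object layer.
    """
    h, w = image.shape[:2]
    cumulative = np.zeros((h, w), dtype=bool)  # union of masks so far
    masks = []
    for _ in range(num_layers):
        amodal = predict_layer(image, cumulative)
        masks.append(amodal)
        cumulative |= amodal  # regions claimed by nearer layers
    return masks, cumulative

# Toy demo: a hypothetical two-layer scene with a canned predictor.
_layers = iter([np.array([[True, False]]), np.array([[False, True]])])
masks, cumulative = sequential_amodal_sketch(
    lambda img, cum: next(_layers), np.zeros((1, 2)), num_layers=2)
```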
arXiv Detail & Related papers (2024-05-09T14:17:26Z)
- Amodal Ground Truth and Completion in the Wild [84.54972153436466]
We use 3D data to establish an automatic pipeline to determine authentic ground truth amodal masks for partially occluded objects in real images.
This pipeline is used to construct an amodal completion evaluation benchmark, MP3D-Amodal, consisting of a variety of object categories and labels.
arXiv Detail & Related papers (2023-12-28T18:59:41Z)
- Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation [75.00151934315967]
MaskDistill is a novel framework for unsupervised semantic segmentation.
Our framework does not latch onto low-level image cues and is not limited to object-centric datasets.
arXiv Detail & Related papers (2022-06-13T17:59:43Z)
- Topologically Persistent Features-based Object Recognition in Cluttered Indoor Environments [1.2691047660244335]
Recognition of occluded objects in unseen indoor environments is a challenging problem for mobile robots.
This work proposes a new slicing-based topological descriptor that captures the 3D shape of object point clouds.
The descriptor yields high similarity between occluded objects and their corresponding unoccluded models, enabling object unity-based recognition.
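As a rough illustration of slicing-based shape description: cut the point cloud into slices along one axis and summarize each slice, then compare descriptors of occluded and unoccluded views. Here a simple per-slice radial extent stands in for the paper's topologically persistent features (a simplified sketch, not the actual descriptor):

```python
import numpy as np

def slice_descriptor(points, n_slices=8):
    """Toy slicing-based descriptor: partition an Nx3 point cloud into
    horizontal slices along z and record each slice's radial extent
    about its own centroid."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slices + 1)
    desc = np.zeros(n_slices)
    for i in range(n_slices):
        sl = points[(z >= edges[i]) & (z < edges[i + 1])]
        if len(sl):
            xy = sl[:, :2] - sl[:, :2].mean(axis=0)
            desc[i] = np.linalg.norm(xy, axis=1).max()  # radial extent
    return desc

def similarity(d1, d2):
    # Cosine similarity between descriptors; a high value between an
    # occluded view and the full model supports unity-based recognition.
    return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))

# Demo: a unit cylinder vs. the same cylinder with its front half hidden.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
z = rng.uniform(0, 1, 2000)
full = np.stack([np.cos(theta), np.sin(theta), z], axis=1)
occluded_view = full[full[:, 0] <= 0]  # half the surface hidden
sim = similarity(slice_descriptor(full), slice_descriptor(occluded_view))
```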
arXiv Detail & Related papers (2022-05-16T07:01:16Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation [75.83319382105894]
We present Neural Descriptor Fields (NDFs), an object representation that encodes both points and relative poses between an object and a target.
NDFs are trained in a self-supervised fashion via a 3D auto-encoding task that does not rely on expert-labeled keypoints.
Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors.
arXiv Detail & Related papers (2021-12-09T18:57:15Z)
- RICE: Refining Instance Masks in Cluttered Environments with Graph Neural Networks [53.15260967235835]
We propose a novel framework that refines the output of such methods by utilizing a graph-based representation of instance masks.
We train deep networks capable of sampling smart perturbations to the segmentations, and a graph neural network, which can encode relations between objects, to evaluate the segmentations.
We demonstrate an application that uses uncertainty estimates generated by our method to guide a manipulator, leading to efficient understanding of cluttered scenes.
arXiv Detail & Related papers (2021-06-29T20:29:29Z)
- Robust Instance Segmentation through Reasoning about Multi-Object Occlusion [9.536947328412198]
We propose a deep network for multi-object instance segmentation that is robust to occlusion.
Our work builds on Compositional Networks, which learn a generative model of neural feature activations to locate occluders.
In particular, we obtain feed-forward predictions of the object classes and their instance and occluder segmentations.
arXiv Detail & Related papers (2020-12-03T17:41:55Z)
- Unseen Object Instance Segmentation for Robotic Environments [67.88276573341734]
We propose a method to segment unseen object instances in tabletop environments.
UOIS-Net is comprised of two stages: first, it operates only on depth to produce object instance center votes in 2D or 3D.
Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic.
arXiv Detail & Related papers (2020-07-16T01:59:13Z)
- Instance Segmentation of Visible and Occluded Regions for Finding and Picking Target from a Pile of Objects [25.836334764387498]
We present a robotic system for picking a target from a pile of objects that is capable of finding and grasping the target object.
We extend an existing instance segmentation model with a novel 'relook' architecture, in which the model explicitly learns the inter-instance relationship.
Also, by using image synthesis, we make the system capable of handling new objects without human annotations.
arXiv Detail & Related papers (2020-01-21T12:28:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.