Compositional Scene Modeling with Global Object-Centric Representations
- URL: http://arxiv.org/abs/2211.11500v2
- Date: Tue, 22 Nov 2022 02:10:24 GMT
- Title: Compositional Scene Modeling with Global Object-Centric Representations
- Authors: Tonglin Chen, Bin Li, Zhimeng Shen and Xiangyang Xue
- Abstract summary: Humans can easily identify the same object, even if occlusions exist, by completing the occluded parts based on its canonical image in memory.
This paper proposes a compositional scene modeling method to infer global representations of canonical images of objects without any supervision.
- Score: 44.43366905943199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The appearance of the same object may vary in different scene images due to
changes in perspective and occlusions between objects. Humans can easily identify the
same object, even if occlusions exist, by completing the occluded parts based
on its canonical image in memory. Achieving this ability is still a
challenge for machine learning, especially under the unsupervised learning
setting. Inspired by this human ability, this paper proposes a
compositional scene modeling method to infer global representations of
canonical images of objects without any supervision. The representation of each
object is divided into an intrinsic part, which characterizes globally
invariant information (i.e., the canonical representation of an object), and an
extrinsic part, which characterizes scene-dependent information (e.g., position
and size). To infer the intrinsic representation of each object, we employ a
patch-matching strategy to align the representation of a potentially occluded
object with the canonical representations of objects, and sample the most
probable canonical representation based on the category of the object, as determined by
amortized variational inference. Extensive experiments are conducted on four
object-centric learning benchmarks, and the results demonstrate that
the proposed method not only outperforms state-of-the-art methods in terms of
segmentation and reconstruction but also achieves good global object
identification performance.
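The inference step described in the abstract (the intrinsic/extrinsic split, patch matching against canonical representations, and sampling the most probable category) can be illustrated with a short sketch. The following is a minimal PyTorch-style sketch, not the authors' implementation: the tensor shapes, the cosine-similarity patch matching, and the Gumbel-softmax relaxation are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the inference step described in the abstract.
# Shapes, names, and the Gumbel-softmax relaxation are illustrative
# assumptions, not the authors' implementation.

K = 10  # number of object categories (one canonical representation each)
P = 16  # patches per object representation
D = 64  # feature dimension of each patch

# Learned dictionary of canonical (globally invariant) representations.
canonical = torch.randn(K, P, D, requires_grad=True)

def infer_intrinsic(obj_patches: torch.Tensor) -> torch.Tensor:
    """obj_patches: (P, D) patch features of a possibly occluded object.

    Returns the (P, D) intrinsic representation: the sampled canonical
    representation of the object's inferred category.
    """
    # Patch matching: score each canonical representation by how well
    # its patches align with the observed object's patches.
    sim = F.cosine_similarity(
        obj_patches.unsqueeze(0),  # (1, P, D), broadcast against...
        canonical,                 # (K, P, D)
        dim=-1,
    )                              # -> (K, P) per-patch similarities
    logits = sim.mean(dim=-1)      # -> (K,) category scores

    # Sample the most probable category. A Gumbel-softmax relaxation
    # (hard=True yields a one-hot sample in the forward pass) keeps the
    # discrete choice differentiable for end-to-end training.
    one_hot = F.gumbel_softmax(logits, tau=0.5, hard=True)  # (K,)

    # Select the corresponding canonical representation.
    intrinsic = torch.einsum("k,kpd->pd", one_hot, canonical)
    return intrinsic

# Usage: infer the intrinsic part of one object; the extrinsic part
# (position, size) would be inferred by a separate encoder.
intrinsic = infer_intrinsic(torch.randn(P, D))
```

In the paper, the category distribution comes from an amortized variational inference network rather than raw similarity scores; the sketch only shows how a discrete canonical representation can be selected while keeping the choice differentiable.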
Related papers
- Learning Global Object-Centric Representations via Disentangled Slot Attention [38.78205074748021]
This paper introduces a novel object-centric learning method to empower AI systems with human-like capabilities to identify objects across scenes and generate diverse scenes containing specific objects by learning a set of global object-centric representations.
Experimental results substantiate the efficacy of the proposed method, demonstrating remarkable proficiency in global object-centric representation learning, object identification, scene generation with specific objects and scene decomposition.
arXiv Detail & Related papers (2024-10-24T14:57:00Z)
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Variational Inference for Scalable 3D Object-centric Learning [19.445804699433353]
We tackle the task of scalable unsupervised object-centric representation learning on 3D scenes.
Existing approaches to object-centric representation learning show limitations in generalizing to larger scenes.
We propose to learn view-invariant 3D object representations in localized object coordinate systems.
arXiv Detail & Related papers (2023-09-25T10:23:40Z)
- Robust and Controllable Object-Centric Learning through Energy-based Models [95.68748828339059]
The proposed method is a conceptually simple and general approach to learning object-centric representations through an energy-based model.
We show that it can be easily integrated into existing architectures and can effectively extract high-quality object-centric representations.
arXiv Detail & Related papers (2022-10-11T15:11:15Z)
- Self-Supervised Learning of Object Parts for Semantic Segmentation [7.99536002595393]
We argue that self-supervised learning of object parts is a solution to this issue.
Our method surpasses the state-of-the-art on three semantic segmentation benchmarks by margins ranging from 3% to 17%.
arXiv Detail & Related papers (2022-04-27T17:55:17Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z)
- Generalization and Robustness Implications in Object-Centric Learning [23.021791024676986]
In this paper, we train state-of-the-art unsupervised models on five common multi-object datasets.
From our experimental study, we find object-centric representations to be generally useful for downstream tasks.
arXiv Detail & Related papers (2021-07-01T17:51:11Z)
- Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds [109.0016923028653]
We learn point cloud representation by bidirectional reasoning between the local structures and the global shape without human supervision.
We show that our unsupervised model surpasses the state-of-the-art supervised methods on both synthetic and real-world 3D object classification datasets.
arXiv Detail & Related papers (2020-03-29T08:26:08Z)