Semantic Scene Graph Generation Based on an Edge Dual Scene Graph and
Message Passing Neural Network
- URL: http://arxiv.org/abs/2311.01192v1
- Date: Thu, 2 Nov 2023 12:36:52 GMT
- Title: Semantic Scene Graph Generation Based on an Edge Dual Scene Graph and
Message Passing Neural Network
- Authors: Hyeongjin Kim, Sangwon Kim, Jong Taek Lee, Byoung Chul Ko
- Abstract summary: Scene graph generation (SGG) captures the relationships between objects in an image and creates a structured graph-based representation.
Existing SGG methods have a limited ability to accurately predict detailed relationships.
A new approach to modeling multiobject relationships, called edge dual scene graph generation (EdgeSGG), is proposed herein.
- Score: 3.9280441311534653
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Along with generative AI, interest in scene graph generation (SGG), which
comprehensively captures the relationships and interactions between objects in
an image and creates a structured graph-based representation, has significantly
increased in recent years. However, relying on object-centric and dichotomous
relationships, existing SGG methods have a limited ability to accurately
predict detailed relationships. To solve these problems, a new approach to
modeling multiobject relationships, called edge dual scene graph generation
(EdgeSGG), is proposed herein. EdgeSGG is based on an edge dual scene graph and
Dual Message Passing Neural Network (DualMPNN), which can capture rich
contextual interactions between unconstrained objects. To facilitate the
learning of edge dual scene graphs with a symmetric graph structure, the
proposed DualMPNN learns both object- and relation-centric features for more
accurately predicting relation-aware contexts and allows fine-grained
relational updates between objects. A comparative experiment with
state-of-the-art (SoTA) methods was conducted using two public datasets for SGG
operations and six metrics for three subtasks. Compared with SoTA approaches,
the proposed model exhibited substantial performance improvements across all
SGG subtasks. Furthermore, experiments on long-tail distributions revealed that
incorporating the relationships between objects effectively mitigates existing
long-tail problems.
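For intuition, an edge dual scene graph is essentially the line graph of the original scene graph: each relation (edge) becomes a node, and two relation-nodes are adjacent when their underlying edges share an object. The sketch below is a minimal illustration under that reading, not the authors' implementation; the toy labels and the mean-aggregation update are assumptions standing in for DualMPNN's learned object- and relation-centric layers (edge directionality is dropped for simplicity).

```python
import networkx as nx
import numpy as np

# Toy scene graph: objects as nodes, predicates as labeled edges.
# All labels are illustrative; they do not come from the paper.
sg = nx.Graph()
sg.add_edge("person", "horse", predicate="riding")
sg.add_edge("horse", "grass", predicate="standing on")
sg.add_edge("person", "hat", predicate="wearing")

# Edge dual graph: every relation of sg becomes a node; two relation-nodes
# are connected when their edges share an object. networkx's line_graph
# implements exactly this classical construction.
dual = nx.line_graph(sg)
print(list(dual.nodes()))  # e.g. [('person', 'horse'), ('horse', 'grass'), ...]

# Placeholder 8-d feature per relation-node (learned embeddings in practice).
feats = {n: np.random.randn(8) for n in dual.nodes()}

def mp_step(graph, feats):
    """One generic message-passing step: each relation-node takes the mean of
    its neighbours' features. A stand-in for the DualMPNN update, which also
    maintains object-centric features and fine-grained relational updates."""
    out = {}
    for n in graph.nodes():
        msgs = [feats[m] for m in graph.neighbors(n)]
        out[n] = np.mean(msgs, axis=0) if msgs else feats[n]
    return out

feats = mp_step(dual, feats)
```

Message passing on the dual graph updates each relation by looking at neighbouring relations, which is what lets the model reason about multiobject context rather than isolated subject-predicate-object triples.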
Related papers
- Scene Graph Generation Strategy with Co-occurrence Knowledge and Learnable Term Frequency [3.351553095054309]
Scene graph generation (SGG) represents the relationships between objects in an image as a graph structure.
Previous studies have failed to reflect the co-occurrence of objects during scene graph generation.
We propose CooK, which reflects the Co-occurrence Knowledge between objects, together with a learnable term frequency-inverse document frequency (TF-IDF).
arXiv Detail & Related papers (2024-05-21T09:56:48Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Unbiased Heterogeneous Scene Graph Generation with Relation-aware Message Passing Neural Network [9.779600950401315]
We propose an unbiased heterogeneous scene graph generation (HetSGG) framework that captures relation-aware context.
We devise a novel message passing layer, called relation-aware message passing neural network (RMP), that aggregates the contextual information of an image.
arXiv Detail & Related papers (2022-12-01T11:25:36Z)
- HL-Net: Heterophily Learning Network for Scene Graph Generation [90.2766568914452]
We propose a novel Heterophily Learning Network (HL-Net) to explore the homophily and heterophily between objects/relationships in scene graphs.
HL-Net comprises, among other components, an adaptive reweighting transformer module, which adaptively integrates the information from different layers to exploit both the heterophily and homophily among objects.
We conducted extensive experiments on two public datasets: Visual Genome (VG) and Open Images (OI).
arXiv Detail & Related papers (2022-05-03T06:00:29Z)
- Relation Regularized Scene Graph Generation [206.76762860019065]
Scene graph generation (SGG) is built on top of detected objects to predict object pairwise visual relations.
We propose a relation regularized network (R2-Net) which can predict whether there is a relationship between two objects.
Our R2-Net can effectively refine object labels and generate scene graphs.
arXiv Detail & Related papers (2022-02-22T11:36:49Z)
- Hyper-relationship Learning Network for Scene Graph Generation [95.6796681398668]
We propose a hyper-relationship learning network, termed HLN, for scene graph generation.
We evaluate HLN on the most popular SGG dataset, i.e., the Visual Genome dataset.
For example, the proposed HLN improves the recall per relationship from 11.3% to 13.1%, and the recall per image from 19.8% to 34.9%.
arXiv Detail & Related papers (2022-02-15T09:26:16Z)
- Semantic Compositional Learning for Low-shot Scene Graph Generation [122.51930904132685]
Many scene graph generation (SGG) models solely use the limited annotated relation triples for training.
We propose a novel semantic compositional learning strategy that makes it possible to construct additional, realistic relation triples.
For three recent SGG models, adding our strategy improves their performance by close to 50%, and all of them substantially exceed the current state-of-the-art.
arXiv Detail & Related papers (2021-08-19T10:13:55Z)
- Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks [150.5425122989146]
This work proposes a novel attentive graph neural network (AGNN) for zero-shot video object segmentation (ZVOS).
AGNN builds a fully connected graph to efficiently represent frames as nodes, and relations between arbitrary frame pairs as edges (a construction sketched after this list).
Experimental results on three video segmentation datasets show that AGNN sets a new state-of-the-art in each case.
arXiv Detail & Related papers (2020-01-19T10:45:27Z)
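As a rough illustration of the AGNN construction mentioned above, a fully connected frame graph is simply a complete graph with one node per frame. The sketch below is an assumption-laden toy: the random frame features stand in for CNN embeddings, and the dot-product edge weights stand in for AGNN's learned attentive messages.

```python
import itertools
import numpy as np

# Hypothetical per-frame embeddings (AGNN uses CNN features; random here).
num_frames, dim = 5, 16
frame_feats = np.random.randn(num_frames, dim)

# Fully connected frame graph: one node per frame and an edge for every
# frame pair. Edge weights are sketched as dot-product similarity; AGNN
# instead computes learned attentive messages along each edge.
edges = {
    (i, j): float(frame_feats[i] @ frame_feats[j])
    for i, j in itertools.combinations(range(num_frames), 2)
}
print(f"{num_frames} frames -> {len(edges)} undirected edges")
```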
This list is automatically generated from the titles and abstracts of the papers on this site.