Pair then Relation: Pair-Net for Panoptic Scene Graph Generation
- URL: http://arxiv.org/abs/2307.08699v2
- Date: Tue, 1 Aug 2023 13:41:46 GMT
- Title: Pair then Relation: Pair-Net for Panoptic Scene Graph Generation
- Authors: Jinghao Wang, Zhengyu Wen, Xiangtai Li, Zujin Guo, Jingkang Yang,
Ziwei Liu
- Abstract summary: Panoptic Scene Graph (PSG) aims to create a more comprehensive scene graph representation using panoptic segmentation instead of boxes.
Current PSG methods have limited performance, which hinders downstream tasks or applications.
We present a novel framework: Pair then Relation (Pair-Net), which uses a Pair Proposal Network (PPN) to learn and filter sparse pair-wise relationships between subjects and objects.
- Score: 28.445190357176312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Panoptic Scene Graph (PSG) is a challenging task in Scene Graph Generation
(SGG) that aims to create a more comprehensive scene graph representation using
panoptic segmentation instead of boxes. Compared to SGG, PSG poses several
challenging problems: pixel-level segment outputs and full relationship
exploration (it also considers relations between thing and stuff classes).
Thus, current PSG
methods have limited performance, which hinders downstream tasks or
applications. This work aims to design a novel and strong baseline
for PSG. To achieve that, we first conduct an in-depth analysis to identify the
bottleneck of the current PSG models, finding that inter-object pair-wise
recall is a crucial factor that was ignored by previous PSG methods. Based on
this and the recent query-based frameworks, we present a novel framework: Pair
then Relation (Pair-Net), which uses a Pair Proposal Network (PPN) to learn and
filter sparse pair-wise relationships between subjects and objects. Moreover,
we also observed the sparse nature of object pairs. Motivated by this, we
design a lightweight Matrix Learner within the PPN, which directly learns
pair-wise relationships for pair proposal generation. Through extensive
ablation and analysis, our approach significantly improves upon a strong
segmenter baseline. Notably, our method achieves new state-of-the-art
results on the PSG benchmark, with over 10% absolute gains compared to
PSGFormer. The code of this paper is publicly available at
https://github.com/king159/Pair-Net.
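The core idea in the abstract, scoring every (subject, object) pair of object queries with a learned pairness matrix and keeping only a sparse top-k set of proposals, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the function name `pair_proposals` and the random projections standing in for the learned Matrix Learner weights are assumptions for the example.

```python
import numpy as np

def pair_proposals(queries, k, rng=None):
    """Hypothetical sketch of the Pair Proposal Network (PPN) idea:
    score every (subject, object) pair of object queries with a
    pairness matrix, then keep only the top-k sparse pairs.

    queries: (N, D) array of object query embeddings.
    Returns a list of k (subject_idx, object_idx) index pairs.
    """
    rng = rng or np.random.default_rng(0)
    n, d = queries.shape
    # Random stand-ins for the learned subject/object projections
    # (the "Matrix Learner" would learn these end-to-end).
    w_sub = rng.standard_normal((d, d)) / np.sqrt(d)
    w_obj = rng.standard_normal((d, d)) / np.sqrt(d)
    sub = queries @ w_sub              # (N, D) subject features
    obj = queries @ w_obj              # (N, D) object features
    scores = sub @ obj.T               # (N, N) pairness matrix
    np.fill_diagonal(scores, -np.inf)  # a segment cannot pair with itself
    # Flatten, take the k highest-scoring entries, unravel back to (i, j).
    flat = np.argsort(scores, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, scores.shape)) for i in flat]

pairs = pair_proposals(np.random.default_rng(1).standard_normal((8, 16)), k=5)
print(pairs)  # 5 sparse (subject, object) proposals out of 8*7 candidates
```

The sparsity argument from the abstract shows up directly here: only k pairs survive out of N*(N-1) candidates, so the downstream relation classifier sees a short list rather than a dense pair matrix.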
Related papers
- OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models [28.742671870397757]
Panoptic Scene Graph Generation (PSG) aims to segment objects and recognize their relations, enabling the structured understanding of an image.
Previous methods focus on predicting predefined object and relation categories, hence limiting their applications in open-world scenarios.
In this paper, we focus on the task of open-set relation prediction integrated with a pretrained open-set panoptic segmentation model.
arXiv Detail & Related papers (2024-07-15T19:56:42Z) - HiLo: Exploiting High Low Frequency Relations for Unbiased Panoptic
Scene Graph Generation [13.221163846643607]
Panoptic Scene Graph generation (PSG) aims to segment the image and extract triplets of subjects, objects and their relations to build a scene graph.
This task suffers from a long-tail problem in its relation categories, making naive biased methods more inclined to high-frequency relations.
Existing unbiased methods tackle the long-tail problem by data/loss rebalancing to favor low-frequency relations.
Whereas existing methods favor one frequency band over the other, our proposed HiLo framework lets different network branches specialize in low- and high-frequency relations.
arXiv Detail & Related papers (2023-03-28T14:08:09Z) - 1st Place Solution for PSG competition with ECCV'22 SenseHuman Workshop [1.5362025549031049]
Panoptic Scene Graph (PSG) generation aims to generate scene graph representations based on panoptic segmentation instead of rigid bounding boxes.
We propose GRNet, a Global Relation Network in two-stage paradigm, where the pre-extracted local object features and their corresponding masks are fed into a transformer with class embeddings.
We conduct comprehensive experiments on the OpenPSG dataset and achieve state-of-the-art performance on the leaderboard.
arXiv Detail & Related papers (2023-02-06T09:47:46Z) - Panoptic Scene Graph Generation [41.534209967051645]
Panoptic scene graph generation (PSG) is a new task that requires the model to generate a more comprehensive scene graph representation.
A high-quality PSG dataset contains 49k well-annotated overlapping images from COCO and Visual Genome.
arXiv Detail & Related papers (2022-07-22T17:59:53Z) - RU-Net: Regularized Unrolling Network for Scene Graph Generation [92.95032610978511]
Scene graph generation (SGG) aims to detect objects and predict the relationships between each pair of objects.
Existing SGG methods usually suffer from several issues, including 1) ambiguous object representations, and 2) low diversity in relationship predictions.
We propose a regularized unrolling network (RU-Net) to address both problems.
arXiv Detail & Related papers (2022-05-03T04:21:15Z) - Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased
Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z) - Relation Regularized Scene Graph Generation [206.76762860019065]
Scene graph generation (SGG) is built on top of detected objects to predict object pairwise visual relations.
We propose a relation regularized network (R2-Net) which can predict whether there is a relationship between two objects.
Our R2-Net can effectively refine object labels and generate scene graphs.
arXiv Detail & Related papers (2022-02-22T11:36:49Z) - Learning Spatial Context with Graph Neural Network for Multi-Person Pose
Grouping [71.59494156155309]
Bottom-up approaches for image-based multi-person pose estimation consist of two stages: keypoint detection and grouping.
In this work, we formulate the grouping task as a graph partitioning problem, where we learn the affinity matrix with a Graph Neural Network (GNN).
The learned geometry-based affinity is further fused with appearance-based affinity to achieve robust keypoint association.
arXiv Detail & Related papers (2021-04-06T09:21:14Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Learning Physical Graph Representations from Visual Scenes [56.7938395379406]
Physical Scene Graphs (PSGs) represent scenes as hierarchical graphs with nodes corresponding intuitively to object parts at different scales, and edges to physical connections between parts.
PSGNet augments standard CNNs by including: recurrent feedback connections to combine low and high-level image information; graph pooling and vectorization operations that convert spatially-uniform feature maps into object-centric graph structures.
We show that PSGNet outperforms alternative self-supervised scene representation algorithms at scene segmentation tasks.
arXiv Detail & Related papers (2020-06-22T16:10:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.