Detecting Objects with Context-Likelihood Graphs and Graph Refinement
- URL: http://arxiv.org/abs/2212.12395v3
- Date: Wed, 27 Sep 2023 17:43:12 GMT
- Title: Detecting Objects with Context-Likelihood Graphs and Graph Refinement
- Authors: Aritra Bhowmik, Yu Wang, Nora Baka, Martin R. Oswald, Cees G. M. Snoek
- Abstract summary: The goal of this paper is to detect objects by exploiting their interrelationships. Contrary to existing methods, which learn objects and relations separately, our key idea is to learn the object-relation distribution jointly.
We propose a novel way of creating a graphical representation of an image from inter-object relations and initial class predictions, which we call a context-likelihood graph.
We then learn the joint distribution with an energy-based modeling technique which allows us to sample and refine the context-likelihood graph iteratively for a given image.
- Score: 45.70356990655389
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of this paper is to detect objects by exploiting their
interrelationships. Contrary to existing methods, which learn objects and
relations separately, our key idea is to learn the object-relation distribution
jointly. We first propose a novel way of creating a graphical representation of
an image from inter-object relation priors and initial class predictions, which we
call a context-likelihood graph. We then learn the joint distribution with an
energy-based modeling technique which allows us to sample and refine the
context-likelihood graph iteratively for a given image. Our formulation of
jointly learning the distribution enables us to generate a more accurate graph
representation of an image, which leads to better object detection
performance. We demonstrate the benefits of our context-likelihood graph
formulation and the energy-based graph refinement via experiments on the Visual
Genome and MS-COCO datasets where we achieve a consistent improvement over
object detectors like DETR and Faster-RCNN, as well as alternative methods
modeling object interrelationships separately. Our method is detector agnostic,
end-to-end trainable, and especially beneficial for rare object classes.
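The paper gives no implementation details in this listing, but the core idea (nodes carry initial class distributions, edges carry inter-object relation priors, and the graph is refined iteratively toward a lower joint energy) can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the function names, the specific energy form, and the mean-field-style update are stand-ins, not the authors' actual energy-based model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def build_context_likelihood_graph(class_logits, relation_prior):
    """Hypothetical graph: nodes hold initial class distributions (N, C);
    edges connect every ordered pair of detections, scored by a (C, C)
    co-occurrence prior over class pairs."""
    node_probs = softmax(class_logits)                      # (N, C)
    n = node_probs.shape[0]
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    return node_probs, edges

def energy(node_probs, edges, relation_prior):
    """Assumed energy: lower when node labels are more compatible
    with the pairwise relation prior."""
    e = 0.0
    for i, j in edges:
        pair = np.outer(node_probs[i], node_probs[j])        # joint pair beliefs
        e -= np.sum(pair * np.log(relation_prior + 1e-8))
    return e

def refine(class_logits, relation_prior, steps=10, lr=0.1):
    """Iteratively nudge each node's logits with messages from its
    neighbors' beliefs, passed through the relation prior."""
    logits = class_logits.astype(float).copy()
    for _ in range(steps):
        probs = softmax(logits)
        msg = np.zeros_like(logits)
        n = logits.shape[0]
        for i in range(n):
            for j in range(n):
                if i != j:
                    # message from node j's current beliefs via the prior
                    msg[i] += np.log(relation_prior + 1e-8) @ probs[j]
        logits = class_logits + lr * msg
    return softmax(logits)
```

Under a uniform relation prior the messages are constant and the refinement leaves the detector's predictions unchanged; a prior that favors a class co-occurrence pulls an ambiguous detection toward the class compatible with its confident neighbors, which mirrors the paper's claim that context helps most for uncertain (e.g. rare) classes.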
Related papers
- EGTR: Extracting Graph from Transformer for Scene Graph Generation [5.935927309154952]
Scene Graph Generation (SGG) is a challenging task of detecting objects and predicting relationships between objects.
We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder.
We demonstrate the effectiveness and efficiency of our method for the Visual Genome and Open Image V6 datasets.
arXiv Detail & Related papers (2024-04-02T16:20:02Z)
- Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection [14.22646492640906]
We propose a simple and highly efficient decoder-free architecture for open-vocabulary visual relationship detection.
Our model consists of a Transformer-based image encoder that represents objects as tokens and models their relationships implicitly.
Our approach achieves state-of-the-art relationship detection performance on Visual Genome and on the large-vocabulary GQA benchmark at real-time inference speeds.
arXiv Detail & Related papers (2024-03-21T10:15:57Z)
- Relational Prior Knowledge Graphs for Detection and Instance Segmentation [24.360473253478112]
We propose a graph that enhances object features using priors.
Experimental evaluations on COCO show that the utilization of scene graphs, augmented with relational priors, offers benefits for object detection and instance segmentation.
arXiv Detail & Related papers (2023-10-11T15:15:05Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Relationformer: A Unified Framework for Image-to-Graph Generation [18.832626244362075]
This work proposes a unified one-stage transformer-based framework, namely Relationformer, that jointly predicts objects and their relations.
We leverage direct set-based object prediction and incorporate the interaction among the objects to learn an object-relation representation jointly.
We achieve state-of-the-art performance on multiple, diverse and multi-domain datasets.
arXiv Detail & Related papers (2022-03-19T00:36:59Z)
- Relation Regularized Scene Graph Generation [206.76762860019065]
Scene graph generation (SGG) is built on top of detected objects to predict object pairwise visual relations.
We propose a relation regularized network (R2-Net) which can predict whether there is a relationship between two objects.
Our R2-Net can effectively refine object labels and generate scene graphs.
arXiv Detail & Related papers (2022-02-22T11:36:49Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object, and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- A Graph-based Interactive Reasoning for Human-Object Interaction Detection [71.50535113279551]
We present a novel graph-based interactive reasoning model called Interactive Graph (abbr. in-Graph) to infer HOIs.
We construct a new framework to assemble in-Graph models for detecting HOIs, namely in-GraphNet.
Our framework is end-to-end trainable and free from costly annotations like human pose.
arXiv Detail & Related papers (2020-07-14T09:29:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.