Unbiased Scene Graph Generation from Biased Training
- URL: http://arxiv.org/abs/2002.11949v3
- Date: Wed, 11 Mar 2020 07:55:13 GMT
- Title: Unbiased Scene Graph Generation from Biased Training
- Authors: Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, Hanwang Zhang
- Abstract summary: We present a novel SGG framework based on causal inference rather than the conventional likelihood.
We propose to draw counterfactual causality from the trained graph to infer the effect of the bad bias.
In particular, we use Total Direct Effect (TDE) as the final predicate score for unbiased SGG.
- Score: 99.88125954889937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today's scene graph generation (SGG) task is still far from practical, mainly due to severe training bias, e.g., collapsing the diverse "human walk on / sit on / lay on beach" into "human on beach". Given such scene graphs, downstream tasks such as VQA can hardly infer better scene structures than merely a bag of objects. However, debiasing in SGG is not trivial because traditional debiasing methods cannot distinguish between good and bad bias, e.g., the good context prior ("person read book" rather than "eat") and the bad long-tailed bias ("near" dominating "behind / in front of"). In this paper, we present a novel SGG framework based on causal inference rather than the conventional likelihood. We first build a causal graph for SGG and perform traditional biased training with the graph. Then, we draw counterfactual causality from the trained graph to infer the effect of the bad bias, which should be removed. In particular, we use Total Direct Effect (TDE) as the final predicate score for unbiased SGG. Note that our framework is model-agnostic and can therefore be widely applied by anyone in the community seeking unbiased predictions. Using the proposed Scene Graph Diagnosis toolkit on the SGG benchmark Visual Genome and several prevailing models, we observe significant improvements over previous state-of-the-art methods.
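The TDE idea in the abstract can be illustrated with a short sketch: score a predicate twice, once on the real visual features and once on a counterfactual input with the visual content "wiped", and keep only the difference. The `PredicateHead` model and the zero-vector counterfactual below are hypothetical placeholders, not the paper's actual architecture or wiping strategy:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an SGG predicate classifier. In the paper,
# the counterfactual input wipes the object-pair visual features while
# keeping the context; a zero vector is used here purely as a placeholder.
class PredicateHead(nn.Module):
    def __init__(self, dim=16, num_predicates=5):
        super().__init__()
        self.fc = nn.Linear(dim, num_predicates)

    def forward(self, x):
        return self.fc(x)

def tde_score(model, factual_feat, counterfactual_feat):
    # Total Direct Effect = factual logits - counterfactual logits.
    # Subtracting the context-only (counterfactual) prediction removes
    # the long-tailed bad bias while keeping the visual evidence.
    return model(factual_feat) - model(counterfactual_feat)

model = PredicateHead()
feat = torch.randn(1, 16)       # factual visual feature
wiped = torch.zeros_like(feat)  # placeholder counterfactual input
unbiased_logits = tde_score(model, feat, wiped)
```

Because TDE only changes how the final score is computed, the same subtraction can wrap any trained SGG model, which is what makes the framework model-agnostic.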
Related papers
- Fine-Grained Scene Graph Generation via Sample-Level Bias Prediction [12.319354506916547]
We propose a novel Sample-Level Bias Prediction (SBP) method for fine-grained Scene Graph Generation (SGG).
First, we train a classic SGG model and construct a correction bias set.
Then, we devise a Bias-Oriented Generative Adversarial Network (BGAN) that learns to predict the constructed correction biases.
arXiv Detail & Related papers (2024-07-27T13:49:06Z)
- HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation [13.929906773382752]
A common approach to enable reasoning over visual data is Scene Graph Generation (SGG).
We propose a novel SGG benchmark containing procedurally generated weather corruptions and other transformations over the Visual Genome dataset.
We show that HiKER-SGG not only demonstrates superior performance on corrupted images in a zero-shot manner, but also outperforms current state-of-the-art methods on uncorrupted SGG tasks.
arXiv Detail & Related papers (2024-03-18T17:59:10Z) - Informative Scene Graph Generation via Debiasing [111.36290856077584]
Scene graph generation aims to detect visual relationship triplets of the form (subject, predicate, object).
Due to biases in data, current models tend to predict common predicates.
We propose DB-SGG, an effective framework based on debiasing but not the conventional distribution fitting.
arXiv Detail & Related papers (2023-08-10T02:04:01Z) - Learning To Generate Scene Graph from Head to Tail [65.48134724633472]
We propose a novel SGG framework, learning to generate scene graphs from Head to Tail (SGG-HT).
CRM first learns head/easy samples to obtain robust features of head predicates and then gradually focuses on tail/hard ones.
SCM is proposed to relieve semantic deviation by ensuring the semantic consistency between the generated scene graph and the ground truth in global and local representations.
arXiv Detail & Related papers (2022-06-23T12:16:44Z) - Resistance Training using Prior Bias: toward Unbiased Scene Graph
Generation [47.69807004675605]
Scene Graph Generation (SGG) aims to build a structured representation of a scene using objects and pairwise relationships.
We propose Resistance Training using Prior Bias (RTPB) for the scene graph generation.
Our RTPB achieves an improvement of over 10% under the mean recall when applied to current SGG methods.
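The "resistance" idea can be sketched in the spirit of logit adjustment: add a class-prior-dependent bias to the training logits so the model must overcome the head-class prior, then predict with the raw logits at test time. This is a hypothetical simplification, not RTPB's exact formulation, and the `priors` values are made-up predicate frequencies:

```python
import math

def resistance_bias(logits, priors, tau=1.0):
    # Add a prior-dependent bias to training logits (hypothetical
    # sketch in the spirit of logit adjustment, not RTPB's exact
    # formulation). Cross-entropy is computed on the adjusted logits
    # during training; raw logits are used at test time, which
    # implicitly boosts tail predicates.
    return [z + tau * math.log(p) for z, p in zip(logits, priors)]

logits = [2.0, 2.0, 2.0]
priors = [0.7, 0.2, 0.1]  # made-up head / mid / tail frequencies
adjusted = resistance_bias(logits, priors)
# Head predicates receive the largest additive bias, so the training
# loss pressures the model to raise tail-class scores instead.
```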
arXiv Detail & Related papers (2022-01-18T07:48:55Z)
- From General to Specific: Informative Scene Graph Generation via Balance Adjustment [113.04103371481067]
Current models are stuck in common predicates, e.g., "on" and "at", rather than informative ones.
We propose BA-SGG, a framework based on balance adjustment but not the conventional distribution fitting.
Our method achieves 14.3%, 8.0%, and 6.1% higher Mean Recall (mR) than that of the Transformer model at three scene graph generation sub-tasks on Visual Genome.
arXiv Detail & Related papers (2021-08-30T11:39:43Z)
- Recovering the Unbiased Scene Graphs from the Biased Ones [99.24441932582195]
We show that due to the missing labels, scene graph generation (SGG) can be viewed as a "Learning from Positive and Unlabeled data" (PU learning) problem.
We propose Dynamic Label Frequency Estimation (DLFE) to take advantage of training-time data augmentation and average over multiple training iterations to introduce more valid examples.
Extensive experiments show that DLFE is more effective in estimating label frequencies than a naive variant of the traditional estimate, and DLFE significantly alleviates the long tail.
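The PU-learning view behind DLFE can be sketched concretely: if c = p(labeled | positive) is the per-predicate label frequency, then an unbiased positive probability is p(labeled | x) / c, and DLFE estimates c dynamically by averaging the model's scores on labeled positives across training iterations. The momentum update and all numbers below are a hypothetical simplification of that idea, not the paper's implementation:

```python
class DynamicLabelFrequency:
    """Hypothetical sketch of dynamic label frequency estimation."""

    def __init__(self, num_predicates, momentum=0.9, init=0.5):
        self.momentum = momentum
        self.freq = [init] * num_predicates  # c per predicate

    def update(self, predicate, avg_score_on_labeled_positives):
        # Running average of the model's score on labeled positives,
        # accumulated across training iterations (and, in DLFE, across
        # augmented views of the training data).
        m = self.momentum
        self.freq[predicate] = (
            m * self.freq[predicate] + (1 - m) * avg_score_on_labeled_positives
        )

    def recover(self, predicate, labeled_score):
        # PU correction: unbiased probability = labeled score / c,
        # clipped to the valid range [0, 1].
        return min(labeled_score / self.freq[predicate], 1.0)

dlfe = DynamicLabelFrequency(num_predicates=3)
dlfe.update(0, 0.2)  # a rarely labeled predicate -> small frequency
score = dlfe.recover(0, 0.15)
```

Dividing by a small estimated frequency lifts the scores of under-labeled (typically tail) predicates, which is how the correction alleviates the long tail.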
arXiv Detail & Related papers (2021-07-05T16:10:41Z)
- CogTree: Cognition Tree Loss for Unbiased Scene Graph Generation [23.55530043171931]
Scene Graph Generation (SGG) is unsatisfactory when faced with biased data in real-world scenarios.
We propose a novel debiasing Cognition Tree (CogTree) loss for unbiased SGG.
The loss is model-agnostic and consistently boosts the performance of several state-of-the-art models.
arXiv Detail & Related papers (2020-09-16T07:47:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.