Context-LGM: Leveraging Object-Context Relation for Context-Aware Object
Recognition
- URL: http://arxiv.org/abs/2110.04042v1
- Date: Fri, 8 Oct 2021 11:31:58 GMT
- Title: Context-LGM: Leveraging Object-Context Relation for Context-Aware Object
Recognition
- Authors: Mingzhou Liu, Xinwei Sun, Fandong Zhang, Yizhou Yu, Yizhou Wang
- Abstract summary: We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of Variational Auto-Encoder (VAE), where contextual features are learned as a posterior distribution conditioned on the object.
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
- Score: 48.5398871460388
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Context, referring to situational factors related to the object of
interest, can help infer the object's states or properties in visual
recognition. Since such contextual features are too diverse (across instances)
to be annotated, existing attempts simply exploit image labels as supervision
to learn them, resulting in various contextual tricks, such as feature
pyramids, context attention, etc. However, without carefully modeling the
context's properties, especially its relation to the object, the estimated
context can suffer from large inaccuracy. To remedy this, we propose a novel
Contextual Latent Generative Model (Context-LGM), which considers the
object-context relation and models it in a hierarchical manner. Specifically,
we first introduce a latent generative model with a pair of correlated latent
variables to respectively model the object and context, and embed their
correlation via the generative process. Then, to infer contextual features, we
reformulate the objective function of Variational Auto-Encoder (VAE), where
contextual features are learned as a posterior distribution conditioned on the
object. Finally, to implement this contextual posterior, we introduce a
Transformer that takes the object's information as a reference and locates
correlated contextual factors. The effectiveness of our method is verified by
state-of-the-art performance on two context-aware object recognition tasks,
i.e. lung cancer prediction and emotion recognition.
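The core inference step described above can be illustrated with a minimal numpy sketch: the object feature serves as an attention query that locates correlated context regions, and the aggregated context parameterizes a Gaussian posterior over the contextual latent, with the usual VAE reparameterization and KL term. All weight matrices and dimensions here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_posterior(obj_feat, region_feats, W_q, W_k, W_mu, W_logvar, rng):
    """Sketch of q(z_c | z_o, x): the object feature acts as a query that
    attends to candidate context regions; their weighted sum parameterizes
    a diagonal Gaussian over the contextual latent z_c."""
    q = obj_feat @ W_q                        # query from the object (d,)
    k = region_feats @ W_k                    # keys from context regions (n, d)
    attn = softmax(k @ q / np.sqrt(q.size))   # attention weights over regions (n,)
    ctx = attn @ region_feats                 # aggregated context feature (d,)
    mu, logvar = ctx @ W_mu, ctx @ W_logvar   # Gaussian posterior parameters
    # reparameterization trick: z_c = mu + sigma * eps
    z_c = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.size)
    # KL( q(z_c | z_o, x) || N(0, I) ), the regularizer in the VAE objective
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return z_c, attn, kl

rng = np.random.default_rng(0)
d, n, dz = 8, 5, 4                            # feature dim, regions, latent dim
obj = rng.standard_normal(d)                  # object feature (e.g. a tumor ROI)
regions = rng.standard_normal((n, d))         # candidate context-region features
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
W_mu = rng.standard_normal((d, dz))
W_logvar = 0.1 * rng.standard_normal((d, dz))
z_c, attn, kl = contextual_posterior(obj, regions, W_q, W_k, W_mu, W_logvar, rng)
```

In the paper this conditioning is realized with a Transformer rather than a single cross-attention step, but the same structure holds: the object is the reference, and only context correlated with it contributes to the posterior.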
Related papers
- Lost in Context: The Influence of Context on Feature Attribution Methods for Object Recognition [4.674826882670651]
This study investigates how context manipulation influences both model accuracy and feature attribution.
We employ a range of feature attribution techniques to decipher the reliance of deep neural networks on context in object recognition tasks.
arXiv Detail & Related papers (2024-11-05T06:13:01Z)
- Context-Aware Temporal Embedding of Objects in Video Data [0.8287206589886881]
In video analysis, understanding the temporal context is crucial for recognizing object interactions, event patterns, and contextual changes over time.
The proposed model leverages adjacency and semantic similarities between objects from neighboring video frames to construct context-aware temporal object embeddings.
Empirical studies demonstrate that our context-aware temporal embeddings can be used in conjunction with conventional visual embeddings to enhance the effectiveness of downstream applications.
arXiv Detail & Related papers (2024-08-23T01:44:10Z)
- Exploiting Contextual Target Attributes for Target Sentiment Classification [53.30511968323911]
Existing PTLM-based models for TSC can be categorized into two groups: 1) fine-tuning-based models that adopt PTLM as the context encoder; 2) prompting-based models that transfer the classification task to the text/word generation task.
We present a new perspective of leveraging PTLM for TSC: simultaneously leveraging the merits of both language modeling and explicit target-context interactions via contextual target attributes.
arXiv Detail & Related papers (2023-12-21T11:45:28Z)
- Out of Context: A New Clue for Context Modeling of Aspect-based Sentiment Analysis [54.735400754548635]
ABSA aims to predict the sentiment expressed in a review with respect to a given aspect.
The given aspect should be considered as a new clue out of context in the context modeling process.
We design several aspect-aware context encoders based on different backbones.
arXiv Detail & Related papers (2021-06-21T02:26:03Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parsing and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
- Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias [113.44471186752018]
Existing models often leverage co-occurrences between objects and their context to improve recognition accuracy.
This work focuses on addressing such contextual biases to improve the robustness of the learnt feature representations.
arXiv Detail & Related papers (2020-01-09T18:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.