VLGrammar: Grounded Grammar Induction of Vision and Language
- URL: http://arxiv.org/abs/2103.12975v1
- Date: Wed, 24 Mar 2021 04:05:08 GMT
- Title: VLGrammar: Grounded Grammar Induction of Vision and Language
- Authors: Yining Hong, Qing Li, Song-Chun Zhu, Siyuan Huang
- Abstract summary: We study grounded grammar induction of vision and language in a joint learning framework.
We present VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously.
- Score: 86.88273769411428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive grammar suggests that the acquisition of language grammar is
grounded within visual structures. While grammar is an essential representation
of natural language, it also exists ubiquitously in vision to represent the
hierarchical part-whole structure. In this work, we study grounded grammar
induction of vision and language in a joint learning framework. Specifically,
we present VLGrammar, a method that uses compound probabilistic context-free
grammars (compound PCFGs) to induce the language grammar and the image grammar
simultaneously. We propose a novel contrastive learning framework to guide the
joint learning of both modules. To provide a benchmark for the grounded grammar
induction task, we collect a large-scale dataset, PartIt, which
contains human-written sentences that describe part-level semantics for 3D
objects. Experiments on the PartIt dataset show that VLGrammar
outperforms all baselines in image grammar induction and language grammar
induction. The learned VLGrammar naturally benefits related downstream tasks.
Specifically, it improves the image unsupervised clustering accuracy by 30%,
and performs well in image retrieval and text retrieval. Notably, the induced
grammar shows superior generalizability by easily generalizing to unseen
categories.
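To make the joint-learning idea above concrete, below is a minimal sketch of an InfoNCE-style contrastive objective of the kind the abstract describes: pooled representations from a language-grammar module and an image-grammar module are pulled together for matching sentence-object pairs and pushed apart for mismatched pairs in the batch. The function name, feature dimensions, and temperature are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code) of a contrastive grounding loss that
# aligns language-parse and image-parse representations, assuming both modules
# emit a pooled feature vector per example.
import torch
import torch.nn.functional as F

def contrastive_grounding_loss(text_feats: torch.Tensor,
                               image_feats: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over a batch of (language parse, image parse) pairs.

    text_feats, image_feats: [batch, dim] pooled representations produced by
    the language-grammar and image-grammar modules (hypothetical here).
    """
    text_feats = F.normalize(text_feats, dim=-1)
    image_feats = F.normalize(image_feats, dim=-1)
    logits = text_feats @ image_feats.t() / temperature   # [batch, batch]
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric loss: match text -> image and image -> text.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random features standing in for the two grammar modules.
loss = contrastive_grounding_loss(torch.randn(8, 256), torch.randn(8, 256))
```

The diagonal of the similarity matrix corresponds to matched sentence-object pairs, so minimizing the loss encourages the two induced grammars to agree on which parts and constituents belong together.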
Related papers
- Compositional Entailment Learning for Hyperbolic Vision-Language Models [54.41927525264365]
We show how to fully leverage the innate hierarchical nature of hyperbolic embeddings by looking beyond individual image-text pairs.
We propose Compositional Entailment Learning for hyperbolic vision-language models.
Empirical evaluation on a hyperbolic vision-language model trained with millions of image-text pairs shows that the proposed compositional learning approach outperforms conventional Euclidean CLIP learning.
arXiv Detail & Related papers (2024-10-09T14:12:50Z) - Grammar Induction from Visual, Speech and Text [91.98797120799227]
This work introduces a novel visual-audio-text grammar induction task (VAT-GI).
Inspired by the fact that language grammar exists beyond text, we argue that text need not be the predominant modality in grammar induction.
We propose a visual-audio-text inside-outside autoencoder (VaTiora) framework, which leverages rich modality-specific and complementary features for effective grammar parsing.
arXiv Detail & Related papers (2024-10-01T02:24:18Z) - Detecting and explaining (in)equivalence of context-free grammars [0.6282171844772422]
We propose a scalable framework for deciding, proving, and explaining (in)equivalence of context-free grammars.
We present an implementation of the framework and evaluate it on large data sets collected within educational support systems.
arXiv Detail & Related papers (2024-07-25T17:36:18Z) - Learning Language Structures through Grounding [8.437466837766895]
We consider a family of machine learning tasks that aim to learn language structures through grounding.
In Part I, we consider learning syntactic parses through visual grounding.
In Part II, we propose two execution-aware methods to map sentences into corresponding semantic structures.
In Part III, we propose methods that learn language structures from annotations in other languages.
arXiv Detail & Related papers (2024-06-14T02:21:53Z) - Learning grammar with a divide-and-concur neural network [4.111899441919164]
We implement a divide-and-concur iterative projection approach to context-free grammar inference.
Our method requires a relatively small number of discrete parameters, making the inferred grammar directly interpretable.
arXiv Detail & Related papers (2022-01-18T22:42:43Z) - Dependency Induction Through the Lens of Visual Perception [81.91502968815746]
We propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars.
Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.
arXiv Detail & Related papers (2021-09-20T18:40:37Z) - Visually Grounded Compound PCFGs [65.04669567781634]
Exploiting visual groundings for language understanding has recently been drawing much attention.
We study visually grounded grammar induction and learn a constituency parser from both unlabeled text and its visual captions.
arXiv Detail & Related papers (2020-09-25T19:07:00Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)