SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning
- URL: http://arxiv.org/abs/2104.05832v1
- Date: Mon, 12 Apr 2021 21:37:18 GMT
- Title: SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning
- Authors: Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa
Kordjamshidi
- Abstract summary: This paper proposes a benchmark for spatial reasoning on natural language text.
We design grammar and reasoning rules to automatically generate a spatial description of visual scenes and corresponding QA pairs.
Experiments show that further pretraining LMs on these automatically generated data significantly improves LMs' capability on spatial understanding.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a question-answering (QA) benchmark for spatial reasoning
on natural language text which contains more realistic spatial phenomena not
covered by prior work and is challenging for state-of-the-art language models
(LMs). We propose a distant supervision method to improve on this task.
Specifically, we design grammar and reasoning rules to automatically generate a
spatial description of visual scenes and corresponding QA pairs. Experiments
show that further pretraining LMs on these automatically generated data
significantly improves LMs' capability on spatial understanding, which in turn
helps to better solve two external datasets, bAbI and boolQ. We hope that this
work can foster investigations into more sophisticated models for spatial
reasoning over text.
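To make the distant-supervision recipe concrete, below is a minimal sketch of grammar-and-rule-based data generation. The vocabulary, the relations, and the single inverse-relation rule are invented for illustration; SpartQA's actual grammar and reasoning rules are richer.
```python
import random

# Toy vocabulary and rules; everything here is illustrative only.
SHAPES = ["circle", "square", "triangle"]
COLORS = ["blue", "black", "yellow"]
RELATIONS = {"to the left of": "to the right of", "above": "below"}  # relation -> inverse

def make_scene(n_objects=3):
    """Sample distinct objects and one spatial relation per adjacent pair."""
    objects = random.sample([f"{c} {s}" for c in COLORS for s in SHAPES], n_objects)
    return [(a, random.choice(list(RELATIONS)), b)
            for a, b in zip(objects, objects[1:])]

def describe(facts):
    """Render the sampled facts as a natural-language scene description."""
    return " ".join(f"The {a} is {rel} the {b}." for a, rel, b in facts)

def qa_pairs(facts):
    """Generate QA pairs; the inverse-relation rule supplies distant supervision."""
    pairs = []
    for a, rel, b in facts:
        pairs.append((f"Is the {a} {rel} the {b}?", "Yes"))
        pairs.append((f"Is the {b} {RELATIONS[rel]} the {a}?", "Yes"))  # inverse rule
        pairs.append((f"Is the {b} {rel} the {a}?", "No"))              # asymmetry
    return pairs

facts = make_scene()
print(describe(facts))
for question, answer in qa_pairs(facts):
    print(question, "->", answer)
```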
Related papers
- Large Vision-Language Models for Remote Sensing Visual Question Answering
Remote Sensing Visual Question Answering (RSVQA) is a challenging task that involves interpreting complex satellite imagery to answer natural language questions.
Traditional approaches often rely on separate visual feature extractors and language processing models, which can be computationally intensive and limited in their ability to handle open-ended questions.
We propose a novel method that leverages a generative Large Vision-Language Model (LVLM) to streamline the RSVQA process.
arXiv Detail & Related papers (2024-11-16T18:32:38Z)
- IDEAL: Leveraging Infinite and Dynamic Characterizations of Large Language Models for Query-focused Summarization
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization.
We investigate two indispensable characteristics that LLM-based QFS models should harness: Lengthy Document Summarization and Efficient Fine-grained Query-LLM Alignment.
These innovations pave the way for broader application and accessibility in the field of QFS technology.
arXiv Detail & Related papers (2024-07-15T07:14:56Z)
- Large Language Models Understand Layout
Large language models (LLMs) demonstrate extraordinary abilities in a wide range of natural language processing (NLP) tasks.
We show that, beyond text understanding, LLMs can process text layouts denoted by spatial markers; a toy illustration follows this entry.
We show that layout understanding ability is beneficial for building efficient visual question-answering (VQA) systems.
arXiv Detail & Related papers (2024-07-08T09:03:12Z)
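As a toy illustration of the layout idea above: structure can reach a text-only LLM purely through spatial markers. The table, column width, and question below are invented for this sketch, not drawn from the paper.
```python
# Invented example data: a small table whose structure is conveyed to a
# text-only LLM purely through spatial markers (spaces and newlines).
rows = [["Name",   "Qty", "Price"],
        ["Apple",  "3",   "$1.20"],
        ["Banana", "12",  "$0.35"]]

def render_layout(rows, width=10):
    """Pad every cell to a fixed width so column alignment survives as text."""
    return "\n".join("".join(cell.ljust(width) for cell in row) for row in rows)

prompt = ("The table below is laid out with spaces:\n\n"
          + render_layout(rows)
          + "\n\nWhat is the price of Banana?")
print(prompt)  # a layout-aware LLM can align "Banana" with "$0.35" by column
```
- Crafting Interpretable Embeddings by Asking LLMs Questions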
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents the answer to a yes/no question asked of an LLM; a minimal sketch appears after this entry.
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
arXiv Detail & Related papers (2024-05-26T22:30:29Z)
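A toy sketch of the QA-Emb idea: each embedding dimension is an LLM's yes/no answer to a fixed question about the text. The questions and the `ask_llm` stand-in are hypothetical, not the authors' implementation.
```python
from typing import Callable, List

# Hypothetical probe questions; the paper selects its own question set.
QUESTIONS = [
    "Does the text mention a location?",
    "Does the text describe motion?",
    "Does the text involve an animal?",
]

def qa_embed(text: str, ask_llm: Callable[[str], str]) -> List[float]:
    """One interpretable 0/1 feature per yes/no question answered by the LLM."""
    vec = []
    for question in QUESTIONS:
        reply = ask_llm(f"{question}\nText: {text}\nAnswer yes or no.")
        vec.append(1.0 if reply.strip().lower().startswith("yes") else 0.0)
    return vec

# Usage with a trivial keyword stand-in for a real LLM call:
fake_llm = lambda prompt: "yes" if "dog" in prompt or "park" in prompt else "no"
print(qa_embed("A dog runs through the park.", fake_llm))  # [1.0, 1.0, 1.0]
```
- How Can Large Language Models Understand Spatial-Temporal Data?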
This paper introduces STG-LLM, an innovative approach empowering Large Language Models for spatial-temporal forecasting.
We tackle the mismatch between graph-structured data and LLM inputs by proposing two components (sketched after this entry): 1) STG-Tokenizer, a spatial-temporal graph tokenizer that transforms intricate graph data into concise tokens capturing both spatial and temporal relationships; and 2) STG-Adapter, a minimalistic adapter consisting of linear encoding and decoding layers that bridges the gap between tokenized data and LLM comprehension.
arXiv Detail & Related papers (2024-01-25T14:03:15Z)
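A hedged sketch of the adapter shape described above: a linear encoder into the LLM's embedding space and a linear decoder out of it. The dimensions and the identity stand-in for the LLM are assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class STGAdapterSketch(nn.Module):
    """Linear encode -> (frozen) LLM body -> linear decode, per the summary above."""
    def __init__(self, token_dim: int, llm_dim: int, horizon: int, llm: nn.Module):
        super().__init__()
        self.encode = nn.Linear(token_dim, llm_dim)  # graph tokens -> LLM embedding space
        self.llm = llm                               # stand-in for a pretrained LLM
        self.decode = nn.Linear(llm_dim, horizon)    # hidden states -> forecasts

    def forward(self, stg_tokens: torch.Tensor) -> torch.Tensor:
        # stg_tokens: (batch, num_nodes, token_dim), e.g. from an STG-Tokenizer step
        return self.decode(self.llm(self.encode(stg_tokens)))

# Usage with an identity stand-in for the LLM body:
adapter = STGAdapterSketch(token_dim=16, llm_dim=32, horizon=12, llm=nn.Identity())
print(adapter(torch.randn(4, 10, 16)).shape)  # torch.Size([4, 10, 12])
```
- MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning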
We introduce MuSR, a dataset for evaluating language models on soft reasoning tasks specified in a natural language narrative.
This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm.
Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning.
arXiv Detail & Related papers (2023-10-24T17:59:20Z)
- RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling
We introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE).
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal.
arXiv Detail & Related papers (2023-10-16T16:42:01Z)
- PAXQA: Generating Cross-lingual Question Answering Examples at Training Scale
PAXQA (Projecting annotations for cross-lingual (x) QA) decomposes cross-lingual QA into two stages.
We propose a novel use of lexically-constrained machine translation, in which constrained entities are extracted from the parallel bitexts.
We show that models fine-tuned on these datasets outperform prior synthetic data generation models over several extractive QA datasets.
arXiv Detail & Related papers (2023-04-24T15:46:26Z)
- Toward 3D Spatial Reasoning for Human-like Text-based Visual Question Answering
Text-based Visual Question Answering (TextVQA) aims to produce correct answers to questions about images containing multiple scene texts.
We introduce 3D geometric information into a human-like spatial reasoning process to capture key objects' contextual knowledge.
Our method achieves state-of-the-art performance on TextVQA and ST-VQA datasets.
arXiv Detail & Related papers (2022-09-21T12:49:14Z)
- Towards Complex Document Understanding By Discrete Reasoning
Document Visual Question Answering (VQA) aims to understand visually-rich documents to answer questions in natural language.
We introduce a new Document VQA dataset, named TAT-DQA, which consists of 3,067 document pages and 16,558 question-answer pairs.
We develop a novel model named MHST that takes into account the information in multi-modalities, including text, layout and visual image, to intelligently address different types of questions.
arXiv Detail & Related papers (2022-07-25T01:43:19Z)
- Robust and Interpretable Grounding of Spatial References with Relation Networks
Learning representations of spatial references in natural language is a key challenge in tasks like autonomous navigation and robotic manipulation.
Recent work has investigated various neural architectures for learning multi-modal representations for spatial concepts.
We develop effective models for understanding spatial references in text that are robust and interpretable; a generic relation-network sketch follows this entry.
arXiv Detail & Related papers (2020-05-02T04:11:33Z)
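For readers unfamiliar with the architecture family, here is a generic relation-network sketch in the style of Santoro et al.: a network g scores every ordered object pair together with the query, and f aggregates the summed relation features. Layer sizes and the pairing scheme are assumptions, not this paper's exact model.
```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Score every ordered object pair with g, sum, then predict with f."""
    def __init__(self, obj_dim: int, query_dim: int, hidden: int = 64, n_out: int = 2):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim + query_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_out))

    def forward(self, objs: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # objs: (batch, n, obj_dim); query: (batch, query_dim), e.g. an encoded instruction
        b, n, d = objs.shape
        oi = objs.unsqueeze(2).expand(b, n, n, d)                    # object i
        oj = objs.unsqueeze(1).expand(b, n, n, d)                    # object j
        q = query[:, None, None, :].expand(b, n, n, query.size(-1))  # broadcast query
        relations = self.g(torch.cat([oi, oj, q], dim=-1))           # all ordered pairs
        return self.f(relations.sum(dim=(1, 2)))                     # aggregate, predict

rn = RelationNetwork(obj_dim=8, query_dim=4)
print(rn(torch.randn(2, 5, 8), torch.randn(2, 4)).shape)  # torch.Size([2, 2])
```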
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.