Utilizing Background Knowledge for Robust Reasoning over Traffic
Situations
- URL: http://arxiv.org/abs/2212.07798v1
- Date: Sun, 4 Dec 2022 09:17:24 GMT
- Title: Utilizing Background Knowledge for Robust Reasoning over Traffic
Situations
- Authors: Jiarui Zhang, Filip Ilievski, Aravinda Kollaa, Jonathan Francis,
Kaixin Ma, Alessandro Oltramari
- Abstract summary: We focus on a complementary research aspect of Intelligent Transportation: traffic understanding.
We scope our study to text-based methods and datasets, given the abundant commonsense knowledge that can be extracted with language models from large corpora and knowledge graphs.
We adopt three knowledge-driven approaches for zero-shot QA over traffic situations.
- Score: 63.45021731775964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding novel situations in the traffic domain requires an
intricate combination of domain-specific and causal commonsense knowledge.
Prior work has provided sufficient perception-based modalities for traffic
monitoring; in this paper, we focus on a complementary research aspect of
Intelligent Transportation: traffic understanding. We scope our study to
text-based methods and datasets, given the abundant commonsense knowledge that
can be extracted with language models from large corpora and knowledge graphs.
We adopt three knowledge-driven approaches for zero-shot QA over traffic
situations, based on prior natural language inference methods, commonsense
models with knowledge graph self-supervision, and dense retriever-based models.
We construct two text-based multiple-choice question answering sets: BDD-QA,
for evaluating causal reasoning in the traffic domain, and HDT-QA, for
measuring domain knowledge akin to human driving license tests. Among the
methods, Unified-QA reaches the best performance on the BDD-QA dataset, owing
to its adaptation to multiple question-answer formats. Language models trained
with inference information and commonsense knowledge also predict cause and
effect in the traffic domain well, but perform poorly on the human
driving-test QA set; for that set, DPR+Unified-QA performs best due to its
efficient knowledge extraction.
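To make the zero-shot pipeline concrete, the sketch below shows one way a DPR+Unified-QA system can be wired together: a public UnifiedQA checkpoint scores a multiple-choice traffic question, optionally prepending a knowledge passage retrieved with DPR encoders. This is a minimal illustration under assumptions, not the paper's exact configuration: the checkpoint names, the example question, the toy passage pool, and the option-matching heuristic are all assumed for demonstration.

```python
# Minimal sketch (assumed setup): zero-shot multiple-choice QA with a UnifiedQA
# checkpoint, optionally prepending a DPR-retrieved passage (DPR+Unified-QA style).
import torch
from transformers import (
    AutoTokenizer, AutoModelForSeq2SeqLM,
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# --- Reader: UnifiedQA expects inputs like "question \n (a) opt1 (b) opt2 ..." ---
qa_name = "allenai/unifiedqa-t5-base"  # assumed public checkpoint
qa_tok = AutoTokenizer.from_pretrained(qa_name)
qa_model = AutoModelForSeq2SeqLM.from_pretrained(qa_name)

def answer_mc(question: str, options: list[str], context: str = "") -> str:
    letters = "abcdefgh"
    opts = " ".join(f"({letters[i]}) {o}" for i, o in enumerate(options))
    source = f"{question} \\n {opts}"          # literal "\n" separator, per UnifiedQA format
    if context:
        source += f" \\n {context}"
    ids = qa_tok(source, return_tensors="pt").input_ids
    out = qa_model.generate(ids, max_new_tokens=32)
    pred = qa_tok.decode(out[0], skip_special_tokens=True).strip().lower()
    # Map the generated string back to the closest option (simple word-overlap heuristic).
    return max(options, key=lambda o: len(set(o.lower().split()) & set(pred.split())))

# --- Retriever: DPR picks the most relevant knowledge passage for the question ---
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

def retrieve(question: str, passages: list[str]) -> str:
    with torch.no_grad():
        q = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                    # (1, d)
        c = c_enc(**c_tok(passages, return_tensors="pt",
                          padding=True, truncation=True)).pooler_output                    # (n, d)
    return passages[int(torch.argmax(q @ c.T))]  # highest dot-product similarity

# Hypothetical BDD-QA-style question and passage pool (illustrative, not from the datasets).
question = "The car ahead brakes suddenly. What does the driver do?"
options = ["accelerates", "slows down", "turns on the radio"]
passages = ["Drivers should slow down when the vehicle ahead brakes.",
            "Traffic lights regulate the flow of vehicles at intersections."]
print(answer_mc(question, options, context=retrieve(question, passages)))
```

In this sketch the retrieved passage is simply appended to the UnifiedQA input; other choices (scoring each option by sequence likelihood, retrieving several passages) are equally plausible under the abstract's description.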
Related papers
- Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering [71.62961521518731]
HeurVidQA is a framework that leverages domain-specific entity-actions to refine pre-trained video-language foundation models.
Our approach treats these models as implicit knowledge engines, employing domain-specific entity-action prompters to direct the model's focus toward precise cues that enhance reasoning.
arXiv Detail & Related papers (2024-10-12T06:22:23Z)
- FusionMind -- Improving question and answering with external context fusion [0.0]
We studied the impact of contextual knowledge on the question-answering (QA) objective using pre-trained language models (LMs) and knowledge graphs (KGs).
We found that incorporating knowledge-fact context led to a significant improvement in performance.
This suggests that integrating contextual knowledge facts may be particularly impactful for enhancing question-answering performance.
arXiv Detail & Related papers (2023-12-31T03:51:31Z)
- Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer [65.42096702428347]
Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build Bridged-Graph and a Graph Knowledge Transfer module.
arXiv Detail & Related papers (2023-08-18T12:14:51Z) - A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z)
- FiTs: Fine-grained Two-stage Training for Knowledge-aware Question Answering [47.495991137191425]
We propose a Fine-grained Two-stage training framework (FiTs) to boost the KAQA system performance.
The first stage aims at aligning representations from the PLM and the KG, thus bridging the modality gaps between them.
The second stage, called knowledge-aware fine-tuning, aims to improve the model's joint reasoning ability.
arXiv Detail & Related papers (2023-02-23T06:25:51Z)
- QASem Parsing: Text-to-text Modeling of QA-based Semantics [19.42681342441062]
We consider three QA-based semantic tasks, namely, QA-SRL, QANom and QADiscourse.
We release the first unified QASem parsing tool, practical for downstream applications.
arXiv Detail & Related papers (2022-05-23T15:56:07Z)
- Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering [38.071375112873675]
We propose a question-answer augmented encoder-decoder model and accompanying pretraining strategy.
This yields an end-to-end system that outperforms prior QA retrieval methods on single-hop QA tasks.
arXiv Detail & Related papers (2022-04-10T02:33:00Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.