Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns
- URL: http://arxiv.org/abs/2406.09203v1
- Date: Thu, 13 Jun 2024 15:00:17 GMT
- Title: Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns
- Authors: Kaavya Rekanar, Martin Hayes, Ganesh Sistu, Ciaran Eising
- Abstract summary: This study investigates the attention patterns of humans compared to a VQA model when answering driving-related questions.
We propose an approach integrating filters to optimize the model's attention mechanisms, prioritizing relevant objects and improving accuracy.
- Score: 1.3781842574516934
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Visual Question Answering (VQA) models play a critical role in enhancing the perception capabilities of autonomous driving systems by allowing vehicles to analyze visual inputs alongside textual queries, fostering natural interaction and trust between the vehicle and its occupants or other road users. This study investigates the attention patterns of humans compared to a VQA model when answering driving-related questions, revealing disparities in the objects observed. We propose an approach that integrates filters to optimize the model's attention mechanisms, prioritizing relevant objects and improving accuracy. Using the LXMERT model as a case study, we compare the attention patterns of the pre-trained and Filter Integrated models, alongside human answers, on images from the NuImages dataset, gaining insights into feature prioritization. We evaluated the models using a subjective scoring framework, which shows that integrating the feature encoder filter enhances the performance of the VQA model by refining its attention mechanisms.
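The abstract does not spell out how the filter is wired into the model's attention, so the sketch below illustrates one plausible reading: driving-relevant regions (e.g. vehicles, pedestrians, traffic signs) receive an additive boost to the attention logits before the softmax. The function name, relevance_mask, and alpha are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def filtered_attention(q, k, v, relevance_mask, alpha=0.5):
    """Minimal sketch (assumed form, not the paper's method): re-weight
    attention toward image regions flagged as driving-relevant.

    q, k, v:        (batch, heads, seq, dim) query/key/value tensors
    relevance_mask: (batch, 1, 1, seq) tensor, 1.0 for relevant regions
                    (vehicles, pedestrians, signs), 0.0 otherwise
    alpha:          hypothetical strength of the relevance prior
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # raw attention logits
    scores = scores + alpha * relevance_mask      # boost relevant regions
    weights = torch.softmax(scores, dim=-1)       # normalized attention
    return weights @ v, weights
```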
Related papers
- Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering [71.62961521518731]
HeurVidQA is a framework that leverages domain-specific entity-actions to refine pre-trained video-language foundation models.
Our approach treats these models as implicit knowledge engines, employing domain-specific entity-action prompters to direct the model's focus toward precise cues that enhance reasoning.
arXiv Detail & Related papers (2024-10-12T06:22:23Z)
- Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement [102.22911097049953]
SIMA is a framework that enhances visual and language modality alignment through self-improvement.
It employs an in-context self-critic mechanism to select response pairs for preference tuning.
We demonstrate that SIMA achieves superior modality alignment, outperforming previous approaches.
arXiv Detail & Related papers (2024-05-24T23:09:27Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Deciphering AutoML Ensembles: cattleia's Assistance in Decision-Making [0.0]
Cattleia is an application that deciphers the ensembles for regression, multiclass, and binary classification tasks.
It works with models built by three AutoML packages: auto-sklearn, AutoGluon, and FLAML.
arXiv Detail & Related papers (2024-03-19T11:56:21Z)
- Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving [2.9552300389898094]
This paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT.
The performance of these models is evaluated by comparing the similarity of responses to reference answers provided by computer vision experts.
arXiv Detail & Related papers (2023-07-18T15:11:40Z)
- Smooth-Trajectron++: Augmenting the Trajectron++ behaviour prediction model with smooth attention [0.0]
This work investigates the state-of-the-art trajectory forecasting model Trajectron++, which we enhance by incorporating a smoothing term in its attention module.
This attention mechanism mimics human attention, inspired by cognitive science research indicating limits to attention switching.
We evaluate the performance of the resulting Smooth-Trajectron++ model and compare it to the original model on various benchmarks.
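The summary does not give the exact form of the smoothing term, so the sketch below shows one generic way to discourage abrupt attention switching: penalizing the change in attention weights between consecutive time steps. The tensor layout and lam are assumptions, not necessarily the formulation used in Smooth-Trajectron++.

```python
import torch

def attention_smoothness_penalty(attn_weights, lam=0.1):
    """Illustrative smoothing term (assumed form): penalize large changes in
    attention between consecutive time steps, mimicking the cost humans pay
    when switching attention.

    attn_weights: (batch, time, num_agents) softmax attention over agents
    lam:          hypothetical weight of the smoothness term
    """
    diffs = attn_weights[:, 1:, :] - attn_weights[:, :-1, :]  # step-to-step change
    return lam * (diffs ** 2).sum(dim=-1).mean()
```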
arXiv Detail & Related papers (2023-05-31T09:19:55Z)
- AttentionViz: A Global View of Transformer Attention [60.82904477362676]
We present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers.
The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention.
We create an interactive visualization tool, AttentionViz, based on these joint query-key embeddings.
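As a rough illustration of the joint query-key embedding idea, the sketch below stacks the query and key vectors of one attention head and projects them into a shared 2-D space. Using t-SNE here, as well as the function signature, is an assumption for illustration rather than the exact pipeline behind AttentionViz.

```python
import numpy as np
from sklearn.manifold import TSNE

def joint_query_key_embedding(queries, keys):
    """Sketch: embed queries and keys of one head in a shared 2-D space so
    that proximity roughly reflects attention affinity.

    queries, keys: (num_tokens, head_dim) arrays from a single attention head
                   (num_tokens should exceed the t-SNE perplexity below)
    """
    stacked = np.concatenate([queries, keys], axis=0)  # joint set of vectors
    coords = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(stacked)
    n = queries.shape[0]
    return coords[:n], coords[n:]  # 2-D coordinates for queries, then keys
```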
arXiv Detail & Related papers (2023-05-04T23:46:49Z)
- Top-Down Visual Attention from Analysis by Synthesis [87.47527557366593]
We consider top-down attention from a classic Analysis-by-Synthesis (AbS) perspective of vision.
We propose the Analysis-by-Synthesis Vision Transformer (AbSViT), a top-down modulated ViT model that variationally approximates AbS and achieves controllable top-down attention.
arXiv Detail & Related papers (2023-03-23T05:17:05Z)
- VisQA: X-raying Vision and Language Reasoning in Transformers [10.439369423744708]
Recent research has shown that state-of-the-art models tend to produce answers exploiting biases and shortcuts in the training data.
We present VisQA, a visual analytics tool that explores the question of reasoning vs. bias exploitation.
arXiv Detail & Related papers (2021-04-02T08:08:25Z)
- SparseBERT: Rethinking the Importance Analysis in Self-attention [107.68072039537311]
Transformer-based models are popular for natural language processing (NLP) tasks due to their powerful capacity.
Visualizing the attention maps of a pre-trained model is one direct way to understand the self-attention mechanism.
We propose a Differentiable Attention Mask (DAM) algorithm, which can also be applied to guide the design of SparseBERT.
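The summary leaves the exact form of DAM unspecified; one generic way to make an attention mask differentiable is to give each query-key pair a learnable gate, squash it through a sigmoid, and add its logarithm to the attention logits so the sparsity pattern can be learned by gradient descent. The PyTorch sketch below shows that generic construction; the class name and parameterization are illustrative assumptions, not the algorithm from the paper.

```python
import torch
import torch.nn as nn

class DifferentiableAttentionMask(nn.Module):
    """Sketch of a learnable soft attention mask (assumed form, not the exact
    DAM algorithm): each query-key pair gets a gate in (0, 1) applied to its
    attention logit, learned end to end with the rest of the model."""

    def __init__(self, max_len):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(max_len, max_len))  # mask parameters

    def forward(self, attn_scores):
        # attn_scores: (batch, heads, seq, seq) raw attention logits
        seq = attn_scores.size(-1)
        gate = torch.sigmoid(self.logits[:seq, :seq])  # soft mask in (0, 1)
        # adding log(gate) scales each pair's unnormalized attention by the gate
        return attn_scores + torch.log(gate + 1e-9)
```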
arXiv Detail & Related papers (2021-02-25T14:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.