Think Twice Before Recognizing: Large Multimodal Models for General Fine-grained Traffic Sign Recognition
- URL: http://arxiv.org/abs/2409.01534v1
- Date: Tue, 3 Sep 2024 02:08:47 GMT
- Title: Think Twice Before Recognizing: Large Multimodal Models for General Fine-grained Traffic Sign Recognition
- Authors: Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
- Abstract summary: We propose a new strategy called "think twice before recognizing" to improve fine-grained traffic sign recognition (TSR).
Our strategy achieves effective fine-grained TSR by stimulating the multiple-thinking capability of large multimodal models (LMM).
- Score: 49.20086587208214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new strategy called "think twice before recognizing" to improve fine-grained traffic sign recognition (TSR). Fine-grained TSR in the wild is difficult due to complex road conditions, and existing approaches particularly struggle with cross-country TSR when data is lacking. Our strategy achieves effective fine-grained TSR by stimulating the multiple-thinking capability of large multimodal models (LMM). We introduce context, characteristic, and differential descriptions to design multiple thinking processes for the LMM. The context descriptions with center coordinate prompt optimization help the LMM to locate the target traffic sign in the original road images containing multiple traffic signs and filter irrelevant answers through the proposed prior traffic sign hypothesis. The characteristic description is based on few-shot in-context learning of template traffic signs, which decreases the cross-domain difference and enhances the fine-grained recognition capability of the LMM. The differential descriptions of similar traffic signs optimize the multimodal thinking capability of the LMM. The proposed method is independent of training data and requires only simple and uniform instructions. We conducted extensive experiments on three benchmark datasets and two real-world datasets from different countries, and the proposed method achieves state-of-the-art TSR results on all five datasets.
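To make the three-stage flow concrete, here is a minimal Python sketch of how the context, characteristic, and differential prompts could be chained. The `query_lmm` helper, the prompt wording, and all other names are illustrative assumptions for this sketch, not the authors' released code.

```python
# Minimal sketch of the "think twice before recognizing" prompting flow.
# query_lmm is a placeholder for whatever large multimodal model is used.

def query_lmm(images, prompt):
    """Placeholder for a call to a large multimodal model (e.g., via an API)."""
    raise NotImplementedError

def recognize_traffic_sign(road_image, sign_center_xy, template_signs):
    # Stage 1: context description. The center-coordinate prompt steers the
    # LMM toward the target sign when several signs appear in the image.
    context = query_lmm(
        [road_image],
        f"A traffic sign is centered at pixel {sign_center_xy}. "
        "Describe the road context around this sign (road type, lanes, "
        "surroundings). If no sign is visible there, answer 'no sign'.",
    )
    if "no sign" in context.lower():
        return None  # prior traffic-sign hypothesis filters irrelevant answers

    # Stage 2: characteristic description via few-shot in-context learning on
    # template (standard) traffic signs, narrowing the gap between clean
    # templates and in-the-wild photos.
    characteristics = query_lmm(
        template_signs + [road_image],
        "The first images are template traffic signs with known categories. "
        f"Describe the shape, color, and symbol of the sign at {sign_center_xy} "
        "in the last image in the same style.",
    )

    # Stage 3: differential description of similar signs, then the final
    # fine-grained decision conditioned on all three descriptions.
    return query_lmm(
        [road_image],
        "Context: " + context + "\nCharacteristics: " + characteristics +
        "\nList the differences from visually similar sign categories, "
        "then output the single most likely fine-grained sign category.",
    )
```

In this sketch each stage's answer is fed into the next prompt, so the final fine-grained decision is conditioned on all three descriptions rather than a single pass over the image.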
Related papers
- Strada-LLM: Graph LLM for traffic prediction [62.2015839597764]
A considerable challenge in traffic prediction lies in handling the diverse data distributions caused by vastly different traffic conditions.
We propose a graph-aware LLM for traffic prediction that considers proximal traffic information.
We adopt a lightweight approach for efficient domain adaptation when facing new data distributions in a few-shot fashion.
arXiv Detail & Related papers (2024-10-28T09:19:29Z) - Cross-domain Few-shot In-context Learning for Enhancing Traffic Sign Recognition [49.20086587208214]
We propose a cross-domain few-shot in-context learning method based on the MLLM for enhancing traffic sign recognition.
By using description texts, our method reduces the cross-domain differences between template and real traffic signs.
Our approach requires only simple and uniform textual indications, without the need for large-scale traffic sign images and labels.
arXiv Detail & Related papers (2024-07-08T10:51:03Z) - Revolutionizing Traffic Sign Recognition: Unveiling the Potential of Vision Transformers [0.0]
Traffic Sign Recognition (TSR) holds a vital role in advancing driver assistance systems and autonomous vehicles.
This study explores three variants of Vision Transformers (PVT, TNT, LNL) and six convolutional neural networks (AlexNet, ResNet, VGG16, MobileNet, EfficientNet, GoogleNet) as baseline models.
To address the shortcomings of traditional methods, a novel pyramid EATFormer backbone is proposed, amalgamating Evolutionary Algorithms (EAs) with the Transformer architecture.
arXiv Detail & Related papers (2024-04-29T19:18:52Z) - Online Multi-Task Learning with Recursive Least Squares and Recursive Kernel Methods [50.67996219968513]
We introduce two novel approaches for Online Multi-Task Learning (MTL) Regression Problems.
We achieve exact and approximate recursions with quadratic per-instance cost on the dimension of the input space.
We compare our online MTL methods to other contenders in a real-world wind speed forecasting case study.
arXiv Detail & Related papers (2023-08-03T01:41:34Z) - Short-term passenger flow prediction for multi-traffic modes: A residual network and Transformer based multi-task learning method [21.13073816634534]
Res-Transformer is a learning model for short-term passenger flow prediction of multi-traffic modes.
The model is evaluated on two large-scale real-world datasets from Beijing, China.
This paper gives critical insights into short-term passenger flow prediction for multi-traffic modes.
arXiv Detail & Related papers (2022-02-27T01:09:19Z) - Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z) - Physics-Informed Deep Learning for Traffic State Estimation [3.779860024918729]
Traffic state estimation (TSE) reconstructs the traffic variables (e.g., density) on road segments using partially observed data.
This paper introduces a physics-informed deep learning (PIDL) framework to efficiently conduct high-quality TSE with small amounts of observed data; a generic sketch of this idea appears after this list.
arXiv Detail & Related papers (2021-01-17T03:28:32Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z)
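For the physics-informed deep learning entry above, the sketch below illustrates the general PIDL idea in PyTorch under assumed choices (the LWR conservation law with a Greenshields flux, illustrative constants, and a small fully connected network). It is a generic sketch of the technique, not that paper's implementation.

```python
# Generic PIDL loss for traffic state estimation: fit sparse density
# observations while penalizing the residual of the LWR conservation law
# rho_t + d/dx [rho * v_f * (1 - rho/rho_max)] = 0 (Greenshields flux).
import torch
import torch.nn as nn

V_F, RHO_MAX = 30.0, 0.2  # assumed free-flow speed (m/s) and jam density (veh/m)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))  # input columns: [x, t] -> density rho

def physics_residual(xt):
    """PDE residual at collocation points, via automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    rho = net(xt)
    flux = rho * V_F * (1.0 - rho / RHO_MAX)
    drho = torch.autograd.grad(rho.sum(), xt, create_graph=True)[0]
    dflux = torch.autograd.grad(flux.sum(), xt, create_graph=True)[0]
    return drho[:, 1:2] + dflux[:, 0:1]  # rho_t + flux_x

def pidl_loss(xt_obs, rho_obs, xt_collocation, lam=1.0):
    data_loss = ((net(xt_obs) - rho_obs) ** 2).mean()          # fit sparse sensors
    phys_loss = (physics_residual(xt_collocation) ** 2).mean()  # obey the PDE
    return data_loss + lam * phys_loss
```

The weight lam trades off fitting the sparse sensor data against satisfying the conservation law at unlabeled collocation points, which is what lets a PIDL-style model work with small amounts of observed data.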
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.