DGNN-YOLO: Interpretable Dynamic Graph Neural Networks with YOLO11 for Detecting and Tracking Small Occluded Objects in Urban Traffic
- URL: http://arxiv.org/abs/2411.17251v5
- Date: Thu, 09 Jan 2025 12:28:55 GMT
- Authors: Shahriar Soudeep, M. F. Mridha, Md Abrar Jahin, Nilanjan Dey
- Abstract summary: This paper introduces DGNN-YOLO, a novel framework that integrates dynamic graph neural networks (DGNNs) with YOLO11 to address these limitations.
Unlike standard GNNs, DGNNs are chosen for their superior ability to dynamically update graph structures in real-time.
This framework constructs and regularly updates its graph representations, capturing objects as nodes and their interactions as edges.
- Score: 2.0681376988193843
- License:
- Abstract: The detection and tracking of small, occluded objects such as pedestrians, cyclists, and motorbikes pose significant challenges for traffic surveillance systems because of their erratic movement, frequent occlusion, and poor visibility in dynamic urban environments. Traditional methods like YOLO11, while proficient in spatial feature extraction for precise detection, often struggle with these small and dynamically moving objects, particularly in handling real-time data updates and resource efficiency. This paper introduces DGNN-YOLO, a novel framework that integrates dynamic graph neural networks (DGNNs) with YOLO11 to address these limitations. Unlike standard GNNs, DGNNs are chosen for their superior ability to dynamically update graph structures in real-time, which enables adaptive and robust tracking of objects in highly variable urban traffic scenarios. This framework constructs and regularly updates its graph representations, capturing objects as nodes and their interactions as edges, thus effectively responding to rapidly changing conditions. Additionally, DGNN-YOLO incorporates Grad-CAM, Grad-CAM++, and Eigen-CAM visualization techniques to enhance interpretability and foster trust, offering insights into the model's decision-making process. Extensive experiments validate the framework's performance, achieving a precision of 0.8382, recall of 0.6875, and mAP@0.5:0.95 of 0.6476, significantly outperforming existing methods. This study offers a scalable and interpretable solution for real-time traffic surveillance and significantly advances intelligent transportation systems' capabilities by addressing the critical challenge of detecting and tracking small, occluded objects.
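The abstract describes the core mechanism as building a per-frame graph over detections (objects as nodes, interactions as edges) and updating it continuously. The sketch below is an illustrative reconstruction of that idea only, not the authors' code: the proximity criterion, distance threshold, detection tuple layout, and function names are all assumptions.

```python
import math

def build_frame_graph(detections, dist_thresh=50.0):
    """Build a simple interaction graph for one video frame.

    detections: list of (x_center, y_center, w, h, class_id) tuples,
    e.g. as produced by a YOLO-style detector.
    Returns (nodes, edges): nodes map a detection index to its tuple;
    edges connect detections whose centers lie within dist_thresh pixels,
    a simple stand-in for "interaction" between nearby objects.
    """
    nodes = {i: det for i, det in enumerate(detections)}
    edges = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            xi, yi = detections[i][0], detections[i][1]
            xj, yj = detections[j][0], detections[j][1]
            if math.hypot(xi - xj, yi - yj) <= dist_thresh:
                edges.append((i, j))
    return nodes, edges

def update_graph(new_detections, dist_thresh=50.0):
    # A dynamic GNN rebuilds (or incrementally updates) the graph on
    # every frame as objects move; this sketch simply rebuilds it.
    return build_frame_graph(new_detections, dist_thresh)
```

In the actual framework the node features would be learned embeddings and the edge structure would feed a DGNN for tracking; the sketch shows only the graph-construction skeleton.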
Related papers
- Virtual Nodes Improve Long-term Traffic Prediction [9.125554921271338]
This study introduces a novel framework that incorporates virtual nodes, which are additional nodes added to the graph and connected to existing nodes.
Our proposed model incorporates virtual nodes by constructing a semi-adaptive adjacency matrix.
Experimental results demonstrate that the inclusion of virtual nodes significantly enhances long-term prediction accuracy.
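The virtual-node idea above can be sketched as an adjacency-matrix augmentation. This is a minimal static illustration only; the paper's semi-adaptive adjacency matrix is learned, and the helper name and unit edge weights here are assumptions.

```python
def add_virtual_node(adj):
    """Append one virtual node connected to every existing node.

    adj: square adjacency matrix as a list of lists (0/1 or weights).
    Returns a new (n+1) x (n+1) matrix whose last row/column is the
    virtual node, linked to all original nodes. The virtual node gives
    every pair of nodes a two-hop path, shortening long-range message
    passing across the graph.
    """
    n = len(adj)
    out = [row[:] + [1.0] for row in adj]   # connect each node to the virtual node
    out.append([1.0] * n + [0.0])           # virtual-node row (no self-loop)
    return out
```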
arXiv Detail & Related papers (2025-01-17T09:09:01Z) - CREST: An Efficient Conjointly-trained Spike-driven Framework for Event-based Object Detection Exploiting Spatiotemporal Dynamics [7.696109414724968]
Spiking neural networks (SNNs) are promising for event-based object recognition and detection.
Existing SNN frameworks often fail to handle multi-scale spatio-temporal features, leading to increased data redundancy and reduced accuracy.
We propose CREST, a novel conjointly-trained spike-driven framework for event-based object detection.
arXiv Detail & Related papers (2024-12-17T04:33:31Z) - NEST: A Neuromodulated Small-world Hypergraph Trajectory Prediction Model for Autonomous Driving [15.17856086804651]
NEST (Neuromodulated Small-world Hypergraph Trajectory Prediction) is a novel framework that integrates Small-world Networks and hypergraphs for superior interaction modeling and prediction accuracy.
We validate the NEST model on several real-world datasets, including nuScenes, MoCAD, and HighD.
arXiv Detail & Related papers (2024-12-16T11:49:12Z) - Deep Learning and Hybrid Approaches for Dynamic Scene Analysis, Object Detection and Motion Tracking [0.0]
This project aims to develop a robust video surveillance system, which can segment videos into smaller clips based on the detection of activities.
It uses CCTV footage, for example, to record only major events, such as the appearance of a person or a thief, so that storage is optimized and digital searches are easier.
arXiv Detail & Related papers (2024-12-05T07:44:40Z) - 3D Multi-Object Tracking with Semi-Supervised GRU-Kalman Filter [6.13623925528906]
3D Multi-Object Tracking (MOT) is essential for intelligent systems like autonomous driving and robotic sensing.
We propose a GRU-based MOT method, which introduces a learnable Kalman filter into the motion module.
This approach is able to learn object motion characteristics through data-driven learning, thereby avoiding the need for manual motion-model design and reducing model error.
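A plain Kalman predict/update cycle helps show what the "learnable" part replaces. The scalar sketch below is illustrative only: in a GRU-Kalman hybrid like the cited paper, the noise terms (or the gain itself) would be produced by a learned GRU rather than hand-tuned constants, and all names here are assumptions.

```python
def kalman_1d_step(x, p, z, q=1e-2, r=1e-1):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p: prior state estimate and its variance; z: new measurement.
    q, r: process and measurement noise variances. A learnable variant
    would regress q and r (or the gain k) from data with a GRU instead
    of fixing them by hand.
    """
    # Predict (identity motion model for simplicity)
    x_pred, p_pred = x, p + q
    # Update
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # correct toward the measurement
    p_new = (1.0 - k) * p_pred         # shrink uncertainty
    return x_new, p_new
```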
arXiv Detail & Related papers (2024-11-13T08:34:07Z) - Improving Traffic Flow Predictions with SGCN-LSTM: A Hybrid Model for Spatial and Temporal Dependencies [55.2480439325792]
This paper introduces the Signal-Enhanced Graph Convolutional Network Long Short Term Memory (SGCN-LSTM) model for predicting traffic speeds across road networks.
Experiments on the PEMS-BAY road network traffic dataset demonstrate the SGCN-LSTM model's effectiveness.
arXiv Detail & Related papers (2024-11-01T00:37:00Z) - OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to the high level of latency.
In this paper, we explore the use of the neural architecture search (NAS) methods to search for efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z) - MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z) - PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction [78.05103666987655]
Spatial-temporal graph neural network (GNN) models have emerged as one of the most promising methods to solve this problem.
We propose PDFormer, a novel propagation delay-aware dynamic long-range Transformer for accurate traffic flow prediction.
Our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency.
arXiv Detail & Related papers (2023-01-19T08:42:40Z) - Constructing Geographic and Long-term Temporal Graph for Traffic Forecasting [88.5550074808201]
We propose the Geographic and Long-term Temporal Graph Convolutional Recurrent Neural Network (GLT-GCRNN), a traffic forecasting framework that learns the rich interactions between roads sharing similar geographic or long-term temporal patterns.
arXiv Detail & Related papers (2020-04-23T03:50:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.