Multimodal Learning for Just-In-Time Software Defect Prediction in Autonomous Driving Systems
- URL: http://arxiv.org/abs/2502.20806v1
- Date: Fri, 28 Feb 2025 07:45:10 GMT
- Title: Multimodal Learning for Just-In-Time Software Defect Prediction in Autonomous Driving Systems
- Authors: Faisal Mohammad, Duksan Ryu
- Abstract summary: This paper proposes a novel approach for just-in-time software defect prediction (JIT-SDP) in autonomous driving software systems using multimodal learning. Our findings highlight the potential of multimodal learning to enhance the reliability and safety of autonomous driving software.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the rise of autonomous driving technologies has highlighted the critical importance of reliable software for ensuring safety and performance. This paper proposes a novel approach for just-in-time software defect prediction (JIT-SDP) in autonomous driving software systems using multimodal learning. The proposed model leverages multimodal transformers, in which pre-trained transformers and a combining module handle the multiple data modalities of software system datasets, such as code features, change metrics, and contextual information. The key to adapting multimodal learning is applying the attention mechanism across the different data modalities: text, numerical, and categorical. In the combining module, the output of a transformer model on text data is combined with tabular features containing categorical and numerical data, and fully connected layers produce the predictions. Experiments conducted on three open-source autonomous driving system software projects collected from GitHub repositories (Apollo, Carla, and Donkeycar) demonstrate that the proposed approach significantly outperforms state-of-the-art deep learning and machine learning models across evaluation metrics. Our findings highlight the potential of multimodal learning to enhance the reliability and safety of autonomous driving software through improved defect prediction.
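For a concrete picture of the combining module described in the abstract, below is a minimal PyTorch-style sketch of a text-plus-tabular defect predictor. It is not the authors' implementation: the choice of CodeBERT as the pre-trained text encoder, the feature dimensions and categorical cardinalities, and the simple concatenation of modalities (the paper additionally uses attention between modalities) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class MultimodalDefectPredictor(nn.Module):
    """Sketch of a JIT-SDP model: a pre-trained text transformer encodes the
    commit message / code change, and its pooled output is concatenated with
    tabular (numerical + categorical) change metrics before fully connected
    layers produce the defect prediction."""

    def __init__(self, text_model_name="microsoft/codebert-base",
                 num_numerical=14, cat_cardinalities=(10, 5),
                 cat_embed_dim=8, hidden_dim=128):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model_name)
        text_dim = self.text_encoder.config.hidden_size
        # One embedding table per categorical feature (cardinalities are assumed)
        self.cat_embeddings = nn.ModuleList(
            [nn.Embedding(card, cat_embed_dim) for card in cat_cardinalities]
        )
        tabular_dim = num_numerical + cat_embed_dim * len(cat_cardinalities)
        # Combining module: fully connected layers over the fused representation
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + tabular_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, 1),  # defect-inducing vs. clean change
        )

    def forward(self, input_ids, attention_mask, numerical, categorical):
        # Encode the textual modality (e.g. tokenized commit message or diff)
        text_out = self.text_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask)
        text_vec = text_out.last_hidden_state[:, 0]  # [CLS]-style pooled vector
        # Embed each categorical feature and fuse all modalities by concatenation
        cat_vecs = [emb(categorical[:, i])
                    for i, emb in enumerate(self.cat_embeddings)]
        fused = torch.cat([text_vec, numerical] + cat_vecs, dim=-1)
        return self.classifier(fused)  # one logit per software change
```

In this sketch, each software change would supply a tokenized commit message or diff as the text input and its change metrics (e.g. lines added/deleted) as the tabular input; the single output logit is thresholded to flag the change as defect-inducing or clean.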
Related papers
- SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models [63.71984266104757]
Multimodal Large Language Models (MLLMs) can process both visual and textual data.
We propose SafeAuto, a novel framework that enhances MLLM-based autonomous driving systems by incorporating both unstructured and structured knowledge.
arXiv Detail & Related papers (2025-02-28T21:53:47Z) - A Survey of World Models for Autonomous Driving [63.33363128964687]
Recent breakthroughs in autonomous driving have been propelled by advances in robust world modeling. This paper systematically reviews recent advances in world models for autonomous driving.
arXiv Detail & Related papers (2025-01-20T04:00:02Z) - Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z) - Mutual Information Analysis in Multimodal Learning Systems [3.3748750222488657]
Well-known examples include autonomous vehicles, audiovisual generative systems, vision-language systems, and so on.
Such systems integrate multiple signal modalities: text, speech, images, video, LiDAR, etc., to perform various tasks.
A key issue for understanding such systems is the relationship between various modalities and how it impacts task performance.
We employ the concept of mutual information (MI) to gain insight into this issue.
arXiv Detail & Related papers (2024-05-21T02:16:16Z) - NeuroFlow: Development of lightweight and efficient model integration
scheduling strategy for autonomous driving system [0.0]
This paper proposes a specialized autonomous driving system that takes into account the unique constraints and characteristics of automotive systems.
The proposed system systematically analyzes the intricate data flow in autonomous driving and provides functionality to dynamically adjust various factors that influence deep learning models.
arXiv Detail & Related papers (2023-12-15T07:51:20Z) - Open-sourced Data Ecosystem in Autonomous Driving: the Present and Future [130.87142103774752]
This review systematically assesses over seventy open-source autonomous driving datasets.
It offers insights into various aspects, such as the principles underlying the creation of high-quality datasets.
It also delves into the scientific and technical challenges that warrant resolution.
arXiv Detail & Related papers (2023-12-06T10:46:53Z) - Drive Anywhere: Generalizable End-to-end Autonomous Driving with
Multi-modal Foundation Models [114.69732301904419]
We present an approach for end-to-end open-set (any environment/scene) autonomous driving that can provide driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z) - Modelling Concurrency Bugs Using Machine Learning [0.0]
This project aims to compare both common and recent machine learning approaches.
We define a synthetic dataset, generated with the aim of simulating real-life (concurrent) programs.
We formulate hypotheses about fundamental limits of various machine learning model types.
arXiv Detail & Related papers (2023-05-08T17:30:24Z) - Ensemble Learning for Fusion of Multiview Vision with Occlusion and
Missing Information: Framework and Evaluations with Real-World Data and
Applications in Driver Hand Activity Recognition [0.0]
Multi-sensor frameworks provide opportunities for ensemble learning and sensor fusion.
We propose and analyze an imputation scheme to handle missing information.
We show that a late-fusion approach between parallel convolutional neural networks can outperform even the best-placed single camera model.
arXiv Detail & Related papers (2023-01-30T00:24:27Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning empowered connected autonomous vehicle (FLCAV) has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results, but in the presence of concept drift, detection or adaptation techniques must be applied to maintain predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z) - A Multi-Modal States based Vehicle Descriptor and Dilated Convolutional
Social Pooling for Vehicle Trajectory Prediction [3.131740922192114]
We propose a vehicle-descriptor based LSTM model with dilated convolutional social pooling (VD+DCS-LSTM) to cope with these issues.
Each vehicle's multi-modal state information is employed as our model's input.
The validity of the overall model was verified over the NGSIM US-101 and I-80 datasets.
arXiv Detail & Related papers (2020-03-07T01:23:20Z)