Comparative Study of Deep Learning Architectures for Textual Damage Level Classification
- URL: http://arxiv.org/abs/2501.01694v1
- Date: Fri, 03 Jan 2025 08:23:29 GMT
- Title: Comparative Study of Deep Learning Architectures for Textual Damage Level Classification
- Authors: Aziida Nanyonga, Hassan Wasswa, Graham Wild
- Abstract summary: This study aims to leverage Natural Language Processing (NLP) and deep learning models to analyze unstructured text narratives.
Using LSTM, BLSTM, GRU, and sRNN deep learning models, we classify the aircraft damage level incurred during safety occurrences.
The sRNN model emerged as the top performer in terms of recall and accuracy, boasting a remarkable 89%.
- Score: 0.0
- Abstract: Given the paramount importance of safety in the aviation industry, even minor operational anomalies can have significant consequences. Comprehensive documentation of incidents and accidents serves to identify root causes and propose safety measures. However, the unstructured nature of incident event narratives poses a challenge for computer systems to interpret. Our study aimed to leverage Natural Language Processing (NLP) and deep learning models to analyze these narratives and classify the aircraft damage level incurred during safety occurrences. Through the implementation of LSTM, BLSTM, GRU, and sRNN deep learning models, our research yielded promising results, with all models showcasing competitive performance and achieving an accuracy of over 88%, significantly surpassing the 25% random guess threshold for a four-class classification problem. Notably, the sRNN model emerged as the top performer in terms of recall and accuracy, boasting a remarkable 89%. These findings underscore the potential of NLP and deep learning models in extracting actionable insights from unstructured text narratives, particularly in evaluating the extent of aircraft damage within the realm of aviation safety occurrences.
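The abstract names the four recurrent architectures compared (LSTM, BLSTM, GRU, sRNN) but gives no implementation details. The sketch below is a minimal illustration of such a four-class text classifier in Keras, not the authors' code: the vocabulary size, sequence length, embedding dimension, and layer widths are assumptions, and sRNN is approximated here with Keras's SimpleRNN layer.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumption: not specified in the paper
MAX_LEN = 200         # assumption: narratives tokenized and padded to this length
NUM_CLASSES = 4       # four damage-level classes, per the abstract

def build_classifier(rnn_layer):
    """Wrap a given recurrent layer in a simple embedding -> RNN -> softmax classifier."""
    return models.Sequential([
        layers.Input(shape=(MAX_LEN,)),          # integer token ids
        layers.Embedding(VOCAB_SIZE, 128),       # learned word embeddings
        rnn_layer,                               # the architecture under comparison
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# One model per architecture compared in the study.
variants = {
    "LSTM": layers.LSTM(64),
    "BLSTM": layers.Bidirectional(layers.LSTM(64)),
    "GRU": layers.GRU(64),
    "sRNN": layers.SimpleRNN(64),
}

for name, rnn in variants.items():
    model = build_classifier(rnn)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
```

In this setup the narratives would first be tokenized and padded to MAX_LEN (for example with a Keras TextVectorization layer); only the recurrent layer changes between runs, so any performance difference can be attributed to the architecture rather than the surrounding pipeline.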
Related papers
- Phase of Flight Classification in Aviation Safety using LSTM, GRU, and BiLSTM: A Case Study with ASN Dataset [0.0]
The research aims to determine whether the phase of flight can be inferred from narratives of post-accident events using NLP techniques.
The classification performance of various deep learning models was evaluated.
arXiv Detail & Related papers (2025-01-14T08:26:58Z) - Natural Language Processing and Deep Learning Models to Classify Phase of Flight in Aviation Safety Occurrences [14.379311972506791]
Researchers applied natural language processing (NLP) and artificial intelligence (AI) models to process text narratives to classify the flight phases of safety occurrences.
The classification performance of two deep learning models, ResNet and sRNN, was evaluated using an initial dataset of 27,000 safety occurrence reports from the NTSB.
arXiv Detail & Related papers (2025-01-11T15:02:49Z) - Sequential Classification of Aviation Safety Occurrences with Natural Language Processing [14.379311972506791]
The ability to classify and categorise safety occurrences would help aviation industry stakeholders make informed safety-critical decisions.
The classification performance of various deep learning models was evaluated on a set of 27,000 safety occurrence reports from the NTSB.
arXiv Detail & Related papers (2025-01-11T09:23:55Z) - Analyzing Aviation Safety Narratives with LDA, NMF and PLSA: A Case Study Using Socrata Datasets [0.0]
This study explores the application of topic modelling techniques on the Socrata dataset spanning from 1908 to 2009.
The analysis identified key themes such as pilot error, mechanical failure, weather conditions, and training deficiencies.
Future directions include integrating additional contextual variables, leveraging neural topic models, and enhancing aviation safety protocols.
arXiv Detail & Related papers (2025-01-03T08:14:39Z) - Classification of Operational Records in Aviation Using Deep Learning Approaches [0.0]
This study evaluates the performance of four different deep learning (DL) models in a classification task involving Commercial, Military, and Private categories.
Among the models, BLSTM achieved the highest overall accuracy of 72%, demonstrating superior performance in stability and balanced classification.
CNN and sRNN exhibited lower accuracies of 67% and 69%, with significant misclassifications in the Private class.
arXiv Detail & Related papers (2025-01-02T12:12:02Z) - Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration [74.09687562334682]
We introduce a novel training data attribution method called Debias and Denoise Attribution (DDA).
Our method significantly outperforms existing approaches, achieving an averaged AUC of 91.64%.
DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.
arXiv Detail & Related papers (2024-10-02T07:14:26Z) - The BRAVO Semantic Segmentation Challenge Results in UNCV2024 [68.20197719071436]
We define two categories of reliability: (1) semantic reliability, which reflects the model's accuracy and calibration when exposed to various perturbations; and (2) OOD reliability, which measures the model's ability to detect object classes that are unknown during training.
The results reveal interesting insights into the importance of large-scale pre-training and minimal architectural design in developing robust and reliable semantic segmentation models.
arXiv Detail & Related papers (2024-09-23T15:17:30Z) - Learning Traffic Crashes as Language: Datasets, Benchmarks, and What-if Causal Analyses [76.59021017301127]
We propose a large-scale traffic crash language dataset, named CrashEvent, summarizing 19,340 real-world crash reports.
We further formulate the crash event feature learning as a novel text reasoning problem and fine-tune various large language models (LLMs) to predict detailed accident outcomes.
Our experimental results show that our LLM-based approach not only predicts the severity of accidents but also classifies different types of accidents and predicts injury outcomes.
arXiv Detail & Related papers (2024-06-16T03:10:16Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Certified Robustness Against Natural Language Attacks by Causal Intervention [61.62348826831147]
Causal Intervention by Semantic Smoothing (CISS) is a novel framework towards robustness against natural language attacks.
CISS is provably robust against word substitution attacks, as well as empirically robust even when perturbations are strengthened by unknown attack algorithms.
arXiv Detail & Related papers (2022-05-24T19:20:48Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.