Predicting Overtakes in Trucks Using CAN Data
- URL: http://arxiv.org/abs/2404.05723v1
- Date: Mon, 8 Apr 2024 17:58:22 GMT
- Title: Predicting Overtakes in Trucks Using CAN Data
- Authors: Talha Hanif Butt, Prayag Tiwari, Fernando Alonso-Fernandez
- Abstract summary: We investigate the detection of truck overtakes from CAN data.
Our analysis covers up to 10 seconds before the overtaking event.
We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger.
- Score: 51.28632782308621
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe overtakes in trucks are crucial to prevent accidents, reduce congestion, and ensure efficient traffic flow, making early prediction essential for timely and informed driving decisions. Accordingly, we investigate the detection of truck overtakes from CAN data. Three classifiers, Artificial Neural Networks (ANN), Random Forest, and Support Vector Machines (SVM), are employed for the task. Our analysis covers up to 10 seconds before the overtaking event, using an overlapping sliding window of 1 second to extract CAN features. We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger, while the scores of the no-overtake class remain stable or oscillate depending on the classifier. Thus, the best accuracy is achieved when approaching the trigger, making early overtake prediction challenging. The classifiers show good accuracy in classifying overtakes (Recall/TPR > 93%), but accuracy is suboptimal in classifying no-overtakes (TNR typically 80-90% and below 60% for one SVM variant). We further combine two classifiers (Random Forest and linear SVM) by averaging their output scores. The fusion is observed to improve no-overtake classification (TNR > 92%) at the expense of reducing overtake accuracy (TPR). However, the latter is kept above 91% near the overtake trigger. Therefore, the fusion balances TPR and TNR, providing more consistent performance than individual classifiers.
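As a rough illustration of the pipeline the abstract describes, the sketch below extracts per-window statistics with a 1-second overlapping sliding window and fuses a Random Forest with a linear SVM by averaging their output scores. The CAN signal set, sampling rate, window step, and choice of per-window statistics are assumptions, as the paper does not specify its implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

SAMPLE_RATE_HZ = 10          # assumed CAN sampling rate
WINDOW_S, STEP_S = 1.0, 0.5  # 1 s windows; the 0.5 s step (overlap) is an assumption

def extract_windows(signals: np.ndarray) -> np.ndarray:
    """Slide a 1 s window over a (time x signal) CAN matrix and compute
    simple per-window statistics (mean and std of each signal)."""
    win = int(WINDOW_S * SAMPLE_RATE_HZ)
    step = int(STEP_S * SAMPLE_RATE_HZ)
    feats = []
    for start in range(0, len(signals) - win + 1, step):
        w = signals[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.asarray(feats)

def fused_scores(X_train, y_train, X_test):
    """Score-level fusion: average the per-class probabilities of a
    Random Forest and a linear SVM, as described in the abstract."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    svm = SVC(kernel="linear", probability=True, random_state=0).fit(X_train, y_train)
    return (rf.predict_proba(X_test) + svm.predict_proba(X_test)) / 2.0
```

A window is then classified as overtake or no-overtake by taking the argmax over the fused scores; the hyperparameters above are illustrative defaults, not the paper's settings.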
Related papers
- Artificial Intelligence (AI) Based Prediction of Mortality, for COVID-19 Patients [0.0]
For severely affected COVID-19 patients, it is crucial to identify high-risk patients and predict survival and the need for intensive care (ICU).
This study investigated the performance of nine machine and deep learning algorithms in combination with two widely used feature selection methods.
LSTM performed best in predicting last status and ICU requirement, with 90% accuracy, 92% sensitivity, 86% specificity, and 95% AUC.
arXiv Detail & Related papers (2024-03-28T12:11:29Z)
- Multi-class real-time crash risk forecasting using convolutional neural network: Istanbul case study [0.0]
This paper shows the performance of an artificial neural network (ANN) in forecasting crash risk.
The proposed CNN model is capable of learning from recorded, processed, and categorized input characteristics.
The findings of this research suggest applying the CNN model as a multi-class prediction model for real-time crash risk prediction.
arXiv Detail & Related papers (2024-02-09T10:51:09Z)
- Rule-Based Error Detection and Correction to Operationalize Movement Trajectory Classification [1.192247515575942]
We provide a neuro-symbolic rule-based framework for error detection and correction of models, which integrates into our movement trajectory platform.
We show F1 scores for predicting errors of up to 0.984, a significant performance increase in out-of-distribution accuracy (an 8.51% improvement over SOTA for zero-shot accuracy), and an accuracy improvement over the SOTA model.
arXiv Detail & Related papers (2023-08-28T01:57:38Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
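A minimal sketch of confidence-based deferral, assuming scikit-learn-style classifiers and illustrative thresholds (both are assumptions, not details from the paper): each stage answers if its top-class probability clears its threshold, otherwise the input is deferred to the next model in the cascade.

```python
import numpy as np

def cascade_predict(x, models, thresholds):
    """Confidence-based cascade: defer to the next (typically larger) model
    only when the current model's top-class probability falls below its
    threshold; the final model always answers."""
    for model, tau in zip(models[:-1], thresholds):
        probs = model.predict_proba(np.asarray([x]))[0]
        if probs.max() >= tau:  # confident enough: terminate prediction here
            return int(probs.argmax()), float(probs.max())
    probs = models[-1].predict_proba(np.asarray([x]))[0]
    return int(probs.argmax()), float(probs.max())
```

Note that the rule inspects only the current model's confidence, oblivious to the downstream structure of the cascade, which is exactly the setting the paper analyzes.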
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
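As a generic sketch of such an auxiliary train-time loss (not the paper's exact formulation), one can penalize the gap between each box's predicted class confidence and a correctness target in [0, 1]; the quality target and the weighting are assumptions:

```python
import torch

def confidence_alignment_loss(conf: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
    """Penalize the squared gap between predicted box confidence and a
    correctness/quality target (e.g. 1.0 for a correct, well-localized
    detection, 0.0 otherwise). A hypothetical stand-in, not the paper's loss."""
    return torch.mean((conf - quality) ** 2)

# Added to the usual detection objective with an assumed weight lambda_cal:
# total_loss = detection_loss + lambda_cal * confidence_alignment_loss(conf, quality)
```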
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Efficient, Uncertainty-based Moderation of Neural Networks Text Classifiers [8.883733362171034]
We propose a framework for the efficient, in-operation moderation of classifiers' output.
We suggest a semi-automated approach that uses prediction uncertainties to pass unconfident, probably incorrect classifications to human moderators.
A series of benchmarking experiments show that our framework can improve the classification F1-scores by 5.1 to 11.2%.
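A minimal sketch of this kind of routing, using top-class probability as a simple confidence proxy (the threshold value and the use of max-probability rather than a richer uncertainty estimate are assumptions):

```python
import numpy as np

def route_predictions(probs: np.ndarray, threshold: float = 0.8):
    """Auto-accept predictions whose top-class probability clears the
    threshold; escalate the rest to human moderators."""
    confident = probs.max(axis=1) >= threshold
    auto_labels = probs[confident].argmax(axis=1)  # kept automatically
    review_idx = np.flatnonzero(~confident)        # indices sent to humans
    return auto_labels, np.flatnonzero(confident), review_idx
```

Raising the threshold trades a larger human workload for higher quality on the automatically handled portion, which is the kind of trade-off the framework navigates.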
arXiv Detail & Related papers (2022-04-04T09:07:54Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
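A compact sketch of ATC as summarized above: fit the threshold on labeled source/validation data so that the fraction of examples above it matches source accuracy, then score unlabeled target data against it (max-probability is used as the confidence score here; treat the exact score function as an assumption).

```python
import numpy as np

def atc_predict_accuracy(val_probs, val_labels, target_probs):
    """Average Thresholded Confidence: estimate target accuracy as the
    fraction of unlabeled target examples whose confidence exceeds a
    threshold fit on labeled source data."""
    val_conf = val_probs.max(axis=1)
    val_acc = (val_probs.argmax(axis=1) == val_labels).mean()
    # Pick t so that the source fraction with conf >= t equals source accuracy.
    t = np.quantile(val_conf, 1.0 - val_acc)
    return float((target_probs.max(axis=1) >= t).mean())
```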
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- An End-to-End Deep Learning Approach for Epileptic Seizure Prediction [4.094649684498489]
We propose an end-to-end deep learning solution using a convolutional neural network (CNN).
Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets, respectively.
arXiv Detail & Related papers (2021-08-17T05:49:43Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The discriminative jackknife (DJ) satisfies both requirements (valid coverage and discrimination between high- and low-confidence predictions), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
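For orientation, here is the naive leave-one-out jackknife interval that the influence-function machinery is designed to approximate without n model refits; the regression setting and the sklearn-style estimator are assumptions:

```python
import numpy as np
from sklearn.base import clone

def jackknife_interval(model, X, y, x_new, alpha=0.1):
    """Plain leave-one-out jackknife predictive interval: refit the model n
    times, collect held-out absolute residuals, and pad the full-data
    prediction by their (1 - alpha) quantile."""
    n = len(y)
    residuals = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        m = clone(model).fit(X[mask], y[mask])
        residuals[i] = abs(y[i] - m.predict(X[i:i + 1])[0])
    q = np.quantile(residuals, 1.0 - alpha)
    center = clone(model).fit(X, y).predict(x_new.reshape(1, -1))[0]
    return center - q, center + q
```

The DJ replaces the n refits with higher-order influence-function estimates of each leave-one-out model, which is what makes the procedure post hoc and inexpensive.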
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning [134.15174177472807]
We introduce adversarial training into self-supervision to provide general-purpose robust pre-trained models for the first time.
We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins.
arXiv Detail & Related papers (2020-03-28T18:28:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.