Rule-Based Error Detection and Correction to Operationalize Movement Trajectory Classification
- URL: http://arxiv.org/abs/2308.14250v3
- Date: Fri, 2 Aug 2024 01:38:16 GMT
- Title: Rule-Based Error Detection and Correction to Operationalize Movement Trajectory Classification
- Authors: Bowen Xi, Kevin Scaria, Divyagna Bavikadi, Paulo Shakarian
- Abstract summary: We provide a neuro-symbolic rule-based framework that performs error detection and correction on these models for integration into our movement trajectory platform.
We show F1 scores for predicting errors of up to 0.984, a significant increase in out-of-distribution accuracy (an 8.51% improvement over SOTA zero-shot accuracy), and an accuracy improvement over the SOTA model.
- Score: 1.192247515575942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification of movement trajectories has many applications in transportation and is a key component of large-scale movement trajectory generation and anomaly detection, which has key safety applications in the aftermath of a disaster or other external shock. However, the current state-of-the-art (SOTA) models are based on supervised deep learning, which leads to challenges when the distribution of trajectories changes due to such a shock. We provide a neuro-symbolic rule-based framework that performs error detection and correction on these models for integration into our movement trajectory platform. We provide a suite of experiments on several recent SOTA models in which we show highly accurate error detection, the ability to improve accuracy under a changing test distribution, and accuracy improvement for the base use case, in addition to a suite of theoretical properties that informed algorithm development. Specifically, we show F1 scores for predicting errors of up to 0.984, a significant increase in out-of-distribution accuracy (an 8.51% improvement over SOTA zero-shot accuracy), and an accuracy improvement over the SOTA model.
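The abstract does not spell out the rule syntax; as a purely hypothetical illustration, a rule-based error detection and correction layer can be read as condition-to-flag pairs over trajectory features, with corrective rules re-labeling flagged predictions. The feature names, classes, and thresholds below are assumptions for the sketch, not values from the paper:

```python
# Hypothetical sketch of rule-based error detection and correction over
# a trajectory classifier's outputs; rules and class names are illustrative.

def detect_error(features, predicted_class, detection_rules):
    """Flag a prediction as a likely error if any detection rule
    registered for that predicted class fires on the input features."""
    return any(cond(features) for cls, cond in detection_rules if cls == predicted_class)

def correct(features, predicted_class, detection_rules, correction_rules):
    """If the prediction is flagged as an error, apply the first matching
    corrective rule; otherwise keep the model's original prediction."""
    if detect_error(features, predicted_class, detection_rules):
        for cond, new_class in correction_rules:
            if cond(features):
                return new_class
    return predicted_class

# Illustrative rules over a simple assumed trajectory feature:
detection_rules = [("vehicle", lambda f: f["avg_speed_kmh"] < 6.0)]
correction_rules = [(lambda f: f["avg_speed_kmh"] < 6.0, "pedestrian")]
```

Under this reading, the symbolic layer sits after the neural classifier: the model's label passes through unchanged unless a detection rule fires, in which case a corrective rule may override it.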
Related papers
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between the predicted confidence and the actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z) - Perception Reinforcement Using Auxiliary Learning Feature Fusion: A Modified Yolov8 for Head Detection [8.065947209864646]
We present a modified Yolov8 which improves head detection performance through target perception.
An Auxiliary Learning Feature Fusion (ALFF) module comprised of LSTM and convolutional blocks is used as the auxiliary task.
In addition, we introduce noise into the Distribution Focal Loss to facilitate model fitting and improve detection accuracy.
arXiv Detail & Related papers (2023-10-14T04:52:35Z) - AirIMU: Learning Uncertainty Propagation for Inertial Odometry [29.093168179953185]
Inertial odometry (IO) using strap-down inertial measurement units (IMUs) is critical in many robotic applications.
We present AirIMU, a hybrid approach to estimate the uncertainty, especially the non-deterministic errors, by data-driven methods.
We demonstrate its effectiveness on various platforms, including hand-held devices, vehicles, and a helicopter that covers a trajectory of 262 kilometers.
arXiv Detail & Related papers (2023-10-07T17:08:22Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
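Reading the ATC summary literally, the method can be sketched in a few lines: pick the threshold as the error-rate quantile of labeled source confidences, then report the fraction of unlabeled target confidences above it. The function name and the use of `np.quantile` are choices for this sketch, not the paper's implementation:

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    """Sketch of Average Thresholded Confidence (ATC).

    Learn a threshold t on labeled source confidences so that the fraction
    of source examples with confidence below t matches the source error
    rate; predict target accuracy as the fraction of unlabeled target
    examples whose confidence exceeds t.
    """
    src_err = 1.0 - np.mean(src_correct)   # error rate on labeled source data
    t = np.quantile(src_conf, src_err)     # threshold at the error-rate quantile
    return float(np.mean(tgt_conf > t))    # predicted target-domain accuracy
```

The confidence score here could be, e.g., the max softmax probability; the original method also considers other score functions such as negative entropy.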
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Guaranteed Trajectory Tracking under Learned Dynamics with Contraction Metrics and Disturbance Estimation [5.147919654191323]
This paper presents an approach to trajectory-centric learning control based on contraction metrics and disturbance estimation.
The proposed framework is validated on a planar quadrotor example.
arXiv Detail & Related papers (2021-12-15T15:57:33Z) - Learn to Predict Vertical Track Irregularity with Extremely Imbalanced Data [6.448383767373112]
We showcase an application framework for predicting vertical track irregularity, based on a real-world, large-scale dataset produced by several operating railways in China.
We also propose a novel approach for handling imbalanced data in time-series prediction tasks with adaptive data sampling and a penalized loss.
arXiv Detail & Related papers (2020-12-05T15:49:39Z) - Understanding and Mitigating the Tradeoff Between Robustness and Accuracy [88.51943635427709]
Adversarial training augments the training set with perturbations to reduce the robust error.
We show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor.
arXiv Detail & Related papers (2020-02-25T08:03:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.