Improving Prediction Confidence in Learning-Enabled Autonomous Systems
- URL: http://arxiv.org/abs/2110.03123v1
- Date: Thu, 7 Oct 2021 00:40:34 GMT
- Title: Improving Prediction Confidence in Learning-Enabled Autonomous Systems
- Authors: Dimitrios Boursinos and Xenofon Koutsoukos
- Abstract summary: We utilize a feedback loop between learning-enabled components used for classification and the sensors of an autonomous system in order to improve the confidence of the predictions.
We design a classifier using Inductive Conformal Prediction (ICP) based on a triplet network architecture in order to learn representations that can be used to quantify the similarity between test and training examples.
A feedback loop that queries the sensors for a new input is used to further refine the predictions and increase the classification accuracy.
- Score: 2.66512000865131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous systems make extensive use of learning-enabled components such as
deep neural networks (DNNs) for prediction and decision making. In this paper, we
utilize a feedback loop between learning-enabled components used for
classification and the sensors of an autonomous system in order to improve the
confidence of the predictions. We design a classifier using Inductive Conformal
Prediction (ICP) based on a triplet network architecture in order to learn
representations that can be used to quantify the similarity between test and
training examples. The method allows computing confident set predictions with
an error rate predefined using a selected significance level. A feedback loop
that queries the sensors for a new input is used to further refine the
predictions and increase the classification accuracy. The method is
computationally efficient, scalable to high-dimensional inputs, and can be
executed in a feedback loop with the system in real-time. The approach is
evaluated using a traffic sign recognition dataset and the results show that
the error rate is reduced.
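The set-prediction step of ICP can be illustrated with a minimal sketch. The synthetic 2-D points below stand in for the triplet-network embeddings described in the abstract, and the nonconformity score used here (distance to the nearest calibration example of the candidate class) is an illustrative assumption, not necessarily the authors' exact score. A label is kept in the prediction set when its conformal p-value exceeds the chosen significance level.

```python
# Sketch of Inductive Conformal Prediction (ICP) for set prediction.
# Assumptions: synthetic 2-D "embeddings" replace the triplet-network
# representations, and nearest-neighbor distance serves as the
# nonconformity score.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic embeddings for two classes (stand-in for triplet outputs).
cal_x = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                   rng.normal(3.0, 0.5, (50, 2))])
cal_y = np.array([0] * 50 + [1] * 50)

def nonconformity(x, label):
    """Distance to the nearest calibration embedding of `label`."""
    same = cal_x[cal_y == label]
    return np.min(np.linalg.norm(same - x, axis=1))

def calibration_scores():
    """Nonconformity of each calibration point w.r.t. its own class,
    excluding the zero self-distance (leave-one-out style)."""
    scores = []
    for i in range(len(cal_x)):
        same = cal_x[cal_y == cal_y[i]]
        d = np.linalg.norm(same - cal_x[i], axis=1)
        scores.append(d[d > 0].min())
    return np.array(scores)

alphas = calibration_scores()

def prediction_set(x, significance=0.1):
    """All labels whose conformal p-value exceeds `significance`."""
    labels = []
    for label in (0, 1):
        a = nonconformity(x, label)
        p = (np.sum(alphas >= a) + 1) / (len(alphas) + 1)
        if p > significance:
            labels.append(label)
    return labels

print(prediction_set(np.array([0.1, -0.2])))  # point near class 0
```

With a well-separated test point, the set contains a single label; an ambiguous point between the two clusters can yield both labels, which is the cue for the feedback loop in the abstract to query the sensors for a new input.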
Related papers
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
arXiv Detail & Related papers (2023-01-23T18:59:28Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z) - Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant results in comparison with the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z) - Reliable Probability Intervals For Classification Using Inductive Venn
Predictors Based on Distance Learning [2.66512000865131]
We use the Inductive Venn Predictors framework for computing probability intervals regarding the correctness of each prediction in real-time.
We propose an approach based on distance metric learning to compute informative probability intervals in applications involving high-dimensional inputs.
arXiv Detail & Related papers (2021-10-07T00:51:43Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems, based on Answer Set semantics, with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Cross-Validation and Uncertainty Determination for Randomized Neural
Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources, supporting green machine learning.
Results are discussed about supervised learning with such networks and regression methods in terms of consistency and bounds for the generalization and prediction error.
arXiv Detail & Related papers (2021-01-06T12:28:06Z) - Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
arXiv Detail & Related papers (2020-10-23T11:18:20Z) - Neural Representations in Hybrid Recommender Systems: Prediction versus
Regularization [8.384351067134999]
We define the neural representation for prediction (NRP) framework and apply it to the autoencoder-based recommendation systems.
We also apply the NRP framework to a direct neural network structure which predicts the ratings without reconstructing the user and item information.
The results confirm that neural representations are better for prediction than regularization and show that the NRP framework, combined with the direct neural network structure, outperforms the state-of-the-art methods in the prediction task.
arXiv Detail & Related papers (2020-10-12T23:12:49Z) - Representation Learning for Sequence Data with Deep Autoencoding
Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z) - Detecting Adversarial Examples in Learning-Enabled Cyber-Physical
Systems using Variational Autoencoder for Regression [4.788163807490198]
It has been shown that deep neural networks (DNN) are not robust and adversarial examples can cause the model to make a false prediction.
The paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS.
We demonstrate the method using an advanced emergency braking system implemented in an open source simulator for self-driving cars.
arXiv Detail & Related papers (2020-03-21T11:15:33Z) - Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems [2.1320960069210484]
The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP).
We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set.
Then, these representations are used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet.
arXiv Detail & Related papers (2020-03-11T04:31:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.