Examining Redundancy in the Context of Safe Machine Learning
- URL: http://arxiv.org/abs/2007.01900v1
- Date: Fri, 3 Jul 2020 18:23:56 GMT
- Title: Examining Redundancy in the Context of Safe Machine Learning
- Authors: Hans Dermot Doran and Monika Reif
- Abstract summary: This paper describes a set of experiments with neural network classifiers on the MNIST database of digits.
We report on a set of measurements using the MNIST database which ultimately serve to underline the expected difficulties in using NN classifiers in safe and dependable systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a set of experiments with neural network classifiers on the MNIST database of digits. The purpose is to investigate naïve implementations of redundant architectures as a first step towards safe and dependable machine learning. We report on a set of measurements using the MNIST database which ultimately serve to underline the expected difficulties in using NN classifiers in safe and dependable systems.
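To make the notion of a naïve redundant architecture concrete, here is a minimal sketch in which two independently trained classifiers must agree before a prediction is accepted, with disagreement treated as a rejection signal a supervising safety function could act on. The models, the dataset loader (scikit-learn's small digits set standing in for full MNIST) and the voting rule are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative sketch of a naive redundant classifier pair (not the authors' exact code).
# Two independently initialised MLPs vote; a prediction is accepted only when both agree,
# otherwise the sample is flagged for a fallback / safe-state reaction.
import numpy as np
from sklearn.datasets import load_digits          # small stand-in for MNIST
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Two redundant channels: same task, independently initialised and trained models.
channel_a = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=1).fit(X_train, y_train)
channel_b = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300, random_state=2).fit(X_train, y_train)

pred_a = channel_a.predict(X_test)
pred_b = channel_b.predict(X_test)
agree = pred_a == pred_b

accepted_acc = np.mean(pred_a[agree] == y_test[agree])   # accuracy on accepted samples only
coverage = np.mean(agree)                                 # fraction of inputs not rejected
print(f"coverage={coverage:.3f}, accuracy on accepted={accepted_acc:.3f}")
```

The interesting quantity for a safety argument is the trade-off this exposes: redundancy buys a rejection mechanism, but coverage drops whenever the channels disagree.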
Related papers
- Neural Network Verification with PyRAT [1.1470070927586018]
We present PyRAT, a tool based on abstract interpretation to verify the safety of neural networks.
In this paper, we describe the different abstractions used by PyRAT to find the reachable states of a neural network.
PyRAT has already been used in several collaborations to ensure safety guarantees, and its second-place finish at VNN-Comp 2024 showcases its performance.
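PyRAT's own abstract domains and API are not reproduced here; as a rough illustration of how abstract interpretation bounds a network's reachable outputs, the sketch below propagates interval bounds through a single affine-plus-ReLU layer. Interval arithmetic is one of the simplest such abstractions, and the weights and perturbation radius are invented for the example.

```python
# Interval-bound propagation through one affine + ReLU layer: a toy example of the kind
# of abstraction used by neural-network verifiers (not PyRAT's actual domains or API).
import numpy as np

def affine_bounds(lower, upper, W, b):
    """Propagate element-wise input bounds through y = W @ x + b."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    out_lower = W_pos @ lower + W_neg @ upper + b
    out_upper = W_pos @ upper + W_neg @ lower + b
    return out_lower, out_upper

def relu_bounds(lower, upper):
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
eps = 0.1                                   # L-infinity perturbation radius around x
lo, hi = affine_bounds(x - eps, x + eps, W, b)
lo, hi = relu_bounds(lo, hi)
print("reachable output box:", list(zip(lo.round(3), hi.round(3))))
```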
arXiv Detail & Related papers (2024-10-31T13:05:46Z)
- Machine Unlearning using Forgetting Neural Networks [0.0]
This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs).
FNNs are neural networks with specific forgetting layers that take inspiration from the processes involved when a human brain forgets.
We report our results on the MNIST handwritten digit recognition and fashion datasets.
arXiv Detail & Related papers (2024-10-29T02:52:26Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show almost a twofold increase in accuracy.
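For orientation, a bare-bones classical online SOM update is sketched below; the continual SOM's task-free, memory-bounded extensions from the paper are not reproduced, and the grid size, learning rate and neighbourhood width are arbitrary choices.

```python
# Classical online SOM update on streaming samples; a baseline sketch only, not the
# continual-SOM mechanism proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 64                   # 8x8 map of 64-dimensional prototypes
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def som_update(weights, x, lr=0.1, sigma=1.5):
    # Best-matching unit = prototype closest to the sample.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood pulls nearby prototypes toward the sample.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    return weights + lr * h * (x - weights)

for _ in range(1000):                            # simulated unlabelled input stream
    weights = som_update(weights, rng.normal(size=dim))
```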
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
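The batch-ensemble construction itself is not reproduced here; as a simplified stand-in, the sketch below trains a handful of independent small networks and uses the entropy of their averaged predictions as an OOD score on a Two-Moons-style split. The datasets, model sizes and ensemble size are illustrative assumptions.

```python
# Toy ensemble-based OOD scoring on a two-moons vs. uniform-noise split. This is a
# simplified stand-in for BE-SNNs: independent small MLPs instead of a batch-ensemble.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X_in, y_in = make_moons(n_samples=500, noise=0.1, random_state=0)    # in-distribution data
X_ood = np.random.default_rng(0).uniform(-3, 3, size=(200, 2))       # out-of-distribution samples

ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=s).fit(X_in, y_in)
            for s in range(5)]

def predictive_entropy(models, X):
    probs = np.mean([m.predict_proba(X) for m in models], axis=0)    # averaged predictive distribution
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

print("mean entropy, in-distribution:", predictive_entropy(ensemble, X_in).mean())
print("mean entropy, OOD samples:    ", predictive_entropy(ensemble, X_ood).mean())
```

A higher entropy on the noise inputs than on the in-distribution data is the behaviour an OOD detector of this kind relies on.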
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on answer set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
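ALT-MAS's Bayesian neural network and its active selection of test points to label are not reproduced here; the sketch below only conveys the underlying idea of estimating a model-under-test's accuracy from a surrogate trained on a small labelled subset, with a bootstrap ensemble standing in for the BNN posterior. All model choices and subset sizes are assumptions made for the example.

```python
# Rough illustration of estimating a model-under-test's accuracy from a label surrogate.
# A bootstrap ensemble of simple classifiers stands in for the BNN posterior used by ALT-MAS.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model_under_test = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
labelled_idx = rng.choice(len(X_test), size=60, replace=False)      # only a few labelled test points
surrogates = []
for s in range(10):                                                 # bootstrap "posterior samples"
    boot = rng.choice(labelled_idx, size=len(labelled_idx), replace=True)
    surrogates.append(LogisticRegression(max_iter=1000).fit(X_test[boot], y_test[boot]))

mut_pred = model_under_test.predict(X_test)
acc_samples = [np.mean(mut_pred == s.predict(X_test)) for s in surrogates]
print(f"estimated accuracy: {np.mean(acc_samples):.3f} +/- {np.std(acc_samples):.3f}")
print(f"true test accuracy: {np.mean(mut_pred == y_test):.3f}")
```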
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve accuracy comparable or superior to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Neural Complexity Measures [96.06344259626127]
We propose Neural Complexity (NC), a meta-learning framework for predicting generalization.
Our model learns a scalar complexity measure through interactions with many heterogeneous tasks in a data-driven way.
arXiv Detail & Related papers (2020-08-07T02:12:10Z)
- Graph Neural Networks for Leveraging Industrial Equipment Structure: An application to Remaining Useful Life Estimation [21.297461316329453]
We propose to capture the structure of complex equipment in the form of a graph, and use graph neural networks (GNNs) to model multi-sensor time-series data.
We observe that the proposed GNN-based RUL estimation model compares favorably to several strong baselines from the literature, such as those based on RNNs and CNNs.
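As a structural illustration only, the sketch below encodes a piece of equipment as a sensor graph and applies one GCN-style message-passing step before a pooled regression readout; the adjacency matrix, summary features and untrained random weights are invented for the example, and the paper's temporal modelling is not reproduced.

```python
# Structural sketch of a graph-based RUL model: sensors are nodes, physical/functional
# connections are edges, one graph-convolution step mixes neighbouring sensor features,
# and a pooled linear readout produces the RUL estimate. Untrained and numpy-only.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, window = 5, 100
series = rng.normal(size=(n_sensors, window))          # one multi-sensor time-series window

# Per-sensor node features: simple summary statistics of the window.
H = np.stack([series.mean(axis=1), series.std(axis=1), series.min(axis=1), series.max(axis=1)], axis=1)

A = np.array([[0, 1, 1, 0, 0],                         # equipment structure as an adjacency matrix
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(n_sensors)                          # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt               # symmetric normalisation (GCN-style)

W1 = rng.normal(size=(H.shape[1], 16))                 # graph-conv weights (random, untrained)
w_out = rng.normal(size=16)

node_emb = np.maximum(A_norm @ H @ W1, 0.0)            # one message-passing layer + ReLU
rul_estimate = node_emb.mean(axis=0) @ w_out           # pool over sensors, linear readout
print("predicted RUL (untrained, illustrative):", float(rul_estimate))
```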
arXiv Detail & Related papers (2020-06-30T06:38:08Z)
- Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems [2.1320960069210484]
The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP).
We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set.
Then, these representations are used to estimate the confidence of set predictions from a classifier based on the neural network architecture used in the triplet network.
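The triplet-network training is not reproduced here; the sketch below illustrates only the inductive conformal prediction step on top of some representation, using raw pixel features as a stand-in embedding and distance to the nearest same-class training example as an assumed nonconformity score.

```python
# Sketch of inductive conformal prediction over learned representations. Raw features
# stand in for triplet-network embeddings; nonconformity = distance to the nearest
# same-class training example, and p-values come from a held-out calibration set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_prop, X_rest, y_prop, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

def nonconformity(x, label):
    """Distance to the nearest proper-training example of the hypothesised label."""
    same = X_prop[y_prop == label]
    return np.min(np.linalg.norm(same - x, axis=1))

# Calibration scores use the true labels of the calibration set.
cal_scores = np.array([nonconformity(x, lab) for x, lab in zip(X_cal, y_cal)])

def prediction_set(x, epsilon=0.05):
    """All labels whose conformal p-value exceeds the significance level epsilon."""
    labels = []
    for lab in np.unique(y_prop):
        p = (np.sum(cal_scores >= nonconformity(x, lab)) + 1) / (len(cal_scores) + 1)
        if p > epsilon:
            labels.append(lab)
    return labels

sizes = [len(prediction_set(x)) for x in X_test[:100]]
print("average prediction-set size on 100 test digits:", np.mean(sizes))
```

Smaller prediction sets at a fixed significance level indicate higher confidence, which is the quantity such approaches feed into a runtime assurance argument.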
arXiv Detail & Related papers (2020-03-11T04:31:10Z)