NSL: Hybrid Interpretable Learning From Noisy Raw Data
- URL: http://arxiv.org/abs/2012.05023v1
- Date: Wed, 9 Dec 2020 13:02:44 GMT
- Title: NSL: Hybrid Interpretable Learning From Noisy Raw Data
- Authors: Daniel Cunnington, Alessandra Russo, Mark Law, Jorge Lobo, Lance
Kaplan
- Abstract summary: This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL learns robust rules from perturbed MNIST data and achieves accuracy comparable or superior to neural network and random forest baselines.
- Score: 66.15862011405882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inductive Logic Programming (ILP) systems learn generalised, interpretable
rules in a data-efficient manner utilising existing background knowledge.
However, current ILP systems require training examples to be specified in a
structured logical format. Neural networks learn from unstructured data,
although their learned models may be difficult to interpret and are vulnerable
to data perturbations at run-time. This paper introduces a hybrid
neural-symbolic learning framework, called NSL, that learns interpretable rules
from labelled unstructured data. NSL combines pre-trained neural networks for
feature extraction with FastLAS, a state-of-the-art ILP system for rule
learning under the answer set semantics. Features extracted by the neural
components define the structured context of labelled examples and the
confidence of the neural predictions determines the level of noise of the
examples. Using the scoring function of FastLAS, NSL searches for short,
interpretable rules that generalise over such noisy examples. We evaluate our
framework on propositional and first-order classification tasks using the MNIST
dataset as raw data. Specifically, we demonstrate that NSL is able to learn
robust rules from perturbed MNIST data and achieve comparable or superior
accuracy when compared to neural network and random forest baselines whilst
being more general and interpretable.
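To make the pipeline concrete, here is a minimal sketch, assuming a pre-trained PyTorch digit classifier, of how a labelled raw image could be turned into a weighted ILP example in the spirit of NSL: the network's predicted digit supplies the structured context, and its softmax confidence is scaled into the example's penalty, so low-confidence (likely noisy) examples are cheap for the rule learner to leave uncovered. The function names, the penalty scaling, and the schematic FastLAS/ILASP-style example syntax are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def to_weighted_ilp_example(example_id, image, label, digit_classifier):
    """Sketch: turn one labelled raw MNIST image (tensor of shape (1, 28, 28))
    into a weighted, FastLAS/ILASP-style example (syntax shown schematically)."""
    with torch.no_grad():
        logits = digit_classifier(image.unsqueeze(0))   # shape (1, 10)
        probs = F.softmax(logits, dim=1).squeeze(0)     # shape (10,)

    predicted_digit = int(probs.argmax())               # neural feature extraction
    confidence = float(probs.max())                     # confidence of the prediction

    # Confidence -> integer penalty (illustrative linear scaling): confident
    # examples are costly to leave uncovered by the learned rules, while
    # uncertain (likely noisy) ones are cheap to ignore.
    penalty = max(1, int(round(confidence * 100)))

    # The extracted feature forms the structured context of the example;
    # the task label is what the learned rules must entail. FastLAS then
    # searches for short rules, e.g. something of the form
    #   label(even) :- digit(D), even_digit(D).
    # given suitable background knowledge, that generalise over many such
    # weighted examples.
    context = f"digit({predicted_digit})."
    inclusion = f"label({label})"
    return f"#pos(ex{example_id}@{penalty}, {{{inclusion}}}, {{}}, {{{context}}})."
```

The linear scaling above is only one simple choice; the point is that the FastLAS scoring function can trade rule length against the total penalty of uncovered examples, which is how NSL tolerates noisy neural predictions.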
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Characterizing out-of-distribution generalization of neural networks: application to the disordered Su-Schrieffer-Heeger model [38.79241114146971]
We show how interpretability methods can increase trust in predictions of a neural network trained to classify quantum phases.
In particular, we show that we can ensure better out-of-distribution generalization for this complex classification problem.
This work is an example of how the systematic use of interpretability methods can improve the performance of NNs in scientific problems.
arXiv Detail & Related papers (2024-06-14T13:24:32Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
Only later in training do they exploit higher-order statistics.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - Learning Signal Temporal Logic through Neural Network for Interpretable
Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently introduced Logical Neural Networks (LNN).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the answer set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Locally Sparse Networks for Interpretable Predictions [7.362415721170984]
We propose a framework for training locally sparse neural networks where the local sparsity is learned via a sample-specific gating mechanism.
The sample-specific sparsity is predicted via a gating network, which is trained in tandem with the prediction network.
We demonstrate that our method outperforms state-of-the-art models when predicting the target function with far fewer features per instance.
arXiv Detail & Related papers (2021-06-11T15:46:50Z) - Relational Weight Priors in Neural Networks for Abstract Pattern
Learning and Language Modelling [6.980076213134383]
Abstract patterns are among the best-known examples of problems on which neural networks struggle to generalise to unseen data.
It has been argued that these low-level problems demonstrate the inability of neural networks to learn systematically.
We propose Embedded Relation Based Patterns (ERBP) as a novel way to create a relational inductive bias that encourages learning equality and distance-based relations for abstract patterns.
arXiv Detail & Related papers (2021-03-10T17:21:16Z)