Cluster Flow: how a hierarchical clustering layer makes deep-NNs
more resilient to hacking, more human-like and easily implements relational
reasoning
- URL: http://arxiv.org/abs/2304.14081v1
- Date: Thu, 27 Apr 2023 10:41:03 GMT
- Title: Cluster Flow: how a hierarchical clustering layer makes deep-NNs
more resilient to hacking, more human-like and easily implements relational
reasoning
- Authors: Ella Gale, Oliver Matthews
- Abstract summary: ClusterFlow is a semi-supervised hierarchical clustering framework.
It can operate on trained NNs, using the feature data found at the pre-SoftMax layer.
It adds more human-like functionality to modern deep convolutional neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the huge recent breakthroughs in neural networks (NNs) for artificial
intelligence (specifically deep convolutional networks), such NNs do not achieve
human-level performance: they can be hacked by images that would fool no human
and they lack 'common sense'. It has been argued that a basis of human-level
intelligence is mankind's ability to perform relational reasoning: comparing
different objects, measuring similarity, grasping the relations between objects
and their converse, and picking the odd one out in a set of objects. Humans can
do this even with objects they have never seen before.
Here we show how ClusterFlow, a semi-supervised hierarchical clustering
framework, can operate on trained NNs, utilising the rich multi-dimensional class
and feature data found at the pre-SoftMax layer to build a hyperspatial map of
classes/features; this adds more human-like functionality to modern deep
convolutional neural networks. We demonstrate this with three tasks: 1.
reproducing the statistical-learning-based 'mistakes' made by infants when
attending to images of cats and dogs; 2. improving both the resilience to
hacking images and the accuracy of the certainty measure in deep-NNs; 3.
relational reasoning over sets of images, including images not known to the NN
nor seen before. We also
demonstrate that ClusterFlow can work on non-NN data and deal with missing data
by testing it on a Chemistry dataset. This work suggests that modern deep NNs
can be made more human-like without re-training them. Since some methods used in
deep and convolutional NNs are known to be neither biologically plausible nor
necessarily the best approach, the ClusterFlow framework, which can sit on top of
any NN, will remain a useful tool to add as NNs are improved in this regard.
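The abstract describes ClusterFlow only at a high level, so the following is a minimal sketch of one plausible reading, not the authors' implementation: collect pre-SoftMax activations from a trained CNN, build a hierarchical clustering over per-class centroids as a coarse hyperspatial map, and use distances in activation space for a toy odd-one-out task. The network choice (ResNet-18), the per-class centroids, the Ward linkage, and the helper names are all illustrative assumptions.

```python
# Minimal sketch of the ClusterFlow idea (assumptions noted in comments,
# not the authors' released code).
import numpy as np
import torch
from torchvision import models
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist, squareform

# Any trained classifier works; ResNet-18 is an arbitrary stand-in here.
net = models.resnet18(weights="DEFAULT")
net.eval()

def pre_softmax(images: torch.Tensor) -> np.ndarray:
    """Pre-SoftMax activations (the logits) for a batch of images."""
    with torch.no_grad():
        return net(images).cpu().numpy()

def build_class_hierarchy(feats: np.ndarray, labels: np.ndarray):
    """Agglomerative clustering over per-class centroid vectors."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    tree = linkage(centroids, method="ward")  # linkage choice is an assumption
    return classes, tree

def odd_one_out(feats: np.ndarray) -> int:
    """Index of the item whose activations are, on average, farthest from
    the rest of the set -- a crude relational-reasoning proxy."""
    dists = squareform(pdist(feats))
    return int(dists.mean(axis=1).argmax())
```

Under this reading, a hacking image whose pre-SoftMax activations fall far from every cluster in the hierarchy would receive low certainty rather than a confident wrong label, which is how such a framework could add robustness without re-training the underlying NN.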
Related papers
- Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty [11.964685090237392]
We show that a novel framework called Hyper-Evidential Neural Network (HENN) explicitly models predictive uncertainty due to composite class labels.
Our results demonstrate that HENN outperforms its state-of-the-art counterparts based on four image datasets.
arXiv Detail & Related papers (2024-04-17T01:26:15Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Curriculum Design Helps Spiking Neural Networks to Classify Time Series [16.402675046686834]
Spiking Neural Networks (SNNs) have a greater potential for modeling time series data than Artificial Neural Networks (ANNs).
In this work, enlightened by brain-inspired science, we find that not only the structure but also the learning process should be human-like.
arXiv Detail & Related papers (2023-12-26T02:04:53Z)
- You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets [105.24703398193843]
Untrained subnetworks in graph neural networks (GNNs) still remain mysterious.
We show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem.
We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations.
arXiv Detail & Related papers (2022-11-28T14:17:36Z)
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that aggregates the distances between the test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
- Mining the Weights Knowledge for Optimizing Neural Network Structures [1.995792341399967]
We introduce a switcher neural network (SNN) that takes as input the weights of a task-specific neural network (called TNN for short).
By mining the knowledge contained in the weights, the SNN outputs scaling factors for turning off neurons in the TNN.
In terms of accuracy, we outperform baseline networks and other structure learning methods stably and significantly.
arXiv Detail & Related papers (2021-10-11T05:20:56Z)
- Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks [0.495186171543858]
Recent efforts to understand and explain AI (Artificial Intelligence) methods have led to a new research area, termed explainable AI.
In this paper, we utilize explainable AI methods, mainly the DeepLIFT method.
arXiv Detail & Related papers (2020-08-20T16:52:58Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for backpropagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
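The NAM entry above describes its mechanism concretely enough to sketch: one small subnetwork per input feature, with the prediction formed as the sum of their outputs plus a bias. The class name, layer sizes, and hidden width below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    """Minimal sketch of the NAM idea: each input feature is routed through
    its own small subnetwork, and the contributions are summed."""

    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); each column goes through its own net.
        contributions = [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.stack(contributions, dim=0).sum(dim=0) + self.bias
```

Because each subnetwork sees only one feature, its learned shape function can be plotted directly, which is where the interpretability claim comes from.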
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.