Analysis of functional neural codes of deep learning models: Functional Telescope Hypothesis
- URL: http://arxiv.org/abs/2205.10952v3
- Date: Sun, 26 Nov 2023 03:02:02 GMT
- Title: Analysis of functional neural codes of deep learning models: Functional Telescope Hypothesis
- Authors: Jung Hoon Lee and Sujith Vijayan
- Abstract summary: We use the self-organizing map (SOM) to analyze internal codes associated with deep learning models' decision-making.
Our analyses suggest that shallow layers close to the input layer compress features into condensed space and that deep layers close to the output layer expand feature space.
We also found evidence indicating that compressed features may underlie DNNs' vulnerabilities to adversarial perturbations.
- Score: 2.045460181566931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs), the agents of deep learning (DL), require a
massive number of parallel/sequential operations. This makes it difficult to
comprehend DNNs' operations and impedes proper diagnosis. Without better
knowledge of their internal process, deploying DNNs in high-stakes domains can
lead to catastrophic failures. Therefore, to build more reliable DNNs/DL to be
deployed in high-stakes real-world problems, it is imperative that we gain
insights into DNNs' internal operations underlying their decision-making. Here,
we use the self-organizing map (SOM) to analyze DL models' internal codes
associated with DNNs' decision-making. Our analyses suggest that shallow layers
close to the input layer compress features into condensed space and that deep
layers close to the output layer expand feature space. We also found evidence
indicating that compressed features may underlie DNNs' vulnerabilities to
adversarial perturbations.
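As a rough illustration of the kind of analysis described in the abstract, the sketch below fits a SOM to the activations of a single hidden layer and counts how many map units the layer occupies. It assumes a trained Keras classifier `model`, a test batch `x_test`, and the third-party MiniSom package; the layer names and grid size are placeholders, not the authors' settings.
```python
# Hedged sketch (not the authors' code): fit a self-organizing map (SOM) to
# the activations of one hidden layer and count how many SOM units the layer
# actually occupies, as a rough proxy for how condensed its feature code is.
import numpy as np
from minisom import MiniSom          # pip install minisom
from tensorflow import keras

def layer_activations(model, layer_name, x):
    """Return flattened activations of `layer_name` for inputs `x`."""
    probe = keras.Model(model.input, model.get_layer(layer_name).output)
    acts = probe.predict(x, verbose=0)
    return acts.reshape(len(x), -1)

def occupied_som_units(acts, grid=10, n_iter=5000):
    """Train a SOM on the activations and count distinct best-matching units."""
    som = MiniSom(grid, grid, acts.shape[1], sigma=1.0, learning_rate=0.5)
    som.random_weights_init(acts)
    som.train_random(acts, n_iter)
    return len({som.winner(a) for a in acts})

# Usage (layer names are placeholders): a smaller count for the shallow layer
# than for the deep layer would be consistent with compression near the input
# and expansion near the output.
# n_shallow = occupied_som_units(layer_activations(model, "conv1", x_test))
# n_deep    = occupied_som_units(layer_activations(model, "dense_1", x_test))
```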
Related papers
- Searching for internal symbols underlying deep learning [0.36832029288386137]
Deep learning (DL) enables deep neural networks (DNNs) to automatically learn complex tasks or rules from given examples without instructions or guiding principles.
One line of studies suggests that DNNs may learn concepts, i.e., high-level features recognizable to humans.
We combined foundation segmentation models and unsupervised learning to extract internal codes and identify potential use of abstract codes to make DL's decision-making more reliable and safer.
arXiv Detail & Related papers (2024-05-31T03:39:26Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for continuous control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a representation of the continuous action space (i.e., a deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
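As a toy illustration of that decoding idea (not the paper's architecture, and using ordinary floating-point arithmetic purely for clarity), the sketch below reads a continuous action out of a spiking population via the membrane potential of non-spiking interneurons, instead of converting firing rates through a fully-connected layer.
```python
# Toy illustration (not the paper's model): the non-spiking interneuron's
# membrane potential, driven only by incoming spikes, serves as the action.
import numpy as np

rng = np.random.default_rng(0)
T, n_spiking, n_actions = 100, 32, 2                 # timesteps, LIF neurons, action dims
w_in  = rng.normal(0, 1.0, n_spiking)                # input weights to the LIF layer
w_out = rng.normal(0, 0.2, (n_actions, n_spiking))   # spikes -> interneurons

def actor(observation, tau=0.9, v_th=1.0):
    v_spk = np.zeros(n_spiking)            # LIF membrane potentials
    v_int = np.zeros(n_actions)            # non-spiking interneuron potentials
    for _ in range(T):
        v_spk = tau * v_spk + w_in * observation
        spikes = (v_spk >= v_th).astype(float)
        v_spk = np.where(spikes > 0, 0.0, v_spk)   # reset after a spike
        v_int = tau * v_int + w_out @ spikes       # integrate spikes, no reset
    return np.tanh(v_int)                  # bounded continuous action

print(actor(observation=0.5))
```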
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images that model plausible causes of error.
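A hedged sketch of that two-stage pipeline is shown below, assuming torchvision's ImageNet-pretrained ResNet-50 and scikit-learn's DBSCAN; the backbone and clustering parameters are illustrative assumptions, not SAFE's actual configuration.
```python
# Hedged sketch of the pipeline described above (not the SAFE implementation):
# extract features of error-inducing images with an ImageNet-pretrained model,
# then group them with density-based clustering to surface plausible root causes.
import torch
from torchvision import models, transforms
from sklearn.cluster import DBSCAN

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # keep the 2048-d penultimate features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

def cluster_error_images(pil_images, eps=8.0, min_samples=5):
    """Cluster error-inducing images; label -1 marks unclustered outliers."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embed(pil_images))
```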
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Topological Measurement of Deep Neural Networks Using Persistent Homology [0.7919213739992464]
The inner representations of deep neural networks (DNNs) are difficult to decipher.
Persistent homology (PH) was employed for investigating the complexities of trained DNNs.
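One way such an analysis can be set up is sketched below, using the third-party ripser package and total persistence as a simple summary statistic; both are assumptions for illustration, not necessarily the paper's tooling.
```python
# Hedged sketch: compute persistent homology of a layer's activation point
# cloud and summarize it with total persistence, one simple notion of
# topological complexity.
import numpy as np
from ripser import ripser                 # pip install ripser

def total_persistence(activations, maxdim=1):
    """activations: (n_samples, n_features) array from one trained layer."""
    diagrams = ripser(activations, maxdim=maxdim)['dgms']
    total = 0.0
    for dgm in diagrams:
        finite = dgm[np.isfinite(dgm[:, 1])]      # drop the infinite bar
        total += float(np.sum(finite[:, 1] - finite[:, 0]))
    return total

# Example with random data standing in for real activations:
print(total_persistence(np.random.default_rng(0).normal(size=(200, 16))))
```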
arXiv Detail & Related papers (2021-06-06T03:06:15Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
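For context, a generic neural-linear bandit with linear Thompson sampling is sketched below; the paper's likelihood-matching and limited-memory machinery are not reproduced here, and `feature_fn` stands in for a frozen network's last-layer features.
```python
# Hedged sketch of a generic neural-linear bandit: a fixed network maps
# contexts to features, and a Bayesian linear model per arm is updated online
# and sampled Thompson-style to pick actions.
import numpy as np

class NeuralLinearTS:
    def __init__(self, feature_fn, n_arms, dim, noise=1.0, prior=1.0):
        self.phi = feature_fn                                    # frozen feature map
        self.A = [np.eye(dim) / prior for _ in range(n_arms)]    # posterior precision
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.noise = noise

    def act(self, context):
        z = self.phi(context)
        scores = []
        for A, b in zip(self.A, self.b):
            cov = np.linalg.inv(A)
            w = np.random.multivariate_normal(cov @ b, cov)      # sample weights
            scores.append(w @ z)
        return int(np.argmax(scores))

    def update(self, context, arm, reward):
        z = self.phi(context)
        self.A[arm] += np.outer(z, z) / self.noise
        self.b[arm] += reward * z / self.noise
```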
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Examining the causal structures of deep neural networks using information theory [0.0]
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets.
DNNs can also be examined at the level of causation, exploring "what does what" within the layers of the network itself.
Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training.
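A minimal building block of this kind of analysis is a mutual-information estimate between two node activations; the sketch below uses simple histogram binning and does not reproduce the paper's causal metrics.
```python
# Hedged sketch: binned mutual-information estimate between two scalar node
# activations, trackable across training checkpoints.
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate I(X;Y) in bits from paired activation samples x, y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# Example with synthetic activations standing in for two real nodes:
rng = np.random.default_rng(0)
a = rng.normal(size=5000)
print(mutual_information(a, a + 0.1 * rng.normal(size=5000)))
```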
arXiv Detail & Related papers (2020-10-26T19:53:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are coded with error-correcting codes, guaranteeing successful computation under noise.
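As a toy illustration of the general idea (not CodNN's code construction), the sketch below protects binarized activations with a simple repetition code and decodes by majority vote, which corrects most random bit flips.
```python
# Toy illustration only: repetition-coded binary activations survive a noisy
# channel far better than uncoded ones.
import numpy as np

rng = np.random.default_rng(0)

def encode(bits, r=5):
    return np.repeat(bits, r)                       # r copies of every bit

def noisy_channel(bits, flip_prob=0.1):
    flips = rng.random(bits.shape) < flip_prob
    return np.where(flips, 1 - bits, bits)

def decode(coded, r=5):
    return (coded.reshape(-1, r).mean(axis=1) > 0.5).astype(int)

activations = rng.integers(0, 2, size=1000)         # a binarized layer output
received = noisy_channel(encode(activations))
print("uncoded error rate:", np.mean(noisy_channel(activations) != activations))
print("coded error rate:  ", np.mean(decode(received) != activations))
```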
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.