Neuro-Visualizer: An Auto-encoder-based Loss Landscape Visualization Method
- URL: http://arxiv.org/abs/2309.14601v1
- Date: Tue, 26 Sep 2023 01:10:16 GMT
- Title: Neuro-Visualizer: An Auto-encoder-based Loss Landscape Visualization Method
- Authors: Mohannad Elhamod, Anuj Karpatne
- Abstract summary: We present a novel auto-encoder-based non-linear landscape visualization method called Neuro-Visualizer.
Our findings show that Neuro-Visualizer outperforms other linear and non-linear baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been growing interest in visualizing the loss
landscape of neural networks. Linear landscape visualization methods, such as
principal component analysis, have become widely used because they intuitively help
researchers study neural networks and their training process. However, these
linear methods suffer from limitations and drawbacks due to their lack of
flexibility and low fidelity in representing the high-dimensional landscape. In
this paper, we present a novel auto-encoder-based non-linear landscape
visualization method called Neuro-Visualizer that addresses these shortcomings
and provides useful insights about neural network loss landscapes. To
demonstrate its potential, we run experiments on a variety of problems in two
separate applications of knowledge-guided machine learning (KGML). Our findings
show that Neuro-Visualizer outperforms other linear and non-linear baselines
and helps corroborate, and sometimes challenge, claims made by the machine
learning community. All code and data used in the experiments of this paper are
available at an anonymous link:
https://anonymous.4open.science/r/NeuroVisualizer-FDD6
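The linear baseline the abstract critiques (PCA-style landscape plotting) can be sketched as follows: project weight checkpoints onto their top-2 principal directions and evaluate the loss on the resulting 2D plane. This is a minimal illustration, not the paper's implementation; the checkpoint data, the quadratic `toy_loss`, and the grid resolution are all stand-in assumptions. Neuro-Visualizer's contribution is to replace the fixed linear projection with a learned auto-encoder.

```python
import numpy as np

# Assumed toy data: weight checkpoints from a hypothetical training run.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(20, 10))   # 20 checkpoints of a 10-parameter model
theta_final = snapshots[-1]

# Top-2 principal directions of the trajectory, centered at the final weights.
centered = snapshots - theta_final
_, _, vt = np.linalg.svd(centered, full_matrices=False)
d1, d2 = vt[0], vt[1]                   # orthonormal axes spanning the plane

def toy_loss(theta):
    # Stand-in for evaluating the network's training loss at weights `theta`.
    return float(np.sum(theta ** 2))

# Loss surface over a grid in the PCA plane around the final weights.
alphas = np.linspace(-1.0, 1.0, 5)
grid = np.array([[toy_loss(theta_final + a * d1 + b * d2) for b in alphas]
                 for a in alphas])
print(grid.shape)  # (5, 5)
```

The fidelity limitation the abstract mentions is visible here: the plot can only show loss variation within one fixed 2D plane of the high-dimensional weight space, which is what motivates a non-linear, learned projection.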
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - The Brain-Inspired Decoder for Natural Visual Image Reconstruction [4.433315630787158]
We propose a deep learning neural network architecture with biological properties to reconstruct visual images from spike trains.
Our model is an end-to-end decoder from neural spike trains to images.
Our results show that our method can effectively combine receptive field features to reconstruct images.
arXiv Detail & Related papers (2022-07-18T13:31:26Z) - FuNNscope: Visual microscope for interactively exploring the loss
landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible application scenarios.
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Visualizing Deep Neural Networks with Topographic Activation Maps [1.1470070927586014]
We introduce and compare methods to obtain a topographic layout of neurons in a Deep Neural Network layer.
We demonstrate how to use topographic activation maps to identify errors or encoded biases and to visualize training processes.
arXiv Detail & Related papers (2022-04-07T15:56:44Z) - Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z) - Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.