Creating Powerful and Interpretable Models with Regression Networks
- URL: http://arxiv.org/abs/2107.14417v1
- Date: Fri, 30 Jul 2021 03:37:00 GMT
- Title: Creating Powerful and Interpretable Models with Regression Networks
- Authors: Lachlan O'Neill, Simon Angus, Satya Borgohain, Nader Chmait, David L. Dowe
- Abstract summary: We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
- Score: 2.2049183478692584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the discipline has evolved, machine learning research has focused
increasingly on building more powerful neural networks, with little regard for
the interpretability of these networks. Such "black-box models" yield
state-of-the-art results, but we cannot understand why they make a particular
decision or prediction. Sometimes this is acceptable, but often it is not.
We propose a novel architecture, Regression Networks, which combines the
power of neural networks with the understandability of regression analysis.
While some methods for combining these exist in the literature, our
architecture generalizes these approaches by taking interactions into account,
offering the power of a dense neural network without forsaking
interpretability. We demonstrate that our models exceed the state-of-the-art
performance of interpretable models on several benchmark datasets, matching the
power of a dense neural network. Finally, we discuss how these techniques can
be generalized to other neural architectures, such as convolutional and
recurrent neural networks.
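As a rough sketch of the idea (not the authors' implementation; the feature construction and names here are our own), a regression over explicit pairwise interaction features keeps every learned weight, including the interaction coefficients, directly readable:

```python
# Hypothetical sketch, not the paper's code: linear regression augmented
# with explicit pairwise interaction terms, so all coefficients stay readable.
import numpy as np
from itertools import combinations

def interaction_features(X):
    """Augment raw features with all pairwise products x_i * x_j."""
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Synthetic target with a genuine interaction: y = 2*x0 - x1 + 0.5*x0*x2 + noise
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=200)

Phi = interaction_features(X)                   # columns: x0, x1, x2, x0x1, x0x2, x1x2
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # ordinary least squares
print(np.round(coef, 2))                        # readable weights, approx. [2, -1, 0, 0, 0.5, 0]
```

A regression network in the paper's sense would learn which interactions matter inside a network rather than enumerating them by brute force; this sketch only illustrates why interaction-aware regression remains interpretable.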
Related papers
- Hybrid deep additive neural networks [0.0]
We introduce novel deep neural networks that incorporate the idea of additive regression.
Our neural networks share architectural similarities with Kolmogorov-Arnold networks but are based on simpler yet flexible activation and basis functions.
We derive their universal approximation properties and demonstrate their effectiveness through simulation studies and a real-data application.
arXiv Detail & Related papers (2024-11-14T04:26:47Z)
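The summary above suggests a generalized-additive structure. A minimal sketch of that idea, assuming a fixed polynomial basis per feature (the paper's actual activation and basis functions may differ):

```python
# Assumption-based sketch of additive regression, not the paper's code:
# the prediction is a sum of univariate functions f_j(x_j).
import numpy as np

def univariate_basis(x, degree=3):
    """Polynomial basis [x, x^2, x^3] for a single feature."""
    return np.column_stack([x ** d for d in range(1, degree + 1)])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

# One basis expansion per feature; the fitted model stays a sum f_1(x_1) + f_2(x_2),
# so each feature's learned contribution can be plotted and inspected on its own.
Phi = np.hstack([univariate_basis(X[:, j]) for j in range(X.shape[1])])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.sqrt(np.mean((Phi @ coef - y) ** 2)))  # fit error of the additive model
```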
- GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks [5.2969467015867915]
We introduce the concept of interpretability pipelining, which combines multiple interpretability techniques to outperform each individual technique.
We evaluate two recent models selected for their potential to incorporate interpretability into standard neural network architectures.
We introduce a novel interpretable neural network GINN-KAN that synthesizes the advantages of both models.
arXiv Detail & Related papers (2024-08-27T04:57:53Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
arXiv Detail & Related papers (2024-04-09T11:27:07Z)
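A toy version of the autoregressive factorization these networks rely on, q(s) = prod_i q(s_i | s_<i), written with a plain lower-triangular linear conditioner instead of the paper's message-passing mechanism:

```python
# Simplified illustration, not the paper's architecture: each spin is
# sampled conditioned on the spins before it, so log q(s) is exact.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                              # number of spins
W = np.tril(0.1 * rng.normal(size=(N, N)), k=-1)   # strictly lower-triangular weights
b = np.zeros(N)                                    # per-spin biases

def sample_spins(n_samples):
    """Sample spin configurations and their exact log q(s), one spin at a time."""
    s = np.zeros((n_samples, N))
    log_q = np.zeros(n_samples)
    for i in range(N):
        p_up = 1.0 / (1.0 + np.exp(-(s @ W[i] + b[i])))  # q(s_i = +1 | s_<i)
        up = rng.random(n_samples) < p_up
        s[:, i] = np.where(up, 1.0, -1.0)
        log_q += np.where(up, np.log(p_up), np.log(1.0 - p_up))
    return s, log_q

samples, log_q = sample_spins(1000)
print(samples.shape, log_q[:3])
# Training would minimize a variational free energy E_q[E(s)] + T * E_q[log q(s)]
# with REINFORCE-style gradients under an annealing schedule (omitted here).
```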
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
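One plausible reading of "computational graphs of parameters" is to turn neurons into nodes (carrying biases) and weights into directed edge features; the encoding below is our own assumption-laden sketch, not the paper's scheme:

```python
# Hypothetical encoding of an MLP as a parameter graph, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]                               # a tiny MLP: 3 -> 4 -> 2
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.normal(size=n) for n in layer_sizes[1:]]

# Node ids: consecutive integers, one per neuron, layer by layer.
offsets = np.concatenate([[0], np.cumsum(layer_sizes)])

nodes = []   # (node_id, bias) -- input neurons get bias 0
edges = []   # (source_id, target_id, weight)
for layer, size in enumerate(layer_sizes):
    for i in range(size):
        bias = biases[layer - 1][i] if layer > 0 else 0.0
        nodes.append((offsets[layer] + i, bias))
for layer, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            edges.append((offsets[layer] + i, offsets[layer + 1] + j, W[i, j]))

print(len(nodes), "nodes,", len(edges), "edges")      # 9 nodes, 20 edges
```

A GNN consuming this node/edge representation can, in principle, process networks of different widths and depths with a single model, which is the property the summary highlights.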
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that model-based networks for sparse recovery generalize better than regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
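A very small rate-coded illustration of spiking regression, assuming leaky integrate-and-fire neurons and a least-squares readout on spike counts (a simplification; it is not the paper's training framework):

```python
# Toy sketch under stated assumptions: constant input currents drive LIF
# neurons, and a linear readout maps spike counts to a continuous target.
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_counts(x, W_in, T=100, tau=20.0, v_th=1.0):
    """Drive leaky integrate-and-fire neurons with a constant current; return spike counts."""
    v = np.zeros(W_in.shape[1])
    counts = np.zeros(W_in.shape[1])
    current = x @ W_in
    for _ in range(T):
        v += (current - v) / tau        # leaky integration toward the input current
        fired = v >= v_th
        counts += fired                 # accumulate spikes
        v[fired] = 0.0                  # reset membrane potential after a spike
    return counts

W_in = rng.uniform(0.0, 1.0, size=(2, 16))            # fixed random input weights
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = X[:, 0] + 0.5 * X[:, 1] ** 2                      # continuous regression target

rates = np.array([lif_spike_counts(x, W_in) for x in X])
w, *_ = np.linalg.lstsq(rates, y, rcond=None)         # linear readout on spike rates
print(np.sqrt(np.mean((rates @ w - y) ** 2)))         # rough fit quality
```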
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
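The general recipe, sketched under our own assumptions (a toy 1-D "network" and a plain RBF kernel, not the paper's construction), is to query the network at sampled inputs and fit GP regression to the observed input/output pairs:

```python
# Minimal GP surrogate sketch: closed-form GP regression mimics a black box.
import numpy as np

rng = np.random.default_rng(0)

def black_box_net(x):
    """Stand-in for a trained network whose internals we want to model."""
    return np.tanh(3.0 * x) + 0.1 * np.sin(10.0 * x)

def rbf_kernel(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

X_train = rng.uniform(-1.0, 1.0, size=40)      # probe the network at sampled inputs
y_train = black_box_net(X_train)

K = rbf_kernel(X_train, X_train) + 1e-6 * np.eye(len(X_train))   # jitter for stability
alpha = np.linalg.solve(K, y_train)

X_test = np.linspace(-1.0, 1.0, 5)
y_surrogate = rbf_kernel(X_test, X_train) @ alpha                # GP posterior mean
print(np.round(y_surrogate - black_box_net(X_test), 3))          # surrogate vs. network
```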
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
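A toy soft-routing sketch in the spirit of that description (hypothetical throughout; the real architecture uses learned signatures inside a self-attention stack): each module has a signature vector, and inputs are mixed across modules by softmax compatibility:

```python
# Illustrative soft routing across modules, not the Neural Interpreters code.
import numpy as np

rng = np.random.default_rng(0)
d, n_modules = 8, 4
signatures = rng.normal(size=(n_modules, d))                     # learned in practice
module_weights = rng.normal(size=(n_modules, d, d)) / np.sqrt(d)

def route(x):
    """Soft-route an input through modules by signature compatibility."""
    scores = signatures @ x
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                         # softmax routing weights
    outputs = np.stack([np.tanh(W @ x) for W in module_weights]) # each module's output
    return probs @ outputs                                       # routed mixture

x = rng.normal(size=d)
print(route(x).shape)                                            # (8,)
```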
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.