HOPE: High-order Polynomial Expansion of Black-box Neural Networks
- URL: http://arxiv.org/abs/2307.08192v1
- Date: Mon, 17 Jul 2023 01:46:15 GMT
- Title: HOPE: High-order Polynomial Expansion of Black-box Neural Networks
- Authors: Tingxiong Xiao, Weihang Zhang, Yuxiao Cheng, Jinli Suo
- Abstract summary: We introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial on a reference input.
Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method.
We demonstrate HOPE's wide applications built on deep learning, including function discovery, fast inference, and feature selection.
- Score: 7.156504968033132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their remarkable performance, deep neural networks remain mostly
"black boxes", which makes them hard to interpret and hinders their wide
application in fields that require rational decision-making. Here we introduce
HOPE (High-order Polynomial Expansion), a method for expanding a network into a
high-order Taylor polynomial on a reference input. Specifically, we derive the
high-order derivative rule for composite functions and extend the rule to
neural networks to obtain their high-order derivatives quickly and accurately.
From these derivatives, we can then derive the Taylor polynomial of the neural
network, which provides an explicit expression of the network's local
interpretations. Numerical analysis confirms the high accuracy, low
computational complexity, and good convergence of the proposed method.
Moreover, we demonstrate HOPE's wide applications built on deep learning,
including function discovery, fast inference, and feature selection. The code
is available at https://github.com/HarryPotterXTX/HOPE.git.
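As a rough illustration of the idea in the abstract — approximating a network locally by a Taylor polynomial at a reference input — the following is a minimal sketch on a toy 1-D "network". Note that HOPE derives exact high-order derivatives via a composite-function rule; this sketch uses naive finite differences instead, and the network weights and step size are invented purely for illustration.

```python
import math

# Toy 1-D "network": one hidden tanh unit with fixed, invented weights.
def f(x):
    return 0.8 * math.tanh(1.5 * x + 0.2)

# Naive n-th derivative via the central-difference binomial formula.
# (HOPE instead computes exact high-order derivatives analytically;
#  finite differences serve only as a stand-in baseline here.)
def derivative(f, x0, n, h=1e-3):
    return sum((-1)**k * math.comb(n, k) * f(x0 + (n / 2 - k) * h)
               for k in range(n + 1)) / h**n

# Build the order-`order` Taylor polynomial of f around x0.
def taylor(f, x0, order):
    coeffs = [f(x0)] + [derivative(f, x0, n) / math.factorial(n)
                        for n in range(1, order + 1)]
    return lambda x: sum(c * (x - x0)**n for n, c in enumerate(coeffs))

x0 = 0.0
p3 = taylor(f, x0, 3)
# Near the reference input, the cubic Taylor polynomial tracks the network;
# the gap grows with distance from x0, as the abstract's "local" qualifier
# suggests.
err = abs(p3(0.1) - f(0.1))
```

The returned polynomial is an explicit closed-form expression, which is exactly the kind of local interpretation the abstract describes.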
Related papers
- A Quasilinear Algorithm for Computing Higher-Order Derivatives of Deep Feed-Forward Neural Networks [0.0]
$n$-TangentProp computes the exact derivative $d^n f/dx^n$ in quasilinear, instead of exponential, time.
We demonstrate that our method is particularly beneficial in the context of physics-informed neural networks.
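To illustrate how higher-order derivatives can avoid the exponential cost of naively nesting automatic differentiation, here is a hedged sketch of Taylor-mode propagation (truncated power-series arithmetic) through a tanh unit. This is a standard series-arithmetic technique, not a reproduction of $n$-TangentProp's actual algorithm; the weights are invented for illustration.

```python
import math

# Cauchy product of two truncated Taylor-coefficient lists.
def series_mul(a, b):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(len(a))]

# Propagate tanh through a series using the ODE t' = (1 - t^2) * a':
# matching the coefficient of x^(k-1) gives
#   k * t_k = sum_{j=1..k} j * a_j * (1 - t^2)_{k-j},
# so all n coefficients cost O(n^2) -- polynomial, not exponential.
def series_tanh(a):
    n = len(a)
    t = [math.tanh(a[0])] + [0.0] * (n - 1)
    for k in range(1, n):
        u = series_mul(t, t)                       # t^2, valid up to order k-1
        one_minus = [1.0 - u[0]] + [-c for c in u[1:]]
        t[k] = sum(j * a[j] * one_minus[k - j] for j in range(1, k + 1)) / k
    return t

# Derivatives of f(x) = tanh(w*x + b) at x0 from coefficients: f^(k) = k! * t_k.
w, b, x0, order = 1.5, 0.2, 0.0, 4
a = [w * x0 + b, w] + [0.0] * (order - 1)   # Taylor coefficients of w*x + b at x0
t = series_tanh(a)
derivs = [math.factorial(k) * t[k] for k in range(order + 1)]
```

Each primitive (here, tanh) needs only one such coefficient recurrence, which is what makes forward propagation of truncated series cheap compared with nesting reverse-mode autodiff $n$ times.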
arXiv Detail & Related papers (2024-12-12T22:57:28Z) - Towards an Algebraic Framework For Approximating Functions Using Neural
Network Polynomials [0.589889361990138]
We make the case for neural network objects and extend an existing neural network calculus, explained in detail in Chapter 2 of [bigbook].
Our aim is to show that it indeed makes sense to talk about neural network polynomials, exponentials, sines, and cosines, in the sense that they approximate their real-number counterparts subject to limitations on certain parameters $q$ and $\varepsilon$.
arXiv Detail & Related papers (2024-02-01T23:06:50Z) - Factor Graph Neural Networks [20.211455592922736]
Graph Neural Networks (GNNs) can learn powerful representations in an end-to-end fashion with great success in many real-world applications.
We propose Factor Graph Neural Networks (FGNNs) to effectively capture higher-order relations for inference and learning.
arXiv Detail & Related papers (2023-08-02T00:32:02Z) - Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
arXiv Detail & Related papers (2023-06-05T03:26:06Z) - Regularization of polynomial networks for image recognition [78.4786845859205]
Polynomial Networks (PNs) have emerged as an alternative method with a promising performance and improved interpretability.
We introduce a class of PNs, which are able to reach the performance of ResNet across a range of six benchmarks.
arXiv Detail & Related papers (2023-03-24T10:05:22Z) - A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalization of convex potential layers.
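For context on the constraint this line of work generalizes: a classical way to make a linear map 1-Lipschitz is to rescale its weight matrix by its spectral norm. The sketch below illustrates only that baseline, not the SDP condition or the SLL layers the paper derives; all values are invented for illustration.

```python
import numpy as np

# Rescale a weight matrix so its spectral norm (largest singular value) is 1,
# which makes the linear map x -> W @ x 1-Lipschitz in the Euclidean norm.
# SLL derives richer residual-style 1-Lipschitz layers from an SDP condition;
# this snippet only shows the simple constraint that work generalizes.
def lipschitz_linear(W):
    return W / np.linalg.norm(W, 2)   # ord=2 on a matrix = largest singular value

rng = np.random.default_rng(0)
W = lipschitz_linear(rng.normal(size=(5, 3)))
x, y = rng.normal(size=3), rng.normal(size=3)
# 1-Lipschitz property: ||Wx - Wy|| <= ||x - y|| for every pair of inputs.
gap = np.linalg.norm(W @ x - W @ y) - np.linalg.norm(x - y)
```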
arXiv Detail & Related papers (2023-03-06T14:31:09Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration of "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Towards Lower Bounds on the Depth of ReLU Neural Networks [7.355977594790584]
We investigate whether the class of exactly representable functions strictly increases by adding more layers.
We settle an old conjecture about piecewise linear functions by Wang and Sun (2005) in the affirmative.
We present upper bounds on the sizes of neural networks required to represent functions with logarithmic depth.
arXiv Detail & Related papers (2021-05-31T09:49:14Z) - Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification, and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z) - $\Pi-$nets: Deep Polynomial Neural Networks [86.36557534288535]
$\Pi$-Nets are neural networks in which the output is a high-order polynomial of the input.
We empirically demonstrate that $\Pi$-Nets have better representation power than standard DCNNs.
Our framework elucidates why recent generative models, such as StyleGAN, improve upon their predecessors.
arXiv Detail & Related papers (2020-03-08T18:48:43Z)
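The two $\Pi$-Net entries above describe networks whose output is a polynomial of the input. A minimal degree-2 multiplicative block in that spirit can be sketched as follows; the shapes and wiring are hypothetical, chosen for illustration rather than taken from the papers.

```python
import numpy as np

# One multiplicative block in the spirit of Pi-Nets: the output combines a
# linear term with a Hadamard product of two linear projections, so stacking
# such blocks raises the attainable polynomial degree of the overall map.
def pi_block(x, W1, W2, W3):
    return W1 @ x + (W2 @ x) * (W3 @ x)

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)
W1, W2, W3 = (rng.normal(size=(d, d)) for _ in range(3))
y = pi_block(x, W1, W2, W3)
# Scaling the input by t scales the linear term by t and the product term by
# t**2, confirming the block computes a degree-2 polynomial in x.
```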
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.