OGGN: A Novel Generalized Oracle Guided Generative Architecture for
Modelling Inverse Function of Artificial Neural Networks
- URL: http://arxiv.org/abs/2104.03935v1
- Date: Thu, 8 Apr 2021 17:28:52 GMT
- Title: OGGN: A Novel Generalized Oracle Guided Generative Architecture for
Modelling Inverse Function of Artificial Neural Networks
- Authors: Mohammad Aaftab V, Mansi Sharma
- Abstract summary: This paper presents a novel Generative Neural Network Architecture for modelling the inverse function of an Artificial Neural Network (ANN) either completely or partially.
The proposed Oracle Guided Generative Neural Network, dubbed OGGN, is flexible enough to handle a variety of feature generation problems.
The constraint functions enable a neural network to investigate a given local region of the search space for a longer period of time.
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel Generative Neural Network Architecture for
modelling the inverse function of an Artificial Neural Network (ANN), either
completely or partially. Modelling the complete inverse function of an ANN
involves generating the values of all features that correspond to a desired
output. Partially modelling the inverse function, on the other hand, means
generating the values of a subset of features while fixing the remaining
feature values. Feature set generation is a critical step for artificial
neural networks, useful in several practical applications in engineering and
science. The proposed Oracle Guided Generative Neural Network, dubbed OGGN, is
flexible enough to handle a variety of feature generation problems. In
general, an ANN predicts target values from given feature vectors. The OGGN
architecture makes it possible to generate feature vectors for predetermined
target values of an ANN: when the generated feature vectors are fed to the
forward ANN, the target values predicted by the ANN are close to the
predetermined ones. The OGGN architecture therefore models the inverse of the
function represented by the forward ANN. This work makes a second important
contribution as well: it introduces a new class of functions, termed
constraint functions. The constraint functions enable a neural network to
investigate a given local region of the search space for a longer period of
time, making it possible to find a local optimum of the loss function rather
than only the global optimum. OGGN can also be adapted to solve systems of
polynomial equations in many variables. Experiments on synthetic datasets
validate the effectiveness of OGGN on various use cases.
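
As a concrete, simplified illustration of the idea described above, inversion
can be sketched as training a small generator against a frozen oracle: the
generator proposes feature vectors, the forward ANN scores them against the
desired targets, and a penalty term stands in for a constraint function that
confines the search to a chosen local region. The sketch below is a minimal
PyTorch rendering under these stated assumptions, not the authors'
implementation; the names Generator, constraint_penalty, and invert, the box
constraint, and all hyperparameters are illustrative.

    import torch
    import torch.nn as nn
    from typing import Callable

    class Generator(nn.Module):
        """Maps desired target values to candidate feature vectors."""
        def __init__(self, target_dim: int, feature_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(target_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, feature_dim),
            )

        def forward(self, y_star: torch.Tensor) -> torch.Tensor:
            return self.net(y_star)

    def constraint_penalty(x: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
        # Illustrative stand-in for a constraint function: penalize features
        # that leave the box [lo, hi], keeping the search in a local region.
        return (torch.relu(lo - x) + torch.relu(x - hi)).pow(2).mean()

    def invert(oracle: Callable[[torch.Tensor], torch.Tensor],
               y_star: torch.Tensor, feature_dim: int,
               steps: int = 2000, lam: float = 1.0,
               box: tuple = (-5.0, 5.0)) -> torch.Tensor:
        """Train a generator so that oracle(generator(y_star)) ~= y_star."""
        if isinstance(oracle, nn.Module):
            for p in oracle.parameters():  # the forward ANN stays frozen
                p.requires_grad_(False)
        gen = Generator(y_star.shape[-1], feature_dim)
        opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            x = gen(y_star)  # candidate feature vectors
            loss = nn.functional.mse_loss(oracle(x), y_star)
            loss = loss + lam * constraint_penalty(x, *box)
            loss.backward()  # gradients flow through the frozen oracle
            opt.step()
        return gen(y_star).detach()

Because the oracle in this loop only needs to be differentiable, the same
routine can, in the spirit of the polynomial-equations use case mentioned
above, be pointed at the residual map of a polynomial system with a zero
target (again a hypothetical usage, not code from the paper):

    # Drive the residuals of x^2 + y - 3 = 0 and x + y^2 - 5 = 0 to zero;
    # one exact root is (x, y) = (1, 2).
    residuals = lambda v: torch.stack(
        [v[..., 0] ** 2 + v[..., 1] - 3,
         v[..., 0] + v[..., 1] ** 2 - 5], dim=-1)
    root = invert(residuals, torch.zeros(1, 2), feature_dim=2)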
Related papers
- How (Implicit) Regularization of ReLU Neural Networks Characterizes the
Learned Function -- Part II: the Multi-D Case of Two Layers with Random First
Layer [2.1485350418225244]
We give an exact macroscopic characterization of the generalization behavior of randomized, shallow NNs with ReLU activation.
We show that RSNs correspond to a generalized additive model (GAM)-type regression in which infinitely many directions are considered.
arXiv Detail & Related papers (2023-03-20T21:05:47Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which greatly increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, reducing the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited ability of finite-depth GNNs to capture long-range dependencies, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose the Graph-adaptive Rectified Linear Unit (GReLU), a new parametric activation function that incorporates neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- CDiNN - Convex Difference Neural Networks [0.8122270502556374]
Neural networks with ReLU activation functions have been shown to be universal function approximators, but they learn function mappings as non-smooth functions.
A newer architecture, the input convex neural network (ICNN), learns the output as a convex function of its inputs.
arXiv Detail & Related papers (2021-03-31T17:31:16Z)
- Delay Differential Neural Networks [0.2538209532048866]
We propose a novel model, delay differential neural networks (DDNN), inspired by delay differential equations (DDEs).
For training DDNNs, we provide a memory-efficient adjoint method for computing gradients and back-propagating through the network.
Experiments conducted on synthetic and real-world image classification datasets such as CIFAR-10 and CIFAR-100 show the effectiveness of the proposed models.
arXiv Detail & Related papers (2020-12-12T12:20:54Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph-structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Alpha Discovery Neural Network based on Prior Knowledge [55.65102700986668]
Genetic programming (GP) is the state of the art in the automated financial feature construction task.
This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators.
arXiv Detail & Related papers (2019-12-26T03:10:17Z)