Graph Signal Recovery Using Restricted Boltzmann Machines
- URL: http://arxiv.org/abs/2011.10549v1
- Date: Fri, 20 Nov 2020 18:43:53 GMT
- Title: Graph Signal Recovery Using Restricted Boltzmann Machines
- Authors: Ankith Mohan, Aiichiro Nakano, Emilio Ferrara
- Abstract summary: We propose a model-agnostic pipeline to recover graph signals from an expert system by exploiting the content-addressable memory property of a restricted Boltzmann machine.
We show that denoising the representations learned by deep neural networks is usually more effective than denoising the data itself.
- Score: 11.077860020575084
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a model-agnostic pipeline to recover graph signals from an expert system by exploiting the content-addressable memory property of a restricted Boltzmann machine and the representational ability of a neural network. The proposed pipeline requires a deep neural network trained on a downstream machine learning task with clean data, i.e., data free from any form of corruption or incompleteness. We show that denoising the representations learned by the deep neural network is usually more effective than denoising the data itself. Although this pipeline can deal with noise in any dataset, it is particularly effective for graph-structured datasets.
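As a rough illustration of the recovery step, the sketch below (Python, using scikit-learn's BernoulliRBM, with binarized vectors standing in for representations extracted from a pretrained network) trains an RBM on clean representations and uses Gibbs sampling to pull a corrupted representation back toward a stored pattern. The sizes and corruption model are illustrative assumptions, not the paper's exact configuration.
```python
# A minimal sketch of the recovery pipeline, assuming binarized
# representations from a network trained on clean data. The BernoulliRBM
# and thresholds here are illustrative choices, not the paper's setup.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Stand-ins for representations extracted from a DNN trained on clean data.
clean_reprs = (rng.random((500, 64)) > 0.5).astype(float)

# Train the RBM on clean representations so its energy landscape stores
# them as attractors (the content-addressable-memory property).
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                   random_state=0)
rbm.fit(clean_reprs)

# Corrupt one representation, then run Gibbs sampling to let the RBM
# settle back toward a stored clean pattern.
noisy = clean_reprs[0].copy()
flip = rng.random(64) < 0.1            # flip 10% of the bits
noisy[flip] = 1.0 - noisy[flip]

v = noisy.reshape(1, -1)
for _ in range(50):                    # repeated Gibbs steps
    v = rbm.gibbs(v).astype(float)

print("bits recovered:", int((v == clean_reprs[0]).sum()), "/ 64")
```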
Related papers
- Noise robust neural network architecture [0.0]
We show that the resulting architecture achieves decent noise robustness when faced with input data corrupted by white noise.
We apply simple dense neural networks to the MNIST dataset and demonstrate that, even for very noisy input images that are hard for humans to recognize, our approach achieves better test-set accuracy than humans, without dataset augmentation.
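A generic way to run this kind of robustness check is sketched below: measure test accuracy as the standard deviation of additive white noise grows. The `model` object and the data arrays are assumed to exist and do not reflect the paper's architecture.
```python
# A generic sketch of the evaluation described above: test accuracy of a
# trained dense network under increasing white-noise strength.
# `model`, `x_test`, and `y_test` are assumed (hypothetical) objects.
import numpy as np

def accuracy_under_noise(model, x_test, y_test, sigma, rng):
    """Add zero-mean Gaussian white noise of std `sigma` to the inputs."""
    noisy = x_test + rng.normal(0.0, sigma, size=x_test.shape)
    preds = model.predict(np.clip(noisy, 0.0, 1.0))   # pixels in [0, 1]
    return (preds == y_test).mean()

rng = np.random.default_rng(0)
# for sigma in (0.0, 0.25, 0.5, 1.0):
#     print(sigma, accuracy_under_noise(model, x_test, y_test, sigma, rng))
```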
arXiv Detail & Related papers (2023-05-16T08:30:45Z) - Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals in each sensor separately, taking into account their correlations and hidden relationships with each other.
The graph nodes can represent the data from the different sensors, and the edges can capture the influence of these data on each other.
It is proposed to construct the graph during the training of the graph neural network itself, which makes it possible to train models on data where the dependencies between the sensors are not known in advance.
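One minimal way to realize a trainable adjacency matrix is sketched below in PyTorch: the adjacency is parametrized as the sigmoid of a free matrix and learned jointly with the layer weights. This parametrization is an illustrative assumption, not necessarily the paper's construction.
```python
# A minimal sketch of learning the adjacency matrix jointly with the GNN
# weights when sensor dependencies are unknown. The sigmoid-of-a-free-
# matrix parametrization is an illustrative choice.
import torch
import torch.nn as nn

class TrainableAdjacencyGNNLayer(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        # Free parameters that define a soft adjacency after a sigmoid.
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                           # x: (batch, nodes, in_dim)
        adj = torch.sigmoid(self.adj_logits)        # learned edge weights
        adj = adj / adj.sum(dim=1, keepdim=True)    # row-normalize
        return torch.relu(self.linear(adj @ x))     # aggregate, then transform

layer = TrainableAdjacencyGNNLayer(n_nodes=8, in_dim=4, out_dim=16)
out = layer(torch.randn(2, 8, 4))   # e.g. 8 sensors, 4 features each
print(out.shape)                    # torch.Size([2, 8, 16])
```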
arXiv Detail & Related papers (2022-10-20T11:03:21Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multimedia data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, built from differential operators acting directly on INRs.
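The sketch below illustrates the underlying idea: an INR is an MLP from coordinates to signal values, so differential operators can be applied to the continuous representation via autograd rather than on a discretized grid. The tiny MLP is illustrative, not the paper's model.
```python
# A minimal implicit neural representation: an MLP mapping coordinates to
# signal values, differentiated directly (no grid) via autograd.
import torch
import torch.nn as nn

inr = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

x = torch.linspace(0, 1, 5).reshape(-1, 1).requires_grad_(True)
y = inr(x)                               # continuous signal values
# d y / d x at the query coordinates, on the continuous representation:
dy_dx, = torch.autograd.grad(y.sum(), x)
print(dy_dx.squeeze())
```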
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Weak-signal extraction enabled by deep-neural-network denoising of diffraction data [26.36525764239897]
We show how data can be denoised via a deep convolutional neural network.
We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data.
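A minimal sketch of such a convolutional denoiser, trained on (noisy, clean) pairs with an MSE loss, is shown below; the layer sizes, synthetic data, and noise model are illustrative assumptions.
```python
# A small convolutional denoiser for 2-D detector-like data; sizes and the
# Gaussian noise model are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)               # stand-in diffraction patterns
noisy = clean + 0.1 * torch.randn_like(clean)  # simulated measurement noise

for _ in range(5):                             # a few training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()
print(loss.item())
```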
arXiv Detail & Related papers (2022-09-19T14:43:01Z) - Flurry: a Fast Framework for Reproducible Multi-layered Provenance Graph Representation Learning [0.44040106718326594]
Flurry is an end-to-end data pipeline which simulates cyberattacks.
It captures data from these attacks at multiple system and application layers, converts audit logs from these attacks into data provenance graphs, and incorporates this data with a framework for training deep neural models.
We showcase this pipeline by processing data from multiple system attacks and performing anomaly detection via graph classification.
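The sketch below illustrates one stage of such a pipeline: converting audit-log events into a data provenance graph (here with networkx). The log format and field names are hypothetical.
```python
# A schematic of one pipeline stage: turning audit-log events into a data
# provenance graph. The log format and field names are hypothetical.
import networkx as nx

audit_log = [
    {"subject": "bash", "action": "exec",  "object": "curl"},
    {"subject": "curl", "action": "write", "object": "/tmp/payload"},
    {"subject": "bash", "action": "read",  "object": "/tmp/payload"},
]

g = nx.DiGraph()
for event in audit_log:
    # Nodes are processes/files; edges record which entity acted on which.
    g.add_edge(event["subject"], event["object"], action=event["action"])

print(g.number_of_nodes(), g.number_of_edges())  # inputs to a graph classifier
```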
arXiv Detail & Related papers (2022-03-05T13:52:11Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
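The sketch below generates the kind of input SignalNet operates on: low-resolution quantized in-phase/quadrature samples of a sum of sinusoids, in both three-bit and one-bit variants. The frequencies, amplitudes, and uniform quantizer are illustrative assumptions.
```python
# Quantized I/Q samples of a sum of sinusoids, in 3-bit and 1-bit versions.
# Signal parameters and the uniform mid-rise quantizer are assumptions.
import numpy as np

def quantize(x, bits):
    """Uniform mid-rise quantizer clipped to [-1, 1]."""
    levels = 2 ** bits
    x = np.clip(x, -1.0, 1.0 - 1e-9)
    return (np.floor((x + 1.0) / 2.0 * levels) + 0.5) / levels * 2.0 - 1.0

n, freqs, amps = 64, [0.11, 0.27], [0.5, 0.3]
t = np.arange(n)
signal = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))

iq_3bit = quantize(signal.real, 3) + 1j * quantize(signal.imag, 3)
iq_1bit = np.sign(signal.real) + 1j * np.sign(signal.imag)  # one-bit case
print(iq_3bit[:4], iq_1bit[:4])
```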
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Physical Constraint Embedded Neural Networks for inference and noise regulation [0.0]
We present methods for embedding even-odd symmetries and conservation laws in neural networks.
We demonstrate that such networks can accurately infer symmetries without prior knowledge.
We highlight the noise-resilient properties of physical-constraint-embedded neural networks.
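A standard way to hard-wire an even or odd symmetry is to symmetrize the network's output, as in the sketch below; this simple construction is an assumption standing in for the paper's more general framework.
```python
# Hard-wiring an even or odd symmetry by symmetrizing the output:
# f(x) = (g(x) + g(-x))/2 is exactly even; (g(x) - g(-x))/2 is exactly odd.
import torch
import torch.nn as nn

class SymmetricNet(nn.Module):
    def __init__(self, parity):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                                 nn.Linear(32, 1))
        self.sign = 1.0 if parity == "even" else -1.0

    def forward(self, x):
        return 0.5 * (self.net(x) + self.sign * self.net(-x))

even = SymmetricNet("even")
x = torch.randn(4, 1)
print(torch.allclose(even(x), even(-x)))   # True: symmetry holds exactly
```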
arXiv Detail & Related papers (2021-05-19T14:07:20Z) - An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Neural Network for potentially erroneous single-view or multi-view data.
By incorporating extra layers based on autoencoders into traditional graph convolutional networks, we explicitly characterize and handle typical error models.
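The sketch below illustrates the general idea of coupling an autoencoder with a graph convolution so that node features are reconstructed, and thus cleaned, before propagation; the sizes and single-layer design are illustrative assumptions.
```python
# A sketch of prepending an autoencoder to a graph convolution so that
# feature errors are reduced before propagation; sizes are illustrative.
import torch
import torch.nn as nn

class RobustGCNLayer(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)   # reconstruction head
        self.gcn = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # Autoencode the (possibly erroneous) features first ...
        recon = self.decoder(self.encoder(x))
        # ... then run a standard graph convolution on the cleaned features.
        return torch.relu(adj_norm @ self.gcn(recon)), recon

n = 6
adj_norm = torch.eye(n)              # placeholder normalized adjacency
layer = RobustGCNLayer(in_dim=8, hid_dim=4, out_dim=3)
out, recon = layer(torch.randn(n, 8), adj_norm)
print(out.shape, recon.shape)
```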
arXiv Detail & Related papers (2021-03-27T04:47:59Z) - Information contraction in noisy binary neural networks and its implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound on the required number of neurons in noisy neural networks, which is the first of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
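The setting can be simulated directly, as in the sketch below: a sign-activation layer whose output bits are flipped independently with probability p. All values are illustrative.
```python
# A toy model of the setting above: binary neurons whose outputs flip
# independently with probability p. Everything here is illustrative.
import numpy as np

def noisy_binary_layer(x, w, p, rng):
    """Sign-activation layer where each output bit flips with prob. p."""
    y = np.sign(w @ x)                    # ideal binary outputs (+/-1)
    flips = rng.random(y.shape) < p       # per-neuron error events
    return np.where(flips, -y, y)

rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(16))
w = rng.standard_normal((8, 16))
print(noisy_binary_layer(x, w, p=0.05, rng=rng))
```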
arXiv Detail & Related papers (2021-01-28T00:01:45Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
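One common binarization strategy, sign-binarized weights trained with a straight-through estimator, is sketched below; it may differ from the paper's exact scheme.
```python
# Sign-binarized weights with a straight-through estimator (STE) in the
# backward pass; one common binarization choice, shown for a graph layer.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)              # +/-1 weights in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # pass gradient through

class BinaryGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, x, adj_norm):
        wb = BinarizeSTE.apply(self.weight)
        return adj_norm @ (x @ wb)        # aggregate binarized transform

n = 5
layer = BinaryGraphConv(8, 4)
print(layer(torch.randn(n, 8), torch.eye(n)).shape)
```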
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how Koopman modes can be used to selectively prune the network to speed up the training procedure.
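The analysis idea can be sketched as follows: collect snapshots of the weight vector over training steps and estimate the Koopman spectrum with dynamic mode decomposition (DMD). The weight trajectory below is synthetic.
```python
# Estimate the Koopman spectrum of a weight-space trajectory via dynamic
# mode decomposition (DMD). The trajectory here is synthetic linear data.
import numpy as np

rng = np.random.default_rng(0)
T, d = 60, 20
A_true = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # stable map
W = np.zeros((T, d))
W[0] = rng.standard_normal(d)
for t in range(T - 1):
    W[t + 1] = A_true @ W[t]             # stand-in weight trajectory

X, Y = W[:-1].T, W[1:].T                 # snapshot pairs
A_dmd = Y @ np.linalg.pinv(X)            # best linear one-step operator
eigvals = np.linalg.eigvals(A_dmd)       # approximate Koopman spectrum
print(np.sort(np.abs(eigvals))[-3:])     # leading mode magnitudes
```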
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.