Neural Network Approximation of Graph Fourier Transforms for Sparse
Sampling of Networked Flow Dynamics
- URL: http://arxiv.org/abs/2002.05508v1
- Date: Tue, 11 Feb 2020 20:18:37 GMT
- Title: Neural Network Approximation of Graph Fourier Transforms for Sparse
Sampling of Networked Flow Dynamics
- Authors: Alessio Pagani, Zhuangkun Wei, Ricardo Silva, Weisi Guo
- Abstract summary: Infrastructure monitoring is critical for safe operations and sustainability. Water distribution networks (WDNs) are large-scale networked critical systems with complex cascade dynamics which are difficult to predict.
Existing approaches use multi-objective optimisation to find the minimum set of essential monitoring points, but lack performance guarantees and a theoretical framework.
Here, we first develop Graph Fourier Transform (GFT) operators to compress networked contamination spreading dynamics to identify the essential principal data collection points with inference performance guarantees.
We then build autoencoder (AE) inspired neural networks (NN) to generalize the GFT sampling process and under-sample further from the initial sampling set.
- Score: 13.538871180763156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrastructure monitoring is critical for safe operations and sustainability.
Water distribution networks (WDNs) are large-scale networked critical systems
with complex cascade dynamics which are difficult to predict. Ubiquitous
monitoring is expensive and a key challenge is to infer the contaminant
dynamics from partial sparse monitoring data. Existing approaches use
multi-objective optimisation to find the minimum set of essential monitoring
points, but lack performance guarantees and a theoretical framework.
Here, we first develop Graph Fourier Transform (GFT) operators to compress
networked contamination spreading dynamics to identify the essential principal
data collection points with inference performance guarantees. We then build
autoencoder (AE) inspired neural networks (NN) to generalize the GFT sampling
process and under-sample further from the initial sampling set, allowing a very
small set of data points to largely reconstruct the contamination dynamics over
real and artificial WDNs. Various sources of the contamination are tested and
we obtain high accuracy reconstruction using around 5-10% of the sample set.
This general approach of compression and under-sampled recovery via neural
networks can be applied to a wide range of networked infrastructures to enable
digital twins.
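The GFT compression step described in the abstract can be illustrated with a minimal sketch: build the combinatorial graph Laplacian, take its eigenvectors as the GFT basis, and keep only the lowest-frequency coefficients of a graph signal. This is a generic graph-signal-processing illustration, not the authors' exact operator; the toy path graph and signal are invented for the example.

```python
import numpy as np

def graph_fourier_basis(adj):
    """Eigendecomposition of the combinatorial graph Laplacian L = D - A.

    The eigenvectors form the Graph Fourier Transform (GFT) basis;
    the eigenvalues play the role of graph frequencies (ascending order).
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals, eigvecs = np.linalg.eigh(lap)
    return eigvals, eigvecs

def compress(signal, basis, k):
    """Keep only the k lowest-frequency GFT coefficients of a graph signal."""
    coeffs = basis.T @ signal          # forward GFT
    coeffs[k:] = 0.0                   # discard high-frequency content
    return basis @ coeffs              # inverse GFT: smoothed reconstruction

# Toy example: a 4-node path graph carrying a smooth (near-linear) signal.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
_, U = graph_fourier_basis(adj)
x = np.array([1.0, 1.1, 1.2, 1.3])
x_hat = compress(x, U, k=2)
print(np.round(x_hat, 3))
```

Because smooth spreading dynamics concentrate energy in the low graph frequencies, a small number of coefficients (and hence a small monitoring set) suffices for accurate reconstruction.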
Related papers
- Advanced Financial Fraud Detection Using GNN-CL Model [13.5240775562349]
The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection.
It combines the advantages of graph neural networks (GNNs), convolutional neural networks (CNNs) and long short-term memory (LSTM) networks.
A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity.
arXiv Detail & Related papers (2024-07-09T03:59:06Z) - SynA-ResNet: Spike-driven ResNet Achieved through OR Residual Connection [10.702093960098104]
Spiking Neural Networks (SNNs) have garnered substantial attention in brain-like computing for their biological fidelity and the capacity to execute energy-efficient spike-driven operations.
We propose a novel training paradigm that first accumulates a large amount of redundant information through OR Residual Connection (ORRC).
We then filter out the redundant information using the Synergistic Attention (SynA) module, which promotes feature extraction in the backbone while suppressing the influence of noise and useless features in the shortcuts.
arXiv Detail & Related papers (2023-11-11T13:36:27Z) - Graph Neural Network-Based Anomaly Detection for River Network Systems [0.8399688944263843]
Real-time monitoring of water quality is increasingly reliant on in-situ sensor technology.
Anomaly detection is crucial for identifying erroneous patterns in sensor data.
This paper presents a solution to the challenging task of anomaly detection for river network sensor data.
arXiv Detail & Related papers (2023-04-19T01:32:32Z) - Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural
Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs)
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy.
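The neuron-pruning step mentioned above, dropping neurons whose weights have the smallest magnitude, can be sketched as follows. This is a simplified, generic illustration rather than the paper's exact procedure; the layer shape and sparsity ratio are arbitrary choices for the example.

```python
import numpy as np

def prune_neurons(W, keep_ratio=0.5):
    """Zero out the rows of weight matrix W (one row per neuron) with the
    smallest L2 norm, keeping the top keep_ratio fraction of neurons."""
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.argsort(norms)[-k:]          # indices of the largest-norm neurons
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep] = True
    return W * mask[:, None], mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))               # hypothetical hidden layer
W_pruned, mask = prune_neurons(W, keep_ratio=0.25)
print(int(mask.sum()), "neurons kept out of", W.shape[0])
```

In practice the mask would be applied during training so the surviving neurons adapt to the sparser architecture.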
arXiv Detail & Related papers (2023-02-06T16:54:20Z) - Overlapping Community Detection using Dynamic Dilated Aggregation in
Deep Residual GCN [2.709785011931266]
Overlapping community detection is a key problem in graph mining.
In this study, we design a deep residual graph convolutional network (DynaResGCN) based on our novel dynamic dilated aggregation mechanisms.
Our experiments show significantly superior performance over many state-of-the-art methods for the detection of overlapping communities in networks.
arXiv Detail & Related papers (2022-10-20T11:22:58Z) - Efficient Global Robustness Certification of Neural Networks via
Interleaving Twin-Network Encoding [8.173681464694651]
We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, and demonstrates the importance and practicality of our approach.
arXiv Detail & Related papers (2022-03-26T19:23:37Z) - CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point
Clouds [51.47100091540298]
We present Cascaded Primitive Fitting Networks (CPFN) that relies on an adaptive patch sampling network to assemble detection results of global and local primitive detection networks.
CPFN improves the state-of-the-art SPFN performance by 13-14% on high-resolution point cloud datasets and specifically improves the detection of fine-scale primitives by 20-22%.
arXiv Detail & Related papers (2021-08-31T23:27:33Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part will be processed using expensive operations and the lower-frequency part is assigned with cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
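The DCT-domain split described above can be sketched in a few lines: transform the input, partition the coefficients at a cutoff frequency, and invert each part. This is a 1-D toy illustration of the idea, not the paper's network; the orthonormal DCT-II matrix, cutoff, and test signal are assumptions for the example.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as the rows of an n x n matrix."""
    k = np.arange(n)[:, None]              # frequency index
    i = np.arange(n)[None, :]              # sample index
    M = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    M[0] /= np.sqrt(2.0)                   # DC-row scaling for orthonormality
    return M

def split_by_frequency(x, cutoff):
    """Split a 1-D signal into low- and high-frequency parts that sum to x."""
    M = dct_matrix(len(x))
    c = M @ x                              # forward DCT
    low, high = c.copy(), c.copy()
    low[cutoff:] = 0.0                     # cheap branch: first `cutoff` coeffs
    high[:cutoff] = 0.0                    # expensive branch: the rest
    return M.T @ low, M.T @ high           # inverse DCT of each part

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 64)) + 0.1 * rng.normal(size=64)
lo, hi = split_by_frequency(x, cutoff=8)
```

Since the split is exact (the two parts sum back to the input), the network can route each part to an operation of matching cost without losing information.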
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves the distributional shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z) - Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
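The resolution-adaptive routing idea in the last entry, let a cheap sub-network handle "easy" inputs and escalate only the "hard" ones, reduces to a confidence-gated early exit. A minimal sketch, with hypothetical stand-in models and an arbitrary confidence threshold (not RANet's actual architecture):

```python
import numpy as np

def adaptive_infer(x, cheap_model, expensive_model, threshold=0.9):
    """Early-exit routing: trust the cheap low-resolution model when its
    top softmax probability clears the threshold; otherwise fall back to
    the expensive high-resolution model."""
    probs = cheap_model(x)
    if probs.max() >= threshold:
        return int(probs.argmax()), "cheap"
    probs = expensive_model(x)
    return int(probs.argmax()), "expensive"

# Toy stand-ins for the two sub-networks (hypothetical, fixed outputs).
cheap = lambda x: np.array([0.95, 0.05])       # confident -> early exit
expensive = lambda x: np.array([0.4, 0.6])
label, path = adaptive_infer(None, cheap, expensive)
```

The average inference cost then scales with the fraction of inputs the cheap path can resolve, which is the mechanism RANet exploits.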
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.