Robustness of Physics-Informed Neural Networks to Noise in Sensor Data
- URL: http://arxiv.org/abs/2211.12042v1
- Date: Tue, 22 Nov 2022 06:24:43 GMT
- Title: Robustness of Physics-Informed Neural Networks to Noise in Sensor Data
- Authors: Jian Cheng Wong, Pao-Hsiung Chiu, Chin Chun Ooi, My Ha Dao
- Abstract summary: PINNs have been shown to be an effective way of incorporating physics-based domain knowledge into neural network models.
In this work, we conduct a preliminary investigation of the robustness of physics-informed neural networks to the magnitude of noise in the data.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Physics-Informed Neural Networks (PINNs) have been shown to be an effective
way of incorporating physics-based domain knowledge into neural network models
for many important real-world systems. They have been particularly effective as
a means of inferring system information based on data, even in cases where data
is scarce. Most current work, however, assumes the availability of
high-quality data. In this work, we conduct a preliminary investigation of
the robustness of physics-informed neural networks to the magnitude of noise
in the data. Interestingly, our experiments reveal that the inclusion of
physics in the neural network is sufficient to negate the impact of noise in
data originating from hypothetical low-quality sensors with high
noise-to-signal ratios of up to 1. The resultant predictions for this test case
are seen to still match the predictions obtained for equivalent data from
high-quality sensors with potentially 10x less noise. This
further implies the utility of physics-informed neural network modeling for
making sense of data from sensor networks in the future, especially with the
advent of Industry 4.0 and the increasing trend towards ubiquitous deployment
of low-cost sensors, which are typically noisier.
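As a concrete (and strictly illustrative) sketch of the setup described above, the snippet below trains a toy PINN on noisy observations of the one-dimensional ODE du/dx = -u, whose solution is u(x) = exp(-x). The architecture, the test equation, the noise level, and the loss weighting lam are all assumptions made for illustration, not the paper's actual benchmarks.

```python
import torch

# A minimal PINN sketch for the toy ODE du/dx = -u (hypothetical test
# problem); the data term fits noisy sensor readings while the physics
# term penalises the ODE residual at collocation points.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(x_data, u_noisy, x_colloc, lam=1.0):
    # Data term: fit the noisy observations.
    data_loss = torch.mean((model(x_data) - u_noisy) ** 2)
    # Physics term: penalise the residual du/dx + u at collocation points.
    x = x_colloc.detach().clone().requires_grad_(True)
    u = model(x)
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = torch.mean((du_dx + u) ** 2)
    return data_loss + lam * physics_loss

# "Low-quality sensor": noise magnitude comparable to the signal (ratio ~ 1).
x_data = torch.rand(50, 1)
u_noisy = torch.exp(-x_data) * (1.0 + torch.randn(50, 1))
x_colloc = torch.linspace(0.0, 1.0, 200).unsqueeze(1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    pinn_loss(x_data, u_noisy, x_colloc).backward()
    opt.step()
```

The point of the sketch is only to make the trade-off concrete: with lam = 0 the network is free to fit the noise, whereas the physics residual pulls predictions back toward the governing equation.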
Related papers
- Data-Driven Fire Modeling: Learning First Arrival Times and Model Parameters with Neural Networks [12.416949154231714]
We investigate the ability of neural networks to parameterize dynamics in fire science.
In particular, we investigate neural networks that map five key parameters in fire spread to the first arrival time.
For the inverse problem, we quantify the network's sensitivity in estimating each of the key parameters.
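As a rough sketch of how such input sensitivities are commonly quantified, one can differentiate a trained surrogate's output with respect to its inputs; the untrained five-parameter network below is a stand-in, and the paper's own sensitivity analysis may be defined differently.

```python
import torch

# Hypothetical surrogate: five fire-spread parameters -> first arrival time.
# Sensitivity is taken here as |d(arrival time)/d(parameter)|, averaged over
# a batch of parameter samples.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(5, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

params = torch.rand(100, 5, requires_grad=True)  # sampled parameter vectors
surrogate(params).sum().backward()
sensitivity = params.grad.abs().mean(dim=0)      # one score per parameter
print(sensitivity)
```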
arXiv Detail & Related papers (2024-08-16T19:54:41Z)
- Low-Power Vibration-Based Predictive Maintenance for Industry 4.0 using Neural Networks: A Survey [33.08038317407649]
This paper investigates the potential of neural networks for low-power on-device computation of vibration sensor data for predictive maintenance.
No satisfactory standard benchmark dataset exists for evaluating neural networks in predictive maintenance tasks.
We highlight the need for future research on hardware implementations of neural networks for low-power predictive maintenance applications.
arXiv Detail & Related papers (2024-08-01T12:46:37Z)
- Physics-Enhanced Graph Neural Networks For Soft Sensing in Industrial Internet of Things [6.374763930914524]
The Industrial Internet of Things (IIoT) is reshaping manufacturing, industrial processes, and infrastructure management.
Achieving a highly reliable IIoT can be hindered by factors such as the cost of installing large numbers of sensors, limitations in retrofitting existing systems with sensors, or harsh environmental conditions that may make sensor installation impractical.
We propose physics-enhanced Graph Neural Networks (GNNs), which integrate principles of physics into graph-based methodologies.
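The general pattern can be sketched as one graph-convolution step plus a physics-based penalty; the adjacency, features, and the conservation constraint below are hypothetical placeholders, not the paper's formulation.

```python
import torch

# One normalised message-passing step over a sensor graph, followed by a
# hypothetical physics prior (a conservation constraint on the estimates).
n, d = 6, 4
A = (torch.rand(n, n) > 0.5).float()
A = ((A + A.t()) > 0).float()                          # symmetrise
A.fill_diagonal_(1.0)                                  # add self-loops
deg_inv_sqrt = A.sum(1).pow(-0.5)
A_norm = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

W = torch.nn.Parameter(torch.randn(d, d) * 0.1)
X = torch.rand(n, d)                                   # readings per sensor node

H = torch.relu(A_norm @ X @ W)                         # message passing
u_hat = H.mean(dim=1)                                  # soft-sensor estimate per node

# Hypothetical physics prior: estimates should respect a known conserved total.
physics_penalty = (u_hat.sum() - torch.tensor(3.0)) ** 2
```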
arXiv Detail & Related papers (2024-04-11T18:03:59Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
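One plausible way to compute such a measure is sketched below, assuming a Gaussian affinity and Shannon entropy over the normalized spectrum of the diffusion operator; the authors' exact estimator may differ in its normalization and other details.

```python
import numpy as np

# Diffusion-style spectral entropy sketch: build a Gaussian affinity, row-
# normalise it into a diffusion operator, and take the Shannon entropy of
# its normalised eigenvalue spectrum (raised to diffusion time t).
def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / (2.0 * sigma**2))                   # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic diffusion
    lam = np.abs(np.linalg.eigvals(P)) ** t
    p = lam / lam.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

X = np.random.randn(200, 10)  # simulated high-dimensional representations
print(diffusion_spectral_entropy(X))
```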
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
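A hedged sketch of how that observation might be operationalized: flag inputs whose predictions have collapsed toward an input-independent, constant solution (approximated here by the marginal class prior) and abstain on them. The rule, names, and threshold are illustrative, not the paper's procedure.

```python
import torch

# Abstain when the prediction is close (in KL divergence) to the constant
# prediction given by the class prior -- a crude proxy for "the input looks
# OOD and the network has reverted to its constant solution".
def risk_sensitive_predict(model, x, class_prior, tau=0.05):
    probs = torch.softmax(model(x), dim=-1)
    kl = (probs * (probs / class_prior).log()).sum(-1)  # KL(probs || prior)
    return probs.argmax(-1), kl < tau                   # predictions, abstain mask

model = torch.nn.Linear(8, 3)                 # stand-in classifier
prior = torch.tensor([0.5, 0.3, 0.2])
preds, abstain = risk_sensitive_predict(model, torch.randn(4, 8), prior)
```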
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signal from each sensor separately, while also taking into account their correlations and hidden relationships with one another.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
The graph is constructed during the training of the graph neural network, which allows models to be trained on data where the dependencies between the sensors are not known in advance.
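A minimal sketch of the trainable-adjacency idea, assuming edge logits that are learned jointly with the rest of the network and squashed through a sigmoid; the paper's exact parameterization may differ.

```python
import torch

# Graph layer whose adjacency matrix is itself a trainable parameter, so
# sensor dependencies are learned rather than specified in advance.
class TrainableGraphLayer(torch.nn.Module):
    def __init__(self, n_sensors, d_in, d_out):
        super().__init__()
        self.adj_logits = torch.nn.Parameter(torch.zeros(n_sensors, n_sensors))
        self.lin = torch.nn.Linear(d_in, d_out)

    def forward(self, x):                        # x: (n_sensors, d_in)
        adj = torch.sigmoid(self.adj_logits)     # learned soft adjacency
        return torch.relu(adj @ self.lin(x))     # aggregate, then transform

layer = TrainableGraphLayer(n_sensors=10, d_in=4, d_out=8)
out = layer(torch.randn(10, 4))
```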
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Bayesian Physics-Informed Neural Networks for real-world nonlinear dynamical systems [0.0]
We integrate data, physics, and uncertainties by combining neural networks, physics-informed modeling, and Bayesian inference.
Our study reveals the inherent advantages and disadvantages of Neural Networks, Bayesian Inference, and a combination of both.
We anticipate that the underlying concepts and trends generalize to more complex disease conditions.
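One ingredient of such a combination, sketched under assumptions: the plain mean-squared data loss of a PINN is replaced by a Gaussian negative log-likelihood with a learned noise scale, so the data uncertainty is estimated rather than fixed. Full Bayesian PINNs, with posterior inference over the network weights, go beyond this snippet.

```python
import torch

# Gaussian negative log-likelihood with a learnable noise scale; this would
# replace the MSE data term in a PINN loss, alongside the physics residual.
log_sigma = torch.nn.Parameter(torch.zeros(()))

def gaussian_nll(u_pred, u_obs):
    sigma2 = torch.exp(2.0 * log_sigma)
    return 0.5 * ((u_pred - u_obs) ** 2 / sigma2 + torch.log(sigma2)).mean()

loss = gaussian_nll(torch.randn(50, 1), torch.randn(50, 1))  # + physics term
```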
arXiv Detail & Related papers (2022-05-12T19:04:31Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
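For concreteness, the sketch below generates the kind of input the abstract describes: b-bit quantized in-phase/quadrature samples of a sum of sinusoids. The frequencies, normalization, and uniform quantizer are assumptions, not the paper's setup.

```python
import numpy as np

# Generate quantized IQ samples of a sum of complex sinusoids.
def quantized_iq(freqs, n=64, bits=3):
    t = np.arange(n)
    x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
    x = x / np.abs(x).max()                      # normalise to [-1, 1]
    levels = 2 ** bits
    quant = lambda v: np.round((v + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    return quant(x.real), quant(x.imag)          # in-phase, quadrature

i3, q3 = quantized_iq([0.10, 0.23], bits=3)      # three-bit data
i1, q1 = quantized_iq([0.10, 0.23], bits=1)      # one-bit data
```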
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
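The joint-perturbation idea can be sketched as evaluating the loss after small, simultaneous perturbations to both the inputs and the weights. The FGSM-style sign steps below are an illustrative choice rather than the paper's formalization.

```python
import torch

# Evaluate the loss under joint sign-gradient perturbations of the input
# and the model weights, then restore the weights.
def joint_perturbation_loss(model, loss_fn, x, y, eps_x=0.01, eps_w=0.01):
    x_adv = x.detach().clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grads = torch.autograd.grad(loss, [x_adv] + list(model.parameters()))
    x_pert = x_adv + eps_x * grads[0].sign()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads[1:]):
            p.add_(eps_w * g.sign())             # perturb weights in place
        perturbed = loss_fn(model(x_pert), y)
        for p, g in zip(model.parameters(), grads[1:]):
            p.sub_(eps_w * g.sign())             # undo the perturbation
    return perturbed

net = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
print(joint_perturbation_loss(net, torch.nn.functional.cross_entropy, x, y))
```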
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Information contraction in noisy binary neural networks and its implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound on the number of neurons required in noisy neural networks, which is the first of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
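A toy simulation of the noise model in question, a binary layer whose output bits flip independently with probability p_err; the sizes and flip rate below are arbitrary.

```python
import torch

# Binary layer with noisy outputs: each output bit flips with prob. p_err.
def noisy_binary_layer(x_bits, weights, p_err=0.05):
    pre = (2 * x_bits - 1) @ weights             # map {0,1} -> {-1,+1}, project
    out = (pre > 0).float()                      # ideal binary activation
    flips = (torch.rand_like(out) < p_err).float()
    return (out + flips) % 2                     # apply random bit flips

x = torch.randint(0, 2, (8, 16)).float()
w = torch.randn(16, 10)
print(noisy_binary_layer(x, w))
```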
arXiv Detail & Related papers (2021-01-28T00:01:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.