Neural Stochastic Differential Equations for Robust and Explainable
Analysis of Electromagnetic Unintended Radiated Emissions
- URL: http://arxiv.org/abs/2309.15386v1
- Date: Wed, 27 Sep 2023 03:37:16 GMT
- Title: Neural Stochastic Differential Equations for Robust and Explainable
Analysis of Electromagnetic Unintended Radiated Emissions
- Authors: Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez
- Abstract summary: We present a comprehensive evaluation of the robustness and explainability of ResNet-like models in the context of Unintended Radiated Emission (URE) classification.
We propose a novel application of Neural SDEs to build models for URE classification that are not only robust to noise but also provide more meaningful and intuitive explanations.
- Score: 25.174139739860657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a comprehensive evaluation of the robustness and explainability of
ResNet-like models in the context of Unintended Radiated Emission (URE)
classification and suggest a new approach leveraging Neural Stochastic
Differential Equations (SDEs) to address the identified limitations. We provide an
empirical demonstration of the fragility of ResNet-like models to Gaussian
noise perturbations: model performance deteriorates sharply, with the
F1-score dropping to a negligible 0.008 under Gaussian noise with a standard
deviation of only 0.5. We also highlight a concerning discrepancy where the
explanations provided by ResNet-like models do not reflect the inherent
periodicity in the input data, a crucial attribute in URE detection from stable
devices. In response to these findings, we propose a novel application of
Neural SDEs to build models for URE classification that are not only robust to
noise but also provide more meaningful and intuitive explanations. Neural SDE
models maintain a high F1-score of 0.93 even when exposed to Gaussian noise
with a standard deviation of 0.5, demonstrating resilience superior to that of
ResNet models. Neural SDE models successfully recover the time-invariant or periodic
horizontal bands from the input data, a feature that was conspicuously missing
in the explanations generated by ResNet-like models. This advancement presents
a small but significant step in the development of robust and interpretable
models for real-world URE applications where data is inherently noisy and
assurance arguments demand interpretable machine learning predictions.
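The abstract does not describe the model's implementation, but the core mechanism of a Neural SDE classifier can be sketched: the input state is evolved by a drift network and a diffusion network under Euler-Maruyama integration, and a readout is applied to the terminal state. The sketch below is a minimal illustration with randomly initialized stand-in weights (`Wf`, `Wg`, `w_out` are hypothetical placeholders, not the authors' architecture); the Brownian term injected at every step is what encourages the learned flow to be smooth and hence noise-robust.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8
Wf = rng.normal(0, 0.3, size=(d, d))   # stand-in drift network weights (hypothetical)
Wg = rng.normal(0, 0.1, size=(d, d))   # stand-in diffusion network weights (hypothetical)
w_out = rng.normal(0, 0.5, size=d)     # linear readout for binary classification

def neural_sde_features(x0, sigma=0.5, dt=0.05, steps=40, rng=rng):
    """Euler-Maruyama integration of dx = f(x) dt + g(x) dW, where f and g
    stand in for learned networks. Noise is injected at every step, which is
    the mechanism behind the robustness the abstract reports."""
    x = x0.copy()
    for _ in range(steps):
        f = np.tanh(Wf @ x)                        # drift term
        g = sigma * np.tanh(Wg @ x)                # state-dependent diffusion term
        dW = rng.normal(0.0, np.sqrt(dt), size=d)  # Brownian increment
        x = x + f * dt + g * dW
    return x

def classify(x0):
    # Sigmoid of a linear readout on the terminal SDE state.
    z = w_out @ neural_sde_features(x0)
    return 1.0 / (1.0 + np.exp(-z))

p = classify(rng.normal(size=d))  # a probability in (0, 1)
```

In a trained model, `Wf`, `Wg`, and `w_out` would be fit end-to-end (e.g. with a differentiable SDE solver); this sketch only shows the forward pass structure.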
Related papers
- Non-adversarial training of Neural SDEs with signature kernel scores [4.721845865189578]
State-of-the-art performance for irregular time series generation has been previously obtained by training these models adversarially as GANs.
In this paper, we introduce a novel class of scoring rules on pathspace based on signature kernels.
arXiv Detail & Related papers (2023-05-25T17:31:18Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Robust Neural Posterior Estimation and Statistical Model Criticism [1.5749416770494706]
We argue that modellers must treat simulators as idealistic representations of the true data generating process.
In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models.
We find that the presence of misspecification, in contrast, leads to unreliable inference when NPE is used naively.
arXiv Detail & Related papers (2022-10-12T20:06:55Z)
- Evaluating the Adversarial Robustness for Fourier Neural Operators [78.36413169647408]
The Fourier Neural Operator (FNO) was the first model to simulate turbulent flow with zero-shot super-resolution.
We generate adversarial examples for FNO based on norm-bounded data input perturbations.
Our results show that the model's robustness degrades rapidly with increasing perturbation levels.
arXiv Detail & Related papers (2022-04-08T19:19:42Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Score Matching Model for Unbounded Data Score [23.708122045184695]
In real datasets, the score function diverges as the perturbation noise ($\sigma$) decreases to zero.
We introduce Unbounded Noise Score Network (UNCSN) that resolves the score problem.
We also introduce a new type of SDE, so the exact log likelihood can be calculated from the newly suggested SDE.
arXiv Detail & Related papers (2021-06-10T06:30:16Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks offer improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Noisy Recurrent Neural Networks [45.94390701863504]
We study recurrent neural networks (RNNs) trained by injecting noise into hidden states as discretizations of differential equations driven by input data.
We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin.
Our theory is supported by empirical results which demonstrate improved robustness with respect to various input perturbations, while maintaining state-of-the-art performance.
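The summary above describes viewing a noise-injected RNN as a discretization of an input-driven SDE. A minimal sketch of that construction (weights and dimensions are illustrative, not from the paper): each hidden-state update is one Euler-Maruyama step, so Gaussian noise scaled by $\sqrt{\Delta t}$ enters the hidden state at every time step.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_rnn(inputs, Wh, Wx, sigma=0.05, dt=0.1, rng=rng):
    """One recurrent pass where each hidden update is an Euler-Maruyama step
    of dh = tanh(Wh h + Wx x) dt + sigma dW: noise injected into the hidden
    state acts as the implicit regularizer described in the summary."""
    h = np.zeros(Wh.shape[0])
    for x in inputs:
        drift = np.tanh(Wh @ h + Wx @ x)
        h = h + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=h.shape)
    return h

# Illustrative usage with random weights and a random input sequence.
T, d_in, d_h = 12, 4, 6
Wh = rng.normal(0, 0.4, size=(d_h, d_h))
Wx = rng.normal(0, 0.4, size=(d_h, d_in))
inputs = rng.normal(size=(T, d_in))
h_final = noisy_rnn(inputs, Wh, Wx)
```

Setting `sigma=0` recovers a deterministic residual-style RNN; the paper's claim is that `sigma > 0` during training biases the model toward flatter minima and more stable dynamics.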
arXiv Detail & Related papers (2021-02-09T15:20:50Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Score-Based Generative Modeling through Stochastic Differential Equations [114.39209003111723]
We present a differential equation that transforms a complex data distribution to a known prior distribution by injecting noise.
A corresponding reverse-time SDE transforms the prior distribution back into the data distribution by slowly removing the noise.
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks.
We demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
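The forward half of this construction, carrying a data distribution to a known prior by injecting noise, can be illustrated directly. The sketch below simulates a variance-preserving-style SDE, $dx = -\tfrac{1}{2}\beta x\,dt + \sqrt{\beta}\,dW$ (the specific $\beta$ and step count are illustrative choices, not the paper's settings), on a deliberately non-Gaussian dataset; after enough steps the samples are approximately standard normal, which is the prior the reverse-time SDE would start from.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_diffuse(x, beta=1.0, dt=0.01, steps=500, rng=rng):
    """Euler-Maruyama simulation of dx = -0.5 * beta * x dt + sqrt(beta) dW.
    This forward SDE gradually destroys structure, transporting any data
    distribution toward N(0, 1)."""
    for _ in range(steps):
        x = x - 0.5 * beta * x * dt + np.sqrt(beta * dt) * rng.normal(size=x.shape)
    return x

# Start from a far-from-Gaussian data distribution: two point masses at +/-3.
data = np.where(rng.random(20000) < 0.5, -3.0, 3.0)
noised = forward_diffuse(data)
# noised is now approximately N(0, 1) distributed.
```

Generation then runs the corresponding reverse-time SDE, which requires the score $\nabla_x \log p_t(x)$; that is the quantity the paper estimates with a neural network.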
arXiv Detail & Related papers (2020-11-26T19:39:10Z)
- Sparsely constrained neural networks for model discovery of PDEs [0.0]
We present a modular framework that determines the sparsity pattern of a deep-learning based surrogate using any sparse regression technique.
We show how a different network architecture and sparsity estimator improve model discovery accuracy and convergence on several benchmark examples.
arXiv Detail & Related papers (2020-11-09T11:02:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.