Safety Performance of Neural Networks in the Presence of Covariate Shift
- URL: http://arxiv.org/abs/2307.12716v1
- Date: Mon, 24 Jul 2023 11:55:32 GMT
- Title: Safety Performance of Neural Networks in the Presence of Covariate Shift
- Authors: Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou
- Abstract summary: We propose to reshape the initial test set, as used for the safety performance evaluation prior to deployment, based on an approximation of the operational data.
This approximation is obtained by observing and learning the distribution of activation patterns of neurons in the network during operation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Covariate shift may impact the operational safety performance of neural
networks. A re-evaluation of the safety performance, however, requires
collecting new operational data and creating corresponding ground truth labels,
which is often not possible during operation. We therefore propose to
reshape the initial test set, as used for the safety performance evaluation
prior to deployment, based on an approximation of the operational data. This
approximation is obtained by observing and learning the distribution of
activation patterns of neurons in the network during operation. The reshaped
test set reflects the distribution of neuron activation values as observed
during operation, and may therefore be used for re-evaluating safety
performance in the presence of covariate shift. First, we derive conservative
bounds on the values of neurons by applying finite binning and static dataflow
analysis. Second, we formulate a mixed integer linear programming (MILP)
constraint for constructing the minimum set of data points to be removed in the
test set, such that the difference between the discretized test and operational
distributions is bounded. We discuss potential benefits and limitations of this
constraint-based approach based on our initial experience with an implemented
research prototype.
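The reshaping step can be sketched as a small feasibility search over the binned distributions. This is an illustrative simplification, not the paper's exact MILP formulation: the bin counts, the tolerance `eps`, and the per-bin removal variables are assumptions, and the MILP objective (minimum number of removed points) is resolved here by enumerating the total removal count in increasing order.

```python
import math

def reshape_test_set(test_counts, op_probs, eps):
    """Remove as few test points as possible, per bin, so that the kept
    test distribution is within eps of the operational distribution.

    test_counts[b]: number of test points in activation-pattern bin b.
    op_probs[b]:    operational frequency of bin b (sums to 1).
    Returns a list of removals per bin, or None if no eps-tolerant
    reshaping exists.
    """
    N = sum(test_counts)
    # Enumerate the total removal count R in increasing order, so the
    # first feasible R is optimal (this replaces the MILP objective).
    for R in range(N):
        kept_total = N - R
        bounds = []
        for n_b, p_b in zip(test_counts, op_probs):
            # keep_b must satisfy |keep_b / kept_total - p_b| <= eps;
            # the 1e-9 guard absorbs floating-point rounding.
            lo = max(0, math.ceil((p_b - eps) * kept_total - 1e-9))
            hi = min(n_b, math.floor((p_b + eps) * kept_total + 1e-9))
            if lo > hi:
                bounds = None
                break
            bounds.append((lo, hi))
        if bounds is None:
            continue
        lo_sum = sum(lo for lo, _ in bounds)
        hi_sum = sum(hi for _, hi in bounds)
        if not (lo_sum <= kept_total <= hi_sum):
            continue
        # Distribute the kept points greedily within the per-bin bounds.
        keeps = [lo for lo, _ in bounds]
        slack = kept_total - lo_sum
        for b, (lo, hi) in enumerate(bounds):
            add = min(slack, hi - lo)
            keeps[b] += add
            slack -= add
        return [n_b - k for n_b, k in zip(test_counts, keeps)]
    return None
```

For example, with a balanced 50/50 test set and an operational distribution of (0.8, 0.2) at tolerance 0.05, the sketch removes 34 points from the over-represented bin and none from the other. A real instance would use an MILP solver rather than enumeration, since the number of bins and the coupling through the normalizing total grow quickly.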
Related papers
- Deep learning with missing data [3.829599191332801]
We propose Pattern Embedded Neural Networks (PENNs), which can be applied in conjunction with any existing imputation technique.
In addition to a neural network trained on the imputed data, PENNs pass the vectors of observation indicators through a second neural network to provide a compact representation.
The outputs are then combined in a third neural network to produce final predictions.
arXiv Detail & Related papers (2025-04-21T18:57:36Z) - A Dataset for Semantic Segmentation in the Presence of Unknowns [49.795683850385956]
Existing datasets allow evaluation of only knowns or unknowns - but not both.
We propose a novel anomaly segmentation dataset, ISSU, that features a diverse set of anomaly inputs from cluttered real-world environments.
The dataset is twice as large as existing anomaly segmentation datasets.
arXiv Detail & Related papers (2025-03-28T10:31:01Z) - Elliptic Loss Regularization [24.24785205800212]
We propose a technique for enforcing a level of smoothness in the mapping between the data input space and the loss value.
We specify the level of regularity by requiring that the loss of the network satisfies an elliptic operator over the data domain.
arXiv Detail & Related papers (2025-03-04T00:08:08Z) - Simulation-Free Training of Neural ODEs on Paired Data [20.36333430055869]
We employ the flow matching framework for simulation-free training of NODEs.
We show that applying flow matching directly between paired data can often lead to an ill-defined flow.
We propose a simple extension that applies flow matching in the embedding space of data pairs.
arXiv Detail & Related papers (2024-10-30T11:18:27Z) - Semi-Supervised Deep Sobolev Regression: Estimation and Variable Selection by ReQU Neural Network [3.4623717820849476]
We propose SDORE, a Semi-supervised Deep Sobolev Regressor, for the nonparametric estimation of the underlying regression function and its gradient.
Our study includes a thorough analysis of the convergence rates of SDORE in the $L^2$-norm, achieving minimax optimality.
arXiv Detail & Related papers (2024-01-09T13:10:30Z) - Function-Space Regularization in Neural Networks: A Probabilistic
Perspective [51.133793272222874]
We show that we can derive a well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection and highly-calibrated predictive uncertainty estimates.
arXiv Detail & Related papers (2023-12-28T17:50:56Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - ENN: A Neural Network with DCT Adaptive Activation Functions [2.2713084727838115]
We present Expressive Neural Network (ENN), a novel model in which the non-linear activation functions are modeled using the Discrete Cosine Transform (DCT).
This parametrization keeps the number of trainable parameters low, is appropriate for gradient-based schemes, and adapts to different learning tasks.
ENN outperforms state-of-the-art benchmarks, providing an accuracy gap above 40% in some scenarios.
arXiv Detail & Related papers (2023-07-02T21:46:30Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-02-28T13:17:56Z) - Robust and Adaptive Temporal-Difference Learning Using An Ensemble of
Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Guiding Neural Network Initialization via Marginal Likelihood
Maximization [0.9137554315375919]
We leverage the relationship between neural network and Gaussian process models having corresponding activation and covariance functions to infer the hyperparameter values.
Our experiment shows that marginal consistency provides recommendations that yield near-optimal prediction performance on the MNIST classification task.
arXiv Detail & Related papers (2020-12-17T21:46:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.