Covariance-Robust Dynamic Watermarking
- URL: http://arxiv.org/abs/2003.13908v1
- Date: Tue, 31 Mar 2020 01:55:58 GMT
- Title: Covariance-Robust Dynamic Watermarking
- Authors: Matt Olfat, Stephen Sloan, Pedro Hespanhol, Matt Porter, Ram
Vasudevan, and Anil Aswani
- Abstract summary: We develop a new dynamic watermarking method that is able to handle uncertainties in the covariance of measurement noise.
We show that our tests satisfy some notions of fairness.
- Score: 14.039712456943223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attack detection and mitigation strategies for cyberphysical systems (CPS)
are an active area of research, and researchers have developed a variety of
attack-detection tools such as dynamic watermarking. However, such methods
often make assumptions that are difficult to guarantee, such as exact knowledge
of the distribution of measurement noise. Here, we develop a new dynamic
watermarking method that we call covariance-robust dynamic watermarking, which
is able to handle uncertainties in the covariance of measurement noise.
Specifically, we consider two cases: in the first, the covariance is fixed but
unknown; in the second, it is slowly varying. For our tests, we
only require knowledge of a set within which the covariance lies. Furthermore,
we connect this problem to that of algorithmic fairness and the nascent field
of fair hypothesis testing, and we show that our tests satisfy some notions of
fairness. Finally, we exhibit the efficacy of our tests on empirical examples
chosen to reflect values observed in a standard simulation model of autonomous
vehicles.
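To make the setting concrete, here is a minimal, hypothetical sketch of a dynamic watermarking check for a scalar linear system. It is not the paper's construction (the paper treats general LTI systems and derives formal covariance-robust tests); the scalar dynamics, the residual statistic, and the finite-sample slack term are all illustrative assumptions. The one idea it does carry over is testing against a *set* of possible measurement-noise covariances rather than a point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system: x_{t+1} = a*x_t + b*(u_t + e_t) + w_t,  y_t = x_t + v_t.
# The watermark e_t is a private random excitation known only to the controller.
a, b, T = 0.9, 1.0, 5000
sigma_w = 0.1                # process-noise std (assumed known here)
sigma_e = 0.5                # watermark std, chosen by the designer
sigma_v_set = (0.1, 0.4)     # measurement-noise std only known to lie in this set

sigma_v_true = 0.25          # the true value; hidden from the detector
x, es, ys = 0.0, [], []
for t in range(T):
    e = sigma_e * rng.standard_normal()
    y = x + sigma_v_true * rng.standard_normal()
    es.append(e)
    ys.append(y)
    x = a * x + b * e + sigma_w * rng.standard_normal()  # take u_t = 0 for simplicity

es, ys = np.array(es), np.array(ys)

# Residual that removes the dynamics and the (private) watermark:
# r_t = y_{t+1} - a*y_t - b*e_t = w_t + v_{t+1} - a*v_t under no attack.
# An attacker who spoofs y_t without knowing e_t breaks this relation.
resid = ys[1:] - a * ys[:-1] - b * es[:-1]
emp_var = resid.var()

# Covariance-robust acceptance band: instead of assuming sigma_v exactly,
# accept any residual variance explained by SOME sigma_v in the given set.
def resid_var(sig_v):
    # Var(w_t + v_{t+1} - a*v_t) for a candidate measurement-noise std
    return sigma_w**2 + (1.0 + a**2) * sig_v**2

lo, hi = resid_var(sigma_v_set[0]), resid_var(sigma_v_set[1])
slack = 4.0 * emp_var / np.sqrt(T)   # crude finite-sample slack, illustration only
attacked = not (lo - slack <= emp_var <= hi + slack)
print(f"residual variance {emp_var:.3f}, accept band [{lo:.3f}, {hi:.3f}], "
      f"attack flagged: {attacked}")
```

The intuition behind the set-based band: an attack that alters the measurement statistics pushes the empirical residual variance outside the band for every covariance in the set, so the detector needs no exact noise model to raise a flag.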
Related papers
- Enhancing Anomaly Detection Generalization through Knowledge Exposure: The Dual Effects of Augmentation [9.740752855568202]
Anomaly detection involves identifying instances within a dataset that deviate from the norm and occur infrequently.
Current benchmarks tend to favor methods biased towards low diversity in normal data, which does not align with real-world scenarios.
We propose new testing protocols and a novel method called Knowledge Exposure (KE), which integrates external knowledge to comprehend concept dynamics.
arXiv Detail & Related papers (2024-06-15T12:37:36Z) - Uncertainty in Additive Feature Attribution methods [34.80932512496311]
We focus on the class of additive feature attribution explanation methods.
We study the relationship between a feature's attribution and its uncertainty and observe little correlation.
We coin the term "stable instances" for such instances and diagnose factors that make an instance stable.
arXiv Detail & Related papers (2023-11-29T08:40:46Z) - Towards stable real-world equation discovery with assessing
differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Bootstrapped Edge Count Tests for Nonparametric Two-Sample Inference
Under Heterogeneity [5.8010446129208155]
We develop a new nonparametric testing procedure that accurately detects differences between the two samples.
A comprehensive simulation study and an application to detecting user behaviors in online games demonstrate the excellent non-asymptotic performance of the proposed test.
arXiv Detail & Related papers (2023-04-26T22:25:44Z) - Doubly Stochastic Models: Learning with Unbiased Label Noises and
Inference Stability [85.1044381834036]
We investigate the implicit regularization effects of label noises under mini-batch sampling settings of gradient descent.
We find that this implicit regularizer favors convergence points that can stabilize model outputs against perturbations of the parameters.
Our work does not assume SGD to be an Ornstein-Uhlenbeck-like process and achieves a more general result, with convergence of the approximation proved.
arXiv Detail & Related papers (2023-04-01T14:09:07Z) - Centrality and Consistency: Two-Stage Clean Samples Identification for
Learning with Instance-Dependent Noisy Labels [87.48541631675889]
We propose a two-stage clean samples identification method.
First, we employ a class-level feature clustering procedure for the early identification of clean samples.
Second, for the remaining clean samples that are close to the ground truth class boundary, we propose a novel consistency-based classification method.
arXiv Detail & Related papers (2022-07-29T04:54:57Z) - Holistic Approach to Measure Sample-level Adversarial Vulnerability and
its Utility in Building Trustworthy Systems [17.707594255626216]
An adversarial attack perturbs an image with imperceptible noise, leading to an incorrect model prediction.
We propose a holistic approach for quantifying adversarial vulnerability of a sample by combining different perspectives.
We demonstrate that by reliably estimating adversarial vulnerability at the sample level, it is possible to develop a trustworthy system.
arXiv Detail & Related papers (2022-05-05T12:36:17Z) - A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z) - A Kernel Two-sample Test for Dynamical Systems [7.198860143325813]
Evaluating whether data streams are drawn from the same distribution is at the heart of various machine learning problems.
This is particularly relevant for data generated by dynamical systems, since such systems are essential to many real-world processes in biomedical, economic, or engineering settings.
We propose a two-sample test for dynamical systems by addressing three core challenges: we (i) introduce a novel notion of mixing that captures autocorrelations in a relevant metric, (ii) propose an efficient way to estimate the speed of mixing relying purely on data, and (iii) integrate these into established kernel two-sample tests (a minimal sketch of the baseline kernel test appears after this list).
arXiv Detail & Related papers (2020-04-23T11:57:26Z) - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial
Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.