Manipulating hidden-Markov-model inferences by corrupting batch data
- URL: http://arxiv.org/abs/2402.13287v1
- Date: Mon, 19 Feb 2024 12:22:22 GMT
- Title: Manipulating hidden-Markov-model inferences by corrupting batch data
- Authors: William N. Caballero, Jose Manuel Camacho, Tahir Ekin, Roi Naveiro
- Abstract summary: A self-interested adversary may have incentive to corrupt time-series data, thereby altering a decision maker's inference.
This research provides a novel, probabilistic perspective toward the manipulation of hidden Markov model inferences via corrupted data.
- Score: 0.4915744683251149
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time-series models typically assume untainted and legitimate streams of data.
However, a self-interested adversary may have incentive to corrupt this data,
thereby altering a decision maker's inference. Within the broader field of
adversarial machine learning, this research provides a novel, probabilistic
perspective toward the manipulation of hidden Markov model inferences via
corrupted data. In particular, we provision a suite of corruption problems for
filtering, smoothing, and decoding inferences leveraging an adversarial risk
analysis approach. Multiple stochastic programming models are set forth that
incorporate realistic uncertainties and varied attacker objectives. Three
general solution methods are developed by alternatively viewing the problem
from frequentist and Bayesian perspectives. The efficacy of each method is
illustrated via extensive, empirical testing. The developed methods are
characterized by their solution quality and computational effort, resulting in
a stratification of techniques across varying problem-instance architectures.
This research highlights the weaknesses of hidden Markov models under
adversarial activity, thereby motivating the need for robustification
techniques to ensure their security.
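The core vulnerability the abstract describes can be illustrated with a toy example: in a hidden Markov model, the filtered posterior over the hidden state is computed from the observation sequence, so an adversary who flips even a single observation shifts that posterior. The sketch below is purely illustrative (a hand-made 2-state HMM with made-up parameters, not the paper's models or solution methods) and uses the standard forward algorithm:

```python
import numpy as np

# Toy 2-state HMM. All parameter values are illustrative, not from the paper.
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])   # A[i, j] = P(x_{t+1} = j | x_t = i)
B  = np.array([[0.8, 0.2],
               [0.3, 0.7]])   # B[i, k] = P(y_t = k | x_t = i)
pi = np.array([0.5, 0.5])     # initial state distribution

def filtered_posterior(obs):
    """Forward algorithm: return P(x_T | y_1..y_T) at the final time step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]
        alpha /= alpha.sum()  # normalize to keep a proper distribution
    return alpha

clean     = [0, 0, 0, 1, 0]
corrupted = [0, 0, 0, 1, 1]   # adversary flips only the final observation

p_clean   = filtered_posterior(clean)
p_corrupt = filtered_posterior(corrupted)
print("clean filtered posterior :", p_clean)
print("corrupt filtered posterior:", p_corrupt)
```

A single flipped observation moves substantial posterior mass between the two states, which is the kind of inference manipulation the paper formalizes (there, with attacker uncertainty and stochastic-programming objectives rather than a fixed flip).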
Related papers
- Fin-Fed-OD: Federated Outlier Detection on Financial Tabular Data [11.027356898413139]
Anomaly detection in real-world scenarios poses challenges due to dynamic and often unknown anomaly distributions.
This paper addresses the question of enhancing outlier detection within individual organizations without compromising data confidentiality.
We propose a novel method leveraging representation learning and federated learning techniques to improve the detection of unknown anomalies.
arXiv Detail & Related papers (2024-04-23T11:22:04Z)
- It's All in the Mix: Wasserstein Machine Learning with Mixed Features [5.739657897440173]
We present a practically efficient algorithm to solve mixed-feature problems.
We demonstrate that our approach can significantly outperform existing methods that are agnostic to the presence of discrete features.
arXiv Detail & Related papers (2023-12-19T15:15:52Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of applicability to problems, similar to the real ones, and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under the simple synthesis strategies, it outperforms existing methods by a large margin and achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Elucidating Noisy Data via Uncertainty-Aware Robust Learning [9.711326718689495]
Our proposed method can learn the clean target distribution from a dirty dataset.
We leverage a mixture-of-experts model that can distinguish two different types of predictive uncertainty.
We present a novel validation scheme for evaluating the performance of the corruption pattern estimation.
arXiv Detail & Related papers (2021-11-02T14:44:50Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.