Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
- URL: http://arxiv.org/abs/2410.00878v1
- Date: Tue, 1 Oct 2024 17:14:05 GMT
- Title: Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective
- Authors: Yixin Liu, Arielle Carr, Lichao Sun
- Abstract summary: We investigate how errors in the input data will affect the fitting error and accuracy of the solution from a linear system-solving algorithm under perturbations common in adversarial attacks.
We propose data perturbation through two distinct knowledge levels, developing a poisoning optimization and studying two methods of perturbation: Label-guided Perturbation (LP) and Unconditioning Perturbation (UP).
Under the circumstance that the data is intentionally perturbed -- as is the case with data poisoning -- we seek to understand how different kinds of solvers react to these perturbations, identifying those algorithms most impacted by different types of adversarial attacks.
- Score: 16.569765598914152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The perturbation analysis of linear solvers applied to systems arising broadly in machine learning settings -- for instance, when using linear regression models -- establishes an important perspective when reframing these analyses through the lens of a data poisoning attack. By analyzing solvers' responses to such attacks, this work aims to contribute to the development of more robust linear solvers and provide insights into poisoning attacks on linear solvers. In particular, we investigate how the errors in the input data will affect the fitting error and accuracy of the solution from a linear system-solving algorithm under perturbations common in adversarial attacks. We propose data perturbation through two distinct knowledge levels, developing a poisoning optimization and studying two methods of perturbation: Label-guided Perturbation (LP) and Unconditioning Perturbation (UP). Existing works mainly focus on deriving the worst-case perturbation bound from a theoretical perspective, and the analysis is often limited to specific kinds of linear system solvers. Under the circumstance that the data is intentionally perturbed -- as is the case with data poisoning -- we seek to understand how different kinds of solvers react to these perturbations, identifying those algorithms most impacted by different types of adversarial attacks.
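The abstract's core question can be illustrated with a minimal experiment: solve a least-squares system on clean data, then perturb either the right-hand side (label-style, loosely analogous to LP) or the data matrix itself (which also degrades conditioning, loosely analogous to UP), and compare the resulting solutions. This is only a sketch on synthetic Gaussian data with random perturbations, not the paper's poisoning optimization; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic overdetermined system A x ~ b (illustrative setup)
m, n = 100, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def solve_ls(A, b):
    """Solve the least-squares system; return (solution, residual norm)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x, np.linalg.norm(A @ x - b)

x_clean, r_clean = solve_ls(A, b)

eps = 0.1

# Label-style perturbation: corrupt only the right-hand side b
b_pert = b + eps * rng.standard_normal(m)
x_b, r_b = solve_ls(A, b_pert)

# Matrix-style perturbation: corrupt the data matrix A itself
A_pert = A + eps * rng.standard_normal((m, n))
x_A, r_A = solve_ls(A_pert, b)

# Compare how far each perturbed solution drifts from the clean one
print("solution shift (b perturbed):", np.linalg.norm(x_b - x_clean))
print("solution shift (A perturbed):", np.linalg.norm(x_A - x_clean))
print("residuals (clean, b-pert, A-pert):", r_clean, r_b, r_A)
```

Repeating this with different solvers (e.g. a QR-based solve versus an iterative method) and structured rather than random perturbations is the kind of comparison the paper pursues empirically.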
Related papers
- Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z) - Machine Learning for Pre/Post Flight UAV Rotor Defect Detection Using Vibration Analysis [54.550658461477106]
Unmanned Aerial Vehicles (UAVs) will be critical infrastructural components of future smart cities.
In order to operate efficiently, UAV reliability must be ensured by constant monitoring for faults and failures.
This paper leverages signal processing and Machine Learning methods to analyze the data of a comprehensive vibrational analysis to determine the presence of rotor blade defects.
arXiv Detail & Related papers (2024-04-24T13:50:27Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Explanation-Guided Diagnosis of Machine Learning Evasion Attacks [3.822543555265593]
We introduce a novel framework that harnesses explainable ML methods to guide high-fidelity assessment of ML evasion attacks.
Our framework enables explanation-guided correlation analysis between pre-evasion perturbations and post-evasion explanations.
arXiv Detail & Related papers (2021-06-30T05:47:12Z) - Data-Driven Fault Diagnosis Analysis and Open-Set Classification of Time-Series Data [1.0152838128195467]
A framework for data-driven analysis and open-set classification is developed for fault diagnosis applications.
A data-driven fault classification algorithm is proposed which can handle imbalanced datasets, class overlapping, and unknown faults.
An algorithm is proposed to estimate the size of the fault when training data contains information from known fault realizations.
arXiv Detail & Related papers (2020-09-10T09:53:13Z) - Unique properties of adversarially trained linear classifiers on Gaussian data [13.37805637358556]
The adversarial learning research community has made remarkable progress in understanding the root causes of adversarial perturbations.
It is common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to real-world datasets.
In particular, we show that with a linear classifier, it is always possible to solve a binary classification problem on Gaussian data under arbitrary levels of adversarial corruption.
arXiv Detail & Related papers (2020-06-06T14:06:38Z) - Sparse Methods for Automatic Relevance Determination [0.0]
We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models.
We then discuss two classes of methods, regularization based and thresholding based, which build on ARD to learn parsimonious solutions to linear problems.
arXiv Detail & Related papers (2020-05-18T14:08:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.