Local Intrinsic Dimensionality Signals Adversarial Perturbations
- URL: http://arxiv.org/abs/2109.11803v1
- Date: Fri, 24 Sep 2021 08:29:50 GMT
- Title: Local Intrinsic Dimensionality Signals Adversarial Perturbations
- Authors: Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher
Leckie, Benjamin I. P. Rubinstein
- Abstract summary: Local intrinsic dimensionality (LID) is a local metric that describes the minimum number of latent variables required to describe each data point.
In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that the bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.
- Score: 28.328973408891834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of machine learning models to adversarial perturbations has
motivated a significant amount of research under the broad umbrella of
adversarial machine learning. Sophisticated attacks may cause learning
algorithms to learn decision functions or make decisions with poor predictive
performance. In this context, there is a growing body of literature that uses
local intrinsic dimensionality (LID), a local metric that describes the minimum
number of latent variables required to describe each data point, for detecting
adversarial samples and subsequently mitigating their effects. The research to
date has tended to focus on using LID as a practical defence method, often
without fully explaining why LID can detect adversarial samples. In this paper,
we derive a lower bound and an upper bound for the LID value of a perturbed
data point and demonstrate that the bounds, in particular the lower bound, have
a positive correlation with the magnitude of the perturbation. Hence, we show
that data points perturbed by a large amount have large LID values compared to
unperturbed samples, justifying the use of LID in the prior literature.
Furthermore, our empirical validation confirms the validity of the bounds on
benchmark datasets.
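To make the LID values discussed above concrete, the sketch below uses the maximum-likelihood LID estimator common in the LID-based detection literature, LID(x) ≈ -( (1/k) · Σ log(r_i / r_k) )^{-1}, where r_i is the distance from x to its i-th nearest neighbour. This is not code released with the paper; the synthetic 5-dimensional manifold, the neighbourhood size k = 20, and the perturbation scale are illustrative assumptions.

```python
# Minimal sketch (not the paper's released code) of the maximum-likelihood
# LID estimator used in the LID-based detection literature:
#   LID(x) ~= -( (1/k) * sum_i log(r_i / r_k) )^(-1)
# where r_i is the distance from x to its i-th nearest neighbour.
# The synthetic manifold, k = 20, and the perturbation scale are assumptions.
import numpy as np

def lid_mle(x, reference, k=20):
    """Maximum-likelihood LID estimate of x from its k nearest neighbours."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    dists = dists[dists > 0][:k]        # drop any zero distance to x itself
    return -1.0 / np.mean(np.log(dists / dists[-1]))

rng = np.random.default_rng(0)
# Data lying on a 5-dimensional linear manifold embedded in 50 dimensions.
latent = rng.normal(size=(2000, 5))
data = latent @ rng.normal(size=(5, 50))

clean = data[0]
perturbed = clean + 0.5 * rng.normal(size=50)   # off-manifold perturbation

print("LID(clean)     ~", round(lid_mle(clean, data[1:]), 2))
print("LID(perturbed) ~", round(lid_mle(perturbed, data[1:]), 2))
# The perturbed point typically receives a noticeably larger LID estimate,
# in line with the positive correlation between the lower bound and the
# perturbation magnitude derived in the paper.
```

The estimate is sensitive to the neighbourhood size k and to the density of the reference sample; the values printed here only illustrate the qualitative effect that perturbed points tend to receive larger LID estimates than unperturbed ones.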
Related papers
- Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks [16.064233621959538]
We propose a query-efficient and computation-efficient MIA that directly re-leverages the original membership scores to mitigate the errors in difficulty calibration.
arXiv Detail & Related papers (2024-08-31T11:59:42Z) - Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z) - A New Benchmark and Reverse Validation Method for Passage-level
Hallucination Detection [63.56136319976554]
Large Language Models (LLMs) generate hallucinations, which can cause significant damage when deployed for mission-critical tasks.
We propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion.
We empirically evaluate our method and existing zero-resource detection methods on two datasets.
arXiv Detail & Related papers (2023-10-10T10:14:59Z) - On the Universal Adversarial Perturbations for Efficient Data-free
Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which exploits the differing responses of normal and adversarial samples to universal adversarial perturbations (UAPs).
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - Learning Prompt-Enhanced Context Features for Weakly-Supervised Video
Anomaly Detection [37.99031842449251]
Video anomaly detection under weak supervision presents significant challenges.
We present a weakly supervised anomaly detection framework that focuses on efficient context modeling and enhanced semantic discriminability.
Our approach significantly improves the detection accuracy of certain anomaly sub-classes, underscoring its practical value and efficacy.
arXiv Detail & Related papers (2023-06-26T06:45:16Z) - Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incur a high loss.
Our approach achieves superior performance compared with state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z) - Intrinsic Dimensionality Estimation within Tight Localities: A
Theoretical and Experimental Analysis [0.0]
We propose a local ID estimation strategy that is stable even for 'tight' localities consisting of as few as 20 sample points.
Our experimental results show that our proposed estimation technique can achieve notably smaller variance, while maintaining comparable levels of bias, at much smaller sample sizes than state-of-the-art estimators.
arXiv Detail & Related papers (2022-09-29T00:00:11Z) - Representation Learning with Information Theory for COVID-19 Detection [18.98329701403629]
We show how to aid deep models in discovering useful priors from data to learn their intrinsic properties.
Our model, which we call a dual role network (DRN), uses a dependency approach based on least-squares mutual information (LSMI).
Experiments on CT based COVID-19 Detection and COVID-19 Severity Detection benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2022-07-04T14:25:12Z) - Provably Efficient Causal Reinforcement Learning with Confounded
Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.