Theoretical Foundations of Adversarially Robust Learning
- URL: http://arxiv.org/abs/2306.07723v1
- Date: Tue, 13 Jun 2023 12:20:55 GMT
- Title: Theoretical Foundations of Adversarially Robust Learning
- Authors: Omar Montasser
- Abstract summary: Current machine learning systems have been shown to be brittle against adversarial examples.
In this thesis, we explore what robustness properties we can hope to guarantee against adversarial examples.
- Score: 7.589246500826111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite extraordinary progress, current machine learning systems have been
shown to be brittle against adversarial examples: seemingly innocuous but
carefully crafted perturbations of test examples that cause machine learning
predictors to misclassify. Can we learn predictors robust to adversarial
examples? And how? There has been much empirical interest in this contemporary
challenge in machine learning, and in this thesis, we address it from a
theoretical perspective.
In this thesis, we explore what robustness properties we can hope to
guarantee against adversarial examples and develop an understanding of how to
algorithmically guarantee them. We illustrate the need to go beyond traditional
approaches and principles such as empirical risk minimization and uniform
convergence, and make contributions that can be categorized as follows: (1)
introducing problem formulations capturing aspects of emerging practical
challenges in robust learning, (2) designing new learning algorithms with
provable robustness guarantees, and (3) characterizing the complexity of robust
learning and fundamental limitations on the performance of any algorithm.
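The brittleness the abstract describes can be seen already in a linear classifier, where the worst-case l-infinity perturbation has a closed form (move each coordinate against the weight vector's sign). A minimal sketch; the classifier, input, and budget below are illustrative, not taken from the thesis:

```python
import numpy as np

# Hypothetical linear classifier: sign(w . x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else -1

x = np.array([0.3, -0.2, 0.4])   # clean input, classified +1
eps = 0.3                        # l_inf perturbation budget

# FGSM-style worst-case perturbation for a linear model: shift each
# coordinate by eps in the direction that shrinks the margin.
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean prediction: +1
print(predict(x_adv))  # prediction under the crafted perturbation: -1
```

A perturbation of at most 0.3 per coordinate, visually negligible, flips the prediction: this is the phenomenon robust learning must guard against.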
Related papers
- A Tutorial on Doubly Robust Learning for Causal Inference [0.0]
Doubly robust learning offers a robust framework for causal inference from observational data.
This tutorial aims to demystify doubly robust methods and demonstrate their application using the EconML package.
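At the core of doubly robust methods is the augmented inverse-propensity-weighted (AIPW) estimator, which combines an outcome model with a propensity-weighted correction. A minimal numpy sketch on noise-free synthetic data where the nuisance models are exact; all names and values are illustrative, and this is not the EconML API:

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])   # single covariate
T = np.array([0, 1, 0, 1])           # treatment indicator
tau = 2.0                            # true constant treatment effect

mu0 = lambda x: 1.0 + 0.5 * x        # outcome model, control arm
mu1 = lambda x: mu0(x) + tau         # outcome model, treated arm
e   = lambda x: np.full_like(x, 0.5) # propensity score (randomized trial)

Y = np.where(T == 1, mu1(X), mu0(X)) # observed outcomes (noise-free)

# AIPW estimate of the average treatment effect:
#   mean[ mu1 - mu0 + T*(Y - mu1)/e - (1-T)*(Y - mu0)/(1-e) ]
ate = np.mean(
    mu1(X) - mu0(X)
    + T * (Y - mu1(X)) / e(X)
    - (1 - T) * (Y - mu0(X)) / (1 - e(X))
)
print(ate)  # recovers tau = 2.0 exactly in this noise-free setup
```

The estimator is "doubly robust" because it remains consistent if either the outcome models or the propensity model is correctly specified, not necessarily both.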
arXiv Detail & Related papers (2024-06-02T20:18:40Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
- Statistical Foundation Behind Machine Learning and Its Impact on Computer Vision [8.974457198386414]
This paper revisits the principle of uniform convergence in statistical learning, discusses how it acts as the foundation behind machine learning, and attempts to gain a better understanding of the essential problem that current deep learning algorithms are solving.
Using computer vision as an example domain in machine learning, the discussion shows that recent research trends in leveraging increasingly large-scale data to perform pre-training for representation learning are largely to reduce the discrepancy between a practically tractable empirical loss and its ultimately desired but intractable expected loss.
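The gap this discussion centers on, between a tractable empirical loss and the intractable expected loss, can be simulated directly: for a fixed predictor, the empirical 0-1 loss on n i.i.d. samples concentrates around the expected loss as n grows. A small sketch with an illustrative error rate:

```python
import numpy as np

p_error = 0.3  # true expected 0-1 loss of some fixed predictor (illustrative)

def empirical_loss_gap(n, seed=0):
    # Simulate per-sample 0-1 losses and measure |empirical - expected|.
    rng = np.random.default_rng(seed)
    losses = rng.random(n) < p_error
    return abs(losses.mean() - p_error)

for n in (10, 100_000):
    print(n, empirical_loss_gap(n))  # the gap concentrates as n grows
```

Uniform convergence strengthens this pointwise statement to hold simultaneously over a whole hypothesis class, which is exactly where large-scale pre-training is argued to help.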
arXiv Detail & Related papers (2022-09-06T17:59:04Z)
- Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The brittleness of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z)
- Quantifying Epistemic Uncertainty in Deep Learning [15.494774321257939]
Uncertainty quantification is at the core of the reliability and robustness of machine learning.
We provide a theoretical framework to dissect the uncertainty, especially the epistemic component, in deep learning.
We propose two approaches to estimate these uncertainties, one based on influence functions and one on variability.
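A variability-style estimate of epistemic uncertainty can be sketched generically with a bootstrap ensemble: refit a model on resampled data and measure the spread of its predictions, which grows away from the training range. This is an illustrative sketch, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x with noise, observed only on x in [0, 1].
x = rng.uniform(0.0, 1.0, size=50)
y = 3.0 * x + rng.normal(0.0, 0.1, size=50)

# Variability proxy: fit a no-intercept least-squares slope on each
# bootstrap resample and keep the ensemble of slopes.
slopes = []
for _ in range(200):
    idx = rng.integers(0, len(x), size=len(x))
    xb, yb = x[idx], y[idx]
    slopes.append((xb @ yb) / (xb @ xb))
slopes = np.array(slopes)

def epistemic_std(x_query):
    # Spread of ensemble predictions at a query point; for this linear
    # model it grows with |x|, i.e., far from the training range.
    return np.std(slopes * x_query)

print(epistemic_std(0.5))  # inside the training range
print(epistemic_std(5.0))  # far outside: larger spread
```

The spread of the ensemble reflects uncertainty about the model itself (epistemic), as opposed to the irreducible noise in the labels (aleatoric).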
arXiv Detail & Related papers (2021-10-23T03:21:10Z)
- When and How to Fool Explainable Models (and Humans) with Adversarial Examples [1.439518478021091]
We explore the possibilities and limits of adversarial attacks for explainable machine learning models.
First, we extend the notion of adversarial examples to fit in explainable machine learning scenarios.
Next, we propose a comprehensive framework to study whether adversarial examples can be generated for explainable models.
arXiv Detail & Related papers (2021-07-05T11:20:55Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Of Moments and Matching: Trade-offs and Treatments in Imitation Learning [26.121994149869767]
We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching.
By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance.
We derive two novel algorithm templates, AdVIL and AdRIL, with strong guarantees, simple implementation, and competitive empirical performance.
arXiv Detail & Related papers (2021-03-04T18:57:11Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints on a learner does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
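Constrained learning problems of this kind are commonly attacked through Lagrangian duality, alternating primal descent with dual ascent. A toy one-dimensional sketch of this primal-dual scheme; it is illustrative, not the paper's algorithm:

```python
# Toy constrained problem solved by primal-dual gradient updates on the
# Lagrangian L(x, lam) = f(x) + lam * g(x).
f  = lambda x: (x - 2.0) ** 2   # objective: unconstrained optimum at x = 2
g  = lambda x: x - 1.0          # constraint: require g(x) <= 0, i.e. x <= 1
df = lambda x: 2.0 * (x - 2.0)  # gradient of f
dg = lambda x: 1.0              # gradient of g

x, lam, eta = 0.0, 0.0, 0.01
for _ in range(20000):
    x  -= eta * (df(x) + lam * dg(x))   # primal descent step
    lam = max(0.0, lam + eta * g(x))    # dual ascent step, lam >= 0

print(x, lam)  # converges near the constrained optimum x = 1, lam = 2
```

The dual variable `lam` settles at the shadow price of the constraint; in constrained learning the same mechanics trade off accuracy against fairness or robustness requirements.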
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.