An Overview and Prospective Outlook on Robust Training and Certification
of Machine Learning Models
- URL: http://arxiv.org/abs/2208.07464v1
- Date: Mon, 15 Aug 2022 23:09:54 GMT
- Authors: Brendon G. Anderson, Tanmay Gautam, Somayeh Sojoudi
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this discussion paper, we survey recent research surrounding robustness of
machine learning models. As learning algorithms become increasingly
popular in data-driven control systems, their robustness to data uncertainty
must be ensured in order to maintain reliable safety-critical operations. We
begin by reviewing common formalisms for such robustness, and then move on to
discuss popular and state-of-the-art techniques for training robust machine
learning models as well as methods for provably certifying such robustness.
From this unification of robust machine learning, we identify and discuss
pressing directions for future research in the area.
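To make the certification side of the abstract concrete, here is a minimal sketch (not drawn from the paper itself) of a classical closed-form robustness certificate: for a linear classifier f(x) = sign(w·x + b), no L2 perturbation smaller than |w·x + b| / ||w||2 can change the predicted label. The weights, bias, and input below are toy values chosen for illustration.

```python
import numpy as np

def certified_radius(w, b, x):
    """L2 radius within which the linear classifier's label is provably constant."""
    margin = abs(np.dot(w, x) + b)
    return margin / np.linalg.norm(w)

w = np.array([3.0, 4.0])   # ||w||_2 = 5
b = -1.0
x = np.array([2.0, 1.0])   # w @ x + b = 6 + 4 - 1 = 9

r = certified_radius(w, b, x)           # 9 / 5 = 1.8
# The worst-case perturbation of norm r points along -w/||w||_2;
# at the certified boundary the classification margin shrinks to zero:
x_adv = x - r * w / np.linalg.norm(w)
print(r)                                # 1.8
print(np.dot(w, x_adv) + b)             # ~0.0
```

For deep networks no such closed form exists, which is why the surveyed certification methods resort to relaxations such as interval bound propagation or semidefinite programming.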
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism.
We have found that integrating the forgetting mechanism significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Holistic Adversarial Robustness of Deep Learning Models [91.34155889052786]
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
arXiv Detail & Related papers (2022-02-15T05:30:27Z)
- A Review of Formal Methods applied to Machine Learning [0.6853165736531939]
We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems.
We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software.
We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations.
arXiv Detail & Related papers (2021-04-06T12:48:17Z)
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology [53.063411515511056]
We propose a process model for the development of machine learning applications.
The first phase combines business and data understanding as data availability oftentimes affects the feasibility of the project.
The sixth phase covers state-of-the-art approaches for the monitoring and maintenance of machine learning applications.
arXiv Detail & Related papers (2020-03-11T08:25:49Z)
- A Safety Framework for Critical Systems Utilising Deep Neural Networks [13.763070043077633]
This paper presents a principled novel safety argument framework for critical systems that utilise deep neural networks.
The approach supports various forms of prediction, e.g., the future reliability of passing some demands, or confidence in meeting a required reliability level.
It is supported by a Bayesian analysis using operational data and the recent verification and validation techniques for deep learning.
arXiv Detail & Related papers (2020-03-07T23:35:05Z)