SoK: Machine Learning Governance
- URL: http://arxiv.org/abs/2109.10870v1
- Date: Mon, 20 Sep 2021 17:56:22 GMT
- Title: SoK: Machine Learning Governance
- Authors: Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers,
Mohammad Yaghini, Nicolas Papernot
- Abstract summary: We develop the concept of ML governance to balance such benefits and risks.
We use identities to hold principals accountable for failures of ML systems.
We highlight the need for techniques that allow a model owner to manage the life cycle of their system.
- Score: 16.36671448193025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of machine learning (ML) in computer systems introduces not
only many benefits but also risks to society. In this paper, we develop the
concept of ML governance to balance such benefits and risks, with the aim of
achieving responsible applications of ML. Our approach first systematizes
research towards ascertaining ownership of data and models, thus fostering a
notion of identity specific to ML systems. Building on this foundation, we use
identities to hold principals accountable for failures of ML systems through
both attribution and auditing. To increase trust in ML systems, we then survey
techniques for developing assurance, i.e., confidence that the system meets its
security requirements and does not exhibit certain known failures. This leads
us to highlight the need for techniques that allow a model owner to manage the
life cycle of their system, e.g., to patch or retire their ML system. Put
altogether, our systematization of knowledge standardizes the interactions
between principals involved in the deployment of ML throughout its life cycle.
We highlight opportunities for future work, e.g., to formalize the resulting
game between ML principals.
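To make the ownership layer of this framework concrete, below is a minimal, hypothetical sketch (not code from the paper) of one ownership-verification approach studied in the watermarking literature: the owner keeps a secret trigger set, and a suspect model that reproduces the unusual trigger labels far above chance supports an ownership claim. All names, data, and thresholds are illustrative.
```python
# Illustrative sketch: trigger-set (watermark-style) ownership verification.
# Assumptions: the owner privately holds (triggers, secret_labels); the suspect
# model is any callable mapping an input to a class label.
import numpy as np

rng = np.random.default_rng(0)

def verify_ownership(model, trigger_inputs, trigger_labels, n_classes, threshold=0.9):
    """Return (claim_supported, accuracy) for a suspect model on the trigger set."""
    preds = np.array([model(x) for x in trigger_inputs])
    accuracy = float(np.mean(preds == trigger_labels))
    # Demand accuracy far above random chance before supporting the claim.
    return accuracy >= max(threshold, 5.0 / n_classes), accuracy

# Toy demo: a "watermarked" model that memorized the secret trigger set.
triggers = rng.normal(size=(20, 8))
secret_labels = rng.integers(0, 10, size=20)
lookup = {x.tobytes(): int(y) for x, y in zip(triggers, secret_labels)}
watermarked_model = lambda x: lookup.get(x.tobytes(), 0)

print(verify_ownership(watermarked_model, triggers, secret_labels, n_classes=10))
# -> (True, 1.0)
```
In a real deployment such a check would sit alongside the attribution and auditing mechanisms the paper surveys, since trigger-set accuracy alone is only one piece of evidence for an ownership claim.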
Related papers
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
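As an illustration of the unlearning mechanics, here is a minimal sketch of one well-known exact-unlearning strategy from the broader literature (sharded retraining in the spirit of SISA), not the method of this paper; the per-shard learner is a toy stand-in.
```python
# Illustrative sketch: exact unlearning via sharding. Only the shard that
# held the deleted example is retrained, so its influence is fully removed.
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(300, 4)), rng.integers(0, 3, size=300)
n_shards = 5
shards = [list(range(i, 300, n_shards)) for i in range(n_shards)]

def fit_centroids(idx):
    # Toy per-shard model: class centroids (stand-in for any learner).
    return {c: X[[i for i in idx if y[i] == c]].mean(axis=0) for c in range(3)}

models = [fit_centroids(idx) for idx in shards]

def unlearn(point_id):
    s = point_id % n_shards          # shard that contains this example
    shards[s].remove(point_id)
    models[s] = fit_centroids(shards[s])  # retrain only that shard

unlearn(42)  # example 42's influence is now gone from the ensemble
```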
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can severely disrupt the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs applied in safety-critical power systems.
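To make the threat concrete, below is a hedged toy sketch of such a distortion: an FGSM-style perturbation with a small budget that typically flips an invented linear detector's decision on a measurement window. The detector and signal are illustrative, not from the reviewed paper.
```python
# Illustrative sketch: small adversarial distortion on a power signal.
import numpy as np

rng = np.random.default_rng(2)
w, b = rng.normal(size=64), 0.0                  # toy detector: score = w.x + b
signal = np.sin(np.linspace(0, 4 * np.pi, 64))   # "normal" measurement window

score = w @ signal + b
# FGSM-style distortion: step against the current decision, budget epsilon.
epsilon = 0.3
distortion = -np.sign(score) * epsilon * np.sign(w)
adv_signal = signal + distortion

print(np.sign(score), np.sign(w @ adv_signal + b))  # decision typically flips
```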
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
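One common way to expose such uncertainty, sketched below under illustrative assumptions (a bootstrap ensemble standing in for the paper's system), is to measure disagreement across ensemble members and abstain when it is high.
```python
# Illustrative sketch: predictive uncertainty from a bootstrap ensemble.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

def fit_linear(Xb, yb):
    A = np.c_[Xb, np.ones(len(Xb))]
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef

ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    ensemble.append(fit_linear(X[idx], y[idx]))

def predict(x, abstain_std=0.05):
    preds = np.array([c[0] * x + c[1] for c in ensemble])
    mean, std = preds.mean(), preds.std()
    # Uncertainty proxy: disagreement across ensemble members.
    return (mean, std) if std < abstain_std else ("abstain", std)

print(predict(0.5))    # in-distribution: low spread
print(predict(10.0))   # far from training data: larger spread, abstains
```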
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Declarative Machine Learning Systems [7.5717114708721045]
Machine learning (ML) has moved from an academic endeavor to a pervasive technology adopted in almost every aspect of computing.
Recent successes in applying ML in the natural sciences have revealed that ML can be used to tackle some of the hardest real-world problems humanity faces today.
We believe the next wave of ML systems will allow a larger number of people, potentially without coding skills, to perform the same tasks.
arXiv Detail & Related papers (2021-07-16T23:57:57Z)
- Characterizing and Detecting Mismatch in Machine-Learning-Enabled Systems [1.4695979686066065]
Development and deployment of machine learning systems remain a challenge.
In this paper, we report our findings and their implications for improving end-to-end ML-enabled system development.
arXiv Detail & Related papers (2021-03-25T19:40:29Z)
- White Paper Machine Learning in Certified Systems [70.24215483154184]
Within the DEEL Project, the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT) set up the ML Certification 3 Workgroup (WG).
arXiv Detail & Related papers (2021-03-18T21:14:30Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey of state-of-the-art ML trustworthiness and its technologies from a security engineering perspective.
We then go beyond a survey by describing a metamodel we created that represents the body of knowledge in a standard, visualized way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
- Adversarial Machine Learning: Bayesian Perspectives [0.4915744683251149]
Adversarial Machine Learning (AML) is emerging as a major field aimed at protecting machine learning (ML) systems against security threats.
In certain scenarios there may be adversaries that actively manipulate input data to fool learning systems.
This creates a new class of security vulnerabilities that ML systems may face, and a new desirable property, adversarial robustness, that is essential for trustworthy operation.
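A minimal sketch of one defense in this Bayesian spirit (illustrative, closely related to randomized smoothing rather than the paper's exact formulation): instead of reacting to a single worst-case attack, average classifier scores over perturbations drawn from a prior on the adversary's distortion.
```python
# Illustrative sketch: Monte Carlo averaging of scores under a Gaussian
# prior on perturbations; the classifier is a toy linear model.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(3, 16))              # toy linear classifier, 3 classes

def smoothed_predict(x, sigma=0.25, n_samples=256):
    noise = rng.normal(scale=sigma, size=(n_samples, x.size))
    scores = (x + noise) @ W.T            # (n_samples, 3) scores
    return int(np.argmax(scores.mean(axis=0)))  # decide on averaged scores

x = rng.normal(size=16)
print(smoothed_predict(x))
```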
arXiv Detail & Related papers (2020-03-07T10:30:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.