Green Lighting ML: Confidentiality, Integrity, and Availability of
Machine Learning Systems in Deployment
- URL: http://arxiv.org/abs/2007.04693v1
- Date: Thu, 9 Jul 2020 10:38:59 GMT
- Title: Green Lighting ML: Confidentiality, Integrity, and Availability of
Machine Learning Systems in Deployment
- Authors: Abhishek Gupta, Erick Galinkin
- Abstract summary: In production machine learning, there is generally a hand-off from those who build a model to those who deploy a model.
In this hand-off, the engineers responsible for model deployment are often not privy to the details of the model.
In order to help alleviate this issue, automated systems for validating privacy and security of models need to be developed.
- Score: 4.2317391919680425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security and ethics are both core to ensuring that a machine learning system
can be trusted. In production machine learning, there is generally a hand-off
from those who build a model to those who deploy a model. In this hand-off, the
engineers responsible for model deployment are often not privy to the details
of the model and thus to the potential vulnerabilities associated with its usage,
exposure, or compromise. Techniques such as model theft, model inversion, or
model misuse may not be considered in model deployment, and so it is incumbent
upon data scientists and machine learning engineers to understand these
potential risks so they can communicate them to the engineers deploying and
hosting their models. This is an open problem in the machine learning community
and in order to help alleviate this issue, automated systems for validating
privacy and security of models need to be developed, which will help to lower
the burden of implementing these hand-offs and increase the ubiquity of their
adoption.
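As an illustration of what one piece of such an automated validation system might look like, here is a minimal sketch, not taken from the paper: the function names, the 0.10 threshold, and the use of a train/held-out confidence gap as a rough proxy for membership-inference and model-inversion exposure are all illustrative assumptions, shown with a scikit-learn-style classifier.

```python
# Hypothetical sketch (not from the paper): a pre-deployment "green light" check
# that flags models whose confidence on training data far exceeds confidence on
# held-out data -- a crude proxy for membership-inference / inversion exposure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def confidence_gap(model, X_train, y_train, X_test, y_test):
    """Mean predicted probability of the true class: train minus held-out."""
    train_conf = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
    test_conf = model.predict_proba(X_test)[np.arange(len(y_test)), y_test]
    return float(train_conf.mean() - test_conf.mean())


def green_light(model, X_train, y_train, X_test, y_test, threshold=0.10):
    """Return True (deploy) only if the train/held-out confidence gap is small."""
    gap = confidence_gap(model, X_train, y_train, X_test, y_test)
    print(f"train/test confidence gap: {gap:.3f} (threshold: {threshold})")
    return gap < threshold


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("green light:", green_light(model, X_tr, y_tr, X_te, y_te))
```

A real validation gate would cover far more than this single signal (for example, rate limiting against model theft, output perturbation, and access controls), but even a crude check of this kind can be wired into a deployment pipeline so the hand-off does not rely solely on the model builder communicating every risk by hand.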
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Beimingwu: A Learnware Dock System [42.54363998206648]
This paper describes Beimingwu, the first open-source learnware dock system providing foundational support for future research on the learnware paradigm.
The system significantly streamlines the model development for new user tasks, thanks to its integrated architecture and engine design.
Notably, this is possible even for users with limited data and minimal expertise in machine learning, without compromising the raw data's security.
arXiv Detail & Related papers (2024-01-24T09:27:51Z)
- Balancing Transparency and Risk: The Security and Privacy Risks of
Open-Source Machine Learning Models [31.658006126446175]
We present a comprehensive overview of common privacy and security threats associated with the use of open-source models.
By raising awareness of these dangers, we strive to promote the responsible and secure use of AI systems.
arXiv Detail & Related papers (2023-08-18T11:59:15Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of
Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- Learnware: Small Models Do Big [69.88234743773113]
The prevailing big model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed those issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to spare users from building machine learning models from scratch, with the hope of reusing small models to do things even beyond their original purposes.
arXiv Detail & Related papers (2022-10-07T15:55:52Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Analyzing a Caching Model [7.378507865227209]
Interpretability remains a major obstacle for adoption in real-world deployments.
By analyzing a state-of-the-art caching model, we provide evidence that the model has learned concepts beyond simple statistics.
arXiv Detail & Related papers (2021-12-13T19:53:07Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber
Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks,
and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy of these threats.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- A Hierarchy of Limitations in Machine Learning [0.0]
This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society.
Modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them.
Consumers of machine learning models can know what to question when confronted with the decision of whether, where, and how to apply machine learning.
arXiv Detail & Related papers (2020-02-12T19:39:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.