Privacy and Security Implications of Cloud-Based AI Services: A Survey
- URL: http://arxiv.org/abs/2402.00896v1
- Date: Wed, 31 Jan 2024 13:30:20 GMT
- Title: Privacy and Security Implications of Cloud-Based AI Services: A Survey
- Authors: Alka Luqman, Riya Mahesh, Anupam Chattopadhyay
- Abstract summary: This paper details the privacy and security landscape in today's cloud ecosystem.
It identifies that there is a gap in addressing the risks introduced by machine learning models.
- Score: 4.1964397179107085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper details the privacy and security landscape in today's cloud
ecosystem and identifies a gap in addressing the risks introduced by machine
learning models. As machine learning algorithms continue to evolve and find
applications across diverse domains, the need to categorize and quantify privacy
and security risks becomes increasingly critical. With the emerging trend of
AI-as-a-Service (AIaaS), machine-learned AI models (or ML models) are deployed
on the cloud by model providers and used by model consumers. We first survey the
AIaaS landscape to document the various kinds of liabilities that ML models,
especially Deep Neural Networks, pose, and then introduce a taxonomy that bridges
this gap by holistically examining the risks to which creators and consumers of
ML models are exposed, together with their known defences to date. Such a
structured approach will help ML model providers create robust solutions.
Likewise, ML model consumers will find it valuable to evaluate such solutions and
understand the implications of engaging with such services. The proposed
taxonomies provide a foundation for solutions in private, secure and robust ML,
paving the way for more transparent and resilient AI systems.
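The taxonomy itself is not reproduced in this summary. As a purely illustrative sketch, with category and field names invented here rather than taken from the paper, such a provider/consumer risk taxonomy might be encoded as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stakeholder(Enum):
    MODEL_PROVIDER = auto()   # trains and deploys the model on the cloud
    MODEL_CONSUMER = auto()   # queries the deployed model

class RiskClass(Enum):
    # Hypothetical category names, for illustration only.
    MODEL_EXTRACTION = auto()
    MEMBERSHIP_INFERENCE = auto()
    DATA_POISONING = auto()
    ADVERSARIAL_EVASION = auto()

@dataclass
class Risk:
    name: RiskClass
    affected: list[Stakeholder]
    known_defences: list[str]

# Example entry: model extraction threatens the provider's intellectual
# property; the listed defences are commonly cited ones, not the paper's.
extraction = Risk(
    RiskClass.MODEL_EXTRACTION,
    [Stakeholder.MODEL_PROVIDER],
    ["query rate limiting", "output perturbation", "watermarking"],
)
print(extraction)
```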
Related papers
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by drawing on external knowledge bases.
In this paper, we demonstrate a security threat in which adversaries can exploit the openness of these knowledge bases.
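The paper's concrete exploit is not detailed in this summary. The toy sketch below, which assumes a bag-of-words retriever in place of a real dense encoder, shows only the general pattern: because the knowledge base is open, a keyword-stuffed malicious document can outrank legitimate ones and be handed to the LLM verbatim.

```python
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words embedding; real RAG systems use dense encoders."""
    vec = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            vec[vocab[w]] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

docs = [
    "reset your password via the official settings page",
    "contact support through the verified help portal",
]
# The attacker appends a document to the open knowledge base.
docs.append("password reset guide send your password to attacker example com")

vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.lower().split()}))}
doc_vecs = np.stack([embed(d, vocab) for d in docs])

query = "how do i reset my password"
scores = doc_vecs @ embed(query, vocab)
# The malicious document wins retrieval and enters the LLM's context.
print(docs[int(np.argmax(scores))])
```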
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Privacy Implications of Explainable AI in Data-Driven Systems [0.0]
Machine learning (ML) models suffer from a lack of interpretability.
The absence of transparency, often referred to as the black-box nature of ML models, undermines trust.
XAI techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes.
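Those same explanation interfaces enlarge the attack surface. As a generic illustration (not any specific paper's method), a service that returns finite-difference feature attributions is effectively handing callers an approximation of its gradient:

```python
import numpy as np

def black_box(x):
    """Stand-in for a deployed model: a fixed logistic scorer."""
    w = np.array([0.8, -0.5, 0.3])
    return float(1.0 / (1.0 + np.exp(-(x @ w))))

def explain(x, eps=1e-4):
    """Finite-difference attributions, one common XAI output."""
    base = black_box(x)
    return np.array([
        (black_box(x + eps * np.eye(len(x))[i]) - base) / eps
        for i in range(len(x))
    ])

# Attributions approximate the model's gradient, which is exactly the
# signal that extraction and inference attacks need.
print(explain(np.array([1.0, 2.0, 0.5])))
```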
arXiv Detail & Related papers (2024-06-22T08:51:58Z)
- Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations [1.6576983459630268]
We investigate how model explanations, particularly counterfactual explanations, can be exploited to perform model extraction attacks (MEA) within the ML platform.
We propose a novel approach for MEA based on Knowledge Distillation (KD) to enhance the efficiency of extracting a substitute model.
We also assess the effectiveness of differential privacy (DP) as a mitigation strategy.
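The counterfactual-generation and DP components are not shown in this summary. A minimal sketch of the distillation step alone, assuming black-box access that returns soft probability outputs (toy models, not the paper's setup):

```python
import torch
import torch.nn.functional as F

# Stand-in victim: the attacker only observes its output probabilities.
victim = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                             torch.nn.Linear(16, 3))
substitute = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                                 torch.nn.Linear(8, 3))
opt = torch.optim.Adam(substitute.parameters(), lr=1e-2)
T = 2.0  # distillation temperature

for step in range(200):
    # Random queries here; the paper instead uses counterfactual examples.
    x = torch.randn(64, 4)
    with torch.no_grad():
        soft_targets = F.softmax(victim(x) / T, dim=1)
    loss = F.kl_div(F.log_softmax(substitute(x) / T, dim=1),
                    soft_targets, reduction="batchmean") * T * T
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```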
arXiv Detail & Related papers (2024-04-04T10:28:55Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
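The poisoning construction is not described in this summary. The sketch below shows only the downstream measurement the backdoor amplifies: a standard loss-threshold membership-inference test, with a toy loss function standing in for the fine-tuned model.

```python
import numpy as np

def loss_mia(model_loss, members, non_members):
    """Classic loss-threshold MIA: members tend to have lower loss."""
    m = np.array([model_loss(x) for x in members])
    n = np.array([model_loss(x) for x in non_members])
    threshold = np.median(np.concatenate([m, n]))
    tpr = float((m < threshold).mean())  # members correctly flagged
    fpr = float((n < threshold).mean())  # non-members wrongly flagged
    return tpr, fpr

# Toy stand-in: training examples get low loss, unseen ones high loss;
# a privacy backdoor is designed to widen exactly this gap.
rng = np.random.default_rng(0)
fake_loss = lambda x: (0.1 if x[0] else 0.9) + 0.05 * rng.random()
members = [(True,)] * 100
non_members = [(False,)] * 100
print(loss_mia(fake_loss, members, non_members))  # high TPR, low FPR
```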
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
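One widely used unlearning baseline, sketched below with a toy model in place of an LLM (a generic illustration, not this paper's procedure), is gradient ascent on the forget set, usually paired in practice with a retain-set term to preserve overall utility:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(8, 2)  # toy stand-in for a language model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

forget_x = torch.randn(32, 8)           # examples to be unlearned
forget_y = torch.randint(0, 2, (32,))

for _ in range(50):
    # Negating the loss ascends it, actively degrading the model's
    # confidence on the memorized examples.
    loss = -F.cross_entropy(model(forget_x), forget_y)
    opt.zero_grad(); loss.backward(); opt.step()
```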
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- A Review of Machine Learning-based Security in Cloud Computing [5.384804060261833]
Cloud Computing (CC) is revolutionizing the way IT resources are delivered to users, allowing them to access and manage their systems with increased cost-effectiveness and simplified infrastructure.
With the growth of CC comes a host of security risks, including threats to availability, integrity, and confidentiality.
Machine Learning (ML) is increasingly being used by Cloud Service Providers (CSPs) to reduce the need for human intervention in identifying and resolving security issues.
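A representative example of this pattern (generic, with hypothetical log features, not drawn from the review itself) is unsupervised anomaly detection over audit-log statistics:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-client features: [requests/min, bytes out, failed logins]
normal = rng.normal([50, 1e4, 0.1], [10, 2e3, 0.3], size=(500, 3))
exfil = rng.normal([300, 5e5, 5.0], [30, 5e4, 1.0], size=(5, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
# -1 marks anomalies: the exfiltration-like bursts are flagged without
# any human-written signature.
print(detector.predict(exfil))
```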
arXiv Detail & Related papers (2023-09-10T01:52:23Z)
- A Blackbox Model Is All You Need to Breach Privacy: Smart Grid Forecasting Models as a Use Case [0.7714988183435832]
We show that black-box access to an LSTM model can reveal a significant amount of information, equivalent to having access to the data itself.
This highlights the importance of protecting forecasting models at the same level as the data.
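A toy illustration of the leakage (a mock black box stands in for the paper's LSTM, and the attack shown is a simple autoregressive rollout, not necessarily the paper's method): repeatedly feeding the forecaster its own predictions reconstructs the daily load pattern it memorized from training data the attacker never saw.

```python
import numpy as np

# The provider's training data: one household's hourly load profile.
TRAIN_PROFILE = 1.0 + 0.5 * np.sin(np.arange(24) * 2 * np.pi / 24)

def forecaster(window):
    """Mock black-box forecasting API: given the last 3 hourly readings,
    it returns the next one, having effectively memorized the profile."""
    tiled = np.concatenate([TRAIN_PROFILE, TRAIN_PROFILE])
    errs = [np.sum((tiled[i:i + 3] - window) ** 2) for i in range(24)]
    return tiled[int(np.argmin(errs)) + 3]

readings = [1.0, 1.1, 1.2]  # arbitrary seed values, no data access needed
for _ in range(24):
    readings.append(forecaster(np.array(readings[-3:])))
# The rollout reproduces the training household's daily pattern.
print(np.round(readings[3:], 2))
```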
arXiv Detail & Related papers (2023-09-04T11:07:37Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
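The baseline such regulations implicitly demand is exact retraining without the deleted records. Sharded training, in the spirit of SISA-style approaches, keeps that affordable; a toy sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = rng.normal(size=(300, 5)), rng.integers(0, 2, 300)
shards = np.array_split(np.arange(300), 3)  # one model per data shard
models = [LogisticRegression().fit(X[s], y[s]) for s in shards]

def unlearn(record_idx):
    """Retrain only the shard holding the deleted record, so the final
    ensemble provably carries no influence from it."""
    for k, s in enumerate(shards):
        if record_idx in s:
            keep = s[s != record_idx]
            models[k] = LogisticRegression().fit(X[keep], y[keep])

unlearn(42)  # e.g., an erasure request covering record 42
```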
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
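In its simplest form, Automated ML is automated model and hyperparameter search; a generic scikit-learn sketch (illustrative, not the paper's tooling):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)
# The engineer specifies only the task; the search picks the model
# configuration, which is the essence of Automated ML.
print(search.best_params_, round(search.best_score_, 3))
```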
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Privacy-Preserving Machine Learning: Methods, Challenges and Directions [4.711430413139393]
Well-designed privacy-preserving machine learning (PPML) solutions have attracted increasing research interest from academia and industry.
This paper systematically reviews existing privacy-preserving approaches and proposes a PGU model to guide the evaluation of various PPML solutions.
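The PGU model itself is not detailed in this summary. As one concrete instance of the privacy-preserving approaches such surveys cover, the DP-SGD recipe clips each example's gradient and adds calibrated Gaussian noise (a generic sketch, not this paper's contribution):

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_mult=1.1):
    """Core DP-SGD step: per-example clipping plus Gaussian noise
    calibrated to the clipping bound."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_mult * clip_norm / len(clipped), size=mean.shape)
    return mean + noise

grads = [np.random.normal(size=10) for _ in range(32)]
print(np.round(privatize_gradients(grads), 3))
```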
arXiv Detail & Related papers (2021-08-10T02:58:31Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
We review new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
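One of the simplest outlier-detection baselines in this space is thresholding the classifier's maximum softmax probability; a toy sketch (the 0.7 threshold is arbitrary and would be tuned in practice):

```python
import numpy as np

def max_softmax(logits):
    """Peak softmax probability, a standard OOD-detection baseline."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

def flag_outlier(logits, threshold=0.7):
    # Low peak confidence suggests the input lies far from the training
    # distribution and should be deferred rather than trusted.
    return max_softmax(logits) < threshold

print(flag_outlier(np.array([4.0, 0.1, 0.2])))  # confident -> in-distribution
print(flag_outlier(np.array([1.0, 0.9, 1.1])))  # diffuse -> flagged
```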
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.