Legal Risks of Adversarial Machine Learning Research
- URL: http://arxiv.org/abs/2006.16179v1
- Date: Mon, 29 Jun 2020 16:45:15 GMT
- Title: Legal Risks of Adversarial Machine Learning Research
- Authors: Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert
- Abstract summary: We show that studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA).
Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks may be sanctioned in some jurisdictions and not penalized in others.
We argue that the Supreme Court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used at Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the Court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
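The attack classes named in the abstract are concrete technical procedures, which is what brings them within the CFAA's reach. As a minimal sketch of one of them, membership inference, the snippet below applies a simple loss-threshold test to a stand-in classifier; the model, data, and threshold here are hypothetical illustrations, not the paper's method or any real target system.

```python
# Illustrative loss-threshold membership inference sketch.
# Hypothetical model and data; not the paper's method or a real target.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in "target" model trained on private data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def true_label_loss(model, X, y):
    # Negative log-probability the model assigns to the true label.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# Training points ("members") tend to receive lower loss than unseen
# points, so thresholding the loss yields a crude membership test.
loss_in = true_label_loss(target, X_in, y_in)
loss_out = true_label_loss(target, X_out, y_out)
threshold = np.median(np.concatenate([loss_in, loss_out]))
balanced_acc = ((loss_in < threshold).mean() + (loss_out >= threshold).mean()) / 2
print(f"membership inference balanced accuracy: {balanced_acc:.2f}")
```

Even a crude probe like this, pointed at someone else's production model, raises exactly the authorized-access questions the paper analyzes.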
Related papers
- Federated Learning Priorities Under the European Union Artificial Intelligence Act [68.44894319552114]
We perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning.
We explore data governance issues and the concern for privacy.
Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
arXiv Detail & Related papers (2024-02-05T19:52:19Z)
- A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions about LLMs' performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test them on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems [0.7829352305480285]
Recent research on adversarial evasion attacks raises many concerns in the cybersecurity field.
An increasing number of researchers are studying the feasibility of such attacks on security systems based on machine learning algorithms.
arXiv Detail & Related papers (2023-03-13T11:00:05Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that evades detection while still performing all of its intended malicious functionality (a rough sketch of this kind of evasion attack appears after this list).
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- The Role of Machine Learning in Cybersecurity [1.6932802756478726]
Deployment of Machine Learning in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice.
This paper is the first attempt to provide a holistic understanding of the role of ML in the entire cybersecurity domain.
We highlight the advantages of ML with respect to human-driven detection methods, as well as the additional tasks that can be addressed by ML in cybersecurity.
arXiv Detail & Related papers (2022-06-20T10:56:08Z)
- Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks [1.2664869982542892]
Adversarial machine learning (ML) attacks have the potential to be used "for good".
However, most research on adversarial ML has not engaged in developing tools for resistance against ML systems.
arXiv Detail & Related papers (2021-07-11T13:51:52Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- White Paper Machine Learning in Certified Systems [70.24215483154184]
The ML Certification 3 Workgroup (WG) was set up by the DEEL Project at the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT).
arXiv Detail & Related papers (2021-03-18T21:14:30Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
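Several of the entries above (the NIDS feasibility review and Adv-Bot) concern evasion attacks, in which an input is perturbed just enough for a detector to misclassify it. As a rough, hypothetical illustration of the idea, not the methods of those papers, here is an FGSM-style step against a linear detector; all names and data are synthetic stand-ins.

```python
# Hedged sketch of a feature-space evasion attack on a linear detector.
# Synthetic data and model; not the Adv-Bot method or any real NIDS.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
detector = LogisticRegression().fit(X, y)  # class 1 plays the role of "malicious"

def evade(x, detector, eps=0.5):
    # For a logistic model the gradient of the "malicious" score w.r.t.
    # the input is the weight vector, so stepping against its sign
    # lowers the score (an FGSM-style perturbation).
    w = detector.coef_[0]
    return x - eps * np.sign(w)

malicious = X[y == 1]
adversarial = np.array([evade(x, detector) for x in malicious])
print("detection rate before:", detector.predict(malicious).mean())
print("detection rate after: ", detector.predict(adversarial).mean())
```

In practice, as the Adv-Bot summary notes, the hard part is constraining such perturbations so the traffic remains valid and still performs its function, which feature-space math alone does not guarantee.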