Security and Privacy Considerations for Machine Learning Models Deployed
in the Government and Public Sector (white paper)
- URL: http://arxiv.org/abs/2010.05809v1
- Date: Mon, 12 Oct 2020 16:05:29 GMT
- Title: Security and Privacy Considerations for Machine Learning Models Deployed
in the Government and Public Sector (white paper)
- Authors: Nader Sehatbakhsh, Ellie Daw, Onur Savas, Amin Hassanzadeh, Ian
McCulloh
- Abstract summary: We describe how inevitable interactions between users of unknown trustworthiness and machine learning models deployed in government and public-sector settings can jeopardize the system.
We propose recommendations and guidelines that can enhance the security and privacy of the provided services.
- Score: 1.438446731705183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning becomes a more mainstream technology, the
objective for governments and the public sector is to harness its power to
advance their missions by revolutionizing public services. Motivating
government use cases require special implementation considerations given the
significance of the services they provide. Not only will these applications
be deployed in a potentially hostile environment that necessitates protective
mechanisms, but they are also subject to government transparency and
accountability initiatives, which further complicate such protections.
In this paper, we describe how inevitable interactions between users of
unknown trustworthiness and machine learning models deployed in government
and the public sector can jeopardize the system in two major ways: by
compromising its integrity or by violating privacy. We then briefly survey
possible attack and defense scenarios and, finally, propose recommendations
and guidelines that, once adopted, can enhance the security and privacy of
the provided services.
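The two failure modes above correspond to concrete attack families: evasion and poisoning target integrity, while membership inference and model inversion target privacy. As a hedged illustration of the privacy side (the paper describes the threat model but prescribes no code, so every name below is ours), a minimal loss-threshold membership-inference test against a scikit-learn-style classifier might look like:

```python
# Minimal sketch of a loss-threshold membership-inference test, one of the
# privacy attacks in the paper's threat model. Illustrative only; the paper
# itself prescribes no implementation.
import numpy as np

def membership_scores(model, examples, labels):
    """Per-example cross-entropy loss under the target model; unusually
    low loss is weak evidence the example was in the training set."""
    probs = model.predict_proba(examples)               # (n, n_classes)
    true_probs = probs[np.arange(len(labels)), labels]
    return -np.log(np.clip(true_probs, 1e-12, 1.0))

def infer_membership(model, examples, labels, threshold):
    # Guess "member" wherever the loss falls below the chosen threshold.
    return membership_scores(model, examples, labels) < threshold
```

A deployed service would face such probes through its ordinary query interface, which is why the paper treats every user of unknown trustworthiness as a potential adversary.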
Related papers
- Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI [0.0]
We propose Coordinated Disclosure of Dual-Use Capabilities (CDDC) as a process to guide early information-sharing between advanced AI developers, U.S. government agencies, and other private sector actors.
This aims to provide the U.S. government, dual-use foundation model developers, and other actors with an overview of AI capabilities that could significantly impact public safety and security, as well as maximal time to respond.
arXiv Detail & Related papers (2024-07-01T16:09:54Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent AI evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- ExTRUST: Reducing Exploit Stockpiles with a Privacy-Preserving Depletion System for Inter-State Relationships [4.349142920611964]
This paper proposes a privacy-preserving approach that allows multiple state parties to privately compare their stock of vulnerabilities and exploits.
We call our system ExTRUST and show that it is scalable and can withstand several attack scenarios.
arXiv Detail & Related papers (2023-06-01T12:02:17Z)
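ExTRUST's actual construction is not reproduced here; as rough intuition for the goal (two parties learn only the overlap of their stockpiles and nothing else), a naive salted-hash intersection sketch follows. This naive scheme is insecure for low-entropy identifiers and stands in only to illustrate the functionality, not the paper's protocol:

```python
# Naive set-intersection sketch conveying the *goal* of ExTRUST. NOT the
# paper's protocol: plain hashing of low-entropy identifiers (e.g., CVE IDs)
# is brute-forceable, so real systems use PSI protocols built on oblivious
# transfer or homomorphic encryption.
import hashlib

def blind(identifiers, salt: bytes):
    """Hash each identifier under a salt agreed out of band."""
    return {hashlib.sha256(salt + i.encode()).hexdigest() for i in identifiers}

shared_salt = b"agreed-via-key-exchange"        # illustrative only
state_a = blind({"CVE-2020-0001", "CVE-2020-0002"}, shared_salt)
state_b = blind({"CVE-2020-0002", "CVE-2020-0003"}, shared_salt)

overlap = state_a & state_b                     # only the intersection is learned
print(f"{len(overlap)} shared exploit(s)")
```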
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
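For context, "differentially private learning" in this debate usually means DP-SGD-style training. A minimal sketch of one DP-SGD step follows (hyperparameter values are illustrative assumptions, not recommendations); the position paper's point is that this guarantee covers only the fine-tuning data, not the Web-scraped pretraining corpus:

```python
# Sketch of the core DP-SGD step: clip per-example gradients, then add
# calibrated Gaussian noise before averaging. Hyperparameters illustrative.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    # per_example_grads: array of shape (n_examples, n_params)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)
    summed = clipped.sum(axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```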
- A Trust Framework for Government Use of Artificial Intelligence and Automated Decision Making [0.1527458325979785]
This paper identifies the challenges of the mechanisation, digitisation and automation of public sector systems and processes.
It proposes a modern and practical framework to ensure and assure ethical and high-veracity Artificial Intelligence (AI) or Automated Decision Making (ADM) systems in public institutions.
arXiv Detail & Related papers (2022-08-22T06:51:15Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
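To make the caution concrete, a minimal FedAvg aggregation sketch follows (interfaces are ours, not the paper's): clients share only parameters, yet those parameters can still leak training data, e.g., via gradient-inversion attacks, so parameter sharing alone is not a formal privacy guarantee:

```python
# Minimal FedAvg sketch: clients train locally and only share model
# parameters, never raw data. The shared updates themselves can still
# leak training data, which is the paper's point. Interfaces illustrative.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Size-weighted average of client parameters (one FedAvg round)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One round: each client fits locally; the server sees only parameters.
#   updated = [local_train(global_model, data_i) for data_i in client_data]
#   global_model = federated_average(updated, [len(d) for d in client_data])
```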
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
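One pitfall the paper catalogs is data snooping: fitting preprocessing on the full dataset before the train/test split leaks test statistics into training. A small sketch with illustrative toy data contrasts the wrong and right orderings:

```python
# Data-snooping pitfall: preprocessing fitted before the split leaks
# test-set statistics into training. Toy data below is illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # toy features
y = rng.integers(0, 2, size=200)                # toy binary labels

# WRONG (data snooping): the scaler sees test-set statistics.
#   X_scaled = StandardScaler().fit_transform(X)
#   X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y)

# RIGHT: split first, then fit preprocessing on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```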
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- An operational architecture for privacy-by-design in public service applications [0.26249027950824505]
We present an operational architecture for privacy-by-design based on independent regulatory oversight.
We briefly discuss the feasibility of implementing our architecture based on existing techniques.
arXiv Detail & Related papers (2020-06-08T14:57:29Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.