On the Standardization of Behavioral Use Clauses and Their Adoption for
Responsible Licensing of AI
- URL: http://arxiv.org/abs/2402.05979v1
- Date: Wed, 7 Feb 2024 22:29:42 GMT
- Title: On the Standardization of Behavioral Use Clauses and Their Adoption for
Responsible Licensing of AI
- Authors: Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny
Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek
Tarkowski, Joseph Lindley, A. Feder Cooper, Danish Contractor
- Abstract summary: In 2018, licenses with behavioral-use clauses were proposed to give developers a framework for releasing AI assets.
As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses.
- Score: 27.748532981456464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Growing concerns over negligent or malicious uses of AI have increased the
appetite for tools that help manage the risks of the technology. In 2018,
licenses with behavioral-use clauses (commonly referred to as Responsible AI
Licenses) were proposed to give developers a framework for releasing AI assets
while imposing conditions on their users to mitigate negative applications. As of the end
of 2023, on the order of 40,000 software and model repositories have adopted
responsible AI licenses. Notable models licensed with behavioral-use
clauses include BLOOM and LLaMA2 (language), Stable Diffusion
(image), and GRID (robotics). This paper explores why and how these licenses
have been adopted, and why and how they have been adapted to fit particular use
cases. We use a mixed-methods methodology of qualitative interviews, clustering
of license clauses, and quantitative analysis of license adoption. Based on
this evidence we take the position that responsible AI licenses need
standardization to avoid confusing users or diluting their impact. At the same
time, customization of behavioral restrictions is also appropriate in some
contexts (e.g., medical domains). We advocate for "standardized
customization" that can meet users' needs and can be supported via tooling.
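As a rough sketch of what such tooling could look like, the snippet below composes a license from a standardized clause catalog plus domain-specific custom restrictions. The clause catalog, identifiers, and rendering are hypothetical illustrations, not an existing tool; OpenRAIL-M is named only as an example base license.

```python
# Hypothetical sketch of "standardized customization" tooling: a license is
# assembled from a fixed catalog of standardized behavioral-use clauses plus
# a small set of domain-specific custom restrictions.
from dataclasses import dataclass, field

STANDARD_CLAUSES = {  # hypothetical standardized clause catalog
    "no-surveillance": "Licensee shall not use the Work for mass surveillance.",
    "no-disinformation": "Licensee shall not use the Work to generate or spread disinformation.",
}

@dataclass
class ResponsibleLicense:
    base: str                 # e.g. an OpenRAIL-style base license (example only)
    clause_ids: list          # drawn from the standard catalog
    custom_restrictions: list = field(default_factory=list)  # e.g. medical-domain terms

    def render(self) -> str:
        parts = [f"Base license: {self.base}", "Use restrictions:"]
        parts += [f"- {STANDARD_CLAUSES[cid]}" for cid in self.clause_ids]
        parts += [f"- {text} (custom)" for text in self.custom_restrictions]
        return "\n".join(parts)

# Example: a standardized core plus one medical-domain customization.
license_text = ResponsibleLicense(
    base="OpenRAIL-M",
    clause_ids=["no-surveillance", "no-disinformation"],
    custom_restrictions=[
        "Licensee shall not use the Work for clinical diagnosis without regulatory approval.",
    ],
).render()
print(license_text)
```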
Related papers
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- A Path Towards Legal Autonomy: An interoperable and explainable approach to extracting, transforming, loading and computing legal information using large language models, expert systems and Bayesian networks [2.2192488799070444]
Legal autonomy can be achieved either by imposing constraints on AI actors such as developers, deployers and users, or by imposing constraints on the range and scope of the impact that AI agents can have on the environment.
The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices.
This is a challenge, since the effectiveness of such an approach requires a method of extracting, loading, transforming, and computing legal information that is both explainable and legally interoperable.
arXiv Detail & Related papers (2024-03-27T13:12:57Z)
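To make the encoding idea concrete, here is a toy sketch of extant rules compiled into an agent's software as explainable checks. The drone rules, thresholds, and rule IDs are invented for illustration and are not drawn from the paper.

```python
# Toy sketch: legal rules encoded as software checks on an agent's actions,
# returning an explanation with every decision. Rules are hypothetical.
from dataclasses import dataclass

@dataclass
class DroneAction:
    altitude_m: float
    over_crowd: bool

def legal_check(action: DroneAction) -> tuple[bool, str]:
    """Return (allowed, explanation) so every decision is explainable."""
    if action.altitude_m > 120:  # hypothetical stand-in for an altitude-ceiling rule
        return False, "Rule A1: maximum permitted altitude (120 m) exceeded."
    if action.over_crowd:
        return False, "Rule A2: flight over assemblies of people is prohibited."
    return True, "All encoded rules satisfied."

allowed, why = legal_check(DroneAction(altitude_m=90, over_crowd=False))
print(allowed, why)  # True All encoded rules satisfied.
```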
- From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification [70.08146540745877]
We investigate common constraints in NLP tasks and categorize them into three classes based on the types of their arguments.
We propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
arXiv Detail & Related papers (2024-03-10T22:14:54Z)
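A minimal sketch of the general idea, assuming simple illustrative constraints (a numeric length argument and a string keyword argument) and a binary supervision label; this is not the paper's ACT implementation.

```python
# Verify a model response against explicit constraints and turn the result
# into a supervision signal. Constraints and labels here are illustrative.
import re

def verify_length_constraint(response: str, max_words: int) -> bool:
    """Constraint whose argument is numeric: response length."""
    return len(response.split()) <= max_words

def verify_keyword_constraint(response: str, keyword: str) -> bool:
    """Constraint whose argument is a string: a required keyword."""
    return re.search(re.escape(keyword), response, re.IGNORECASE) is not None

def supervision_signal(response: str) -> int:
    """1 if every constraint is satisfied, else 0, usable as a training label."""
    checks = [
        verify_length_constraint(response, max_words=50),
        verify_keyword_constraint(response, keyword="license"),
    ]
    return int(all(checks))

print(supervision_signal("Each model ships under a responsible AI license."))  # 1
```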
- ConstraintChecker: A Plugin for Large Language Models to Reason on Commonsense Knowledge Bases [53.29427395419317]
Reasoning over Commonsense Knowledge Bases (CSKB) has been explored as a way to acquire new commonsense knowledge.
We propose **ConstraintChecker**, a plugin over prompting techniques to provide and check explicit constraints.
arXiv Detail & Related papers (2024-01-25T08:03:38Z)
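The sketch below shows one way a plugin of this kind could wrap a prompting call with explicit output checks; the stub LLM call and the yes/no constraint are assumptions for illustration, not the published plugin.

```python
# Wrap any prompting function so answers violating explicit constraints are
# rejected before being accepted. The LLM is a stub for illustration.
from typing import Callable

def stub_llm(prompt: str) -> str:
    return "yes"  # stand-in for a real LLM API call

def with_constraint_checks(llm: Callable[[str], str],
                           checks: list) -> Callable[[str], str]:
    """Return a checked version of an LLM call."""
    def wrapped(prompt: str) -> str:
        answer = llm(prompt)
        for check in checks:
            if not check(answer):
                return "REJECTED: constraint violated"
        return answer
    return wrapped

# Explicit constraint: accept only yes/no answers when judging CSKB triples.
is_yes_no = lambda a: a.strip().lower() in {"yes", "no"}
checked_llm = with_constraint_checks(stub_llm, [is_yes_no])
print(checked_llm("Does (PersonX buys food, xWant, to eat) hold?"))  # yes
```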
- Catch the Butterfly: Peeking into the Terms and Conflicts among SPDX Licenses [16.948633594354412]
The use of third-party libraries (TPLs) in software development has accelerated the creation of modern software.
Developers may inadvertently violate the licenses of TPLs, leading to legal issues.
There is a need for a high-quality license dataset that encompasses a broad range of mainstream licenses.
arXiv Detail & Related papers (2024-01-19T11:27:34Z)
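As a toy illustration of license-conflict checking, the snippet below flags dependency licenses assumed incompatible with a project license. The SPDX identifiers are real, but the tiny pairwise conflict table is a simplified assumption, not legal advice.

```python
# Minimal license-conflict check over a project's dependencies.
INCOMPATIBLE = {  # simplified, illustrative conflict pairs
    ("MIT", "GPL-3.0-only"),         # copyleft dependency in a permissive project
    ("Apache-2.0", "GPL-2.0-only"),  # commonly cited Apache-2.0/GPLv2 conflict
}

def find_conflicts(project_license: str, dep_licenses: dict) -> list:
    """Return (dependency, license) pairs that conflict with the project license."""
    return [(dep, lic) for dep, lic in dep_licenses.items()
            if (project_license, lic) in INCOMPATIBLE]

deps = {"libfoo": "GPL-3.0-only", "libbar": "BSD-3-Clause"}
print(find_conflicts("MIT", deps))  # [('libfoo', 'GPL-3.0-only')]
```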
- Foundation Models and Fair Use [96.04664748698103]
In the U.S. and other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine.
In this work, we survey the potential risks of developing and deploying foundation models based on copyrighted content.
We discuss technical mitigations that can help foundation models stay in line with fair use.
arXiv Detail & Related papers (2023-03-28T03:58:40Z)
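One commonly discussed mitigation of this kind is filtering generations that overlap heavily with known copyrighted text; the n-gram overlap filter below is a generic sketch under that assumption, not a mitigation the paper specifically prescribes.

```python
# Reject generations whose word n-grams copy too much from any known source.
def ngram_set(text: str, n: int = 8) -> set:
    """All word n-grams in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, source: str, n: int = 8) -> float:
    """Fraction of the generation's n-grams that also occur in the source."""
    gen = ngram_set(generated, n)
    return len(gen & ngram_set(source, n)) / max(len(gen), 1)

def passes_filter(generated: str, corpus: list, threshold: float = 0.2) -> bool:
    """True when the generation stays below the overlap threshold everywhere."""
    return all(overlap_ratio(generated, doc) < threshold for doc in corpus)

corpus = ["the quick brown fox jumps over the lazy dog and then runs away"]
print(passes_filter("an unrelated sentence about responsible licensing", corpus))  # True
```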
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
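For intuition, here is a generic membership-inference sketch that aggregates model confidence over an ensemble of perturbed queries; the stub model, perturbations, and threshold are invented, and this does not reproduce the paper's optimized adversarial queries.

```python
# Score a candidate point by average model confidence across an ensemble of
# perturbed queries, then threshold to decide membership. Toy model only.
import random

def model_confidence(x: float) -> float:
    """Stand-in for querying the target model; members tend to score higher."""
    return max(0.0, min(1.0, 0.8 - abs(x) * 0.1 + random.gauss(0, 0.02)))

def membership_score(x: float, n_queries: int = 8, eps: float = 0.05) -> float:
    """Average confidence over an ensemble of perturbed queries around x."""
    return sum(model_confidence(x + random.uniform(-eps, eps))
               for _ in range(n_queries)) / n_queries

def is_member(x: float, threshold: float = 0.7) -> bool:
    """Flag x as a training-set member when the ensembled score is high."""
    return membership_score(x) > threshold

print(is_member(0.1))  # True for this toy model, whose confidence near 0 is high
```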
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Can I use this publicly available dataset to build commercial AI software? Most likely not [8.853674186565934]
We propose a new approach to assess the potential license compliance violations if a given publicly available dataset were to be used for building commercial AI software.
Our results show that there are risks of license violations on 5 of the 6 studied datasets if they were used for commercial purposes.
arXiv Detail & Related papers (2021-11-03T17:44:06Z)
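A toy sketch of the kind of check involved: a dataset aggregating sources under non-commercial licenses is risky for commercial use. The source names and license assignments below are hypothetical.

```python
# Flag dataset sources whose license prohibits commercial use.
NON_COMMERCIAL = {"CC-BY-NC-4.0", "CC-BY-NC-SA-4.0"}  # licenses barring commercial use

def commercial_use_risks(dataset_sources: dict) -> list:
    """Return the sources whose license prohibits commercial use."""
    return [src for src, lic in dataset_sources.items() if lic in NON_COMMERCIAL]

sources = {
    "images_part_a": "CC-BY-4.0",
    "images_part_b": "CC-BY-NC-4.0",  # a single NC source taints commercial use
}
print("Commercial use risky due to:", commercial_use_risks(sources))
# Commercial use risky due to: ['images_part_b']
```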
- Behavioral Use Licensing for Responsible AI [11.821476868900506]
We advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code.
We envision how licensing may be implemented in accordance with existing responsible AI guidelines.
arXiv Detail & Related papers (2020-11-04T09:23:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.