Behavioral Use Licensing for Responsible AI
- URL: http://arxiv.org/abs/2011.03116v2
- Date: Thu, 20 Oct 2022 22:39:24 GMT
- Title: Behavioral Use Licensing for Responsible AI
- Authors: Danish Contractor and Daniel McDuff and Julia Haines and Jenny Lee and
Christopher Hines and Brent Hecht and Nicholas Vincent and Hanlin Li
- Abstract summary: We advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code.
We envision how licensing may be implemented in accordance with existing responsible AI guidelines.
- Score: 11.821476868900506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing reliance on artificial intelligence (AI) for many different
applications, the sharing of code, data, and models is important to ensure the
replicability and democratization of scientific knowledge. Many high-profile
academic publishing venues expect code and models to be submitted and released
with papers. Furthermore, developers often want to release these assets to
encourage development of technology that leverages their frameworks and
services. A number of organizations have expressed concerns about the
inappropriate or irresponsible use of AI and have proposed ethical guidelines
around the application of such systems. While such guidelines can help set
norms and shape policy, they are not easily enforceable. In this paper, we
advocate the use of licensing to enable legally enforceable behavioral use
conditions on software and code and provide several case studies that
demonstrate the feasibility of behavioral use licensing. We envision how
licensing may be implemented in accordance with existing responsible AI
guidelines.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- A Path Towards Legal Autonomy: An interoperable and explainable approach to extracting, transforming, loading and computing legal information using large language models, expert systems and Bayesian networks [2.2192488799070444]
Legal autonomy can be achieved either by imposing constraints on AI actors such as developers, deployers and users, or by imposing constraints on the range and scope of the impact that AI agents can have on the environment.
The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices.
This is challenging, since the effectiveness of such an approach requires a method of extracting, transforming, loading, and computing legal information that is both explainable and legally interoperable.
arXiv Detail & Related papers (2024-03-27T13:12:57Z)
- On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI [27.748532981456464]
In 2018, licenses with behavioral-use clauses were proposed to give developers a framework for releasing AI assets.
As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses.
arXiv Detail & Related papers (2024-02-07T22:29:42Z)
- Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development [2.9620297386658185]
Despite large progress in Explainable and Safe AI, practitioners suffer from a lack of regulation and standards for AI safety.
We propose the use of standardized cards to document AI applications throughout the development process.
arXiv Detail & Related papers (2023-07-21T12:13:54Z)
- Foundation Models and Fair Use [96.04664748698103]
In the U.S. and other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine.
In this work, we survey the potential risks of developing and deploying foundation models based on copyrighted content.
We discuss technical mitigations that can help foundation models stay in line with fair use.
arXiv Detail & Related papers (2023-03-28T03:58:40Z)
- Lessons from Formally Verified Deployed Software Systems (Extended version) [65.69802414600832]
This article examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use.
It considers the technologies used, the form of verification applied, the results obtained, and the lessons that the software industry should draw regarding its ability to benefit from formal verification techniques and tools.
arXiv Detail & Related papers (2023-01-05T18:18:46Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Actionable Approaches to Promote Ethical AI in Libraries [7.1492901819376415]
The widespread use of artificial intelligence (AI) in many domains has revealed numerous ethical issues.
No practical guidance currently exists for libraries to plan for, evaluate, or audit the ethics of intended or deployed AI.
We report on several promising approaches for promoting ethical AI that can be adapted from other contexts.
arXiv Detail & Related papers (2021-09-20T16:38:49Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for putting AI ethics into practice.
The method, ECCOLA, has been developed iteratively using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.