Montreal AI Ethics Institute's (MAIEI) Submission to the World
Intellectual Property Organization (WIPO) Conversation on Intellectual
Property (IP) and Artificial Intelligence (AI) Second Session
- URL: http://arxiv.org/abs/2008.04520v1
- Date: Tue, 11 Aug 2020 05:31:10 GMT
- Title: Montreal AI Ethics Institute's (MAIEI) Submission to the World
Intellectual Property Organization (WIPO) Conversation on Intellectual
Property (IP) and Artificial Intelligence (AI) Second Session
- Authors: Allison Cohen (1) and Abhishek Gupta (1 and 2) ((1) Montreal AI Ethics
Institute and (2) Microsoft)
- Abstract summary: IP protections for AI "inventors" present a host of negative externalities and obscure the fact that the genuine inventor, deserving of IP, is the human agent.
This document will conclude by recommending strategies for bringing IP law into the 21st century.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This document posits that, at best, a tenuous case can be made for providing
AI exclusive IP over their "inventions". Furthermore, IP protections for AI are
unlikely to confer the benefit of ensuring regulatory compliance. Rather, IP
protections for AI "inventors" present a host of negative externalities and
obscure the fact that the genuine inventor, deserving of IP, is the human
agent. This document will conclude by recommending strategies for WIPO to bring
IP law into the 21st century, enabling it to productively account for AI
"inventions".
Theme: IP Protection for AI-Generated and AI-Assisted Works. Based on insights
from Montreal AI Ethics Institute (MAIEI) staff, supplemented by workshop
contributions from the AI Ethics community convened by MAIEI on July 5, 2020.
Related papers
- AI Royalties -- an IP Framework to Compensate Artists & IP Holders for AI-Generated Content [3.4410934027154996]
This article investigates how AI-generated content can disrupt central revenue streams of the creative industries.
It reviews the IP and copyright questions related to the input and output of generative AI systems.
arXiv Detail & Related papers (2024-04-05T15:35:08Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The State of AI Ethics Report (Volume 5) [0.0]
The report focuses on AI ethics, with special emphasis on "Environment and AI", "Creativity and AI", and "Geopolitics and AI".
It features special contributions on pedagogy in AI ethics, sociology and AI ethics, and the organizational challenges of implementing AI ethics in practice.
The report also has an extensive section covering the gamut of issues around the societal impacts of AI.
arXiv Detail & Related papers (2021-08-09T10:47:14Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Role of Social Movements, Coalitions, and Workers in Resisting
Harmful Artificial Intelligence and Contributing to the Development of
Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- The Sanction of Authority: Promoting Public Trust in AI [4.729969944853141]
We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
arXiv Detail & Related papers (2021-01-22T22:01:30Z)
- The State of AI Ethics Report (October 2020) [30.265104923077185]
The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020.
This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field.
The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments.
arXiv Detail & Related papers (2020-11-05T12:36:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.