Report prepared by the Montreal AI Ethics Institute (MAIEI) on
Publication Norms for Responsible AI
- URL: http://arxiv.org/abs/2009.07262v2
- Date: Sun, 4 Oct 2020 07:50:39 GMT
- Authors: Abhishek Gupta (1 and 2), Camylle Lanteigne (1 and 3), Victoria Heath
(1) ((1) Montreal AI Ethics Institute, (2) Microsoft, (3) Algora Lab)
- Abstract summary: Montreal AI Ethics Institute co-hosted two public consultations with the Partnership on AI in May 2020.
The meetups examined potential publication norms for responsible AI.
MAIEI provides six initial recommendations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The history of science and technology shows that seemingly innocuous
developments in scientific theories and research have enabled real-world
applications with significant negative consequences for humanity. In order to
ensure that the science and technology of AI is developed in a humane manner,
we must develop research publication norms that are informed by our growing
understanding of AI's potential threats and use cases. Unfortunately, it's
difficult to create a set of publication norms for responsible AI because the
field of AI is currently fragmented in terms of how this technology is
researched, developed, funded, etc. To examine this challenge and find
solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public
consultations with the Partnership on AI in May 2020. These meetups examined
potential publication norms for responsible AI, with the goal of creating a
clear set of recommendations and ways forward for publishers.
In its submission, MAIEI provides six initial recommendations: 1) create
tools to navigate publication decisions, 2) offer a page-number extension,
3) develop a network of peers, 4) require broad impact statements,
5) require the publication of expected results, and 6) revamp the peer-review
process. After considering potential concerns regarding these recommendations,
including constraining innovation and creating a "black market" for AI
research, MAIEI outlines three ways forward for publishers: 1) state clearly
and consistently the need for established norms, 2) coordinate and build
trust as a community, and 3) change the approach.
Related papers
- AI Research is not Magic, it has to be Reproducible and Responsible: Challenges in the AI field from the Perspective of its PhD Students (arXiv, 2024-08-13)
  We surveyed 28 AI doctoral candidates from 13 European countries. The challenges they report underscore the findability and quality of AI resources such as datasets, models, and experiments, and the need for immediate adoption of responsible and reproducible AI research practices.
- The Ethics of Advanced AI Assistants (arXiv, 2024-04-24)
  This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces whose function is to plan and execute sequences of actions on behalf of a user. We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment, and how best to evaluate advanced AI assistants.
- Responsible Artificial Intelligence: A Structured Literature Review (arXiv, 2024-03-11)
  The EU has recently issued several publications emphasizing the necessity of trust in AI, highlighting the urgent need for international regulation. This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
- Beyond principlism: Practical strategies for ethical AI use in research practices (arXiv, 2024-01-27)
  The rapid adoption of generative artificial intelligence in scientific research has outpaced the development of ethical guidelines, and existing approaches offer little practical guidance for addressing the ethical challenges of AI in scientific research practices. I propose a user-centered, realism-inspired approach to bridge the gap between abstract principles and day-to-day research practices.
- Investigating Responsible AI for Scientific Research: An Empirical Study (arXiv, 2023-12-15)
  The push for Responsible AI (RAI) in research institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development. This paper aims to assess awareness and preparedness regarding the ethical risks inherent in AI design and development. Our results reveal knowledge gaps concerning ethical, responsible, and inclusive AI, with limited awareness of the available AI ethics frameworks.
- Report of the 1st Workshop on Generative AI and Law (arXiv, 2023-11-11)
  This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw), where a cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare (arXiv, 2023-08-11)
  Concerns have been raised about the technical, clinical, ethical, and legal risks associated with medical AI. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies (arXiv, 2022-12-08)
  The benefits, challenges, and drawbacks of AI in this field are reviewed, along with the use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods.
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence (arXiv, 2021-07-29)
  We propose guidelines for evaluating the moral and ethical consequences of affectively-aware AI, and a multi-stakeholder analysis framework that separates the ethical responsibilities of AI developers vis-a-vis the entities that deploy such AI. We end with recommendations for researchers, developers, and operators, as well as regulators and law-makers.
- Building Bridges: Generative Artworks to Explore AI Ethics (arXiv, 2021-06-25)
  In recent years, there has been increased emphasis on understanding and mitigating the adverse impacts of artificial intelligence (AI) technologies on society. A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own constraints and interests. This position paper outlines ways in which generative artworks can serve as accessible and powerful educational tools.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) (arXiv, 2021-05-07)
  This paper proposes a comprehensive analysis of existing concepts of intelligence from different disciplines, with the aim of identifying shared notions or discrepancies to consider when qualifying AI systems.
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.