The Need for Ethical, Responsible, and Trustworthy Artificial
Intelligence for Environmental Sciences
- URL: http://arxiv.org/abs/2112.08453v1
- Date: Wed, 15 Dec 2021 19:57:38 GMT
- Title: The Need for Ethical, Responsible, and Trustworthy Artificial
Intelligence for Environmental Sciences
- Authors: Amy McGovern and Imme Ebert-Uphoff and David John Gagne II and Ann
Bostrom
- Abstract summary: It is imperative that we initiate a discussion about the ethical and responsible use of AI.
A common misconception is that the environmental sciences are immune to such unintended consequences when AI is being used.
We focus on weather and climate examples but the conclusions apply broadly across the environmental sciences.
- Score: 0.04588028371034406
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Given the growing use of Artificial Intelligence (AI) and machine learning
(ML) methods across all aspects of environmental sciences, it is imperative
that we initiate a discussion about the ethical and responsible use of AI. In
fact, much can be learned from other domains where AI was introduced, often
with the best of intentions, yet often led to unintended societal consequences,
such as hard coding racial bias in the criminal justice system or increasing
economic inequality through the financial system. A common misconception is
that the environmental sciences are immune to such unintended consequences when
AI is being used, as most data come from observations, and AI algorithms are
based on mathematical formulas, which are often seen as objective. In this
article, we argue the opposite can be the case. Using specific examples, we
demonstrate many ways in which the use of AI can introduce similar consequences
in the environmental sciences. We hope this article will stimulate discussion and
research efforts in this direction. As a community, we should avoid repeating
any foreseeable mistakes made in other domains through the introduction of AI.
In fact, with proper precautions, AI can be a great tool to help reduce
climate and environmental injustice. We primarily focus on weather and climate
examples but the conclusions apply broadly across the environmental sciences.
Related papers
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Towards Sustainable Artificial Intelligence: An Overview of Environmental Protection Uses and Issues [0.0]
This paper describes the paradox of an energy-consuming technology serving the ecological challenges of tomorrow.
It draws on numerous examples from AI for Green players to present concrete use cases.
The environmental dimension is part of the broader ethical problem of AI, and addressing it is crucial for ensuring the sustainability of AI in the long term.
arXiv Detail & Related papers (2022-12-22T14:31:48Z)
- Ever heard of ethical AI? Investigating the salience of ethical AI issues among the German population [0.0]
General interest in AI and a higher educational level are predictive of some engagement with AI.
Ethical issues are voiced only by a small subset of citizens with fairness, accountability, and transparency being the least mentioned ones.
Once ethical AI is top of mind, there is some potential for activism.
arXiv Detail & Related papers (2022-07-28T13:46:13Z)
- AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)