Towards a framework for understanding societal and ethical implications
of Artificial Intelligence
- URL: http://arxiv.org/abs/2001.09750v1
- Date: Fri, 3 Jan 2020 17:55:15 GMT
- Title: Towards a framework for understanding societal and ethical implications
of Artificial Intelligence
- Authors: Richard Benjamins and Idoia Salazar
- Abstract summary: The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI.
We have surveyed the literature for the most common challenges and classified them into seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relation people-robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare.
- Score: 2.28438857884398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is one of the most discussed technologies today.
There are many innovative applications, such as the diagnosis and treatment of
cancer, customer experience, new businesses, education, modeling the propagation
of contagious diseases, and optimizing the management of humanitarian
catastrophes.
However, with all those opportunities also comes great responsibility to ensure
good and fair practice of AI. The objective of this paper is to identify the
main societal and ethical challenges implied by a massive uptake of AI. We have
surveyed the literature for the most common challenges and classified them into
seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4)
Relation people-robots, 5) Concentration of power and wealth, 6) Intentional
bad uses, and 7) AI for weapons and warfare. The challenges should be dealt
with in different ways depending on their origin; some have technological
solutions, while others require ethical, societal, or political answers.
Depending on the origin, different stakeholders might need to act. Whoever the
relevant stakeholder is, failing to address these issues will lead to
uncertainty and unforeseen consequences, with a potentially large negative
societal impact that hurts especially the most vulnerable groups in society.
Technology is helping us make better decisions, and AI is promoting data-driven
decision-making in addition to experience- and intuition-based approaches, with
many improvements already happening. However, the negative side effects of this
technology need to be well understood and acted upon before it is deployed
massively into the world.
Related papers
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Ten Hard Problems in Artificial Intelligence We Must Get Right [72.99597122935903]
We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks.
For each problem, we outline the area, identify significant recent work, and suggest ways forward.
arXiv Detail & Related papers (2024-02-06T23:16:41Z) - The Global Impact of AI-Artificial Intelligence: Recent Advances and
Future Directions, A Review [0.0]
The article highlights the implications of AI for the economy, ethics, society, security and privacy, and job displacement.
It discusses the ethical concerns surrounding AI development, including issues of bias, security, and privacy violations.
The article concludes by emphasizing the importance of public engagement and education to promote awareness and understanding of AI's impact on society at large.
arXiv Detail & Related papers (2023-12-22T00:41:21Z) - The Promise and Peril of Artificial Intelligence -- Violet Teaming
Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
The approach emerged from AI safety research and aims to manage risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Ever heard of ethical AI? Investigating the salience of ethical AI
issues among the German population [0.0]
General interest in AI and a higher educational level are predictive of some engagement with AI.
Ethical issues are voiced only by a small subset of citizens, with fairness, accountability, and transparency being the least mentioned.
Once ethical AI is top of mind, there is some potential for activism.
arXiv Detail & Related papers (2022-07-28T13:46:13Z) - AI Challenges for Society and Ethics [0.0]
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing.
The role of AI governance is ultimately to take practical steps to mitigate the risk of harm while enabling the benefits of innovation in AI.
It also requires thinking through the normative question of what beneficial use of AI in society looks like, which is equally challenging.
arXiv Detail & Related papers (2022-06-22T13:33:11Z) - Ethical AI for Social Good [0.0]
The concept of AI for Social Good (AI4SG) is gaining momentum in both information societies and the AI community.
This paper fills this gap by addressing the ethical aspects that are critical for future AI4SG efforts.
arXiv Detail & Related papers (2021-07-14T15:16:51Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Role of Social Movements, Coalitions, and Workers in Resisting
Harmful Artificial Intelligence and Contributing to the Development of
Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)