A Survey of the Potential Long-term Impacts of AI
- URL: http://arxiv.org/abs/2206.11076v1
- Date: Wed, 22 Jun 2022 13:42:28 GMT
- Title: A Survey of the Potential Long-term Impacts of AI
- Authors: Sam Clarke and Jess Whittlestone
- Abstract summary: It is increasingly recognised that advances in artificial intelligence could have large and long-lasting impacts on society.
We identify and discuss five potential long-term impacts of AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is increasingly recognised that advances in artificial intelligence could
have large and long-lasting impacts on society. However, what form those
impacts will take, just how large and long-lasting they will be, and whether
they will ultimately be positive or negative for humanity, is far from clear.
Based on surveying literature on the societal impacts of AI, we identify and
discuss five potential long-term impacts of AI: how AI could lead to long-term
changes in science, cooperation, power, epistemics, and values. We review the
state of existing research in each of these areas and highlight priority
questions for future research.
Related papers
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications such as health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - The Global Impact of AI-Artificial Intelligence: Recent Advances and
Future Directions, A Review [0.0]
The article highlights the implications of AI across economic, ethical, social, security and privacy, and job-displacement dimensions.
It discusses the ethical concerns surrounding AI development, including issues of bias, security, and privacy violations.
The article concludes by emphasizing the importance of public engagement and education to promote awareness and understanding of AI's impact on society at large.
arXiv Detail & Related papers (2023-12-22T00:41:21Z) - Quantifying the Benefit of Artificial Intelligence for Scientific Research [2.4700789675440524]
We estimate both the direct use of AI and the potential benefit of AI in scientific research.
We find that the use of AI in research is widespread throughout the sciences, growing especially rapidly since 2015.
Our analysis reveals considerable potential for AI to benefit numerous scientific fields, yet a notable disconnect exists between AI education and its research applications.
arXiv Detail & Related papers (2023-04-17T08:08:50Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness in AI can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society, in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Human-Centered Responsible Artificial Intelligence: Current & Future
Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - AI safety: state of the field through quantitative lens [0.0]
AI safety is a relatively new field of research focused on techniques for building AI beneficial for humans.
There is a severe lack of research into concrete policies regarding AI.
As we expect AI to be among the main driving forces of change in society, AI safety is the field in which we will need to decide the direction of humanity's future.
arXiv Detail & Related papers (2020-02-12T11:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.