Ten Hard Problems in Artificial Intelligence We Must Get Right
- URL: http://arxiv.org/abs/2402.04464v2
- Date: Fri, 19 Apr 2024 10:38:59 GMT
- Title: Ten Hard Problems in Artificial Intelligence We Must Get Right
- Authors: Gavin Leech, Simson Garfinkel, Misha Yagudin, Alexander Briand, Aleksandr Zhuravlev
- Abstract summary: We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks.
For each problem, we outline the area, identify significant recent work, and suggest ways forward.
- Score: 72.99597122935903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks: (1) developing general capabilities of the systems; (2) assuring the performance of AI systems and their training processes; (3) aligning system goals with human goals; (4) enabling great applications of AI in real life; (5) addressing economic disruptions; (6) ensuring the participation of all; (7) at the same time ensuring socially responsible deployment; (8) addressing any geopolitical disruptions that AI causes; (9) promoting sound governance of the technology; and (10) managing the philosophical disruptions for humans living in the age of AI. For each problem, we outline the area, identify significant recent work, and suggest ways forward. [Note: this paper reviews literature through January 2023.]
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - AI in Action: Accelerating Progress Towards the Sustainable Development Goals [4.09375125119842]
We draw on Google's internal and collaborative research, technical work, and social impact initiatives to show AI's potential to accelerate action on the UN's Sustainable Development Goals.
The paper highlights AI capabilities (including computer vision, generative AI, natural language processing, and multimodal AI) and showcases how AI is altering how we approach problem-solving across all 17 SDGs.
We then offer insights on AI development and deployment to drive bold and responsible innovation, enhance impact, close the accessibility gap, and ensure that everyone, everywhere, can benefit from AI.
arXiv Detail & Related papers (2024-07-02T23:25:27Z) - Reconfiguring Participatory Design to Resist AI Realism [1.0878040851638]
This paper argues that participatory design can play a role in questioning and resisting AI Realism.
I examine three concerning aspects of AI Realism: the facade of democratization that lacks true empowerment, demands for human adaptability, and the obfuscation of essential human labor enabling the AI system.
I propose resisting AI Realism by reconfiguring PD to continue engaging with value-centered visions, increasing its exploration of non-AI alternatives, and making the essential human labor underpinning AI systems visible.
arXiv Detail & Related papers (2024-06-05T13:21:46Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Inherent Limitations of AI Fairness [16.588468396705366]
The study of AI fairness has rapidly developed into a rich field of research with links to computer science, social science, law, and philosophy.
Many technical solutions for measuring and achieving AI fairness have been proposed, yet these approaches have been criticized in recent years as misleading, unrealistic, and harmful.
arXiv Detail & Related papers (2022-12-13T11:23:24Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing them effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.