Next Wave Artificial Intelligence: Robust, Explainable, Adaptable,
Ethical, and Accountable
- URL: http://arxiv.org/abs/2012.06058v1
- Date: Fri, 11 Dec 2020 00:50:09 GMT
- Title: Next Wave Artificial Intelligence: Robust, Explainable, Adaptable,
Ethical, and Accountable
- Authors: Odest Chadwicke Jenkins, Daniel Lopresti, and Melanie Mitchell
- Abstract summary: Deep neural networks have led to many successes and new capabilities in computer vision, speech recognition, language processing, game-playing, and robotics.
A concerning limitation is that even the most successful of today's AI systems suffer from brittleness.
AI systems also can absorb biases (based on gender, race, or other factors) from their training data and further magnify these biases in their subsequent decision-making.
- Score: 5.4138734778206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The history of AI has included several "waves" of ideas. The first wave, from
the mid-1950s to the 1980s, focused on logic and symbolic hand-encoded
representations of knowledge, the foundations of so-called "expert systems".
The second wave, starting in the 1990s, focused on statistics and machine
learning, in which, instead of hand-programming rules for behavior, programmers
constructed "statistical learning algorithms" that could be trained on large
datasets. In the most recent wave, research in AI has largely focused on deep
(i.e., many-layered) neural networks, which are loosely inspired by the brain
and trained by "deep learning" methods. However, while deep neural networks
have led to many successes and new capabilities in computer vision, speech
recognition, language processing, game-playing, and robotics, their potential
for broad application remains limited by several factors.
A concerning limitation is that even the most successful of today's AI
systems suffer from brittleness: they can fail in unexpected ways when faced
with situations that differ sufficiently from those they were trained on.
This lack of robustness also appears in the vulnerability of AI systems to
adversarial attacks, in which an adversary subtly manipulates data so as to
guarantee a specific wrong answer or action from an AI system. AI systems
can also absorb biases (based on gender, race, or other factors) from their
training data and further magnify these biases in their subsequent
decision-making. Taken together, these limitations have kept AI applications
such as automatic medical diagnosis and autonomous vehicles from becoming
trustworthy enough for wide deployment. The massive proliferation of AI
across society will require radically new ideas to yield technology that
will not sacrifice our productivity, our quality of life, or our values.
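The abstract gives no implementation, but the adversarial-attack failure
mode it describes can be made concrete with a short sketch. Below is a
minimal, untargeted fast gradient sign method (FGSM) example in PyTorch;
the toy classifier, input, and perturbation budget are hypothetical
stand-ins. A targeted variant (which guarantees a specific wrong answer,
as the abstract describes) would instead step the input down the loss
toward an attacker-chosen label.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases
# the loss, which can flip the model's prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for "an AI system": a tiny 3-class classifier.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # clean input
y = torch.tensor([2])                      # its (assumed) true label

# One forward/backward pass gives the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.25                             # perturbation budget (illustrative)
x_adv = (x + epsilon * x.grad.sign()).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```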
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- A Survey on Vision-Language-Action Models for Embodied AI [71.16123093739932]
Vision-language-action models (VLAs) have become a foundational element in robot learning.
Various methods have been proposed to enhance traits such as versatility, dexterity, and generalizability.
VLAs serve as high-level task planners capable of decomposing long-horizon tasks into executable subtasks.
arXiv Detail & Related papers (2024-05-23T01:43:54Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances in conventional technologies such as meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
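The summary above names predictive coding without its mechanics. As a
rough illustration only (not the model from the paper), here is a minimal
linear predictive-coding loop in NumPy: a latent estimate is refined by
gradient descent on the prediction error, and the generative weights then
receive an error-driven update; the sizes and step sizes are hypothetical.

```python
# Minimal linear predictive-coding sketch (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # generative weights: latent -> observation
x = rng.normal(size=8)                  # observed signal
mu = np.zeros(4)                        # latent estimate

lr_mu, lr_W = 0.1, 0.01                 # hypothetical step sizes
for _ in range(50):
    eps = x - W @ mu                    # prediction error at the input layer
    mu += lr_mu * (W.T @ eps)           # inference: adjust mu to reduce the error
W += lr_W * np.outer(x - W @ mu, mu)    # learning: error-driven weight update

print("remaining error norm:", np.linalg.norm(x - W @ mu))
```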
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control [0.0]
The extent and scope of future AI capabilities remain a key uncertainty.
There are concerns over the extent of integration and oversight of AI's opaque decision processes.
This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis.
arXiv Detail & Related papers (2022-11-06T15:46:02Z)
- Ten Years after ImageNet: A 360° Perspective on AI [36.9586431868379]
It is ten years since neural networks made their spectacular comeback.
The dominance of AI by Big Tech, which controls talent, computing resources, and data, may lead to an extreme AI divide.
Failure to meet high expectations in high-profile, much-heralded flagship projects such as self-driving vehicles could trigger another AI winter.
arXiv Detail & Related papers (2022-10-01T01:41:17Z)
- Thinking Fast and Slow in AI: the Role of Metacognition [35.114607887343105]
State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence.
We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies.
arXiv Detail & Related papers (2021-10-05T06:05:38Z)
- Self-explaining AI as an alternative to interpretable AI [0.0]
Double descent indicates that deep neural networks operate by smoothly interpolating between data points.
Neural networks trained on complex real world data are inherently hard to interpret and prone to failure if asked to extrapolate.
Self-explaining AIs are capable of providing a human-understandable explanation along with confidence levels for both the decision and explanation.
arXiv Detail & Related papers (2020-02-12T18:50:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.