Watershed of Artificial Intelligence: Human Intelligence, Machine
Intelligence, and Biological Intelligence
- URL: http://arxiv.org/abs/2104.13155v1
- Date: Tue, 27 Apr 2021 13:03:25 GMT
- Title: Watershed of Artificial Intelligence: Human Intelligence, Machine
Intelligence, and Biological Intelligence
- Authors: Li Weigang, Liriam Enamoto, Denise Leyi Li, Geraldo Pereira Rocha
Filho
- Abstract summary: This article reviews the Once Learning mechanism proposed 23 years ago and the subsequent successes of One-shot Learning in image classification and You Only Look Once (YOLO) in object detection.
The proposal is that AI should be clearly divided into the following categories: Artificial Human Intelligence (AHI), Artificial Machine Intelligence (AMI), and Artificial Biological Intelligence (ABI).
- Score: 0.2580765958706853
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article reviews the Once Learning mechanism proposed 23 years
ago and the subsequent successes of One-shot Learning in image classification
and You Only Look Once (YOLO) in object detection. Analyzing the current
development of AI, the proposal is that AI should be clearly divided into the
following categories: Artificial Human Intelligence (AHI), Artificial Machine
Intelligence (AMI), and Artificial Biological Intelligence (ABI), which will
also be the main directions of theory and application development for AI. As a
watershed for the branches of AI, some classification standards and methods are
discussed: 1) AI R&D should be human-oriented, machine-oriented, and
biological-oriented; 2) the information input is processed by dimensionality-up
or dimensionality-reduction; and 3) one/few samples or large samples are used
for knowledge learning.
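To make these three classification standards concrete, the following minimal Python sketch encodes them as a small data structure with a toy decision rule. The field names, the five-sample threshold, and the way the criteria are combined into a single branch are illustrative assumptions made for this summary, not definitions taken from the paper.

    from dataclasses import dataclass
    from enum import Enum


    class Branch(Enum):
        """The three branches of AI proposed in the abstract."""
        AHI = "Artificial Human Intelligence"
        AMI = "Artificial Machine Intelligence"
        ABI = "Artificial Biological Intelligence"


    @dataclass
    class SystemProfile:
        """An AI system described along the paper's three watershed criteria."""
        name: str
        orientation: str            # criterion 1: "human", "machine", or "biological"
        dimensionality_up: bool     # criterion 2: True = dimensionality-up, False = dimensionality-reduction
        samples_per_concept: int    # criterion 3: rough number of training samples per concept


    FEW_SAMPLE_THRESHOLD = 5  # assumed cut-off between one/few-sample and large-sample learning


    def classify(profile: SystemProfile) -> Branch:
        """Toy rule combining the three criteria into one branch.

        The threshold above and the tie-breaking are illustrative assumptions,
        not rules stated in the paper.
        """
        few_samples = profile.samples_per_concept <= FEW_SAMPLE_THRESHOLD
        if profile.orientation == "human" and few_samples and profile.dimensionality_up:
            # One/few samples enriched by dimensionality-up, as in Once/One-shot Learning.
            return Branch.AHI
        if profile.orientation == "machine" and not few_samples:
            # Large-sample, dimensionality-reducing pipelines, e.g. a YOLO-style detector.
            return Branch.AMI
        # Everything else is treated here as biologically oriented or inspired.
        return Branch.ABI


    if __name__ == "__main__":
        detector = SystemProfile("YOLO-style detector", "machine", False, 100_000)
        one_shot = SystemProfile("one-shot image classifier", "human", True, 1)
        print(detector.name, "->", classify(detector).value)  # Artificial Machine Intelligence
        print(one_shot.name, "->", classify(one_shot).value)  # Artificial Human Intelligence

Running the sketch classifies a large-sample, dimensionality-reducing detector as AMI and a one/few-sample, dimensionality-up learner as AHI, mirroring the contrast the abstract draws between YOLO-style detection and Once/One-shot Learning.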
Related papers
- The Interplay of Learning, Analytics, and Artificial Intelligence in Education: A Vision for Hybrid Intelligence [0.45207442500313766]
I challenge the prevalent narrow conceptualisation of AI as tools, and argue for the importance of alternative conceptualisations of AI.
I highlight the differences between human intelligence and artificial information processing, and posit that AI can also serve as an instrument for understanding human learning.
The paper presents three unique conceptualisations of AI: the externalization of human cognition, the internalization of AI models to influence human mental models, and the extension of human cognition via tightly coupled human-AI hybrid intelligence systems.
arXiv Detail & Related papers (2024-03-24T10:07:46Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- A Classification of Artificial Intelligence Systems for Mathematics Education [3.718476964451589]
This chapter provides an overview of the different Artificial Intelligence (AI) systems that are being used in digital tools for Mathematics Education (ME).
It is aimed at researchers in AI and Machine Learning (ML), for whom we shed some light on the specific technologies that are being used in educational applications.
arXiv Detail & Related papers (2021-07-13T12:09:10Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may have an impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
- AI from concrete to abstract: demystifying artificial intelligence to the general public [0.0]
This article presents a new methodology, AI from concrete to abstract (AIcon2abs).
The main strategy adopted by AIcon2abs is to promote a demystification of artificial intelligence.
The simplicity of the WiSARD weightless artificial neural network model enables easy visualization and understanding of training and classification tasks.
arXiv Detail & Related papers (2020-06-07T01:14:06Z) - Human Evaluation of Interpretability: The Case of AI-Generated Music
Knowledge [19.508678969335882]
We focus on evaluating AI-discovered knowledge/rules in the arts and humanities.
We present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects.
arXiv Detail & Related papers (2020-04-15T06:03:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.