Neurocognitive Informatics Manifesto
- URL: http://arxiv.org/abs/2101.03609v1
- Date: Sun, 10 Jan 2021 19:20:15 GMT
- Title: Neurocognitive Informatics Manifesto
- Authors: Włodzisław Duch
- Abstract summary: Informatics studies all aspects of the structure of natural and artificial information systems.
Neurocognitive informatics is a new field that should help to improve the matching of artificial and natural systems.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Informatics studies all aspects of the structure of natural and artificial
information systems. Theoretical and abstract approaches to information have
made great advances, but human information processing is still unmatched in
many areas, including information management, representation and understanding.
Neurocognitive informatics is a new, emerging field that should help to improve
the matching of artificial and natural systems, and inspire better
computational algorithms to solve problems that are still beyond the reach of
machines. In this position paper, examples of neurocognitive inspirations and
promising directions in this area are given.
Related papers
- Neural Information Organizing and Processing -- Neural Machines
We present an informational synthesis of neural structures, processes, parameters, and characteristics that allows natural and artificial neural systems to be described and modeled in a unified way as neural machines.
arXiv Detail & Related papers (2024-02-15T15:15:11Z)
- Nature-Inspired Local Propagation
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning based on fully on-line information, replacing Backpropagation with the proposed local algorithm.
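For readers who want a concrete sense of what a local update rule looks like in code, the sketch below uses feedback alignment (a well-known local credit-assignment scheme) as a stand-in; it is not the Hamiltonian-based laws of learning derived in that paper, and all names, sizes, and constants are illustrative assumptions.

```python
# Illustrative only: feedback alignment as a familiar example of a *local*
# update rule; NOT the Hamiltonian-based learning laws of the paper above.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 16))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(16, 1))   # hidden -> output weights
B = rng.normal(scale=0.1, size=(1, 16))    # fixed random feedback matrix
lr = 0.01

def online_step(x, y):
    """One fully on-line update; each layer uses only locally available signals."""
    global W1, W2
    h = np.tanh(x @ W1)            # hidden activity
    y_hat = h @ W2                 # output prediction
    e = y - y_hat                  # output error
    W2 += lr * np.outer(h, e)      # delta rule at the output layer
    # Hidden layer: the error travels back through a *fixed random* matrix B,
    # not through W2.T as exact Backpropagation would require.
    h_err = (e @ B) * (1.0 - h ** 2)
    W1 += lr * np.outer(x, h_err)
    return float((e ** 2).mean())

# Stream examples one at a time, as in full on-line learning.
for _ in range(500):
    x = rng.normal(size=8)
    y = np.array([np.tanh(x[:3].sum())])
    online_step(x, y)
```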
arXiv Detail & Related papers (2024-02-04T21:43:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Neuronal Auditory Machine Intelligence (NEURO-AMI) In Perspective
In this report, we present an overview of a new competing bio-inspired continual learning neural tool, Neuronal Auditory Machine Intelligence (Neuro-AMI).
arXiv Detail & Related papers (2023-10-14T13:17:58Z)
- Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks
Spiking neural networks are one of the cores of artificial intelligence that realize brain-like computing.
This paper summarizes the strengths, weaknesses and applicability of five neuronal models and analyzes the characteristics of five network topologies.
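As a point of reference for the kind of neuronal models such surveys compare, a minimal leaky integrate-and-fire (LIF) neuron can be sketched as below; the constants are arbitrary illustrative values, not parameters taken from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, one standard spiking model;
# all constants are illustrative defaults, not values from the paper.
import numpy as np

def lif_simulate(input_current, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Simulate one LIF neuron; return membrane voltages and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + R * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(t * dt)
            v = v_reset            # membrane potential resets after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Constant input drive for 200 time steps (arbitrary units) produces regular spiking.
_, spike_times = lif_simulate(np.full(200, 2.0))
```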
arXiv Detail & Related papers (2023-09-08T16:41:08Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
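For context on the mechanism referred to above, a minimal (inverted) dropout mask can be written as follows; this is generic textbook dropout, not the specific experimental setup of that paper.

```python
# Generic inverted dropout: randomly zero units during training and rescale
# the survivors, which encourages redundant (and hence more robust) codes.
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    """Zero each unit with probability p_drop; rescale to preserve the expectation."""
    if not training or p_drop == 0.0:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    keep_mask = rng.random(activations.shape) >= p_drop
    return activations * keep_mask / (1.0 - p_drop)

h = np.random.default_rng(0).normal(size=(4, 8))  # a batch of hidden activations
h_train = dropout(h, p_drop=0.5, training=True)   # noisy, redundant codes at train time
h_eval = dropout(h, training=False)               # identity at evaluation time
```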
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- On Information Processing Limitations In Humans and Machines
Information theory is concerned with the study of transmission, processing, extraction, and utilization of information.
This paper will discuss some of the implications of what is known about the limitations of human information processing for the development of reliable Artificial Intelligence.
arXiv Detail & Related papers (2021-12-07T13:03:00Z)
- Neural Fields in Visual Computing and Beyond
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
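To make "coordinate-based neural networks" concrete: a neural field is simply a network that maps a continuous coordinate to a signal value. The untrained NumPy sketch below is purely illustrative and omits the positional encoding and training that the surveyed methods rely on.

```python
# A minimal neural field: an MLP from a 2-D coordinate to a scalar value.
# Randomly initialized and untrained; shown only to make the idea concrete.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.5, size=(64, 64)), np.zeros(64)
W3, b3 = rng.normal(scale=0.5, size=(64, 1)), np.zeros(1)

def field(xy):
    """Evaluate the field at continuous coordinates xy of shape (N, 2)."""
    h = np.tanh(xy @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return h @ W3 + b3             # one value per query coordinate

# Query the field on a 32x32 grid over [0, 1]^2, e.g. to rasterize an image.
xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
image = field(coords).reshape(32, 32)
```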
arXiv Detail & Related papers (2021-11-22T18:57:51Z)
- Incorporation of Deep Neural Network & Reinforcement Learning with Domain Knowledge
We present a study of the ways in which domain information has been incorporated when building models with neural networks.
Integrating space data is uniquely important to the development of knowledge-understanding models, as well as to other fields that aid in understanding information by utilizing the human-machine interface and Reinforcement Learning.
arXiv Detail & Related papers (2021-07-29T17:29:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.