On Information Processing Limitations In Humans and Machines
- URL: http://arxiv.org/abs/2112.03669v1
- Date: Tue, 7 Dec 2021 13:03:00 GMT
- Title: On Information Processing Limitations In Humans and Machines
- Authors: Birgitta Dresp-Langley
- Abstract summary: Information theory is concerned with the study of transmission, processing, extraction, and utilization of information.
This paper discusses some of the implications of what is known about the limitations of human information processing for the development of reliable Artificial Intelligence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information theory is concerned with the study of the transmission, processing, extraction, and utilization of information. In its most abstract form, information is conceived as a means of resolving uncertainty. Shannon and Weaver (1949) were among the first to develop a conceptual framework for information theory. One of the key assumptions of the model is that uncertainty increases linearly with the complexity (in bits) of the information transmitted or generated. A whole body of data from the cognitive neurosciences has since shown that human response and action times increase in a similar fashion as a function of information complexity. This paper discusses some of the implications of what is known about the limitations of human information processing for the development of reliable Artificial Intelligence. It concludes that novel conceptual frameworks are needed to inspire future studies of this complex problem space.
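The near-linear growth of response time with information load that the abstract refers to is commonly formalized as the Hick-Hyman law, RT = a + b * log2(n + 1) for a choice among n alternatives. The Python sketch below illustrates the relationship; the coefficients a and b are hypothetical placeholders, not values from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman law: mean reaction time grows linearly with the
    information content log2(n + 1) of an n-alternative choice.
    a (base time, s) and b (s/bit) are illustrative values only."""
    return a + b * math.log2(n_alternatives + 1)

for n in (1, 2, 4, 8):
    bits = shannon_entropy([1.0 / n] * n)  # uniform choice set: log2(n) bits
    print(f"{n} alternatives: {bits:.2f} bits, RT ~ {hick_hyman_rt(n):.3f} s")
```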
Related papers
- Informational Embodiment: Computational role of information structure in codes and robots [48.00447230721026]
We present an information-theoretic (IT) account of how the precision of sensors, the accuracy of motors, their placement, and the body geometry shape the information structure in robots and computational codes.
We envision the robot's body as a physical communication channel through which information is conveyed, in and out, despite intrinsic noise and material limitations.
We introduce a special class of efficient codes used in IT that reach the Shannon limits in terms of information capacity, error correction and robustness against noise, and parsimony; a toy channel-capacity calculation follows below.
arXiv Detail & Related papers (2024-08-23T09:59:45Z)
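As a minimal, paper-independent illustration of a Shannon limit for a noisy channel like the one the entry above describes, the sketch below computes the capacity C = 1 - H(p) of a binary symmetric channel with crossover probability p.

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(flip_prob):
    """Capacity (bits per channel use) of a binary symmetric channel:
    C = 1 - H(p), the Shannon limit no code can exceed."""
    return 1.0 - binary_entropy(flip_prob)

for p in (0.0, 0.05, 0.11, 0.5):
    print(f"crossover p = {p:.2f} -> capacity {bsc_capacity(p):.3f} bits/use")
```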
- An Information Bottleneck Characterization of the Understanding-Workload Tradeoff [15.90243405031747]
Consideration of human factors that impact explanation efficacy is central to explainable AI (XAI) design.
Existing work in XAI has demonstrated a tradeoff between understanding and workload induced by different types of explanations; a toy version of the bottleneck objective is sketched below.
arXiv Detail & Related papers (2023-10-11T18:35:26Z)
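The information bottleneck compresses an input X into a representation T while preserving information about a target Y, trading off I(X;T) against I(T;Y). The sketch below evaluates the objective I(X;T) - beta * I(T;Y) for a toy joint distribution and a hypothetical hard-clustering encoder; the distribution, the encoder, and beta are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits from a joint distribution array p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# Toy joint p(x, y): four stimuli, two labels (entries sum to 1).
pxy = np.array([[0.20, 0.05],
                [0.15, 0.10],
                [0.05, 0.20],
                [0.10, 0.15]])

# Hypothetical encoder p(t|x): hard-clusters four stimuli into two codes.
pt_given_x = np.array([[1.0, 0.0],
                       [1.0, 0.0],
                       [0.0, 1.0],
                       [0.0, 1.0]])

px = pxy.sum(axis=1)            # marginal p(x)
pxt = px[:, None] * pt_given_x  # joint p(x, t)
pty = pt_given_x.T @ pxy        # joint p(t, y) via the chain T - X - Y

beta = 2.0  # larger beta favors keeping task-relevant information
ib = mutual_information(pxt) - beta * mutual_information(pty)
print(f"I(X;T) = {mutual_information(pxt):.3f} bits, "
      f"I(T;Y) = {mutual_information(pty):.3f} bits, objective = {ib:.3f}")
```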
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics; a one-layer toy inference loop is sketched below.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
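Predictive coding infers hidden causes by iteratively reducing the error between an observation and a top-down prediction. The sketch below does this for an assumed linear generative model x = W @ mu, using plain gradient steps on the squared prediction error; it is a one-layer toy, not the hierarchical formulations the paper surveys.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear generative model: observation x = W @ mu (plus noise).
W = rng.normal(size=(8, 3))                   # generative weights
mu_true = np.array([1.0, -0.5, 2.0])          # hidden causes
x = W @ mu_true + 0.01 * rng.normal(size=8)   # noisy observation

mu = np.zeros(3)  # inferred causes, refined by error minimization
lr = 0.05
for _ in range(200):
    eps = x - W @ mu            # prediction error at the sensory layer
    mu = mu + lr * (W.T @ eps)  # gradient step that reduces the error

print("true causes:    ", np.round(mu_true, 2))
print("inferred causes:", np.round(mu, 2))
```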
- Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine [5.126042819606137]
We focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.
Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.
Federated learning enables training large-scale models without exposing sensitive personal health information; a minimal averaging sketch follows below.
arXiv Detail & Related papers (2022-11-17T03:32:00Z)
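Federated learning keeps raw data on each client and shares only model parameters with a coordinating server. Below is a minimal FedAvg-style sketch on a linear-regression task; the client data, learning rate, and round count are illustrative assumptions, not the medical setting of the paper.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on its private data."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    return w

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with a private dataset
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w_global = np.zeros(2)
for _ in range(10):  # FedAvg: broadcast, train locally, average weights
    local_models = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)

print("recovered weights:", np.round(w_global, 2))  # close to [2.0, -1.0]
```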
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness (see the sketch below).
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
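The redundancy effect mentioned above comes from standard (inverted) dropout: zeroing random units forces the surviving units to carry overlapping information. A minimal sketch of the mechanism, independent of the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, drop_prob=0.5, training=True):
    """Inverted dropout: randomly zero units during training and rescale
    the rest, so downstream layers see the same expected input."""
    if not training or drop_prob == 0.0:
        return activations
    keep = rng.random(activations.shape) >= drop_prob
    return activations * keep / (1.0 - drop_prob)

h = np.ones(10)  # toy hidden-layer activations
print("train:", dropout(h, 0.5, training=True))   # roughly half zeroed
print("eval: ", dropout(h, 0.5, training=False))  # unchanged at test time
```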
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Neurocognitive Informatics Manifesto [0.0]
Informatics studies all aspects of the structure of natural and artificial information systems.
Neurocognitive informatics is a new field that should help to improve the matching of artificial and natural systems.
arXiv Detail & Related papers (2021-01-10T19:20:15Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data; a rough estimator is sketched below.
arXiv Detail & Related papers (2020-02-25T06:09:30Z)
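Predictive $\mathcal{V}$-information replaces Shannon's unconstrained observer with a restricted predictor family $\mathcal{V}$: $I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y) - H_{\mathcal{V}}(Y|X)$, where $H_{\mathcal{V}}$ is the best expected log-loss achievable within the family. The sketch below gives a rough in-sample estimate for a logistic-regression family; the synthetic data and the scikit-learn model are illustrative assumptions, not the paper's estimator or experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic data: a binary label partly predictable from one feature.
X = rng.normal(size=(2000, 1))
y = (X[:, 0] + 0.8 * rng.normal(size=2000) > 0).astype(int)

# H_V(Y): without seeing X, the best the family can do is the marginal.
p = y.mean()
h_y = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# H_V(Y|X): expected log-loss (in bits) of the fitted logistic model.
model = LogisticRegression().fit(X, y)
prob_true = model.predict_proba(X)[np.arange(len(y)), y]
h_y_given_x = -np.mean(np.log2(prob_true))

print(f"I_V(X -> Y) is roughly {h_y - h_y_given_x:.3f} bits of usable information")
```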
This list is automatically generated from the titles and abstracts of the papers on this site.