Adaptive cognitive fit: Artificial intelligence augmented management of
information facets and representations
- URL: http://arxiv.org/abs/2204.11405v1
- Date: Mon, 25 Apr 2022 02:47:25 GMT
- Title: Adaptive cognitive fit: Artificial intelligence augmented management of
information facets and representations
- Authors: Jim Samuel, Rajiv Kashyap, Yana Samuel and Alexander Pelaez
- Abstract summary: Explosive growth in big data technologies and artificial intelligence [AI] applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explosive growth in big data technologies and artificial intelligence [AI]
applications has led to the increasing pervasiveness of information facets and a
rapidly growing array of information representations. Information facets, such
as equivocality and veracity, can dominate and significantly influence human
perceptions of information and consequently affect human performance. Extant
research in cognitive fit, which preceded the big data and AI era, focused on
the effects of aligning information representation and task on performance,
without sufficient consideration of information facets and attendant cognitive
challenges. Therefore, there is a compelling need to understand the interplay
of these dominant information facets with information representations and
tasks, and their influence on human performance. We suggest that artificially
intelligent technologies that can adapt information representations to overcome
cognitive limitations are necessary for these complex information environments.
To this end, we propose and test a novel *Adaptive Cognitive Fit* [ACF]
framework that explains the influence of information facets and AI-augmented
information representations on human performance. We draw on information
processing theory and cognitive dissonance theory to advance the ACF framework
and a set of propositions. We empirically validate the ACF propositions with an
economic experiment that demonstrates the influence of information facets, and
a machine learning simulation that establishes the viability of using AI to
improve human performance.
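As an illustrative sketch only (an assumption of this summary, not the authors' actual simulation), the core idea that an AI-adapted representation can lift the performance of a capacity-limited decision maker can be mimicked with a small scikit-learn experiment: a k-nearest-neighbour classifier stands in for a decision maker that cannot ignore irrelevant facets, and a feature-selection step stands in for the AI-augmented representation.
```python
# Hypothetical illustration, not the paper's ML simulation: compare a limited
# "decision maker" on a raw, equivocal representation vs. an AI-adapted one.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# An "equivocal" information environment: many facets, few of them informative.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=5,
                           n_redundant=0, flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# AI-augmented representation: keep only the facets a simple model finds informative.
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)
X_tr_adapted, X_te_adapted = selector.transform(X_tr), selector.transform(X_te)

# A k-NN classifier as a stand-in for a capacity-limited decision maker.
decision_maker = KNeighborsClassifier(n_neighbors=5)
raw_acc = accuracy_score(y_te, decision_maker.fit(X_tr, y_tr).predict(X_te))
adapted_acc = accuracy_score(
    y_te, decision_maker.fit(X_tr_adapted, y_tr).predict(X_te_adapted))

print(f"raw representation accuracy:        {raw_acc:.3f}")
print(f"AI-adapted representation accuracy: {adapted_acc:.3f}")
```
On this kind of synthetic data the adapted representation typically yields higher downstream accuracy, which is the pattern the ACF propositions predict for human performance.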
Related papers
- Visual Knowledge in the Big Model Era: Retrospect and Prospect [63.282425615863]
Visual knowledge is a new form of knowledge representation that can encapsulate visual concepts and their relations in a succinct, comprehensive, and interpretable manner.
Because knowledge of the visual world has been identified as an indispensable component of human cognition and intelligence, visual knowledge is poised to play a pivotal role in establishing machine intelligence.
arXiv Detail & Related papers (2024-04-05T07:31:24Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a well-established computational framework that models decisions as emerging from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
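A minimal sketch of the evidence-accumulation framework referenced above (a drift-diffusion-style process); the drift, boundary, and noise values are illustrative assumptions, not taken from the paper.
```python
# Minimal drift-diffusion sketch: noisy evidence accumulates until it crosses a
# decision boundary, yielding a choice and a response time.
import numpy as np

def simulate_trial(drift=0.3, boundary=1.0, noise_sd=1.0, dt=0.001, max_t=5.0, rng=None):
    """Return (choice, response_time): choice 1 for the upper boundary, 0 for the lower."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_t:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(evidence > 0), t

rng = np.random.default_rng(0)
choices, rts = zip(*(simulate_trial(rng=rng) for _ in range(500)))
print(f"upper-boundary choices: {np.mean(choices):.2f}, mean RT: {np.mean(rts):.2f} s")
```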
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers are one of the two main groups of users; they require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as on privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Can Information Behaviour Inform Machine Learning? [0.0]
The paper illustrates how human information behaviour research can bring to machine learning a more nuanced view of information and informing.
Despite their clear differences, the fields of information behaviour and machine learning share many common objectives, paradigms, and key research questions.
arXiv Detail & Related papers (2022-05-01T19:00:52Z)
- On Information Processing Limitations In Humans and Machines [0.0]
Information theory is concerned with the study of transmission, processing, extraction, and utilization of information.
This paper will discuss some of the implications of what is known about the limitations of human information processing for the development of reliable Artificial Intelligence.
arXiv Detail & Related papers (2021-12-07T13:03:00Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
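For readers unfamiliar with the idea, here is a toy counterfactual search on assumed data (not the CEILS method itself, which operates in a latent space that respects causal relations): it looks for the smallest single-feature change that flips a linear model's decision.
```python
# Toy counterfactual search on hypothetical data (not the CEILS algorithm):
# find the smallest single-feature change that flips the model's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))               # features: [income, debt] (illustrative)
y = (X[:, 0] - X[:, 1] > 0).astype(int)     # approve when income exceeds debt
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 0.4]])         # a currently rejected applicant
print("current decision:", model.predict(applicant)[0])

best = None
for feature in (0, 1):
    for delta in np.linspace(-2.0, 2.0, 401):
        candidate = applicant.copy()
        candidate[0, feature] += delta
        if model.predict(candidate)[0] == 1 and (best is None or abs(delta) < abs(best[1])):
            best = (feature, delta)

if best is not None:
    print(f"counterfactual: change feature {best[0]} by {best[1]:+.2f} to obtain approval")
```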
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.