Enhancing Human-Machine Teaming for Medical Prognosis Through Neural
Ordinary Differential Equations (NODEs)
- URL: http://arxiv.org/abs/2102.04121v1
- Date: Mon, 8 Feb 2021 10:52:23 GMT
- Title: Enhancing Human-Machine Teaming for Medical Prognosis Through Neural
Ordinary Differential Equations (NODEs)
- Authors: D. Fompeyrine, E. S. Vorm, N. Ricka, F. Rose, G. Pellegrin
- Abstract summary: A key barrier to the full realization of Machine Learning's potential in medical prognoses is technology acceptance.
Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models.
We propose a novel ML architecture to enhance human understanding and encourage acceptability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) has recently been demonstrated to rival expert-level
human accuracy in prediction and detection tasks in a variety of domains,
including medicine. Despite these impressive findings, however, a key barrier
to the full realization of ML's potential in medical prognoses is technology
acceptance. Recent efforts to produce explainable AI (XAI) have made progress
in improving the interpretability of some ML models, but these efforts suffer
from limitations intrinsic to their design: they work best at identifying why a
system fails, but do poorly at explaining when and why a model's prediction is
correct. We posit that the acceptability of ML predictions in expert domains is
limited by two key factors: the machine's horizon of prediction that extends
beyond human capability, and the inability for machine predictions to
incorporate human intuition into their models. We propose the use of a novel ML
architecture, Neural Ordinary Differential Equations (NODEs) to enhance human
understanding and encourage acceptability. Our approach prioritizes human
cognitive intuition at the center of the algorithm design, and offers a
distribution of predictions rather than single outputs. We explain how this
approach may significantly improve human-machine collaboration in prediction
tasks in expert domains such as medical prognoses. We propose a model and
demonstrate, by expanding a concrete example from the literature, how our model
advances the vision of future hybrid Human-AI systems.
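To make the proposed approach concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a Neural ODE prognosis model: a small network defines the latent dynamics, a fixed-step RK4 solver integrates a patient's latent state over the prediction horizon, and sampling perturbed initial states yields a distribution of risk trajectories rather than a single output. All names, dimensions, the readout, and the noise level are illustrative assumptions.

```python
# Hypothetical sketch of a Neural ODE prognosis model (not the paper's code).
# A small MLP defines the latent dynamics dz/dt = f(z); a fixed-step RK4
# integrator rolls the patient's latent state forward over the prediction
# horizon. Sampling perturbed initial states yields a *distribution* of
# trajectories instead of a single point prediction.
import torch
import torch.nn as nn


class LatentDynamics(nn.Module):
    """dz/dt = f(z), parameterized by a small MLP (time-invariant for brevity)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def rk4_integrate(f, z0, t_grid):
    """Classic fixed-step Runge-Kutta 4 over the time grid; returns all states."""
    states = [z0]
    z = z0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = f(z)
        k2 = f(z + 0.5 * h * k1)
        k3 = f(z + 0.5 * h * k2)
        k4 = f(z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        states.append(z)
    return torch.stack(states)  # (time, batch, dim)


def prognosis_distribution(dynamics, readout, z0, t_grid, n_samples=100, noise=0.05):
    """Sample perturbed initial states to obtain a distribution of trajectories."""
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            z0_s = z0 + noise * torch.randn_like(z0)  # uncertainty in the observed state
            traj = rk4_integrate(dynamics, z0_s, t_grid)
            samples.append(readout(traj))             # e.g. risk score at each time point
    return torch.stack(samples)  # (samples, time, batch, 1)


if __name__ == "__main__":
    dim = 8                                   # latent state summarizing a patient's labs/vitals
    dynamics = LatentDynamics(dim)
    readout = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())  # map state -> risk in [0, 1]
    z0 = torch.randn(1, dim)                  # encoded initial patient state (assumed given)
    t_grid = torch.linspace(0.0, 1.0, steps=20)                # prediction horizon
    dist = prognosis_distribution(dynamics, readout, z0, t_grid)
    print(dist.mean(dim=0).shape, dist.std(dim=0).shape)       # mean and spread over time
```

In this sketch, the spread of the sampled trajectories is what a clinician would inspect alongside the mean prediction, which is one simple way to surface a distribution of outcomes rather than a single output.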
Related papers
- CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding [62.075029712357]
This work introduces Cognitive Diffusion Probabilistic Models (CogDPM).
CogDPM features a precision estimation method based on the hierarchical sampling capabilities of diffusion models and weights the guidance with precision weights estimated from the inherent properties of diffusion models.
We apply CogDPM to real-world prediction tasks using the United Kingdom precipitation and surface wind datasets.
arXiv Detail & Related papers (2024-05-03T15:54:50Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations [3.7673721058583123]
We propose a shift from post-hoc explainability to designing interpretable neural network architectures.
We identify five needs of human-centric XAI and propose two schemes for interpretable-by-design neural networks.
arXiv Detail & Related papers (2023-07-01T15:24:47Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network (a rough illustrative sketch appears after this list).
We empirically show that the fast weights provide a good inductive bias for modeling the character traits of agents and hence improve mindreading ability.
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions when given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- A Turing Test for Transparency [0.0]
A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction.
Recent empirical evidence shows that explanations can have the opposite effect.
This effect challenges the very goal of XAI and implies that responsible usage of transparent AI methods has to consider the ability of humans to distinguish machine generated from human explanations.
arXiv Detail & Related papers (2021-06-21T20:09:40Z)
- Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z)
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI [0.0]
XAI aims to produce a demonstrative factor of trust, which for human subjects is achieved through communicative means.
The idea of trusting a machine with outcomes that bear on a human's livelihood poses an ethical conundrum.
XAI methods produce visualizations of the feature contributions toward a given model's output at both a local and a global level.
arXiv Detail & Related papers (2021-03-08T18:15:52Z)
- Harnessing Explanations to Bridge AI and Humans [14.354362614416285]
Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis.
We propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
arXiv Detail & Related papers (2020-03-16T18:00:02Z)
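As a rough illustration of the fast-weights idea described in the Theory-of-Mind entry above, here is a hypothetical sketch (not the cited paper's implementation): a trait vector inferred from an actor's past trajectory multiplicatively modulates a prediction layer before forecasting the next observation. All module names and dimensions are assumptions.

```python
# Hypothetical sketch of trait-conditioned "fast weights" (not the cited paper's code):
# a trait vector inferred from an actor's past trajectory multiplicatively
# rescales the hidden units of a prediction layer before forecasting the next step.
import torch
import torch.nn as nn


class TraitModulatedPredictor(nn.Module):
    def __init__(self, obs_dim: int, hidden: int, trait_dim: int):
        super().__init__()
        self.trait_encoder = nn.GRU(obs_dim, trait_dim, batch_first=True)  # past trajectory -> trait
        self.base = nn.Linear(obs_dim, hidden)
        self.gate = nn.Linear(trait_dim, hidden)   # maps trait to per-unit multipliers
        self.head = nn.Linear(hidden, obs_dim)     # predicts the next observation

    def forward(self, past_traj: torch.Tensor, current_obs: torch.Tensor) -> torch.Tensor:
        _, trait = self.trait_encoder(past_traj)                  # trait: (1, batch, trait_dim)
        multipliers = torch.sigmoid(self.gate(trait.squeeze(0)))  # (batch, hidden)
        hidden = torch.tanh(self.base(current_obs)) * multipliers # multiplicative modulation
        return self.head(hidden)


if __name__ == "__main__":
    model = TraitModulatedPredictor(obs_dim=4, hidden=32, trait_dim=16)
    past = torch.randn(8, 10, 4)   # batch of 8 actors, 10 past steps, 4-dim observations
    now = torch.randn(8, 4)
    print(model(past, now).shape)  # torch.Size([8, 4])
```

Modulating the hidden units in this way is one simple realization of multiplicative fast weights; the cited paper may implement the modulation differently.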
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.