Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
- URL: http://arxiv.org/abs/2004.12524v1
- Date: Mon, 27 Apr 2020 00:58:42 GMT
- Title: Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
- Authors: Benjamin Shickel, Parisa Rashidi
- Abstract summary: We review current techniques for interpreting deep learning models that operate on sequential data.
We identify similarities to non-sequential methods, and discuss current limitations and future avenues of sequential interpretability research.
- Score: 1.8275108630751837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning continues to revolutionize an ever-growing number of critical
application areas including healthcare, transportation, finance, and basic
sciences. Despite their increased predictive power, model transparency and
human explainability remain a significant challenge due to the "black box"
nature of modern deep learning models. In many cases, the desired balance
between interpretability and performance is predominantly task-specific.
Human-centric domains such as healthcare necessitate a renewed focus on
understanding how and why these frameworks are arriving at critical and
potentially life-or-death decisions. Given the quantity of research and
empirical successes of deep learning for computer vision, most of the existing
interpretability research has focused on image processing techniques.
Comparatively, less attention has been paid to interpreting deep learning
frameworks using sequential data. Given recent deep learning advancements in
highly sequential domains such as natural language processing and physiological
signal processing, the need for deep sequential explanations is at an all-time
high. In this paper, we review current techniques for interpreting deep
learning models that operate on sequential data, identify similarities to
non-sequential methods, and discuss current limitations and future avenues of
sequential interpretability research.
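To make the reviewed subject concrete, one widely used family of sequential interpretability techniques is gradient-based saliency: score each time step of an input sequence by the gradient of the model's output with respect to it. The sketch below is a minimal illustration of that idea, assuming PyTorch and a toy LSTM classifier; the model, names, and sizes are illustrative and not taken from the paper.

```python
# Minimal sketch of gradient-based saliency for a sequence model.
# All names and sizes here are illustrative assumptions, not the
# paper's own code.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Toy LSTM classifier over a (batch, time, features) sequence."""
    def __init__(self, n_features=8, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # classify from the last step

model = SequenceClassifier()
x = torch.randn(1, 50, 8, requires_grad=True)  # one 50-step sequence

# Saliency: gradient of the predicted class score with respect to
# each input time step; large magnitudes flag influential steps.
score = model(x)[0].max()
score.backward()
saliency = x.grad.abs().sum(dim=-1).squeeze(0)  # (time,) importance
print(saliency.topk(5).indices)  # five most influential time steps
```

Attention-weight inspection and perturbation-based occlusion follow the same pattern: compute a per-time-step relevance score and present it alongside the raw sequence.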
Related papers
- State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests there is room for novel routes, built on learning algorithms that depart from standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z)
- Continual Learning in Medical Image Analysis: A Comprehensive Review of Recent Advancements and Future Prospects [5.417947115749931]
Continual learning has emerged as a crucial approach for developing unified and sustainable deep models.
This systematic review paper provides a comprehensive overview of the state-of-the-art in continual learning techniques applied to medical imaging analysis.
arXiv Detail & Related papers (2023-12-28T13:16:03Z)
- Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey [20.373311465258393]
This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
arXiv Detail & Related papers (2023-07-14T04:50:04Z)
- A Threefold Review on Deep Semantic Segmentation: Efficiency-oriented, Temporal and Depth-aware design [77.34726150561087]
We conduct a survey on the most relevant and recent advances in deep semantic segmentation in the context of vision for autonomous vehicles.
Our main objective is to provide a comprehensive discussion on the main methods, advantages, limitations, results and challenges faced from each perspective.
arXiv Detail & Related papers (2023-03-08T01:29:55Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks; a minimal one-step-ahead predictor sketch appears after this list.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Deep Learning to See: Towards New Foundations of Computer Vision [88.69805848302266]
This book criticizes the supposed scientific progress in the field of computer vision.
It proposes the investigation of vision within the framework of information-based laws of nature.
arXiv Detail & Related papers (2022-06-30T15:20:36Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming the challenges of the existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Adaptive Explainable Continual Learning Framework for Regression Problems with Focus on Power Forecasts [0.0]
Deep neural networks have to learn new tasks and overcome forgetting the knowledge obtained from old tasks as the amount of data keeps increasing in applications.
Two continual learning scenarios are proposed to describe the potential challenges in this context.
Research topics include, but are not limited to, developing continual deep learning algorithms, strategies for non-stationarity detection in data streams, and explainable and visualizable artificial intelligence.
arXiv Detail & Related papers (2021-08-24T14:59:10Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- A Review on Explainability in Multimodal Deep Neural Nets [2.3204178451683264]
Multimodal AI techniques have achieved much success in several application domains.
Despite their outstanding performance, the complex, opaque, and black-box nature of deep neural nets limits their social acceptance and usability.
This paper extensively reviews the present literature to present a comprehensive survey and commentary on the explainability in multimodal deep neural nets.
arXiv Detail & Related papers (2021-05-17T14:17:49Z)
- Structure preserving deep learning [1.2263454117570958]
Deep learning has risen to the foreground as a topic of massive interest.
There are multiple challenging mathematical problems involved in applying deep learning.
There is a growing effort to mathematically understand the structure in existing deep learning methods.
arXiv Detail & Related papers (2020-06-05T10:59:09Z)
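As a concrete companion to the system-identification survey in the list above, the following is a hedged sketch of the one-step-ahead setup it describes: a feedforward network fitted to predict a dynamic system's next output from a window of its previous inputs and outputs. The toy system, window size, and all names are illustrative assumptions, not code from that survey.

```python
# Hypothetical sketch of deep-network system identification:
# predict the next output of a dynamic system from a window of
# previous observations. Names and sizes are illustrative.
import torch
import torch.nn as nn

WINDOW = 10  # past (input, output) pairs fed to the predictor

# NARX-style feedforward predictor: y[t] is predicted from
# u[t-WINDOW:t] and y[t-WINDOW:t].
model = nn.Sequential(
    nn.Linear(2 * WINDOW, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# Toy data: a noisy first-order system y[t] = 0.9*y[t-1] + 0.1*u[t-1]
T = 500
u = torch.randn(T)
y = torch.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1] + 0.01 * torch.randn(1).item()

# Build (window -> next output) training pairs and fit by regression.
X = torch.stack([torch.cat([u[t - WINDOW:t], y[t - WINDOW:t]])
                 for t in range(WINDOW, T)])
targets = y[WINDOW:T].unsqueeze(1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):  # a few gradient steps on the whole batch
    loss = nn.functional.mse_loss(model(X), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final one-step-ahead MSE: {loss.item():.4f}")
```

A recurrent or convolutional network can replace the feedforward block when longer memory is needed, which is the architectural menu that survey discusses.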