A Time Series Approach to Explainability for Neural Nets with
Applications to Risk-Management and Fraud Detection
- URL: http://arxiv.org/abs/2212.02906v1
- Date: Tue, 6 Dec 2022 12:04:01 GMT
- Title: A Time Series Approach to Explainability for Neural Nets with
Applications to Risk-Management and Fraud Detection
- Authors: Marc Wildi and Branka Hadji Misheva
- Abstract summary: Trust in technology is enabled by understanding the rationale behind the predictions made.
For cross-sectional data, classical XAI approaches can lead to valuable insights about the models' inner workings.
We propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Artificial intelligence is creating one of the biggest revolutions across
technology-driven application fields. For the finance sector, it offers many
opportunities for significant market innovation, and yet broad adoption of AI
systems relies heavily on our trust in their outputs. Trust in technology is
enabled by understanding the rationale behind the predictions made. To this
end, the concept of eXplainable AI emerged introducing a suite of techniques
attempting to explain to users how complex models arrived at a certain
decision. For cross-sectional data, classical XAI approaches can lead to
valuable insights about the models' inner workings, but these techniques
generally cannot cope well with longitudinal data (time series) in the presence
of dependence structure and non-stationarity. We here propose a novel XAI
technique for deep learning methods which preserves and exploits the natural
time ordering of the data.
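The abstract does not spell out the proposed algorithm, so the following is an illustrative sketch only, not the authors' method: it shows one common way to obtain time-indexed attributions from a sequence model, namely gradient saliency of a one-step forecast with respect to each past observation. The PyTorch LSTM, the synthetic series, and the saliency recipe are all assumptions introduced for illustration.

    # Illustrative sketch only: time-ordered gradient saliency for a
    # sequence model. This is NOT the paper's proposed technique; it just
    # shows what an attribution that preserves the time axis looks like.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy one-step-ahead forecaster over a univariate series of length T.
    T = 50
    model = nn.LSTM(input_size=1, hidden_size=8, batch_first=True)
    head = nn.Linear(8, 1)

    # Synthetic non-stationary input series (trend plus noise), assumed here.
    x = (torch.linspace(0, 1, T) + 0.1 * torch.randn(T)).reshape(1, T, 1)
    x.requires_grad_(True)

    out, _ = model(x)                      # hidden states for every time step
    yhat = head(out[:, -1, :]).squeeze()   # forecast from the final hidden state
    yhat.backward()                        # d(forecast)/d(x_t) for all t

    # saliency[t] = |d yhat / d x_t|: an attribution indexed by time t,
    # so the natural ordering of the observations is preserved.
    saliency = x.grad.abs().squeeze()
    for t in range(T - 5, T):
        print(f"t={t:2d}  saliency={saliency[t].item():.4f}")

Because the attribution is indexed by the time step t, the dependence structure of the series remains visible in the explanation, in contrast to cross-sectional importance scores that treat observations as exchangeable.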
Related papers
- XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods [0.0]
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks.
These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance.
This paper explores good practices for deploying explainability in AI-based systems for finance.
arXiv Detail & Related papers (2023-11-13T17:56:45Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- When not to use machine learning: a perspective on potential and limitations [0.0]
We highlight the guiding principles of data-driven modeling and how these principles imbue models with almost magical predictive power.
We hope that the discussion to follow provides researchers throughout the sciences with a better understanding of when said techniques are appropriate.
arXiv Detail & Related papers (2022-10-06T04:00:00Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z)
- Edge-Cloud Polarization and Collaboration: A Comprehensive Survey [61.05059817550049]
We conduct a systematic review for both cloud and edge AI.
We are the first to set up a collaborative learning mechanism for cloud and edge modeling.
We discuss potentials and practical experiences of some on-going advanced edge AI topics.
arXiv Detail & Related papers (2021-11-11T05:58:23Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.