A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods
- URL: http://arxiv.org/abs/2311.07513v1
- Date: Mon, 13 Nov 2023 17:56:45 GMT
- Title: A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods
- Authors: Branka Hadji Misheva and Joerg Osterrieder
- Abstract summary: Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks.
These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance.
This paper explores good practices for deploying explainability in AI-based systems for finance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning and deep learning have become increasingly prevalent in
financial prediction and forecasting tasks, offering advantages such as
enhanced customer experience, democratising financial services, improving
consumer protection, and enhancing risk management. However, these complex
models often lack transparency and interpretability, making them challenging to
use in sensitive domains like finance. This has led to the rise of eXplainable
Artificial Intelligence (XAI) methods aimed at creating models that are easily
understood by humans. Classical XAI methods, such as LIME and SHAP, have been
developed to provide explanations for complex models. While these methods have
made significant contributions, they also have limitations, including
computational complexity, inherent model bias, sensitivity to data sampling,
and challenges in dealing with feature dependence. In this context, this paper
explores good practices for deploying explainability in AI-based systems for
finance, emphasising the importance of data quality, audience-specific methods,
consideration of data properties, and the stability of explanations. These
practices aim to address the unique challenges and requirements of the
financial industry and guide the development of effective XAI tools.
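The abstract's point about SHAP's sensitivity to data sampling can be made concrete. The sketch below is our illustration, not the paper's experiment: it fits a scikit-learn gradient-boosting model on synthetic return-like features and runs SHAP's KernelExplainer twice with different background samples; the printed attribution vectors typically differ between runs, which is exactly the instability flagged above.
```python
# Minimal sketch (assumption: synthetic data and model, not the paper's setup).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                      # five lagged-return features
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = GradientBoostingRegressor().fit(X, y)

# KernelExplainer approximates Shapley values by sampling feature coalitions,
# so the result depends on the background sample and the nsamples budget.
for seed in (1, 2):
    background = shap.sample(X, 50, random_state=seed)
    explainer = shap.KernelExplainer(model.predict, background)
    phi = explainer.shap_values(X[:1], nsamples=100)
    print(f"background seed {seed}:", np.round(phi, 3))
```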
Related papers
- Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161] (arXiv, 2024-08-30)
Explainable Artificial Intelligence (XAI) addresses the opacity of complex models by providing explanations for how they make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
- A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting [1.2937020918620652] (arXiv, 2024-07-22)
The field of eXplainable AI (XAI) aims to make AI models more understandable.
This paper categorizes XAI approaches that predict financial time series.
It provides a comprehensive view of XAI's current role in finance.
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726] (arXiv, 2024-03-23)
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
- A Comprehensive Review on Financial Explainable AI [29.229196780505532] (arXiv, 2023-09-21)
We provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance.
We categorize the collection of explainable AI methods according to their corresponding characteristics.
We review the concerns and challenges of adopting explainable AI methods, together with future directions we deem appropriate and important.
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898] (arXiv, 2023-04-15)
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
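A hedged sketch of the headline-scoring idea, not the authors' code: ask an LLM whether a headline is good, bad, or unknown news for a ticker and map the reply to a signed score. The model name, the prompt wording, and the headline_score helper are illustrative assumptions.
```python
# Sketch only: prompt text and model choice are assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def headline_score(headline: str, ticker: str) -> int:
    """Map an LLM's YES/NO/UNKNOWN verdict to +1/-1/0."""
    prompt = (
        f"Is this headline good news, bad news, or unknown for the stock "
        f"price of {ticker}? Answer YES, NO, or UNKNOWN.\nHeadline: {headline}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not the one used in the paper
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.strip().upper()
    return {"YES": 1, "NO": -1}.get(reply, 0)

print(headline_score("Company beats quarterly earnings expectations", "XYZ"))
```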
- A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection [0.0] (arXiv, 2022-12-06)
Trust in a technology depends on understanding the rationale behind its predictions. For cross-sectional data, classical XAI approaches can yield valuable insights into a model's inner workings.
We propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.
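The paper's own technique is not reproduced here; as a stand-in, the sketch below shows the simplest attribution that respects time ordering: occlude one time step at a time and measure the change in the model's output. The time_step_relevance helper and the zero baseline are assumptions for illustration.
```python
# Generic time-ordered occlusion, not the method proposed in the paper.
import numpy as np

def time_step_relevance(predict, x, baseline=0.0):
    """predict: callable on a batch of (T, F) arrays; x: one (T, F) series."""
    ref = predict(x[None])[0]
    scores = np.empty(len(x))
    for t in range(len(x)):
        x_masked = x.copy()
        x_masked[t, :] = baseline          # occlude a single time step
        scores[t] = abs(ref - predict(x_masked[None])[0])
    return scores                          # higher = more influential step
```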
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417] (arXiv, 2022-03-15)
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI in practice to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
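One pattern covered by this line of work is using explanations to prune uninformative inputs and retrain. The sketch below uses scikit-learn's permutation importance as a simple stand-in for the relevance methods discussed in the paper; the threshold value and the prune_and_retrain helper are illustrative assumptions.
```python
# Attribution-guided feature pruning; permutation importance is a stand-in.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def prune_and_retrain(X, y, threshold=0.001):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, random_state=0)
    keep = imp.importances_mean > threshold       # keep informative features
    pruned = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
    return pruned, keep
```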
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565] (arXiv, 2022-03-07)
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze the errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
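The authors' Bayesian bilinear network is not reproduced here; as a minimal stand-in, the Monte Carlo dropout sketch below shows how a predictive distribution yields both a point forecast and an uncertainty estimate. The TinyForecaster architecture and the sample count are assumptions.
```python
# MC-dropout stand-in for a Bayesian predictive distribution (PyTorch).
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, n_features: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def predictive_distribution(model, x, n_samples=100):
    model.train()                          # keep dropout active at inference
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(0), draws.std(0)     # forecast mean and uncertainty
```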
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825] (arXiv, 2021-06-14)
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
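CEILS itself intervenes in the latent space of a causal generative model; the sketch below shows only the generic counterfactual recipe it builds on, nudging an input by gradient descent until a classifier prefers the desired outcome. The counterfactual helper and its hyperparameters are illustrative assumptions.
```python
# Generic gradient-based counterfactual search, not CEILS itself (PyTorch).
import torch

def counterfactual(model, x, target=1, lr=0.05, steps=200, reg=0.1):
    """x: input of shape (1, F); model: returns class logits of shape (1, C)."""
    x_cf = x.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # raise the target-class logit while staying close to the original
        loss = -model(x_cf)[0, target] + reg * (x_cf - x).pow(2).sum()
        loss.backward()
        opt.step()
    return x_cf.detach()
```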
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265] (arXiv, 2021-04-09)
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
This list is automatically generated from the titles and abstracts of the papers on this site.