Survey on Deep Fuzzy Systems in regression applications: a view on interpretability
- URL: http://arxiv.org/abs/2209.04230v1
- Date: Fri, 9 Sep 2022 10:40:31 GMT
- Title: Survey on Deep Fuzzy Systems in regression applications: a view on interpretability
- Authors: Jorge S. S. Júnior, Jérôme Mendes, Francisco Souza, Cristiano Premebida
- Abstract summary: Regression problems are increasingly being addressed by deep learning (DL) techniques.
Assessing the interpretability of these models is an essential factor for addressing problems in sensitive areas.
This paper aims to investigate the state of the art of existing methodologies that combine DL and FLS.
- Score: 1.2158275183241178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regression problems are increasingly being addressed by deep
learning (DL) techniques. The growing number of papers recently published in
this domain, including surveys and reviews, shows that deep regression has
captured the attention of the community due to its efficiency and good accuracy
on systems with high-dimensional data. However, many DL methodologies have
complex structures that are not readily transparent to human users. Assessing
the interpretability of these models is an essential factor for addressing
problems in sensitive areas such as cyber-security systems, medicine, financial
surveillance, and industrial processes. Fuzzy logic systems (FLS) are
inherently interpretable models, well known in the literature, capable of
representing the nonlinear behavior of complex systems through linguistic terms
with membership degrees that mimic human reasoning. In the context of
explainable artificial intelligence (XAI), a trade-off between accuracy and
interpretability must be considered when developing intelligent models. This
paper investigates the state of the art of existing methodologies that combine
DL and FLS, namely deep fuzzy systems, to address regression problems, a topic
that is currently not sufficiently explored in the literature and thus deserves
a comprehensive survey.
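To make the idea of linguistic terms with membership degrees concrete, below is
a minimal, illustrative sketch of a first-order Takagi-Sugeno (TS) fuzzy system
for one-dimensional regression. All parameter values (rule centers, widths, and
local linear models) are hypothetical placeholders, not taken from the surveyed
paper; in deep fuzzy systems such parameters are typically learned from data.

```python
# Minimal first-order Takagi-Sugeno (TS) fuzzy regression sketch.
# Illustrative only: all parameter values below are hypothetical.
import numpy as np

def gaussian_mf(x, center, sigma):
    """Membership degree of input x in a Gaussian-shaped fuzzy set."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def ts_predict(x, centers, sigmas, slopes, intercepts):
    """Blend local linear models (rule consequents) by normalized
    rule firing strengths, i.e. the membership degrees of x."""
    w = np.array([gaussian_mf(x, c, s) for c, s in zip(centers, sigmas)])
    w = w / w.sum()                  # normalize firing strengths
    local = slopes * x + intercepts  # one linear model per rule
    return float(np.dot(w, local))

# Three linguistic terms ("low", "medium", "high") over the input range.
centers = np.array([0.0, 0.5, 1.0])
sigmas = np.array([0.2, 0.2, 0.2])
slopes = np.array([1.0, 0.2, -0.5])
intercepts = np.array([0.0, 0.4, 1.0])
print(ts_predict(0.3, centers, sigmas, slopes, intercepts))
```

Each rule reads like "IF x is low THEN y = 1.0*x + 0.0", which is what makes
the model's local behavior inspectable by a human.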
Related papers
- Neurosymbolic AI approach to Attribution in Large Language Models [5.3454230926797734]
Neurosymbolic AI (NesyAI) combines the strengths of neural networks with structured symbolic reasoning.
This paper explores how NesyAI frameworks can enhance existing attribution models, offering more reliable, interpretable, and adaptable systems.
arXiv Detail & Related papers (2024-09-30T02:20:36Z)
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
- Machine Learning Robustness: A Primer [12.426425119438846]
The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions.
The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines.
The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation.
arXiv Detail & Related papers (2024-04-01T03:49:42Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval (IR) has evolved from term-based methods to integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z)
- Explaining Relation Classification Models with Semantic Extents [1.7604348079019634]
A lack of explainability is currently a complicating factor in many real-world applications.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task.
We provide an annotation tool and a software framework to determine semantic extents for humans and models.
arXiv Detail & Related papers (2023-08-04T08:17:52Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a key challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
- A new interpretable unsupervised anomaly detection method based on residual explanation [47.187609203210705]
We present RXP, a new interpretability method that addresses the limitations of autoencoder-based anomaly detection (AE-based AD) in large-scale systems.
It stands out for its implementation simplicity, low computational cost and deterministic behavior.
In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP.
arXiv Detail & Related papers (2021-03-14T15:35:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.