Survey on Deep Fuzzy Systems in regression applications: a view on
interpretability
- URL: http://arxiv.org/abs/2209.04230v1
- Date: Fri, 9 Sep 2022 10:40:31 GMT
- Title: Survey on Deep Fuzzy Systems in regression applications: a view on
interpretability
- Authors: Jorge S. S. Júnior, Jérôme Mendes, Francisco Souza, Cristiano
Premebida
- Abstract summary: Regression problems have been increasingly embraced by deep learning (DL) techniques.
Assessing the interpretability of these models is an essential factor for addressing problems in sensitive areas.
This paper aims to investigate the state-of-the-art on existing methodologies that combine DL and FLS.
- Score: 1.2158275183241178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regression problems have been increasingly embraced by deep learning (DL)
techniques. The increasing number of papers recently published in this domain,
including surveys and reviews, shows that deep regression has captured the
attention of the community due to its efficiency and good accuracy in systems
with high-dimensional data. However, many DL methodologies have complex
structures that are not readily transparent to human users. Assessing the
interpretability of these models is an essential factor for addressing problems
in sensitive areas such as cyber-security systems, medicine, financial
surveillance, and industrial processes. Fuzzy logic systems (FLS) are
inherently interpretable models, well known in the literature, capable of using
nonlinear representations for complex systems through linguistic terms with
membership degrees mimicking human thought. In the context of explainable
artificial intelligence, it is necessary to consider a trade-off between
accuracy and interpretability when developing intelligent models. This paper
aims to investigate the state-of-the-art on existing methodologies that combine
DL and FLS, namely deep fuzzy systems, to address regression problems, a topic
that is currently not sufficiently explored in the literature and thus deserves
a comprehensive survey.
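The abstract's description of FLS, in which linguistic terms with membership degrees combine into a nonlinear map, can be made concrete with a minimal sketch. This is a hypothetical zero-order Takagi-Sugeno system written for illustration only; the Gaussian membership functions, the three rules, their parameters, and the `fuzzy_regression` helper are all assumptions, not taken from the surveyed paper.

```python
import math

def gaussian_mf(x, center, sigma):
    """Membership degree of x in a fuzzy set with a Gaussian shape."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def fuzzy_regression(x, rules):
    """Zero-order Takagi-Sugeno inference: each rule (center, sigma, y)
    fires with a membership weight, and the output is the weighted
    average of the rules' constant consequents."""
    weights = [gaussian_mf(x, c, s) for (c, s, _) in rules]
    total = sum(weights)
    return sum(w * y for w, (_, _, y) in zip(weights, rules)) / total

# Three linguistic terms ("low", "medium", "high") with illustrative parameters.
rules = [
    (0.0, 1.0, 10.0),   # if x is "low"    then y = 10
    (5.0, 1.0, 20.0),   # if x is "medium" then y = 20
    (10.0, 1.0, 5.0),   # if x is "high"   then y = 5
]

# Near x = 5 the "medium" rule dominates, so the output is close to 20.
print(fuzzy_regression(5.0, rules))
```

Each rule is directly readable as an if-then statement over a linguistic term, which is the interpretability property the survey contrasts with opaque DL models; deep fuzzy systems stack or combine such rule layers with learned representations.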
Related papers
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z)
- Mechanistic understanding and validation of large AI models with SemanticLens [13.712668314238082]
Unlike human-engineered systems such as aeroplanes, the inner workings of AI models remain largely opaque.
This paper introduces SemanticLens, a universal explanation method for neural networks that maps hidden knowledge encoded by components.
arXiv Detail & Related papers (2025-01-09T17:47:34Z)
- Sycophancy in Large Language Models: Causes and Mitigations [0.0]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks.
Their tendency to exhibit sycophantic behavior poses significant risks to their reliability and ethical deployment.
This paper provides a technical survey of sycophancy in LLMs, analyzing its causes, impacts, and potential mitigation strategies.
arXiv Detail & Related papers (2024-11-22T16:56:49Z)
- Neurosymbolic AI approach to Attribution in Large Language Models [5.3454230926797734]
Neurosymbolic AI (NesyAI) combines the strengths of neural networks with structured symbolic reasoning.
This paper explores how NesyAI frameworks can enhance existing attribution models, offering more reliable, interpretable, and adaptable systems.
arXiv Detail & Related papers (2024-09-30T02:20:36Z)
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and metadata.
The logical reasoning they can directly support is quite limited in learning, approximation, and prediction.
One straightforward solution is to integrate statistical analysis and machine learning.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z)
- Explaining Relation Classification Models with Semantic Extents [1.7604348079019634]
A lack of explainability is currently a complicating factor in many real-world applications.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task.
We provide an annotation tool and a software framework to determine semantic extents for humans and models.
arXiv Detail & Related papers (2023-08-04T08:17:52Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming key challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.