Exponentially Weighted l_2 Regularization Strategy in Constructing
Reinforced Second-order Fuzzy Rule-based Model
- URL: http://arxiv.org/abs/2007.01208v1
- Date: Thu, 2 Jul 2020 15:42:15 GMT
- Title: Exponentially Weighted l_2 Regularization Strategy in Constructing
Reinforced Second-order Fuzzy Rule-based Model
- Authors: Congcong Zhang, Sung-Kwun Oh, Witold Pedrycz, Zunwei Fu and Shanzhen
Lu
- Abstract summary: In the conventional Takagi-Sugeno-Kang (TSK)-type fuzzy models, constant or linear functions are usually utilized as the consequent parts of the fuzzy rules.
We introduce an exponential weight approach inspired by the weight function theory encountered in harmonic analysis.
- Score: 72.57056258027336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the conventional Takagi-Sugeno-Kang (TSK)-type fuzzy models, constant or
linear functions are usually utilized as the consequent parts of the fuzzy
rules, but they cannot effectively describe the behavior within local regions
defined by the antecedent parts. In this article, a theoretical and practical
design methodology is developed to address this problem. First, the information
granulation (Fuzzy C-Means) method is applied to capture the structure in the
data and split the input space into subspaces, as well as form the antecedent
parts. Second, quadratic polynomials (QPs) are employed as the consequent
parts. Compared with constant and linear functions, QPs can describe the
input-output behavior within the local regions (subspaces) by refining the
relationship between input and output variables. However, although QPs can
improve the approximation ability of the model, they could also deteriorate
its prediction ability (e.g., through overfitting). To
handle this issue, we introduce an exponential weight approach inspired by the
weight function theory encountered in harmonic analysis. More specifically, we
adopt the exponential functions as the targeted penalty terms, which are
equipped with l_2 regularization (i.e., exponentially weighted l_2, ewl_2) to
match the proposed reinforced second-order fuzzy rule-based model (RSFRM)
properly. The advantage of ewl_2 over ordinary l_2 lies in separately
identifying and penalizing the different types of polynomial terms during
coefficient estimation; as a result, it not only alleviates overfitting and
prevents the deterioration of generalization ability but also effectively
releases the prediction potential of the model.
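To make the regularization strategy concrete, the following is a minimal Python/NumPy sketch of fitting one rule's quadratic consequent by weighted least squares under an exponentially weighted l_2 penalty. The penalty form lam * exp(alpha * degree), the hyperparameters lam and alpha, and the function names quadratic_design_matrix and ewl2_coefficients are illustrative assumptions rather than the paper's implementation; in the RSFRM pipeline the per-sample weights would be the Fuzzy C-Means membership grades defining each rule's subspace.

import numpy as np

def quadratic_design_matrix(X):
    # Expand inputs into the terms of a quadratic polynomial consequent,
    # [1, x_i, x_i * x_j (i <= j)], and record each column's polynomial degree.
    n, d = X.shape
    cols, degs = [np.ones(n)], [0]
    for i in range(d):
        cols.append(X[:, i])
        degs.append(1)
    for i in range(d):
        for j in range(i, d):
            cols.append(X[:, i] * X[:, j])
            degs.append(2)
    return np.column_stack(cols), np.array(degs)

def ewl2_coefficients(Phi, y, degs, lam=1e-2, alpha=1.0, sample_weight=None):
    # Closed-form weighted ridge estimate: each coefficient is penalized by
    # lam * exp(alpha * degree), so quadratic terms are shrunk harder than
    # linear or constant ones (an assumed form of the ewl_2 penalty).
    n, _ = Phi.shape
    if sample_weight is None:
        sample_weight = np.ones(n)
    penalty = np.diag(lam * np.exp(alpha * degs))
    A = Phi.T @ (sample_weight[:, None] * Phi) + penalty
    b = Phi.T @ (sample_weight * y)
    return np.linalg.solve(A, b)

# Usage sketch on synthetic data; uniform weights stand in for FCM memberships.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
Phi, degs = quadratic_design_matrix(X)
coeffs = ewl2_coefficients(Phi, y, degs, lam=1e-2, alpha=1.0)
print("fitted consequent coefficients:", np.round(coeffs, 3))

Under these assumptions, a larger alpha penalizes the second-order terms more aggressively, which is the mechanism the abstract credits for curbing overfitting while leaving the constant and linear parts of each local model comparatively unconstrained.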
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Wasserstein proximal operators describe score-based generative models
and resolve memorization [12.321631823103894]
We first formulate SGMs in terms of the Wasserstein proximal operator (WPO).
We show that WPO describes the inductive bias of diffusion and score-based models.
We present an interpretable kernel-based model for the score function which dramatically improves the performance of SGMs.
arXiv Detail & Related papers (2024-02-09T03:33:13Z) - Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers [32.57938108395521]
Linear partial differential equations, a class of mechanistic models, are used to describe physical processes such as heat transfer, electromagnetism, and wave propagation.
Specialized numerical methods based on discretization are used to solve PDEs.
By ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error.
arXiv Detail & Related papers (2022-12-23T17:02:59Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - A generalization gap estimation for overparameterized models via the
Langevin functional variance [6.231304401179968]
We show that a functional variance characterizes the generalization gap even in overparameterized settings.
We propose an efficient approximation of the functional variance, the Langevin approximation of the functional variance (Langevin FV).
arXiv Detail & Related papers (2021-12-07T12:43:05Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)