Mutual Information Learned Regressor: an Information-theoretic Viewpoint
of Training Regression Systems
- URL: http://arxiv.org/abs/2211.12685v1
- Date: Wed, 23 Nov 2022 03:43:22 GMT
- Title: Mutual Information Learned Regressor: an Information-theoretic Viewpoint
of Training Regression Systems
- Authors: Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao, Yusen He,
Yaohua Wang
- Abstract summary: An existing common practice for solving regression problems is the mean square error (MSE) minimization approach.
Recently, Yi et al. proposed a mutual information based supervised learning framework in which they introduced a label entropy regularization.
In this paper, we investigate regression under the mutual information based supervised learning framework.
- Score: 10.314518385506007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As one of the central tasks in machine learning, regression finds
numerous applications in different fields. A common practice for solving
regression problems is the mean square error (MSE) minimization approach or its
regularized variants, which require prior knowledge about the models. Recently,
Yi et al. proposed a mutual information based supervised learning framework
in which they introduced a label entropy regularization that does not require any
prior knowledge. When applied to classification tasks and solved via a
stochastic gradient descent (SGD) optimization algorithm, their approach
achieved significant improvement over the commonly used cross entropy loss and
its variants. However, they did not provide a theoretical convergence analysis
of the SGD algorithm for the proposed formulation. Besides, applying the
framework to regression tasks is nontrivial due to the potentially infinite
support set of the label. In this paper, we investigate regression under
the mutual information based supervised learning framework. We first argue that
the MSE minimization approach is equivalent to a conditional entropy learning
problem, and then propose a mutual information learning formulation for solving
regression problems by using a reparameterization technique. For the proposed
formulation, we give the convergence analysis of the SGD algorithm for solving
it in practice. Finally, we consider a multi-output regression data model where
we derive the generalization performance lower bound in terms of the mutual
information associated with the underlying data distribution. The result shows
that high dimensionality can be a blessing instead of a curse, with the transition
controlled by a threshold. We hope our work will serve as a good starting point
for further research on mutual information based regression.
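To make the formulation concrete, below is a minimal, hypothetical sketch of what a mutual-information-style regression objective could look like under a Gaussian reparameterization: the conditional-entropy term is the Gaussian negative log-likelihood of the label given the input (which reduces to MSE when the variance is held fixed), and a label-entropy surrogate acts as the regularizer, mimicking $I(X;Y) = H(Y) - H(Y|X)$. The module, the batch-based entropy estimate, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a mutual-information-style regression loss under a
# Gaussian reparameterization (illustrative; not the paper's formulation).
import math
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)     # predicts E[y | x]
        self.logvar_head = nn.Linear(hidden, 1)   # predicts log Var[y | x]

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def mi_regression_loss(mean, logvar, y, lam=0.1):
    # Conditional-entropy surrogate: Gaussian negative log-likelihood of y | x.
    nll = 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()
    # Crude label-entropy surrogate: differential entropy of a Gaussian fitted
    # to the batch of predicted means (an illustrative assumption).
    label_entropy = 0.5 * torch.log(mean.var() + 1e-6) \
        + 0.5 * math.log(2 * math.pi * math.e)
    # Minimize H(Y|X) while rewarding a spread-out label marginal,
    # mimicking maximization of I(X; Y) = H(Y) - H(Y|X).
    return nll - lam * label_entropy

# Usage with plain SGD, the optimizer whose convergence the abstract discusses:
model = GaussianRegressor(in_dim=10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mean, logvar = model(x)
loss = mi_regression_loss(mean, logvar, y)
loss.backward()
opt.step()
```

With the log-variance held fixed and the regularizer weight set to zero, the negative log-likelihood term above reduces (up to constants) to ordinary MSE minimization, which is one way to read the abstract's claim that MSE minimization amounts to a conditional entropy learning problem.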
Related papers
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- Federated Empirical Risk Minimization via Second-Order Method [18.548661105227488]
We present an interior point method (IPM) to solve a general empirical risk minimization problem under the federated learning setting.
We show that the communication complexity of each iteration of our IPM is $\tilde{O}(d^{3/2})$, where $d$ is the dimension (i.e., the number of features) of the dataset.
arXiv Detail & Related papers (2023-05-27T14:23:14Z)
- A Bayesian Robust Regression Method for Corrupted Data Reconstruction [5.298637115178182]
We develop an effective robust regression method that can resist adaptive adversarial attacks.
First, we propose the novel TRIP (hard Thresholding approach to Robust regression with sImple Prior) algorithm.
We then use the idea of Bayesian reweighting to construct the more robust BRHT (robust Bayesian Reweighting regression via Hard Thresholding) algorithm.
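As a rough illustration of the hard-thresholding idea, a generic scheme (not the TRIP or BRHT algorithms themselves) alternately refits the regressor on the points currently deemed clean and re-flags the examples with the largest residuals as corrupted:

```python
import numpy as np

def hard_threshold_regression(X, y, n_corrupt, n_iters=20):
    # Generic hard-thresholding loop for regression with corrupted responses
    # (illustrative only): alternate between least squares on "clean" points
    # and flagging the n_corrupt largest residuals as corrupted.
    n = len(y)
    clean = np.ones(n, dtype=bool)
    for _ in range(n_iters):
        w, *_ = np.linalg.lstsq(X[clean], y[clean], rcond=None)
        residuals = np.abs(y - X @ w)
        threshold = np.partition(residuals, n - n_corrupt - 1)[n - n_corrupt - 1]
        clean = residuals <= threshold
    return w, clean
```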
arXiv Detail & Related papers (2022-12-24T17:25:53Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
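A minimal sketch of what a pairwise loss and an online gradient descent update might look like, assuming an AUC-style hinge loss and a one-example buffer (both are illustrative choices, not the methods proposed in that paper):

```python
import numpy as np

def pairwise_hinge_grad(w, xi, yi, xj, yj):
    # Pairwise loss example: an AUC-style hinge that asks the higher-labeled
    # example to score at least a unit margin above the lower-labeled one.
    if yi == yj:
        return np.zeros_like(w)          # ties contribute no gradient here
    pos, neg = (xi, xj) if yi > yj else (xj, xi)
    if 1.0 - w @ (pos - neg) <= 0:
        return np.zeros_like(w)          # margin satisfied, zero subgradient
    return -(pos - neg)                  # subgradient of the hinge loss

def online_gradient_descent(stream, dim, lr=0.1):
    # Simple OGD for streaming pairwise learning: pair each incoming example
    # with the previous one (one common buffering trick).
    w, prev = np.zeros(dim), None
    for x, y in stream:
        if prev is not None:
            w = w - lr * pairwise_hinge_grad(w, x, y, *prev)
        prev = (x, y)
    return w
```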
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Unrolling SGD: Understanding Factors Influencing Machine Unlearning [17.6607904333012]
Machine unlearning is the process through which a deployed machine learning model forgets about one of its training data points.
We first taxonomize approaches and metrics of approximate unlearning.
We identify verification error, i.e., the L2 difference between the weights of an approximately unlearned model and a naively retrained model.
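A small sketch of how that verification error could be computed for two models with identical architectures (an illustrative helper, not code from the paper):

```python
import torch

def verification_error(unlearned_model, retrained_model):
    # Verification error as described above: the L2 distance between the
    # parameter vectors of the approximately unlearned model and a model
    # naively retrained from scratch without the deleted point.
    diffs = [
        (p - q).flatten()
        for p, q in zip(unlearned_model.parameters(), retrained_model.parameters())
    ]
    return torch.linalg.norm(torch.cat(diffs))
```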
arXiv Detail & Related papers (2021-09-27T23:46:59Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate regression-free model updates as a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
- Robust priors for regularized regression [12.945710636153537]
Penalized regression approaches like ridge regression shrink weights toward zero, but zero is usually not a sensible prior.
Inspired by simple and robust decisions humans use, we constructed non-zero priors for penalized regression models.
Models with robust priors had excellent worst-case performance.
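For intuition, a ridge-style penalty that shrinks toward a non-zero prior vector w0 has the closed-form solution sketched below (an illustrative formulation; the paper's priors are constructed from human decision heuristics as the summary notes):

```python
import numpy as np

def ridge_with_prior(X, y, w0, lam=1.0):
    # Ridge regression shrunk toward a non-zero prior w0 rather than toward zero:
    #   argmin_w ||y - X w||^2 + lam * ||w - w0||^2
    # Closed form: (X^T X + lam I) w = X^T y + lam w0.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)
```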
arXiv Detail & Related papers (2020-10-06T10:43:14Z)
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
arXiv Detail & Related papers (2020-09-07T16:47:30Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis technique for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
- Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework for learning variational formulations for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both expressed as neural networks, with the latter relying on automatic differentiation.
This leads to a data-driven discovery of variational models.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)