An implicit function learning approach for parametric modal regression
- URL: http://arxiv.org/abs/2002.06195v2
- Date: Thu, 29 Oct 2020 17:46:18 GMT
- Title: An implicit function learning approach for parametric modal regression
- Authors: Yangchen Pan, Ehsan Imani, Martha White, Amir-massoud Farahmand
- Abstract summary: We use the implicit function theorem to develop an objective for learning a joint function over inputs and targets.
We empirically demonstrate on several synthetic problems that our method can learn multi-valued functions and produce the conditional modes.
- Score: 36.568208312835196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For multi-valued functions---such as when the conditional distribution on
targets given the inputs is multi-modal---standard regression approaches are
not always desirable because they provide the conditional mean. Modal
regression algorithms address this issue by instead finding the conditional
mode(s). Most, however, are nonparametric approaches and so can be difficult to
scale. Further, parametric approximators, like neural networks, facilitate
learning complex relationships between inputs and targets. In this work, we
propose a parametric modal regression algorithm. We use the implicit function
theorem to develop an objective for learning a joint function over inputs and
targets. We empirically demonstrate on several synthetic problems that our
method (i) can learn multi-valued functions and produce the conditional modes,
(ii) scales well to high-dimensional inputs, and (iii) can even be more
effective for certain uni-modal problems, particularly for high-frequency
functions. We demonstrate that our method is competitive in a real-world modal
regression problem and two regular regression datasets.
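To make the objective described in the abstract concrete, below is a minimal sketch of the implicit-function idea, not the authors' released code. It assumes a joint network f_theta(x, y) that is driven toward zero on observed (x, y) pairs while a penalty keeps the derivative with respect to y away from zero, so the zero level set defines the conditional modes. The names JointNet, implicit_loss, predict_modes, the parameters slope_target, lam, and tol, and the exact penalty form are all illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class JointNet(nn.Module):
    """Parametric joint function f_theta(x, y) over inputs and scalar targets."""
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        # x: (batch, x_dim), y: (batch, 1) -> f values of shape (batch,)
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def implicit_loss(model, x, y, slope_target=1.0, lam=1.0):
    """Drive f(x, y) toward zero on observed pairs, while penalizing |df/dy|
    for drifting from slope_target so the trivial solution f == 0 is avoided
    (the exact penalty form here is an assumption)."""
    y = y.clone().requires_grad_(True)
    f = model(x, y)
    dfdy = torch.autograd.grad(f.sum(), y, create_graph=True)[0].squeeze(-1)
    return (f ** 2).mean() + lam * (dfdy.abs() - slope_target).pow(2).mean()

def predict_modes(model, x_row, y_grid, tol=1e-2):
    """Approximate conditional modes for one input x_row (shape (1, x_dim)):
    keep the grid points y (shape (n, 1)) where |f(x, y)| falls below tol."""
    x_rep = x_row.expand(y_grid.shape[0], -1)
    with torch.no_grad():
        fvals = model(x_rep, y_grid).abs()
    return y_grid[fvals < tol]
```

Training would then take standard minibatch gradient steps on implicit_loss; at test time, predict_modes scans a candidate grid of target values and keeps those lying near the learned zero level set, which can return several modes per input when the target distribution is multi-modal.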
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data [17.657917523817243]
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional optimization problem.
In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches.
We derive rates of convergence in expectation that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\iota})$ for any $\iota > 0$.
arXiv Detail & Related papers (2024-05-29T19:21:55Z) - MMSR: Symbolic Regression is a Multimodal Task [12.660401635672967]
Symbolic regression was originally formulated as an optimization problem, and genetic programming (GP) and reinforcement learning algorithms were used to solve it.
To solve this problem, researchers treat the mapping from data to expressions as a translation problem.
In this paper, we propose MMSR, which achieves the most advanced results on multiple mainstream datasets.
arXiv Detail & Related papers (2024-02-28T08:29:42Z) - Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved.
arXiv Detail & Related papers (2023-06-22T10:53:10Z) - Mutual Information Learned Regressor: an Information-theoretic Viewpoint
of Training Regression Systems [10.314518385506007]
A common practice for solving regression problems is minimizing the mean squared error (MSE).
Recently, Yi et al. proposed a mutual information based supervised learning framework in which they introduced a label entropy regularization.
In this paper, we investigate regression under this mutual information based supervised learning framework.
arXiv Detail & Related papers (2022-11-23T03:43:22Z) - Communication-Efficient Distributed Quantile Regression with Optimal
Statistical Guarantees [2.064612766965483]
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions.
The difficulties are resolved through a double-smoothing approach that is applied to the local (at each data source) and global objective functions.
Despite the reliance on a delicate combination of local and global smoothing parameters, the quantile regression model is fully parametric.
arXiv Detail & Related papers (2021-10-25T17:09:59Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem in which the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)