Deep Ordinal Regression using Optimal Transport Loss and Unimodal Output Probabilities
- URL: http://arxiv.org/abs/2011.07607v2
- Date: Thu, 18 Nov 2021 14:10:57 GMT
- Title: Deep Ordinal Regression using Optimal Transport Loss and Unimodal Output Probabilities
- Authors: Uri Shaham, Igal Zaidman, Jonathan Svirsky
- Abstract summary: We propose a framework for deep ordinal regression based on unimodal output distribution and optimal transport loss.
We empirically analyze the different components of our proposed approach and demonstrate their contribution to the performance of the model.
Experimental results on eight real-world datasets demonstrate that our proposed approach consistently performs on par with, and often better than, several recently proposed deep learning approaches.
- Score: 3.169089186688223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is often desired that ordinal regression models yield unimodal predictions. However, in many recent works this characteristic is either absent or implemented using soft targets, which do not guarantee unimodal outputs at inference. In addition, we argue that the standard maximum likelihood objective is not suitable for ordinal regression problems, and that optimal transport is better suited for this task, as it naturally captures the order of the classes. In this work, we propose a framework for deep ordinal regression based on a unimodal output distribution and an optimal transport loss. Inspired by the well-known Proportional Odds model, we modify its design with an architectural mechanism that guarantees that the model's output distribution is unimodal. We empirically analyze the different components of our proposed approach and demonstrate their contribution to the performance of the model. Experimental results on eight real-world datasets demonstrate that our approach consistently performs on par with, and often better than, several recently proposed deep learning approaches for deep ordinal regression with unimodal output probabilities, while guaranteeing output unimodality. In addition, we demonstrate that the proposed approach is less overconfident than current baselines.
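The two ingredients named in the abstract invite a short illustration. The sketch below is a minimal PyTorch rendering under assumed design choices, not the paper's actual architecture: `UnimodalHead` is just one of many heads that guarantee a unimodal output distribution (here a Laplace-shaped bump over the classes; the paper's mechanism is instead derived from the Proportional Odds model), and `ot_loss` uses the closed-form 1-D Wasserstein-1 distance, which for ordered classes with unit ground cost equals the L1 distance between the two CDFs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnimodalHead(nn.Module):
    """Output head whose class distribution is unimodal by construction.

    Illustrative stand-in, not the paper's exact mechanism: the network
    predicts a continuous mode location t in [0, K-1] and a temperature tau;
    p_k ∝ exp(-|k - t| / tau) rises up to t and falls after it, so the
    softmax output is guaranteed unimodal (up to a tie at the midpoint).
    """

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        self.fc = nn.Linear(in_features, 2)  # predicts (raw mode, raw temperature)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t_raw, tau_raw = self.fc(x).unbind(dim=-1)
        t = torch.sigmoid(t_raw) * (self.num_classes - 1)  # mode in [0, K-1]
        tau = F.softplus(tau_raw) + 1e-3                   # positive temperature
        ks = torch.arange(self.num_classes, device=x.device, dtype=x.dtype)
        logits = -(ks.unsqueeze(0) - t.unsqueeze(-1)).abs() / tau.unsqueeze(-1)
        return F.softmax(logits, dim=-1)                   # (batch, K), unimodal


def ot_loss(probs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """1-D optimal transport (Wasserstein-1) loss with ground cost |i - j|.

    For distributions on an ordered 1-D support, W1 reduces in closed form
    to the L1 distance between CDFs, so no transport solver is needed.
    """
    onehot = F.one_hot(targets, probs.shape[-1]).to(probs.dtype)
    return (probs.cumsum(-1) - onehot.cumsum(-1)).abs().sum(-1).mean()
```

With a feature extractor `phi`, training would minimize `ot_loss(head(phi(x)), y)`. Because the ground cost grows with the distance between class indices, predicting class 2 for a true class 3 is penalized far less than predicting class 7, a distinction plain cross-entropy cannot express.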
Related papers
- Self-Evolutionary Large Language Models through Uncertainty-Enhanced Preference Optimization [9.618391485742968]
Iterative preference optimization has recently become one of the de-facto training paradigms for large language models (LLMs).
We present an Uncertainty-enhanced Preference Optimization framework to make the LLM self-evolve with reliable feedback.
Our framework substantially alleviates the noisy-feedback problem and improves the performance of iterative preference optimization.
arXiv Detail & Related papers (2024-09-17T14:05:58Z)
- Regression-aware Inference with LLMs [52.764328080398805]
We show that the standard decoding-based inference strategy can be sub-optimal for common regression and scoring evaluation metrics.
We propose alternate inference strategies that estimate the Bayes-optimal solution for regression and scoring metrics in closed form from sampled responses, as in the sketch below.
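The blurb's closed-form idea can be made concrete with standard decision theory: under squared error the Bayes-optimal point estimate is the mean of the sampled scores, and under absolute error it is the median. The function and argument names below are hypothetical, not the paper's API.

```python
import statistics


def regression_aware_estimate(scores, metric="squared_error"):
    """Return a Bayes-optimal point estimate, in closed form, from numeric
    scores parsed out of several sampled LLM responses. Under squared error
    the optimum is the sample mean; under absolute error, the sample median.
    (Hedged sketch: names are illustrative stand-ins.)"""
    if metric == "squared_error":
        return statistics.fmean(scores)   # posterior-mean estimate
    if metric == "absolute_error":
        return statistics.median(scores)  # posterior-median estimate
    raise ValueError(f"unsupported metric: {metric!r}")


# Five sampled ratings for the same prompt; the mean (7.7) is returned
# rather than the most frequent single answer (7.0).
print(regression_aware_estimate([7.0, 8.0, 7.5, 9.0, 7.0]))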
arXiv Detail & Related papers (2024-03-07T03:24:34Z)
- Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involves optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
arXiv Detail & Related papers (2024-01-28T21:19:15Z)
- Model-based Offline Policy Optimization with Adversarial Network [0.36868085124383626]
We propose a novel Model-based Offline policy optimization framework with Adversarial Network (MOAN).
The key idea is to use adversarial learning to build a transition model with better generalization.
Our approach outperforms existing state-of-the-art baselines on widely studied offline RL benchmarks.
arXiv Detail & Related papers (2023-09-05T11:49:33Z) - Unifying Distributionally Robust Optimization via Optimal Transport
Theory [13.19058156672392]
This paper introduces a novel approach that unifies distributionally robust optimization methods into a single framework based on optimal transport.
Our proposed approach makes it possible for optimal adversarial distributions to simultaneously perturb likelihood and outcomes.
The paper investigates several duality results and presents tractable reformulations that enhance the practical applicability of this unified framework.
arXiv Detail & Related papers (2023-08-10T08:17:55Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood
Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Trustworthy Multimodal Regression with Mixture of Normal-Inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled way for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data caused by inaccurate model estimation, in order to improve policy optimization.
We propose AMPO, a novel model-based reinforcement learning framework that introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward; a minimal estimator for such an objective is sketched below.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
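The blurb does not name the gradient estimator, so the sketch below uses the standard score-function (REINFORCE) estimator as one plausible instantiation of "directly optimize a reinforcement learning objective". `policy_logits` and `reward_fn` are hypothetical stand-ins for the paper's conditional generative model and user-defined property score.

```python
import torch


def reinforce_loss(policy_logits, reward_fn, num_samples=64):
    """Score-function (REINFORCE) estimator for maximizing an expected reward
    over discrete outputs: grad E[R] = E[(R - b) * grad log p]. Minimizing
    the returned loss performs gradient ascent on the expected reward."""
    dist = torch.distributions.Categorical(logits=policy_logits)
    samples = dist.sample((num_samples,))  # sampled discrete structures
    rewards = reward_fn(samples)           # user-defined property scores
    baseline = rewards.mean()              # variance-reduction baseline
    log_probs = dist.log_prob(samples)
    return -((rewards - baseline).detach() * log_probs).mean()


# Toy usage: logits over a 10-symbol vocabulary; the reward favors symbol 3.
logits = torch.zeros(10, requires_grad=True)
loss = reinforce_loss(logits, lambda s: (s == 3).float())
loss.backward()  # gradient pushes probability mass toward symbol 3
```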
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study Bayesian quadrature optimization (BQO) under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set (sketched below).
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO), for this purpose.
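For concreteness, here is the standard (non-robust) Monte Carlo surrogate mentioned above, as a hedged sketch; `f`, `candidates`, and `w_samples` are hypothetical stand-ins, and the robust DRBQO algorithm itself is not reproduced here.

```python
import numpy as np


def monte_carlo_bqo(f, candidates, w_samples):
    """Pick the candidate x maximizing the Monte Carlo estimate
    (1/N) * sum_i f(x, w_i) over a fixed set of i.i.d. samples w_i."""
    estimates = [np.mean([f(x, w) for w in w_samples]) for x in candidates]
    return candidates[int(np.argmax(estimates))]


# Toy usage: f(x, w) = -(x - w)^2 with samples of w clustered near 0.3,
# so the candidate closest to 0.3 wins.
print(monte_carlo_bqo(lambda x, w: -(x - w) ** 2,
                      [0.0, 0.25, 0.5, 0.75],
                      [0.2, 0.3, 0.4]))  # -> 0.25
```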
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.