Data-driven Preference Learning Methods for Sorting Problems with
Multiple Temporal Criteria
- URL: http://arxiv.org/abs/2309.12620v2
- Date: Thu, 16 Nov 2023 09:21:20 GMT
- Title: Data-driven Preference Learning Methods for Sorting Problems with
Multiple Temporal Criteria
- Authors: Yijun Li, Mengzhuo Guo, Miłosz Kadziński, Qingpeng Zhang
- Abstract summary: This study presents novel preference learning approaches to multiple criteria sorting problems in the presence of temporal criteria.
To enhance scalability and accommodate learnable time discount factors, we introduce a novel monotonic Recurrent Neural Network (mRNN).
The proposed mRNN describes preference dynamics by depicting marginal value functions and personalized time discount factors over time.
- Score: 17.673512636899076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of predictive methodologies has catalyzed the emergence of
data-driven decision support across various domains. However, developing models
capable of effectively handling input time series data presents an enduring
challenge. This study presents novel preference learning approaches to multiple
criteria sorting problems in the presence of temporal criteria. We first
formulate a convex quadratic programming model characterized by fixed time
discount factors, operating within a regularization framework. To enhance
scalability and accommodate learnable time discount factors, we introduce a
novel monotonic Recurrent Neural Network (mRNN). It is designed to capture the
evolving dynamics of preferences over time while upholding critical properties
inherent to MCS problems, including criteria monotonicity, preference
independence, and the natural ordering of classes. The proposed mRNN can
describe the preference dynamics by depicting marginal value functions and
personalized time discount factors over time, effectively amalgamating
the interpretability of traditional MCS methods with the predictive potential
offered by deep preference learning models. Comprehensive assessments of the
proposed models are conducted, encompassing synthetic data scenarios and a
real-case study centered on classifying valuable users within a mobile gaming
app based on their historical in-app behavioral sequences. Empirical findings
underscore the notable performance improvements achieved by the proposed models
when compared to a spectrum of baseline methods, spanning machine learning,
deep learning, and conventional multiple criteria sorting approaches.
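To make the fixed-discount formulation concrete, the sketch below shows one plausible shape of such a regularized convex quadratic program. It assumes UTADIS-style class thresholds b_k, additive piecewise-linear marginal value functions u_j with non-negative increments (so criteria monotonicity holds), hinge-type slacks, and a fixed discount factor; the symbols and constraint form are illustrative assumptions, not necessarily the authors' exact model.

```latex
% Illustrative sketch only: a UTADIS-style threshold model with a fixed discount factor.
% U(a): discounted, additive value of alternative a observed over T periods and m criteria.
U(a) \;=\; \sum_{t=1}^{T} \delta^{\,T-t} \sum_{j=1}^{m} u_j\!\big(x_{a,j,t}\big),
\qquad u_j \ \text{non-decreasing for every criterion } j .

% Regularized convex QP: w collects the non-negative increments parameterizing the
% piecewise-linear u_j; b_1 \le \dots \le b_{q-1} are ordered class thresholds;
% alternative a is assigned to class C_{k(a)}.
\min_{w \ge 0,\; b,\; \xi \ge 0}\;
\tfrac{1}{2}\,\lVert w \rVert_2^2 \;+\; C \sum_{a} \xi_a
\quad\text{s.t.}\quad
b_{k(a)-1} + \varepsilon - \xi_a \;\le\; U(a) \;\le\; b_{k(a)} - \varepsilon + \xi_a
\quad \text{for all } a .
```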
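Similarly, the following Python sketch illustrates how a recurrent scorer can enforce the three MCS properties named in the abstract while learning a personal discount factor: per-criterion monotone networks give non-decreasing marginal values, additive aggregation reflects preference independence, and ordered thresholds preserve the natural ordering of classes. Class names, layer sizes, and the exact recurrence are hypothetical and are not the published mRNN architecture.

```python
# Minimal, hypothetical sketch of a monotonicity-preserving recurrent scorer with a
# learnable time discount factor; not the authors' exact mRNN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneMarginal(nn.Module):
    """Non-decreasing marginal value function for a single criterion."""
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, x):                              # x: (batch, 1)
        # Softplus keeps the effective weights non-negative; combined with the
        # increasing tanh activation, the whole map is non-decreasing in x.
        h = torch.tanh(x @ F.softplus(self.w1).T + self.b1)
        return h @ F.softplus(self.w2).T               # (batch, 1)

class MonotoneRecurrentScorer(nn.Module):
    """Additive (preference-independent) aggregation with a learnable discount."""
    def __init__(self, n_criteria, n_classes):
        super().__init__()
        self.marginals = nn.ModuleList([MonotoneMarginal() for _ in range(n_criteria)])
        self.discount_logit = nn.Parameter(torch.zeros(1))   # sigmoid -> delta in (0, 1)
        self.base = nn.Parameter(torch.zeros(1))              # first class threshold
        self.gaps = nn.Parameter(torch.zeros(max(n_classes - 2, 0)))

    def thresholds(self):
        # Cumulative positive gaps keep b_1 < b_2 < ... (natural ordering of classes).
        return torch.cat([self.base, self.base + torch.cumsum(F.softplus(self.gaps), dim=0)])

    def forward(self, x):                              # x: (batch, T, n_criteria)
        delta = torch.sigmoid(self.discount_logit)
        score = x.new_zeros(x.size(0), 1)
        for t in range(x.size(1)):
            # Additive aggregation across criteria at time t (preference independence).
            v_t = sum(m(x[:, t, j:j + 1]) for j, m in enumerate(self.marginals))
            # Recurrent update: discount the accumulated value, then add the new one.
            score = delta * score + v_t
        return score - self.thresholds()               # ordinal logits: (batch, n_classes - 1)
```

In this sketch the sigmoid keeps the learnable discount in (0, 1) and the softplus-transformed weights keep every marginal value function non-decreasing; training against the ordinal logits (e.g., with a per-threshold binary or cumulative-link loss) would preserve the ordered classes. For instance, `MonotoneRecurrentScorer(n_criteria=4, n_classes=3)` applied to a `(8, 12, 4)` behavioral sequence tensor returns an `(8, 2)` tensor of threshold logits.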
Related papers
- Recurrent Neural Goodness-of-Fit Test for Time Series [8.22915954499148]
Time series data are crucial across diverse domains such as finance and healthcare.
Traditional evaluation metrics fall short due to the temporal dependencies and potential high dimensionality of the features.
We propose the REcurrent NeurAL (RENAL) Goodness-of-Fit test, a novel and statistically rigorous framework for evaluating generative time series models.
arXiv Detail & Related papers (2024-10-17T19:32:25Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Classification of High-dimensional Time Series in Spectral Domain using Explainable Features [8.656881800897661]
We propose a model-based approach for classifying high-dimensional stationary time series.
Our approach emphasizes the interpretability of the model parameters, addressing critical needs in fields like neuroscience.
arXiv Detail & Related papers (2024-08-15T19:10:12Z) - Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods score and select data independently in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
arXiv Detail & Related papers (2024-06-07T12:12:20Z) - FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series [0.3277163122167434]
This paper proposes a novel modular neural network model for time series prediction that is interpretable by construction.
A recurrent neural network learns the temporal dependencies in the data while an attention-based feature selection component selects the most relevant features.
A modular deep network is trained from the selected features independently to show the users how features influence outcomes, making the model interpretable.
arXiv Detail & Related papers (2023-11-28T14:51:06Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations [15.797295258800638]
We introduce a novel modeling approach for time series imputation and forecasting, tailored to address the challenges often encountered in real-world data.
Our method relies on a continuous-time-dependent model of the series' evolution dynamics.
A modulation mechanism, driven by a meta-learning algorithm, allows adaptation to unseen samples and extrapolation beyond observed time-windows.
arXiv Detail & Related papers (2023-06-09T13:20:04Z) - Modeling Time-Series and Spatial Data for Recommendations and Other
Applications [1.713291434132985]
We address the problems that may arise due to the poor quality of CTES data being fed into a recommender system.
To improve the quality of the CTES data, we address a fundamental problem of overcoming missing events in temporal sequences.
We extend their abilities to design solutions for large-scale CTES retrieval and human activity prediction.
arXiv Detail & Related papers (2022-12-25T09:34:15Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural
Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)