Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria
- URL: http://arxiv.org/abs/2409.01612v1
- Date: Tue, 3 Sep 2024 05:29:05 GMT
- Title: Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria
- Authors: Zhen Zhang, Zhuolin Li, Wenyu Yu
- Abstract summary: This paper proposes some approaches to learning a representative model for MCS problems with non-monotonic criteria.
We first define some transformation functions to map the marginal values and category thresholds into a UTA-like functional space.
We then construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify the inconsistency of the decision maker's assignment example preference information.
- Score: 5.374419989598479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deriving a representative model using value function-based methods from the perspective of preference disaggregation has emerged as a prominent and growing topic in multi-criteria sorting (MCS) problems. A noteworthy observation is that many existing approaches to learning a representative model for MCS problems traditionally assume the monotonicity of criteria, which may not always align with the complexities found in real-world MCS scenarios. Consequently, this paper proposes some approaches to learning a representative model for MCS problems with non-monotonic criteria through the integration of the threshold-based value-driven sorting procedure. To do so, we first define some transformation functions to map the marginal values and category thresholds into a UTA-like functional space. Subsequently, we construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify the inconsistency of the decision maker's assignment example preference information. By simultaneously considering the complexity and discriminative power of the models, two distinct lexicographic optimization-based approaches are developed to derive a representative model for MCS problems with non-monotonic criteria. Finally, we offer an illustrative example and conduct comprehensive simulation experiments to demonstrate the feasibility and validity of the proposed approaches.
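The sorting procedure the paper builds on can be summarized as follows: each alternative receives a global value as the sum of per-criterion marginal values, and is assigned to the category whose thresholds bracket that value. Below is a minimal Python sketch of this threshold-based value-driven assignment rule under the assumption of UTA-like piecewise-linear marginal value functions; the breakpoints, marginal values, thresholds, and function names are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of threshold-based value-driven sorting
# with UTA-like piecewise-linear marginal value functions. All numbers below are
# illustrative assumptions; with non-monotonic criteria, a marginal value function
# need not increase with the criterion level.
import numpy as np

def marginal_value(x, breakpoints, values):
    """Piecewise-linear marginal value u_j(x), interpolated between breakpoints."""
    return float(np.interp(x, breakpoints, values))

def global_value(alternative, criteria):
    """Additive global value U(a) = sum_j u_j(g_j(a))."""
    return sum(
        marginal_value(alternative[j], bps, vals)
        for j, (bps, vals) in enumerate(criteria)
    )

def assign_category(alternative, criteria, thresholds):
    """Assign to category h such that b_{h-1} <= U(a) < b_h (0 = worst)."""
    u = global_value(alternative, criteria)
    for h, b in enumerate(thresholds):
        if u < b:
            return h
    return len(thresholds)  # global value exceeds every threshold: best category

# Two criteria on [0, 1]; the second is non-monotonic (peaks at mid-range).
criteria = [
    (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.2, 0.4])),  # monotonic
    (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.5, 0.1])),  # non-monotonic
]
thresholds = [0.3, 0.6]  # category thresholds separating three ordered categories

print(assign_category([0.8, 0.5], criteria, thresholds))  # -> 2 (best category)
```

The paper's contribution lies in learning the marginal value functions and category thresholds from assignment examples via lexicographic optimization; the sketch above only illustrates how a learned model would be applied to sort alternatives.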
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - LLM-enhanced Reranking in Recommender Systems [49.969932092129305]
Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms.
We introduce a comprehensive reranking framework, designed to seamlessly integrate various reranking criteria.
A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs.
arXiv Detail & Related papers (2024-06-18T09:29:18Z) - Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - Data-driven Preference Learning Methods for Sorting Problems with Multiple Temporal Criteria [17.673512636899076]
This study presents novel preference learning approaches to multiple criteria sorting problems in the presence of temporal criteria.
To enhance scalability and accommodate learnable time discount factors, we introduce a novel monotonic Recurrent Neural Network (mRNN).
The proposed mRNN can describe the preference dynamics by depicting marginal value functions and personalized time discount factors along with time.
arXiv Detail & Related papers (2023-09-22T05:08:52Z) - Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z) - MCCE: Monte Carlo sampling of realistic counterfactual explanations [2.156170153103442]
MCCE is a novel on-manifold, actionable and valid counterfactual explanation method.
It generates on-manifold, actionable and valid counterfactuals by modeling the joint distribution of the mutable features.
We compare MCCE with a range of state-of-the-art on-manifold counterfactual methods using four well-known data sets.
arXiv Detail & Related papers (2021-11-18T16:40:44Z) - Model-Based Counterfactual Synthesizer for Interpretation [40.01787107375103]
We propose a Model-based Counterfactual Synthesizer (MCS) framework for interpreting machine learning models.
We first analyze the model-based counterfactual process and construct a base synthesizer using a conditional generative adversarial net (CGAN).
To better approximate the counterfactual universe for those rare queries, we employ the umbrella sampling technique to train the MCS framework.
arXiv Detail & Related papers (2021-06-16T17:09:57Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z) - Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z)