A new fuzzy multi-attribute group decision-making method based on TOPSIS
and optimization models
- URL: http://arxiv.org/abs/2311.15933v1
- Date: Mon, 27 Nov 2023 15:41:30 GMT
- Title: A new fuzzy multi-attribute group decision-making method based on TOPSIS
and optimization models
- Authors: Qixiao Hu, Shiquan Zhang, Chaolang Hu, Yuetong Liu
- Abstract summary: A new method is proposed for multi-attribute group decision-making in the environment of interval-valued intuitionistic fuzzy sets.
By minimizing the sum of differences between individual evaluations and the overall consistent evaluations of all experts, a new optimization model is established for determining expert weights.
The complete fuzzy multi-attribute group decision-making algorithm is formulated, combining the advantages of subjective and objective weighting methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a new method based on TOPSIS and optimization models is
proposed for multi-attribute group decision-making in the environment of
interval-valued intuitionistic fuzzy sets. Firstly, by minimizing the sum of
differences between individual evaluations and the overall consistent
evaluations of all experts, a new optimization model is established for
determining expert weights. Secondly, based on the TOPSIS method, an improved
closeness index for evaluating each alternative is obtained. Finally, the
attribute weights are determined by an optimization model that maximizes the
closeness of each alternative, and they are substituted into the closeness
index so that the alternatives can be ranked. Combining all these steps, the
complete fuzzy multi-attribute group decision-making algorithm is formulated,
which exploits the advantages of both subjective and objective weighting
methods. In the end, the feasibility and effectiveness of the proposed method
are verified by a real case study.
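The paper's improved closeness index for interval-valued intuitionistic fuzzy evaluations is not reproduced here; as a rough illustration of the TOPSIS step the method builds on, the sketch below computes the classical crisp closeness index C_i = d_i^- / (d_i^+ + d_i^-). The function name `topsis` and the assumption that all attributes are benefit-type are illustrative choices, not taken from the paper.

```python
import numpy as np

def topsis(decision_matrix, weights):
    """Classical TOPSIS: rank alternatives by closeness to the ideal solution.

    decision_matrix: (m alternatives) x (n benefit attributes)
    weights: length-n attribute weights summing to 1
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each attribute column, then apply attribute weights.
    V = w * X / np.linalg.norm(X, axis=0)
    # Positive/negative ideal solutions (all attributes treated as benefits).
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - v_pos, axis=1)
    d_neg = np.linalg.norm(V - v_neg, axis=1)
    # Closeness index: 1 at the ideal solution, 0 at the anti-ideal one.
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every attribute coincides with the positive ideal solution and receives closeness 1, so ranking by this index puts it first.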
Related papers
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z) - Balancing Optimality and Diversity: Human-Centered Decision Making through Generative Curation [6.980546503227467]
We introduce a novel framework called generative curation, which optimizes the true desirability of decision options by integrating both quantitative and qualitative aspects.
We propose two implementation approaches: a generative neural network architecture that produces a distribution $\pi$ to efficiently sample a diverse set of near-optimal actions, and a sequential optimization method to iteratively generate solutions.
We validate our approach with extensive datasets, demonstrating its effectiveness in enhancing decision-making processes across a range of complex environments.
arXiv Detail & Related papers (2024-09-17T20:13:32Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
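As a rough sketch of the "learn solutions directly" idea (using a toy setup invented for illustration, not the paper's architecture or benchmarks): a two-stage baseline first regresses the unknown cost parameters and then optimizes, while a proxy model maps features straight to the optimal decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task (not from the paper): pick the cheapest of 3 routes,
# where each route's cost is an unknown linear function of a 2-d feature.
W_true = np.array([[1.0, 0.2], [0.3, 1.0], [0.6, 0.6]])
features = rng.uniform(size=(200, 2))
costs = features @ W_true.T                  # true route costs, shape (200, 3)
best = costs.argmin(axis=1)                  # optimal decision per instance

# Two-stage baseline: predict costs with least squares, then optimize (argmin).
W_hat = np.linalg.lstsq(features, costs, rcond=None)[0]
two_stage = (features @ W_hat).argmin(axis=1)

# Joint "proxy" model: softmax regression trained to output the optimal
# decision directly from features, skipping the explicit optimization step.
Z = np.hstack([features, np.ones((200, 1))])     # add a bias column
Theta = np.zeros((3, 3))
Y = np.eye(3)[best]                              # one-hot optimal decisions
for _ in range(3000):
    logits = Z @ Theta.T
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    Theta -= 0.5 * ((P - Y).T @ Z) / len(Z)      # gradient step on cross-entropy
joint = (Z @ Theta.T).argmax(axis=1)
```

In this toy setting the optimal-decision regions are linear wedges in feature space, so a linear proxy model can recover them; the papers above target harder problems where backpropagating through the optimization step is the bottleneck.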
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Pareto Set Learning for Expensive Multi-Objective Optimization [5.419608513284392]
Expensive multi-objective optimization problems can be found in many real-world applications.
This paper develops a novel learning-based method to approximate the whole Pareto set for MOBO.
arXiv Detail & Related papers (2022-10-16T09:41:54Z) - A unified surrogate-based scheme for black-box and preference-based
optimization [2.561649173827544]
We show that black-box and preference-based optimization problems are closely related and can be solved using the same family of approaches.
We propose the generalized Metric Response Surface (gMRS) algorithm, an optimization scheme that is a generalization of the popular MSRS framework.
arXiv Detail & Related papers (2022-02-03T08:47:54Z) - On the implementation of a global optimization method for mixed-variable
problems [0.30458514384586394]
The algorithm is based on the radial basis function of Gutmann and the metric response surface method of Regis and Shoemaker.
We propose several modifications aimed at generalizing and improving these two algorithms.
arXiv Detail & Related papers (2020-09-04T13:36:56Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.