Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems
- URL: http://arxiv.org/abs/2410.15171v1
- Date: Sat, 19 Oct 2024 18:15:24 GMT
- Title: Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems
- Authors: Qianlei Jia, Witold Pedrycz
- Abstract summary: Linguistic fuzzy information evolution is crucial in understanding information exchange among agents.
Different agent weights may lead to different convergence results in the classic DeGroot model.
This paper proposes three new models of linguistic fuzzy information dynamics.
- Score: 58.67035332062508
- Abstract: Linguistic fuzzy information evolution is crucial in understanding information exchange among agents. However, in the classic DeGroot model, different agent weights may lead to different convergence results. Similarly, in the Hegselmann-Krause bounded confidence model (HK model), changing the agents' confidence thresholds can change the final results. To address these limitations, this paper proposes three new models of linguistic fuzzy information dynamics: the per-round random leader election mechanism-based DeGroot model (PRRLEM-DeGroot), the PRRLEM-based homogeneous HK model (PRRLEM-HOHK), and the PRRLEM-based heterogeneous HK model (PRRLEM-HEHK). In these models, after each round of fuzzy information updates, an agent is randomly selected to act as a temporary leader with greater influence, and the leadership structure is reset after each update. This strategy increases information sharing and enhances decision-making by integrating the evaluation information of multiple agents, which also mirrors real life (the leader is not fixed). The Monte Carlo method is then employed to simulate the behavior of these complex systems through repeated random trials, yielding confidence intervals for the different fuzzy information. Subsequently, an improved golden rule representative value (GRRV) from fuzzy theory is proposed to rank these confidence intervals. Simulation examples and a real-world space situational awareness scenario validate the effectiveness of the proposed models. Comparative analysis with other models demonstrates their ability to mitigate the echo chamber effect and improve robustness.
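The abstract describes the update mechanism only at a high level. Below is a minimal sketch of what a per-round random leader election could look like on top of a numeric DeGroot update, assuming opinions are encoded as real numbers (e.g., indices of linguistic terms), a row-stochastic trust matrix `W`, and an illustrative multiplicative `boost` for the elected leader. The function names, the boost factor, and the quantile-based confidence intervals are assumptions for illustration; the paper's improved GRRV ranking of the intervals is not reproduced here.

```python
import numpy as np

def prrlem_degroot_step(x, W, boost=5.0, rng=None):
    """One hedged PRRLEM-DeGroot round: a randomly elected temporary
    leader gets extra influence, and leadership resets afterwards.

    x     : (n,) agent states (numeric encodings of linguistic terms; assumed)
    W     : (n, n) row-stochastic trust matrix of the classic DeGroot model
    boost : multiplicative influence gain for this round's leader (assumed)
    """
    rng = np.random.default_rng() if rng is None else rng
    leader = rng.integers(len(x))      # per-round random leader election
    W_round = W.copy()
    W_round[:, leader] *= boost        # every agent weighs the leader more
    W_round /= W_round.sum(axis=1, keepdims=True)  # keep rows stochastic
    return W_round @ x                 # leadership is discarded next round

def monte_carlo_ci(x0, W, rounds=50, trials=1000, alpha=0.05, seed=0):
    """Repeat the stochastic evolution and return empirical per-agent
    confidence intervals for the final states (a stand-in for the
    paper's Monte Carlo step; the GRRV ranking step is omitted)."""
    rng = np.random.default_rng(seed)
    finals = np.empty((trials, len(x0)))
    for t in range(trials):
        x = np.asarray(x0, dtype=float)
        for _ in range(rounds):
            x = prrlem_degroot_step(x, W, rng=rng)
        finals[t] = x
    return (np.quantile(finals, alpha / 2, axis=0),
            np.quantile(finals, 1 - alpha / 2, axis=0))

# Hypothetical usage: four agents with uniform mutual trust.
W = np.full((4, 4), 0.25)
lo, hi = monte_carlo_ci(np.array([1.0, 3.0, 5.0, 7.0]), W)
```

Because the elected leader changes every round, repeated trials spread influence across all agents rather than concentrating it, which matches the intuition behind the claimed echo-chamber mitigation.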
Related papers
- Using Deep Q-Learning to Dynamically Toggle between Push/Pull Actions in Computational Trust Mechanisms [0.0]
In previous work, we compared CA to FIRE, a well-known trust and reputation model, and found that CA is superior when the trustor population changes.
We frame this problem as a machine learning problem in a partially observable environment, where the presence of several dynamic factors is not known to the trustor.
We show that an adaptable agent is indeed capable of learning when to use each model and, thus, perform consistently in dynamic environments.
arXiv Detail & Related papers (2024-04-28T19:44:56Z)
- Causal Coordinated Concurrent Reinforcement Learning [8.654978787096807]
We propose a novel algorithmic framework for data sharing and coordinated exploration for the purpose of learning more data-efficient and better performing policies under a concurrent reinforcement learning setting.
Our algorithm leverages a causal inference algorithm in the form of Additive Noise Model - Mixture Model (ANM-MM) in extracting model parameters governing individual differentials via independence enforcement.
We propose a new data sharing scheme based on a similarity measure of the extracted model parameters and demonstrate superior learning speeds on a set of autoregressive, pendulum and cart-pole swing-up tasks.
arXiv Detail & Related papers (2024-01-31T17:20:28Z)
- Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning [0.5999777817331317]
Data assimilation plays a pivotal role in diverse applications, ranging from climate predictions and weather forecasts to trajectory planning for autonomous vehicles.
Recent advancements have seen the emergence of deep learning approaches in this domain, primarily within a supervised learning framework.
In this study, we introduce a novel DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables.
arXiv Detail & Related papers (2024-01-01T06:53:36Z)
- Value-Distributional Model-Based Reinforcement Learning [59.758009422067]
Quantifying uncertainty about a policy's long-term performance is important to solve sequential decision-making tasks.
We study the problem from a model-based Bayesian reinforcement learning perspective.
We propose Epistemic Quantile-Regression (EQR), a model-based algorithm that learns a value distribution function.
arXiv Detail & Related papers (2023-08-12T14:59:19Z)
- Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting [13.99348653165494]
We propose Generative Causal Representation Learning (GCRL) to facilitate knowledge transfer under distribution shifts.
While we evaluate the effectiveness of our proposed method in human trajectory prediction models, GCRL can be applied to other domains as well.
arXiv Detail & Related papers (2023-02-17T00:30:44Z)
- DeGroot-based opinion formation under a global steering mechanism [3.0215418312651057]
We study the opinion formation process under the effect of a global steering mechanism (GSM), which aggregates the opinion-driven agent states at the network level.
We propose a new two-layer agent-based opinion formation model, called GSM-DeGroot, that captures the coupled dynamics between agent-to-agent local interactions and the network-level steering signal (a hedged sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-10-21T22:02:47Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)
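As a rough illustration of the global steering idea in the GSM-DeGroot entry above (not that paper's actual formulation), a minimal sketch couples the local DeGroot update with a network-level aggregate; the mean aggregation and the mixing weight `lam` are assumptions.

```python
import numpy as np

def gsm_degroot_step(x, W, lam=0.2):
    """Hedged two-layer update: local DeGroot averaging mixed with a
    network-level steering signal (here simply the mean opinion).

    x   : (n,) agent opinions
    W   : (n, n) row-stochastic trust matrix
    lam : strength of the global steering term (assumed)
    """
    local = W @ x          # layer 1: agent-to-agent local interactions
    steering = x.mean()    # layer 2: aggregated network-level state
    return (1 - lam) * local + lam * steering
```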