Enhancing Explainability and Reliable Decision-Making in Particle Swarm Optimization through Communication Topologies
- URL: http://arxiv.org/abs/2504.12803v1
- Date: Thu, 17 Apr 2025 10:05:10 GMT
- Title: Enhancing Explainability and Reliable Decision-Making in Particle Swarm Optimization through Communication Topologies
- Authors: Nitin Gupta, Indu Bala, Bapi Dutta, Luis Martínez, Anupam Yadav
- Abstract summary: This study focuses on how different communication topologies affect convergence and search behaviors. Using an adapted IOHxplainer, we investigate how these topologies influence information flow, diversity, and convergence speed.
- Score: 14.88267665338613
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Swarm intelligence effectively optimizes complex systems across fields like engineering and healthcare, yet algorithmic solutions often suffer from low reliability due to unclear configurations and hyperparameters. This study analyzes Particle Swarm Optimization (PSO), focusing on how different communication topologies (Ring, Star, and Von Neumann) affect convergence and search behaviors. Using an adapted IOHxplainer, an explainable benchmarking tool, we investigate how these topologies influence information flow, diversity, and convergence speed, clarifying the balance between exploration and exploitation. Through visualization and statistical analysis, the research enhances the interpretability of PSO's decisions and provides practical guidelines for choosing suitable topologies for specific optimization tasks. Ultimately, this contributes to making swarm-based optimization more transparent, robust, and trustworthy.
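The three topologies named in the abstract differ only in which neighbors each particle may learn from. A minimal sketch of how those informer neighborhoods could be built (illustrative helper names; this is not the paper's or IOHxplainer's code):

```python
# Sketch of the neighborhoods induced by the three PSO communication
# topologies discussed above. All indices are particle indices 0..n-1.

def ring_neighbors(i, n):
    # Ring: particle i exchanges information only with its two adjacent
    # particles, so good solutions propagate slowly around the ring.
    return [(i - 1) % n, i, (i + 1) % n]

def star_neighbors(i, n):
    # Star (gbest): every particle sees the whole swarm at once.
    return list(range(n))

def von_neumann_neighbors(i, n, cols):
    # Von Neumann: particles sit on a wrapped 2-D grid and exchange
    # information with their north/south/east/west neighbors.
    rows = n // cols
    r, c = divmod(i, cols)
    return [i,
            ((r - 1) % rows) * cols + c,  # north
            ((r + 1) % rows) * cols + c,  # south
            r * cols + (c - 1) % cols,    # west
            r * cols + (c + 1) % cols]    # east

def local_best(fitness, neighbors):
    # The best-performing informer guides the particle's velocity update;
    # smaller fitness is better (minimization).
    return min(neighbors, key=lambda j: fitness[j])

fitness = [0.9, 0.4, 0.5, 0.3, 0.7, 0.2, 0.05, 0.45, 0.6, 0.35, 0.65, 0.15]
n = len(fitness)
# Particle 0's informer differs by topology: the star already sees the
# swarm-wide best, while ring and Von Neumann see only nearby particles.
ring_lb = local_best(fitness, ring_neighbors(0, n))
star_lb = local_best(fitness, star_neighbors(0, n))
vn_lb = local_best(fitness, von_neumann_neighbors(0, n, cols=4))
```

Because the star topology exposes the global best to every particle immediately, it tends to converge faster but lose diversity sooner, whereas the ring and Von Neumann structures slow information flow and preserve exploration, which is the trade-off the study quantifies.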
Related papers
- Learning Strategies in Particle Swarm Optimizer: A Critical Review and Performance Analysis [0.6437284704257459]
Particle swarm optimization (PSO) is widely adopted among SI algorithms due to its simplicity and efficiency. We review and classify various learning strategies to address this gap, assessing their impact on optimization performance. We discuss open challenges and future directions, emphasizing the need for self-adaptive, intelligent PSO variants.
arXiv Detail & Related papers (2025-04-16T06:50:02Z) - End-to-End Optimal Detector Design with Mutual Information Surrogates [1.7042756021131187]
We introduce a novel approach for end-to-end black-box optimization of high energy physics detectors using local deep learning (DL) surrogates. In addition to a standard reconstruction-based metric commonly used in the field, we investigate the information-theoretic metric of mutual information. Our findings reveal three key insights: (1) end-to-end black-box optimization using local surrogates is a practical and compelling approach for detector design; (2) mutual information-based optimization yields design choices that closely match those from state-of-the-art physics-informed methods; and (3) information-theoretic methods provide a
arXiv Detail & Related papers (2025-03-18T15:23:03Z) - Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm -- an evolutionary optimization-based discovery framework. Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials. We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
arXiv Detail & Related papers (2024-12-28T15:58:44Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
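For context, an OWA objective applies its weights to the sorted components of the outcome vector rather than to fixed positions, which is what makes it nondifferentiable at ties. A minimal sketch of the standard OWA definition (not the paper's implementation):

```python
def owa(values, weights):
    # Ordered Weighted Averaging: the k-th weight multiplies the k-th
    # largest value. Equal weights recover the mean; the weight vector
    # (1, 0, ..., 0) recovers the max.
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

Choosing decreasing weights places more mass on the worst outcomes, which is how OWA objectives encode fairness across the components being optimized.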
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Optimizing $CO_{2}$ Capture in Pressure Swing Adsorption Units: A Deep Neural Network Approach with Optimality Evaluation and Operating Maps for Decision-Making [0.0]
This study focuses on enhancing Pressure Swing Adsorption units for carbon dioxide capture.
We developed and implemented a multiple-input, single-output (MISO) framework comprising two deep neural network (DNN) models.
This approach delineated feasible operational regions (FORs) and highlighted the spectrum of optimal decision-making scenarios.
arXiv Detail & Related papers (2023-12-06T19:43:37Z) - An Interactive Knowledge-based Multi-objective Evolutionary Algorithm Framework for Practical Optimization Problems [5.387300498478744]
This paper proposes an interactive knowledge-based evolutionary multi-objective optimization (IK-EMO) framework.
It extracts hidden variable-wise relationships as knowledge from evolving high-performing solutions, shares them with users to receive feedback, and applies them back to the optimization process to improve its effectiveness.
The working of the proposed IK-EMO is demonstrated on three large-scale real-world engineering design problems.
arXiv Detail & Related papers (2022-09-18T16:51:01Z) - Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Dynamic communication topologies for distributed heuristics in energy
system optimization algorithms [0.0]
We present an approach for adapting the communication topology during runtime.
We compare the approach to common static topologies regarding the performance of an exemplary distributed optimization.
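A runtime topology switch of the kind summarized above could be as simple as falling back from a sparse to a dense neighborhood when the search stagnates. This is a hypothetical rule for illustration, not the paper's actual adaptation scheme:

```python
def adapt_topology(no_improvement_steps, stagnation_limit=10):
    # Hypothetical adaptation rule: keep a sparse ring while the best-known
    # value is still improving (favoring exploration), then widen to a
    # fully connected star once progress stalls (favoring exploitation).
    return "star" if no_improvement_steps >= stagnation_limit else "ring"
```

The same counter-based trigger could equally drive the reverse switch, re-sparsifying the topology to escape premature convergence.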
arXiv Detail & Related papers (2021-08-03T09:30:56Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient hypergradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.