Hybrid Particle Swarm Optimization for Fast and Reliable Parameter Extraction in Thermoreflectance
- URL: http://arxiv.org/abs/2507.22960v1
- Date: Wed, 30 Jul 2025 01:35:42 GMT
- Title: Hybrid Particle Swarm Optimization for Fast and Reliable Parameter Extraction in Thermoreflectance
- Authors: Bingjia Xiao, Tao Chen, Wenbin Zhang, Xin Qian, Puqing Jiang
- Abstract summary: We study a technique for characterizing thermal properties of multilayer thin films using frequency-domain thermoreflectance (FDTR). To improve speed and accuracy, we propose an AI-driven hybrid optimization framework that combines each global algorithm with a local Quasi-Newton refinement method. Among these, HPSO outperforms all other methods, with 80% of trials reaching the target fitness value within 60 seconds.
- Score: 7.375644602553432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Frequency-domain thermoreflectance (FDTR) is a widely used technique for characterizing thermal properties of multilayer thin films. However, extracting multiple parameters from FDTR measurements presents a nonlinear inverse problem due to its high dimensionality and multimodal, non-convex solution space. This study evaluates four popular global optimization algorithms: Genetic Algorithm (GA), Quantum Genetic Algorithm (QGA), Particle Swarm Optimization (PSO), and Fireworks Algorithm (FWA), for extracting parameters from FDTR measurements of a GaN/Si heterostructure. However, none achieve reliable convergence within 60 seconds. To improve convergence speed and accuracy, we propose an AI-driven hybrid optimization framework that combines each global algorithm with a Quasi-Newton local refinement method, resulting in four hybrid variants: HGA, HQGA, HPSO, and HFWA. Among these, HPSO outperforms all other methods, with 80% of trials reaching the target fitness value within 60 seconds, showing greater robustness and a lower risk of premature convergence. In contrast, only 30% of HGA and HQGA trials and 20% of HFWA trials achieve this threshold. We then evaluate the worst-case performance across 100 independent trials for each algorithm when the time is extended to 1000 seconds. Only HPSO, PSO, and HGA consistently reach the target accuracy, with HPSO converging five times faster than the others. HPSO provides a general-purpose solution for inverse problems in thermal metrology and can be readily extended to other model-fitting techniques.
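To make the hybrid scheme concrete, here is a minimal sketch of the global-plus-local idea: a plain PSO stage followed by quasi-Newton (BFGS) refinement of the swarm's best candidate. This is not the authors' implementation; the FDTR forward model is replaced by a toy multimodal objective, and the swarm size, iteration count, and coefficients are illustrative.
```python
# Minimal sketch of a hybrid PSO + quasi-Newton optimizer. A toy
# multimodal objective stands in for the FDTR residual; all
# hyperparameters are illustrative, not the paper's values.
import numpy as np
from scipy.optimize import minimize

def fitness(x):
    # Toy stand-in for the FDTR fitting residual: multimodal, non-convex.
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

rng = np.random.default_rng(0)
dim, n_particles, n_iters = 4, 30, 200
lo, hi = -5.12, 5.12

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):  # global PSO stage
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

# Local quasi-Newton refinement of the swarm's best candidate.
result = minimize(fitness, gbest, method="BFGS")
print("PSO best:", fitness(gbest), "-> refined:", result.fun)
```
The division of labor is the point: the swarm escapes local minima of the non-convex fitness landscape, while the quasi-Newton stage supplies fast final convergence near the basin the swarm has found.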
Related papers
- A Study of Hybrid and Evolutionary Metaheuristics for Single Hidden Layer Feedforward Neural Network Architecture [1.024113475677323]
Training Artificial Neural Networks (ANNs) with Stochastic Gradient Descent (SGD) frequently encounters difficulties. This work investigates Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs). A hybrid PSO-SGD strategy is developed to improve local search efficiency.
arXiv Detail & Related papers (2025-06-17T04:12:58Z) - A Gradient Meta-Learning Joint Optimization for Beamforming and Antenna Position in Pinching-Antenna Systems [63.213207442368294]
We consider a novel optimization design for multi-waveguide pinching-antenna systems. The proposed GML-JO algorithm is robust to different choices and achieves better performance than existing optimization methods.
arXiv Detail & Related papers (2025-06-14T17:35:27Z) - Accelerating Evolution: Integrating PSO Principles into Real-Coded Genetic Algorithm Crossover [2.854482269849925]
This study introduces an innovative crossover operator named Particle Swarm Optimization-inspired Crossover (PSOX). PSOX uniquely incorporates guidance from both the current global best solution and historical optimal solutions across multiple generations. This novel mechanism enables the algorithm to maintain population diversity while simultaneously accelerating convergence toward promising regions of the search space.
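The operator lends itself to a compact sketch. Below is a hypothetical implementation assuming real-coded chromosomes stored as NumPy arrays; the coefficients `c1`, `c2` and the archive of historical optima are illustrative stand-ins, not the paper's exact operator.
```python
import numpy as np

def psox_crossover(parent, gbest, archive, rng, c1=0.8, c2=0.4):
    # Pull the child from the parent toward the current global best and
    # one randomly chosen historical optimum, mirroring PSO's update.
    hist = archive[rng.integers(len(archive))]
    r1, r2 = rng.random(2)
    return parent + c1 * r1 * (gbest - parent) + c2 * r2 * (hist - parent)

rng = np.random.default_rng(1)
pop = rng.normal(size=(10, 5))   # real-coded population (hypothetical)
gbest = pop[0]                   # stand-in global best
archive = pop[:3]                # stand-in archive of historical optima
child = psox_crossover(pop[4], gbest, archive, rng)
```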
arXiv Detail & Related papers (2025-05-06T06:17:57Z) - Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment. We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - A Deep Unrolling Model with Hybrid Optimization Structure for Hyperspectral Image Deconvolution [50.13564338607482]
We propose a novel optimization framework for the hyperspectral deconvolution problem, called DeepMix. It consists of three distinct modules: a data consistency module, a module that enforces the effect of the handcrafted regularizers, and a denoising module. This work proposes a context-aware denoising module designed to sustain the advancements achieved by the cooperative efforts of the other modules.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Multipoint-BAX: A New Approach for Efficiently Tuning Particle
Accelerator Emittance via Virtual Objectives [47.52324722637079]
We propose a new information-theoretic algorithm, Multipoint-BAX, for black-box optimization on multipoint queries.
We use Multipoint-BAX to minimize emittance at the Linac Coherent Light Source (LCLS) and the Facility for Advanced Accelerator Experimental Tests II (FACET-II).
arXiv Detail & Related papers (2022-09-10T04:01:23Z) - LassoBench: A High-Dimensional Hyperparameter Optimization Benchmark
Suite for Lasso [84.6451154376526]
LassoBench is a new benchmark suite tailored for an important open research topic in the Lasso community.
We evaluate 5 state-of-the-art HPO methods and 3 baselines, and demonstrate that Bayesian optimization, in particular, can improve over the methods commonly used for sparse regression.
arXiv Detail & Related papers (2021-11-04T12:05:09Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Convergence Analysis of Homotopy-SGD for non-convex optimization [43.71213126039448]
We present a first-order algorithm based on a combination of homotopy methods and SGD, called Homotopy-Stochastic Gradient Descent (H-SGD).
Under some assumptions, we conduct a theoretical analysis of the proposed algorithm.
Experimental results show that H-SGD can outperform SGD.
arXiv Detail & Related papers (2020-11-20T09:50:40Z) - An Adaptive EM Accelerator for Unsupervised Learning of Gaussian Mixture Models [0.7340845393655052]
We propose an Anderson Acceleration scheme for the adaptive Expectation-Maximization (EM) algorithm for unsupervised learning.
The proposed algorithm is able to determine the optimal number of mixture components autonomously, and converges to the optimal solution much faster than its non-accelerated version.
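Anderson Acceleration extrapolates a fixed-point iteration from a short history of iterates, and the EM update is one such fixed-point map. Below is a minimal generic sketch (Type-II Anderson with a small memory) with a toy contraction standing in for the EM step; the paper's adaptive component-selection logic is not reproduced.
```python
# Generic Anderson acceleration for a fixed-point map g(x); the EM
# update is one such map. Memory depth and the toy map are illustrative.
import numpy as np

def anderson(g, x0, m=3, iters=50, tol=1e-10):
    X, G = [x0], [g(x0)]
    for _ in range(iters):
        F = np.array([gk - xk for gk, xk in zip(G, X)])  # residuals
        if len(F) == 1:
            x = G[-1]                  # plain fixed-point step at start
        else:
            # Least squares on residual differences approximates the
            # constrained mixing weights of Anderson acceleration.
            dF = (F[1:] - F[:-1]).T
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            Garr = np.array(G)
            dG = (Garr[1:] - Garr[:-1]).T
            x = G[-1] - dG @ gamma
        if np.linalg.norm(x - X[-1]) < tol:
            return x
        X.append(x); G.append(g(x))
        X, G = X[-m:], G[-m:]          # keep only the last m iterates
    return X[-1]

# Toy contraction with fixed point sqrt(2), standing in for an EM step.
print(anderson(lambda x: 0.5 * (x + 2.0 / x), np.array([1.0])))
```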
arXiv Detail & Related papers (2020-09-26T22:55:44Z) - Efficient Machine Learning Approach for Optimizing the Timing Resolution of a High Purity Germanium Detector [0.0]
We describe an efficient machine-learning-based approach for the optimization of parameters generated by the detection of 511 keV gamma-rays by a 60 cm³ coaxial high-purity germanium detector (HPGe).
The method utilizes a type of artificial neural network (ANN) called a self-organizing map (SOM) to cluster the HPGe waveforms based on the shape of their rising edges.
Applying these variable timing parameters to the HPGe signals achieved a gamma-coincidence timing resolution of 4.3 ns at the 511 keV photopeak and a timing resolution of 6.5 ns for the entire spectrum.
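As a rough illustration of the SOM idea only (not the authors' network, preprocessing, or hyperparameters), the sketch below trains a tiny 1-D self-organizing map on synthetic "rising edge" vectors and assigns each waveform to its best-matching unit.
```python
# Toy 1-D self-organizing map for clustering rising-edge waveforms.
# All shapes, sizes, and schedules are illustrative assumptions.
import numpy as np

def train_som(data, n_units=8, iters=500, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = data[rng.choice(len(data), n_units)]   # init from samples
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best match
        lr = lr0 * (1 - t / iters)                   # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3      # shrinking neighborhood
        dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-dist**2 / (2 * sigma**2))        # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)
    return weights

# Synthetic stand-ins for rising-edge segments of detector waveforms.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)
data = np.array([1 / (1 + np.exp(-(t - rng.uniform(0.3, 0.7)) * rng.uniform(8, 20)))
                 for _ in range(200)])
weights = train_som(data)
clusters = np.argmin(np.linalg.norm(data[:, None] - weights, axis=2), axis=1)
```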
arXiv Detail & Related papers (2020-03-31T16:04:21Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)