Simultaneous Optimization of Efficiency and Degradation in Tunable HTL-Free Perovskite Solar Cells with MWCNT-Integrated Back Contact Using a Machine Learning-Derived Polynomial Regressor
- URL: http://arxiv.org/abs/2505.18693v1
- Date: Sat, 24 May 2025 13:37:48 GMT
- Title: Simultaneous Optimization of Efficiency and Degradation in Tunable HTL-Free Perovskite Solar Cells with MWCNT-Integrated Back Contact Using a Machine Learning-Derived Polynomial Regressor
- Authors: Ihtesham Ibn Malek, Hafiz Imtiaz, Samia Subrina
- Abstract summary: Perovskite solar cells (PSCs) without a hole transport layer (HTL) offer a cost-effective and stable alternative to conventional architectures. This study presents a machine learning (ML)-driven framework to optimize the efficiency and stability of HTL-free PSCs.
- Score: 0.8739101659113155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Perovskite solar cells (PSCs) without a hole transport layer (HTL) offer a cost-effective and stable alternative to conventional architectures, utilizing only an absorber layer and an electron transport layer (ETL). This study presents a machine learning (ML)-driven framework to optimize the efficiency and stability of HTL-free PSCs by integrating experimental validation with numerical simulations. Excellent agreement is achieved between a fabricated device and its simulated counterpart at a molar fraction \( x = 68.7\% \) in \(\mathrm{MAPb}_{1-x}\mathrm{Sb}_{2x/3}\mathrm{I}_3\), where MA is methylammonium. A dataset of 1650 samples is generated by varying molar fraction, absorber defect density, thickness, and ETL doping, with corresponding efficiency and 50-hour degradation as targets. A fourth-degree polynomial regressor (PR-4) shows the best performance, achieving RMSEs of 0.0179 and 0.0117, and \( R^2 \) scores of 1 and 0.999 for efficiency and degradation, respectively. The derived model generalizes beyond the training range and is used in an L-BFGS-B optimization algorithm with a weighted objective function to maximize efficiency and minimize degradation. This improves device efficiency from 13.7\% to 16.84\% and reduces degradation from 6.61\% to 2.39\% over 1000 hours. Finally, the dataset is labeled into superior and inferior classes, and a multilayer perceptron (MLP) classifier achieves 100\% accuracy, successfully identifying optimal configurations.
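The sketch below is a rough, illustrative rendering of the pipeline the abstract describes, not the authors' code: fit a fourth-degree polynomial regressor (PR-4) on a four-input, two-target dataset, run L-BFGS-B on a weighted objective that trades efficiency against degradation, and finally train an MLP classifier on superior/inferior labels. The synthetic data, variable ranges, the weight `w`, and the labeling thresholds are all assumptions made so the snippet runs end to end.

```python
# Illustrative PR-4 + L-BFGS-B + MLP sketch; all numeric ranges, the synthetic
# targets, the trade-off weight `w`, and the label thresholds are assumptions,
# not values taken from the paper.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# Placeholder inputs: molar fraction, log10(defect density), thickness (nm),
# log10(ETL doping); in the study these come from device simulations (1650 samples).
lo = np.array([0.2, 13.0, 200.0, 17.0])
hi = np.array([0.9, 16.0, 800.0, 20.0])
X = rng.uniform(lo, hi, size=(1650, 4))
eff = 14 + 3 * X[:, 0] - 0.8 * (X[:, 1] - 13) + rng.normal(0, 0.05, 1650)  # dummy efficiency (%)
deg = 7 - 4 * X[:, 0] + 0.5 * (X[:, 1] - 13) + rng.normal(0, 0.05, 1650)   # dummy 50 h degradation (%)
Y = np.column_stack([eff, deg])

# PR-4: fourth-degree polynomial features with a multi-output linear regression.
pr4 = make_pipeline(StandardScaler(), PolynomialFeatures(degree=4), LinearRegression())
pr4.fit(X, Y)

# Weighted objective for L-BFGS-B: maximize efficiency, minimize degradation.
w = 0.7  # assumed trade-off weight
def objective(x):
    e, d = pr4.predict(x.reshape(1, -1))[0]
    return -(w * e - (1.0 - w) * d)

res = minimize(objective, x0=X.mean(axis=0), method="L-BFGS-B", bounds=list(zip(lo, hi)))
print("candidate design:", res.x)
print("predicted (efficiency, degradation):", pr4.predict(res.x.reshape(1, -1))[0])

# Final step of the abstract: label superior vs. inferior configurations
# (median-based thresholds are an assumption) and fit an MLP classifier.
labels = ((Y[:, 0] > np.median(Y[:, 0])) & (Y[:, 1] < np.median(Y[:, 1]))).astype(int)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In the study itself, the regressor is fit to simulated device data rather than synthetic stand-ins, and the weighted objective is what drives the reported improvement from 13.7% to 16.84% efficiency and from 6.61% to 2.39% degradation over 1000 hours.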
Related papers
- Machine Learning-Assisted Surrogate Modeling with Multi-Objective Optimization and Decision-Making of a Steam Methane Reforming Reactor [10.704938459679978]
This study presents an integrated modeling and optimization framework for a steam methane reforming (SMR) reactor. A one-dimensional fixed-bed reactor model accounting for internal mass transfer resistance was employed to simulate reactor performance. To reduce the high computational cost of the mathematical model, a hybrid ANN surrogate was constructed, achieving a 93.8% reduction in average simulation time.
arXiv Detail & Related papers (2025-07-10T11:10:16Z) - COSMOS: A Hybrid Adaptive Optimizer for Memory-Efficient Training of LLMs [81.01082659623552]
Large Language Models (LLMs) have demonstrated remarkable success across various domains. Their optimization remains a significant challenge due to the complex and high-dimensional loss landscapes they inhabit.
arXiv Detail & Related papers (2025-02-24T18:42:19Z) - Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective [55.90119819642064]
We address the challenge of determining the layer-wise sparsity rates of large language models (LLMs) from a theoretical perspective. The key issue is the cumulative effect of reconstruction errors throughout the sparsification process, and we derive a simple yet effective approach to layer-wise sparsity allocation that mitigates it.
arXiv Detail & Related papers (2025-02-20T17:51:10Z) - Neural Network Surrogate and Projected Gradient Descent for Fast and Reliable Finite Element Model Calibration: a Case Study on an Intervertebral Disc [9.456474817418703]
This study introduces a novel, efficient, and effective calibration method demonstrated on a human L4-L5 IVD FE model. The NN surrogate predicts simulation outcomes with high accuracy, outperforming other machine learning models, and significantly reduces the computational cost. Such efficiency paves the way for applying more complex FE models, potentially extending beyond IVDs.
arXiv Detail & Related papers (2024-08-12T11:39:21Z) - Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient [57.9629676017527]
We propose an optimization-based structural pruning method for Large Language Models.
We learn the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model.
Our method runs in 2.7 hours with around 35 GB of memory for the 13B models on a single A100 GPU.
arXiv Detail & Related papers (2024-06-15T09:31:03Z) - Parameter Identification for Electrochemical Models of Lithium-Ion Batteries Using Bayesian Optimization [5.637250168164636]
Gradient-based and metaheuristic optimization techniques are limited by their lack of robustness, high computational costs, and susceptibility to local minima.
In this study, Bayesian Optimization is used for tuning the dynamic parameters of an electrochemical equivalent circuit battery model (E-ECM) for a nickel-manganese-cobalt (NMC)-graphite cell.
arXiv Detail & Related papers (2024-05-17T12:59:15Z) - DB-LLM: Accurate Dual-Binarization for Efficient LLMs [83.70686728471547]
Large language models (LLMs) have significantly advanced the field of natural language processing.
Existing ultra-low-bit quantization always causes severe accuracy drops.
We propose a novel Dual-Binarization method for LLMs, namely DB-LLM.
arXiv Detail & Related papers (2024-02-19T09:04:30Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Highly efficient reliability analysis of anisotropic heterogeneous slopes: Machine Learning aided Monte Carlo method [0.0]
This paper presents a highly efficient Machine Learning aided reliability technique.
It accurately predicts the results of a Monte Carlo (MC) reliability study while running about 500 times faster, reducing the computational time required for the study from 306 days to 14 hours.
arXiv Detail & Related papers (2022-04-04T16:28:53Z) - A deep learning-based model reduction (DeePMR) method for simplifying chemical kinetics [10.438320849775224]
The DeePMR is proposed and validated using high-temperature auto-ignitions, perfectly stirred reactors (PSR), and one-dimensional freely propagating flames of n-heptane/air mixtures.
The key idea of the DeePMR is to employ a deep neural network (DNN) to formulate the objective function in the optimization problem.
arXiv Detail & Related papers (2022-01-06T12:31:32Z) - Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior [54.629850694790036]
The spectral-normalized identity prior (SNIP) is a structured pruning approach that penalizes an entire residual module in a Transformer model toward an identity mapping.
We conduct experiments with BERT on 5 GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance.
arXiv Detail & Related papers (2020-10-05T05:40:56Z)