Optimizing $CO_{2}$ Capture in Pressure Swing Adsorption Units: A Deep
Neural Network Approach with Optimality Evaluation and Operating Maps for
Decision-Making
- URL: http://arxiv.org/abs/2312.03873v1
- Date: Wed, 6 Dec 2023 19:43:37 GMT
- Authors: Carine Menezes Rebello, Idelfonso B. R. Nogueira
- Abstract summary: This study focuses on enhancing Pressure Swing Adsorption units for carbon dioxide capture.
We developed and implemented a multiple-input, single-output (MISO) framework comprising two deep neural network (DNN) models.
This approach delineated feasible operational regions (FORs) and highlighted the spectrum of optimal decision-making scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a methodology for surrogate optimization of cyclic
adsorption processes, focusing on enhancing Pressure Swing Adsorption units for
carbon dioxide ($CO_{2}$) capture. We developed and implemented a
multiple-input, single-output (MISO) framework comprising two deep neural
network (DNN) models, predicting key process performance indicators. These
models were then integrated into an optimization framework, leveraging particle
swarm optimization (PSO) and statistical analysis to generate a comprehensive
Pareto front representation. This approach delineated feasible operational
regions (FORs) and highlighted the spectrum of optimal decision-making
scenarios. A key aspect of our methodology was the evaluation of optimization
effectiveness. This was accomplished by testing decision variables derived from
the Pareto front against a phenomenological model, affirming the surrogate
models' reliability. Subsequently, the study delved into analyzing the feasible
operational domains of these decision variables. A detailed correlation map was
constructed to elucidate the interplay between these variables, thereby
uncovering the most impactful factors influencing process behavior. The study
offers a practical, insightful operational map that aids operators in
pinpointing the optimal process location and prioritizing specific operational
goals.
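The pipeline described above, train fast surrogates for the performance indicators, then scalarize the objectives and run PSO repeatedly to trace a Pareto front, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two quadratic "surrogates" stand in for the trained DNN models, the variable names (`purity_surrogate`, `recovery_surrogate`) and bounds are invented, and the PSO is a bare-bones swarm rather than a production optimizer.

```python
import numpy as np

# Hypothetical surrogates standing in for the paper's two trained DNN models:
# they map decision variables (e.g. pressure, cycle time) to purity and recovery.
def purity_surrogate(x):
    p, t = x
    return 0.9 - 0.1 * (p - 1.5) ** 2 - 0.05 * (t - 2.0) ** 2

def recovery_surrogate(x):
    p, t = x
    return 0.8 - 0.05 * (p - 2.5) ** 2 - 0.1 * (t - 1.0) ** 2

def pso_optimize(objective, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm optimizer (maximizes `objective`)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(xi) for xi in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive + social terms, then clip to the feasible box.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(xi) for xi in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest

# Trace a Pareto front by scalarizing the two objectives with varying weights.
bounds = np.array([[0.5, 4.0], [0.5, 4.0]])
front = []
for w in np.linspace(0.0, 1.0, 11):
    obj = lambda x, w=w: w * purity_surrogate(x) + (1 - w) * recovery_surrogate(x)
    x_star = pso_optimize(obj, bounds)
    front.append((purity_surrogate(x_star), recovery_surrogate(x_star)))
```

Each weight `w` yields one non-dominated purity/recovery trade-off; sweeping `w` fills in the front that the paper then screens against the phenomenological model.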
Related papers
- See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition [56.87609859444084]
Parameter-efficient fine-tuning (PEFT) focuses on optimizing a select subset of parameters while keeping the rest fixed, significantly lowering computational and storage overheads.
We take the first step to unify all approaches by dissecting them from a decomposition perspective.
We introduce two novel PEFT methods alongside a simple yet effective framework designed to enhance the performance of PEFT techniques across various applications.
arXiv Detail & Related papers (2024-07-07T15:44:42Z) - Beyond Single-Model Views for Deep Learning: Optimization versus
Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants demonstrate performance on par with flat-minima optimizers such as SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
arXiv Detail & Related papers (2024-03-01T14:55:22Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Constrained Bayesian Optimization Under Partial Observations: Balanced
Improvements and Provable Convergence [6.461785985849886]
We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization.
We present an improved design of the acquisition functions that introduces balanced exploration during optimization.
We propose a Gaussian process embedding different likelihoods as the surrogate model for a partially observable constraint.
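The "balanced exploration" idea in the acquisition design above can be illustrated with the generic upper-confidence-bound form. This is a hedged sketch of the standard UCB acquisition, not the paper's proposed function: the posterior means and standard deviations below are invented toy values.

```python
import numpy as np

# UCB-style acquisition: trade off exploitation (posterior mean) against
# exploration (posterior standard deviation) via the coefficient beta.
def ucb_acquisition(mean, std, beta=2.0):
    """Score candidate points; larger means more promising to evaluate next."""
    return mean + beta * std

# Toy posterior over 5 candidate points: point 3 has a lower mean but much
# higher uncertainty, so a sufficiently large beta selects it for exploration.
mean = np.array([0.2, 0.5, 0.4, 0.1, 0.3])
std  = np.array([0.05, 0.02, 0.10, 0.40, 0.05])
best = int(np.argmax(ucb_acquisition(mean, std)))   # index of the next query
```

Setting `beta=0` reduces to pure exploitation (greedy on the mean), which is exactly the imbalance that improved acquisition designs aim to avoid.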
arXiv Detail & Related papers (2023-12-06T01:00:07Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Differentiable Multi-Target Causal Bayesian Experimental Design [43.76697029708785]
We introduce a gradient-based approach for the problem of Bayesian optimal experimental design to learn causal models in a batch setting.
Existing methods rely on greedy approximations to construct a batch of experiments.
We propose a conceptually simple end-to-end gradient-based optimization procedure to acquire a set of optimal intervention target-state pairs.
arXiv Detail & Related papers (2023-02-21T11:32:59Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
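Algorithm unrolling, differentiating through the iterations of a solver, can be made concrete with a toy example. The sketch below is an assumed illustration, not the paper's system: it unrolls K gradient-descent steps on f(x; theta) = (x - theta)^2, backpropagates through the iterates by hand to get d x_K / d theta, and checks the result against the closed form for this linear recurrence.

```python
# Inner problem: minimize f(x; theta) = (x - theta)^2 by gradient descent.
alpha, K, theta = 0.1, 20, 3.0

# Forward pass: run the unrolled solver.
x = 0.0
for _ in range(K):
    x = x - alpha * 2.0 * (x - theta)    # x_{k+1} = (1-2a) x_k + 2a theta

# Backward pass: d x_K / d theta via the chain rule over the recurrence.
# Each unrolled step multiplies the running sensitivity by (1-2a) and
# contributes a direct 2a dependence on theta.
grad = 0.0
carry = 1.0                               # running product d x_K / d x_k
for _ in range(K):
    grad += carry * 2.0 * alpha
    carry *= (1.0 - 2.0 * alpha)

# Closed form: x_K = theta * (1 - (1-2a)^K), so d x_K / d theta is:
closed_form = 1.0 - (1.0 - 2.0 * alpha) ** K
```

Automatic differentiation performs this backward accumulation mechanically; the paper's contribution is analyzing that backward pass to replace the step-by-step unrolling with an efficiently solvable analytical model.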
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in a principled fashion is necessary to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Causal Bayesian Optimization [8.958125394444679]
We study the problem of globally optimizing a variable of interest that is part of a causal model in which a sequence of interventions can be performed.
Our approach combines ideas from causal inference, uncertainty quantification and sequential decision making.
We show how knowing the causal graph significantly improves the ability to reason about optimal decision making strategies.
arXiv Detail & Related papers (2020-05-24T13:20:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.