Deep-learning-based Early Fixing for Gas-lifted Oil Production
Optimization: Supervised and Weakly-supervised Approaches
- URL: http://arxiv.org/abs/2309.00197v1
- Date: Fri, 1 Sep 2023 01:23:28 GMT
- Title: Deep-learning-based Early Fixing for Gas-lifted Oil Production
Optimization: Supervised and Weakly-supervised Approaches
- Authors: Bruno Machado Pacheco and Laio Oriel Seman and Eduardo Camponogara
- Abstract summary: Mixed-Integer Linear Programs (MILPs) are used to maximize oil production from gas-lifted oil wells.
We propose a tailor-made solution based on deep learning models trained to provide values to all integer variables.
- Score: 7.676408770854476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Maximizing oil production from gas-lifted oil wells entails solving
Mixed-Integer Linear Programs (MILPs). As the parameters of the wells, such as
the basic-sediment-to-water ratio and the gas-oil ratio, are updated, the
problems must be repeatedly solved. Instead of relying on costly exact methods
or the accuracy of general approximate methods, in this paper, we propose a
tailor-made heuristic solution based on deep learning models trained to provide
values to all integer variables given varying well parameters, early-fixing the
integer variables and, thus, reducing the original problem to a linear program
(LP). We propose two approaches for developing the learning-based heuristic: a
supervised learning approach, which requires the optimal integer values for
several instances of the original problem in the training set, and a
weakly-supervised learning approach, which requires only solutions for the
early-fixed linear problems with random assignments for the integer variables.
Our results show a runtime reduction of 71.11%. Furthermore, the
weakly-supervised learning model provided significant values for early fixing,
despite never seeing the optimal values during training.
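To make the early-fixing idea concrete, here is a minimal sketch (the network shape, the solver choice, and all names below are illustrative assumptions, not the authors' implementation): a model maps well parameters to the binary variables, which are then fixed so that only an LP in the continuous variables remains.

```python
import torch
import torch.nn as nn
from scipy.optimize import linprog

class EarlyFixingNet(nn.Module):
    """Maps well parameters (e.g., gas-oil ratio, BSW) to probabilities
    that each binary variable of the MILP should be set to 1."""
    def __init__(self, n_params: int, n_binaries: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 64), nn.ReLU(),
            nn.Linear(64, n_binaries), nn.Sigmoid(),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        return self.net(params)

def solve_with_early_fixing(model, params, c_x, A_x, A_z, b):
    """Early-fix the binaries z, then solve the remaining LP:
        min  c_x @ x   s.t.   A_x @ x <= b - A_z @ z_fixed,  x >= 0.
    """
    with torch.no_grad():
        z_prob = model(torch.as_tensor(params, dtype=torch.float32))
    z_fixed = z_prob.round().numpy()   # the early-fixing step
    lp = linprog(c=c_x, A_ub=A_x, b_ub=b - A_z @ z_fixed)
    return z_fixed, lp
```

Under the supervised approach, such a model would be trained against precomputed optimal assignments; under the weakly-supervised approach, only LP solutions for random integer assignments are needed.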
Related papers
- Learning to Optimize for Mixed-Integer Non-linear Programming [20.469394148261838]
Mixed-integer nonlinear programs (MINLPs) arise in various domains, such as energy systems and transportation, but are notoriously difficult to solve.
Recent advances in machine learning have led to remarkable successes in optimization, an area broadly known as learning to optimize.
We propose two differentiable correction layers that generate integer outputs while preserving gradients.
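The summary does not spell out the correction layers; one standard construction that emits integer outputs while keeping gradients usable is a straight-through rounding layer, sketched below purely as a plausible instance.

```python
import torch

class StraightThroughRound(torch.autograd.Function):
    """Round to the nearest integer in the forward pass, but pass the
    gradient through unchanged in the backward pass, so the layers
    upstream remain trainable despite the discrete output."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity gradient: the straight-through trick

def integer_outputs(x: torch.Tensor) -> torch.Tensor:
    return StraightThroughRound.apply(x)
```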
arXiv Detail & Related papers (2024-10-14T20:14:39Z)
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
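As a hedged illustration of using the dual objective as a loss (the quadratic program and all names below are assumptions chosen so the dual has a closed form, not the paper's setting): for min_x 0.5*||x||^2 + c@x subject to A@x <= b, the dual objective is g(lam) = -0.5*||c + A.T@lam||^2 - lam@b, so minimizing -g trains a network that predicts nonnegative duals, and primal estimates are recovered from stationarity.

```python
import torch
import torch.nn as nn

class DualNet(nn.Module):
    """Predicts nonnegative dual estimates from instance features."""
    def __init__(self, n_features: int, n_constraints: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_constraints), nn.Softplus(),  # keeps lam >= 0
        )

    def forward(self, features):
        return self.net(features)

def dual_loss(lam, A, b, c):
    # Negative dual objective: minimizing it maximizes g(lam),
    # emulating Dual Ascent end to end.
    resid = c + A.T @ lam
    return 0.5 * resid @ resid + lam @ b

def primal_from_dual(lam, A, c):
    # KKT stationarity of the quadratic recovers the primal estimate.
    return -(c + A.T @ lam)
```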
arXiv Detail & Related papers (2024-03-06T04:43:22Z)
- How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? [92.90857135952231]
Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities.
We study ICL in one of its simplest setups: pretraining a linearly parameterized single-layer linear attention model for linear regression.
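A minimal sketch of such a setup (an assumed parameterization, not necessarily the paper's exact one): the model's only trainable object is a d-by-d matrix applied between the query and an empirical moment of the context.

```python
import torch
import torch.nn as nn

class LinearAttentionICL(nn.Module):
    """Single-layer linear attention for in-context linear regression:
    predicts y for x_query from context pairs (X_ctx, y_ctx)."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.zeros(dim, dim))  # sole trainable map

    def forward(self, X_ctx, y_ctx, x_query):
        # X_ctx: (n, d) context inputs; y_ctx: (n,) labels; x_query: (d,)
        moment = X_ctx.T @ y_ctx / X_ctx.shape[0]     # empirical X^T y / n
        return x_query @ self.W @ moment              # one preconditioned step
```

Pretraining such a model over many random regression tasks is what the paper quantifies: roughly, W must learn a good preconditioner from finitely many tasks.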
arXiv Detail & Related papers (2023-10-12T15:01:43Z)
- Pruning Pre-trained Language Models with Principled Importance and Self-regularization [18.088550230146247]
Iterative pruning is one of the most effective compression methods for pre-trained language models.
We propose a self-regularization scheme in which the model's predictions are regularized by those of the latest checkpoint as sparsity increases throughout pruning.
Our experiments on natural language understanding, question-answering, named entity recognition, and data-to-text generation with various Transformer-based PLMs show the effectiveness of the approach at various sparsity levels.
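A hedged sketch of the self-regularization idea (implementation details assumed): during iterative magnitude pruning, the latest, denser checkpoint acts as a teacher whose predictions regularize the sparser model.

```python
import copy
import torch
import torch.nn.functional as F

def prune_step(model, sparsity: float):
    # Global magnitude pruning: zero the smallest-magnitude weights.
    flat = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(flat, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())

def refresh_teacher(model):
    teacher = copy.deepcopy(model)   # snapshot the latest checkpoint
    teacher.eval()
    return teacher

def self_regularized_loss(model, teacher, inputs, labels, alpha=0.5):
    logits = model(inputs)
    with torch.no_grad():
        teacher_logits = teacher(inputs)   # latest checkpoint's predictions
    task = F.cross_entropy(logits, labels)
    reg = F.kl_div(F.log_softmax(logits, dim=-1),
                   F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    return (1 - alpha) * task + alpha * reg
```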
arXiv Detail & Related papers (2023-05-21T08:15:12Z)
- Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
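The exact diving rule is not given in this summary; one conceivable sketch consistent with it fixes the binaries on which the predicted assignment and the LP relaxation agree most confidently.

```python
import torch

def diving_fixings(p: torch.Tensor, x_lp: torch.Tensor, k: int):
    """p: model's predicted probability that each binary equals 1;
    x_lp: LP-relaxation values of the same binaries; k: variables to fix."""
    target = (p > 0.5).float()                    # suggested assignment
    confidence = (p - 0.5).abs() + (x_lp - 0.5).abs()
    _, idx = torch.topk(confidence, k)            # most decided variables
    return idx, target[idx]                       # fix these k in the dive
```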
arXiv Detail & Related papers (2023-01-24T12:01:45Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which makes convergence faster, more stable, and more accurate.
Our model's network parameters are reduced to only 37% of the state-of-the-art model's, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
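As a minimal sketch of the expert-driven imitation step (the state representation and scoring model are assumptions): the network scores the remaining jobs at each scheduling step and is trained to match the expert solver's choice.

```python
import torch
import torch.nn.functional as F

def imitation_loss(job_scores: torch.Tensor, expert_choice: torch.Tensor):
    # job_scores: (batch, n_jobs) unnormalized scores over candidate jobs;
    # expert_choice: (batch,) index of the job the expert scheduled next.
    return F.cross_entropy(job_scores, expert_choice)
```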
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- A framework for bilevel optimization that enables stochastic and global variance reduction algorithms [17.12280360174073]
Bilevel optimization is a problem of minimizing a value function which involves the arg-minimum of another function.
We introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time.
We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(\frac{1}{T})$ convergence rate, and that it achieves linear convergence under the Polyak-Łojasiewicz assumption.
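A rough single-loop sketch of the framework's idea (not SABA itself; its variance reduction is omitted): the inner variable z, the linear-system variable v, and the outer variable x each take one step per iteration rather than solving the inner problems to optimality.

```python
import torch

def joint_bilevel_step(x, z, v, f, g, lr=1e-2):
    """One iteration for min_x f(x, z*(x)) with z*(x) = argmin_z g(x, z)."""
    # Inner gradient, kept differentiable for the second derivatives below.
    gz = torch.autograd.grad(g(x, z), z, create_graph=True)[0]
    fx, fz = torch.autograd.grad(f(x, z), (x, z))
    # Hessian-vector products via autograd (v is treated as a constant).
    hzz_v, hxz_v = torch.autograd.grad(gz @ v, (z, x))
    with torch.no_grad():
        z_new = z - lr * gz              # tracks argmin_z g(x, z)
        v_new = v - lr * (hzz_v - fz)    # tracks the linear-system solution
        x_new = x - lr * (fx - hxz_v)    # approximate hypergradient step
    return x_new.requires_grad_(), z_new.requires_grad_(), v_new
```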
arXiv Detail & Related papers (2022-01-31T18:17:25Z)
- Learning to Reformulate for Linear Programming [11.628932152805724]
We propose a reinforcement learning-based reformulation method for linear programming (LP) to improve the performance of the solving process.
We implement the proposed method over two public research LP datasets and one large-scale LP dataset collected from a practical production planning scenario.
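The summary does not describe the paper's MDP, so the sketch below shows just one conceivable setup (all of it assumed): a policy scores LP rows, a row permutation is sampled without replacement, and a REINFORCE-style loss rewards orderings the solver handles faster.

```python
import torch

def plackett_luce_log_prob(scores: torch.Tensor, perm: torch.Tensor):
    # Log-probability of drawing `perm` by repeatedly picking rows with
    # probability proportional to exp(score), without replacement.
    s = scores[perm]
    return torch.stack([s[i] - torch.logsumexp(s[i:], dim=0)
                        for i in range(len(s))]).sum()

def reinforce_loss(scores, perm, reward):
    # reward could be, e.g., the negative solve time of the permuted LP.
    return -reward * plackett_luce_log_prob(scores, perm)
```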
arXiv Detail & Related papers (2022-01-17T04:58:46Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of Automatic Primary Response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
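One hedged reading of the surrogate idea (formulation assumed): replace the high-dimensional decision x with a learned low-dimensional reparameterization x = P @ y and optimize the small problem in y, training P end to end through a differentiable objective.

```python
import torch

def surrogate_objective(P, y, objective):
    return objective(P @ y)          # the small problem, posed in y-space

# Toy usage: a 1000-dimensional decision compressed to 10 dimensions.
P = torch.randn(1000, 10, requires_grad=True)
y = torch.zeros(10, requires_grad=True)
opt = torch.optim.Adam([P, y], lr=1e-2)
quadratic = lambda x: 0.5 * (x @ x) - x.sum()   # stand-in objective
for _ in range(200):
    opt.zero_grad()
    surrogate_objective(P, y, quadratic).backward()
    opt.step()
```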
arXiv Detail & Related papers (2020-06-18T19:11:54Z)