Investigating Constraint Relationship in Evolutionary Many-Constraint
Optimization
- URL: http://arxiv.org/abs/2010.04445v1
- Date: Fri, 9 Oct 2020 09:15:08 GMT
- Title: Investigating Constraint Relationship in Evolutionary Many-Constraint
Optimization
- Authors: Mengjun Ming, Rui Wang, Tao Zhang
- Abstract summary: In a conflicting relationship, the functional value of one constraint increases as the value in another constraint decreases.
In an independent relationship, the adjustment to one constraint never affects the adjustment to the other.
The transitivity of the relationships is further discussed with the aim of determining the relationship in a new pair of constraints.
- Score: 7.383262009823001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper contributes to the treatment of extensive constraints in
evolutionary many-constraint optimization through consideration of the
relationships between pair-wise constraints. In a conflicting relationship, the
functional value of one constraint increases as the value in another constraint
decreases. In a harmonious relationship, the improvement in one constraint is
rewarded with simultaneous improvement in the other constraint. In an
independent relationship, the adjustment to one constraint never affects the
adjustment to the other. Based on the different features, methods for
identifying constraint relationships are discussed, helping to simplify
many-constraint optimization problems (MCOPs). Additionally, the transitivity
of the relationships is further discussed with the aim of determining the
relationship in a new pair of constraints.
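The three relationship types described in the abstract can be illustrated with a small sketch. This is not the paper's identification method; it is a minimal, assumed approach that samples random solutions and correlates the constraint-violation values of a pair of constraints: strong negative correlation suggests a conflicting pair, strong positive correlation a harmonious pair, and near-zero correlation an independent pair. The names `classify_pair`, `pearson`, and the 0.5 threshold are illustrative choices, not from the paper.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def classify_pair(c1, c2, sample, n=1000, threshold=0.5):
    """Classify the relationship between two constraint-violation functions.

    c1, c2 map a solution to a non-negative violation value (0 = satisfied);
    sample() draws a random solution from the decision space.
    """
    solutions = [sample() for _ in range(n)]
    v1 = [c1(x) for x in solutions]
    v2 = [c2(x) for x in solutions]
    r = pearson(v1, v2)
    if r <= -threshold:
        return "conflicting"
    if r >= threshold:
        return "harmonious"
    return "independent"

# Toy 1-D example: g1 and g2 are harmonious (both violations grow with x),
# while g1 and g3 pull in opposite directions and so conflict.
g1 = lambda x: max(0.0, x - 0.2)   # violated when x > 0.2
g2 = lambda x: max(0.0, x - 0.4)   # violated when x > 0.4
g3 = lambda x: max(0.0, 0.6 - x)   # violated when x < 0.6

random.seed(0)
sample = lambda: random.uniform(0.0, 1.0)
print(classify_pair(g1, g2, sample))   # harmonious
print(classify_pair(g1, g3, sample))   # conflicting
```

A pair classified as harmonious can be treated as one constraint during search, which is the kind of simplification of MCOPs the abstract refers to.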
Related papers
- Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting [7.962140902232628]
Spatial relation hallucinations pose a persistent challenge in large vision-language models (LVLMs).
We propose a constraint-aware prompting framework designed to reduce spatial relation hallucinations.
arXiv Detail & Related papers (2025-02-12T11:32:19Z)
- A space-decoupling framework for optimization on bounded-rank matrices with orthogonally invariant constraints [4.917399520581689]
We propose a space-decoupling framework for optimization on bounded-rank matrices.
We show that the tangent cone of coupled constraints is the intersection of tangent cones of each constraint.
We unveil the equivalence between the reformulated problem and the original problem.
arXiv Detail & Related papers (2025-01-23T16:54:03Z)
- A Dual Perspective of Reinforcement Learning for Imposing Policy Constraints [0.0]
We use a generic primal-dual framework for value-based and actor-critic reinforcement learning methods.
The obtained dual formulations turn out to be especially useful for imposing additional constraints on the learned policy.
A practical algorithm is derived that supports various combinations of policy constraints that are automatically handled throughout training.
arXiv Detail & Related papers (2024-04-25T09:50:57Z)
- Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning [132.7040981721302]
We study the constrained convex Markov decision process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z)
- Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which the constraint specifications are not identified in advance.
It is challenging to identify appropriate constraint specifications due to the undefined trade-off between the reward training objective and the constraint satisfaction.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
arXiv Detail & Related papers (2023-12-28T18:28:23Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Algorithm for Constrained Markov Decision Process with Linear Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed with the integration of two ingredients: entropy regularized policy and Vaidya's dual.
The proposed approach is shown to converge (with linear rate) to the global optimum.
arXiv Detail & Related papers (2022-06-03T16:26:38Z)
- Query Answering with Transitive and Linear-Ordered Data [7.879958190837517]
We consider entailment problems involving powerful constraint languages such as frontier-guarded existential rules.
We show that slight changes in these conditions lead to undecidability.
arXiv Detail & Related papers (2022-02-17T10:00:08Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.