MO-SeGMan: Rearrangement Planning Framework for Multi Objective Sequential and Guided Manipulation in Constrained Environments
- URL: http://arxiv.org/abs/2511.01476v1
- Date: Mon, 03 Nov 2025 11:38:57 GMT
- Title: MO-SeGMan: Rearrangement Planning Framework for Multi Objective Sequential and Guided Manipulation in Constrained Environments
- Authors: Cankut Bora Tuncer, Marc Toussaint, Ozgur S. Oguz
- Abstract summary: We introduce MO-SeGMan, a Sequential and Guided Manipulation planner for highly constrained rearrangement problems. MO-SeGMan generates object placement sequences that minimize both replanning per object and robot travel distance. We show that MO-SeGMan consistently achieves faster solution times and superior solution quality compared to the baselines.
- Score: 14.799742504098603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we introduce MO-SeGMan, a Multi-Objective Sequential and Guided Manipulation planner for highly constrained rearrangement problems. MO-SeGMan generates object placement sequences that minimize both replanning per object and robot travel distance while preserving critical dependency structures with a lazy evaluation method. To address highly cluttered, non-monotone scenarios, we propose a Selective Guided Forward Search (SGFS) that efficiently relocates only critical obstacles, and only to feasible relocation points. Furthermore, we adopt a refinement method for adaptive subgoal selection to eliminate unnecessary pick-and-place actions, thereby improving overall solution quality. Extensive evaluations on nine benchmark rearrangement tasks demonstrate that MO-SeGMan generates feasible motion plans in all cases, consistently achieving faster solution times and superior solution quality compared to the baselines. These results highlight the robustness and scalability of the proposed framework for complex rearrangement planning problems.
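To make the two-objective sequencing idea concrete, below is a minimal sketch of a greedy, dependency-aware placement ordering that trades off robot travel distance against a cheap replanning proxy. This is an illustration of the general idea only, not the paper's algorithm; `Obj`, `deps`, `unblocking_score`, and the weights are invented for the example.

```python
# A minimal sketch of greedy, dependency-aware placement sequencing that
# trades off robot travel distance against a replanning proxy. This is an
# illustration of the two-objective idea only, NOT the paper's algorithm;
# Obj, deps, unblocking_score, and the weights are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    goal: tuple[float, float]                     # target placement position
    deps: set[str] = field(default_factory=set)   # objects that must be placed first

def travel_cost(pos, goal):
    # Euclidean distance as a stand-in for the planner's travel metric.
    return ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2) ** 0.5

def unblocking_score(obj, remaining):
    # How many remaining objects this placement unblocks; placing such
    # objects early is a cheap proxy for reducing replanning later.
    return sum(obj.name in other.deps for other in remaining)

def order_placements(objects, robot_pos, w_travel=0.1, w_replan=1.0):
    placed, sequence, remaining = set(), [], list(objects)
    while remaining:
        # Lazy evaluation: only the dependency sets are checked up front;
        # full motion feasibility would be verified on selection.
        feasible = [o for o in remaining if o.deps <= placed]
        if not feasible:
            raise ValueError("cyclic or unsatisfiable dependencies")
        best = min(feasible,
                   key=lambda o: w_travel * travel_cost(robot_pos, o.goal)
                               - w_replan * unblocking_score(o, remaining))
        sequence.append(best.name)
        placed.add(best.name)
        robot_pos = best.goal
        remaining.remove(best)
    return sequence

objs = [Obj("a", (0, 1)), Obj("b", (2, 0), deps={"a"}), Obj("c", (1, 1))]
print(order_placements(objs, robot_pos=(0, 0)))  # -> ['a', 'c', 'b']
```

In the actual framework the replanning and travel objectives are handled by the planner itself; the weighted sum above is just one simple way to combine them for illustration.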
Related papers
- HiPlan: Hierarchical Planning for LLM-Based Agents with Adaptive Global-Local Guidance [11.621973074884002]
HiPlan is a hierarchical planning framework for large language model (LLM)-based agents. It decomposes complex tasks into milestone action guides for general direction and step-wise hints for detailed actions. In the offline phase, we construct a milestone library from expert demonstrations, enabling structured experience reuse. In the execution phase, trajectory segments from past milestones are dynamically adapted to generate step-wise hints.
arXiv Detail & Related papers (2025-08-26T14:37:48Z)
- Recursive Reward Aggregation [60.51668865089082]
We propose an alternative approach for flexible behavior alignment that eliminates the need to modify the reward function. By introducing an algebraic perspective on Markov decision processes (MDPs), we show that the Bellman equations naturally emerge from the generation and aggregation of rewards. Our approach applies to both deterministic and stochastic settings and seamlessly integrates with value-based and actor-critic algorithms.
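As a worked toy illustration of that idea, the sketch below runs value iteration on a tiny deterministic MDP where the usual discounted sum in the Bellman backup is swapped for a pluggable recursive aggregation; the transition table and function names are invented here and are not from the paper.

```python
# A worked toy example (not the paper's code): value iteration where the
# usual discounted sum in the Bellman backup is swapped for a pluggable
# recursive aggregation. The 3-state MDP below is invented for illustration.
GAMMA = 0.9

# state -> action -> (next_state, reward); deterministic for simplicity
MDP = {
    "s0": {"go": ("s1", 1.0), "stay": ("s0", 0.0)},
    "s1": {"go": ("s2", 5.0), "stay": ("s1", 0.0)},
    "s2": {"stay": ("s2", 0.0)},
}

def discounted_sum(r, v_next):
    # Standard aggregation: the classic Bellman backup.
    return r + GAMMA * v_next

def max_reward(r, v_next):
    # Alternative aggregation: value = largest single reward along the path.
    return max(r, v_next)

def value_iteration(agg, iters=100):
    v = {s: 0.0 for s in MDP}
    for _ in range(iters):
        v = {s: max(agg(r, v[nxt]) for nxt, r in MDP[s].values()) for s in MDP}
    return v

print(value_iteration(discounted_sum))  # {'s0': 5.5, 's1': 5.0, 's2': 0.0}
print(value_iteration(max_reward))      # {'s0': 5.0, 's1': 5.0, 's2': 0.0}
```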
arXiv Detail & Related papers (2025-07-11T12:37:20Z)
- PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving [89.60370366013142]
We propose PlanGEN, a model-agnostic and easily scalable agent framework with three key components: constraint, verification, and selection agents. Specifically, our approach applies constraint-guided iterative verification to enhance the performance of inference-time algorithms.
arXiv Detail & Related papers (2025-02-22T06:21:56Z)
- Hierarchical Object-Oriented POMDP Planning for Object Rearrangement [19.62753215239688]
Current object rearrangement solutions, primarily based on Reinforcement Learning or hand-coded planning methods, often lack adaptability to diverse challenges. To address this limitation, we introduce a novel Hierarchical Object-Oriented Partially Observed Markov Decision Process (HOO-POMDP) planning approach. We present an online planning framework and a new benchmark dataset for solving multi-object rearrangement problems in partially observable, multi-room environments.
arXiv Detail & Related papers (2024-12-02T10:19:36Z)
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem whose discrete task-level actions must be realized by continuous, collision-free robot motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Weighted strategies to guide a multi-objective evolutionary algorithm for multi-UAV mission planning [12.97430155510359]
This work proposes a weighted random generator for the creation and mutation of new individuals.
The main objective of this work is to accelerate the convergence of the MOEA solver for multi-UAV mission planning.
arXiv Detail & Related papers (2024-02-28T23:05:27Z)
- MANER: Multi-Agent Neural Rearrangement Planning of Objects in Cluttered Environments [8.15681999722805]
This paper proposes a learning-based framework for multi-agent object rearrangement planning.
It addresses the challenges of task sequencing and path planning in complex environments.
arXiv Detail & Related papers (2023-06-10T23:53:28Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for topological MDPs (TMDPs), obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Goal Kernel Planning: Linearly-Solvable Non-Markovian Policies for Logical Tasks with Goal-Conditioned Options [54.40780660868349]
We introduce a compositional framework called Linearly-Solvable Goal Kernel Dynamic Programming (LS-GKDP). LS-GKDP combines the Linearly-Solvable Markov Decision Process (LMDP) formalism with the Options Framework of Reinforcement Learning. We show how an LMDP with a goal kernel enables the efficient optimization of meta-policies in a lower-dimensional subspace defined by the task grounding.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
- Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning [78.65083326918351]
We consider alternatives to an implicit sequential planning assumption.
We propose Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS) for approximating the optimal plan.
We show that this algorithmic flexibility over planning order leads to improved results in navigation tasks in grid-worlds.
arXiv Detail & Related papers (2020-04-23T18:08:58Z)
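To make the divide-and-conquer planning idea above concrete, here is a toy sketch that recursively splits a start-goal query at candidate subgoals on a small graph. It omits the tree search and learned priors of DC-MCTS; the graph and all names are invented for this example.

```python
# A toy sketch of the divide-and-conquer idea behind DC-MCTS: recursively
# split a start-goal query at candidate subgoals instead of planning left
# to right. It omits the tree search and learned priors; the graph and
# all names are invented for this example.
from functools import lru_cache

EDGES = {  # directed edges with costs; missing pairs are unreachable directly
    ("A", "B"): 4.0, ("A", "C"): 1.0, ("C", "B"): 1.0,
    ("B", "D"): 1.0, ("C", "D"): 5.0,
}
NODES = frozenset({"A", "B", "C", "D"})

@lru_cache(maxsize=None)
def plan(start, goal, depth=3):
    if start == goal:
        return 0.0, (start,)
    # Leaf plan: go directly, if an edge exists.
    best_cost, best_path = EDGES.get((start, goal), float("inf")), (start, goal)
    if depth > 0:
        for mid in NODES - {start, goal}:
            # Divide: solve the two halves independently, then conquer.
            c1, p1 = plan(start, mid, depth - 1)
            c2, p2 = plan(mid, goal, depth - 1)
            if c1 + c2 < best_cost:
                best_cost, best_path = c1 + c2, p1 + p2[1:]
    return best_cost, best_path

print(plan("A", "D"))  # -> (3.0, ('A', 'C', 'B', 'D'))
```

Planning over split points rather than strictly left to right is what lets this kind of search discover the cheap middle segment first, which is the flexibility the grid-world results above attribute to DC-MCTS.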