Co-Layout: LLM-driven Co-optimization for Interior Layout
- URL: http://arxiv.org/abs/2511.12474v1
- Date: Sun, 16 Nov 2025 06:20:55 GMT
- Title: Co-Layout: LLM-driven Co-optimization for Interior Layout
- Authors: Chucheng Xiang, Ruchao Bao, Biyin Feng, Wenzheng Wu, Zhongyuan Liu, Yirui Guan, Ligang Liu
- Abstract summary: We present a framework for automated interior design that combines large language models (LLMs) with grid-based integer programming to jointly optimize room layout and furniture placement. Our formulation accounts for key design requirements, including corridor connectivity, room accessibility, spatial exclusivity, and user-specified preferences.
- Score: 8.182031753612875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel framework for automated interior design that combines large language models (LLMs) with grid-based integer programming to jointly optimize room layout and furniture placement. Given a textual prompt, the LLM-driven agent workflow extracts structured design constraints related to room configurations and furniture arrangements. These constraints are encoded into a unified grid-based representation inspired by "Modulor". Our formulation accounts for key design requirements, including corridor connectivity, room accessibility, spatial exclusivity, and user-specified preferences. To improve computational efficiency, we adopt a coarse-to-fine optimization strategy that begins with a low-resolution grid to solve a simplified problem and guides the solution at the full resolution. Experimental results across diverse scenarios demonstrate that our joint optimization approach significantly outperforms existing two-stage design pipelines in solution quality, and achieves notable computational efficiency through the coarse-to-fine strategy.
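The grid-based placement and coarse-to-fine strategy can be illustrated with a toy sketch (this is not the paper's actual solver; the `place_rooms` helper, the two rooms, and the bounding-box-perimeter objective are illustrative assumptions): rooms are placed on a grid without overlap, the problem is first solved on a coarse stride-2 grid, and the full-resolution search is then restricted to a neighborhood of the coarse solution.

```python
from itertools import product

def place_rooms(grid_w, grid_h, rooms, step=1, seeds=None, radius=0):
    """Exhaustive grid search: place axis-aligned rooms (list of (w, h))
    on a grid_w x grid_h grid without overlap, minimizing the perimeter
    of the occupied bounding box (a stand-in for a real design objective).
    `step` sets the grid resolution; if `seeds` is given, each room's
    origin is restricted to within `radius` cells of its seed."""
    def candidates(i, w, h):
        for x, y in product(range(0, grid_w - w + 1, step),
                            range(0, grid_h - h + 1, step)):
            if seeds is None or (abs(x - seeds[i][0]) <= radius
                                 and abs(y - seeds[i][1]) <= radius):
                yield (x, y)

    def overlap(a, b):
        (ax, ay, aw, ah), (bx, by, bw, bh) = a, b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    best, best_cost = None, float("inf")
    pools = [list(candidates(i, w, h)) for i, (w, h) in enumerate(rooms)]
    for origins in product(*pools):
        rects = [(x, y, w, h) for (x, y), (w, h) in zip(origins, rooms)]
        if any(overlap(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects))):
            continue  # spatial exclusivity: rooms must not intersect
        x0 = min(x for x, _, _, _ in rects)
        y0 = min(y for _, y, _, _ in rects)
        x1 = max(x + w for x, _, w, _ in rects)
        y1 = max(y + h for _, y, _, h in rects)
        cost = 2 * ((x1 - x0) + (y1 - y0))  # bounding-box perimeter
        if cost < best_cost:
            best, best_cost = rects, cost
    return best, best_cost

rooms = [(3, 2), (2, 2)]                      # two hypothetical rooms (w, h)
coarse, _ = place_rooms(8, 8, rooms, step=2)  # coarse pass: stride-2 grid
seeds = [(x, y) for x, y, _, _ in coarse]
fine, cost = place_rooms(8, 8, rooms, step=1, seeds=seeds, radius=2)
print(cost)  # best bounding-box perimeter found at full resolution
```

Because the fine pass always contains the coarse solution among its candidates, its result can only match or improve the coarse objective while searching far fewer placements than a full-resolution exhaustive sweep, which is the essence of the coarse-to-fine speedup.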
Related papers
- BAMBO: Construct Ability and Efficiency LLM Pareto Set via Bayesian Adaptive Multi-objective Block-wise Optimization [4.196004665145396]
BAMBO (Bayesian Adaptive Multi-objective Block-wise Optimization) is a novel framework that automatically constructs an ability-efficiency Pareto set of large language models (LLMs). Formulated as a 1D clustering problem, this strategy leverages a dynamic programming approach to optimally balance intra-block volume and inter-block information distribution.
arXiv Detail & Related papers (2025-12-10T15:32:56Z)
- DisCo-Layout: Disentangling and Coordinating Semantic and Physical Refinement in a Multi-Agent Framework for 3D Indoor Layout Synthesis [76.7196710324494]
3D indoor layout synthesis is crucial for creating virtual environments. DisCo-Layout is a novel framework that disentangles and coordinates physical and semantic refinement.
arXiv Detail & Related papers (2025-10-02T16:30:37Z)
- Learn to Relax with Large Language Models: Solving Nonlinear Combinatorial Optimization Problems via Bidirectional Coevolution [10.160534429260228]
We introduce the first end-to-end Automated Constraint Optimization (AutoCO) method, which revolutionizes the resolution of nonlinear combinatorial optimization problems (NCOPs) through learning to relax with code.
arXiv Detail & Related papers (2025-09-16T03:59:51Z)
- A Markovian Framing of WaveFunctionCollapse for Procedurally Generating Aesthetically Complex Environments [5.114029940159893]
Procedural content generation often requires satisfying both designer-specified objectives and adjacency constraints implicitly imposed by the underlying tile set. We reformulate WaveFunctionCollapse (WFC) as a Markov Decision Process (MDP). We find that joint optimization not only struggles as task complexity increases, but consistently underperforms relative to optimization over the WFC-MDP.
arXiv Detail & Related papers (2025-09-12T01:51:01Z)
- LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization [54.35609820607923]
Large language models (LLMs) offer new opportunities for assisting with algorithm design. We propose LLM4CMO, a novel constrained multiobjective evolutionary algorithm (CMOEA) based on a dual-population, two-stage framework. LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z)
- Leveraging Importance Sampling to Detach Alignment Modules from Large Language Models [48.15777554876988]
Traditional alignment methods often require retraining large pretrained models. We propose a novel Residual Alignment Model (RAM) that formalizes the alignment process as a type of importance sampling. We develop a resampling algorithm with iterative token-level decoding to address the common first-token latency issue in comparable methods.
arXiv Detail & Related papers (2025-05-26T08:53:02Z)
- Combinatorial Optimization via LLM-driven Iterated Fine-tuning [47.66752049943335]
We present a novel way to integrate flexible, context-dependent constraints into optimization by leveraging Large Language Models (LLMs). Our framework balances local constraints with rigorous global optimization more effectively than baseline sampling methods.
arXiv Detail & Related papers (2025-03-10T04:58:18Z)
- FlairGPT: Repurposing LLMs for Interior Designs [29.903931425159925]
We investigate whether large language models (LLMs) can be directly utilized for interior design. By systematically probing LLMs, we can reliably generate a list of objects along with relevant constraints. We translate this information into a design layout graph, which is then solved using an off-the-shelf constrained optimization setup.
arXiv Detail & Related papers (2025-01-08T18:01:49Z)
- Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design [59.00758127310582]
We propose Read-ME, a novel framework that transforms pre-trained dense LLMs into smaller MoE models.
Our approach employs activation sparsity to extract experts.
Read-ME outperforms other popular open-source dense models of similar scales.
arXiv Detail & Related papers (2024-10-24T19:48:51Z)
- PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation. Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts. We develop an automated text-to-poster system that generates editable posters based on users' design intentions.
arXiv Detail & Related papers (2024-06-05T03:05:52Z)
- Heuristic Solution to Joint Deployment and Beamforming Design for STAR-RIS Aided Networks [23.4781981471893]
This paper emphasizes the joint optimization of the location and orientation of STAR-RIS.
We consider a sum rate problem with joint optimization and hybrid beamforming design.
Numerical results demonstrate the substantial performance gains achievable through optimal deployment design.
arXiv Detail & Related papers (2024-04-14T05:45:41Z)
- Robust Topology Optimization Using Multi-Fidelity Variational Autoencoders [1.0124625066746595]
A robust topology optimization (RTO) problem identifies a design with the best average performance.
A neural network method is proposed that offers computational efficiency.
Numerical application of the method is shown on the robust design of an L-bracket structure with a single point load as well as multiple point loads.
arXiv Detail & Related papers (2021-07-19T20:40:51Z)
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.