GenCtrl -- A Formal Controllability Toolkit for Generative Models
- URL: http://arxiv.org/abs/2601.05637v1
- Date: Fri, 09 Jan 2026 08:50:02 GMT
- Title: GenCtrl -- A Formal Controllability Toolkit for Generative Models
- Authors: Emily Cheng, Carmen Amo Alonso, Federico Danieli, Arno Blaas, Luca Zappella, Pau Rodriguez, Xavier Suau
- Abstract summary: We propose a novel algorithm to estimate the controllable sets of models in a dialogue setting. We show that model controllability is surprisingly fragile and highly dependent on the experimental setting.
- Score: 10.810813965290707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As generative models become ubiquitous, there is a critical need for fine-grained control over the generation process. Yet, while controlled generation methods from prompting to fine-tuning proliferate, a fundamental question remains unanswered: are these models truly controllable in the first place? In this work, we provide a theoretical framework to formally answer this question. Framing human-model interaction as a control process, we propose a novel algorithm to estimate the controllable sets of models in a dialogue setting. Notably, we provide formal guarantees on the estimation error as a function of sample complexity: we derive probably-approximately correct bounds for controllable set estimates that are distribution-free, employ no assumptions except for output boundedness, and work for any black-box nonlinear control system (i.e., any generative model). We empirically demonstrate the theoretical framework on different tasks in controlling dialogue processes, for both language models and text-to-image generation. Our results show that model controllability is surprisingly fragile and highly dependent on the experimental setting. This highlights the need for rigorous controllability analysis, shifting the focus from simply attempting control to first understanding its fundamental limits.
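The sampling-based estimation the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual algorithm: `sample_complexity` applies the standard distribution-free Hoeffding bound, and the toy `model`, `prompts`, `targets`, and `is_close` predicate are invented for demonstration only.

```python
import math
import random

def sample_complexity(eps, delta):
    # Hoeffding-style distribution-free bound: with this many samples, the
    # empirical reach frequency is within eps of the true probability
    # with probability at least 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_controllable_set(model, prompts, targets, is_close, eps=0.05, delta=0.05):
    """Estimate which targets the black-box model can be steered to reach."""
    n = sample_complexity(eps, delta)
    reach_freq = {t: 0 for t in targets}
    for _ in range(n):
        prompt = random.choice(prompts)   # random control input
        output = model(prompt)            # black-box generative model
        for t in targets:
            if is_close(output, t):
                reach_freq[t] += 1
    # Keep targets whose empirical reach frequency clears the tolerance eps.
    return {t for t, k in reach_freq.items() if k / n > eps}

random.seed(0)  # deterministic demo
# Toy black-box "model": maps a prompt to a sentiment label.
model = lambda p: "positive" if "happy" in p else "negative"
prompts = ["be happy", "be sad"]
targets = ["positive", "negative", "neutral"]
ctrl = estimate_controllable_set(model, prompts, targets, lambda o, t: o == t)
```

Here `ctrl` excludes "neutral", since no prompt ever reaches it: the estimated controllable set exposes exactly the outputs the interaction can steer toward.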
Related papers
- ScaleWeaver: Weaving Efficient Controllable T2I Generation with Multi-Scale Reference Attention [86.93601565563954]
ScaleWeaver is a framework designed to achieve high-fidelity, controllable generation on top of advanced visual autoregressive (VAR) models. The proposed Reference Attention module discards unnecessary image→condition attention, reducing computational cost. Experiments show that ScaleWeaver delivers high-quality generation and precise control while attaining superior efficiency over diffusion-based methods.
arXiv Detail & Related papers (2025-10-16T17:00:59Z) - Modeling and Control of Deep Sign-Definite Dynamics with Application to Hybrid Powertrain Control [1.9116784879310027]
Deep learning is increasingly used to model physical systems, or systems where first-principles modeling is difficult. We develop a new type of deep learning model that satisfies constraints on Jacobian entries, such as monotonicity and sign-definiteness. On a hybrid two-tank system, the proposed approach yields better control inputs than existing methods.
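One standard way to enforce a sign-definite Jacobian entry (the abstract does not give the paper's exact construction, so this is an assumed illustration) is to reparameterize weights to be positive and compose only monotone activations, which forces the network output to be nondecreasing in its input:

```python
import numpy as np

def softplus(z):
    # Smooth map to strictly positive values; numerically stable via logaddexp.
    return np.logaddexp(0.0, z)

def monotone_mlp(x, raw_w1, b1, raw_w2, b2):
    # Pass raw weights through softplus so every effective weight is positive;
    # composed with monotone activations, this guarantees dy/dx >= 0 everywhere.
    w1, w2 = softplus(raw_w1), softplus(raw_w2)
    h = np.tanh(x * w1 + b1)      # tanh is monotone increasing
    return float(h @ w2 + b2)

rng = np.random.default_rng(0)
raw_w1, b1 = rng.normal(size=8), rng.normal(size=8)
raw_w2, b2 = rng.normal(size=8), 0.0

xs = np.linspace(-3, 3, 50)
ys = [monotone_mlp(x, raw_w1, b1, raw_w2, b2) for x in xs]
assert all(b >= a for a, b in zip(ys, ys[1:]))  # output is nondecreasing in x
```

The constraint holds by construction for any weight values, so training never has to trade accuracy against feasibility of the sign constraint.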
arXiv Detail & Related papers (2025-09-24T08:15:26Z) - Neural Internal Model Control: Learning a Robust Control Policy via Predictive Error Feedback [16.46487826869775]
We propose a novel framework, Neural Internal Model Control, which integrates model-based control with RL-based control to enhance robustness. Our framework streamlines the predictive model by applying Newton-Euler equations for rigid-body dynamics, eliminating the need to capture complex high-dimensional nonlinearities. We demonstrate the effectiveness of our framework on both quadrotors and quadrupedal robots, achieving superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-11-20T07:07:42Z) - CAR: Controllable Autoregressive Modeling for Visual Generation [100.33455832783416]
Controllable AutoRegressive Modeling (CAR) is a novel, plug-and-play framework that integrates conditional control into multi-scale latent variable modeling.
CAR progressively refines and captures control representations, which are injected into each autoregressive step of the pre-trained model to guide the generation process.
Our approach demonstrates excellent controllability across various types of conditions and delivers higher image quality compared to previous methods.
arXiv Detail & Related papers (2024-10-07T00:55:42Z) - Automatically Adaptive Conformal Risk Control [49.95190019041905]
We propose a methodology for achieving approximate conditional control of statistical risks by adapting to the difficulty of test samples. Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning.
arXiv Detail & Related papers (2024-06-25T08:29:32Z) - Controllability-Constrained Deep Network Models for Enhanced Control of Dynamical Systems [4.948174943314265]
Control of a dynamical system without the knowledge of dynamics is an important and challenging task.
Modern machine learning approaches, such as deep neural networks (DNNs), allow for the estimation of a dynamics model from control inputs and corresponding state observation outputs.
We propose a control-theoretical method that explicitly enhances models estimated from data with controllability.
arXiv Detail & Related papers (2023-11-11T00:04:26Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
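The idea of state variables that are irrelevant for control can be illustrated structurally: a state that no directed path from the input reaches through the sparsity pattern is structurally uncontrollable. This is a hypothetical sketch of that graph-reachability view, not the paper's characterization, which concerns optimal control:

```python
def input_reachable_states(A_pattern, B_pattern):
    """States reachable from the control input through the sparsity pattern.

    A_pattern[i][j] = True means state x_j influences x_i; B_pattern[i] = True
    means the input enters x_i directly. States outside the returned set are
    structurally uncontrollable (no input influence can propagate to them).
    """
    n = len(A_pattern)
    reached = {i for i in range(n) if B_pattern[i]}
    frontier = list(reached)
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if A_pattern[i][j] and i not in reached:
                reached.add(i)
                frontier.append(i)
    return reached

# x0 is driven by the input, x1 by x0; x2 evolves independently of both.
A = [[False, False, False],
     [True,  False, False],
     [False, False, True]]
B = [True, False, False]
assert input_reachable_states(A, B) == {0, 1}
```

In this toy pattern, x2 can be dropped from the control design without loss, mirroring the intuition that sparsity identifies irrelevant state variables.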
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control [67.52000805944924]
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
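The reframing of risk control as multiple hypothesis testing can be sketched with Hoeffding p-values and a Bonferroni correction. The function names and the numbers below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def hoeffding_pvalue(mean_loss, n, alpha):
    # p-value for the null "true risk > alpha", given the empirical mean loss
    # of a bounded-in-[0,1] loss over n calibration samples.
    if mean_loss >= alpha:
        return 1.0
    return math.exp(-2.0 * n * (alpha - mean_loss) ** 2)

def ltt_select(lambdas, losses_per_lambda, alpha=0.1, delta=0.05):
    """Return the parameters whose risk-control null is rejected (Bonferroni)."""
    m = len(lambdas)
    valid = []
    for lam, losses in zip(lambdas, losses_per_lambda):
        n = len(losses)
        p = hoeffding_pvalue(sum(losses) / n, n, alpha)
        if p <= delta / m:          # Bonferroni-corrected rejection threshold
            valid.append(lam)
    return valid

# Two candidate thresholds with 1000 calibration losses each.
lam_valid = ltt_select([0.5, 0.9],
                       [[1.0] * 200 + [0.0] * 800,   # lam=0.5: risk ~0.20
                        [1.0] * 20 + [0.0] * 980])   # lam=0.9: risk ~0.02
assert lam_valid == [0.9]
```

Only parameters whose null hypothesis ("risk exceeds alpha") is rejected are returned, so any selected parameter controls risk at level alpha with probability at least 1 - delta.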
arXiv Detail & Related papers (2021-10-03T17:42:03Z) - Posterior Control of Blackbox Generation [126.33511630879713]
We consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach.
We find that this method improves over standard benchmarks, while also providing fine-grained control.
arXiv Detail & Related papers (2020-05-10T03:22:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site. This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.