FaçAID: A Transformer Model for Neuro-Symbolic Facade Reconstruction
- URL: http://arxiv.org/abs/2406.01829v2
- Date: Fri, 13 Sep 2024 09:39:53 GMT
- Title: FaçAID: A Transformer Model for Neuro-Symbolic Facade Reconstruction
- Authors: Aleksander Plocharski, Jan Swidzinski, Joanna Porter-Sobieraj, Przemyslaw Musialski
- Abstract summary: We introduce a neuro-symbolic transformer-based model that converts flat, segmented facade structures into procedural definitions using a custom-designed split grammar.
This dataset is used to train our transformer model to convert segmented, flat facades into the procedural language of our grammar.
- Score: 43.58572466488356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a neuro-symbolic transformer-based model that converts flat, segmented facade structures into procedural definitions using a custom-designed split grammar. To facilitate this, we first develop a semi-complex split grammar tailored for architectural facades and then generate a dataset comprising facades alongside their corresponding procedural representations. This dataset is used to train our transformer model to convert segmented, flat facades into the procedural language of our grammar. During inference, the model applies this learned transformation to new facade segmentations, providing a procedural representation that users can adjust to generate varied facade designs. This method not only automates the conversion of static facade images into dynamic, editable procedural formats but also enhances design flexibility, allowing for easy modifications.
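The paper's grammar is custom-designed and not reproduced here; as a rough illustration of how a split grammar expresses a facade procedurally, consider the toy sketch below. All rule names, labels, and parameters are hypothetical, not the paper's actual grammar: non-terminal regions are split recursively into labeled sub-regions until only terminal symbols (e.g. "window", "wall") remain.

```python
# A minimal, hypothetical split-grammar interpreter (illustration only; the
# paper's grammar is more complex). Regions are (x, y, w, h) rectangles.

def split_h(region, widths):
    """Split a region horizontally into columns of the given widths."""
    x, y, w, h = region
    out, cx = [], x
    for cw in widths:
        out.append((cx, y, cw, h))
        cx += cw
    return out

def split_v(region, heights):
    """Split a region vertically into rows of the given heights."""
    x, y, w, h = region
    out, cy = [], y
    for ch in heights:
        out.append((x, cy, w, ch))
        cy += ch
    return out

def derive(facade_w, facade_h, n_floors, windows_per_floor):
    """Derive terminal regions: facade -> floors -> alternating wall/window bays."""
    terminals = []
    floors = split_v((0, 0, facade_w, facade_h),
                     [facade_h // n_floors] * n_floors)
    for floor in floors:
        bay_w = facade_w // (2 * windows_per_floor)
        bays = split_h(floor, [bay_w] * (2 * windows_per_floor))
        for i, bay in enumerate(bays):
            label = "window" if i % 2 == 1 else "wall"
            terminals.append((label, bay))
    return terminals

layout = derive(facade_w=80, facade_h=30, n_floors=3, windows_per_floor=4)
print(len(layout))   # 24 terminal regions (3 floors x 8 bays)
print(layout[0])     # ('wall', (0, 0, 10, 10))
```

Editing the procedural parameters (floor count, window count) and re-deriving is what makes such a representation adjustable, in contrast to a static segmentation image.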
Related papers
- Pro-DG: Procedural Diffusion Guidance for Architectural Facade Generation [46.76076836382595]
Pro-DG is a framework for procedurally controllable photo-realistic facade generation.
We reconstruct its facade layout using grammar rules, then edit that structure through user-defined transformations.
arXiv Detail & Related papers (2025-04-02T10:16:19Z)
- Synthesizing 3D Abstractions by Inverting Procedural Buildings with Transformers [2.199128905898291]
We generate abstractions of buildings by learning to invert procedural models.
Our approach achieves good reconstruction accuracy in terms of geometry and structure, as well as structurally consistent inpainting.
arXiv Detail & Related papers (2025-01-28T16:09:34Z)
- ParGAN: Learning Real Parametrizable Transformations [50.51405390150066]
We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations.
The proposed generator takes as input both an image and a parametrization of the transformation.
We show how, with disjoint image domains and no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
arXiv Detail & Related papers (2022-11-09T16:16:06Z)
- Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages [120.74406230847904]
TP-Transformer augments the traditional Transformer architecture to include an additional component to represent structure.
The second method imbues structure at the data level by segmenting the data with morphological tokenization.
We find that each of these two approaches allows the network to achieve better performance, but this improvement is dependent on the size of the dataset.
arXiv Detail & Related papers (2022-08-11T22:42:24Z)
- N-Grammer: Augmenting Transformers with latent n-grams [35.39961549040385]
We propose a simple yet effective modification to the Transformer architecture inspired by the literature in statistical language modeling, by augmenting the model with n-grams that are constructed from a discrete latent representation of the text sequence.
We evaluate our model, the N-Grammer, on language modeling on the C4 dataset as well as text classification on the SuperGLUE dataset, and find that it outperforms several strong baselines such as the Transformer and the Primer.
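The core augmentation described above can be sketched in a few lines. This is a hedged illustration, not N-Grammer's actual implementation: the hash function, vocabulary sizes, and the choice to add bigram embeddings directly to token embeddings are all assumptions made for the sketch.

```python
# Hypothetical sketch of the n-gram augmentation idea: hash consecutive token
# pairs into a fixed bigram vocabulary and add their embeddings to the token
# embeddings, giving the model direct access to local n-gram statistics.
import numpy as np

VOCAB, NGRAM_VOCAB, DIM = 100, 512, 16
rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(VOCAB, DIM))      # per-token embedding table
bigram_emb = rng.normal(size=(NGRAM_VOCAB, DIM))  # per-bigram embedding table

def bigram_ids(tokens):
    # hash each consecutive token pair into the bigram vocabulary
    return [(a * 31 + b) % NGRAM_VOCAB for a, b in zip(tokens, tokens[1:])]

def augment(tokens):
    x = tok_emb[tokens]                      # (T, DIM) token embeddings (copy)
    ng = bigram_emb[bigram_ids(tokens)]      # (T-1, DIM) bigram embeddings
    x[1:] += ng                              # fuse bigram info at each position
    return x

out = augment([3, 7, 7, 42])
print(out.shape)  # (4, 16)
```

The augmented sequence would then be fed to a standard Transformer stack in place of the plain token embeddings.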
arXiv Detail & Related papers (2022-07-13T17:18:02Z)
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale [31.293175512404172]
We introduce Transformer Grammars -- a class of Transformer language models that combine the expressive power, scalability, and strong performance of Transformers with syntactic inductive biases.
We find that Transformer Grammars outperform various strong baselines on multiple syntax-sensitive language modeling evaluation metrics.
arXiv Detail & Related papers (2022-03-01T17:22:31Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
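The underlying principle of frame averaging (group averaging, of which frame averaging is an efficient restriction) can be shown concretely. This is a hedged sketch, not the paper's method: the base function, the finite rotation group, and all names are assumptions made for the illustration.

```python
# Sketch of equivariance by averaging: an arbitrary function f is made
# equivariant to a finite group G by computing F(x) = mean over g in G of
# g^{-1} f(g x). Then F(g x) = g F(x) holds by construction.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 2))

def f(x):
    # arbitrary, non-equivariant base map (stand-in for an encoder)
    return np.tanh(W @ x)

def rot(k):
    # rotation by k * 90 degrees; a finite subgroup of 2D rotations
    c, s = [(1, 0), (0, 1), (-1, 0), (0, -1)][k % 4]
    return np.array([[c, -s], [s, c]], dtype=float)

G = [rot(k) for k in range(4)]

def f_equiv(x):
    # average g^{-1} f(g x); for rotations, g^{-1} = g.T
    return sum(g.T @ f(g @ x) for g in G) / len(G)

x = rng.normal(size=2)
g = rot(1)
lhs = f_equiv(g @ x)   # apply symmetry first
rhs = g @ f_equiv(x)   # apply symmetry last
print(np.allclose(lhs, rhs))  # True: the averaged map is equivariant
```

Frame averaging improves on plain group averaging by averaging only over a small, input-dependent "frame" rather than the whole group, which matters when the group is large or continuous.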
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing [20.67024416678313]
We explore the integration of general pre-trained sequence-to-sequence language models and a structure-aware transition-based approach.
We propose a simplified transition set, designed to better exploit pre-trained language models for structured fine-tuning.
We show that the proposed parsing architecture retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new state of the art for AMR 2.0, without the need for graph re-categorization.
arXiv Detail & Related papers (2021-10-29T04:36:31Z)
- Structured Reordering for Modeling Latent Alignments in Sequence Transduction [86.94309120789396]
We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations.
The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks.
arXiv Detail & Related papers (2021-06-06T21:53:54Z)
- Disentangling images with Lie group transformations and sparse coding [3.3454373538792552]
We train a model that learns to disentangle spatial patterns and their continuous transformations in a completely unsupervised manner.
Training the model on a dataset consisting of controlled geometric transformations of specific MNIST digits shows that it can recover these transformations along with the digits.
arXiv Detail & Related papers (2020-12-11T19:11:32Z)
- FLAT: Few-Shot Learning via Autoencoding Transformation Regularizers [67.46036826589467]
We present a novel regularization mechanism by learning the change of feature representations induced by a distribution of transformations without using the labels of data examples.
It could minimize the risk of overfitting to base categories by inspecting the transformation-augmented variations at the encoded feature level.
Experimental results show performance superior to the current state-of-the-art methods in the literature.
arXiv Detail & Related papers (2019-12-29T15:26:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.