Growing and Evolving 3D Prints
- URL: http://arxiv.org/abs/2107.02976v1
- Date: Wed, 7 Jul 2021 01:51:29 GMT
- Title: Growing and Evolving 3D Prints
- Authors: Jon McCormack and Camilo Cruz Gambardella
- Abstract summary: We describe a biologically-inspired developmental model as the basis of a generative form-finding system.
Unlike previous systems, our method is capable of directly producing 3D printable objects.
- Score: 5.837881923712394
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Design - especially of physical objects - can be understood as creative acts
solving practical problems. In this paper we describe a biologically-inspired
developmental model as the basis of a generative form-finding system. By
simulating local interactions between cells in a two-dimensional environment
and capturing the state of the system at every time step, the system can
generate complex three-dimensional (3D) forms. Unlike previous systems, our method
is capable of directly producing 3D printable objects, eliminating intermediate
transformations and manual manipulation often necessary to ensure the 3D form
is printable. We devise fitness measures for optimising 3D printability and
aesthetic complexity and use a Covariance Matrix Adaptation Evolutionary
Strategies algorithm (CMA-ES) to find 3D forms that are both aesthetically
interesting and physically printable using fused deposition modelling printing
techniques. We investigate the system's capabilities by evolving and 3D
printing objects at different levels of structural consistency, and assess the
quality of the fitness measures presented to explore the design space of our
generative system. We find that evolving first for aesthetic complexity, then
for structural consistency until the form is 'just printable', gives the best
results.
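The pipeline described in the abstract, growing a form layer by layer from 2D cell interactions and then scoring it for printability, can be sketched in a few lines. This is an illustrative toy only: the Conway-style update rule, the grid size, and the `support_ratio` measure are stand-in assumptions, not the authors' actual developmental model or fitness functions.

```python
def step(grid):
    """One update of a Conway-style 2D cellular automaton (toroidal grid),
    standing in for the paper's local cell interactions."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return nxt

def grow_form(seed, steps):
    """Capture every time step as one horizontal layer of a voxel stack,
    so the 2D simulation's history becomes a 3D form: layers[z][y][x]."""
    layers, grid = [seed], seed
    for _ in range(steps):
        grid = step(grid)
        layers.append(grid)
    return layers

def support_ratio(layers):
    """Hypothetical printability proxy for fused deposition modelling:
    the fraction of voxels resting directly on a voxel in the layer below."""
    filled = supported = 0
    for z in range(1, len(layers)):
        for y, row in enumerate(layers[z]):
            for x, v in enumerate(row):
                if v:
                    filled += 1
                    supported += layers[z - 1][y][x]
    return supported / filled if filled else 1.0
```

In the paper, measures of this kind serve as objectives for CMA-ES, which searches the parameter space of the developmental model; only the growth-and-scoring half is sketched here.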
Related papers
- Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication [50.541882834405946]
We introduce Atlas3D, an automatic and easy-to-implement text-to-3D method.
Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization.
We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
arXiv Detail & Related papers (2024-05-28T18:33:18Z) - ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z) - NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z) - En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z) - Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z) - Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z) - Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation [47.945556996219295]
We present a novel alignment-before-generation approach to generate 3D shapes based on 2D images or texts.
Our framework comprises two models: a Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE) and a conditional Aligned Shape Latent Diffusion Model (ASLDM)
arXiv Detail & Related papers (2023-06-29T17:17:57Z) - Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z) - Learning Internal Representations of 3D Transformations from 2D Projected Inputs [13.029330360766595]
We show how our model infers depth from moving 2D projected points, learns 3D rotational transformations from 2D training stimuli, and compares to human performance on psychophysical structure-from-motion experiments.
arXiv Detail & Related papers (2023-03-31T02:43:01Z) - Searching for Designs in-between [5.837881923712394]
We introduce an evolutionary system for design that combines optimisation and exploration.
We test our methods using a biologically-inspired generative system capable of producing 3D objects.
We investigate the system's capabilities by evolving highly fit artefacts and then combining them with aesthetically interesting ones.
arXiv Detail & Related papers (2021-02-11T06:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.