Computational Co-Design for Variable Geometry Truss
- URL: http://arxiv.org/abs/2211.14663v1
- Date: Sat, 26 Nov 2022 20:52:03 GMT
- Title: Computational Co-Design for Variable Geometry Truss
- Authors: Jianzhe Gu and Lining Yao
- Abstract summary: We introduce a learning-based model to find a sub-optimal design for variable geometry truss (VGT) structures.
We show that our method enables a robotic table-based VGT to achieve various motions with a limited number of control inputs.
- Score: 28.557274577961223
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Living creatures and machines interact with the world through their
morphology and motions. Recent advances in creating bio-inspired morphing
robots and machines have led to the study of the variable geometry truss (VGT):
structures that can approximate arbitrary geometries and have a large number of
degrees of freedom to deform. However, they have been limited to simple geometries
and motions due to their excessively complex control systems. While a recent work,
PneuMesh, addresses this challenge with a novel VGT design that introduces a
selective channel connection strategy, it raises new challenges in identifying
effective channel groupings and control methods.
Building on the hardware concept presented in PneuMesh, we frame the
challenge as a co-design problem and introduce a learning-based model to find
a sub-optimal design. Specifically, given an initial truss structure provided
by a human designer, we first adopt a genetic algorithm (GA) to optimize the
channel grouping, and then couple the GA with reinforcement learning (RL) for
control. The model is tailored to the PneuMesh system with customized
initialization, mutation, and selection functions, as well as a customized
translation-invariant state vector for reinforcement learning. The results show
that our method enables a robotic table-based VGT to achieve various motions
with a limited number of control inputs. The table is trained to move, lower
its body, or tilt its tabletop to accommodate multiple use cases, such as
serving children and painters in different shape states, enabling inclusive
and adaptive design through morphing trusses.
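The GA-over-channel-groupings pipeline described in the abstract could be sketched roughly as follows. This is an illustrative toy, not the authors' PneuMesh implementation: the truss size, the stand-in fitness function (which would in practice involve rolling out an RL control policy on the grouped truss), and the centroid-subtraction reading of the "translation-invariant state vector" are all assumptions.

```python
import random

NUM_EDGES = 12      # actuated edges in a toy truss (assumed)
NUM_CHANNELS = 3    # shared air channels, i.e. limited control inputs (assumed)

def random_grouping():
    """A candidate design: assign each edge to one of the shared channels."""
    return [random.randrange(NUM_CHANNELS) for _ in range(NUM_EDGES)]

def mutate(grouping, rate=0.1):
    """Customized mutation: reassign a few edges to new channels."""
    return [random.randrange(NUM_CHANNELS) if random.random() < rate else c
            for c in grouping]

def fitness(grouping):
    """Placeholder fitness. The paper would evaluate a grouping by training
    and rolling out an RL controller; here we simply reward balanced channel
    sizes as a stand-in (0 is best, i.e. perfectly balanced)."""
    counts = [grouping.count(c) for c in range(NUM_CHANNELS)]
    return min(counts) - max(counts)

def evolve(pop_size=20, generations=50, elite=5, seed=0):
    """Truncation-selection GA over channel groupings."""
    random.seed(seed)
    pop = [random_grouping() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # higher fitness first
        parents = pop[:elite]                 # customized selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

def translation_invariant_state(node_positions):
    """Subtract the centroid so the RL state does not depend on where the
    truss sits in the world -- one plausible reading of the paper's
    'translation-invariant state vector'."""
    n = len(node_positions)
    cx = sum(p[0] for p in node_positions) / n
    cy = sum(p[1] for p in node_positions) / n
    return [(x - cx, y - cy) for (x, y) in node_positions]

best = evolve()
print("best grouping:", best, "fitness:", fitness(best))
```

In the paper's actual loop, the inner RL training would make each fitness evaluation expensive, which is precisely why customized initialization, mutation, and selection matter: they keep the number of groupings that must be evaluated small.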
Related papers
- AR-MOT: Autoregressive Multi-object Tracking [56.09738000988466]
We propose a novel autoregressive paradigm that formulates MOT as a sequence generation task within a large language model (LLM) framework.
This design enables the model to output structured results through flexible sequence construction, without requiring any task-specific heads.
To enhance region-level visual perception, we introduce an Object Tokenizer based on a pretrained detector.
arXiv Detail & Related papers (2026-01-05T09:17:28Z)
- GeoAda: Efficiently Finetune Geometric Diffusion Models with Equivariant Adapters [61.51810815162003]
We propose an SE(3)-equivariant adapter framework (GeoAda) that enables flexible and parameter-efficient fine-tuning for controlled generative tasks.
GeoAda preserves the model's geometric consistency while mitigating overfitting and catastrophic forgetting.
We demonstrate the wide applicability of GeoAda across diverse geometric control types, including frame control, global control, subgraph control, and a broad range of application domains.
arXiv Detail & Related papers (2025-07-02T18:44:03Z)
- Geometry-Informed Neural Networks [15.27249535281444]
We introduce geometry-informed neural networks (GINNs)
GINNs are a framework for training shape-generative neural fields without data.
We apply GINNs to several validation problems and a realistic 3D engineering design problem.
arXiv Detail & Related papers (2024-02-21T18:50:12Z)
- Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Learning Modulated Transformation in GANs [69.95217723100413]
We equip the generator in generative adversarial networks (GANs) with a plug-and-play module, termed the modulated transformation module (MTM).
MTM predicts spatial offsets under the control of latent codes, based on which the convolution operation can be applied at variable locations.
It is noteworthy that towards human generation on the challenging TaiChi dataset, we improve the FID of StyleGAN3 from 21.36 to 13.60, demonstrating the efficacy of learning modulated geometry transformation.
arXiv Detail & Related papers (2023-08-29T17:51:22Z)
- Vision Transformer with Quadrangle Attention [76.35955924137986]
We propose a novel quadrangle attention (QA) method that extends the window-based attention to a general quadrangle formulation.
Our method employs an end-to-end learnable quadrangle regression module that predicts a transformation matrix to transform default windows into target quadrangles.
We integrate QA into plain and hierarchical vision transformers to create a new architecture named QFormer, which requires only minor code modifications and negligible extra computational cost.
arXiv Detail & Related papers (2023-03-27T11:13:50Z)
- Variational Autoencoding Neural Operators [17.812064311297117]
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
We present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders.
arXiv Detail & Related papers (2023-02-20T22:34:43Z)
- Engineering flexible machine learning systems by traversing functionally-invariant paths [1.4999444543328289]
We introduce a differential geometry framework that provides flexible and continuous adaptation of neural networks.
We formalize adaptation as movement along a geodesic path in weight space while searching for networks that accommodate secondary objectives.
With modest computational resources, the FIP algorithm achieves performance comparable to the state of the art on continual learning and sparsification tasks.
arXiv Detail & Related papers (2022-04-30T19:44:56Z)
- DeepMLS: Geometry-Aware Control Point Deformation [76.51312491336343]
We introduce DeepMLS, a space-based deformation technique, guided by a set of displaced control points.
We leverage the power of neural networks to inject the underlying shape geometry into the deformation parameters.
Our technique facilitates intuitive piecewise smooth deformations, which are well suited for manufactured objects.
arXiv Detail & Related papers (2022-01-05T23:55:34Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.